\section{Introduction}
In this paper, we explore the problem of sampling from a specific family of probability distributions, generically denoted $\pi$, qualified as sparse and filamentary. This is typically the distribution of a random variable $X$ defined on some measurable space $(\mathsf{X},\mathcal{X})$ where $\mathsf{X}$ is a topological space, for example $\mathsf{X}\subseteq\mathbb{R}^d$ (for some $d>0$) and $\mathcal{X}$ is a sigma-algebra on $\mathsf{X}$, such that $X$ can be written as
\begin{equation}
\label{eq:def_fil}
X:= Z+\zeta\,.
\end{equation}
In Eq. \eqref{eq:def_fil}, $Z$ is a random variable that takes its values on a compact subset $\mathsf{Z}\subset\mathsf{X}$ and $\zeta$ is an additive random perturbation. For instance, $Z$ can be a uniform random variable on $\mathsf{Z}$ and $\zeta$ a Gaussian random variable with distribution $\mathcal{N}(0,\sigma^2)$. In this paper, we focus on situations where:
\begin{itemize}
\item The subset $\mathsf{Z}$ is a connected subspace of lower dimension compared to the ambient space $\mathsf{X}$, comprising, for instance, linear subspaces, hyperplanes, curved submanifolds, etc. In other words, for any point $z\in\mathsf{Z}$ and a neighbourhood of $z$, say $\mathfrak{N}(z)\subset \mathsf{X}$, the dimension of $\mathfrak{N}(z)\cap \mathsf{Z}$ is (potentially significantly) lower than $d$. This feature characterizes the sparse structure of $\pi$, since the sampling problem is defined on a $d$-dimensional space while, locally and in the limit $\zeta\downarrow 0$, it can be reparameterized as a $d'$-dimensional space or submanifold with $d'<d$ (potentially $d'\ll d$).
\item The probability mass of $\pi$ is concentrated around $\mathsf{Z}$. By analogy to electromagnetism, $\pi$ has a filamentary structure where the probability mass in $\mathsf{Z}$ is regarded as the signal, say light, that glows in $\mathsf{X}\backslash \mathsf{Z}$ resulting in a halo effect. The signal-to-noise ratio is assumed to be reasonably high, \textit{i.e.\;} our analysis focuses on the regime $\zeta \to 0$, almost surely.
\end{itemize}
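As an informal illustration of the decomposition in Eq. \eqref{eq:def_fil} (the function name, dimensions and noise level below are ours, chosen for illustration only), the following sketch draws samples $X=Z+\zeta$ with $Z$ uniform on a one-dimensional filament embedded in $\mathbb{R}^d$ and $\zeta$ Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_filamentary(n, d=10, sigma=0.05):
    """Draw n samples X = Z + zeta, where Z is uniform on a one-dimensional
    segment embedded in R^d (only the first coordinate varies) and
    zeta ~ N(0, sigma^2 I_d) is the ambient noise."""
    z = np.zeros((n, d))
    z[:, 0] = rng.uniform(0.0, 1.0, size=n)          # Z lives on the filament
    return z + sigma * rng.standard_normal((n, d))   # additive perturbation

x = sample_filamentary(1000)
# The mass concentrates around Z: coordinates 2..d remain of order sigma.
print(x.shape, np.abs(x[:, 1:]).max())
```

As $\sigma\downarrow 0$ the samples collapse onto the filament, which is the regime the analysis below focuses on.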
Sparse and filamentary distributions arise in a number of statistical applications including Bayesian inverse problems \citep{knapik2011bayesian}, models involving variables with strong nonlinear relationships \citep{givens1996local} and deterministic simulation models used in Ecology \citep{duan1992effective,bates2003bayesian} and Demography \citep{raftery2010estimating,raftery1995inference}, see also \cite{poole2000inference} and the references therein for more applications. We also mention cosmic matter models in Cosmology, where the terminology \textit{filamentary distribution} is also used \citep{van2009geometry,tempel2014filamentary}.
Sampling from sparse and filamentary distributions usually represents a bottleneck when inferring the models in which they arise. In particular, simple Markov chain Monte Carlo methods (MCMC) such as the random-walk Metropolis-Hastings algorithm (RWMH) \citep{metropolis1953equation} or random-scan Gibbs sampler (RSGS) (see \cite{geman1984stochastic} and \cite{liu1995covariance} specifically for the random-scan approach) are known to be inefficient in this type of setup. Indeed, neither of these two methods includes information related to the local topology of the state space, making them \textit{de facto} unaware of ridge-like features or locally unused or redundant dimensions.
Using \textit{local information} to improve the mixing of the chain has generated an abundant research stream in the field of statistical methodology, aiming at designing more sophisticated MCMC algorithms. The Markov chain generated by those methods typically moves according to position dependent information related to the target distribution: the gradient information (MALA) \citep{roberts2002langevin}, the Hamiltonian dynamics (HMC) \citep{duane1987hybrid,neal2011mcmc} or other information geometry objects \citep{girolami2011riemann,livingstone2014information}. We also mention the regional adaptation approach proposed in \cite{craiu2009learn} that leads to different optimal adaptive kernels in different regions of the state space (see \cite{andrieu2008tutorial} for an introduction to adaptive MCMC methods) and \cite{conrad2016accelerating} that couples RWMH with a local quadratic approximation of the distribution of interest. These works have brought inspiring concepts and, to some extent, useful tools and software to practitioners (see for instance \cite{carpenter2017stan}).
However, experienced users know that most of those methods are particularly computationally involved since they require some analytic quantities (gradient, Hamiltonian integrator, Fisher information matrix, etc.) to be calculated routinely. More importantly, those methods are not specifically tailored to the special case of sparse and filamentary distributions featuring manifolds of lower dimension than the ambient space (a case not addressed by the aforementioned works) but are rather designed to sample from challenging multimodal and/or heavy-tailed distributions. Borrowing from geometric measure theory, \cite{diaconis2013sampling} develop a number of Monte Carlo algorithms to sample on manifolds. Although elegant, those methods remain challenging to generalize to practical problems. In practice, the most popular Bayesian approach to infer a posterior distribution where the signal is contained in a low-dimensional subspace drowned in background noise is to use an equivalent representation of the state space based on partitions, as is the case in image segmentation applications. MCMC methods (and in particular variants of MH) have been successfully designed in this setting, but the distribution of interest is then defined on the space of possible configurations and not directly on $\mathsf{X}$. Nevertheless, we note that embedding local information in the MH proposal, for instance via an analysis of local level sets \citep{chang2011efficient} or via the construction of local shape priors \citep{erdil2016mcmc}, significantly speeds up these algorithms. However, there is no clear theoretical investigation justifying the improvements observed with those methods. Perhaps \cite{livingstone2015geometric} and \cite{beskos2018asymptotic} represent the only relevant works that make precise theoretical statements on locally informed methods sampling from sparse and filamentary distributions, referred to therein as ridged densities.
In \cite{beskos2018asymptotic}, the authors study the proposal optimal scaling in a RWMH algorithm that samples from a ridged density, in the spirit of \cite{roberts1997weak}. Interestingly, they show that when $d-d'\gg 1$, the diffusion regime is specified by a SDE and for optimality (in the asymptotic regime) the RW proposal should be scaled, if the jump size is allowed to be position dependent, so that the RWMH acceptance rate is locally (\textit{i.e.\;} dimension-wise) $0.234$. This is in line with the intuition that the jump size should be smaller in directions orthogonal to that containing the signal. In \cite{livingstone2015geometric} (see also \cite{mallik2017directional} for a similar algorithm and an adaptive version of it), the author studies the efficiency (in the non-asymptotic regime) of RWMH using a position dependent covariance matrix (an approach which actually gathers a number of the aforementioned methods under a generic framework). In particular, it is established that for sparse and filamentary distributions and under regularity assumptions, the convergence of the position dependent RWMH occurs at a geometric rate, something which does not always hold for the standard RWMH.
The approach followed in this paper can be seen as orthogonal to those previously explored in the literature: instead of designing a sophisticated MCMC method that is provably optimal in some sense, we propose and study a non-adaptive MCMC algorithm whose simplicity resembles that of RWMH and RSGS. Motivated by the results of \cite{livingstone2015geometric} and \cite{beskos2018asymptotic}, we tackle the following two questions:
\begin{question}
\label{que1}
Given a fixed collection of $n$ $\pi$-invariant Markov kernels,
$$
\mathfrak{P}:=P_1,P_2,\ldots,P_n
$$
that operate on different subspaces of $\mathsf{X}$ and with different scaling factors, is it possible to find a $\pi$-invariant Markov chain that recursively moves according to a kernel selected from $\mathfrak{P}$ by means of a position dependent probability distribution $\omega\equiv\omega(x)$ ($x\in\mathsf{X}$)?
\end{question}
\begin{question}
\label{que2}
If such algorithms exist, what can be said about their efficiency, both in terms of mixing time and in the asymptotic regime, in the context of sparse and filamentary distributions and especially in the limiting case $\zeta\to 0$ almost surely? In particular, is it always preferable to use a position dependent selection probability $\omega(x)$ ($x\in\mathsf{X}$) compared to a position independent selection probability, \textit{i.e.\;} where $\nabla_x\omega(x)=0$?
\end{question}
Related to Question \ref{que1} but in the specific context of the RSGS, \cite{latuszynski2013adaptive} consider a class of Markov chains in which the kernel selection distribution $\omega$ evolves and depends on the past history of the process. The authors investigate conditions under which the Markov chain generated by this so-called adaptive RSGS algorithm is ergodic. In particular, the amount of adaptation of $\omega$ needs to be controlled and should eventually decrease to zero. As a result, should the adaptation scheme construct an optimal selection probability $\omega$, the diminishing adaptation constraint imposes a \textit{global} optimality. In some situations however, a globally optimal $\omega$ might only lead to a marginal improvement compared to a uniform probability and we argue that a position dependent selection probability $\omega(x)$ ($x\in\mathsf{X}$), perhaps not optimal in any sense, might be more efficient. Such situations include distributions that have a high degree of symmetry at a macro level but are locally anisotropic.
Far from reporting an exhaustive series of results related to locally informed MCMC applied to sparse and filamentary distribution, we present some facts both theoretical and empirical, some of them expected and other perhaps counter-intuitive, through a number of examples and algorithms that open up further research perspectives. The main contributions of this paper can be summarized as follows:
\begin{enumerate}[(i)]
\item Method and applications: we construct two (non-adaptive) Markov chains (Algorithms \ref{alg1} and \ref{alg2}), generically referred to as \textit{locally informed}, that answer Question \ref{que1} (see Sections \ref{sec:4} and \ref{sec:5}) and are implemented to sample from a variety of synthetic sparse and filamentary distributions, defined on discrete and general state spaces (Examples 1--7, throughout the paper).
\item Theoretical and empirical observations (mainly addressing Question \ref{que2}):
\begin{itemize}
\item When $\mathfrak{P}=(P_1,\ldots,P_n)$ are absolutely continuous kernels (as is the case in the RSGS), the locally informed MCMC Algorithm \ref{alg1} is asymptotically sub-optimal, see Section \ref{sec:4}.
\item On a specific discrete example (Example \ref{ex1}) and for the special case $\zeta=0$ almost surely, we prove that the locally informed MCMC (Algorithm \ref{alg1}) is $\mathcal{O}(d)$ faster to converge than the non locally informed algorithm (Section \ref{sec:3}).
\item We study how the theoretical results related to Example \ref{ex1} transpose to the regime $\zeta\downarrow 0$. Since analytical results are more challenging to establish in the presence of noise, most of our observations, apart from Examples \ref{ex3} and \ref{ex5}, are based on empirical results. We find that the idea of ``{continuity}'' with the case $\zeta=0$ is debatable, at least at a theoretical level.
\item In terms of mixing time, our locally informed algorithm presents a consistent convergence pattern. Convergence on the subspace $\mathsf{Z}$ is faster than when using a non locally informed algorithm that uses the same kernels $\mathfrak{P}$ but, as soon as $\zeta$ is not almost surely null, the exploration of $\mathsf{X}\backslash\mathsf{Z}$ slows down the convergence of the locally informed algorithm, gradually as $\pi(\mathsf{X}\backslash \mathsf{Z})$ increases. These observations motivate the following conjecture:
\begin{conjecture}
Comparing locally informed and non-locally informed algorithms, as defined in the context of this paper, a type of ``\textit{The Tortoise and The Hare}'' scenario\footnote{In reference to the famous Aesop Fable.} is conjectured: the locally informed Markov chain converges more quickly to a good approximation of $\pi$ than its non locally informed competitor, and there exists a finite pivot time after which the approximation offered by the non locally informed algorithm is better than that offered by the locally informed Markov chain.
\end{conjecture}
\item Empirically, we find that when the sparse and filamentary features of $\pi$ are accentuated, this conjectured pivot time is sufficiently large to safely recommend using the locally informed algorithms for practical experiments, hence giving some credit to the methods developed in this paper.
\end{itemize}
\end{enumerate}
\section{Notation}
\label{sec:2}
Let $\mathsf{X}\subseteq\mathbb{R}^d$ and $\mathcal{X}$ any sigma-algebra on $\mathsf{X}$. We denote by $\Delta_n\subset\mathbb{R}^n$ the $n$-simplex \textit{i.e.\;}
\begin{equation}
\label{eq:simplex}
\Delta_n:=\left\{(\omega_1,\ldots,\omega_n)\in\mathbb{R}^n\,,\; \sum_{i=1}^n\omega_i=1,\;\text{and}\;\omega_i\geq 0\;\text{for all}\; i\right\}\,.
\end{equation}
For vectors $x\in\mathsf{X}$, we denote by $x_{i:j}:=(x_i,\ldots,x_j)$ with the convention that $x_{i:j}=\emptyset$ if $j<i$. Let $x_{-i}:=(x_{1:i-1},x_{i+1:d})$ and similarly for sets $A\in\mathcal{X}$, we denote by $A_{-i}=A_1\times\cdots\times A_{i-1}\times A_{i+1}\times\cdots\times A_d$. For any subset $A\subset\mathbb{R}$ and two positive integers $p$ and $q$, $\mathcal{M}_{p,q}(A)$ denotes the set of $p\times q$ matrices whose elements belong to $A$. Let $\mathfrak{M}_1(\mathsf{X})$ be the set of probability measures on $(\mathsf{X},\mathcal{X})$ and for any function $f:\mathsf{X}\to\mathbb{R}$ and any measure $\mu\in\mathfrak{M}_1(\mathsf{X})$ we define $\mu f:=\int f\mathrm{d} \mu$. Let $\mathcal{L}^2(\pi)$ be the set of $\pi$-measurable functions on $\mathsf{X}$ such that $\pi f^2<\infty$ and $\mathcal{L}_{0}^2(\pi):=\{f\in\mathcal{L}^2(\pi),\,\pi f=0\}$. For any Markov operator $K$ on $(\mathsf{X},\mathcal{X})$ we have that for all $x\in\mathsf{X}$, $K(x,\,\cdot\,)\in\mathfrak{M}_1(\mathsf{X})$ and for all $A\in\mathcal{X}$, $x\mapsto K(x,A)\in[0,1]$ is a $\pi$-measurable function. Moreover, for any $f\in\mathcal{L}^2(\pi)$ and $\mu\in\mathfrak{M}_1(\mathsf{X})$, we will denote by
\begin{itemize}
\item $Kf:\mathsf{X}\to\mathbb{R}$, the measurable function defined as
$$
Kf(x):=\int K(x,\mathrm{d} y)f(y)\,,
$$
\item $\mu K:\mathcal{X}\to [0,1]$, the measure in $\mathfrak{M}_1(\mathsf{X})$ defined as
$$
\mu K(A):=\int_\mathsf{X} \mu(\mathrm{d} x)K(x,A)\,.
$$
\end{itemize}
For two Markov kernels $P_1$ and $P_2$, $P_1$ dominates $P_2$ in the off-diagonal ordering (or Peskun ordering) \citep{peskun1973optimum,tierney1998note}, and we denote $P_1\succeq_P P_2$, if for all $A\in\mathcal{X}$
\begin{equation}
\label{eq:peskun}
P_1(x,A\backslash\{x\})\geq P_2(x,A\backslash\{x\})
\end{equation}
for $\pi$-almost all $x\in\mathsf{X}$. The total variation distance between two probability measures $(\pi,\nu)\in\mathfrak{M}_1(\mathsf{X})^2$ is defined as $\|\pi-\nu\|:=\sup_{A\in\mathcal{X}}|\pi(A)-\nu(A)|$ and when the two distributions are absolutely continuous with respect to a common dominating measure $\lambda\in\mathfrak{M}_1(\mathsf{X})$, we have $\|\pi-\nu\|=(1/2)\int_{\mathsf{X}}|\pi(x)-\nu(x)|\lambda(\mathrm{d} x)$. Finally, we will use the convention that a random variable (r.v.) is written in capital letters and realizations in lower-case letters. The notation $X\rightsquigarrow x$ refers to the process of simulating $X$ and calling $x$ the observed outcome. $X\sim \pi$ means that $X$ is a $\pi$-distributed random variable. For a r.v. $X$ defined on a probability space $(\mathsf{X},\mathcal{X},\mathbb{P})$, $\delta_{x_0}$ denotes the degenerate distribution at some value $x_0\in\mathsf{X}$, \textit{i.e.\;} $\mathbb{P}(X=x_0)=1$.
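For discrete measures, the total variation distance above reduces to half the $\ell_1$ distance between the probability vectors; a minimal sketch of this identity (illustrative code of ours, not part of the paper's material):

```python
def tv_distance(p, q):
    """Total variation distance between two discrete probability vectors:
    ||p - q|| = (1/2) * sum_i |p_i - q_i|, equivalent to sup_A |p(A) - q(A)|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Moving mass 0.1 from states 1 and 2 onto state 3 gives TV distance 0.1.
print(tv_distance([0.5, 0.5, 0.0], [0.45, 0.45, 0.1]))
```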
\section{Introductory Examples}
\label{sec:3}
We start with three illustrative examples in which the distribution of interest is defined on a discrete state space. Example \ref{ex1} is an archetypical case of a sparse and filamentary distribution and strongly advocates the use of a locally informed strategy over a simple RSGS algorithm, in terms of mixing time. Example \ref{ex2} is a noisy version of Example \ref{ex1}, in the sense of Eq. \eqref{eq:def_fil}. The advantage of the locally informed approach observed in the noise-free case deteriorates as the noise increases and $d'\to d$, \textit{i.e.\;} when $\pi$ loses its sparse and filamentary structure. Moving further away from the sparse and filamentary framework, Example \ref{ex3} (in which there is no topology) presents a scenario where the locally informed strategy fails remarkably. Not much is said on our locally informed algorithm at this stage: its presentation and some theoretical properties are explored in Sections \ref{sec:4} and \ref{sec:5}.
\begin{exa}
\label{ex1}
Let $\pi$ be the distribution defined on $\mathsf{X}=\{1,\ldots,m\}^d$ where $d\geq 2$ and $m\geq 3$. $\mathsf{X}$ is regarded as a $d$-dimensional discrete hypercube where each edge length is $m$. The probability distribution $\pi$ is uniform on $\mathsf{Z}\subset\mathsf{X}$, a \textit{filament} that comprises the connected edges $\mathcal{E}_1,\mathcal{E}_2,\ldots,\mathcal{E}_d$ defined as follows:
\begin{multline}
\mathsf{Z}:={\bigcup}_{i=1}^d\mathcal{E}_i\,,\\
\mathcal{E}_i:=\left\{x\in\mathsf{X}\;\big|\;x_{1:i-1}=m\quad\text{and}\quad x_i\in[1,m]\quad\text{and}\quad x_{i+1:d}=1\right\}\,.
\end{multline}
The distribution $\pi$ is illustrated graphically in Figure \ref{fig:hypercube} for $m=10$ and $d=3$.
\end{exa}
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{./figure/hypercube/hypercube-eps-converted-to.pdf}
\caption{(Example \ref{ex1} with $m=10$ and $d=3$) The filament $\mathsf{Z}$ comprises the states in red and all the mass of $\pi$ is concentrated on $\mathsf{Z}$.\label{fig:hypercube}}
\end{figure}
Sampling from $\pi$ is straightforward but for illustrative purpose we consider the two following MCMC algorithms:
\begin{itemize}
\item A random-scan Gibbs sampler (RSGS) that proceeds by picking a dimension $I\rightsquigarrow i$ uniformly at random, \textit{i.e.\;} according to the distribution $I\sim \omega=\text{unif}\{1,\ldots,d\}$, and then drawing the new state $X'$ conditionally on the current state, say $X$, by refreshing only $X_i'\sim \pi(\,\cdot\,|\, X_{-i})$ and setting $X_{-i}'=X_{-i}$. Note that, with probability $1-1/d$, the selected dimension $i$ prevents any move $X'\neq X$, since a fraction $1-1/d$ of the full conditional distributions $\pi(\,\cdot\,|\, X_{-i})$ have their probability mass concentrated exclusively on $X_i$. Hence, when $d$ is large, the Markov chain hardly moves.
\item A locally informed sampler that proceeds by first picking a dimension $I\rightsquigarrow i$ with a non-uniform probability distribution $\omega(X)=(\omega_1(X),\ldots,\omega_d(X))$ that depends on the current chain state $X\in\mathsf{X}$. More precisely, $\omega(X)$ is defined as follows: if $X$ belongs to one and only one edge, say $X\in\mathcal{E}_i$, then $\omega(X):=\delta_i$, and if $X$ belongs to two edges, say $X\in\mathcal{E}_i\cap\mathcal{E}_{i+1}$, then $\omega(X):=(1/2)\delta_i+(1/2)\delta_{i+1}$. Conditionally on $X$ and $i$, a proposal $X'$ is drawn as in the RSGS, \textit{i.e.\;} $X'\sim \pi(\,\cdot\,|\,X_{-i})$, and is then accepted as the next state of the Markov chain with probability $1\wedge \omega_i(X')/\omega_i(X)$. If $X'$ is rejected, the chain stays put at $X$.
Intuitively, the distribution $\omega(X)$ is designed so that the sampler takes advantage of the topology by updating a component of $X$ that moves the chain along the same edge but, contrary to the RSGS, to a state different from $X$ with high probability.
\end{itemize}
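The two samplers above can be made concrete with the following sketch (our own illustrative code, with arbitrary values $d=5$ and $m=10$, and each edge treated as closed so that consecutive edges share a vertex): it implements both chains on the noise-free filament and counts how often each actually moves.

```python
import random

random.seed(1)
d, m = 5, 10   # dimension and edge length (illustrative values)

def edges(x):
    """Indices i such that x lies on edge E_i of the filament Z."""
    return [i for i in range(d)
            if all(x[j] == m for j in range(i))
            and all(x[j] == 1 for j in range(i + 1, d))]

def conditional_support(x, i):
    """Support of pi(. | x_{-i}): values v keeping the state on Z."""
    return [v for v in range(1, m + 1)
            if edges(x[:i] + (v,) + x[i + 1:])]

def rsgs_step(x):
    i = random.randrange(d)                       # uniform direction
    v = random.choice(conditional_support(x, i))  # pi uniform on Z
    return x[:i] + (v,) + x[i + 1:]

def informed_step(x):
    e = edges(x)
    i = random.choice(e)                          # omega(x): uniform on edges containing x
    y = x[:i] + (random.choice(conditional_support(x, i)),) + x[i + 1:]
    # accept with probability 1 ^ omega_i(y) / omega_i(x)
    if random.random() < min(1.0, len(e) / len(edges(y))):
        return y
    return x

x = y = (1,) * d
moved = [0, 0]
for _ in range(2000):
    x_new, y_new = rsgs_step(x), informed_step(y)
    moved[0] += x_new != x
    moved[1] += y_new != y
    x, y = x_new, y_new
print(moved)   # the informed chain moves far more often
```

With the locally informed selection, almost every proposal moves the chain along its current edge, whereas the RSGS wastes roughly a fraction $1-1/d$ of its iterations on directions whose full conditional is degenerate.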
In what follows, for any quantity $\alpha$ defined in the RSGS, $\alpha^\ast$ will refer to the corresponding quantity for the locally informed algorithm. Propositions \ref{prop:hypercube:1} and \ref{prop:hypercube:2} suggest that, in this example, the locally informed strategy is $d/2$ times more efficient than the RSGS, where efficiency is measured as time to reach equilibrium. We recall the definition of a coupling time associated with a Markov kernel $P$.
\begin{defi}
Let $\{X_t,X'_t\}_{t}$ be a discrete time process defined on $(\mathsf{X}\times\mathsf{X},\mathcal{X}\otimes\mathcal{X})$ such that marginally $\{X_t\}_t$ and $\{X'_t\}_t$ are both Markov chains with transition kernel $P$ and initial distributions $\mu$ and $\mu'$, respectively. The coupling time of the joint process $\{X_t,X_t'\}_t$ is the random variable $\tau$ defined as:
$$
\tau:=\inf\left\{t\in\mathbb{N}\,:\;X_t=X'_t\right\}\,.
$$
\end{defi}
We recall that $\tau$ is a time characteristic of the Markov chain's speed of convergence, since the coupling inequality (see \textit{e.g.\;} \cite{lindvall2002lectures}) states that for all $t\in\mathbb{N}$,
$$
\|\Pr\{X_t\in\,\cdot\,\}-\pi\|\leq \Pr\{\tau>t\}\,.
$$
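As an informal numerical check of the coupling inequality (on a toy three-state chain unrelated to the examples of this section; the kernel and all numerical values below are ours), one can estimate $\Pr\{\tau>t\}$ by Monte Carlo under an independent coupling and compare it with the exact total variation distance:

```python
import random
import numpy as np

random.seed(0)
# A small pi-reversible kernel: lazy random walk on the path {0, 1, 2}.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])   # stationary distribution: pi P = pi

def step(i):
    return random.choices(range(3), weights=P[i])[0]

def coupling_time(x0, tmax=500):
    """Run two copies independently, one from x0 and one from pi,
    and return the first time they meet (after which they can be
    made to move together without changing the marginals)."""
    x = x0
    y = random.choices(range(3), weights=pi)[0]
    t = 0
    while x != y and t < tmax:
        x, y = step(x), step(y)
        t += 1
    return t

t = 3
tv = 0.5 * np.abs(np.linalg.matrix_power(P, t)[0] - pi).sum()
taus = [coupling_time(0) for _ in range(5000)]
p_tail = sum(tau > t for tau in taus) / len(taus)
print(tv, p_tail)   # the TV distance is bounded by the estimate of P(tau > t)
```

This independent coupling is far from optimal, so the bound is loose; sharper couplings tighten $\Pr\{\tau>t\}$ toward the true total variation distance.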
\begin{prop}
\label{prop:hypercube:1}
In the context of Example \ref{ex1}, the expected coupling time of the RSGS is $d/2$ times larger than that of the locally informed algorithm when both algorithms start at state $x_1:=(1,1,\ldots,1)$, \textit{i.e.\;}
\begin{equation}
\mathbb{E}_{x_1}(\tau)= \frac{d}{2}\mathbb{E}_{x_1}(\tau^\ast)\,.
\end{equation}
\end{prop}
\begin{prop}
\label{prop:hypercube:2}
Consider a delayed version of the locally informed Markov chain that moves according to $P^\ast$ with probability $\lambda\in(0,1)$ and remains at its current state with probability $1-\lambda$. Then, in the context of Example \ref{ex1}, the RSGS and the locally informed Markov chain delayed by a factor $\lambda=2/d$ converge to $\pi$ at the same speed.
\end{prop}
The proofs of these two propositions can be found in Sections \ref{proof1} and \ref{proof2}, respectively. They follow from a coupling argument applied to an equivalent representation of the Markov chains on a simpler state space.
\begin{rem}
\label{rem0}
The factor $d/2$ in Propositions \ref{prop:hypercube:1} and \ref{prop:hypercube:2} can be interpreted as follows: since $\pi$ is uniform on $\mathsf{Z}$, the convergence of both Markov chains (starting from one extremity of the filament) is characterized by the speed at which they cross the hypercube vertices that belong to $\mathsf{Z}$, \textit{e.g.\;} $(10,1,1)$ and $(10,10,1)$ for the case illustrated in Figure \ref{fig:hypercube}. While at one of those vertices, the relative speed at which the RSGS moves to one of the two adjacent edges compared to the informed algorithm is $2/d$ since ``only'' two choices of direction may lead to such a transition. We have also considered a slightly different definition of $\mathsf{Z}$ in the case $d=3$, namely $\mathsf{Z}:=\mathcal{E}_1\cup\mathcal{E}_2\cup \mathcal{E}_3'$ where $\mathcal{E}_3':=\{x\in\mathsf{X}\,|\,x_1=m,x_2=1,x_3\in(1,m)\}$. In this example, the state $(m,1,1)$ connects the three subspaces of dimension one. Hence, the RSGS and the informed algorithms are equally efficient to jump to any edge while at this state and we have verified theoretically that, in this case, the relative speed of convergence between the two algorithms is $d/3=1$.
\end{rem}
To summarize, Example \ref{ex1} confirms the intuition that a state dependent distribution $\omega(X)$ that incorporates geometric and topological information of $\pi$ to draw the updating direction of a Gibbs sampler can speed up the Markov chain convergence. Again, we stress that obtaining those analytical results is eased by the fact that the mass of $\pi$ is here concentrated on the filament, \textit{i.e.\;} $p:=\pi(\mathsf{X}\backslash\mathsf{Z})=0$. Nevertheless, this intuition can be generalized to the more realistic situation where the probability mass in the filament is immersed into an ambient noise. This is the purpose of Example \ref{ex2}.
\begin{exa}
\label{ex2}
We consider the distribution $\pi$ from Example \ref{ex1}, where now $p=\pi(\mathsf{X}\backslash\mathsf{Z})>0$.
\end{exa}
We consider the two algorithms used in Example \ref{ex1} to sample from $\pi$. The locally informed algorithm is implemented with a weight function extending that defined in Example \ref{ex1}. More precisely, if $X\in\mathsf{Z}$, let $\mathcal{S}(X)\subset\{1,\ldots,d\}$ be the subset of dimensions whose update could take the next state of the Markov chain to $\mathsf{X}\backslash \mathsf{Z}$. The weight function $\omega$ used in the locally informed algorithm is defined as follows: if $X\in\mathsf{Z}$, with probability $p$, pick the update direction uniformly at random in $\mathcal{S}(X)$ and, with probability $1-p$, pick the update direction uniformly at random in $\{1,\ldots,d\}\backslash \mathcal{S}(X)$. When $X\not\in\mathsf{Z}$, the update direction is drawn uniformly at random in $\{1,\ldots,d\}$. Figure \ref{fig:hypercube:2} shows that the locally informed algorithm retains its advantage over the RSGS even when $p>0$, for moderate values of $p$. Interestingly, when $p$ increases (\textit{e.g.\;} $p=0.1$), a shortcoming of the locally informed algorithm is exposed: it clearly outperforms the RSGS in terms of exploring $\mathsf{Z}$ quickly but converges extremely slowly on $\mathsf{X}\backslash\mathsf{Z}$. This effect is even more pronounced when the dimension $d$ is small. Figure \ref{fig:hypercube:3} illustrates this observation theoretically, when $d=2$ and $p=0.1$: in this case, the informed algorithm clearly trails behind the random-scan algorithm. For example, the informed algorithm requires $25\%$ more time than the RSGS to reach a distribution lying in a ball centered at $\pi$ with radius $10^{-5}$. Example \ref{ex3} conceptualises this situation in a simplified setting and shows that, when moving away from sparse and filamentary distributions, one should clearly avoid using the locally informed algorithm.
\begin{figure}
\centering
\includegraphics[scale=0.6]{./figure/hypercube/plot_d7_n4_p0-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/hypercube/plot_d7_n4_p10m3-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/hypercube/plot_d7_n4_p10m2-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/hypercube/plot_d7_n4_p10m1-eps-converted-to.pdf}
\caption{Examples \ref{ex1} and \ref{ex2} (Hypercube) in dimension $d=7$ with $m=4$ possible states per dimension and $p=\pi(\mathsf{X}\backslash\mathsf{Z})\in\{0,10^{-3},10^{-2},10^{-1}\}$. Convergence results (in total variation distance) are obtained from $50000$ independent Markov chains simulated from the two possible algorithms, all starting from the state $(1,1,\ldots,1)$. Note that those results could have been obtained theoretically but would have required handling routine operations on square matrices of dimension $16384$, causing obvious computational difficulties.\label{fig:hypercube:2}}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{./figure/hypercube/TV_p01_th-eps-converted-to.pdf}
\caption{Example \ref{ex2} (Hypercube) in dimension $d=2$ with $m=10$ possible states per dimension and $\pi(\mathsf{X}\backslash\mathsf{Z})=10^{-1}$. The initial distribution is a Dirac mass at one extremity of the two-dimensional filament. The total variation distances were calculated analytically.
\label{fig:hypercube:3}}
\end{figure}
The following example showcases a scenario where $\pi$ is not sparse and filamentary. In this case, $\pi$ does not even have a topological structure and the convergence of a locally informed algorithm is shown to be much slower than a non locally informed algorithm using the same proposal kernels.
\begin{exa}
\label{ex3}
We consider the distribution $\pi$ defined on $\mathsf{X}:=\{1,2,3\}$ such that $\pi(1)=\pi(2)$ and $\pi(3)=p$ for some $p>0$, so that $\pi(1)=\pi(2)=(1-p)/2$.
\end{exa}
In order to sample from $\pi$, we consider Markov chains that attempt moves according to the following proposal distributions
\begin{equation}
\label{eq:ex3}
Q_1(i,\,\cdot\,)=\delta_{\text{inf}\{\mathsf{X}\backslash\{i\}\}}\qquad\text{and}\qquad
Q_2(i,\,\cdot\,)=\delta_{\sup\{\mathsf{X}\backslash\{i\}\}}\,.
\end{equation}
Put simply, Eq. \eqref{eq:ex3} means that $Q_1$ and $Q_2$ attempt to visit a state which is not the current one. We consider an algorithm, that we refer to as uninformed, that picks the proposal independently from the current state of the Markov chain, \textit{i.e.\;} with probability $\omega(i)=(1/2,1/2)$. We compare this uninformed strategy with a locally informed proposal selection, which depends on the current state of the Markov chain. More precisely, if the Markov chain is at state $i$, it will attempt a move to a state $j\in\mathsf{X}\backslash\{i\}$ with probability proportional to $\pi(j)$. Formally, $\omega(i)\propto(\pi(\text{inf}\{\mathsf{X}\backslash\{i\}\}),\pi(\sup\{\mathsf{X}\backslash\{i\}\}))$. In other words, while the uninformed Markov chain attempts moves to states regardless of their probability mass, the locally informed algorithm is more likely to attempt moves to states with larger probability mass. For both algorithms, the attempted moves are then accepted/rejected according to a probability that guarantees both Markov chains to be $\pi$-invariant:
\begin{itemize}
\item The usual Metropolis-Hastings acceptance ratio for the uninformed Markov chain.
\item A slight modification of the Metropolis-Hastings acceptance ratio for the locally informed Markov chain, see Algorithm \ref{alg2} in Section \ref{sec:5}.
\end{itemize}
We compare the two Markov chains according to their spectral properties. First, recall that for a reversible Markov chain with transition kernel $P$ and spectrum $\text{Sp}(P)$, the spectral gap, defined as $\gamma(P):=1-\sup\{|\lambda|,\,\lambda\in\text{Sp}(P)\backslash\{1\}\}$, is used as a marker of speed of convergence, the larger the gap the faster the convergence, see \textit{e.g.\;} \cite{rosenthal2003asymptotic}.
\begin{prop}
\label{prop1_3}
In the context of Example \ref{ex3} with $p\in(0,1/3)$, let $\gamma(p)$ (resp. $\gamma^\ast(p)$) be the spectral gap of the Markov chain with uninformed proposal (resp. with locally informed proposal). Then, we have
\begin{equation*}
\gamma(p)=\frac{1-2p}{1-p}\qquad \text{and}\qquad \gamma^\ast(p)=p\frac{3-5p}{1-p^2}\,,
\end{equation*}
and in particular, when $p\searrow 0$, $\gamma(p)=1-p+o(p)$ while $\gamma^\ast(p)=3p+o(p)$.
\end{prop}
The proof is postponed to Section \ref{proof3}. Proposition \ref{prop1_3} states that it is more efficient in this scenario to propose highly frequent (risky) moves to state $\{3\}$ that are most of the time rejected (the uninformed chain) than to essentially jump between states $\{1\}$ and $\{2\}$ repeatedly (the locally informed chain). Hence, perhaps counterintuitively, the uninformed chain that features $\mathbb{P}\{X_n=X_{n+1}\}\approx 1/2$ converges faster than the locally informed one that features $\mathbb{P}\{X_n=X_{n+1}\}\approx 0$. In other words, the highly correlated chain is better than the risk-averse one for this example, in the sense of convergence speed.
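Proposition \ref{prop1_3} can also be checked numerically. In the sketch below (our own illustrative code; the locally informed acceptance is implemented by folding the state-dependent selection probability into the standard Metropolis--Hastings ratio, which reproduces the stated gaps but is not necessarily the exact form of Algorithm \ref{alg2}), both $3\times 3$ transition kernels are built for $p=0.1$ and their spectral gaps compared with the closed-form expressions:

```python
import numpy as np

p = 0.1
pi = np.array([(1 - p) / 2, (1 - p) / 2, p])   # pi(1)=pi(2)=(1-p)/2, pi(3)=p
n = len(pi)

def mh_kernel(q):
    """Metropolis-Hastings kernel for target pi and proposal matrix q,
    with acceptance min(1, pi_j q_ji / (pi_i q_ij))."""
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and q[i, j] > 0:
                P[i, j] = q[i, j] * min(1.0, pi[j] * q[j, i] / (pi[i] * q[i, j]))
        P[i, i] = 1.0 - P[i].sum()
    return P

def spectral_gap(P):
    """gamma(P) = 1 - max{|lam| : lam in Sp(P) \\ {1}}."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))
    return 1.0 - mods[-2]   # drop the Perron eigenvalue 1

q_uninf = (np.ones((n, n)) - np.eye(n)) / 2        # omega(i) = (1/2, 1/2)
q_inf = np.where(np.eye(n, dtype=bool), 0.0, pi)   # omega(i) prop. to pi(j), j != i
q_inf /= q_inf.sum(axis=1, keepdims=True)

print(spectral_gap(mh_kernel(q_uninf)), (1 - 2 * p) / (1 - p))
print(spectral_gap(mh_kernel(q_inf)), p * (3 - 5 * p) / (1 - p ** 2))
```

For $p=0.1$, the uninformed gap $\approx 0.889$ dwarfs the informed gap $\approx 0.253$, consistently with the proposition.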
\section{Locally informed algorithm for general Markov kernels}
\label{sec:4}
We consider a collection of $n$ $\pi$-invariant Markov kernels
$$
\mathfrak{P}:=\{P_1,P_2,\ldots,P_n\}\,,
$$
\textit{i.e.\;} for any $i\in\{1,\ldots,n\}$ and $A\in\mathcal{X}$, $\pi P_i(A):=\int_{\mathsf{X}}\pi(\mathrm{d} x)P_i(x,A)=\pi(A)$. If, in addition, each kernel is irreducible and aperiodic, then any Markov chain that makes use (perhaps randomly) of one of those kernels to transition from one state to another will converge to $\pi$. One can readily check that in the case of a random selection $\omega\in\Delta_n$ (where $\Delta_n$ is the $n$-simplex defined at \eqref{eq:simplex}), the Markov kernel writes $P_\omega:=\sum_{i=1}^n\omega_iP_i$ and satisfies for any $A\in\mathcal{X}$
\begin{multline}
\label{eq:omega_const}
\pi P_\omega(A)=\int \pi(\mathrm{d} x)P_\omega(x,A)=\int\pi(\mathrm{d} x)\sum_{i=1}^n\omega_i P_i(x,A)\\
=\sum_{i=1}^n\omega_i\int \pi(\mathrm{d} x)P_i(x,A)=\sum_{i=1}^n\omega_i\pi(A)=\pi(A)\,.
\end{multline}
In \cite{roberts1997geometric} and \cite{roberts1998two}, the authors study how $P_\omega$, referred to therein as the hybrid sampler, ``inherits'' convergence properties from the kernels in $\mathfrak{P}$, such as geometric ergodicity, rates of convergence, etc. In this paper, we instead study whether the way (\textit{i.e.\;} the distribution $\omega$) in which the kernels in $\mathfrak{P}$ are selected affects the hybrid Markov kernel $P_\omega$. This is known to be a challenging problem and \cite{andrieu2016random} is probably the only literature available on this topic. The author carries out a thorough exploration of the hybrid Gibbs case, with $n=2$ kernels, and compares the random-scan Gibbs sampler (RSGS) with the deterministic-update Gibbs sampler (DUGS) according to the asymptotic variance of their empirical estimators.
Example \ref{ex1} has shown that choosing the kernel in a locally meaningful way may lead to a substantial gain in terms of time to convergence. In this section, we introduce a class of Markov chain Monte Carlo (MCMC) algorithms, referred to as \textit{Locally informed MCMC}, whose choice of transition kernel $P_i$ at iteration $t$ depends on the state $X_t$ of the Markov chain. Define the function $\omega:\mathsf{X}\to\Delta_n$. For any $i\in\{1,\ldots,n\}$ and $x\in\mathsf{X}$, let $\omega_i(x)$ be the probability of selecting $P_i$ as the next transition kernel if the Markov chain is at state $X=x$. More formally, the transition kernel of such a Markov chain is defined for any $(x,A)\in\mathsf{X}\times\mathcal{X}$ by
\begin{equation}
\label{eq0}
P_\omega(x,A)=\sum_{i=1}^n\omega_i(x)P_i(x,A)\,.
\end{equation}
For example, the case $n=d$ and $P_i(x,\,\cdot\,)=\pi(\,\cdot\,|\,x_{-i})\delta_{x_{-i}}$ is a random-scan Gibbs sampler whose kernel selection distribution depends on the current state. However, such an algorithm is not necessarily $\pi$-invariant since
$$
\pi P_\omega(A)=\sum_{i=1}^n\int\omega_i(x)\pi(\mathrm{d} x)P_i(x,A)
$$
does not, in general, equal $\pi(A)$. We stress that when $\omega$ is independent of the chain position, $P_\omega$ is $\pi$-invariant and corresponds to the case of Eq. \eqref{eq:omega_const}.
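This loss of invariance is easy to observe numerically. The sketch below is a toy construction of ours (the target, proposals and weights are illustrative, not from the paper): it builds two $\pi$-reversible Metropolis kernels on a three-state space and shows that mixing them with state-dependent weights does not preserve $\pi$.

```python
import numpy as np

pi = np.array([0.6, 0.3, 0.1])  # toy target on a three-state space

def metropolis(Q):
    """pi-reversible Metropolis-Hastings kernel built from a proposal matrix Q."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                P[i, j] = Q[i, j] * min(1.0, pi[j] * Q[j, i] / (pi[i] * Q[i, j]))
        P[i, i] = 1.0 - P[i].sum()
    return P

# Two symmetric proposals: uniform over all states vs. nearest neighbours.
Q1 = np.full((3, 3), 1.0 / 3)
Q2 = np.array([[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
P1, P2 = metropolis(Q1), metropolis(Q2)
assert np.allclose(pi @ P1, pi) and np.allclose(pi @ P2, pi)  # each kernel is pi-invariant

# State-dependent selection weights omega(x) = (w(x), 1 - w(x)).
w = np.array([0.9, 0.2, 0.5])
P_mix = w[:, None] * P1 + (1 - w)[:, None] * P2  # naive state-dependent mixture
print(pi @ P_mix)  # differs from pi: the naive mixture is not pi-invariant
```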
We present a way to correct the algorithm $P_\omega$ so as to inherit the $\pi$-invariance from $P_1, P_2,\ldots,P_n$. We refer to this type of algorithm as \textit{Locally informed MCMC}, which is outlined in Algorithm \ref{alg1}.
\begin{algorithm}
\caption{Locally informed MCMC, transition $X_{t}\to X_{t+1}$}
\label{alg1}
\begin{algorithmic}[1]
\Require $X_t=x\in\mathsf{X}$
\State draw $I\sim\omega(x) \rightsquigarrow i$
\State propose $\tilde{X}\sim P_i(x,\cdot)\rightsquigarrow \tilde{x}$ and set $X_{t+1}=\tilde{x}$ with probability
\begin{equation}
\label{eq1}
\alpha_i(x,\tilde{x})=1\wedge\frac{\omega_i(\tilde{x})}{\omega_i(x)}
\end{equation}
and $X_{t+1}=x$ otherwise.
\end{algorithmic}
\end{algorithm}
Let $P^\ast_\omega$ be the transition kernel of the locally informed Markov chain described in Algorithm \ref{alg1}. It can be checked that $P^\ast_\omega$ writes as:
\begin{multline}
\label{eq2}
P^\ast_\omega(x,A)=\sum_{i=1}^n\omega_i(x)\left\{\int_A P_i(x,\mathrm{d} y)\alpha_i(x,y)+\delta_x(A)\left(1-r_i(x)\right)\right\}\,,\\
r_i(x):=\int_{\mathsf{X}} P_i(x,\mathrm{d} y)\alpha_i(x,y)\,.
\end{multline}
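On a finite state space, the corrected kernel of Eq. \eqref{eq2} can be checked directly. The sketch below is a toy construction of ours (target, proposals and weights are illustrative): it thins a state-dependent mixture of two Metropolis kernels by the factor $\alpha_i(x,y)=1\wedge\omega_i(y)/\omega_i(x)$ and verifies detailed balance, hence $\pi$-invariance.

```python
import numpy as np

pi = np.array([0.6, 0.3, 0.1])  # toy target on a three-state space

def metropolis(Q):
    """pi-reversible Metropolis kernel from a symmetric proposal matrix Q."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()
    return P

P1 = metropolis(np.full((3, 3), 1.0 / 3))              # uniform proposal
P2 = metropolis(np.array([[0.0, 0.5, 0.5],
                          [0.5, 0.0, 0.5],
                          [0.5, 0.5, 0.0]]))           # "move away" proposal
omega = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]) # state-dependent weights

# Corrected kernel: off-diagonal transitions are thinned by
# alpha_i(x, y) = min(1, omega_i(y) / omega_i(x)); the rejected mass stays at x.
n = len(pi)
P_star = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        if y != x:
            P_star[x, y] = sum(omega[x, i] * [P1, P2][i][x, y]
                               * min(1.0, omega[y, i] / omega[x, i])
                               for i in range(2))
    P_star[x, x] = 1.0 - P_star[x].sum()

# Detailed balance: pi(x) P*(x, y) == pi(y) P*(y, x), hence pi-invariance.
print(np.allclose(pi[:, None] * P_star, (pi[:, None] * P_star).T))
print(np.allclose(pi @ P_star, pi))
```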
\begin{prop}
\label{prop2}
Assume that for all $i\in\{1,\ldots,n\}$, $P_i$ is $\pi$-reversible, then for any choice of function $\omega:\mathsf{X}\to\Delta_n$, $P^\ast_\omega$ is $\pi$-reversible.
\end{prop}
\begin{proof}
Let $\rho$ be a measure on $\mathcal{X}\otimes\mathcal{X}$ defined as $\rho(A,B):=\int_A\pi(\mathrm{d} x)P^\ast_\omega(x,B)$ and $H:\mathsf{X}^2\to\mathbb{R}$ a $\rho$-integrable test function. Establishing $\mathbb{E}_\rho\{H(X,Y)\}=\mathbb{E}_\rho\{H(Y,X)\}$ is sufficient to show that $P^\ast_\omega$ is $\pi$-reversible.
\begin{multline*}
\mathbb{E}_\rho\{H(X,Y)\}=\sum_{i=1}^n\iint_{\mathsf{X}}H(x,y)\pi(\mathrm{d} x)P_i(x,\mathrm{d} y)\left\{\omega_i(x)\wedge \omega_i(y)\right\}\\
+\sum_{i=1}^n \iint_{\mathsf{X}}H(x,y)\pi(\mathrm{d} x)\delta_x(\mathrm{d} y)\omega_i(x)\left(1-r_i(x)\right)\\
=\iint_{\mathsf{X}}H(x,y)\sum_{i=1}^n\omega_i(y)\pi(\mathrm{d} y)P_i(y,\mathrm{d} x)\left\{1\wedge\omega_i(x)/\omega_i(y)\right\}\\
+\sum_{i=1}^n \iint_{\mathsf{X}}H(x,y)\pi(\mathrm{d} y)\delta_y(\mathrm{d} x)\omega_i(y)\left(1-r_i(y)\right)\\
=\iint_{\mathsf{X}}H(x,y)\pi(\mathrm{d} y)P^\ast_\omega(y,\mathrm{d} x)
=\mathbb{E}_\rho\{H(Y,X)\}\,,
\end{multline*}
where the second equality follows from the $\pi$-reversibility of $P_i$ and the symmetry of the measure $\pi(\mathrm{d} x)\delta_x(\mathrm{d} y)$ on $\mathcal{X}\otimes\mathcal{X}$.
\end{proof}
Since $\pi$-reversible Markov kernels are necessarily $\pi$-invariant, an immediate consequence of Proposition \ref{prop2} is the following corollary.
\begin{corollary}
\label{prop1}
Assume that for all $i\in\{1,\ldots,n\}$, $P_i$ is $\pi$-reversible, then for any choice of function $\omega:\mathsf{X}\to\Delta_n$, $P^\ast_\omega$ is $\pi$-invariant.
\end{corollary}
\begin{rem}
\label{rem1}
The locally informed kernel $P^\ast_\omega$ can be shown to be $\pi$-invariant using a probabilistic approach. Let $\mathsf{I}:=\{1,\ldots,n\}$ and let $\mathcal{I}:=\mathcal{P}(\mathsf{I})$ be its powerset. Consider the distribution $\bar{\pi}$ on $(\mathsf{X}\times \mathsf{I},\mathcal{X}\otimes \mathcal{I})$ defined as
\begin{equation}
\label{eqbpi}
\bar{\pi}(x,i):=\omega_i(x)\pi(x)\,.
\end{equation}
Denote by $\{I_t,\,t\in\mathbb{N}\}$ the sequence of random variables drawn recursively at each iteration of Algorithm \ref{alg1}. Noting that $\bar{\pi}(i|x)=\omega_i(x)$, step (1) of Algorithm \ref{alg1} can be regarded as a Gibbs update of $I_t$ given $X_t=x$ and as such is $\bar{\pi}$-invariant. Step (2) of Algorithm \ref{alg1} can be regarded as a Metropolis-Hastings update of $X_{t+1}$ given $(I_t,X_t)=(i,x)$. Indeed, taking $P_i(x,\,\cdot\,)$ as the proposal kernel, step (2) consists in simulating $\tilde{X}\sim P_i(x,\,\cdot\,)$ and accepting/rejecting the proposal with the usual MH probability
\begin{equation}
1\wedge\frac{\bar{\pi}(\tilde{X}\,|\,i)P_i(\tilde{X},x)}{\bar{\pi}(x\,|\,i)P_i(x,\tilde{X})}=1\wedge\frac{\omega_i(\tilde{X})}{\omega_i(x)}\frac{\pi(\tilde{X})P_i(\tilde{X},x)}{\pi(x)P_i(x,\tilde{X})}=\alpha_i(x,\tilde{x})\,,
\end{equation}
where $\alpha_i(x,\tilde{x})$ is defined at Eq. \eqref{eq1}. The last equality holds because $P_1, P_2,\ldots$ are all $\pi$-reversible. This shows that a transition $(I_t,X_t)\to (I_{t+1},X_{t+1})$ of Algorithm \ref{alg1} is in fact a series of two $\bar{\pi}$-invariant transitions and is thus $\bar{\pi}$-invariant. Noting that $\pi$ is the marginal of $\bar{\pi}$ with respect to $X$ completes the proof.
\end{rem}
\begin{rem}
The locally informed Markov chains used in Examples \ref{ex1} and \ref{ex2} are instances of Algorithm \ref{alg1}.
\end{rem}
In the sequel, we refer to $P_{\omega^{\text{c}}}$ as the transition kernel defined in Eq. \eqref{eq0} where ${\omega^{\text{c}}}$ is constant on $\mathsf{X}$, in contrast to $P^\ast_\omega$ where the function $\omega$ varies on $\mathsf{X}$. One can wonder whether the locally informed Markov chain (Algorithm \ref{alg1}) with kernel $P^\ast_\omega$ is more efficient than the corresponding uninformed one, \textit{i.e.\;} the chain with kernel $P_{\omega^{\text{c}}}$. A first negative answer can be formulated as follows. Roughly speaking, the rejection step introduced at Step 2 of Algorithm \ref{alg1} (see Eq. \eqref{eq1}) makes the locally informed chain less efficient than the uninformed chain, in the sense of increasing the asymptotic variance of some Monte Carlo estimators. The following proposition establishes this result more formally.
\begin{prop}
\label{prop3}
Let $f\in\mathcal{L}_{0}^2(\pi)$. For any $\pi$-reversible kernel $P$, define the asymptotic variance of the Monte Carlo estimation of $\pi f$ using the Markov chain $\{X_t,\,t\in\mathbb{N}\}$ with kernel $P$ and $X_0\sim\pi$ as
\begin{equation}
\label{eq:asymptotic_var}
v(f,P):=\lim_{t\to\infty}\frac{1}{t}\mathrm{ var}\left\{\sum_{k=0}^{t-1}f(X_k)\right\}\,.
\end{equation}
Assume that
\begin{enumerate}[(i)]
\item $\mathsf{X}$ is a continuous state space
\item for all $i\in\{1,\ldots,n\}$, $P_i$ is absolutely continuous and $\pi$-reversible,
\item the function $f$ satisfies
$$
\sum_{k=1}^\infty \left|\mathrm{cov}\{f(X_0),f(X_k)\}\right|<\infty\,,
$$
\end{enumerate}
then we have
$$
v(f,P^\ast_\omega)\geq v(f,P_{\omega^{\text{c}}})\,.
$$
\end{prop}
\begin{proof}
This proof follows from a slight adaptation of Theorem 4 in \cite{maire2014comparison}. In the sequel, for notational simplicity, we refer to $\{X_t,\,t\in\mathbb{N}\}$ as a Markov chain with $X_0\sim\pi$ and transition kernel $P^\ast_\omega$ or $P_{\omega^{\text{c}}}$, indifferently. In this proof we embed the Markov chain $\{X_t,\,t\in\mathbb{N}\}$ in the state space $(\mathsf{X}\times\mathsf{I},\mathcal{X}\otimes\mathcal{I})$ and consider a non-homogeneous chain of the type
\begin{multline}
\label{eq3}
\cdots\longrightarrow\left\{X_k,I_k\right\}\overset{Q}{\longrightarrow} \left\{X_{k+1}=X_k,I_{k+1}\sim \omega(X_{k}) \rightsquigarrow i\right\}\\
\overset{R}{\longrightarrow}\left\{X_{k+2}\sim P_{i}(X_{k+1},\cdot\,),I_{k+2}=i\right\}\longrightarrow \cdots\,,
\end{multline}
where $Q$ refers to the Gibbs update of $I$ (Step 1 of Alg. \ref{alg1}) and $R$ to the Metropolis-within-Gibbs update of $X$ (Step 2 of Alg. \ref{alg1}). Recall that the Markov chain $\{(X_k,I_k),\,k\in\mathbb{N}\}$ admits $\bar{\pi}$ \eqref{eqbpi} as stationary distribution. Moreover, we note that both $Q$ and $R$ are $\bar{\pi}$-reversible. In the context of the decomposition suggested in \eqref{eq3}, let $Q_\omega^\ast$ and $R_\omega^\ast$ be the two kernels such that $P^\ast_\omega=Q_\omega^\ast R_\omega^\ast$ and similarly write $P_{\omega^{\text{c}}}=Q_{\omega^{\text{c}}} R_{\omega^{\text{c}}}$. Clearly, $R_{\omega^{\text{c}}}\succeq_P R_\omega^\ast$, \textit{i.e.\;} for all $(x,i)\in\mathsf{X}\times\mathsf{I}$ and $A\times B\in\mathcal{X}\otimes\mathcal{I}$,
$$
R_\omega^\ast(x,i;A\times B\backslash\{x,i\})=\int_{A}P_i(x,\mathrm{d}\tilde{x})\alpha_i(x,\tilde{x})\leq P_i(x,A)= R_{\omega^{\text{c}}}(x,i;A\times B\backslash\{x,i\})\,.
$$
A direct application of Theorem 4 of \cite{maire2014comparison} would also require $Q_{\omega^{\text{c}}}\succeq_P Q_\omega^\ast$. This holds if and only if $\omega_i^{\text{c}}\geq\omega_i(x)$ for all $(x,i)\in \mathsf{X}\times\mathsf{I}$, which, since the weights sum to one, forces $\omega$ to be constant. Apart from this trivial case, we thus have $Q_{\omega^{\text{c}}}\not\succeq_P Q_{\omega}^{\ast}$. However, we note that the operator $Q_{\omega^{\text{c}}}-Q_{\omega}^\ast$ is null on $\mathcal{L}_{0}^2(\pi)$, since for any $Q\in\{Q_{\omega^{\text{c}}},Q_\omega^\ast\}$
\begin{multline*}
\pscal{f}{Q f}=\sum_{i=1}^n\int_{\mathsf{X}}\bar{\pi}(\mathrm{d} x,i)f(x) Q f(x,i)\,,\\
=\sum_{i=1}^n\sum_{j=1}^n\iint_{\mathsf{X}}\bar{\pi}(\mathrm{d} x,i)f(x)Q(x,i;\mathrm{d}\tilde{x}, j)f(\tilde{x})\,,\\
=\int_{\mathsf{X}}\sum_{i=1}^n\bar{\pi}(\mathrm{d} x,i)f(x)\sum_{j=1}^n\omega_j(x)f(x)=\|f\|^2\,.
\end{multline*}
At this stage we refer to the proof of Theorem 4 in \cite{maire2014comparison}: it can be carried out in the same way, replacing the assumption $Q_{\omega^{\text{c}}}\succeq_P Q_\omega^\ast$ with the fact that $Q_{\omega^{\text{c}}}-Q_{\omega}^\ast$ is the null operator on $\mathcal{L}_{0}^2(\pi)$. More precisely, the last equation in the proof of Lemma 25 holds despite the fact that $Q_{\omega^{\text{c}}}\not\succeq_P Q_\omega^\ast$: one of the terms on the RHS of that equation is null and the other is negative, because $R_{\omega^{\text{c}}}\succeq_P R_\omega^\ast$. This completes the proof.
\end{proof}
\begin{rem}
We cannot directly apply Theorem 4 from \cite{tierney1998note} since $P_{\omega^{\text{c}}}\succeq_P P^\ast_\omega$ does not hold. Indeed, for all $(x,A)\in\mathsf{X}\times\mathcal{X}$
\begin{multline*}
P^\ast_\omega(x,A\backslash\{x\})=\sum_{i=1}^n \omega_i(x) \int_{A\backslash\{x\}}P_i(x,\mathrm{d} y)\alpha_i(x,y) \\
\leq \sum_{i=1}^n\omega_i^{\text{c}}P_i(x,A\backslash\{x\})=P_{\omega^{\text{c}}}(x,A\backslash\{x\})\\
\Leftrightarrow \omega_i^{\text{c}}=\omega_i(x)\,,\quad\text{a.s.}
\end{multline*}
The more sophisticated framework of Theorem 4 from \cite{maire2014comparison} is needed to split the two types of update.
\end{rem}
This shows that, asymptotically, the locally informed construction suggested in Algorithm \ref{alg1} is worse than any uninformed strategy moving according to the same kernel collection $\mathfrak{P}$. However, as illustrated in Section \ref{sec:2}, for some sparse and filamentary distributions, the locally informed strategy yields Markov chains with smaller mixing times. This is also the case in the following example, where theoretical mixing times are reported for the locally informed algorithm and its uninformed counterpart.
\begin{exa}
\label{ex5}
Let $\mathsf{X}=\{1,2,3\}^d$ with $d>1$. Consider the generic distribution on $\mathsf{X}$ defined as follows:
\begin{equation}
\pi(x)\propto
\left\{
\begin{array}{cl}
1 & x\in\mathcal{S}_d\cup \mathcal{T}_d\,,\\
100^{-d} & \text{otherwise}\,,
\end{array}
\right.
\end{equation}
where $(\mathcal{S}_d,\mathcal{T}_d)$ are subspaces of $\mathsf{X}$ of dimension $2$ defined as follows:
$$
\mathcal{S}_d:=\left\{x\in\mathsf{X},\;x_1=\cdots=x_{d-2}=1\right\}\,,\qquad
\mathcal{T}_d:=\left\{x\in\mathsf{X},\;x_3=\cdots=x_{d}=1\right\}\,.
$$
A representation of $\pi$ in the case $d=3$ is given on the left panel of Figure \ref{fig:ex5}.
\end{exa}
\begin{figure}
\centering
\includegraphics[scale=0.7]{./figure/hypercube/ex4_pi-eps-converted-to.pdf}
\includegraphics[scale=0.7]{./figure/hypercube/ex4_ome-eps-converted-to.pdf}
\caption{(Example \ref{ex5}, $d=3$) Left panel: representation of $\pi$. Right panel: calculation of $\omega(x)$ where $x=(3,1,1)$. The weight $\omega_i(x)$ is proportional to the marginal $\pi_i(x)$, which is the sum of $\pi(x_{-i},j)$ for $j=1,2,3$ (states in purple), \textit{i.e.\;} $\omega_1(x)=\omega_2(x)\propto 3 >\omega_3(x)\propto 1+2\times10^{-6}$.\label{fig:ex5}}
\end{figure}
Since the full conditional distributions of $\pi$ are known and $\mathsf{X}$ is a discrete space, one can use a Gibbs sampler to sample from $\pi$. In this case, the collection of kernels $P_1,\ldots,P_n$ corresponds to the full conditional distributions, \textit{i.e.\;} $n=d$ and, for all $i\in\{1,\ldots,d\}$ and $A=\otimes_{j=1}^d A_j$, $P_i(x,A)=\delta_{x_{-i}}(A_{-i})\pi(A_i\,|\,x_{-i})$. We compare the speed of convergence of $P_{\omega^{\text{c}}}$ and $P^\ast_\omega$ with selection probabilities defined as:
\begin{equation}
\omega_i^{\text{c}}=1/d\,,\qquad\omega_i(x)\propto\pi_i(x):=\sum_{j=1}^3\pi(x_{1:i-1},j,x_{i+1:d})\,.
\end{equation}
The geometry of the problem leaves $\omega_i^{\text{c}}=1/d$ as the only reasonable option for the constant selection probability. In contrast, when the function $\omega$ is allowed to be state dependent, it is designed so that the Markov chain attempts, most of the time, to move on either hyperplane where the probability mass of $\pi$ is concentrated. The initial distribution $\mu$ is set as the Dirac mass at state $1_d$. Intuitively, this corresponds to the case where one would assign the starting state of the Markov chain to a point $x_0\in\mathcal{S}_d\cup\mathcal{T}_d$ found by a deterministic optimisation strategy, so that it does not spend too much time wandering in $\mathsf{X}\backslash\{\mathcal{S}_d\cup \mathcal{T}_d\}$. We report in Figure \ref{fig1} the total variation distances between $\pi$ and the chain distribution for the two algorithms, $\|\pi-\mu {P^\ast_\omega}^t\|$ and $\|\pi-\mu P_{\omega^{\text{c}}}^t\|$, for some $t\in\mathbb{N}$. We also provide the $\epsilon$-mixing times $\tau(\epsilon):=\inf_{t\in\mathbb{N}}\{\|\pi-\mu P_{\omega^{\text{c}}}^t\|<\epsilon\}$ and $\tau^\ast(\epsilon):=\inf_{t\in\mathbb{N}}\{\|\pi-\mu {P^\ast_\omega}^t\|<\epsilon\}$, see Table \ref{tab1}. Since $\mathsf{X}$ is discrete, all these quantities are exact.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{./figure/ex_4/TV_d3-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/ex_4/TV_d4-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/ex_4/TV_d5-eps-converted-to.pdf}
\includegraphics[scale=0.6]{./figure/ex_4/TV_d8-eps-converted-to.pdf}
\caption{(Example \ref{ex5}) Convergence in TV for the non locally informed (RSGS) and locally informed Markov chains.\label{fig1}}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{cc|cccc}
&$\epsilon$ & 1/4 & 0.1 & 0.01 &0.001\\
\hline
\multirow{ 2}{*}{$d=2$}&$\tau^\ast(\epsilon)$ & 2 & 3 & 5 & 6 \\
&$\tau(\epsilon)$ & 3 & 4 & 8 & 11 \\
\hline
\multirow{ 2}{*}{$d=5$}&$\tau^{\ast}(\epsilon)$ & 4 & 6 & 11 & 16\\
&$\tau(\epsilon)$ & 5 & 9 & 17 & 25\\
\hline
\multirow{ 2}{*}{$d=8$}&$\tau^{\ast}(\epsilon)$ & 5 & 8 & 14 & 21\\
&$\tau(\epsilon)$ & 8 & 14 & 25 & 42
\end{tabular}
\caption{Mixing times for the locally informed and uninformed Markov chains.\label{tab1}}
\end{table}
In this example, the uninformed algorithm is penalized as it spends most of its time updating components that keep the Markov chain at the same state. Indeed, when the chain is on $\mathcal{T}_d\cup\mathcal{S}_d$, a proportion $1-2/d$ of the full conditional distributions have $1/(1+2\times 100^{-d})$ of their mass concentrated on the current state. In the same situation, the locally informed sampler will update components that keep the Markov chain on $\mathcal{T}_d\cup\mathcal{S}_d$ but, with probability $1-1/d$, it will move to a different state. Note that Proposition \ref{prop3} cannot be applied to compare the asymptotic variances of the two algorithms because $\mathsf{X}$ is discrete.
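For small $d$, all the quantities above can be computed exactly by enumerating the $3^d$ states. The sketch below is our own reimplementation for $d=3$ (the choice of $t$ and the assertion thresholds are ours): it builds the uninformed RSGS kernel and the locally informed kernel of Algorithm \ref{alg1}, checks $\pi$-invariance of both, and evaluates the exact total variation distances after $t$ iterations.

```python
import itertools
import numpy as np

d = 3
states = list(itertools.product([1, 2, 3], repeat=d))
idx = {s: k for k, s in enumerate(states)}
N = len(states)

def unnorm(x):
    # pi is flat on S_d = {x_1 = ... = x_{d-2} = 1} union T_d = {x_3 = ... = x_d = 1}
    on_S = all(x[k] == 1 for k in range(d - 2))
    on_T = all(x[k] == 1 for k in range(2, d))
    return 1.0 if (on_S or on_T) else 100.0 ** (-d)

pi = np.array([unnorm(s) for s in states])
pi /= pi.sum()

def gibbs_kernel(i):
    """Full-conditional (Gibbs) update of coordinate i."""
    P = np.zeros((N, N))
    for x in states:
        nbrs = [x[:i] + (j,) + x[i + 1:] for j in (1, 2, 3)]
        mass = sum(unnorm(y) for y in nbrs)
        for y in nbrs:
            P[idx[x], idx[y]] = unnorm(y) / mass
    return P

kernels = [gibbs_kernel(i) for i in range(d)]

# Locally informed weights, proportional to the marginals pi_i(x).
marg = np.array([[sum(unnorm(x[:i] + (j,) + x[i + 1:]) for j in (1, 2, 3))
                  for i in range(d)] for x in states])
omega = marg / marg.sum(axis=1, keepdims=True)

P_rsgs = sum(kernels) / d                 # uninformed random-scan kernel
P_star = np.zeros((N, N))                 # locally informed kernel with correction
for a in range(N):
    for b in range(N):
        if b != a:
            P_star[a, b] = sum(omega[a, i] * kernels[i][a, b]
                               * min(1.0, omega[b, i] / omega[a, i])
                               for i in range(d))
    P_star[a, a] = 1.0 - P_star[a].sum()

mu = np.zeros(N)
mu[idx[(1,) * d]] = 1.0                   # Dirac initial distribution at 1_d
t = 20
tv_r = 0.5 * np.abs(pi - mu @ np.linalg.matrix_power(P_rsgs, t)).sum()
tv_s = 0.5 * np.abs(pi - mu @ np.linalg.matrix_power(P_star, t)).sum()
print(tv_r, tv_s)                         # exact total-variation distances at time t
```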
\section{A locally informed MCMC algorithm for \allowbreak Metropolis-Hastings kernels}
\label{sec:5}
In this section, we assume that $\mathsf{X}$ is uncountable and that the collection of kernels $\mathfrak{P}$ comprises exclusively Metropolis-Hastings kernels, \textit{i.e.\;} for all $i\in\{1,\ldots,n\}$, there exist an absolutely continuous Markov kernel $Q_i$ and functions $\beta_i:\mathsf{X}^2\to(0,1)$ and $\varrho_i:\mathsf{X}\to(0,1)$ such that, for all $A\in\mathcal{X}$,
\begin{equation}
\label{eq4}
P_i(x,A)=\int_A Q_i(x,\mathrm{d} y)\beta_i(x,y)+\delta_x(A)(1-\varrho_i(x))\,,
\end{equation}
where $\beta_i:\mathsf{X}\times\mathsf{X}\to (0,1)$ is the acceptance probability defined as
$$
\beta_i(x,y)=1\wedge \frac{\pi(y)Q_i(y,x)}{\pi(x)Q_i(x,y)}\qquad \text{and}\qquad \varrho_i(x)=\int_\mathsf{X} Q_i(x,\mathrm{d} y)\beta_i(x,y)\,.
$$
By construction $P_i$ is $\pi$-reversible. In this particular case, Algorithm \ref{alg1} can be written as follows:
\setcounter{algorithm}{0}
\begin{algorithm}
\caption{Locally informed MCMC for MH kernels, transition $X_{k}\to X_{k+1}$}
\label{alg1_bis}
\begin{algorithmic}[1]
\Require $X_k=x\in\mathsf{X}$
\State draw $I\sim\omega(x) \rightsquigarrow i$
\State propose $\tilde{X}\sim Q_i(x,\cdot)\rightsquigarrow \tilde{x}$
\State set $X_{k+1}=\tilde{x}$ with probability
\begin{equation}
\label{eq1_bbi}
\gamma_i(x,\tilde{x})=\underbrace{\left\{1\wedge \frac{\pi(\tilde{x})Q_i(\tilde{x},x)}{\pi(x)Q_i(x,\tilde{x})}\right\}}_{\beta_i(x,\tilde{x})}\underbrace{\left\{1\wedge\frac{\omega_i(\tilde{x})}{\omega_i(x)}\right\}}_{\alpha_i(x,\tilde{x})}
\end{equation}
and $X_{k+1}=x$ otherwise.
\end{algorithmic}
\end{algorithm}
Indeed, in the context of MH kernels, a proposal $\tilde{X}\sim Q_i(x,\,\cdot\,)$ is accepted only if it is accepted at the MH accept/reject step (cf. \eqref{eq4}) and at Step 2 of Algorithm \ref{alg1}. In Algorithm \ref{alg1_bis}, an equivalent single accept/reject step (Step 3) is performed.
We introduce a second locally informed Markov chain $\{X_t,\,t\in\mathbb{N}\}$ relevant only when all the kernels $P_1,\ldots,P_n$ fall into the framework of Eq. \eqref{eq4}.
\begin{algorithm}
\caption{A second locally informed MCMC for MH kernels, transition $X_{t}\to X_{t+1}$}
\label{alg2}
\begin{algorithmic}[1]
\Require $X_t=x\in\mathsf{X}$
\State draw $I\sim\omega(x) \rightsquigarrow i$
\State propose $\tilde{X}\sim Q_i(x,\cdot)\rightsquigarrow \tilde{x}$ and set $X_{t+1}=\tilde{x}$ with probability
\begin{equation}
\label{eq1_bis}
\bar{\gamma}_i(x,\tilde{x})=1\wedge \frac{\pi(\tilde{x})Q_i(\tilde{x},x)\omega_i(\tilde{x})}{\pi(x)Q_i(x,\tilde{x})\omega_i(x)}
\end{equation}
and $X_{t+1}=x$ otherwise.
\end{algorithmic}
\end{algorithm}
\begin{prop}
Let $\bar{P}_\omega$ be the transition kernel of the Markov chain $\{X_t,\,t\in\mathbb{N}\}$ described at Algorithm \ref{alg2}. Then, $\bar{P}_\omega$ is $\pi$-reversible.
\end{prop}
The proof is similar to that of Proposition \ref{prop2}.
\begin{rem}
Similarly to Remark \ref{rem1}, the joint Markov chain $\{(X_t,I_t),\,t\in\mathbb{N}\}$ produced by Alg. \ref{alg2} can be regarded as a Gibbs chain on the extended state space $(\mathsf{X}\times\mathsf{I},\mathcal{X}\otimes\mathcal{I})$ targeting the distribution $\bar{\pi}$ defined in Eq. \eqref{eqbpi}. Step (1) is a Gibbs update of $I_t$ given $X_t=x$ and Step (2) is a Metropolis-within-Gibbs update of $X_t$ given $I_t=i$. The only difference with Algorithm \ref{alg1_bis} is that the proposal distribution in this step is $Q_i$ for Alg. \ref{alg2}, as opposed to $P_i$ for Alg. \ref{alg1_bis}.
\end{rem}
In the sequel, we refer to $P_\omega$ and $\bar{P}_\omega$ as the transition kernels corresponding to Algorithm \ref{alg1_bis} and Algorithm \ref{alg2} respectively, regardless of whether or not $\omega$ is constant on $\mathsf{X}$. In Algorithm \ref{alg1}, a proposal $\tilde{X}$ can be rejected (1) because of the non-zero diagonal mass of $P_i$ or (2) because of the rejection step necessary to keep the locally informed algorithm $\pi$-invariant. In contrast, a proposal $\tilde{X}$ in Algorithm \ref{alg2} faces only one accept/reject step. This naturally induces a Peskun ordering between the Markov kernels $P_\omega$ and $\bar{P}_\omega$.
\begin{prop}
\label{prop:peskunMH}
Let $P_1,\ldots,P_n$ be $n$ Metropolis-Hastings kernels and $f\in\mathcal{L}_{0}^2(\pi)$. Let $\omega:\mathsf{X}\to \Delta_n$ be any weight function. Denote by $P_\omega$ and $\bar{P}_\omega$ the two transition kernels defined by Algorithms \ref{alg1_bis} and \ref{alg2}, respectively. Then we have
\begin{equation}
\label{eq5}
v(f,P_\omega)\geq v(f,\bar{P}_\omega)\,,
\end{equation}
where for any Markov kernel $P$ and any $f\in\mathcal{L}^2(\pi)$, $v(f,P)$ is the asymptotic variance as defined in Eq. \eqref{eq:asymptotic_var}.
\end{prop}
\begin{proof}
Contrary to the proof of Proposition \ref{prop3}, we can directly compare the two kernels $P_\omega$ and $\bar{P}_\omega$ since the weight function $\omega$ is the same for both kernels. Note that for all $x\in\mathsf{X}$ and any $A\in\mathcal{X}$,
\begin{enumerate}[(i)]
\item the Markov subkernels associated to $P_\omega$ and $\bar{P}_\omega$ write
\begin{equation*}
\left\{
\begin{array}{l}
P_\omega(x,A\backslash\{x\})=\sum_{i=1}^{n}\omega_i(x)\int_A Q_i(x,\mathrm{d} y) \gamma_i(x,y)\,,\\
\bar{P}_\omega(x,A\backslash\{x\})=\sum_{i=1}^{n}\omega_i(x)\int_A Q_i(x,\mathrm{d} y) \bar{\gamma}_i(x,y)
\end{array}
\right.
\end{equation*}
\item for all $i\in\{1,\ldots,n\}$ and for $(x,y)\in \mathsf{X}^2$,
\begin{multline*}
\gamma_i(x,y)=\left\{1\wedge \frac{\pi(y)Q_i(y,x)}{\pi(x)Q_i(x,y)}\right\}\left\{1\wedge\frac{\omega_i(y)}{\omega_i(x)}\right\}\\
\leq
1\wedge \frac{\pi(y)Q_i(y,x)\omega_i(y)}{\pi(x)Q_i(x,y)\omega_i(x)}=\bar{\gamma}_i(x,y)\,,
\end{multline*}
since for any positive real numbers $(a,b)$, $(1\wedge a)(1\wedge b)\leq 1\wedge ab$.
\end{enumerate}
Combining (i) and (ii), we obtain that $\bar{P}_\omega\succeq_P P_\omega$. Since $P_\omega$ and $\bar{P}_\omega$ are both $\pi$-reversible and $f\in\mathcal{L}_{0}^2(\pi)$, the inequality \eqref{eq5} follows by applying Theorem 4 from \cite{tierney1998note}.
\end{proof}
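The ordering of Proposition \ref{prop:peskunMH} can be illustrated on a finite space, where for $\pi$-centred $f$ the asymptotic variance \eqref{eq:asymptotic_var} has the closed form $v(f,P)=2\pscal{f}{Zf}-\pscal{f}{f}$ with $Z=(I-P+\mathbf{1}\pi^\top)^{-1}$ the fundamental matrix. The sketch below uses a toy target, proposals and weights of our own: it builds $P_\omega$ (two accept/reject factors) and $\bar{P}_\omega$ (a single factor) from the same proposal kernels and checks $v(f,P_\omega)\geq v(f,\bar{P}_\omega)$.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.15, 0.05])     # toy target on four states
n = len(pi)

# Two symmetric proposal kernels: uniform over the other states, nearest neighbour.
Q1 = (np.ones((n, n)) - np.eye(n)) / (n - 1)
Q2 = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            Q2[i, j] = 0.5
    Q2[i, i] = 1.0 - Q2[i].sum()          # leftover mass at the boundary states
Qs = [Q1, Q2]
omega = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.1, 0.9]])

def kernel(two_step):
    """Two accept/reject factors if two_step, else a single combined factor."""
    P = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            if y != x:
                for i, Q in enumerate(Qs):
                    if Q[x, y] > 0:
                        mh = pi[y] * Q[y, x] / (pi[x] * Q[x, y])
                        w = omega[y, i] / omega[x, i]
                        acc = min(1, mh) * min(1, w) if two_step else min(1, mh * w)
                        P[x, y] += omega[x, i] * Q[x, y] * acc
        P[x, x] = 1.0 - P[x].sum()
    return P

def asym_var(f, P):
    """v(f, P) = 2 <f, Z f>_pi - <f, f>_pi, with Z the fundamental matrix."""
    f = f - pi @ f
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return 2 * f @ (pi * (Z @ f)) - f @ (pi * f)

f = np.array([1.0, -2.0, 0.5, 3.0])
v_two, v_one = asym_var(f, kernel(True)), asym_var(f, kernel(False))
print(v_two, v_one)   # the two-factor acceptance never beats the single factor
```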
\begin{rem}
Proposition \ref{prop:peskunMH} indicates that when $P_1,\ldots,P_n$ are MH kernels, the locally informed MCMC of Algorithm \ref{alg2} should be preferred to Algorithm \ref{alg1_bis}, when the efficiency is measured by the asymptotic variance.
\end{rem}
\begin{rem}
Compared to the general case detailed in Section \ref{sec:4}, Proposition \ref{prop:peskunMH} is less negative for locally informed Markov kernels. Indeed, contrary to Proposition \ref{prop3}, the diagonal component of MH kernels ensures that the locally informed kernel of Algorithm \ref{alg2} is not dominated by any uninformed algorithm using kernels in $\mathfrak{P}$, \textit{i.e.\;} $P_{\omega^{\text{c}}}\not\succeq_P \bar{P}_\omega$. This follows from the fact that for any $i\in\{1,\ldots,n\}$ and all $(x,y)\in\mathsf{X}^2$, the quantities $\bar{\gamma}_i(x,y)$ and $\beta_i(x,y)$ cannot be ordered.
\end{rem}
\section{Numerical Examples}
\label{sec:6}
In this section, we consider three general state space examples in which the distribution of interest can be seen as sparse and filamentary. We assume that a collection of $n$ Metropolis-Hastings kernels $P_1,\ldots,P_n$ is available. Indeed, since the full conditional distributions of the models considered here are not straightforward, a Gibbs sampler cannot be implemented. With some abuse of notation, we will keep the acronym RSGS to refer to the algorithm sometimes known as ``Metropolis-within-Gibbs''. For each example, we clearly define the collection of available kernels and the weight function $\omega:\mathsf{X}\to\Delta_n$ used by the locally informed algorithms. Due to the symmetry of the models, the weight function for the RSGS algorithm is set as ${\omega^{\text{c}}}\propto(1,1,\cdots,1)$.
The non locally informed algorithm (RSGS) is compared with the two locally informed algorithms (Alg. \ref{alg1} and Alg. \ref{alg2}). The two strategies (\textit{i.e.\;} locally informed or not) are compared according to their time to convergence conditionally on some initial distribution $\mu_0$ (\textit{i.e.\;} in the transient regime) and their asymptotic efficiency (\textit{i.e.\;} in the stationary regime), the latter measured through the asymptotic variance of the empirical average of some test functions obtained from the sample path of Markov chains started at stationarity. More precisely,
\begin{itemize}
\item The convergence in distribution is assessed by estimating the Kullback-Leibler divergence (KL) between $\pi$ and $p_t$, the Markov chain distribution at iteration $t$, conditionally on $\mu_0$. The KL divergence is estimated using a nearest neighbor entropy estimator, developed in \cite{chauveau2013smoothness} and \cite{chauveau2013simulation}.
\item The asymptotic variances are estimated by simulating a large number of i.i.d. estimators of $\pi f\rightsquigarrow\{\widehat{\pi f}\}_1,\{\widehat{\pi f}\}_2,\cdots$, each obtained through the simulation of a Markov chain trajectory started at stationarity and run for $T$ iterations. The asymptotic variance $\sigma_f$ is thus estimated by
$$
\widehat{\sigma_f}:=T\widehat{\mathrm{ var}}\left(\{\widehat{\pi f}\}_1,\{\widehat{\pi f}\}_2,\ldots\right)\,,
$$
where $\widehat{\mathrm{ var}}(x_1,x_2,\ldots)$ denotes the unbiased variance estimator of the population $(x_1,x_2,\ldots)$.
\end{itemize}
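As a sanity check, this replicate-based estimator can be validated on a chain whose asymptotic variance is known in closed form. The sketch below uses a two-state toy chain of ours (the seed, $T$ and the number of replicates are arbitrary) with kernel $P=\bigl(\begin{smallmatrix}1-a & a\\ b & 1-b\end{smallmatrix}\bigr)$, for which $v(f,P)=\pi_0\pi_1(2-a-b)/(a+b)$ when $f(x)=\mathds{1}_{\{x=1\}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, T, m = 0.2, 0.4, 200, 2000
pi = np.array([b, a]) / (a + b)            # stationary distribution of the 2-state chain
P = np.array([[1 - a, a], [b, 1 - b]])

# m i.i.d. replicates of a length-T trajectory started at stationarity,
# each yielding one estimator of pi f for f(x) = 1{x = 1}.
means = np.empty(m)
for r in range(m):
    x = int(rng.random() < pi[1])          # stationary start
    u = rng.random(T)
    s = 0
    for t in range(T):
        s += x
        x = int(u[t] < P[x, 1])            # move to state 1 with probability P[x, 1]
    means[r] = s / T

sigma_hat = T * means.var(ddof=1)          # replicate-based estimator of v(f, P)
v_exact = pi[0] * pi[1] * (2 - a - b) / (a + b)
print(sigma_hat, v_exact)                  # the two should be close
```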
\begin{exa}
\label{ex6}
Let $\pi$ be the two-dimensional distribution (adapted from \cite{latuszynski2013adaptive}) defined on the compact set $\mathsf{X}=[0,1]\times[0,1]$ by the density function with respect to the Lebesgue measure defined as:
\begin{multline*}
\pi(x_1,x_2):= \frac{1}{2}\left\{\varphi(x_1,x_2)+\varphi(x_2,x_1)\right\}\,,\\
\varphi(x_1,x_2)\propto x_1^{100}\left\{1-\cos(10\pi x_2)\right\}\,.
\end{multline*}
The mass of $\pi$ is concentrated near the subspaces $\{x_1=1\}$ and $\{x_2=1\}$ and varies in the neighborhood of those subspaces according to $5$ sinusoids, see Figure \ref{fig:pi_ex6}.
\end{exa}
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{./figure/ex_wave/plot_N5.jpg}
\caption{(Example \ref{ex6}) Plot of the density $\pi$. \label{fig:pi_ex6}}
\end{figure}
The distribution $\pi$ is sampled using RSGS and the two locally informed algorithms proposed in this paper (Alg. \ref{alg1} and Alg. \ref{alg2}).
\paragraph{Available kernels.} There are $n=4$ kernels available. $P_1$ freezes $x_1$ and moves $x_2$ according to a MH kernel with a truncated Gaussian proposal with standard deviation $\sigma_1=0.01$. $P_2$ operates in the same way as $P_1$ but with standard deviation $\sigma_2=1$. Finally, $P_3$ and $P_4$ are identical to $P_1$ and $P_2$ respectively but move $x_1$ and freeze $x_2$.
\paragraph{Weight function.}
RSGS uses a weight function defined as ${\omega^{\text{c}}}\propto(1,1,1,1)$ while the two locally informed algorithms use
\begin{equation*}
\omega(x)\propto
\left\{
\begin{array}{ll}
\left(x_1,1-x_1,x_2,1-x_2\right) & \text{if}\;\left\{x_1<0.9\,,\,x_2<0.9\right\}\,,\\
\left(x_1,1-x_1,x_1,1-x_1\right)& \text{if}\;\left\{x_1>0.9\,,\,x_2<0.9\right\}\,,\\
\left(x_2,1-x_2,x_2,1-x_2\right)& \text{if}\;\left\{x_1<0.9\,,\,x_2>0.9\right\}\,,\\
\left(1,1,1,1\right)& \text{if}\;\left\{x_1>0.9\,,\,x_2>0.9\right\}\,.
\end{array}
\right.
\end{equation*}
This particular choice of $\omega(x)$ guarantees that large jumps are attempted with a probability that increases with the distance between $x$ and the high density regions of $\pi$. It also ensures that the types of move in the high density regions are attempted according to the local topology of $\pi$. For instance, if the Markov chain is near the subspaces $\{x\in\mathsf{X}, \,x_1=1\}$, large moves in the $x_2$ direction are attempted so as to jump between the different modes of $\pi$ and small moves in the $x_1$ direction are attempted to explore the tail of $\pi(\,\cdot\,|\,x_2)$.
\paragraph{Results}
In terms of distributional convergence, Figure \ref{fig:ex_6_cv} reports the estimated KL divergence between $\pi$ and the three Markov chain distributions. It shows that even though the KL divergence of the locally informed methods decreases faster initially, the $\epsilon$-mixing times appear roughly the same for the three algorithms once $\epsilon$ is small enough. Note that after $t=100$ iterations, the convergence of Alg. \ref{alg1} is clearly slower than that of the random-scan method. Figure \ref{fig:ex_6_mean} illustrates the convergence of the Markov chain sample path averages of a number of test functions (defined in Table \ref{tab:ex_6}) to their corresponding expectations. For this mode of convergence, the locally informed Algorithm \ref{alg2} clearly shows an advantage over its two competitors: after $t=1,000$ iterations, the bias is significantly lower than when using Alg. \ref{alg1} or RSGS. In terms of asymptotic efficiency, Table \ref{tab:ex_6} summarizes our experiments. First, note that Alg. \ref{alg2} is always more efficient than Alg. \ref{alg1}, a fact which illustrates Proposition \ref{prop:peskunMH}. What was unclear from the theoretical analysis, however, is that on this example and for these test functions, the locally informed methods (Algorithms \ref{alg1} and \ref{alg2}) also appear asymptotically more efficient than the non locally informed method (RSGS). Note that the asymptotic ordering between the RSGS and Algorithm \ref{alg1} established in Proposition \ref{prop3} does not apply to this example because $P_1,\ldots,P_4$ are not absolutely continuous kernels. In particular, we observe that the asymptotic variances are significantly reduced when using Algorithm \ref{alg2} instead of RSGS.
\begin{figure}[h]
\centering
\includegraphics[scale=.8]{./figure/ex_wave/CV_KL-eps-converted-to.pdf}
\caption{(Example \ref{ex6}) Convergence in distribution (measured in KL divergence) of the three Markov chains with initial distribution $\mu_0=\mathcal{N}([0.95\;0.5],\text{Id}_2)$. Estimation based on $20,000$ replications of the three Markov chains. \label{fig:ex_6_cv}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.56]{./figure/ex_wave/cv_f1_2-eps-converted-to.pdf}
\includegraphics[scale=0.56]{./figure/ex_wave/cv_f2_2-eps-converted-to.pdf}
\includegraphics[scale=0.56]{./figure/ex_wave/cv_f3_2-eps-converted-to.pdf}
\includegraphics[scale=0.56]{./figure/ex_wave/cv_f4_2-eps-converted-to.pdf}
\caption{(Example \ref{ex6}) Convergence of the estimator of $\pi \bar{f}$, $\bar{f}:=f-\pi f$, for different functions $f\in\{f_1,\ldots,f_4\}$ and the three algorithms started with $\mu_0=\mathcal{N}([0.95\;0.5],\text{Id}_2)$. The boxplots show the distribution of the estimator of $\pi f$ for each method after $n=1,000$ MCMC iterations and experiments were replicated $20,000$ times. \label{fig:ex_6_mean}}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{l|c|c|c}
\hspace{1.4cm} functions & RSGS & Alg. \ref{alg1_bis} & Alg. \ref{alg2}\\
\hline
$f_1(x):=1/(1+x_1^{100})$ & 19.75 & 19.04 & 14.87\\
$f_2(x):=(1/x_1)\mathds{1}_{\{0.4<x_2<0.5\}}$ & 2.49 & 2.22 & 1.77\\
$f_3(x):=x_1/(1+x_2)$ & 37.08 & 36.31 & 26.05\\
$f_4(x):=e^{-(x_1-0.8)^{10}}\mathds{1}_{\{x_2<0.9\}}$ & 158.09 & 155.29 & 119.93
\end{tabular}
\caption{(Example \ref{ex6}) Asymptotic variance for different functions $\bar{f}:=f-\pi f \in\mathcal{L}^2_0(\pi)$ and three algorithms (RSGS and the two locally informed algorithms). Estimated from the simulation of $20,000$ i.i.d. Markov chains for each algorithm, each run for $n=5,000$ iterations and initiated under $\pi$. \label{tab:ex_6}}
\end{table}
\begin{exa}
\label{ex7}
Let $\pi_\theta$ be the distribution defined on $\mathsf{X}=\mathbb{R}^3$ as the following mixture of three Gaussians parameterized by $\theta>0$:
\begin{multline}
\label{eq:ex7}
\pi_\theta=(1/3)\Bigg\{\mathcal{N}
\left(\begin{bmatrix}
0\\
2\sqrt{\theta}\\
0
\end{bmatrix},\Sigma_\theta^{(1)}\right)
+\mathcal{N}\left(\begin{bmatrix}
-2\sqrt{\theta}\\
0\\
0
\end{bmatrix},\Sigma_\theta^{(2)}\right)\\
+
\mathcal{N}\left(\begin{bmatrix}
-2\sqrt{\theta}\\
-2\sqrt{\theta}\\
2\sqrt{\theta}
\end{bmatrix},\Sigma_\theta^{(3)}\right)
\Bigg\}\,,
\end{multline}
where for $(i,j,k)\in\{1,2,3\}^3$, $\left[\Sigma_\theta^{(i)}\right]_{j,k}:=\mathds{1}_{j=k}(1+(\theta-1)\mathds{1}_{\{j=i\}})$. As $\theta$ increases, $\pi_\theta$ features a more pronounced sparse and filamentary structure, see Figure \ref{fig:cross}.
\end{exa}
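Since the mixture admits a closed form, i.i.d. realizations of $\pi_\theta$ are straightforward to generate, which is useful for instance as ground truth when estimating KL divergences. The following minimal Python sketch (the function name is ours) samples from Eq. \eqref{eq:ex7} using the means and diagonal covariances $\Sigma_\theta^{(i)}$ defined above:

```python
import numpy as np

def sample_pi_theta(theta, n, seed=None):
    """Draw n i.i.d. samples from the three-component Gaussian mixture of
    Eq. (eq:ex7): equal weights; component i has a diagonal covariance equal
    to the identity except for a variance theta in coordinate i."""
    rng = np.random.default_rng(seed)
    s = 2.0 * np.sqrt(theta)
    means = np.array([[0.0, s, 0.0],
                      [-s, 0.0, 0.0],
                      [-s, -s, s]])
    stds = np.ones((3, 3))
    stds[np.arange(3), np.arange(3)] = np.sqrt(theta)  # sqrt of Sigma_theta^(i)
    comp = rng.integers(0, 3, size=n)                  # mixture component labels
    return means[comp] + rng.standard_normal((n, 3)) * stds[comp]
```

As $\theta$ grows, each component stretches along one axis, producing the elongated filaments visible in Figure \ref{fig:cross}.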
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{./figure/ex_cross/pi10-eps-converted-to.pdf}
\includegraphics[scale=0.65]{./figure/ex_cross/pi100-eps-converted-to.pdf}
\includegraphics[scale=0.65]{./figure/ex_cross/pi1000-eps-converted-to.pdf}
\caption{(Example \ref{ex7}) Representation of the probability density function $\pi_\theta$, for three parameters $\theta\in\{10,100,1000\}$. The ellipsoids cover approximately 90\% of the probability mass. \label{fig:cross}}
\end{figure}
In this example, $\pi_\theta$ is sampled using the RSGS algorithm and the locally informed algorithm (Alg. \ref{alg2}).
\paragraph{Available kernels}
Given the symmetry of $\pi_\theta$ (see Figure \ref{fig:cross}), two types of single-site update MH kernels with Gaussian random walk are considered: one with a large variance parameter $\sigma_1^2$ that allows a fast exploration of the edges and one with a smaller variance parameter $\sigma_2^2$ for local refinements on the boundaries of the filament or for directions orthogonal to it. In total, six MH kernels (two per direction) $P_\sigma^{(i)}$ with $\sigma\in\{\sigma_1,\sigma_2\}$ and $i\in\{1,2,3\}$ are considered.
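One such single-site update can be sketched in a few lines of Python (a generic Gaussian random-walk MH step on coordinate $i$, not the exact code used in our experiments; \texttt{log\_pi} is assumed to evaluate $\log\pi_\theta$ up to an additive constant):

```python
import numpy as np

def mh_single_site(x, i, sigma, log_pi, rng):
    """One single-site Metropolis-Hastings update: a Gaussian random-walk
    proposal of scale sigma on coordinate i, accepted with the usual
    Metropolis ratio (the proposal is symmetric, so it cancels)."""
    y = x.copy()
    y[i] += sigma * rng.standard_normal()
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        return y, True    # proposal accepted
    return x, False       # proposal rejected, chain stays put
```

The six kernels $P_\sigma^{(i)}$ then correspond to the six pairs $(i,\sigma)$ with $i\in\{1,2,3\}$ and $\sigma\in\{\sigma_1,\sigma_2\}$.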
\paragraph{Weight function}
The weight function gives a larger probability to large moves along the edge direction and to small moves in directions perpendicular to the edge. This is achieved by identifying the edge closest to the current state $x\in\mathsf{X}$. More formally, given some $\epsilon>0$, we define the following functions
\begin{multline}
\label{eq:ex8:weights}
\omega_1(x)\propto\left(
\begin{array}{ccc}
1 & \epsilon & \epsilon\\
1/8 & 1/4 & 1/4
\end{array}
\right)\,,\quad
\omega_2(x)\propto\left(
\begin{array}{ccc}
\epsilon & 1 & \epsilon\\
1/4 & 1/8 & 1/4
\end{array}
\right)\,,\\
\omega_3(x)\propto\left(
\begin{array}{ccc}
\epsilon & \epsilon & 1\\
1/4 & 1/4 & 1/8
\end{array}
\right)\,,
\end{multline}
where the symbol $\propto$ means that for all $x$, the matrix entries of $\omega_r(x)$ sum up to one.
We refer to the three Gaussian pdfs in the mixture $\pi_\theta$ (Eq. \ref{eq:ex7}) as $\{\phi_1,\phi_2,\phi_3\}$ and, for all $x\in\mathsf{X}$ and $j\in\{1,2,3\}$, set $\xi_j(x):\propto\phi_j(x)$ such that $\xi_1(x)+\xi_2(x)+\xi_3(x)=1$. The weight function is defined by
\begin{equation}
\label{eq:ex8:weights_2}
\omega(x):=\sum_{k=1}^3\xi_k(x)\omega_k(x)\,.
\end{equation}
In Eq. \eqref{eq:ex8:weights_2}, $\omega(x)$ is a matrix whose entry $\omega_{i,j}(x)$ ($i\in\{1,2\}$, $j\in\{1,2,3\}$) corresponds to the probability of drawing from the kernel $P_{\sigma_i}^{(j)}$. In our experiments, we have used $\epsilon=1/100$.
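A possible implementation of this weight computation is given below (Python; the function name is ours, and the responsibilities $\xi_k$ are computed in log-space for numerical stability, a detail not specified above):

```python
import numpy as np

def weight_matrix(x, theta, eps=0.01):
    """Evaluate the 2x3 matrix omega(x) of Eq. (eq:ex8:weights_2) as the
    responsibility-weighted average of the templates omega_1, omega_2,
    omega_3 of Eq. (eq:ex8:weights), each normalized to sum to one."""
    x = np.asarray(x, dtype=float)
    s = 2.0 * np.sqrt(theta)
    means = np.array([[0.0, s, 0.0], [-s, 0.0, 0.0], [-s, -s, s]])
    var = np.ones((3, 3))
    var[np.arange(3), np.arange(3)] = theta   # diagonal of Sigma_theta^(k)
    # responsibilities xi_k(x) proportional to phi_k(x), in log-space
    logphi = -0.5 * (((x - means) ** 2 / var).sum(axis=1)
                     + np.log(var).sum(axis=1))
    xi = np.exp(logphi - logphi.max())
    xi /= xi.sum()
    # template omega_k: large moves (row 0) favored in the edge direction k
    templates = np.empty((3, 2, 3))
    for k in range(3):
        w = np.full((2, 3), eps)
        w[0, k] = 1.0
        w[1] = 0.25
        w[1, k] = 0.125
        templates[k] = w / w.sum()
    return np.tensordot(xi, templates, axes=1)   # sum_k xi_k(x) omega_k
```

Near the mode elongated along coordinate $k$, the largest entry of $\omega(x)$ is the large-variance kernel in direction $k$, as intended.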
\paragraph{Results}
In contrast to Example \ref{ex6}, $\pi_\theta$ is clearly a filamentary and sparse distribution, at least for large $\theta$. Indeed, the variations of probability mass are smoother than the sinusoidal feature of $\pi$ in Ex. \ref{ex6}. In fact, this example can be seen as the counterpart in $d=3$ dimensions of the hypercube distribution described in Examples \ref{ex1} (when $\theta\to\infty$) and \ref{ex2} (for a finite $\theta>0$). Of course, the fact that $\mathsf{X}$ is continuous in this example changes the theory significantly, but one can wonder whether the results in terms of distributional convergence developed in Section \ref{sec:3} transpose to this situation. An empirical convergence analysis is carried out and Figure \ref{fig:ex_7_cv} reports the results. Interestingly, for large $\theta$ (\textit{e.g.\;} $\theta\in\{500,1000\}$), the convergence is sped up by a factor $d$ when using the locally informed algorithm (Alg. \ref{alg2}) instead of the RSGS. Of course, the metric used in Figure \ref{fig:ex_7_cv} (KL) is different from that used in Proposition \ref{prop:hypercube:2} (TV), but the speed-up factor observed in this example is in line with the $d/2$ factor obtained in Propositions \ref{prop:hypercube:1} and \ref{prop:hypercube:2}. We speculate that the factor $1/2$ is dropped here because $\pi_\theta$ is not a uniform distribution on the filament, as $\pi$ is in Examples \ref{ex1} and \ref{ex2}. For that reason, the relative speed of convergence between the two algorithms is not characterized by the speed at which the intersection area between two Gaussians is traversed (see Remark \ref{rem0}) and thus the relative speed of $1/d$ observed outside those areas prevails. Animations showing the convergence of $p_t$ to $\pi$ for the RSGS and the locally informed algorithm (Alg. \ref{alg2}) can be found online at \href{https://maths.ucd.ie/~fmaire/MV18/ex6_theta10.gif}{\texttt{http://maths.ucd.ie/$\sim$fmaire/MV18/ex6\_theta10.gif}} for $\theta=10$ (animations are also available for \href{http://maths.ucd.ie/~fmaire/MV18/ex6_theta100.gif}{$\theta=100$} and \href{https://maths.ucd.ie/~fmaire/MV18/ex6_theta1000.gif}{$\theta=1,000$}). In those animations, each figure contains 20,000 realizations of $\pi$ (for the i.i.d. panel) and $p_t$ for some $t>0$ (for the MCMC panels). For the MCMC algorithms, $\mu_0=\mathcal{N}([3\sqrt{\theta} \; 2\sqrt{\theta} \; 1],\text{Id}_3)$ was used as initial measure.
In terms of asymptotic efficiency, Table \ref{tab:ex_7} reports the asymptotic variance related to the Monte Carlo estimator of $\pi \bar{f}$, for four test functions $\bar{f}\in\mathcal{L}^2_0(\pi)$ and for different values of $\theta$. Results seem to point to the same conclusion as in Example \ref{ex6}, namely that the locally informed algorithm significantly reduces the variance compared to the RSGS. What is interesting, however, is that this ordering is reversed for larger noise levels (\textit{e.g.\;} $\theta=10$): as soon as $\pi_\theta$ loses its filamentary structure, RSGS becomes asymptotically more efficient than the locally informed algorithm. Combining this observation with Figure \ref{fig:ex_7_cv}, we conjecture the existence of a cut-off noise level $\theta^\ast$: when $\theta>\theta^\ast$, the locally informed strategy dominates the random scan approach both in terms of distributional convergence and asymptotic efficiency, for a sufficiently large class of initial measures and test functions, and conversely when $\theta<\theta^\ast$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.61]{./figure/ex_cross/CV_theta_10-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cross/CV_theta_100-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cross/CV_theta_500-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cross/CV_theta_1000-eps-converted-to.pdf}
\caption{(Example \ref{ex7}) Convergence in distribution (measured in KL divergence) of the two Markov chains with initial distribution $\mu_0=\mathcal{N}((3\sqrt{\theta} , 2\sqrt{\theta} , 1),\text{Id}_3)$, for $\theta\in\{10,100,500,1000\}$. Estimation based on $1,000$ replications of the two Markov chains. Note that the x-axis scale varies across plots. \label{fig:ex_7_cv}}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c|}
& \multicolumn{2}{c|}{$\theta=1,000$} & \multicolumn{2}{c|}{$\theta=100$} & \multicolumn{2}{c|}{$\theta=10$}\\
\hspace{1.2cm} $f(x)$ & RSGS & Alg. \ref{alg2} & RSGS & Alg. \ref{alg2} & RSGS & Alg. \ref{alg2}\\
\hline
$(x_1+x_2)/(100+x_3)$ & 1,034 & 350 & 35.61 & 33.97 & 1.15 & 2.40\\
$\mathds{1}_{\{x_1>x_2\}}e^{-|x_3|}$ & 0.300 & 0.072 & 0.096 & 0.063 & 0.076 & 0.167\\
$\mathds{1}_{\{x_1>2\sqrt{\theta}\}}$ & 0.831 & 0.120 & 0.210 & 0.114 & 0.115 & 0.193\\
$(x_1+x_2)/2\sqrt{\theta}\vee 1$ & 25.23 & 7.16 & 6.43 & 5.61 & 2.20 & 4.23
\end{tabular}
\caption{(Example \ref{ex7}) Asymptotic variance for different functions $\bar{f}:=f-\pi f \in\mathcal{L}^2_0(\pi)$ and two algorithms (RSGS and the locally informed algorithm \ref{alg2}). Estimated from the simulation of $20,000$ i.i.d. Markov chains for each algorithm, each run for $n=5,000$ iterations and initiated under $\pi$. \label{tab:ex_7}}
\end{table}
\begin{exa}
\label{ex8}
Let $\pi_\lambda$ be the distribution of the random variable $X=Z+\zeta$ where $Z\sim\text{unif}(\mathsf{Z})$ and $\zeta=(\zeta_1,\zeta_2,\zeta_3)$. For all $i\in\{1,2,3\}$, the noise variable $\zeta_i$ has the same distribution as $\zeta_0:=(-1)^{Y}T$ with $Y\sim \text{ber}(1/2)$ and $T\sim \text{expo}(\lambda)$, for some noise parameter $\lambda>0$. The subset $\mathsf{Z}\subset \mathsf{X}:=\mathbb{R}^3$ is the union of two cylinders $\mathcal{C}_R$ and $\mathcal{C}_r$ having a fixed radius, $R$ and $r$ respectively, that are connected by a third cylinder $\mathcal{C}_\rho$ having a radius $\rho$ that varies linearly in $(r,R)$. More precisely, $\mathcal{C}_R$ has a large radius $R$ and a small height $\ell$ while $\mathcal{C}_r$ has a small radius $r$ and a large height $L$. Note that $\ell$ and $L$ are set such that $\text{vol}(\mathcal{C}_r)=\text{vol}(\mathcal{C}_\rho)=\text{vol}(\mathcal{C}_R)=1/3$. For a more formal definition, $\mathsf{Z}$ is parameterized by two positive numbers $(R,r)\in\mathbb{R}^2$ such that $r<R$ and is defined as:
\begin{multline}
\mathsf{Z}:=\bigg\{(x_1,x_2,x_3)\in\mathsf{X}\quad\bigg|\,\qquad \left(\sqrt{x_2^2+x_3^2}\leq R\;,\; -\ell<x_1<0\right) \quad \cup\\
\left(\sqrt{x_2^2+x_3^2}\leq R-x_1\;,\; 0<x_1<R-r\right) \quad \cup\\
\left(\sqrt{x_2^2+x_3^2}\leq r\; ,\; R-r<x_1<R-r+L\right)\bigg\}\,,
\end{multline}
where $\ell:=(R^2-r^2)/(2R)$ and $L:=(R^2-r^2)/(2r)$. We have used $r=0.05$ and $R=1$ in the simulations. An illustration of $\mathsf{Z}$ is given in Figure \ref{fig:pi_ex8} and 10,000 i.i.d. draws from $\pi_\lambda$ for different noise levels are plotted in Figure \ref{fig:pi_ex8_iid}.
\end{exa}
Even though deriving the analytical form of $\pi_\lambda$'s probability density function is not straightforward, it is tractable (as the convolution product $\mathrm{unif}(\mathsf{Z})\otimes\mathrm{d}\mathbb{P}(\zeta\in\,\cdot\,)$) and therefore MCMC can be used to sample from $\pi_\lambda$. We compare the efficiency of the RSGS and the locally informed algorithm (Alg. \ref{alg2}) to sample from $\pi_\lambda$. Of course, in this example, sampling from $\pi_\lambda$ can be achieved in a direct manner by adding noise to a point $Z$ drawn uniformly at random in $\mathsf{Z}$ as suggested by the definition $X=Z+\zeta$. This scenario offers a controlled and tractable example aimed at mimicking distributions similar to $\pi_\lambda$ but for which the noise process and/or the boundary of $\mathsf{Z}$ is unknown. $\pi_\lambda$ falls in the category of sparse and filamentary distributions since $\text{dim}(\mathsf{X})=3$ while two-thirds of the probability mass is concentrated on either $\mathcal{C}_R$ or $\mathcal{C}_r$ which can be seen, at the limit, as a 2-dimensional subspace (a disk) and a 1-dimensional subspace of $\mathsf{X}$, respectively, see Figure \ref{fig:pi_ex8_iid}.
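This direct sampling scheme can be sketched as follows (Python; rejection sampling from a bounding box is our implementation choice for drawing $Z\sim\mathrm{unif}(\mathsf{Z})$, and the noise $\zeta_0=(-1)^YT$ is a Laplace variable with scale $1/\lambda$):

```python
import numpy as np

def in_Z(x, R=1.0, r=0.05):
    """Membership test for Z: the disk C_R, the conical junction C_rho and
    the thin cylinder C_r of Example (ex8)."""
    ell = (R**2 - r**2) / (2.0 * R)
    L = (R**2 - r**2) / (2.0 * r)
    x1 = x[..., 0]
    rho = np.hypot(x[..., 1], x[..., 2])
    return (((rho <= R) & (-ell < x1) & (x1 < 0))
            | ((rho <= R - x1) & (0 < x1) & (x1 < R - r))
            | ((rho <= r) & (R - r < x1) & (x1 < R - r + L)))

def sample_pi_lambda(lam, n, R=1.0, r=0.05, seed=None):
    """Draw n i.i.d. samples of X = Z + zeta: Z uniform on Z (rejection
    sampling from a bounding box) plus i.i.d. Laplace(0, 1/lambda) noise,
    i.e. the distribution of (-1)^Y T with Y ~ Ber(1/2), T ~ Expo(lambda)."""
    rng = np.random.default_rng(seed)
    ell = (R**2 - r**2) / (2.0 * R)
    L = (R**2 - r**2) / (2.0 * r)
    z = np.empty((0, 3))
    while len(z) < n:
        m = 60 * (n - len(z)) + 1000        # low acceptance rate: oversample
        cand = np.column_stack([rng.uniform(-ell, R - r + L, m),
                                rng.uniform(-R, R, m),
                                rng.uniform(-R, R, m)])
        z = np.vstack([z, cand[in_Z(cand, R, r)]])
    return z[:n] + rng.laplace(0.0, 1.0 / lam, size=(n, 3))
```

Rejection sampling is wasteful here (the bounding box is much larger than $\mathsf{Z}$) but keeps the sketch short; a piecewise sampler over the three cylinders would be more efficient.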
\begin{figure}
\centering
\includegraphics[scale=.6]{./figure/ex_cylinder/Z_set-eps-converted-to.pdf}
\caption{(Example \ref{ex8}) Points drawn uniformly at random in $\mathsf{Z}$ with $R=1$ and $r=1/100$.\label{fig:pi_ex8}}
\end{figure}
\begin{figure}
\minipage{0.32\textwidth}
\includegraphics[width=\linewidth]{./figure/ex_cylinder/iid_draws_lambda10-eps-converted-to.pdf}
\caption*{$\lambda=10$}
\endminipage\hfill
\minipage{0.32\textwidth}
\includegraphics[width=\linewidth]{./figure/ex_cylinder/iid_draws_lambda100-eps-converted-to.pdf}
\caption*{$\lambda=100$}
\endminipage\hfill
\minipage{0.32\textwidth}%
\includegraphics[width=\linewidth]{./figure/ex_cylinder/iid_draws_lambda1000-eps-converted-to.pdf}
\caption*{$\lambda=1,000$}
\endminipage
\caption{(Example \ref{ex8}) Realizations of $X\sim \pi_\lambda$ for different levels of noise.\label{fig:pi_ex8_iid}}
\end{figure}
\paragraph{Available kernels}
Taking into account the symmetry of $\pi_\lambda$, the RSGS algorithm alternates (deterministically) between updating $x_1|(x_2,x_3)$ (move $1$) and $(x_2,x_3)|x_1$ (move $2$). For each type of update, the RSMwGS moves according to a collection of $n$ MH kernels $P_1^{(i)},\ldots,P_n^{(i)}$ (for $i\in\{1,2\}$). In particular, the proposal mechanism associated with $P_j^{(i)}$ can be described as follows. First, consider a set of $n$ control points $\{\mathpzc{x}_{1},\ldots,\mathpzc{x}_n\}\in\mathsf{X}^n$ and define the following series of approximations for $j\in\{1,\ldots,n\}$:
\begin{equation}
\hat{\pi}_j^{(1)}:\approx\pi(\,\cdot\,|\,\mathpzc{x}_{2,j},\mathpzc{x}_{3,j})\,,\qquad
\hat{\pi}_j^{(2)}:\approx\pi(\,\cdot\,|\,\mathpzc{x}_{1,j})\,.
\end{equation}
The construction of those approximations is discussed later. Now, assuming $x=(x_1,x_2,x_3)$ as current state, simulating a proposal is achieved as follows:
\begin{itemize}
\item With probability $\epsilon$, a random walk type move is attempted,
\begin{itemize}
\item for move 1: propose $\tilde{X}=[x_1+\sigma U\,,\, x_2\,,\, x_3]$ such that $U\sim \mathcal{N}(0,1)$
\item for move 2: propose $\tilde{X}=[x_1\,,\, R\cos(V)\,,\, R\sin(V)]$ such that $R\sim \mathcal{N}(\sqrt{x_2^2+x_3^2},\sigma^2)$ and $V\sim \text{unif}(0,2\pi)$
\end{itemize}
\item With probability $1-\epsilon$, an independent type move is attempted,
\begin{itemize}
\item for move 1: propose $\tilde{X}=[\tilde{X}_1,\, x_2\,,\, x_3]$ with $\tilde{X}_1\sim \hat{\pi}_j^{(1)}$
\item for move 2: propose $\tilde{X}=[x_1\,,\, \tilde{X}_2\,,\,\tilde{X}_3]$ with $(\tilde{X}_2,\tilde{X}_3)\sim \hat{\pi}_j^{(2)}$
\end{itemize}
\end{itemize}
Hence, with probability $1-\epsilon$, the proposed state is drawn according to a proxy of the full conditional. Since RSGS draws the proposal kernel uniformly at random, there is a possibility that the proposal distribution, say $\hat{\pi}_j^{(2)}$, significantly differs from the full conditional $\pi(\,\cdot\,|\,x_1)$, a situation which is more likely to occur if $\mathpzc{x}_{1,j}$ is \textit{far} from $x_1$. At this stage, one may clearly see the benefit of a locally informed kernel selection: it can be designed so as to pick with high probability those kernels $P_j^{(2)}$ such that $\mathpzc{x}_{1,j}$ is close to $x_1$. The construction of the approximations $\{\hat{\pi}_j^{(1)},\hat{\pi}_j^{(2)}\}_{j=1}^n$ and the weight distribution used by the locally informed algorithm are now described.
\paragraph{Approximation of the full conditional distributions}
For type 1 move, it is defined as $\hat{\pi}_j^{(1)}:=\text{unif}(-\ell,\mu_j)$ and for type 2 move, $\hat{\pi}_j^{(2)}$ is the distribution of the random vector $(R\cos(V), R\sin(V))$ with $R\sim\text{unif}(-\nu_j,\nu_j)$ and $V\sim \text{unif}(0,2\pi)$. The constants $\{\mu_j,\nu_j\}_j$ are defined as follows:
\begin{equation}
\label{eq:control_points}
\left\{
\begin{array}{l}
\mu_1=0,\;\nu_1=R\,,\\
\mu_j=(R-r)(j-1)/(n-2),\;\nu_j=R-\mu_j\,\;\text{for}\;j\in\{2,\ldots,n-1\}\,,\\
\mu_n=R-r+L,\;\nu_n=r\,.
\end{array}
\right.
\end{equation}
In other words, the updated variables are drawn uniformly at random in $\mathsf{Z}$, conditionally on the control points ($\mathpzc{x}_{2,j},\mathpzc{x}_{3,j}$) when $x_1$ is updated or $\mathpzc{x}_{1,j}$ when $(x_2,x_3)$ is updated.
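The control points of Eq. \eqref{eq:control_points} can be computed as follows (a Python sketch; we take the intermediate index range to extend to $j=n-1$ so that the grid reaches $x_1=R-r$):

```python
import numpy as np

def control_points(n, R=1.0, r=0.05):
    """Constants (mu_j, nu_j)_{j=1..n} of Eq. (eq:control_points): mu_j is a
    position along the x1-axis and nu_j the matching radius used by the
    uniform approximations of the full conditionals."""
    L = (R**2 - r**2) / (2.0 * r)
    mu = np.empty(n)
    nu = np.empty(n)
    mu[0], nu[0] = 0.0, R                     # j = 1: the disk C_R
    j = np.arange(2, n)                       # j = 2, ..., n-1: the cone
    mu[1:n-1] = (R - r) * (j - 1) / (n - 2)
    nu[1:n-1] = R - mu[1:n-1]
    mu[n-1], nu[n-1] = R - r + L, r           # j = n: the thin cylinder C_r
    return mu, nu
```

The radii $\nu_j$ simply follow the linear profile of the conical junction, so each pair $(\mu_j,\nu_j)$ describes a cross-section of $\mathsf{Z}$.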
\paragraph{Weight function}
The weight function $\omega^{(i)}(x):=(\omega_1^{(i)}(x),\ldots,\omega_n^{(i)}(x))$ is defined for the two types of move as follows:
\begin{equation}
\omega_j^{(1)}(x):
\left\{
\begin{array}{ll}
=\delta_{j,n}&\text{if}\;\sqrt{x_2^2+x_3^2}<r\,,\\
\propto 1\Big\slash\left|\sqrt{x_2^2+x_3^2}-\sqrt{\mathpzc{x}_{2,j}^2+\mathpzc{x}_{3,j}^2}\right|&\text{if}\;\sqrt{x_2^2+x_3^2}\geq r\,,
\end{array}
\right.
\label{eq:ex_8_weight1}
\end{equation}
\begin{equation}
\omega_j^{(2)}(x):
\left\{
\begin{array}{ll}
=\delta_{j,1}&\text{if}\;x_1<0\,,\\
\propto 1\Big\slash\left|x_1-\mathpzc{x}_{1,j}\right|&\text{if}\;0<x_1<R-r\,,\\
=\delta_{j,n}&\text{if}\;x_1>R-r\,.
\end{array}
\right.
\label{eq:ex_8_weight2}
\end{equation}
The rationale of this design is, here again, to pick with high probability an independent proposal which is relevant for the local topology of $\pi$. In particular, Eq. \eqref{eq:ex_8_weight1} allows picking $\hat{\pi}_j^{(1)}$ according to the distance between the chain and the control points $\{\mathpzc{x}_{2,j},\mathpzc{x}_{3,j}\}$ while Eq. \eqref{eq:ex_8_weight2} picks $\hat{\pi}_j^{(2)}$ according to the distance between the chain and the control points $\{\mathpzc{x}_{1,j}\}$.
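For concreteness, the weight vector of Eq. \eqref{eq:ex_8_weight2} can be implemented as follows (Python; the small constant \texttt{tiny} is our regularization to avoid a division by zero when $x_1$ coincides with a control point, a case not covered above):

```python
import numpy as np

def weights_move2(x1, mu, R=1.0, r=0.05, tiny=1e-12):
    """Weight vector omega^(2)(x) of Eq. (eq:ex_8_weight2): one-hot on the
    two end sections, inverse-distance to the control points mu_j inside
    the conical section."""
    n = len(mu)
    w = np.zeros(n)
    if x1 < 0:                 # in the disk C_R: degenerate weight on j = 1
        w[0] = 1.0
    elif x1 > R - r:           # in the thin cylinder C_r: weight on j = n
        w[-1] = 1.0
    else:                      # in the cone: favor nearby control points
        w = 1.0 / (np.abs(x1 - mu) + tiny)
        w /= w.sum()
    return w
```

The weights of Eq. \eqref{eq:ex_8_weight1} follow the same pattern with $x_1$ replaced by the radius $\sqrt{x_2^2+x_3^2}$.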
\paragraph{Results}
Starting from an initial distribution whose mass is located near the extremity of $\mathcal{C}_r$, the convergence of the Markov chains simulated by RSGS and Algorithm \ref{alg2} to $\pi_\lambda$ (with four different values for the noise parameter $\lambda$) is compared in Figure \ref{fig:ex_8_cv}. We again observe that for a large noise level (\textit{e.g.\;} $\lambda=10$), the RSGS converges faster than the locally informed Markov chain with the following pattern: Alg. \ref{alg2} is faster at exploring most of the probability mass (within a few iterations) before entering a slow converging mode that eventually sees RSGS catching up and entering an $\epsilon$-ball of $\pi_\lambda$ (for some small $\epsilon>0$) faster. Looking at scenarios with smaller noise levels, the locally informed algorithm appears to converge (much) faster to a close neighborhood of $\pi_\lambda$ than RSGS. However, as the precision of our experimental results is limited, it remains to be seen whether the rate of convergence is uniformly larger for Alg. \ref{alg2} (\textit{i.e.\;} for all $t>0$) than for RSGS or if the locally informed algorithm eventually enters a slow convergence mode (after some large $t$) that simply cannot be perceived on the plots. Leaving theoretical considerations aside, the locally informed strategy is appealing as its empirical convergence from this initial distribution is 2, 3 and 5 times faster than that of RSGS, for $\lambda\in\{50,100,1000\}$ respectively. Animations of this scenario can be found online at \href{https://maths.ucd.ie/~fmaire/MV18/ex7_lambda10.gif}{\texttt{http://maths.ucd.ie/$\sim$fmaire/MV18/ex7\_lambda10.gif}} for $\lambda=10$ (also available for \href{http://maths.ucd.ie/~fmaire/MV18/ex7_lambda100.gif}{$\lambda=100$} and \href{https://maths.ucd.ie/~fmaire/MV18/ex7_lambda1000.gif}{$\lambda=1,000$}). Interestingly, the case $\lambda=10$ shows the obvious difficulty for the locally informed algorithm to visit $\mathsf{X}\backslash \mathsf{Z}$. 
This is because, by definition of $\omega$ (see Eq. \eqref{eq:ex_8_weight2}), moves outside $\mathsf{Z}$ are only proposed with probability $\epsilon$. More efficient strategies may exist but at the price of compromising the speed of convergence on $\mathsf{Z}$. In terms of asymptotic efficiency, we observe, similarly to Example \ref{ex7}, that the locally informed strategy can significantly reduce the asymptotic variance of Monte Carlo estimators for a diverse set of test functions (Table \ref{fig_ex8_var}), provided that the noise level is limited, \textit{i.e.\;} that $\pi_\lambda$ exhibits a filamentary structure.
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c|c|}
& \multicolumn{2}{c|}{$\lambda=1,000$} & \multicolumn{2}{c|}{$\lambda=100$} & \multicolumn{2}{c|}{$\lambda=10$}\\
\hspace{.7cm} $f(x)$ & RSGS & Alg. \ref{alg2} & RSGS & Alg. \ref{alg2} & RSGS & Alg. \ref{alg2}\\
\hline
$\rho(x)$ & 34.68 & 5.31 & 27.30 & 8.44 & 25.12 & 31.06\\
$x_1^{0.1}/(1+\rho(x))$ & 103.4 & 15.26 & 80.85 & 15.38 & 29.34 & 26.10\\
$\mathds{1}_{\{x_1>R\}}$ & 184.3 & 25.7 & 147.39 & 25.15 & 57.39 & 39.89 \\
$\mathds{1}_{\{\rho(x)>0.9R\}}$ & 2.22 & 0.46 & 2.69 & 6.63 & 29.47 & 41.19
\end{tabular}
\caption{(Example \ref{ex8}) Asymptotic variance for different functions $\bar{f}:=f-\pi f \in\mathcal{L}^2_0(\pi)$ and two algorithms (RSGS and the locally informed algorithm \ref{alg2}). Estimated from the simulation of $2,000$ i.i.d. Markov chains for each algorithm, each run for $n=5,000$ iterations and started under $\pi$. In this Table, we have defined $\rho(x):=\sqrt{x_2^2+x_3^2}$.\label{fig_ex8_var}}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.61]{./figure/ex_cylinder/KL_lambda10-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cylinder/KL_lambda50-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cylinder/KL_lambda100-eps-converted-to.pdf}
\includegraphics[scale=0.61]{./figure/ex_cylinder/KL_lambda1000-eps-converted-to.pdf}
\caption{(Example \ref{ex8}) Convergence in distribution (measured in KL divergence) of the two Markov chains with initial distribution $\mu_0=\mathcal{N}([R-r+L \; r/\sqrt{8} \; r/\sqrt{8}],0.01\text{Id}_3)$, for $\lambda\in\{10,50,100,1000\}$. Estimation based on $1,000$ replications of the two Markov chains. \label{fig:ex_8_cv}}
\end{figure}
\section{Discussion}
The main purpose of this paper was to investigate some properties of locally informed random scan MCMC. Given a fixed collection of $\pi$-reversible Markov kernels $\mathfrak{P}=\{P_1,\ldots,P_n\}$ operating on $(\mathsf{X},\mathcal{X})$, a locally informed algorithm simulates a Markov chain that moves, at each iteration, using a kernel drawn from the collection $\mathfrak{P}$ according to some state-dependent probability distribution $\omega$. To the best of our knowledge, such a selection strategy has never been proposed in the literature. This contrasts with the significant research interest \citep{liu1994covariance,liu1995covariance,rosenthal1995minorization,roberts1998convergence,latuszynski2013adaptive,andrieu2016random} in random scan procedures in which the selection mechanism is state-independent, the random scan Gibbs sampler (RSGS) being a prominent example. A potential explanation is that, if they are not carefully designed, locally informed algorithms can easily destroy the convergence properties of the kernels in $\mathfrak{P}$. We have proposed two locally informed algorithms in this paper: Algorithm \ref{alg1} is applicable to any collection of $\pi$-reversible kernels $\mathfrak{P}$ while Algorithm \ref{alg2} requires the kernels in $\mathfrak{P}$ to be Metropolis-Hastings type kernels. Locally informed algorithms are probably not always relevant: in fact we proved that Algorithm \ref{alg1} is always less asymptotically efficient than any non-locally informed strategy making use of the same kernels (see Proposition \ref{prop:peskunMH}) and that the latter may, in some cases, enjoy better convergence properties than the former, see Proposition \ref{prop1_3}, Figures \ref{fig:hypercube:2} ($p=0.1$), \ref{fig:hypercube:3}, \ref{fig:ex_7_cv} ($\theta=10$), \ref{fig:ex_8_cv} ($\lambda=10$). 
Our point is that for a specific class of probability distributions that we refer to as sparse and filamentary, locally informed algorithms lead to Markov chains that converge faster to their stationary distribution and achieve a substantial auto-correlation reduction compared to their non locally informed counterpart. Even though at this stage, most of our conclusions are based on empirical observations, we believe that this research opens up a number of questions that may interest the Bayesian, machine learning and applied probability communities, among others. We conclude this paper by presenting some of them.
\paragraph{Practical questions}
While the purpose of this paper was essentially to expose some theoretical and empirical observations related to locally informed MCMC, we acknowledge that most of our examples assume that a significant amount of information on $\pi$ is known \textit{a priori}. In real-life problems, it is unreasonable to take that knowledge for granted when implementing either Algorithm \ref{alg1} or \ref{alg2}, and this leads to the following questions:
\begin{itemize}
\item \textit{Design of $\mathfrak{P}$}. We have assumed that a collection of $\pi$-reversible kernels was already made available, \textit{ex nihilo}. In practice, one first needs to design $\mathfrak{P}$ in order to apply the RSGS or a locally informed algorithm. An easy route consists in defining $\mathfrak{P}$ as a list of MH kernels with different proposals. Proposals may consist of local approximations (parametric or nonparametric) of $\pi$ (or of any full conditional distribution thereof), Gaussian random walk kernels with a collection of relevant covariance matrices (see \textit{e.g.\;} \cite{livingstone2015geometric}), etc. When the state space dimension is large, an idea is to apply a Principal Component Analysis algorithm to a dataset comprising realizations from $\pi$ (available for instance via a preliminary MH run), in order to identify relevant subspaces on which MH kernels would operate.
\item \textit{Specification of $\omega$}. When $\mathfrak{P}$ is a collection of MH kernels, it is possible to define a time inhomogeneous weight function $\omega_t:\mathsf{X}\to \Delta_n$. Assume that at each iteration, random particles $\tilde{X}_{i,1}^{(t)},\tilde{X}_{i,2}^{(t)},\ldots \sim_{\mathrm{iid}}Q_i(x_t,\,\cdot\,)$ are drawn from the proposal $Q_i$ (for $i\in\{1,\ldots,n\}$) and are then used to define the weight function as:
\begin{equation}
\label{eq:omega_def}
\omega_{t,i}(x_t)\equiv\omega_{t,i}\left(x_t;\tilde{X}_{i,1}^{(t)},\tilde{X}_{i,2}^{(t)},\ldots\right) \approx \mathbb{E}_{Q_i}(\pi(X)\,|\,x_t)\,.
\end{equation}
This design promotes kernels that attempt moves towards local but reachable higher density regions. Even though the locally informed Markov transition kernel is more complex to analyse when $\omega$ is defined as in Eq. \eqref{eq:omega_def}, $\omega$ can be designed such that the locally informed algorithm remains $\pi$-stationary. It is for instance the case when an auxiliary particle is defined as $X_i^{(t)}=x_t+\eta_{t,i}$, where $\{\eta_{t,i}\}_{t,i}$ are exogenous variates. In such a scenario, the resulting locally informed Markov chain can be cast and analysed in a time inhomogeneous framework where each transition is conditioned on the auxiliary particles $\{\eta_{t,i}\}_{t,i}$, see \textit{e.g.\;} \cite{douc2004quantitative} for more details. Figure \ref{fig:ex7:time_inhomogeneous} reports the convergence of Algorithm \ref{alg2} (in the context of Example \ref{ex7}) when $\omega$ is defined as in Eq. \eqref{eq:omega_def}. The convergence is slightly slower than when the function $\omega$ defined in Example \ref{ex7} (whose design required an extensive knowledge of $\pi$) is used, but this fully automated choice of $\omega$ remains very competitive, especially compared to the RSGS.
\end{itemize}
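The Monte Carlo approximation of Eq. \eqref{eq:omega_def} can be sketched as follows (Python; function and argument names are ours, and $\pi$ is assumed computable up to a normalizing constant through \texttt{log\_pi}):

```python
import numpy as np

def informed_weights(x, log_pi, proposal_samplers, L=100, seed=None):
    """Monte Carlo estimate of the weights of Eq. (eq:omega_def): weight i
    is proportional to the average target density of L auxiliary particles
    drawn from the i-th proposal Q_i(x, .)."""
    rng = np.random.default_rng(seed)
    w = np.array([np.mean([np.exp(log_pi(q(x, rng))) for _ in range(L)])
                  for q in proposal_samplers])
    return w / w.sum()     # project onto the simplex Delta_n
```

Proposals whose particles land in high density regions receive larger weights; as discussed above, such a data-driven $\omega$ must be handled within the time inhomogeneous framework to preserve $\pi$-stationarity.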
\begin{figure}
\centering
\includegraphics[scale=0.8]{./figure/ex_cross/CV_omega_emp-eps-converted-to.pdf}
\caption{(Example \ref{ex7}): same experiment as reported in Figure \ref{fig:ex_7_cv} with in addition the locally informed algorithm (Alg. \ref{alg2}) implemented with $\omega$ defined as $\omega_{i,t}(x):=(1/L)\sum_{\ell=1}^L\pi_\theta(\tilde{X}_{i,\ell}^{(t)})$ and $\tilde{X}_{i,\ell}^{(t)}\sim_\mathrm{iid} Q_i(x,\,\cdot\,)$ and $L=100$, \textit{i.e.\;} in the time inhomogeneous framework.\label{fig:ex7:time_inhomogeneous}}
\end{figure}
\newpage
\paragraph{Theoretical considerations}
\begin{itemize}
\item \textit{Convergence on complementary subsets}. A particularly relevant question is to understand how the spectral analysis of Example \ref{ex1} can be extended to situations where $\pi(\mathsf{X}\backslash \mathsf{Z})>0$, for instance when $\pi$ is the mixture of two uniform distributions on $\mathsf{Z}$ and $\mathsf{X}\backslash \mathsf{Z}$ with mixing probability $1-p$ and $p$ respectively (\textit{i.e.\;} Example \ref{ex2}). Deriving such a result appears technically significantly more challenging since the simpler representation of the RSGS and the locally informed Markov chain on which the proofs of Propositions \ref{prop:hypercube:1} and \ref{prop:hypercube:2} are based is no longer available. However, we mention the two following observations:
\begin{enumerate}[(i)]
\item When considering the restriction of the two algorithms to $\mathsf{Z}$, the locally informed MCMC converges faster than the RSGS by a factor of order $d/2$ (see Example \ref{ex1}).
\item When considering their restriction to $\mathsf{X}\backslash \mathsf{Z}$, the locally informed algorithm and the RSGS are identical and thus converge at the same speed.
\end{enumerate}
It may appear paradoxical that, when considering the restrictions of the two Markov chains to two complementary subsets of $\mathsf{X}$, the locally informed algorithm is as fast as or faster than the RSGS, while the results of Example \ref{ex2} show that the RSGS in fact converges faster than the locally informed algorithm when both Markov chains are studied on $\mathsf{X}$. Of course, a notable difference between the two algorithms is the frequency at which the chains switch between $\mathsf{Z}$ and $\mathsf{X}\backslash \mathsf{Z}$. In the case of Example \ref{ex2}, it can readily be checked that $\Pr_{LI}(X_0\in\mathsf{Z}\to X_1\in \mathsf{X}\backslash \mathsf{Z})=p\Pr_{RS}(X_0\in\mathsf{Z}\to X_1\in \mathsf{X}\backslash \mathsf{Z})$ and $\Pr_{LI}(X_0\in\mathsf{X}\backslash \mathsf{Z}\to X_1\in \mathsf{Z})=p\Pr_{RS}(X_0\in\mathsf{X}\backslash \mathsf{Z}\to X_1\in \mathsf{Z})$, where $\Pr_{RS}$ and $\Pr_{LI}$ are the probability distributions generated by the RSGS and the locally informed algorithm respectively. Hence the locally informed algorithm is, by construction, more reluctant than the RSGS to jump on and off the filamentary region $\mathsf{Z}$. The question to address is to understand how transitions between $\mathsf{X}\backslash \mathsf{Z}$ and $\mathsf{Z}$ act as a bottleneck for the locally informed algorithm, eventually slowing down its global convergence on $\mathsf{X}$ compared to the RSGS. This point is illustrated in the context of Example \ref{ex5} by the following animation, available at \href{https://maths.ucd.ie/~fmaire/MV18/movie_CV_ex4.gif}{\texttt{http://maths.ucd.ie/$\sim$fmaire/MV18/ex4\_CV.gif}}. It shows that, when initiated with a distribution $\mu_0$ whose mass is concentrated outside the filament, Algorithm \ref{alg1} takes more time to jump on the filament while Algorithm \ref{alg2} and the RSGS exhibit similar speeds of convergence. 
Situations where a Markov process reaches equilibrium very quickly on two complementary subsets but very slowly globally have been studied in depth in chemical physics, especially in the context of protein dynamics. Protein dynamics are usually modeled as a Markov process with essentially two macro-states, corresponding to the folded and unfolded conformations of the protein. In particular, those systems are characterized by a large gap between their second and third eigenvalues, see \citet{berezhkovskii2005one,buchete2008coarse}. By analogy, we conjecture that the spectrum of the locally informed Markov chain targeting a sparse and filamentary distribution in the presence of noise typically features a first spectral gap of limited amplitude compared to its second spectral gap. It remains to be seen how the spectrum of the RSGS in the same situation is shifted compared to the noise-free case.
\item \textit{Mixing strategies?} Based on the previous observation, assessing the convergence speed of the locally informed algorithm can be analysed by considering a strategy that would mix a locally informed MCMC kernel (Algorithm \ref{alg1} or \ref{alg2}) with an uninformed strategy (\textit{e.g.\;} the RSGS). More formally, considering a collection of kernels $\mathfrak{P}$, a locally informed Markov kernel $P^\ast_\omega$ for some function $\omega:\mathsf{X}\to\Delta_n$ (or $\bar{P}_\omega$ if $\mathfrak{P}$ comprises only MH Markov kernels) and an uninformed Markov kernel $P_{{\omega^{\text{c}}}}$ for some vector ${\omega^{\text{c}}}\in\Delta_n$, define the mixed strategy
$$
P_{\omega,{\omega^{\text{c}}}}^{(\varpi)}:=\varpi P^\ast_\omega+(1-\varpi)P_{{\omega^{\text{c}}}}\,,\qquad \varpi\in(0,1)\,,
$$
that moves according to the locally informed algorithm w.p. $\varpi$ and the uninformed algorithm w.p. $1-\varpi$. In this framework, the locally informed algorithm corresponds to $P_{\omega,{\omega^{\text{c}}}}^{(1)}$ and the uninformed algorithm to $P_{\omega,{\omega^{\text{c}}}}^{(0)}$. A variational analysis of the Markov kernel $P_{\omega,{\omega^{\text{c}}}}^{(\varpi)}$ (seen as a function of $\varpi$) could reveal the existence of an optimal mixing parameter $\varpi^\ast$, in the sense of minimizing the mixing time or the Markov chain autocorrelation. Our work suggests that, when $\pi$ is \textit{purely} filamentary and sparse (see \textit{e.g.\;} the model of Eq. \eqref{eq:def_fil} with $\zeta=0$ almost surely), $\varpi^\ast=1$, while for situations where $\pi$ deviates from the filamentary and sparse framework, $\varpi^\ast<1$. Our intuition is that, for a number of sparse and filamentary distributions, the mixed strategy $P_{\omega,{\omega^{\text{c}}}}^{(\varpi)}$ implemented with a large parameter $\varpi<1$ will inherit the best of both worlds: fast convergence on the filament while overcoming the topological bottleneck at the boundary between $\mathsf{Z}$ and $\mathsf{X}\backslash \mathsf{Z}$.
\end{itemize}
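As a toy numerical illustration of this mixing idea (a Python sketch, not part of the formal analysis), one can interpolate between the two three-state kernels computed in the proof of Proposition \ref{prop1_3} (regime $p\leq 1/3$) and track the spectral gap of the mixture as $\varpi$ varies; the endpoints recover the gaps of the uninformed and locally informed kernels.

```python
import numpy as np

def spectral_gap(P):
    # gamma(P) = 1 - sup{|lam| : lam in Sp(P) \ {1}}
    mods = np.sort(np.abs(np.linalg.eigvals(P)))
    return 1.0 - mods[-2]

p = 0.2  # regime p <= 1/3 of the three-state example
# Uninformed kernel P and locally informed kernel P* from the proof below.
P = np.array([[1 - 3*p, 1 - p,   2*p],
              [1 - p,   1 - 3*p, 2*p],
              [1 - p,   1 - p,   0.0]]) / (2*(1 - p))
Pli = np.array([[2*p*(1 - 3*p),   2*(1 - p)**2,    2*p*(1 + p)],
                [2*(1 - p)**2,    2*p*(1 - 3*p),   2*p*(1 + p)],
                [(1 - p)*(1 + p), (1 - p)*(1 + p), 0.0]]) / (2*(1 - p)*(1 + p))

for w in np.linspace(0.0, 1.0, 6):
    Pmix = w*Pli + (1 - w)*P          # the mixed kernel P^{(varpi)}
    print(round(w, 1), spectral_gap(Pmix))
```

Since both kernels are reversible with respect to the same target, the mixture is reversible as well and its spectral gap is well defined for every $\varpi$.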
\section{Proofs}
\subsection{Proof of Proposition \ref{prop:hypercube:1}}
\label{proof1}
\begin{proof}
We first recall some basic notions related to the coupling of discrete Markov chains. Let $\pi$ be a distribution on $(\mathsf{X},\mathcal{X})$ and consider two $\pi$-invariant Markov chains $\{X_t\}:=\{X_t,\,t\in\mathbb{N}\}$ and $\{X'_t\}:=\{X'_t,\,t\in\mathbb{N}\}$ with the same transition matrix $P$. A joint process $\{\Gamma_t\}:=\{(X_t,X'_t)\}$ defined on $(\mathsf{X}\times\mathsf{X},\mathcal{X}\otimes\mathcal{X},\mathbb{P})$ is referred to as a coupling of $\{X_t\}$ and $\{X'_t\}$ if $\{\Gamma_t\}$ admits $\{X_t\}$ and $\{X'_t\}$ as marginal processes. Defining the coupling time $\tau(\Gamma)$ as
$$
\tau(\Gamma):=\inf\left\{t\in\mathbb{N}\,:\;X_t=X'_t\right\}\,,
$$
a useful property of coupled Markov chains, known as the coupling inequality, states that:
\begin{equation}
\label{eq:coupl}
\|P^t(x,\,\cdot\,)-P^t(y,\,\cdot\,)\|\leq \mathbb{P}_{x,y}\{\tau>t\}\,,
\end{equation}
where $\mathbb{P}_{x,y}$ is the probability distribution generated by the simulation of the coupled Markov chain $\{\Gamma_t\}=\{X_t,X'_t\}$ started at $\Gamma_0=(x,y)$. In Eq. \eqref{eq:coupl}, we have used the shorthand notation $\tau$ for $\tau(\Gamma)$, noting however that a coupling time is relative to a specific coupling. Since we have
\begin{equation}
\label{eq:coupl2}
\sup_{x\in\mathsf{X}}\|P^t(x,\,\cdot)-\pi\|\leq \sup_{(x,y)\in\mathsf{X}^2}\|P^t(x,\,\cdot)-P^t(y,\,\cdot)\|\,,
\end{equation}
combining Eqs. \eqref{eq:coupl} and \eqref{eq:coupl2} shows that the coupling time distribution characterizes the convergence of the Markov chain. In particular, using Markov's inequality, we have
$$
\sup_{x\in\mathsf{X}}\|P^t(x,\,\cdot)-\pi\|\leq \frac{1}{t}\sup_{(x,y)\in\mathsf{X}^2}\mathbb{E}_{x,y}(\tau)\,,
$$
where $\mathbb{E}_{x,y}$ is the expectation under $\mathbb{P}_{x,y}$.
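The coupling inequality can be checked exactly on a small example. In the Python sketch below (an arbitrary two-state kernel, chosen only for illustration), the joint chain follows the independent Markovian coupling with an absorbing diagonal, and both sides of Eq. \eqref{eq:coupl} are computed in closed form at every step.

```python
import numpy as np

# An arbitrary two-state kernel; x = 0 and y = 1 are the starting points.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Independent Markovian coupling with the diagonal {(0,0),(1,1)} made
# absorbing; restricted to the off-diagonal states (0,1) and (1,0).
S = np.array([[P[0, 0]*P[1, 1], P[0, 1]*P[1, 0]],
              [P[1, 0]*P[0, 1], P[1, 1]*P[0, 0]]])

Pt = np.eye(2)
off = np.array([1.0, 0.0])        # joint chain started at (x, y) = (0, 1)
for t in range(1, 11):
    Pt = Pt @ P
    off = off @ S
    tv = 0.5*np.abs(Pt[0] - Pt[1]).sum()   # ||P^t(x,.) - P^t(y,.)||
    surv = off.sum()                        # P_{x,y}(tau > t)
    assert tv <= surv + 1e-12               # the coupling inequality
print(tv, surv)
```

For this kernel the two sides decay geometrically at rates $0.7$ and $0.74$ respectively, so the bound holds at every step with a widening margin.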
In this proof, for any quantity $\alpha$ relative to the random-scan Gibbs sampler (RSGS), the equivalent quantity related to the locally informed algorithm (Alg. \ref{alg1}) will be denoted as $\alpha^\ast$. In particular, let $\mathbb{P}^\ast$ be the probability distribution generated by Algorithm \ref{alg1} and $\mathbb{E}^\ast$ be the expectation operator under $\mathbb{P}^\ast$. Our proof shows that $\mathbb{E}_{x,y}^\ast(\tau)=({d}/{2})\mathbb{E}_{x,y}(\tau)$.
Without loss of generality, we order $\mathsf{X}$ such that the states $\{x_1,\ldots,x_{1+d(n-1)}\}$ correspond to the filament (\textit{i.e.\;} $\mathsf{Z}$). We notice that the transition matrices $M$ and $M^\ast$ corresponding respectively to the RSGS and the locally informed sampler (Alg. \ref{alg1}) satisfy in this case:
\begin{equation}
M=
\begin{bmatrix}
P & 0 \\
A & B
\end{bmatrix}
\qquad \text{and}\qquad M^\ast=
\begin{bmatrix}
P^\ast & 0 \\
A^\ast & B^\ast
\end{bmatrix}\,,
\end{equation}
and clearly $\mathsf{Z}$ is an absorbing set. Assuming that both Markov chains start in $\mathsf{Z}$, it is thus sufficient to analyse only the transition matrices $P$ and $P^\ast$, which are essentially the restrictions of the Markov chains to $\mathsf{Z}$. Let $\{X_t\}$ and $\{X^\ast_t\}$ be the two Markov chains generated by $P$ and $P^\ast$ respectively.
The first step of the proof consists in projecting the Markov chains $\{X_t\}$ and $\{X^\ast_t\}$ onto a smaller state space by lumping some states of $\mathsf{Z}$ together. Let us write $\mathsf{Z}$ as $\mathsf{Z}=\{\mathcal{V}_1,\mathcal{E}_1,\mathcal{V}_2,\mathcal{E}_2,\ldots,\mathcal{E}_d,\mathcal{V}_{d+1}\}$ where $\mathcal{V}_k$ and $\mathcal{E}_k$ are respectively the $k$-th vertex and the $k$-th edge of the hypercube that belong to $\mathsf{Z}$, with $\mathcal{V}_k\cap\mathcal{E}_k=\emptyset$. The folded representation of the Markov chain $\{X_t\}$ with transition kernel $P$ is the discrete time process $\{Y_t\}$ defined on $\mathsf{Y}=\{1,\ldots,2d+1\}$ as follows: if there is $k\in\{1,\ldots,d+1\}$ such that $X_t=\mathcal{V}_k$, set $Y_t=2k-1$, while if there is $k\in\{1,\ldots,d\}$ such that $X_t\in\mathcal{E}_k$, set $Y_t=2k$. In other words, $\{Y_t\}$ inherits the vertices from $\{X_t\}$ but aggregates into a unique state the states lying between two consecutive vertices. The same mapping allows us to define $\{Y_t^\ast\}$ as the \textit{folded} version of the locally informed Markov chain $\{X^\ast_t\}$. An illustration of the folded Markov chains $\{Y_t\}$ and $\{Y_t^\ast\}$ is given in Figure \ref{fig:representation}, in the case where $d=3$. In the following, we denote by $Q$ (resp. $Q^\ast$) the transition matrix of $\{Y_t\}$ (resp. $\{Y_t^\ast\}$).
\begin{figure}
\centering
\fbox{
\begin{tikzpicture}[,->,>=stealth',shorten >=1pt,auto,node distance=2.2cm,
thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below right of=1] {2};
\node[main node] (3) [above right of=2] {3};
\node[main node] (4) [below right of=3] {4};
\node[main node] (5) [above right of=4] {5};
\node[main node] (6) [below right of=5] {6};
\node[main node] (7) [above right of=6] {7};
\path[every node/.style={font=\sffamily\small}]
(1) edge [bend left] node {$\alpha$} (3)
edge [bend left] node[below] {$\bar{\beta}$} (2)
edge [loop left] node {} (1)
(2) edge [bend right] node[right] {$\alpha$} (3)
edge [bend left] node[below] {$\bar{\alpha}$} (1)
edge [loop below] node {} (2)
(3) edge [bend right] node [below] {$\beta$} (2)
edge node [below] {$\alpha$} (1)
edge [bend left] node[below] {$\beta$} (4)
edge [bend left] node[above] {$\alpha$} (5)
edge [loop above] node {} (3)
(4) edge [bend left] node {$\alpha$} (3)
edge [bend right] node [right] {$\alpha$} (5)
edge [loop below] node {} (4)
(5) edge [loop above] node {} (5)
edge node [below] {$\alpha$} (3)
edge [bend left] node[below] {$\beta$} (6)
edge [bend left] node[above] {$\alpha$} (7)
edge [loop above] node {} (5)
edge [bend right] node [below] {$\beta$} (4)
(6) edge [bend left] node {$\alpha$} (5)
edge [bend right] node [right] {$\bar{\alpha}$} (7)
edge [loop below] node {} (6)
(7) edge node {$\alpha$} (5)
edge [bend right] node[below] {$\bar{\beta}$} (6)
edge [loop right] node {} (7);
\end{tikzpicture}}
\caption{Projection on the folded space of the RSGS and locally informed Markov chains sampling from $\pi$, in the case where $d=3$. The odd states correspond to vertices and the even ones to the aggregated states between two vertices. For the RSGS, the transition probabilities of the \textit{folded} Markov chain $\{Y_t\}$ are $\alpha=\bar{\alpha}=1/dn$ and $\beta=\bar{\beta}=\{1-\frac{2}{n}\}/d$. For the locally informed algorithm, the transition probabilities of $\{Y^\ast_t\}$ are $\alpha^\ast=1/2n$, $\bar\alpha^\ast=1/n$, $\beta^\ast=(n-2)/2n$ and $\bar\beta^\ast=(n-2)/n$. For each state, the self loop indicates the probability of staying put, which equals one minus the sum of the outward probabilities. \label{fig:representation}}
\end{figure}
The second step is to define a coupling for the two folded Markov chains $\{Y_t\}$ and $\{Y^\ast_t\}$. For simplicity, we only present the coupling for $\{Y_t\}$, but the same approach is used for $\{Y^\ast_t\}$. Since there is an order on $\mathsf{Y}$, we consider the reflection coupling presented in Algorithm \ref{alg:coupling}, which exploits the symmetry of the Markov chain. Clearly, since $U\sim\mathrm{unif}(0,1)$ implies that $1-U\sim\mathrm{unif}(0,1)$, the marginal chains satisfy $Y_t\sim Q^t(Y_0,\,\cdot\,)$ and $Y_t'\sim Q^t(Y_0',\,\cdot\,)$, and the jointly defined discrete time process $\{(Y_t,Y_t')\}$ is a coupling of $\{Y_t\}$ and $\{Y_t'\}$. The coupling introduced in Algorithm \ref{alg:coupling} allows us to derive the expected coupling time, \textit{i.e.\;} the time at which the two Markov chains $\{Y_t\}$ and $\{Y_t'\}$ coalesce. By symmetry, the Markov chains necessarily coalesce when $Y_\tau=Y_\tau'=d+1$. Therefore, denoting by $\mathbb{E}_0^\diamond$ the expectation under the coupling $\{(Y_t,Y_t')\}_t$ on $(\mathsf{Y}\times\mathsf{Y},\mathcal{Y}\otimes\mathcal{Y})$ started at $Y_0=1$ and $Y_0'=2d+1$, we have
$$
\mathbb{E}_{0}^\diamond(\tau)=\mathbb{E}_1^\diamond(T_{d+1})=\mathbb{E}_{2d+1}^\diamond(T_{d+1})\,,
$$
where for any $k\in\mathsf{Y}$, $T_k:=\text{inf}\{t>0,\;Y_t=k\}$ and $\mathbb{E}_k^\diamond$ denotes the expectation of the marginal Markov chain $\{Y_t\}$ started at $Y_0=k$. The same coupling for the locally informed Markov chain yields $\mathbb{E}_{0}^{\ast\diamond}(\tau)=\mathbb{E}_1^{\ast\diamond}(T_{d+1})$. Central to this proof is the fact that a reflection coupling similar to Algorithm \ref{alg:coupling} exists for the Markov chains $\{X_t\}$ and $\{X^\ast_t\}$ and since
the average time to reach the middle of the filament $\mathsf{Z}$ when starting from one end is the same regardless of whether the space is folded or not, we have
\begin{equation}
\label{eq:coupling_folded}
\mathbb{E}_1^\diamond(T_{d+1})=\mathbb{E}_1(T_{d+1})\,,
\end{equation}
which implies that $\mathbb{E}_{0}^\diamond(\tau)=\mathbb{E}_{0}(\tau)$. The same argument holds for the locally informed Markov chain $\{X^\ast_t\}$ and its folded version $\{Y^\ast_t\}$.
\begin{algorithm}
\caption{Reflection coupling on the hypercube}\label{alg:coupling}
\begin{algorithmic}[1]
\State Initialise the two Markov chains with $Y_0=1$ and $Y'_0=2d+1$
\State Set $t=0$, $Y=Y_0$ and $Y'=Y_0'$
\While{$Y_t\neq Y'_t$}
\State Draw $U\sim_\mathrm{iid}\mathrm{unif}(0,1)$ and set $U'=1-U$
\State Define $\eta=\{\sum_{i=1}^j Q(Y,i)\}_{j=1}^{2d}$ and $\eta'=\{\sum_{i=1}^j Q(Y',i)\}_{j=1}^{2d}$
\State Set $Y=1+\sum_{k=1}^{2d}\mathds{1}_{\eta_k<U}$ and $Y'=1+\sum_{k=1}^{2d}\mathds{1}_{\eta'_k<U'}$
\State Set $t=t+1$, $Y_t=Y$ and $Y_t'=Y'$
\EndWhile
\State Set $\tau=t$
\ForAll{$t=\tau+1,\tau+2,\ldots$}
\State Simulate $Y_t$ using the steps (4)--(7) with $Y=Y_t$
\State Set $Y_{t}'=Y_t$
\EndFor
\end{algorithmic}
\end{algorithm}
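The reflection coupling can also be simulated directly. The Python sketch below is one possible implementation, with the folded RSGS rates read off Figure \ref{fig:representation} and the antithetic uniforms $U'=1-U$; by symmetry of the folded chain the two copies remain mirror images of each other, so they coalesce exactly at the middle vertex $d+1$.

```python
import numpy as np

def folded_rsgs(d, n):
    # Folded RSGS kernel Q on {1,...,2d+1} (stored 0-indexed), with
    # alpha = 1/(d n) and beta = (1 - 2/n)/d as in the figure.
    a, b = 1.0/(d*n), (1.0 - 2.0/n)/d
    m = 2*d + 1
    Q = np.zeros((m, m))
    for y in range(m):
        if y % 2 == 0:                  # vertex (odd label 2k-1)
            for v in (y - 2, y + 2):    # adjacent vertices
                if 0 <= v < m: Q[y, v] = a
            for e in (y - 1, y + 1):    # adjacent aggregated edges
                if 0 <= e < m: Q[y, e] = b
        else:                           # aggregated edge (even label 2k)
            Q[y, y - 1] = Q[y, y + 1] = a
        Q[y, y] = 1.0 - Q[y].sum()      # self-loop
    return Q

def inv_cdf(row, u):
    return min(int(np.searchsorted(np.cumsum(row), u)), row.size - 1)

rng = np.random.default_rng(0)
d, n = 4, 6
Q = folded_rsgs(d, n)
y, y2, tau = 0, 2*d, 0                  # Y_0 = 1, Y'_0 = 2d+1 (0-indexed)
while y != y2:
    u = rng.uniform()
    y, y2 = inv_cdf(Q[y], u), inv_cdf(Q[y2], 1.0 - u)   # U' = 1 - U
    tau += 1
print(tau, y + 1)    # coalescence occurs at the middle vertex d+1
```

Sampling the second chain through the inverse CDF at $1-U$ preserves its marginal law, while the symmetry $Q(y,j)=Q(2d+2-y,2d+2-j)$ keeps the two chains mirrored until they meet.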
Working on the folded space makes the derivation of $\mathbb{E}_1^\diamond(T_{d+1})$ and $\mathbb{E}_1^{\ast\diamond}(T_{d+1})$ easier, and this is the last part of the proof. We take $d$ even so as to make the algebra more immediate. In this case, since $d+1$ is odd, the state $d+1$ corresponds to a vertex. A close examination of Figure \ref{fig:hypercube} shows that $\mathbb{E}_1^\diamond(T_{d+1})$ is the average time to absorption of a fictitious chain that retains only the first $d+1$ states, replacing the outward connections of state $d+1$ by a self loop with probability one. Denoting by $Q_{d+1}$ the transition matrix of this fictitious chain, by $Q_d$ the transition matrix restricted to the $d$ first (transient) states and by $I_d$ the $d$-dimensional identity matrix, the matrix $I_d-Q_d$ is invertible and its inverse, often known as the fundamental matrix of $Q_{d+1}$, contains the information related to the absorption time, see \textit{e.g.\;} Chapter 11 in \citet{grinstead2012introduction}. In particular, for any starting position $i\in\{1,\ldots,d\}$ of the chain, we have
$$
\mathbb{E}_i^\diamond(T_{d+1})=\{(I_d-Q_d)^{-1}1_d\}_i\,,
$$
where $1_d$ denotes the $d$-dimensional vector of ones. This implies that $\mathbb{E}_1^\diamond(T_{d+1})$ is simply the sum of the first row of $(I_d-Q_d)^{-1}$. It is possible to calculate analytically the fundamental matrix of each chain $Q_{d+1}$ and $Q^\ast_{d+1}$, and the proof follows from comparing the two first-row sums.
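As a toy illustration of this identity (a Python sketch on a small birth--death chain, not the chain of the proof), the expected absorption times follow from a single matrix inversion.

```python
import numpy as np

# Transient block Q3 of a birth-death chain on {1,2,3} absorbed in state 4:
# from 1: stay w.p. 1/2, move to 2 w.p. 1/2; from 2: move to 1 or 3 w.p. 1/2;
# from 3: move to 2 w.p. 1/2, get absorbed w.p. 1/2.
Q3 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.0, 0.5],
               [0.0, 0.5, 0.0]])
N = np.linalg.inv(np.eye(3) - Q3)   # fundamental matrix (I - Q_d)^{-1}
print(N @ np.ones(3))               # E_i(T_4) for i = 1, 2, 3: [12. 10. 6.]
```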
Using symbolic computation provided by Matlab, we found the following entries for the first row of the $d$-dimensional fundamental matrix of the RSGS
\begin{equation}
v_d=\frac{1}{\alpha(2\alpha+\beta)}\left(\beta\;,\; 2\alpha \;,\;3\beta \;,\; 4\alpha\;,\; \cdots\;,\; (d-1)\beta \;,\; d\alpha\right)\,,
\end{equation}
and
\begin{multline*}
v_d^\ast=\frac{1}{\alpha^\ast(2\alpha^\ast+\beta^\ast)}\big(\beta^\ast\;,\; 2\alpha^\ast\;,\; \cdots\\
\quad (d-3)\beta^\ast \;,\; (d-2)\alpha^\ast\;,\; \alpha^\ast(2\alpha^\ast+\beta^\ast)\phi_d\;,\;
\alpha^\ast(2\alpha^\ast+\beta^\ast)\psi_d \big)\,,
\end{multline*}
where
\begin{multline*}
\phi_d=\frac{2\beta^\ast}{\alpha^\ast\delta^\ast}\left\{\left(\frac{3d}{2}-1\right)\alpha^\ast+(d-1)\beta^\ast\right\}\,,
\qquad\psi_d=\frac{1}{\delta^\ast}\left\{3d\alpha^\ast+(2d-1)\beta^\ast\right\}\,,
\end{multline*}
and $\delta^\ast=6{\alpha^\ast}^2+7\alpha^\ast\beta^\ast+2{\beta^\ast}^2$.
Letting $d=2p$ and using the fact that
$$
\sum_{k=1}^{2p}k\mathds{1}_{\{k\,\text{is odd}\}}=p^2\,,\qquad \sum_{k=1}^{2p}k\mathds{1}_{\{k\,\text{is even}\}}=p(p+1)\,,
$$
the sum of $v_d$'s elements is
\begin{equation}
\label{eq:proof_ex_1}
\mathbb{E}_1^\diamond(T_{d+1})=\frac{1}{\alpha(2\alpha+\beta)}\left\{\beta p^2+\alpha p(p+1)\right\}=\frac{n-1}{4}d^3+\frac{1}{2}d^2\,,
\end{equation}
by definition of $\alpha$ and $\beta$. Using the same argument, the sum of $v^\ast_d$'s elements is
\begin{equation}
\label{eq:proof_ex_2}
\mathbb{E}_1^{\ast\diamond}(T_{d+1})=\frac{n-1}{2}d^2+(3-2n)d+2(n-2)+\phi_d+\psi_d\,.
\end{equation}
By straightforward algebra, we have
\begin{equation*}
\phi_d+\psi_d=2(n-1)d-2(n-2)\,,
\end{equation*}
which plugged into Eq. \eqref{eq:proof_ex_2} yields
\begin{equation}
\label{eq:proof_ex_3}
\mathbb{E}_1^{\ast\diamond}(T_{d+1})=\frac{n-1}{2}d^2+d\,.
\end{equation}
The proof is completed by comparing Eqs. \eqref{eq:proof_ex_1} and \eqref{eq:proof_ex_3} and using Eq. \eqref{eq:coupling_folded}.
\end{proof}
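The two closed forms in Eqs. \eqref{eq:proof_ex_1} and \eqref{eq:proof_ex_3} can be double-checked numerically, independently of the symbolic computation, by assembling the folded transition matrices from the rates of Figure \ref{fig:representation} (a Python sketch):

```python
import numpy as np

def folded_chain(d, n, informed):
    # Folded kernel on {1,...,2d+1} (0-indexed), with the rates of the
    # figure: RSGS if informed=False, locally informed chain otherwise.
    if informed:
        a, abar = 1/(2*n), 1/n
        b, bbar = (n - 2)/(2*n), (n - 2)/n
    else:
        a = abar = 1/(d*n)
        b = bbar = (1 - 2/n)/d
    m = 2*d + 1
    Q = np.zeros((m, m))
    for y in range(m):
        if y % 2 == 0:                       # vertex
            for v in (y - 2, y + 2):
                if 0 <= v < m: Q[y, v] = a   # vertex-to-vertex move
            for e in (y - 1, y + 1):
                if 0 <= e < m:
                    Q[y, e] = bbar if y in (0, m - 1) else b
        else:                                # aggregated edge
            Q[y, y - 1] = abar if y == 1 else a
            Q[y, y + 1] = abar if y == m - 2 else a
        Q[y, y] = 1.0 - Q[y].sum()           # self-loop
    return Q

def expected_hit(d, n, informed):
    # E_1(T_{d+1}): first-row sum of the fundamental matrix, the middle
    # vertex d+1 being absorbing (transient states 1,...,d).
    T = folded_chain(d, n, informed)[:d, :d]
    return np.linalg.inv(np.eye(d) - T).sum(axis=1)[0]

for d, n in [(2, 4), (4, 3), (4, 5), (6, 7)]:
    assert np.isclose(expected_hit(d, n, False), (n - 1)*d**3/4 + d**2/2)
    assert np.isclose(expected_hit(d, n, True), (n - 1)*d**2/2 + d)
print("closed forms verified")
```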
\subsection{Proof of Proposition \ref{prop:hypercube:2}}
\label{proof2}
\begin{proof}
We consider a version of the locally informed kernel delayed by a factor $\lambda\in(0,1)$:
\begin{equation}
P^\ast_\lambda=\lambda P^\ast+(1-\lambda)\mathrm{Id}\,.
\end{equation}
This proof shows that the convergence speed of the RSGS is similar to that of the locally informed algorithm delayed by a factor $\lambda=2/d$. The speed of convergence of $\pi$-reversible Markov kernels can be assessed by studying their spectral properties. Indeed, defining the spectral gap of a Markov kernel $P$ as
$$
\gamma(P):=1-\sup\left\{|\lambda|,\;\lambda\in\text{Sp}(P)\backslash\{1\}\right\}\,,
$$
where $\text{Sp}(P)$ is the spectrum of $P$, Proposition 2 from \cite{rosenthal2003asymptotic} states that
\begin{equation}
\label{eq:spe}
\sup_{\mu\in\mathfrak{M}_1(\mathsf{Z})}\lim_{t\to\infty}\frac{1}{t}\log\|\mu P^t-\pi\|=\log(1-\gamma(P))\,.
\end{equation}
We recall that since $P$ is a Markov operator, $\text{Sp}(P)\subset[-1,1]$. Hence, the larger the spectral gap $(\gamma(P)\nearrow 1)$, the faster the convergence. Getting the analytical expressions of $\gamma(P)$ and $\gamma(P_\lambda^\ast)$ is challenging. Instead of calculating the eigenvalues of the transition matrices $P$ and $P_\lambda^\ast$ directly, we resort to the folded versions of those Markov chains, in the same spirit as the proof of Proposition \ref{prop:hypercube:1}. Indeed, the resulting transition matrices on the folded space $\mathsf{Y}=\{1,2,\ldots,2d+1\}$ are pentadiagonal, which facilitates the derivation of their spectra.
We define the operators $\Gamma$ and $\Omega$ that map $\mathsf{Z}$ to $\mathsf{Y}$ and $\mathsf{Y}$ to $\mathsf{Z}$, respectively. Using the notation $\mathsf{Z}=\{\mathcal{V}_1,\mathcal{E}_1,\ldots,\mathcal{V}_{d+1}\}$ defined in the proof of Proposition \ref{prop:hypercube:1}, $\Gamma$ maps a state $x\in\mathsf{Z}$ to a state $y\in\mathsf{Y}$ as follows:
\begin{itemize}
\item If there exists $k\in\mathbb{N}$, such that $x=\mathcal{V}_k$, set $y=2(k-1)+1$.
\item If there exists $k\in\mathbb{N}$, such that $x\in\mathcal{E}_k$, set $y=2k$.
\end{itemize}
The operator $\Omega$ maps a state $y\in\mathsf{Y}$ to a state $x\in\mathsf{Z}$ as follows:
\begin{itemize}
\item If there exists $k\in\mathbb{N}$, such that $y=2k+1$, set $x=\mathcal{V}_{k+1}$.
\item If there exists $k\in\mathbb{N}$, such that $y=2k$, pick $x$ uniformly at random in $\mathcal{E}_k$.
\end{itemize}
Hence, contrary to $\Gamma$, $\Omega$ is a stochastic operator. More precisely, $\Omega$ and $\Gamma$ can be represented as matrices $\Omega\in\mathcal{M}_{d(n-1)+1,2d+1}((0,1))$ and $\Gamma\in\mathcal{M}_{2d+1,d(n-1)+1}((0,1))$, whose construction is detailed in Algorithm \ref{alg:matrices}.
\begin{algorithm}
\caption{Construction of the mapping matrices}\label{alg:matrices}
\begin{algorithmic}[1]
\State set $\Omega_{1,\cdot}=\{\delta_{1,j}\}_{j\leq 2d+1}$ and $\Gamma_{\cdot,1}=\{\delta_{1,j}\}_{j\leq 2d+1}$
\State $k\gets 2$
\ForAll{$i=2,\ldots,d(n-1)+1$}
\If{there exists $\ell\geq 0$ \,s.t. $i=\ell(n-1)+1$}
\State set $k\gets k+1$
\State set $\Omega_{i,\cdot}=\{\delta_{k,j}\}_{j\leq 2d+1}$ and $\Gamma_{\cdot,i}=\{\delta_{k,j}\}_{j\leq 2d+1}$
\State set $k\gets k+1$
\Else
\State set $\Omega_{i,\cdot}=\{\delta_{k,j}\}_{j\leq 2d+1}$ and $\Gamma_{\cdot,i}=(1/(n-2))\{\delta_{k,j}\}_{j\leq 2d+1}$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
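A direct (0-indexed) Python transcription of Algorithm \ref{alg:matrices}, together with a check of the identity $\Gamma\Omega=I_{2d+1}$ used later in the proof of Lemma \ref{lem:0}:

```python
import numpy as np

def mapping_matrices(d, n):
    # Omega is (d(n-1)+1) x (2d+1) and Gamma is (2d+1) x (d(n-1)+1);
    # vertices of Z sit at the unfolded indices i with i % (n-1) == 0.
    N, m = d*(n - 1) + 1, 2*d + 1
    Omega, Gamma = np.zeros((N, m)), np.zeros((m, N))
    Omega[0, 0] = Gamma[0, 0] = 1.0
    k = 1
    for i in range(1, N):
        if i % (n - 1) == 0:            # vertex state
            k += 1
            Omega[i, k] = Gamma[k, i] = 1.0
            k += 1
        else:                           # edge state: fold deterministically,
            Omega[i, k] = 1.0           # unfold uniformly over n-2 points
            Gamma[k, i] = 1.0/(n - 2)
    return Omega, Gamma

d, n = 3, 5
Omega, Gamma = mapping_matrices(d, n)
assert np.allclose(Gamma @ Omega, np.eye(2*d + 1))   # Gamma is a left inverse
assert np.allclose(Omega.sum(axis=1), 1.0)           # both are row-stochastic
assert np.allclose(Gamma.sum(axis=1), 1.0)
print("Gamma Omega = I")
```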
To circumvent calculating the eigenvalues of $P$ and $P_\lambda^\ast$, a natural idea is to look at the spectra of their equivalent transition kernels on the folded space $\mathsf{Y}$, defined as
\begin{equation}
Q:=\Gamma P \Omega\,\quad \text{and}\quad Q^\ast_\lambda:=\Gamma P^\ast_\lambda \Omega
\end{equation}
and illustrated in Figure \ref{fig:representation} (in the case $\lambda=1$). Unfortunately, those folded Markov chains cannot be used directly since $\text{Sp}(Q)\neq \text{Sp}(P)$ and $\text{Sp}(Q^\ast_\lambda)\neq \text{Sp}(P^\ast_\lambda)$. Indeed, it can be readily checked that
\begin{equation}
\text{Tr}(P)=1+(n-1)(d-1)\neq 2d-1=\text{Tr}(Q)
\end{equation}
and thus, should $\gamma(Q)$ and $\gamma(Q_\lambda^\ast)$ be analytically tractable, one could not appeal to Eq. \eqref{eq:spe} to conclude the proof.
The trick is to consider the unfolded kernels stemming from $Q$ and $Q_\lambda^\ast$ and defined as
\begin{equation}
\label{eq:unfolded}
\bar{P}:=\Omega Q \Gamma\,\quad \text{and}\quad \bar{P}^\ast_\lambda:=\Omega Q^\ast_\lambda \Gamma\,.
\end{equation}
Intuitively, while the dynamics of $P$ is fundamentally on $\mathsf{Z}$, $\bar{P}$ generates a process which fundamentally operates on $\mathsf{Y}$ (via $Q$) and which is then projected back onto $\mathsf{Z}$. It can be readily checked that $P\neq \bar{P}$ and $P_\lambda^\ast\neq \bar{P}_\lambda^\ast$. In particular, for any $(i,j)\in\mathcal{E}_k^2$ such that $i\neq j$, $P(i,i)\neq P(i,j)$ while $\bar{P}(i,i)=\bar{P}(i,j)$. The same point can be made about $P_\lambda^\ast$ and $\bar{P}_\lambda^\ast$. Nevertheless, $\bar{P}$ and $\bar{P}_\lambda^\ast$ are still useful for our analysis. Remarkably, Lemma \ref{lem:2} shows that for any $t>0$ and any starting point $x$ in the set of vertices, we have
\begin{equation}
\label{eq:equiv_cv}
\|\delta_xP^t-\pi\|=\|\delta_x\bar{P}^t-\pi\|\quad \text{and}\quad
\|\delta_x{P_\lambda^\ast}^t-\pi\|=\|\delta_x\bar{P}_\lambda^{\ast\,t}-\pi\|\,.
\end{equation}
As a consequence, when assessing the efficiency of $P$ one can equivalently study $\bar{P}$ and similarly for $P^\ast$ with $\bar{P}_\lambda^\ast$. It can be checked that $\bar{P}$ and $\bar{P}_\lambda^\ast$ are symmetric and since $\pi$ is the uniform distribution on $\mathsf{Z}$, both Markov kernels are thus $\pi$-reversible. Hence, combining Eq. \eqref{eq:equiv_cv} and Proposition 2 from \cite{rosenthal2003asymptotic} applied to $\bar{P}$ and $\bar{P}^\ast_\lambda$ shows that the relative speed of convergence of the RSGS and the delayed locally informed MCMC can be assessed by comparing $\text{Sp}(\bar{P})$ and $\text{Sp}(\bar{P}^\ast_\lambda)$. Lemma \ref{lem:0} proves that $\gamma(\bar{P})=\gamma(Q)$ and $\gamma(\bar{P}_\lambda^\ast)=\gamma(Q_\lambda^\ast)$. Lemma \ref{lem:1} completes the proof by showing that $\gamma(Q)=\gamma(Q^\ast_{2/d})$.
\end{proof}
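The key algebraic fact behind Lemma \ref{lem:0}, namely that $\Gamma\Omega=I_{2d+1}$ forces $\text{Sp}(\Omega Q\Gamma)$ to equal $\text{Sp}(Q)$ together with a zero eigenvalue of multiplicity $d(n-3)$, holds for any kernel $Q$ on the folded space. The Python sketch below checks it on an arbitrary stochastic matrix:

```python
import numpy as np

def mapping_matrices(d, n):
    # 0-indexed construction of Omega and Gamma (Algorithm alg:matrices).
    N, m = d*(n - 1) + 1, 2*d + 1
    Omega, Gamma = np.zeros((N, m)), np.zeros((m, N))
    Omega[0, 0] = Gamma[0, 0] = 1.0
    k = 1
    for i in range(1, N):
        if i % (n - 1) == 0:            # vertex state
            k += 1
            Omega[i, k] = Gamma[k, i] = 1.0
            k += 1
        else:                           # edge state
            Omega[i, k] = 1.0
            Gamma[k, i] = 1.0/(n - 2)
    return Omega, Gamma

d, n = 3, 5
Omega, Gamma = mapping_matrices(d, n)
rng = np.random.default_rng(1)
m = 2*d + 1
Q = rng.uniform(size=(m, m))
Q /= Q.sum(axis=1, keepdims=True)          # arbitrary stochastic kernel on Y
Pbar = Omega @ Q @ Gamma                   # unfolded kernel on Z

mods_Q = np.sort(np.abs(np.linalg.eigvals(Q)))
mods_P = np.sort(np.abs(np.linalg.eigvals(Pbar)))
zeros = d*(n - 1) + 1 - m                  # = d(n-3) extra zero eigenvalues
assert np.allclose(mods_P[:zeros], 0.0, atol=1e-8)
assert np.allclose(mods_P[zeros:], mods_Q)
print("Sp(Pbar) = Sp(Q) together with", zeros, "zeros")
```

In particular the non-unit eigenvalue of largest modulus is the same for $Q$ and $\Omega Q\Gamma$, so the two kernels share the same spectral gap.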
\subsection{Proof of Proposition \ref{prop1_3}}
\label{proof3}
\begin{proof}
In the context of Proposition \ref{prop1_3}, let $P$ be the transition matrix associated with the uninformed strategy. The matrix $P$ can be seen as a plain Metropolis--Hastings kernel with proposal $Q(i,j):=(1/2)\mathds{1}_{i\neq j}$ and acceptance probability $\alpha(i,j)=1\wedge \pi(j)/\pi(i)$, which guarantees that the Markov chain is $\pi$-reversible. By straightforward algebra, we have:
\begin{multline*}
P=\frac{\mathds{1}_{\{p\leq 1/3\}}}{2(1-p)}\left(
\begin{array}{ccc}
1-3p & 1-p & 2p\\
1-p &1-3p & 2p\\
1-p & 1-p & 0
\end{array}
\right)
\\
+
\frac{\mathds{1}_{\{p> 1/3\}}}{4p}\left(
\begin{array}{ccc}
0 & 2p & 2p\\
2p &0 & 2p\\
1-p & 1-p & 2(3p-1)
\end{array}
\right)\,.
\end{multline*}
Note that $\lambda_0=1\in\text{Sp}(P)$ since by construction $P$ admits a stationary distribution. The general method to derive the two other eigenvalues $(\lambda_1,\lambda_2)\in\text{Sp}(P)$ (we use the convention $\lambda_1\geq \lambda_2$) involves calculating the trace and the determinant of $P$. We note that
\begin{equation}
\left\{
\begin{array}{l}
1+\lambda_1+\lambda_2=\text{tr}(P)\\
\lambda_1\lambda_2=\text{det}(P)
\end{array}
\right.
\end{equation}
and as a consequence $(\lambda_1,\lambda_2)$ are the solution of the quadratic equation
$$
\lambda^2-\{\text{tr}(P)-1\}\lambda+\text{det}(P)=0\,.
$$
Solving this equation yields the following spectrum
\begin{equation*}
\sigma(p)=
{\mathds{1}_{\{p\leq 1/3\}}}\left\{1,\quad \frac{-p}{1-p},\quad \frac{-p}{1-p}\right\}+
{\mathds{1}_{\{p> 1/3\}}}\left\{1,\quad 1-\frac{1}{2p},\quad -\frac{1}{2}\right\}
\end{equation*}
and the spectral gap is thus
\begin{equation*}
\gamma(p)=\mathds{1}_{\{p\leq 1/3\}}\frac{1-2p}{1-p}+\mathds{1}_{\{p> 1/3\}}\frac{1}{2}\,.
\end{equation*}
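A quick numerical check of the spectrum $\sigma(p)$ in both regimes (a Python sketch):

```python
import numpy as np

def P_uninformed(p):
    # The two regimes of the uninformed (MH) transition matrix.
    if p <= 1/3:
        return np.array([[1 - 3*p, 1 - p,   2*p],
                         [1 - p,   1 - 3*p, 2*p],
                         [1 - p,   1 - p,   0.0]]) / (2*(1 - p))
    return np.array([[0.0,   2*p,   2*p],
                     [2*p,   0.0,   2*p],
                     [1 - p, 1 - p, 2*(3*p - 1)]]) / (4*p)

for p, spec in [(0.2, [-0.25, -0.25, 1.0]), (0.5, [-0.5, 0.0, 1.0])]:
    ev = np.sort(np.linalg.eigvals(P_uninformed(p)).real)
    assert np.allclose(ev, spec)                # matches sigma(p)
    print(p, ev, 1 - np.sort(np.abs(ev))[-2])   # spectral gap
```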
We now consider the transition kernel $P^\ast$ of the locally informed Markov chain. It corresponds to Algorithm \ref{alg2} implemented with the proposals $Q_1$ and $Q_2$ given at Eq. \eqref{eq:ex3} and the weight function $\omega(i)\propto(\pi(\inf\{\mathsf{X}\backslash\{i\}\}),\pi(\sup\{\mathsf{X}\backslash\{i\}\}))$ for any $i\in\mathsf{X}$.
By straightforward algebra, we have:
\begin{multline*}
P^\ast=\frac{\mathds{1}_{\{p\leq1/3\}}}{2(1-p)(1+p)}
\left(
\begin{array}{ccc}
2p(1-3p) & 2(1-p)^2 & 2p(1+p)\\
2(1-p)^2 & 2p(1-3p) & 2p(1+p)\\
(1-p)(1+p) & (1-p)(1+p) & 0
\end{array}
\right)\\
+\frac{\mathds{1}_{\{p>1/3\}}}{(1+p)}
\left(
\begin{array}{ccc}
0 & 1-p & 2p\\
1-p & 0 & 2p\\
1-p & 1-p & 3p-1
\end{array}
\right)\,,
\end{multline*}
and the spectrum is given by
\begin{equation*}
\sigma^\ast(p)=\mathds{1}_{\{p\leq1/3\}}\left\{1,
\;-\frac{p}{1-p},\;-\frac{1-3p+4p^2}{1-p^2}\right\}+\mathds{1}_{\{p>1/3\}}\left\{1,\; \frac{p-1}{p+1},\; \frac{p-1}{p+1}\right\}\,.
\end{equation*}
The spectral gap is thus
\begin{equation*}
\gamma^\ast(p)=\frac{p(3-5p)}{1-p^2}\mathds{1}_{\{p\leq 1/3\}}+\frac{2p}{1+p}\mathds{1}_{\{p>1/3\}}\,,
\end{equation*}
which completes the proof.
\end{proof}
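The spectrum $\sigma^\ast(p)$ and the gap $\gamma^\ast(p)$ can be checked in the same way (a Python sketch; for $p>1/3$ the entry $P^\ast(3,3)$ is set to $(3p-1)/(1+p)$, the value required for the rows to sum to one):

```python
import numpy as np

def P_informed(p):
    # The two regimes of the locally informed transition matrix.
    if p <= 1/3:
        return np.array([[2*p*(1 - 3*p),   2*(1 - p)**2,    2*p*(1 + p)],
                         [2*(1 - p)**2,    2*p*(1 - 3*p),   2*p*(1 + p)],
                         [(1 - p)*(1 + p), (1 - p)*(1 + p), 0.0]]) \
            / (2*(1 - p)*(1 + p))
    return np.array([[0.0,   1 - p, 2*p],
                     [1 - p, 0.0,   2*p],
                     [1 - p, 1 - p, 3*p - 1]]) / (1 + p)

for p in (0.2, 0.5):
    Ps = P_informed(p)
    assert np.allclose(Ps.sum(axis=1), 1.0)
    gap = 1 - np.sort(np.abs(np.linalg.eigvals(Ps)))[-2]
    expected = p*(3 - 5*p)/(1 - p*p) if p <= 1/3 else 2*p/(1 + p)
    assert np.isclose(gap, expected)       # gamma*(p) as stated above
    print(p, gap)
```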
\section{Technical Lemmas}
\begin{lemma}
\label{lem:2}
Let $P$ be the transition matrix of the RSGS, $Q$ its equivalent representation on the folded state space, $\Omega$ and $\Gamma$ the two mapping matrices defined at Algorithm \ref{alg:matrices}, and let $\bar{P}:=\Omega Q\Gamma$ and $\bar{P}^\ast_\lambda:=\Omega Q^\ast_\lambda\Gamma$. Then we have for $x=(1\,,1\,,\cdots\,, 1\,,1)$ and all $t>0$
\begin{equation}
\label{eq:proof_ex_5}
\delta_x P^t=\delta_x\bar{P}^t\,.
\end{equation}
Similarly for the locally informed algorithm, we have for all $t>0$
\begin{equation}
\label{eq:proof_ex_6}
\delta_x{P_\lambda^\ast}^t=\delta_x\bar{P}_\lambda^{\ast\,t}\,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:2}]
We prove Eqs. \eqref{eq:proof_ex_5} and \eqref{eq:proof_ex_6} by induction. For notational simplicity, we present the proof for $\lambda=1$, \textit{i.e.\;} $\bar{P}_\lambda^\ast\equiv\bar{P}^\ast$. We first establish Eq. \eqref{eq:proof_ex_5}. We use the notation of the proof of Proposition \ref{prop:hypercube:1} and let $\mathcal{V}:=\{\mathcal{V}_1,\ldots,\mathcal{V}_{d+1}\}$. The initialisation follows from noting that $P(x,\,\cdot)=\bar{P}(x,\,\cdot)$ for any $x\in\mathcal{V}$. Now, assume that $\delta_xP^t=\delta_x\bar{P}^t$ and note that
\begin{multline}
\label{eq:proof_ex_6b}
\delta_xP^{t+1}=\sum_{i\in\mathcal{V}}P^t(x,i)P(i,\,\cdot\,)+\sum_{i\in\mathcal{E}}P^t(x,i)P(i,\,\cdot\,)\\
=\sum_{i\in\mathcal{V}}\bar{P}^t(x,i)P(i,\,\cdot\,)+\sum_{i\in\mathcal{E}}\bar{P}^t(x,i)P(i,\,\cdot\,)\\
=\sum_{i\in\mathcal{V}}\bar{P}^t(x,i)\bar{P}(i,\,\cdot\,)+\sum_{k=1}^d\sum_{i\in\mathcal{E}_k}\bar{P}^t(x,i)P(i,\,\cdot\,)\,,
\end{multline}
where the second line comes from the induction hypothesis and the third follows from the initialisation stage. The second term in the last line of Eq. \eqref{eq:proof_ex_6b} requires special attention. In particular, Lemma \ref{lem:2_1} shows that for all $x\in\mathcal{V}$ and any edge state $i$, $\bar{P}^t(x,i)$ depends on $i$ only through the edge it belongs to. In other words, for all $k\in\{1,\ldots,d\}$, there exists a function $\varrho_k^t$ such that $\bar{P}^t(x,i)=\varrho_k^t(x)$ for all $i\in\mathcal{E}_k$ and all $x\in\mathcal{V}$. Plugging this into Eq. \eqref{eq:proof_ex_6b} yields
\begin{equation}
\label{eq:proof_ex_8}
\delta_xP^{t+1}=\sum_{i\in\mathcal{V}}\bar{P}^t(x,i)\bar{P}(i,\,\cdot\,)+\sum_{k=1}^d\varrho_k^t(x)\sum_{i\in\mathcal{E}_k}\bar{P}(i,\,\cdot\,)\,.
\end{equation}
Finally, we note that
\begin{equation}
\label{eq:proof_ex_7}
\sum_{i\in\mathcal{E}_k}P(i,\,\cdot\,)=\sum_{i\in\mathcal{E}_k}\bar{P}(i,\,\cdot\,)\,.
\end{equation}
Indeed, by straightforward algebra, denoting $V_{k-1}$ and $V_k$ the adjacent vertices of $\mathcal{E}_k$, it can be readily checked that $\sum_{i\in\mathcal{E}_k}P(i,j)=\sum_{i\in\mathcal{E}_k}\bar{P}(i,j)=\{(n-2)/dn\} \mathds{1}_{j\in\{V_{k-1},V_k\}}+(1-2/dn)\mathds{1}_{j\in\mathcal{E}_k}$.
Combining Eqs. \eqref{eq:proof_ex_8} and \eqref{eq:proof_ex_7} finally yields
\begin{multline*}
\delta_xP^{t+1}=\sum_{i\in\mathcal{V}}\bar{P}^t(x,i)\bar{P}(i,\,\cdot\,)+\sum_{k=1}^d\varrho_k^t(x)\sum_{i\in\mathcal{E}_k}\bar{P}(i,\,\cdot\,)\\
=\sum_{i\in\mathcal{V}}\bar{P}^t(x,i)\bar{P}(i,\,\cdot\,)+\sum_{i\in\mathcal{E}}\bar{P}^t(x,i)\bar{P}(i,\,\cdot\,)=\delta_x\bar{P}^{t+1}\,,
\end{multline*}
which completes the first part of the proof. To prove Eq. \eqref{eq:proof_ex_6}, we note that the initialisation is straightforward since there is a one-to-one mapping on $\mathcal{V}$ between the folded and unfolded representation. The induction is concluded by applying the same reasoning, noting that Lemma \ref{lem:2_1} holds for $\bar{P}^\ast$ also and that
\begin{equation}
\label{eq:proof_ex_7_ast}
\sum_{i\in\mathcal{E}_k}P^\ast(i,\,\cdot\,)=\sum_{i\in\mathcal{E}_k}\bar{P}^\ast(i,\,\cdot\,)\,.
\end{equation}
Indeed,
\begin{itemize}
\item for $k=1$,
\begin{itemize}
\item $\sum_{i\in\mathcal{E}_1}P^\ast(i,j)=(n-2)/n=\sum_{i\in\mathcal{E}_1}\bar{P}^\ast(i,j)$ if $j=\mathcal{V}_1$,
\item $\sum_{i\in\mathcal{E}_1}P^\ast(i,j)=1-3/2n=\sum_{i\in\mathcal{E}_1}\bar{P}^\ast(i,j)$ if $j\in\mathcal{E}_1$,
\item $\sum_{i\in\mathcal{E}_1}P^\ast(i,j)=(n-2)/2n=\sum_{i\in\mathcal{E}_1}\bar{P}^\ast(i,j)$ if $j=\mathcal{V}_2$,
\item $\sum_{i\in\mathcal{E}_1}P^\ast(i,j)=0=\sum_{i\in\mathcal{E}_1}\bar{P}^\ast(i,j)$ for any $j\in\mathsf{Z}\backslash(\{\mathcal{V}_1,\mathcal{V}_2\}\cup\mathcal{E}_1)$,
\end{itemize}
\item for $1<k<d$,
\begin{itemize}
\item $\sum_{i\in\mathcal{E}_k}P^\ast(i,j)=(n-2)/2n=\sum_{i\in\mathcal{E}_k}\bar{P}^\ast(i,j)$ if $j\in\{\mathcal{V}_k,\mathcal{V}_{k+1}\}$,
\item $\sum_{i\in\mathcal{E}_k}P^\ast(i,j)=1-1/n=\sum_{i\in\mathcal{E}_k}\bar{P}^\ast(i,j)$ if $j\in\mathcal{E}_k$,
\item $\sum_{i\in\mathcal{E}_k}P^\ast(i,j)=0=\sum_{i\in\mathcal{E}_k}\bar{P}^\ast(i,j)$ for any $j\in\mathsf{Z}\backslash(\{\mathcal{V}_k,\mathcal{V}_{k+1}\}\cup\mathcal{E}_k)$,
\end{itemize}
\item the case $k=d$ is identical to the case $k=1$.
\end{itemize}
\end{proof}
\begin{lemma}
\label{lem:2_1}
In the context of Lemma \ref{lem:2}, for any $x\in\mathcal{V}$, for all $k\in\{1,\ldots,d\}$ and $i\in\mathcal{E}_k$, the transition probabilities $\bar{P}^t(x,i)$ and $\bar{P}^{\ast\,t}(x,i)$ depend on $i$ only through the edge $\mathcal{E}_k$ to which it belongs.
\end{lemma}
\begin{proof}
We prove Lemma \ref{lem:2_1} by induction, for $\bar{P}^t(x,i)$ only, the proof for $\bar{P}^{\ast\,t}(x,i)$ being identical. The initialisation follows from noting that for any $i$ belonging to an edge connected to $x$, $\bar{P}(x,i)=1/dn$, while for any $i$ belonging to an edge not connected to $x$, $\bar{P}(x,i)=0$. As a consequence, for all $i$ belonging to the same edge, $\bar{P}(x,i)$ is independent of $i$. Let us assume that for any $x\in\mathcal{V}$, for all $k\in\{1,\ldots,d\}$ and all $i\in\mathcal{E}_k$, $\bar{P}^t(x,i)=\varrho_k^t(x)$, \textit{i.e.\;} $\bar{P}^t(x,i)$ is independent of $i$. We have:
\begin{multline*}
\bar{P}^{t+1}(x,i)=\sum_{j\in\mathsf{X}}\bar{P}^t(x,j)\bar{P}(j,i)\\
=\sum_{j\in\{V_{k-1},V_k\}}\bar{P}^t(x,j)\bar{P}(j,i)+\sum_{j\in\mathcal{E}_k}\bar{P}^t(x,j)\bar{P}(j,i)\\
=\frac{1}{dn}\sum_{j\in\{V_{k-1},V_k\}}\bar{P}^t(x,j)+\varrho_k^t(x)\sum_{j\in\mathcal{E}_k}\bar{P}(j,i)\,,
\end{multline*}
and since for all $(i,j)\in\mathcal{E}_k^2$, $\bar{P}(j,i)$ is independent of $i$, there exists $\varrho_k^{t+1}(x)$ such that for all $i\in\mathcal{E}_k$, $\bar{P}^{t+1}(x,i)=\varrho_k^{t+1}(x)$, which completes the proof.
\end{proof}
\begin{lemma}
\label{lem:0}
In the context of the proof of Proposition \ref{prop:hypercube:2}, $\gamma(\bar{P})=\gamma(Q)$ and $\gamma(\bar{P}_\lambda^\ast)=\gamma(Q_\lambda^\ast)$.
\end{lemma}
\begin{proof}
Without loss of generality and for notational simplicity, the proof is carried out in the case $\lambda=1$, \textit{i.e.\;} $\bar{P}^\ast_\lambda\equiv \bar{P}^\ast$. Central to this proof is the fact that $\Gamma\Omega=I_{2d+1}$, where $\Gamma$ and $\Omega$ are the two change-of-basis matrices from $\mathsf{Z}$ to its folded counterpart $\mathsf{Y}$ and conversely; see their formal definition in Algorithm \ref{alg:matrices}. Indeed, it can be readily checked that $\Omega$ is an injection from $\mathsf{Y}$ to $\mathsf{Z}$ and thus admits a left inverse. This left inverse corresponds to the reverse transformation from $\mathsf{Z}$ to $\mathsf{Y}$, which is precisely $\Gamma$.
We establish $\gamma(\bar{P})=\gamma(Q)$; the identity $\gamma(\bar{P}^\ast)=\gamma(Q^\ast)$ is obtained in the same way.
Let $\lambda\in\text{Sp}(Q)$. Then, by definition of $\text{Sp}(Q)$, there exists a non-null vector $y_0\in\mathbb{R}^{2d+1}$ such that
\begin{equation}
\label{eq:sp2}
Q y_0=\lambda y_0\Leftrightarrow Q \Gamma\Omega y_0=\lambda y_0\Leftrightarrow\Omega Q \Gamma\Omega y_0=\lambda \Omega y_0
\Leftrightarrow \bar{P}\Omega y_0=\lambda\Omega y_0\,.
\end{equation}
Moreover, since $\text{ker}(\Omega)$ is restricted to the null vector $0_{2d+1}$, $\Omega y_0\neq 0_{(n-1)d+1}$ and $\lambda\in\text{Sp}(\bar{P})$.
Let $\lambda\in\text{Sp}(\bar{P})$. Then, by definition of $\text{Sp}(\bar{P})$, there exists a non-null vector $x_0\in\mathbb{R}^{(n-1)d+1}$ such that $\bar{P} x_0=\lambda x_0$. By definition of $\bar{P}$, we have that
\begin{equation}
\label{eq:sp1}
\Omega Q \Gamma x_0=\lambda x_0\Leftrightarrow \Gamma \Omega Q \Gamma x_0=\lambda \Gamma x_0 \Leftrightarrow Q \Gamma x_0=\lambda \Gamma x_0\,.
\end{equation}
Now, $\text{ker}(\Gamma)$ is not restricted to $0_{(n-1)d+1}$. Indeed, it can be readily checked that $x_0:=(0,1,-1,0,\ldots,0)\in\text{ker}(\Gamma)$. As a consequence, for any $\lambda\in\text{Sp}(\bar{P})$, if the eigenvector associated with $\lambda$ does not belong to $\text{ker}(\Gamma)$, then $\lambda\in\text{Sp}(Q)$. In contrast, if $x_0\in\text{ker}(\Gamma)$, it cannot be concluded whether or not $\lambda\in\text{Sp}(Q)$. A careful look at the transition matrix $\bar{P}$ shows that the columns of $\bar{P}$ are not linearly independent. In particular, the columns corresponding to states $x_0\in\mathsf{Z}$ belonging to the same edge $\mathcal{E}_k$ are all equal. As a consequence, $\text{rank}(\bar{P})=2d+1$, which implies that $\text{dim}(\text{ker}(\bar{P}))=(n-1)d+1-(2d+1)=(n-3)d$. This shows that $0\in\text{Sp}(\bar{P})$ with multiplicity $(n-3)d$ and, in fact, $\text{ker}(\Gamma)=\text{ker}(\bar{P})$. On the other hand, $\text{rank}(Q)=2d+1$ and thus $\text{dim}(\text{ker}(Q))=0$, which implies that $0\not\in\text{Sp}(Q)$. Combining these observations yields
\begin{equation}
\text{Sp}(\bar{P})=\text{Sp}(Q)\cup\{0\}\,.
\end{equation}
The proof is concluded by noting that from the definition of the spectral gap, we have
\begin{equation}
\gamma(\bar{P})=1-\sup_{\lambda\in\text{Sp}(\bar{P})}|\lambda|=1-\sup_{\lambda\in\text{Sp}(\bar{P})\backslash\{0\}}|\lambda|=
1-\sup_{\lambda\in\text{Sp}(Q)}|\lambda|=\gamma(Q)\,.
\end{equation}
\end{proof}
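The linear-algebra mechanism at the heart of this proof, namely that $\bar{P}=\Omega Q \Gamma$ with $\Gamma\Omega=I$ has the same spectrum as $Q$ up to padding with zeros, can be checked numerically. The sketch below is ours and uses random matrices in place of the actual $Q$, $\Omega$, $\Gamma$; it assumes nothing about the underlying graph:

```python
import numpy as np

# Illustration (random matrices, not the actual transition matrices):
# if Gamma @ Omega = I_m, then P = Omega @ Q @ Gamma satisfies
# Sp(P) = Sp(Q) together with 0 of multiplicity M - m.
rng = np.random.default_rng(1)
m, M = 5, 9                           # stand-ins for 2d+1 and (n-1)d+1
Omega = rng.standard_normal((M, m))   # injection, full column rank
Gamma = np.linalg.pinv(Omega)         # a left inverse: Gamma @ Omega = I_m
assert np.allclose(Gamma @ Omega, np.eye(m))

Q = rng.standard_normal((m, m))
P = Omega @ Q @ Gamma

eig_Q = np.sort_complex(np.linalg.eigvals(Q))
eig_P = np.sort_complex(np.linalg.eigvals(P))
padded = np.sort_complex(np.concatenate([eig_Q, np.zeros(M - m, dtype=complex)]))
assert np.allclose(padded, eig_P, atol=1e-8)
```

The determinant identity $\det(\lambda I_M-\Omega Q\Gamma)=\lambda^{M-m}\det(\lambda I_m-Q)$, which follows from $\Gamma\Omega=I_m$, is exactly what the numerical check exercises.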
\begin{lemma}
\label{lem:1}
In the context of Proposition \ref{prop:hypercube:2} and whenever $d$ is even, we have that
$$
\gamma(Q)=\gamma(Q_{\lambda}^\ast)\qquad \text{for}\;\lambda=2/d\,.
$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:1}]
The proof is established through the following series of steps:
\begin{itemize}
\item calculating the characteristic polynomial for each matrix:
$$
\chi(\lambda)=\mathrm{det}(Q-\lambda \mathrm{Id})\,,\qquad\chi_{2/d}^\ast(\lambda)=\mathrm{det}(Q_{2/d}^\ast-\lambda \mathrm{Id})
$$
\item developing the determinant in a specific way, we show that $\chi$ and $\chi_{2/d}^\ast$ only differ through one factor:
$$
\chi(\lambda)=\{n(d-1)-dn\lambda\}^2\,\mathrm{det}\,M_\lambda\,,\qquad \chi_{2/d}^\ast(\lambda)=\{n(d-2)+1-dn\lambda\}^2\,\mathrm{det}\,M_\lambda\,.
$$
\item denoting $\Lambda=\{\lambda\in(-1,1)\,,\;\mathrm{det}\,M_\lambda =0\}$, we have
\begin{multline*}
\sigma(Q)=\left\{1\;,\quad\lambda_0:=\frac{d-1}{d}\;,\quad \Lambda\right\}\,,\\ \sigma(Q_{2/d}^\ast)=\left\{1\;,\quad\lambda_0^\ast:=\frac{d-2}{d}+\frac{1}{dn}\; ,\quad \Lambda\right\}\,.
\end{multline*}
\item in both cases, the largest eigenvalue smaller than 1 lies in $\Lambda$. We first calculate the traces
$$
\text{tr}(Q)=2d-1\,,\qquad
\text{tr}(Q_{2/d}^\ast)=2d+\frac{2}{nd}-1-\frac{2}{d}\,,
$$
and since $\text{tr}(Q)=\sum_{\lambda\in\sigma(Q)}\lambda$, we have
\begin{equation}
\label{eq:proof_ex_4}
\text{tr}(Q)=1+\lambda_0+\sum_{\lambda\in\Lambda}\lambda\,,\qquad \text{tr}(Q_{2/d}^\ast)=1+\lambda_0^\ast+\sum_{\lambda\in\Lambda}\lambda\,.
\end{equation}
If $\lambda_0>\sup\Lambda$, then
$$
1+\lambda_0+\sum_{\lambda\in\Lambda}\lambda<1+2d\lambda_0=2d-1=\text{tr}(Q)\,,
$$
which contradicts the LHS of Eq. \eqref{eq:proof_ex_4} and we have $\lambda_0\leq \sup\Lambda$. Similarly, if $\lambda_0^\ast>\sup\Lambda$, then
$$
1+\lambda_0^\ast+\sum_{\lambda\in\Lambda}\lambda<1+2d\lambda_0^\ast=2d-1+2\frac{1-n}{n}<\text{tr}(Q_{2/d}^\ast)\,,
$$
which contradicts the RHS of Eq. \eqref{eq:proof_ex_4}, and thus $\lambda_0^\ast\leq \sup \Lambda$. Therefore, regardless of whether or not $(\lambda_0,\lambda_0^\ast)\in\Lambda^2$, the largest eigenvalue smaller than 1 is the same for $Q$ and $Q_{2/d}^\ast$, which shows that the spectral gaps of $Q$ and $Q_{2/d}^\ast$ are identical and completes the proof.
\end{itemize}
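For completeness, the elementary arithmetic behind the two upper bounds used above is
\begin{equation*}
1 + 2d \lambda_0 = 1 + 2d\, \frac{d-1}{d} = 2d-1\,,\qquad
1 + 2d \lambda_0^\ast = 1 + 2d \left( \frac{d-2}{d} + \frac{1}{dn} \right) = 2d - 3 + \frac{2}{n} = 2d - 1 + 2\,\frac{1-n}{n}\,.
\end{equation*}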
\end{proof}
\section*{Acknowledgements}
Florian Maire would like to thank the Insight Centre for Data Analytics for funding the post-doctoral fellowship that allowed him to develop this research. The Insight Centre for Data Analytics is supported by Science Foundation Ireland under Grant Number SFI/12/RC/2289.
\bibliographystyle{apalike}
\section{Introduction\label{introduction}}
The purpose of this paper is to explain, as clearly as possible, wave
function renormalization for the Wilson actions and their 1PI
(one-particle-irreducible) actions. In particular, we wish to elucidate
how to incorporate an anomalous dimension of the elementary field into
the Wilson and 1PI actions using the formulation of the exact
renormalization group (ERG). We are motivated by the unsatisfactory
status of the existing literature: we cannot find any simple but
general discussion that illuminates wave function renormalization in
the ERG formalism. We wish to expose the simple relation between the
ERG differential equations for the Wilson actions and those for the
corresponding 1PI actions with the anomalous dimension taken into
account.
Let us recall that a Wilson action comes with a momentum cutoff, say
$\Lambda$, and that its dependence on $\Lambda$ is described by the
ERG differential equation. A 1PI action, obtained from the Wilson
action by a Legendre transformation, satisfies its own ERG
differential equation which is equivalent to that for the Wilson
action. There are many reviews available on the subject of
ERG\cite{becchi1996,morris1998,aoki2000,bagnuls2001,berges2002,gies2012,pawlowski2007,PTPreview2010,rosten2012};
we will review everything necessary in Sect.~\ref{review} to make the
paper self-contained.
An anomalous dimension $\gamma_\Lambda$ of the elementary field was
first introduced to the 1PI actions rather than the Wilson actions by
T.~Morris in \cite{morris1994,morris1994a}. (The suffix of $\gamma_\Lambda$ is for
potential $\Lambda$ dependence.) His derivation is rather sketchy and
somewhat intuitive, but it has all the correct ingredients. In
\cite{ellwanger1995} and \cite{berges2002}, $\gamma_\Lambda$ was
introduced differently to the 1PI actions, but the result is trivially
related to Morris's by a rescaling of the field. It is Morris's
result, given as (\ref{ERGdiff-1PI}) in our paper, which has been
extensively used as the general form of the ERG differential equation
for the 1PI actions.
Now, what remains confusing in the literature is how to introduce
$\gamma_\Lambda$ to the Wilson actions. Rather recently, in
\cite{osborn2011} and \cite{rosten2011}, the starting point was taken
as the ERG differential equation given earlier by Ball et
al. \cite{ball1994} for Wilson actions. It has $\gamma_\Lambda$ as
the coefficient of a simple linear field transformation. But this
simplicity is only apparent; the Legendre transformation, that gives
1PI actions obeying Morris's differential equation
(\ref{ERGdiff-1PI}), turns out to be very complicated. Earlier in
\cite{bervillier2004} (and again later in \cite{bervillier2013}) C.~Bervillier
had introduced $\gamma_\Lambda$ to the Wilson actions in a way that
was later shown in \cite{bervillier2014} to admit a simple Legendre
transformation, leading to (\ref{ERGdiff-1PI}). But the relation
between the results of \cite{osborn2011, rosten2011} and those of
\cite{bervillier2004, bervillier2013, bervillier2014} remains to be
clarified.
We will base our discussion on a recent work by one of us (H.S.)
\cite{sonoda2015} in which ERG differential equations are formulated
in terms of two separate cutoff functions, say $K_\Lambda$ and
$k_\Lambda$. (This will be reviewed in Sect.~\ref{review}.)
$K_\Lambda$ determines the linear part of the ERG differential
equation; $k_\Lambda$, together with $K_\Lambda$, determines the
non-linear part. The anomalous dimension is introduced via the cutoff
dependence of $K_\Lambda$. Hence, not only the linear part but also
the non-linear part of the ERG differential equation depend on
$\gamma_\Lambda$. It is this equation, given by
(\ref{ERGdiffwithgamma}), that we will discuss as the counterpart of
the general form (\ref{ERGdiff-1PI}) for the 1PI actions. A simple
and well understood Legendre transformation (\ref{Legendre-SGamma})
relates the Wilson and 1PI actions.
With this understanding, the results of \cite{bervillier2004,
bervillier2013, bervillier2014} and those of \cite{osborn2011} and
\cite{rosten2011} become transparent. We will show that the ERG
differential equations discussed in
\cite{bervillier2004,bervillier2013,bervillier2014} and in
\cite{osborn2011, rosten2011} are obtained from the general form
(\ref{ERGdiffwithgamma}) by specific choices of our two cutoff
functions. We relegate the technical details to appendices. In
Appendix \ref{appendix-Bervillier}, we will show that the ERG
differential equation of \cite{bervillier2004} is obtained if the
cutoff function $k_\Lambda$ is fixed in terms of $K_\Lambda$.
Similarly, in Appendix \ref{appendix-Rosten}, we will show that the
ERG differential equation of Ball et al. \cite{ball1994} is obtained
if we choose $k_\Lambda$ specifically to remove $\gamma_\Lambda$ from
the non-linear term. (In fact the $\gamma_\Lambda$ dependence remains
hidden in $k_\Lambda$.) Consequently we are led to the result of
\cite{osborn2011, rosten2011} for the Legendre transformation. In
passing we note that Wilson's original ERG differential equation and
its variation by Polchinski, both with an anomalous dimension, follow
from the general form (\ref{ERGdiffwithgamma}) by appropriate choices
of cutoff functions. This will be explained in the second half of
Appendix \ref{appendix-supplement}.
The main part of the paper is Sect.~\ref{section-main}, where we
introduce wave function renormalization not by changing the
normalization of an elementary field but by changing that of
$K_\Lambda$. To keep Sect.~\ref{section-main} as short and clear as
possible, we have collected all the relevant (mostly known but
some new) results in Sect.~\ref{review}. Almost everything we write
in Sect.~\ref{review} has been written before, but it is handy to have
them all here. This section also serves the purpose of making the
reader familiar with our notation. A well-informed reader can skip
this section. In Sect.~\ref{section-fp} we rewrite the ERG
differential equations obtained in Sect.~\ref{section-main} using the
dimensionless notation. This is necessary to have fixed point
solutions. We will be brief since the rewriting is pretty standard.
In Sect.~\ref{conclusion} we summarize and conclude the paper. We
have added four appendices. Appendix \ref{appendix-supplement}
supplements further the review given in Sect.~\ref{review}. We hope
this makes the paper self-contained without getting too long. We then
explain the relation of our results to those obtained by C.~Bervillier
\cite{bervillier2004, bervillier2013, bervillier2014} in Appendix
\ref{appendix-Bervillier} and those by Osborn and Twigg
\cite{osborn2011} and by Rosten \cite{rosten2011} in Appendix
\ref{appendix-Rosten}. Finally, in Appendix \ref{appendix-examples},
we give examples of calculating the anomalous dimensions using ERG
perturbatively.
\newpage
\section{Mostly review of the relevant results\label{review}}
There are many reviews available on the subject of
ERG \cite{becchi1996,morris1998,aoki2000,bagnuls2001,berges2002,gies2012,pawlowski2007,PTPreview2010,rosten2012}.
Rather than referring the readers to them, we review all the necessary
results in this preparatory section. Parts of Sects.~2.3--2.5 are new
in the viewpoint we provide. Here is a word of warning: we have adopted a
not-so-popular convention for the sign of the Wilson action $S_\Lambda$ so
that the Boltzmann weight is $e^{S_\Lambda}$ instead of the more usual $e^{-
S_\Lambda}$. This reduces the number of minus signs in the formulas. We
work in $D$-dimensional Euclidean space, and we use the notation
\begin{equation}
\int_p = \int \frac{d^D p}{(2\pi)^D},\quad
\delta (p) = (2\pi)^D \delta^{(D)} (p)
\end{equation}
for momentum integrals and the $D$-dimensional delta function.
\subsection{ERG differential equation}
Let $S_\Lambda$ be a Wilson action with a momentum cutoff $\Lambda$. As we
lower $\Lambda$, we change $S_\Lambda$ such that physics is preserved.
The specific $\Lambda$-dependence is given by the exact
renormalization group (ERG) differential equation
\begin{eqnarray}
- \Lambda \frac{\partial S_\Lambda [\phi]}{\partial \Lambda} &=& \int_p \Lambda
\frac{\partial \ln K_\Lambda (p)}{\partial \Lambda} \phi (p)
\frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}\nonumber\\
&& \hspace{-1cm}+ \int_p \Lambda \frac{\partial}{\partial
\Lambda} \ln \frac{K_\Lambda (p)^2}{k_\Lambda (p)} \cdot
\frac{k_\Lambda (p)}{p^2} \frac{1}{2} \left\lbrace
\frac{\delta S_\Lambda[\phi]}{\delta \phi (p)} \frac{\delta
S_\Lambda[\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_\Lambda[\phi]}{\delta \phi
(p) \delta \phi (-p)} \right\rbrace\,,\label{ERGdiff}
\end{eqnarray}
which is characterized by two positive cutoff functions $K_\Lambda
(p)$ and $k_\Lambda (p)$.\cite{sonoda2015} $K_\Lambda (p)$ must have
the properties
\begin{equation}
K_\Lambda (p) \longrightarrow \left\lbrace\begin{array}{c@{\quad}l}
\mathrm{const}& (p^2 \to 0)\,,\\
0& (p^2 \to \infty)\,,
\end{array}\right.
\end{equation}
while $k_\Lambda (p)$ must vanish as $p^2 \to 0$:
\begin{equation}
k_\Lambda (p) \stackrel{p^2 \to 0}{\longrightarrow} 0\,.
\end{equation}
Any ERG differential equation given in the past can be written as
above if the cutoff functions are chosen appropriately. As we will
see in Sect.~\ref{section-main}, this is still the case even with an
anomalous dimension.
For example, the choice
\begin{equation}
K_\Lambda (p) = K_\Lambda^W (p) \equiv e^{- p^2/\Lambda^2},\quad
k_\Lambda (p) = k_\Lambda^W (p) \equiv \frac{p^2}{\Lambda^2}\,,
\end{equation}
was made when the ERG differential equation was written for the first
time \cite{wilson1974}. For perturbative
applications \cite{polchinski1984}, it is convenient to choose
\begin{equation}
K_\Lambda (p) = K(p/\Lambda),\quad
K (0) = 1,\quad k_\Lambda (p) = k^P_\Lambda (p) \equiv K (p/\Lambda)
\left( 1 - K (p/\Lambda)\right)\,
\end{equation}
so that $K(p/\Lambda)/p^2$ and $(1-K (p/\Lambda))/p^2$ are interpreted
as low and high momentum propagators, respectively.
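As a quick numerical sanity check (ours, not part of the original discussion), one can verify the required limiting behaviour of these two standard choices, together with the fact that the low and high momentum propagators recombine into the full propagator $1/p^2$:

```python
import numpy as np

# Numerical sanity check (illustrative) of the two standard cutoff
# choices quoted above, taking K(x) = exp(-x^2) as the smooth cutoff.

def K_wilson(p, Lam):
    # K_Lambda(p) = exp(-p^2/Lambda^2): -> 1 as p^2 -> 0, -> 0 as p^2 -> oo
    return np.exp(-(p / Lam) ** 2)

def k_wilson(p, Lam):
    # Wilson's choice k_Lambda(p) = p^2/Lambda^2
    return (p / Lam) ** 2

def k_polchinski(p, Lam):
    # Polchinski-type choice k_Lambda(p) = K(p/Lambda) (1 - K(p/Lambda))
    K = K_wilson(p, Lam)
    return K * (1.0 - K)

Lam = 2.0
assert abs(K_wilson(1e-8, Lam) - 1.0) < 1e-12   # K_Lambda(p) -> const as p^2 -> 0
assert K_wilson(1e3, Lam) < 1e-12               # K_Lambda(p) -> 0 as p^2 -> oo
assert k_wilson(1e-8, Lam) < 1e-12              # k_Lambda(p) -> 0 as p^2 -> 0
assert k_polchinski(1e-8, Lam) < 1e-12

# Low + high momentum propagators recombine into the full propagator 1/p^2:
p = np.linspace(0.1, 10.0, 50)
K = K_wilson(p, Lam)
assert np.allclose(K / p**2 + (1.0 - K) / p**2, 1.0 / p**2)
```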
\subsection{Modified correlation functions}
Using $K_\Lambda$ \& $k_\Lambda$ we can introduce modified correlation
functions as
\begin{eqnarray}
&&\vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda, k_\Lambda}\nonumber\\
&&\equiv \prod_{i=1}^n \frac{1}{K_\Lambda (p_i)} \cdot
\vev{\exp \left( - \frac{1}{2} \int_p \frac{k_\Lambda (p)}{p^2}
\frac{\delta^2}{\delta \phi (p)\delta \phi (-p)} \right) \phi
(p_1) \cdots \phi (p_n)}_{S_\Lambda}\nonumber\\
&&= \prod_{i=1}^n \frac{1}{K_\Lambda (p_i)} \cdot \int [d\phi] e^{S_\Lambda
[\phi]}\nonumber\\
&&\qquad\times \exp \left( - \frac{1}{2} \int_p \frac{k_\Lambda (p)}{p^2}
\frac{\delta^2}{\delta \phi (p)\delta \phi (-p)} \right)
\, \left\lbrace\phi (p_1) \cdots \phi (p_n)\right\rbrace\,.
\label{modified}
\end{eqnarray}
$S_\Lambda$ is expected to suppress the fluctuations of high momentum modes,
and the division by $K_\Lambda (p)$ enhances those fluctuations. The
propagator $k_\Lambda (p)/p^2$ modifies two-point functions only at
momenta of order $\Lambda$ or higher. The modified correlation
functions thus defined are independent of the cutoff $\Lambda$ if
$S_\Lambda$ satisfies (\ref{ERGdiff}). In fact, as shown in
\cite{sonoda2015}, we can derive the ERG differential equation
(\ref{ERGdiff}) by demanding the $\Lambda$-independence of (\ref{modified}).
In the first half of Appendix \ref{appendix-supplement} we solve
(\ref{ERGdiff}) and show the $\Lambda$ independence of
(\ref{modified}).
\subsection{Equivalent Wilson actions}
Let $K'_\Lambda$ be a cutoff function alternative to $K_\Lambda$.
Substituting $\left(K_\Lambda (p)/K'_\Lambda (p)\right) \phi (p)$ for
$\phi (p)$ in the above modified correlation functions, we obtain
\begin{equation}
\prod_{i=1}^n \frac{1}{K'_\Lambda (p_i)} \cdot
\vev{\exp \left( - \frac{1}{2} \int_p \frac{k_\Lambda (p)}{p^2}
\frac{K'_\Lambda (p)^2}{K_\Lambda (p)^2} \frac{\delta^2}{\delta \phi
(p) \delta \phi (-p)} \right) \phi (p_1) \cdots \phi (p_n)}_{S_\Lambda'}
\end{equation}
where $S_\Lambda' [\phi]$ is the Wilson action obtained by the substitution:
\begin{equation}
S_\Lambda' [\phi] \equiv S_\Lambda \left[ \frac{K_\Lambda (p)}{K'_\Lambda (p)}
\phi (p)\right]\,.\label{Sprime}
\end{equation}
If we define
\begin{equation}
k'_\Lambda (p) \equiv k_\Lambda (p) \frac{K'_\Lambda (p)^2}{K_\Lambda
(p)^2}\,,\label{kprime}
\end{equation}
then $S_\Lambda$ with $K_\Lambda, k_\Lambda$ has the same modified
correlation functions as $S_\Lambda'$ with $K'_\Lambda, k'_\Lambda$:
\begin{equation}
\vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda'}^{K'_\Lambda, k'_\Lambda}
= \vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda, k_\Lambda}\,.
\end{equation}
These are independent of $\Lambda$ if $S_\Lambda$ satisfies the ERG
differential equation (\ref{ERGdiff}) with $K_\Lambda, k_\Lambda$.
Then, $S_\Lambda'$ is guaranteed to satisfy the ERG differential equation
with $K'_\Lambda, k'_\Lambda$. We can regard the combination $(S_\Lambda',
K'_\Lambda, k'_\Lambda)$ as equivalent to the combination $(S_\Lambda,
K_\Lambda, k_\Lambda)$: equivalent actions give the same modified
correlation functions.
Please note that a somewhat broader definition of equivalence was
introduced in \cite{sonoda2015}: a combination $(S,K,k)$ of a Wilson
action and two cutoff functions is regarded as equivalent to
$(S',K',k')$ if they give the same modified correlation functions.
Hence, $(S_\Lambda, K_\Lambda, k_\Lambda)$ for all $\Lambda$ belongs to the
same class of equivalent actions. (Strictly speaking, $\Lambda$
should not be smaller than the physical mass.) We have adopted a
narrower definition so that two equivalent actions are related to each
other by a simple linear field transformation (\ref{Sprime}).
\subsection{Functional $W_\Lambda[J]$}
Given $S_\Lambda$ satisfying (\ref{ERGdiff}) with $K_\Lambda, k_\Lambda$, we
define
\begin{equation}
\tilde{S}_\Lambda [\phi] \equiv \frac{1}{2} \int_p
\frac{p^2}{k_\Lambda (p)} \phi (p) \phi (-p) + S_\Lambda [\phi]\,.
\end{equation}
This satisfies
\begin{eqnarray}
- \Lambda \frac{\partial \tilde{S}_\Lambda [\phi]}{\partial \Lambda}
&=& \int_p \Lambda \frac{\partial \ln \frac{K_\Lambda (p)}{R_\Lambda
(p)}}{\partial \Lambda} \phi (p) \frac{\delta \tilde{S}_\Lambda
[\phi]}{\delta \phi (p)}\nonumber\\
&& + \int_p \Lambda \frac{\partial R_\Lambda (p)}{\partial \Lambda}
\frac{1}{2} \frac{K_\Lambda (p)^2}{R_\Lambda (p)^2} \left\lbrace \frac{\delta
\tilde{S}_\Lambda [\phi]}{\delta \phi
(p)}\frac{\delta \tilde{S}_\Lambda [\phi]}{\delta \phi (-p)}
+ \frac{\delta^2 \tilde{S}_\Lambda [\phi]}{\delta \phi (p) \delta \phi
(-p)} \right\rbrace\,,
\end{eqnarray}
where we define
\begin{equation}
R_\Lambda (p) \equiv \frac{p^2}{k_\Lambda (p)} K_\Lambda (p)^2\,.
\end{equation}
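As a simple illustration, for the two cutoff choices given earlier in this section this combination reads explicitly
\begin{equation*}
R^W_\Lambda (p) = \frac{p^2}{k^W_\Lambda (p)}\, K^W_\Lambda (p)^2 = \Lambda^2\, e^{-2p^2/\Lambda^2}\,,\qquad
R^P_\Lambda (p) = \frac{p^2}{k^P_\Lambda (p)}\, K (p/\Lambda)^2 = p^2\, \frac{K (p/\Lambda)}{1 - K (p/\Lambda)}\,,
\end{equation*}
so that, for instance, $R^W_\Lambda (p) \to \Lambda^2$ as $p^2 \to 0$.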
The first term can be eliminated by the change of field variables from
$\phi (p)$ to
\begin{equation}
J(p) \equiv \frac{R_\Lambda (p)}{K_\Lambda (p)} \phi (p)\,.
\end{equation}
Defining
\begin{eqnarray}
W_\Lambda [J] &\equiv& \tilde{S}_\Lambda
\left[ \frac{K_\Lambda (p)}{R_\Lambda (p)} J (p)\right]\\
&=& \frac{1}{2} \int_p J(p) \frac{1}{R_\Lambda (p)} J(-p) + S_\Lambda
\left[ \frac{K_\Lambda (p)}{R_\Lambda (p)} J (p)\right]\,, \nonumber
\end{eqnarray}
we obtain
\begin{equation}
- \Lambda \frac{\partial W_\Lambda [J]}{\partial \Lambda} =
\frac{1}{2} \int_p \Lambda \frac{\partial R_\Lambda (p)}{\partial \Lambda} \left\lbrace
\frac{\delta W_\Lambda[J]}{\delta J(p)} \frac{\delta W_\Lambda[J]}{\delta J(-p)} +
\frac{\delta^2 W_\Lambda[J]}{\delta J(p) \delta J(-p)}\right\rbrace \,,
\label{W-Lambda}
\end{equation}
which depends only on the single combination $R_\Lambda (p)$ of the
two cutoff functions. This equation was first obtained by T.~Morris
in \cite{morris1994,morris1994a}.
Given $W_\Lambda [J]$ with a choice of $R_\Lambda (p)$, the
corresponding Wilson action $S_\Lambda [\phi]$ is not uniquely determined.
To specify $S_\Lambda [\phi]$ we need to specify one more cutoff function,
say $K_\Lambda (p)$. Then, $k_\Lambda (p)$ is given by
\begin{equation}
k_\Lambda (p) = \frac{p^2}{R_\Lambda (p)} K_\Lambda (p)^2\,,
\end{equation}
and $S_\Lambda [\phi]$ is determined as
\begin{equation}
S_\Lambda [\phi] = - \frac{1}{2} \int_p \frac{p^2}{k_\Lambda (p)} \phi (p)
\phi (-p) + W_\Lambda \left[ \frac{R_\Lambda (p)}{K_\Lambda (p)} \phi
(p) \right]\,.
\end{equation}
If we choose $K'_\Lambda (p)$ instead of $K_\Lambda (p)$, the
resulting $S_\Lambda'$ and $k'_\Lambda$ satisfy respectively (\ref{Sprime})
and (\ref{kprime}). Hence, the pair $(W_\Lambda, R_\Lambda)$
corresponds to a class of equivalent Wilson actions $(S_\Lambda, K_\Lambda,
k_\Lambda), (S_\Lambda', K'_\Lambda, k'_\Lambda), \cdots$ all of which give
rise to the same modified correlation functions.
\subsection{Legendre transformation\label{subsection-Legendre}}
Given $W_\Lambda [J]$, we introduce a Legendre transformation:
\begin{equation}
\tilde{\Gamma}_\Lambda [\Phi] = W_\Lambda [J] - \int_p J(-p) \Phi
(p)\,,
\label{Legendre}
\end{equation}
where, for given $\Phi (p)$, we determine $J (p)$ by
\begin{equation}
\Phi (p) = \frac{\delta W_\Lambda [J]}{\delta J(-p)}\,.
\label{Phi}
\end{equation}
The inverse transformation is given by (\ref{Legendre})
where, for given $J(p)$, $\Phi (p)$ is determined by
\begin{equation}
J(p) = - \frac{\delta \tilde{\Gamma}_\Lambda [\Phi]}{\delta \Phi (-p)}\,.
\end{equation}
Let $(S_\Lambda, K_\Lambda, k_\Lambda)$ be one of the combinations corresponding
to $W_\Lambda [J]$. We can then rewrite (\ref{Phi}) as
\begin{equation}
\Phi (p) = \frac{1}{K_\Lambda (p)} \left( \phi (p) + \frac{k_\Lambda
(p)}{p^2} \frac{\delta S_\Lambda [\phi]}{\delta \phi (-p)} \right)\,.
\label{Phiphi}
\end{equation}
This is a composite operator corresponding to the elementary field
$\phi (p)$. Its modified correlation functions satisfy
\begin{eqnarray}
&&\vvev{\Phi (p) \phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda,
k_\Lambda}\nonumber\\
&&\equiv \prod_{i=1}^n \frac{1}{K_\Lambda (p_i)} \cdot \vev{\Phi (p)
\exp \left( - \frac{1}{2} \int_p \frac{k_\Lambda (p)}{p^2}
\frac{\delta^2}{\delta \phi (p) \delta \phi (-p)} \right) \left\lbrace
\phi (p_1) \cdots \phi (p_n)\right\rbrace }_{S_\Lambda}\nonumber\\
&&= \vvev{\phi (p) \phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda,
k_\Lambda}\,.
\end{eqnarray}
It is a general property of the Legendre transformation that the
second order differentials
\begin{equation}
\frac{\delta^2 W_\Lambda [J]}{\delta J(p) \delta J(-q)} = \frac{\delta
\Phi (-p)}{\delta J(-q)},\quad
(-) \frac{\delta^2 \tilde{\Gamma}_\Lambda [\Phi]}{\delta \Phi (p) \delta
\Phi (-q)} = \frac{\delta J(-p)}{\delta \Phi (-q)}
\end{equation}
are the inverse of each other:
\begin{equation}
\int_q \frac{\delta^2 W_\Lambda [J]}{\delta J(p) \delta J(-q)}
(-) \frac{\delta^2 \tilde{\Gamma}_\Lambda [\Phi]}{\delta \Phi (q) \delta
\Phi (-r)} = \delta (p-r)\,.\label{inverse}
\end{equation}
We will use the notation
\begin{equation}
G_{\Lambda; p,-q} [\Phi] \equiv \frac{\delta^2 W_\Lambda [J]}{\delta
J(p) \delta J(-q)}\label{Gdef}
\end{equation}
when we prefer to regard this as a functional of $\Phi$.
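These properties of the Legendre transformation can be made concrete in a finite-dimensional toy model. The following sketch is ours: a symmetric positive definite matrix $G$ plays the role of the second functional derivative of $W_\Lambda$, and the assertions check the Legendre pair and the inverse relation (\ref{inverse}) numerically:

```python
import numpy as np

# Finite-dimensional toy model (ours, not the paper's setup):
# W(J) = J.G.J/2 with G symmetric positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = A @ A.T + 4.0 * np.eye(4)    # "second derivative" of W

J = rng.standard_normal(4)
Phi = G @ J                      # Phi = dW/dJ, analogue of the defining relation
W = 0.5 * J @ G @ J
Gamma_tilde = W - J @ Phi        # Legendre transform

# For quadratic W the transform evaluates to -Phi.G^{-1}.Phi/2 ...
assert np.isclose(Gamma_tilde, -0.5 * Phi @ np.linalg.solve(G, Phi))
# ... so J = -dGamma_tilde/dPhi = G^{-1} Phi recovers the source,
assert np.allclose(J, np.linalg.solve(G, Phi))
# and the second derivatives W'' = G and -Gamma_tilde'' = G^{-1}
# are inverses of each other, the finite-dimensional analogue of (inverse).
assert np.allclose(G @ np.linalg.inv(G), np.eye(4))
```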
Another general property of the Legendre transformation is that
$W_\Lambda [J]$ and $\tilde{\Gamma}_\Lambda [\Phi]$ share the same
$\Lambda$-dependence:
\begin{eqnarray}
- \Lambda \frac{\partial \tilde{\Gamma}_\Lambda [\Phi]}{\partial
\Lambda} &=& - \Lambda \frac{\partial W_\Lambda [J]}{\partial
\Lambda}\nonumber\\
&=& \frac{1}{2} \int_p \Lambda \frac{\partial R_\Lambda (p)}{\partial
\Lambda} \left\lbrace \Phi (p) \Phi (-p) + \frac{\delta^2 W_\Lambda
[J]}{\delta J(p) \delta J(-p)}\right\rbrace\nonumber\\
&=& \frac{1}{2} \int_p \Lambda \frac{\partial R_\Lambda (p)}{\partial
\Lambda} \left\lbrace \Phi (p) \Phi (-p) + G_{\Lambda; p,-p} [\Phi] \right\rbrace\,,
\end{eqnarray}
where we have used (\ref{W-Lambda}) and (\ref{Gdef}).
We now define the 1PI action $\Gamma_\Lambda [\Phi]$ so that
\begin{equation}
\tilde{\Gamma}_\Lambda [\Phi] = - \frac{1}{2} \int_p R_\Lambda (p)
\Phi (p) \Phi (-p) + \Gamma_\Lambda [\Phi]\,.
\end{equation}
The term thus subtracted is often called a scale-dependent mass term. The
1PI action has a very simple $\Lambda$-dependence:
\begin{equation}
- \Lambda \frac{\partial \Gamma_\Lambda [\Phi]}{\partial \Lambda} = \frac{1}{2}
\int_p \Lambda \frac{\partial R_\Lambda (p)}{\partial \Lambda}\,
G_{\Lambda; p,-p} [\Phi]\,.\label{Gamma-Lambda}
\end{equation}
Using $\Gamma_\Lambda$ we can rewrite (\ref{inverse}) as
\begin{equation}
\int_q G_{\Lambda; p,-q} [\Phi]
\left( R_\Lambda (q) \delta
(q-r) - \frac{\delta^2 \Gamma_\Lambda [\Phi]}{\delta \Phi (q) \delta \Phi
(-r)} \right)
= \delta (p-r)\,.
\end{equation}
It is important to note that the 1PI action $\Gamma_\Lambda [\Phi]$ (with
$R_\Lambda$) corresponds one-to-one to $W_\Lambda [J]$ (with
$R_\Lambda$). Hence, all the equivalent combinations $(S_\Lambda,
K_\Lambda, k_\Lambda)$, giving rise to the same modified correlation
functions, correspond to the same 1PI action. We end this section by
writing down the Legendre transformation (\ref{Legendre}) using $S_\Lambda$
instead of $W_\Lambda$:
\begin{equation}
- \frac{1}{2} \int_p R_\Lambda (p) \Phi (p) \Phi (-p) + \Gamma_\Lambda [\Phi]
= \frac{1}{2} \int_p \frac{p^2}{k_\Lambda (p)} \phi (p) \phi (-p) +
S_\Lambda [\phi] - \int_p \frac{R_\Lambda (p)}{K_\Lambda (p)} \phi (-p) \Phi
(p)\,.\label{Legendre-SGamma}
\end{equation}
\section{Wave function renormalization for the Wilson and 1PI actions\label{section-main}}
After a long preparation, we are ready to discuss wave function
renormalization in the ERG formalism. We first introduce a cutoff
dependent positive wave function renormalization constant $Z_\Lambda$.
We denote the anomalous dimension by
\begin{equation}
\gamma_\Lambda = - \Lambda \frac{\partial}{\partial \Lambda} \ln
\sqrt{Z_\Lambda}\,.\label{gammaLambda}
\end{equation}
Physics dictates an appropriate choice of $Z_\Lambda$. For now, we
can keep it arbitrary. We wish to construct a Wilson action whose
modified correlation functions are proportional to appropriate powers
of $Z_\Lambda$:
\begin{equation}
\vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda, k_\Lambda}
= Z_\Lambda^{\frac{n}{2}} \cdot
\left(\textrm{$\Lambda$-independent}\right) \,.
\label{ZLdependence}
\end{equation}
This implies
\begin{equation}
\frac{1}{Z_\Lambda^{\frac{n}{2}}} \vvev{\phi (p_1) \cdots \phi
(p_n)}_{S_\Lambda}^{K_\Lambda, k_\Lambda} =
\vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{\sqrt{Z_\Lambda} K_\Lambda,
k_\Lambda} = \left(\textrm{$\Lambda$-independent}\right)\,.
\end{equation}
Hence, $S_\Lambda$ satisfies the ERG differential equation for the cutoff
functions $K^Z_\Lambda \equiv \sqrt{Z_\Lambda}\,K_\Lambda$ and
$k_\Lambda$:
\begin{eqnarray}
- \Lambda \frac{\partial S_\Lambda[\phi]}{\partial \Lambda} &=& \int_p
\Lambda \frac{\partial \ln \left(\sqrt{Z_\Lambda} K_\Lambda
(p)\right)}{\partial \Lambda} \phi (p) \frac{\delta S_\Lambda
[\phi]}{\delta \phi (p)}\nonumber\\
&& \hspace{-1cm}+ \int_p \Lambda \frac{\partial}{\partial \Lambda} \ln
\frac{Z_\Lambda K_\Lambda (p)^2}{k_\Lambda (p)} \cdot \frac{k_\Lambda
(p)}{p^2} \frac{1}{2} \left\lbrace \frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
\frac{\delta S_\Lambda [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_\Lambda
[\phi]}{\delta \phi (p) \delta \phi (-p)} \right\rbrace\nonumber\\
&=& \int_p
\Lambda \frac{\partial \ln K_\Lambda
(p)}{\partial \Lambda} \phi (p) \frac{\delta S_\Lambda
[\phi]}{\delta \phi (p)}\nonumber\\
&& \hspace{-1cm}+ \int_p \Lambda \frac{\partial}{\partial \Lambda} \ln
\frac{K_\Lambda (p)^2}{k_\Lambda (p)} \cdot \frac{k_\Lambda
(p)}{p^2} \frac{1}{2} \left\lbrace \frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
\frac{\delta S_\Lambda [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_\Lambda
[\phi]}{\delta \phi (p) \delta \phi (-p)} \right\rbrace\nonumber\\
&& \hspace{-1cm}- \gamma_\Lambda \int_p \left[
\phi (p) \frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
+ \frac{k_\Lambda (p)}{p^2} \left\lbrace\frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
\frac{\delta S_\Lambda [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_\Lambda
[\phi]}{\delta \phi (p) \delta \phi (-p)} \right\rbrace\right]\,.
\label{ERGdiffwithgamma}
\end{eqnarray}
This is the general form of the ERG differential equation with an
anomalous dimension. (This result is not new; its dimensionless
form was given in \cite{sonoda2015} as (43).)
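To see where the $\gamma_\Lambda$ terms come from, note that (\ref{gammaLambda}) gives
\begin{equation*}
\Lambda \frac{\partial}{\partial \Lambda} \ln \left( \sqrt{Z_\Lambda}\, K_\Lambda (p) \right)
= \Lambda \frac{\partial \ln K_\Lambda (p)}{\partial \Lambda} - \gamma_\Lambda\,,\qquad
\Lambda \frac{\partial}{\partial \Lambda} \ln \frac{Z_\Lambda K_\Lambda (p)^2}{k_\Lambda (p)}
= \Lambda \frac{\partial}{\partial \Lambda} \ln \frac{K_\Lambda (p)^2}{k_\Lambda (p)} - 2 \gamma_\Lambda\,,
\end{equation*}
and the $-2\gamma_\Lambda$, combined with the overall factor $\frac{1}{2}$ in the non-linear term, accounts for the single factor of $-\gamma_\Lambda$ in front of the bracket in the last line of (\ref{ERGdiffwithgamma}).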
The last term proportional to $\gamma_\Lambda$ is worthy of a comment.
It is an equation-of-motion composite operator that counts the number
of $\phi$'s:
\[
\mathcal{N}_\Lambda [\phi] \equiv - \int_p K_\Lambda (p) e^{-S_\Lambda}
\frac{\delta}{\delta \phi (p)} \left[ \Phi (p) e^{S_\Lambda} \right]\,,
\]
where $\Phi (p)$ is defined by (\ref{Phiphi}). Using (\ref{Phiphi}),
we obtain
\begin{eqnarray}
\mathcal{N}_\Lambda [\phi] &=& - \int_p e^{-S_\Lambda} \frac{\delta}{\delta \phi (p)}
\left[ \left(
\phi (p) + \frac{k_\Lambda (p)}{p^2} \frac{\delta S_\Lambda}{\delta
\phi (-p)} \right) e^{S_\Lambda} \right]\nonumber\\
&=& - \int_p \left[ \phi (p) \frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
+ \frac{k_\Lambda (p)}{p^2} \left\lbrace\frac{\delta S_\Lambda [\phi]}{\delta \phi (p)}
\frac{\delta S_\Lambda [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_\Lambda
[\phi]}{\delta \phi (p) \delta \phi (-p)} \right\rbrace\right] \label{Number}
\end{eqnarray}
up to an additive field independent constant. This has the modified
correlation functions
\begin{eqnarray}
&&\vvev{\mathcal{N}_\Lambda \, \phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda,
k_\Lambda}\nonumber\\
&\equiv& \prod_{i=1}^n \frac{1}{K_\Lambda (p_i)} \cdot
\vev{\mathcal{N}_\Lambda\, \exp \left(- \frac{1}{2} \int_p \frac{k_\Lambda
(p)}{p^2} \frac{\delta^2}{\delta \phi (p) \delta \phi (-p)}
\right) \left\lbrace\phi (p_1) \cdots \phi (p_n)\right\rbrace}_{S_\Lambda}\nonumber\\
&=& n\, \vvev{\phi (p_1) \cdots \phi (p_n)}_{S_\Lambda}^{K_\Lambda,
k_\Lambda}\,.
\end{eqnarray}
$\mathcal{N}_\Lambda$ is a particular example of equation-of-motion composite
operators, also called redundant operators, marginal operators, or
exactly marginal redundant operators in the literature. (See
\cite{wegner1974} for the original discussion. In the context of ERG,
see, for example, \cite{sonoda2007}, \cite{osborn2008}, Appendix A of
\cite{osborn2011}, and the reviews \cite{PTPreview2010,rosten2012}.)
Using the prescription given in Sect.~\ref{subsection-Legendre}, we
can construct a 1PI action. The result depends on whether we take
$K_\Lambda$ or $K^Z_\Lambda = \sqrt{Z_\Lambda} K_\Lambda$ as one of
the cutoff functions together with the fixed $k_\Lambda$. Let us
first consider the combination $(S_\Lambda, K_\Lambda, k_\Lambda)$ that
corresponds to $Z_\Lambda$-dependent modified correlation functions. We
then obtain
\begin{equation}
W_\Lambda [J] \equiv \tilde{S}_\Lambda \left[ \frac{K_\Lambda (p)}{R_\Lambda (p)} J
(p) \right]
\end{equation}
and the 1PI action
\begin{equation}
- \frac{1}{2} \int_p R_\Lambda (p) \Phi (p) \Phi (-p) + \Gamma_\Lambda [\Phi] =
W_\Lambda [J] - \int_p J(-p) \Phi (p)\,.\label{Gamma1}
\end{equation}
We repeat the above for the combination $(S_\Lambda, K^Z_\Lambda,
k_\Lambda)$ that corresponds to $\Lambda$-independent modified
correlation functions. Since
\begin{equation}
R^Z_\Lambda (p) \equiv \frac{p^2}{k_\Lambda (p)} K^Z_\Lambda (p)^2
= Z_\Lambda R_\Lambda (p)\,,
\end{equation}
we obtain
\begin{equation}
W^Z_\Lambda [J] \equiv \tilde{S}_\Lambda \left[ \frac{K^Z_\Lambda (p)}{R^Z_\Lambda
(p)} J(p) \right] = \tilde{S}_\Lambda \left[ \frac{K_\Lambda
(p)}{R_\Lambda (p)} \frac{J (p)}{\sqrt{Z_\Lambda}} \right] = W_\Lambda
\left[ \frac{J}{\sqrt{Z_\Lambda}} \right]
\end{equation}
and
\begin{eqnarray}
- \frac{1}{2} \int_p Z_\Lambda R_\Lambda (p) \Phi (p) \Phi (-p) +
\Gamma_\Lambda^Z [\Phi] &=& W^Z_\Lambda [J] - \int_p J(-p) \Phi (p)\nonumber\\
&=& W_\Lambda \left[\frac{J}{\sqrt{Z_\Lambda}}\right]
- \int_p \frac{J(-p)}{\sqrt{Z_\Lambda}} \sqrt{Z_\Lambda} \Phi (p)\,.
\end{eqnarray}
Comparing this with (\ref{Gamma1}), we obtain
\begin{equation}
\Gamma_\Lambda^Z [\Phi] = \Gamma_\Lambda \left[ \sqrt{Z_\Lambda} \,\Phi \right]\,.
\end{equation}
Please note that the same 1PI action is obtained from any combination
$(S_\Lambda', K'_\Lambda, k'_\Lambda)$ equivalent to $(S_\Lambda, K^Z_\Lambda,
k_\Lambda)$. For example, with the choice $K'_\Lambda = K_\Lambda,
k'_\Lambda = k_\Lambda/Z_\Lambda$, the Wilson action
\begin{equation}
S_\Lambda' [\phi] = S_\Lambda \left[ \sqrt{Z_\Lambda}\, \phi \right]
\end{equation}
gives the same $\Gamma_\Lambda^Z [\Phi]$.
Let us find the $\Lambda$-dependence of the 1PI actions. The
prescription of Sect.~\ref{subsection-Legendre} applies directly to
$\Gamma_\Lambda^Z [\Phi]$ that corresponds to $\Lambda$-independent modified
correlation functions. We obtain
\begin{equation}
- \Lambda \frac{\partial \Gamma_\Lambda^Z [\Phi]}{\partial \Lambda}
= \frac{1}{2} \int_p \Lambda \frac{\partial \left(Z_\Lambda R_\Lambda
(p)\right)}{\partial \Lambda} \,G^Z_{\Lambda; p,-p} [\Phi]\,,
\end{equation}
where
\begin{equation}
\int_q G^Z_{\Lambda; p, -q} [\Phi] \left( Z_\Lambda R_\Lambda (q)
\delta (q-r) - \frac{\delta^2 \Gamma_\Lambda^Z [\Phi]}{\delta \Phi (q) \delta
\Phi (-r)} \right) = \delta (p-r)\,.
\end{equation}
This result for $\Gamma_\Lambda^Z$ is given in \cite{ellwanger1995} for gauge
theories and in \cite{berges2002} for generic scalar theories.
Consequently, the ERG differential equation for
\begin{equation}
\Gamma_\Lambda [\Phi] = \Gamma_\Lambda^Z \left[ \frac{\Phi}{\sqrt{Z_\Lambda}} \right]
\end{equation}
is obtained as
\begin{equation}
- \Lambda \frac{\partial \Gamma_\Lambda [\Phi]}{\partial \Lambda} =
- \gamma_\Lambda \int_p \Phi (p) \frac{\delta \Gamma_\Lambda
[\Phi]}{\delta \Phi (p)} + \frac{1}{2} \int_p \left( \frac{\partial R_\Lambda
(p)}{\partial \Lambda} - 2 \gamma_\Lambda R_\Lambda (p) \right)
G_{\Lambda; p,-p} [\Phi]\,,\label{ERGdiff-1PI}
\end{equation}
where
\begin{equation}
G_{\Lambda; p,-q} [\Phi] \equiv Z_\Lambda G^Z_{\Lambda; p,-q} \left[
\frac{\Phi}{\sqrt{Z_\Lambda}} \right]
\end{equation}
satisfies
\begin{equation}
\int_q G_{\Lambda; p,-q} [\Phi] \left( R_\Lambda (q) \delta (q-r) -
\frac{\delta^2 \Gamma_\Lambda [\Phi]}{\delta \Phi (q) \delta \Phi (-r)}\right) =
\delta (p-r)\,.\label{GLambda}
\end{equation}
We regard (\ref{ERGdiff-1PI}) for $\Gamma_\Lambda$, first obtained by T.~Morris
in \cite{morris1994,morris1994a}, as the general form of the ERG differential
equation for the 1PI actions.
In (\ref{ERGdiff-1PI}), the term proportional to $\gamma_\Lambda$ is
given by the equation-of-motion operator
\begin{equation}
\mathcal{N}_\Lambda^{\mathrm{1PI}} [\Phi] \equiv
- \int_p \left( \Phi (p) \frac{\delta \Gamma_\Lambda [\Phi]}{\delta \Phi (p)} +
R_\Lambda (p) G_{\Lambda; p,-p} [\Phi] \right)\,.
\end{equation}
This equals $\mathcal{N}_\Lambda [\phi]$ given by (\ref{Number}), written in
terms of $\Phi$ instead of $\phi$.
\section{Fixed point\label{section-fp}}
So far we have considered an arbitrary anomalous dimension
$\gamma_\Lambda$ and its integral $Z_\Lambda$. The introduction of
$\gamma_\Lambda$ becomes essential when we look for a fixed point of the ERG
differential equation, either for the Wilson action or for the 1PI action.
For the differential equation to have a fixed point, we need to adopt
the dimensionless notation by measuring dimensionful quantities in
units of appropriate powers of the momentum cutoff $\Lambda$.
Rewriting the ERG differential equations (\ref{ERGdiffwithgamma})
and (\ref{ERGdiff-1PI},\ref{GLambda}) in dimensionless form is
straightforward, and we only give the results here. We introduce a
logarithmic scale parameter
$t$ by
\begin{equation}
\Lambda = \mu e^{-t}\,,
\end{equation}
where $\mu$ is an arbitrary momentum scale. A different choice of
$\mu$ amounts to a constant shift of $t$. Denoting the cutoff
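For reference, any quantity that depends on the scale only through $\Lambda$ satisfies the chain rule

```latex
% relation between t- and Lambda-derivatives implied by \Lambda = \mu e^{-t}
\begin{equation}
\frac{\partial \Lambda}{\partial t} = - \Lambda
\qquad \Longrightarrow \qquad
\partial_t = - \Lambda \frac{\partial}{\partial \Lambda}\,.
\end{equation}
```

This is how the operator $- \Lambda \,\partial/\partial \Lambda$ of the dimensionful equations turns into $\partial_t$; the remaining terms in the dimensionless equations originate from measuring momenta and fields in units of $\Lambda$.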
functions by
\begin{equation}
K_\Lambda (p) = K_t (p/\Lambda)\,,\quad
k_\Lambda (p) = k_t (p/\Lambda)\,,
\end{equation}
we can rewrite (\ref{ERGdiffwithgamma}) as
\begin{eqnarray}
\partial_t S_t [\phi] &=& \int_p \left[ \left\lbrace \left( - \partial_t - p_\mu
\frac{\partial}{\partial p_\mu} \right) \ln K_t (p) +
\frac{D+2}{2} \right\rbrace \phi (p) + p_\mu \frac{\partial \phi (p)}{\partial
p_\mu} \right]
\frac{\delta S_t [\phi]}{\delta \phi (p)}\nonumber\\
&& + \int_p \left(\partial_t + p_\mu \frac{\partial}{\partial p_\mu}
\right) \ln \frac{k_t (p)}{K_t (p)^2} \cdot \frac{k_t (p)}{p^2}
\frac{1}{2} \left\lbrace \frac{\delta S_t [\phi]}{\delta \phi (p)} \frac{\delta
S_t [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_t [\phi]}{\delta
\phi (p) \delta \phi (-p)} \right\rbrace\nonumber\\
&& - \gamma_t \int_p \left[ \phi (p) \frac{\delta S_t [\phi]}{\delta \phi
(p)} + \frac{k_t (p)}{p^2} \left\lbrace \frac{\delta S_t [\phi]}{\delta \phi
(p)} \frac{\delta S_t [\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_t
[\phi]}{\delta \phi (p) \delta \phi (-p)} \right\rbrace\right]\,,
\end{eqnarray}
where we have denoted $\gamma_\Lambda$ as $\gamma_t$. Similarly, we
can rewrite (\ref{ERGdiff-1PI}) as
\begin{eqnarray}
\partial_t \Gamma_t [\Phi] &=& \int_p \left( \frac{D+2}{2} + p_\mu
\frac{\partial}{\partial p_\mu} \right) \Phi (p) \cdot
\frac{\delta \Gamma_t [\Phi]}{\delta \Phi (p)} \nonumber\\
&& + \int_p \left( 2 - \left(\partial_t + p_\mu
\frac{\partial}{\partial p_\mu} \right) \ln R_t (p) \right)
\cdot R_t (p) \frac{1}{2} G_{t; p,-p} [\Phi]\nonumber\\
&& - \gamma_t \int_p \left( \Phi (p) \frac{\delta \Gamma_t
[\Phi]}{\delta \Phi (p)} + R_t (p) G_{t; p,-p} [\Phi] \right)\,,
\end{eqnarray}
where
\begin{equation}
R_t (p) \equiv \frac{p^2}{k_t (p)} K_t (p)^2\,,
\end{equation}
and (\ref{GLambda}) as
\begin{equation}
\int_q G_{t;p,-q} [\Phi] \left\lbrace R_t (q) \delta (q-r) - \frac{\delta^2
\Gamma_t [\Phi]}{\delta \Phi (q) \delta \Phi (-r)} \right\rbrace = \delta
(p-r)\,.
\end{equation}
To obtain a fixed point, we must choose $t$-independent cutoff
functions:
\begin{equation}
\left\lbrace\begin{array}{c@{~=~}l}
K_t (p) & K(p)\,,\\
k_t (p) & k (p)\,.
\end{array}\right.
\end{equation}
Then, the above ERG differential equations become simpler:
\begin{eqnarray}
\partial_t S_t [\phi] &=& \int_p \left\lbrace - p_\mu \frac{\partial}{\partial
p_\mu} \ln K(p) + \frac{D+2}{2} - \gamma_t + p_\mu \frac{\partial}{\partial
p_\mu} \right\rbrace \phi (p) \cdot \frac{\delta S_t [\phi]}{\delta
\phi (p)}\nonumber\\
&& \hspace{-1cm} + \int_p \left\lbrace - p_\mu \frac{\partial}{\partial p_\mu}
\ln R(p) + 2 - 2 \gamma_t \right\rbrace \frac{k(p)}{p^2} \frac{1}{2}
\left\lbrace \frac{\delta S_t[\phi]}{\delta \phi (p)} \frac{\delta S_t
[\phi]}{\delta \phi (-p)} + \frac{\delta^2 S_t[\phi]}{\delta \phi
(p) \delta \phi (-p)} \right\rbrace\,,
\end{eqnarray}
and
\begin{eqnarray}
\partial_t \Gamma_t [\Phi] &=& \int_p \left( \frac{D+2}{2} - \gamma_t + p_\mu
\frac{\partial}{\partial p_\mu} \right) \Phi (p) \cdot
\frac{\delta \Gamma_t [\Phi]}{\delta \Phi (p)} \nonumber\\
&& + \int_p \left( - p_\mu
\frac{\partial}{\partial p_\mu} \ln R (p) + 2 - 2 \gamma_t\right)
\cdot R (p) \frac{1}{2} G_{t; p,-p} [\Phi]\,,
\end{eqnarray}
where
\begin{equation}
R (p) \equiv \frac{p^2}{k (p)} K (p)^2\,,
\end{equation}
and $G_{t; p,-q} [\Phi]$ is defined by
\begin{equation}
\int_q G_{t;p,-q} [\Phi] \left\lbrace R (q) \delta (q-r) - \frac{\delta^2
\Gamma_t [\Phi]}{\delta \Phi (q) \delta \Phi (-r)} \right\rbrace = \delta
(p-r)\,.
\end{equation}
The anomalous dimension $\gamma_t$ can be chosen so as to fix a
particular term (the kinetic term, for example) in $S_t$.
Alternatively, it can be chosen as the fixed-point value $\gamma^*$ in
a neighborhood of the fixed point.
The fixed point action $S^*$ satisfies
\begin{eqnarray}
0 &=& \int_p \left\lbrace - p_\mu \frac{\partial}{\partial
p_\mu} \ln K(p) + \frac{D+2}{2} - \gamma^* + p_\mu \frac{\partial}{\partial
p_\mu} \right\rbrace \phi (p) \cdot \frac{\delta S^* [\phi]}{\delta
\phi (p)}\nonumber\\
&& \hspace{-1cm} + \int_p \left\lbrace - p_\mu \frac{\partial}{\partial p_\mu}
\ln R (p) + 2 - 2 \gamma^* \right\rbrace \frac{k(p)}{p^2} \frac{1}{2}
\left\lbrace \frac{\delta S^*[\phi]}{\delta \phi (p)} \frac{\delta S^*
[\phi]}{\delta \phi (-p)} + \frac{\delta^2 S^*[\phi]}{\delta \phi
(p) \delta \phi (-p)} \right\rbrace\,,\label{fp-S}
\end{eqnarray}
and the corresponding 1PI action $\Gamma^*$ satisfies
\begin{eqnarray}
0 &=& \int_p \left( \frac{D+2}{2} - \gamma^* + p_\mu
\frac{\partial}{\partial p_\mu} \right) \Phi (p) \cdot
\frac{\delta \Gamma^* [\Phi]}{\delta \Phi (p)} \nonumber\\
&& + \int_p \left( - p_\mu
\frac{\partial}{\partial p_\mu} \ln R (p) + 2 - 2 \gamma^* \right)
\cdot R (p) \frac{1}{2} G^*_{p,-p} [\Phi]\,,\label{fp-Gamma}
\end{eqnarray}
where
\begin{equation}
\int_q G^*_{p,-q} [\Phi] \left\lbrace R (q) \delta (q-r) - \frac{\delta^2
\Gamma^* [\Phi]}{\delta \Phi (q) \delta \Phi (-r)} \right\rbrace = \delta
(p-r)\,.
\end{equation}
We can solve (\ref{fp-S}) and (\ref{fp-Gamma}) only for particular
choices of $\gamma^*$.
\section{Summary and conclusions\label{conclusion}}
In this paper we have made our best effort to elucidate the structure
of the exact renormalization group both for the Wilson actions and for
the 1PI actions. In particular, we have tried to demonstrate the
simplicity of introducing an anomalous dimension into the ERG
differential equations.
We started by introducing classes of equivalent Wilson actions.
A Wilson action $S_\Lambda$ with a momentum cutoff $\Lambda$ is paired with
two cutoff functions of momentum: $K_\Lambda (p)$ and $k_\Lambda (p)$.
We then construct modified correlation functions (\ref{modified})
using $K_\Lambda$ and $k_\Lambda$. A class of equivalent Wilson
actions consists of those combinations of $(S_\Lambda, K_\Lambda,
k_\Lambda)$ giving the same modified correlation functions. The
equivalence of $(S_\Lambda, K_\Lambda, k_\Lambda)$ and $(S_\Lambda', K'_\Lambda,
k'_\Lambda)$ demands that $R_\Lambda (p) \equiv \frac{p^2}{k_\Lambda
(p)} K_\Lambda (p)^2$ is the same as $R'_\Lambda (p) \equiv
\frac{p^2}{k'_\Lambda (p)} K'_\Lambda (p)^2$, and that $S_\Lambda$ and
$S_\Lambda'$ are related by (\ref{Sprime}).
The crux of the paper is the observation that all the equivalent
Wilson actions correspond to the same 1PI action $\Gamma_\Lambda [\Phi]$ via the
Legendre transformation (\ref{Legendre}) (equivalently
(\ref{Legendre-SGamma}) or (\ref{Gamma1})). This correspondence is
many-to-one, since the Wilson action depends on two cutoff functions
$K_\Lambda, k_\Lambda$; the 1PI action depends only on $R_\Lambda$.
We have introduced the anomalous dimension $\gamma_\Lambda$ of the
elementary field by demanding the $\Lambda$-dependence of the modified
correlation functions as given by (\ref{ZLdependence}). From this we
have derived the general form (\ref{ERGdiffwithgamma}) of the ERG
differential equation for the Wilson action. We regard
(\ref{ERGdiffwithgamma}) as the counterpart of the general form
(\ref{ERGdiff-1PI}) for the 1PI action, introduced previously by
Morris \cite{morris1994,morris1994a}.
As long as two Wilson actions share the same $R_\Lambda$, we can
transform one Wilson action to another equivalent one just by changing
$K_\Lambda$. For example, the Wilson action introduced by Bervillier
in \cite{bervillier2004} has an arbitrary cutoff function $K_\Lambda
(p)$ which he took to be the same as $R_\Lambda (p)$. By choosing
this $R_\Lambda (p)$ appropriately (its explicit form is given in
Appendix \ref{appendix-Rosten}), we can convert Bervillier's ERG
differential equation into the one by Ball et al. given in
\cite{ball1994}.
Though we have discussed a generic scalar theory in this paper,
nothing prevents us from introducing anomalous dimensions to the
fermionic and gauge fields by extending our results.
\section*{Acknowledgment}
The work of Y.~I. and K.~I. was partially supported by the JSPS
grant-in-aid \#R2209 and \#22540270. The work of H.~S. was partially
supported by the JSPS grant-in-aid \#25400258.
\section{Introduction}
\label{sec:intro}
Continual learning is a field of machine learning where the data distribution is not static.
It is a natural framework for many practical problems where the data arrives progressively, and the model learns continuously.
For example, in robotics, robots constantly need to adapt to their environments in order to interact and act; similarly, recommendation systems need to adapt continuously to newly available content and to new user needs.
However, in recent years, the field of continual learning has focused mainly on one type of classification scenario: class-incremental.
This scenario evaluates how models can learn a class once and remember it when new class data arrives.
While it is important to solve this problem, using only one type of scenario can lead to over-specialized solutions that cannot generalize to different settings.
In this paper, we propose to review the literature dealing with settings other than the default one (class-incremental) and, more generally, going beyond fully supervised scenarios.
The goal is to shed light on efforts made to diversify the evaluation of continual learning.
We introduce the continual learning framework and the goals of continual learning (Sec~\ref{sec:framework}).
Then, we describe the default scenario and its characteristics (Sec~\ref{sec:default}).
In addition, we introduce scenarios that go beyond the default scenario in supervised learning (Sec~\ref{sec:generalization_default}), unsupervised learning (Sec~\ref{sec:unsupervised}), and reinforcement learning (Sec~\ref{sec:reinforcement}).
\\
\mybox{gray}{
\textbf{Disclaimer: } This article compares the differences between supervised continual learning (CL) and other settings.
Each of these settings can have appropriate use cases and application fields.
Therefore, the goal is not to push for a different kind of CL that is supposedly more \say{natural} or \say{realistic}, but to point out that other feasible settings for CL exist, with partially overlapping challenges and solutions.
Thus, we review existing literature, list commonly made assumptions, and point out remaining challenges specific to non-supervised continual learning.
Moreover, benchmarking diversity is of high value if different benchmarks are built with the intent to evaluate one particular criterion (of which there are several).
Benchmarks or scenarios that are not built for such purposes may contribute less to progress in CL.
}
\section{Framework and goals of continual learning}
\label{sec:framework}
Continual learning (CL) is a machine learning sub-field that studies learning under time-varying data distributions.
This relaxes one of the fundamental assumptions of statistical learning theory \cite{vapnik1999nature}, which states that the data follows a stationary distribution.
One advantage of this assumption is its simplicity, whereas CL scenarios are very diverse, depending on the nature of the non-stationarity.
In CL, it is therefore crucial to clearly define the scenario, the goals of learning, the evaluation measures, and the loss functions.
The following section describes typical non-stationarities of the data distribution that have been considered in the literature (see also \cite{gepperth2016incremental,lesort2021understanding}).
\subsection{Data distribution drifts}
\label{sub:drifts}
CL under data distribution shifts needs memorization mechanisms adapted to the type of non-stationarity; since a distribution can be non-stationary in infinitely many ways, algorithms must make assumptions about the shifts they face.
For simplicity, we list several typical definitions used for supervised learning (e.g., classification) that can be generalized to other settings.
A simple way to categorize non-stationarities is based on class information.
We may distinguish two types of shifts \cite{gepperth2016incremental} in this case: \textit{concept shift}, where the annotation of existing data changes \cite{caccia2020online} and \textit{virtual shift}, where we get new data, but the annotation does not change.
Usually, the term \textit{shift} is employed for sudden changes in data distributions, whereas the term \textit{drift} is used for gradual changes over time.
In supervised CL, virtual shifts are the most common non-stationarities that have been studied.
We can distinguish the special cases of \textit{virtual concept shift}, implying new data and new labels, and \textit{domain shift}, where new data of known labels are observed.
Those two settings are also known as respectively \textit{class-incremental} and \textit{instance-incremental} \cite{Lesort2019Continual,van2019three}.
The objectives to be optimized may change over time as well \cite{lesort2021understanding}, as in continual reinforcement learning \cite{Normandin2021,Khetarpal2020,Bagus2022}.
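As a concrete illustration, the two virtual-shift settings can be sketched by splitting a labeled dataset either by class or by instance; the helper functions below are our own illustrative sketch, not part of any cited benchmark library.

```python
import numpy as np

def class_incremental_stream(x, y, classes_per_task):
    """Split (x, y) into sub-tasks with disjoint class sets (virtual concept shift)."""
    classes = np.unique(y)
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        mask = np.isin(y, task_classes)
        tasks.append((x[mask], y[mask]))
    return tasks

def instance_incremental_stream(x, y, n_tasks, seed=0):
    """Split (x, y) into sub-tasks sharing all classes (domain shift over instances)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    chunks = np.array_split(order, n_tasks)
    return [(x[idx], y[idx]) for idx in chunks]
```

Class-incremental streams produced this way satisfy the disjointness assumption of the default scenario, whereas instance-incremental streams keep all classes in every sub-task.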
\subsection{Common CL constraints}
\label{sub:constraints}
If CL were not subject to constraints, there would be a simple solution to any scenario.
It involves storing all incoming samples and re-training every time a decision is expected \cite{prabhu2020gdumb}.
This entails a time and memory complexity that is at least linear in the number of processed samples.
However, most CL proposals assume that memory is limited in some way, preventing this (obvious) solution.
Many approaches similarly assume that the storage of samples is restricted.
Other resource constraints under study include \textit{computational cost}, \textit{memory}, \textit{data privacy}, \textit{fast adaptation}, \textit{inference speed}, and \textit{transfer}.
Other constraints, more related to the reliability of approaches, are \textit{stability} and \textit{explainability}.
A discussion of CL constraints can be found in, e.g., \cite{pfuelb2019}, whereas evaluation measures that take these constraints into account are given in \cite{Lesort2019Continual}.
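The naive store-everything solution mentioned above can be sketched as follows; \texttt{MajorityModel} is a hypothetical stand-in for an arbitrary batch learner, and the unbounded buffer illustrates precisely the linear memory growth that CL constraints rule out.

```python
class StoreAndRetrain:
    """Naive CL baseline: keep every sample, refit from scratch before each decision."""
    def __init__(self, make_model):
        self.make_model = make_model  # factory producing a fresh batch learner
        self.buffer_x, self.buffer_y = [], []  # unbounded memory: O(n) samples

    def observe(self, x, y):
        self.buffer_x.append(x)
        self.buffer_y.append(y)

    def predict(self, x):
        model = self.make_model()
        model.fit(self.buffer_x, self.buffer_y)  # O(n) re-training per decision
        return model.predict(x)

class MajorityModel:
    """Toy stand-in batch learner: predicts the most frequent label seen."""
    def fit(self, xs, ys):
        self.label = max(set(ys), key=ys.count)
    def predict(self, x):
        return self.label
```

Because both memory and compute grow with the stream length, most CL proposals forbid exactly this strategy.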
\section{The default scenario for CL}\label{sec:default}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{cl_schema.pdf}
\caption{
The default scenario for CL.
The data stream is assumed to be partitioned into \textit{sub-tasks} defined by data and labels (targets).
Data statistics within a sub-task are assumed to be stationary.
In addition, sub-task data and labels are assumed to be disjoint, i.e., from different classes.
The sub-task onsets are generally assumed to be known as well.
}
\label{fig:cl:default}
\end{figure}
A particular supervised setting, which we will refer to as the \textit{default scenario}, is currently dominating CL literature.
It is based on a classification problem divided into a small number of \textit{sub-tasks}.
Virtual concept shift occurs abruptly at sub-task boundaries by the apparition of new samples belonging to previously unseen classes, see Fig~\ref{fig:cl:default}.
Usually, the sub-task onsets/boundaries are known.
A consequence of this disjointness of sub-tasks is that no pure concept shift is involved: the annotation for a given data point will never change or be subject to conflict as learning progresses.
In the default scenario, the goal of CL is usually to learn the complete statistics of all sub-tasks as if they had been processed all at once, rather than one after the other.
Sometimes, samples are processed one by one, or all samples in a given sub-task are simultaneously available.
In some works, the sub-task index is known at test time, which is used for selecting the correct head of a multi-headed DNN for inference.
The assumptions made in the default scenario are justified in many use cases.
However, it is obvious that other scenarios, e.g., in robotics, may be found where they impose too severe restrictions.
Moreover, many characteristics of the default scenario, such as \say{drifts are abrupt} or \say{tasks are not revisited}, lend themselves to \textit{benchmark overfitting}.
As an example, consider sub-task $T_2$ in Fig~\ref{fig:cl:default}: here, a DNN could punish incorrect decisions for class \say{$1$} more strongly than incorrect decisions for classes from the current sub-task since the default scenario assumes that sub-tasks are disjoint.
Even if researchers do not consciously exploit these assumptions, the employed CL algorithms may still rely on them indirectly.
It is thus fundamental to perform experiments in scenarios where these assumptions do not hold.
Thus, creating diverse benchmarks, as well as approaches that do not critically rely on the assumptions from the default scenario, should be an ongoing effort.
This effort should be pushed notably by existing continual learning libraries such as Continuum \cite{douillard2021continuum}, Avalanche \cite{continualai2021avalanche} or Sequoia \cite{Normandin2021}.
\subsection{CL approaches for the default scenario}
\label{sub:approaches}
This section is not meant to be exhaustive, since the default scenario is not the focus of this article.
Please refer to recent reviews \cite{deLange2019continual,BELOUADAH202138} for more details on CL methods for the default scenario.
Broad strategies for performing CL in the default scenario are regularization \cite{kirkpatrick2017overcoming,lopez2017gradient}, replay \cite{rebuffi2017icarl, Rolnick2019,shin2017continual} and dynamic architectures \cite{Rusu16progressive,fernando2017pathnet,veniat2021efficient,ostapenko2021continual,mendez2022modular}.
Regularization penalizes changes to model parameters that are deemed important for past sub-tasks.
This is usually achieved by adding penalty terms to the loss function, and it is implicitly assumed that new sub-tasks add only new data and classes.
Dynamic architecture methods extend models over time in order to separate previously learned parameters from currently optimized ones, thus reducing cross-talk and catastrophic forgetting (but equally assuming that new sub-tasks contain only new data).
Replay methods store received data for subsequent use in re-training (rehearsal).
Instead of relying on stored data, re-training can also be performed using samples produced by generative models (generative replay).
Replay is known to be an effective method for preventing catastrophic forgetting (CF), especially in class-incremental settings \cite{lesort2021continual,deLange2019continual}, but also for continual reinforcement learning \cite{Kalifou2019,Traore2019,Rolnick2019} or unsupervised learning \cite{lesort2018generative,rao2019continual,madaan2021representational}.
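As an illustrative sketch of rehearsal (one possible design, not the specific mechanism of the cited works), a bounded replay memory can be maintained with reservoir sampling, so that every stream sample has the same probability of being retained:

```python
import random

class ReservoirBuffer:
    """Fixed-capacity replay memory with approximately uniform retention."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.data = []
        self.n_seen = 0

    def add(self, sample):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = self.rng.randrange(self.n_seen)  # reservoir sampling step
            if j < self.capacity:
                self.data[j] = sample

    def replay(self, k):
        """Draw k stored samples to interleave with the current sub-task's batch."""
        return self.rng.sample(self.data, min(k, len(self.data)))
```

Stored samples drawn with \texttt{replay} are typically interleaved with the current sub-task's training batches to mitigate forgetting.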
\subsection{Metrics and evaluation procedures}
\label{sub:metrics}
In the default scenario, various measures related to the classification error are common, which have been discussed in, e.g., \cite{Kemker2017Measuring,diaz2018don,maltoni2019continuous}.
A common baseline is termed \textit{cumulative performance}, obtained by evaluating models on the merged test sets from all sub-tasks, which corresponds to learning with stationary statistics.
This baseline is often considered an upper bound for CL performance.
In addition, \cite{lopez2017gradient} proposed the notions of forward and backward transfer: forward transfer (FT) measures how training on sub-task $i$ impacts performance on a future sub-task $j>i$.
For backward transfer (BT), the impact on previous sub-tasks $j<i$ is considered.
The common case in CL is negative BT indicating forgetting, but positive BT is theoretically possible as well.
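One common formalization of these measures, following the accuracy-matrix convention of \cite{lopez2017gradient}, can be sketched as follows; $R[i][j]$ denotes the test accuracy on sub-task $j$ after training up to sub-task $i$, and $b[j]$ the accuracy of a randomly initialized model on sub-task $j$.

```python
def transfer_metrics(R, b):
    """Average accuracy, backward and forward transfer from an accuracy matrix.

    R[i][j]: test accuracy on sub-task j after training on sub-tasks 0..i.
    b[j]:    accuracy of a randomly initialized model on sub-task j.
    """
    T = len(R)
    acc = sum(R[T - 1][j] for j in range(T)) / T
    bwt = sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)  # < 0: forgetting
    fwt = sum(R[j - 1][j] - b[j] for j in range(1, T)) / (T - 1)
    return acc, bwt, fwt
```

A negative backward-transfer value quantifies forgetting, while the forward-transfer term measures how much previous sub-tasks help before a sub-task is trained on.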
Many authors, e.g., \cite{kirkpatrick2017overcoming}, assume that (although sub-tasks are presented sequentially) all sub-task data are available for model selection and hyper-parameter tuning.
For example, tuning EWC's regularization strengths $\lambda_i$ for each sub-task is often done in hindsight.
Some authors \cite{serra2018overcoming,Doan2021Theoretical,farajtabar2019orthogonal}, especially in works using multi-head DNNs, assume that the sub-task ID is known during testing, although this does not seem to be the current consensus.
In the limit where each sub-task contains only a single class, providing the sub-task ID at test time means providing the class label.
Even if sub-tasks are more diverse, the sub-task ID contains significant information and may thus confer unfair advantages.
The question of evaluation protocols in CL is discussed in \cite{pfuelb2019,pfuelb22diss}.
\subsection{Benchmarks}
\label{sub:benchmark}
Benchmarks for the default CL scenario are mostly derived from datasets such as MNIST, CIFAR10/100, Imagenet, SVHN etc. to create \textit{class-incremental} or \textit{domain-incremental} scenarios.
The \textit{permuted MNIST} benchmark, where successive sub-tasks are created by permuting all pixels according to a sub-task specific permutation scheme, was initially popular \cite{kirkpatrick2017overcoming,lopezpaz2017gem,Wortsman2020Supermasks} but is less so now because it can, to good accuracy, be solved even without dedicated CL schemes \cite{pfuelb2019}.
Some authors used Atari games \cite{mnih2013atari}, Mujoco \cite{Todorov2012mojoco} or Meta-World \cite{Yu2019} as benchmarks.
CL specific variants of standard benchmarks such as, e.g., \textit{colored MNIST} \cite{kim2019learning} are widely used as well since they can be used to investigate specific aspects of CL, see \cite{gat2020removing,li2019repair}.
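The permuted-benchmark construction can be sketched generically: each sub-task applies one fixed pixel permutation to every flattened image. The function below is an illustrative sketch; random arrays stand in for actual MNIST images.

```python
import numpy as np

def make_permuted_tasks(x, n_tasks, seed=0):
    """x: (n_samples, n_pixels) flattened images.
    Returns one permuted copy of x per sub-task; task 0 keeps the identity."""
    rng = np.random.default_rng(seed)
    tasks = []
    for t in range(n_tasks):
        perm = np.arange(x.shape[1]) if t == 0 else rng.permutation(x.shape[1])
        tasks.append(x[:, perm])  # same images, sub-task-specific pixel order
    return tasks
```

Each sub-task thus contains exactly the same information up to a relabeling of input dimensions, which is one reason the benchmark turned out to be comparatively easy.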
\section{Generalizations of the default scenario}
\label{sec:generalization_default}
The default scenario is convenient for evaluation and represents a rather controlled setting for CL.
In less controlled settings, fully annotated data may not be available, or supplementary constraints may be imposed.
We find it convenient to introduce a new taxonomy of CL approaches based on their level of autonomy.
\subsection{Classifying the autonomy level of CL algorithms}
\label{sub:autonomy}
The various applications of continual learning can be classified into autonomy levels, as is done for autonomous vehicles \cite{kiran2021deep}.
Obviously, CL should get harder as less and less human supervision is supplied.
We identify two dimensions of autonomy, both of which will be discussed in-depth to characterize generalized supervised CL approaches better.
\par\smallskip\noindent\textbf{Objective Autonomy }
denotes the autonomy regarding the objective to achieve (labels, targets, rewards), which we group into 4 levels:
\begin{itemize}
\item \textbf{Level 0:} Full data annotation: supervised.
\item \textbf{Level 1:} Sparse labeling: RL, active learning, sparse training.
\item \textbf{Level 2:} No annotation for training, query for fast adaptation.
\item \textbf{Level 3:} No annotation for training, zero-shot adaptation.
\end{itemize}
For objective autonomy levels 2 and 3, continual training can be seen as pretraining for the unknown future objective task.
Note that if the scenario's objective is unsupervised, it can be treated as equivalent to full data annotation.
\par\smallskip\noindent\textbf{Continual Learning Autonomy }
is concerned with autonomy regarding the distribution shifts (task label, task boundaries):
\begin{itemize}
\item \textbf{Level 0:} Full task annotation at train and test.
\item \textbf{Level 1:} Full task annotation at train, no test task labels.
\item \textbf{Level 2:} Sparse task annotation at train, no test task labels: e.g., task boundaries given, but no task labels at train time.
\item \textbf{Level 3:} No task labels at all: task-agnostic.
\end{itemize}
This classification still holds for smooth transitions (concept drift).
Note that for class-incremental problems, the task-agnostic setting does not make sense, since the task information is contained in the class labels.
We can now characterize CL contributions by a pair \say{($i$~$j$)}, where $i$ represents the objective autonomy level, and $j$ stands for the CL autonomy level.
Hence, the pair (0~1) describes a class-incremental scenario, i.e., a classification setting with fully annotated data and task labels for training but not for test data.
It is important to note that these ratings classify the complexity of a given scenario, not of approaches.
For example, the default scenario (class-incremental) has a complexity level of (0~1), domain-incremental without task labels would be (0~2), and task-agnostic continual reinforcement learning \cite{Caccia2022} would be (1~3).
To validate approaches' autonomy, they should be evaluated on adequate scenarios.
Investigating the cost of lowering or increasing a task's complexity level is fundamental for applications of CL.
We want to make our algorithms scale up to arbitrary complexity levels, but in practice, we would always choose the lowest possible complexity.
If permitted by a given application, solving a scenario with a complexity level (0~1) is obviously more efficient than solving the same scenario with a level of (3~3).
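To make the pair notation concrete, a small helper can map scenario properties to autonomy pairs; the attribute names below are our own illustration, not a standard API.

```python
def autonomy_pair(labels, task_info):
    """Map scenario properties to the (objective, CL) autonomy pair.

    labels:    'full' | 'sparse' | 'none_query' | 'none_zero_shot'
    task_info: 'train_and_test' | 'train_only' | 'boundaries_only' | 'none'
    """
    objective = {'full': 0, 'sparse': 1,
                 'none_query': 2, 'none_zero_shot': 3}[labels]
    cl = {'train_and_test': 0, 'train_only': 1,
          'boundaries_only': 2, 'none': 3}[task_info]
    return objective, cl
```

With this helper, the class-incremental default scenario maps to (0, 1) and task-agnostic continual reinforcement learning to (1, 3), matching the examples above.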
\subsection{Towards a generalization of the default setting}
Some variants of supervised CL exist that alleviate the need for annotated data.
The reduction of annotation can consist in restricting access to the task labels, as in task-agnostic CL \cite{zeno2018task,He2019TaskAC}, in reducing the availability of labels, as in continual active learning \cite{mundt2020wholistic,perkonigg2021continual}, or in semi-supervised continual learning \cite{smith2021memory}.
As described in Sec~\ref{sub:autonomy}, reducing access to task supervision leads to evaluating a CL autonomy level of 2, and removing all access to it leads to evaluating a CL autonomy level of 3.
On the other hand, reducing data annotation leads to an objective autonomy level of 1, instead of 0 when full annotation is available.
Among potential supplementary constraints, data can be streamed without the possibility of multi-epoch training as in online training \cite{Chaudhry19,schwarz2018progress}, or the data can be imbalanced \cite{kimimbalanced,chrysakis2020online}, or mixed with spurious features \cite{lesort2022Spurious}.
Scenarios where the annotations change over time (real concept drift) have been investigated in \cite{lesort2021understanding,abdelsalam2021iirc,caccia2020online}.
Some contributions \cite{dz22} relax the condition of disjoint sub-tasks and assess the impact of several fundamental CL strategies such as regularization and replay.
Yet others \cite{pfuelb2021a,pfuelb2021b} demonstrate that detecting sub-task boundaries autonomously is generally feasible by using density estimation methods.
To the generalized settings cited above, supplementary constraints (as discussed in Sec~\ref{sub:constraints}) may be added, making them even harder.
While this brings CL closer to real-life applications, it requires solutions that do not overfit a particular CL setting.
Currently, the field of CL is fundamentally meta: implicitly, the goal is not to train the best possible model w.r.t. non-continual baselines, but rather to create algorithms that show maximal generalization to other CL settings.
Therefore, experimenting with generalized supervised scenarios can assess algorithms' robustness and improve generalized CL.
\section{Unsupervised continual learning}
\label{sec:unsupervised}
Whether a machine learning task is considered supervised or not depends on the formulation of the loss function.
In fact, no assumptions whatsoever are made concerning the loss in the definition of CL given in Sec~\ref{sec:intro}.
Therefore, CL is naturally transferred to unsupervised methods of machine learning, typical examples of which are density modeling, clustering, generative learning, and unsupervised representation learning.
\subsection{Density modeling}
Density modeling aims at approximating the probability density of a given set of data samples directly by minimizing a log-likelihood loss.
Typically, this is achieved using \textit{mixture models}, which model the data density as a weighted sum of $N$ parameterized component densities, e.g., multivariate Gaussian densities or Dirichlet distributions.
Density modeling allows performing, among other functions, Bayesian inference and sampling.
These functionalities spawned increased interest in mixture modeling a few years back, particularly in robotics \cite{pinto2015fast,shmelkov2017incremental,pokrajac2007incremental,kristan2008incremental}.
The main issue for CL in mixture modeling is that concept drift or shift may require an adaptation of $N$, which motivates heuristics for adding and removing components.
Current approaches using generative replay are proposed in, e.g., \cite{rao2019continual}.
Mixture models usually adapt only a small subset of components for each update step due to their intrinsic reliance on distances instead of scalar products.
This is why they are less prone to catastrophic forgetting than DNNs, an effect that has been demonstrated for self-organizing maps in \cite{gepperth2020a} which are an approximation to Gaussian Mixture Models (GMMs), see \cite{gepperth2019}.
Modeling the data density allows partitioning data space into Voronoi cells, in each of which a separate linear classifier model can be trained.
This is the essence of the popular Locally Weighted Projection Regression (LWPR) algorithm \cite{vijayakumar2000locally} which was explicitly constructed for continual classification in robotics.
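The log-likelihood training of such mixture models can be illustrated with a minimal sketch. The following NumPy-only example is our own illustration, not any of the cited implementations; the component count, initialization, and iteration budget are arbitrary choices. It runs a few EM steps for a 1-D GMM:

```python
import numpy as np

def fit_gmm_1d(x, n_components=2, n_iter=50):
    """Minimal EM for a 1-D Gaussian mixture (illustrative, not optimized)."""
    mu = np.quantile(x, np.linspace(0.2, 0.8, n_components))  # spread-out init
    var = np.full(n_components, x.var())
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances in closed form
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Two well-separated modes: the fitted means should land near 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(5.0, 0.5, 200)])
pi, mu, var = fit_gmm_1d(data)
```

In practice a dedicated library implementation would be used; the sketch only makes the E/M updates behind the log-likelihood loss explicit.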
\subsection{Clustering}
Clustering is, in a certain sense, an approximation to density modeling, although the inference is limited to determining the precise component a given data sample was generated from.
Clustering methods are normally trained using a k-means type of algorithm, which approximates gradient descent on a loss function that again approximates a GMM log-likelihood.
CL for clustering algorithms faces the same basic issue as in density modeling: a potentially variable number $N$ of cluster centers during concept drift or shift.
This has been demonstrated in, e.g., \cite{pham2004incremental,aaron2014dynamic,bagirov2011fast}.
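The basic issue of a variable number of cluster centers can be sketched with a simple heuristic. The following toy example is our own illustration, not the algorithms of the cited works; the novelty threshold `tau` and learning rate `lr` are assumed hyper-parameters. It grows an online k-means centroid set whenever a sample is far from all existing centers:

```python
import numpy as np

def continual_kmeans(stream, tau=3.0, lr=0.1):
    """Online k-means that spawns a new centroid when a sample lies
    farther than tau from every existing centroid (drift heuristic)."""
    centroids = [np.asarray(stream[0], dtype=float)]
    for x in stream[1:]:
        x = np.asarray(x, dtype=float)
        d = [np.linalg.norm(x - c) for c in centroids]
        j = int(np.argmin(d))
        if d[j] > tau:            # concept shift: add a new cluster center
            centroids.append(x.copy())
        else:                     # otherwise: standard online k-means update
            centroids[j] += lr * (x - centroids[j])
    return np.array(centroids)

# A stream that drifts from around (0, 0) to around (10, 10):
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 0.3, (100, 2)),
                         rng.normal(10, 0.3, (100, 2))])
C = continual_kmeans(stream)
```

After the drift, the heuristic ends up with two centroids, one per concept; real systems also need a mechanism for removing stale centers.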
\subsection{Generative learning}
Generative learning aims to generate realistic samples (typically images) that are similar to a set of training data.
Typical models are generative adversarial networks (GANs), variational auto-encoders (VAEs), PixelCNN, FLoW or GLoW, but many other variants have been proposed, see, e.g., \cite{turhan2018recent} for a review.
Training such generative models can be performed, e.g., in the CL default scenario introduced in Sec~\ref{sec:default} (apart from the supervision information), which leads to catastrophic forgetting (CF) without additional measures.
Several of the approaches used in supervised learning have been successfully applied to training generative models: knowledge distillation \cite{gancl1}, EWC \cite{zhai2019lifelong} and replay \cite{8682702,9190980,lesort2018marginal,ramapuram2017lifelong,lesort2018generative}.
To our knowledge, no generic approaches that are specific to generative learning have been proposed, apart perhaps from \cite{varshney2021cam}, where it is proposed to learn specific transformations.
This, however, is very specific to a particular kind of (image) data and would have to be adapted if other kinds of data were targeted.
\subsection{Continual representation learning}
\label{sub:representation}
Unsupervised training for learning representations for downstream applications is a common use case for unsupervised learning.
It was one of the motivations to develop various types of auto-encoders and generative models in the early days of deep learning.
In CL, using an unsupervised criterion to learn representations might be useful to avoid representations that overfit a specific task and, at the same time, improve performance on downstream tasks \cite{fini2022self,rao2019continual,madaan2021representational}.
Unsupervised pre-training can also be useful for learning a general feature extractor that can be frozen for future tasks \cite{Traore19DisCoRL,Ostapenko2022Foundational,caccia2021special}.
\subsection{Challenges of unsupervised CL}
Unsupervised learning offers general learning criteria that can avoid the over-specialization of supervised training and reduce forgetting.
Nevertheless, unsupervised CL faces the same challenges as supervised CL, and the default scenario for supervised CL can be transferred.
Moreover, in practice, unsupervised training tends to be more complex than supervised training, especially for generation and density modeling, since it is harder to model a distribution than to determine a separating hypersurface between classes in data space.
With the added complexity of CL, unsupervised learning can be a formidable problem, especially w.r.t. model and hyper-parameter selection.
\section{Continual reinforcement learning}
\label{sec:reinforcement}
In reinforcement learning (RL), an agent learns to interact with its environment by choosing a specific action for each state based on a reward signal.
The (unknown) underlying process is formalized as a Markov decision process (MDP), where an optimal policy maximizes an expected reward.
This scenario is inherently a CL setting, since the distribution of the observed data depends on the specific policy.
The evolution of the policy throughout the learning process will mechanically lead to the non-stationarity of the data distribution.
Hence, RL requires the ability to cope with non-stationary data.
However, supplementary non-stationarity, for example in the environment or in the objective to fulfill, can increase the training difficulty and lead to a continual setting.
We will use the term \textit{Continual Reinforcement Learning} (CRL) for denoting RL in settings that go beyond the usual assumptions of non-stationarity made in conventional RL.
\subsection{Existing approaches}
\label{sub:RW_RL}
The works presented in \cite{ring94,Thrun1995} introduced the importance of CL at an early stage and investigated it especially in the context of reinforcement learning.
More recent works, e.g., \cite{Xu2018,Rolnick2019} revisit this area and consider additional aspects such as catastrophic forgetting.
Some frameworks to guide future research have also been published \cite{Lesort2019,Khetarpal2020}.
Both provide a comprehensive overview of the synergies between continual and reinforcement learning.
\par\smallskip\noindent\textbf{RL Approaches }
Experience replay \cite{Zhang2017,Rolnick2019,Fedus2020} is the most common approach to counter non-stationarities in RL.
Several variants are introduced, e.g., \cite{Schaul2015,Andrychowicz2017,Isele2018,Novati2019,Hu2021}.
Continuous control, multi-task, and multi-goal are also research topics intersecting with continual reinforcement learning, but their scenarios are not always defined in a consistent fashion in the literature.
In general, the goal is to enable transfer learning between policies, which, however, omits the capacity for forgetting or re-adaptation.
Some works assume a static objective \cite{Ammar2014,Teh2017,Sorokin2019,Ribeiro2019,Schiewer2021}, others a static agent and/or environment \cite{Zhao2019,Yang2020,Gupta2021} or none of both \cite{Xu2020a,Kalashnikov2021,Kelly2021}.
In contrast, multi-agent reinforcement learning is mostly related to some kind of joint training and is hence not related to CL.
For CRL, the agent needs to acquire new skills to handle time-varying conditions, such as changes in environment \cite{Padakandla2021}, observations or actions, and additionally must retain the old knowledge.
A variety of approaches has been published, among which knowledge-based distillations \cite{Kalifou2019,Traore2019} and context-based decompositions \cite{Mendez2021,Zhang2022} are popular.
Other works are concerned with the employed model \cite{Kaplanis2018,Kaplanis2019,Lo2019,Huang2021}, off-policy algorithms \cite{Xie2020}, policy gradient \cite{Mendez2020} or a task-agnostic perspective \cite{Caccia2022}.
Evaluations of known CL methods (e.g., GEM, A-GEM, and replay) are also applied in the RL domain \cite{Atkinson2021,Bagus2022}.
\par\smallskip\noindent\textbf{Benchmarks }
An overview of CRL environments can be found in \cite{Khetarpal2018}.
Dedicated benchmarks which allow a systematical assessment are: \textit{Meta-World} \cite{Yu2019}, \textit{Continual World} \cite{Wolczyk2021} and \textit{L2Explorer} \cite{Johnson2022}.
\par\smallskip\noindent\textbf{Libraries }
Some libraries aim at unifying CRL development to improve comparability and accelerate progress:
\textit{Sequoia} \cite{Normandin2021}, \textit{Avalanche rl} \cite{Lucchesi2022}, \textit{SaLinA} \cite{denoyer2021salina}, \textit{Reverb} \cite{Cassirer2021} and \textit{CORA} \cite{Powers2021}.
\subsection{Assumptions in CRL}
Three assumptions are commonly made in CRL:
Foremost, a decomposition into sub-tasks is assumed, even if their onset is unknown, since most dedicated CL methods (see Sec~\ref{sub:approaches}) assume the existence of distinct sub-tasks.
Another assumption concerns samples, which are assumed to be non-contradictory within sub-tasks, meaning the assessment of rewards changes only between sub-tasks, if it changes at all.
Finally, it is a common assumption that knowledge of sub-task boundaries is provided.
Most existing works use information about sub-task boundaries as if it were provided by an oracle, without the ability to recognize or determine boundaries autonomously.
\subsection{Challenges}
In CRL, various types of drifts/shifts can appear:
\par\smallskip\noindent\textbf{Environment-related }
The agent successively observes its environment.
Therefore, on a short timescale, observations will always be non-stationary, even if the environment itself is stationary.
In addition, the environment itself can change over time, or rapid modifications can be encountered (\textit{environment shift}).
This would result in novel states or transitions between them, enlarging the actually involved MDP.
\par\smallskip\noindent\textbf{Goal-related }
By maximizing the reward signal, the agent attains a defined objective.
If the reward function changes, the agent experiences divergent information, leading to an inconsistent policy.
In this setting, the definition of states, actions and transitions does not change, so the underlying MDP remains structurally intact.
However, different rewards are assigned to previously learned mappings, enforcing changes of the transition probabilities induced by the learned policy.
\par\smallskip\noindent\textbf{Agent-related }
The decreasing influence of exploration, regardless of whether off-policy methods such as Q-Learning or on-policy methods such as policy gradient are used, temporarily creates a source of non-stationarity, resulting in a time-varying sampling of the state-action space even with a static policy and a static environment.
Additionally, it is easily possible that sensors or actuators degrade or undergo deliberate manipulations.
Changes affecting the possible actions have immediate effects on the MDP, while a changed perception of states also impacts transitions.
\par\smallskip\noindent\textbf{Sub-tasks and data acquisition }
For scenarios where the environment changes in a discrete fashion, we can introduce the notion of sub-tasks as in the default scenario for supervised CL, see Sec~\ref{sec:default}.
A general challenge stems from the fact that samples are acquired as an online time series and have no balancing guarantees at all.
Moreover, it is possible that similar states and actions appear in various sub-tasks, but with different assigned rewards, so sub-tasks are usually not disjoint and may even be contradictory, requiring un- or re-learning, a concept absent from the default supervised CL scenario.
Depending on the type of non-stationarity, sub-task onset can be unknown, and the detection of boundaries may be difficult if the drifts are gradual rather than abrupt.
In addition, the number of sub-tasks can be significantly higher than in supervised scenarios, up to a point where the entire concept of sub-tasks becomes questionable.
Lastly, actions must be explicitly performed to transition to the appropriate subsequent state.
Therefore, a generative or offline sampling is of limited usefulness, at least for exploration.
\section{Discussion}
The field of CL has expanded rapidly in recent years, which is why many aspects of CL are still fluid and not subject to a common consensus among researchers.
This is evidenced by a wide variety of assumptions, evaluation metrics, see Sec~\ref{sub:metrics} and constraints, see Sec~\ref{sub:constraints}.
The so-called default scenario, see Sec~\ref{sec:default}, is the nearest thing to a commonly agreed scenario, yet many details fluctuate strongly between contributions.
This leads to several interesting consequences and opportunities for further research:
\par\smallskip\noindent\textbf{CL comparability }
A direct consequence is the difficulty of directly comparing results from different articles.
This underscores the need, in CL more than in other domains of machine learning, to precisely describe evaluation procedures and, where possible, make use of existing libraries (see Sec~\ref{sec:default} and \ref{sub:RW_RL}) and evaluation procedures.
Furthermore, as stated in Sec~\ref{sub:constraints}, CL is a multi-objective problem where achieving the cumulative baseline is important, but where other measures (see Sec~\ref{sub:metrics}) matter as well.
\par\smallskip\noindent\textbf{CL autonomy }
As explained in Sec~\ref{sub:autonomy}, CL approaches should also be evaluated based on the complexity and autonomy of the scenario they can generalize to, to prevent them from overfitting to a specific CL scenario or assumptions.
\par\smallskip\noindent\textbf{CL scalability }
An aspect that is often omitted in current works in favor of quantitative performance measures is scalability.
Depending on a potential application context, CL, even in the default scenario, may be faced with a huge number of sub-tasks, each again containing enormous amounts of samples.
If this were not the case, the cumulative baseline, or equivalently some variant of GDumb (see Sec~\ref{sub:metrics}), would be a much less costly and superior (w.r.t. performance) alternative to using dedicated CL methods.
So time and memory complexity for the case where the number of sub-tasks is large should be included in all new works on CL algorithms to ensure comparability, at least in this respect.
\par\smallskip\noindent\textbf{CL generalization }
As was shown in Sec~\ref{sec:generalization_default}, \ref{sec:unsupervised} and \ref{sec:reinforcement}, the default CL scenario of Sec~\ref{sec:default} can be generalized in many ways.
Moreover, these chapters show that many open issues remain, both technical and conceptual, when attempting to generalize CL.
\section{Conclusion}
This review article attempts to give an overview of the current state of CL beyond the purely supervised default scenario, see Sec~\ref{sec:default}.
We describe the various complexifications of the default scenario and the different learning paradigms, and propose a classification based on the autonomy characteristics of algorithms.
We believe that attempts to generalize CL pose important questions about the fundamental assumptions behind CL.
We thus encourage CL researchers to carefully reflect upon the implicit, hidden assumptions in each CL approach they are dealing with and whether they can (and should) be relaxed.
In a still-fluid field such as CL, a continuous re-examination of assumptions may lead to new solutions that strongly contribute to the advancement of the field.
\begin{footnotesize}
\bibliographystyle{unsrt}
\section{Introduction}
In a physical class setting, a whiteboard or blackboard is an indispensable tool for teaching. Writing on the board is vital not just for transmitting class content, but also for attracting students' attention. However, this tool is conspicuously missing from online teaching and learning.
Online teaching is growing rapidly during the current pandemic, as many countries are still not prepared for full-fledged physical classes.
Moreover, as we can see on different online video streaming platforms, recorded tutorials are becoming vastly popular, and content creators are increasingly interested in producing such educational videos.
When it comes to online classrooms, the ability to record lectures is important. In this aspect, it is critical to capture the lecture content written by the teachers in order to make the teaching more engaging.
There are various techniques for recording lectures digitally, the two most popular of which are: \textit{(a)} utilizing an overhead camera arrangement and \textit{(b)} using a graphics tablet or tab to capture the writings. The choice between these two strategies is mostly influenced by financial considerations and ease of use. In the first scenario, the lecture is mostly written down on paper, and a recording device (for example, a smartphone) mounted on a dedicated camera stand is used to get a flawless ``bird's eye" view of the paper from the top~\cite{hellerman_2020}. The second method of recording lectures is quite advanced, since it makes use of a Graphics Tablet~\cite{Tab2021}. These tablets, like the \textit{Intuos} model series from \textit{Wacom Co., Ltd.}\footnote{\texttt{www.wacom.com/en-us/products/pen-tablets}}, are becoming increasingly popular in the current e-Teaching environment as a viable alternative to the traditional whiteboard setting. However, many people find it difficult to use these devices for a variety of reasons, the most significant being that graphics tabs are very expensive~\cite{linda_2018}. Furthermore, because most teachers are accustomed to the conventional writing method---such as a marker on a whiteboard or a pen on paper---they lack the necessary skills to utilize such instruments. Since the majority of general-purpose Graphics Tablets do not include a built-in screen, users must keep their gaze on the computer screen while hovering their hands over the tab. This necessitates developing certain abilities to preserve hand-eye synchronization~\cite{santos_2020}, which can be difficult for many educators and trainers to master. In addition, excessive use of these devices might lead to \textit{musculoskeletal} problems like aching shoulders, cervical spine pain, and neck pain~\cite{Xu2020}. 
However, certain tablets---designed specifically for artists and containing integrated touch displays---are quite pricey.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{LaTeX/image5035.jpg}
\caption{Overview of our proposed \textit{DIY Graphics Tab}}
\label{fig: sys}
\end{figure}
\begin{table*}[ht]
\centering
\begin{tabular}{lll}
\hline \hline
\textit{\textbf{Adverse issues}} &
\textit{\textbf{WebcamPaperPen}} &
\textit{\textbf{Ours (DIY Graphics Tab)}} \\ \hline \hline
\begin{tabular}[c]{@{}l@{}}Is the paper detected with manual\\ procedure?\end{tabular} &
{\cmark} &
\begin{tabular}[c]{@{}l@{}}{\xmark} {(}paper is detected using \\machine learning technique{)}\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need to perform additional\\ task for callibration?\end{tabular} &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}user requires to provide cross marks in\\ the four corners of the paper{)}\end{tabular} &
\xmark \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need extra camera other than\\the built-in one?\end{tabular} &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}a movable webcam is needed{)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}}{\xmark}\\ \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need extra equipment other\\ than pen and paper?\end{tabular} &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}a lamp is required to generate shadow of\\ the pen to detect pen \& paper interactions{)}\end{tabular} &
\xmark \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need to perform additional\\ task for setup?\end{tabular} &
\begin{tabular}[c]{@{}l@{}} {\cmark} {(}the lamp \& camera need to be set in such\\ a way that there is a visible shadow of the pen{)}\end{tabular} &
\xmark \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need to keep the paper steady\\ while writing?\end{tabular} &
\cmark &
\xmark \\ \hline
\begin{tabular}[c]{@{}l@{}}Do we need to use specific type of pen?\end{tabular} &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}user must use BIC blue pen with cap-closed{)}\end{tabular} &
\xmark \\ \hline
Is there any restriction on handedness? &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}user must be right-handed{)}\end{tabular} &
\xmark \\
\hline
Is eye-hand coordination necessary? &
\begin{tabular}[c]{@{}l@{}}{\cmark} {(}user needs to look at the monitor while\\ write on the paper{)}\end{tabular} &
\begin{tabular}[c]{@{}l@{}} {\xmark} {(}user only needs to look\\ at the paper only{)}\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Does the user need to be previously\\ experienced with graphics tablet?\end{tabular} &
\cmark &
\xmark \\ \hline
\end{tabular}
\caption{A comparison between the \textit{WebcamPaperPen}~\cite{webcamPP2014} and our DIY Graphics Tab on the issues based on difficulties. Here, ({\cmark}) and ({\xmark}) indicate \textit{having difficulties} and \textit{no difficulties} respectively. As we can observe, our system needs less constraints to perform the task.}
\label{tab:compare}
\end{table*}
\subsection{Contributions}
In this paper, we strive to make the usage of a graphics tablet for lecture recording affordable to the teaching community while still providing the most essential capabilities. Our goal is to blend the two recording systems described above: while working within a limited budget, we want to combine the old-fashioned pen-and-paper technique with modern computer-based recording technology. We make the assumption that the user---at the very least---possesses a laptop computer. Our solution avoids the significant cost of a graphics tablet and does not necessitate previously acquired hand-eye coordination skills. Our technology can provide a rapid shortcut for persons who need to record lecture content but only have a laptop, a pen, and some paper available to them. We call this system ``\textit{Do-It-Yourself Graphics Tab}'', or ``\textit{DIY Graphics Tab}'' in short.
\subsubsection{System Configuration.}
Figure~\ref{fig: sys} depicts our DIY Graphics Tab. The educator places a piece of paper in front of his/her laptop and tilts the laptop's lid so that the webcam captures the area of interest. According to our findings, a tilting angle of around $45$ degrees with respect to the base is adequate.
After that, the system stores the frames captured while the user is writing and processes them so that just the region of the paper is kept. The extraction of the document is carried out regardless of whether or not palms are present over the document. The application of machine learning makes this extraction procedure possible. A non-affine transformation is then applied to the paper in order to view it from a ``bird's eye" perspective. Additional post-processing is also carried out in order to filter the content of the document. The task is challenging due to the unpredictable movement of the paper while writing, the low resolution of standard webcams, shadings, and insufficient illumination.\\
The following portion of this article addresses research that is relevant to our work. After that, we describe our methodology, followed by the outcomes and user experience evaluations of our system. Our paper closes with a discussion of the limitations of our study as well as possible future directions.
\section{Related Works}
\label{sec:related}
A few works have been done on capturing physical class lectures and analyzing the contents, such as---
\cite{Davila2021, UralaKota2019, kota2018automated, davila2017whiteboard, lee2017robust, Yeh2014, Wienecke2003}.
These works offer different methods for generating static picture summaries of written whiteboard information from lecture recordings taken with still cameras, which can be used in educational settings.
These works analyze the contents by extracting the whiteboard area, removing the teacher's body, and extracting the contents on the board. However, all of the above-mentioned works operate on videos of whiteboards or blackboards recorded with a properly configured standby camera; we therefore omit their detailed descriptions from our literature review, as we try to utilize only a laptop's webcam, a pen, and paper.\\
To the best of our knowledge, the work that aligns most closely with our domain is \textit{WebcamPaperPen}~\cite{webcamPP2014}. In this work, the user lays a sheet of white paper on the desk, and the camera is placed on the desk between the paper and the display, only slightly above it and facing the user. Using four crosses on a piece of paper---which represent the appropriate corners---the system is calibrated. To ensure that mouse clicks are recognized, it is typically important to position a light on the left side of the keyboard (assuming the user is right-handed). The pen must have a blue cap (a typical BIC blue pen), as the user always uses the pen with the cap closed, never allowing ink to spill onto the paper while writing. The method detects clicks from the pen shadow and determines the cursor position by predicting where the pen tip and its shadow will meet.
When it comes to ease of use and simplicity in configuration, our work outperforms that of \textit{WebcamPaperPen}. On the basis of specific challenges, the results in Table~\ref{tab:compare} demonstrate how our DIY Graphics Tab minimizes the complexity of \textit{WebcamPaperPen} work.
\begin{figure}[htb]
\begin{subfigure}[t]{1\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{LaTeX/g1005.jpg}
\subcaption{A frame \textit{(left)} and corresponding coordinates of the paper in it \textit{(right)}.}\label{fig:label}
\end{subfigure}
\begin{subfigure}[t]{1\linewidth}
\centering
\includegraphics[width=0.5\textwidth]{LaTeX/mask.jpeg}
\subcaption{Some regions of the papers based on coordinates.}\label{fig:mask}
\end{subfigure}
\begin{subfigure}[t]{1\linewidth}
\centering
\includegraphics[width=\textwidth]{LaTeX/dlyData.png}
\subcaption{Examples of the images from our dataset}
\label{fig:samples}
\end{subfigure}
\caption{Our dataset for paper region extraction.} \label{fig: dataset}
\end{figure}
\section{Description of our System}
In Fig.~\ref{fig: method} we exhibit the pipeline of our method for an individual input frame. There are several steps associated with our system: \textit{(i)} capturing the input frame, \textit{(ii)} extracting the paper region and segmentation, \textit{(iii)} perspective transformation, and \textit{(iv)} post-processing. For step \textit{(i)}, we already presented the setup of our system for capturing input frames in the earlier sections and in Fig.~\ref{fig: sys}.
In this section we describe the workflow of the system after capturing the frames and the corresponding methodologies.
\subsection{Extracting the Paper}
The main challenge of our work is to extract the paper from the desk robustly. Since the frames are captured with a tilted camera, the appearance of the paper in the images is not rectangular due to perspective deformation. One solution could be applying traditional image processing techniques, i.e., line detection using the \textit{Hough} transformation~\cite{hough1972, JungRecHough} and \textit{Harris} corner detection~\cite{BMVC.2.23}, followed by further processing. However, these techniques fail since the palms and fingers holding pens occlude the paper most of the time. For this purpose, we exploit a machine learning approach to detect and extract the paper's region.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{LaTeX/text9161.jpg}
\caption{The workflow of our \textit{DIY Graphics Tab}}
\label{fig: method}
\end{figure}
\subsubsection{Dataset for paper segmentation.}
As we can see, the captured frame includes the palms, fingers, pens, table and background portion (Fig. \ref{fig: method}b); but our aim is to extract only the true paper region. To make sure that the system is reliable, we want it to be able to handle a variety of paper movement scenarios. For this purpose, we developed a dataset containing $1800$ images of papers in different positions and light conditions, occluded by palms and pens. Fig.~\ref{fig:samples} shows several examples of our dataset. We labelled all the images by selecting the corners of the paper and storing the coordinates (see Fig.~\ref{fig:label} and~\ref{fig:mask}). The coordinates for each sample are ordered in a convenient manner to generate the convex hull of the quadrilateral mask covering the paper region.
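Generating the quadrilateral ground-truth mask from the four labeled corners can be sketched as follows. This is a NumPy-only stand-in for a library routine such as OpenCV's fillConvexPoly, and the toy corner coordinates are illustrative assumptions:

```python
import numpy as np

def quad_mask(corners, height, width):
    """Binary mask of a convex quadrilateral given its four (x, y)
    corners in clockwise or counter-clockwise order, as in our labels."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.stack([xs, ys], axis=-1).astype(float)
    corners = np.asarray(corners, dtype=float)
    signs = []
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        # cross product tells on which side of edge a->b each pixel lies
        cross = ((b[0] - a[0]) * (pts[..., 1] - a[1])
                 - (b[1] - a[1]) * (pts[..., 0] - a[0]))
        signs.append(cross)
    signs = np.stack(signs)
    # inside iff the pixel is on the same side of all four edges
    inside = np.all(signs >= 0, axis=0) | np.all(signs <= 0, axis=0)
    return inside.astype(np.uint8)

# Toy label: four ordered corners of a slightly skewed paper region.
mask = quad_mask([(2, 2), (12, 3), (11, 10), (3, 9)], height=14, width=16)
```

The same half-plane test works for either corner ordering, which is convenient since labels may be stored clockwise or counter-clockwise.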
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{LaTeX/allres.png}
\caption{Result of our DIY Graphics Tab. \textit{For each column:} input frame from webcam (\textit{top row}), segmented and perspective transformed paper region (\textit{middle}), and processed output (\textit{bottom}).}
\label{fig: output}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.73\linewidth]{LaTeX/left_handed.jpg}
\caption{Result for left-handed person.}
\label{fig:lefth}
\end{figure}
\subsubsection{Paper segmentation.}
We exploited the instance segmentation method Mask-RCNN~\cite{he2017mask} to extract the true paper area from the image frame. We trained the model on our dataset, and the segmentation provided us with a blob region covering the paper instead of its linear edges (Fig. \ref{fig: method}c). We utilized the masks from the Mask-RCNN module, from which we extracted the largest contour to obtain the corners (Fig. \ref{fig: method}d \& \ref{fig: method}e). This segmentation functions effectively even when the paper is not steady and is obscured by the user's palms.
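Recovering ordered corners from a segmentation mask can be sketched with the classic sum/difference heuristic. This is an illustrative stand-in for the contour-based extraction described above, and the toy rectangular mask is an assumption:

```python
import numpy as np

def order_corners_from_mask(mask):
    """Recover ordered corners (tl, tr, br, bl) from a quadrilateral-ish
    binary mask using the classic sum/difference heuristic."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)   # (x, y) pairs
    s = pts.sum(axis=1)          # x + y: small at top-left, large at bottom-right
    d = pts[:, 0] - pts[:, 1]    # x - y: large at top-right, small at bottom-left
    tl = pts[np.argmin(s)]
    br = pts[np.argmax(s)]
    tr = pts[np.argmax(d)]
    bl = pts[np.argmin(d)]
    return np.stack([tl, tr, br, bl])

# Axis-aligned toy mask: a filled rectangle from (3, 2) to (10, 7).
mask = np.zeros((12, 14), dtype=np.uint8)
mask[2:8, 3:11] = 1
corners = order_corners_from_mask(mask)
```

A consistent corner ordering matters because the subsequent $4$-point perspective transformation pairs each detected corner with a fixed corner of the output rectangle.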
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{LaTeX/uxrate3.JPG}
\caption{Representation of the ratings collected from the \textbf{teachers}. \textit{From left to right:} ratings from teachers merged from all groups, teachers with few technical background (but experienced with laptop), teachers with technical background but no experience with graphics tablet, and teachers with technical background with experience in graphics tablet.}
\label{fig: ratings}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.52\linewidth]{LaTeX/uxstu2.JPG}
\caption{Ratings collected from \textbf{students}.}
\label{fig:student}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{LaTeX/x.jpg}
\caption{\textit{Left:} writings from a user on a paper and captured via an overhead camera ($f_o$). \textit{Right:} output of our DIY Graphics Tab for the same paper with writings ($f_d$).}
\label{fig:rmse}
\end{figure}
\subsection{Bird's Eye Transformation and Post-processing}
Because the frame is recorded at an angle, the segmented paper has perspective distortion. Our goal is to transform it into a bird's-eye view (as if the photo were captured with an overhead camera setup). For this purpose, the corners of the segmented paper obtained in the previous step are taken and a $4$-point perspective transformation \cite{rosebrock_2014} is applied (Fig. \ref{fig: method}f). The key concept is to perform an \textit{Inverse Perspective Mapping}, described in Equation~\ref{eq:IPM}~\cite{articleIPM, DBLP:journals/corr/abs-1812-00913}.
\begin{equation}
\label{eq:IPM}
\left[\begin{array}{c}
x^{\prime} \\
y^{\prime} \\
w^{\prime}
\end{array}\right]=\left[\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}\right]\left[\begin{array}{c}
u \\
v \\
w
\end{array}\right]
\end{equation}
where $x=\frac{x^{\prime}}{w^{\prime}}$ and $y=\frac{y^{\prime}}{w^{\prime}}$. The matrix maps pixel $(u,v)$ of the input image to pixel $(x,y)$ of the bird's-eye view. The warped image is then scaled with an appropriate scaling factor based on the paper size.
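A minimal sketch of the mapping in Equation~\ref{eq:IPM}, assuming input pixels with $w=1$ and a NumPy representation of the $3\times3$ matrix (the function name is ours):

```python
import numpy as np

def warp_point(H, u, v):
    """Map input pixel (u, v) through the 3x3 matrix of Eq. (IPM)
    in homogeneous coordinates (w = 1) and normalize by w'."""
    xp, yp, wp = H @ np.array([u, v, 1.0])
    return xp / wp, yp / wp

# The identity matrix leaves a pixel unchanged
assert warp_point(np.eye(3), 3.0, 4.0) == (3.0, 4.0)
```

In practice the full warp is applied to every pixel of the frame at once; libraries such as OpenCV provide this operation directly.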
In the last stage, we perform adaptive thresholding~\cite{nina2011recursive} followed by morphological operations and connected component analysis~\cite{dougherty2003hands} to exclude the palm and finger portions and to remove noise (Fig. \ref{fig: method}g).
\section{Experiment and Results}
The outcomes of our system are shown in this section. We ran the machine-learning part on a machine with an \texttt{AMD Ryzen} $8$-core processor and $16$GB RAM (CPU), and an \texttt{Nvidia GeForce GTX 1060} with $6$GB (GPU).
The training of Mask-RCNN took $60$ epochs and resulted in a training loss of $0.0860$ and a cross-validation loss of $0.3740$. Figure~\ref{fig: output} depicts some of our results, demonstrating that our approach can segment the paper, perform perspective warping, and remove the hand from the region in a variety of situations. Our technique also accommodates left-handed users: the user must first specify a handedness option, and for a left-handed person our system flips the frame horizontally before the segmentation phase (Fig.~\ref{fig:lefth}).
\subsubsection{Evaluation.} We conducted a test study for assessment purposes because our primary target users are instructors. We performed a voluntary study with $29$ university-level educators, inviting them to use DIY Graphics Tab to record their lectures. We divided the teachers into groups depending on their technical abilities, as follows.
\begin{itemize}
\item Teachers with little technical background apart from experience using laptops (fTB). Individuals who do not have an institutional background in computer science and engineering are considered for this category. More specifically, professors from the arts and commerce disciplines were mainly represented in this group.
\item Teachers with a technical background but no previous experience of using a graphics tablet {(TB--GT)}.
\item Teachers with a technical background and previous experience of using a graphics tablet {(TB+GT)}.
\end{itemize}
\begin{table}[htb]
\centering
\begin{tabular}{rc}
\hline \hline
\textit{\textbf{\begin{tabular}[c]{@{}r@{}} Standpoints (regarding overall
\\performance)\end{tabular}}} &
\textit{\textbf{\begin{tabular}[c]{@{}c@{}}Response from\\ Teachers (\%)\end{tabular}}} \\ \hline \hline
\begin{tabular}[c]{@{}r@{}}It replaces the graphics tablet perfectly
\\for my lecture recording\end{tabular} & $31.03$ \\ \hline
\begin{tabular}[c]{@{}r@{}}The actual graphics tablet is better but
\\it is too expensive for recording lectures,
\\I would rather use this one.\end{tabular} &
$41.38$ \\ \hline
\begin{tabular}[c]{@{}r@{}}
I think that after some time I would get
\\tired of it and would end up buying a
\\proper graphics tablet to improve my
\\teaching content.\end{tabular} &
$20.69$ \\ \hline
It does not fit my lecture recording & $6.90$ \\ \hline
Not at all & $0.00$ \\ \hline
\end{tabular}
\caption{Response to the question ``\textit{Is this DIY Graphics Tab capable of serving as an alternative for an actual graphics tablet?}'' from users (teachers).}
\label{tab:userQ}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{rc}
\hline \hline
\textit{\textbf{\begin{tabular}[c]{@{}r@{}} Standpoints (regarding tilting lid)\end{tabular}}} &
\textit{\textbf{\begin{tabular}[c]{@{}c@{}}Response from\\ teachers (\%)\end{tabular}}} \\ \hline \hline
\begin{tabular}[c]{@{}r@{}}
It is totally okay for me as I
\\only focus on the paper while writing.\end{tabular} & $72.41$ \\ \hline
\begin{tabular}[c]{@{}r@{}}
It is somewhat okay for me. But
\\sometimes I need to look at the laptop
\\screen for other uses.\end{tabular} & $27.59$ \\ \hline
\begin{tabular}[c]{@{}r@{}}No, I do not want to tilt\end{tabular} & $0.00$ \\ \hline
\end{tabular}
\caption{Response to the question ``\textit{Is tilting the lid a significant issue for lecture recording using DIY Graphics Tab?}'' from users (teachers).}
\label{tab:easeofuse}
\end{table}
We asked all instructors to use our method and grade it on a scale of one to five according to the well-known \textit{Likert} rating method~\cite{likert1932technique, mcleod_1970}, with $1$ being \textit{very poor} and $5$ being \textit{excellent}. We analyzed their reviews and present them in Fig.~\ref{fig: ratings}, including the evaluations from all categories combined as well as from each individual category. The average rating received by our system is $4.44$. As can be seen from the statistics, most teachers who have prior expertise with graphics tablets gave the system a rating of $4$ ($44.4\%$), since they are already familiar with the sophisticated features offered by graphics tablets. Our method received five-star ratings from a sizable majority ($57.1\%$) of TB--GT teachers. Almost $77\%$ of our target users---the inexperienced teachers---gave our system a perfect score. Hence, we can conclude that teachers with little technological knowledge were pleased with our work.
Our work is mostly motivated by a desire to reduce cost. Though our system does not have all of the sophisticated capabilities of a graphics tablet, we intend to supply educators with the minimum required functionality at a very low price. We conducted a questionnaire-based evaluation of the testers to determine the cost-effectiveness of our DIY Graphics Tab. To keep our evaluation on the same ground, we mostly used the questions from \textit{WebcamPaperPen}~\cite{webcamPP2014, mastersthesis}. The user replies are listed in Table~\ref{tab:userQ}. According to the statistics, the majority of users believe that using a graphics tablet to record lectures is highly expensive; on the other hand, they found our technology to be very cost-efficient.\\
Our system requires the lid to be tilted, so one concern that may arise while recording lectures is that the screen becomes inaccessible (non-visible) to the teacher. The overhead camera configuration---where the user primarily concentrates on the desk---suggests that this need not be a problem in practice. We did, however, conduct a poll to assess whether the tilting may be a problem (see Table~\ref{tab:easeofuse}). Based on the survey, around $72$ percent of teachers are comfortable with tilting the lid down, and no one considers the non-visible screen to be a major concern as long as it saves them the cost of obtaining a graphics tablet.\\
Because the output of our system will be used by students, we conducted a \textit{Likert} rating poll among $49$ students to get their opinions on it. We asked them to watch a recorded lecture made with DIY Graphics Tab and to score the quality of the lecture in terms of how well it teaches. Fig.~\ref{fig:student} shows the statistics. The average rating from the students is around $3.98$, with a majority of the students rating our system with $4$ or $5$ points.
\subsubsection{Quantitative Analysis.} We attempt a precision measurement of DIY Graphics Tab's output. We computed the root mean squared error $RMSE(f_o, f_d)$, where $f_o$ is the user's writing captured with an overhead camera and $f_d$ is the output of our system for the same writing. Both $f_o$ and $f_d$ are binarized and scaled to the same size. Fig.~\ref{fig:rmse} shows an example of the procedure. We applied this measurement to $820$ cases and the average $RMSE$ is $0.091$. Note that the result may vary because it is not a direct measurement: unwanted distortion from the overhead capture of $f_o$, scaling disturbances, noise, and a lack of alignment between $f_o$ and $f_d$ can all have a significant impact on the final precision assessment.
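A minimal sketch of the $RMSE(f_o, f_d)$ computation, assuming both images are already binarized to $\{0,1\}$ and resized to the same shape:

```python
import numpy as np

def rmse(f_o, f_d):
    """Root mean squared error between two binarized images of
    equal shape, with pixel values in {0, 1}."""
    f_o = np.asarray(f_o, dtype=float)
    f_d = np.asarray(f_d, dtype=float)
    return np.sqrt(np.mean((f_o - f_d) ** 2))
```

On binary images this value lies in $[0,1]$, so the reported average of $0.091$ corresponds to roughly $0.8\%$ of pixels disagreeing.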
\section{Conclusion and Future Works}
In this paper, we proposed a method---\textit{DIY Graphics Tab}---to use a pen and paper as a substitute for a graphics tablet with the help of a laptop's webcam, simply by tilting its lid. We used Mask R-CNN to predict the region of the paper in the image and applied a perspective transformation to obtain the top-down view of the paper. Adaptive thresholding and other post-processing were used to remove the hands occluding the paper.\\
At the moment, our system has a few shortcomings that may be addressed in the future.
Flickering can occur in some circumstances as a result of frame-by-frame processing. For writing located far from the camera, the final result may appear hazy at times, because pixels after IPM are interpolated in the final image plane.
Lastly, contents obscured by the palms are not retrievable for some frames in the present edition of our system.
For future work, along with addressing the above-mentioned limitations, we would like to enrich our dataset with more irregular situations, diverse backgrounds, different lighting conditions, and variations of pen and paper. We would also like to use more advanced instance segmentation techniques, e.g., \textit{YOLACT} or \textit{YOLACT++} \cite{bolya2019yolact, yolactplus}, and to introduce gesture recognition to add new features. For example, different actions of the user's fingers may be treated as a sign to change the color of the stroke, or to increase/decrease its width. Currently, DIY Graphics Tab works on pre-recorded videos and in real-time scenarios at a reduced frame rate; we plan to optimize the implementation to work smoothly in real time.
\section{Introduction}
\label{sectintroduction}
The response of a grandstand can be resolved quite easily by linear dynamics methods, if we neglect all the randomness of the system. However, for a more accurate description, at least the most significant uncertainties need to be taken into account. The main uncertainties include
\begin{itemize}
\item forcing terms resulting from active crowd movements, especially synchronized jumping,
\item the uncertainties of the parameters in discrete biodynamic models---randomness of stiffness, mass and damping matrices,
\item the size and spatial distribution of an active crowd and a passive crowd.
\end{itemize}
Further generalizations can take into account various kinds of nonlinearities, e.g. geometrical and material non-linearities and non-linearities of biodynamic models \citep{Huang}. However, the following restrictions will be assumed for the purposes of this paper: the material parameters of the structure are treated as deterministic, since their influence is negligible in comparison with the sources listed above and the scope of the overall response; the spatial distribution of the crowd is fixed; biodynamic models of the passive crowd are treated as deterministic.
The results of measurements carried out on simple structures indicate that an active spectator might be represented as a time-dependent force, \textit{cf} \citep{Ellis}. In other words, under certain circumstances, an active spectator does not influence the properties of the structure. Several generators of normalized artificial load processes have been developed in the literature, e.g. \citep{Sim} and \citep{Racic}, which can be supplemented by human body weights, e.g. \citep{Hermanussen}. The passive part of the crowd, however, is assumed to be stationary in space and in permanent contact with the structure, excluding accelerations that exceed the gravity of the earth. Thus a passive spectator can be modeled as a biodynamic system. A survey of models of this type can be found in \citep{Sachse}. These remarks allow us to model the structure considered here in terms of random vibrations of linear systems.
Monte Carlo simulation (MC) is one of the ways used in advanced grandstand design procedures for reflecting all kinds of randomness. MC is a general method, but it has considerable disadvantages in its original form, e.g. rather slow convergence when estimating low probabilities, and high computational demands. MC will be employed in this paper mainly for controlling the accuracy and the performance of semi-analytical methods introduced in terms of stochastic differential equations. These simplified methods can be applicable and useful in the preliminary stages of design procedures, when quick and approximate solutions are sufficient.
\section{Stochastic differential equations}
\label{SDE}
Employing the Finite Element Method, and taking into account the above-mentioned assumptions, a mathematical model of the structural system considered here can be written as a set of hyperbolic differential equations
\begin{equation}
\bs{M}\bs{\ddot{Z}}(t)+\bs{C}\bs{\dot{Z}}(t)+\bs{K}\bs{Z}(t)=\bs{GY}(t),\quad t\geq 0
\label{2eq1}
\end{equation}
where $\bs{Z}$ and $\bs{Y}$ are $\mathbb{R}^{d/2}$ and $\mathbb{R}^{d'}$-valued stochastic processes, and $\bs{M}$, $\bs{C}$, $\bs{K}$ and $\bs{G}$ are $(d/2,d/2)$ and $(d/2,d')$ matrices of mass, damping, stiffness and input distribution. An over-dot denotes a derivative with respect to time, $\dot{g}(t)=d/dt\,g(t)$. To simplify subsequent expressions, let us apply the expectation operator $\mathsf{E}$ in equation (\ref{2eq1}) and subtract the result from (\ref{2eq1}). We arrive at the set of two equations
\begin{eqnarray}
&&\bs{M}\bs{\ddot{\mu}}_Z(t)+\bs{C}\bs{\dot{\mu}}_Z(t)+\bs{K}\bs{\mu}_Z(t)=\bs{G\mu}_Y(t),\quad t\geq 0\label{2eq2}\\
&&\bs{M}\bs{\ddot{\tilde{Z}}}(t)+\bs{C}\bs{\dot{\tilde{Z}}}(t)+\bs{K}\bs{\tilde{Z}}(t)=\bs{G\tilde{Y}}(t),\quad t\geq 0\label{2eq3}
\end{eqnarray}
for the mean value $\bs{\mu}_Z(t)=\mathsf{E}\bs{Z}(t)$ and centered process $\bs{\tilde{Z}}(t)=\bs{Z}(t)-\bs{\mu}_Z(t)$, $\bs{\mu}_Y(t)=\mathsf{E}\bs{Y}(t)$ and $\bs{\tilde{Y}}(t)=\bs{Y}(t)-\bs{\mu}_Y(t)$. As will become apparent in section \ref{appToGrandstands}, under certain conditions process $\bs{\tilde{Z}}(t)$ is approximately normal, and since differential equation (\ref{2eq3}) is linear with deterministic coefficients, it is reasonable to accept Gaussian approximation also for $\bs{\tilde{Y}}(t)$. This consideration leads us to stochastic differential equations and to the It\^{o} calculus. Under Gaussian assumptions, the response will be completely specified by its mean $\bs{\mu}_Z(t)$ and covariance $\bs{c}_Z(t,s)=\mathsf{E}[\bs{\tilde{Z}}(t)\bs{\tilde{Z}}(s)^T]$. The required quantities can be obtained from the time domain or from the frequency domain.
\subsection{Solution in the time domain}
\label{timedomain}
Equation (\ref{2eq2}) can be solved by direct integration or, more conveniently, by a Fourier series (or Fourier transform) assuming periodic mean $\bs{\mu}_Y(t)$, \textit{cf} section \ref{frequencydomain}. Let us rewrite (\ref{2eq3}) as
\begin{equation}
\frac{d}{dt}
\left[\begin{array}{c}
\bs{\tilde{Z}}(t)\\
\bs{\dot{\tilde{Z}}}(t)
\end{array}\right]
=
\left[\begin{array}{c c}
\bs{0} & \bs{I} \\
-\bs{M}^{-1}\bs{K} & -\bs{M}^{-1}\bs{C}
\end{array}\right]
\left[\begin{array}{c}
\bs{\tilde{Z}}(t)\\
\bs{\dot{\tilde{Z}}}(t)
\end{array}\right]
+
\left[\begin{array}{c}
\bs{0}\\
\bs{M}^{-1}\bs{G}
\end{array}\right]\bs{\tilde{Y}}(t)
\label{2eq4}
\end{equation}
or, in a more compact form,
\begin{equation}
\bs{\dot{\tilde{X}}}(t)=\bs{a}\bs{\tilde{X}}(t)+\bs{b}\bs{\tilde{Y}}(t),\quad t\geq 0
\label{2eq5}
\end{equation}
where $\bs{\tilde{X}}$ is an $\mathbb{R}^d$-valued state-space vector stochastic process with zero mean, and $\bs{a}$ and $\bs{b}$ are $(d,d)$ and $(d,d')$-matrices. The solution of this differential equation is given in the form
\begin{equation}
\bs{\tilde{X}}(t)=\bs{\theta}(t)\bs{\tilde{X}}(0)+\int_0^t\bs{\theta}(t-s)\bs{b}\bs{\tilde{Y}}(s)\,ds
\label{2eq6}
\end{equation}
where $\bs{\theta}(t-s)$ denotes a Green function or the unit impulse response satisfying
\begin{equation}
\frac{\partial\bs{\theta}(t-s)}{\partial t}=\bs{a\theta}(t-s),\quad t\geq s\geq 0,
\label{2eq7}
\end{equation}
$\bs{\theta}(0)=\bs{I}$ the identity and $\bs{\theta}(t-s)=\mbox{exp}[\bs{a}(t-s)]$ can be expressed as a matrix exponential, \textit{cf} \citep{Soong}. Initial conditions $\bs{\tilde{X}}(0)$ will be set to zero for simplicity. Forcing term $\bs{\tilde{Y}}(t)$ can also satisfy its own stochastic differential equation driven by Gaussian white noise $\bs{W}(t)=d\bs{B}(t)/dt$. For example, let $\hat{Y}_1(t)$ be a continuous-time Gaussian auto-regression scalar process of order $p$, denoted as $AR(p)$, \textit{cf} \citep{Brockwell}. Then $\hat{Y}_1(t)=\bs{e}_1^T\bs{S}_1(t)$ where the state vector $\bs{S}_1(t)=[S_{1,1}(t),\ldots,S_{1,p}(t)]^T$ satisfies the It\^{o} equation
\begin{equation}
d\bs{S}_1(t)=\bs{A}_1\bs{S}_1(t)dt+\bs{b}_1dB(t),
\label{2eq8}
\end{equation}
\begin{equation*}
\bs{A}_1=\left[\begin{array}{c c c c c}
0 & 1 & 0 & \ldots & 0 \\
0 & 0 & 1 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & 1 \\
-a_p & -a_{p-1} & -a_{p-2} & \ldots & -a_1
\end{array}\right]
\mbox{,}\quad
\bs{e}_1=\left[\begin{array}{c}
1 \\ 0 \\ \vdots \\ 0 \\ 0
\end{array}\right]
\quad\mbox{and}\quad
\bs{b}_1=\left[\begin{array}{c}
0 \\ 0 \\ \vdots \\ 0 \\ a_0
\end{array}\right].
\end{equation*}
Processes of this kind are also called filtered white noise processes or colored processes, and they have a specific frequency content. Let us assume that $\tilde{Y}_i(t)$ of $\bs{\tilde{Y}}(t)=[\tilde{Y}_1(t),\ldots,\tilde{Y}_{d'}(t)]^T$ are mutually independent $AR(p_i)$ processes. Then we can merge equations (\ref{2eq4}) and (\ref{2eq8}) to obtain one coupled system
\begin{eqnarray}
&d\left[\begin{array}{c}
\bs{\tilde{Z}}(t)\\
\bs{\dot{\tilde{Z}}}(t)\\
\bs{S}_1(t)\\
\vdots\\
\bs{S}_{d'}(t)
\end{array}\right]
=
\left[\begin{array}{c c c c c}
\bs{0} & \bs{I} & \bs{0} & \ldots & \bs{0} \\
-\bs{M}^{-1}\bs{K} & -\bs{M}^{-1}\bs{C} & \bs{M}^{-1}\bs{G}\bs{d}_1\bs{e}_1^T & \ldots & \bs{M}^{-1}\bs{G}\bs{d}_{d'}\bs{e}_{d'}^T \\
\bs{0} & \bs{0} & \bs{A}_1 & \ldots & \bs{0} \\
\bs{0} & \bs{0} & \bs{0} & \ddots & \bs{0} \\
\bs{0} & \bs{0} & \bs{0} & \ldots & \bs{A}_{d'} \\
\end{array}\right]&
\left[\begin{array}{c}
\bs{\tilde{Z}}(t)\\
\bs{\dot{\tilde{Z}}}(t)\\
\bs{S}_1(t)\\
\vdots\\
\bs{S}_{d'}(t)
\end{array}\right]dt+\nonumber\\
&+
\left[\begin{array}{c c c}
\bs{0} & \ldots & \bs{0} \\
\bs{0} & \ldots & \bs{0} \\
\bs{b}_1 & \ldots & \bs{0} \\
\bs{0} & \ddots & \bs{0} \\
\bs{0} & \ldots & \bs{b}_{d'} \\
\end{array}\right]d\bs{B}(t)&
\label{2eq9}
\end{eqnarray}
where $\bs{d}_i$ are column vectors with the unit in $i$-th position, and $\bs{B}(t)$ is an $\mathbb{R}^{d'}$-valued Brownian motion. This approach is called a state augmentation method \citep{Grigoriu_stoch}. An extension to the case $\tilde{Y}_i(t)=\sum_{k=1}^n\hat{Y}_k(t)$, where $\hat{Y}_k(t)$ are mutually independent $AR(p)$ processes, is carried out in an obvious manner. This methodology will be employed in section \ref{appToGrandstands} for $AR(2)$ processes. Equation (\ref{2eq9}) can again be rewritten in compact form
\begin{equation}
d\bs{X}(t)=\bs{a}\bs{X}(t)dt+\bs{b}d\bs{B}(t),\quad t\geq 0,
\label{2eq10}
\end{equation}
and employing the It\^{o} formula for semimartingales we arrive at the system of evolutionary equations for the response mean $\bs{\mu}_X(t)$ and covariance $\bs{c}_X(t,s)$
\begin{eqnarray}
\bs{\dot{\mu}}_X(t)&=&\bs{a\mu}_X(t),\quad t\geq0,\label{2eq11}\\
\bs{\dot{c}}_X(t,t)&=&\bs{ac}_X(t,t)+\bs{c}_X(t,t)\bs{a}^T+\bs{bb}^T,\quad t\geq0,\label{2eq12}\\
\frac{\partial\bs{c}_X(t,s)}{\partial t}&=&\bs{ac}_X(t,s),\quad t>s\geq0.\label{2eq13}
\end{eqnarray}
Since the driving forces $d\bs{B}(t)$ are Gaussian white noise and the coefficients are constant in time, the solution is an Ornstein-Uhlenbeck process with an existing stationary solution. In our case, stationary mean $\bs{\mu}_X=\bs{0}$ and covariance $\bs{\dot{c}}_X(t,t)=\bs{\dot{c}}_X(t-t)=\bs{\dot{c}}_X=\bs{0}$ which leads to the so-called continuous Lyapunov equation
\begin{equation}
\bs{0}=\bs{ac}_X+\bs{c}_X\bs{a}^T+\bs{bb}^T.
\label{2eq14}
\end{equation}
For details and further developments, see \citep{Grigoriu_stoch}. Since the stationary matrix $\bs{c}_X$ contains only response displacements and velocities, the variances of the acceleration have to be computed through the following formulas which are valid for weakly stationary processes
\begin{equation}
\bs{c}_{\dot{X}}=\left.-\frac{d^2\bs{c}_X(t)}{dt^2}\right|_{t=0}=-\bs{a}^2\bs{c}_X,
\label{2eq14a}
\end{equation}
where $\bs{a}^2$ denotes matrix power and $\bs{c}_{\dot{X}}$ denotes the stationary covariance matrix of velocities and accelerations. Equation (\ref{2eq14a}) is evaluated employing (\ref{2eq13}), which in our special case is simplified to
\begin{equation}
\bs{c}_X(t)=\bs{\theta}(t)\bs{c}_X=\exp[\bs{a}t]\bs{c}_X.
\label{2eq14b}
\end{equation}
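As a hedged numerical sketch (not part of the paper's model), the stationary Lyapunov equation (\ref{2eq14}) can be solved with an off-the-shelf routine, e.g. in SciPy; here for a single oscillator $m\ddot{Z}+c\dot{Z}+kZ=W(t)$ driven directly by white noise of intensity $q$, whose stationary displacement and velocity variances are known analytically ($q/(2ck)$ and $q/(2cm)$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# SDOF oscillator m*z'' + c*z' + k*z = W(t), white noise intensity q
m, c, k, q = 1.0, 0.2, 4.0, 0.5

a = np.array([[0.0, 1.0],
              [-k / m, -c / m]])          # drift matrix of Eq. (10)
b = np.array([[0.0], [np.sqrt(q) / m]])   # diffusion matrix

# Solve a c_X + c_X a^T + b b^T = 0, i.e. Lyapunov with Q = -b b^T
c_X = solve_continuous_lyapunov(a, -b @ b.T)

# Analytic check: var(Z) = q/(2ck), var(Zdot) = q/(2cm)
assert np.isclose(c_X[0, 0], q / (2 * c * k))
assert np.isclose(c_X[1, 1], q / (2 * c * m))
```

The parameter values are arbitrary; the same call applies unchanged to the full augmented system of equation (\ref{2eq9}).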
\subsection{Solution in the frequency domain}
\label{frequencydomain}
Taking the Fourier transform of equation (\ref{2eq2}) leads to
\begin{equation}
\bs{\hat{\mu}}_Z(\omega)=\bs{H}(\omega)\bs{G\hat{\mu}}_Y(\omega)
\label{2eq15}
\end{equation}
where the FRF (Frequency Response Function) $\bs{H}(\omega)$ is
\begin{equation}
\bs{H}(\omega)=[-\omega^2\bs{M}+\mathbbm{i}\omega\bs{C}+\bs{K}]^{-1},
\label{2eq16}
\end{equation}
$\mathbbm{i}$ denotes a complex unit, $\omega$ denotes angular frequency and $\hat{g}(\omega)$ denotes the Fourier transform of function $g(t)$. Assuming $\bs{\mu}_Y(t)$ in periodic form $\bs{\hat{\mu}}_Y(\omega)=\sum_{k=1}^{n_{\mathrm{harm}}}\sqrt{2\pi}/2\,\bs{r}_k[\exp\{\mathbbm{i}\varphi_k\}\delta(\omega-\bar{\omega}_k)+\exp\{-\mathbbm{i}\varphi_k\}\delta(\omega+\bar{\omega}_k)]$ where $\bs{r}_k$, $\varphi_k$ and $\bar{\omega}_k$ are amplitude, phase shift and angular frequency of the $k$-th harmonic, we obtain the mean response as a solution of the complex linear system of equations.
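A short sketch of evaluating the FRF of equation (\ref{2eq16}) numerically for an illustrative single-degree-of-freedom system; the parameter values are arbitrary:

```python
import numpy as np

def frf(omega, M, C, K):
    """Frequency Response Function H(omega) of Eq. (16)."""
    return np.linalg.inv(-omega**2 * M + 1j * omega * C + K)

# Illustrative SDOF system under a single harmonic r*cos(omega_bar*t)
M = np.array([[1.0]]); C = np.array([[0.3]]); K = np.array([[25.0]])
r, omega_bar = 2.0, 3.0
amp = frf(omega_bar, M, C, K) @ np.array([r])  # complex steady-state amplitude

# Sanity check: at omega = 0 the FRF reduces to the static flexibility K^{-1}
assert np.allclose(frf(0.0, M, C, K), np.linalg.inv(K))
```

The modulus and argument of the complex amplitude give the magnitude and phase shift of the steady-state mean response at that harmonic.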
Employing spectral decomposition of stationary random processes, the response variances are acquired in terms of spectral density matrices $\bs{S}_{\tilde{Y}\tilde{Y}}$ and $\bs{S}_{\tilde{Z}\tilde{Z}}$, $(S_{\tilde{Y}\tilde{Y}}(\omega))_{ii}=\hat{f}_{\tilde{Y}}(\omega)$, where $\hat{f}_{\tilde{Y}}(\omega)$ is a spectral density estimate of the centered forcing term
\begin{equation}
\hat{f}_{\tilde{Y}}(\omega)=\mathsf{E}\int_{-\infty}^\infty b(x-\omega)I_T(\omega)\,dx,
\label{2eq17}
\end{equation}
$b(x)$ is some weight function, \textit{cf} \citep{Andel}, and $I_{T}(\omega)$ denotes the corresponding periodogram
\begin{equation}
I_T(\omega)=\frac{1}{2\pi T}\left|\int_0^T \tilde{Y}(t)e^{-\mathbbm{i}t\omega}\,dt\right|^2, \qquad -\infty<\omega<\infty.
\label{2eq18}
\end{equation}
The diagonal form of $\bs{S}_{\tilde{Y}\tilde{Y}}$ suggests that we treat all input processes as independent. Knowing the spectral density matrix of the input vector stochastic process, we obtain the spectral density matrix of the output process according to \cite{Soong}
\begin{equation}
\bs{S}_{\tilde{Z}\tilde{Z}}(\omega)=\bs{H}(\omega)\bs{GS}_{\tilde{Y}\tilde{Y}}(\omega)\bs{G}^T\bs{H}^{\dagger}(\omega)
\label{2eq19}
\end{equation}
where $\bs{H}^{\dagger}(\omega)$ denotes a Hermitian transpose to $\bs{H}(\omega)$. The variance of the stationary scalar process $\tilde{Z}(t)$ with two-sided spectral density $f_{\tilde{Z}}(\omega)$ or with one-sided spectral density $g_{\tilde{Z}}(\omega)$ is evaluated as
\begin{equation}
\sigma_{\tilde{Z}}^2=\int_{-\infty}^\infty f_{\tilde{Z}}(\omega)\ d\omega=\int_{0}^\infty g_{\tilde{Z}}(\omega)\,d\omega
\label{2eq20}
\end{equation}
and the variance of time derivative $\dot{\tilde{Z}}(t)$
\begin{equation}
\sigma_{\dot{{\tilde{Z}}}}^2=\dot{\sigma}_{\tilde{Z}}^2=\int_{-\infty}^\infty \omega^2f_{\tilde{Z}}(\omega)\,d\omega=\int_{0}^\infty \omega^2g_{\tilde{Z}}(\omega)\,d\omega.
\label{2eq21}
\end{equation}
Analogous formulas hold for higher time derivatives, applying higher powers of the angular frequency $\omega$.
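To illustrate equation (\ref{2eq20}), the following sketch integrates a two-sided spectral density numerically and checks it against a case with a known variance, the Ornstein-Uhlenbeck (continuous $AR(1)$) spectrum; the chosen parameters are arbitrary:

```python
import numpy as np

# Two-sided spectral density of an Ornstein-Uhlenbeck (continuous AR(1))
# process: f(w) = q / (2*pi*(alpha^2 + w^2)); its variance is q/(2*alpha).
alpha, q = 1.5, 2.0
f = lambda w: q / (2.0 * np.pi * (alpha**2 + w**2))

w = np.linspace(-200.0, 200.0, 400001)
var_num = np.sum(f(w)) * (w[1] - w[0])   # Eq. (20) by a simple Riemann sum

assert np.isclose(var_num, q / (2.0 * alpha), rtol=1e-2)
```

The residual discrepancy comes from truncating the integration interval; the tails decay like $\omega^{-2}$, so widening the grid tightens the estimate.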
\subsection{Transformation to modal coordinates}
\label{modalsol}
It is desirable to reduce the system unknowns in equations (\ref{2eq1}), (\ref{2eq2}), (\ref{2eq14}) and (\ref{2eq19}) for MC, mean value, time domain and frequency domain solutions. The usual modal transform technique can be employed directly when a structure with proportional damping is loaded by active spectators only. Some problems arise when the passive part of the crowd is introduced. All matrices are extended according to the added degrees of freedom (dofs). Biodynamic models possess non-proportional damping, so complex eigenvectors should be used. Simultaneously, all these models have the first eigenfrequency close to 5 Hz (and the second eigenfrequency close to 8 Hz if using two-degrees-of-freedom models). This implies that the number of eigenvectors employed in the transform is considerably enlarged. Another possibility is partial modal transformation. The system matrices can be decomposed into four parts according to
\begin{equation}
\bs{A}=\left(\begin{array}{cc}
\bs{A}_{SS} & \bs{A}_{SH} \\
\bs{A}_{HS} & \bs{A}_{HH} \\
\end{array}\right)
\label{modal1}
\end{equation}
where $\bs{A}$ stands for either the mass, stiffness or damping matrix. Sub-matrix $\bs{A}_{SS}$ corresponds to the structure, $\bs{A}_{HH}$ to the passive crowd, and $\bs{A}_{SH}$ or $\bs{A}_{HS}$ represents the mutual interactions. Let us note that sub-matrix $\bs{A}_{HH}$ is diagonal or banded, and $\bs{A}_{SS}$ is sparse. Computing the eigenvectors corresponding to the empty structure, i.e. corresponding to $\bs{A}_{SS}$, arranging them in $\bs{\tilde{V}}$ and writing
\begin{equation}
\bs{V}=\left(\begin{array}{cc}
\bs{\tilde{V}} & \bs{0} \\
\bs{0} & \bs{I} \\
\end{array}\right),
\label{modal2}
\end{equation}
all equations can be partially transformed by $\bs{V}$, as in the standard procedure. Then the dofs inherent to a passive crowd are unchanged, but the structure response is described through several modal coordinates.
Concerning the Lyapunov equation (\ref{2eq14}) partially transformed into modal coordinates, we can further reduce the computational effort using some prior information. Let us assume that the system response corresponds to equation (\ref{2eq9}) with $p_i=2$. Splitting all matrices in (\ref{2eq14}) leads to
\begin{equation}
\left(\begin{array}{cc}
\bs{a}_{11} & \bs{a}_{12} \\
\bs{a}_{21} & \bs{a}_{22} \\
\end{array}\right)
\left(\begin{array}{cc}
\bs{c}_{11} & \bs{c}_{12} \\
\bs{c}_{21} & \bs{c}_{22} \\
\end{array}\right)
+
\left(\begin{array}{cc}
\bs{c}_{11} & \bs{c}_{12} \\
\bs{c}_{21} & \bs{c}_{22} \\
\end{array}\right)
\left(\begin{array}{cc}
\bs{a}^T_{11} & \bs{a}^T_{21} \\
\bs{a}^T_{12} & \bs{a}^T_{22} \\
\end{array}\right)
+
\left(\begin{array}{cc}
\bs{0} & \bs{0} \\
\bs{0} & \bs{Q}_{22} \\
\end{array}\right)
=\bs{0},
\label{modal3}
\end{equation}
where
\begin{equation}
\bs{a}_{11}=\left(\begin{array}{cc}
\bs{0} & \bs{I} \\
-[\bs{V}^T\bs{M}\bs{V}]^{-1}[\bs{V}^T\bs{K}\bs{V}] & -[\bs{V}^T\bs{M}\bs{V}]^{-1}[\bs{V}^T\bs{C}\bs{V}] \\
\end{array}\right),
\label{modal4}
\end{equation}
$\bs{a}_{21}=\bs{0}$ and the remaining sub-matrices have an obvious structure. The dropped subscript on the covariance matrix $\bs{c}$ emphasizes the partial transformation to modal coordinates. Since $\bs{bb}^T$, resp. $\bs{Q}_{22}$, is symmetric, in fact diagonal, the solution will also be symmetric, $\bs{c}_{12}=\bs{c}^T_{21}$. In the case of $AR(2)$ processes, sub-matrix $\bs{c}_{22}$ can be computed explicitly; a single $AR(2)$ process has uncorrelated state variables $S_1$, $S_2$ with $\mathsf{var}S_1=a_0^2/(2a_1a_2)$ and $\mathsf{var}S_2=a_0^2/(2a_1)$, based on moment equations; thus $\bs{c}_{22}$ is a diagonal matrix. These considerations reduce the system of four equations (\ref{modal3}), in expanded form, to the set of two coupled equations
\begin{eqnarray}
\bs{a}_{11}\bs{c}_{12}+\bs{c}_{12}\bs{a}^T_{22}+\bs{a}_{12}\bs{c}_{22} &=& \bs{0} \label{modal5}\\
\bs{a}_{11}\bs{c}_{11}+\bs{c}_{11}\bs{a}^T_{11}+\bs{c}_{12}\bs{a}^T_{12} + \bs{a}_{12}\bs{c}^T_{12} &=& \bs{0}\label{modal6}
\end{eqnarray}
for the unknowns $\bs{c}_{12}$ and $\bs{c}_{11}$. The set resembles Sylvester and Lyapunov equations, respectively, of reduced size. The backward transformation $\bs{\bar{c}}_X=\bs{V\bar{c}}\bs{V}^T$ gives the covariance matrix of the displacement or velocity vector; here $\bs{\bar{c}}$ denotes the appropriate sub-matrix of $\bs{c}$ storing the modal displacements or velocities, and $\bs{\bar{c}}_X$ then contains the nodal displacements or velocities.
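Equation (\ref{modal5}) has the standard Sylvester form $\bs{A}\bs{X}+\bs{X}\bs{B}=\bs{Q}$, which off-the-shelf solvers handle directly; a sketch with illustrative random stable matrices standing in for the sub-matrices (sizes and values are arbitrary):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)

# Illustrative stable matrices standing in for a11 and a22, and a
# random right-hand side standing in for a12 c22 in Eq. (modal5)
a11 = -np.eye(4) + 0.1 * rng.standard_normal((4, 4))
a22 = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))
rhs = rng.standard_normal((4, 3))

# solve_sylvester solves A X + X B = Q; here a11 c12 + c12 a22^T = -a12 c22
c12 = solve_sylvester(a11, a22.T, -rhs)

assert np.allclose(a11 @ c12 + c12 @ a22.T + rhs, 0.0)
```

With $\bs{c}_{12}$ in hand, equation (\ref{modal6}) is a reduced-size Lyapunov equation for $\bs{c}_{11}$ and can be solved by the same library.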
\subsection{Crossings of Gaussian processes}
\label{upcrossing}
One of the measures of system performance is level crossing. Under some circumstances, in a stationary case, it can be shown that up-crossing is directly connected with the reliability of the system. The $x$-up-crossing rate of Gaussian process $X(t)$ with non-stationary mean value $\mu(t)$ and stationary variance $\sigma^2$ is estimated as \citep{Soong}
\begin{equation}
\nu_x^+(t)=\frac{\dot{\sigma}}{\sigma}\left[\phi\left(\frac{\dot{\mu}(t)}{\dot{\sigma}}\right)+\frac{\dot{\mu}(t)}{\dot{\sigma}}\Phi\left(\frac{\dot{\mu}(t)}{\dot{\sigma}}\right)\right]\phi\left(\frac{x-\mu(t)}{\sigma}\right),
\label{2eq22}
\end{equation}
where $\nu_x^+(t)$ is the up-crossing rate of level $x$ at time $t$, $\phi(\alpha)=\frac{1}{\sqrt{2\pi}}\exp\{-\alpha^2/2\}$, $\Phi(u)=\int_{-\infty}^u\phi(\alpha)\,d\alpha$, $\sigma^2=\mathsf{var}\,X(t)$ and $\dot{\sigma}^2=\mathsf{var}\,\dot{X}(t)$. The total mean number of up-crossings in the time interval $[0,T]$ is computed according to
\begin{equation}
n_x^+(T)=\int_0^T\nu_x^+(t)\,dt.
\label{2eq23}
\end{equation}
The relations can be generalized to $D$-out-crossings of an $\mathbb{R}^d$-valued stochastic process, where $D$ is some set in $\mathbb{R}^d$.
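A direct transcription of equation (\ref{2eq22}), checked against the classical Rice formula to which it reduces for a stationary mean ($\dot{\mu}=0$); the parameter values are arbitrary:

```python
import numpy as np
from scipy.stats import norm

def upcrossing_rate(x, mu, mu_dot, sigma, sigma_dot):
    """x-up-crossing rate of Eq. (22) for a Gaussian process with a
    (possibly non-stationary) mean mu(t) and stationary variances."""
    r = mu_dot / sigma_dot
    return (sigma_dot / sigma
            * (norm.pdf(r) + r * norm.cdf(r))
            * norm.pdf((x - mu) / sigma))

# For mu_dot = 0 the formula reduces to the classical Rice rate:
# sigma_dot / (2*pi*sigma) * exp(-(x - mu)^2 / (2*sigma^2))
x, mu, sigma, sigma_dot = 1.0, 0.0, 0.5, 2.0
rice = sigma_dot / (2 * np.pi * sigma) * np.exp(-(x - mu)**2 / (2 * sigma**2))
assert np.isclose(upcrossing_rate(x, mu, 0.0, sigma, sigma_dot), rice)
```

The total mean number of up-crossings of equation (\ref{2eq23}) then follows by numerical quadrature of this rate over $[0,T]$.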
Another measure of system performance from the point of view of serviceability is the root mean square value estimated as
\begin{equation}
RMS=\sqrt{\frac{1}{T}\int_0^TX(t)^2\,dt}=\sqrt{\frac{1}{T}\int_0^T\mu_X(t)^2\,dt+\sigma^2},
\label{2eq24}
\end{equation}
where $\sigma^2$ denotes the stationary variance of centered process $\tilde{X}(t)$, and $\mu_X(t)$ its mean value. Analogous formulas are valid for velocity and acceleration.
\section{Applications to the response of grandstands}
\label{appToGrandstands}
As was noted in section \ref{sectintroduction}, an active spectator can be treated as a time-dependent force process. Figure \ref{3fig1} shows a single realization and the spectral density of a unit process, i.e. of the process with $G_H=1$, where $G_H$ denotes the weight of a spectator, for jumping frequency $\bar{f}=2.67$ Hz. Spectral densities were computed from (\ref{2eq17}) with the Parzen weight. The realization was generated according to \cite{Sim}.
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.7]{fig1a.pdf}}
\subfloat[]{\includegraphics[scale=0.7]{fig1b.pdf}}
\caption{Single time history (a) and power spectral density of 10~000 realizations (b) of forcing term $Y(t)$ generated according to \cite{Sim}.}
\label{3fig1}
\end{figure}
Since this function is highly periodic, we will search for the mean value in the form
\begin{equation}
\mu_Y(t)=\alpha_0+\sum_{k=1}^p\left[\alpha_k\cos(k\cdot2\pi\bar{f}t)+\beta_k\sin(k\cdot2\pi\bar{f}t)\right].
\label{3eq1}
\end{equation}
Then vector $\bs{\hat{\alpha}}$ of the estimated parameters $\hat{\alpha}_0,\hat{\alpha}_1,\dots,\hat{\alpha}_p,\hat{\beta}_1,\dots,\hat{\beta}_p$ can be found by the linear Least Squares Method as
\begin{equation}
\bs{\hat{\alpha}}=(\bs{\Phi}^T\bs{\Phi})^{-1}\bs{\Phi}^T{\bs{\bar{\mu}}_Y},
\label{3eq2}
\end{equation}
where
\begin{equation*}
\bs{\Phi}=
\left(\begin{array}{c c c c c c}
1 & \cos(2\pi\bar{f}t_1) & \sin(2\pi\bar{f}t_1) & \dots & \cos(p\cdot2\pi\bar{f}t_1) & \sin(p\cdot2\pi\bar{f}t_1)\\
1 & \cos(2\pi\bar{f}t_2) & \sin(2\pi\bar{f}t_2) & \dots & \cos(p\cdot2\pi\bar{f}t_2) & \sin(p\cdot2\pi\bar{f}t_2)\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
1 & \cos(2\pi\bar{f}t_n) & \sin(2\pi\bar{f}t_n) & \dots & \cos(p\cdot2\pi\bar{f}t_n) & \sin(p\cdot2\pi\bar{f}t_n)\\
\end{array}\right),
\end{equation*}
$t_1,\dots,t_n$ is a fine enough, equidistant partition of the time interval, and ${\bs{\bar{\mu}}_Y}=[\bar{\mu}_Y(t_1),\dots,\bar{\mu}_Y(t_n)]^T$, where
\begin{equation*}
\bar{\mu}_Y(t_i)=\frac{1}{N}\sum_{k=1}^NY_k(t_i),
\end{equation*}
is the mean over $N$ realizations $Y_k(t_i)$ at time instant $t_i$. Generating 10~000 trajectories provides the coefficients given in table \ref{3tab1}.
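The least-squares estimate (\ref{3eq2}) can be sketched as follows; the code accumulates the normal equations $\bs{\Phi}^T\bs{\Phi}\bs{\hat{\alpha}}=\bs{\Phi}^T\bs{\bar{\mu}}_Y$ row by row and solves them by Gaussian elimination. This is an illustrative pure-Python implementation with arbitrary names; in practice a library least-squares routine would be used instead.

```python
import math

def design_row(t, fbar, p):
    # One row of Phi: [1, cos, sin, ..., cos(p*..), sin(p*..)]
    row = [1.0]
    for k in range(1, p + 1):
        row += [math.cos(2 * math.pi * k * fbar * t),
                math.sin(2 * math.pi * k * fbar * t)]
    return row

def lstsq(ts, y, fbar, p):
    # Solve the normal equations (Phi^T Phi) a = Phi^T y
    m = 2 * p + 1
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for t, yi in zip(ts, y):
        r = design_row(t, fbar, p)
        for i in range(m):
            b[i] += r[i] * yi
            for j in range(m):
                A[i][j] += r[i] * r[j]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, m):
            f = A[k][col] / A[col][col]
            for c in range(col, m):
                A[k][c] -= f * A[col][c]
            b[k] -= f * b[col]
    # Back substitution
    a = [0.0] * m
    for i in range(m - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, m))) / A[i][i]
    return a
```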
\begin{table}
\centering
\caption{Coefficients $\bs{\hat{\alpha}}$ for an approximation of the mean value in equation (\ref{3eq1}) for $\bar{f}=2.67$ Hz.}
\begin{tabular}{|c|c|c|c|}\hline
$\hat{\alpha}_0$ & \multicolumn{3}{c|}{0.9958} \\\hline
$\hat{\alpha}_1$ & 0.2939 & $\hat{\beta}_1$ & 1.1170 \\
$\hat{\alpha}_2$ & -0.2471 & $\hat{\beta}_2$ & 0.0984 \\
$\hat{\alpha}_3$ & -0.0037 & $\hat{\beta}_3$ & -0.0153 \\
$\hat{\alpha}_4$ & -0.0008 & $\hat{\beta}_4$ & -0.0001 \\\hline
\end{tabular}
\label{3tab1}
\end{table}
A realization centered with the mean value according to equation (\ref{3eq1}) and the coefficients from table \ref{3tab1} is depicted in figure \ref{3fig2}, together with the spectral density and a normalized histogram, i.e.\ a histogram of the process $\bar{Y}(t)=[Y(t)-\mu(t)]/\sigma_Y=\tilde{Y}(t)/\sigma_Y$, where $\sigma_Y=\sqrt{\mathsf{var}Y(t)}=\sqrt{0.7627}$ is the stationary standard deviation.
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.7]{fig2a.pdf}}
\subfloat[]{\includegraphics[scale=0.7]{fig2b.pdf}}\\
\subfloat[]{\includegraphics[scale=0.7]{fig2c.pdf}}
\subfloat[]{\includegraphics[scale=0.7]{fig2d.pdf}}
\caption{Single centered time history of $\tilde{Y}(t)$ (a), the corresponding power spectral density of 10~000 realizations (b), a normalized histogram with standard normal density (c) and a normalized histogram of a non-unit process scaled with $G_H\mbox{[kN]}\sim \mathcal{N}(0.7709,0.0167)$ (d).}
\label{3fig2}
\end{figure}
The centered process resembles non-Gaussian colored noise. To be more precise, for $\bar{Y}(t)$ we have $\mu_{\bar{Y}}=0$ for the mean, $\mathsf{var}\bar{Y}=1$ for the variance, $\gamma_{3,\bar{Y}}=0.424$ for the coefficient of skewness, and $\gamma_{4,\bar{Y}}=4.076$ for the coefficient of kurtosis. For comparison, the standard Gaussian process has coefficients $0$, $1$, $0$ and $3$.
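The quoted sample coefficients can be computed from central moments; the sketch below is an illustrative pure-Python helper (biased sample moments, arbitrary names) returning the mean, variance, skewness and non-excess kurtosis.

```python
def standardized_moments(x):
    # Mean, variance, skewness g3, and (non-excess) kurtosis g4 of a sample
    n = len(x)
    m = sum(x) / n
    c = [v - m for v in x]
    var = sum(v * v for v in c) / n
    g3 = (sum(v ** 3 for v in c) / n) / var ** 1.5
    g4 = (sum(v ** 4 for v in c) / n) / var ** 2
    return m, var, g3, g4
```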
Let us briefly analyze the response of a harmonic oscillator with unit mass forced by jumping process $Y(t)$ with $\bar{f}=2.67$ Hz, employing MC to justify the normality assumptions. The coefficients of skewness and kurtosis of the state vector $\bs{X}(t)=[Z(t),\dot{Z}(t)]^T$ as functions of the oscillator eigenfrequency $\mbox{f}_1$ for two different values of viscous damping $\zeta$ are depicted in figure \ref{3fig3}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{fig3.pdf}
\caption{Coefficient of skewness $\gamma_3$ and coefficient of kurtosis $\gamma_4$ for the normalized displacement and velocity of the harmonic oscillator forced by the jumping process as functions of eigenfrequency $\mbox{f}_1$. Solid line - viscous damping $\zeta=0.001$; dash-dot line - $\zeta=0.07$; dashed line - corresponding values for the Gaussian random variable.}
\label{3fig3}
\end{figure}
Note that for the frequency range $0.5-7$ Hz the response is approximately Gaussian. As was expected, worse convergence is achieved for higher damping values, \textit{cf} the Rosenblatt theorem \citep{Grigoriu_nong}. Normalized histograms of the displacement for eigenfrequencies $\mbox{f}_1=4$ and $12$~Hz and both damping values are depicted in figure \ref{3fig4}. Other techniques can be applied for approximations of the response outside this frequency range, e.g.\ memoryless transformations of Brownian colored noise, but this lies beyond the scope of our paper. Based on heuristic arguments and the Central Limit Theorem, we can assume that the higher the number of active spectators, and the more complex the grandstand geometry, the more Gaussian the response will be.
\begin{figure}
\centering
\subfloat[$\mbox{f}_1=4$ Hz, $\zeta=0.001$]{\includegraphics[scale=0.7]{fig4a.pdf}}
\subfloat[$\mbox{f}_1=12$ Hz, $\zeta=0.001$]{\includegraphics[scale=0.7]{fig4b.pdf}}\\
\subfloat[$\mbox{f}_1=4$ Hz, $\zeta=0.07$]{\includegraphics[scale=0.7]{fig4c.pdf}}
\subfloat[$\mbox{f}_1=12$ Hz, $\zeta=0.07$]{\includegraphics[scale=0.7]{fig4d.pdf}}
\caption{Histograms of normalized displacement $\bar{Z}(t)$ with standard normal density based on 1000 MC realizations.}
\label{3fig4}
\end{figure}
A spectral density approximation of the forcing process for the frequency domain solution, figure \ref{3fig2} (b), cannot be further simplified. This is because the FRF of the structure has sharp peaks, and thus exact function values are needed. Any approximation employing indicator functions in the vicinities of significant harmonics preserving variance would be inaccurate.
However, we can employ filtered white-noise $AR(2)$ processes, which arise as the solution of the second-order It\^{o} equation
\begin{equation}
c_{2,i}\ddot{\hat{Y}}_i(t)+c_{3,i}\dot{\hat{Y}}_i(t)+c_{1,i}\hat{Y}_i(t)=W(t)
\label{3eq3}
\end{equation}
with spectral density
\begin{equation}
s_i(\omega)=\frac{1}{[c_{1,i}-c_{2,i}\omega^2]^2+(\omega c_{3,i})^2}
\label{3eq4}
\end{equation}
where $(c_{1,i},c_{2,i},c_{3,i})$ correspond to the stiffness, mass and damping of a harmonic oscillator. This function has a sharp peak positioned at $\mbox{f}_1=\sqrt{c_1/c_2}/2\pi$ when we neglect shifts due to damping effects. For a closer approximation, we assume $\tilde{Y}(t)\approx\sum_{i=1}^n\hat{Y}_i(t)$, where the $\hat{Y}_i(t)$ are mutually independent $AR(2)$ processes. Identification leads to a nonlinear optimization problem: find coefficients $c_{k,i}$, $k=1,2,3$ and $i=1,\dots,n$, that minimize the $L_2$ norm $||\bullet||_{L_2}$ of the difference
\begin{eqnarray}
&&e(\omega,\bs{c})=\hat{f}_{\tilde{Y}}(\omega)-\sum_{i=1}^n\frac{1}{[c_{1,i}-c_{2,i}\omega^2]^2+(\omega c_{3,i})^2}\label{3eq5}\\
&&\min_{\boldsymbol{c}=\{c_{1,1},\dots,c_{3,n}\}}||e(\omega,\bs{c})||_{L_2},\,\omega\in A\subset \mathbb{R}^+\mbox{ compact},\label{3eq6}
\end{eqnarray}
where $\hat{f}_{\tilde{Y}}$ denotes a spectral density estimate of the centered force term $\tilde{Y}(t)$. The problem can be solved by the Nonlinear Least Squares method, by Simulated Annealing, etc., with an easily estimated initial vector $\bs{c}_0$. Optimized coefficients for $n=6$ of the centered process $\tilde{Y}(t)$ are presented in table \ref{3tab2}, and the corresponding spectral density and spectral distribution function are presented in figure \ref{3fig5}. Note that the spectral density is two-sided, and only one half was integrated in the spectral distribution function; thus the variance indicated is $0.7627/2=0.3814$.
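The objective (\ref{3eq5})--(\ref{3eq6}) can be evaluated as follows; this illustrative pure-Python sketch (arbitrary names) computes the $AR(2)$ spectral density (\ref{3eq4}), the superposition of $n$ members, and a discrete $L_2$ error on a frequency grid.

```python
import math

def ar2_psd(omega, c1, c2, c3):
    # Spectral density of one AR(2) member, cf. s_i(omega)
    return 1.0 / ((c1 - c2 * omega ** 2) ** 2 + (omega * c3) ** 2)

def psd_sum(omega, coeffs):
    # Superposition of n mutually independent AR(2) members
    return sum(ar2_psd(omega, c1, c2, c3) for (c1, c2, c3) in coeffs)

def l2_error(omegas, target, coeffs):
    # Discrete L2 norm of e(omega, c) on a frequency grid
    return math.sqrt(sum((target(w) - psd_sum(w, coeffs)) ** 2 for w in omegas))
```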
The performance of the structure was briefly quantified in section \ref{upcrossing}, where the two expressions (\ref{2eq22}) and (\ref{2eq24}) depended on the response variance. Thus an alternative approach is to optimize the response variance directly across some eigenfrequency range of a harmonic oscillator. Such coefficients are summarized in table \ref{3tab2} with indices $\mathsf{var}$; the corresponding spectral density and distribution function are depicted in figure \ref{3fig5} for the frequency range $0.5-10$ Hz.
\begin{table}
\centering
\caption{Coefficients $c_{k,i}$, $k=1,2,3$ and $i=1,\dots,6$ of the six independent $AR(2)$ members used for approximation of the centered forcing term $\tilde{Y}(t)$ in frequency range $0.5-10$ Hz.}
\begin{tabular}{|c|c|c|c|c|}\hline
$i$ & $c_{1,i}$ & $c_{2,i}$ & $c_{3,i}$ & $\sqrt{c_{1,i}/c_{2,i}}/2\pi$ \\\hline
1 & 90.8657 & 0.3227 & 0.0148 & 2.67 \\
2 & 35.9464 & 0.1276 & 0.1167 & 2.67 \\
3 & 283.6701 & 0.2520 & 0.0118 & 5.34 \\
4 & 74.1544 & 0.0661 & 0.0737 & 5.33 \\
5 & 913.9890 & 1.1120 & 21.5186 & 4.56 \\
6 & 228.5270 & 0.0907 & 0.1576 & 7.99 \\\hline
1$_{\mathsf{var}}$ & 91.0909 & 0.3237 & 0.0076 & 2.67 \\
2$_{\mathsf{var}}$ & 40.3066 & 0.1430 & 0.0804 & 2.67 \\
3$_{\mathsf{var}}$ & 281.0462 & 0.2490 & 0.0209 & 5.35 \\
4$_{\mathsf{var}}$ & 83.7399 & 0.0746 & 0.0573 & 5.33 \\
5$_{\mathsf{var}}$ & 914.0015 & 0.9376 & 21.6921 & 4.97 \\
6$_{\mathsf{var}}$ & 228.7642 & 0.0908 & 0.1552 & 7.99 \\\hline
\end{tabular}
\label{3tab2}
\end{table}
\begin{figure}
\centering
\subfloat[spectral density]{\includegraphics[scale=0.7]{fig5a.pdf}}
\subfloat[spectral distribution function]{\includegraphics[scale=0.7]{fig5b.pdf}}
\caption{Spectral density and spectral distribution function of centered forcing $\tilde{Y}(t)$ and of its approximation $\sum_{i=1}^6\hat{Y}_i(t)$, where $\hat{Y}_i(t)$ are independent $AR(2)$ processes with the coefficients in table~\ref{3tab2} based on spectral and variance optimization.}
\label{3fig5}
\end{figure}
Let us briefly note the situation when the forcing process $Y(t)$ is not a unit process, i.e.\ $G_H\neq 1$. Since we limit our considerations to the Gaussian approximation, only the first two moments of $G_H$ apply. A deterministic weight corresponds to the singular case $\mathsf{var}G_H=0$. The forcing term then has the form $Y_G(t)=G_HY(t)$, with mean response $\mu_{Z,G}(t)=\mathsf{E}[G_H]\mu_Z(t)$ and stationary response variance $\sigma_{Z,G}^2=\mathsf{E}[G_H^2]\sigma_Z^2$, where $\mu_Z(t)$ and $\sigma_Z^2$ are the response mean and variance of the structure loaded by the unit forcing term $Y(t)$. However, we should be aware that even if $Y(t)$ were Gaussian, $Y_G(t)$, being the product of a random variable and a stochastic process, would not be Gaussian. To quantify the influence of such scaling, compare the histograms in figures \ref{3fig2} (c) and \ref{3fig2} (d), where $G_H$ has a normal distribution with mean value $0.7709$ and variance $0.0167$.
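The scaling relations above admit a one-line check via $\mathsf{E}[G_H^2]=\mathsf{var}G_H+(\mathsf{E}[G_H])^2$; the helper below is an illustrative sketch with arbitrary names.

```python
def scaled_response_moments(mean_G, var_G, mu_Z, var_Z):
    # Mean and stationary variance of the response to Y_G = G_H * Y:
    # E[G_H] * mu_Z and E[G_H^2] * var_Z, with E[G^2] = var G + (E G)^2
    EG2 = var_G + mean_G ** 2
    return mean_G * mu_Z, EG2 * var_Z
```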
\section{Numerical examples and comparison}
\label{examples}
In this section, the quality of the approximation in the time or frequency domain will be compared with MC simulation.
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.6]{fig6a.pdf} &
\includegraphics[scale=0.3]{fig6b.pdf} &
\includegraphics[scale=0.4]{fig6c.pdf} \\
\includegraphics[scale=0.5]{fig6d.pdf} &
\includegraphics[scale=0.5]{fig6e.pdf} &
\includegraphics[scale=0.5]{fig6f.pdf}
\end{tabular}
\caption{Tested structures, geometries with labeled points of interest and normed histograms of the displacement, structures occupied by an active crowd only.}
\label{4fig1}
\end{figure}
A total of four mechanical systems will be tested: a harmonic oscillator, a simply supported beam, a simple cantilever grandstand, and a realistic grandstand, with 1, 4, 72 and 630 positions for spectators, respectively. Geometries with centered normed response histograms for an active crowd only, based on MC simulation, are depicted in figure \ref{4fig1}, and several of the lowest eigenfrequencies corresponding to the vertical bending modes are presented in table \ref{4tab1}. Note also that the assumptions on convergence to the normal distribution are approximately fulfilled. All examples are artificial rather than realistic, so the computed responses would be unacceptable in practice.
\begin{table}
\centering
\caption{First eigenfrequencies corresponding to vertical bending modes [Hz], \textit{cf} figure \ref{4fig1}.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Structure & f$_1$ & f$_2$ & f$_3$ & f$_4$ & f$_5$ & f$_6$ & f$_7$ & f$_8$ \\\hline
beam & 7.5 & 22.5 & 40.3 & --- & --- & --- & --- & --- \\
cantilever & 5.4 & 7.0 & 8.2 & 25.1 & 28.4 & --- & --- & --- \\
grandstand & 2.5 & 2.6 & 3.8 & 4.5 & 4.8 & 4.9 & 5.4 & 6.0 \\\hline
\end{tabular}
\label{4tab1}
\end{table}
\subsection{Harmonic oscillator}
\label{harmosc}
Let us assume a harmonic oscillator with unit mass, two values of viscous damping $\zeta$ and variable stiffness. The mean value response is presented in figure \ref{4fig2}. The response was acquired by direct integration of (\ref{2eq2}), starting at $t=0.5$~s. The approximation utilizes equation (\ref{3eq1}) and the coefficients in table \ref{3tab1}. A comparison of the total mean up-crossings $n_x^+(T)$ in the time interval $[0,T]$, $T=160$~s, as a function of the oscillator eigenfrequency $\mbox{f}_1$ for two values of viscous damping $\zeta=0.001$ and $\zeta=0.07$ and for two fixed levels $x=0.002$ and $x=0.005$~m is depicted in figure \ref{4fig3}, employing formulas (\ref{2eq22}) and (\ref{2eq23}). The size of the time interval $T$ is based on heuristic considerations about the average length of musical compositions. The time domain solution is based on the sum of six independent $AR(2)$ processes with the coefficients in table \ref{3tab2} for the $\mathsf{var}$ optimization. The stationary response variances are computed according to formulas (\ref{modal5}), (\ref{modal6}) and (\ref{2eq14a}). The frequency domain approximation employs equations (\ref{2eq19}), (\ref{2eq20}) and (\ref{2eq21}). The spectral density estimate $\hat{f}_{\tilde{Y}}(\omega)$ of the centered input process has $307$ values over the frequency range $0-10$~Hz, with a variable division. Figure \ref{4fig3} shows that the results are roughly in agreement with MC in the frequency range $0.2-10$~Hz. Note that the total number of mean zero-up-crossings for a deterministic periodic function with frequency $2.67$ Hz is $160\cdot 2.67\approx 427$, \textit{cf} figure \ref{4fig3} (c) and (d), where distinct plateaux are found. The results for MC are based on $1000$ realizations $160$ s in length.
\begin{figure}
\centering
\includegraphics[scale=0.6]{fig7.pdf}
\caption{Response mean displacement in comparison with an approximation based on the first four harmonics for a harmonic oscillator, $f_1=5$ Hz, $\zeta=0.07$.}
\label{4fig2}
\end{figure}
\begin{figure}
\centering
\subfloat[$\zeta=0.001$]{\includegraphics[scale=0.7]{fig8a.pdf}}
\subfloat[$\zeta=0.001$]{\includegraphics[scale=0.7]{fig8b.pdf}}\\
\subfloat[$\zeta=0.07$]{\includegraphics[scale=0.7]{fig8c.pdf}}
\subfloat[$\zeta=0.07$]{\includegraphics[scale=0.7]{fig8d.pdf}}\\
\subfloat[$\zeta=0.001$]{\includegraphics[scale=0.7]{fig8e.pdf}}
\subfloat[$\zeta=0.07$]{\includegraphics[scale=0.7]{fig8f.pdf}}
\caption{Total mean up-crossings $n_x^+(T)$ and acceleration $RMS$ values as functions of $\mbox{f}_1$ for a harmonic oscillator, distinct levels $x$ and viscous damping $\zeta$, $T=160$ s.}
\label{4fig3}
\end{figure}
\subsection{A simply supported beam}
\label{beam}
The next example is a simply supported beam with Rayleigh damping $\zeta_1=0.05$ and $\zeta_2=0.08$ for the first two vertical modes and a total mass of $1700$~kg. Poor approximations are anticipated, since the structure is quite stiff with high eigenfrequencies, see table \ref{4tab1} and the results for the harmonic oscillator. Two cases are studied: a structure occupied by an active crowd only; and a structure occupied by a mixed crowd. In the second case, the two left hand side positions are loaded by forces and the two right hand side positions are occupied by passive spectators. Deterministic biodynamic models according to Coermann are used. These are single-degree-of-freedom oscillators with mass $86.2$~kg, stiffness $85.25$~kN/m and viscous damping $1.72$~kNs/m, eigenfrequency $5$~Hz, \textit{cf} \citep{Sachse}. Results are presented only for the labeled point in figure \ref{4fig1}. The total mean up-crossings for the first case, $n_x^+(160)$, as a function of level $x$ are depicted in figure \ref{4fig4} (a). The $RMS$ values for acceleration are $1.947$~m/s$^2$ for the MC solution, $2.135$~m/s$^2$ for the frequency domain solution, and $1.914$~m/s$^2$ for the time domain solution. All input processes are treated as independent, so matrix $\bs{S}_{\tilde{Y}\tilde{Y}}$ in equation (\ref{2eq19}) has nonzero entries only on the diagonal. The mean value response is depicted in figure \ref{4fig4} (d), with a single realization in (c). The total up-crossings for a mixed crowd are depicted in figure \ref{4fig4} (b), and the $RMS$ values are $1.048$~m/s$^2$ for the MC solution, $1.094$~m/s$^2$ for the frequency domain solution, and $0.990$~m/s$^2$ for the time domain solution. Let us also recall our assumption of a fixed spatial distribution of the crowd, with mass coefficient $\gamma=m_H/m_S=0.1$, where $m_H$ denotes the total mass of passive spectators, and $m_S$ denotes the total mass of the structure. The results for MC are based on $2000$ realizations.
\begin{figure}
\centering
\subfloat[active crowd only]{\includegraphics[scale=0.7]{fig9a.pdf}}
\subfloat[2 active, 2 passive spectators]{\includegraphics[scale=0.7]{fig9b.pdf}}\\
\subfloat[single realization]{\includegraphics[scale=0.7]{fig9c.pdf}}
\subfloat[response mean]{\includegraphics[scale=0.7]{fig9d.pdf}}
\caption{Total mean up-crossings $n_x^+(T)$ of a simply supported beam as functions of $x$, $T=160$~s for an active crowd (a) and for a mixed crowd (b), single realization for an active crowd (c) and mean response for an active crowd (d).}
\label{4fig4}
\end{figure}
The approximate shapes of the total up-crossings, especially for the mixed crowd, differ from the MC simulation because of the non-Gaussian response due to the high structural eigenfrequencies.
\subsection{Cantilever grandstand}
\label{cantilever}
This system has a total mass of $18.2$~t and the geometry according to figure \ref{4fig1}; in the first case it is loaded with 72 active spectators, and Rayleigh damping with $\zeta_1=0.05$ and $\zeta_2=0.08$ is used for the first two vertical modes. The total up-crossings of the response displacement are depicted in figure \ref{4fig5} (a); a single realization and the mean response are depicted in sub-figures (c) and (d). The $RMS$ accelerations are $5.583$~m/s$^2$ for the MC solution, $5.682$~m/s$^2$ for the frequency domain solution, and $5.616$~m/s$^2$ for the time domain solution. For 36 spectators chosen to be passive according to Coermann, with uniformly random but fixed positions, the resulting up-crossings are depicted in sub-figure (b); the mass coefficient is $\gamma=0.17$. The acceleration $RMS$ values in this case are $1.327$~m/s$^2$ for the MC solution, $1.347$~m/s$^2$ for the frequency domain solution, and $1.309$~m/s$^2$ for the time domain solution. The results for MC are again based on $2000$ realizations $160$ s in length.
\begin{figure}
\centering
\subfloat[active crowd only]{\includegraphics[scale=0.7]{fig10a.pdf}}
\subfloat[36 active, 36 passive spectators]{\includegraphics[scale=0.7]{fig10b.pdf}}\\
\subfloat[single realization]{\includegraphics[scale=0.7]{fig10c.pdf}}
\subfloat[response mean]{\includegraphics[scale=0.7]{fig10d.pdf}}
\caption{Total mean up-crossings of a cantilever grandstand, active crowd (a) and mixed crowd (b), single realization (c) and mean response (d) for an active crowd.}
\label{4fig5}
\end{figure}
\subsection{Realistic grandstand}
\label{grandstand}
In this concluding example, let us briefly examine the results acquired for a complex structure. The geometry is sketched in figure \ref{4fig1} with the point of interest labeled; the total mass is $148.6$~t, and Rayleigh damping with $\zeta_1=0.01$ for the first and $\zeta_2=0.02$ for the sixth vertical mode is used here. The results for the structure loaded by an active crowd only are depicted in figure \ref{4fig6} (a), (c) and (d). The acceleration $RMS$ values are $5.546$~m/s$^2$ for the MC solution, $5.565$~m/s$^2$ for the frequency domain solution, and $5.597$~m/s$^2$ for the time domain solution. In the second case, $315$ positions are occupied by passive spectators according to Coermann, with mass coefficient $\gamma=0.18$. The results for this case are presented in figure \ref{4fig6} (b); the $RMS$ accelerations are $1.905$~m/s$^2$ for the MC solution, $1.904$~m/s$^2$ for the frequency domain solution, and $1.919$~m/s$^2$ for the time domain solution. The MC simulation is based on $2000$ realizations.
\begin{figure}
\centering
\subfloat[active crowd only]{\includegraphics[scale=0.7]{fig11a.pdf}}
\subfloat[315 active, 315 passive spectators]{\includegraphics[scale=0.7]{fig11b.pdf}}\\
\subfloat[single realization]{\includegraphics[scale=0.7]{fig11c.pdf}}
\subfloat[response mean]{\includegraphics[scale=0.7]{fig11d.pdf}}
\caption{Total mean up-crossings of a realistic grandstand, active crowd (a), mixed crowd (b), single realization (c) and mean response (d) for an active crowd.}
\label{4fig6}
\end{figure}
\subsection{Comparison and performance}
\label{performance}
The time consumption for the different solution techniques is summarized in table \ref{4tab2}, where the demands of MC simulation are presented for 50 realizations only. This value is based on the convergence tests of total up-crossings presented in figures \ref{4fig7} (a) and (b) for the cantilever grandstand. Obviously, these tests are highly sensitive to the level $x$, and the value used can be considered a lower bound. The size of the time integration step is chosen as $h=0.01$~s, and the Newmark integration scheme is used. The number of dofs of each system is also given, together with the size $n_\mathrm{L}$ of the Lyapunov equation (\ref{2eq14}) for a mixed crowd. Obviously $n_\mathrm{L}=2n_{\mathrm{dof}}+2n_{\mathrm{p}}+12n_{\mathrm{a}}$, where $n_{\mathrm{dof}}$ denotes the number of dofs of the empty structure, $n_{\mathrm{p}}$ is the number of dofs of the passive spectators, and $n_{\mathrm{a}}$ is the number of active spectators. In the case of the cantilever grandstand we therefore have $2\cdot 504+2\cdot 36+12\cdot 36=1512$, since passive spectators are modeled as systems with a single degree of freedom. For the purposes of comparison, the time consumption for the partial modal transform is also summarized, together with the number of eigenvectors used, $n_{\mathrm{eig}}$, and the corresponding highest frequency f$_{\mathrm{eig}}$. All simulations were performed on a computer with an Intel Core i7 processor and 16~GB of RAM, using a parallel Matlab\textsuperscript{\textregistered} implementation.
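The bookkeeping for $n_\mathrm{L}$ can be verified against the entries of table \ref{4tab2}; the helper below is an illustrative sketch with an arbitrary name, where the factor of twelve per active spectator presumably corresponds to six $AR(2)$ members with two states each (an inference consistent with $n=6$ above).

```python
def lyapunov_size(n_dof, n_passive, n_active):
    # n_L = 2*n_dof + 2*n_p + 12*n_a: two states per structural dof,
    # two per passive (SDOF) spectator, twelve per active spectator
    return 2 * n_dof + 2 * n_passive + 12 * n_active
```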
\begin{table}
\centering
\caption{Comparison of computational demands.}
\begin{tabular}{|c|r|r|r|r|}\hline
Method/Struct. & \multicolumn{1}{c|}{Harm. osc.} & \multicolumn{1}{c|}{Beam} & \multicolumn{1}{c|}{Cant. grand.} & \multicolumn{1}{c|}{Real. grand.} \\\hline
\multicolumn{5}{|c|}{full system} \\\hline
$n_{\mathrm{dof}}$ & 1 & 29 & 504 & 4\ 068 \\
$n_\mathrm{L}$ & 14 & 86 & 1512 & 12\ 546 \\
MC 50 & 0.601 s & 20.078 s & 380.451 s & 5\ 484 s \\
Freq. domain & 0.106 s & 2.672 s & 74.639 s & 4\ 178 s \\
Time domain & 0.112 s & 2.392 s & 38.915 s & 6\ 507 s \\\hline
\multicolumn{5}{|c|}{partial modal transform, \textit{cf} equations (\ref{modal4}), (\ref{modal5}) and (\ref{modal6})} \\\hline
$n_{\mathrm{eig}}$ & -- & 7 & 10 & 50 \\
f$_{\mathrm{eig}}$ & -- & $37$ Hz & $30$ Hz & $23$ Hz \\
$n_\mathrm{L}$ & -- & 42 & 524 & 4\ 510 \\
MC 50 & -- & 13.240 s & 36.214 s & 307 s \\
Freq. domain & -- & 1.764 s & 4.452 s & 734 s \\
Time domain & -- & 1.588 s & 2.582 s & 67 s \\\hline
\end{tabular}
\label{4tab2}
\end{table}
\begin{figure}
\centering
\subfloat[$x=0.005$]{\includegraphics[scale=0.7]{fig12a.pdf}}
\subfloat[$x=0.006$]{\includegraphics[scale=0.7]{fig12b.pdf}}
\caption{MC convergence tests of $n_x^+(160)$ for a cantilever grandstand, \textit{cf} figure \ref{4fig5} (b).}
\label{4fig7}
\end{figure}
The results in the table suggest that when 50 realizations are employed, only structures of moderate size can be effectively solved by semi-analytical methods.
\section{Conclusions}
\label{concl}
This paper has presented a study of the vibration of grandstands loaded by an active crowd, using Gaussian approximation of the response. The main results can be summarized as follows:
\begin{enumerate}
\item A mathematical description of the response of a mechanical system employing spectral and time domain solutions for weakly stationary Gaussian excitations has been recalled. Partial modal transformation due to a passive crowd has been briefly discussed.
\item A motivating example of a harmonic oscillator has shown that the normalized displacement and velocity have approximately normal distribution under the conditions on eigenfrequencies and damping.
\item Taking this fact into account, the mean value of the force has been approximated as a truncated Fourier series, and the spectral density of the centered process has been estimated employing the Parzen window for the frequency domain solution. For the time domain solution, we have employed a linear combination of independent auto-regression processes of the second order with coefficients optimized to achieve the least error in the response variance of a harmonic oscillator.
\item Three different examples of varying complexity have shown the quality of the response approximation in terms of total displacement up-crossings and acceleration $RMS$ in comparison with Monte Carlo simulation. Limitations following from a simple oscillator have been confirmed on multi-degree-of-freedom systems.
\item The computational demands have been measured and summarized in terms of the time needed for solution, and the applicability of the techniques has been demonstrated.
\end{enumerate}
Finally, let us note that these methods are approximations and can be further refined. In particular, the solution can be reformulated for non-Gaussian processes for which higher moments can be derived; this would improve, for example, the mean up-crossing rate estimates for stiff structures.
\section*{Acknowledgement}
Financial support for this work from the Czech Technical University in Prague
under project No. SGS12/027/OHK1/1T/11 and from the Czech Science Foundation under project No. GAP105/11/1529 is gratefully acknowledged.
\section{Introduction}
Advances in imaging hardware, fabrication techniques, and computational methods have enabled novel camera design strategies that go beyond mimicking the human eye. \textit{Lensless imaging} is one such approach,
replacing the lens (and the required focusing distance) with a thinner and potentially inexpensive optical element and a computational image formation step~\cite{boominathan2022recent}.
A variety of applications in virtual/augmented reality, wearables, and robotics can benefit from the low-cost and compact form factor that the lensless imaging paradigm has to offer.
The optical element in such systems is typically a passive or programmable mask placed at a short distance from the sensor. The resulting measurements are highly multiplexed, as seen in \Cref{fig:celeba_0_diffuser_raw,fig:celeba_mls_raw_measurement,fig:mnist_test_0_diffuser_raw,fig:mls_digit_raw_measurement}, due to a system response, i.e.\ \emph{point spread function} (PSF), of large support unlike that of a lens. \Cref{fig:diffuser_psf_down4,fig:mls4mm_psf_diffracted} show the PSFs of typical lensless encoders, namely a caustic pattern of a height-varying phase mask~\cite{Antipa:18,phlatcam} and a diffracted coded aperture (CA) mask~\cite{Chi:11,10.1117/1.OE.54.2.023102,flatcam}.
\begin{figure}[t!]
\centering
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_0_diffuser_raw.png}
\caption{}
\label{fig:celeba_0_diffuser_raw}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_mls_raw_measurement.png}
\caption{}
\label{fig:celeba_mls_raw_measurement}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mnist_test_0_diffuser_raw.png}
\caption{}
\label{fig:mnist_test_0_diffuser_raw}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mls_digit_raw_measurement.png}
\caption{}
\label{fig:mls_digit_raw_measurement}
\end{subfigure}\\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/diffuser_psf_down4.png}
\caption{}
\label{fig:diffuser_psf_down4}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mls4mm_psf_diffracted.png}
\caption{}
\label{fig:mls4mm_psf_diffracted}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/diffuser_psf_down4.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mls4mm_psf_diffracted.png}
\caption{}
\end{subfigure}
\\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_0_diffuser_admm100.png}
\caption{}
\label{fig:celeba_0_diffuser_admm100}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_mls_down4_recon.png}
\caption{}
\label{fig:celeba_mls_down4_recon}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mnist_test_0_diffuser_admm50.png}
\caption{}
\label{fig:mnist_test_0_diffuser_admm50}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mls_admm_reconstruction.png}
\caption{}
\label{fig:mls_admm_reconstruction}
\end{subfigure}
\\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_diffuser_down_recon.png}
\caption{}
\label{fig:celeba_diffuser_down_recon}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/celeba_mls_down_recon.png}
\caption{}
\label{fig:celeba_mls_down_recon}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/recon_mnist0_down}
\caption{}
\label{fig:recon_mnist0_down}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figs/mnist_mls_down}
\caption{}
\label{fig:mnist_mls_down}
\end{subfigure}
\caption{Discerning content from lensless camera raw measurements (top row) is next to impossible, motivating privacy-preserving imaging with such cameras.
However, with sufficient knowledge about the camera (e.g.\ a point spread function, second row) and an appropriate computational algorithm, one is able to recover an estimate of the underlying object (third row, using ADMM~\cite{admm} and a total variation prior).
If the raw sensor measurement is under-sampled, it becomes increasingly difficult for classical recovery algorithms to recover a meaningful estimate of the underlying object. (Bottom row) The PSF and raw measurements are downsampled by the factor indicated in the top left corner, simulating a sensor of lower resolution prior to reconstruction.
}
\label{fig:raw2reconstruction}
\end{figure}
The majority of contributions in lensless imaging have focused on improving the computational methods to go from raw measurements to demultiplexed images, e.g.\ from \Cref{fig:celeba_0_diffuser_raw,fig:celeba_mls_raw_measurement,fig:mnist_test_0_diffuser_raw,fig:mls_digit_raw_measurement} to \Cref{fig:celeba_0_diffuser_admm100,fig:celeba_mls_down4_recon,fig:mnist_test_0_diffuser_admm50,fig:mls_admm_reconstruction}. Data-driven techniques and deep learning have had an influential role in this progress, yielding faster reconstruction times and improved reconstruction quality~\cite{Khan_2019_ICCV,Monakhova:19,Pan:22} with respect to classical techniques based on system inversion~\cite{10.1117/1.OE.54.2.023102,flatcam,sweepcam2020} and convex optimization~\cite{huang2013,Antipa:18,phlatcam}.
While machine learning advances have been readily incorporated in lensless imaging reconstruction and classification tasks~\cite{8590781,Pan:21}, the design of the optical element itself remains rather heuristic-based. Criteria such as sparsity, a large number of directional filters, high contrast, a delta-like autocorrelation, or designs that simplify the computational recovery~\cite{Chi:11,10.1117/1.OE.54.2.023102,flatcam,Antipa:18,phlatcam} have been used to tackle the task of PSF engineering, independent of the downstream task. The potential to \textit{jointly} optimize the optical encoding and the digital post-processing has been successfully demonstrated in other computational tasks, albeit with lenses, for extended depth-of-field~\cite{sitzmann2018,pinilla2022}, super-resolution~\cite{sitzmann2018}, classification~\cite{chang2018hybrid}, 3-D imaging~\cite{markley2021physicsbased,deb2022programmable}, and hyperspectral imaging~\cite{Vargas_2021_ICCV,9157577}.
In this paper, we apply end-to-end optimization to a lensless camera to jointly learn (1) a programmable mask pattern prior to the sensor measurement \textit{and} (2) the subsequent digital processing. Alongside the suitability of lensless cameras for compact, low-cost edge computing, privacy-preserving classification is one of the key motivations for this approach. The multiplexed measurements of lensless cameras have been touted to maintain visual privacy~\cite{boominathan2022recent,9021989,DBLP:journals/corr/abs-2106-14577,shi2022loen} as they contain hardly any perceivable features.
However, a malicious user with access to the camera can still recover an image of the underlying object with a few measurements and clever post-processing.
The objective of this work is to jointly optimize the optical encoding and the digital classifier in order to significantly reduce the size of the sensor ``embedding'' by exploiting this multiplexing characteristic. As the sensor resolution decreases, it becomes increasingly difficult for lensless imaging reconstruction techniques to recover a meaningful image, as demonstrated in \Cref{fig:celeba_diffuser_down_recon,fig:celeba_mls_down_recon,fig:recon_mnist0_down,fig:mnist_mls_down}.
Jointly optimizing this one-to-many mapping for a particular task, e.g.\ classification, has the potential to produce richer embeddings, much like digital encoders~\cite{doi:10.1126/science.1127647}, with a lower resolution sensor, all the while maintaining performance on the task at hand and enhancing visual privacy.
\paragraph{Contributions} In this work, we exploit multiplexing properties of lensless cameras in order to learn privacy-preserving embeddings by training the imaging system end-to-end. Concretely, we determine the optimal pattern for a programmable component prior to the sensor, i.e.\ an amplitude spatial light modulator (SLM), in order to perform image classification.
To the best of our knowledge, one recent work has applied end-to-end optimization for lensless imaging with passive masks~\cite{shi2022loen}; however, none has done so with programmable components. Using such components can help reduce model mismatch through hardware-in-the-loop (HITL) training~\cite{Peng:2020:NeuralHolography} or, equivalently, physics-aware training~\cite{wright2022deep}. Moreover, the re-programmability of an SLM means the end-to-end optimized camera does not have to be relegated to a single application or setting. It can be updated after deployment, conveniently reconfigured for a different task, or changed in response to a malicious user.
Our experiments on handwritten digit classification demonstrate the potential of significantly reducing the embedding at the sensor, as our end-to-end approach consistently performs better than lensless cameras with a fixed encoder. Moreover, we show that jointly learning the SLM pattern with the classification task is more robust to typical image transformations: shifting, rescaling, rotating, perspective changes. We are unaware of any other work that has studied the consequences of such effects on lensless imaging.
Our end-to-end approach is based upon an imaging system that can be put together from cheap and accessible components, totaling around $100$~USD. As an SLM, we use a low-cost liquid crystal display (LCD), as in~\cite{zomet2006,huang2013}, which costs about $20$~USD.
To the best of our knowledge, we are the first to employ such a device in an end-to-end optimization for computational optics, as opposed to commercial SLMs which cost a few thousand USD. Our differentiable digital twin of the imaging system models incoherent, polychromatic propagation with the selected LCD component, using the bandlimited angular spectrum method (BLAS)~\cite{Matsushima:09} to account for diffraction.
In order to foster reproducibility, we open source the following under the GNU General Public License v3.0: wave propagation simulation\footnote{\url{https://github.com/ebezzam/waveprop}} and training software.\footnote{\url{https://github.com/ebezzam/LenslessClassification}} Moreover, we have previously released a package to interface with the baseline and proposed cameras.\footnote{\url{https://github.com/LCAV/LenslessPiCam}}
\section{Problem statement}
\label{sec:problem}
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\linewidth]{figs/encoding_decoding_imaging_2.png}
\caption{Encoder-decoder perspective of cameras for end-to-end optimization. The scene could be four-dimensional (height, width, depth, and color) whereas the embedding measured at the sensor is at most three-dimensional (height, width, and color).}
\label{fig:enc_dec}
\end{figure}
End-to-end optimization of optical components, also known as \emph{deep optics}~\cite{wetzstein2020inference}, is a recent trend enabled by improved fabrication techniques and the continual development of more powerful and efficient hardware and libraries for machine learning. It is motivated by faster and cheaper inference for edge computing (taking advantage of the speed of light) and by the desire to co-design the optics and the computational algorithm to obtain optimal performance for a particular application.
An encoder-decoder perspective is often used to frame such end-to-end approaches, casting the optics as the encoder and the subsequent computational algorithm as the decoder, as shown in \Cref{fig:enc_dec}. This can be formulated as the following optimization problem over a labeled dataset $ \{\bm{x}_i, \bm{y}_i\}_{i = 1}^N $:
\begin{equation}
\hat{\bm{\theta}}_E, \hat{\bm{\theta}}_D = \argmin_{\bm{\theta}_E, \bm{\theta}_D} \sum_{i=1}^{N} \mathcal{L} \Big(\bm{y}_i, \underbrace{ D_{\bm{\theta}_E,\bm{\theta}_D} \big( \overbrace{O_{\bm{\theta}_E} ( \bm{x}_i)}^{\text{embedding } \bm{v}_i} \big)}_{\text{decoder output }\bm{\hat{y}}_i} \Big). \label{eq:optimization}
\end{equation}
$ O_{\bm{\theta}_E}(\cdot) $ is the optical encoder, including additive noise, that outputs the sensor embedding $ \bm{v}_i $ of an input $ \bm{x}_i $. The encoder encapsulates propagation in free space and through all optical components prior to the sensor. While this component can be simulated via a digital twin, the hardware itself can be used to produce physical realizations of $ \bm{v}_i $. Moreover, if the encoder parameters $\bm{\theta}_E$ of the physical system can be modified,
the device itself can be used for forward propagation, while a differentiable digital model backpropagates the error between the ground truth $ \bm{y}_i$ and the decoder output $ \bm{\hat{y}}_i $ computed from the physically measured $ \bm{v}_i $. This is the essence of HITL / physics-aware training. In some cases, the hardware can also be used for backpropagation~\cite{Zhou2020}.
$ D_{\bm{\theta}_E,\bm{\theta}_D}(\cdot) $ is the digital decoder, which can perform a variety of tasks: deblurring, denoising, image reconstruction, classification, etc. It has its own set of parameters $ \bm{\theta}_D $ and can optionally make use of the optical encoder parameters, e.g.\ for physics-based learning~\cite{markley2021physicsbased}. Its output is fed to a loss function $ \mathcal{L}(\cdot) $ along with the ground-truth output $ \bm{y}_i $.
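To make the joint optimization concrete, the sketch below runs \Cref{eq:optimization} on a toy problem: the optical encoder is crudely linearized as a matrix mapping a scene to a low-dimensional sensor embedding, and the decoder is multi-class logistic regression. All shapes, synthetic data, and hyperparameters are illustrative assumptions, not the paper's actual simulation.

```python
import numpy as np

# Toy joint optimization of encoder and decoder parameters by gradient
# descent on a mean cross-entropy loss. The "optics" are a linear map,
# which is only a stand-in for the true wave-propagation model.
rng = np.random.default_rng(0)
n, m, c, N = 64, 12, 3, 300              # scene dim, embedding dim, classes, samples
X = rng.normal(size=(N, n))
y = np.argmax(X @ rng.normal(size=(c, n)).T, axis=1)  # synthetic labels
Y = np.eye(c)[y]                         # one-hot targets

theta_E = 0.1 * rng.normal(size=(m, n))  # "optical" encoder parameters
theta_D = 0.1 * rng.normal(size=(c, m))  # digital decoder parameters

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(1000):
    V = X @ theta_E.T                    # sensor embeddings v_i
    P = softmax(V @ theta_D.T)           # decoder scores
    G = (P - Y) / N                      # d(mean cross-entropy)/d(logits)
    grad_D = G.T @ V                     # gradient w.r.t. the decoder...
    grad_E = (G @ theta_D).T @ X         # ...and, through it, the encoder
    theta_D -= lr * grad_D
    theta_E -= lr * grad_E

acc = (np.argmax((X @ theta_E.T) @ theta_D.T, axis=1) == y).mean()
```

Because the loss is backpropagated through the decoder into the encoder parameters, both are shaped by the downstream task, which is the essence of the encoder-decoder view above.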
In \Cref{sec:proposed} we present the hardware for our proposed lensless imaging system and how we model the digital twin for our optical encoder. In \Cref{sec:experiments}, as we explain our task, we present the architecture of our digital decoder, the loss function, and the labeled data $ \{\bm{x}_i, \bm{y}_i\}_{i = 1}^N $ for our experimental setup.
\section{Proposed solution for lensless classification}
\label{sec:proposed}
Our proposed camera design is motivated by the benefits of lensless cameras (compact, low-cost, privacy-preserving) and programmability.
To this end, a transmissive SLM serves as the only optical component in our encoder, specifically an off-the-shelf LCD driven by the ST7735R device, which can be purchased for $\$20$.\footnote{\url{https://www.adafruit.com/product/358}} It can be wired to a Raspberry Pi ($\$35$) with the Raspberry Pi High Quality $12.3$ MP Camera ($\$50$) as a sensor, bringing the total cost of our design to just $\$105$.
An experimental prototype of the proposed design with the aforementioned components can be seen in \Cref{fig:prototype_labeled}. The prototype includes an adjustable aperture and a stepper motor for programmatically setting the distance between the SLM and the sensor, both of which can be removed to produce a more compact design, similar to \Cref{fig:diffuser} of a lensless camera with a fixed diffuser.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]{figs/prototype_labeled.png}
\caption{Experimental prototype of programmable, amplitude SLM-based camera.}
\label{fig:prototype_labeled}
\end{figure}
\paragraph{Digital twin of optical encoder}
\label{sec:model}
End-to-end optimization requires a sufficiently accurate and differentiable simulation of the physical setup.
Our digital twin of the imaging system shown in Figure~\ref{fig:prototype_labeled} accounts for wave-based image formation for spatially incoherent, polychromatic illumination, as is typical of natural scenes. A simulation based on wave-optics is necessary to account for diffraction due to the small SLM features and for wavelength-dependent propagation.
We adopt a common assumption from Fourier optics, namely that image formation is a linear shift-invariant (LSI) system between two parallel planes for a given wavelength~\cite{Goodman2005}. This implies that there exists an impulse response, i.e.\ a PSF, that can be convolved with the \emph{scaled} scene in order to obtain its image at a given distance and for a specific wavelength. This convolution relationship is described in \Cref{sec:model_prop}.
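Under the LSI assumption, image formation for one wavelength reduces to a 2-D convolution of the scaled scene with the PSF. A minimal FFT-based sketch of this relationship is given below; the zero-padded linear convolution and ``same''-size crop are simplifying assumptions, not the exact boundary handling of our simulator.

```python
import numpy as np

def lsi_image(scene, psf):
    """Sketch of LSI image formation: the sensor image at one wavelength
    is the 2-D linear convolution of the (already scaled) scene with the
    system PSF, computed via zero-padded FFTs and cropped to 'same' size."""
    H, W = scene.shape
    h, w = psf.shape
    fh, fw = H + h - 1, W + w - 1        # full linear-convolution size
    S = np.fft.rfft2(scene, s=(fh, fw))
    P = np.fft.rfft2(psf, s=(fh, fw))
    full = np.fft.irfft2(S * P, s=(fh, fw))
    top, left = (h - 1) // 2, (w - 1) // 2
    return full[top:top + H, left:left + W]
```

A centered delta PSF leaves the scene unchanged, which is a quick sanity check that the convolution and crop are aligned.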
Therefore, our digital twin modeling amounts to obtaining a PSF that encapsulates propagation from a given plane in the scene to the sensor plane. There are two ways to obtain this PSF: measuring it with a physical setup or simulating it.
For end-to-end approaches, a differentiable simulator is typically necessary in order to backpropagate the error to update the optical encoder parameters. In \Cref{sec:psf_modeling} we describe our modeling of this PSF for an SLM placed at a short distance in front of the sensor, as is the case for our imaging device.
The learnable parameters $\bm{\theta}_E$ of our optical encoder with the ST7735R component include: the SLM pixel amplitude values $ \{w_k\}_{k=1}^{K} $ and the distance between the SLM and the image plane $ d_2 $.
While both can be optimized in an end-to-end fashion, in this work we concentrate on optimizing $ \{w_k\}_{k=1}^{K} $ jointly with the digital decoder parameters $\bm{\theta}_D$.
\section{Experiments}
\label{sec:experiments}
In this section, we apply our proposed camera and end-to-end optimization to handwritten digit classification (MNIST)~\cite{lecun1998mnist}.
\Cref{eq:optimization} can be slightly modified to
\begin{equation}
\label{eq:mnist}
\hat{\bm{\theta}}_E, \hat{\bm{\theta}}_D = \argmin_{\bm{\theta}_E, \bm{\theta}_D} \sum_{i=1}^{N} \mathcal{L} \Big( y_i, \underbrace{ D_{\bm{\theta}_D} \big( \overbrace{O_{\bm{\theta}_E} ( \bm{x}_i)}^{\text{embedding } \bm{v}_i} \big)}_{\text{decoder output }\bm{\hat{p}}_i} \Big),
\end{equation}
as our decoder $ D_{\bm{\theta}_D}(\cdot) $ does not require information from the encoder in order to classify digits. The original $ \{\bm{x}_i\}_{i = 1}^N $ coming from MNIST are $ (28\times 28) $ images of handwritten digits and $ \{y_i\}_{i = 1}^N $ are labels from $ 0 $ to $ 9 $. $ \{\bm{x}_i\}_{i = 1}^N $ are simulated and resized to the dimensions of the PSF, as described in \Cref{sec:simulation}, and the decoder outputs $ \{\bm{\hat{p}}_i\}_{i = 1}^N $ are length-$ 10 $ vectors of scores for each label.
We conduct two experiments for evaluating the effectiveness of jointly optimizing the optical encoder and the classification task:
\begin{enumerate}
\item \Cref{sec:vary_dimension}: reduce the dimension of the embedding $ \bm{v}_i $ at the sensor and study its impact on classification performance. A lower resolution embedding at the sensor corresponds to enhanced visual privacy, as demonstrated in the introduction with \Cref{fig:celeba_diffuser_down_recon,fig:celeba_mls_down_recon,fig:recon_mnist0_down,fig:mnist_mls_down}.
\item \Cref{sec:robustness}: apply common real-world image transformations (shifting, rescaling, rotating, perspective changes) to evaluate the robustness of the proposed camera and the end-to-end optimization to such deformations.
\end{enumerate}
$ D_{\bm{\theta}_D}(\cdot) $ takes on one of two architectures in our experiments: multi-class logistic regression or a two-layer fully-connected neural network (FCNN), which are detailed in \Cref{sec:logistic} and \Cref{sec:nn} respectively. In training both architectures, we use a cross entropy loss between the ground truth labels and the outputs of the decoder, and train for $ 50 $ epochs with a batch size of $ N = 200 $ and the Adam optimizer~\cite{adam}. More information on the training hyperparameters and compute hardware can be found in \Cref{sec:training_param}.
We use the provided train-test split of MNIST: $60'000$ training and $10'000$ test examples. Each example is simulated as per the approach described in \Cref{sec:simulation} for an object-to-camera distance of $ \SI{40}{\centi\meter} $, a signal-to-noise ratio of $ \SI{40}{\decibel} $, and an object height of \SI{12}{\centi\meter} (unless specified otherwise). We compare six imaging systems in our experiments, and for each camera, a PSF is needed to perform this simulation. Below is a brief description of each camera and how we obtain its PSF for an object-to-camera distance of $ \SI{40}{\centi\meter} $:
\begin{itemize}
\item \textit{Lens}: measured PSF for the camera shown in \Cref{fig:lensed_camera} with the lens focused at \SI{40}{\centi\meter}.
\item \textit{CA} (coded aperture): a binary mask is generated by taking the outer product of a maximum length sequence (MLS), as is done in~\cite{flatcam}. A simulation of its diffraction pattern is used as the PSF.
\item \textit{Diffuser}: measured PSF for the camera shown in \Cref{fig:diffuser}, where the diffuser is placed roughly \SI{4}{\milli\meter} from the sensor. The diffuser is double-sided tape as in the DiffuserCam tutorial~\cite{diffusercam_tut}. In~\cite{lenslesspicam}, the authors demonstrate the effectiveness of this simple diffuser for imaging when used with the Raspberry Pi High Quality Camera.
\item \textit{Fixed SLM (m)}: measured PSF for the proposed camera shown in \Cref{fig:prototype_labeled} for a randomly programmed pattern. The mask-to-sensor distance is programmatically set to \SI{4}{\milli\meter} via the stepper motor to match the distance of the diffuser-based camera.
\item \textit{Fixed SLM (s)}: simulated PSF for the proposed camera, using the approach described in \Cref{sec:psf_modeling} for a random set of SLM amplitude values and a mask-to-sensor distance of \SI{4}{\milli\meter}.
\item \textit{Learned SLM}: simulated PSF for the proposed camera that is obtained by optimizing \Cref{eq:mnist} for the SLM weights and then simulating the corresponding PSF using the approach described in \Cref{sec:psf_modeling} for a mask-to-sensor distance of \SI{4}{\milli\meter}. During training, the PSF changes at each batch as the SLM values are updated after backpropagation.
\end{itemize}
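To illustrate the \textit{CA} baseline above, the following sketch generates a maximum length sequence with a Fibonacci linear-feedback shift register (LFSR) and takes its outer product to form a separable binary mask. The bit-length and tap positions are one known primitive-polynomial choice and are illustrative assumptions; the diffraction simulation used to obtain the PSF from this mask is not reproduced here.

```python
import numpy as np

def mls(n_bits=4, taps=(4, 3)):
    """Maximum length sequence (period 2**n_bits - 1) from a Fibonacci
    LFSR. taps are 1-indexed register positions; (4, 3) corresponds to
    the primitive polynomial x^4 + x^3 + 1."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])                    # output the last register bit
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]                # shift in the feedback bit
    return np.array(seq)

# Separable binary coded-aperture mask as the outer product of an MLS
# with itself, in the spirit of the FlatCam-style baseline.
s = mls()
mask = np.outer(s, s)
```

An m-sequence of length $2^n - 1$ contains exactly $2^{n-1}$ ones, which is a quick check that the LFSR is maximal-length.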
More details such as the components for the measured PSFs and simulation details can be found in~\Cref{sec:baseline}.
For the fixed optical encoders, the embeddings $ \{ \bm{v}_i \}_{i=1}^{N} $ can be pre-computed with the approach described in \Cref{sec:simulation}. The resulting augmented dataset is normalized (according to the augmented training set statistics) prior to optimizing the classifier $ D_{\bm{\theta}_D}(\cdot) $. For \textit{Learned SLM}, we apply batch normalization~\cite{10.5555/3045118.3045167} and a ReLU activation to the sensor embedding prior to passing it to the classifier.
At inference, the parameters of batch normalization are fixed.
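The train/inference behavior of batch normalization on the sensor embedding can be sketched as follows; the momentum value and learnable scale/shift are standard choices, not necessarily the exact settings of our training pipeline.

```python
import numpy as np

class BatchNorm1d:
    """Minimal batch normalization: batch statistics during training,
    frozen running statistics at inference, with learnable scale (gamma)
    and shift (beta)."""
    def __init__(self, dim, momentum=0.1, eps=1e-5):
        self.gamma, self.beta = np.ones(dim), np.zeros(dim)
        self.run_mean, self.run_var = np.zeros(dim), np.ones(dim)
        self.momentum, self.eps = momentum, eps
        self.training = True

    def __call__(self, x):
        if self.training:
            mu, var = x.mean(0), x.var(0)        # batch statistics
            self.run_mean = (1 - self.momentum) * self.run_mean + self.momentum * mu
            self.run_var = (1 - self.momentum) * self.run_var + self.momentum * var
        else:
            mu, var = self.run_mean, self.run_var  # frozen at inference
        return self.gamma * (x - mu) / np.sqrt(var + self.eps) + self.beta
```

Once `training` is set to `False`, the layer becomes a deterministic, fixed affine transform of the embedding.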
\subsection{Varying embedding dimension}
\label{sec:vary_dimension}
\Cref{tab:mnist_vary_embedding} reports the best test accuracy for each optical encoder, for a varying sensor embedding dimension and for two digital classification architectures: logistic regression and two-layer FCNN. The test accuracy curves can be found in \Cref{sec:test_acc_dim}.
While all approaches decrease in performance as the embedding dimension reduces, \textit{Learned SLM} is the most resilient as quantified by \Cref{tab:performance_drop}.
The performance gap between \textit{Learned SLM} and fixed lensless encoders, as shown in \Cref{tab:mnist_vary_embedding}, decreases when a two-layer FCNN is used. However, the benefits of learning this multiplexing are still evident for a very low embedding dimension of $ (3 \times 4)$.
\begin{table}[b!]
\caption{MNIST classification accuracy on the simulated test set for varying embedding dimensions.}
\label{tab:mnist_vary_embedding}
\centering
\begin{tabular}{lcccc cccc}
\toprule
\textit{Classifier} $\rightarrow $ & \multicolumn{4}{c}{Logistic regression} & \multicolumn{4}{c}{Single hidden layer, 800 units} \\
\cmidrule(r){2-5} \cmidrule(r){6-9}
\textit{Embedding} $\rightarrow $ & 24$\times$32 & 12$\times$16 & 6$\times$8 & 3$\times$4 & 24$\times$32 & 12$\times$16 & 6$\times$8 & 3$\times$4\\
\textit{Encoder} $\downarrow $ & =768 & =192 & =48 & =12 & =768 & =192 & =48 & =12\\
\midrule
Lens & $92.3\%$ & $74.8\%$ & $42.8\%$ & \multicolumn{1}{c|}{$18.4\%$} & $97.7 \%$ & $83.0 \%$ & $41.8 \%$ & $18.8 \%$\\
CA & $74.1 \%$ & $74.2 \%$ & $64.4 \%$ & \multicolumn{1}{c|}{$59.1 \%$} & $97.3 \%$ & $96.3 \%$ & $91.0 \%$ & $69.9 \%$ \\
Diffuser & $91.0\%$ & $81.6\%$ & $72.6\%$ & \multicolumn{1}{c|}{$48.5\%$} & $95.8 \% $& $95.7 \%$ & $92.8 \%$ & $77.2\%$\\
Fixed SLM (m) & $92.7\%$ & $91.5\%$ & $82.1\%$ & \multicolumn{1}{c|}{$68.7\%$} & $97.2\%$ & $97.1\%$ & $95.9\%$ & $84.2\%$\\
Fixed SLM (s) & $92.6\%$ & $91.5\%$ & $84.9\%$ & \multicolumn{1}{c|}{$65.8\%$ } & $97.3\%$& $97.4\%$ & $95.9\%$ & $86.4\%$\\
Learned SLM & $\bm{94.2\%}$ & $\bm{92.9\%}$ & $\bm{91.9\%}$ & \multicolumn{1}{c|}{$\bm{83.0\%}$} & $\bm{97.9\%}$ & $\bm{97.7\%}$ & $\bm{96.6\%}$ & $\bm{90.3\%}$\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\caption{Relative drop in performance due to embedding compression from $ (24 \times 32) $ to $ (3 \times 4) $.}
\label{tab:performance_drop}
\centering
\begin{tabular}{lcc}
\toprule
& \makecell{Logistic \\regression} & \makecell{Single hidden \\layer, 800 units} \\
\midrule
Lens & \SI{80.0}{\percent} & \SI{80.8}{\percent} \\
CA & \SI{20.2}{\percent} & \SI{28.2}{\percent} \\
Diffuser & \SI{46.7}{\percent} & \SI{19.4}{\percent} \\
Fixed SLM (m)& \SI{25.9}{\percent} & \SI{13.4}{\percent} \\
Fixed SLM (s) & \SI{28.9}{\percent} & \SI{11.2}{\percent} \\
Learned SLM & $\bm{11.9\%}$ & $\bm{8.42\%}$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t!]
\begingroup
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{0.2em}
\begin{tabular}{cccccc}
\\
& PSF & 24$\times$32 & 12$\times$16 & 6$\times$8 & 3$\times$4 \\
Lens
& \includegraphics[width=0.16\linewidth,valign=m]{figs/lens_zoom_section.png} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_lens768.png} &\includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_lens192} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_lens48} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_lens12}\\[30pt]
\makecell{Coded \\aperture\\\cite{flatcam}}
& \includegraphics[width=0.16\linewidth,valign=m]{figs/coded_aperture_psf.png} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_ca768.png} &\includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_ca192} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_ca48} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_ca12}\\[30pt]
\makecell{Diffuser~\cite{lenslesspicam}}
& \includegraphics[width=0.16\linewidth,valign=m]{figs/diffuser_psf.png} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_diff768.png} &\includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_diff192} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_diff48} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_diff12}\\[30pt]
\makecell{Fixed\\SLM\\(m)}
& \includegraphics[width=0.16\linewidth,valign=m]{figs/adafruit_psf.png} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_fixslm768.png} &\includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_fixslm192} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_fixslm48} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_fixslm12}\\[30pt]
\makecell{Learned\\SLM}
& \includegraphics[width=0.16\linewidth,valign=m]{figs/learned_psf_in12_singlehidden.png} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_learn768.png} &\includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_learn192} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_learn48} & \includegraphics[width=0.16\linewidth,valign=m]{figs/mnist_example3_learn12}\\[30pt]
\end{tabular}
\endgroup
\caption{PSFs of the baseline and proposed camera systems, and example sensor embeddings that are simulated with the approach described in \Cref{sec:simulation}. \textit{Lens}, \textit{Diffuser}, and \textit{Fixed SLM (m)} used measured PSFs while \textit{Coded aperture} and \textit{Learned SLM} used simulated ones. \textit{Fixed SLM (s)} is not shown as it is similar to \textit{Fixed SLM (m)}. The PSF for \textit{Learned SLM} is unique per model and embedding dimension. The one shown above was optimized for an embedding dimension of $ (3\times 4) $ and a two-layer fully connected neural network. All of the learned PSFs can be seen in \Cref{sec:learned_psfs}.}
\label{tab:mnist_examples}
\end{figure}
Moreover, the benefits of lensless multiplexing (for both fixed and learned encoders) can be clearly observed as the sensor dimension decreases and the \textit{Lens}' performance deteriorates. \Cref{tab:mnist_examples} provides some insight into this as downsampled measurements of \textit{Lens} can consist of a single pixel. On the other hand, the multiplexing property of lensless cameras leads to much richer measurements, for both \textit{Learned SLM} and the fixed lensless encoders. In \Cref{sec:low_res_recon}, we show example reconstructions for the varying embedding dimensions to show how visual privacy is enhanced by these lower resolution sensor embeddings even with knowledge of the PSF.
The PSFs corresponding to \textit{Learned SLM} for the different embedding dimensions and the two classifiers can be found in \Cref{sec:learned_psfs_vary}. It is interesting to note the PSFs for the embedding dimension of $ (3 \times 4)$ (also visible in the last row of \Cref{tab:mnist_examples}). They resemble the small kernels of convolutional neural networks (CNNs), which have motivated the design of amplitude masks in other end-to-end optimization tasks~\cite{chang2018hybrid,shi2022loen}. In our optimization, these small kernels were not set as an explicit constraint, but the resizing to a $ (3 \times 4)$ sensor embedding may explain why we observe $ 12 $ equally-spaced sub-masks.
\subsection{Robustness to common image transformations}
\label{sec:robustness}
The MNIST dataset is size-normalized and centered, making it ideal for training and testing image classification systems, but it is not representative of how images may be taken in the wild. In this experiment, we evaluate the robustness of lensless encoders to common image transformations, namely we study the effects of:
\begin{itemize}
\item \textit{Shift}: while maintaining an object height of \SI{12}{\centi\meter} and an object-to-camera distance of \SI{40}{\centi\meter}, shift the image in any direction along the object plane such that it is still fully captured by the sensor.
\item \textit{Rescale}: while maintaining an object-to-camera distance of \SI{40}{\centi\meter}, set a random height uniformly drawn from $[ \SI{2}{\centi\meter}, \SI{20}{\centi\meter} ]$.
\item \textit{Rotate}: while maintaining an object height of \SI{12}{\centi\meter} and an object-to-camera distance of \SI{40}{\centi\meter}, uniformly draw a rotation angle from $ [\SI{-90}{\degree}, \SI{90}{\degree}]$.
\item \textit{Perspective}: while maintaining an object height of \SI{12}{\centi\meter} and an object-to-camera distance of \SI{40}{\centi\meter}, perform a random perspective transformation via PyTorch's \texttt{RandomPerspective} with \SI{100}{\percent} probability and a distortion factor of $ 0.5 $.\footnote{\texttt{RandomPerspective} documentation: \url{https://pytorch.org/vision/main/generated/torchvision.transforms.RandomPerspective.html}}
\end{itemize}
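The first three transformations above can be approximated with a single inverse-mapped affine warp. The sketch below uses nearest-neighbor sampling as a crude stand-in for the PyTorch/torchvision transforms used in our pipeline; the perspective transformation is omitted, and the parameterization is an illustrative assumption.

```python
import numpy as np

def affine_nn(img, angle=0.0, scale=1.0, shift=(0, 0)):
    """Shift/rescale/rotate an image about its center via inverse
    coordinate mapping with nearest-neighbor sampling. angle is in
    radians; shift=(dy, dx) moves the object down/right; out-of-bounds
    pixels are filled with zeros."""
    H, W = img.shape
    cy, cx = (H - 1) / 2, (W - 1) / 2
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # inverse mapping: for each output pixel, find its source location
    ys, xs = ys - cy - shift[0], xs - cx - shift[1]
    c, s = np.cos(-angle), np.sin(-angle)
    src_y = (c * ys - s * xs) / scale + cy
    src_x = (s * ys + c * xs) / scale + cx
    iy, ix = np.rint(src_y).astype(int), np.rint(src_x).astype(int)
    valid = (iy >= 0) & (iy < H) & (ix >= 0) & (ix < W)
    out = np.zeros_like(img)
    out[valid] = img[iy[valid], ix[valid]]
    return out
```

Drawing `angle`, `scale`, and `shift` from the ranges listed above would then yield one randomly transformed example per draw.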
Both the train and test set of MNIST are augmented with the approach described in \Cref{sec:simulation} along with each of the above image transformations, i.e.\ one new dataset per transformation. The same image transformation distribution is used in both training and testing. An illustration of the various image transformations for each camera can be found in \Cref{sec:viz_transformation}.
\begin{table}[t!]
\caption{MNIST classification accuracy on the simulated test set under random image transformations.}
\label{tab:robustness}
\centering
\begin{tabular}{clccccc}
\toprule
\makecell{\textit{Embedding} \\ \textit{dimension} $\downarrow $} &\textit{Encoder} $\downarrow $& Original & Shift & Rescale & Rotate & Perspective \\
\midrule
\multirow{6}{*}{$ (24 \times 32) = 768 $} & Lens&\SI{97.7}{\percent} & $\bm{84.4\%}$ & \SI{84.8}{\percent} & \SI{94.4}{\percent} & \SI{83.0}{\percent} \\
& CA & \SI{97.3}{\percent} & \SI{22.8}{\percent} &\SI{82.1}{\percent} & \SI{90.9}{\percent} & \SI{31.6}{\percent} \\
& Diffuser &\SI{95.8}{\percent}& \SI{42.8}{\percent} &\SI{77.8}{\percent} & \SI{89.7}{\percent} & \SI{63.2}{\percent}\\
& Fixed SLM (m) &\SI{97.2}{\percent}& \SI{50.9}{\percent} & \SI{83.0}{\percent} & \SI{93.2}{\percent} & \SI{77.4}{\percent} \\
&Fixed SLM (s) & \SI{97.3}{\percent}&\SI{48.8}{\percent} & \SI{84.3}{\percent} & \SI{93.7}{\percent} & \SI{78.0}{\percent} \\
& Learned SLM &$\bm{97.9\%}$ & $71.7\%$ & $\bm{92.0\%}$ & $\bm{95.0\%}$& $\bm{83.7\%}$\\
\midrule
\multirow{5}{*}{$ (6 \times 8) = 48 $} & CA & \SI{91.0}{\percent} & \SI{14.9}{\percent} & \SI{71.8}{\percent} & \SI{76.8}{\percent} & \SI{20.1}{\percent}\\
& Diffuser & \SI{92.8}{\percent} &\SI{26.7}{\percent} & \SI{69.7}{\percent} & \SI{84.9}{\percent} & \SI{43.9}{\percent} \\
& Fixed SLM (m) &\SI{95.9}{\percent} &\SI{29.5}{\percent} & \SI{78.3}{\percent} & \SI{89.3}{\percent} & \SI{58.7}{\percent} \\
& Fixed SLM (s) &\SI{95.9}{\percent} & \SI{29.0}{\percent} & \SI{77.9}{\percent} & \SI{89.7}{\percent} & \SI{57.8}{\percent} \\
& Learned SLM & $\bm{96.6}\%$ & $\bm{59.3}\%$ & $\bm{88.4\%}$ & $\bm{93.1}\%$ & $\bm{73.4}\%$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\caption{Relative drop in performance due to image transformations.}
\label{tab:performance_drop_trans}
\centering
\begin{tabular}{clcccc}
\toprule
\makecell{\textit{Embedding} \\ \textit{dimension} $\downarrow $} &\textit{Encoder} $\downarrow $ & Shift & Rescale & Rotate & Perspective \\
\midrule
\multirow{6}{*}{$ (24 \times 32) = 768 $} & Lens& $\bm{13.6\%}$ & \SI{13.2}{\percent} & \SI{3.38}{\percent} & \SI{15.0}{\percent} \\
& CA & \SI{76.6}{\percent}& \SI{15.6}{\percent} &\SI{6.58}{\percent} &\SI{67.5}{\percent}\\
& Diffuser & \SI{55.3}{\percent}& \SI{18.8}{\percent}&\SI{6.37}{\percent} &\SI{34.0}{\percent} \\
& Fixed SLM (m)&\SI{47.6}{\percent} & \SI{14.6}{\percent} &\SI{4.12}{\percent} & \SI{20.4}{\percent}\\
&Fixed SLM (s)& \SI{49.8}{\percent}& \SI{13.4}{\percent} &\SI{3.70}{\percent} &\SI{19.8}{\percent}\\
& Learned SLM &\SI{26.8}{\percent} & $\bm{6.03\%}$ & $\bm{2.96\%}$& $\bm{14.5\%}$\\
\midrule
\multirow{5}{*}{$ (6 \times 8) = 48 $} & CA & \SI{83.6}{\percent} &\SI{21.1}{\percent} &\SI{15.6}{\percent} & \SI{77.9}{\percent}\\
& Diffuser &\SI{71.2}{\percent} & \SI{24.9}{\percent} & \SI{8.51}{\percent} & \SI{52.7}{\percent} \\
& Fixed SLM (m)& \SI{69.2}{\percent} & \SI{18.4}{\percent} & \SI{6.88}{\percent} & \SI{38.8}{\percent} \\
&Fixed SLM (s) & \SI{69.8}{\percent} & \SI{18.8}{\percent} & \SI{6.47}{\percent} & \SI{39.7}{\percent} \\
& Learned SLM &$\bm{38.6\%}$ & $\bm{8.49\%}$& $\bm{3.62\%}$ & $\bm{24.0\%}$ \\
\bottomrule
\end{tabular}
\end{table}
\Cref{tab:robustness} reports the best test accuracy for each optical encoder and for each image transformation when using a two-layer FCNN classifier (see \Cref{sec:nn} for the architecture). In the top half of the table, we evaluate the impact of each image transformation for an embedding dimension of $ (24 \times 32) $ to see how lensless imaging techniques fare against a lensed camera. The main difficulty for lensless cameras is shifting as the whole sensor no longer captures multiplexed information, as shown in \Cref{sec:shift_effect}. This leads to a significant reduction in classification accuracy for all lensless approaches. \textit{Learned SLM} is able to cope with shifting much better than the fixed encoding strategies for lensless imaging, most likely because it is able to adapt its multiplexing for such perturbations. For the remaining image transformations, \textit{Learned SLM} is able to outperform the lensed camera, while all of the fixed lensless encodings exhibit worse performance than the lensed camera.
In the bottom half of \Cref{tab:robustness}, we evaluate the impact of each image transformation for an embedding dimension of $ (6 \times 8) $, for which all lensless approaches exhibited satisfactory performance (above $ 90\% $) on the original dataset and more ``protection'' against post-processing recovery, as shown in \Cref{sec:low_res_recon}. We do not consider \textit{Lens} for this embedding dimension as it performed poorly for the simulated dataset without any transformations. Once again, \textit{Learned SLM} is more robust to image transformations, in particular shifts and perspective changes. \Cref{tab:performance_drop_trans} quantifies the reduction in classification performance due to each of the image transformations, with \textit{Learned SLM} being the least affected among the lensless imaging approaches.
\Cref{fig:learned_masks_robust} shows the PSFs corresponding to the \textit{Learned SLM} masks for the two embedding dimensions and the various image transformations. It is worth noting that the masks for the embedding dimension of $ (6 \times 8) $ trained with image transformations are denser than the mask that was obtained without image transformations (\Cref{fig:learned_psf_in48_singlehidden}). This may be a result of a need for more degrees-of-freedom to account for the higher complexity in the input space due to these distortions.
\section{Conclusion}
\label{sec:conclusion}
We have introduced a low-cost and programmable lensless imaging system that can produce robust, privacy-preserving classification results. This is achieved through an end-to-end training that jointly optimizes (1) an optical encoding that produces highly multiplexed embeddings directly at the sensor and (2) the architecture that classifies these privacy-preserving measurements. Our experiments on handwritten digit classification show that the proposed design and training strategy outperforms lensless systems that employ a fixed optical encoding, and is more resilient to common real-world image transformations. Moreover, jointly training the optical encoder and the digital decoder allows one to reduce the sensor resolution, further enhancing the visual privacy of the measurements.
Adding to the security of the camera is the ability to re-configure the optical encoder if a malicious user obtains information that can be used to decode the sensor embeddings.
For future work, we plan to exploit the programmability of our proposed camera with real-world data, namely by employing hardware-in-the-loop (HITL) techniques, as these have been shown to reduce model mismatch~\cite{Peng:2020:NeuralHolography,wright2022deep}. Moreover, a programmable mask allows for time-multiplexed measurements that can be used as additional features for an imaging or classification task~\cite{sweepcam2020,Vargas_2021_ICCV}.
\paragraph{Limitations} Training end-to-end is expensive due to the simulation of optical wave propagation.
This could be alleviated by using the hardware itself to perform the forward propagation, as is done in HITL, but this approach comes with its own limitations as forward propagation cannot be parallelized for a batch of training examples.
Relying on physical devices for computation can also have drawbacks. They are more susceptible to degradation (due to usage and over time) than purely digital computations. Moreover, device tolerances can lead to unwanted differences between two seemingly identical setups. Such differences may be more prominent for low-cost components such as the cheap LCD used in this paper, as opposed to commercial SLMs.
\section*{Acknowledgments and disclosure of funding}
We thank Sepand Kashani for his input and insight at the initial stages of the project, Julien Fageot and Karen Adam for their feedback and discussions, and Arnaud Latty and Adrien Hoffet for their help in building the experimental prototype.
This work was in part funded by the Swiss National Science Foundation (SNSF)
under grants CRSII5 193826 ``AstroSignals - A New Window on the Universe, with
the New Generation of Large Radio-Astronomy Facilities'' (M.~Simeoni) and
200 021 181 978/1 ``SESAM - Sensing and Sampling: Theory and Algorithms''
(E.~Bezzam).
{
\small
\bibliographystyle{plain}
\def\Perm{\operatorname{Perm}}
\def\Cov{\operatorname{Cov}}
\def\CSep{\operatorname{CSep}}
\def\Sep{\operatorname{Sep}}
\def\Ob{\operatorname{Ob}}
\def\ind{\operatorname{ind}}
\def\sep{\operatorname{sep}}
\def\Br{\operatorname{Br}}
\def\dBr{\operatorname{dBr}}
\def\rightarrow{\rightarrow}
\let\oldmarginpar\marginpar
\def\marginpar#1{\oldmarginpar{\tiny #1}}
\def\DPic{\operatorname{DPic}}
\def\Pic{\operatorname{Pic}}
\def\pr{\operatorname{pr}}
\def\mathbin{\overset{L}{\otimes}}{\mathbin{\overset{L}{\otimes}}}
\def\cosk{\operatorname{cosk}}
\def\rk{\operatorname{rk}}
\def\Supp{\operatorname{Supp}}
\def\Spec{\operatorname{Spec}}
\def\NMot{\mathrm{NMot}}
\begin{document}
\title[Schur-finiteness (and Bass-finiteness) conjecture]{Schur-finiteness (and Bass-finiteness) conjecture \\ for quadric fibrations and \\for families of sextic du Val del Pezzo surfaces}
\author{Gon{\c c}alo~Tabuada}
\address{Gon{\c c}alo Tabuada, Department of Mathematics, MIT, Cambridge, MA 02139, USA}
\email{tabuada@math.mit.edu}
\urladdr{http://math.mit.edu/~tabuada}
\thanks{The author was supported by an NSF CAREER Award}
\subjclass[2010]{14A20, 14A22, 14C15, 14D06, 16H05, 19E08}
\date{\today}
\abstract{Let $Q \to B$ be a quadric fibration and $T \to B$ a family of sextic du Val del Pezzo surfaces. Making use of the recent theory of noncommutative mixed motives, we establish a precise relation between the Schur-finiteness conjecture for $Q$, resp. for $T$, and the Schur-finiteness conjecture for $B$. As an application, we prove the Schur-finiteness conjecture for $Q$, resp. for $T$, when $B$ is low-dimensional. Along the way, we obtain a proof of the Schur-finiteness conjecture for smooth complete intersections of two or three quadric hypersurfaces. Finally, we prove similar results for the Bass-finiteness conjecture.}
\maketitle
\section{Introduction}
\subsection*{Schur-finiteness conjecture}
Let ${\mathcal C}$ be a $\mathbb{Q}$-linear, idempotent complete, symmetric monoidal category. Given a partition $\lambda$ of an integer $n\geq 1$, consider the corresponding $\mathbb{Q}$-linear representation $V_\lambda$ of the symmetric group $\mathfrak{S}_n$ and the associated idempotent $e_\lambda \in \mathbb{Q}[\mathfrak{S}_n]$. Under these notations, the Schur-functor $S_\lambda\colon {\mathcal C} \to {\mathcal C}$ sends an object $a\in {\mathcal C}$ to the direct summand of $a^{\otimes n}$ determined by $e_\lambda$. Following Deligne \cite[\S1]{Deligne}, $a \in {\mathcal C}$ is called {\em Schur-finite} if it is~annihilated by some Schur-functor.
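For example, under the standard dictionary between partitions and irreducible representations, the partitions $(n)$ and $(1,\ldots,1)$ give rise to the symmetric and exterior powers
$$ S_{(n)}(a)\simeq \mathrm{Sym}^n(a) \qquad \mathrm{and} \qquad S_{(1,\ldots,1)}(a)\simeq \wedge^n(a)\,,$$
associated to the idempotents $e_{(n)}=\frac{1}{n!}\sum_{\sigma \in \mathfrak{S}_n}\sigma$ and $e_{(1,\ldots,1)}=\frac{1}{n!}\sum_{\sigma \in \mathfrak{S}_n}\mathrm{sgn}(\sigma)\sigma$, respectively. In particular, Kimura-finiteness, \textsl{i.e.}\ the existence of a direct sum decomposition $a=a_+\oplus a_-$ with $\wedge^n(a_+)=0$ and $\mathrm{Sym}^m(a_-)=0$ for some $n,m\geq 1$, is a special case of Schur-finiteness.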
Voevodsky introduced in \cite{Voevodsky} a triangulated category of geometric mixed motives $\mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q}$ (over a perfect base field $k$). By construction, this category is $\mathbb{Q}$-linear, idempotent complete, symmetric monoidal, and comes equipped with a $\otimes$-functor $M(-)_\mathbb{Q} \colon \mathrm{Sm}(k) \to \mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q}$ defined on smooth $k$-schemes of finite type. Given $X \in \mathrm{Sm}(k)$, an important conjecture in the theory of motives is the following:
\vspace{0.1cm}
{\bf Conjecture} $\mathrm{S}(X)$: The geometric mixed motive $M(X)_\mathbb{Q}$ is Schur-finite.
\vspace{0.1cm}
Thanks to the (independent) work of Guletskii \cite{Guletskii} and Mazza \cite{Mazza}, the conjecture $\mathrm{S}(X)$ holds in the case where $\mathrm{dim}(X)\leq 1$. Thanks to the work of Kimura \cite{Kimura} and Shermenev \cite{Shermenev}, the conjecture $\mathrm{S}(X)$ also holds in the case where $X$ is an abelian variety. Besides these cases (and some other cases scattered in the literature), the Schur-finiteness conjecture remains wide open.
The main goal of this note is to prove the Schur-finiteness conjecture in the new cases of quadric fibrations and families of sextic du Val del Pezzo surfaces.
\subsection*{Quadric fibrations} Our first main result is the following:
\begin{theorem}\label{thm:main}
Let $q\colon Q \to B$ be a flat quadric fibration of relative dimension $d-2$. Assume that $B$ and $Q$ are $k$-smooth, that all the fibers of $q$ have corank $\leq 1$, and that the locus $D \subset B$ of the critical values of the fibration $q$ is $k$-smooth. Under these assumptions, the following holds:
\begin{itemize}
\item[(i)] When $d$ is even, we have $\mathrm{S}(Q) \Leftrightarrow \mathrm{S}(B) + \mathrm{S}(\widetilde{B})$, where $\widetilde{B}$ stands for the discriminant $2$-fold cover of $B$ (ramified over $D$).
\item[(ii)] When $d$ is odd and $\mathrm{char}(k)\neq 2$, we have $\{\mathrm{S}(V_i)\} + \{\mathrm{S}(\widetilde{D}_i)\} \Rightarrow \mathrm{S}(Q)$, where $V_i$ is any affine open of $B$ and $\widetilde{D}_i$ is any Galois $2$-fold cover of $D_i:=D\cap V_i$.
\end{itemize}
\end{theorem}
To the best of the author's knowledge, Theorem \ref{thm:main} is new in the literature. Intuitively speaking, it relates the Schur-finiteness conjecture for the total space $Q$ with the Schur-finiteness conjecture for certain coverings/subschemes of the base $B$. Among other ingredients, its proof makes use of Kontsevich's noncommutative mixed motives of twisted root stacks; consult \S\ref{sec:root}-\ref{sec:proof} below for details.
Making use of Theorem \ref{thm:main}, we are now able to prove the Schur-finiteness conjecture in new cases. Here are two low-dimensional examples:
\begin{corollary}[Quadric fibrations over curves]\label{cor:main}
Let $q\colon Q \to B$ be a quadric fibration as in Theorem \ref{thm:main} with $B$ a curve\footnote{Since $B$ is a curve, the locus $D\subset B$ of the critical values of $q$ is necessarily $k$-smooth.}. In this case, the conjecture $\mathrm{S}(Q)$ holds.
\end{corollary}
\begin{corollary}[Quadric fibrations over surfaces]\label{cor:main2}
Let $q\colon Q \to B$ be a quadric fibration as in Theorem \ref{thm:main} with $B$ a surface and $d$ odd. In this case, $\mathrm{S}(B)\Rightarrow\mathrm{S}(Q)$.
\end{corollary}
\begin{proof}
Given a smooth $k$-surface $X$, we have $\mathrm{S}(X)\Leftrightarrow \mathrm{S}(U)$ for any open $U$ of $X$. Therefore, thanks to Theorem \ref{thm:main}(ii), the proof follows from the fact that when $B$ is a surface, the conjectures $\{\mathrm{S}(V_i)\}$ can be replaced by the~conjecture~$\mathrm{S}(B)$.
\end{proof}
Corollary \ref{cor:main2} can be applied, for example, to the case where $B$ is (an open subscheme of) an abelian surface or a smooth projective surface with $p_g=0$ which satisfies Bloch's conjecture (see Guletskii-Pedrini \cite[\S4 Thm.~7]{GP}). Recall that Bloch's conjecture holds for surfaces not of general type (see Bloch-Kas-Lieberman \cite{BKL}), for surfaces which are rationally dominated by a product of curves (see Kimura \cite{Kimura}), for Godeaux, Catanese and Barlow surfaces (see Voisin \cite{Voisin2, Voisin}),~etc.
\begin{remark}[Related work]
Let $q\colon Q \to B$ be a quadric fibration as in Theorem \ref{thm:main}. In the particular case where $Q$ and $B$ are smooth {\em projective}, Bouali \cite{Bouali} and Vial \cite[\S4]{Vial} ``computed'' the Chow motive $\mathfrak{h}(Q)_\mathbb{Q}$ of $Q$ using smooth projective $k$-schemes of dimension $\leq \mathrm{dim}(B)$. Since the category of Chow motives (with $\mathbb{Q}$-coefficients) embeds fully-faithfully into $\mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q}$ (see \cite[\S4]{Voevodsky}), these computations lead to an alternative ``geometric'' proof of Corollaries \ref{cor:main}-\ref{cor:main2}. Note that in Theorem \ref{thm:main} and in Corollaries \ref{cor:main}-\ref{cor:main2} we do {\em not} assume that $Q$ and $B$ are projective; we are (mainly) interested in geometric mixed motives and {\em not} in pure motives.
\end{remark}
\subsection*{Intersections of quadrics}
Let $Y\subset \mathbb{P}^{d-1}$ be a smooth complete intersection of $m$ quadric hypersurfaces. The linear span of these quadrics gives rise to a flat quadric fibration $q\colon Q \to \mathbb{P}^{m-1}$ of relative dimension $d-2$, with $Q$ $k$-smooth. Under these notations, our second main result is the following:
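Concretely, if $q_1,\ldots,q_m$ denote the quadratic forms cutting out $Y$, then the total space of this fibration may be described as the incidence variety
$$ Q=\{([x],[\lambda_1:\cdots:\lambda_m]) \in \mathbb{P}^{d-1}\times \mathbb{P}^{m-1}\,|\,\lambda_1 q_1(x)+\cdots+\lambda_m q_m(x)=0\}\,,$$
with $q$ the projection onto the second factor; the fiber of $q$ over a point $[\lambda]$ is the quadric hypersurface $\{\sum_i \lambda_i q_i=0\}\subset \mathbb{P}^{d-1}$.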
\begin{theorem}\label{thm:intersection}
We have $\mathrm{S}(Q) \Rightarrow \mathrm{S}(Y)$. When $2m\leq d$, the converse also holds.
\end{theorem}
By combining Theorem \ref{thm:intersection} with the above Corollaries \ref{cor:main}-\ref{cor:main2}, we hence obtain a proof of the Schur-finiteness conjecture in the following cases:
\begin{corollary}[Intersections of two or three quadrics]\label{cor:intersection}
Assume that $q\colon Q \to \mathbb{P}^{m-1}$ is as in Theorem \ref{thm:main}. In this case, the conjecture $\mathrm{S}(Y)$ holds when $Y$ is a smooth complete intersection of two, or of three odd-dimensional, quadric hypersurfaces.\end{corollary}
\subsection*{Families of sextic du Val del Pezzo surfaces}
Recall that a {\em sextic du Val del Pezzo surface $X$} is a projective $k$-scheme with at worst du Val singularities and ample anticanonical class such that $K_X^2=6$. Consider a {\em family of sextic du Val del Pezzo surfaces $f\colon T \to B$}, \textsl{i.e.}\ a flat morphism $f$ such that for every geometric point $x \in B$ the associated fiber $T_x$ is a sextic du Val del Pezzo surface. Following Kuznetsov \cite[\S5]{Pezzo}, given $d \in \{2,3\}$, let us write ${\mathcal M}_d$ for the relative moduli stack of semistable sheaves on fibers of $T$ over $B$ with Hilbert polynomial $h_d(t):=(3t+d)(t+1)$, and $Z_d$ for the coarse moduli space of ${\mathcal M}_d$. By construction, we have finite flat morphisms $Z_2 \to B$ and $Z_3 \to B$ of degrees $3$ and $2$, respectively. Under these notations, our third main result is the following:
\begin{theorem}\label{thm:Pezzo}
Let $f\colon T \to B$ be a family of sextic du Val del Pezzo surfaces. Assume that $\mathrm{char}(k)\not\in \{2,3\}$ and that $T$ is $k$-smooth. Under these assumptions, we have the equivalence of conjectures $\mathrm{S}(T) \Leftrightarrow \mathrm{S}(B) + \mathrm{S}(Z_2) + \mathrm{S}(Z_3)$.
\end{theorem}
To the best of the author's knowledge, Theorem \ref{thm:Pezzo} is new in the literature. It leads to a proof of the Schur-finiteness conjecture in new cases. Here is an example:
\begin{corollary}[Families of sextic du Val del Pezzo surfaces over curves]\label{cor:Pezzo}
Let $f\colon T \to B$ be a family of sextic du Val del Pezzo surfaces as in Theorem \ref{thm:Pezzo} with $B$ a curve. In this case, the conjecture $\mathrm{S}(T)$ holds.
\end{corollary}
\begin{remark}
Let $f\colon T \to B$ be a family of sextic du Val del Pezzo surfaces as in Theorem \ref{thm:Pezzo}. To the best of the author's knowledge, the associated geometric mixed motive $M(T)_\mathbb{Q}$ has {\em not} been ``computed'' (in any non-trivial particular case). Nevertheless, consult Helmsauer \cite{thesis} for the ``computation'' of the Chow motive $\mathfrak{h}(X)_\mathbb{Q}$ of certain {\em smooth} (projective) del Pezzo surfaces $X$.
\end{remark}
\subsection*{Bass-finiteness conjecture}
Let $k$ be a finite base field and $X$ a smooth $k$-scheme of finite type. The Bass-finiteness conjecture $\mathrm{B}(X)$ (see \cite[\S9]{Bass}) is one of the oldest and most important conjectures in algebraic $K$-theory. It asserts that the algebraic $K$-theory groups $K_n(X), n \geq 0$, are finitely generated. In the same vein, given an integer $r \geq 2$, we can consider the conjecture $\mathrm{B}(X)_{1/r}$, where $K_n(X)$ is replaced by $K_n(X)_{1/r}:=K_n(X) \otimes \mathbb{Z}[1/r]$. Our fourth main result is the following:
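For example, when $X=\mathrm{Spec}(\mathbb{F}_q)$, Quillen's computation
$$ K_0(\mathbb{F}_q)\simeq \mathbb{Z} \qquad K_{2i-1}(\mathbb{F}_q)\simeq \mathbb{Z}/(q^i-1) \qquad K_{2i}(\mathbb{F}_q)=0 \qquad (i\geq 1)$$
shows that the groups $K_n(\mathbb{F}_q)$ are finitely generated (in fact, finite for every $n\geq 1$); the Bass-finiteness conjecture asserts that this finite generation persists for every smooth $k$-scheme of finite type.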
\begin{theorem}\label{thm:Bass}
The following holds:
\begin{itemize}
\item[(i)] Theorem \ref{thm:main} and Corollaries \ref{cor:main}-\ref{cor:main2} hold\footnote{Corollary \ref{cor:main2} (for the conjecture $\mathrm{B}(-)_{1/2}$) can also be applied to the case where $B$ is (an open subscheme of) an abelian surface; see \cite[Cor.~70 and Thm.~82]{Kahn}.} similarly for the conjecture $\mathrm{B}(-)_{1/2}$. In Corollary \ref{cor:main}, the groups $K_n(Q)_{1/2}, n \geq 2$, are moreover finite.
\item[(ii)] Theorem \ref{thm:intersection} holds similarly for the conjecture $\mathrm{B}(-)$.
\item[(iii)] Corollary \ref{cor:intersection} holds similarly for the conjecture $\mathrm{B}(-)_{1/2}$. In the case where $Y$ is a smooth complete intersection of two quadric hypersurfaces, the groups $K_n(Y)_{1/2}, n \geq 2$, are moreover finite.
\item[(iv)] Theorem \ref{thm:Pezzo} and Corollary \ref{cor:Pezzo} hold similarly for the conjecture $\mathrm{B}(-)_{1/6}$. In Corollary \ref{cor:Pezzo}, the groups $K_n(T)_{1/6}, n \geq 2$, are moreover finite.
\end{itemize}
\end{theorem}
\section{Preliminaries}
In what follows, all schemes/stacks are of finite type over the perfect base~field~$k$.
\subsection*{Dg categories} For a survey on dg categories we invite the reader to consult \cite{ICM-Keller}. In what follows, we will write $\mathsf{dgcat}(k)$ for the category of (essentially small) dg categories and dg functors. Every (dg) $k$-algebra $A$ gives naturally rise to a dg category with a single object. Another source of examples is provided by schemes/stacks. Given a $k$-scheme $X$ (or stack ${\mathcal X}$), the category of perfect complexes of ${\mathcal O}_X$-modules $\mathrm{perf}(X)$ admits a canonical dg enhancement $\mathrm{perf}_\mathrm{dg}(X)$; consult \cite[\S4.6]{ICM-Keller}\cite{LO} for details. More generally, given a sheaf of ${\mathcal O}_X$-algebras ${\mathcal F}$, we can consider the dg category of perfect complexes of ${\mathcal F}$-modules $\mathrm{perf}_\mathrm{dg}(X;{\mathcal F})$.
\subsection*{Noncommutative mixed motives} For a book, resp. recent survey, on noncommutative motives we invite the reader to consult \cite{book}, resp. \cite{survey}. Recall from~\cite[\S8.5.1]{book} (see also \cite{Miami, finMot, IAS}) the definition of Kontsevich's triangulated category of noncommutative mixed motives $\mathrm{NMot}(k)$. By construction, this category is idempotent complete, symmetric monoidal, and comes equipped with a $\otimes$-functor $U \colon \mathsf{dgcat}(k) \to \mathrm{NMot}(k)$. In what follows, given a $k$-scheme $X$ (or stack ${\mathcal X}$) equipped with a sheaf of ${\mathcal O}_X$-algebras ${\mathcal F}$, we will write $U(X;{\mathcal F}):=U(\mathrm{perf}_\mathrm{dg}(X;{\mathcal F}))$.
\section{Noncommutative mixed motives of twisted root stacks}\label{sec:root}
Let $X$ be a $k$-scheme, ${\mathcal L}$ a line bundle on $X$, $\sigma \in \Gamma(X,{\mathcal L})$ a global section, and $r>0$ an integer. In what follows, we will write $D\subset X$ for the zero locus of $\sigma$. Recall from \cite[Def.~2.2.1]{Codman} (see also \cite[Appendix B]{GW}) that the associated {\em root stack} ${\mathcal X}$ is defined as the following fiber-product of algebraic stacks
$$
\xymatrix{
{\mathcal X}:=\sqrt[r]{({\mathcal L},\sigma)/X} \ar[d]_-p \ar[r] & [\mathbb{A}^1/\mathbb{G}_m] \ar[d]^-{\theta_r} \\
X \ar[r]_-{({\mathcal L},\sigma)} & [\mathbb{A}^1/\mathbb{G}_m]\,,
}
$$
where $\theta_r$ stands for the morphism induced by the $r^{\mathrm{th}}$ power maps on $\mathbb{A}^1$ and $\mathbb{G}_m$. A {\em twisted root stack} $({\mathcal X};{\mathcal F})$ consists of a root stack ${\mathcal X}$ equipped with a sheaf of Azumaya algebras ${\mathcal F}$. In what follows, we will write $s$ for the product of the ranks of ${\mathcal F}$ (at each one of the connected components of ${\mathcal X}$). The following result, which is of independent interest, will play a key role in the proof of Theorem \ref{thm:main}.
\begin{theorem}\label{thm:aux}
Assume that $X$ and $D$ are $k$-smooth.
\begin{itemize}
\item[(i)] We have an isomorphism $U({\mathcal X})\simeq U(X) \oplus U(D)^{\oplus (r-1)}$.
\item[(ii)] Assume moreover that $\mathrm{char}(k)\neq r$ and that $k$ contains the $r^{\mathrm{th}}$ roots of unity. Under these extra assumptions, $U({\mathcal X};{\mathcal F})_{1/rs}$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_{1/rs}$ containing the noncommutative mixed motives $\{U(V_i)_{1/rs}\}$ and $\{U(\widetilde{D}_i^l)_{1/rs}\}$, where $V_i$ is any affine open subscheme of $X$ and $\widetilde{D}_i^l$ is any Galois $l$-fold cover of $D_i:=D\cap V_i$ with $l\mid r$ and $l\neq 1$.
\end{itemize}
\end{theorem}
\begin{proof}
We start by proving item (i). Following \cite[Thm.~1.6]{Ueda}, the pull-back functor $p^\ast$ is fully-faithful and we have the following semi-orthogonal decomposition\footnote{Consult \cite{BO, BO1} for the definition of semi-orthogonal decomposition.} $\mathrm{perf}({\mathcal X})=\langle \mathrm{perf}(D)_{r-1},\ldots,\mathrm{perf}(D)_1, p^\ast(\mathrm{perf}(X))\rangle$. All the categories $\mathrm{perf}(D)_j$ are equivalent (via a Fourier-Mukai type functor) to $\mathrm{perf}(D)$. Therefore, since the functor $U\colon \mathsf{dgcat}(k) \to \NMot(k)$ sends semi-orthogonal decompositions to direct sums, we obtain the desired direct sum decomposition $U({\mathcal X}) \simeq U(X) \oplus U(D)^{\oplus (r-1)}$.
Let us now prove item (ii). We consider first the particular case where $X=\mathrm{Spec}(A)$ is affine and the line bundle ${\mathcal L} ={\mathcal O}_X$ is trivial. Let $\mu_r$ be the group of $r^{\mathrm{th}}$ roots of unity and $\chi \colon \mu_r \to k^\times$ a (fixed) primitive character. Under these notations, consider the global quotient $[\mathrm{Spec}(B)/\mu_r]$, where $B:=A[t]/\langle t^r - \sigma\rangle$ and the $\mu_r$-action on $B$ is given by $g\cdot t:= \chi(g)^{-1} t$ for every $g \in \mu_r$ and by $g \cdot a := a$ for every $a \in A$. As explained in \cite[Example~2.4.1]{Codman}, the root stack ${\mathcal X}$ agrees, in this particular case, with the global quotient $[\mathrm{Spec}(B)/\mu_r]$. By construction, the induced map $\mathrm{Spec}(B) \to X$ is an $r$-fold cover ramified over $D \subset X$. Moreover, for every $l$ such that $l\mid r$ and $l\neq 1$, the associated closed subscheme $\mathrm{Spec}(B)^{\mu_l}$ agrees with the ramification divisor $D \subset \mathrm{Spec}(B)$. Therefore, since the functor $U(-)_{1/rs}\colon \mathsf{dgcat}(k) \to \NMot(k)_{1/rs}$ is an additive invariant of dg categories in the sense of \cite[Def.~2.1]{book} (see \cite[\S8.4.5]{book}), we conclude from \cite[Cor.~1.28(ii)]{Orbifold} that, in this particular case, $U({\mathcal X}; {\mathcal F})_{1/rs}$ belongs to the smallest thick additive subcategory of $\NMot(k)_{1/rs}$ containing the noncommutative mixed motives $U(\mathrm{Spec}(B))^{\mu_r}_{1/rs}$ and $\{U(\widetilde{D}^l)_{1/rs}\}$, where $\widetilde{D}^l$ is any Galois $l$-fold cover of $D$ with $l \mid r$ and $l\neq 1$. Furthermore, since the geometric quotient $\mathrm{Spec}(B)/\!\!/\mu_r$ agrees with $X$ and the latter scheme is $k$-smooth, \cite[Thm.~1.22]{Orbifold} implies that $U(\mathrm{Spec}(B))^{\mu_r}_{1/rs}$ is isomorphic to $U(X)_{1/rs}$. This finishes the proof of item (ii) in the particular case where $X$ is affine and the line bundle ${\mathcal L}$ is trivial.
Let us now prove item (ii) in the general case. As explained above, given any affine open subscheme $V_i$ of $X$ which trivializes the line bundle ${\mathcal L}$, the noncommutative mixed motive $U({\mathcal V}_i;{\mathcal F}_i)_{1/rs}$, with ${\mathcal V}_i:=p^{-1}(V_i)$ and ${\mathcal F}_i:={\mathcal F}_{|{\mathcal V}_i}$, belongs to the smallest thick additive subcategory of $\NMot(k)_{1/rs}$ containing $U(V_i)_{1/rs}$ and $\{U(\widetilde{D}^l_i)_{1/rs}\}$, where $\widetilde{D}^l_i$ is any Galois $l$-fold cover of $D_i:=D \cap V_i$ with $l\mid r$ and $l\neq 1$. Let us then choose an affine open cover $\{W_i\}$ of $X$ which trivializes the line bundle ${\mathcal L}$. Since $X$ is quasi-compact (recall that $X$ is of finite type over $k$), this affine open cover admits a {\em finite} subcover. Consequently, the proof follows by induction from the $\mathbb{Z}[1/rs]$-linearization of the distinguished triangles of Lemma \ref{lem:key} below.
\end{proof}
\begin{lemma}\label{lem:key}
Given an open cover $\{W_1, W_2\}$ of $X$, we have an induced Mayer-Vietoris distinguished triangle of noncommutative mixed motives
\begin{equation}\label{eq:triangle}
U({\mathcal X};{\mathcal F}) \longrightarrow U({\mathcal W}_1;{\mathcal F}_1) \oplus U({\mathcal W}_2;{\mathcal F}_2) \stackrel{\pm}{\longrightarrow} U({\mathcal W}_{12}; {\mathcal F}_{12}) \stackrel{\partial}{\longrightarrow} \Sigma U({\mathcal X};{\mathcal F})\,,
\end{equation}
where ${\mathcal W}_{12}:={\mathcal W}_1 \cap {\mathcal W}_2$ and ${\mathcal F}_{12}:={\mathcal F}_{|{\mathcal W}_{12}}$.
\end{lemma}
\begin{proof}
Consider the following commutative diagram of dg categories
$$
\xymatrix{
\mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F})_{\mathcal Z} \ar[d] \ar[r] & \mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F}) \ar[d] \ar[r] & \mathrm{perf}_\mathrm{dg}({\mathcal W}_1;{\mathcal F}_1) \ar[d] \\
\mathrm{perf}_\mathrm{dg}({\mathcal W}_2;{\mathcal F}_2)_{\mathcal Z} \ar[r] & \mathrm{perf}_\mathrm{dg}({\mathcal W}_2;{\mathcal F}_2) \ar[r] & \mathrm{perf}_\mathrm{dg}({\mathcal W}_{12}; {\mathcal F}_{12})\,,
}
$$
where ${\mathcal Z}$ stands for the closed complement ${\mathcal X}- {\mathcal W}_1 = {\mathcal W}_2 - {\mathcal W}_{12}$ and $\mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F})_{\mathcal Z}$, resp. $\mathrm{perf}_\mathrm{dg}({\mathcal W}_2; {\mathcal F}_2)_{\mathcal Z}$, stands for the full dg subcategory of $\mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F})$, resp. $\mathrm{perf}_\mathrm{dg}({\mathcal W}_2;{\mathcal F}_2)$, consisting of those perfect complexes of ${\mathcal F}$-modules, resp. ${\mathcal F}_2$-modules, that are supported on ${\mathcal Z}$. Both rows are short exact sequences of dg categories in the sense of Drinfeld/Keller (see \cite[\S4.6]{ICM-Keller}) and the left vertical dg functor is a Morita equivalence. Therefore, since the functor $U\colon \mathsf{dgcat}(k) \to \NMot(k)$ is a localizing invariant of dg categories in the sense of \cite[\S8.1]{book}, we obtain the following induced morphism of distinguished triangles:
$$
\xymatrix@C=1.7em@R=2.5em{
U(\mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F})_{\mathcal Z}) \ar[d]^-\simeq \ar[r] & U({\mathcal X};{\mathcal F}) \ar[d] \ar[r] & U({\mathcal W}_1;{\mathcal F}_1) \ar[d] \ar[r]^-\partial & \Sigma U(\mathrm{perf}_\mathrm{dg}({\mathcal X};{\mathcal F})_{\mathcal Z}) \ar[d]^-\simeq \\
U(\mathrm{perf}_\mathrm{dg}({\mathcal W}_2;{\mathcal F}_2)_{\mathcal Z}) \ar[r] & U({\mathcal W}_2;{\mathcal F}_2) \ar[r] & U({\mathcal W}_{12};{\mathcal F}_{12}) \ar[r]^-\partial & \Sigma U(\mathrm{perf}_\mathrm{dg}({\mathcal W}_2;{\mathcal F}_2)_{\mathcal Z})\,.
}
$$
Finally, since the middle square is homotopy (co)cartesian, we hence obtain the claimed Mayer-Vietoris distinguished triangle \eqref{eq:triangle}.
\end{proof}
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof}
Following \cite[\S3]{Quadrics} (see also \cite[\S1.2]{ABB}), let $E$ be a vector bundle of rank $d$ on $B$, $q'\colon \mathbb{P}(E) \to B$ the projectivization of $E$ on $B$, ${\mathcal O}_{\mathbb{P}(E)}(1)$ the Grothendieck line bundle on $\mathbb{P}(E)$, ${\mathcal L}$ a line bundle on $B$, and finally $\rho\in \Gamma(B, S^2(E^\vee) \otimes {\mathcal L}^\vee) = \Gamma(\mathbb{P}(E), {\mathcal O}_{\mathbb{P}(E)}(2)\otimes {\mathcal L}^\vee)$ a global section. Given this data, recall that $Q\subset \mathbb{P}(E)$ is defined as the zero locus of $\rho$ on $\mathbb{P}(E)$ and that $q\colon Q \to B$ is the restriction of $q'$ to $Q$; note that the relative dimension of $q$ is equal to $d-2$. Consider also the discriminant global section $\mathrm{disc}(q) \in \Gamma(B, \mathrm{det}(E^\vee)^{\otimes 2} \otimes ({\mathcal L}^\vee)^{\otimes d})$ and the associated zero locus $D\subset B$; note that $D$ agrees with the locus of the critical values of $q$.
Recall from \cite[\S3.5]{Quadrics} (see also \cite[\S1.6]{ABB}) that when $d$ is even, we can consider the {\em discriminant cover} $\widetilde{B}:=\mathrm{Spec}_B(Z({\mathcal C} l_0(q)))$ of $B$, where $Z({\mathcal C} l_0(q))$ stands for the center of the sheaf ${\mathcal C} l_0(q)$ of even parts of the Clifford algebra associated to $q$; see \cite[\S3]{Quadrics} (and also \cite[\S1.5]{ABB}). By construction, $\widetilde{B}$ is a $2$-fold cover ramified over $D$. Moreover, since $D$ is $k$-smooth, $\widetilde{B}$ is also $k$-smooth.
Recall from \cite[\S3.6]{Quadrics} (see also \cite[\S1.7]{ABB}) that when $d$ is odd and $\mathrm{char}(k)\neq 2$, we can consider the {\em discriminant stack} ${\mathcal X}:=\sqrt[2]{(\mathrm{det}(E^\vee)^{\otimes 2} \otimes ({\mathcal L}^\vee)^{\otimes d}, \mathrm{disc}(q))/B}$. Since $\mathrm{char}(k)\neq 2$, ${\mathcal X}$ is a Deligne-Mumford stack with coarse moduli space $B$.
\begin{proposition}\label{prop:computation}
Under the above notations and assumptions, the following holds:
\begin{itemize}
\item[(i)] When $d$ is even, we have $U(Q)_{1/2} \simeq U(\widetilde{B})_{1/2} \oplus U(B)_{1/2}^{\oplus (d-2)}$.
\item[(ii)] When $d$ is odd and $\mathrm{char}(k)\neq 2$, $U(Q)_{1/2}$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_{1/2}$ containing the noncommutative mixed motives $\{U(V_i)_{1/2}\}$ and $\{U(\widetilde{D}_i)_{1/2}\}$, where $V_i$ is any affine open subscheme of $B$ and $\widetilde{D}_i$ is any Galois $2$-fold cover of $D_i:=D\cap V_i$.
\end{itemize}
\end{proposition}
\begin{proof}
As proved in \cite[Thm.~4.2]{Quadrics} (see also \cite[Thm.~2.2.1]{ABB}), we have the semi-orthogonal decomposition $\mathrm{perf}(Q) = \langle \mathrm{perf}(B; {\mathcal C} l_0(q)), \mathrm{perf}(B)_1, \ldots, \mathrm{perf}(B)_{d-2}\rangle$, where $\mathrm{perf}(B)_j:=q^\ast(\mathrm{perf}(B)) \otimes {\mathcal O}_{Q/B}(j)$. All the categories $\mathrm{perf}(B)_j$ are equivalent (via a Fourier-Mukai type functor) to $\mathrm{perf}(B)$. Therefore, since the functor $U\colon \mathsf{dgcat}(k) \to \NMot(k)$ sends semi-orthogonal decompositions to direct sums, we obtain the direct sum decomposition $U(Q) \simeq U(B;{\mathcal C} l_0(q))\oplus U(B)^{\oplus (d-2)}$.
We start by proving item (i). As explained in \cite[\S3.5]{Quadrics} (see also \cite[\S1.6]{ABB}), when $d$ is even, the category $\mathrm{perf}(B; {\mathcal C} l_0(q))$ is equivalent (via a Fourier-Mukai type functor) to $\mathrm{perf}(\widetilde{B}; {\mathcal F})$ where ${\mathcal F}$ is a certain sheaf of Azumaya algebras on $\widetilde{B}$ of rank $2^{\frac{d}{2}-1}$. This leads to an isomorphism $U(B; {\mathcal C} l_0(q))\simeq U(\widetilde{B}; {\mathcal F})$. Making use of \cite[Thm.~2.1]{Azumaya}, we hence conclude that $U(B; {\mathcal C} l_0(q))_{1/2}$ is isomorphic to $U(\widetilde{B}; {\mathcal F})_{1/2}\simeq U(\widetilde{B})_{1/2}$. Consequently, we obtain the isomorphism of item (i).
Let us now prove item (ii). As explained in \cite[\S3.6]{Quadrics} (see also \cite[\S1.7]{ABB}), when $d$ is odd, the category $\mathrm{perf}(B; {\mathcal C} l_0(q))$ is equivalent (via a Fourier-Mukai type functor) to $\mathrm{perf}({\mathcal X}; {\mathcal F})$ where ${\mathcal F}$ is a certain sheaf of Azumaya algebras on ${\mathcal X}$ of rank $2^{\frac{d-1}{2}}$. This leads to an isomorphism $U(B; {\mathcal C} l_0(q))\simeq U({\mathcal X}; {\mathcal F})$. By combining Theorem \ref{thm:aux}(ii) with the isomorphism $U(Q) \simeq U({\mathcal X};{\mathcal F}) \oplus U(B)^{\oplus (d-2)}$, we hence conclude that $U(Q)_{1/2}$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_{1/2}$ containing $U(B)_{1/2}$, $\{ U(V_i)_{1/2}\}$, and $\{U(\widetilde{D}_i)_{1/2}\}$, where $V_i$ is any affine open subscheme of $B$ and $\widetilde{D}_i$ is any Galois $2$-fold cover of $D_i$. We now claim that $U(B)_{1/2}$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_{1/2}$ containing $\{U(V_i)_{1/2}\}$; note that this would conclude the proof. Choose an affine open cover $\{W_i\}$ of $B$. Since $B$ is quasi-compact (recall that $B$ is of finite type over $k$), this affine open cover admits a {\em finite} subcover. Therefore, similarly to the proof of Theorem \ref{thm:aux}, our claim follows from an inductive argument using the $\mathbb{Z}[1/2]$-linearization of the Mayer-Vietoris distinguished triangles $ U(B) \to U(W_1) \oplus U(W_2) \stackrel{\pm}{\to} U(W_{12}) \stackrel{\partial}{\to} \Sigma U(B)$.
\end{proof}
As proved in \cite[Thm.~2.8]{Bridge}, there exists a $\mathbb{Q}$-linear, fully-faithful, $\otimes$-functor $\Phi$ making the following diagram commute
\begin{equation}\label{eq:diagram-big}
\xymatrix{
\mathrm{Sm}(k) \ar[rrr]^-{X\mapsto \mathrm{perf}_\mathrm{dg}(X)} \ar[d]_-{M(-)_\mathbb{Q}} &&& \mathsf{dgcat}(k) \ar[d]^-{U(-)_\mathbb{Q}} \\
\mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q} \ar[d]_-\pi &&& \mathrm{NMot}(k)_\mathbb{Q} \ar[d]^-{\underline{\mathrm{Hom}}(-,U(k)_\mathbb{Q})}\\
\mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q}/_{\!-\otimes \mathbb{Q}(1)[2]} \ar[rrr]_-{\Phi} &&& \mathrm{NMot}(k)_\mathbb{Q}\,,
}
\end{equation}
where $\mathrm{DM}_{\mathrm{gm}}(k)_\mathbb{Q}/_{\!-\otimes \mathbb{Q}(1)[2]}$ stands for the orbit category with respect to the Tate motive $\mathbb{Q}(1)[2]$ and $\underline{\mathrm{Hom}}(-,-)$ for the internal Hom of the monoidal structure; note that the functors $X \mapsto \mathrm{perf}_\mathrm{dg}(X)$ and $\underline{\mathrm{Hom}}(-,U(k)_\mathbb{Q})$ are contravariant. By construction, $\pi$ is a faithful $\otimes$-functor. Therefore, it follows from \cite[Lem.~1.11]{Mazza} that we have the following equivalence:
\begin{equation}\label{eq:equivalence}
\mathrm{S}(X) \Leftrightarrow \text{the}\,\,\text{noncommutative}\,\,\text{mixed}\,\,\text{motive}\,\,(\Phi \circ \pi) (M(X)_\mathbb{Q})\,\,\text{is}\,\,\text{Schur}\text{-}\text{finite}.
\end{equation}
We now have all the ingredients necessary to conclude the proof of Theorem \ref{thm:main}.
\subsection*{Item (i)}
The above functors $\pi$ and $\underline{\mathrm{Hom}}(-,U(k)_\mathbb{Q})$ are $\mathbb{Q}$-linear. Therefore, by combining Proposition \ref{prop:computation}(i) with the commutative diagram \eqref{eq:diagram-big}, we conclude that
\begin{equation}\label{eq:iso}
(\Phi\circ \pi)(M(Q)_\mathbb{Q})\simeq (\Phi\circ \pi)(M(\widetilde{B})_\mathbb{Q}) \oplus (\Phi\circ \pi)(M(B)_\mathbb{Q})^{\oplus (d-2)}\,.
\end{equation}
Since Schur-finiteness is stable under direct sums and direct summands, the proof of the equivalence $\mathrm{S}(Q) \Leftrightarrow \mathrm{S}(B) + \mathrm{S}(\widetilde{B})$ follows then from \eqref{eq:equivalence}-\eqref{eq:iso}.
\subsection*{Item (ii)}
Recall from \cite[\S8.5.1-8.5.2]{book} that, by construction, $\NMot(k)_\mathbb{Q}$ is a $\mathbb{Q}$-linear closed symmetric monoidal triangulated category in the sense of Hovey \cite[\S6-7]{Hovey}. As proved in \cite[Thm.~1]{Guletskii}, this implies that Schur-finiteness has the 2-out-of-3 property with respect to distinguished triangles. The functor $\underline{\mathrm{Hom}}(-,U(k)_\mathbb{Q})$ is triangulated. Hence, by combining Proposition \ref{prop:computation}(ii) with the commutative diagram \eqref{eq:diagram-big}, we conclude that $(\Phi\circ \pi)(M(Q)_\mathbb{Q})$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_\mathbb{Q}$ containing the noncommutative mixed motives $\{(\Phi\circ \pi)(M(V_i)_\mathbb{Q})\}$ and $\{(\Phi \circ \pi)(M(\widetilde{D}_i)_\mathbb{Q})\}$, where $V_i$ is any affine open subscheme of $B$ and $\widetilde{D}_i$ is any Galois $2$-fold cover of $D_i$. Since by assumption the conjectures $\{\mathrm{S}(V_i)\}$ and $\{\mathrm{S}(\widetilde{D}_i)\}$ hold, \eqref{eq:equivalence} implies that the noncommutative mixed motives $\{(\Phi\circ \pi)(M(V_i)_\mathbb{Q})\}$ and $\{(\Phi \circ \pi)(M(\widetilde{D}_i)_\mathbb{Q})\}$ are Schur-finite. Therefore, making use of the 2-out-of-3 property of Schur-finiteness with respect to distinguished triangles (and of the stability of Schur-finiteness under direct summands), we conclude that $(\Phi\circ \pi)(M(Q)_\mathbb{Q})$ is also Schur-finite. The proof follows now from the above equivalence \eqref{eq:equivalence}.
\section{Proof of Theorem \ref{thm:intersection}}
Recall from the proof of Proposition \ref{prop:computation} that we have the semi-orthogonal decomposition $\mathrm{perf}(Q)=\langle \mathrm{perf}(\mathbb{P}^{m-1};{\mathcal C} l_0(q)), \mathrm{perf}(\mathbb{P}^{m-1})_1, \ldots, \mathrm{perf}(\mathbb{P}^{m-1})_{d-2} \rangle$, and consequently the following direct sum decomposition:
\begin{equation}\label{eq:direct}
U(Q)\simeq U(\mathbb{P}^{m-1}; {\mathcal C} l_0(q)) \oplus U(\mathbb{P}^{m-1})^{\oplus (d-2)}\,.
\end{equation}
As proved in \cite[Thm.~5.5]{Quadrics} (see also \cite[Thm.~2.3.7]{ABB}), the following also holds:
\begin{itemize}
\item[(a)] When $2m<d$, we have $\mathrm{perf}(Y)=\langle \mathrm{perf}(\mathbb{P}^{m-1}; {\mathcal C} l_0(q)), {\mathcal O}(1), \ldots, {\mathcal O}(d-2m)\rangle$. Consequently, since the functor $U\colon \mathsf{dgcat}(k) \to \NMot(k)$ sends semi-orthogonal decompositions to direct sums, we obtain the following direct sum decomposition $U(Y) \simeq U(\mathbb{P}^{m-1}; {\mathcal C} l_0(q)) \oplus U(k)^{\oplus (d-2m)}$.
\item[(b)] When $2m=d$, the category $\mathrm{perf}(Y)$ is equivalent (via a Fourier-Mukai type functor) to $\mathrm{perf}(\mathbb{P}^{m-1}; {\mathcal C} l_0(q))$. Consequently, we obtain an isomorphism of noncommutative mixed motives $U(Y) \simeq U(\mathbb{P}^{m-1}; {\mathcal C} l_0(q))$.
\item[(c)] When $2m >d$, $\mathrm{perf}(Y)$ is an admissible subcategory of $\mathrm{perf}(\mathbb{P}^{m-1}; {\mathcal C} l_0(q))$. This implies that $U(Y)$ is a direct summand of $U(\mathbb{P}^{m-1}; {\mathcal C} l_0(q))$.
\end{itemize}
Let us now prove the implication $\mathrm{S}(Q) \Rightarrow \mathrm{S}(Y)$. If the conjecture $\mathrm{S}(Q)$ holds, then it follows from the decomposition \eqref{eq:direct}, from the commutative diagram \eqref{eq:diagram-big}, from the equivalence \eqref{eq:equivalence}, and from the stability of Schur-finiteness under direct summands, that the noncommutative mixed motive $\underline{\mathrm{Hom}}(U(\mathbb{P}^{m-1};{\mathcal C} l_0(q))_\mathbb{Q}, U(k)_\mathbb{Q})$ is Schur-finite. Making use of the above descriptions (a)-(c) of $U(Y)$ and of the commutative diagram \eqref{eq:diagram-big}, we hence conclude that the noncommutative mixed motive $(\Phi \circ \pi)(M(Y)_\mathbb{Q})$ is also Schur-finite. Consequently, the conjecture $\mathrm{S}(Y)$ follows now from the above equivalence \eqref{eq:equivalence}. Finally, note that when $2m\leq d$, a similar argument proves the converse implication $\mathrm{S}(Y) \Rightarrow \mathrm{S}(Q)$.
\section{Proof of Theorem \ref{thm:Pezzo}}
Recall first from \cite[Prop.~5.12]{Pezzo} that since $\mathrm{char}(k)\not\in \{2,3\}$ and $T$ is $k$-smooth, the $k$-schemes $B, Z_2$ and $Z_3$ are also $k$-smooth.
\begin{proposition}\label{prop:Pezzo}
We have $U(T)_{1/6} \simeq U(B)_{1/6} \oplus U(Z_2)_{1/6}\oplus U(Z_3)_{1/6}$.
\end{proposition}
\begin{proof}
As proved in \cite[Thm.~5.2 and Prop.~5.10]{Pezzo}, we have the semi-orthogonal decomposition $\mathrm{perf}(T) = \langle \mathrm{perf}(B), \mathrm{perf}(Z_2;{\mathcal F}_2), \mathrm{perf}(Z_3; {\mathcal F}_3)\rangle$, where ${\mathcal F}_2$ (resp. ${\mathcal F}_3$) is a certain sheaf of Azumaya algebras over $Z_2$ (resp. $Z_3$) of order $2$ (resp. $3$). Recall that the functor $U\colon \mathsf{dgcat}(k) \to \NMot(k)$ sends semi-orthogonal decompositions to direct sums. Therefore, we obtain the following direct sum decomposition:
\begin{equation}\label{eq:decomp-last}
U(T) \simeq U(B) \oplus U(Z_2; {\mathcal F}_2) \oplus U(Z_3; {\mathcal F}_3)\,.
\end{equation}
Since ${\mathcal F}_2$ (resp. ${\mathcal F}_3$) is of order $2$ (resp. $3$), the rank of ${\mathcal F}_2$ (resp. ${\mathcal F}_3$) is necessarily a power of $2$ (resp. $3$). Making use of \cite[Thm.~2.1]{Azumaya}, we hence conclude that the noncommutative mixed motive $U(Z_2;{\mathcal F}_2)_{1/2}$ (resp. $U(Z_3; {\mathcal F}_3)_{1/3}$) is isomorphic to $U(Z_2)_{1/2}$ (resp. $U(Z_3)_{1/3}$). Consequently, the proof follows now from the $\mathbb{Z}[1/6]$-linearization of \eqref{eq:decomp-last}.
\end{proof}
The functors $\pi$ and $\underline{\mathrm{Hom}}(-,U(k)_\mathbb{Q})$ in \eqref{eq:diagram-big} are $\mathbb{Q}$-linear. Therefore, similarly to the proof of item (i) of Theorem \ref{thm:main}, by combining Proposition \ref{prop:Pezzo} with the commutative diagram \eqref{eq:diagram-big}, we conclude that
\begin{equation}\label{eq:last}
(\Phi\circ \pi)(M(T)_\mathbb{Q})\simeq (\Phi\circ \pi)(M(B)_\mathbb{Q}) \oplus (\Phi\circ \pi)(M(Z_2)_\mathbb{Q})\oplus (\Phi\circ \pi)(M(Z_3)_\mathbb{Q}) \,.
\end{equation}
Since Schur-finiteness is stable under direct sums and direct summands, the proof follows then from the combination of \eqref{eq:last} with the above equivalence \eqref{eq:equivalence}.
\section{Proof of Theorem \ref{thm:Bass}}
\subsection*{Item (i)} We start by proving the first claim. As explained in \cite[\S8.6]{book} (see also \cite[Thm.~15.10]{Duke}), given $X \in \mathrm{Sm}(k)$, we have the isomorphisms of abelian groups:
\begin{eqnarray}\label{eq:iso-key}
\mathrm{Hom}_{\NMot(k)}(U(k), \Sigma^{-n} U(X))\simeq K_n(X) && n \in \mathbb{Z}\,.
\end{eqnarray}
Assume that $d$ is even. By combining Proposition \ref{prop:computation}(i) with the $\mathbb{Z}[1/2]$-linearization of \eqref{eq:iso-key}, we conclude that $K_n(Q)_{1/2}\simeq K_n(\widetilde{B})_{1/2} \oplus K_n(B)_{1/2}^{\oplus (d-2)}$. Therefore, since finite generation is stable under direct sums and direct summands, we obtain the equivalence $\mathrm{B}(Q)_{1/2} \Leftrightarrow \mathrm{B}(B)_{1/2} + \mathrm{B}(\widetilde{B})_{1/2}$. Assume now that $d$ is odd and that $\mathrm{char}(k)\neq 2$. Finite generation has the 2-out-of-3 property with respect to (short or long) exact sequences and is stable under direct summands. Therefore, the proof of the implication $\{\mathrm{B}(V_i)_{1/2}\} + \{\mathrm{B}(\widetilde{D}_i)_{1/2}\} \Rightarrow \mathrm{B}(Q)_{1/2}$ follows from the combination of Proposition \ref{prop:computation}(ii) with the $\mathbb{Z}[1/2]$-linearization of \eqref{eq:iso-key}. Finally, recall from \cite{Quillen, Quillen2, Quillen1} that the conjecture $\mathrm{B}(X)$ holds in the case where $\mathrm{dim}(X)\leq 1$. Therefore, the Corollaries \ref{cor:main}-\ref{cor:main2} also hold similarly for the conjecture $\mathrm{B}(-)_{1/2}$.
We now prove the second claim. Let $q\colon Q \to B$ be a quadric fibration as in Theorem \ref{thm:main} with $B$ a curve. Thanks to Corollary \ref{cor:main} (for the conjecture $\mathrm{B}(-)_{1/2}$), it suffices to show that the groups $K_n(Q), n \geq 2$, are torsion. Assume first that $d$ is even. By combining Proposition \ref{prop:computation}(i) with the $\mathbb{Q}$-linearization of \eqref{eq:iso-key}, we obtain an isomorphism $K_n(Q)_\mathbb{Q} \simeq K_n(\widetilde{B})_\mathbb{Q} \oplus K_n(B)_\mathbb{Q}^{\oplus (d-2)}$. Thanks to Proposition \ref{prop:curve} below, we have $K_n(\widetilde{B})_\mathbb{Q}=K_n(B)_\mathbb{Q}=0$ for every $n \geq 2$. Therefore, we conclude that the groups $K_n(Q), n \geq 2$, are torsion. Assume now that $d$ is odd and that $\mathrm{char}(k)\neq 2$. Thanks to Proposition \ref{prop:computation}(ii), $U(Q)_\mathbb{Q}$ belongs to the smallest thick triangulated subcategory of $\NMot(k)_\mathbb{Q}$ containing the noncommutative mixed motives $\{U(V_i)_\mathbb{Q}\}$ and $\{U(\widetilde{D}_i)_\mathbb{Q}\}$, where $V_i$ is any affine open subscheme of $B$ and $\widetilde{D}_i$ is any Galois $2$-fold cover of $D_i$. Moreover, $U(Q)_\mathbb{Q}$ may be explicitly obtained from $\{U(V_i)_\mathbb{Q}\}$ and $\{U(\widetilde{D}_i)_\mathbb{Q}\}$ using solely the $\mathbb{Q}$-linearization of the Mayer-Vietoris distinguished triangles. Therefore, since $K_n(V_i)_\mathbb{Q}=0$ for every $n \geq 2$ (see Proposition \ref{prop:curve} below) and $K_n(\widetilde{D}_i)_\mathbb{Q}=0$ for every $n \geq 1$ (see Quillen's computation \cite{Quillen1} of the algebraic $K$-theory of a finite field), an inductive argument using the $\mathbb{Q}$-linearization of \eqref{eq:iso-key} and the $\mathbb{Q}$-linearization of the Mayer-Vietoris distinguished triangles implies that the groups $K_n(Q), n \geq 2$, are torsion.
\begin{proposition}\label{prop:curve}
We have $K_n(X)_\mathbb{Q}=0$ for every $n \geq 2$ and smooth $k$-curve $X$.
\end{proposition}
\begin{proof}
In the particular case where $X$ is affine, this result was proved in \cite[Cor.~3.2.3]{Harder} (see also \cite[Thm.~0.5]{Quillen}). In the general case, choose an affine open cover $\{W_i\}$ of $X$. Since $X$ is quasi-compact, this affine open cover admits a {\em finite} subcover. Therefore, the proof follows from an inductive argument (similar to the one in the proof of Theorem \ref{thm:aux}(ii)) using the $\mathbb{Q}$-linearization of \eqref{eq:iso-key} and the $\mathbb{Q}$-linearization of the Mayer-Vietoris distinguished triangles.
\end{proof}
\subsection*{Item (ii)} If the conjecture $\mathrm{B}(Q)$ holds, then it follows from the decomposition \eqref{eq:direct} and from the isomorphisms \eqref{eq:iso-key} that the algebraic $K$-theory groups $K_n(\mathrm{perf}_\mathrm{dg}(\mathbb{P}^{m-1}; {\mathcal C} l_0(q))), n \geq 0$, are finitely generated. Therefore, by combining the descriptions (a)-(c) of the noncommutative mixed motive $U(Y)$ (see the proof of Theorem \ref{thm:intersection}) with \eqref{eq:iso-key}, we conclude that the conjecture $\mathrm{B}(Y)$ also holds. Note that when $2m\leq d$, a similar argument proves the converse implication $\mathrm{B}(Y) \Rightarrow \mathrm{B}(Q)$.
\subsection*{Item (iii)} Items (i)-(ii) of Theorem \ref{thm:Bass} imply that Corollary \ref{cor:intersection} holds similarly for the conjecture $\mathrm{B}(-)_{1/2}$. We now address the second claim. Let $q\colon Q \to \mathbb{P}^1$ be the quadric fibration associated to the smooth complete intersection $Y$ of two quadric hypersurfaces. Thanks to item (i), the groups $K_n(Q)_{1/2}, n \geq 2$, are finite. Therefore, making use of the decomposition \eqref{eq:direct}, of the $\mathbb{Z}[1/2]$-linearization of \eqref{eq:iso-key}, and of the above descriptions (a)-(c) of $U(Y)$ (see the proof of Theorem \ref{thm:intersection}), we conclude that the groups $K_n(Y)_{1/2}, n \geq 2$, are also finite.
\subsection*{Item (iv)}
We start by proving the first claim. By combining Proposition \ref{prop:Pezzo} with the $\mathbb{Z}[1/6]$-linearization of \eqref{eq:iso-key}, we conclude that
$$K_n(T)_{1/6}\simeq K_n(B)_{1/6} \oplus K_n(Z_2)_{1/6} \oplus K_n(Z_3)_{1/6}\,.$$
Therefore, since finite generation is stable under direct sums and direct summands, we obtain the equivalence $\mathrm{B}(T)_{1/6} \Leftrightarrow \mathrm{B}(B)_{1/6} + \mathrm{B}(Z_2)_{1/6} + \mathrm{B}(Z_3)_{1/6}$. As mentioned in the proof of item (i), the conjecture $\mathrm{B}(X)$ holds in the case where $\mathrm{dim}(X)\leq 1$. Hence, Corollary \ref{cor:Pezzo} also holds similarly for the conjecture $\mathrm{B}(-)_{1/6}$.
We now prove the second claim. Let $f\colon T \to B$ be a family of sextic du Val del Pezzo surfaces as in Theorem \ref{thm:Pezzo} with $B$ a curve. Similarly to the proof of item (i) of Theorem \ref{thm:Bass}, it suffices to show that the groups $K_n(T), n \geq 2$, are torsion. By combining Proposition \ref{prop:Pezzo} with the $\mathbb{Q}$-linearization of \eqref{eq:iso-key}, we obtain an isomorphism $K_n(T)_\mathbb{Q} \simeq K_n(B)_\mathbb{Q} \oplus K_n(Z_2)_\mathbb{Q} \oplus K_n(Z_3)_\mathbb{Q}$. Thanks to Proposition \ref{prop:curve}, we have moreover $K_n(B)_\mathbb{Q} = K_n(Z_2)_\mathbb{Q} = K_n(Z_3)_\mathbb{Q} =0$ for every $n \geq 2$. Therefore, we conclude that the groups $K_n(T), n \geq 2$, are torsion.
\subsection*{Acknowledgments:} The author is grateful to Joseph Ayoub for useful e-mail exchanges concerning the Schur-finiteness conjecture. The author would also like to thank the Hausdorff Research Institute for Mathematics for its hospitality.
\section*{Introduction}
Mass is the most fundamental invariant of asymptotically flat manifolds. Originally defined in
General Relativity, it has since played an important role in Riemannian geometric issues.
Other interesting invariants, still motivated by physics, include the energy momentum, the angular momentum
or the center of mass (which will be of interest in this note).
Moreover, they have been extended to other types of asymptotic behaviours such as asymptotically hyperbolic manifolds.
One of the main difficulties when handling the mass of an asymptotically flat or hyperbolic manifold (or any of its companion
invariants) comes from the fact that it is defined as a limit of an integral expression over larger and larger spheres,
an expression that moreover depends on the first derivatives of the metric tensor written in a special chart where the metric coefficients are
asymptotic to those of the model (flat, hyperbolic) metric at infinity.
It seems unavoidable that a limiting process is involved in the definitions. But
finding expressions that do not depend on the first derivatives but on rather more geometric quantities is an old
question that has attracted the attention of many authors. It was suggested by A. Ashtekar and R. O. Hansen
\cite{ashtekar-hansen_unified} (see also P. Chru\'sciel \cite{chrusciel_rk-positive-energy})
that the mass could rather be defined from the Ricci tensor and a conformal Killing field of
the Euclidean space. Equality between the two definitions, as well as a similar identity for the center of mass, has
then been proved rigorously by L.-H. Huang using a density theorem \cite{huang_conf-chinoise} building on
previous work by J. Corvino and H. Wu \cite{corvino-wu} for conformally flat manifolds,
and by P. Miao and L.-F.~Tam \cite{miao-tam} through a direct computation in coordinates.
The goal of this short note is twofold: we shall provide first a simple proof of the equality between both sets of
invariants. Although similar in spirit to Miao-Tam \cite{miao-tam}, our approach completely avoids computations in
coordinates. Moreover, it clearly explains why the equality should hold, by connecting it to a natural
integration by parts formula related to the contracted Bianchi identity. As a corollary, we shall extend the
definition of the mass through the Ricci tensor to the asymptotically hyperbolic setting.
\section{Basic facts}
We begin by recalling the classical definitions of the mass and the center of mass of an asymptotically flat
manifold, together with their alternative definitions involving the Ricci tensor. In all that follows, the dimension $n$
of the manifolds considered will be taken to be at least $3$.
\begin{defi}
An asymptotically flat manifold is a complete Riemannian manifold $(M,g)$ such that there exists a diffeomorphism $\Phi$
(called a chart at infinity) from the complement of a compact set in $M$ into the complement of a ball in ${\mathbb R}^n$,
such that, in these coordinates and for some $\tau>0$,
$$ |g_{ij} - \delta_{ij} | = O(r^{-\tau}), \quad |\partial_kg_{ij}| = O(r^{-\tau-1}),
\quad |\partial_k\partial_{\ell}g_{ij}| = O(r^{-\tau-2}). $$
\end{defi}
\begin{defi}\label{defn1.2}
If $\tau>\tfrac{n-2}{2}$ and the scalar curvature of $(M,g)$ is integrable, the quantity\footnote{The normalization
factor in front of the integral is chosen here to give the expected answer for the so-called \emph{generalized Schwarzschild
metrics}; the same applies to the definition of the center of mass below.}
\begin{equation}\label{eq_defn1.2}
m(g) = \frac{1}{2(n-1)\omega_{n-1}}\, \lim_{r\to\infty} \int_{S_r} (-\delta^e g - d \operatorname{tr}_e g)(\nu)\,d\!\operatorname{vol}^e_{S_r}
\end{equation}
exists (where $e$ refers to the Euclidean metric in the given chart at infinity, $\delta$ is the divergence defined as the
adjoint of the exterior derivative, $\nu$ denotes the field of outer unit normals to the coordinate spheres $S_r$,
and $\omega_{n-1}$ is the volume of the unit round sphere of
dimension $n-1$) and is independent of the chart chosen around infinity. It is called the \emph{mass}
of the asymptotically flat manifold $(M,g)$.
\end{defi}
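As a consistency check of the normalization (a standard computation, sketched here for the conformally flat model mentioned in the footnote), consider the generalized Schwarzschild metric $g_m=\big(1+\tfrac{m}{2r^{n-2}}\big)^{\frac{4}{n-2}}e$ on the complement of a ball in $\mathbb{R}^n$. Then $g_m-e=\tfrac{2m}{n-2}\,r^{2-n}\,e+O(r^{2(2-n)})$, so that
\begin{equation*}
(-\delta^e g_m - d \operatorname{tr}_e g_m)(\partial_r) \;=\; 2(n-1)\,m\,r^{1-n} \;+\; O(r^{3-2n})\,,
\end{equation*}
and integrating over $S_r$ (whose Euclidean volume is $\omega_{n-1}r^{n-1}$) before letting $r$ tend to infinity yields $m(g_m)=m$, as expected.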
\begin{defi}\label{defn1.3}
If $\tau>\tfrac{n-2}{2}$, the scalar curvature of $(M,g)$ is integrable, $m(g)\neq 0$, and the following so-called
\emph{Regge-Teitelboim (RT) conditions} are satisfied:
$$ |g_{ij}^{\textrm{odd}} | = O(r^{-\tau-1}), \quad |\partial_k\left(g_{ij}^{\textrm{odd}}\right)| = O(r^{-\tau-2})$$
(where $\cdot^{\textrm{odd}}$ denotes the odd part of a function on the chart at infinity),
the quantities
\begin{equation*}
c^{\alpha}(g) = \frac{1}{2(n-1)\omega_{n-1}m(g)}\, \lim_{r\to\infty} \int_{S_r}
\left[ x^{\alpha}(-\delta^e g - d \operatorname{tr}_e g) - (g-e)(\partial_\alpha,\cdot) + \operatorname{tr}_e (g-e)\, dx^{\alpha} \right](\nu)\,d\!\operatorname{vol}^e_{S_r}
\end{equation*}
exist for each $\alpha$ in $\{1,...,n\}$. Moreover, the vector $\mathbf{C}(g) = (c^1(g),\dots,c^n(g))$ is independent
of the chart chosen around infinity, up to the action of rigid Euclidean isometries. It is called the \emph{center of
mass} of the asymptotically flat manifold $(M,g)$.
\end{defi}
Existence and invariance of the mass have been proved by R. Bartnik \cite{ba} and P. T. Chru{\'s}ciel \cite{chrusciel-mass}.
The center of mass has been introduced by Regge and Teitelboim \cite{regge-teitelboim_surface-integrals,regge-teitelboim_improved},
and Beig and \'O Murchadha \cite{bom}, see also the more recent works of J. Corvino and R. Schoen
\cite{corvino_scalar-curvature,corvino-schoen}. We shall recall here the basic idea, following the approach due to B. Michel
\cite{michel_geometric-invariance}. Let $g$ and $b$ be two metrics on a complete manifold $M$, the latter one being considered as a
\emph{background} metric. Let also $\mathcal{F}^g$ (resp. $\mathcal{F}^b$) be a (scalar) polynomial invariant in the curvature
tensor and its subsequent derivatives, $V$ a function, and $(M_r)_{r\geq 0}$ an exhaustion of $M$ by compact subsets, whose boundaries will
be denoted by $S_r$ (later taken as large coordinate spheres in a chart at infinity). One may then compute:
\begin{equation*}
\int_{M_r} V\, \left(\mathcal{F}^g - \mathcal{F}^b\right) \, d\!\operatorname{vol}^b
\ = \ \int_{M_r} V\, (D\mathcal{F})_b(g-b)\, d\!\operatorname{vol}^b \ + \ \int_{M_r} V \, \mathcal{Q}(b,g) \, d\!\operatorname{vol}^b
\end{equation*}
where $\mathcal{Q}$ denotes the (quadratic) remainder term in the Taylor formula for the functional $\mathcal{F}$.
Integrating by parts the linear term leads to:
\begin{equation*}
\int_{M_r} V\, \left(\mathcal{F}^g - \mathcal{F}^b\right) \, d\!\operatorname{vol}^b
\ = \ \int_{M_r} \langle (D\mathcal{F})_b^*V\, ,\, g-b\rangle \, d\!\operatorname{vol}^b \ + \ \int_{S_r} \mathbb{U}(V,g,b)
\ + \ \int_{M_r} V\, \mathcal{Q}(b,g) \, d\!\operatorname{vol}^b .
\end{equation*}
This formula shows that $\lim\limits_{r\to\infty} \int_{S_r} \mathbb{U}(V,g,b)$ exists if the following three natural conditions are satisfied:
(1) $g$ is asymptotic to $b$ so that $V\, \left(\mathcal{F}^g - \mathcal{F}^b\right)$ and $V\, \mathcal{Q}(b,g)$ are integrable;
(2) the background geometry $b$ is rigid enough (this means that any two `charts at infinity' where $g$ is asymptotic to $b$ differ by a
diffeomorphism whose leading term is an isometry of $b$); (3) $V$ belongs to the kernel of $(D\mathcal{F})_b^*$ (the adjoint of the
first variation operator of the Riemannian functional $\mathcal{F}$). Moreover, Michel proves in \cite{michel_geometric-invariance}
that it always defines an asymptotic invariant, independent of the choice of chart at infinity, as a consequence of the diffeomorphism invariance
of the integrated scalar invariant $\mathcal{F}^g$.
If one chooses $\mathcal{F}^g = \operatorname{Scal}^g$ on an asymptotically flat manifold (hence $b=e$, the Euclidean metric),
$$ (D\operatorname{Scal})_e^*V \, = \, \operatorname{Hess}^e V \, + \, (\Delta^eV)\,e, $$
and its kernel consists of affine functions. Letting $V\equiv 1$,
it is easy to check that the limit over spheres above yields the classical definition of the mass:
$$ 2(n-1)\omega_{n-1}\, m(g) \ = \ \lim_{r\to\infty} \int_{S_r} \mathbb{U}(1,g,e) . $$
Thus,
\begin{equation}\label{eqn1.3bis}
2(n-1)\omega_{n-1}\, m(g) \ = \ \lim_{r\to\infty} \int_{M_r} V\, \operatorname{Scal}^g \, d\!\operatorname{vol}^e \
- \ \lim_{r\to\infty} \int_{M_r} \mathcal{Q}(e,g) \, d\!\operatorname{vol}^e\,.
\end{equation}
Integrable scalar curvature yields convergence of the first term, whereas the integrand in the second term is a
combination of terms in $(g-e)\partial^2g$ and $g^{-1}(\partial g)^2$: it is then integrable since $\tau>\tfrac{n-2}{2}$.
Moreover, Michel's analysis shows that it defines an asymptotic invariant, independent of the choice of chart at infinity
\cite{michel_geometric-invariance}.
If one takes $V=V^{(\alpha)}=x^{\alpha}$ (the $\alpha$-th coordinate function in the chart at infinity, for any
$\alpha$ in $\{1,...,n\}$), the integral over spheres now yields the classical definition of the center of mass, \emph{i.e.}
$$ 2(n-1)\omega_{n-1}\, m(g)\, c^{\alpha}(g) \ = \ \lim_{r\to\infty} \int_{S_r} \mathbb{U}(V^{(\alpha)},g,e)
\quad \textrm{ for any } \alpha \in \{1,...,n\}. $$
Under the RT conditions, these converge as well and the vector $\mathbf{C}(g)$ is again an
asymptotic invariant.
It now remains to state the alternative definitions of these asymptotic invariants \emph{via} the Ricci tensor.
\begin{defi}\label{defn1.4}
Let $X$ be the radial vector field $X = r\partial_r$ in the chosen chart at infinity. Then we define
\emph{the Ricci version of the mass} of $(M,g)$ by
\begin{equation}\label{eqn1.4}
m_R(g) \ = \ - \frac{1}{(n-1)(n-2)\omega_{n-1}}\,
\lim_{r\to\infty} \int_{S_r} \left(\Ric^g - \frac12\operatorname{Scal}^g g\right)(X,\nu)\, d\!\operatorname{vol}^g
\end{equation}
whenever this limit is convergent. For $\alpha$ in $\{1,\dots,n\}$, let $X^{(\alpha)}$ be the Euclidean conformal Killing field
$X^{(\alpha)} = r^2\partial_{\alpha} - 2 x^{\alpha}x^i\partial_i$ and define
\begin{equation}\label{eqn1.4bis}
c^{\alpha}_R(g) \ = \ \frac{1}{2(n-1)(n-2)\omega_{n-1}m(g)}\,
\lim_{r\to\infty} \int_{S_r} \left(\Ric^g - \frac12\operatorname{Scal}^g g\right)(X^{(\alpha)},\nu)\, d\!\operatorname{vol}^g
\end{equation}
whenever this limit is convergent. We will call this vector $\mathbf{C}_R(g)=(c^{1}_R(g),\dots,c^n_R(g))$.
\end{defi}
Notice that these definitions of the asymptotic invariants rely on the Einstein tensor,
which seems to be consistent with the physical motivation.
\section{Equality in the asymptotically flat case}
In this section, we will prove the equality between the classical expressions $m(g), \mathbf{C}(g)$ of the mass or the
center of mass and their Ricci versions $m_R(g), \mathbf{C}_R(g)$. The proof we will give relies on Michel's approach
described above together with two elementary computations in Riemannian geometry.
\begin{lemm}[The integrated Bianchi identity]\label{lemma2.1}
Let $h$ be a $C^3$ Riemannian metric on a smooth compact domain with boundary $\Omega$ and $X$ be a conformal Killing field.
Then
$$ \int_{\partial \Omega} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu)\, d\!\operatorname{vol}^h_{\partial \Omega}
\ = \ \frac{n-2}{2n} \int_{\Omega} \operatorname{Scal}^h \left(\delta^h X\right)\, d\!\operatorname{vol}^h_{\Omega} \ ,$$
where $\nu$ is the outer unit normal to $\partial \Omega$.
\end{lemm}
\begin{proof} This equality is a variation of the well known \emph{Pohozaev identity} in conformal geometry, as
stated by R. Schoen \cite{schoen_ricci-mass}. Our version has the advantage that the divergence of $X$ appears in
the bulk integral (the classical Pohozaev identity is rather concerned with the derivative of the scalar curvature
in the direction of $X$). The proof being very simple, we will
give it here. From the contracted Bianchi identity $\delta^h\left(\Ric^h - \tfrac12\operatorname{Scal}^h h\right)=0$,
one deduces that
$$
\int_{\partial \Omega} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu)\, d\!\operatorname{vol}^h_{\partial \Omega} \ = \
\int_{\Omega} \langle \Ric^h - \frac12\operatorname{Scal}^h h,(\delta^h)^*X\rangle_h \, d\!\operatorname{vol}^h_{\Omega}
$$
where $(\delta^h)^*$ in the above computation denotes the adjoint of the divergence on vectors, \emph{i.e.} the symmetric
part of the covariant derivative. Since $X$ is conformal Killing, $(\delta^h)^*X = -\tfrac1n(\delta^hX)h$ and
\begin{align*} \int_{\partial \Omega} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu)\, d\!\operatorname{vol}^h_{\partial \Omega}
& \ = \ - \frac1n \int_{\Omega} \operatorname{tr}_h\left( \Ric^h - \frac12\operatorname{Scal}^h h\right)(\delta^hX) \, d\!\operatorname{vol}^h_{\Omega} \\
& \ = \ \frac{n-2}{2n} \int_{\Omega} \operatorname{Scal}^h (\delta^hX) \, d\!\operatorname{vol}^h_{\Omega}
\end{align*}
and this concludes the proof. \end{proof}
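For the reader's convenience, note that the trace computation behind the last equality is simply
\begin{equation*}
\operatorname{tr}_h\left(\Ric^h - \frac12\operatorname{Scal}^h h\right) \;=\; \operatorname{Scal}^h - \frac{n}{2}\,\operatorname{Scal}^h \;=\; -\,\frac{n-2}{2}\,\operatorname{Scal}^h\,,
\end{equation*}
so that the prefactor $-\tfrac1n$ produces exactly the constant $\tfrac{n-2}{2n}$ appearing in the statement.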
Lemma \ref{lemma2.1} provides a link between the integral expression appearing in the Ricci definition
of the asymptotic invariants (see Definition \ref{defn1.4}) and the bulk integral $\int \operatorname{Scal}^h \left(\delta^h X\right)$. This latter
quantity also looks like the one used by Michel to derive the definitions of the asymptotic invariants, provided that
some connection can be made between divergences of conformal Killing fields and elements in the kernel of the adjoint of
the linearized scalar curvature operator. Such a connection stems from our second Lemma:
\begin{lemm}\label{lemma2.2}
Let $h$ be a $C^3$ Riemannian metric and $X$ a conformal Killing field. If $h$ is Einstein with Einstein constant
$\lambda(n-1)$, then $\delta^hX$ sits in the kernel of $(D\operatorname{Scal})_h^*$; more precisely:
\begin{equation}
\label{eqlemm2.2}
\operatorname{Hess}^h \delta^hX \ = \ - \lambda\, (\delta^hX)\, h .
\end{equation}
\end{lemm}
\begin{proof} Recall that $(D\operatorname{Scal})_h^* V = \operatorname{Hess}^h V + (\Delta^hV)h - V\Ric^h$ \cite[1.159(e)]{besse}, so that
its kernel is precisely the set of solutions of \eqref{eqlemm2.2} if $\Ric^h = \lambda(n-1)h$.
Let $\phi_t$ be the (local) flow of $X$, which acts by conformal diffeomorphisms, and $e^{2u_t}$ the conformal
factor at time $t\geq 0$, with $u_0=0$. Hence $\Ric^{\phi_t^*h} = \lambda(n-1)\,\phi_t^*h$,
which can be rewritten as $\Ric^{e^{2u_t}h} = \lambda(n-1)\,e^{2u_t}h$ since $\phi_t$ is conformal. From \cite[1.159(d)]{besse},
$$
\Ric^{e^{2u_t}h} \ = \ \Ric^{h} - (n-2)\left(\operatorname{Hess}^h u_t - du_t\otimes du_t\right)
\ + \ \left(\Delta^h u_t - (n-2)\,|du_t|^2_h\right)\, h ,
$$
from which one deduces that
$$ - (n-2)\left(\operatorname{Hess}^h u_t - du_t\otimes du_t\right) \ + \ \left(\Delta^h u_t - (n-2)|du_t|^2_h\right)\, h
\ = \ \lambda(n-1)\,\left(e^{2u_t} - 1\right) h . $$
We now differentiate at $t=0$. Denoting by $\dot{u}$ the first variation of $u_t$, which is related to $X$ through
$\delta^hX = -n\,\dot{u}$, and taking into account that $u_0=0$, one gets:
\begin{equation}\label{eqnpreuvelemm2.2}
- (n-2)\operatorname{Hess}^h \dot{u} + (\Delta^h \dot{u})\, h \ = \ 2(n-1)\lambda\,\dot{u}\, h.
\end{equation}
Tracing this identity yields $2(n-1)\,\Delta^h \dot{u} = 2n(n-1)\,\lambda\,\dot{u}$, so that
$\Delta^h \dot{u} = n\lambda\,\dot{u}$. Inserting this in Equation~\eqref{eqnpreuvelemm2.2} leads to
$\operatorname{Hess}^h \dot{u} = - \lambda\,\dot{u}\, h$, which is the desired expression.
\end{proof}
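In the flat case $\lambda=0$, Lemma \ref{lemma2.2} recovers precisely the kernel elements used in Michel's approach: with the sign convention $\delta^e=-\operatorname{div}$ used throughout, a direct computation in coordinates gives
\begin{equation*}
X = r\partial_r = x^i\partial_i \;\Longrightarrow\; \delta^e X = -n\,, \qquad
X^{(\alpha)} = r^2\partial_\alpha - 2x^{\alpha}x^i\partial_i \;\Longrightarrow\; \delta^e X^{(\alpha)} = 2n\,x^{\alpha}\,.
\end{equation*}
Both divergences are affine functions, hence lie in the kernel of $(D\operatorname{Scal})_e^*$, in accordance with the choices $V\equiv 1$ and $V^{(\alpha)}=x^{\alpha}$ made above (up to constant factors, which account for the different normalizations appearing in Definition \ref{defn1.4}).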
We now have all the necessary elements to prove the equality between the classical expressions of the
asymptotic invariants and their Ricci versions in the asymptotically flat case.
\begin{theo}
If $(M,g)$ is a $C^3$ asymptotically flat manifold with integrable scalar curvature and decay rate $\tau > \tfrac{n-2}{2}$,
then the classical and Ricci definitions of the mass agree: $m (g) \, = \, m_R(g)$.
If $m(g)\neq 0$ and the RT asymptotic conditions are moreover assumed, the same holds for the center of mass, \emph{i.e.}
$c^{\alpha}(g) \, = \, c^{\alpha}_R(g)$ for any $\alpha \in \{1,...,n\}$.
\end{theo}
\begin{proof}
We shall give the complete proof for the mass only, the case of the center of mass being entirely similar.
Fix a chart at infinity on $M$. As the mass is defined asymptotically, we may freely replace a compact part in $M$
by a (topological) ball, which we shall decide to be the unit ball $B_0(1)$ in the chart at infinity. The manifold is
unchanged outside that compact region. For any $R\gg 1$ we define a cut-off function $\chi_R$ which vanishes inside the
sphere of radius $\tfrac{R}{2}$, equals $1$ outside the sphere of radius $\tfrac{3R}{4}$ and moreover satisfies
$$ |\nabla\chi_R| \leq C_1 R^{-1}, \quad |\nabla^2\chi_R| \leq C_2 R^{-2}, \ \ \textrm{and } \,
|\nabla^3\chi_R| \leq C_3 R^{-3}$$
for some universal constants $C_i$ ($i=1,2,3$) not depending on $R$. We shall now denote $\chi=\chi_R$ unless some
confusion is about to occur. We then define for each $R>4$ a metric on the annulus $\Omega_R = A(\tfrac{R}{4},R)$:
$$h \ = \ \chi g \, + \, (1-\chi) e , $$
and we shall also denote by $h$ the complete metric obtained by gluing the Euclidean metric inside the ball
$B_0(\tfrac{R}{4})$ and the original metric $g$ outside the ball $B_0(R)$.
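Such a family of cut-off functions is easily obtained by scaling: choosing (one possible construction among many) a smooth non-decreasing profile $\chi_1\colon[0,\infty)\to[0,1]$ with $\chi_1\equiv 0$ on $[0,\tfrac12]$ and $\chi_1\equiv 1$ on $[\tfrac34,\infty)$, one may set
\begin{equation*}
\chi_R(x) \;=\; \chi_1\!\left(\frac{|x|}{R}\right),
\end{equation*}
whose $k$-th derivatives are supported in the annulus $A(\tfrac{R}{2},\tfrac{3R}{4})$ and bounded by $C_k R^{-k}$, with constants depending only on $\chi_1$.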
Let now $X$ be a conformal Killing field for the Euclidean metric. From Lemma \ref{lemma2.2},
$V = \delta^e X$ sits in the kernel of the adjoint of the linearized scalar curvature
operator $(D\operatorname{Scal})_e^*$. Computing as in Lemma \ref{lemma2.1} over the annulus $\Omega_R = A(\tfrac{R}{4},R)$,
\begin{align*}
\int_{S_R} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu^h)
& \ = \ \int_{\Omega_R} \langle \Ric^h - \frac12\operatorname{Scal}^h h\, , (\delta^h)^*X\rangle \\
& \ = \ \int_{\Omega_R} \langle \Ric^h - \frac12\operatorname{Scal}^h h\, , (\delta^h)_0^*X - \frac{\delta^h X}{n}h \rangle \\
& \ = \ -\frac1n \int_{\Omega_R} \operatorname{tr}_h\left(\Ric^h - \frac12\operatorname{Scal}^h h\right)\, \delta^h X
\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\, + \, \int_{\Omega_R} \langle \Ric^h - \frac12\operatorname{Scal}^h h\, , (\delta^h)_0^*X\rangle \ ,
\end{align*}
where the volume forms and scalar products are all relative to $h$ but have been removed for clarity
(notice moreover that all boundary contributions at $\tfrac{R}{4}$ vanish since $h$ is flat there). Hence
\begin{equation}\label{eq2.1}
\int_{S_R} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu^h)
\ = \ \frac{n-2}{2n} \int_{\Omega_R} (\delta^h X) \,\operatorname{Scal}^h
\, + \, \int_{\Omega_R} \langle \Ric^h - \frac12\operatorname{Scal}^h h,(\delta^h)_0^*X\rangle \ .
\end{equation}
We now choose $X=r\partial_r$ (the radial dilation vector field), and recall that $\delta^eX=-n$ in this case.
We can now replace the volume form $d\!\operatorname{vol}^h$,
the divergence $\delta^h$, and the tracefree Killing operator $(\delta^h)^*_0$ by their Euclidean counterparts
$d\!\operatorname{vol}^e$, $\delta^e$, and $(\delta^e)^*_0$: from our asymptotic decay conditions, our choice of cut-off function $\chi$,
and the facts that $\tau>\tfrac{n-2}{2}$ and $|X|=r$, one has for the first term in the right-hand side
of \eqref{eq2.1}:
$$ \int_{\Omega_R} (\delta^h X) \,\operatorname{Scal}^h d\!\operatorname{vol}^h \, - \, \int_{\Omega_R} (\delta^e X) \,\operatorname{Scal}^h d\!\operatorname{vol}^e
\, = \, O\left(R^{n-2\tau-2}\right) \, = \, o(1)$$
as $R$ tends to infinity (note that the second term in the left-hand side does not tend to zero at infinity as
the scalar curvature of $h$ may not be uniformly integrable).
As $(\delta^e)_0^*X=0$, the last term in \eqref{eq2.1} can be treated in the same way and it is $o(1)$, too.
One concludes that, in the case $X$ is the radial field,
\begin{equation}\label{eq2.2}
\int_{S_R} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X,\nu^h)\, d\!\operatorname{vol}^e_{S_R}
\ = \ \frac{n-2}{2n} \int_{\Omega_R} (\delta^e X) \,\operatorname{Scal}^h d\!\operatorname{vol}^e
\, + \ o(1).
\end{equation}
It remains to apply Michel's analysis over the annulus $\Omega_R$:
\begin{equation*}
\int_{\Omega_R} (\delta^e X) \,\operatorname{Scal}^h d\!\operatorname{vol}^e
\ = \ \int_{S_R} \mathbb{U}(\delta^eX,g,e)
\, + \, \int_{\Omega_R} (\delta^eX)\,\mathcal{Q}(e,h)\, d\!\operatorname{vol}^e \, + \, o(1)
\end{equation*}
(the boundary contribution at $r=\tfrac{R}{4}$ vanishes again since $h=e$ there). Taking into account $\delta^eX=-n$,
our asymptotic decay conditions, the assumptions on $\chi$, and $\tau>\tfrac{n-2}{2}$, the $\mathcal{Q}$-term tends to $0$
at infinity (for the very same reason that made it integrable in Michel's analysis) and one gets
$$ \frac{1}{2(n-1)\omega_{n-1}}\,\int_{S_r} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(r\partial_r,\nu^h)\, d\!\operatorname{vol}_{S_r}
\ = \ \frac{2-n}{2}\, m(g) \, + \ o(1).$$
If one now chooses $X=X^{(\alpha)} = r^2\partial_{\alpha} - 2 x^{\alpha}x^i\partial_i$,
\emph{i.e.} $X$ is the essential conformal Killing field of $\mathbb{R}^n$ obtained by conjugating a translation by
the inversion map, one has $\delta^e X^{(\alpha)} = 2n x^{\alpha} = 2n V^{(\alpha)}$ and one can use the same argument.
Some careful bookkeeping shows that all appropriate terms are $o(1)$ due to the Regge-Teitelboim conditions and
one concludes that
$$ \frac{1}{2(n-1)\omega_{n-1}}\,\int_{S_r} \left(\Ric^h - \frac12\operatorname{Scal}^h h\right)(X^{(\alpha)},\nu^h) \, d\!\operatorname{vol}_{S_r}
\ = \ (n-2) \, m(g)\, c^{\alpha}(g) \, + \, o(1)$$
as expected.
\end{proof}
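As a quick numerical sanity check (not part of the proof) of the two Euclidean divergence identities used above, namely $\delta^e(r\partial_r)=-n$ and $\delta^e X^{(\alpha)} = 2n x^{\alpha}$, one can differentiate the coordinate expressions of the fields by central differences, recalling the sign convention $\delta^e = -\operatorname{div}$. The helper names below are ours and the test point is arbitrary.

```python
# Check delta^e(r d_r) = -n and delta^e X^(a) = 2n x^a, with delta^e = -div.

def divergence(F, x, h=1e-6):
    """Central-difference Euclidean divergence sum_i dF_i/dx_i at point x."""
    n = len(x)
    total = 0.0
    for i in range(n):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        total += (F(xp)[i] - F(xm)[i]) / (2 * h)
    return total

n = 3
p = [0.3, -0.7, 0.5]  # arbitrary test point

radial = lambda x: list(x)  # components of r*d_r in Cartesian coordinates

def inv_translation(x, a=0):
    """X^(a) = r^2 d_a - 2 x^a x^i d_i, the inverted translation field."""
    r2 = sum(c * c for c in x)
    return [r2 * (i == a) - 2 * x[a] * x[i] for i in range(len(x))]

delta_radial = -divergence(radial, p)
delta_inv = -divergence(inv_translation, p)
print(round(delta_radial, 6))      # -> -3.0   (= -n)
print(round(delta_inv / p[0], 6))  # -> 6.0    (= 2n, so delta^e X^(a) = 2n x^a)
```

Both fields have polynomial components of degree at most two, so the central differences are exact up to rounding.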
\section{Asymptotically hyperbolic manifolds}
The mass of asymptotically hyperbolic manifolds was defined by P. T. Chru{\'s}ciel and the author
\cite{ptc-mh} and independently by X. Wang \cite{wang_hyperbolic-mass}; we shall use here the definition of
\cite{ptc-mh}, see \cite{mh_survey-AH-mass} for a comparison.
\begin{defi}
An asymptotically hyperbolic manifold is a complete Riemannian manifold $(M,g)$ such that there exists a diffeomorphism $\Phi$
(chart at infinity) from the complement of a compact set in $M$ into the complement of a ball in
$\mathbb{R}\times S^{n-1}$ (equipped with the background hyperbolic metric $b = dr^2 + \sinh^2 r g_{S^{n-1}}$), satisfying the
following condition: if $\epsilon_0=\partial_r$, $\epsilon_1$, ..., $\epsilon_n$ is some $b$-orthonormal basis, and
$g_{ij} = g(\epsilon_{i},\epsilon_{j})$, there exists some $\tau>0$ such that,
$$ |g_{ij} - \delta_{ij} | = O(e^{-\tau r}), \quad |\epsilon_k\cdot g_{ij}| = O(e^{-\tau r}),
\quad |\epsilon_k\cdot \epsilon_{\ell}\cdot g_{ij}| = O(e^{-\tau r}). $$
\end{defi}
\begin{defi}\label{defn3.2}
If $\tau>\tfrac{n}{2}$ and $\left(\operatorname{Scal}^g + n(n-1)\right)$ belongs to $L^1(e^rd\!\operatorname{vol}_b)$,
the linear map $\mathbf{M}(g)$ defined by\footnote{As in the asymptotically flat case, the normalization
factor comes from the computation for a reference family of metrics, which is the \emph{generalized Kottler
metrics} in the asymptotically hyperbolic case.}:
\begin{equation*}
V \, \mapsto \, \frac{1}{2(n-1)\omega_{n-1}}\, \lim_{r\to\infty} \int_{S_r}
\left[ V \, (-\delta^b g - d \operatorname{tr}_b g) + \operatorname{tr}_b(g-b) dV - (g-b)(\nabla^bV,\cdot)\right] (\nu) \,d\!\operatorname{vol}_{S_r}
\end{equation*}
is well-defined on the kernel of $(D\operatorname{Scal})^*_b$ and is independent of the chart at infinity. It is called the
\emph{mass of the asymptotically hyperbolic manifold} $(M,g)$.
\end{defi}
Existence and invariance of the mass can be proven by using Michel's approach \cite{michel_geometric-invariance}.
The space $\mathcal{K} = \ker (D\operatorname{Scal})^*_b$ consists of functions $V$ solutions of $\operatorname{Hess}^b V = V\, b$.
It is $(n+1)$-dimensional and is generated, in the coordinates above, by the functions $V^{(0)}=\cosh r$,
$V^{(\alpha)}=x^{\alpha}\sinh r$ (for $\alpha\in\{1,\dots,n\}$),
where $(x^{\alpha}) = (x^{1},\dots,x^n)$ are the Euclidean coordinates on the unit sphere induced by the standard embedding
$S^{n-1} \subset \mathbb{R}^n$.
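The equation $\operatorname{Hess}^b V = V\,b$ can be checked numerically in the two-dimensional model $b = dr^2 + \sinh^2 r\, d\theta^2$, whose only non-zero Christoffel symbols are $\Gamma^r_{\theta\theta} = -\sinh r\cosh r$ and $\Gamma^\theta_{r\theta} = \coth r$. The sketch below (helper names are ours) verifies it for $V^{(0)} = \cosh r$ and $V^{(1)} = \cos\theta\,\sinh r$ at an arbitrary point:

```python
import math

def partials(V, r, t, h=1e-5):
    """First and second partial derivatives of V(r, theta) by central differences."""
    Vr  = (V(r + h, t) - V(r - h, t)) / (2 * h)
    Vt  = (V(r, t + h) - V(r, t - h)) / (2 * h)
    Vrr = (V(r + h, t) - 2 * V(r, t) + V(r - h, t)) / h ** 2
    Vtt = (V(r, t + h) - 2 * V(r, t) + V(r, t - h)) / h ** 2
    Vrt = (V(r + h, t + h) - V(r + h, t - h)
           - V(r - h, t + h) + V(r - h, t - h)) / (4 * h ** 2)
    return Vr, Vt, Vrr, Vtt, Vrt

def hess_residual(V, r, t):
    """Components of Hess^b V - V b for b = dr^2 + sinh(r)^2 dtheta^2."""
    G_r_tt = -math.sinh(r) * math.cosh(r)  # Gamma^r_{theta theta}
    G_t_rt = math.cosh(r) / math.sinh(r)   # Gamma^theta_{r theta}
    Vr, Vt, Vrr, Vtt, Vrt = partials(V, r, t)
    v = V(r, t)
    return (Vrr - v,                                    # (r, r) component
            Vtt - G_r_tt * Vr - v * math.sinh(r) ** 2,  # (theta, theta)
            Vrt - G_t_rt * Vt)                          # (r, theta)

V0 = lambda r, t: math.cosh(r)                # V^(0)
V1 = lambda r, t: math.cos(t) * math.sinh(r)  # V^(1), with x^1 = cos(theta)

for name, V in (("V0", V0), ("V1", V1)):
    res = hess_residual(V, 1.3, 0.7)
    print(name, max(abs(e) for e in res) < 1e-4)  # -> V0 True, then V1 True
```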
Contrary to the asymptotically flat case, the center of mass is already included here and does not need to
be defined independently. Indeed, the space $\mathcal{K}$ is an irreducible representation of $O_0(n,1)$ (the isometry group
of the hyperbolic space), so that
all functions $V$ contribute to the single (vector-valued) invariant $\mathbf{M}(g)$. In the asymptotically flat case,
this kernel splits into a trivial $1$-dimensional representation
(the constant functions) which gives rise to the mass, and the standard representation of $\mathbb{R}^n\rtimes O(n)$ on
$\mathbb{R}^n$ (the linear functions), which gives rise to the center of mass.
The hyperbolic conformal Killing fields are the same as those of the Euclidean space, but their divergences must now be
computed with respect to the hyperbolic metric. In the ball model of the hyperbolic space, one computes that
$\delta^bX^{(0)} = -nV^{(0)}$ for the radial dilation vector field $X^{(0)}$, whereas $\delta^bX^{(\alpha)} = -nV^{(\alpha)}$
for the (inverted) translation fields. We can now argue as above, but starting with the modified Einstein tensor
$$\tilde{G}^g \ = \ \Ric^g \, - \, \frac12 \operatorname{Scal}^g g \, - \, \frac{(n-1)(n-2)}{2} g \ . $$
The formula analogous to that of Lemma \ref{lemma2.1} reads
$$ \int_{\partial \Omega} \tilde{G}^g(X,\nu)\, d\!\operatorname{vol}^g_{\partial \Omega}
\ = \ \frac{n-2}{2n} \int_{\Omega} \left(\operatorname{Scal}^g + \, n(n-1)\right)\,\delta^g X\, d\!\operatorname{vol}^g_{\Omega} \ , $$
which is the expected expression to apply Michel's approach for the mass. The argument is now completely similar to
the one given above, and we will skip the details. One concludes with the following alternative definition of the mass
involving the Ricci tensor:
\begin{theo}
For any $i\in\{0,...,n\}$,
$$
\ \mathbf{M}(g)\left[V^{(i)}\right] \
= - \frac{1}{n} \,\mathbf{M}(g)\left[\delta^bX^{(i)}\right] \
= \ - \frac{1}{(n-1)(n-2)\omega_{n-1}} \lim_{r\to\infty} \int_{S_r} \tilde{G}^g(X^{(i)},\nu)\, d\!\operatorname{vol}_{S_r}.
$$
\end{theo}
\smallskip
{\flushleft\textsc{Acknowledgements}. The author thanks Piotr Chru\'sciel for useful comments.}
\bigskip
\bibliographystyle{smfplain}
\section{Introduction}
\subsection{Preliminaries}
\IEEEPARstart{W}{eather} has affected people's daily lives and shaped their decisions since the beginning of humanity. The weather molds the flow of events and creates a snowballing effect on real-life systems such as agriculture, tourism, traffic, flight navigation, and energy consumption. In these domains, data scientists try to predict their system's next state, for which the weather is a critical piece of information. With the success of modelling spatio-temporal series \cite{6826537, 5989832, 8352745, 7399418, 7274747, 8101019}, data scientists are creating new deep-learning-based solutions to weather forecasting, which have many advantages over traditional methods. Compared to physical models for \textit{Numerical Weather Prediction} (NWP), deep learning models can provide results within minutes of receiving data, exploit the big data aggregated over years, and make accurate predictions with less costly models. This paper introduces a model architecture for NWP that combines multiple spatio-temporal data sources with an attention mechanism and uses \textit{Convolutional Long-short Term Memory} (ConvLSTM) \cite{convlstm} cells as building blocks to capture spatial and temporal correlations. Our model can select relevant cells in multiple spatio-temporal series to make long-term spatial predictions, preserving long-term dependencies with the attention and context matcher mechanisms. Further, we give a weather prediction pipeline for short-, middle-, and long-term predictions; our code and data are available at https://github.com/sftekin/ieee\_weather.
Today's traditional NWP methods model atmospheric events using physics, an approach first introduced in the early twentieth century \cite{diff}. NWP models simulate the underlying physics of the atmosphere by solving non-linear partial differential equations at millions of points per time stamp, providing reliable forecasts. Daily NWP simulations can forecast minutes to weeks ahead for regions with meter to kilometer resolutions \cite{Rodrigues2018}. However, numerical models are getting more complex, and their demand for high computation power increases day by day. Obtaining results from these models can take hours, limiting their ability to provide actionable predictions. For example, the Turkish State Meteorological Service uses supercomputers to upsample large-scale NWPs such as \cite{wrf, ecmwf, arpege, alora} to local regions. With deep learning models, however, one can obtain local predictions with fewer computations.
NWP is a multiple-time-series problem, and classical ML algorithms have shown great success in time series applications. In \cite{Dimoulkas2019}, the authors successfully train Neural Networks (NNs) for energy forecasting of each zone. Likewise, the authors of \cite{Cai2019} formulated the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) in multistep and recursive manners and showed great success compared to statistical models. In the more challenging task of forecasting non-stationary time series data, the authors of \cite{Smyl2020} combine statistical methods and deep learning models to forecast the M4 dataset. Moreover, with the development of new methods for time series prediction, one can combine multiple time series with attention mechanisms \cite{dual_attn}, or perform dilated causal convolutions in time \cite{Bai2018, Oord2016}, to generate accurate long-term predictions. However, all of these methods focus on the next value of a single target series. In \cite{Sen2019}, the authors predict multiple time series using covariates of multiple input series. In spatio-temporal time series prediction, however, there are multiple series to predict, which have both temporal and spatial covariances.
Furthermore, previous studies on spatio-temporal series \cite{trajgru, google_ai, optical}, including ConvLSTMs, do not exploit the numerous datasets available for weather, such as satellite images, numerical grid values, and observations at meteorological stations. These works have focused on the precipitation nowcasting problem, which aims to give a precise and timely prediction of rainfall intensity in a local region over a relatively short period \cite{convlstm}. The inputs and outputs are sequences of radar images, which makes it an image translation problem rather than a time series problem, and these works focus on a single source of data. The weather, however, is a chaotic system affected by many parameters, e.g., vegetation, geographical contour, and human factors. Thus, such limited information is not enough to model this chaotic behaviour. Besides, \cite{trajgru, google_ai, optical, convlstm} made accurate short-term predictions, yet their performance deteriorates rapidly as the output sequence length grows.
Additionally, we show that classical side-information integration in computer vision \cite{DeVries2017, Perez2018, Wang2018} or in spatio-temporal forecasting \cite{Wang2018, Liang2018} does not apply to NWP with ConvLSTMs. We confirm that any unnormalized non-positive multiplication operation, including creating embeddings with NNs or CNNs, creates perturbations in the natural flow, which changes both spatial and temporal covariances. Moreover, we show that the natural flows are preserved with the convolutional attention mechanism.
To this end, we introduce an attention-based ConvLSTM encoder-decoder architecture with a context matcher for spatio-temporal forecasting, as shown in Figure \ref{fig:model_graph}. The input sequences first pass through the attention mechanism, where the model selects relevant driving series to make predictions by referring to the past encoder hidden state at each time step. The encoder block consists of stacked ConvLSTM units, which encode the input sequence of data into hidden states. In the second stage, the context matcher mechanism matches the decoder's hidden states by summing the encoder's hidden states across all time steps, carrying the long-term dependencies by increasing the length of the gradient flows. Each ConvLSTM unit in the decoder block decodes the information passed from the hidden layers of the encoder. The output of the decoder then passes through the output convolutions to produce predictions. The architecture provides selection over the input time series based on extracted spatial features and captures long-term dependencies. We call the architecture the \textit{Weather Model}. We demonstrate significant performance gains through an extensive set of experiments compared to conventional methods and show that the Weather Model is easy to interpret.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{figures/new_en_de}
\caption{Graph of the \textit{Weather Model}. Each time instance of the spatio-temporal input series $\mathbf{X}_t=\{\mathbf{X}^1_t, \cdots, \mathbf{X}^i_t, \cdots, \mathbf{X}^d_t\}$ is first multiplied by the attention matrices $\mathbf{{A}}_t=(\mathbf{{A}}_{t}^{1}, \cdots, \mathbf{{A}}_{t}^{i}, \cdots, \mathbf{{A}}_{t}^{d})$ to create $\mathbf{\tilde{X}}_t=(\mathbf{X}^{1}_{t}\cdot \mathbf{{A}}_{t}^{1}, \cdots, \mathbf{X}^{i}_{t} \cdot \mathbf{{A}}_{t}^{i}, \cdots, \mathbf{X}^{d}_{t} \cdot\mathbf{{A}}_{t}^{d})$. The attention mechanism computes the attention weights $\mathbf{{A}}_{t}^{i}$ for each extracted feature, conditioned on the first layer of the previous hidden state $\mathbf{H}_{t-1}^1$ of the encoder. The weighted inputs for each time step, $\mathbf{\tilde{X}}_t$, feed the ConvLSTM unit of the first layer of the encoder, where $\mathbf{\tilde{X}}_{t}^{i}\in\mathbb{R}^{M \times N \times d}$. After the input sequence has passed through the encoder layers, the encoder's hidden states, $\mathbf{H}=({\mathbf{H}^1, \cdots, \mathbf{H}^k, \cdots, \mathbf{H}^K})$, pass to the Context Matcher, where each $\mathbf{H}^k = (\mathbf{H}^k_1, \cdots, \mathbf{H}^k_t, \cdots, \mathbf{H}^k_{T_{\mathrm{in}}})$. The Context Matcher sums the hidden states in the time dimension and reverses the layer order of the states to create $\mathbf{D}_{t-1} = (\mathbf{D}^1_{t-1}, \cdots, \mathbf{D}^k_{t-1}, \cdots, \mathbf{D}^K_{t-1})$. The final output of the last decoder layer, $\mathbf{D}^K_{t}$, passes to the output convolutions to create the prediction $\mathbf{\hat{Y}}_{t+1}$.}
\label{fig:model_graph}
\end{figure*}
\subsection{Prior Art and Comparisions}
The application of deep learning methods to weather forecasting is an extensively studied area. We can divide the works into two groups: those that predict a single time series and those that predict multiple time series.
For single time-series forecasting, \cite{liu2015deep} leveraged a massive volume of weather data to train an auto-encoder with a Deep Neural Network (DNN) to simulate hourly weather data and learn features. \cite{Hossain2015} improved this approach by introducing stacked auto-encoders for temperature forecasting with inputs corrupted by noise. A review \cite{Salman2016} compared the performances of NN, CNN, and Conditional Restricted Boltzmann Machine models. Further, \cite{Lee2018} implemented a simple NN using different climate indices from different stations and historical rainfall data to forecast rainfall. These works showed the high performance of simple deep learning architectures when used with a high volume of data. For severe convective weather event prediction, \cite{Zhou2019} showed the success of deep CNNs in classifying atmospheric data from multiple pressure levels. Moreover, the Recurrent Neural Network (RNN) \cite{recurrent} and Long-short Term Memory (LSTM) \cite{lstm_sequence} have been applied to weather forecasting, and \cite{Poornima2019} addressed the vanishing gradient problem by implementing an intensified LSTM architecture for rainfall forecasting. \cite{Wang2019} introduced deep uncertainty quantification, which uses an RNN-based architecture for single-value forecasting and uncertainty quantification. Recently, \cite{Hewage2020} used a Temporal Convolutional Network (TCN) for multiple-output and single-output weather predictions and showed better results than a physical NWP model, WRF \cite{wrf}. However, this approach is costly, does not incorporate multiple features or spatial covariances, and performs well only for multiple-input, single-output models.
We categorize the works on video forecasting and precipitation nowcasting as multiple time-series prediction. \cite{convlstm} introduced the ConvLSTM model with the forecasting network, which consists of an encoder and a decoder built from stacked ConvLSTMs. This model can capture spatio-temporal covariances and showed high performance on the moving MNIST dataset and in predicting a region's rainfall intensity. Also, \cite{Kim2019} used this architecture to track hurricanes. Later, \cite{trajgru} introduced the Trajectory Gated Recurrent Unit (TrajGRU) to learn location-invariant movements between input radar images and showed higher performance compared to \cite{convlstm} and optical flow networks \cite{optical}. However, the proposed network architectures in \cite{convlstm} and \cite{trajgru} must take a one-dimensional tensor to predict a one-dimensional output. Further, \cite{Tran2019} improved the forecasting architecture with downsampling and upsampling layers between the TrajGRU units and the output convolutions, which omits the problem of multidimensional input. \cite{Heye2017} introduced the recursive prediction of the decoder in the forecasting architecture. To remedy this architecture's blurred outputs caused by MSE, \cite{Jing2019} implemented a Generative Adversarial Neural Network with a different loss. Besides, U-Net showed high performance on precipitation nowcasting in \cite{Agrawal2019}, which is comparable with ConvLSTMs.
Attention mechanisms in the spatio-temporal domain have been studied in various tasks. For human action recognition, \cite{Song2017} developed temporal and spatial attention mechanisms with fully connected NNs and LSTMs. Likewise, \cite{dual_attn} implemented a dual attention mechanism for time-series forecasting. \cite{Liang2018} implemented a multi-level attention mechanism fusing side information with multiple geo-sensory time-series data to forecast air quality. For traffic prediction, \cite{Yao2019} implemented an attention mechanism on the outputs of LSTMs that take inputs from different past periods. Although these works successfully implement attention mechanisms, their inputs are single or multiple time series rather than 3D tensors. The authors of \cite{Zhang2018} implement an attention mechanism to further improve ConvLSTM, which takes 3D tensors as input; however, the performance increased only scarcely. Recently, \cite{Abdellaoui2020} compared a CNN followed by an LSTM (Conv + LSTM) with ConvLSTM in temporally unrolled and temporally concatenated architectures. They observed the loss change according to different inputs, e.g., features and locations, to interpret the model decisions. Compared to Conv + LSTM, ConvLSTM showed higher performance. However, they obtained interpretable results only for the Conv + LSTM model, and their interpretation is at the feature and location level rather than at the level of the model's activations.
Finally, \cite{Rasp2020} introduced a benchmark dataset, ERA5 \cite{Hersbach2020}, to predict global weather patterns days in advance. They provide baseline scores from simple linear regression techniques, deep learning models, and purely physical forecasting models.
\subsection{Contributions}
Our main contributions are as follows:
\begin{itemize}
\item We introduce an attention-based recurrent network architecture for spatio-temporal series, with convolution operations in the attention layers and ConvLSTMs as memory units. With this framework, we can improve the long-term predictions of the encoder-decoder structure of ConvLSTMs and learn spatial correlations better.
\item To the best of our knowledge, for the first time in the literature, we apply an attention-based ConvLSTM network to spatio-temporal weather forecasting. This model architecture allows us to use multiple sources of side information and produce refined predictions with high interpretability.
\item We demonstrate, using flow vectors, the performance decrease of ConvLSTMs under traditional side-information integration methods.
\item We present a pipeline for numerical weather forecasting. Even if the input series have different spatial and temporal resolutions, we can produce highly accurate predictions. Our code and the data are publicly available.
\item We show that our framework can be used to combine different forecasts, which can be the results of NWP models, and produce refined predictions as a mixture of experts.
\item Through an extensive set of experiments over real datasets, we demonstrate that our method brings significant improvements in both long- and short-term predictions compared to state-of-the-art methods.
\end{itemize}{}
\subsection{Organization}
The remainder of the paper is organized as follows. In Section II, we define spatio-temporal weather forecasting and introduce the interpolation method for the single time series of information. Next, we describe the model structure step by step in Section III. Then, in Section III-A, we describe the ConvLSTM unit and how it captures the spatio-temporal relationships in the data. We explain the encoder and the attention mechanism in Section III-B, and introduce the context matcher mechanism in Section III-C. Additionally, we describe recursive prediction with the Weather Model in the same section. In Section IV, we analyze our model's performance over real datasets and compare it with the baseline methods. In the same section, we also provide some exploratory data analysis of our dataset. Lastly, we interpret Section IV's results and conclude the paper in Section V with several remarks.
\section{Problem Description}
In this paper, all vectors are column vectors and denoted by boldface lower-case letters. Matrices and tensors are represented by boldface upper-case letters. For a vector $\mathbf{a}$, $\mathbf{a}^T$ is the ordinary transpose and $||\mathbf{a}|| = \sqrt{\mathbf{a}^T\mathbf{a}}$ is the $l^2$-norm. Time indices are given as subscripts and superscripts refer to other signals, e.g., $\mathbf{X}_{i}^{k}$ denotes the $k^{th}$ input signal at the $i^{th}$ time step. Concatenation of two vectors is shown with brackets and a semicolon, e.g., $[\mathbf{h}_t;\mathbf{s}_t]$.
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/tmnd}
\caption{An example of a spatio-temporal input sequence, where $T$ defines the temporal window. At each time step $t$ there is a spatial grid with an area of $xy$. Each cell represents the value of a region at time $t$.}
\label{fig:tmn}
\end{subfigure}
\hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/station}
\caption{Illustration of weather data coming from stations located as points inside the grid. This information is known as observational data. Meteorologists use this information for data assimilation in NWP models. In this paper, we interpolated these values to the target grid and used them as one of the sources of spatio-temporal input series.}
\label{fig:station}
\end{subfigure}
\caption{Illustrations of two type of meteorology data that is used in this paper.}
\label{fig:data_visual}
\end{figure}
This paper's primary goal is to forecast numeric weather data using multiple sources of historical numeric weather data. Mathematically, the problem can be defined as follows: multiple spatio-temporal series $\{\mathbf{X}^1, \cdots, \mathbf{X}^k, \cdots, \mathbf{X}^d\}$ enter the model, where $d$ is the number of series and each $\mathbf{X}^k \in \mathbb{R}^{M \times N \times T}$, where $M$ and $N$ define the spatial dimensions and $T$ defines the temporal dimension. An example spatio-temporal input is shown in Figure \ref{fig:tmn}. $\mathbf{X}_{i}^{k}$ denotes the input series coming from source $k$ at time $i$, where $i\in\{t-T_{\mathrm{in}}, \cdots, t-T_{\mathrm{in}}+j, \cdots, t\}$. Likewise, the output of the model can be described as $\mathbf{Y}_i$ where $i\in\{t+j, \cdots, t+T_{\mathrm{out}}\}$. Each advance in the time step $t$ is denoted by $j$. Input series have the same frequencies, i.e., input series will have $j = a \cdot f$ where $f$ is the frequency of the time series and $f, a \in \mathbb{Z}$. $T_{\mathrm{in}}$ and $T_{\mathrm{out}}$ denote the input and output temporal window sizes. Similarly, the dimension of the output tensor can be defined as $\mathbf{Y} \in \mathbb{R}^{M \times N \times T}$ and, for any target value at time $t$, $\mathbf{Y}_t \in \mathbb{R}^{M \times N}$.
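To make the tensor shapes concrete, the following sketch (with hypothetical sizes, and arbitrarily taking the first source as the target variable purely for illustration) builds one input-output sample from $d$ spatio-temporal series:

```python
import numpy as np

# Hypothetical sizes: d sources on an M x N grid over L time steps.
M, N, L, d = 8, 10, 100, 3
T_in, T_out = 10, 5

series = [np.random.rand(M, N, L) for _ in range(d)]  # X^1, ..., X^d

def make_sample(t):
    """Inputs X_{t-T_in+1}, ..., X_t from every source; targets Y_{t+1}, ..., Y_{t+T_out}."""
    X = np.stack([s[..., t - T_in + 1:t + 1] for s in series], axis=0)  # (d, M, N, T_in)
    Y = series[0][..., t + 1:t + 1 + T_out]  # (M, N, T_out); first source as target
    return X, Y

X, Y = make_sample(t=20)
print(X.shape, Y.shape)  # -> (3, 8, 10, 10) (8, 10, 5)
```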
Another source of information is the observations coming from ground stations. The observations from station $i$ form a time series and can be represented as $\mathbf{x}^i \in \mathbb{R}^{T}$. Since our target series are spatio-temporal, we interpolated this information to the cells of the target grid by Inverse Distance Weighting (IDW) \cite{Lu2008}. As shown in Figure \ref{fig:station}, there are multiple stations located in different cells, and we can represent the known values from the stations as:
\begin{equation*}
\{(\mathbf{x}^1, \mathbf{u}^1), (\mathbf{x}^2, \mathbf{u}^2), \cdots, (\mathbf{x}^n, \mathbf{u}^n)\}
\end{equation*}
where, $\mathbf{x}^i$ is the known value for the location $\mathbf{u}^i \in \mathbb{R}^{2}$ and $i=1,2,\cdots,n$. The interpolated features for time $t$, can be represented as $\hat{x}_t=f(\mathbf{\hat{u}})$, where $\hat{x}_t$ is the cell's interpolated value, $\mathbf{\hat{u}}$ is the cell's location, and $f(.)$ is the IDW defined as,
\begin{equation}
f(\mathbf{\hat{u}}) = \begin{cases}
\frac{\sum_{i=1}^{n} w_i(\mathbf{\hat{u}})x^i_t}{\sum_{i=1}^{n} w_i(\mathbf{\hat{u}})} & d(\mathbf{\hat{u}}, \mathbf{{u}}_i)\neq 0 \\
x^i_t, & d(\mathbf{\hat{u}}, \mathbf{{u}}_i)= 0
\end{cases}
\label{eq:idw}
\end{equation}
where the weight equation is:
\begin{equation*}
w_i(\mathbf{\hat{u}}) = \frac{1}{d(\mathbf{\hat{u}}, \mathbf{{u}}_i)^p}
\end{equation*}
where $d(.)$ is the Haversine distance and $p$ is the power value that determines the strength of the interpolation. If the $p$ value is too low, every cell approaches the mean of the station values, and if it is too high, each cell is dominated only by its nearest stations. Thus, it is a critical parameter to select, and we implemented cross-validation to find the $p$ value for each $t$. Algorithm \ref{alg:idw} shows the implementation of \eqref{eq:idw} with leave-one-out cross-validation for each time step.
\begin{algorithm}
\begin{algorithmic}
\STATE $\mathbf{X}^{\mathrm{all}}\gets \mathrm{all\_station\_data}$
\STATE $\mathbf{u}\gets \mathrm{station\_locations}$
\STATE $\hat{\mathbf{u}}\gets \mathrm{cell\_locations}$
\STATE $\mathbf{D}\gets \mathrm{calculate\_distance\_matrix}(u, \hat{u})$
\STATE $\mathbf{P}\gets \mathbf{0}$
\STATE $t\gets 0$
\STATE $j\gets 0$
\FOR{$t \leq L$}
\FOR{$j \leq d$}
\STATE $\mathbf{x}_{t,j}\gets \mathbf{X}^{\mathrm{all}}_{t, j}$
\STATE $rmse \gets \{\}$
\FOR{$p \in power\_values$}
\STATE $error \gets 0$
\STATE $i \gets 0$
\FOR{$i \leq n$}
\STATE $x_{\mathrm{validation}} \gets \mathbf{x}_{t,j,i}$
\STATE $\mathbf{x}_{\mathrm{train}} \gets \mathbf{x}_{t,j,i'}$
\STATE $\mathbf{d}_{\mathrm{train}} \gets \mathbf{D}_{i}$
\STATE $\mathbf{w} \gets \mathbf{d}_{\mathrm{train}}^{-p}$
\STATE $\hat{x}\gets (\mathbf{x}_{\mathrm{train}})^T \mathbf{w} / sum(\mathbf{w})$
\STATE $error \gets error + (x_{\mathrm{validation}} - \hat{x})^2$
\ENDFOR
\STATE $rmse.\mathrm{insert}(error / n) $
\ENDFOR
\STATE $k \gets argmin(rmse)$
\STATE $\mathbf{P}_{t,j} \gets power\_values_k$
\ENDFOR
\ENDFOR
\caption{IDW Algorithm}
\label{alg:idw}
\end{algorithmic}
\end{algorithm}
The distance matrix $\mathbf{D}\in\mathbb{R}^{MN\times n}$ is the collection of distances of the $n$ stations to the centers of the cells in the target grid. The power matrix $\mathbf{P}\in\mathbb{R}^{T\times d}$ is the collection of $p$ values for each time step and feature. Note that the power operation in $\mathbf{d}_{\mathrm{train}}^{-p}$ is elementwise.
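A minimal Python sketch of Algorithm \ref{alg:idw} for a single time step and feature. The station coordinates, cell coordinates, and function names below are hypothetical; the case of a cell center coinciding exactly with a station, handled separately in \eqref{eq:idw}, is assumed not to occur.

```python
import numpy as np

def haversine(u, v):
    """Great-circle distance (km) between two (lat, lon) points in degrees."""
    lat1, lon1 = np.radians(u)
    lat2, lon2 = np.radians(v)
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def idw_loo(x, station_locs, cell_locs, powers=(1.0, 2.0, 3.0, 4.0)):
    """Interpolate station values x to cell centers, picking the IDW power p
    by leave-one-out cross-validation (one time step, one feature)."""
    n = len(x)
    D_cs = np.array([[haversine(c, s) for s in station_locs] for c in cell_locs])
    D_ss = np.array([[haversine(a, s) for s in station_locs] for a in station_locs])
    best_p, best_err = powers[0], np.inf
    for p in powers:
        err = 0.0
        for i in range(n):
            w = np.delete(D_ss[i], i) ** (-p)   # leave station i out
            x_hat = np.delete(x, i) @ w / w.sum()
            err += (x[i] - x_hat) ** 2
        if err < best_err:
            best_p, best_err = p, err
    W = D_cs ** (-best_p)          # assumes no cell center coincides with a station
    return W @ x / W.sum(axis=1), best_p

stations = [(41.0, 29.0), (41.2, 29.3), (40.8, 28.9), (41.1, 28.7)]
values = np.array([12.0, 14.5, 11.0, 13.2])
cells = [(41.05, 29.05), (40.95, 29.15)]
interp, p = idw_loo(values, stations, cells)
print(values.min() <= interp.min() and interp.max() <= values.max())  # -> True
```

Since the weights are positive, each interpolated value is a convex combination of the station values and therefore stays within their range, which the final check confirms.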
After this process we can define our dataset as follows:
\begin{equation*}
\mathcal{D}:\{\{\mathbf{X}_{i}^{1}, \cdots, \mathbf{X}_{i}^{d}\}_{i=t-T_{\mathrm{in}}}^{t}, \{\mathbf{Y}_j\}_{j=t+1}^{t+T_{\mathrm{out}}}\}_{t=T_{\mathrm{in}}}^{L-T_{\mathrm{out}}}
\end{equation*}
where $L$ is the total number of time steps. We divide the data into different batches, where each batch is,
\begin{align*}
& \mathcal{B}_j =\{\{\mathbf{X}_{i}^{1}, \cdots, \mathbf{X}_{i}^{d}\}_{i=t-T_{\mathrm{in}}}^{t}, \{\mathbf{Y}_j\}_{j=t+1}^{t+T_{\mathrm{out}}}\}_{t=T_{\mathrm{in}}}^{L-T_{\mathrm{out}}}, \\
& j \in \{0, \cdots, b\}, t \in \{0, \cdots, L\}
\end{align*}
where $b$ is the batch size and $\mathcal{D}=\cup_{j=0}^{b}\mathcal{B}_j$; note that possibly $\mathcal{B}_i \cap \mathcal{B}_j \neq \emptyset$ for $i \neq j$, since batches can include the same elements of the dataset.
Input series have different units and scales, which requires normalization. However, the dataset is massive, and normalizing it at once would require a high RAM capacity. Thus, we apply min-max normalization to the batches in an adaptive manner:
\begin{align*}
& \mathbf{X}^{k} = \frac{\mathbf{X^k} - min(\mathbf{X^k})}{max(\mathbf{X^k}) - min(\mathbf{X^k})}, \\
& \mathbf{X}^k = \{\mathbf{X}^k_1,\cdots, \mathbf{X}^k_{T}\}, \mathbf{X}^k \in \mathcal{B}_j, \\
& k\in\{1, \dots, d\}, \ T = T_{\mathrm{in}} + T_{\mathrm{out}}.
\end{align*}
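The per-batch min-max normalization above can be sketched for a single feature window as follows (plain Python; the guard for constant series is our own assumption, not specified in the text):

```python
def min_max_normalize(series):
    """Normalize one feature's window to [0, 1] within a batch."""
    lo, hi = min(series), max(series)
    if hi == lo:  # assumed behaviour for a constant window: map to zeros
        return [0.0 for _ in series]
    return [(x - lo) / (hi - lo) for x in series]
```

Because the minimum and maximum are taken per batch rather than over the full dataset, only one batch needs to be held in memory at a time.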
For each experiment, the dataset is divided into three parts: $\mathcal{D}_{\mathrm{train}}$, $\mathcal{D}_{\mathrm{validation}}$ and $\mathcal{D}_{\mathrm{test}}$. In numerical weather prediction the target value $\mathbf{Y}_j$ is continuous; thus, our task is regression. In addition, since we have targets for every input sample, learning is supervised, and we suffer the mean squared error,
\begin{equation}
\mathcal{L}=\frac{1}{T_{\mathrm{out}}MN}\sum_{t=1}^{T_{\mathrm{out}}}\sum_{k=1}^{M}\sum_{l=1}^{N} (\mathbf{y}_{t,k,l}-\hat{\mathbf{y}}_{t,k,l})^2
\label{mse_loss}
\end{equation}
where $\hat{\mathbf{y}}$ represents the predicted cell value and $\mathbf{y}$ is the ground truth. The loss function is optimized with Adam \cite{adam} using batches of training data.
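A direct transcription of this grid MSE loss for nested-list grids might look as follows (illustrative plain Python; a practical implementation would vectorize this):

```python
def grid_mse(y_true, y_pred):
    """Mean squared error over T_out x M x N grids given as nested lists."""
    t_out, m, n = len(y_true), len(y_true[0]), len(y_true[0][0])
    total = 0.0
    for t in range(t_out):
        for k in range(m):
            for l in range(n):
                total += (y_true[t][k][l] - y_pred[t][k][l]) ** 2
    return total / (t_out * m * n)
```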
\section{The Weather Model}
In this section, we describe the developed model architecture of the weather model by referring to every step and formulation shown in Figure \ref{fig:model_graph}.
\subsection{Insights into Convolutional LSTMs}
One of the key temporal modelling elements in deep learning is the RNN \cite{elman}. RNNs carry information through time steps in their memory, i.e., the hidden state. This allows RNNs to store temporal information between samples of the input sequence and makes them successful in time series prediction tasks. An RNN unit consists of a hidden state $\mathbf{h}$ and an output $\mathbf{y}$, and operates on variable-length data sequences $\mathbf{x}=(\mathbf{x}_1, \cdots, \mathbf{x}_T)$, $\mathbf{x}_t \in \mathbb{R}^{d}$, where $T$ and $d$ refer to the temporal window \cite{translation} and the input dimension, respectively. At each time step $t$, the hidden state is updated by,
\begin{equation*}
\mathbf{h}_t = f(\mathbf{h}_{t-1}, \mathbf{x}_t)
\end{equation*}
where $f$ is a non-linear activation function, e.g., $\mathrm{tanh}$ or $\sigma$. For the same input $\mathbf{x}$, the update equations of an Elman network \cite{elman} are as follows:
\begin{align*}
&\mathbf{h}_t = f_h(\mathbf{W}_{h} \mathbf{x}_t + \mathbf{U}_h \mathbf{h}_{t-1} + \mathbf{b}_h),&\\
&\mathbf{y}_t = f_y(\mathbf{W}_{y} \mathbf{h}_t + \mathbf{b}_y)&
\end{align*}
where $\mathbf{W}_h$, $\mathbf{U}_h$, $\mathbf{W}_y$, $\mathbf{b}_h$ and $\mathbf{b}_y$ represent the trainable weights of the network and $f_h$, $f_y$ are the non-linear activation functions. Since RNNs have only one state $\mathbf{h}_t$ to store the temporal information, they have difficulty capturing long-term dependencies \cite{lstm}. Moreover, RNNs suffer from the vanishing gradients problem \cite{dual_attn}.
To remedy the problems of RNNs, LSTMs were introduced \cite{lstm}, which are more successful in modeling long-term dependencies. A standard fully connected LSTM has a memory cell, which accumulates the state information $\mathbf{s}_t$ and is updated at every time step $t$. At every new input, the cell is updated by three sigmoid gates: the forget gate $\mathbf{f}_t$, the input gate $\mathbf{i}_t$ and the output gate $\mathbf{o}_t$. If the input gate is activated, the input will be accumulated into the cell. If the forget gate is activated, the past cell state $\mathbf{s}_{t-1}$ will be forgotten. The output gate controls whether the cell output $\mathbf{s}_t$ will be propagated to the final state $\mathbf{h}_t$ \cite{convlstm}. The cell state allows the network to keep the gradient in the cell and preserve long-term dependencies. The update equations for the gates in an LSTM are as follows:
\begin{align*}
&\mathbf{i}_t = \sigma(\mathbf{W}_i[\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_i),&\\
&\mathbf{f}_t = \sigma(\mathbf{W}_f[\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_f),&\\
&\mathbf{o}_t = \sigma(\mathbf{W}_o[\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_o),&\\
&\Tilde{\mathbf{s}}_t = \mathrm{tanh}(\mathbf{W}_s[\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_s),&\\
&\mathbf{s}_t = \mathbf{f}_t \odot \mathbf{s}_{t-1} + \mathbf{i}_t \odot \Tilde{\mathbf{s}}_{t},&\\
&\mathbf{h}_t = \mathbf{o}_t \odot \mathrm{tanh}(\mathbf{s}_t)&
\end{align*}
where $\sigma$ and $\odot$ denote the logistic sigmoid function and elementwise multiplication, respectively.
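For concreteness, a scalar (single-unit) version of these LSTM update equations can be sketched as follows; the dictionary-based weight layout is purely illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, s_prev, W):
    """One scalar LSTM step; W maps gate name -> (w_h, w_x, b)."""
    def gate(name, act):
        w_h, w_x, b = W[name]
        return act(w_h * h_prev + w_x * x + b)
    i = gate("i", sigmoid)        # input gate
    f = gate("f", sigmoid)        # forget gate
    o = gate("o", sigmoid)        # output gate
    s_tilde = gate("s", math.tanh)  # candidate cell state
    s = f * s_prev + i * s_tilde
    h = o * math.tanh(s)
    return h, s
```

In the vector case each gate applies a weight matrix to the concatenation $[\mathbf{h}_{t-1}; \mathbf{x}_t]$, but the gating logic is identical.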
ConvLSTM improves this setup by introducing convolutions to the input-to-state and the state-to-state transitions and captures spatial correlations. Each ConvLSTM unit has inputs $\mathbf{X}_1, \cdots, \mathbf{X}_t$, cell outputs $\mathbf{S}_1, \cdots, \mathbf{S}_t$, hidden states $\mathbf{H}_1, \cdots, \mathbf{H}_t$ and gates $\mathbf{i}_t$, $\mathbf{f}_t$, $\mathbf{o}_t$ in 3D tensor format. The update equations for the gates in ConvLSTM are as follows:
\begin{align*}
&\mathbf{i}_t = \sigma(\mathbf{W}_{xi} \ast \mathbf{X}_t + \mathbf{W}_{hi} \ast \mathbf{H}_{t-1} + \mathbf{b}_i),\\
&\mathbf{f}_t = \sigma(\mathbf{W}_{xf} \ast \mathbf{X}_t + \mathbf{W}_{hf} \ast \mathbf{H}_{t-1} + \mathbf{b}_f),\\
&\mathbf{o}_t = \sigma(\mathbf{W}_{xo} \ast \mathbf{X}_t + \mathbf{W}_{ho} \ast \mathbf{H}_{t-1} + \mathbf{b}_o),\\
&\Tilde{\mathbf{S}}_t = \mathrm{tanh}(\mathbf{W}_{xs} \ast \mathbf{X}_t + \mathbf{W}_{hs} \ast \mathbf{H}_{t-1} + \mathbf{b}_s),\\
&\mathbf{S}_t = \mathbf{f}_t \odot \mathbf{S}_{t-1} + \mathbf{i}_t \odot \Tilde{\mathbf{S}}_{t},\\
&\mathbf{H}_t = \mathbf{o}_t \odot \mathrm{tanh}(\mathbf{S}_t)
\end{align*}
where $\ast$ is the 2D convolution operator. Recall that the input $\mathbf{X}_t \in \mathbb{R}^{M \times N \times n}$ and $\mathbf{H}_t, \mathbf{S}_t \in \mathbb{R}^{M \times N \times m}$, where $m$ is the number of hidden dimensions and $M$, $N$ are the height and width of the grid, respectively. The convolution kernels $\mathbf{W}_{x\cdot}$, $\mathbf{W}_{h\cdot}$ and the biases $\mathbf{b}_i, \mathbf{b}_f, \mathbf{b}_o, \mathbf{b}_s$ are the parameters to learn.
States in the ConvLSTM are hidden representations of moving objects, where the kernels of the ConvLSTM capture these movements \cite{convlstm}. We observe such movements in the weather features as well. For a target grid $\mathbf{Y}_t \in \mathbb{R}^{M\times N}$, we define the shifted submatrices $\{\mathbf{A}_{t-1}, \mathbf{B}_{t-1}, \mathbf{C}_{t-1}, \mathbf{D}_{t-1}\} \in \mathbb{R}^{(M-2)\times (N-2)}$ of $\mathbf{Y}_{t-1}$, restricted to the interior cells $k=1,\cdots,M-2$, $l=1,\cdots,N-2$,
\begin{align*}
& \mathbf{A}_{k,l} = \mathbf{Y}_{k-1,\,l} \\
& \mathbf{B}_{k,l} = \mathbf{Y}_{k+1,\,l} \\
& \mathbf{C}_{k,l} = \mathbf{Y}_{k,\,l-1} \\
& \mathbf{D}_{k,l} = \mathbf{Y}_{k,\,l+1}
\end{align*}
and we calculate the flow matrix by,
\begin{align*}
& \mathbf{Y}_t^A = \mathbf{Y}_t - \mathbf{A}_{t-1} \\
& \mathbf{Y}_t^B = \mathbf{Y}_t - \mathbf{B}_{t-1} \\
& \mathbf{Y}_t^C = \mathbf{Y}_t - \mathbf{C}_{t-1} \\
& \mathbf{Y}_t^D = \mathbf{Y}_t - \mathbf{D}_{t-1} \\
& \mathbf{F}_t = [\mathbf{Y}_t^A + \mathbf{Y}_t^B; \mathbf{Y}_t^C + \mathbf{Y}_t^D]
\end{align*}
where $\mathbf{F}_t \in \mathbb{R}^{(M-2)\times (N-2) \times 2}$. This means there is a flow vector $\mathbf{f}_t\in\mathbb{R}^2$ for each interior cell at each time step $t$. Figure \ref{fig:flow_vec_ex} shows an example of a calculated flow matrix.
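Under one reading of the shifted-submatrix definitions (each submatrix is the previous grid shifted by one cell in a compass direction), the flow matrix over the interior cells can be computed as below; this is our interpretation, not the paper's reference code:

```python
def flow_matrix(y_prev, y_curr):
    """Flow vectors on interior cells from two consecutive M x N grids.

    Each interior cell gets a 2D vector: the sum of vertical differences
    against the up/down-shifted previous frame, and likewise horizontally.
    """
    m, n = len(y_curr), len(y_curr[0])
    flow = []
    for k in range(1, m - 1):
        row = []
        for l in range(1, n - 1):
            vert = (y_curr[k][l] - y_prev[k - 1][l]) + (y_curr[k][l] - y_prev[k + 1][l])
            horiz = (y_curr[k][l] - y_prev[k][l - 1]) + (y_curr[k][l] - y_prev[k][l + 1])
            row.append((vert, horiz))
        flow.append(row)
    return flow
```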
ConvLSTM learns these flow vectors and passes this information to the next states. However, traditional side-information integration operations, such as creating embeddings with fully connected layers, disorient these vectors through non-positive multiplications. As a result, the inputs to the ConvLSTM structures lose their spatial covariances, which causes a drastic performance decrease. In short, any operation applied to the inputs of ConvLSTM units must preserve spatial covariances.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{figures/flow_vec_ex.png}
\caption{We obtained flow vectors from two consecutive inputs. The top image shows the magnified version of the flow matrix, which is obtained by the transitions from $t=0$ to $t=1$.}
\label{fig:flow_vec_ex}
\end{figure}
\subsection{Encoder and Attention Mechanism}
\begin{figure}[b]
\centering
\includegraphics[scale=0.4]{figures/encoder.png}
\caption{Input sequence first passes to the attention mechanism and then to the ConvLSTM units, which encode the input spatio-temporal sequence. We represented both hidden states as $\mathbf{H}_t$.}
\label{fig:encoder}
\end{figure}
For machine translation, \cite{cho2_encoder, sutskever2014sequence} proposed the RNN encoder-decoder model. This model can learn a semantically and syntactically meaningful representation of linguistic phrases. The model aims to encode the input sequence into a latent representation and decode it into the target sequence. In the early architectures, the encoder-decoder blocks consist of RNNs. An RNN encodes the input sequence into the encoder's hidden state, referring to the previous hidden state. An example sequence $\mathbf{x} = ({x}_1, {x}_2, \cdots, {x}_T)$, where $\mathbf{x} \in \mathbb{R}^T$, enters the model, and the encoder block learns the mapping
\begin{equation*}
\mathbf{h}_t = f(\mathbf{x}, \mathbf{h}_{t-1})
\label{eq:rnn1}
\end{equation*}
where $\mathbf{h}_t \in \mathbb{R}^m$ is the hidden state of the encoder at time $t$, and $m$ is the size of the hidden state. $f$ represents a non-linear function such as an RNN, LSTM \cite{lstm} or GRU \cite{gru}.
In this paper, we use this structure for modelling spatio-temporal sequences. For $n$ spatio-temporal input sequences $\mathbf{X} = (\mathbf{X}_1, \mathbf{X}_2, \cdots, \mathbf{X}_T)$ with $\mathbf{X}_t \in \mathbb{R}^{M \times N \times n}$, the encoder learns the mapping
\begin{equation*}
\mathbf{H}_t = f(\mathbf{X}_t, \mathbf{H}_{t-1})
\label{eq:rnn2}
\end{equation*}
where $\mathbf{H}_t \in \mathbb{R}^{M \times N \times m}$ is the hidden state of the encoder at time $t$, and $m$ is the size of the hidden state. $f$ represents the ConvLSTM units. We show the unrolled view of the encoder in Figure \ref{fig:encoder}. The output of the attention mechanism $\mathbf{\tilde{X}} = (\mathbf{\tilde{X}}_{t-T_\mathrm{in}}, \cdots, \mathbf{\tilde{X}}_{t})$ passes to the ConvLSTM units in each layer of the encoder. The layers of the encoder are shown in Figure \ref{fig:model_graph}. Each layer encodes the input spatio-temporal series referring to the hidden state of the previous time step. As shown in Figure \ref{fig:model_graph}, after the input layer, the hidden layers take the previous layer's outputs as input to their ConvLSTM unit. Each layer has unique hidden states $\mathbf{H}_t^k, \mathbf{S}_t^k$, where $k$ represents the layer index. After every sample of the input sequence has passed, i.e., $t>T_{\mathrm{in}}$, the hidden states in every layer, $\mathbf{H}_t^k, \mathbf{S}_t^k$, pass to the context matcher mechanism, and the encoding operation finishes.
We adapt the attention mechanisms in \cite{dual_attn} and \cite{Zhang2018} to the spatio-temporal domain. This mechanism aims to extract the relevant spatio-temporal series at each time step by referring to the encoder's previous hidden state. As shown in Figure \ref{fig:model_graph}, for each time step $t$, the attention mechanism takes the input features of the spatio-temporal time series $\mathbf{X} = \{\{\mathbf{X}_t^i\}_{t=1}^{T}\}_{i=1}^d$ and the previous hidden state of the encoder's first layer $\mathbf{H}_{t-1}^1$ to calculate the energy matrices for each $i = 1, \cdots, d$,
\begin{align}
& \mathbf{E}^i = \mathbf{V}_E \ast \mathrm{tanh}(\mathbf{W}_E \ast \mathbf{H}_{t-1}^1 + \mathbf{U}_E \ast \mathbf{X}^i) \\
& \mathbf{A}^i_{k,l} = p(att_{k,l}|\mathbf{X}^i, \mathbf{H}_{t-1}^1) = \frac{\mathrm{exp}(\mathbf{E}^i_{k,l})}{\sum_{k'=1}^{M}\sum_{l'=1}^{N}\mathrm{exp}(\mathbf{E}^i_{k',l'})} \\
& \mathbf{\tilde{X}}_t = \mathbf{A} \odot \mathbf{X}_t
\end{align}
where $\mathbf{A} \in \mathbb{R}^{M\times N \times d}$, $\mathbf{E}^i \in \mathbb{R}^{M \times N \times 1}$, $\mathbf{X}^i \in \mathbb{R}^{T\times M \times N}$, and $\mathbf{X}_t \in \mathbb{R}^{M \times N \times d}$. Note that the convolution operations in equation 3 are in 2D, where $\mathbf{W}_E \in \mathbb{R}^{K\times K\times m \times Q}$, $\mathbf{U}_E \in \mathbb{R}^{K\times K\times d \times Q}$, $\mathbf{V}_E \in \mathbb{R}^{K\times K\times Q \times 1}$. Here $m$, $d$, $Q$ and $K$ specify the hidden, input, attention and kernel dimensions, respectively. As shown in Figure \ref{fig:model_graph}, after performing the convolutions, we obtain a 2D energy matrix for each feature, and we calculate the attention matrix with the \textit{softmax} operation as shown in equation 4. Each cell in $\mathbf{A}$ represents the attention amount given to each of that particular cell's features. The final operation is the Hadamard product of the attention matrix with the input time series, as shown in equation 5. Then, the encoder mapping becomes,
\begin{equation}
\mathbf{H}_t = f(\mathbf{\tilde{X}}_t, \mathbf{H}_{t-1})
\label{rnn}
\end{equation}
where $\mathbf{H}_t$ represents all the hidden states in the encoder layers at time $t$. With the attention mechanism, the encoder can selectively focus on the different spatio-temporal series at each time step by referring to the encoder's previous hidden state. The critical point in this mechanism is that the \textit{softmax} function is non-negative. When we multiply the attention matrix with the input series, we do not change the orientations of the flow matrices since the operation is a non-negative scaling. Thus, we preserve the nature of the input series. Moreover, we use convolutional operations in the mechanism because fully connected methods would require flattening the input tensor; we would then lose the spatial covariances, and it would be much more costly than the convolutions.
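The non-negativity argument can be checked with a small spatial softmax sketch: the resulting weights are strictly positive and sum to one, so multiplying the inputs by them rescales magnitudes without flipping flow-vector orientations. The max-shift is a standard numerical-stability trick, not part of the paper's formulation:

```python
import math

def spatial_softmax(energy):
    """Softmax over an M x N energy matrix (non-negative weights summing to 1)."""
    flat = [e for row in energy for e in row]
    mx = max(flat)  # shift for numerical stability
    exps = [[math.exp(e - mx) for e in row] for row in energy]
    z = sum(v for row in exps for v in row)
    return [[v / z for v in row] for row in exps]
```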
\subsection{Decoder and Context Matcher Mechanism}
The purpose of the decoder is to decode the encoded information. As shown in Figure \ref{fig:model_graph}, the decoder consists of stacked ConvLSTM units, the same as the encoder. To prevent performance loss on long inputs, we introduce a context matcher mechanism. This mechanism sums the hidden states $\mathbf{H}_t^k$ of the encoder's layers in the time dimension. This way, we can extend the gradient flow back to the first time step. As observed in \cite{trajgru, traj_cv}, reversed matching of the hidden states of the encoder's layers increases performance. As shown in Figure \ref{fig:model_graph}, we use that symmetry in our architecture, and we set the number of layers in the encoder and the decoder to be the same. The encoder's hidden state dimensions decrease as the number of layers increases, and naturally vice versa for the decoder. With this architecture, we obey the encoder-decoder design convention: encoding into a dense representation and decoding to expand it. We can describe the context matcher mechanism as
\begin{equation*}
\mathbf{D}_t^{K-k+1} = \sum_{i=t-T_{\mathrm{in}}}^{t} \mathbf{H}_i^k
\end{equation*}
where $\mathbf{D}_t^{K-k+1}\in \mathbb{R}^{M\times N \times m}$ is the decoder hidden state, $K$ is the total number of layers, and $k \in \{1, \cdots, K\}$. As shown in Figure \ref{fig:model_graph}, the context matcher sums the hidden states in the time dimension and reverses the order of the states. Next, these states pass to the decoder's corresponding layers, and the input passes to the first layer of the decoder. The first layer produces the next hidden state and the input to the next layer. The decoder's final layer produces the inputs to the output convolutions, as shown in Figure \ref{fig:model_graph}. The output convolutions allow us to use higher dimensions in the first layer of the encoder. With this layer, we can produce predictions with the desired number of dimensions.
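The context matcher itself reduces to a sum over time followed by a reversal of the layer order. A toy sketch with scalar hidden states (real hidden states are $M\times N\times m$ tensors, summed elementwise):

```python
def context_matcher(encoder_states):
    """encoder_states[k] is layer k's hidden states over time (scalars here).

    Sum each layer over the time dimension, then reverse the layer order so
    the encoder's last layer initialises the decoder's first layer.
    """
    summed = [sum(states) for states in encoder_states]
    return summed[::-1]
```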
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{figures/decoder.png}
\caption{After the context matcher feeds the initial states, the decoder takes $\mathbf{Y}_t$ as the input to predict $\mathbf{\hat{Y}}_{t+1}$ as the output. Next, the decoder uses this prediction and $\mathbf{D}_{t+1}$ to predict the next output. This process continues until $t=T_{\mathrm{out}}$. We represented both hidden states as $\mathbf{D}_t$.}
\label{fig:decoder}
\end{figure}
We show the recursive prediction of the decoder in Figure \ref{fig:decoder}. The decoder uses the predictions produced in the previous time step. Thus, we can make predictions of any sequence length with this set-up. We observed that this architecture prevents overfitting and improves generalization.
\section{Experiments}
In this section, we compare our model's performance with the baseline models on a real weather dataset. We select ConvLSTM, U-Net \cite{Ronneberger2015} and simple moving average as our baseline models. We first describe the online training process and our pipeline. Then, we introduce the weather dataset with an exploratory data analysis to describe the properties of the dataset and select the parameters of our model. Finally, we perform the experiments and interpret the resulting attention matrices.
\subsection{The Online Training and Pipeline}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{figures/online-training.png}
\caption{We show the online training for the experiments between 2000 - 2005. For each experiment, we select two years of data, where the stride is 6 months. We split each selection with $80\%$ training, $10\%$ validation, and $10\%$ test data.}
\label{fig:online-training}
\end{figure}
We perform online training since it requires smaller data storage, speeds up learning, and adapts better to new data changes. As shown in Figure \ref{fig:online-training}, we select a two-year period for each experiment and shift the start date of the period by the stride amount to run the next experiment. In each period, we split the data into three sets: train, validation, and test. We select the period length as two years to observe the seasonality, and the stride as six months so that validation and testing cover both warm and cold weather. In total, we run 10 experiments.
In training, we collect the batch samples randomly. The temporal length of each sample is the sum of the input and output window sizes. We take samples of this length from the beginning of the set, shifting by one time step at each selection. After covering the whole set, we shuffle the samples and group them into batches. With this method, we increased the performance of our models.
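The window extraction and shuffling described above can be sketched as follows (illustrative Python for a one-dimensional series; the seeded generator is our own addition for reproducibility):

```python
import random

def sliding_windows(series, t_in, t_out):
    """All (input, target) windows, shifted by one time step each."""
    window = t_in + t_out
    return [(series[s:s + t_in], series[s + t_in:s + window])
            for s in range(len(series) - window + 1)]

def make_batches(windows, batch_size, seed=0):
    """Shuffle the collected windows, then group them into batches."""
    rng = random.Random(seed)
    shuffled = windows[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + batch_size]
            for i in range(0, len(shuffled), batch_size)]
```

Because consecutive windows overlap by all but one time step, batches can contain overlapping elements of the dataset, as noted earlier.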
Note that online training is crucial to assess the performance of prediction algorithms in real-life applications. We show our pipeline in Figure \ref{fig:pipeline}. As new data come in, we update the model in the online training manner. We make the new short-middle-long term predictions using the model whose training date is closest to the current date. With this pipeline, we combine the information coming from the stations and the database.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{figures/pipeline.png}
\caption{There are two data sources: the data centers in the grid format and the other from the meteorological stations in time-series format. After spatial interpolation of observations, we train the model and make short-middle-long term predictions. As the new data come, we update the model to make new predictions.}
\label{fig:pipeline}
\end{figure}
\subsection{The Weather Dataset}
To compare our model with the baseline models and the literature results, e.g., \cite{Rasp2020}, we use the benchmark dataset \textit{ERA5 hourly data on pressure levels} \cite{Hersbach2020}. The temporal resolution of the data is one hour, the spatial resolution is thirty kilometers, and the data belong to the 100 hPa pressure level. As a result, our data has shape $(61\times121)$ for each time step and feature. The data cover dates between $2000$ and $2005$ and the spatial extent of latitudes $30^\circ$ - $45^\circ$ and longitudes $20^\circ$ - $50^\circ$. The features of the data are shown in Table \ref{table:features}. There are many parameters in the ECMWF's database. We selected these parameters because of their availability and their distribution in time; the parameters we chose do not include many zeros or a high percentage of repeated values. Among these parameters, we selected the temperature as our target. Thus, our task is to find the next temperature values using the previous values of the weather features.
\begin{table}[t]
\centering
\begin{tabular}{p{2.1cm} p{2.1cm}}
\hline
& Units \\
\hline
Geopotential & $\mathrm{m^2s^{-2}}$ \\
Potential vorticity & $\mathrm{Km^2kg^{-1}s^{-1}}$ \\
Relative humidity & $\mathrm{\%}$ \\
Specific humidity & $\mathrm{kg\:kg^{-1}}$\\
Temperature & $\mathrm{K}$ \\
U-wind & $\mathrm{ms^{-1}}$ \\
V-wind & $\mathrm{ms^{-1}}$ \\
Vertical velocity & $\mathrm{Pa\:s^{-1}}$\\
\hline
\end{tabular}
\caption{The corresponding units of the features in ERA5 dataset.}
\label{table:features}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.4]{figures/correlation.png}
\caption{The shown matrix is the mean of the correlation matrices calculated in each of the target grid's cells. The temporal length of the cells is two years.}
\label{fig:correlation}
\end{figure}
We calculated the correlation matrix shown in Figure \ref{fig:correlation}. According to this matrix, we can see strong relations between the features. For example, the temperature is strongly correlated with the geopotential and specific humidity, while it is negatively correlated with relative humidity.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/temperature_8.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/temperature_56}
\caption{}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/temperature_240}
\caption{}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/temperature_seasonal.png}
\caption{}
\end{subfigure}
\caption{We show the temporal trend of temperature. (a) shows the comparison of the current value with one day shift, (b) for one week shift, and (c) for one month shift. (d) shows the seasonal trend of the data from $2004$ to $2006$. These temporal graphs belong to a randomly selected cell.}
\label{fig:temporal_trend}
\end{figure}
We performed a temporal trend analysis on our data, as shown in Figure \ref{fig:temporal_trend}. For a randomly selected cell, we compare the temperature's shifted values, aiming to find temporal relations between the current temperature and its values one day, one week, and one month earlier. However, we could not find any consistent relationship, and we can see that the temperature value fluctuates highly. Even though we observe intersections with the shifted values, this rarely happens in the one-week and one-month shifted graphs, and the difference between the shifted and current values is significant. When we look at the first time shift in Figure \ref{fig:temporal_trend}a, we observe peaks and valleys where the previous temperature values are close. We can conclude that the next value depends at most on the last ten time steps; beyond ten time steps, it varies highly. To this end, we select 10 time steps of input data and try to predict 10 time steps ahead. Furthermore, when we look at Figure \ref{fig:temporal_trend}d, we observe a seasonal trend in the series. Thus, in our experiments, we selected a temporal length of at least two years to learn the data's seasonality.
The other parameters of the \textit{Weather Model} are as follows. There are three layers of ConvLSTM units in the encoder and the decoder. The hidden dimensions of the encoder are $32, 32, 16$, and those of the decoder are $16, 32, 32$. The kernel sizes of the layers in the encoder are $5, 3, 1$, and those of the decoder are $3, 3, 1$. We set the hidden states to zero at the beginning of each epoch. Next, we select the attention dimension as $5$, where the kernel sizes of the convolutions are $3$. Lastly, there are two layers of convolution units at the end of the decoder's last layer, where the middle channel is $5$ and the output channel is $1$. Below, we describe the parameters of the baseline models.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/y}
\caption{Predicted sequence and ground truth}
\label{fig:y}
\end{subfigure}
\begin{subfigure}{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/x}
\caption{Input sequences to model}
\label{fig:x}
\end{subfigure}
\begin{subfigure}{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/alpha}
\caption{Attention weights to input sequences at each time step}
\label{fig:alpha}
\end{subfigure}
\caption{We show the \textit{Weather Model} result with the input sequence, output sequence, and attention weights at each time step. Observe the cyclone in the input sequence's features: potential vorticity, relative and specific humidity, and temperature. Towards the first time steps, the model focuses on the stationary parts of the input sequences. Towards the end of the series, most of the attention weights, other than those of the relative humidity, converge to 1 since most of these stationary parts fade away. The model needs every movement in the last time steps since they are very close to the current values. Besides, note the inverse attention-weight relation between the relative humidity and the temperature features; as shown in Figure \ref{fig:correlation}, they are inversely correlated.}
\label{fig:xy_alpha}
\end{figure*}
\subsection{The Baseline Models}
We find the best parameters of all the models using grid search. After 4 consecutive validation loss increments, training stops and we save the best model, which has the smallest validation loss.
\subsubsection{Simple Moving Average (SMA)}
In this model, the estimate at time $t$ is obtained by averaging the values of the input spatio-temporal series within $k$ periods of $t$, where $k$ is the window length \cite{time_series_forecasting}. The model has a temporal mask called the window. The input window length equals $T_{\mathrm{in}}$, and we selected $T_{\mathrm{in}}=30$. Moreover, the model provides recurrent predictions: after the first prediction at $t=T_{\mathrm{in}}+1$, the predicted value is inserted at the end of the input values to produce the next prediction. This process repeats $T_{\mathrm{out}}$ times and produces the output sequence. In this paper, unlike the plain SMA, we train the input window. We initialize the weights with a uniform random variable on the range $[0, 1]$. At every time step, we suffer the square loss given in \eqref{mse_loss} and train the weights of the input window using Stochastic Gradient Descent (SGD).
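A one-dimensional toy version of this trainable moving-average baseline, with SGD on the squared one-step-ahead error, might look as follows; the learning rate, epoch count and seed are illustrative choices, not the paper's settings:

```python
import random

def train_sma_weights(series, t_in, lr=0.01, epochs=200, seed=0):
    """Learn window weights by SGD on the one-step-ahead squared error."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(t_in)]  # uniform init on [0, 1]
    for _ in range(epochs):
        for s in range(len(series) - t_in):
            x = series[s:s + t_in]
            y = series[s + t_in]
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # gradient of (pred - y)^2 w.r.t. w_i is 2 * err * x_i
            w = [wi - lr * 2 * err * xi for wi, xi in zip(w, x)]
    return w

def sma_predict(w, history):
    """Weighted-window prediction from the last t_in values."""
    return sum(wi * xi for wi, xi in zip(w, history))
```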
\subsubsection{U-net}
We implemented the model architecture introduced in \cite{Ronneberger2015}. The model takes 10 steps of the temperature spatio-temporal series and predicts 10 steps ahead. The architecture consists of 4 downsample and 4 upsample layers. We suffer the square loss given in \eqref{mse_loss} and train the model with the Adam optimizer.
\subsubsection{ConvLSTM}
The model's architecture in the experiments is the same as the forecasting network proposed in \cite{convlstm}, with some differences that improved the performance of this model. Since reversing the encoder's hidden states enhances the performance of ConvLSTM architectures, we used that method in this model. The encoder consists of three layers of ConvLSTM units; the numbers of hidden dimensions in the encoder layers are $1, 16, 32$, and the kernel sizes are $5, 3, 1$. The ConvLSTM units in the decoder's layers have the same numbers of hidden dimensions and kernel sizes in reverse order. We set the hidden states to zero at the beginning of each epoch, and the decoder takes zero tensors as input. To prevent exploding gradient problems in LSTM-based architectures, we set the gradient clipping parameter to 5.
\subsection{Performance Analysis and Results}
We perform our experiments on our dataset with a three-hour frequency, which means eight time steps belong to the same day. We compare the performance of all models by the square loss given in \eqref{mse_loss}. We show the results of our experiments in Table \ref{table:results}; the learning curves belonging to these experiments are in Appendix-\ref{SecondAppendix}. Our model obtains the best validation and test scores, which shows that our model generalizes the data better. Moreover, we observe that the U-Net and the ConvLSTM models have a high performance gap between validation and test scores, caused by overfitting to the training data.
In addition, we observe a fluctuation in the test and validation scores of every model. Since we shift the experiment range by six months, the validation and test sets alternate between the winter and summer seasons. Since more weather events happen during winter, the movements of the weather are less predictable. Thus, there is a performance decrease during the winter season in every model.
\begin{table}[t]
\centering
\begin{tabular}{p{1.5cm} p{1.3cm} p{1.3cm} p{1.3cm} p{1.3cm}}
\hline
\multicolumn{5}{c}{Validation Scores} \\
\hline
Experiment & SMA & U-Net & ConvLSTM & Weather Model \\
\hline
1 & 2.4 & 2.11 & 1.7 & $\mathbf{1.18}$ \\
2 & 7.13 & 4.41 & 3.32 & $\mathbf{3.08}$ \\
3 & 3.89 & 1.85 & 1.8 & $\mathbf{1.57}$ \\
4 & 5.48 & 3.06 & 2.49 & $\mathbf{2.11}$ \\
5 & 2.72 & 2.19 & 1.75 & $\mathbf{1.43}$ \\
6 & 4.41 & 2.29 & 3.42 & $\mathbf{2.19}$ \\
7 & 2.58 & 1.53 & 1.42 & $\mathbf{0.96}$ \\
8 & 3.30 & 2.59 & $\mathbf{1.75}$ & 1.95 \\
9 & 3.85 & 1.88 & 1.71 & $\mathbf{1.45}$ \\
10 & 5.48 & 2.35 & 2.72 & $\mathbf{1.75}$ \\
\hline
\hline
\multicolumn{5}{c}{Test Scores} \\
\hline
Experiment & SMA & U-Net & ConvLSTM & Weather Model \\
\hline
1 & 4.44 & 3.77 & 3.54 & $\mathbf{2.33}$ \\
2 & 4.83 & 3.39 & 2.29 & $\mathbf{2.04}$ \\
3 & 6.36 & 3.02 & 2.92 & $\mathbf{2.47}$ \\
4 & 3.26 & 2.34 & 1.46 & $\mathbf{1.32}$ \\
5 & 5.05 & 3.22 & 2.91 & $\mathbf{2.40}$ \\
6 & 3.21 & 1.77 & 2.67 & $\mathbf{1.76}$ \\
7 & 5.38 & 2.96 & 2.76 & $\mathbf{1.77}$ \\
8 & 4.12 & 2.68 & $\mathbf{1.93}$ & 2.11 \\
9 & 5.49 & 2.87 & 2.37 & $\mathbf{2.05}$ \\
10 & 3.42 & 1.91 & 1.85 & $\mathbf{1.38}$ \\
\hline
\end{tabular}
\caption{We show the models' validation and test MSE scores. Bold scores are the lowest.}
\label{table:results}
\end{table}
To analyze the model's spatial and temporal performance, we show the predicted sequence's heat maps with the ground truth in Figure \ref{fig:y}. When we examine the graph, we can say our model learns the temperature fluctuations in time. Naturally, the performance of our model deteriorates as the length of the prediction interval increases. However, when compared with the ground truth, our model successfully generalizes the fluctuations in temperature. Moreover, when we compare our model with the baseline models in Appendix-\ref{FirstAppendix}, we observe across time steps that our model can represent the formation of temperature waves and avoids making constant predictions.
We show the multiple spatio-temporal series inputs in Figure \ref{fig:x}. We can see that a cyclone, or swirl of winds, is located at the center of the data. After ten time steps, this cyclone fades away, and another one occurs at a different location. Figure \ref{fig:alpha} shows that our model focuses on these areas with different attention weights for the various features. We observe that these weights show the same correlation pattern as Figure \ref{fig:correlation}. However, towards the current time step, i.e., $t_{10}$, the weights become less distinctive and converge to one, caused by the fading of the cyclone and the data's temporal trend, as we mentioned in Section III-B. Considering that the current temperature values are positively correlated with the recent past values, the model needs to focus on every movement in the input data. Thus, the attention matrices converge to 1 towards the end of the input data series.
\section{Conclusion}
We studied spatio-temporal weather data and incorporated our observations to perform weather forecasting with a deep learning pipeline. We introduced a novel ConvLSTM network structure with attention and a context matcher mechanism to extend the encoder-decoder method. The model jointly trains both the attention weights and the ConvLSTM units. Moreover, we formulated the problem as a spatio-temporal sequence prediction problem with multiple sources of information and implemented a spatial interpolation algorithm. We analytically described our objective function and preprocessing steps. We show that we can learn spatio-temporal correlations with the introduced model and provide predictions using multiple spatio-temporal series with high interpretability.
Furthermore, given the innate nature of the input spatio-temporal series with flow vectors, we pointed out incorrect side-information integration into ConvLSTM networks in the literature. We then analyzed our network's performance on real-life datasets with high spatial and temporal resolution. In our experiments, through quantitative and qualitative comparisons between the predicted and the ground truth values, we evaluated our model's performance along the temporal and spatial dimensions. Our model can learn the formation and decay of temperature waves, and it shows significant performance gains compared to the baseline models in our experiments.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{se:introduction}
Stochastic games are widely used in economic, engineering and social science applications, and the notion of Nash equilibrium is one of the most prevalent notions of equilibrium used in their analyses. However, when the number of players is large, exact Nash equilibria are notoriously difficult to identify and construct explicitly. In an attempt to circumvent this roadblock, Lasry and Lions in \cite{MFG1,MFG2,MFG3} initiated the theory of mean field games for a type of games in which all the players are \emph{statistically identical}, and only interact through their empirical distributions. These authors successfully identify the limiting problem as a set of two coupled PDEs, the first one of Hamilton-Jacobi-Bellman type and the second one of Kolmogorov type. Approximate Nash equilibria for the finite-player games are then derived from the solutions of the limiting problem. Motivated by the analysis of large communication networks, Huang, Malham\'e and Caines developed independently a very similar program, see \cite{HuangMalhameCaines}, under the name of Nash Certainty Equivalence. A probabilistic approach was developed by Carmona and Delarue, see \cite{CarmonaDelarue_sicon}, in which the limiting system of coupled PDEs is replaced by a fully coupled forward-backward stochastic differential equation (FBSDE for short). Recently, an approach based on the weak formulation of stochastic controls was introduced in \cite{CarmonaLacker1}, and models with a common noise were studied in \cite{CarmonaDelarueLacker}.
From a modeling perspective, one of the major shortcomings of the standard mean field game theory is the strong symmetry requirement that all the players in the game are statistically identical. See nevertheless \cite{HuangMalhameCaines} where the asymptotic theory is applied to several groups of players.
The second requirement of the mean field games theory is that, when the number of players is large, the influence of any single player on the system becomes asymptotically negligible. This is in sharp contrast with some real-world applications. For example, in the banking system there are a few \emph{too big to fail} banks whose actions and status impact the system no matter how large the number of small banks is, together with a large number of small banks.
In \cite{Huang}, Huang introduced a linear-quadratic infinite-horizon model in which there exists a major player whose influence does not fade away when the number of players tends to infinity. \cite{NguyenHuang1} introduces the finite-horizon counterpart, and \cite{NourianCaines} generalizes this model to the nonlinear case. These models are usually called ``\emph{mean field games with major and minor players}''. Unfortunately, the scheme proposed in \cite{NguyenHuang1,NourianCaines} fails to accommodate the case where the state of the major player enters the dynamics of the minor players. To be more specific, in \cite{NguyenHuang1,NourianCaines}, the major player influences the minor players solely via their cost functionals. \cite{NguyenHuang2} proposes a new scheme to solve the general case for linear-quadratic-Gaussian (LQG for short) games in which the major player's state enters the dynamics of the minor players. The limiting control problem for the major player is solved by what the authors call ``anticipative variational calculation''. In \cite{BensoussanChauYam}, the authors take, like in \cite{NourianCaines}, a stochastic Hamilton-Jacobi-Bellman approach to a type of general mean field games with major and minor players, and the limiting problem is characterized by a set of stochastic PDEs.
In this paper, we analyze a type of general mean field games with major and minor players, and develop a systematic scheme to find approximate Nash equilibria for the finite-player games using a purely probabilistic approach. The limiting problem is identified as a two-player stochastic differential game, in which the control problem faced by the major player is of conditional McKean-Vlasov type, while the optimization problem faced by the representative minor player is a standard control problem. The solution of the two-player game is followed by a matching procedure, which leads to an FBSDE of McKean-Vlasov type characterizing the solution of the limiting problem. We also elaborate on the construction of approximate Nash equilibria for the finite-player games with the aid of the limiting problem, and carefully prove the approximate Nash equilibrium property both for the major player and for the minor players, which fully justifies the scheme we propose. We believe that the results in this paper lead to a much more comprehensive understanding of problems of this type.
While \cite{BensoussanChauYam} is clearly the closest contribution to ours, our paper differs from \cite{BensoussanChauYam} in the following ways: first, we use a probabilistic approach based on a new version of the Pontryagin stochastic maximum principle for conditional McKean-Vlasov dynamics in order to solve the embedded stochastic control problems, while in \cite{BensoussanChauYam} an HJB equation approach is taken. Second, the limiting problem is defined as a two-player game, as opposed to the three problems articulated in \cite{BensoussanChauYam}. We believe that this gives a better insight into this kind of mean field games with a major player. Third, the finite-player game in \cite{BensoussanChauYam} is an $N$-player game including only the minor players; the major player is considered exogenous and does not actively participate in the game. The associated propagation of chaos is then just a randomized version of the usual propagation of chaos associated with the usual mean field games, and the limiting scheme is not completely justified. Here we define the finite-player game as an $(N+1)$-player game including the major player. The construction of approximate Nash equilibria is proved for the minor players and, most importantly, for the major player as well, fully justifying our limiting scheme for finding approximate Nash equilibria.
The classical theory of propagation of chaos, in which the particles are identical, is well developed. See for example the elegant treatment in \cite{Sznitman} and a more recent account in \cite{JourdainMeleardWoyczynski}. However, when introducing a major particle in the system, even when the number of particles tends to infinity, the influence of this \emph{major} particle on the other particles does not average out in the limit. This creates interesting novel features not present in the classical theory. They involve conditioning with respect to the information flow associated to the major particle.
Our propagation of chaos result for SDEs of McKean-Vlasov type with conditional distributions is given in the stand-alone Section \ref{se:conditional_chaos}.
The results of this section play a crucial role in the construction, in Section \ref{se:approximate}, of approximate Nash equilibria from the solution of the limiting two-player game. They are independent of the results on Mean Field Games. For this reason, we include them at the end of the paper, so as not to disrupt the flow.
The advantages of using the probabilistic approach are threefold. First, the probabilistic framework is natural when dealing with open-loop controls. In the present situation, the persistence of the influence of the major player forces the controls to be random, at least partially, even when looking for strategies in closed loop form. Second, the limiting conditional McKean-Vlasov control problem faced by the major player can be treated most elegantly using an appropriate version of the Pontryagin stochastic maximum principle. Since such a form of the stochastic maximum principle is not available in the published literature, we provide it in an appendix at the end of the paper. Third, our approach can rely on existing results in the literature on the well-posedness of FBSDEs and their associated decoupling fields in order to address the solvability of the limiting problem.
The mean field game model with major and minor players investigated in this paper is as follows. The major player which is indexed by $0$, can choose a control process $u^{0,N}$ taking values in a convex set $U_0 \subset \mathbb{R}^{k_0}$, and every minor player indexed by $i\in\{1,\cdots,N\}$ can choose a control process $u^{i,N}$ taking values in a convex set $U \subset \mathbb{R}^k$. The state of the system at time $t$ is given by a vector $X^N_t=(X^{0,N}_t,X^{1,N}_t,\cdots,X^{N,N}_t)\in \RR^{d_0+Nd}$ whose controlled dynamics are given by
\begin{equation}\label{fo:dynamics}
\begin{cases}
dX^{0,N}_t=b_0(t,X^{0,N}_t,\mu^N_t,u^{0,N}_t)dt+\sigma_0(t,X^{0,N}_t,\mu^N_t, u^{0,N}_t) dW^0_t,\\
dX^{i,N}_t=b(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t,u^{i,N}_t)dt+\sigma(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t,u^{i,N}_t) dW^i_t,\quad 1 \leq i \leq N,
\end{cases}
\end{equation}
where $(W^i_t)_{i \geq 0}$ is a sequence of independent Wiener processes, and
\begin{equation}
\label{fo:muN}
\mu^N_t=\frac{1}{N}\sum^N_{i=1}\delta_{X^{i,N}_t}
\end{equation}
is the empirical distribution of the states of the minor players, $\delta_x$ standing for the point Dirac mass at $x$.
The Wiener process $W^0$ is assumed to be $m_0$ dimensional while all the other Wiener processes $W^i$ for $i\ge 1$ are assumed to be $m$-dimensional. $X^{0,N}_t$ (and hence $b_0$) is $d_0$-dimensional while all the other $X^{i,N}_t$ (and hence $b$) are $d$-dimensional. Finally, for consistency reasons, the matrices $\sigma_0$ and $\sigma$ are $d_0\times m_0$ and $d\times m$ dimensional respectively.
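The coupled system \eqref{fo:dynamics} can be simulated with a straightforward Euler-Maruyama scheme. The following sketch uses purely illustrative scalar coefficients ($d_0=d=1$, interaction through the empirical mean only, constant volatilities, zero controls); none of these choices are taken from a specific model in this paper.

```python
import numpy as np

def simulate_major_minor(N=50, n_steps=100, T=1.0, seed=0):
    """Euler-Maruyama simulation of the major/minor system (fo:dynamics)
    with illustrative coefficients and zero controls (d0 = d = 1):
        b0(t, x0, mu, u0) = -x0 + <mu, id>,       sigma0 = 0.3,
        b(t, x, mu, x0, u) = -x + <mu, id> + x0,  sigma  = 0.3,
    where <mu, id> is the mean of the empirical measure mu^N_t,
    represented here by the sample of minor states itself."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x0 = 0.0                        # major player's state X^0
    x = rng.normal(size=N)          # minor players' states X^1, ..., X^N
    for _ in range(n_steps):
        m = x.mean()                # interaction through mu^N_t
        dW0 = rng.normal() * np.sqrt(dt)
        dW = rng.normal(size=N) * np.sqrt(dt)
        new_x0 = x0 + (-x0 + m) * dt + 0.3 * dW0
        x = x + (-x + m + x0) * dt + 0.3 * dW   # uses the pre-update x0
        x0 = new_x0
    return x0, x

x0_T, x_T = simulate_major_minor()
```

Note how the coupling enters only through the empirical mean, so each step costs $O(N)$; for interactions through the full measure one would carry the whole sample of atoms in the coefficients.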
The major player aims at minimizing the cost functional given by
\begin{equation}\label{fo:majorcost}
J^{0,N}(u^{0,N},u^N)=\mathbb{E}\left[\int^T_0 f_0(t,X^{0,N}_t,\mu^N_t,u^{0,N}_t)dt+g_0(X^{0,N}_T,\mu^N_T)\right],
\end{equation}
and the minor players aim at minimizing the cost functionals:
\begin{equation}\label{fo:minorcost}
J^{i,N}(u^{0,N},u^N)=\mathbb{E}\left[\int^T_0 f(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t,u^{i,N}_t)dt+g(X^{i,N}_T,\mu^N_T,X^{0,N}_T)\right], \quad 1 \leq i \leq N.
\end{equation}
We use the notation $u^N$ for $(u^{1,N},\cdots,u^{N,N})$. We observe readily that an important difference between the current model and the usual mean field game model is the presence of the state of the major player in the state dynamics and the cost functionals of the minor players. Even when the number of minor players is large, the major player can still influence the behavior of the system in a non-negligible manner.
The rest of the paper is organized as follows. In the preliminary section \ref{se:prelim} we review briefly the usual mean field game scheme, and then proceed to the scheme for the mean field games with major and minor players proposed in this paper. Some heuristic arguments leading to the scheme are also provided, and the differences between the current scheme and the one used in \cite{NguyenHuang1,NourianCaines} are highlighted. In section \ref{se:mfg} we carry out the scheme described in section \ref{se:prelim} for a type of mean field games with major and minor players with scalar interactions, and we use the Pontryagin maximum principle to solve the embedded stochastic control problems. The FBSDE of conditional mean field type characterizing the Nash equilibria for the limiting two-player game is derived. In section \ref{se:approximate}, we prove that the solution of the limiting problem can actually be used to build approximate Nash equilibria for the finite-player games, justifying our scheme. In section \ref{se:lqg}, we apply the scheme to the case of Linear Quadratic Gaussian (LQG for short) models, and find explicit approximate Nash equilibria for the finite-player games, and in section \ref{se:example} a concrete example is given to show that the current scheme leads to different results from the scheme proposed in \cite{NguyenHuang2} and \cite{NourianCaines}. In the independent section \ref{se:conditional_chaos}, we prove a conditional version of propagation of chaos which plays a pivotal role in the construction of approximate Nash equilibria in section \ref{se:approximate}. Finally, in the appendix at the end of the paper, we prove a version of the sufficient part of the Pontryagin stochastic maximum principle for conditional McKean-Vlasov dynamics used in solving the stochastic control problem faced by the major player.
\section{Preliminaries}
\label{se:prelim}
\subsection{Brief Review of the Standard Mean Field Game Problem}
A standard introduction to the mean field game (MFG for short) theory starts with an $N$-player stochastic differential game, the dynamics of the states of the players being governed by stochastic differential equations (SDEs)
$$
dX^{i,N}_t=b(t,X^{i,N}_t, \mu^N_t,u_t^{i,N})dt+\sigma(t,X^{i,N}_t,\mu^N_t,u^{i,N}_t) dW_t^i, \quad i=1,2,...,N,
$$
each player aiming at the minimization of a cost functional
$$
J^{i,N}(u)=\mathbb{E}\left[\int_0^T f(t,X_t^{i,N},\mu^N_t,u_t^{i,N})dt+g(X_T^{i,N},\mu^N_T)\right],
$$
where $\mu^N_t$ stands for the empirical distribution of the $X^{i,N}_t$ for $i=1,\cdots,N$. The usual MFG scheme can be summarized in the following three steps:
\begin{enumerate}
\item Fix a deterministic flow $(\mu_t)_{0 \leq t \leq T}$ of probability measures.
\item Solve the standard stochastic control problem: minimize
$$J(u)=\mathbb{E}\left[\int_0^Tf(t,X_t, \mu_t,u_t)dt+g(X_T,\mu_T)\right],$$
when the controlled dynamics of the process $X_t$ are given by
$$dX_t=b(t,X_t, \mu_t, u_t)dt+\sigma(t,X_t,\mu_t,u_t) dW_t.$$
\item Solve the fixed point problem $\Phi(\mu)=\mu$, where for each flow $\mu$ as in step (1), $\Phi(\mu)$ denotes the flow of marginal distributions of the optimally controlled state process found in step (2).
\end{enumerate}
If the above scheme can be carried out successfully, it is usually possible to prove that the optimal control found in step (2) can be used to provide approximate Nash equilibria for the finite-player game. The interested reader is referred to \cite{MFG1,MFG2,MFG3,HuangMalhameCaines} for detailed discussions of the PDE approach of the above scheme and to \cite{CarmonaDelarue_sicon, CarmonaLacker1} for two different probabilistic approaches.
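The three steps above can be illustrated numerically on a toy linear-quadratic model for which step (2) admits a closed-form solution. The model, the Riccati feedback and all numerical values below are illustrative choices for the sake of the sketch, not taken from this paper.

```python
import numpy as np

def mfg_fixed_point(x0=1.0, c=2.0, T=1.0, n=200, n_iter=30):
    """Picard iteration for the three-step MFG scheme on a toy LQ model:
        dX_t = u_t dt + sigma dW_t,
        J(u) = E[ (1/2) int_0^T u_t^2 dt + (c/2) (X_T - m_T)^2 ],
    where (m_t) is the measure flow of step (1), reduced to its mean.
    Step (2) has the closed-form feedback u_t = -P_t (X_t - m_T) with the
    Riccati solution P_t = c / (1 + c (T - t)), so the controlled mean
    solves d xbar_t = -P_t (xbar_t - m_T) dt.  Step (3) replaces m by the
    mean flow of the optimally controlled state."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    P = c / (1.0 + c * (T - t))      # Riccati solution, P_T = c
    m = np.zeros(n)                  # initial guess for the mean flow
    for _ in range(n_iter):
        xbar = np.empty(n)
        xbar[0] = x0
        for k in range(n - 1):       # Euler scheme for the mean ODE
            xbar[k + 1] = xbar[k] - P[k] * (xbar[k] - m[-1]) * dt
        m = xbar                     # fixed-point update m <- Phi(m)
    return t, m

t, m = mfg_fixed_point()
```

For this toy model the fixed point is the constant flow $m_t\equiv x_0$ (once $m_T=x_0$, the feedback vanishes along the mean), and the Picard iteration converges to it geometrically.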
\subsection{Heuristic derivation of MFG approach}
In this subsection we provide a heuristic argument which leads to a scheme for mean field games with major and minor players. The finite-player games are described by equations (\ref{fo:dynamics})-(\ref{fo:minorcost}) above. Because all the minor players are identical and influenced by the major player in exactly the same way, it is reasonable to assume that they are exchangeable, even when the optimal strategies (in the sense of Nash equilibrium) are implemented. On the other hand, for any sequence of integrable exchangeable random variables $(X_i)_{i \geq 1}$, de Finetti's law of large numbers states that almost surely,
$$
\frac{1}{N}\sum^N_{i=1} \delta_{X_i} \Longrightarrow \mathcal{L}(X_1 \vert \mathcal{G}),
$$
for some $\sigma$-field $\mathcal{G}$ where $ \Longrightarrow$ denotes convergence in distribution. We may want to apply this result for each time $t$ to the individual states $X^{i,N}_t$ in which case, a natural candidate for the $\sigma$-field $\mathcal{G}$ could be the element $\mathcal{F}^0_t$ of the filtration generated by the Wiener process $W^0$ driving the dynamics of the state of the major player. This suggests that in mean field games with major and minor players, we can proceed essentially in the same way as in the standard mean field game theory, except for the fact that instead of fixing a \emph{deterministic} measure flow in the first step, we fix an adapted \emph{stochastic} measure flow, and in the last step, match this stochastic measure flow to the flow of marginal conditional distribution of the state of the representative minor player given $\mathcal{F}^0_t$. This is in accordance with intuition since, as all the minor players are influenced by the major player, they should make their decisions conditioned on the information provided by the major player. Notice that this is also consistent with the procedure used in the presence of a so-called common noise as
investigated in \cite{CarmonaDelarueLacker}.
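The conditional law of large numbers underlying this heuristic is easy to check by simulation. In the sketch below, a single Gaussian common factor plays the role of the information generated by $W^0$; this specification is purely illustrative.

```python
import numpy as np

def conditional_lln_gap(N, n_rep=200, seed=0):
    """Monte Carlo check of the conditional law of large numbers:
    the variables X_i = X0 + eps_i are i.i.d. conditionally on the
    common factor X0 (playing the role of the sigma-field F^0_t), so
    the empirical mean of X_1, ..., X_N should approach the conditional
    mean E[X_1 | X0] = X0, not the unconditional mean 0."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(size=(n_rep, 1))     # one common factor per repetition
    eps = rng.normal(size=(n_rep, N))    # idiosyncratic noises
    emp_mean = (x0 + eps).mean(axis=1)   # empirical mean over N "minor" samples
    return np.mean(np.abs(emp_mean - x0[:, 0]))
```

The gap to the conditional mean shrinks roughly like $N^{-1/2}$, while the gap to the unconditional mean does not vanish: the empirical measure converges to the conditional law, not to the unconditional one.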
However, the above argument fails to apply to the major player. Indeed, no matter how many minor players are present in the game, the major player's control influences all the minor players, and in particular, the empirical distribution formed by the minor players. When we construct the limiting problem for the major player, it is thus more reasonable to allow the major player to control the stochastic measure flow, instead of fixing it \emph{a priori}. This asymmetry between major and minor players was also observed in \cite{BensoussanChauYam}.
\subsection{Precise formulation of the MFG problem with major and minor players}
Using the above heuristic argument, we arrive at the following scheme for the major-minor mean field game problem. The limiting control problem for the major player is of conditional McKean-Vlasov type, where the measure flow is endogenous, and the limiting control problem for the representative minor player is a standard one, where the measure flow is exogenous and fixed at the beginning of the scheme. As a consequence, the limiting problem becomes a two-player stochastic differential game between the major player and a representative minor player, instead of two consecutive stochastic control problems for each of them. Specifically:
\begin{enumerate}
\item Fix a $\FF^0$-progressively measurable stochastic measure flow $(\mu_t)_{0 \leq t \leq T}$ where $\FF^0=(\cF^0_t)_{t\ge 0}$ denotes the filtration generated by the Wiener process $W^0$.
\item Consider the following two-player stochastic differential game where the control $(u^0_t)_{0 \leq t \leq T}$ of the first player is assumed to be adapted to $\FF^0$, and the control $(u_t)_{0 \leq t \leq T}$ of the second player is assumed to be adapted to the filtration $\FF=(\cF_t)_{t\ge 0}$ generated by $W$, and where the controlled dynamics of the state of the system are given by
\begin{equation}
\label{fo:limitSDE}
\begin{cases}
dX^0_t=b_0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^0_t),u^0_t)dt+\sigma_0 (t,X^0_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),u^0_t)dW^0_t,\\
dX_t=b(t,X_t,\mathcal{L}(X_t\vert \mathcal{F}^0_t),X^0_t,u_t)dt+\sigma(t,X_t,\mathcal{L}(X_t\vert \mathcal{F}^0_t),X^0_t,u_t) dW_t,\\
d\check{X}^0_t=b_0(t,\check{X}^0_t,\mu_t,u^0_t)dt+\sigma_0(t,\check{X}^0_t,\mu_t,u^0_t) dW^0_t,\\
d\check{X}_t=b(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t)dt+\sigma(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t) dW_t,
\end{cases}
\end{equation}
and the cost functionals for the two players are given by
$$\begin{aligned}
&J^0(u^0,u)=\mathbb{E}\left[\int^T_0 f_0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^0_t),u^0_t)dt+g_0(X^0_T,\mathcal{L}(X_T\vert \mathcal{F}^0_T))\right],\\
&J(u^0,u)=\mathbb{E}\left[\int^T_0 f(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t)dt+g(\check{X}_T,\mu_T,\check{X}^0_T)\right],
\end{aligned}$$
where $\mathcal{L}(X_t \vert \mathcal{F}^0_t)$ stands for the conditional distribution of $X_t$ given $\mathcal{F}^0_t$. We look for Nash equilibria for this game.
\item Satisfy the consistency condition
\begin{equation}\label{fo:consistency}
\mu_t=\mathcal{L}(X_t \vert \mathcal{F}^0_t), \quad \forall t \in [0,T],
\end{equation}
where $X_t$ is the second component of the state controlled by $u^0$ and $u$ giving the Nash equilibrium found in step (2).
\end{enumerate}
Notice that the above consistency condition amounts to solving a fixed point problem in the space of stochastic measure flows.
Notice also that even when the $X^{i,N}_t$ are scalar, the system \eqref{fo:limitSDE} describes the dynamics of a $4$-dimensional state driven by two independent Wiener processes. The dynamics of the first two components are of the conditional McKean-Vlasov type (because of the presence of the conditional distribution $\mathcal{L}(X_t\vert \mathcal{F}^0_t)$ of $X_t$ in the coefficients) while the dynamics of the last two components are given by standard stochastic differential equations with random coefficients. In this two player game, the cost functional $J^0$ of the major player is of the McKean-Vlasov type while the cost functional $J$ of the representative minor player is of the standard type. As explained earlier, this is the main feature of our formulation of the problem.
Later in the paper, we show that if we are able to find a fixed point in the third step, i.e. a stochastic measure flow $(\mu_t)_{0 \leq t \leq T}$ satisfying (\ref{fo:consistency}), we can use it to construct approximate Nash equilibria for the finite-player games when the number of players is sufficiently large. The precise meaning of this statement will be made clear in section \ref{se:approximate}.
\section{Mean Field Games with Major and Minor Players: The General Case}
\label{se:mfg}
In this section we analyze in detail the scheme explained in the previous section, and we derive an FBSDE characterizing the solution to the limiting problem.
We assume that $\Omega$ is a standard space and $\cF$ is its Borel $\sigma$-field, so that regular conditional distributions exist for all sub-$\sigma$-fields. The definition of standard probability spaces we use here can be found in \cite{Cardaliaguet}.
The finite-player games are described by (\ref{fo:dynamics})-(\ref{fo:minorcost}) where $(W^i)_{i \geq 0}$ is a sequence of independent Wiener processes. We shall use the following assumptions.
\noindent (\textbf{A1}) There exists a constant $c>0$ such that for all $t \in [0,T]$, $x'_0,x_0 \in \mathbb{R}^{d_0}$, $x',x \in \mathbb{R}^d$, $\mu',\mu \in \mathcal{P}_2(\mathbb{R}^d)$, $u_0 \in U_0$ and $u \in U$ we have
\begin{equation}
\begin{aligned}
&\vert (b_0,\sigma_0)(t,x'_0,\mu',u'_0)-(b_0,\sigma_0)(t,x_0,\mu,u_0)\vert+\vert (b,\sigma)(t,x',\mu',x'_0,u')-(b,\sigma)(t,x,\mu,x_0,u)\vert \\
&\phantom{????????}\leq c\bigg(\vert x'_0-x_0\vert+\vert x'-x\vert+\vert u'_0-u_0\vert+\vert u'-u\vert+W_2(\mu',\mu)\bigg).
\end{aligned}
\end{equation}
\noindent (\textbf{A2}) For all $u_0 \in U_0$ and $u \in U$ we have
$$
\mathbb{E}\left[\int^T_0 \vert (b_0,\sigma_0)(t,0,\delta_0,u_0)\vert^2+\vert (b,\sigma)(t,0,\delta_0,0,u)\vert^2\,dt\right]< \infty.
$$
\noindent (\textbf{A3}) There exists a constant $c_L>0$ such that for all $x_0,x'_0 \in \mathbb{R}^{d_0}$, $u_0,u'_0 \in \mathbb{R}^{k_0}$ and $\mu,\mu' \in \mathcal{P}_2(\mathbb{R}^d)$, we have
$$
\begin{aligned}
&\vert (f_0,g_0)(t,x'_0,\mu',u'_0)-(f_0,g_0)(t,x_0,\mu,u_0)\vert\\
&\phantom{????????} \leq c_L\bigg(1+\vert (x'_0,u'_0)\vert+\vert (x_0,u_0)\vert+M_2(\mu')+M_2(\mu)\bigg)\bigg(\vert (x'_0,u'_0)-(x_0,u_0)\vert+W_2(\mu',\mu)\bigg),
\end{aligned}
$$
and for all $x_0 \in \mathbb{R}^{d_0}$, $x,x'\in \mathbb{R}^{d}$, $u,u' \in \mathbb{R}^{k}$ and $\mu,\mu' \in \mathcal{P}_2(\mathbb{R}^d)$,
$$
\begin{aligned}
&\vert (f,g)(t,x',\mu',x_0,u')-(f,g)(t,x,\mu,x_0,u)\vert\\
&\phantom{????????}\leq c_L\bigg(1+\vert (x',u')\vert+\vert (x,u)\vert+M_2(\mu')+M_2(\mu)\bigg)\bigg(\vert (x',u')-(x,u)\vert+W_2(\mu,\mu')\bigg).
\end{aligned}
$$
where $\mathcal{P}_2(\mathbb{R}^d)$ denotes the set of probability measures of order $2$ (i.e. with a finite second moment), and $W_2(\mu,\mu')$ the $2$-Wasserstein distance between $\mu,\mu'\in\cP_2(\RR^d)$. Also, we used the notation $M_2(\mu)=\int |x|^2 \mu(dx)$ for the second moment of $\mu$.
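For one-dimensional empirical measures with the same number of atoms, the quantities $W_2$ and $M_2$ appearing in (A1)--(A3) can be computed explicitly, since in dimension one the optimal coupling matches order statistics. The sketch below assumes equal sample sizes and uniform weights.

```python
import numpy as np

def w2_empirical_1d(x, y):
    """2-Wasserstein distance between two one-dimensional empirical
    measures with the same number of atoms: in dimension one the optimal
    coupling matches sorted samples, so
        W_2^2 = (1/n) * sum_k (x_(k) - y_(k))^2."""
    xs, ys = np.sort(x), np.sort(y)
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

def m2(x):
    """Second moment M_2(mu) = int |x|^2 mu(dx) of an empirical measure."""
    return float(np.mean(np.asarray(x) ** 2))
```

For instance, translating a sample by a constant $c$ gives $W_2=|c|$, matching the Lipschitz bounds in (A1).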
\vskip 2pt\noindent
(\textbf{A4}) The functions $b_0$, $b$, $f$ and $g$ are differentiable in $x_0$, $x$ and $\mu$. Differentiability with respect to measure arguments is discussed in the appendix at the end of the paper.
Assumptions (A1)-(A2) guarantee that for all admissible controls, the SDEs (\ref{fo:dynamics})-(\ref{fo:minorcost}) and (\ref{fo:limitSDE}) have unique solutions, and (A3) guarantees that the associated cost functionals are well-defined. Assumption (A4) will be used when we define adjoint processes for the limiting control problems.
In the following, we use $\mathbb{S}^{2,d}(\FF;U)$ to denote all $\FF$-progressively measurable processes $X$ taking values in $U \subset \mathbb{R}^d$ such that
\begin{equation}
\label{fo:S2d}
\mathbb{E}\left[\sup_{0\leq t \leq T} | X_t|^2\right]< \infty,
\end{equation}
$\mathbb{H}^{2,d}(\FF;U)$ to denote all $U$-valued $\FF$-progressively measurable processes $X$ such that
\begin{equation}
\label{fo:H2d}
\mathbb{E}\left[\int^T_0 |X_t|^2\,dt\right]< \infty,
\end{equation}
and finally we use $\mathcal{M}^{2,d}(\FF)$ to denote the set of $\FF$-progressively measurable stochastic measure flows $\mu$ on $\mathbb{R}^d$ such that
\begin{equation}
\label{fo:M2d}
\mathbb{E}\left[\int^T_0 \int_{\mathbb{R}^d}| x|^2 d\mu_t\,dt\right]<\infty.
\end{equation}
We will omit the filtration $\FF$ and the domain $U$ when there is no risk of confusion.
\subsection{Control problem for the major player}
In this subsection we consider the limiting two-player game and search for the major player's best response $u^0$ to the control $u$ of the representative minor player. This amounts to solving the optimal control problem based on the controlled dynamics
\begin{equation}
\begin{cases}
dX^0_t=b_0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^{0}_t),u^0_t)dt+\sigma_0(t,X^0_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),u^0_t) dW^0_t,\quad X^0_0=x^0_0,\\
dX_t=b(t,X_t,\mathcal{L}(X_t \vert \mathcal{F}^{0}_t),X^0_t,u_t)dt+\sigma(t,X_t,\mathcal{L}(X_t \vert \mathcal{F}^{0}_t),X^0_t,u_t) dW_t,\quad X_0=x_0,
\end{cases}
\end{equation}
and the cost functional
$$
J^0(u^0)=\mathbb{E}\left[\int^T_0 f_0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^{0}_t),u^0_t)dt+g_0(X^0_T,\mathcal{L}(X_T\vert \mathcal{F}^0_T))\right],
$$
where it is assumed that the control $u$ is given, the set of admissible controls $u^0$ being the space $\mathbb{H}^{2,k_0}(\FF^0;U_0)$. In what follows, this stochastic control problem will be denoted by (P1). We check readily that conditions (A2.1) - (A2.3) in the appendix at the end of the paper are satisfied. The Hamiltonian is defined as
\begin{equation}
\label{fo:H0}
\begin{aligned}
&H_0(t,x_0,x,\mu,p_0,p,q_{00},q_{11},u_0,u)=\langle p_0, b_0(t,x_0,\mu,u_0)\rangle+\langle p, b(t,x,\mu,x_0,u)\rangle\\
&\phantom{???????????????}+\langle q_{00},\sigma_0(t,x_0,\mu,u_0)\rangle+\langle q_{11},\sigma(t,x,\mu,x_0,u)\rangle+ f_0(t,x_0,\mu,u_0).
\end{aligned}
\end{equation}
We then introduce the following assumption regarding minimization of this Hamiltonian.
\noindent (\textbf{M0}) For all fixed $(t,x_0,x,\mu,p_0,p,q_{00},q_{11},u)$ there exists a unique minimizer of the Hamiltonian $H_0$ as a function of $u_0$. Note that this minimizer should not depend upon $p$, $q_{11}$ and $u$. It will be denoted by $\hat{u}^0(t,x_0,\mu,p_0,q_{00})$.
\begin{remark}
This assumption is satisfied when the running cost $f_0$ is strictly convex in $u^0$, the drift $b_0$ is linear in $u^0$ and the volatility $\sigma_0$ is uncontrolled in the sense that it does not depend upon $u^0$. This will be the case in the examples considered later on.
\end{remark}
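As a hypothetical illustration of the situation described in the remark (these specifications are not imposed anywhere in this paper), suppose that $U_0=\mathbb{R}^{k_0}$, that $b_0(t,x_0,\mu,u_0)=\beta_0(t,x_0,\mu)+B_0u_0$ for some matrix $B_0$, that $\sigma_0$ does not depend upon $u_0$, and that $f_0(t,x_0,\mu,u_0)=f^1_0(t,x_0,\mu)+\frac12 u_0^\top R_0u_0$ with $R_0$ symmetric positive definite. Then, as a function of $u_0$,
$$
H_0(t,x_0,x,\mu,p_0,p,q_{00},q_{11},u_0,u)=\langle p_0,B_0u_0\rangle+\frac12 u_0^\top R_0u_0+\text{terms independent of }u_0,
$$
and the first order condition $B_0^\top p_0+R_0u_0=0$ yields the unique minimizer
$$
\hat{u}^0(t,x_0,\mu,p_0,q_{00})=-R_0^{-1}B_0^\top p_0,
$$
which indeed depends on neither $p$, $q_{11}$ nor $u$.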
For each admissible control $u^0$, the associated adjoint process $(P^0,P,Q^{00}, Q^{01}, Q^{10}, Q^{11})$ is defined as the solution of the backward stochastic differential equation (BSDE):
\begin{equation}\label{fo:BSDEmajor}
\begin{cases}
dP^0_t=-\partial_{x_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,u^0_t,u_t)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&-\partial_x H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,u^0_t,u_t)dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t\\
&-\mathbb{E}^{\mathcal{F}^0_t}[\partial_\mu H_0(t,\tilde{\underline{X}}_t,\mathcal{L}(\tilde{X}_t\vert\mathcal{F}^0_t),\tilde{\underline{P}}_t,\tilde{\underline{Q}}_t,u^0_t,u_t)(X_t)]dt
\end{aligned},\\
P^0_T=\partial_{x_0} g_0(X^0_T,\mathcal{L}(X_T\vert \mathcal{F}^{0}_T)),\\
P_T=\mathbb{E}^{\mathcal{F}^0_T}[\partial_\mu g_0(\tilde{X}^0_T,\mathcal{L}(\tilde{X}_T\vert \mathcal{F}^0_T))(X_T)],
\end{cases}
\end{equation}
where to lighten the notations we write $\underline{X}=(X^0,X)$, $\underline{P}=(P^0,P)$ and $\underline{Q}=(Q^{00},Q^{01},Q^{10},Q^{11})$. We refer the reader to the appendix at the end of the paper for 1) the definitions of the tilde notation, which provides a natural extension of random variables to an extension of the original probability space, and of $\mathbb{E}^{\mathcal{F}^0_t}[\cdot]$, which denotes the expectation with respect to the regular conditional distribution on an extension of the original probability space, and 2) references to the definition and the properties of the differentiation with respect to the measure argument. Despite the presence of the conditional distributions in the coefficients, standard proofs of existence and uniqueness of solutions of BSDEs with Lipschitz coefficients still apply to (\ref{fo:BSDEmajor}), for example when the derivatives of assumption (A4) are uniformly Lipschitz with linear growth. See for example \cite{CarmonaDelarue_ecp}.
In order to minimize the complexity of the notation, we systematically add a bar on the top of a random variable to denote its conditional expectation with respect to $\mathcal{F}^0_t$, for example $\bar{P}^0$ stands for $\EE[P^0|\cF^0_t]$.
Once properly extended to cover the present situation, (see \cite{CarmonaDelarue_ap} for the necessary condition in the unconditional case, and the appendix for the sufficient condition) the necessary part of the Pontryagin stochastic maximum principle says that, if the control $u^0=(u^0_t)_t$ is optimal, then the Hamiltonian \eqref{fo:H0} is minimized along the trajectory of $(X^0_t,X_t,\underline{P}_t,\underline{Q}_t)$. So given assumption (M0) and the sufficient condition of the stochastic maximum principle proven in the appendix at the end of the paper, $\hat{u}^0_t=\hat{u}^0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^0_t),\bar{P}^0_t,\bar{Q}^{00}_t)$ will be an optimal control for the problem at hand if we can solve the forward backward stochastic differential equation (FBSDE):
\begin{equation}
\label{fo:FBSDEM}
\begin{cases}
dX^0_t=\partial_{p_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t)dt+\partial_{q_{00}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t) dW^0_t,\\
dX_t=\partial_p H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t)dt+\partial_{q_{11}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t) dW_t,\\
dP^0_t=-\partial_{x_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&-\partial_x H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,u_t)dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t\\
&-\mathbb{E}^{\mathcal{F}^0_t}[\partial_\mu H_0(t,\tilde{\underline{X}}_t,\mathcal{L}(\tilde{X}_t\vert\mathcal{F}^0_t),\tilde{\underline{P}}_t,\tilde{\underline{Q}}_t,\tilde{\hat{u}}^0_t,u_t)(X_t)]dt
\end{aligned}
\end{cases}
\end{equation}
with the initial and terminal conditions given by
$$
X^0_0=x^0_0,\quad X_0=x_0,\qquad P^0_T=\partial_{x_0} g(X^0_T,\mathcal{L}(X_T\vert \mathcal{F}^{0}_T)),\qquad P_T=\mathbb{E}^{\mathcal{F}^0_T}[\partial_\mu g(\tilde{X}^0_T,\mathcal{L}(\tilde{X}_T\vert \mathcal{F}^0_T))(X_T)].
$$
In general, FBSDEs are more difficult to solve than BSDEs. This is even more apparent in the case of equations of the McKean-Vlasov type. See nevertheless \cite{CarmonaDelarue_ecp} for an existence result in the unconditional case. In its full generality, the solvability of FBSDE \eqref{fo:FBSDEM} of conditional McKean-Vlasov type is beyond the scope of this paper. We will solve it only in the linear quadratic case.
We show in the appendix that appropriate convexity assumptions are sufficient for optimality. We summarize them for later reference.
\noindent (\textbf{C0}) The function $\mathbb{R}^{d_0} \times \mathcal{P}_2(\mathbb{R}^d) \ni (x,\mu) \hookrightarrow g(x,\mu)$ is convex. The function
$$\mathbb{R}^{d_0} \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \times U_0 \ni (x_0,x,\mu,u_0)\hookrightarrow H(t,x_0,x,\mu,p_0,p,q_{00},q_{11},u_0,u)$$
is convex for all fixed $(t,p_0,p,q_{00},q_{11},u)$.
We then have the following proposition.
\begin{proposition}
Let us assume that (A1)-(A3), (M0) and (C0) are in force. If
$$(X^0,X,P^0,P,Q^{00}, Q^{01}, Q^{10}, Q^{11}) \in \mathbb{S}^{2,d_0+d} \times \mathbb{S}^{2,d_0+d} \times \mathbb{H}^{2, (d_0+d)\times(d_0+d)}
$$
is a solution to the FBSDE (\ref{fo:FBSDEM}), then $u^0_t=\hat{u}^0(t,X^0_t,\mathcal{L}(X_t\vert \mathcal{F}^0_t),\bar{P}^0_t,\bar{Q}^{00}_t)$ is an optimal control for problem (P1) and $(X^0,X)$ is the associated optimally controlled state process.
\end{proposition}
\subsection{Control problem for the representative minor player}
For the representative minor player's best response control problem, for each fixed stochastic measure flow $\mu$ in $\mathcal{M}^{2,d}(\FF^0)$ and for each admissible control $u^0=(u^0_t)_t$ of the major player, we solve the optimal control problem of the controlled dynamics
\begin{equation}
\label{fo:minorsde}
\begin{cases}
d\check{X}^0_t=b_0(t,\check{X}^0_t,\mu_t,u^0_t)dt+\sigma_0(t,\check{X}^0_t,\mu_t,u^0_t) dW^0_t,\quad \check{X}^0_0=x^0_0,\\
d\check{X}_t=b(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t)dt+\sigma(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t) dW_t,\quad \check{X}_0=x_0
\end{cases}
\end{equation}
for the cost functional
\begin{equation}
J(u)=\mathbb{E}\bigg[\int^T_0 f(t,\check{X}_t,\mu_t,\check{X}^0_t,u_t)dt+g(\check{X}_T,\mu_T,\check{X}^0_T)\bigg].
\end{equation}
Note that since $u^0$ and $\mu$ are fixed, the first SDE in \eqref{fo:minorsde} can be solved \emph{off line}, and its solution appears in the second SDE of \eqref{fo:minorsde} and the cost functional only as an exogenous source of randomness.
If we choose the set of admissible controls for the representative minor player to be $\mathbb{H}^{2,k}(\FF^{W_0,W};U)$ where $\FF^{W_0,W}$ is the filtration generated by both Wiener processes $W^0$ and $W$, this problem is a standard non-Markovian stochastic control problem, which we denote by (P2) in the following. For this reason, we introduce adjoint variables only for $\check{X}_t$, and use the reduced Hamiltonian:
\begin{equation}
\label{fo:minorH}
H(t,x_0,x,\mu,y,z_{11},u^0,u)=\langle y, b(t,x,\mu,x_0,u)\rangle
+\langle z_{11},\sigma(t,x,\mu,x_0,u)\rangle+f(t,x,\mu,x_0,u).
\end{equation}
As before, in order to find a function satisfying the Isaacs condition, we introduce the following assumption regarding its minimization.
\noindent (\textbf{M}) For all fixed $(t,x_0,x,\mu,y,z_{11},u_0)$, there exists a unique minimizer of the above reduced Hamiltonian $H$ as a function of $u$. This minimizer will be denoted by $\hat{u}(t,x_0,x,\mu,y,z_{11})$.
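As a simple illustration of when (\textbf{M}) holds (a hypothetical special case, not an assumption of the general model), suppose that the drift is affine in the control, $b(t,x,\mu,x_0,u)=b_1(t,x,\mu,x_0)+Bu$, that $\sigma$ does not depend upon $u$, and that the running cost is of the form $f(t,x,\mu,x_0,u)=f_1(t,x,\mu,x_0)+\frac{1}{2}u^\dagger R u$ with $R$ symmetric positive definite and $U=\mathbb{R}^k$. Then $u \hookrightarrow H(t,x_0,x,\mu,y,z_{11},u^0,u)$ is strictly convex and the first order condition $B^\dagger y+R\hat{u}=0$ yields the unique minimizer
$$
\hat{u}(t,x_0,x,\mu,y,z_{11})=-R^{-1}B^\dagger y,
$$
which in this particular case depends neither on $z_{11}$ nor on the measure argument.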
For each admissible control $u$, we can define the adjoint process $(\underline{Y},\underline{Z})=(Y^0,Y,Z^{00},Z^{01},Z^{10},Z^{11})$ associated to $u$ as the solution of the following BSDE:
\begin{equation}\label{fo:BSDEminor}
\begin{cases}
dY^0_t=-\partial_{x_0}H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,u_t)dt+Z^{00}_t dW^0_t+Z^{01}_t dW_t,\\
dY_t=-\partial_x H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,u_t)dt+Z^{10}_t dW^0_t+Z^{11}_t dW_t,\\
Y^0_T=\partial_{x_0}g(\check{X}_T,\mu_T,\check{X}^0_T),\qquad
Y_T=\partial_x g(\check{X}_T,\mu_T,\check{X}^0_T).
\end{cases}
\end{equation}
The existence of the adjoint processes associated to a given admissible control $u$ is a consequence of the standard existence result of solutions of BSDEs
when the partial derivatives of $b$, $\sigma$ and $f$ with respect to $x_0$ and $x$ are uniformly bounded in $(t,x_0,x,\mu)$.
The necessary part of the Pontryagin stochastic maximum principle says that, if the admissible control $u=(u_t)_t$ is optimal, then the Hamiltonian \eqref{fo:minorH} is minimized along the trajectory of $(\check{X}^0_t,\check{X}_t,\underline{Y}_t,\underline{Z}_t)$. So given assumption (M) and the sufficient condition of the stochastic maximum principle (see the appendix in Section \ref{se:Pontryagin}), the optimal control should be given by $\hat{u}_t=\hat{u}(t,\check{X}^0_t,\check{X}_t,\mu_t,Y_t,Z^{11}_t)$, and plugging this expression into the controlled dynamics and the BSDE (\ref{fo:BSDEminor}) gives the following FBSDE:
\begin{equation}\label{fo:FBSDEminor}
\begin{cases}
d\check{X}^0_t=\partial_{y_0} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t)dt+\partial_{z_{00}}H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t) dW^0_t,\\
d\check{X}_t=\partial_{y} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t)dt+\partial_{z_{11}}H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t) dW_t,\\
dY^0_t=-\partial_{x_0} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t)dt+Z^{00}_t dW^0_t+Z^{01}_t dW_t,\\
dY_t=-\partial_x H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,u^0_t,\hat{u}_t)dt+Z^{10}_t dW^0_t+Z^{11}_t dW_t,
\end{cases}
\end{equation}
with the initial and terminal conditions given by
$$
\check{X}^0_0=x^0_0,\quad
\check{X}_0=x_0,\qquad
Y^0_T=\partial_{x_0} g(\check{X}_T,\mu_T,\check{X}^0_T),\quad
Y_T=\partial_x g(\check{X}_T,\mu_T,\check{X}^0_T).
$$
We also need the following convexity assumption.
\noindent (\textbf{C}) The function $\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^{d_0} \ni (x,\mu,x_0) \hookrightarrow g(x,\mu,x_0)$ is convex in $(x_0,x)$. The function
$$\mathbb{R}^{d_0} \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \times U \ni (x_0,x,\mu,u) \hookrightarrow H(t,x_0,x,\mu,y_0,y,z_{00},z_{11},u_0,u)$$
is convex for all fixed $(t,y_0,y,z_{00},z_{11},u_0)$.
We then have the following proposition.
\begin{proposition}
Assuming that (A1-2), (M) and (C) are in force, if $(\check{X}^0,\check{X},Y^0,Y,Z^{00},Z^{01},Z^{10},Z^{11}) \in \mathbb{S}^{2,d_0+d} \times \mathbb{S}^{2,d_0+d} \times \mathbb{H}^{2, (d_0+d)\times(d_0+d)}$ is a solution to the FBSDE (\ref{fo:FBSDEminor}), then an optimal control of the control problem (P2) is given by
$$u_t=\hat{u}(t,\check{X}^0_t,\check{X}_t,\mu_t,Y_t,Z^{11}_t),$$
and $(\check{X}^0,\check{X})$ is the associated optimally controlled state process.
\end{proposition}
\subsection{Nash equilibrium for the limiting two-player game}
By the very definition of Nash equilibria, the following proposition is self-explanatory.
\begin{proposition}
Assume that (A1-2), (M0), (M), (C0) and (C) are in force. Consider the following FBSDE:
\begin{equation}\label{fo:FBSDEMm}
\begin{cases}
dX^0_t=\partial_{p_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{q_{00}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t) dW^0_t,\\
dX_t=\partial_p H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{q_{11}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t) dW_t,\\
d\check{X}^0_t=\partial_{y_0} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{z_{00}}H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t) dW^0_t,\\
d\check{X}_t=\partial_{y} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{z_{11}}H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t) dW_t,\\
dP^0_t=-\partial_{x_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&-\partial_x H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t\\
&-\mathbb{E}^{\mathcal{F}^0_t}[\partial_\mu H_0(t,\tilde{\underline{X}}_t,\mathcal{L}(\tilde{X}_t\vert\mathcal{F}^0_t),\tilde{\underline{P}}_t,\tilde{\underline{Q}}_t,\tilde{\hat{u}}^0_t,\tilde{\hat{u}}_t)(X_t)]dt
\end{aligned}\\
dY^0_t=-\partial_{x_0} H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+Z^{00}_t dW^0_t+Z^{01}_t dW_t,\\
dY_t=-\partial_x H(t,\underline{\check{X}}_t,\mu_t,\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+Z^{10}_t dW^0_t+Z^{11}_t dW_t,
\end{cases}
\end{equation}
with the initial and terminal conditions given by
$$\begin{cases}X^0_0=x^0_0,\quad
X_0=x_0,\\
P^0_T=\partial_{x_0} g(X^0_T,\mathcal{L}(X_T\vert \mathcal{F}^{0}_T)),\\
P_T=\mathbb{E}^{\mathcal{F}^0_T}[\partial_\mu g(\tilde{X}^0_T,\mathcal{L}(\tilde{X}_T\vert \mathcal{F}^0_T))(X_T)],
\end{cases},\quad \begin{cases}
\check{X}^0_0=x^0_0,\quad \check{X}_0=x_0,\\
Y^0_T=\partial_{x_0} g(\check{X}_T,\mu_T,\check{X}^0_T),\\
Y_T=\partial_x g(\check{X}_T,\mu_T,\check{X}^0_T),
\end{cases}$$
where
$$\hat{u}^0_t=\hat{u}^0(t,X^0_t,\mathcal{L}(X_t \vert \mathcal{F}^0_t),\bar{P}^0_t,\bar{Q}^{00}_t), \quad \hat{u}_t=\hat{u}(t,\check{X}^0_t,\check{X}_t,\mu_t,Y_t,Z^{11}_t).$$
If this FBSDE has a solution, then $(\hat{u}^0,\hat{u})$ is a Nash equilibrium for the limiting two-player stochastic differential game.
\end{proposition}
\subsection{The consistency condition}
The last step in the scheme amounts to imposing the consistency condition which writes
$$\mu_t=\mathcal{L}(X_t\vert \mathcal{F}^0_t), \quad \forall t \in [0,T].$$
Plugging it into FBSDE (\ref{fo:FBSDEMm}) gives the following ultimate FBSDE:
\begin{equation}\label{fo:Ultimate}
\begin{cases}
dX^0_t=\partial_{p_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{q_{00}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t) dW^0_t,\\
dX_t=\partial_p H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+\partial_{q_{11}}H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t) dW_t,\\
dP^0_t=-\partial_{x_0} H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&-\partial_x H_0(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{P}_t,\underline{Q}_t,\hat{u}^0_t,\hat{u}_t)dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t\\
&-\mathbb{E}^{\mathcal{F}^0_t}[\partial_\mu H_0(t,\tilde{\underline{X}}_t,\mathcal{L}(\tilde{X}_t\vert\mathcal{F}^0_t),\tilde{\underline{P}}_t,\tilde{\underline{Q}}_t,\tilde{\hat{u}}^0_t,\tilde{\hat{u}}_t)(X_t)]dt
\end{aligned}\\
dY^0_t=-\partial_{x_0} H(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+Z^{00}_t dW^0_t+Z^{01}_t dW_t,\\
dY_t=-\partial_x H(t,\underline{X}_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\underline{Y}_t,\underline{Z}_t,\hat{u}^0_t,\hat{u}_t)dt+Z^{10}_t dW^0_t+Z^{11}_t dW_t,
\end{cases}
\end{equation}
with initial and terminal conditions given by
\begin{equation}
\begin{cases}
X^0_0=x^0_0, \quad X_0=x_0,\\
P^0_T=\partial_{x_0} g(X^0_T,\mathcal{L}(X_T \vert \mathcal{F}^0_T)),\\
P_T=\mathbb{E}^{\mathcal{F}^0_T}[\partial_\mu g(\tilde{X}^0_T,\mathcal{L}(\tilde{X}_T \vert\mathcal{F}^0_T))(X_T)],\\
Y^0_T=\partial_{x_0} g(X_T, \mathcal{L}(X_T\vert\mathcal{F}^0_T),X^0_T),\\
Y_T=\partial_x g(X_T, \mathcal{L}(X_T\vert\mathcal{F}^0_T),X^0_T).
\end{cases}
\end{equation}
where this time we define
$$\hat{u}^0_t=\hat{u}^0(t,X^0_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\bar{P}^0_t,\bar{Q}^{00}_t), \quad \hat{u}_t=\hat{u}(t,X^0_t,X_t,\mathcal{L}(X_t\vert\mathcal{F}^0_t),Y_t,Z^{11}_t).$$
\begin{remark}
Note that after implementing the consistency condition, $(X^0,X)$ and $(\check{X}^0,\check{X})$ become the same. We can also check that if we replace the current consistency condition by
$$\mu_t=\mathcal{L}(\check{X}_t\vert \mathcal{F}^0_t), \quad \forall t\in[0,T]$$
we arrive at the same FBSDE as above.
\end{remark}
\begin{remark}\label{rk:redundant}
In the limiting control problem faced by the representative minor player, the dynamics of the major player are not affected by the control $u$ and can be considered as given. As a result, the adjoint process $Y^0$ is redundant and independent of the rest of the system, and could have been discarded from the system (\ref{fo:Ultimate}). We keep it in (\ref{fo:Ultimate}) because we want to write the system in a symmetric and compact fashion using the Hamiltonians $H_0$ and $H$.
\end{remark}
The solvability of conditional McKean-Vlasov FBSDEs in the form of (\ref{fo:Ultimate}) is a hard problem. If the conditional distributions in (\ref{fo:Ultimate}) are replaced by plain distributions, the resulting FBSDEs are usually called ``mean field FBSDEs'' and are studied in some recent papers, see for example \cite{CarmonaDelarue_ecp}. The conditioning with respect to $\mathcal{F}^0_t$ makes (\ref{fo:Ultimate}) substantially harder to solve compared to the ones already considered in the literature, and we leave the well-posedness of FBSDEs of the form of (\ref{fo:Ultimate}) to future research.
\section{Propagation of chaos and $\epsilon$-Nash equilibrium}
\label{se:approximate}
In this section we prove a central result stating that, when we apply the optimal control laws found in the limiting regime to all the players in the original $(N+1)$-player game, we obtain an approximate Nash equilibrium. This justifies the whole scheme as an effective way to find approximate Nash equilibria for the finite-player games. Throughout this section we assume that (A1-4), (M), (M0), (C) and (C0) hold. In addition, we assume that
\noindent (\textbf{A5}) The diffusion coefficients $\sigma_0$ and $\sigma$ are constants.
Assumption (A5) is stronger than what we really need: it would suffice to assume that the two volatilities $\sigma_0$ and $\sigma$ do not depend upon the controls $u^0$ and $u$. All the derivations given below can be adapted to this more general setting, but in order to limit the complexity of the formulas appearing in the arguments, we restrict ourselves to assumption (A5).
Let us first recall the finite-player game setup under assumption (A5): the controlled dynamics are now given by
\begin{equation}\label{fo:SDEfinite}
\begin{cases}
dX^{0,N}_t=b_0(t,X^{0,N}_t,\mu^N_t,u^{0,N}_t)dt+\sigma_0 dW^0_t, \quad X^{0,N}_0=x^0_0,\\
dX^{i,N}_t=b(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t,u^{i,N}_t)dt+\sigma dW^i_t, \quad X^{i,N}_0=x_0, \quad i=1,2,...,N,
\end{cases}
\end{equation}
and the cost functionals by
$$\begin{aligned}
&J^{0,N}=\mathbb{E}\left[\int^T_0 f_0(t,X^{0,N}_t,\mu^N_t,u^{0,N}_t)dt+g_0(X^{0,N}_T,\mu^N_T)\right],\\
&J^{i,N}=\mathbb{E}\left[\int^T_0 f(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t,u^{i,N}_t)dt+g(X^{i,N}_T,\mu^N_T,X^{0,N}_T)\right], 1 \leq i \leq N.
\end{aligned}$$
The sets of admissible controls for this $(N+1)$-player game are defined as follows.
\begin{definition}
In the above $(N+1)$-player game, a process $u^{0,N}$ is said to be admissible for the major player if $u^{0,N}\in\HH^{2,d_0}(\FF^0,U_0)$, and it is said to be $\kappa$-admissible for the major player if in addition we have
\begin{equation}
\label{fo:adm}
\mathbb{E}\left[\int^T_0 \vert u^{0,N}_t \vert^p dt\right] \leq \kappa
\end{equation}
with $p=d+5$. On the other hand, a process $u^{i,N}$ is said to be admissible for the $i$-th minor player if $u^{i,N}\in\HH^{2,d}(\FF^{W^0,W^1,\cdots,W^N},U)$, and $\kappa$-admissible for the $i$-th minor player if in addition it satisfies the analogue of \eqref{fo:adm} with $u^{0,N}$ replaced by $u^{i,N}$ and $p=2$.
The set of admissible controls and $\kappa$-admissible controls for the $i$-th player are respectively denoted by $\mathcal{A}_i$ and $\mathcal{A}^\kappa_i$, $i \geq 0$. Note that $\mathcal{A}_i$ and $\mathcal{A}^\kappa_i$ are independent of $i \geq 1$.
\end{definition}
Note that due to (A1-3), for all $(u^{0,N}, u^{1,N},..., u^{N,N}) \in \prod^N_{i=0}\mathcal{A}_i$, the controlled SDE (\ref{fo:SDEfinite}) always has a unique solution. On the other hand, we will see that the notion of $\kappa$-admissible control plays an important role in Theorem \ref{th:CLT} in obtaining a quantitative uniform speed of convergence. We now give the definition of $\epsilon$-Nash equilibrium in the context of the above finite-player game.
\begin{definition}
A set of admissible controls $(u^{0,N},u^{1,N},...,u^{N,N}) \in \prod^N_{i=0}\mathcal{A}_i$ is called an $\epsilon$-Nash equilibrium in $\mathcal{A}^\kappa_0 \times \prod^N_{i=1}\mathcal{A}^\kappa_i$ for the above $(N+1)$-player stochastic differential game if for all $u^{0} \in \mathcal{A}^\kappa_0$ we have
$$J^{0,N}(u^{0,N},u^{1,N},...,u^{N,N})-\epsilon \leq J^{0,N}(u^0,u^{1,N},...,u^{N,N}),$$
and for all $1 \leq i \leq N$ and $u \in \mathcal{A}^\kappa_i$ we have
$$J^{i,N}(u^{0,N},u^{1,N},...,u^{N,N})-\epsilon \leq J^{i,N}(u^{0,N},...,u^{i-1,N},u,u^{i+1,N},...,u^{N,N}).$$
\end{definition}
The following lemma is useful to derive explicit bounds on the rate of convergence to approximate Nash equilibria. It is due to Horowitz and Karandikar and can be found in \cite{RachevRuschendorf}.
\begin{lemma}
\label{le:RR}
Let $(X_n)$ be a sequence of exchangeable random variables taking values in $\mathbb{R}^d$ with directing (random) measure $\mu$ satisfying
$$
c:=\int \vert u \vert^{d+5}\beta(du) < \infty,
$$
where $\beta$ is the marginal of $\mu$ in the sense that $\beta(A)=\EE[\mu(A)]$.
Then there exists a constant $C$ depending only upon $c$ and $d$ such that
$$
\mathbb{E}[W^2_2(\mu^N,\mu)]\leq C N^{-2/(d+4)},$$
where as usual, $\mu^N$ is the empirical measure of $X_1, \cdots, X_N$.
\end{lemma}
Recall that the \emph{directing measure} of the sequence is the almost sure limit as $N\to\infty$ of the empirical measures $\mu^N$.
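For instance (a special case recalled only for the purpose of illustration), if the $X_n$ are independent and identically distributed with common law $\beta\in\mathcal{P}(\mathbb{R}^d)$, the directing measure is deterministic, $\mu=\beta$ almost surely, and the lemma reduces to the classical estimate
$$
\mathbb{E}[W^2_2(\mu^N,\beta)]\leq C N^{-2/(d+4)}
$$
on the rate of convergence of the empirical measures of i.i.d. samples, provided $\beta$ has a finite moment of order $d+5$.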
Before stating and proving the central theorem of this section, we introduce two additional assumptions.
\noindent (\textbf{A7}) The FBSDE (\ref{fo:Ultimate}) admits a unique solution. Moreover, there exists a random decoupling field $\theta:[0,T] \times \Omega \times \mathbb{R}^{d_0} \times \mathbb{R}^d \ni (t,\omega,x_0,x) \hookrightarrow \theta(t,\omega,x_0,x)$ such that
$$Y_t=\theta(t,X^0_t,X_t),\quad \text{a.s.}$$
Finally $\theta$ satisfies:\\
(1) There exists a constant $c_\theta$ such that
$$\vert \theta(t,\omega,x'_0,x')-\theta(t,\omega,x_0,x)\vert \leq c_\theta(\vert x'_0-x_0\vert+\vert x'-x\vert).$$
(2) For all $(t,x_0,x) \in [0,T]\times\mathbb{R}^{d_0} \times \mathbb{R}^d$, $\theta(t,\cdot,x_0,x)$ is $\mathcal{F}^0_t$-measurable.
The concept of (deterministic) decoupling field lies at the core of many investigations of the well-posedness of standard FBSDEs, see for example \cite{Delarue02,MaYong}. Its non-Markovian counterpart corresponding to non-Markovian FBSDEs was introduced in \cite{MaWuZhangZhang}. The possibility of applying existing results concerning the well-posedness of non-Markovian FBSDEs is appealing, but due to the conditional McKean-Vlasov nature of FBSDE (\ref{fo:Ultimate}), a general sufficient condition is hard to come by, and it is likely that well-posedness can only be established on a case-by-case basis. A concrete sufficient condition for well-posedness and the existence of a decoupling field will be given in Section 5 for Linear-Quadratic-Gaussian (LQG for short) models.
The following theorem is the central result of this section. It stipulates that, when the number of players is sufficiently large, the solution of the limiting problem provides approximate Nash equilibria. Note that an important consequence of assumption (A5) is that the minimizer $\hat{u}^0$ identified in the previous section is now independent of $q_{00}$, and by an abuse of notation, we use $\hat{u}^0(t,x_0)$ to denote $\hat{u}^0(t,x_0,\mathcal{L}(X_t\vert\mathcal{F}^0_t), \bar{P}^0_t)$. Accordingly, $\hat{u}$ is now independent of $z_{11}$, and if we assume that (A7) is in force, $Y_t$ can then be written as $\theta(t, X^0_t, X_t)$, so again by a similar abuse of notation we use $\hat{u}(t,x_0,x)$ to denote $\hat{u}(t,x_0,x,\mathcal{L}(X_t\vert\mathcal{F}^0_t),\theta(t,x_0,x))$, where $X$ and $P^0$ are part of the solution of the FBSDE (\ref{fo:Ultimate}). Finally we impose
\noindent (\textbf{A8}) There exists a constant $c$ such that for all $t \in [0,T]$ and $x'_0,x_0 \in \mathbb{R}^{d_0}$,
$$\vert \hat{u}^0(t,x'_0)-\hat{u}^0(t,x_0)\vert \leq c \vert x'_0-x_0\vert,\quad \text{a.s.}$$
Moreover,
$$\mathbb{E}\left[\int^T_0 \vert \hat{u}^0(t,0)\vert^2 dt\right]< \infty.$$
\begin{theorem}
\label{th:CLT}
There exists a sequence $(\epsilon_N)_{N \geq 1}$ and a non-decreasing function $\rho: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ such that \\
(i) There exists a constant $c$ such that for all $N \geq 1$,
$$\epsilon_N \leq c N^{-1/(d+4)}.$$
(ii) The feedback profile $(\hat{u}^0(t,X^{0,N}_t),(\hat{u}(t,X^{0,N}_t,X^{i,N}_t))_{1 \leq i \leq N})$ forms a $(\rho(\kappa)\epsilon_N)$-Nash equilibrium for the $(N+1)$-player game when the admissible control sets are taken as $\mathcal{A}^\kappa_0 \times \prod^N_{i=1} \mathcal{A}^\kappa_i$.
\end{theorem}
\begin{proof}
For a fixed $N$, we start by investigating what happens if the major player unilaterally deviates from the strategy $\hat{u}^0(t,\hat{X}^{0,N}_t)$. When all the players apply the feedback controls identified in the statement of the theorem, the resulting controlled state processes, denoted by $(\hat{X}^{i,N})_{i \geq 0}$, solve
\begin{equation}\label{fo:hatfinite}
\begin{cases}
d\hat{X}^{0,N}_t=b_0(t,\hat{X}^{0,N}_t, \hat{\mu}^N_t, \hat{u}^0(t,\hat{X}^{0,N}_t))dt+\sigma_0 dW^0_t,\quad \hat{X}^{0,N}_0=x^0_0,\\
d\hat{X}^{i,N}_t=b(t,\hat{X}^{i,N}_t,\hat{\mu}^N_t,\hat{X}^{0,N}_t, \hat{u}(t,\hat{X}^{0,N}_t,\hat{X}^{i,N}_t))dt+\sigma dW^i_t, \quad \hat{X}^{i,N}_0=x_0, \quad 1 \leq i \leq N,
\end{cases}
\end{equation}
where the empirical measures are defined as in \eqref{fo:muN}.
Following the approach presented in Section \ref{se:conditional_chaos}, we define the limiting nonlinear processes as the solution of
\begin{equation}\label{fo:hatlimit}
\begin{cases}
d\hat{X}^0_t=b_0(t,\hat{X}^0_t,\mathcal{L}(\hat{X}^1_t|\mathcal{F}^0_t),\hat{u}^0(t,\hat{X}^0_t))dt+\sigma_0 dW^0_t,\quad \hat{X}^0_0=x^0_0,\\
d\hat{X}^i_t=b(t,\hat{X}^i_t,\mathcal{L}(\hat{X}^1_t|\mathcal{F}^0_t),\hat{X}^0_t,\hat{u}(t,\hat{X}^0_t,\hat{X}^i_t))dt+\sigma dW^i_t, \quad \hat{X}^{i}_0=x_0, \quad i \geq 1.
\end{cases}
\end{equation}
The stochastic measure flow $\mathcal{L}(\hat{X}^1_t\vert\mathcal{F}^0_t)$ will sometimes be denoted by $\hat{\mu}_t$ in what follows. A direct application of Theorem \ref{tpropagation} in Section \ref{se:conditional_chaos} yields the existence of a constant $\hat{c}$ such that
\begin{equation}\label{fo:hatcon}
\max_{0 \leq i \leq N}\mathbb{E}\left[\sup_{0 \leq t \leq T} \vert \hat{X}^{i,N}_t-\hat{X}^i_t\vert^2\right] \leq \hat{c} N^{-2/(d+4)},
\end{equation}
and by applying the usual upper bound for 2-Wasserstein distance we also have
\begin{equation}\label{fo:hatcon2}
\mathbb{E}\left[\sup_{0 \leq t \leq T} W^2_2\left(\hat{\mu}^N_t, \frac{1}{N}\sum^N_{i=1}\delta_{\hat{X}^i_t}\right)\right] \leq \hat{c}N^{-2/(d+4)},
\end{equation}
where $\hat{c}$ depends upon $T$, the Lipschitz constants of $b_0$, $b$, $\hat{u}^0$ and $\hat{u}$, and
$$\hat{\eta}=\mathbb{E}\int^T_0 \vert \hat{X}^1_t\vert^{d+5} dt.$$
Now we turn our attention to the cost functionals. We define
$$\begin{aligned}
&\hat{J}^{0,N}=\mathbb{E}\left[\int^T_0 f_0(t,\hat{X}^{0,N}_t,\hat{\mu}^N_t,\hat{u}^0(t,\hat{X}^{0,N}_t))dt+g_0(\hat{X}^{0,N}_T,\hat{\mu}^N_T)\right],\\
&\hat{J}^{0}=\mathbb{E}\left[\int^T_0 f_0(t,\hat{X}^0_t,\hat{\mu}_t,\hat{u}^0(t,\hat{X}^0_t))dt+g_0(\hat{X}^0_T,\hat{\mu}_T)\right],
\end{aligned}$$
and we have, by assumptions (A3) and (A7), that
\begin{equation}\label{fo:hatcost}
\begin{aligned}
&\vert\hat{J}^{0,N}-\hat{J}^0\vert
= \bigg| \mathbb{E}\left[\int^T_0 f_0(t,\hat{X}^{0,N}_t,\hat{\mu}^N_t,\hat{u}^0(t,\hat{X}^{0,N}_t))dt+g_0(\hat{X}^{0,N}_T,\hat{\mu}^N_T)\right]\\
&-\mathbb{E}\left[\int^T_0 f_0(t,\hat{X}^0_t,\hat{\mu}_t,\hat{u}^0(t,\hat{X}^0_t))dt+g_0(\hat{X}^0_T, \hat{\mu}_T)\right] \bigg| \\
\leq & \mathbb{E}\int^T_0 c\left( 1+\vert \hat{X}^{0,N}_t\vert+\vert \hat{X}^0_t\vert +\vert\hat{u}^0(t,\hat{X}^{0,N}_t)\vert+\vert \hat{u}^0(t,\hat{X}^0_t)\vert+M_2(\hat{\mu}^N_t)+M_2(\hat{\mu}_t)\right)\\
& \quad \quad \quad \left(\vert \hat{X}^{0,N}_t-\hat{X}^0_t\vert+W_2(\hat{\mu}^N_t,\hat{\mu}_t)\right)dt\\
\leq & c\mathbb{E}\left[\int^T_0 \left(1+\vert \hat{X}^{0,N}_t\vert^2+\vert \hat{X}^0_t\vert^2+\frac{1}{N}\sum^N_{i=1}\vert \hat{X}^{i,N}_t\vert^2+\vert \hat{X}^1_t\vert^2\right) dt\right]^{1/2} \mathbb{E}\left[\int^T_0 \left(\vert \hat{X}^{0,N}_t-\hat{X}^0_t\vert^2+W^2_2(\hat{\mu}^N_t,\hat{\mu}_t)\right) dt\right]^{1/2}
\end{aligned}
\end{equation}
and by applying (\ref{fo:hatcon}) and (\ref{fo:hatcon2}) we deduce that
\begin{equation}\label{fo:hatJ}
\hat{J}^{0,N}=\hat{J}^0+O(N^{-1/(d+4)}).
\end{equation}
Assume now that the major player uses a different admissible control $v^0 \in \mathcal{A}^\kappa_0$, while the minor players keep using the strategies $(\hat{u}(t,\hat{X}^{0,N}_t,\hat{X}^{i,N}_t))_{1 \leq i \leq N}$. The resulting perturbed state processes, denoted by $(\tilde{X}^{i,N}_t)_{i \geq 0}$, solve the system
\begin{equation}\label{fo:tildefinite}
\begin{cases}
d\tilde{X}^{0,N}_t=b_0(t,\tilde{X}^{0,N}_t,\tilde{\mu}^N_t, v^0_t)dt+\sigma_0 dW^0_t, \quad \tilde{X}^{0,N}_0=x^0_0,\\
d\tilde{X}^{i,N}_t=b(t,\tilde{X}^{i,N}_t,\tilde{\mu}^N_t,\tilde{X}^{0,N}_t,\hat{u}(t,\hat{X}^{0,N}_t,\hat{X}^{i,N}_t))dt+\sigma dW^i_t, \quad \tilde{X}^{i,N}_0=x_0, \quad 1 \leq i \leq N,
\end{cases}
\end{equation}
where as usual, $\tilde{\mu}^N_t$ denotes the empirical distribution of the $\tilde{X}^{i,N}_t$.
Note that $\hat{X}^{i,N}$ is not $\mathcal{F}^0_t$-progressively measurable in general, so in order to apply Theorem \ref{tpropagation} we combine (\ref{fo:hatfinite}) and (\ref{fo:tildefinite}) and consider the limiting nonlinear processes defined as the solution of
\begin{equation}
\begin{cases}
d\hat{X}^0_t=b_0(t,\hat{X}^0_t,\mathcal{L}(\hat{X}^1_t|\mathcal{F}^0_t),\hat{u}^0(t,\hat{X}^0_t))dt+\sigma_0 dW^0_t, \quad \hat{X}^{0}_0=x^0_0,\\
d\hat{X}^i_t=b(t,\hat{X}^i_t,\mathcal{L}(\hat{X}^1_t|\mathcal{F}^0_t),\hat{X}^0_t,\hat{u}(t,\hat{X}^0_t,\hat{X}^i_t))dt+\sigma dW^i_t, \quad \hat{X}^i_0=x_0, \quad i \geq 1,\\
d\tilde{X}^{0}_t=b_0(t,\tilde{X}^0_t,\mathcal{L}(\tilde{X}^1_t\vert \mathcal{F}^0_t),v^0_t)dt+\sigma_0 dW^0_t,\quad \tilde{X}^0_0=x^0_0,\\
d\tilde{X}^{i}_t=b(t,\tilde{X}^i_t,\mathcal{L}(\tilde{X}^1_t\vert \mathcal{F}^0_t),\tilde{X}^0_t,\hat{u}(t,\hat{X}^0_t,\hat{X}^i_t))dt+\sigma dW^i_t, \quad \tilde{X}^i_0=x_0, \quad i\geq 1,
\end{cases}
\end{equation}
and now Theorem \ref{tpropagation} yields the existence of a constant $\tilde{c}$ such that
$$\mathbb{E}\left[\sup_{0 \leq t \leq T}\vert \tilde{X}^{i,N}_t-\tilde{X}^i_t\vert^2\right] \leq \tilde{c}N^{-2/(d+4)},$$
where $\tilde{c}$ depends upon $T$, the Lipschitz constants of $b_0$, $b$, $\hat{u}^0$ and $\hat{u}$, as well as upon $\hat{\eta}$ and
$$\tilde{\eta}=\mathbb{E}\int^T_0 \vert \tilde{X}^1_t\vert^{d+5} dt.$$
It is important to note that $\tilde{\eta}$ depends on the control $v^0$. On the other hand, the coefficients $b_0$ and $b$ are globally Lipschitz-continuous, so by the usual estimates and Gronwall's inequality, for all $\kappa >0$ there exists a constant $\rho^0_1(\kappa)$ such that
$$\mathbb{E}\int^T_0 \vert v^0_t\vert^{d+5} dt \leq \kappa \Longrightarrow \tilde{\eta} \leq \rho^0_1(\kappa).$$
It is then clear that for all $\kappa>0$ there exists a constant $\rho^0_2(\kappa)$ such that
$$\mathbb{E}\int^T_0 \vert v^0_t\vert^{d+5} dt \leq \kappa \Longrightarrow \tilde{c} \leq \rho^0_2(\kappa).$$
By using the same estimates as in (\ref{fo:hatcost}), and denoting by $\tilde{J}^{0,N}$ and $\tilde{J}^0$ the costs to the major player in the perturbed finite-player and limiting systems respectively, we deduce that there exists a constant $\rho(\kappa)$ such that for all $v^0 \in \mathcal{A}^\kappa_0$, we have
\begin{equation}\label{fo:tildeJ}
\vert\tilde{J}^{0,N}-\tilde{J}^0\vert \leq \rho(\kappa) N^{-1/(d+4)}.
\end{equation}
Finally, since $(\hat{u}^0(t,\hat{X}^0_t),\hat{u}(t,\hat{X}^0_t,\hat{X}^1_t))$ solves the limiting two-player game problem, it is clear that
\begin{equation}\label{fo:limitJ}
\hat{J}^0 \leq \tilde{J}^0,
\end{equation}
and combining (\ref{fo:hatJ}), (\ref{fo:tildeJ}) and (\ref{fo:limitJ}) we get the desired result for the major player.
We then consider the case where a minor player changes his strategy unilaterally; without loss of generality we consider the case where the minor player with index 1 changes his strategy to $v \in \mathcal{A}_1$. This part of the proof is very similar to that of Theorem 3 in \cite{CarmonaDelarue_sicon}, and we refer to \cite{CarmonaDelarue_sicon} for some of the details in what follows. The resulting perturbed controlled dynamics are given by
$$\begin{cases}
d\bar{X}^{0,N}_t=b_0(t,\bar{X}^{0,N}_t,\bar{\mu}^N_t,\hat{u}^0(t,\hat{X}^{0,N}_t))dt+\sigma_0 dW^0_t, \quad \bar{X}^{0,N}_0=x^0_0,\\
d\bar{X}^{1,N}_t=b(t,\bar{X}^{1,N}_t,\bar{\mu}^N_t,\bar{X}^{0,N}_t,v_t)dt+\sigma dW^1_t, \quad \bar{X}^{1,N}_0=x_0,\\
d\bar{X}^{i,N}_t=b(t,\bar{X}^{i,N}_t,\bar{\mu}^N_t,\bar{X}^{0,N}_t,\hat{u}(t,\hat{X}^{0,N}_t,\hat{X}^{i,N}_t))dt+\sigma dW^i_t, \quad \bar{X}^{i,N}_0=x_0, \quad 2 \leq i \leq N.
\end{cases}$$
By the usual estimates on the difference between $\bar{X}^{i,N}$ and $\hat{X}^{i,N}$, and by applying Gronwall's inequality we can show that
\begin{equation}
\mathbb{E}\left[\sup_{0 \leq t \leq T} \vert \bar{X}^{0,N}_t-\hat{X}^{0,N}_t\vert^2\right]+\frac{1}{N}\sum^N_{i=1} \mathbb{E}\left[\sup_{0 \leq t \leq T} \vert \bar{X}^{i,N}_t-\hat{X}^{i,N}_t\vert^2\right]
\leq \frac{c}{N}\mathbb{E}\int^T_0 \vert v_t - \hat{u}(t,\hat{X}^{0,N}_t,\hat{X}^{1,N}_t)\vert^2 dt.
\end{equation}
Combining the above bound, the growth properties of $\hat{u}$ and (\ref{fo:hatcon}), we see that for all $\kappa >0$, there exists a non-decreasing function $\rho_1: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ such that
$$\mathbb{E}\int^T_0 \vert v_t\vert^2 dt \leq \kappa \quad \Rightarrow \quad \mathbb{E}\left[ \sup_{0 \leq t \leq T} \vert \bar{X}^{0,N}_t -\hat{X}^0_t\vert^2\right] +\mathbb{E}\left[ \sup_{0 \leq t \leq T}W^2_2(\bar{\mu}^N_t,\mu_t)\right]\leq \rho_1(\kappa)N^{-2/(d+4)}.$$
We hence conclude that there exists a non-decreasing function $\rho_2: \mathbb{R}^+\rightarrow \mathbb{R}^+$ such that, when $\mathbb{E}\int^T_0 \vert v_t\vert^2 dt \leq \kappa$, we have
$$\mathbb{E}\left[\sup_{0 \leq t \leq T} \vert \bar{X}^{1,N}_t-\bar{X}^1_t\vert^2\right] \leq \rho_2(\kappa) N^{-2/(d+4)},$$
where $\bar{X}^1$ is the solution of the SDE
\begin{equation}
d\bar{X}^1_t=b(t,\bar{X}^1_t,\mu_t,X^0_t,v_t)dt+\sigma dW^1_t, \quad \bar{X}^1_0=x_0,
\end{equation}
where $\mu$ and $X^0$ are given by the solution of the FBSDE (\ref{fo:Ultimate}). We then conclude in the same way as for the major player.
\end{proof}
\section{MFG with Major-Minor Agents: the LQG Case}
\label{se:lqg}
Linear-quadratic-Gaussian (LQG) stochastic control problems are among the best-understood models in stochastic control theory. It is thus natural to expect explicit results for major-minor mean field games in a similar setting. This type of model was first treated in \cite{Huang} in the infinite-horizon case. The finite-horizon case was treated in \cite{NguyenHuang1}; however, the state of the major player does not enter the dynamics of the states of the minor players in \cite{NguyenHuang1}. The general finite-horizon case is solved in \cite{NguyenHuang2} by means of the so-called \emph{nonanticipative variational calculus}. It is important to point out that the notion of Nash equilibrium used in \cite{NguyenHuang2} corresponds to the \emph{Markovian feedback Nash equilibrium}, while here, we work with open-loop Nash equilibria. In what follows, we carry out the general systematic scheme introduced in the previous discussions and derive approximate Nash equilibria for the LQG major-minor mean field games.
The dynamics of the states of the players are given by the following linear SDEs:
$$
\begin{cases}
dX^{0,N}_t=(A_0 X^{0,N}_t+B_0 u^{0,N}_t+F_0 \bar{X}^N_t)dt+D_0 dW^0_t,\\
dX^{i,N}_t=(AX^{i,N}_t+Bu^{i,N}_t+F\bar{X}^N_t+GX^{0,N}_t)dt+DdW^i_t.
\end{cases}
$$
For the sake of presentation we introduce the linear transformations $\Phi$ and $\Psi$ defined by:
$$
\Phi(X)=H_0 X+\eta_0,
\quad\text{ and }\quad
\Psi(X,Y)=H X+\hat{H}Y+\eta.
$$
The cost functionals for the major and minor players are given by
$$J^{0,N}(u)=\mathbb{E}\left[\int^T_0\left\{(X^{0,N}_t-\Phi(\bar{X}^N_t))^\dagger Q_0 (X^{0,N}_t-\Phi(\bar{X}^N_t))+u^{0,N\dagger}_t R_0 u^{0,N}_t\right\}dt\right],$$
$$J^{i,N}(u)=\mathbb{E}\left[\int^T_0 \left\{(X^{i,N}_t-\Psi(X^{0,N}_t,\bar{X}^N_t))^\dagger Q (X^{i,N}_t-\Psi(X^{0,N}_t,\bar{X}^N_t))+u^{i,N\dagger}_t R u^{i,N}_t\right\}dt\right],$$
in which $Q$, $Q_0$, $R$ and $R_0$ are symmetric matrices and $R$ and $R_0$ are assumed to be positive definite. We use the notation $a^\dagger$ for the transpose of $a$.
We check readily that all the previously mentioned assumptions hold in the above LQG setting. We thus arrive directly at the non-Markovian conditional McKean-Vlasov FBSDE (\ref{fo:Ultimate}), which reads (see Remark \ref{rk:redundant})
\begin{equation}\label{fo:UltimateLQG}
\begin{cases}
dX^0_t=(A_0 X^0_t-\frac{1}{2}B_0 R^{-1}_0 B^\dagger_0 \mathbb{E}[P^0_t\vert \mathcal{F}^0_t]+F_0 \mathbb{E}[X_t \vert \mathcal{F}^0_t])dt+D_0 dW^0_t,\\
dX_t=(AX_t-\frac{1}{2}BR^{-1}B^\dagger Y_t+F\mathbb{E}[X_t\vert \mathcal{F}^0_t]+GX^0_t)dt+DdW_t,\\
dP^0_t=(-A^\dagger_0 P^0_t-G^\dagger P_t-2Q_0(X^0_t-\Phi(\mathbb{E}[X_t \vert \mathcal{F}^0_t])))dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&(-A^\dagger P_t-F^\dagger_0\mathbb{E}[P^0_t\vert \mathcal{F}^0_t]-F^\dagger\mathbb{E}[ P_t\vert \mathcal{F}^0_t]-2 H_0^\dagger Q_0(X^0_t-\Phi(\mathbb{E}[X_t\vert \mathcal{F}^0_t])))dt\\
&+Q^{10}_t dW^0_t+Q^{11}_t dW_t,
\end{aligned}\\
dY_t=(-A^\dagger Y_t-2Q(X_t-\Psi(X^0_t,\mathbb{E}[X_t \vert \mathcal{F}^0_t])))dt+Z^0_t dW^0_t+Z_t dW_t,
\end{cases}
\end{equation}
with the initial and terminal conditions given by
$$X^0_0=x^0_0, \text{ } X_0=x_0, \text{ } P^0_T=P_T=Y_T=0.$$
As already explained at the end of Section 3, the solvability of general conditional McKean-Vlasov FBSDEs is a difficult problem. However, due to the special linear structure of (\ref{fo:UltimateLQG}) we can go a step further and look for more explicit sufficient conditions of well-posedness. As before, we use a bar to denote the conditional expectation with respect to $\cF^0_t$, so we arrive at the following more compact form:
\begin{equation}\label{fo:UltimateLQG2}
\begin{cases}
dX^0_t=(A_0 X^0_t-\frac{1}{2}B_0 R^{-1}_0 B^\dagger_0 \bar{P}^0_t+F_0 \bar{X}_t)dt+D_0 dW^0_t,\\
dX_t=(AX_t-\frac{1}{2}BR^{-1}B^\dagger Y_t+F\bar{X}_t+GX^0_t)dt+DdW_t,\\
dP^0_t=(-A^\dagger_0 P^0_t-G^\dagger P_t-2Q_0 X^0_t+2Q_0H_0\bar{X}_t+2Q_0\eta_0)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
\begin{aligned}
dP_t=&(-A^\dagger P_t-F^\dagger_0 \bar{P}^0_t-F^\dagger\bar{P}_t)dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t\\
&-(2H^\dagger_0 Q_0 X^0_t-2 H^\dagger_0 Q_0 H_0 \bar{X}_t -2 H^\dagger_0 Q_0 \eta_0)dt,
\end{aligned}\\
dY_t=(-A^\dagger Y_t-2QX_t+2QHX^0_t+2Q\hat{H}\bar{X}_t+2Q\eta)dt+Z^0_t dW^0_t+Z_t dW_t,
\end{cases}
\end{equation}
We then condition all the equations with respect to the filtration $\mathcal{F}^0_t$. The following lemma will be useful when we deal with the It\^o stochastic integral terms.
\begin{lemma}
Let $(\mathcal{F}_t)$ be a filtration, $B$ an $\mathcal{F}_t$-Brownian motion, and $(\mathcal{G}_t)$ a sub-filtration of $(\mathcal{F}_t)$ such that $B$ is also a $\mathcal{G}_t$-Brownian motion. Then, for every square-integrable $\mathcal{F}_t$-progressively measurable process $H$,
$$\mathbb{E}\left[\int^T_0 H_t dB_t\,\Big\vert\, \mathcal{G}_T\right]=\int^T_0 \mathbb{E}\left[ H_t \vert \mathcal{G}_t\right]dB_t.$$
\end{lemma}
We then use this lemma to derive the SDEs satisfied by the conditional versions of the above processes. We put a bar on the various processes to denote their conditional versions; since $X^0_t$ is already $\mathcal{F}^0_t$-adapted, its notation stays unchanged.
\begin{equation}\label{fo:UltimateLQGbar}
\begin{cases}
dX^0_t=(A_0 X^0_t-\frac{1}{2}B_0 R^{-1}_0 B^\dagger_0 \bar{P}^0_t+F_0 \bar{X}_t)dt+D_0 dW^0_t,\\
d\bar{X}_t=(A\bar{X}_t-\frac{1}{2}BR^{-1}B^\dagger \bar{Y}_t+F \bar{X}_t+GX^0_t)dt,\\
d\bar{P}^0_t=(-A^\dagger_0 \bar{P}^0_t-G^\dagger \bar{P}_t-2Q_0 X^0_t+2Q_0 H_0 \bar{X}_t+2Q_0 \eta_0)dt+\bar{Q}^{00}_t dW^0_t,\\
\begin{aligned}
d\bar{P}_t=&(-A^\dagger \bar{P}_t-F^\dagger_0 \bar{P}^0_t-F^\dagger \bar{P}_t)dt+\bar{Q}^{10}_t dW^0_t\\
&-(2H^\dagger_0 Q_0 X^0_t-2H^\dagger_0 Q_0 H_0 \bar{X}_t-2H^\dagger_0 Q_0 \eta_0)dt,
\end{aligned}\\
d\bar{Y}_t=(-A^\dagger \bar{Y}_t-2Q \bar{X}_t+2QHX^0_t+2Q\hat{H}\bar{X}_t+2Q\eta)dt+\bar{Z}^0_t dW^0_t.\\
\end{cases}
\end{equation}
If we use $\mathbf{X}$ to denote $(X^0,\bar{X})$ and $\mathbf{Y}$ for $(\bar{P}^0,\bar{P},\bar{Y})$, we can write the above FBSDE in the following standard form
\begin{equation}\label{fo:compact}
\begin{cases}
d\mathbf{X}_t=(\mathbb{A}\mathbf{X}_t+\mathbb{B}\mathbf{Y}_t+\mathbb{C})dt+\mathbb{D}dW^0_t,\\
d\mathbf{Y}_t=-(\hat{\mathbb{A}}\mathbf{X}_t+\hat{\mathbb{B}}\mathbf{Y}_t+\hat{\mathbb{C}})dt+ \mathbf{Z}_tdW^0_t,
\end{cases}
\end{equation}
with initial and terminal conditions given by
$$\mathbf{X}_0=\begin{pmatrix}
x^0_0\\
x_0
\end{pmatrix}, \text{ } \mathbf{Y}_T=\begin{pmatrix}
0\\
0\\
0
\end{pmatrix},$$
in which
$$\mathbb{A}=\begin{pmatrix}
A_0 & F_0\\
G & A+F
\end{pmatrix},
\mathbb{B}=\begin{pmatrix}
-\frac{1}{2}B_0 R^{-1}_0 B^\dagger_0 & 0 & 0\\
0 & 0 & -\frac{1}{2}BR^{-1}B^\dagger
\end{pmatrix},
\mathbb{D}=\begin{pmatrix}
D_0\\
0
\end{pmatrix},$$
$$\hat{\mathbb{A}}=\begin{pmatrix}
2Q_0 & -2Q_0 H_0 \\
2H^\dagger_0 Q_0 & -2H^\dagger_0 Q_0 H_0 \\
-2QH & 2Q-2Q\hat{H}
\end{pmatrix},
\hat{\mathbb{B}}=\begin{pmatrix}
A^\dagger_0 & G^\dagger & 0\\
F^\dagger_0 & A^\dagger+F^\dagger & 0 \\
0 &0 & A^\dagger
\end{pmatrix}.$$
The constant vectors, read off from the drifts above, are given by
$$\mathbb{C}=\begin{pmatrix}
0\\
0
\end{pmatrix},
\quad
\hat{\mathbb{C}}=\begin{pmatrix}
-2Q_0\eta_0\\
-2H^\dagger_0 Q_0\eta_0\\
-2Q\eta
\end{pmatrix}.$$
In order to find explicit sufficient conditions for the well-posedness of the linear FBSDE (\ref{fo:compact}), we follow the usual four-step scheme and look for solutions of the form $\mathbf{Y}_t=S_t \mathbf{X}_t+s_t$, where $S$ and $s$ are two deterministic functions defined on $[0,T]$. Consider the following matrix Riccati equation with terminal condition:
\begin{equation}\label{fo:Riccati}
\dot{S}_t+S_t \mathbb{A}+\hat{\mathbb{B}}S_t+S_t\mathbb{B}S_t+\hat{\mathbb{A}}=0, \text{ } S_T=0,
\end{equation}
and the linear ODE
\begin{equation}\label{fo:ODE}
\dot{s}_t=-(\hat{\mathbb{B}}+S_t \mathbb{B})s_t-(\hat{\mathbb{C}}+S_t \mathbb{C}), \text{ } s_T=0.
\end{equation}
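As a numerical aside, the matrix Riccati equation (\ref{fo:Riccati}) can be integrated backward from its terminal condition with any standard ODE solver. The sketch below does so in Python for the toy dimensions $d_0=d=1$ (so that $S_t$ is a $3\times 2$ matrix); every matrix entry, the horizon $T$ and the tolerances are illustrative assumptions, not data from the model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy dimensions d0 = d = 1, so S_t is 3 x 2; all values below are assumptions.
A    = np.array([[0.1, 0.5],
                 [0.3, 0.2]])        # \mathbb{A}
B    = np.array([[-0.5, 0.0, 0.0],
                 [0.0, 0.0, -0.4]])  # \mathbb{B}
Ahat = np.array([[1.0, -0.5],
                 [0.5, -0.25],
                 [-0.4, 0.6]])       # \hat{\mathbb{A}}
Bhat = np.array([[0.1, 0.3, 0.0],
                 [0.5, 0.2, 0.0],
                 [0.0, 0.0, 0.1]])   # \hat{\mathbb{B}}
T = 0.5

def rhs(t, y):
    # Riccati right-hand side:  S' = -(S A + Bhat S + S B S + Ahat).
    S = y.reshape(3, 2)
    return -(S @ A + Bhat @ S + S @ B @ S + Ahat).ravel()

# Integrate backward in time from the terminal condition S_T = 0.
sol = solve_ivp(rhs, (T, 0.0), np.zeros(6), dense_output=True,
                rtol=1e-9, atol=1e-11)
S0 = sol.y[:, -1].reshape(3, 2)      # S_0, the gain matrix at the initial time
```

Run with the blocks of a concrete model, the same routine produces the gains $S^{i,j}_t$ entering the feedback formulas used below; once $S$ is available, the linear ODE (\ref{fo:ODE}) can be integrated in exactly the same way.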
We observe that, when $S$ is well-defined, the backward ODE (\ref{fo:ODE}) is always uniquely solvable. We have the following proposition.
\begin{proposition}
If the matrix Riccati equation (\ref{fo:Riccati}) and the backward ODE (\ref{fo:ODE}) admit solutions denoted by
$$S_t=\begin{pmatrix}
S^{1,1}_t & S^{1,2}_t\\
S^{2,1}_t & S^{2,2}_t\\
S^{3,1}_t & S^{3,2}_t
\end{pmatrix}, \text{ }
s_t=\begin{pmatrix}
s^1_t \\
s^2_t \\
s^3_t
\end{pmatrix},$$
then the FBSDE (\ref{fo:UltimateLQGbar}) is uniquely solvable. The first two components of the solution, namely $(X^0,\bar{X})$, are given by the solution of the linear SDE
$$\begin{cases}
dX^0_t=(A_0 X^0_t-\frac{1}{2}B_0 R^{-1}_0 B^\dagger_0 (S^{1,1}_t X^0_t+S^{1,2}_t \bar{X}_t+s^1_t)+F_0 \bar{X}_t)dt+D_0 dW^0_t,\\
d\bar{X}_t=(A\bar{X}_t-\frac{1}{2}BR^{-1}B^\dagger (S^{3,1}_t X^0_t+S^{3,2}_t \bar{X}_t+s^3_t)+F\bar{X}_t+GX^0_t)dt,\\
\end{cases}$$
with initial conditions given by
$$X^0_0=x^0_0, \text{ } \bar{X}_0=x_0.$$
The processes $(\bar{P}^0,\bar{P}, \bar{Y})$ are given by
$$\bar{P}^0_t=S^{1,1}_t X^0_t+S^{1,2}_t \bar{X}_t + s^1_t,\quad \bar{P}_t=S^{2,1}_t X^0_t+S^{2,2}_t \bar{X}_t + s^2_t,\quad \bar{Y}_t=S^{3,1}_t X^0_t+S^{3,2}_t \bar{X}_t + s^3_t.$$
\end{proposition}
\begin{proof}
The proof is a pure verification procedure.
\end{proof}
We now turn to the original conditional FBSDE (\ref{fo:UltimateLQG2}). Now that $X^0$, $\bar{X}$, $\bar{P}^0$ and $\bar{P}$ have been found, we plug them into the FBSDE, which becomes a standard linear FBSDE with random coefficients. Using the fact that $X^0$, $\bar{X}$, $\bar{P}^0$ and $\bar{P}$ are actually solutions of linear SDEs with deterministic coefficients, we have the following proposition.
\begin{proposition}
The FBSDE (\ref{fo:UltimateLQG2}) has a unique solution. Moreover, there exist a deterministic function $K$ and an $\mathcal{F}^0_t$-progressively measurable process $k$ such that
\begin{equation}\label{fo:YX}
Y_t=K_t X_t+k_t.
\end{equation}
\end{proposition}
\begin{proof}
We plug $X^0$, $\bar{X}$, $\bar{Y}$, $\bar{P}^0$ and $\bar{P}$ into (\ref{fo:UltimateLQG2}), and we readily observe that the second and the last equations form a standard FBSDE with random coefficients. The structure of this FBSDE is standard in the sense that it can be derived from a stochastic optimal control problem, which yields (\ref{fo:YX}). We now plug all the known processes into the third and the fourth equations in (\ref{fo:UltimateLQG2}), which yields a standard BSDE whose well-posedness is well known. The processes $P^0$ and $P$ thus follow.
\end{proof}
It becomes apparent that the solvability of the Riccati equation (\ref{fo:Riccati}) plays an instrumental role in the study of the unique solvability of (\ref{fo:UltimateLQG}). In order to address this problem we first define the $(2d_0+3d) \times (2d_0+3d)$-matrix $\mathcal{B}$ as
$$\mathcal{B}=\begin{pmatrix}
\mathbb{A} & \mathbb{B}\\
-\hat{\mathbb{A}} & -\hat{\mathbb{B}}
\end{pmatrix},$$
the minus signs in the second block row matching the sign convention of the backward equation in (\ref{fo:compact}).
We then define $\Psi(t,s)$ as
$$\Psi(t,s)=\exp(\mathcal{B}(t-s)),$$
in other words $\Psi(t,s)$ is the propagator of the matrix ODE $\dot{X}_t=\mathcal{B}X_t$ and satisfies
$$
\frac{d}{dt}\Psi(t,s)=\mathcal{B} \Psi(t,s),
$$
with initial condition $\Psi(s,s)=I_{2d_0+3d}$.
We further consider the block structure of $\Psi(T,t)$ and write
$$\Psi(T,t)=\begin{pmatrix}
\Gamma^{1,1}_t & \Gamma^{1,2}_t\\
\Gamma^{2,1}_t & \Gamma^{2,2}_t
\end{pmatrix}.$$
We have the following sufficient condition for the unique solvability of (\ref{fo:Riccati}).
\begin{proposition}\label{pRiccati}
If for each $t \in [0,T]$, the $(d_0+2d)\times (d_0+2d)$-matrix $\Gamma^{2,2}_t$ is invertible and the inverse is a continuous function of $t$, then
$$S_t=-\left(\Gamma^{2,2}_t\right)^{-1} \Gamma^{2,1}_t$$
solves the Riccati equation (\ref{fo:Riccati}).
\end{proposition}
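Proposition \ref{pRiccati} can be sanity-checked on a scalar toy instance for which the Riccati equation has a closed-form solution. In the sketch below, the block values $a$, $b$ and the horizon $T$ are assumptions of the illustration, not data from the model.

```python
import numpy as np
from scipy.linalg import expm

# Scalar toy instance (illustrative assumptions): blocks A = 0, B = b,
# Ahat = a, Bhat = 0, so the Riccati equation reads  S' + b S^2 + a = 0,
# S_T = 0, with closed-form solution S_t = sqrt(a/b) tan(sqrt(a b) (T - t)).
a, b, T = 1.0, 0.5, 1.0
w = np.sqrt(a * b)

# Hamiltonian matrix [[A, B], [-Ahat, -Bhat]] associated with the FBSDE.
H = np.array([[0.0, b],
              [-a, 0.0]])

def S(t):
    # Formula of the proposition: S_t = -(Gamma22)^{-1} Gamma21, Gamma = Psi(T, t).
    Gamma = expm(H * (T - t))
    return -Gamma[1, 0] / Gamma[1, 1]
```

In this scalar case $\Gamma^{2,2}_t=\cos(\sqrt{ab}\,(T-t))$, so the invertibility required in Proposition \ref{pRiccati} fails exactly at the first conjugate time $T-t=\pi/(2\sqrt{ab})$, which illustrates why the assumption cannot be dispensed with.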
The assumption in Proposition \ref{pRiccati} will be denoted by assumption (A'). The above three propositions tell us that if assumption (A') holds, then we can apply Theorem \ref{th:CLT}. Consequently we have
\begin{theorem}\label{tCentralLQG}
Assume that assumption (A') is in force. There exist a sequence $(\epsilon_N)_{N \geq 1}$ and a non-decreasing function $\rho:\mathbb{R}^+ \rightarrow \mathbb{R}^+$ such that \\
(i) There exists a constant $c$ such that for all $N \geq 1$,
$$\epsilon_N \leq c N^{-1/(d+4)}.$$
(ii) The control profile in partial feedback form
$$\left(-\frac{1}{2}R^{-1}_0 B^\dagger_0 (S^{1,1}_t X^{0,N}_t+S^{1,2}_t \bar{X}_t+s^1_t), (-\frac{1}{2}R^{-1}B^\dagger(K_t X^{i,N}_t+k_t))_{1 \leq i \leq N}\right)$$
forms a $\rho(\kappa)\epsilon_N$-Nash equilibrium for the $(N+1)$-player LQG game when the sets of admissible controls are taken as $\mathcal{A}^\kappa_0 \times \prod^N_{i=1}\mathcal{A}^\kappa_i$.
\end{theorem}
\section{A Concrete Example}
\label{se:example}
The scheme proposed in this paper differs from the one proposed in \cite{NguyenHuang2,NourianCaines} as the control problem faced by the major player is here of the conditional McKean-Vlasov type, and the measure flow is endogenous to the controller. This makes the limiting problem a \emph{bona fide} two-player game instead of a succession of two consecutive standard optimal control problems. Essentially, this adds another fixed point problem, coming from the Nash equilibrium for the two-player game, on top of the fixed point problem of step 3 of the standard mean field game paradigm. The reader may wonder whether after solving the two fixed point problems of the current scheme, we could end up with the same solution as in the scheme proposed in \cite{NguyenHuang2,NourianCaines}. In order to answer this question, we provide a concrete example, in which we show that the two solutions are different, and the Nash equilibria for finite-player games indeed converge to the solution of the scheme proposed in this paper.
We consider the $(N+1)$-player game whose state dynamics are given by
$$
\begin{cases}
dX^{0,N}_t=(\frac{a}{N}\sum^N_{i=1}X^{i,N}_t+b u^{0,N}_t)dt+ D_0 dW^0_t,\quad X^{0,N}_0=x^0_0,\\
dX^{i,N}_t=c X^{0,N}_t dt+ D dW^i_t, \quad X^{i,N}_0=x_0, \quad i=1,2,...,N,
\end{cases}
$$
the objective function of the major player is given by
$$
J^{0,N}=\mathbb{E}\bigg[\int^T_0 \left(q | X^{0,N}_t|^2+|u^{0,N}_t|^2\right) dt\bigg],
$$
and the objective functions of the minor players are given by
$$
J^{i,N}=\mathbb{E}\bigg[\int^T_0 | u^{i,N}_t|^2 dt\bigg].
$$
All the processes considered in this section are one-dimensional. We search for an open-loop Nash equilibrium. As we can readily observe, in this finite-player stochastic differential game, the minor players' best responses are always $0$, regardless of the other players' control processes. Therefore, the only remaining issue is to determine the major player's best response to the minor players using the zero control. This amounts to solving a stochastic control problem. This minimalist structure of the problem will facilitate the task of differentiating the current scheme from those of \cite{NguyenHuang2,NourianCaines}.
\subsection{Finite-player Game Nash Equilibrium}
We use the stochastic maximum principle. The admissible controls for the major player are the square-integrable $\mathcal{F}^0_t$-progressively measurable processes. His Hamiltonian is given by
$$
H=y_0 (\frac{a}{N}\sum^N_{i=1} x_i +b u_0)+cx_0\sum^N_{i=1} y_i+q x^2_0+u^2_0.
$$
The minimization of the Hamiltonian is straightforward. We get $\hat u_0=-by_0/2$. Applying the \emph{game version} of the Pontryagin stochastic
maximum principle leads to the FBSDE:
$$
\begin{cases}
dX^{0,N}_t=(\frac{a}{N}\sum^N_{i=1} X^{i,N}_t-\frac{1}{2}b^2 Y^{0,N}_t)dt+D_0 dW^0_t,\\
dX^{i,N}_t=cX^{0,N}_t dt+ D dW^i_t, \quad 1 \leq i \leq N,\\
dY^{0,N}_t=-(c\sum^N_{i=1}Y^{i,N}_t +2q X^{0,N}_t)dt+ \sum^{N}_{j=0} Z^{0,j,N}_t dW^j_t,\\
dY^{i,N}_t=-\frac{a}{N}Y^{0,N}_t dt + \sum^{N}_{j=0} Z^{i,j,N}_t dW^j_t, \quad 1 \leq i \leq N.
\end{cases}
$$
The initial conditions for the state processes are the same as always, and will be omitted systematically in the following.
The terminal conditions read $Y^{i,N}_T=0$ for $0\le i\le N$.
Keeping in mind that the optimal control identified by the necessary condition of the Pontryagin stochastic
maximum principle is $\hat u^{0,N}_t=-bY^{0,N}_t/2$, it is clear that what matters in the above equations is the aggregate behavior of the processes
$(X^{i,N})$ and $(Y^{i,N})$. Accordingly we introduce
$$
X^N_t=\frac{1}{N}\sum^N_{i=1}X^{i,N}_t, \quad Y^N_t=\sum^N_{i=1} Y^{i,N}_t,
$$
and the above FBSDE leads to the system:
$$
\begin{cases}
dX^{0,N}_t=(a X^N_t-\frac{1}{2}b^2 Y^{0,N}_t)dt+ D_0 dW^0_t,\\
d X^N_t=cX^{0,N}_t dt+ \frac{D}{N} d(\sum^N_{i=1}W^i_t),\\
dY^{0,N}_t=-(c Y^N_t +2q X^{0,N}_t)dt+ \sum^{N}_{j=0} Z^{0,j,N}_t dW^j_t,\\
d Y^N_t=-a Y^{0,N}_t dt+\sum^N_{i=1} \sum^N_{j=0} Z^{i,j,N}_t dW^j_t,
\end{cases}
$$
and by conditioning the last two equations with respect to $\mathcal{F}^0_t$ we obtain
$$
\begin{cases}
dX^{0,N}_t=(aX^N_t-\frac{1}{2}b^2 \bar{Y}^{0,N}_t)dt+ D_0 dW^0_t,\\
dX^N_t=cX^{0,N}_t dt+ \frac{D}{N} d(\sum^N_{i=1}W^i_t),\\
d\bar{Y}^{0,N}_t=-(c \bar{Y}^N_t +2q X^{0,N}_t)dt+ \bar{Z}^{0,0}_t dW^0_t,\\
d\bar{Y}^N_t=-a \bar{Y}^{0,N}_t dt+\sum^N_{i=1} \bar{Z}^{i,0}_t dW^0_t,
\end{cases}
$$
where we used an overline on top of a random variable to denote its conditional expectation with respect to $\mathcal{F}^0_t$.
By following the usual scheme for solving FBSDEs, we see that the solvability of the above FBSDE depends on the solvability of
\begin{equation}\label{fo:Riccati2}
\dot{S}_t+S_t A+\hat{B}S_t+S_t B S_t +\hat{A}=0,\quad S_T=0,
\end{equation}
where we define
$$A=\begin{pmatrix}
0 & a \\
c & 0
\end{pmatrix}, B=\begin{pmatrix}
-\frac{b^2}{2} & 0\\
0 & 0
\end{pmatrix}, \hat{A}=\begin{pmatrix}
2q & 0\\
0 & 0
\end{pmatrix}, \hat{B}=\begin{pmatrix}
0 & c \\
a & 0
\end{pmatrix},$$
and $S_t$ is a $2 \times 2$ matrix which can be decomposed as
$$S_t=\begin{pmatrix}
S^{0,0}_t & S^{0,1}_t\\
S^{1,0}_t & S^{1,1}_t
\end{pmatrix}.$$
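As a numerical sanity check, the solution of (\ref{fo:Riccati2}) can be computed through the propagator of the associated Hamiltonian matrix and verified by finite differences; the parameter values below, and the propagator representation itself (with the second block row carrying minus signs, as in the sign convention of the backward equations), are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameter values (assumptions, not data from the paper).
a, b, c, q, T = 0.4, 1.0, 0.3, 0.5, 1.0
A    = np.array([[0.0, a], [c, 0.0]])
B    = np.array([[-b**2 / 2, 0.0], [0.0, 0.0]])
Ahat = np.array([[2 * q, 0.0], [0.0, 0.0]])
Bhat = np.array([[0.0, c], [a, 0.0]])

# Hamiltonian block matrix of the FBSDE; when the lower-right block of the
# propagator is invertible, S_t = -(Gamma22)^{-1} Gamma21 solves the Riccati
# equation (Radon's lemma).
H = np.block([[A, B], [-Ahat, -Bhat]])

def S(t):
    Gamma = expm(H * (T - t))
    return -np.linalg.solve(Gamma[2:, 2:], Gamma[2:, :2])

# Residual of  S' + S A + Bhat S + S B S + Ahat  via a centered difference.
t0, h = 0.4, 1e-5
dS = (S(t0 + h) - S(t0 - h)) / (2 * h)
residual = dS + S(t0) @ A + Bhat @ S(t0) + S(t0) @ B @ S(t0) + Ahat
```

The residual vanishes up to discretization error on any interval where the lower-right block of the propagator stays invertible.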
If the Riccati equation (\ref{fo:Riccati2}) is uniquely solvable, we solve the following forward SDE
$$\begin{cases}
dX^{0,N}_t=(aX^N_t-\frac{1}{2}b^2 (S^{0,0}_t X^{0,N}_t+S^{0,1}_t X^N_t))dt+ D_0 dW^0_t,\\
dX^N_t=cX^{0,N}_t dt+ \frac{D}{N} d(\sum^N_{i=1}W^i_t),
\end{cases}$$
and we obtain the optimally controlled dynamics for the major player. The optimal control is given by
$$\hat{u}^{0,N}_t=-\frac{b}{2}\bar{Y}^{0,N}_t.$$
\subsection{The Current Scheme}
The scheme introduced in this paper proposes to solve the McKean-Vlasov control problem consisting of the controlled dynamics
$$\begin{cases}
dX^0_t=(a \mathbb{E}[X_t\vert \mathcal{F}^0_t]+b u^0_t)dt + D_0 dW^0_t,\\
dX_t= c X^0_t dt+ D dW_t,
\end{cases}$$
and the objective function remains
$$J^0=\mathbb{E}\int^T_0 [q (X^0_t)^2+(u^0_t)^2 ]dt.$$
Applying directly the result in the LQG part of the paper we get the FBSDE
$$\begin{cases}
dX^0_t=(a\bar{X}_t-\frac{1}{2}b^2 \bar{P}^0_t)dt+D_0 dW^0_t,\\
dX_t=c X^0_t dt+ D dW_t,\\
dP^0_t=-(2q X^0_t+cP_t)dt+Q^{00}_t dW^0_t+Q^{01}_t dW_t,\\
dP_t=-a\bar{P}^0_t dt+Q^{10}_t dW^0_t+Q^{11}_t dW_t,
\end{cases}$$
and after conditioning we get
\begin{equation}\label{fo:FBSDEnew}
\begin{cases}
dX^0_t=(a\bar{X}_t-\frac{1}{2}b^2 \bar{P}^0_t)dt+D_0 dW^0_t,\\
d\bar{X}_t=c X^0_t dt,\\
d\bar{P}^0_t=-(2q X^0_t+c\bar{P}_t)dt+\bar{Q}^{00}_t dW^0_t,\\
d\bar{P}_t=-a\bar{P}^0_t dt+\bar{Q}^{10}_t dW^0_t.
\end{cases}
\end{equation}
We still use the four-step scheme to solve this FBSDE, and we see that the associated Riccati equation is again (\ref{fo:Riccati2}). We then solve the forward SDE
$$\begin{cases}
dX^0_t=(a\bar{X}_t-\frac{1}{2}b^2 (S^{0,0}_t X^0_t+S^{0,1}_t \bar{X}_t))dt+D_0 dW^0_t,\\
d\bar{X}_t=c X^0_t dt,
\end{cases}$$
and we obtain the solution. The optimal control $u^0$ is given by $-\frac{b}{2}\bar{P}^0_t$. We have the following proposition.
\begin{proposition}\label{pnew}
For all $t \in [0,T]$ we have
$$\vert X^{0,N}_t-X^0_t\vert+ \vert X^N_t-\bar{X}_t\vert \leq e^{Kt}\frac{D}{N}\max_{0 \leq s \leq t}\Big\vert\sum^N_{i=1}W^i_s\Big\vert.$$
As a result, we have that for all $t\in [0,T]$,
$$X^{0,N}_t \rightarrow X^0_t, X^{N}_t \rightarrow \bar{X}_t, Y^{0,N}_t \rightarrow \bar{P}^0_t, Y^N_t \rightarrow \bar{P}_t, \text{a.s.,}$$
and finally we have the convergence of the optimal controls for the finite-player games towards the limiting optimal control, namely
$$u^{0,N}_t \rightarrow u^0_t \quad \text{a.s.,}\quad \forall t \in [0,T].$$
\end{proposition}
\begin{proof}
For a fixed $t > 0$, by computing the difference between the SDEs satisfied by the processes $X^{0,N}$, $X^N$, $X^0$ and $\bar{X}$, we see that there exists a constant $K$ such that
$$\vert X^{0,N}_t-X^0_t\vert+\vert X^N_t-\bar{X}_t\vert \leq K\int^t_0 \big(\vert X^{0,N}_s-X^0_s\vert+\vert X^N_s-\bar{X}_s\vert\big) ds + \frac{D}{N} \max_{0 \leq s \leq t} \Big\vert \sum^N_{i = 1} W^i_s\Big\vert,$$
and since the function $t \rightarrow \frac{D}{N} \max_{0 \leq s \leq t} \vert \sum^N_{i = 1} W^i_s\vert$ is non-decreasing in $t$, Gronwall's inequality yields the desired bound. The convergence of the processes then follows from the strong law of large numbers by letting $N$ go to infinity.
\end{proof}
\subsection{The Scheme in \cite{NguyenHuang2,NourianCaines}}
We now turn to the scheme proposed in \cite{NguyenHuang2,NourianCaines}. We start by fixing a $\mathcal{F}^0_t$-progressively measurable process $m$,
and solve the control problem consisting of the dynamics
$$
dX^0_t=(am_t+b u^0_t)dt+ D_0 dW^0_t,\quad X^0_0=x^0_0,
$$
and the objective function
$$
J^0=\mathbb{E}\int^T_0 [q (X^0_t)^2+(u^0_t)^2] dt.
$$
By applying the usual Pontryagin maximum principle we quickly arrive at the following FBSDE characterizing the optimally controlled system:
$$
\begin{cases}
dX^0_t= (a m_t-\frac{1}{2}b^2 Y^0_t)dt+D_0 dW^0_t,\\
dY^0_t=-2q X^0_t dt+ Z^0_t dW^0_t,\\
X^0_0=x^0_0,\quad
Y^0_T=0.
\end{cases}
$$
We then impose the consistency condition $m_t=\mathbb{E}\left[ X_t \vert \mathcal{F}^0_t\right] := \bar{X}_t$ which leads to
the FBSDE:
\begin{equation}
\label{fo:FBSDEold}
\begin{cases}
dX^0_t= (a \bar{X}_t-\frac{1}{2}b^2 Y^0_t)dt+D_0 dW^0_t,\\
d\bar{X}_t=c X^0_t dt,\\
dY^0_t=-2q X^0_t dt+ Z^0_t dW^0_t,\\
\end{cases}
\end{equation}
The comparison of (\ref{fo:FBSDEold}) and (\ref{fo:FBSDEnew}) will be based on the following proposition.
\begin{proposition}\label{pold}
There exists $t \in [0,T]$ and an event $E \subset \Omega$ such that $\mathbb{P}(E)>0$ and on $E$,
$$\bar{P}^0_t \neq Y^0_t.$$
\end{proposition}
\begin{proof}
We prove this proposition by contradiction. Assume that for all $t$, almost surely $\bar{P}^0_t = Y^0_t$. Plugging them into the first two equations of (\ref{fo:FBSDEold}) and (\ref{fo:FBSDEnew}), by uniqueness of solutions of SDEs, we know that the processes $X^0$ and $\bar{X}$ in these two systems are equal. Computing the difference between the third equations of (\ref{fo:FBSDEold}) and (\ref{fo:FBSDEnew}), we conclude that $\bar{P}$ is $0$ by uniqueness of solutions of BSDEs. Using the fourth equation in (\ref{fo:FBSDEnew}) we then see that $\bar{P}^0$ is $0$, and finally, again by uniqueness of solutions of BSDEs, we see that $X^0$ is $0$ because it appears in the driver of the third equation in (\ref{fo:FBSDEnew}). This is a contradiction.
\end{proof}
Note that the optimal control provided by the scheme in \cite{NguyenHuang2,NourianCaines} is given by $-\frac{b}{2}Y^0$. In light of Propositions \ref{pnew} and \ref{pold}, we conclude that the two schemes lead to different optimal controls, and that the Nash equilibria of the finite-player games converge towards the one produced by the current scheme, instead of the one produced by the scheme proposed in \cite{NguyenHuang2,NourianCaines}.
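The discrepancy asserted in Proposition \ref{pold} can also be observed numerically. The sketch below, with illustrative parameter values of our own choosing, computes the time-$0$ coefficients of $(X^0,\bar X)$ in $\bar P^0$ and in $Y^0$ through propagator representations of the two Riccati systems (a representation assumed here, with the second block row of each Hamiltonian matrix carrying minus signs) and checks that they differ.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (assumptions of this sketch).
a, b, c, q, T = 0.4, 1.0, 0.3, 0.5, 1.0
A = np.array([[0.0, a], [c, 0.0]])

# Current scheme: backward pair (Pbar^0, Pbar), Riccati system (fo:Riccati2).
B_new    = np.array([[-b**2 / 2, 0.0], [0.0, 0.0]])
Ahat_new = np.array([[2 * q, 0.0], [0.0, 0.0]])
Bhat_new = np.array([[0.0, c], [a, 0.0]])
H_new = np.block([[A, B_new], [-Ahat_new, -Bhat_new]])

# Other scheme: single backward variable Y^0 with dY^0 = -2q X^0 dt.
B_old    = np.array([[-b**2 / 2], [0.0]])
Ahat_old = np.array([[2 * q, 0.0]])
Bhat_old = np.array([[0.0]])
H_old = np.block([[A, B_old], [-Ahat_old, -Bhat_old]])

def riccati(H, n, t):
    # Propagator representation: S_t = -(Gamma22)^{-1} Gamma21,
    # where n is the dimension of the forward component (X^0, Xbar).
    G = expm(H * (T - t))
    return -np.linalg.solve(G[n:, n:], G[n:, :n])

row_new = riccati(H_new, 2, 0.0)[0]   # coefficients of (X^0, Xbar) in Pbar^0
row_old = riccati(H_old, 2, 0.0)[0]   # coefficients of (X^0, Xbar) in Y^0
```

Since the controls are $-\frac b2$ times these linear forms, a nonzero gap between the coefficient rows confirms that the two schemes produce different controls.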
\section{Conditional Propagation of Chaos}
\label{se:conditional_chaos}
In this section we consider a system of $(N+1)$ interacting particles with stochastic dynamics:
\begin{equation}\label{fo:finitepoc}
\begin{cases}
dX^{0,N}_t=b_0(t,X^{0,N}_t,\mu^N_t)dt+\sigma_0(t,X^{0,N}_t,\mu^N_t) dW^0_t,\\
dX^{i,N}_t=b(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t)dt+\sigma(t,X^{i,N}_t,\mu^N_t,X^{0,N}_t) dW^i_t, \ \ i=1,2,...,N,\\
X^{0,N}_0=x^0_0,\quad
X^{i,N}_0=x_0, \ \ i=1,2,...,N,
\end{cases}
\end{equation}
on a probability space $(\Omega, \mathcal{F},\PP)$, where the empirical measure $\mu^N$ was defined in \eqref{fo:muN}.
Here $(W^i)_{i \geq 0}$ is a sequence of independent Wiener processes, $W^0$ being $n_0$-dimensional and $W^i$ $n$-dimensional for $i \geq 1$. The major-particle process $X^{0,N}$ is $d_0$-dimensional, and the minor-particle processes $X^{i,N}$ are $d$-dimensional for $i \geq 1$. The coefficient functions
$$
\begin{aligned}
&(b_0,\sigma_0) : [0,T]\times \Omega \times \mathbb{R}^{d_0}\times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^{d_0}\times \mathbb{R}^{d_0\times n_0},\\
&(b,\sigma) : [0,T] \times \Omega \times \mathbb{R}^{d} \times \mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^{d_0}\to \mathbb{R}^{d}\times \mathbb{R}^{d\times n},
\end{aligned}
$$
are allowed to be random, and as usual, $\mathcal{P}_2(E)$ denotes the space of probability measures on $E$ having a finite second moment.
We shall make the following assumptions.\\
\textbf{(A1.1)} The functions $b_0$ and $\sigma_0$ (resp. $b$ and $\sigma$) are $\mathcal{P}^{W^0} \otimes \mathcal{B}(\mathbb{R}^{d_0})\otimes \mathcal{B}(\mathcal{P}_2(\mathbb{R}^d))$-measurable (resp. $\mathcal{P}^{W^0} \otimes \mathcal{B}(\mathbb{R}^{d})\otimes \mathcal{B}(\mathcal{P}_2(\mathbb{R}^d)) \otimes \mathcal{B}(\mathbb{R}^{d_0})$-measurable), where $\mathcal{P}^{W^0}$ is the progressive $\sigma$-field associated with the filtration $\mathcal{F}^0_t$ on $[0,T] \times \Omega$ and $\mathcal{B}(\mathcal{P}_2(\mathbb{R}^d))$ is the Borel $\sigma$-field generated by the Wasserstein metric $W_2$.\\
\textbf{(A1.2)} There exists a constant $K>0$ such that for all $t \in [0,T]$, $\omega \in \Omega$, $x, x' \in \mathbb{R}^d$, $x_0, x'_0 \in \mathbb{R}^{d_0}$ and $\mu, \mu' \in \mathcal{P}_2(\mathbb{R}^d)$,
$$\begin{aligned}
&|(b_0,\sigma_0)(t,\omega, x_0, \mu)-(b_0,\sigma_0)(t, \omega, x'_0, \mu')| \leq K(|x_0-x'_0|+W_2(\mu, \mu')),\\
&|(b,\sigma)(t,\omega,x,\mu,x_0)-(b,\sigma)(t, \omega, x', \mu', x'_0)| \leq K(|x-x'|+|x_0-x'_0|+W_2(\mu, \mu')).
\end{aligned}
$$
\textbf{(A1.3)} We have
$$
\mathbb{E}\left[\int^T_0 \vert (b_0,\sigma_0)(t,0,\delta_0) \vert^2+\vert (b,\sigma)(t,0,\delta_0,0) \vert^2 dt\right] < \infty.
$$
\vskip 2pt
Our goal is to study the limiting behaviour of the solution of the system (\ref{fo:finitepoc}) when $N$ tends to infinity. The limit will be given by the so-called \emph{limiting nonlinear processes}, but before defining them, we need to introduce notations and definitions for the regular versions of conditional probabilities
which we use throughout the remainder of the paper.
\subsection{Regular conditional distributions and optional projections}
\label{sub:conditional}
We consider a measurable space $(\Omega,\cF)$ and we assume that $\Omega$ is standard and $\cF$ is its Borel $\sigma$-field to allow us to use regular conditional distributions for any sub-$\sigma$-field of $\mathcal{F}$. In fact, if $(\cG_t)$ is a right continuous filtration, we make use of the existence of a map $\Pi^\cG:[0,\infty)\times\Omega \hookrightarrow \cP(\Omega)$ which is $(\cO,\cB(\cP(\Omega)))$-measurable and such that for each $t\ge 0$, $\{\Pi^\cG_t(\omega,A);\,\omega\in\Omega,\,A\in\cF\}$ is a regular version of the conditional probability of $\PP$ given the $\sigma$-field $\cG_t$.
Here $\cO$ denotes the optional $\sigma$-field of the filtration $(\cG_t)$. This result is a direct consequence of Proposition 1 in \cite{Yor} applied to the process $(X_t)$ given by the identity map of $\Omega$ and the constant filtration $\cF_t\equiv\cF$.
For each $t\ge 0$, we define the probability measures $\mathbb{P}\otimes \Pi^\mathcal{G}_t$ and $ \Pi^\mathcal{G}_t\otimes\mathbb{P}$ on $\Omega^2=\Omega \times \Omega$ via the formulas
\begin{equation}
\label{fo:probas}
\mathbb{P}\otimes \Pi^\mathcal{G}_t(A \times B)=\int_A \Pi^\mathcal{G}_t(\omega, B) \mathbb{P}(d\omega).
\quad\text{and}\quad
\Pi^\mathcal{G}_t\otimes\mathbb{P}(A \times B)=\int_B \Pi^\mathcal{G}_t(\omega, A) \mathbb{P}(d\omega).
\end{equation}
It is easy to check that integrals of functions of the form $\Omega^2\ni (\omega,\tilde\omega)\hookrightarrow \varphi(\omega)\psi(\tilde\omega)$ with respect to these two measures are equal; by a monotone class argument, this shows that these two measures are the same. We will use this result in the following way: if $X$ is measurable and bounded on $\Omega^2$, we can interchange $\omega$ and $\tilde\omega$ in the integrand of
$$
\int_{\Omega^2}X(\omega,\tilde\omega)\Pi^\mathcal{G}_t(\omega, d\tilde\omega) \mathbb{P}(d\omega)
$$
without changing the value of the integral.
In this section, we often use the notation $\mathbb{E}^{\mathcal{G}_t}$ for the expectation with respect to the transition kernel $\Pi_t^\mathcal{G}$, i.e., for every random variable $X : \Omega^2 \ni (\omega,\tilde{\omega})\hookrightarrow X(\omega,\tilde{\omega})\in\mathbb{R}$, we define
$$
\mathbb{E}^{\mathcal{G}_t}[X(\omega,\tilde{\omega})] =\int_\Omega X(\omega, \tilde{\omega}) \Pi^\mathcal{G}_t(\omega, d\tilde{\omega}),
$$
which, as a function of $\omega$, is a random variable on $\Omega$. Also, we still use $\mathbb{E}$ to denote the expectation with respect to the first argument, i.e.
$$
\mathbb{E}[X]=\int_\Omega X(\omega, \tilde{\omega}) \mathbb{P}(d\omega),
$$
which, as a function of $\tilde\omega$, is a random variable on $\Omega$. Finally, whenever we have a random variable $X$ defined on $\Omega$, we define the random variable $\tilde{X}$ on $\Omega^2$ via the formula
$\tilde{X}(\omega,\tilde{\omega})=X(\tilde{\omega})$.
\subsection{Conditional McKean-Vlasov SDEs}
In order to define properly the limiting nonlinear processes, we first derive a few technical properties of the conditional distribution of a process with respect to a filtration. We now assume that the filtration $(\mathcal{G}_t)$ is a sub-filtration of a right continuous filtration $(\mathcal{F}_t)$, in particular $\mathcal{G}_t \subseteq \mathcal{F}_t$ for all $t\ge 0$, and that $(X_t)$ is an $\mathcal{F}_t$-adapted continuous process taking values in a Polish space $(E,\mathcal{E})$. Defining $\mu^X_t(\omega)$ as the distribution of the random variable $X_t$ under the probability measure $\Pi^\cG_t(\omega,\,\cdot\,)$, we obtain the following result which we state as a lemma for future reference.
\begin{lemma}
\label{proj}
There exists a stochastic measure flow $\mu^X:[0,\infty)\times \Omega \rightarrow \mathcal{P}(E)$ such that
\begin{enumerate}
\item $\mu^X$ is $\mathcal{P}$/$\mathcal{B}(\mathcal{P}(E))$-measurable, where $\mathcal{P}$ is the progressive $\sigma$-field associated to $(\mathcal{G}_t)$ on $[0,\infty)\times \Omega$, and $\mathcal{B}(\mathcal{P}(E))$ the Borel $\sigma$-field of the weak topology on $\mathcal{P}(E)$;
\item for all $t \geq 0$, $\mu^X_t$ is a regular conditional distribution of $X_t$ given $\mathcal{G}_t$.
\end{enumerate}
\end{lemma}
\vskip 4pt\noindent
We first study the well-posedness of the SDE:
\begin{equation}\label{fo:cMkV}
dX_t=b(t,X_t,\mathcal{L}(X_t\vert \mathcal{G}_t))dt+\sigma (t,X_t,\mathcal{L}(X_t\vert \mathcal{G}_t))dW_t.
\end{equation}
We say that this SDE is of the conditional McKean-Vlasov type because the conditional distribution of $X_t$ with respect to $\mathcal{G}_t$ enters the dynamics. Note that when $\mathcal{G}_t$ is the trivial $\sigma$-field, (\ref{fo:cMkV}) reduces to a classical McKean-Vlasov SDE.
In the following, when writing $\mathcal{L}(X_t \vert \mathcal{G}_t)$ we always mean $\mu^X_t$, for the stochastic flow $\mu^X$ whose existence is given in Lemma \ref{proj}.
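To fix ideas, here is a minimal particle sketch of a conditional McKean-Vlasov SDE of the type (\ref{fo:cMkV}), with toy linear coefficients chosen for this illustration only: $\mathcal{G}_t$ is generated by a common noise $W^0$, and the conditional law is approximated by the empirical measure of $N$ particles sharing the same path of $W^0$.

```python
import numpy as np

# Toy conditional McKean-Vlasov dynamics (coefficients are assumptions):
#   dX_t = kappa (E[X_t | G_t] - X_t) dt + s0 dW^0_t + s dW_t,
# simulated with N Euler particles driven by one shared path of W^0.
rng = np.random.default_rng(1)
kappa, s0, s = 1.0, 0.5, 1.0
N, T, dt = 20_000, 2.0, 0.01

X = np.zeros(N)
for _ in range(int(T / dt)):
    dW0 = np.sqrt(dt) * rng.normal()          # common-noise increment
    dW = np.sqrt(dt) * rng.normal(size=N)     # idiosyncratic increments
    Xbar = X.mean()                           # particle proxy for E[X_t | G_t]
    X = X + kappa * (Xbar - X) * dt + s0 * dW0 + s * dW

# For this linear drift, the deviation from the conditional mean is an
# Ornstein-Uhlenbeck process that does not feel W^0, so the conditional
# variance should approach  s^2 (1 - exp(-2 kappa T)) / (2 kappa).
target = s**2 * (1 - np.exp(-2 * kappa * T)) / (2 * kappa)
```

The closed-form benchmark for the conditional variance is a property of this specific linear toy model, not a general feature of (\ref{fo:cMkV}).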
The analysis of the SDE (\ref{fo:cMkV}) is done under the following assumptions. We let $W$ be an $m$-dimensional Wiener process on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, $\mathcal{F}^0_t$ its (raw) filtration, $\mathcal{F}_t=\mathcal{F}_t^W$ its usual $\mathbb{P}$-augmentation, and $\mathcal{G}_t$ a sub-filtration of $\mathcal{F}_t$ also satisfying the usual conditions. We impose the following standard assumptions on $b$ and $\sigma$:
\noindent \textbf{(B1.1)} The function
$$
(b,\sigma):[0,T]\times \Omega \times \mathbb{R}^n \times \mathcal{P}(\mathbb{R}^n) \ni (t,\omega,x,\mu) \hookrightarrow (b(t,\omega,x,\mu),\sigma(t,\omega,x,\mu)) \in \mathbb{R}^n\times \RR^{n \times m}
$$
is $\mathcal{P}^{\mathcal{G}} \otimes \mathcal{B}(\mathbb{R}^{n})\otimes \mathcal{B}(\mathcal{P}(\mathbb{R}^n))$-measurable, where $\mathcal{P}^{\mathcal{G}}$ is the progressive $\sigma$-field associated with the filtration $\mathcal{G}_t$ on $[0,T] \times \Omega$;\\
\textbf{(B1.2)} There exists $K>0$ such that for all $t \in [0,T]$, $\omega \in \Omega$, $x, x' \in \mathbb{R}^n$, and $\mu, \mu' \in \mathcal{P}_2(\mathbb{R}^n)$, we have:
$$
|b(t,\omega, x, \mu)-b(t, \omega, x', \mu')|+|\sigma(t,\omega, x, \mu)-\sigma(t, \omega, x', \mu')| \leq K(|x-x'|+W_2(\mu, \mu')).
$$
\textbf{(B1.3)} It holds:
$$\mathbb{E}\left[\int^T_0 \left(\vert b(t,0,\delta_0) \vert^2+\vert \sigma(t,0,\delta_0) \vert^2\right) dt\right] < \infty.$$
\begin{definition}
By a (strong) solution of (\ref{fo:cMkV}) we mean an $\mathcal{F}_t$-adapted continuous process $X$ taking values in $\mathbb{R}^n$ such that for all $t \in [0,T]$,
$$X_t=x_0 +\int^t_0 b(s,X_s,\mathcal{L}(X_s\vert \mathcal{G}_s))ds+\int^t_0 \sigma (s,X_s,\mathcal{L}(X_s\vert \mathcal{G}_s))dW_s, \quad \text{a.s..}$$
\end{definition}
In order to establish the well-posedness of (\ref{fo:cMkV}) we need some form of control on the 2-Wasserstein distance between two conditional distributions. We shall use the following dual representation:
\begin{proposition}
If $\mu,\nu\in\mathcal{P}_2(E)$ where $E$ is an Euclidean space, then:
$$
W^2_2(\mu,\nu)=\sup_{\phi \in \mathcal{C}^{\text{Lip}}_b(E)}\bigg(\int_E \phi^* d\mu - \int_E \phi d\nu\bigg),
$$
where $\phi^*(x) := \inf_{z \in E} \big(\phi(z)+|x-z|^2\big). $
\end{proposition}
\noindent
We shall use the following consequences of this representation.
\begin{lemma}
\label{lwasserstein}
If $X$ and $Y$ are two random variables of order $2$ taking values in a Euclidean space, and $\mathcal{G}$ a sub-$\sigma$-field of $\mathcal{F}$, then for all $p \geq 2$ we have:
$$
W^p_2(\mathcal{L}(X|\mathcal{G}),\mathcal{L}(Y|\mathcal{G}))\leq \mathbb{E}[|X-Y|^p\vert \mathcal{G}], \text{a.s.}.
$$
By taking expectations on both sides we further have
$$\mathbb{E}\left[W^p_2(\mathcal{L}(X|\mathcal{G}),\mathcal{L}(Y|\mathcal{G}))\right] \leq \mathbb{E}[|X-Y|^p].$$
\end{lemma}
\begin{proof}
By using the above dual representation formula and the characteristic equation for conditional distributions, we get
$$W^2_2(\mathcal{L}(X|\mathcal{G}),\mathcal{L}(Y|\mathcal{G}))=\sup_{\phi \in \mathcal{C}^{\text{Lip}}_b(E)}\mathbb{E}[\phi^*(X)-\phi(Y)|\mathcal{G}] \leq \mathbb{E}[|X-Y|^2\vert \mathcal{G}],
$$
and the inequality for general $p \geq 2$ follows by applying the conditional Jensen inequality to the convex function $x \mapsto x^{p/2}$.
\end{proof}
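The inequality of Lemma \ref{lwasserstein} (for $p=2$ and trivial $\mathcal{G}$) can be checked numerically in one dimension, where the squared 2-Wasserstein distance between two empirical measures with the same number of atoms is attained by the monotone (sorted) matching. This toy check is not from the paper; the distributions and sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(0.0, 1.0, size=n)
Y = X + rng.normal(0.0, 0.3, size=n)     # a coupled pair (X, Y)

# For two empirical measures with n equal atoms on the line, the squared
# 2-Wasserstein distance is attained by the monotone (sorted) matching:
w2_sq = np.mean((np.sort(X) - np.sort(Y)) ** 2)
coupling_cost = np.mean((X - Y) ** 2)    # cost E|X-Y|^2 of the given coupling

print(w2_sq, coupling_cost)
```

The sorted matching minimizes the quadratic transport cost over all pairings, so `w2_sq <= coupling_cost` holds for any samples, mirroring $W_2^2(\mathcal{L}(X),\mathcal{L}(Y)) \leq \mathbb{E}[|X-Y|^2]$.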
We then have the following well-posedness result.
\begin{proposition}\label{pMkV}
The conditional McKean-Vlasov SDE (\ref{fo:cMkV}) has a unique strong solution. Moreover, for all $p \geq 2$, if we replace the assumption (B1.3) by
$$\mathbb{E}\left[\int^T_0 \left(\vert b(t,0,\delta_0)\vert^p+\vert \sigma(t,0,\delta_0)\vert^p\right) dt\right] < \infty,$$
then, the solution of (\ref{fo:cMkV}) satisfies
$$\mathbb{E}\left[\sup_{0 \leq t \leq T} \vert X_t\vert^p \right] < \infty.$$
\end{proposition}
\begin{proof}
The proof is an application of the contraction mapping theorem. For each $c>0$, we consider the space of all $\mathcal{F}_t$-progressively measurable processes satisfying
$$
\|X\|_c^2:=\mathbb{E}\left[\int^T_0 e^{-ct}|X_t|^2 dt\right]<\infty.
$$
This space will be denoted by $\mathbb{H}^2_c$. It can be easily proven to be a Banach space. Furthermore, for all $X \in \mathbb{H}^2_c$, we have
$$
\mathcal{L}(X_t \vert \mathcal{G}_t) \in \mathcal{P}_2(\mathbb{R}^n), \quad \text{a.s., a.e..}
$$
and we can define
$$
U_t=x_0 +\int^t_0 b(s,X_s,\mathcal{L}(X_s\vert \mathcal{G}_s))ds+\int^t_0 \sigma(s,X_s,\mathcal{L}(X_s\vert \mathcal{G}_s))dW_s.
$$
It is easy to show that $U \in \mathbb{H}^2_c$. On the other hand, if we fix $X,X' \in \mathbb{H}^2_c$ and let $U$ and $U'$ be the processes defined via the above equality from $X$ and $X'$ respectively, we have
$$
\begin{aligned}
&\mathbb{E}\left[\left|\int^t_0 b(s,X'_s,\mathcal{L}(X'_s\vert \mathcal{G}_s))-b(s,X_s,\mathcal{L}(X_s\vert \mathcal{G}_s)) ds\right|^2\right]\\
&\phantom{????}\leq 2 T K^2 \mathbb{E}\left[\int^t_0 \vert X'_s-X_s \vert^2+W^2_2(\mathcal{L}(X'_s\vert \mathcal{G}_s),\mathcal{L}(X_s\vert \mathcal{G}_s))ds\right]\\
&\phantom{????}\leq 4 T K^2 \mathbb{E}\left[\int^t_0 \vert X'_s-X_s\vert^2 ds\right],
\end{aligned}
$$
where the second inequality uses Lemma \ref{lwasserstein}, and the same type of estimate holds for the stochastic integral term, the It\^o isometry playing the role of the Cauchy-Schwarz inequality. This yields
$$\begin{aligned}
\Vert U'-U \Vert^2_c=&\mathbb{E}\left[\int^T_0 e^{-ct} \vert U'_t-U_t \vert^2dt\right] \\
\leq & 4 (T+1) K^2\mathbb{E}\left[\int^T_0 e^{-ct} \left(\int^t_0 \vert X'_s-X_s \vert^2 ds\right)dt\right]\\
\leq &\frac{4 (T+1) K^2}{c}\Vert X'-X \Vert^2_c,
\end{aligned}$$
and this proves that the map $X\mapsto U$ is a strict contraction in the Banach space $\mathbb{H}^2_c$ if we choose $c$ sufficiently large. The fact that the solution possesses finite moments can be obtained by using standard estimates and Lemma~\ref{lwasserstein}; we omit the details.
\end{proof}
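The mechanism behind the choice of $c$ can be seen on a deterministic caricature. For the Volterra map $(\Phi f)(t) = x_0 + \lambda\int_0^t f(s)\,ds$ (a drift-only, measure-free analogue of the map $X \mapsto U$), the same computation gives $\Vert \Phi f - \Phi g\Vert_c^2 \le (\lambda^2 T/c)\Vert f-g\Vert_c^2$, so the map contracts as soon as $c > \lambda^2 T$. The sketch below (with illustrative parameters) verifies the contraction ratio and runs the Picard iteration to the fixed point $x_0 e^{\lambda t}$:

```python
import numpy as np

lam, T, c = 2.0, 1.0, 16.0          # Lipschitz constant, horizon, norm weight
n = 2000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

def weighted_norm_sq(f):
    """||f||_c^2 = int_0^T e^{-ct} |f(t)|^2 dt, left-Riemann discretization."""
    return float(np.sum(np.exp(-c * t) * f**2) * dt)

def Phi(f, x0=1.0):
    """Picard map for dX = lam*X dt:  (Phi f)(t) = x0 + lam * int_0^t f(s) ds."""
    integral = np.concatenate(([0.0], np.cumsum(f)[:-1])) * dt
    return x0 + lam * integral

rng = np.random.default_rng(2)
f, g = rng.normal(size=n), rng.normal(size=n)
ratio = weighted_norm_sq(Phi(f) - Phi(g)) / weighted_norm_sq(f - g)
print("contraction ratio:", ratio, "<=", lam**2 * T / c)

X = np.zeros(n)
for _ in range(40):                  # Picard iteration converges to x0 * e^{lam t}
    X = Phi(X)
print("max error vs exp:", np.max(np.abs(X - np.exp(lam * t))))
```

With $c=16 > \lambda^2 T = 4$ the ratio stays below $1/4$, and the iterates converge geometrically, exactly as in the proof.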
In the above discussion, $\mathcal{G}_t$ is a rather general sub-filtration of the Brownian filtration $\mathcal{F}^{W}_t$. From now on, we shall restrict ourselves to sub-filtrations $\mathcal{G}_t$ equal to the Brownian filtration generated by the first $r$ components of $W$ for some $r < m$. We rewrite (\ref{fo:cMkV}) as
\begin{equation}\label{fo:cMkV2}
dX_t=b(t,X_t,\mathcal{L}(X_t\vert \mathcal{G}^W_t))dt+\sigma (t,X_t,\mathcal{L}(X_t\vert \mathcal{G}^W_t))dW_t,
\end{equation}
and we expect that the solution of the SDE (\ref{fo:cMkV2}) is given by a deterministic functional of the Brownian paths. In order to prove this fact in a rigorous way, we need the following notion.
\begin{definition}
By a set-up we mean a 4-tuple $(\Omega, \mathcal{F}, \mathbb{P}, W)$ where $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space with an $m$-dimensional Wiener process $W$. We use $\mathcal{F}^W_t$ to denote the natural filtration generated by $W$ and $\mathcal{G}^W_t$ to denote the natural filtration generated by the first $r$ components of $W$. By the canonical set-up we mean $(\Omega^c, \mathcal{F}^c, \mathbb{W}, B)$, where $\Omega^c=C([0,T];\mathbb{R}^m)$, $\mathcal{F}^c$ is the Borel $\sigma$-field associated with the uniform topology, $\mathbb{W}$ is the Wiener measure and $B_t$ is the coordinate (marginal) projection.
\end{definition}
Proposition \ref{pMkV} basically states that the SDE (\ref{fo:cMkV2}) is uniquely solvable on any set-up, and in particular it is uniquely solvable on the canonical set-up. The solution on the canonical set-up, denoted by $X^c$, gives us a measurable functional from $C([0,T];\mathbb{R}^m)$ to $C([0,T];\mathbb{R}^n)$. Because of the important role played by this functional, in the following we use $\Phi$ (instead of $X^c$) to denote it.
\begin{lemma}\label{ltransform}
Let $\psi:C([0,T];\mathbb{R}^m) \rightarrow \mathbb{R}^n$ be $\mathcal{F}^B_t$-measurable, then we have
$$\mathcal{L}(\psi \vert \mathcal{G}^B_t)(W_\cdot)=\mathcal{L}(\psi(W_\cdot)\vert \mathcal{G}^W_t).$$
\end{lemma}
\begin{proof}
By the definition of conditional distributions, it suffices to prove that for all bounded measurable functions $f:\mathbb{R}^n \rightarrow \mathbb{R}^+$ we have
$$\mathbb{E}\left[f(\psi(W_\cdot))\vert \mathcal{G}^W_t\right]=\mathbb{E}\left[f(\psi)\vert \mathcal{G}^B_t\right](W_\cdot),$$
and by using the definition of conditional expectations the above equality can be easily proved.
\end{proof}
With the help of Lemma \ref{ltransform}, we can state and prove
\begin{proposition}\label{pfunctional}
On any set-up $(\Omega, \mathcal{F}, \mathbb{P}, W)$, the solution of (\ref{fo:cMkV2}) is given by
$$X_\cdot=\Phi(W_\cdot).$$
\end{proposition}
\begin{proof}
We are going to check directly that $\Phi(W_\cdot)$ is a solution of (\ref{fo:cMkV2}). By the definition of $\Phi$ as the solution of (\ref{fo:cMkV2}) on the canonical set-up, we have
$$
\Phi(\mathtt{w})_t=x_0+\int^t_0 b(s,\Phi(\mathtt{w})_s, \mathcal{L}(\Phi(\cdot)_s \vert \mathcal{G}^B_s)(\mathtt{w}))ds+\int^t_0 \sigma(s,\Phi(\mathtt{w})_s, \mathcal{L}(\Phi(\cdot)_s \vert \mathcal{G}^B_s)(\mathtt{w}))dB_s, \quad \mathbb{W}\text{-a.s.,}
$$
where $\mathtt{w}$ stands for a generic element in the canonical space $C([0,T];\mathbb{R}^m)$. By using Lemma \ref{ltransform} we thus have
$$
\Phi(W_\cdot)_t=x_0+\int^t_0 b(s,\Phi(W_\cdot)_s, \mathcal{L}(\Phi(W_\cdot)_s \vert \mathcal{G}^W_s))ds+\int^t_0 \sigma(s,\Phi(W_\cdot)_s, \mathcal{L}(\Phi(W_\cdot)_s \vert \mathcal{G}^W_s))dW_s, \quad \mathbb{P}\text{-a.s.,}
$$
which proves the desired result.
\end{proof}
\subsection{The Nonlinear Processes}
The limiting nonlinear processes associated with the particle system (\ref{fo:finitepoc}) are defined as the solution of
\begin{equation}
\label{fo:Nonlinear}
\begin{cases}
dX^0_t=b_0(t,X^0_t, \mathcal{L}(X^1_t\vert \mathcal{F}^0_t))dt+\sigma_0(t,X^0_t, \mathcal{L}(X^1_t\vert \mathcal{F}^0_t)) dW^0_t,\\
dX^i_t=b(t,X^i_t, \mathcal{L}(X^i_t\vert \mathcal{F}^0_t),X^0_t)dt+\sigma(t,X^i_t, \mathcal{L}(X^i_t\vert \mathcal{F}^0_t),X^0_t) dW^i_t, \quad i \geq 1,\\
X^0_0=x^0_0,\qquad
X^i_0=x_0, \quad i \geq 1.
\end{cases}
\end{equation}
Under the assumptions (A1.1)-(A1.3), the unique solvability of this system is ensured by Proposition \ref{pMkV}. Due to the strong symmetry among the processes $(X^i)_{i \geq 1}$, we first prove the following proposition.
\begin{proposition}
For all $i \geq 1$, the solution of (\ref{fo:Nonlinear}) solves the conditional McKean-Vlasov SDE
$$\begin{cases}
dX^0_t=b_0(t,X^0_t,\mathcal{L}(X^i_t\vert \mathcal{F}^0_t))dt+\sigma_0(t,X^0_t,\mathcal{L}(X^i_t\vert \mathcal{F}^0_t)) dW^0_t,\\
dX^i_t=b(t,X^i_t,\mathcal{L}(X^i_t\vert\mathcal{F}^0_t),X^0_t)dt+\sigma(t,X^i_t, \mathcal{L}(X^i_t\vert \mathcal{F}^0_t),X^0_t) dW^i_t,
\end{cases}$$
and for all fixed $t \in [0,T]$, the random variables $(X^i_t)_{i \geq 1}$ are $\mathcal{F}^0_t$-conditionally i.i.d..
\end{proposition}
\begin{proof}
This is an immediate consequence of Proposition \ref{pfunctional}.
\end{proof}
Now that the nonlinear processes are well-defined, in the next subsection we prove that these processes give the limiting behaviour of (\ref{fo:finitepoc}) when $N$ tends to infinity.
\subsection{Conditional Propagation of Chaos}
We extend the result of the unconditional theory to the conditional case involving the influence of a major player.
As in the classical case, the propagation of chaos holds in a strong pathwise sense.
\begin{theorem}\label{tpropagation}
There exists a constant $C$ such that
$$\max_{0 \leq i \leq N}\mathbb{E}[\sup _{0 \leq t \leq T}|X^{i,N}_t-X^i_t|^2] \leq CN^{-2/(d+4)},$$
where $C$ only depends on $T$, the Lipschitz constants of $b_0$ and $b$ and
$$\eta=\mathbb{E}\left[\int^T_0 \vert X^1_t\vert^{d+5}dt\right]$$
\end{theorem}
\begin{proof}
We first note that, by the SDEs satisfied by $X^0$ and $X^{0,N}$ and the Lipschitz conditions on the coefficients (we keep only the drift terms for brevity; the stochastic integral terms are treated in the same way),
$$\begin{aligned}
&|X^{0,N}_t-X^0_t|^2\\
= &\left(\int^t_0 b_0(s,X^{0,N}_s, \frac{1}{N}\sum^N_{j=1}\delta _{X^{j,N}_s})-b_0(s,X^0_s,\mu_s)ds\right)^2 \\
\leq &K\left(\int^t_0 |X^{0,N}_s-X^0_s|^2ds+ \int^t_0 W^2_2(\frac{1}{N}\sum^N_{j=1}\delta _{X^{j,N}_s},\frac{1}{N}\sum^N_{j=1}\delta_{X^j_s})ds\right.\\
&\left.+ \int^t_0 W^2_2(\frac{1}{N}\sum^N_{j=1}\delta_{X^j_s},\mu_s)ds\right)\\
\leq &K\left(\int^t_0 |X^{0,N}_s-X^0_s|^2ds+ \int^t_0 \frac{1}{N}\sum^N_{j=1}|X^{j,N}_s-X^j_s|^2ds\right.\\
&\left.+ \int^t_0 W^2_2(\frac{1}{N}\sum^N_{j=1}\delta_{X^j_s},\mu_s)ds\right).
\end{aligned}$$
Taking the supremum and then expectations on both sides, and using the exchangeability, we get
$$\begin{aligned}
&\mathbb{E}[\sup_{0 \leq s \leq t}|X^{0,N}_s-X^0_s|^2]\\
\leq &K\left(\int^t_0 \mathbb{E}[\sup_{0 \leq u \leq s}|X^{0,N}_u-X^0_ u|^2]ds+\int^t_0\mathbb{E}[|X^{1,N}_s-X^1_s|^2]ds\right.\\
&\left.+\int^t_0 \mathbb{E}[W^2_2(\frac{1}{N}\sum^N_{j=1}\delta _{X^j_s},\mu_s)]ds\right)\\
\leq &K\left(\int^t_0 \mathbb{E}[\sup_{0 \leq u \leq s}|X^{0,N}_u-X^0_u|^2]ds+\int^t_0\mathbb{E}[\sup_{0 \leq u \leq s}|X^{1,N}_u-X^1_u|^2]ds\right.\\
&\left.+\int^t_0 \mathbb{E}[W^2_2(\frac{1}{N}\sum^N_{j=1}\delta _{X^j_s},\mu_s)]ds\right).
\end{aligned}
$$
By following the above computation we can readily obtain the same type of estimate for $X^{1,N}-X^1$:
$$\begin{aligned}
&\mathbb{E}[\sup_{0 \leq s \leq t}|X^{1,N}_s-X^1_s|^2]
\leq K'\left(\int^t_0 \mathbb{E}[\sup_{0 \leq u \leq s}|X^{0,N}_u-X^0_u|^2]ds+\int^t_0\mathbb{E}[\sup_{0 \leq u \leq s}|X^{1,N}_u-X^1_u|^2]ds\right.\\
&\phantom{?????????????????????????????}\left.+\int^t_0 \mathbb{E}[W^2_2(\frac{1}{N}\sum^N_{j=1}\delta _{X^j_s},\mu_s)]ds\right).
\end{aligned}$$
By summing up the above two inequalities and using Gronwall's inequality, we get
$$\begin{aligned}
&\mathbb{E}[\sup_{0 \leq t \leq T}|X^{0,N}_t-X^0_t|^2]+\mathbb{E}[\sup_{0 \leq t \leq T}|X^{1,N}_t-X^1_t|^2]\\
\leq &K\int^T_0 \mathbb{E}[W^2_2(\frac{1}{N}\sum^N_{j=1}\delta _{X^j_t},\mu_t)]dt \leq K\mathbb{E}\left[\int^T_0 \vert X^1_t\vert^{d+5}dt\right] N^{-2/(d+4)},
\end{aligned}$$
where the second inequality comes from a direct application of Lemma~\ref{le:RR}, with the help of Lemma~\ref{lwasserstein}, and this proves the desired result.
\end{proof}
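The only non-Gronwall ingredient in the bound above is the rate of convergence of empirical measures in Wasserstein distance (Lemma \ref{le:RR}). The following one-dimensional experiment (illustrative only, and with a faster rate than the general $d$-dimensional $N^{-2/(d+4)}$ quoted in the theorem) exhibits the decay of the averaged squared distance between independent empirical measures as $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)

def w2_sq_empirical(x, y):
    # squared W_2 between two empirical measures with equally many atoms (1-D):
    # the optimal coupling is the monotone (sorted) matching
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def mean_w2_sq(N, reps=30):
    # average squared distance between two independent N-sample empirical
    # measures drawn from a standard Gaussian
    return np.mean([w2_sq_empirical(rng.normal(size=N), rng.normal(size=N))
                    for _ in range(reps)])

small, large = mean_w2_sq(50), mean_w2_sq(5000)
print(small, large)
```

The averaged squared distance shrinks markedly from $N=50$ to $N=5000$, which is the phenomenon the theorem quantifies uniformly over the common noise.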
\section{Appendix: A Maximum Principle for Conditional McKean-Vlasov Control Problems}
\label{se:Pontryagin}
In this last section, we establish a version of the sufficient part of the stochastic Pontryagin maximum principle for a type of conditional McKean-Vlasov control problem. In some sense these results are extensions of the results in \cite{CarmonaDelarue_ecp}, and we will refer the reader to \cite{CarmonaDelarue_ecp} for details and proofs. The setup is the following: $(\Omega, \mathcal{F},\mathbb{P})$ is a probability space, $(\mathcal{F}_t)$ is a filtration on $\Omega$ satisfying the usual conditions. $(\mathcal{G}_t)$ and $(\mathcal{H}_t)$ are two sub-filtrations of $(\mathcal{F}_t)$ also satisfying the usual conditions, and $(W_t)$ is an $n$-dimensional $\mathcal{F}_t$-Wiener process. We assume that the probability space $\Omega$ is standard. \\
\indent The controlled dynamics are given by
\begin{equation}
\label{fo:SDEB}
dX_t=b(t,X_t,\mathcal{L}(X_t \vert \mathcal{G}_t),u_t)dt+\sigma(t,X_t,\mathcal{L}(X_t \vert \mathcal{G}_t),u_t)dW_t,
\end{equation}
with initial condition $X_0 = x_0$, and the objective function to minimize is given by
$$
J(u)=\mathbb{E}\left[\int^T_0 f(t,X_t,\mathcal{L}(X_t \vert \mathcal{G}_t),u_t)dt+g(X_T,\mathcal{L}(X_T \vert \mathcal{G}_T))\right],
$$
where $X$ is $d$-dimensional and $u$ takes values in $U \subset \mathbb{R}^k$ which is convex. The set of admissible controls is the space $\HH^{2,k}(\cH_t,U)$ defined in \eqref{fo:H2d}. We shall use the assumptions:
\noindent (\textbf{A2.1}) For all $x\in \mathbb{R}^d$, $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ and $u \in U$, the function $[0,T] \ni t \mapsto (b,\sigma) (t,x,\mu,u)$
is square-integrable.\\
\noindent (\textbf{A2.2}) For all $t \in [0,T]$, $x,x' \in \mathbb{R}^d$, $\mu,\mu' \in \mathcal{P}_2(\mathbb{R}^d)$ and $u \in U$, we have
$$\vert b(t,x',\mu',u)-b(t,x,\mu,u)\vert+\vert \sigma(t,x',\mu',u)-\sigma(t,x,\mu,u)\vert \leq c(\vert x'-x\vert+W_2(\mu',\mu)).$$
\noindent (\textbf{A2.3}) The coefficient functions $b$, $\sigma$, $f$ and $g$ are differentiable with respect to $x$ and $\mu$.\\
We note that under (A2.1)-(A2.2), for every admissible control the controlled SDE (\ref{fo:SDEB}) has a unique solution, which is square-integrable. (A2.3) will be used in defining the adjoint processes.
\subsection{Hamiltonian and Adjoint Processes}
The Hamiltonian of the problem is defined as
$$
H(t,x,\mu,y,z,u)=\langle y,b(t,x,\mu,u)\rangle+\langle z, \sigma(t,x,\mu,u)\rangle+f(t,x,\mu,u).
$$
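As an illustration of these objects (this example is not in the text), consider a scalar model in which $\sigma$ is a constant, the drift is $b(t,x,\mu,u)=a_1 x+a_2\,\bar\mu+a_3 u$ with $\bar\mu:=\int_{\mathbb{R}}\xi\,d\mu(\xi)$, and the running cost is $f(t,x,\mu,u)=\frac{q}{2}x^2+\frac{r}{2}u^2$ with $q \geq 0$ and $r>0$. Then
$$
H(t,x,\mu,y,z,u)= y\,(a_1 x + a_2\,\bar\mu + a_3 u) + z\,\sigma + \frac{q}{2}x^2 + \frac{r}{2}u^2,
$$
which is jointly convex in $(x,\mu,u)$ and is minimized in $u$ at $\hat u = -\frac{a_3}{r}\,y$. Moreover $\partial_x H = a_1 y + q x$ and $\partial_\mu H(t,x,\mu,y,z,u)(v) = a_2\, y$ for all $v$, so that in this case the adjoint BSDE defined below becomes a linear BSDE with driver $a_1 Y_t + q X_t + a_2\,\mathbb{E}^{\mathcal{G}_t}[Y_t]$.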
Given an admissible control $u\in\HH^{2,k}(\cH_t,U)$, the associated adjoint equation is defined as the following BSDE:
\begin{equation}
\begin{cases}
\begin{aligned}
dY_t=&-\partial_x H(t,X_t,\mathcal{L}(X_t\vert \mathcal{G}_t),Y_t,Z_t,u_t)dt\\
&-\mathbb{E}^{\mathcal{G}_t}[\partial_\mu H(t,\tilde{X}_t,\mathcal{L}(\tilde{X}_t\vert\mathcal{G}_t),\tilde{Y}_t,\tilde{Z}_t,\tilde{u}_t)(X_t)]dt+Z_t dW_t,
\end{aligned}\\
Y_T=\partial_x g(X_T,\mathcal{L}(X_T \vert \mathcal{G}_T))+\mathbb{E}^{\mathcal{G}_T}[\partial_\mu g(\tilde{X}_T,\mathcal{L}(\tilde{X}_T\vert \mathcal{G}_T))(X_T)],
\end{cases}
\end{equation}
where $X=X^u$ denotes the state controlled by $u$, and whose dynamics are given by \eqref{fo:SDEB}. We refer the reader to \cite{CarmonaDelarue_ecp} for the definition of differentiability with respect to the measure argument. This BSDE is of the McKean-Vlasov type because of the presence of the conditional distributions of various $X_t$ in the coefficients and the terminal condition. However, standard fixed point arguments can be used to prove existence and uniqueness of a solution to these equations.
\subsection{Sufficient Pontryagin Maximum Principle}
The following theorem gives us a sufficient condition of optimality.
\begin{theorem}
On top of assumptions (A2.1-3), we assume that\\
\noindent (1) The function $\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \ni (x,\mu) \mapsto g(x,\mu)$ is convex.\\
\noindent (2) The function $\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \times U \ni (x,\mu,u) \mapsto H(t,x,\mu,Y_t,Z_t,u)$ is convex, $dt \otimes \mathbb{P}$-a.e.\\
\noindent (3) For any admissible control $u'$ we have the following integrability condition
\begin{equation}
\begin{aligned}
\mathbb{E}\left[\left(\int^T_0 \Vert \sigma(t,X'_t,\mathcal{L}(X'_t\vert \mathcal{G}_t),u'_t)\cdot Y_t\Vert^2 dt\right)^{\frac{1}{2}}\right]< \infty,\quad \mathbb{E}\left[\left(\int^T_0 \Vert X'_t\cdot Z_t\Vert^2 dt\right)^{\frac{1}{2}}\right]< \infty.
\end{aligned}
\end{equation}
Moreover, if
\begin{equation}\label{fo:BDG}
\mathbb{E}[H(t,X_t,\mathcal{L}(X_t\vert \mathcal{G}_t),Y_t,Z_t,u_t)\vert \mathcal{H}_t]=\inf_{u \in U} \mathbb{E}[H(t,X_t,\mathcal{L}(X_t\vert \mathcal{G}_t),Y_t,Z_t,u)\vert \mathcal{H}_t],
\end{equation}
then $(u_t)_{0 \leq t \leq T}$ is an optimal control of the conditional McKean-Vlasov control problem.
\end{theorem}
\begin{proof}
The various steps of the proof of Theorem 4.6 in \cite{CarmonaDelarue_ecp} can be followed \emph{mutatis mutandis} once we remark that the interchanges of variables made in Theorem 4.6 of \cite{CarmonaDelarue_ecp} when using independent copies can be performed in the same way in the present situation. Indeed, the justification for these interchanges was given at the end of Subsection \ref{sub:conditional}.
\end{proof}
One final observation is that a sufficient condition for the integrability condition (3) above is that, on top of (A2.1)-(A2.3), we have $Y \in \mathbb{S}^{2,d}$ and $Z \in \mathbb{H}^{2,d\times n}$; this is an easy consequence of the Burkholder-Davis-Gundy inequality.
\section{INTRODUCTION AND RESULTS}
\hspace{8mm} I shall report here about some of my results on the physics of black holes and the dynamics of fields in the vicinity of such objects, describing at the same time the Black Hole under its triple aspect of Scatterer, Absorber and Emitter of particles.
I shall first report about the Absorption. It appears in the concept of black hole itself, the gravitational field being so intense that even light cannot escape from it. Absorption is one of the properties that characterizes the black hole description in classical physics: black holes absorb waves but they cannot emit them. If a quantum description of perturbation fields is considered, black holes also emit particles. For a static black hole, the quantum particle emission rate $H(k)$ and the classical wave absorption cross section $\sigma_A(k)$ are related by Hawking's formula (1975, \cite{haw})
\begin{equation}
H(k) = {{\sigma_A(k)}\over{e^{8 \pi k M} - 1}} \; \; ,
\end{equation}
the factor relating them being planckian. Here $k$ and $M$ stand for the frequency of the waves and for the mass of the hole, respectively.
We see the role played by the absorption in the emission. In spite of the extensive literature discussing the interaction of waves with black holes, the absorption spectrum $\sigma_A(k)$ (and other scattering parameters), as well as their theoretical foundations, remained largely unknown. In fact, the complexity of the analytic solutions of the perturbation field equations made this problem very difficult.
We have studied in detail the absorption spectrum of the black hole and we have found for the total absorption cross section $\sigma_A(k)$ entering the Hawking formula a very simple expression (N. S\'{a}nchez, 1978 \cite{ns}), valid to very high accuracy over the entire range of $k$, namely
\begin{eqnarray} \label{sigma}
\sigma_A(k) = 27 \pi M^2 - 2\sqrt{2} M {{\sin{(2\sqrt{27} \pi k M)}\over{k}}} & , \hspace{0.5cm} & kM \: \geq 0.07 \\
\sigma_A(0) = 16 \pi M^2 {\mathrm{\hspace{4.7cm}}} & &\nonumber
\end{eqnarray}
The absorption spectrum presents, as a function of the frequency, a remarkable oscillatory behaviour characteristic of a diffraction pattern (Fig.2). It oscillates around its constant geometrical-optics value $ \sigma(\infty) = 27 \pi M^2$ with decreasing amplitude (as $ {1\over{(\sqrt{2} k M)}}$) and constant period ($ {2\over{3}} \sqrt{3} $). The value of $\sigma_A(0)$ is exactly $16 \pi M^2$; see below.
We have also calculated the Hawking radiation. It is only important in the interval $ 0 \leq k \leq {1\over{M}} $. The emission spectrum (Fig.1) does not show any of the interference oscillations characteristic of the absorption cross section, because the contribution of the S-wave dominates the Hawking radiation. The rapid decrease of the Planck factor for $ kM \geq 1 $ suppresses the contribution of higher partial waves.
Thus, for a black hole the emission follows a planckian spectrum, given by eq. (1), (Fig.1), and the absorption follows an oscillatory spectrum, given by eq.(2), (Fig.2).
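Both formulas are easy to evaluate numerically. The sketch below (in geometric units with $M=1$; the frequency grid and sample points are arbitrary choices) evaluates eq.~(2) on a grid to exhibit the oscillation of $\sigma_A$ around the geometrical-optics value $27\pi M^2$, and eq.~(1) to show the strong Planck suppression of the emission for $kM \gtrsim 1$:

```python
import math

M = 1.0  # black hole mass (geometric units)

def sigma_A(k):
    # eq. (2): valid to good accuracy for k*M >= 0.07
    return (27 * math.pi * M**2
            - 2 * math.sqrt(2) * M * math.sin(2 * math.sqrt(27) * math.pi * k * M) / k)

def H(k):
    # eq. (1): emission rate = absorption cross section times the Planck factor
    return sigma_A(k) / math.expm1(8 * math.pi * k * M)

ks = [0.07 + 0.001 * i for i in range(1931)]   # k*M from 0.07 to 2.0
sig = [sigma_A(k) for k in ks]
print(min(sig), 27 * math.pi, max(sig))        # oscillation around 27*pi*M^2
print(H(0.1), H(1.0))                          # Planck suppression at k*M ~ 1
```

On this grid $\sigma_A$ visibly crosses $27\pi M^2$ in both directions, while the emission rate drops by many orders of magnitude between $kM=0.1$ and $kM=1$.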
\begin{figure}[h]
\centering
\psfrag{k}{\bf {k}}
\psfrag{h}[r]{\bf{\it {H(k)}}$\ $}
\includegraphics[width=120mm, trim=0 170 0 150]{figura1.ps}
\caption[fig1]{EMISSION BY A BLACK HOLE} \label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\psfrag{eqi}{${\sigma(\infty)=27 \pi M^2}$}
\psfrag{eq}{${\sigma_{A}(k)}$}
\psfrag{k}{\bf {k}}
\psfrag{s}[br]{${\sigma(\infty)} \ $}
\includegraphics[width=120mm, trim=0 170 0 150]{figura2.ps}
\caption[Fig.II]{ABSORPTION BY A BLACK HOLE} \label{fig2}
\end{figure}
\begin{figure}[h]
\centering
\psfrag{x}{$\:$ }
\psfrag{y}{$\:$ }
\psfrag{k}{\bf {k}}
\psfrag{s}{\bf ${\sigma_{A}(k)} \ $}
\psfrag{eq}{\bf ${\sigma(\infty)=\pi R^2}$}
\includegraphics[width=120mm, trim=0 170 0 150]{figura3.ps}
\caption[Fig.III]{ABSORPTION BY A MATERIAL SPHERE WITH A COMPLEX REFRACTION INDEX} \label{fig3}
\end{figure}
It is interesting to compare the absorption by a black hole with that of other physical systems. Fig.3 shows the total absorption cross section for an ordinary material sphere with a complex refraction index. It is a monotonically increasing function of the frequency. It attains its geometrical optics limit without any oscillation.
Comparison of Fig.2 with Fig.3 shows the differences between the absorptive properties of a black hole and those of ordinary absorptive bodies, or of systems described by complex potentials (optical models). For a black hole, the presence of absorption processes is due to the non-hermitian character of the effective potential that describes the wave--black hole interaction \cite{ns77}. The effective Hamiltonian is non-hermitian, despite being real, because of its singularity at the origin ($ r = 0 $), as we have shown in ref.~\cite{ns77}, so that the absorption takes place only at the origin.
We have generalized to the black hole case the well known unitarity theorem (optical theorem) of elastic potential scattering theory, explicitly relating the presence of a non-zero absorption cross section to the existence of a singularity in the space-time \cite{ns77}. All these results allowed me to give a simple physical interpretation of the total absorption cross section $\sigma_A(k)$ in the context of the Fresnel-Kirchhoff diffraction theory. The oscillatory behaviour of $\sigma_A(k)$ is explained, to a good approximation, by the interference of the absorbed rays arriving at the origin through different optical paths \cite{ns}.
Usually, in scattering theory, absorption processes are related to complex (and non-singular) potentials. On the contrary, in the black hole case, the potential is real and singular at the origin. All partial absorption amplitudes have absolute maxima at the frequency $k = {3\over4} ({{\sqrt{3}}\over{M}})(l + {1\over{2}})$. By summing over all angular momenta, each absolute maximum of a partial absorption cross section produces a relative maximum in the total absorption spectrum, giving rise to the presence of oscillations.
It can be pointed out that, associated with the planckian spectrum, the black hole has a temperature equal to ${1\over{(8 \pi M)}}$. As far as the absorption spectrum is concerned, it is not possible to associate a refraction index to the black hole: for optical materials, the absorption takes place in the whole volume, whereas for the black hole it takes place only at the origin.
It is also interesting to calculate the angular distribution of the absorbed waves. To obtain it, one must also study the black hole as an elastic scatterer.
The distribution of scattered waves, as a function of the scattering angle $\theta$, has been computed over a wide range of frequencies \cite{ns77},\cite{ns2}. It presents a strong peak $(\sim {\theta^{-4}})$ in the forward direction, characteristic of long-range interactions, and a ``glory'' in the backward direction, characteristic of the presence of strongly attractive interactions at short distances. For intermediate $\theta$, it shows a complicated behaviour with peaks and drops that disappear only in the geometrical-optics limit.
The angular distribution of absorbed waves is shown in Fig.~\ref{fig4}. It is isotropic for low frequencies and gradually shows the features of a diffraction pattern as the frequency increases. It presents an absolute maximum in the forward direction, which grows and narrows as the frequency increases. In the geometrical-optics limit, this becomes a Dirac delta distribution. The analytic behaviour is expressed in terms of the Bessel function $ {\mathcal{J}}_{1} $, as given by eq.~(\ref{bessel}) below.
In the course of this research, we have developed accurate and useful computational methods based on the analytical resolution of the wave equation, which, in addition, have allowed us to determine the range of validity of different approximations for low and high frequencies made by other authors (Starobinsky, Sov. Phys. JETP {\bf 37}, 1, 1973; Unruh, Phys. Rev. {\bf D14}, 3251, 1976), respectively, and by ourselves \cite{ns76}. It follows that the analytical computation of elastic scattering parameters at low frequencies remains a rather open problem.
We have also obtained several properties concerning the scattering, absorption and emission parameters in a partial wave analysis. They are reported in references \cite{ns} and \cite{ns2}; some of them are also reported in references \cite{ns79} and \cite{fu}.
The work presented here has also a direct interest for the field and string quantization in curved space-times, related issues and other current problems. See the Conclusions Section at the end of this paper.
\begin{figure}[h!]
\centering
\psfrag{t}[b]{\bf $\theta$}
\psfrag{g}[r]{${\mid g(\theta) \mid}^2 \ \ \ \ $}
\psfrag{a}{$x_s = 0.1$}
\psfrag{b}{$x_s = 0.25$}
\psfrag{c}{$x_s = 0.5$}
\psfrag{d}[]{$\ \ \ \ \ \ \ \ \ x_s = 1.0$}
\psfrag{e}[]{$\ \ \ \ \ \ \ x_s = 2.0$}
\psfrag{i}{$\ $}
\includegraphics[width=145mm]{figura4.ps}
\caption[fig4]{ANGULAR DISTRIBUTION $\mid g(\theta) \mid ^2 $ OF ABSORBED WAVES\label{fig4}}
\end{figure}
\section{PARTIAL WAVE ANALYSIS}
{\hspace{8mm}} The partial scattering matrix is given by
\begin{displaymath}
S_l = e^{2i {\delta}_l}
\end{displaymath}
\begin{displaymath}
{\delta}_l = {\eta}_l + i {\beta}_l
\end{displaymath}
We have found (\cite{ns}, \cite{ns2}) that the real and imaginary parts of the Black Hole phase shifts $ \delta_l $ are {\underline{odd}} and {\underline{even}} functions of the frequency respectively:
\begin{eqnarray}
\eta_l (x_s) = - \eta_l(-x_s) & & \\
\beta_l(x_s) = \beta_l(-x_s) & , & x_s \equiv k r_s = 2 k M \nonumber
\end{eqnarray}
\pagebreak[4]
In terms of the phase shifts, the partial elastic and absorption cross sections are respectively given by:
\begin{eqnarray*}
\epsilon_l = {{\pi}\over{{x_s}^2}} (2 l + 1) (1 - 2 e^{-2 \beta_l}\cos{2 \eta_l} + e^{-4 \beta_l}) & & \mathrm{partial\; elastic \; cross \; section}\\
\sigma_l = {{\pi}\over{{x_s}^2}} (2 l + 1) (1 - e^{- 4 \beta_l}) \mathrm{\hspace{2.5cm}} & & \mathrm{partial \; absorption \; cross \; section}
\end{eqnarray*}
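In code, these expressions read as follows (the sample phase-shift values are arbitrary illustrations; the elastic formula is written in the standard unitarity form $\frac{\pi}{x_s^2}(2l+1)\vert 1-S_l\vert^2$ with $S_l = e^{2i\delta_l}$):

```python
import math

def partial_cross_sections(l, eta, beta, x_s):
    """Partial elastic and absorption cross sections (in units of r_s^2)
    from the phase shift delta_l = eta + i*beta, with S_l = exp(2i*delta_l)."""
    pref = math.pi / x_s**2 * (2 * l + 1)
    # |1 - S_l|^2 = 1 - 2 e^{-2 beta} cos(2 eta) + e^{-4 beta}
    elastic = pref * (1 - 2 * math.exp(-2 * beta) * math.cos(2 * eta) + math.exp(-4 * beta))
    # 1 - |S_l|^2 = 1 - e^{-4 beta}
    absorption = pref * (1 - math.exp(-4 * beta))
    return elastic, absorption

# evaluation at an arbitrary sample point
eps, sig = partial_cross_sections(1, 0.3, 0.5, 2.0)
print(eps, sig)
```

A real phase shift ($\beta_l = 0$) gives zero absorption, and the absorption never exceeds the unitarity bound $\frac{\pi}{x_s^2}(2l+1)$, as the formulas require.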
\subsection{Absorption cross sections}
{\hspace{8mm}} For all values of the angular momentum $ l $, the imaginary part of the phase shifts, $\beta_l(x_s)$, is a monotonically increasing function of $x_s$.
All $\beta_l(x_s)$ are zero at $x_s = 0$ and tend to infinity linearly with $x_s$ as $x_s$ increases to infinity.
\addcontentsline{toc}{subsubsection}{Low frequencies}
\noindent {\bf \underline{Low frequencies}}: For low frequencies, $(x_s \ll 1)$, $\beta_l(x_s)$ behaves as:
\begin{eqnarray}
{\beta_l}^{exact}(x_s \ll 1) = C_l {x_s}^{2 l + 2} & \mathrm{\hspace{1.5cm}} & x_s \ll 1 \; , \; x_s \equiv k \: r_s \nonumber \\
C_l = {{2^{2 l} (l !)^6}\over{{[(2l)!]}^2 {[(2 l + 1)!]^2}}} & &
\end{eqnarray}
We have found values of $C_l$ in agreement with Starobinsky's formulae (Starobinsky, Sov. Phys. JETP {\bf 37} (1973) 1) for $x_s = 0$ and $l = 0$. However, Starobinsky's approximation is accurate only in a small neighborhood of $x_s = 0$. For example, the ratio
\begin{eqnarray*}
{{{\beta_0}^{exact}(x_s) - C_0 {x_s}^2}\over{{\beta_0}^{exact}(x_s)}} \equiv X_0
\end{eqnarray*}
varies as
\begin{eqnarray*}
0.15 \leq X_0 \leq 0.5 & \mathrm{\hspace{7mm} for \hspace{7mm}} & 0.05 \leq x_s \leq 0.1
\end{eqnarray*}
For $ l = 1 $:
\begin{eqnarray*}
0.18 \leq X_1 \leq 0.6 & \mathrm{\hspace{7mm} for \hspace{7mm}} & 0.05 \leq x_s \leq 0.1
\end{eqnarray*}
For small $x_s$, the inaccuracy of Starobinsky's approximation increases with $ l $.
At $x_s = 0$, all absorption cross sections $\sigma_l(x_s)$ are zero, except for $l=0$. For the S-wave:
\begin{equation}
\sigma_0(0) = 4 \pi
\end{equation}
The presence of a pole at $x_s = 0$ for $l \geq 1$ in the Jost function of the black hole (\cite{ns}, \cite{ns2}) means that waves with very small frequency and $ l \neq 0 $ are repelled out of the vicinity of the black hole.
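The two low-frequency statements above are mutually consistent: for the S-wave, $\sigma_0 = \frac{\pi}{x_s^2}(1-e^{-4\beta_0}) \to 4\pi C_0$ as $x_s \to 0$, so eq.~(5) requires $C_0 = 1$; and, in units of $r_s^2$ with $r_s = 2M$, $4\pi$ is exactly the $\sigma_A(0) = 16\pi M^2$ quoted in the Introduction. A quick check of the coefficient formula (4) (this snippet is not from the paper):

```python
import math

def C(l):
    # eq. (4): low-frequency coefficient in beta_l ~ C_l * x_s^(2l+2)
    return (2**(2 * l) * math.factorial(l)**6
            / (math.factorial(2 * l)**2 * math.factorial(2 * l + 1)**2))

# C_0 = 1 reproduces sigma_0(0) = 4*pi*C_0 = 4*pi (in units of r_s^2);
# C_1 = 1/36 shows the rapid suppression of higher partial waves at low x_s.
print(C(0), C(1))
```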
\addcontentsline{toc}{subsubsection}{High frequencies}
\noindent {\bf \underline{High frequencies:}} For high frequencies, $(x_s \gg 1)$, the imaginary part of the phase shifts are given by
\begin{eqnarray}
{\beta_l}^{exact}(x_s) = {\beta_l}^{as}(x_s) + O({{l + {1/2}}\over{{x_s}^{3/2}}}) & \mathrm{\hspace{1cm}} & x_s \gg 1
\end{eqnarray}
${\beta_l}^{as} $ is the asymptotic expression derived with the DWBA (Distorted Wave Born Approximation):
\begin{eqnarray}
{\beta_l}^{as}(x_s) = \pi x_s - {1\over{4}} \ln 2 - {1\over{16}}{\left({{\pi}\over{x_s}}\right)}^{1/2} - {{\pi}\over{2 \sqrt{2}}} {{{(l + {1/2})}^2}\over{x_s}}
\end{eqnarray}
There is very good agreement between ${\beta_l}^{exact}$ and ${\beta_l}^{as}$. For example:
\begin{eqnarray*}
{\beta_0}^{exact}(2) = 5.79 & , & {\beta_0}^{as}(2) = 5.89 \\
{\beta_1}^{exact}(2) = 4.98 & , & {\beta_1}^{as}(2) = 4.78
\end{eqnarray*}
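The quoted numerical agreement can be reproduced directly from the asymptotic formula (8):

```python
import math

def beta_as(l, x_s):
    # eq. (8): DWBA asymptotic form of the imaginary part of the phase shift
    return (math.pi * x_s
            - 0.25 * math.log(2)
            - (1 / 16) * math.sqrt(math.pi / x_s)
            - (math.pi / (2 * math.sqrt(2))) * (l + 0.5)**2 / x_s)

print(round(beta_as(0, 2.0), 2), round(beta_as(1, 2.0), 2))
```

Evaluating at $x_s = 2$ recovers the values $5.89$ and $4.78$ given in the text for ${\beta_0}^{as}$ and ${\beta_1}^{as}$.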
For all $x_s$, $ \beta_l(x_s) $ are described in ref. \cite{ns}.
The differential absorption cross section per unit solid angle $d \Omega$ for the Black Hole
\begin{eqnarray*}
{{d \sigma_A(\theta)}\over{d \Omega}} = {|\sum_{l=0}^{\infty} (2 l + 1) g_l(x_s) P_l(\cos \theta)|}^2
\end{eqnarray*}
is expressed in terms of the Bessel function $J_1$ as \cite{ns2}:
\begin{equation}
\label{bessel}
{{d \sigma_A(\theta)}\over{d \Omega}} \ \ \stackrel{x_s \gg 1 \; \theta \rightarrow 0}{\simeq} \ \ {{{27}\over{4}} {{{x_s}^2}\over{{\theta}^2}} {[J_1({{\sqrt{27}}\over{2}} x_s \theta)]}^2}
\end{equation}
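Eq.~(\ref{bessel}) is an Airy-type diffraction pattern. The sketch below evaluates it using the integral representation $J_1(z)=\frac{1}{\pi}\int_0^\pi\cos(\tau - z\sin\tau)\,d\tau$ (so as to stay self-contained), and checks the forward limit $d\sigma_A/d\Omega \to \frac{729}{64}\,x_s^4$ as $\theta \to 0$ together with the first dark ring at $\frac{\sqrt{27}}{2}x_s\theta \simeq 3.832$; the chosen $x_s$ is an arbitrary illustration:

```python
import math

def J1(z, n=2000):
    # Bessel J_1 via its integral representation (midpoint quadrature)
    h = math.pi / n
    s = sum(math.cos((i + 0.5) * h - z * math.sin((i + 0.5) * h)) for i in range(n))
    return s * h / math.pi

def dsigma_dOmega(x_s, theta):
    # eq. (9): forward diffraction pattern of the absorbed waves (units of r_s^2)
    z = math.sqrt(27) / 2 * x_s * theta
    return 27 / 4 * x_s**2 / theta**2 * J1(z)**2

x_s = 5.0
# small-angle limit: J1(z) ~ z/2, so dsigma/dOmega -> (729/64) * x_s^4
print(dsigma_dOmega(x_s, 1e-6), 729 / 64 * x_s**4)
# first dark ring where J1 vanishes: sqrt(27)/2 * x_s * theta = 3.8317
theta_dark = 2 * 3.8317 / (math.sqrt(27) * x_s)
print(dsigma_dOmega(x_s, theta_dark))
```

The central maximum grows like $x_s^4$ while the dark rings move toward $\theta = 0$ as $x_s$ increases, which is the narrowing forward peak described above.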
\vspace{-2mm}
\subsection{Elastic Scattering}
\vspace{-2mm}
{\hspace{8mm}} For all angular momenta $l \neq 0$, the real part of the phase shifts, $\eta_l(x_s)$, as a function of $x_s$ has three zeros in the range $ 0 \leq x_s \leq \infty $ \cite{ns2}.
For the S-wave, $\eta_0(x_s)$ has two zeros.
We denote as ${x_s}^{(i)}(l)$ the frequencies at which $\eta_l(x_s)$ vanishes ($ i = 1, 2, 3 $ stands for the first, second or third zero, respectively):
\begin{eqnarray*}
{\eta_l}({x_s}^{(i)}(l)) \ = \ 0 & {,} & \ \ i = 1, 2, 3.
\end{eqnarray*}
The first zero of $\eta_l$ is at $x_s \; = \; 0$.
\pagebreak[3]
\addcontentsline{toc}{subsubsection}{Low frequencies}
\noindent {\bf \underline{Low frequencies:}} For low $x_s$, $x_s \ll 1$:
\begin{eqnarray}
\eta_l(x_s) \ \stackrel{x_s \ll 1}{\simeq} \ a_l \: x_s & ,{\mathrm{\hspace{1cm}}} & {\mathrm{for \ all \ l}}
\end{eqnarray}
The detailed behaviour is discussed in ref. \cite{ns2}. Usually, the presence of long-range interactions produces a divergence in the low-frequency behaviour of the phase shifts. This is not the case here, since the Coulomb interaction vanishes when the wave energy tends to zero.
In the interval $ 0 \leq x_s \leq {x_s}^{(1)}(l) $ :
\begin{eqnarray*}
\eta_l < 0 \ \ & & \ \ {\mathrm{for \; low \;}} l \neq 0 \\
\eta_0 > 0 \ \ & & \ \ {\mathrm{for\;}} l = 0
\end{eqnarray*}
and in both cases $\eta_l$ is very nearly equal to zero.
For increasing $ l $, $ \eta_l $ becomes more and more negative according to the general variation $ \Delta \eta_l(x_s) $, due to variations $ \Delta V_{eff} $ of the effective potential:
\begin{displaymath}
\Delta \eta_l = - {1\over{k}} \int_{0}^{\infty} {{dr}\over{1-{{r_s}\over{r}}}} {({{R_l}\over{r}})}^2 \Delta V_{eff}
\end{displaymath}
where $R_l$ is the solution to the radial wave equation and
\begin{eqnarray*}
V_{eff}(x^*)=(1-{{x_s}\over{x}})\left[{{x_s}\over{x^3}} + {{l(l+1)}\over{x^2}}\right] \ & , & x^* = x + x_s \ln(1-{{x_s}\over{x}}) \; \; .
\end{eqnarray*}
\addcontentsline{toc}{subsubsection}{High frequencies}
\noindent {\bf \underline{High frequencies:}} For large $x_s$, $x_s \gg 1$:
\begin{eqnarray} \label{dieza}
{\eta_l}^{exact}(x_s) = {\eta_l}^{as}(x_s) + O({({{1}\over{x_s}})}^{3/2}) \ & , & \ \ l \ll x_s
\end{eqnarray}
where
\begin{eqnarray}
{\eta_l}^{as}(x_s) & = & {{\delta_c}\over{2}} - x_s + {{\pi}\over{2}}(l + {3\over{2}}) + {1\over{16}}{({{\pi}\over{x_s}})}^{1/2} + O({{1}\over{{x_s}^{3/2}}}) \label{diezb} \\
\delta_c & = & Im \ln \Gamma({1\over{2}} - 2 i\: x_s) \nonumber \\
& = & - 2 x_s \ln({{2 x_s}\over{e}}) + O({{1}\over{x_s}}).\nonumber
\end{eqnarray}
${\eta_l}^{as}$ is the asymptotic formula derived by us in the DWBA (Distorted Wave Born Approximation) scheme \cite{ns76}.
\noindent ${\eta_l}^{as}$ and ${\eta_l}^{exact}$ are in good agreement. For example,
\begin{eqnarray*}
{\eta_0}^{exact}(2.3) = - 1.2 & , & {\eta_0}^{as}(2.3) = - 1.18 \\
{\eta_0}^{exact}(2.5) = - 1.6 & , & {\eta_0}^{as}(2.5) = - 1.60
\end{eqnarray*}
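This agreement is easy to reproduce numerically. The sketch below (an illustration added here, not from the original text) checks the leading asymptotic form of $\delta_c$ quoted above against a direct evaluation of $\mathrm{Im} \ln \Gamma({1\over{2}} - 2i\,x_s)$ using SciPy's complex log-gamma:

```python
import numpy as np
from scipy.special import loggamma

def delta_c_exact(x_s):
    # delta_c = Im ln Gamma(1/2 - 2 i x_s), evaluated directly
    # (scipy's loggamma is the analytic continuation, so Im is not wrapped)
    return loggamma(0.5 - 2j * x_s).imag

def delta_c_asymptotic(x_s):
    # leading term quoted in the text: -2 x_s ln(2 x_s / e)
    return -2.0 * x_s * np.log(2.0 * x_s / np.e)

for x_s in (2.0, 5.0, 10.0):
    print(x_s, delta_c_exact(x_s), delta_c_asymptotic(x_s))
```

The residual shrinks as $x_s$ grows, consistent with the quoted $O(1/x_s)$ correction.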
We have
\begin{eqnarray*}
\eta_l(x_s \rightarrow \infty) \rightarrow - x_s \ln(2 x_s) \rightarrow - \infty & , & {\mathrm{for\; fixed \;}} l {\mathrm{\; and \;}} x_s \rightarrow \infty \\
\eta_l(x_s) \rightarrow - x_s \ln{(l + {1\over{2}})} \mathrm{\hspace{2cm}} & , & {\mathrm{for \; fixed \;}} x_s {\mathrm{\; and \;}} l \rightarrow \infty
\end{eqnarray*}
\addcontentsline{toc}{subsubsection}{High Angular Momenta}
\noindent {\bf \underline{High Angular Momenta:}} High-order partial waves give an important contribution to the elastic scattering amplitude. This fact led us to study $\eta_l(x_s)$ for $l \geq 2$ and fixed $x_s$. We found:
\begin{eqnarray*}
\eta_l(x_s) \; = \; {\eta_l}^{coul}(x_s) + \alpha_0(x_s) + \Delta_l(x_s) \ \ & , & \ \ l \; \gg \; x_s
\end{eqnarray*}
where
\begin{eqnarray}
{\eta_l}^{coul}(x_s) & = & \arg{ \Gamma(l+1-ix_s)} \\
\Delta_l(x_s) & = & {{\alpha_1(x_s)}\over{(l+{1\over{2}})}} + {{\alpha_2(x_s)}\over{{(l+{1\over{2}})}^2}} + {{\alpha_3(x_s)}\over{{(l+{1\over{2}})}^3}} + O({1\over{{(l+{1\over{2}})}^4}}) \nonumber \\
\alpha_l(x_s) & \sim & {(x_s)}^{l+1} \nonumber \\
\alpha_0(x_s) & \simeq & {1\over{5}}x_s \; \; , \; \; \; \;
\alpha_1(x_s) \; \; \simeq \; \; {3\over{2}}{x_s}^2 \nonumber
\end{eqnarray}
As could be expected, the leading behaviour of $\eta_l(x_s)$ for large $ l $ is the same as that of the Coulomb phase shift $ {\eta_l}^{coul} $.
The difference
\begin{eqnarray*}
[{\eta_l}^{exact}(x_s) - {\eta_l}^{coul}(x_s)] = x_s f(b)
\end{eqnarray*}
can be written as $ x_s $ times a function $ f(b) $ of the impact parameter $ b = {{(l+{1\over{2}})}\over{x_s}} $, which is analytic at $ b = 0 $.
The fact that $ \eta_l $ does not vanish in the infinite energy limit is a consequence of the strong attractive character of the black hole interaction at short distances (the term of the type $ - {{1}\over{{(r-r_s)}^2}} ({k^2 {r_s}^2} + {{{r_s}^2}\over{4 r^2}}) $ in the radial wave equation).
\section{DIFFERENTIAL ELASTIC CROSS SECTION}
{\hspace{8mm}} The scattering amplitude, whose squared modulus gives the differential elastic cross section, is given by
\begin{eqnarray*}
f(\theta) = \sum_{l=0}^{\infty} {{(2l+1)}\over{2ix_s}}(\exp{(-2\beta_l)}\exp{(2i\eta_l)}-1)P_l(\cos{\theta})
\end{eqnarray*}
For all values of $x_s$ and small $\theta$, the differential elastic cross section behaves as
\begin{equation}
{\mid f(\theta) \mid}^2 = {{4}\over{{\theta}^4}} - {{C_1(x_s)}\over{{\theta}^3}} - {{4/3}\over{{\theta}^2}} + {{C_2(x_s)}\over{\theta}} + C_3(x_s) + O(\theta)
\end{equation}
The expressions for $ C_i(x_s), \; i = 1, 2 , 3 $ are given by \cite{ns2}:
\begin{eqnarray*}
C_1(x_s) & = & {{8 {\alpha}_1}\over{x_s}} \cos{(2 {\gamma}_{(-)})} \\
C_2(x_s) & = & \left (1 + {4\over{x_s}} - {{15}\over{4 {x_s}^2}}\right) 2 {\alpha}_1 \sin{(2 {\gamma}_{(-)})} + \\
& + & \left(1 + {1\over{4 {x_s}^2}}\right) {\alpha}_1 \sin{(2 {\gamma}_{(+)})} + \\
& - & 4 {{\alpha}_1}^3 \cos{(2 {\gamma}_{(-)})} \\
C_3(x_s) & = & {{{x_s}^2}\over{18}} + {4\over3}{{{\alpha}_1}\over{{x_s}^2}} + {{363}\over{72}} + {{97}\over{288}}{1\over{{x_s}^2}} + \\
& - & {7\over{6}}{{{\alpha}_2}\over{{x_s}^3}} + {{{{\alpha}_1}^4 - {{\alpha}_2}^2}\over{{x_s}^4}}
\end{eqnarray*}
where
\begin{eqnarray*}
{\gamma}_{(\pm)} & = & \arg{\left(\Gamma({1\over{2}}+{i\:{x_s}}) \pm \Gamma({i\:{x_s}})\right )}
\end{eqnarray*}
\begin{eqnarray*}
{\alpha}_0 \sim {1\over{5}} x_s {\mathrm{\hspace{3mm}, \hspace{7mm}}} &
{\alpha}_1 \sim {3\over{2}} {x_s}^2 {\mathrm{\hspace{3mm}, \hspace{7mm}}} &
{\alpha}_2 \sim {7\over{4}} {x_s}^3
\end{eqnarray*}
For intermediate angles, ${\mid f(\theta) \mid}^2$ has a complex behaviour with peaks and drops which disappear at the geometrical-optics limit.
\section{HAWKING EMISSION RATES}
{\hspace{8mm}} The energy emitted by the black hole in each mode of frequency $ k $ and angular momentum $ l $ is given by the Hawking's formula \cite{haw}
\begin{eqnarray*}
dH_l(k) = {{{\mathcal{P}}_l(k)}\over{(\exp{(4 \pi k r_s)} - 1)}} {{(2l + 1)}\over{\pi}}k\;dk
\end{eqnarray*}
Let us recall our first analytic expression for the partial absorption rate $ {\mathcal{P}}_l(k)$ (S\'{a}nchez, 1976) \cite{ns76}. This is a formula for high frequencies $(k r_s \gg 1)$ obtained within the DWBA scheme:
\begin{eqnarray}
{\mathcal{P}}_l(k) \; \; \stackrel{kr_s \gg 1}{=} \; \; {{1 - \exp{(-4 \pi k r_s)}}\over{1 + \exp{(- 4 \pi k r_s)}}} & & {\mathrm{for \;}} l \ll k r_s \nonumber\\
\vspace{-6pt} \\
\vspace{-18pt}{\mathcal{P}}_l(k) \; \; \stackrel{kr_s \gg 1}{=} \; \; {1\over{1 + \exp{\{(2l+1)\pi[1-{{27k^2{r_s}^2}\over{{(2l+1)}^2}}]\}}}} & & {\mathrm{for \;}} l \gg 1 \nonumber
\end{eqnarray}
In terms of this formula, $dH_l(k)$ can be expressed very simply by
\begin{eqnarray}
dH_l(k) & \stackrel{k r_s \gg 1}{=} &
{{1\over{(\exp{(4\pi k r_s)} + 1)}} {{2l +1}\over{2 \pi}} k dk} {\mathrm{\hspace{4.5cm}}} l \ll k r_s \nonumber \\
\\
dH_l(k) & \stackrel{k r_s \gg 1}{=} &
{{(2l+1) k dk}\over{(\exp{(4 \pi k r_s)} - 1)\{1+\exp{[(2l+1)\pi(1 - {{27k^2{r_s}^2}\over{{(2l+1)}^2}})]}\}}} {\mathrm{\hspace{5mm}}} l \gg 1 \nonumber
\end{eqnarray}
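For orientation (a sketch added here, not part of the original), the low-$l$ formula above shows directly why the emission is cut off beyond $k \sim 1/r_s$: the Fermi-like factor suppresses $dH_l/dk$ exponentially. Units with $r_s = 1$ are assumed:

```python
import numpy as np

def dH_dk_low_l(k, r_s=1.0, l=0):
    # High-frequency (k r_s >> 1), low-l Hawking emission integrand:
    # dH_l/dk = (2l+1) k / (2 pi (exp(4 pi k r_s) + 1))
    return (2 * l + 1) * k / (2.0 * np.pi * (np.exp(4.0 * np.pi * k * r_s) + 1.0))

# Doubling k r_s from 1 to 2 suppresses the integrand by ~ exp(4 pi)/2 ~ 1.4e5
print(dH_dk_low_l(1.0) / dH_dk_low_l(2.0))
```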
In order to compute the total emission $H(k)$, the total absorption cross section $\sigma_A(k)$, eq.\ref{sigma}, and the set of properties discussed in the preceding section are needed.
By using the absorption cross section eq.\ref{sigma} and the partial wave results reported in the preceding sections, we have calculated the Hawking emission for a wide range of frequency and angular momentum (S\'anchez, 1978) \cite{ns}.
The total emission spectrum as a function of $x_s$ is plotted in Fig.\ref{fig1}. It does not show any of the interference oscillations characteristic of the total absorption cross section eq.\ref{sigma}, Fig.\ref{fig2}.\pagebreak[4] This is related to the fact that the S-wave contribution predominates in Hawking radiation. For example, the maxima of $H_l(k)$ for $ l = 0, 1, 2 $ are in the ratio
\begin{displaymath}
1 \ : \ {1\over{11}} \ : \ {1\over{453}}.
\end{displaymath}
For angular momenta higher than two, $H_l(k)$ is extremely small.
The spectrum of total emission has only one peak, following closely the S-wave absorption cross section behaviour. Its maximum lies at the same point as the maximum of $\sigma_0$ $({x_s}^{max} = 0.23)$.
The peaks of $\sigma_1$ and $\sigma_2$ turn out to have no influence on $H(k)$.
In conclusion, Hawking emission is only important in the frequency range
\begin{equation}
0 \ \leq \ k \ \leq {1\over{r_s}}
\end{equation}
\section{REMARKS ON APPROXIMATIONS}
\hspace{8mm} The analytic computation of the real part of the phase shift for low frequencies is a rather non-trivial problem.
Although the power-series expansion around $x_s = 0$ for the regular solution is known \cite{ocho}, the phase shifts cannot be obtained directly from it, because the asymptotic limit $r\rightarrow \infty$ cannot be taken term by term. Thus, the phase shifts are usually calculated by a standard procedure in which the radial equation is solved approximately in two or more regions of the positive real axis. By matching these solutions in the overlapping regions, approximate expressions for the phase shifts are found. By this procedure, several authors (Starobinsky \cite{nueve} and Unruh \cite{diez}) have obtained approximate expressions for the imaginary part of the phase shifts.
In Starobinsky's approximation, effects connected to the Coulomb tail of the interaction \underline{have not} been taken into account. With his approximation, it is possible to find for the real part of the phase shift a linear behaviour in $x_s$ $(\eta_l \sim a_l x_s)$, but inaccurate values for the coefficient $a_l$ are obtained.
In the context of a massive field, Unruh included the Coulomb interaction. However, his approximation is not sufficiently accurate to give the real part of the phase shift at least for $l=0$ in the zero-mass case.\pagebreak[3]
The discrepancy between Unruh's approximation and the exact calculation can be explained as follows: in Unruh's approach, for $r \gg r_s$, all terms of order higher than ${({{r_s}\over{r}})^2}$ are neglected in the exact radial wave equation written as
\begin{equation} \label{adob}
{{d^2}\over{dr^2}}(\xi R_l) + \left [ k^2 + {{2 k^2 r_s}\over{r-r_s}} +
{{k^2{r_s}^2}\over{{(r-r_s)}^2}} + {1\over{4}} {{{r_s}^2}\over{r^2{(r-r_s)}^2}} - {{l(l+1)}\over{r{(r-r_s)}}} \right ] (\xi R_l) = 0
\end{equation}
where
\begin{displaymath}
\xi = {1\over{k{[r(r-r_s)]}^{1\over{2}}}}.
\end{displaymath}
In Unruh's approximation, the exact solutions (Coulomb functions) of the approximate wave equation for $r \gg r_s$ are also used in the region $kr \ll 1$. However, the term ${{{1\over{4}}{r_s}^2}\over{r^2{(r-r_s)}^2}}$ is much smaller than the Coulomb term, only for $k^2 r^2 \gg {1\over{8}}{{r_s}\over{r}}$. Then the following double inequality must hold:
\begin{equation} \label{doble}
1 \ \ \gg \ \ (kr) \ \ \gg \ \ {{{(kr_s)}^{1\over{3}}}\over{2}}
\end{equation}
\hspace{8mm} Thus, in his approximation, one cannot expect to obtain good results for $x_s$ which are not extremely small. [If one considers that the symbol $\gg$ indicates a difference of one order of magnitude, inequality (\ref{doble}) implies $x_s \leq 10^{-6}$].
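The bracketed estimate can be checked with one line of arithmetic (an illustration added here, not from the original). Reading $\gg$ as a factor of ten, the window of inequality (\ref{doble}) is non-empty only when $10\,{x_s}^{1/3}/2 \leq 1/10$, i.e. $x_s \leq 8\times10^{-6}$, consistent with the quoted order of magnitude:

```python
def window_exists(x_s, factor=10.0):
    # eq. (doble): 1 >> k r >> (x_s)^(1/3) / 2, with ">>" read as "factor" apart
    lower = factor * x_s ** (1.0 / 3.0) / 2.0   # k r must exceed this
    upper = 1.0 / factor                        # and stay below this
    return lower <= upper

print(window_exists(1e-6), window_exists(1e-3))  # True False
```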
Finally, concerning the ${({{r_s}\over{r}})}^2$ order term ${{k^2\;{r_s}^2}\over{{(r-r_s)}^2}}$ in Eq.(\ref{adob}), it cannot be neglected obviously for $l=0$.
In a different approach (Persides, \cite{ocho}), it has been shown that the Wronskian of the exact radial wave equation can be expanded in a double power series of $x_s$ and $x_s \ln x_s$, although explicit expressions for the coefficients are not known. In this way, calculating the phase shifts, the leading behaviour obtained for the real part $\eta_l$ at low frequencies shows a linear and a cubic term in $x_s$, plus $O({x_s}^3 \ln x_s)$.
For large values of $x_s$, ($x_s > {x_s}^{(2)}(l)$, where ${x_s}^{(2)}(l)$ is the frequency at which the second zero of $\eta_l$ occurs), it follows from our results \cite{ns2} that $\eta_l$ is negative and a monotonically decreasing function of $x_s$. Here ${({x_s}^{(2)})}^2 \gg V_{eff}(max)$ and $\eta_l$ tends to $- \infty$ as $x_s$ increases to $\infty$.
For large $x_s$, we find good agreement between our exact values, eq.(\ref{dieza}), and the asymptotic formula, eq.(\ref{diezb}), derived by us \cite{ns76} in the DWBA (Distorted Wave Born Approximation).
\section{CONCLUSIONS}
\hspace{8mm} Accurate and powerful computational methods, based on the analytic
resolution of the wave equation in the black hole background, developed by
the present author allow one to obtain the total absorption spectrum of the
Black Hole, as well as phase shifts and cross sections (elastic and
inelastic) for a wide range of energy and angular momentum, the angular
distribution of absorbed and scattered waves, and the Hawking emission
rates.
The total absorption spectrum of the Black Hole is known exactly.
The absorption spectrum as a function of the frequency shows a
{\bf{remarkable oscillatory}}
behaviour characteristic of a diffraction pattern. The absorption cross
section oscillates around its optical geometric limit with decreasing
amplitude and almost constant period. Such an oscillatory absorption pattern is
a unique distinctive feature of the Black Hole. Absorption by ordinary bodies,
complex refraction indices or optical models does not present these features.
For ordinary absorptive bodies, the absorption takes place in the whole medium
while for the Black Hole it takes place only at the origin ($ r = 0 $).
For the Black Hole, the effective Hamiltonian describing the wave-black hole
interaction is non-hermitian, despite being real, due to its singularity
at the origin $(r = 0)$. The well known unitarity (optical)
theorem of the potential scattering theory can be generalized to the Black
Hole case, explicitly relating the presence of a non zero absorption cross
section to the existence of a singularity $(r=0)$ in the space time.
All partial absorption amplitudes have absolute maxima at the frequency
$k = {{3 \sqrt{3}}\over{4 M}} (l + {{1}\over{2}})$. By summing up all angular
momenta, each absolute partial wave maximum, produces a relative maximum
in the total cross section giving rise to the presence of oscillations.
All these results allow one to {\bf understand} and {\bf reproduce} the exact
absorption spectrum in terms of the Fresnel-Kirchhoff diffraction theory.
The oscillatory behaviour of ${\sigma}_A(k)$ is due to the interference of the
absorbed rays arriving at the origin $(r = 0)$ through different optical
paths.
The semiclassical WKB Approximation for the Scattering by Black Holes only gives
information about the high-frequency $(kr_s \gg 1)$ (and high angular momentum)
elastic (real part of the) phase shifts, but fails to describe well the
absorption properties (and the low partial wave angular momenta).
DWBA (Distorted Wave Born Approximation) for the Black Hole, as it was
implemented by the present author more than twenty years ago \cite{ns76}, is
an accurate approximation at high frequencies $(k r_s \gg 1)$ to
compute the absorption (imaginary part of the) phase shifts and rates, both for
high $(l \gg k r_s)$ and low $(l \ll k r_s)$ angular momenta.
Approximate expressions (whatever they may be), valid only for very high or for
low frequencies, {\bf{do not}} allow one to find the remarkable
{\bf{oscillatory}} behaviour of the total absorption cross section
of the Black Hole as a function of frequency.
The knowledge
of the highly non-trivial total absorption spectrum of the Black Hole required
the development
of computational methods \cite{ns} more powerful and accurate than the commonly used approximations.
The angular distributions of absorbed and elastically scattered waves have also been computed with these methods.
The conceptual general features of the Black Hole Absorption spectrum will
survive for higher-dimensional ($D > 4$) generic Black Holes, including
charge and angular momentum. They will also be present for Black Hole
background solutions of the
low-energy effective field equations of string theories and D-branes.
The Absorption Cross Section is a classical concept. It is exactly known and
understood in terms of classical physics (classical perturbations around
fixed backgrounds). (Although, of course, it is possible to rederive and
compute these magnitudes in several different ways and with different techniques).
An increasing number of papers \cite{once} has been devoted to the
computation of absorption cross sections (``grey body factors'') of Black
Holes, whether D-dimensional, ordinary, D-braneous,
stringy, extremal or non-extremal. All these papers \cite{once} deal
with approximate expressions for the partial wave
cross sections. In all these papers \cite{once} the fundamental remarkable
features of the Total Absorption Spectrum of the Black Hole are overlooked.
\section{Introduction}
Large area surveys such as the Two Degree Field Galaxy Redshift Survey
(2dFGRS; Colless et~al.~2001) and Sloan Digital Sky Survey (SDSS;
Abazajian et~al.~2009)
have measured positions and redshifts of millions of galaxies.
These measurements allow us to map the 3D structure
of the nearby universe\footnote{A visual impression is given in this video:
http://vimeo.com/4169279}.
Galaxies are not randomly distributed in space. They form a complex
cosmic network of galaxy clusters, groups, filaments, isolated
field galaxies, and voids, which are large regions of space that
are almost devoid of galaxies. The current understanding of the distribution of
galaxies and
structure formation in the universe is based on the theory of gravitational
instability.
Very early density fluctuations became the ``seeds'' of cosmic
structure. These have been
observed as small temperature fluctuations
($\delta T/T \sim 5\times 10^{-5}$) in the
cosmic microwave background with the Cosmic
Background Explorer (Smoot et~al.~1992).
The small primordial matter density enhancements have progressively
grown through gravitational collapse and created the complex network
seen in the distribution of matter in the later universe.
During a galaxy's lifetime different physical processes, which are still
not well
understood, can trigger a mass flow onto the central super-massive black hole
(SMBH).
In this phase of galaxy evolution, the galaxy is observed as an Active Galactic
Nucleus (AGN).
After several million years, when the SMBH has consumed its accretion reservoir, the
central
engine shuts down, and the object is again observed as a normal galaxy. The AGN
phase
is thought to be a repeating special epoch in the process of galaxy
evolution.
In recent years it has become evident that both fundamental galaxy and AGN parameters
change significantly between
low ($z < 0.3$) and intermediate redshifts ($z\sim 1-2$), e.g., global star
formation density (Hopkins \& Beacom~2006)
and accretion rate onto SMBHs. For example, the contribution
to black hole growth has shifted from high luminosity objects at high
redshifts to low luminosity objects at low redshifts
(AGN ``downsizing''; e.g., Hasinger et~al.~2005).
It has also become clear that SMBH masses follow a
tight relation with the mass or velocity dispersion of the stars in
galactic bulges (Magorrian et~al.~1998; Gebhardt et~al.~2000;
Ferrarese \& Merritt~2000).
These observational correlations motivate a co-evolution scenario for
galaxies and AGNs and provide evidence of a possible interaction or
feedback mechanism between the SMBH and the host galaxy.
The interpretation of this correlation, i.e., whether and to what
extent the
AGN influences its host galaxy, is controversially debated (e.g., Jahnke \& Macci\'o~2011).
Since AGNs are generally much brighter than (inactive) galaxies, one major
advantage of
AGN large-scale (i.e., larger than the size of a galaxy) clustering measurements over galaxy clustering measurements
is that they allow the study of the matter distribution in the universe out to higher
redshifts. At these very high redshifts, it becomes challenging and observationally
expensive to detect galaxies in sufficient numbers. Furthermore, as the distribution
of AGNs and galaxies in the universe depends on galaxy evolution physics,
large-scale
clustering measurements are an independent method to identify and constrain the
physical
processes that turn an inactive galaxy into an AGN and are responsible for
AGN/galaxy co-evolution.
In the last decade the scientific interest in AGN large-scale
clustering measurements has increased significantly. As only a very small
fraction of
galaxies contain an AGN ($\sim$1\%), the remaining and dominating
challenge in deriving
physical constraints based on AGN clustering measurements is the
relatively small sample size compared to galaxy clustering measurements. However, this
situation will change entirely in the next decade when several
different
surveys come online that are expected to identify millions of AGN
over $\sim$80\% of cosmic time.
We therefore review the current broad-line AGN clustering measurements.
A general introduction to clustering measurements is given in Sections~2 \& 3. In Section~4
we briefly summarize how AGN clustering measurements have evolved and discuss
recent developments. In Section~5, we discuss the outlook for
AGN clustering measurements in future upcoming projects.
\section{Understanding Observed Clustering Properties}
In our current understanding, the observed galaxy and AGN spatial
distribution in the universe -- i.e., large-scale clustering --
is caused by the interplay between cosmology and the physics of
AGN/galaxy formation and evolution.
In the commonly assumed standard cosmological model, Lambda-CDM, the
universe is currently composed
of $\sim$70\% dark energy, $\sim$25\% dark matter (DM),
and $\sim$5\% baryonic matter (Larsen et~al.~2011). Dark
matter plays a key role in structure formation as it is the dominant
form of matter in the universe.
Baryonic matter settles in the deep gravitational potentials created by dark
matter, the so-called dark matter halos (DMHs).
The term ``halo'' commonly refers to a bound, gravitationally collapsed
dark matter structure which is approximately in dynamical equilibrium.
The parameters of the cosmological model
determine how the DMHs are distributed in space (Fig.~\ref{fig1},
left panel, A-branch) as a function of the DMH mass and cosmic time. Different
cosmological
models lead to different properties of the DMH population.
\begin{myfigure}
\centerline{\resizebox{83mm}{!}{\includegraphics{krumpe_FraWS_2013_01_Fig01.ps}}}
\caption{Current conceptual model of the physical processes involved in
large-scale galaxy and AGN clustering. {\it Left:} The two branches (A and B)
in the diagram show the primary causes of clustering: (A) the properties of the
dark matter halo population, which are
based on the cosmological model, and (B) the physics
of complex processes in galaxy formation and evolution, which lead to a
distinct baryonic population within collapsed dark matter halos. Figure adapted
from Weinberg 2002.
{\it Right:} Illustration of the spatial distribution of galaxies within a dark
matter halo. The picture maps the galaxy cluster
Abell 1689, where an optical image showing
the galaxy cluster members is superimposed with
the distribution of dark matter shown in purple.
Credit: NASA, ESA, E. Jullo, P. Natarajan, and J-P. Kneib.}
\label{fig1}
\end{myfigure}
Inside DMHs, or within halos inside another DMH, called sub-halos, the baryonic
gas will
radiatively cool. If the gas reservoir is large enough, star and galaxy
formation will be initiated. The gas can also be accreted onto the
SMBH in the center of the galaxy.
On scales comparable to the size of the galaxy, the AGN might heat and/or eject the
surrounding gas, preventing star formation, and eventually removing the gas supply of
the AGN itself. All the galaxy evolution processes described here determine how
galaxies
and AGNs are distributed within DMHs (Fig.~\ref{fig1}, left panel, B-branch).
This distribution of AGN and galaxies within DMHs (Fig.~\ref{fig1}, right panel)
is described by the halo occupation distribution (HOD; Peacock \& Smith~2000). In
addition to the
spatial distribution of AGN/galaxies in DMHs, the HOD describes the probability
distributions of the number of AGN/galaxies per DMH of a certain mass and the
velocity distribution of AGN/galaxies within a DMH.
The interplay between cosmology and galaxy evolution causes the observed large-scale
clustering of galaxies and AGNs. The goal of AGN and galaxy clustering measurements
is to reverse the causal arrows in the Fig.~\ref{fig1} (left panel), working backwards
from the data to the galaxy \& AGN halo occupation distribution and DMH population
properties, in order to finally draw conclusions about galaxy and
AGN physics, as well as to constrain fundamental cosmological parameters.
\section{Clustering Measurements}
The most common statistical estimator for large-scale clustering is the two-point
correlation function (2PCF; Peebles~1980) $\xi(r)$.
This quantity measures the spatial clustering of a class of object
{\it in excess} of a Poisson distribution. In practice,
$\xi(r)$ is
obtained by counting pairs of objects with a given separation and comparing them to
the
number of pairs in a random sample with the same separation. Different correlation
estimators
are described in the literature (e.g., Davis \& Peebles~1983; Landy \& Szalay~1993).
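As an illustrative sketch (not part of the original text), the Landy \& Szalay estimator $\xi = (DD - 2DR + RR)/RR$ can be implemented by brute-force pair counting; the point counts, box size, and bins below are arbitrary, and an unclustered (Poisson) ``data'' set is used so that $\xi$ should scatter around zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_counts(a, b, edges):
    # Brute-force histogram of all pairwise 3D separations between sets a and b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=edges)[0].astype(float)

def landy_szalay(data, rand, edges):
    # xi(r) = (DD - 2 DR + RR) / RR, with pair counts normalized by the
    # number of ordered pairs; self-pairs at d = 0 fall outside the bins
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1))
    rr = pair_counts(rand, rand, edges) / (nr * (nr - 1))
    dr = pair_counts(data, rand, edges) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

box = 100.0                                   # side length, e.g. Mpc/h
data = rng.uniform(0.0, box, size=(300, 3))   # unclustered mock "data"
rand = rng.uniform(0.0, box, size=(1000, 3))  # random comparison sample
edges = np.linspace(5.0, 40.0, 8)
xi = landy_szalay(data, rand, edges)          # ~ 0 in every bin, up to noise
```

Real analyses use tree-based pair counters and much larger random samples; the $O(N^2)$ version above is only meant to make the estimator explicit.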
The large-scale clustering of a given class of object can be quantified by
computing the angular (2D) correlation function, which is the projection
onto the plane of the sky, or with the spatial (3D) correlation function,
which requires redshift information for each object.
Obtaining spectra to measure the 3D correlation function is observationally expensive, which is the main reason why some studies have
had to rely on angular correlation functions.
However, 3D correlation function measurements are by far
preferable, since the deprojection (Limber~1954)
of the angular correlation function introduces large systematic uncertainties. Despite
these large caveats and the already moderately low uncertainties of current 3D
correlation
measurements, the use of angular correlation functions might still be justified when
exploring
a new parameter space. However, the next generation of multi-object spectrographs, e.g.,
4MOST (de Jong et~al.~2012),
BigBOSS (Schlegel et~al.~2012), and WEAVE (Dalton et~al.~2012), will make it far easier
to simultaneously
obtain thousands of spectra over wide fields. Hence, measurements of the 3D
correlation function will soon become ubiquitous.
As one measures line-of-sight distances for 3D correlation functions from
redshifts,
measurements of $\xi(r)$ are affected by redshift-space distortions due to peculiar
velocities of the objects within DMHs. To remove this effect, $\xi(r)$ is
commonly extracted by counting pairs on a 2D grid
of separations where $r_{p}$ is perpendicular to the line of sight and
$\pi$ is along the line of sight.
Then, integrating along the $\pi$-direction leads to the projected
correlation function, $w_p(r_p)$, which is free of redshift distortions.
The 3D correlation function $\xi(r)$ can be recovered from the projected
correlation function (Davis \& Peebles~1983).
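For a power-law $\xi(r) = (r/r_0)^{-\gamma}$ the Davis \& Peebles inversion is analytic, which allows a quick numerical cross-check (a sketch with illustrative values $r_0 = 5$, $\gamma = 1.8$, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

r0, g = 5.0, 1.8   # illustrative power-law parameters, xi(r) = (r/r0)^(-g)

def wp_numeric(rp):
    # w_p(r_p) = 2 * int_0^inf xi( sqrt(r_p^2 + y^2) ) dy  (y along the LOS)
    integrand = lambda y: (np.hypot(rp, y) / r0) ** (-g)
    return 2.0 * quad(integrand, 0.0, np.inf)[0]

def wp_analytic(rp):
    # closed form for a power law (Davis & Peebles 1983):
    # w_p = r_p (r0/r_p)^g * sqrt(pi) * Gamma((g-1)/2) / Gamma(g/2)
    return rp * (r0 / rp) ** g * np.sqrt(np.pi) * Gamma((g - 1) / 2) / Gamma(g / 2)

print(wp_numeric(2.0), wp_analytic(2.0))  # both ~ 38.3
```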
The resulting signal can be approximated by a power law where the
largest clustering strength is found at small scales. At large separations of
$>$50 Mpc $h^{-1}$ the distribution of objects in the universe becomes nearly
indistinguishable from a randomly-distributed sample. Only on comoving scales
of $\sim$100 Mpc $h^{-1}$ can a weak positive signal be detected (e.g.,
Eisenstein et~al.~2005; Cole et~al.~2005)
which is caused by baryonic acoustic oscillations (BAO) in the early universe.
The spatial clustering of observable objects does not precisely mirror the
clustering
of matter in the universe. In general, the large-scale density distribution
of an object class is a function
of the underlying dark matter density. This relation of how an object class
traces the
underlying dark matter density is quantified using the linear bias
parameter $b$.
This
contrast enhancement factor is the ratio of the mean overdensity of
the observable object class, the so-called tracer set, to the mean overdensity of
the dark matter field, defined as
$
b= (\delta \rho/\langle \rho \rangle)_{\rm tracer} / (\delta \rho/\langle \rho \rangle)_{\rm DM},
$
where $\delta \rho = \rho - \langle \rho \rangle$, $\rho$ is the local mass density,
and $\langle \rho \rangle$ is the mean mass density on that scale.
In terms of the correlation function, the bias parameter is defined as the
square root of the 2PCF ratio of the tracer set to the dark matter
field: $b = \sqrt{\xi_{\rm tracer} / \xi_{\rm DM}}$.
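For instance (an illustrative sketch with hypothetical correlation lengths), if the tracer set and the dark matter both follow power-law correlation functions with the same slope $\gamma$, the bias is scale-independent, $b = (r_{0,\mathrm{tracer}}/r_{0,\mathrm{DM}})^{\gamma/2}$:

```python
import numpy as np

gamma = 1.8
r0_tracer, r0_dm = 6.0, 4.0      # hypothetical correlation lengths (Mpc/h)
r = np.logspace(0.0, 1.5, 20)    # scales from 1 to ~32 Mpc/h

xi_tracer = (r / r0_tracer) ** (-gamma)
xi_dm = (r / r0_dm) ** (-gamma)
b = np.sqrt(xi_tracer / xi_dm)   # identical in every bin for equal slopes

print(b[0], (r0_tracer / r0_dm) ** (gamma / 2))  # both ~ 1.44
```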
Rare objects which form only
in the highest density peaks of the mass distribution have a large bias parameter and
consequently a large clustering strength.
Theoretical studies of DMHs (e.g., Mo \& White~1996; Sheth et~al.~2001) have established
a solid understanding of the bias parameter of DMHs with respect to various
parameters. Comparing the bias parameter of an object class with that of
DMHs in a certain mass range at the same cosmological epoch
allows one to determine the DMH mass which hosts the object class of interest.
A halo may contain substructures, but the DMH mass inferred from the linear bias
parameter refers to the single, largest (parent) halo in the context of HOD models.
\subsection{Why are we interested in AGN clustering?}
AGN clustering measurements explore different physics on different scales.
At scales up to the typical size of a DMH ($\sim 1-2$ Mpc), clustering
measurements are sensitive to the physics of galaxy/AGN formation and evolution.
Constraints on the galaxy/AGN merger rate and the radial distribution of these
objects within DMHs can be derived. On scales larger than the size of DMHs, the
large-scale
clustering is sensitive to the underlying DM density field, which essentially
depends only on cosmological parameters. Consequently, with only one
measurement both galaxy/AGN co-evolution and cosmology can be studied.
Future high precision AGN clustering measurements have the potential to accurately
establish missing fundamental parameters that
describe when AGN activity and feedback occur as a function of luminosity
and redshift. Since they will precisely determine how DMHs
are populated by AGN host galaxies, these measurements will also improve our
theoretical understanding of galaxy/AGN formation and evolution by enabling comparisons
to galaxy measurements and cosmological simulations.
Here, we elaborate on some (though not all) of the critical observational
constraints which are provided by AGN clustering measurements:
\begin{itemize}
\item{{\it AGN host galaxy} --
AGN clustering measurements determine the host
galaxy type in a statistical sense for the entire AGN sample, regardless of the AGN's
luminosity.
Comparing the observed AGN clustering to very accurate galaxy clustering
measurements available for
different galaxy subclasses (morphological, spectral type, luminosity) constrains
the AGN
host galaxy type.}
\vspace*{-0.2cm}
\item{{\it External (mergers) vs. internal triggering} --
Different theoretical models (e.g., Fry~1996; Sheth et~al.~2001; Shen~2009)
of how AGNs are triggered predict very different trends of the large-scale clustering
with AGN parameters such as luminosity and redshift. Moderately precise
AGN clustering measurements allow us to distinguish between these different models
(Allevato et~al.~2011).
Furthermore, the validity of different models can be tested for different
luminosities and
cosmological epochs.}
\vspace*{-0.2cm}
\item{{\it Fundamental galaxy/AGN physics} --
AGN large-scale clustering dependences on various AGN properties could potentially be key in providing independent constraints on galaxy/AGN physics.
Comparing the observed AGN clustering properties with
results from simulations with different inputs for galaxy/AGN physics
could identify the physics that links
the evolution of AGNs and galaxies.}
\vspace*{-0.2cm}
\item{{\it AGN Lifetimes} -- AGN clustering measurements allow us to estimate
the AGN lifetime at different cosmological epochs (Martini \& Weinberg~2001).
The underlying idea is that rare, massive DMHs are highly biased tracers of the
underlying mass distribution, while more common objects are less strongly biased
(Kaiser~1984). Therefore, if AGNs are heavily biased
they must be in rare, massive DMHs. The ratio of the AGN number density to
the host halo number density is a measure of the ``duty cycle'', i.e., the fraction
of the time that the object spends in the AGN phase.}
\vspace*{-0.11cm}
\item{{\it Cosmological parameters} --
As AGN clustering measurements extend to much higher redshifts than galaxy clustering measurements, they
can be used to derive constraints on cosmological parameters back to the time
of the formation of the first AGNs. Currently, the detection of the BAO imprint on clustering measurements at different cosmological epochs
is of great interest to constrain the equation of state of dark energy.
AGN large-scale clustering measurements with very large AGN samples
can detect the BAO signal in a redshift range that is not accessible with galaxy
clustering
measurements.}
\end{itemize}
\section{AGN Clustering Measurements: Past and Present}
Until the 1980s, studies had to primarily rely on small, optically-selected,
very luminous AGN samples for clustering measurements. Then
the main question was whether AGNs are randomly distributed in the universe
(e.g., Bolton et~al.~1976; Setti \& Woltjer~1977).
The extremely small sample sizes did not allow clustering measurements
at scales below $\sim$50 Mpc, where a significant deviation from a random
distribution is present. Thanks to the launch of major
X-ray missions in the 1980s and 1990s such as Einstein (Giacconi et~al.~1979)
and ROSAT (Truemper~1993), much larger
AGN samples enabled successful detections of the AGN large-scale clustering signal.
A detailed review on the history of X-ray AGN clustering measurements is given
in Cappelluti et~al.~(2012).
Although AGN clustering measurements are far from being as precise as galaxy
clustering measurements, some general findings have emerged in recent years.
Interestingly,
over all of the studied cosmic time ($z\sim 0-3$)
broad-line AGNs occupy DMH masses of
log $(M_{\rm DMH}/[h^{-1} M_{\odot}])\sim 12.0-13.5$ and therefore cluster like
groups of galaxies. More detailed information about the current
picture of broad-line AGN clustering is presented in Section~6.6 of
Krumpe et al.~(2012).
Some puzzling questions remain. For example, at $z<0.5$ a weak dependence of
the clustering strength on X-ray luminosity is found (in that
luminous X-ray AGNs
cluster more strongly than their low-luminosity counterparts, e.g.,
Krumpe et~al.~2010; Cappelluti et~al.~2010; Shen et~al.~2012).
However, at high redshift it seems that high luminosity,
optically-selected AGNs cluster less strongly
than moderately-luminous X-ray selected AGNs.
Whether this finding
is due to differences in the AGN populations, an intrinsic
luminosity dependence to the clustering amplitude, or an observational
bias is not yet understood.
We note that different studies have used different relations to translate
the measured linear bias parameter to DMH mass, as well as different $\sigma_8$
values.
Therefore, instead of blindly comparing the derived DMH mass, re-calculating the
masses based on
the same linear bias to DMH mass relation and the same $\sigma_8$ is essential when
comparing measurements in the literature.
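This re-scaling can be sketched numerically. The sketch below is a toy illustration only: it assumes the simplified Mo \& White (1996) bias relation $b = 1 + (\nu^2-1)/\delta_c$, a toy power-law $\sigma(M)$ whose slope and normalization mass are illustrative choices (not taken from any of the studies above), and it simply re-scales $\sigma(M)$ linearly with $\sigma_8$:

```python
import numpy as np

DELTA_C = 1.686   # linear collapse threshold
M8 = 1.8e14       # h^-1 Msun in an 8 h^-1 Mpc sphere for Omega_m ~ 0.3 (assumed)
GAMMA = 0.25      # toy slope of sigma(M) ~ M^-GAMMA (illustrative)

def sigma_of_mass(mass, sigma8):
    """Toy rms linear density fluctuation on the scale of `mass`."""
    return sigma8 * (mass / M8) ** (-GAMMA)

def bias_of_mass(mass, sigma8):
    """Mo & White (1996) linear halo bias, b = 1 + (nu^2 - 1)/delta_c."""
    nu = DELTA_C / sigma_of_mass(mass, sigma8)
    return 1.0 + (nu ** 2 - 1.0) / DELTA_C

def mass_from_bias(bias, sigma8):
    """Invert the relation: measured linear bias -> typical DMH mass."""
    nu = np.sqrt(1.0 + DELTA_C * (bias - 1.0))
    sigma = DELTA_C / nu
    return M8 * (sigma8 / sigma) ** (1.0 / GAMMA)

# The same measured bias maps onto different DMH masses for different sigma8:
for s8 in (0.8, 0.9):
    log_m = np.log10(mass_from_bias(2.0, s8))
    print(f"sigma8 = {s8}: b = 2.0 -> log(M_DMH/[h^-1 Msun]) = {log_m:.2f}")
```

Even in this toy setup, the inferred mass shifts by a few tenths of a dex between $\sigma_8=0.8$ and $0.9$, which is why a common $\sigma_8$ and bias relation are needed before DMH masses from different studies can be compared.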
\subsection{Recent Developments}
In the last few years several new approaches have been used to improve
the precision of AGN clustering measurements or their interpretation.
We briefly summarize these developments below.
\vspace*{0.1cm}
\hspace*{-0.5cm}{\it Cross-correlation measurements: }\\
Auto-correlation function (ACF) measurements of broad-line AGNs often
have large uncertainties due to the low number of
objects. Especially at low redshifts, large galaxy samples
with spectroscopic redshifts are
frequently available. In such cases, the statistical uncertainties of AGN
clustering measurements can be reduced significantly by computing the
cross-correlation function (CCF). The CCF measures
the clustering of objects between two different object classes
(e.g., broad-line AGNs and galaxies), while the ACF measures the spatial
clustering of
objects in the same sample (e.g., galaxies or AGNs). CCFs have been used before to
study the dependence of the AGN clustering signal on different AGN
parameters. However,
these values could not be compared to other studies as the CCFs also depend
on the galaxy populations used and their clustering strength. Only
recently has an alternative approach (Coil et~al.~2009) allowed the comparison
of the results from different studies by inferring the AGN ACF from the
measured AGN CCF and ACF of the galaxy tracer set. The basic idea of this
method, which is now frequently used (e.g.,
Krumpe et~al.~2010, 2012; Mountrichas \& Georgakakis~2012; Shen et~al.~2012),
is that both populations trace the same underlying DM density field.
\vspace*{0.15cm}
\hspace*{-0.5cm}{\it Photometric redshift samples: }\\
Large galaxy tracer sets with spectroscopic redshifts
are not available at all redshifts. Some studies therefore rely on
photometric redshifts. The impact of the large uncertainties and
catastrophic outliers when using photometric redshifts
is commonly not considered, but accounting for it is
essential. The use of the full probability density function (PDF) of the
photometric
redshift fit, instead of a single value for the photometric redshift, has been
used in some studies (e.g., Mountrichas et~al.~2013).
Here photometric galaxy samples
are used as tracer sets to derive the CCF. Each object is given a
weight for the probability that it is actually located at a certain redshift based on
the fit to the photometric data.
\begin{myfigure}
\centerline{\resizebox{80mm}{!}{\includegraphics{krumpe_FraWS_2013_01_Fig02.ps}}}
\caption{ In the conceptual model of the HOD approach, there are
two contributions to the pairs that
account for the measured correlation function.
Pairs of objects (black stars) can either be located within the same
DMH (pink filled circles), such that their measured separation contributes to
the 1-halo term (red solid line in the large DMH), or can reside
in different DMHs, such that
their separations (green dotted line) contribute to the
2-halo term.}
\label{hod}
\end{myfigure}
\hspace*{-0.5cm}{\it AGN Halo Occupation Distribution Modeling: }\\
Instead of deriving only mean DMH masses from the linear bias parameter, HOD
modeling of the correlation function allows the determination of the full
distribution of AGN as a function of DMH mass. The derived distribution
also connects observations and simulations as it provides recipes for how
to populate DMHs with observable objects.
In the HOD approach, the measured 2PCF is modeled as the sum of contributions
from pairs within individual DMHs (Fig.~\ref{hod}; 1-halo term) and in
different DMHs
(2-halo term). The superposition of both components describes the shape of
the observed 2PCF better than a simple power law.
In the HOD description, a DMH can be populated by one central AGN/galaxy and
by additional objects in the same DMH, so-called satellite AGN/galaxies.
Applying the HOD approach to the 2PCF allows one to determine, e.g., the minimum DMH mass
needed to host the object class of interest, the fraction of objects in satellites,
and the number of satellites as a function of DMH mass.
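A common functional parameterization of this central/satellite description (e.g., the Zheng et~al.~2005 form; the parameter values below are purely illustrative and are not fits to any AGN sample) can be sketched as:

```python
from math import erf

def n_central(log_m, log_m_min=12.0, sigma_logm=0.3):
    """Mean number of central objects per halo: a softened step in log M."""
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma_logm))

def n_satellite(log_m, log_m1=13.3, alpha=1.0, log_m_min=12.0, sigma_logm=0.3):
    """Mean number of satellites: a power law in halo mass above the threshold."""
    return n_central(log_m, log_m_min, sigma_logm) * 10.0 ** (alpha * (log_m - log_m1))

for log_m in (11.5, 12.5, 13.5, 14.5):
    print(f"log M = {log_m}: <N_cen> = {n_central(log_m):.2f}, "
          f"<N_sat> = {n_satellite(log_m):.2f}")
```

In this picture the 1-halo term of the 2PCF is built from central--satellite and satellite--satellite pairs within the same DMH, while the 2-halo term follows from pairs in distinct halos weighted by the halo bias.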
Instead of using the derived AGN ACF from CCF measurements,
Miyaji et~al.~(2011) apply the HOD model directly to high precision AGN/galaxy CCFs
and achieve additional constraints on the AGN/galaxy co-evolution and AGN physics.
\hspace*{-0.5cm}{\it Theoretical predictions: }\\
Only recently have several different theoretical models been published which
try to explain
the observed AGN clustering with different physical approaches
(e.g, Fanidakis et~al.~2013a; H\"utsi et~al.~2013).
The key to observationally distinguishing between these models is their different
predictions for the dependence of the clustering on different AGN parameters.
In addition to theoretical models of the observed clustering, other very recently
developed models predict the halo occupation distribution of AGN at different
redshifts, e.g., Chatterjee et~al.~(2012).
The major challenge presented by all of these models is the urgent need for
observational constraints
with higher precision than can be provided with current AGN samples.
In the future, progress in AGN physics and AGN/galaxy evolution
will be achieved through a close interaction between state-of-the-art cosmological
simulations and observational constraints from high precision clustering measurements.
Simulations which incorporate different physical processes will
lead to different predictions of the AGN and galaxy large-scale clustering trends and
their halo occupation distributions.
Observational studies will then identify the correct model and consequently
the actual underlying physical processes.
\section{The future of AGN clustering measurements}
AGN clustering measurements from several upcoming projects will significantly
extend our knowledge of the growth of
cosmic structure and will also provide a promising avenue towards new
discoveries in the fields of galaxy/AGN co-evolution, AGN triggering,
and cosmology. For example, eROSITA (Predehl et~al.~2010; launch
2014/2015)
will perform several all-sky X-ray surveys.
After four years the combined survey is expected to
contain approximately three million AGNs.
HETDEX (Hill et~al.~2008) will use an array of integral-field spectrographs to
provide a total sample of $\sim$20,000 AGNs without any pre-selection over an
area of
$\sim$420 deg$^2$. The SDSS-IV/eBOSS and BigBOSS projects build
upon the SDSS-III/BOSS project
and will use fiber-fed spectrographs. Over an area of 14,000 deg$^2$,
they will observe roughly one million QSOs at $1.8 < z < 3.5$. In addition to these projects,
there will be other major enterprises such as LSST (LSST collaboration~2009) and
Pan-STARRS (Kaiser et~al.~2002) which will detect several million AGNs (though
these surveys currently lack dedicated spectroscopic follow-up programs).
In the following we will focus on eROSITA, as this mission will compile the largest
AGN sample ever observed. Figure~\ref{fig2} shows that
at $z>0.4$ eROSITA AGN detections will outnumber current galaxy samples
with spectroscopic redshifts.
Using a large number of AGNs that continuously cover redshift space
will allow us (in contrast to galaxy samples) to measure the distribution of
matter with high precision over the last $\sim$11 Gyr of cosmic time.
To fully exploit the eROSITA potential for AGN clustering
measurements, a massive spectroscopic follow-up program is needed. Several
spectroscopic multi-object programs and instruments are currently planned or are in
an early construction phase (e.g., SDSS IV/SPIDERS and 4MOST).
\begin{myfigure}
\centerline{\resizebox{88mm}{!}{\includegraphics{krumpe_FraWS_2013_01_Fig03.ps}}}
\caption{Number of expected eROSITA AGNs (red) and currently available
galaxies with spectroscopic redshifts
(black solid line at $z<0.4$ -- SDSS data release 7; black dotted line -- PRIMUS
(Coil et~al.~2011); black solid line at $z\sim1$ -- DEEP2 (Davis et~al.~2003)
and VVDS (Le F\`evre et~al.~2005)).
Instead of the full sky area, we consider only the expected number of eROSITA
AGNs with spectroscopic redshifts from 4MOST over 14,000 deg$^2$.}
\label{fig2}
\end{myfigure}
eROSITA AGN clustering measurements at $z \sim 0.8-1$ will even allow for the
detection of the BAO signal. The feasibility of such a measurement can be
estimated using the BAO detection found with $\sim$46,000 SDSS LRGs
($\langle$$z$$\rangle=0.35$) over 3,816 square
degrees of sky (0.72 $h^{-3}$ Gpc$^3$) as a standard for comparison
(Eisenstein et~al.~2005).
The observed AGN X-ray luminosity function (Gilli et~al.~2007) and the
eROSITA sensitivity determine the number density of eROSITA AGNs.
In the above redshift range, the eROSITA AGN area density will be
comparable to that of SDSS LRGs at lower redshifts. However, the comoving volume number
density of eROSITA AGNs will be five times lower than that of SDSS LRGs.
Since eROSITA will conduct an all-sky survey, the increased sky area will counterbalance
the lower volume density. Given the signal-to-noise ratio (S/N) of the
BAO detection of Eisenstein et al. (2005) and an assumed spectroscopic area of
14,000 deg$^2$, we expect a $\sim$3$\sigma$ BAO detection using eROSITA AGNs only
in the redshift range of $z \sim 0.8-1$. This is consistent with
Kolodzig et~al.~(2013), who use a different approach based on the angular power
spectrum for estimating the significance of a BAO detection with eROSITA AGN.
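A rough version of this scaling argument can be reproduced with a Feldman--Kaiser--Peacock-style effective volume, $V_{\rm eff} = V\,[nP/(1+nP)]^2$, with the BAO significance scaling as $\sqrt{V_{\rm eff}}$. The sketch below assumes a flat $\Lambda$CDM cosmology with $\Omega_m=0.3$, a common power spectrum amplitude $P \approx 10^4\,h^{-3}$~Mpc$^3$ at BAO scales for both samples, and takes the Eisenstein et~al.~(2005) detection as $\sim$3.4$\sigma$; all of these inputs are illustrative assumptions, not values from the text:

```python
import numpy as np

def comoving_distance(z, omega_m=0.3, n=10001):
    """Flat LCDM comoving distance in h^-1 Mpc (trapezoidal integration)."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + 1.0 - omega_m)
    return 2997.9 * np.sum(0.5 * (f[1:] + f[:-1])) * (z / (n - 1))

def shell_volume(z1, z2, area_deg2):
    """Comoving volume of a redshift shell over a given sky area, h^-3 Mpc^3."""
    frac = area_deg2 / 41253.0
    return frac * (4.0 * np.pi / 3.0) * (comoving_distance(z2) ** 3
                                         - comoving_distance(z1) ** 3)

def effective_volume(volume, number_density, power=1.0e4):
    """FKP weighting: V_eff = V * [nP/(1+nP)]^2 (P amplitude assumed)."""
    npk = number_density * power
    return volume * (npk / (1.0 + npk)) ** 2

# SDSS LRG reference: ~46,000 objects in 0.72 h^-3 Gpc^3, ~3.4 sigma detection
v_lrg = 0.72e9
n_lrg = 46000 / v_lrg
v_agn = shell_volume(0.8, 1.0, 14000.0)
n_agn = n_lrg / 5.0   # five times lower volume density (see text)
scaling = np.sqrt(effective_volume(v_agn, n_agn) / effective_volume(v_lrg, n_lrg))
print(f"expected BAO significance ~ {3.4 * scaling:.1f} sigma")
```

With these inputs the much larger eROSITA volume nearly compensates the lower AGN number density, giving a significance close to the $\sim$3$\sigma$ quoted above.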
With the much larger AGN datasets that will exist in the future,
the statistical uncertainties in clustering measurements
will be significantly decreased. Systematic uncertainties will then be the
dominant source of uncertainty. The impact and level of different systematic
uncertainties
can only be carefully explored and quantified through simulations.
Thus far, there has not been a need for such studies because
the AGN samples to date are i) drawn from surveys that (with exceptions)
cover a rather moderate sky area and are therefore likely to suffer from the problem
of cosmic
variance\footnote{Surveys with a small sky area will not sample a representative part of the
universe due to cosmic variance, i.e., if a survey incidentally aims at an underdensity/overdensity in the
universe a lower/higher clustering amplitude will be measured than when aiming
at a representative part of the universe.}
and/or ii) comprised of up to several thousand
objects and are consequently Poisson noise dominated.
Both limitations will be removed in future AGN clustering measurements with
the upcoming extensive AGN samples covering extremely large sky areas.
However, to derive reliable constraints on AGN physics and cosmology,
as well as to avoid any possible misinterpretations of future unprecedented high precision AGN clustering
measurements, we have to fully understand and be able to correctly model the impact of the
systematic uncertainties.
Only then can we maximize the scientific return of future AGN clustering
measurements and have a major impact in the field of cosmology and galaxy/AGN evolution.
\thanks
MK received funding from the European Community's Seventh Framework Programme
(/FP7/2007-2013/) under grant agreement No 229517. TM is supported by
UNAM/PAPIIT IN104113 and CONACyT 179662. ALC acknowledges support from NSF
CAREER award AST-1055081.
\vspace*{0.0cm}
\section{Introduction}
The study of events with bottom quarks has led in the past 10 years to
some of the most important Tevatron results: the discovery and study
of the top quark, the appreciation of the colour-octet-mediated
quarkonium production mechanisms, as well as general results in
b-hadron physics (spectroscopy, lifetimes, mixing, $\sin^2 2\beta$).
These results have been obtained while both CDF and D0 were reporting
factor-of-3 discrepancies between observed and predicted $b$-hadron
cross-sections. To claim that we need to understand $b$ production in
order to make new discoveries is therefore a bit exaggerated: important
discoveries should be able to stand on their feet without appealing to
the prediction of a QCD calculation. Nevertheless, lack of confidence
in the ability to describe the properties of events containing $b$
quarks, in addition to raising doubts over the general applicability
of perturbative QCD in hadronic collisions, does limit our potential
for the observation of new dynamical regimes (e.g. small-$x$
physics~\cite{Collins:1991ty}-\cite{Jung:2001rp}) or for the discovery
of new phenomena (e.g. Supersymmetry~\cite{Berger:2000mp}). In some
cases, the existing measurements challenge the theory in ways which go
beyond simple overall normalization issues, pointing at effects which
are apparently well beyond reasonable theoretical systematics: this is
the case of recent CDF studies, which detected anomalies in both rates
and properties of events with secondary vertices and soft
leptons~\cite{Acosta:2001ct}. It cannot be contested, therefore, that
the study of $b$ production properties should be one of the main
priorities for Run~II at the Tevatron, with implications which could
go beyond the simple study of QCD.
Starting from the situation as it developed during the early Tevatron
runs, I will review here the progress in the theoretical predictions.
More details on the historical evolution of the cross section
measurements can be found in~\cite{mlmtalk}, as well as
in~\cite{Frixione:1994nb,Frixione:1997ma}, which also review the status of
fixed-target heavy quark studies. For a recent review including
$\gamma\gamma$ and $ep$ data as well, see~\cite{Cacciari:2004ur}.
I will then present the
implications of the preliminary results from Run II. Their
complete theoretical analysis is contained
in~\cite{Cacciari:2003uh}.
\section{Review of run~0 and run~I results}
The prehistory of $b$ cross-section measurements in hadronic
collisions starts with UA1 at the S$\bar{p}p$S ($\sqrt{S}=630$~GeV)
collider~\cite{Albajar:1987iu}. The data were compared with
theoretical predictions~\cite{Nason:1988xz,Nason:1989zy},
showing good agreement, within the rather large ($\pm 40\%$)
theoretical uncertainty. ``Theory'', in those days, already meant a full NLO
QCD calculation~\cite{Nason:1988xz,Nason:1989zy},
including all mass effects, state-of-art NLO PDF
fits~\cite{Diemoz:1987xu}, and $b\to B$ non-perturbative fragmentation
functions parameterized according to~\cite{Peterson:1982ak}, with a
parameter $\epsilon=0.006$ extrapolated from fits~\cite{Chrin:1987yd} to charm
fragmentation data in $e^+e^-$, using the relation
$\epsilon_b=\epsilon_c \times (m_c/m_b)^2$.
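This extrapolation can be checked numerically. In the sketch below the charm value $\epsilon_c \approx 0.06$ and the quark masses $m_c = 1.5$~GeV, $m_b = 4.75$~GeV are illustrative inputs, not values taken from the original fits:

```python
import numpy as np

def peterson(z, eps):
    """Peterson et al. fragmentation function (up to normalization)."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

m_c, m_b = 1.5, 4.75   # quark masses in GeV (illustrative)
eps_c = 0.06           # typical value from charm fragmentation fits (assumed)
eps_b = eps_c * (m_c / m_b) ** 2

# Mean momentum fraction <z> of the resulting b -> B distribution:
z = np.linspace(1e-4, 1.0 - 1e-4, 200000)
d = peterson(z, eps_b)
mean_z = np.sum(z * d) / np.sum(d)
print(f"eps_b = {eps_b:.4f}, <z> = {mean_z:.2f}")
```

The scaling indeed reproduces $\epsilon_b \simeq 0.006$, and the corresponding fragmentation function is hard, with most of the quark momentum carried by the $B$ meson.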
At the beginning only predictions for total cross-sections and
inclusive \mbox{$p_T^b$}\ spectra were available. Later on, more exclusive
calculations were performed, allowing for the application of general
cuts to the final states, as well as for the study of correlations
between the $b$ and
$\bar{b}$~\cite{Mangano:1991jk}\footnote{For lack of time, I
will however focus my attention in this presentation on inclusive \mbox{$p_T$}\
spectra.}.
After such a good start in UA1,
the first published data from CDF~\cite{Abe:1992fc} appeared as a big
surprise. CDF collected a sample of $14\pm 4$ fully reconstructed
$B^{\pm} \to \psi K^{\pm}$ decays, leading to:
\begin{equation}
\sigma(\mbox{$p \bar{p}$} \to bX; \; \mbox{$p_T^b$}>11.5\mbox{$\mathrm{GeV}$},\; \vert y \vert<1)=
\begin{array}{ll}
\mathrm{CDF:} & 6.1\pm 1.9_{\scriptstyle stat} \pm 2.4_{\scriptstyle syst} \,\mu{\mathrm
b} \\
\mathrm{theory:} & 1.1\pm 0.5 \, \mu{\mathrm
b}
\end{array}
\end{equation}
In spite of the large uncertainties, which led to a mere 1.5$\sigma$
discrepancy, attention focused on the large data/theory=5.5 excess.
Theoretical work to explain the apparent contradiction between the
success of the NLO theory at 630~\mbox{$\mathrm{GeV}$}\ and the disaster at 1.8~TeV
concentrated at the beginning on possible effects induced by the
different $x$ range probed at the two energies: PDF
uncertainties~\cite{Berger:1992je} and large small-$x$
effects~\cite{Collins:1991ty}-\cite{Levin:1991ry}, where $x\sim
m_b/\sqrt{S}$. In the first case
marginal fits to both data sets could be obtained at the cost of
strongly modifying the gluon density, in a way which however would not
survive the later accurate determinations of $g(x)$ from HERA. In the
second case, conflicting conclusions were reached: on one side the
first paper of~\cite{Levin:1991ry} obtained increases by factors of
3-5 due to small-$x$ effects; on the other, the analysis
of~\cite{Collins:1991ty} proved that the resummation of small-$x$
logarithms could only augment the total rate by 30\% (or less, in the
case of $g(x)$ more singular than $1/x$)\footnote{The option of very
large small-$x$ effects being manifest only at 1.8~TeV
would be definitively ruled out several years
later, when CDF measured~\cite{Acosta:2002qk} the $b$ cross section at
$\sqrt{S}=630$~GeV and showed that the scaling from 630~GeV to 1.8~TeV was
consistent with the predictions of pure NLO QCD.}.
The ball was therefore back in the experimentalists' court. CDF
expanded the set of measurements, including final states with
inclusive $\psi$ and $\psi'$~\cite{Abe:1992ww} and inclusive
leptons~\cite{Abe:1993sj}, summarised in fig.~\ref{fig:cdf93}.
The measurement of the $b$ cross section from the inclusive charmonium
decays turned out later to be incorrect.
\begin{figure}[t]
\includegraphics[height=.3\textheight, clip]{cdf93.eps}
\caption{CDF data from inclusive $\psi$, $\psi'$~\cite{Abe:1992ww}
and lepton~\cite{Abe:1993sj} final
states, compared to NLO QCD.}
\label{fig:cdf93}
\end{figure}
In run 0, in fact, CDF could not measure secondary vertices,
so that charmonium states from direct production and from $B$ decays
could not be separated. The extraction of a $b$ rate from these final
states was based on theoretical prejudice about the prompt
production rates, prejudice which in run~I, when the secondary
vertices started being measured by CDF, turned out to be terribly
wrong~\cite{onium}\footnote{Incidentally, this fact puts into question
the UA1 results, which heavily relied on the $\psi$ final states and
on explicit assumptions about the prompt charmonium rates!}.
The data on inclusive leptons,
while high compared to the central value of the theoretical
prediction, were nevertheless
consistent with its upper value, and in any case within
1$\sigma$.
Increased statistics in run~I allowed CDF to improve its measurement
of fully reconstructed exclusive decay modes, leading to the
measurements in fig.~\ref{fig:cdf95}. For this measurement CDF used
19pb$^{-1}$ of data, leading to approximately 55 $B^0\to \psi K^*$
and 125 $B^\pm \to \psi K^\pm$ decays. The cross section was still
high compared to the central value of the theoretical prediction
(data/theory=1.9$\pm$0.3), but this was already a marked improvement
over the first measurement from run~0, when this ratio was equal to 6.1!
More explicitly, the 1995 measurement gave $\sigma(\mbox{$p_T$}(B^+)>6\mbox{$\mathrm{GeV}$}, \vert
y\vert<1)=2.39\pm 0.54\mu$b, compared to the 1992 measurement of $\langle\sigma(\mbox{$p_T$}(B)>9\mbox{$\mathrm{GeV}$}, \vert
y\vert<1)\rangle=2.8\pm 1.4\mu$b (where $\langle \sigma(B) \rangle \equiv
[\sigma(B^+)+\sigma(B^0)]/2$). Taking into account that the $b$ rate is
expected to increase by 2.7
when going from a 9~GeV to a 6~GeV threshold, the 1992 measurement
appears to be a factor of 3.2 higher than the 1995 result, consistent
with the 6.1/1.9 ratio.
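The consistency of these numbers is a one-line arithmetic check (all inputs are the values quoted above):

```python
# Values quoted in the text:
sigma_1992 = 2.8        # mub, <sigma(B)> for pT(B) > 9 GeV, |y| < 1
sigma_1995 = 2.39       # mub, sigma(B+)  for pT(B+) > 6 GeV, |y| < 1
threshold_factor = 2.7  # expected rate increase from a 9 GeV to a 6 GeV threshold

ratio = sigma_1992 * threshold_factor / sigma_1995
print(f"1992/1995 ratio at equal threshold: {ratio:.1f}")  # ~3.2
```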
\begin{figure}[t]
\includegraphics[width=.3\textheight, angle=-90]{cdf95.eps}
\caption{Evolution of data/theory comparisons with improved PDF
fits. The data on both plots are exactly the same; the theory curves
on the left were generated with the MRSD0 set, on the right with the
post-HERA sets CTEQ5 and MRST.}
\label{fig:cdf95}
\end{figure}
\begin{figure}[t]
\includegraphics[width=.3\textheight, angle=-90]{pdfevol.eps}
\caption{Left: the NLO $b$-quark rate as a function of $p_{T,min}$,
for post-HERA PDF sets CTEQ4M (\cite{Lai:1996mg}, \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.116)
and CTEQ6M (\cite{Pumplin:2002vw}, \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.118), normalized to the pre-HERA
set MRSD0. Right: total cross section for $\vert y \vert < 1$ for
various PDF sets, distributed on the abscissa in order of increasing
release date. The crosses correspond to the rates calculated by
forcing $\Lambda_{\scriptscriptstyle QCD}$ to take a value consistent with the
LEP $\ifmmode\alpha_s\else$\alpha_s$\fi(m_Z)$ fits ($\Lambda_{n_f=5}^{\mbox{\scriptsize 2-loop}}=226~\mbox{$\mathrm{MeV}$} \Rightarrow \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.118$).}
\label{fig:pdfevol}
\end{figure}
This drop in the
experimental cross-section was not inconsistent with the large
statistical and systematic uncertainties of the 1992 measurement, but
somehow the common belief that theory was way off had already stuck. It
is also worth noting that the same data, when compared to theoretical
predictions obtained a couple of years later using the same QCD
calculations, but up-to-date sets of input PDFs
(MRST~\cite{Martin:1998sq} with \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.1175, and
CTEQ5M~\cite{Lai:1999wy} with \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.118), gave very good
agreement. This is shown in the right panel of
fig.~\ref{fig:cdf95}, taken from an update of~\cite{Frixione:1997ma}.
The crucial change between the two predictions was the change in the
value of the QCD coupling strength \ifmmode\alpha_s\else$\alpha_s$\fi\ extracted from global PDF
fits. The fits used in the CDF 1995 publication,
MRSD0~\cite{Martin:1992as}, did not include HERA data and had \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi=0.111,
significantly lower than what we were getting from LEP, namely
$\ifmmode\alpha_s\else$\alpha_s$\fi(m_Z)\sim 0.120$. This 10\% difference, when evolved to the low scales
of relevance to $b$ production, becomes much more significant,
especially because $b$ rates grow like $\ifmmode\alpha_s\else$\alpha_s$\fi^2$.
This is shown more explicitly in fig.~\ref{fig:pdfevol}. The left
panel shows the ratio of the rates obtained by using post-HERA PDF
sets, normalized to the MRSD0 set used in the CDF 1995 comparison. The
right panel shows the integrated total cross section (for $\vert y
\vert < 1$) for several PDF sets, ordered versus the date of
release. One can notice a constant increase, with the most recent
sets being almost a factor of 2 higher than the older ones. Notice
that this increase is due by and large to the increased value of $\ifmmode\alpha_s\else$\alpha_s$\fi$
returned by the PDF fits. Forcing $\Lambda_{\scriptscriptstyle QCD}$
to take the value consistent with LEP's $\ifmmode\alpha_s\else$\alpha_s$\fi(m_Z)$, one would
have obtained for each PDF set the values corresponding to the crosses
in the plots. There the increase relative to the pre-HERA fit MRSD0
is significantly smaller.
While the improvements in the PDF fits were reducing the difference
between data and theory, as shown in fig.~\ref{fig:cdf95}, a new CDF
measurement from the full sample of run~I exclusive $B$ decays
in the range $6~\mbox{$\mathrm{GeV}$}<\mbox{$p_T$}<20$~GeV appeared in
2001~\cite{Acosta:2001rz}, and is shown here in Fig.~\ref{fig:cdf02}.
The total rate turned out to be 50\% larger than in the
previous 1995
publication~\cite{Abe:1995dv}: $\sigma(\mbox{$p_T$}(B^+)>6\mbox{$\mathrm{GeV}$}, \vert
y\vert<1)=3.6\pm 0.6\mu$b, compared to the previous $2.4\pm
0.5\mu$b, a change in excess of 2$\sigma$.
The ratio between data and the central value of the theory prediction
was quoted as $2.9\pm 0.5$: a serious disagreement was back!
\begin{figure}[t]
\includegraphics[height=.3\textheight, clip]{cdf02.eps}
\caption{Final CDF analysis of run~I exclusive-decay
data~\cite{Acosta:2001rz}, compared to
the CDF evaluation of the NLO QCD prediction with MRST
PDFs and Peterson fragmentation.}
\label{fig:cdf02}
\end{figure}
On the other side of the Tevatron ring, the D0 experiment started
presenting the first $b$ cross section measurements in 1994.
The first preliminary results~\cite{Bazizi:1994sp} were in perfect
agreement with QCD, as shown in the left panel of
Fig.~\ref{fig:D094-00}. They were eventually published, after
significant changes, in~\cite{Abachi:1994kj}. The results from a
larger dataset of 6.6pb$^{-1}$ appeared in~\cite{Abachi:1996jq}, where
$\psi$ dimuons were added. They
are shown in the central panel of the figure, and they show a clear
increase over the preliminary analysis, but are still consistent with
the QCD expectations. The same data set underwent further analysis,
and eventually appeared a few years later in~\cite{Abbott:1999se}. They
are shown in the right panel of the figure. Now
the data are significantly higher than QCD, and certainly higher than in
1996, especially in view of the fact that in the meantime the theory
predictions had increased by almost a factor of 2 as a result of the
use of new PDF sets (this is clearly visible from the shift of the
theory curves between the central and right panels). As in the case of the
CDF exclusive analysis, this evolution
underscores the difficulty in performing these measurements, and
indicates that it was not just the theory that was having difficulties
in coming to grips with the problem!
\begin{figure}[t]
\includegraphics[width=.23\textheight, angle=-90]{D094-00.eps}
\caption{Evolution of the D0 measurements. Left: preliminary results
from 90nb$^{-1}$~\cite{Bazizi:1994sp}. Center:
6.6pb$^{-1}$~\cite{Abachi:1996jq}. Right: final analysis of the same
data set, with the addition of inclusive
dimuons~\cite{Abbott:1999se}.}
\label{fig:D094-00}
\end{figure}
An additional element was added to the puzzle when D0
reported~\cite{Abbott:1999wu} the
measurement of $b$ production at large rapidity, using inclusive
forward muons ($2.4<\vert y_\mu \vert <3.2$). The results, shown in
fig.~\ref{fig:D0fwdmu}, indicated an excess over NLO QCD by a factor
larger than that observed in the central region.
\begin{figure}[t]
\includegraphics[width=.17\textheight, angle=-90]{D0fwdmu.eps}
\caption{Forward muon production at D0~\cite{Abbott:1999wu}.}
\label{fig:D0fwdmu}
\end{figure}
This anomaly could not be explained away by assuming some extra
systematics related to PDFs. From the point of view of perturbation
theory, furthermore, there was no reason to expect a significant
deterioration of the predictive power when going to large rapidity.
So when this result first appeared in its preliminary form I was
led~\cite{Mangano:1997ri} to review our assumptions about the
non-perturbative part of the calculation, in particular the impact of
the fragmentation function. A crucial observation is that in hadronic
collisions the fragmentation function is probed in different ranges of
$z$ as we change rapidity. This is easily seen as follows. Let us
assume that the $b$ \mbox{$p_T$}\ spectrum takes the simplified form:
\begin{equation}
\frac{d\sigma(b)}{d p_T} \sim \frac{1}{p_T^N} \; ,
\end{equation}
where the slope $N$ will typically depend on rapidity, becoming larger at
higher $y_b$.
The meson spectrum is then obtained via convolution with the
fragmentation function $f(z)$, leading to the simple result:
\begin{equation} \frac{d\sigma(B)}{d P_T} \equiv \int \frac{dz}{z}\,
f(z)\, \frac{d\sigma(b)}{d p_T}\Big(p_T= \frac{P_T}{z}\Big) =
\int \frac{dz}{z}\, \Big(\frac{z}{
P_T}\Big)^N\, f(z) =f_N \, \frac{d\sigma(b)}{d P_T} \; ,
\end{equation}
where $f_N$ is the $N$-th moment of $f(z)$. This means that a steeper
partonic spectrum selects higher moments. Since the index $N$ is
larger for forward production, a relative difference in $B$ production
rates in the forward/central regions could be explained by making the
fragmentation function harder, enhancing the larger moments
(which measure the large-$z$ behaviour of $f(z)$).
A related observation is that $f(z)$ fits to $e^+e^-$ data are mostly
driven by the value of the first moment $f_1$, which measures the
average of the fragmentation variable $z$. It is therefore possible
that different choices of $f(z)$, giving equivalent overall fits to
$e^+e^-$, could make very different predictions for the higher moments
of relevance to hadronic production (in this case $N$ is in the range
4--6).
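The moment-selection mechanism is easy to probe numerically. The sketch below is purely illustrative: the functional forms and their parameters are my choices, and the moment convention $f_N=\int_0^1 dz\,z^{N-1}f(z)$ is the one implied by the convolution formula above. It prints low and high Mellin moments of a Peterson shape and of a harder $z^\alpha(1-z)$ (Kartvelishvili-like) shape.

```python
def mellin_moment(f, N, n=100_000):
    """f_N = int_0^1 dz z^(N-1) f(z), with f normalized to unit area
    (the convention implied by the convolution formula above)."""
    num = den = 0.0
    for i in range(n):                      # midpoint rule on (0, 1)
        z = (i + 0.5) / n
        w = f(z)
        num += z ** (N - 1) * w
        den += w
    return num / den

def peterson(z, eps=0.006):                 # Peterson et al. form
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def kartvelishvili(z, alpha=14.0):          # z^alpha (1-z); alpha illustrative
    return z ** alpha * (1.0 - z)

for N in (2, 5, 7):                         # N ~ 4-6 is the relevant range here
    print(N, mellin_moment(peterson, N), mellin_moment(kartvelishvili, N))
```

Since $z^{N-1}$ weights the $z\to1$ region ever more strongly as $N$ grows, two shapes giving comparable low moments can differ appreciably at the $N\sim4$--6 moments relevant for the hadronic spectra.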
\begin{figure}[t]
\includegraphics[height=.3\textheight]{D0bjet.eps}
\caption{$b$-jet production at D0~\cite{Abbott:2000iv}.}
\label{fig:D0bjet}
\end{figure}
One way to understand whether indeed the inaccurate description of the
fragmentation process could affect the theoretical predictions was
therefore to think of measurements not affected by this
systematics.
The most obvious observable of this kind is the \mbox{$E_T$}\
spectrum of jets containing a $b$ quark~\cite{Frixione:1996nh}. Since
the tagging of a $b$ inside the jet is only marginally affected by the
details of the $b\to B$ fragmentation, measuring the rate of $b$ jets
is a direct measurement of the $b$ production rate with negligible
fragmentation systematics. In addition, this measurement is also
insensitive to higher-order large-\mbox{$p_T$}\ logarithms which are present in
the \mbox{$p_T^b$}\ spectrum, therefore improving in principle the perturbative
accuracy. D0 carried out the measurement,
publishing~\cite{Abbott:2000iv} the results shown in
Fig.~\ref{fig:D0bjet}. The agreement with NLO
QCD~\cite{Frixione:1996nh} is better than in
the case of the \mbox{$p_T^b$}\ spectrum, as was hoped. We took this as
strong evidence that a reappraisal of the fragmentation function
systematics may have led to a better description of the \mbox{$p_T^b$}\ and
$y_\mu$ distributions.
The necessary ingredients to carry out this programme are
perturbative calculations of matching accuracy for $b$ spectra in
both $e^+e^-$ and $\mbox{$p \bar{p}$}$ collisions, in addition of course to
accurate $e^+e^-$ data to be used in the fits. These tools had just
become available towards the end of the 90's.
The resummation of the logarithms of $\mbox{$p_T$}/m_b$, with next-to-leading
logarithmic accuracy (NLL), and the matching with the fixed-order (FO),
exact NLO calculation for massive quarks, had been performed in
\cite{Cacciari:1998it} (Fixed-Order with Next-to-Leading-Log
resummation: FONLL)
and a calculation with this level of accuracy for
$e^+e^-$ collisions was presented
in~\cite{Nason:1999zj}. The latter was then used for the
extraction of the non-perturbative fragmentation
function $f(z)$ from LEP and SLC
data~\cite{Heister:2001jg}, with the main result that the Peterson
functional form is strongly disfavoured over other
alternatives~\cite{Kartvelishvili:1977pi}.
The equivalence of the perturbative
inputs allows one to consistently apply this fit to the
FONLL $b$-quark spectra in hadronic collisions, leading to FONLL
predictions for the $b$ hadron ($H_b$) spectrum. A comparison of these
predictions with the final CDF data at 1.8~TeV for $B^\pm$-meson
production in the range $6~\mbox{$\mathrm{GeV}$}<\mbox{$p_T$}<20$~GeV has been presented
in~\cite{Cacciari:2002pa}. The results are shown in
Fig.~\ref{fig:cn02}: the left panel compares the CDF data
from~\cite{Acosta:2001rz}
with the theory curve evaluated using CTEQ5M PDF, FONLL, and
fragmentation functions fitted to LEP and SLC data.
The right panel shows a comparison~\cite{Nason:2003zf}
with the D0 forward muon data.
In both cases the agreement with data
is much improved. In the case of the CDF central cross section,
the ratio between data and theory improves from $2.9\pm 0.5$ to
$1.7\pm 0.7$. As discussed in detail in~\cite{Cacciari:2002pa}, the
improvement is due to the sum of three independent 20\% effects
($1.2^3\sim 2.9/1.7$), all going in the same direction:
the resummation of \mbox{$p_T$}\ logarithms, the change
in functional form of the fragmentation function, and the use
of the LEP/SLC $b$ fragmentation data.
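The decomposition quoted above is simple cube-root arithmetic; as a quick check (central values copied from the text):

```python
# Data/theory ratio before and after the improvements discussed in the text
ratio_before, ratio_after = 2.9, 1.7
combined = ratio_before / ratio_after       # overall change, about 1.7
per_effect = combined ** (1 / 3)            # about 1.2: three ~20% effects
print(f"combined = {combined:.2f}, per effect = {per_effect:.2f}")
```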
The heritage of run~I was therefore a set of measurements, more or
less consistent with each other, normalized with a factor of about 1.5 to 2
higher than the central theoretical prediction, but still compatible
with the upper end of the theoretical systematics band.
\begin{figure}[t]
\includegraphics[width=.3\textheight, angle=-90]{cn02.eps}
\caption{Left panel: FONLL prediction by Cacciari and
Nason~\cite{Cacciari:2002pa} for the run~I $B$ meson spectrum,
compared to the CDF data~\cite{Acosta:2001rz}. Right panel: the
prediction of this calculation for the forward muon rapidity
spectrum at D0.}
\label{fig:cn02}
\end{figure}
\section{The run~II CDF results}
The final phase of this history deals with the new run~II data from
CDF~\cite{cdfrun2}. A great improvement took place in the ability to
trigger on very low \mbox{$p_T^b$}\ events, allowing for a measurement down to
$\mbox{$p_T^b$}\sim 0$, although still in the limited rapidity range $\vert y_b
\vert \lsim 0.6$. This is also accompanied by very large statistics,
allowing a fine binning in \mbox{$p_T$}. The measurement down to very small
\mbox{$p_T^b$}\ is important because the total rate has a much reduced
dependence on the fragmentation systematics, and because it is
particularly sensitive to possible small-$x$ phenomena.
On the theoretical side, in addition to the calculations described
above, a new tool has meanwhile become available, namely the
MC@NLO code~\cite{Frixione:2003ei}, which
merges the full NLO matrix elements with the complete shower evolution
and hadronization performed by the {\small HERWIG} Monte Carlo. As discussed in
detail in~\cite{Frixione:2003ei}, this comparison probes a few features
where FONLL and MC@NLO differ by
effects beyond NLO: the evaluation
of subleading logarithms in higher-order emissions, in particular in
the case of gluon emission from the $b$ quark, and the hadronization
of the heavy quark, which in MC@NLO is performed through {\small
HERWIG}'s cluster model, tuned on $Z^0\to H_b X$ decays.
\begin{figure}[t]
\includegraphics[height=.3\textheight]{cdfpsi2.eps}
\caption{CDF $J/\psi$ spectrum from $B$ decays.
The theory band represents the FONLL systematic uncertainties,
as described in the text. Two MC@NLO predictions are
also shown (histograms).}
\label{fig:cdfpsi}
\end{figure}
The comparison of the run~II data with the theoretical calculations
is given in Fig.~\ref{fig:cdfpsi}, which shows
the data with our prediction for the
spectrum of $J/\psi$s from $H_b$ decays,
obtained by convoluting the FONLL result with the
$J/\psi$ momentum distribution in inclusive $B\to J/\psi+X$
decays. The theoretical error band is obtained by varying
renormalization and factorization scales ($\mu_{R,F}=\xi_{R,F}\mu_0$,
with $\mu_0^2=\mbox{$p_T$}^2+m_b^2$), the $b$-quark mass, and parton densities.
The central values of our predictions are obtained with $\xi_{R,F}=1$,
$m_b=4.75$~GeV and CTEQ6M. The mass uncertainty corresponds to the range
$4.5~\mbox{$\mathrm{GeV}$}<m_b<5$~GeV.
The scale uncertainty is obtained by varying $\mu_{R,F}$
over the range
$0.5<\xi_{R,F}<2$, with the constraint $0.5 <\xi_R/\xi_F < 2$.
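As a concrete reading of this prescription, the sketch below scans the discrete grid $\xi\in\{0.5,1,2\}$ and applies the ratio constraint, taking the bounds as inclusive (both are my assumptions: the text varies the scales continuously, and with strict inequalities a discrete scan would keep only $\xi_R=\xi_F$); this yields the familiar seven-point scale variation.

```python
# Scale pairs (xi_R, xi_F) from {0.5, 1, 2}, keeping 0.5 <= xi_R/xi_F <= 2
candidates = [(r, f) for r in (0.5, 1.0, 2.0) for f in (0.5, 1.0, 2.0)]
kept = [(r, f) for (r, f) in candidates if 0.5 <= r / f <= 2.0]
print(kept)   # 7 of the 9 combinations survive the ratio constraint
```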
The PDF uncertainty is calculated by using all the three sets of PDFs with
errors given by the CTEQ, MRST and Alekhin
groups~\cite{Pumplin:2002vw,Martin:2002aw,Alekhin:2002fv}.
The data lie well within the uncertainty band, and are in very good
agreement with the central FONLL prediction. I also show the two
MC@NLO predictions corresponding to the two different choices of the
$b$ hadronization parameters (see~\cite{Cacciari:2003uh} for the
details).
I stress that both FONLL and MC@NLO are based on the NLO result
of~\cite{Nason:1989zy} (henceforth referred to as NDE),
and only marginally enhance the cross section
predicted there, via some higher-order effects. The most relevant
change in FONLL with respect to old predictions lies at the
non-perturbative level, i.e. in the treatment
of the $b\to H_b$ hadronization, which makes use~\cite{Cacciari:2002pa}
of the
moment-space analysis of the most up-to-date data on $b$ fragmentation
in $e^+e^-$ annihilation. The evolution of the NLO theoretical
predictions over time is shown in Fig.~\ref{fig:history}. Here
we plot the original central prediction of NDE for $\sqrt{S}=$1.8~TeV
(symbols), obtained using NLO QCD partonic cross sections convoluted with
the PDF set available at the time, namely DFLM260~\cite{Diemoz:1987xu}.
The same calculation, performed with the CTEQ6M
PDF set (dotted curve),
shows an increase of roughly 20\% in rate in the region
$\mbox{$p_T$}<10$~GeV. The effect of the inclusion of the resummation of NLL
logarithms is displayed by the dashed curve, and is seen to be modest
in the range of interest. Finally, we compare the
original NDE prediction after convolution with the Peterson
fragmentation function ($\epsilon=0.006$, dot-dashed curve), with the
FONLL curve convoluted with the fragmentation function extracted
in~\cite{Cacciari:2002pa} (solid curve).
Notice that the effect of the fragmentation obtained in~\cite{Cacciari:2002pa}
brings about a modest decrease of the cross section (the difference
between the dashed and solid curves),
while the traditional Peterson fragmentation with
$\epsilon=0.006$ has a rather pronounced effect (the difference
between the symbols and the dot-dashed curve).
Thus, the dominant change in the theoretical prediction
for heavy flavour production from the original NDE calculation
up to now appears to be the consequence of more precise
experimental inputs to the bottom fragmentation
function~\cite{Heister:2001jg}, that have shown that non-perturbative
fragmentation effects in bottom production are much smaller
than previously thought.
\begin{figure}[t]
\includegraphics[height=.3\textheight]{history.eps}
\caption{Evolution of the NLO QCD predictions over time, for $\sqrt{S} =
1800$ GeV. See the text for the meaning of the various curves.}
\label{fig:history}
\end{figure}
The main improvement in the comparison between data and theory
w.r.t. the final run~I results discussed in~\cite{Cacciari:2002pa}
comes from the normalization of the run~II CDF data, which tend to be
lower than one would have extrapolated from the latest measurements at
1.8~TeV. To clarify this point, we collect in Fig.~\ref{fig:bplus} the
experimental results from the CDF measurements of the $B^\pm$ cross
section in Run~IA~\cite{Abe:1995dv}, in Run~IB~\cite{Acosta:2001rz}
and in Run~II. The rate for $\mbox{$p_T$}(B^\pm)>6$~GeV
evolved from $2.4\pm0.5~\mbox{$\mu\mathrm{b}$}$ (Run~IA) to $3.6\pm0.6~\mbox{$\mu\mathrm{b}$}$ (Run~IB),
and decreased to $2.8\pm0.4~\mbox{$\mu\mathrm{b}$}$ in Run~II. The increase in the
c.m. energy should have instead led to an increase by 10-15\%. The
Run~II result is therefore lower than the extrapolation from Run~IB by
approximately 30\%. By itself, this result alone would reduce the
factor of 1.7 quoted in~\cite{Cacciari:2002pa} to 1.2 at $\sqrt{S} =
1.96$~TeV. In addition, the results presented
in~\cite{Cacciari:2003uh} lead to an increase in rate relative to the
calculation of~\cite{Cacciari:2002pa} by approximately 10-15\%, due to
the change of PDF from CTEQ5M to CTEQ6M. We then conclude that the
improved agreement between the Run~II measurements and perturbative
QCD is mostly a consequence of improved experimental inputs (which
include up-to-date $\ifmmode\alpha_s\else$\alpha_s$\fi$ and PDF determinations).
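The extrapolation argument can be made explicit with the central values quoted above; the $+12.5\%$ energy gain below is my midpoint of the quoted 10--15\% range.

```python
sigma_run1b = 3.6        # microbarn, pT(B) > 6 GeV, sqrt(S) = 1.8 TeV (Run IB)
sigma_run2 = 2.8         # microbarn, same observable at 1.96 TeV (Run II)
energy_gain = 1.125      # assumed +12.5% from the c.m.-energy increase
expected = sigma_run1b * energy_gain     # ~4.0 microbarn at 1.96 TeV
deficit = sigma_run2 / expected          # ~0.7, i.e. ~30% below extrapolation
print(f"expected = {expected:.2f} ub, data/extrapolation = {deficit:.2f}")
print(f"rescaled data/theory ratio: {1.7 * deficit:.2f}")   # ~1.2
```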
\begin{figure}[t]
\includegraphics[height=.3\textheight]{Bplus2.eps}
\caption{Evolution of the CDF data for exclusive $B^\pm$ production:
Run~IA~\protect\cite{Abe:1995dv}, Run~IB~\protect\cite{Acosta:2001rz}, and
Run~II~\protect\cite{cdfrun2}.}
\label{fig:bplus}
\end{figure}
\section{Conclusions}
When I meet colleagues and discuss the latest $b$ results, and when
I hear presentations or read conference proceedings, there is
often a more or less explicit message that now things are OK
because theorists kept beating on their calculations until they got them
right. I hope that this note will dispel this prejudice. The history
of the experimental measurements indicates that many things have also
``strongly evolved'' on the data side, often with changes well in
excess of the standard $\pm 1\sigma$ variation.
The ``history'' plot in Fig.~\ref{fig:history} shows on the other
hand that not much
has changed on the theory side, aside from data-driven modifications
associated to the value of \ifmmode\alpha_s(m_Z)\else$\alpha_s(m_Z)$\fi, to the low-$x$ behaviour of the
gluon as determined by the HERA data, and to the improved data on
$b\to B$ fragmentation. The theoretical improvements due to the
resummation of the large-\mbox{$p_T$}\ logarithms play a major role in allowing
a consistent use of the fragmentation functions extracted from
$e^+e^-$ data, but have a very limited impact in the region of \mbox{$p_T^b$}\
probed by the run~II data. Their significance will only manifest
itself directly at
high \mbox{$p_T^b$}\ ($\mbox{$p_T^b$}>20$--$30$~GeV), where the resummation leads to a much
reduced scale dependence, and to more accurate predictions, allowing
more compelling quantitative tests of the theory. It is to be hoped
that the improved run~II detectors and the higher statistics will make it
possible to extend the range of the measurements to really large \mbox{$p_T^b$}\
(in the range of 80-100 GeV). Tools are now available (MC@NLO) to
compare data subject to complex experimental constraints directly with
realistic NLO calculations, including a complete description of the
hadronic final state. This will avoid the risky business of attempting
to connect the observables to a \mbox{$p_T$}\ spectrum of the $b$ quark, a
practice which, although unavoidable in the past, has certainly
contributed to the inflation of theoretical and experimental
systematic uncertainties.
To date, the recent CDF measurement of total $b$-hadron
production rates in $p\bar{p}$ collisions at $\sqrt{S}=1.96$~TeV is in
good agreement with NLO QCD, the residual discrepancies being well
within the uncertainties due to the choice of scales and, to a lesser
extent, of mass and PDF. A similar conclusion is reached for the \mbox{$p_T$}\
spectrum. The improvement in the quality of the agreement between
data and theory relative to previous studies is the result of several
small effects, ranging from a better knowledge of fragmentation and
structure functions and of \ifmmode\alpha_s\else$\alpha_s$\fi\ (whose fitted
value has steadily increased in the DIS analyses over the years), to the
fact that these data appear to lead to
cross sections slightly lower than one would have extrapolated from
the measurements at 1.8~TeV. The currently still large uncertainties in data
and theory leave room for new physics. However, there is at present
no evidence that it is required for the description of the data,
and furthermore the recent results of~\cite{Janot:2004cy} rule out the
existence of a scalar bottom quark in the range preferred by the
mechanism proposed in~\cite{Berger:2000mp}. The data disfavour the
presence of small-$x$ effects of the size obtained with the approaches
of refs.~\cite{Levin:1991ry}. They are instead compatible with the
estimates of~\cite{Collins:1991ty}.
While these results have no direct impact on other anomalies
reported by CDF in the internal structure and correlations of
heavy-flavoured jets~\cite{Acosta:2001ct}, we do expect that the
improvements relative to pure parton-level calculations present in the
MC@NLO should provide a firmer benchmark for future studies of the
global final-state structure of $b\bar{b}$ events.
\begin{theacknowledgments}
I am thankful to Harry Weerts and the other local organizers for
the invitation and for the pleasant hospitality, and to Joey Huston
for the support provided.
I am also deeply grateful to my friends, with whom I shared the challenge of
understanding $b$ production properties over more than 10 years:
M.Cacciari, S.Frixione, P.Nason and G.Ridolfi, plus all the pals and
collaborators in CDF and D0.
\end{theacknowledgments}
\section{Introduction}
\label{sec1}
One of the most exciting challenges in nuclear structure physics
today is the physics of exotic nuclei far from the line of
$\beta$-stability. What makes this subject particularly
interesting (and difficult) is the unique combination of weak
binding and the proximity of the particle continuum, both
implying the large diffuseness of the nuclear surface and
extreme spatial dimensions characterizing the outermost nucleons
\cite{[Roe92],[Mue93],[Han93],[Naz95]}.
{}For the weakly bound nuclei the decay channels have to be
considered explicitly. Due to the virtual scattering of
nucleons from bound orbitals to unbound scattering states, the
traditional shell-model technology becomes inappropriate. The
proper tool is the continuum shell model \cite{[Fan61],[Phi77]}
which correctly accounts for the coupling to resonances; the
single-particle basis of the continuum shell model consists of
both bound and unbound states. The explicit coupling between
bound states and the continuum and the presence of low-lying
low-$\ell$ scattering states invites strong interplay between
various aspects of nuclear structure and reaction theory.
Particularly exciting are new phenomena on the neutron-rich
side. Because neutrons do not carry an electric charge, the
neutron drip line is located very far from the valley of
$\beta$-stability. Consequently, the neutron drip-line systems
(i.e., those close to the neutron drip line) are characterized
by unusually large $N$/$Z$ ratios. The outer zone of these
nuclei is expected to constitute essentially a new form of a
many-body system: low-density neutron matter (neutron halos and
skins).
Except for the lightest nuclei, the bounds of neutron stability
are not known experimentally. Theoretically, because of their
sensitivity to various theoretical details (e.g., approximations
used, parameter values, interactions) predicted drip lines are
strongly model-dependent. The placement of the one-neutron drip
line, defined by the condition $S_n(Z,N) = B_n(Z,N)-B_n(Z,N-1) =
0$, is solely determined by the {binding energy difference}
between two neighboring isotopes. Analogously, the vanishing
two-neutron separation energy, $S_{2n}(Z,N) =
B_n(Z,N)-B_n(Z,N-2)$, defines the position of the two-neutron
drip line. Since experimental masses (binding energies) near
the neutron drip lines are unknown, in order to extrapolate far
from stability, large-scale mass calculations are usually
used (see, e.g.,
\cite{[Hau88],[Mol95],[Abo95],[Smo93],[Dob95a]}). However,
since their techniques and parameters are optimized to reproduce
known atomic masses, it is by no means obvious whether the
particle number dependence obtained from global calculations at
extreme values of $N/Z$ is correct. Apart from strong
theoretical and experimental interest in nuclear physics aspects
of exotic nuclei, calculations for nuclei far from stability
have strong astrophysical implications, especially in the
context of the r-process mechanism \cite{[How93],[Che95]}.
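To illustrate how these definitions determine a drip line once a mass model is supplied, here is a schematic sketch; the binding-energy formula below is a generic liquid-drop-type parametrization with made-up coefficients, not one of the mass models cited above.

```python
def B(Z, N):
    """Schematic liquid-drop binding energy in MeV (illustrative coefficients)."""
    A = Z + N
    return (15.6 * A - 17.2 * A ** (2 / 3)
            - 0.70 * Z * Z / A ** (1 / 3) - 23.3 * (N - Z) ** 2 / A)

def S_n(Z, N):   # one-neutron separation energy
    return B(Z, N) - B(Z, N - 1)

def S_2n(Z, N):  # two-neutron separation energy
    return B(Z, N) - B(Z, N - 2)

# Walk along an isotopic chain until S_2n changes sign: the 2n drip line
Z, N = 50, 50
while S_2n(Z, N + 2) > 0:
    N += 2
print(f"toy two-neutron drip line for Z={Z}: N={N}, S_2n={S_2n(Z, N):.2f} MeV")
```

The point is only that the drip line follows from binding-energy \emph{differences}, which is why small model-dependent changes in the mass surface shift it strongly.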
In previous work \cite{[Naz94]} several aspects of nuclear
structure at the limits of extreme isospin were discussed by
means of the macroscopic-microscopic approach. In the present
study, ground-state properties of drip-line systems and the
sensitivity of predictions to effective forces are investigated
by means of the self-consistent Hartree-Fock-Bogoliubov (HFB)
approach.
The paper is organized as follows. Section~\ref{forces}
discusses the effective interactions employed in this study.
Since pairing correlations are crucial for the behavior of
drip-line systems, particular attention is paid to the
particle-particle (p-p, pairing) component of the interaction.
After a short review of general properties of effective
pairing interactions, with emphasis on the density dependence,
the pairing forces investigated in our work, namely contact
forces (delta interaction, density-dependent delta interaction,
and Skyrme interaction) and the finite-range Gogny force, are
described.
The basic ingredients of the HFB formalism in the coordinate
representation (single-quasiparticle orbitals, time-reversal
symmetry, canonical states, and various densities) are defined
in Sec.{\ }\ref{sec2a}. In contrast to the
single-quasiparticle wave functions which often contain a
scattering (outgoing) component, canonical states
(Sec.{\ }\ref{sec2ab}) are always localized, even if they have
positive average energy. The interpretation of particle and
(especially) pair densities in terms of single-particle and
correlation probabilities is given in Sec.{\ }\ref{sec2ac}.
This interpretation is essential when relating the calculated
HFB densities and fields to various { experimental
observables}.
The structure of the HFB equations is analyzed in
Sec.{\ }\ref{sec2b}. Here, various functions entering the equations
of motion (i.e., mass parameters and mean field potentials)
are introduced for both p-h and p-p channels
(Secs.{\ }\ref{sec2ba} and \ref{sec2bap}).
The advantage of using the coordinate-space HFB formalism for
weakly bound systems is that in this method the particle
continuum is treated properly. This important point is discussed
in detail in Sec.{\ }\ref{sec3}. In particular, the difference
between the single-particle Hartree-Fock (HF) spectra and
canonical HFB spectra (Sec.{\ }\ref{sec3d}), the asymptotic
properties of the HFB states (Sec.{\ }\ref{sec2bcd})
and densities (Sec.{\ }\ref{sec3c}), and the
effect of the pairing coupling to positive-energy states
(Sec.{\ }\ref{sec3e}) are carefully explained.
The robust predictions of the formalism for various experimental
observables (pairing gaps and pair transfer amplitudes, masses
and separation energies, radii, shell gaps, and shell structure)
are reviewed in Sec.{\ }\ref{sec4}, where experimental
fingerprints of the surface-peaked pairing fields and the
quenching of shell effects far from stability are also given.
Section~\ref{sec6} contains the main conclusions of the paper.
The technical details (i.e., the form of a mean-field Gogny
Hamiltonian and the discussion of the energy cut-off in the
Skyrme model) are collected in the Appendices.
\section{Effective interactions in the p-p channel}
\label{forces}
The uniqueness of drip-line nuclei for studies of effective
interactions is due to the very special role played by the pairing force.
This is seen from approximate HFB relations between the Fermi
level, $\lambda$, pairing gap, $\Delta$, and the particle
separation energy, $S$ \cite{[Bei75a]}:
\begin{equation}\label{Sn0}
S \approx -\lambda - \Delta.
\end{equation}
Since for drip-line nuclei $S$ is very small, $\lambda +
\Delta$$\approx$0. Consequently, the single-particle field
characterized by $\lambda$ (determined by the p-h component of
the effective interaction) and the pairing field $\Delta$
(determined by the p-p part of the effective interaction) are
equally important. In other words, contrary to the situation
encountered close to the line of beta stability, the pairing
component can no longer be treated as a {\em residual}
interaction; i.e., a small perturbation important only in the
neighborhood of the Fermi surface.
Surprisingly, rather little is known about the basic properties
of the p-p force. In most calculations, the pairing Hamiltonian
has been approximated by the state-independent seniority
pairing force, or schematic multipole pairing interaction
\cite{[Lan64]}. Such oversimplified forces, usually treated by
means of the BCS approximation, perform remarkably well when
applied to nuclei in the neighborhood of the stability valley
(where, as pointed out above, pairing can be considered as a
small correction). As a result, considerable effort was devoted
in the past to optimizing the p-h part of the interaction, while
leaving the p-p component aside.
Up to now, the microscopic theory of the pairing interaction has
only seldom been applied in realistic calculations for finite
nuclei (see Ref.{\ }\cite{[Del95]} for a recent example). A
``first-principles'' derivation of the pairing interaction from the
bare $NN$ force using the renormalization procedure ($G$-matrix
technique) still encounters many problems, such as the
treatment of core polarization \cite{[Kuc91],[Kad87]}. Hence,
phenomenological pairing interactions are usually introduced.
Two important open questions asked in this context are: (i) the
role of finite range, and (ii) the importance of density
dependence. Since the realistic effective interactions are
believed to have a finite range, the first question seems purely
academic. However, the remarkable success of zero-range Skyrme
forces suggests that, in many cases, the finite-range effect can
be mocked up by an explicit velocity dependence. To what extent
this is true for the pairing channel remains to be seen. One
obvious advantage of using finite-range forces is the automatic
cut-off of high-momentum components; for the zero-range forces
this is solved by restricting the pair scattering to a limited
energy range and by an appropriate renormalization of the
pairing coupling constant (see Appendix{\ }\ref{appB}).
The answer to the question on the density dependence is much
less clear. Early calculations \cite{[Bru60],[Eme60]} for
nuclear matter predicted a very weak $^1S_0$ pairing at the
saturation point ($k_F$=1.35\,fm$^{-1}$). Consequently, it was
concluded that strong pairing correlations in finite nuclei had
to be due to interactions at the nuclear surface. This led to
the surface delta interaction (SDI) \cite{[Gre65]}, a highly
successful residual interaction between valence nucleons. Of
course, the SDI is an extreme example of surface interaction.
More realistic density-dependent pairing forces are variants of
the density-dependent delta interaction (DDDI) introduced in
the Migdal theory of finite Fermi systems \cite{[Mig67]}.
Since the effective interactions commonly used in the HF
calculations are bound to be density dependent in order to
reproduce the compressibility of the infinite nuclear matter
\cite{[RS80]} (an explicit density dependence is also said to
account for three- and higher-body components of the
interaction), it seems natural to introduce the density
dependence in the p-p channel as well \cite{[Boc67]}.
Interestingly, the presence (absence) of the density dependence in
the pairing channel has consequences for the spatial properties
of pairing densities and fields. As was recognized early on
\cite{[Sap65]}, the density-independent p-p force gives rise to a
pairing field that has a volume character. For instance, the
commonly used contact delta interaction,
\begin{equation}\label{DIDI}
V^{\delta}(\bbox{r},\bbox{r}')
= V_0 \delta(\bbox{r}-\bbox{r}'),
\end{equation}
leads to volume pairing. By adding a density-dependent
component, the pairing field becomes surface-peaked. A simple
modification of force (\ref{DIDI}) is the DDDI
\cite{[Boc67],[Cha76],[Kad78]}
\begin{equation}\label{DDDI}
V^{\delta\rho}(\bbox{r},\bbox{r}') =
V_0\delta(\bbox{r}-\bbox{r}')
\left\{1-\left[\rho(\bbox{r})/\rho_c\right]^\gamma\right\},
\end{equation}
where $\rho(\bbox{r})$ is the isoscalar nucleonic density, and
$V_0, \rho_c$ and $\gamma$ are constants. If $\rho_c$ is chosen
such that it is close to the saturation density,
$\rho_c$$\approx$$\rho(\bbox{r}=0)$, both the resulting pair
density and the pairing potential $\Delta(\bbox{r})$
(see Secs.{\ }\ref{sec2ac} and \ref{sec2ba}) are small in the
nuclear interior. By varying the magnitude of the
density-dependent term, the transition from volume pairing
[$\rho_c$$\gg$$\rho(0)$] to surface pairing can be probed.
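The volume-to-surface transition described here can be visualized directly: the sketch below evaluates the position-dependent factor multiplying $V_0$ in Eq.~(\ref{DDDI}) for a Fermi-shaped density profile (the profile parameters and $\gamma=1$ are illustrative choices, not fitted values).

```python
import math

rho0, R, a = 0.16, 6.0, 0.6      # fm^-3, fm, fm: schematic density profile
def rho(r):
    return rho0 / (1.0 + math.exp((r - R) / a))

def pairing_factor(r, rho_c=0.16, gamma=1.0):
    # position-dependent factor multiplying V0: 1 - (rho/rho_c)^gamma
    return 1.0 - (rho(r) / rho_c) ** gamma

for r in (0.0, 3.0, 6.0, 9.0):
    print(f"r = {r:3.1f} fm   rho_c ~ rho(0): {pairing_factor(r):.3f}   "
          f"rho_c >> rho(0): {pairing_factor(r, rho_c=10 * rho0):.3f}")
```

With $\rho_c$ near the central density the factor vanishes in the interior and tends to one outside (surface pairing); with $\rho_c\gg\rho(0)$ it stays close to one everywhere (volume pairing).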
What are the experimental arguments in favor of surface pairing?
Probably the strongest evidence is the odd-even staggering in
differential radii, explained in terms of the direct coupling
between the proton density and the neutron pairing tensor
\cite{[Zaw87],[Reg88],[Fay94],[Hor94]}. Other experimental
observables which strongly reflect the spatial character of
pairing are the particle widths and energies of deep-hole states
\cite{[Bul80],[Bel87]}, strongly influenced by the
pairing-induced coupling to the particle continuum, and the pair
transfer form factors, directly reflecting the shape of the pair
density. Because of strong surface effects, the properties of
weakly bound nuclei are sensitive to the density dependence of
pairing. In particular, the same type of force is used to
describe the spatial extension of loosely bound light systems
\cite{[Ber91],[Len91],[Ost92],[Boh93]}. (The measurable
fingerprints of surface pairing in neutron-rich systems are
further discussed in Sec.{\ }\ref{sec4}.) In this context, it
is also worth mentioning that the self-consistent model with the
DDDI has recently been used to describe the nuclear charge radii
\cite{[Taj93b]} and the moments of inertia of superdeformed
nuclei \cite{[Ter95]}. In the latter case, the inclusion of a
density dependence in the p-p channel turned out to be crucial
for the reproduction of experimental data around $^{194}$Hg.
In a series of papers \cite{[Zve84],[Zve85],[Smi88]} the
Quasiparticle Lagrangian method (QLM) \cite{[Kho82]} based on
the single-particle Green function approach in the coordinate
representation \cite{[Shl75]} has been applied to the
description of nuclear superfluidity. The resulting pairing
interaction, based on the Landau-Migdal ansatz \cite{[Mig67]},
has zero-range and contains two-body and three-body components,
thus leading to a density-dependent contact force similar to
that of Eq.{\ }(\ref{DDDI}). (Note that in the approximations
of Ref.{\ }\cite{[Zve84]}, the {\em neutron} pairing
interaction is proportional to the {\em proton} density and vice
versa.) However, in practical QLM calculations
\cite{[Zve84],[Zve85],[Smi88]}, a pure density-independent delta
force was used.
A better understanding of the density dependence of the nuclear pairing
interaction is important for theories of superfluidity in
neutron stars. As pointed out in Ref.{\ }\cite{[Pet95]}, it
is impossible at present to deduce the magnitude of the pairing gaps
in neutron stars with sufficient accuracy. Indeed, calculations
of $^1S_0$ pairing gaps in pure neutron matter, or symmetric
nuclear matter, based on bare $NN$ interactions \cite{[Che93]}
suggest a strong dependence on the force used; in general, the
singlet-$S$ pairing is very small at the saturation point. On
the other hand, nuclear matter calculations with an effective
finite-range interaction, namely the Gogny force \cite{[Kuc89]},
yield rather large values of the pairing gap at saturation
($\Delta$$\simeq$0.7\,MeV). (For relativistic HFB calculations
for symmetric nuclear matter, see Ref.{\ }\cite{[Kuc91]}. The
pairing properties of the Skyrme force in nuclear matter were
investigated in Ref.{\ }\cite{[Tak94]}. See also
Ref.{\ }\cite{[Bal95]} for schematic calculations of pairing
properties in nuclear matter based on the Green function method
with a contact interaction, and
Ref.{\ }\cite{[Deb94]} for a semi-classical description
of neutron superfluidity in neutron stars using the Gogny force.)
In this study, several self-consistent models based upon the HFB
approaches are used. The effective interactions employed, and
other model parameters, are briefly discussed below.
The spherical HFB-Skyrme calculations have been carried out in
spatial coordinates following the method introduced in
Ref.{\ }\cite{[Dob84]} and discussed in detail in
Secs.{\ }\ref{sec2a}-\ref{sec3}. Several effective Skyrme
interactions are investigated. These are: (i) the Skyrme
parametrization SkP introduced in Ref.{\ }\cite{[Dob84]} (SkP
has exactly the same form in the particle-hole (p-h) and pairing
channels); (ii) Skyrme interaction SkP$^{\delta}$ of
Ref.{\ }\cite{[Dob95c]} (in the p-h channel, this force is the
SkP Skyrme parametrization, while its pairing component is given
by the delta interaction, Eq.{\ }(\ref{DIDI})); (iii) the Skyrme
interaction SkP$^{\delta\rho}$ of Ref.{\ }\cite{[Dob95c]} (in
the p-h channel, this force is the SkP Skyrme parametrization,
while its pairing component is given by Eq.{\ }(\ref{DDDI}));
(iv) the force SIII$^{\delta}$ (in the p-h channel, this is the
SIII Skyrme parametrization \cite{[Bei75]}; its pairing
component is given by the delta force of
Ref.{\ }\cite{[Dob95c]}); (v) the force SkM$^{\delta}$ (in the
p-h channel, this is the SkM$^*$ Skyrme parametrization
\cite{[Bar82]}, and its pairing part is given by the delta force
with the parameters of Ref.{\ }\cite{[Dob95c]}).
Apart from other parameters, the above Skyrme forces differ in
their values of the effective mass for symmetric nuclear
matter, $m^*/m$. Namely, $m^*/m$ is 0.76, 0.79, and 1 for SIII,
SkM$^*$, and SkP, respectively. All HFB-Skyrme results have
been obtained using the pairing phase space as determined in
Ref.{\ }\cite{[Dob84]} (see also discussion in
Appendix{\ }\ref{appB}).
A set of spherical HFB calculations has also been performed
using the finite-range density-dependent Gogny interaction D1S
of Ref.{\ }\cite{[Dec80]}. In this effective interaction
\cite{[Gog73]} the central part consists of four terms
parametrized with finite-range Gaussians (see
Appendix{\ }\ref{appA}). Spin-orbit and density-dependent
terms of zero range are also included as in the Skyrme
parametrizations. The pairing field is calculated from the D1S
force, i.e., the same interaction is used for a microscopic
description of both the mean-field and pairing channels.
However, by a specific choice of the exchange contribution, the
pairing component of the D1S is density-independent. It is also
interesting to note that the pairing component of the D1S is
repulsive at short distances and attractive at long range
\cite{[Kuc91],[Rum94]}. For the D1S force, the effective mass
for infinite nuclear matter is $m^*/m$=0.70.
The parameters of the D1S interaction were chosen to reproduce
certain global properties of a set of spherical nuclei and of
nuclear matter \cite{[Ber89]}. The HFB+Gogny results presented
here were obtained by expanding the HFB wave functions in a
harmonic oscillator basis containing up to 19 shells.
\section{Independent-quasiparticle states}
\label{sec2a}
The HFB approach is a variational method which uses
independent-quasiparticle states as trial wave functions. These
states are particularly convenient when used in a variational
theory, because, due to the Wick theorem \cite{[Wic50]}, one can
easily calculate for them the average values of an arbitrary
many-body Hamiltonian. Even if the exact eigenstates of such a
Hamiltonian can be rather remote from any one of the
independent-quasiparticle states, one can argue \cite{[Bal92b]}
that fair estimates of at least one-body observables may be
obtained in this way.
An independent-quasiparticle state is defined as a vacuum of
quasiparticle operators which are linear combinations of
particle creation and annihilation operators. This linear
combination is called the Bogoliubov transformation
\cite{[Bog58],[Bog59a],[Bog59b]}. According to the Thouless
theorem \cite{[Tho60]}, every independent-quasiparticle state
$|\Psi\rangle$, which is not orthogonal to the vacuum state
$|0\rangle$, i.e., $\langle0|\Psi\rangle$$\neq$0, can be
presented in the form
\begin{equation}\label{eq101}
|\Psi\rangle = \exp\left\{-\frac{1}{2}\sum_{\mu\nu}
Z^+_{\mu\nu}a^+_\mu a^+_\nu\right\} |0\rangle ,
\end{equation}
where the Thouless matrix $Z$ is antisymmetric, $Z^+=-Z^*$, and
in general complex. The phase of state (\ref{eq101}) is fixed
by the condition $\langle0|\Psi\rangle$=1; the norm is given by
$\langle\Psi|\Psi\rangle$=$\det(1+Z^+Z)^{1/2}$. In the
following, state $|\Psi\rangle$ will represent the
$I^\pi$=0$^+$ ground state of the even-even system.
We refer to standard textbooks \cite{[RS80]} for a discussion of
the properties of the Bogoliubov transformation. Here we start our
discussion from the trial wave function (\ref{eq101}) which is
parametrized by the matrix elements of $Z$. This form of the
independent-quasiparticle state is very convenient in
variational applications because variations with respect to all
matrix elements $Z_{\mu\nu}$=$-Z_{\nu\mu}$ are independent of
one another.
Instead of using the matrix representation corresponding to a
set of single-particle creation operators $a^+_\mu$ numbered by
the discrete index $\mu$, one may use the spatial coordinate
representation. This is particularly useful when discussing
spatial properties of the variational wave functions and the
coupling to the particle continuum. Therefore, in the following,
we shall consider the operators creating a particle in the space
point $\bbox{r}$ and having the projection of spin
$\sigma$=$\pm\frac{1}{2}$,
\begin{equation}\label{eq103}
a^+_{\bbox{r}\sigma} = \sum_\mu \psi^*_\mu(\bbox{r}\sigma)
a^+_\mu,
\end{equation}
where $\psi_\mu(\bbox{r}\sigma)$ is the wave function of the
$\mu$-th single-particle state. To simplify the following
expressions, we consider only one type of particle. A
generalization to systems described by a product of neutron and
proton wave functions is straightforward, while that involving
the mixing in the isospin degree of freedom is discussed in
Ref.{\ }\cite{[Roh95]}.
The inverse relation with respect to (\ref{eq103}) is given by
\begin{equation}\label{eq104}
a^+_\mu = \int \text{d}^3\bbox{r} \sum_\sigma
\psi_\mu(\bbox{r}\sigma) a^+_{\bbox{r}\sigma}.
\end{equation}
Equations (\ref{eq103}) and (\ref{eq104}) assume that the wave
functions $\psi_\mu(\bbox{r}\sigma)$ form an orthonormal and
complete set. In practical calculations, the basis has to be
truncated and the completeness is realized only approximately.
The choice of the single-particle wave functions used (size of
the set and, in particular, the asymptotic behavior) is of
crucial importance to the phenomena discussed in this study.
In coordinate space, the Thouless state (\ref{eq101}) has
the form
\begin{equation}\label{eq105}
|\Psi\rangle = \exp\left\{-\frac{1}{2}
\int \text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}
Z^+(\bbox{r}\sigma,\bbox{r}'\sigma')
a^+_{\bbox{r}\sigma} a^+_{\bbox{r}'\sigma'}
\right\} \! |0\rangle ,
\end{equation}
and is defined by the antisymmetric complex function,
$Z^+(\bbox{r}\sigma,\bbox{r}'\sigma')$=
$-Z^*(\bbox{r}\sigma,\bbox{r}'\sigma')$, of space-spin
coordinates. Already, at this point, we see that any variational
method employing an attractive effective interaction
for a {\em bound finite system} must lead
to functions which are localized in space,
\begin{equation}\label{eq106}
\lim_{|\bbox{r}|\rightarrow\infty}
Z(\bbox{r}\sigma,\bbox{r}'\sigma')=0,
\mbox{\hspace{3ex}for any~} \bbox{r}', \sigma',
\mbox{and~} \sigma.
\end{equation}
Recall that in the coordinate space, values of the function
$Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ at different space-spin
points are the variational parameters, and that any arbitrarily
small value of this function at large distance,
$|\bbox{r}|\rightarrow\infty$, would create at this point a
non-zero probability density. Whether this would be
energetically favorable depends upon the number of particles in
the system and on the interaction used in the variational
method. Apart from exotic phenomena such as halos, and apart
from infinite matter such as in the neutron-star crust, we
assume that the attractiveness of the interaction always favors
compact, localized probability densities, and hence we require
the localization condition (\ref{eq106}) for the variational
parameters $Z(\bbox{r}\sigma,\bbox{r}'\sigma')$.
An expansion of the variational function
$Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ in terms of the
single-particle wave functions is a straightforward consequence
of transformations (\ref{eq103}) and (\ref{eq104}),
\begin{equation}\label{eq107}
Z(\bbox{r}\sigma,\bbox{r}'\sigma')
= \sum_{\mu\nu} \psi^*_\mu(\bbox{r} \sigma )Z_{\mu\nu}
\psi^*_\nu(\bbox{r}'\sigma') .
\end{equation}
The localization condition, Eq.~(\ref{eq106}), can, therefore,
be guaranteed in the most economic way by requiring that {\em
all} single-particle wave functions $\psi_\mu(\bbox{r}\sigma)$
vanish at large distances. Of course, this is only a matter of
convenience and manageability, because any localized function
can be expanded in any complete basis. It is, however, obvious
that such an expansion converges very slowly if the basis has
inappropriate asymptotic properties. For example, one can expect
that a plane-wave expansion of
$Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ would require an infinite
number of basis states $\psi_\mu(\bbox{r}\sigma)$, and in
practice, any reduction to a finite basis would lead to serious
errors. A discussion pertaining to asymptotic properties of
functions in spatial coordinates, and the choice of an
appropriate single-particle basis, will be a pivotal point in
our study.
\subsection{Time-reversal}
\label{sec2aa}
The present study is entirely restricted to an analysis of
ground-state phenomena, and therefore, we use only time-even
variational independent-quasiparticle wave functions. The
time-reversal operator can be represented as a product of the
spin-flip operator and the complex conjugation; i.e., $\hat
T$=$-i\hat\sigma_y\hat K$ \cite{[Mes61]}. The explicit
time-reversed creation operators then have the form
\begin{mathletters}\begin{eqnarray}
\hat T^+a^+_{\bbox{r}\sigma}\hat T &=& -2\sigma a^+_{\bbox{r},-\sigma} ,
\label{eq108a} \\
\hat T^+a^+_\mu\hat T &=& \int \text{d}^3\bbox{r} \sum_\sigma
[2\sigma\psi^*_\mu(\bbox{r},-\sigma)] a^+_{\bbox{r}\sigma}.
\label{eq108b}
\end{eqnarray}\end{mathletters}%
We now suppose that the set of basis states represented by the
creation operators $a^+_\mu$ is closed with respect to time
reversal, and that the state $\hat T^+a^+_\mu\hat T$ is actually
proportional (up to a phase factor $s_{\bar\mu}$=$-s_{\mu}$,
$|s_{\mu}|$=1) to another basis state denoted by a bar over the
Greek symbol, i.e.,
\begin{mathletters}\begin{eqnarray}
\hat T^+a^+_\mu\hat T &=& s_{\bar\mu}a^+_{\bar\mu}, \label{eq109a} \\
s_{\bar\mu}\psi_{\bar\mu}(\bbox{r}\sigma)
&=& 2\sigma\psi^*_\mu(\bbox{r},-\sigma). \label{eq109b}
\end{eqnarray}\end{mathletters}%
In this way, the single-particle basis is assumed to be composed
of pairs of time-reversed states denoted by indices $\mu$ and
$\bar\mu$. In what follows, we use the convention that
$\bar{\bar\mu}$$\equiv$$\mu$, and that the sums over either
$\mu$ or $\bar\mu$ are always performed over {\em all} basis
states. The phase factors $s_{\mu}$ depend on relative phases
chosen for the $\mu$-th and $\bar\mu$-th states of the basis; it
is convenient to keep them unspecified in all theoretical
formulae and to make a definite suitable choice of the phase
convention only in a specific final application.
\subsection{Canonical basis}
\label{sec2ab}
A requirement of the time-reversal symmetry of the quasiparticle vacuum
(\ref{eq101}) or (\ref{eq105}),
$\hat T|\Psi\rangle$=$|\Psi\rangle$, leads to the following
conditions:
\begin{mathletters}\begin{eqnarray}
Z_{\mu\nu} &=& s^*_{\mu}s^*_{\nu}Z^*_{\bar\mu\bar\nu},
\label{eq110a} \\
Z(\bbox{r}\sigma,\bbox{r}'\sigma')
&=& 4\sigma\sigma'Z^*(\bbox{r},-\sigma,\bbox{r}',-\sigma').
\label{eq110b}
\end{eqnarray}\end{mathletters}%
These properties allow the introduction of more suitable forms of
$Z_{\mu\nu}$ and $Z(\bbox{r}\sigma,\bbox{r}'\sigma')$; namely,
\begin{mathletters}\begin{eqnarray}
\tilde Z_{\mu\nu} &:=& s_{\mu}Z_{\bar\mu\nu} ,
\label{eq111a} \\
\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')
&:=& 2\sigma Z(\bbox{r},-\sigma,\bbox{r}'\sigma') .
\label{eq111b}
\end{eqnarray}\end{mathletters}%
The matrix $\tilde Z_{\mu\nu}$ and the function $\tilde
Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ are both time-even and
hermitian,
\begin{mathletters}\begin{eqnarray}
\tilde Z^*_{\mu\nu} &=& \tilde Z_{\nu\mu} ,
\label{eq112a} \\
\tilde Z^*(\bbox{r}\sigma,\bbox{r}'\sigma')
&=& \tilde Z(\bbox{r}'\sigma',\bbox{r}\sigma) ,
\label{eq112b}
\end{eqnarray}\end{mathletters}%
and therefore they can be considered as usual operators in the
corresponding Hilbert spaces. In particular, the function
$\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ can be
diagonalized by solving the following integral eigenequation:
\begin{equation}\label{eq113}
\int \text{d}^3\bbox{r}' \sum_{\sigma'}
\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')
\breve\psi_{\mu}(\bbox{r}'\sigma')
= z_\mu \breve\psi_{\mu}(\bbox{r}\sigma),
\end{equation}
where $z_\mu$ are real eigenvalues, $z_\mu$=$z_{\bar\mu}$. The
eigenfunctions $\breve\psi_{\mu}(\bbox{r}\sigma)$ form the
single-particle basis, usually referred to as the {\em canonical
basis}. Canonical states, together with the eigenvalues $z_\mu$,
completely define the quasiparticle vacuum $|\Psi\rangle$.
(Here and in the following we use the breve-accented symbols,
e.g., $\breve\psi_{\mu}$ and $\breve{a}^+_{\mu}$, to denote
objects pertaining to the canonical basis.)
Two important remarks concerning the canonical basis are now in
order. First, the localization condition (\ref{eq106}) directly
results in the fact that {\em all} canonical-basis
single-particle wave functions
$\breve\psi_{\mu}(\bbox{r}\sigma)$ are localized in space; i.e.,
vanish at large distances, $|\bbox{r}|\rightarrow\infty$.
Therefore, as discussed previously, a choice of the localized
wave functions for the basis states $\psi_{\mu}(\bbox{r}\sigma)$
may allow for a rapid convergence in the expansion
\begin{mathletters}\begin{eqnarray}
\breve\psi_{\mu}(\bbox{r}\sigma) &=& \sum_\nu D_{\nu\mu}
\psi_{\nu}(\bbox{r}\sigma) , \label{eq114a} \\
\breve a^+_{\mu} &=& \sum_\nu D_{\nu\mu}
a^+_{\nu} . \label{eq114b}
\end{eqnarray}\end{mathletters}%
Second, since $\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ and
$\tilde Z_{\mu\nu}$ are related by
[cf.{\ }Eq.{\ }(\ref{eq107})]
\begin{equation}\label{eq115}
\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')
= \sum_{\mu\nu} \psi_\mu(\bbox{r} \sigma )\tilde Z_{\mu\nu}
\psi^*_\nu(\bbox{r}'\sigma') ,
\end{equation}
a diagonalization of
$\tilde{Z}(\bbox{r}\sigma,\bbox{r}'\sigma')$,
Eq.{\ }(\ref{eq113}), is equivalent to a diagonalization of
$\tilde Z_{\mu\nu}$,
\begin{equation}\label{eq117}
\sum_\nu\tilde Z_{\mu\nu}D_{\nu\tau} = z_{\tau}D_{\mu\tau}.
\end{equation}
Therefore, in the canonical basis,
the Thouless state (\ref{eq101}) acquires the well-known
separable BCS-like form
\begin{eqnarray}
|\Psi\rangle &=& \exp\left\{\frac{1}{2}\sum_{\mu\nu}
\tilde Z_{\mu\nu}s_\mu a^+_{\bar\mu} a^+_\nu\right\}
|0\rangle
\nonumber \\[1ex]
&=& \exp\left\{\sum_{\nu>0}
z_{\nu}s_\nu \breve a^+_{\bar\nu}
\breve a^+_\nu\right\}
|0\rangle
\nonumber \\
&=& \prod_{\nu>0}\left(1+
z_{\nu}s_\nu \breve a^+_{\bar\nu}
\breve a^+_\nu\right)
|0\rangle, \label{eq116}
\end{eqnarray}
where the symbol $\nu>0$ denotes the sum over one-half of the
basis states with only one state (either one) of each
time-reversed pair ($\nu,\bar\nu$) included, and
$\breve{a}^+_\nu$ is the creation operator in the canonical
basis.
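The truncation of the exponential in Eq.{\ }(\ref{eq116}) to the
product form follows from $(\breve a^+_\nu)^2$=0, and can be checked
numerically on a single canonical pair. The following sketch is an
illustrative Python/NumPy toy, not part of the formalism: the two
modes stand for $\nu$ and $\bar\nu$, the phase $s_\nu$ is set to 1,
and the amplitude $z$=0.7 is arbitrary. It also verifies the
occupation number $v^2$=$z^2/(1+z^2)$ quoted later in the text.

```python
import numpy as np

# Two fermionic modes (nu and nu-bar) via a Jordan-Wigner construction.
adag = np.array([[0., 0.], [1., 0.]])  # single-mode creation operator
I2 = np.eye(2)
Zm = np.diag([1., -1.])                # Jordan-Wigner string

c0 = np.kron(adag, I2)                 # creates a particle in mode nu
c1 = np.kron(Zm, adag)                 # creates a particle in mode nu-bar
vac = np.kron([1., 0.], [1., 0.])      # bare vacuum |0>

z = 0.7                                # one canonical amplitude z_nu
A = z * c1 @ c0                        # z a+_nubar a+_nu  (phase s_nu = 1)

# (a+)^2 = 0 implies A^2 = 0, so exp(A) truncates exactly to 1 + A:
assert np.allclose(A @ A, 0.0)
psi = (np.eye(4) + A) @ vac

# Occupation of mode nu reproduces v^2 = z^2/(1+z^2):
n0 = c0 @ c0.T
occ = psi @ n0 @ psi / (psi @ psi)
assert np.isclose(occ, z**2 / (1. + z**2))
```

Since $A^2$ vanishes, the exponential series stops after the linear
term, which is precisely the content of the last equality in
Eq.{\ }(\ref{eq116}).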
\subsection{Density matrices and the correlation probability}
\label{sec2ac}
According to the Wick theorem \cite{[Wic50],[RS80]} for the
independent-quasiparticle state, Eqs.{\ }(\ref{eq101}) or
(\ref{eq105}), an average value of any operator can be expressed
through average values of bifermion operators,
\begin{mathletters}\begin{eqnarray}
\rho(\bbox{r}\sigma,\bbox{r}'\sigma') &=&
\langle\Psi|a^+_{\bbox{r}'\sigma'}a_{\bbox{r}\sigma}|\Psi\rangle,
\label{eq118a} \\
\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma') &=& -2\sigma'
\langle\Psi|a_{\bbox{r}',-\sigma'}a_{\bbox{r}\sigma}|\Psi\rangle.
\label{eq118b}
\end{eqnarray}\end{mathletters}%
{}Functions $\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$ and
$\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$ are called the
particle and pairing density matrices, respectively. For a
time-reversal invariant state $|\Psi\rangle$, both density
matrices are time-even and hermitian:
\begin{mathletters}\label{7ab}\begin{eqnarray}
\rho(\bbox{r} \sigma, \bbox{r}' \sigma') &=& 4\sigma \sigma'
\rho(\bbox{r} -\sigma, \bbox{r}' -\sigma')^{*},
\label{7} \\
\tilde\rho(\bbox{r} \sigma, \bbox{r}' \sigma') &=& 4\sigma \sigma'
\tilde\rho(\bbox{r} -\sigma, \bbox{r}' -\sigma')^{*}.
\label{7b}
\end{eqnarray}\end{mathletters}%
Therefore, the pairing density matrix
$\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$ is more convenient
to use than the standard pairing tensor
$\kappa(\bbox{r}\sigma,\bbox{r}'\sigma')$ \cite{[RS80]},
\begin{equation}\label{eq119}
\kappa(\bbox{r}\sigma,\bbox{r}'\sigma') =
2\sigma'\tilde\rho(\bbox{r}\sigma,\bbox{r}',-\sigma'),
\end{equation}
which is an antisymmetric function of the space-spin arguments.
The formulae expressing $\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$
and $\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$ in terms of
the function $\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ can be
easily derived from those for the density matrix and the pairing
tensor \cite{[RS80]}, and they read
\begin{mathletters}\label{eq120}\begin{eqnarray}
\rho &=& (1+\tilde Z^2)^{-1}\tilde Z^2, \label{eq120a} \\
\tilde\rho &=& (1+\tilde Z^2)^{-1}\tilde Z. \label{eq120b}
\end{eqnarray}\end{mathletters}%
As a result, the density matrices obey the following relations:
\begin{mathletters}\label{eq121}\begin{eqnarray}
\tilde\rho\cdot\rho &=& \rho\cdot\tilde\rho , \label{eq121a} \\
\rho\cdot\rho &+& \tilde\rho\cdot\tilde\rho = \rho . \label{eq121b}
\end{eqnarray}\end{mathletters}%
In the above equations, the matrix multiplications and
inversions should be understood in the operator sense; i.e.,
they involve the integration over space and summation over spin
variables. For instance:
\begin{equation}\label{eq124}
(\tilde\rho\cdot\rho)(\bbox{r}\sigma,\bbox{r}'\sigma')
= \int \text{d}^3\bbox{r}'' \sum_{\sigma''}
\tilde\rho(\bbox{r}\sigma,\bbox{r}''\sigma'')
\rho(\bbox{r}''\sigma'',\bbox{r}'\sigma').
\end{equation}
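The operator relations (\ref{eq120}) and (\ref{eq121}) are easy to
verify in a finite matrix representation. A minimal numerical sketch
(Python/NumPy; the random real symmetric $\tilde Z$ on a
six-dimensional space is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
Zt = (M + M.T) / 2                      # a real symmetric (hermitian) Z-tilde

inv = np.linalg.inv(np.eye(n) + Zt @ Zt)
rho = inv @ Zt @ Zt                     # rho       = (1 + Z~^2)^-1 Z~^2
rhot = inv @ Zt                         # rho-tilde = (1 + Z~^2)^-1 Z~

# Kinematic relations: the two densities commute, and rho^2 + rho~^2 = rho.
assert np.allclose(rhot @ rho, rho @ rhot)
assert np.allclose(rho @ rho + rhot @ rhot, rho)
```

Both relations hold identically because $\rho$ and $\tilde\rho$ are
functions of the single hermitian operator $\tilde Z$ and are hence
simultaneously diagonalizable.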
{\em Local} HFB densities, i.e., the density matrices for equal
spatial arguments, $\bbox{r}'$=$\bbox{r}$, have very
well-defined physical interpretations. To see this, let us
assume that $\psi_{\bbox{x}s}(\bbox{r}\sigma)$ is a normalized
single-particle wave function (wave packet) concentrated in a
small volume $V_{\bbox{x}}$ around the point
$\bbox{r}$=$\bbox{x}$ and having the spin $s$=$\sigma$. The
corresponding creation operator
\begin{equation}\label{eq125}
a^+_{\bbox{x}s} = \int \text{d}^3\bbox{r} \sum_\sigma
\psi_{\bbox{x}s}(\bbox{r}\sigma) a^+_{\bbox{r}\sigma},
\end{equation}
together with its hermitian conjugate, define the operator
\begin{equation}\label{eq126}
\hat N_{\bbox{x}s} = a^+_{\bbox{x}s}a_{\bbox{x}s} ,
\end{equation}
which measures the number of particles in the vicinity of the
point $\bbox{x}$. Since
\begin{equation}\label{eq127}
\hat N_{\bbox{x}s}^2 = \hat N_{\bbox{x}s} ,
\end{equation}
$\hat N_{\bbox{x}s}$ can be regarded as a projection operator
which projects out the component of the many-body wave function
that contains one spin-$s$ fermion in the volume $V_{\bbox{x}}$.
Therefore, its average value gives {\em the probability to find
a particle with spin $s$ in this volume}:
\begin{equation}\label{eq128}
{\cal{P}}_{1}
(\bbox{x}s) = \langle\Psi|\hat N_{\bbox{x}s}|\Psi\rangle =
V_{\bbox{x}}\rho(\bbox{x}s,\bbox{x}s) .
\end{equation}
In a very similar way, the probability of finding a fermion in
$V_{\bbox{x}}$ having opposite spin can be obtained by
considering the time-reversed wave function
$2\sigma\psi^*_{\bbox{x}s}(\bbox{r},-\sigma)$,
cf.{\ }Eq.{\ }(\ref{eq108b}). This gives
\begin{equation}\label{eq129}
{\cal{P}}_{1}
(\bbox{x},-s) = \langle\Psi|\hat N_{\bbox{x},-s}|\Psi\rangle =
V_{\bbox{x}}\rho(\bbox{x},-s,\bbox{x},-s) .
\end{equation}
Due to time-reversal symmetry, probabilities (\ref{eq128})
and (\ref{eq129}) are equal.
We may now ask the question, ``What is {\em the probability of
finding a pair of fermions with opposite spin projections} in the
volume $V_{\bbox{x}}$, ${\cal{P}}_{2}(\bbox{x})$?". If one
considers two {\em independent} measurements, in which the
spin-$s$ fermion is found in the first and the spin-($-s$)
fermion in the second, ${\cal{P}}_{2}(\bbox{x})$ equals the
product of the individual probabilities; i.e.,
${\cal{P}}_{1}(\bbox{x},s){\cal{P}}_{1}(\bbox{x},-s)$. On the
other hand, if one wants to find in $V_{\bbox{x}}$ both fermions
{\em simultaneously}, one should project out from $|\Psi\rangle$
a corresponding two-fermion component. In this case,
${\cal{P}}_{2}(\bbox{x})$ becomes the expectation value of the
product of the projection operators $\hat N_{\bbox{x}s}$ and
$\hat N_{\bbox{x},-s}$; i.e.,
${\cal{P}}_{2}(\bbox{x})=\langle\Psi|\hat N_{\bbox{x}s} \hat
N_{\bbox{x},-s}|\Psi\rangle$. Using the Wick theorem, this
average value is
\begin{equation}\label{eq132a}\begin{array}{rl}
{\cal{P}}_{2}(\bbox{x})
=& V^2_{\bbox{x}} \rho(\bbox{x}s,\bbox{x}s)
\rho(\bbox{x},-s,\bbox{x},-s) + \\
& V^2_{\bbox{x}}\tilde\rho(\bbox{x}s,\bbox{x}s)
\tilde\rho(\bbox{x},-s,\bbox{x},-s),
\end{array}\end{equation}
or in terms of the time-even spin-averaged densities:
\begin{equation}\label{eq132b}
{\cal{P}}_{2}(\bbox{x})
= \frac{1}{4}V^2_{\bbox{x}} \rho(\bbox{x})^2
+ \frac{1}{4}V^2_{\bbox{x}}\tilde\rho(\bbox{x})^2,
\end{equation}
where
\begin{mathletters}\label{eq320}\begin{eqnarray}
\rho(\bbox{r}) &=& \sum_\sigma
\rho(\bbox{r}\sigma,\bbox{r}\sigma),
\label{eq320a} \\
\tilde\rho(\bbox{r}) &=& \sum_\sigma
\tilde\rho(\bbox{r}\sigma,\bbox{r}\sigma).
\label{eq320b}
\end{eqnarray}\end{mathletters}%
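The factors of $\frac{1}{4}$ deserve a spelled-out intermediate step
(not written explicitly above): the time-reversal relations
(\ref{7ab}) at $\sigma$=$\sigma'$=$s$, combined with hermiticity of
the diagonal elements, give

```latex
\rho(\bbox{x}s,\bbox{x}s)
  = \rho(\bbox{x},-s,\bbox{x},-s)
  = \tfrac{1}{2}\rho(\bbox{x}),
\qquad
\tilde\rho(\bbox{x}s,\bbox{x}s)
  = \tilde\rho(\bbox{x},-s,\bbox{x},-s)
  = \tfrac{1}{2}\tilde\rho(\bbox{x}),
```

and substituting these equal spin components into
Eq.{\ }(\ref{eq132a}) immediately yields Eq.{\ }(\ref{eq132b}).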
Since the first terms in Eqs.{\ }(\ref{eq132a}) and
(\ref{eq132b}) describe the probability of finding the two
fermions in independent measurements, the second terms in these
equations should be interpreted as the probability of finding
{\em the correlated pair} at point $\bbox{x}$.
The above arguments allow us to give a transparent physical
interpretation to the local HFB densities. Namely, as usual,
$\rho(\bbox{r})$ represents the probability density of finding a
particle at the given point. On the other hand,
$\tilde\rho(\bbox{r})^2$ gives the correlation probability
density; i.e., the probability of finding a pair of fermions
{\em in excess} of the probability of finding two uncorrelated
fermions.
It is important to note that kinematic conditions (\ref{eq121}),
which result from the fact that $|\Psi\rangle$ is an
independent-quasiparticle state, Eq.{\ }(\ref{eq105}), {\em do
not} directly constrain the local values of the particle and
pairing density matrices. In particular, there is no obvious
kinematic relation between the probability of finding two
independent particles at a given point of space, and the
probability of finding a correlated pair at the same point.
Indeed, the first can be small while the second is
large (see discussion in Secs.{\ }\ref{sec2aca} and
\ref{sec3c}). This means that, in such a situation, experiments
probing the presence of two particles will always
find these two particles as correlated pairs, without the
``background'' characteristic of two independent particles.
Relations (\ref{eq120}) imply that all three functions:
$\tilde Z(\bbox{r}\sigma,\bbox{r}'\sigma')$,
$\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$, and
$\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')$
are diagonal in the canonical basis, cf.{\ }Eq.{\ }(\ref{eq113}).
Using the standard
notation for the eigenvalues of $\rho$ and $\tilde\rho$, one obtains
\begin{mathletters}\label{eq122}\begin{eqnarray}
\int \text{d}^3\bbox{r}' \sum_{\sigma'}
\rho(\bbox{r}\sigma,\bbox{r}'\sigma')
\breve\psi_{\mu}(\bbox{r}'\sigma')
&=& v^2_\mu \breve\psi_{\mu}(\bbox{r}\sigma) , \label{eq122a} \\
\int \text{d}^3\bbox{r}' \sum_{\sigma'}
\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')
\breve\psi_{\mu}(\bbox{r}'\sigma')
&=& u_\mu v_\mu \breve\psi_{\mu}(\bbox{r}\sigma) , \label{eq122b}
\end{eqnarray}\end{mathletters}%
where the real factors $v_\mu$ and $u_\mu$ are given by
\begin{equation}\label{eq123a}
v_\mu = v_{\bar\mu} = \frac{z_\mu}{\sqrt{1+z^2_\mu}} \quad,\quad
u_\mu = u_{\bar\mu} = \frac{ 1 }{\sqrt{1+z^2_\mu}} .
\end{equation}
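Equation (\ref{eq123a}) can be cross-checked numerically against the
eigenvalues of $\rho$, Eq.{\ }(\ref{eq120a}). The sketch below
(Python/NumPy, with an arbitrary random symmetric $\tilde Z$ as
input) verifies the BCS normalization $u^2_\mu$+$v^2_\mu$=1 and that
the $v^2_\mu$ are indeed the natural occupation numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
Zt = (M + M.T) / 2                      # real symmetric Z-tilde

z = np.linalg.eigvalsh(Zt)              # canonical-basis eigenvalues z_mu
v = z / np.sqrt(1. + z**2)
u = 1. / np.sqrt(1. + z**2)

assert np.allclose(u**2 + v**2, 1.0)    # BCS normalization

# The v^2_mu are the eigenvalues (natural occupations) of rho:
rho = np.linalg.inv(np.eye(6) + Zt @ Zt) @ Zt @ Zt
assert np.allclose(np.sort(np.linalg.eigvalsh(rho)), np.sort(v**2))
```

The same diagonalization also yields $u_\mu v_\mu$=$z_\mu/(1+z^2_\mu)$
as the eigenvalues of $\tilde\rho$, Eq.{\ }(\ref{eq122b}).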
A completeness of the canonical basis leads to standard
expressions for the density matrices:
\begin{mathletters}\label{eq321}\begin{eqnarray}
\rho(\bbox{r}\sigma,\bbox{r}'\sigma')
&=& \sum_\mu v^2_\mu \breve\psi_{\mu}^*(\bbox{r}\sigma)
\breve\psi_{\mu}(\bbox{r}'\sigma')
, \label{eq321a} \\
\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma')
&=& \sum_\mu u_\mu v_\mu \breve\psi_{\mu}^*(\bbox{r}\sigma)
\breve\psi_{\mu}(\bbox{r}'\sigma')
. \label{eq321b}
\end{eqnarray}\end{mathletters}%
Equation (\ref{eq122a}) represents the traditional definition of
the canonical states as the eigenstates of the HFB density
matrix. It also shows that the canonical states are the {\em
natural states}
\cite{[Low55],[Van93],[Ant93],[Ant94],[Ant95],[Pol95]} for the
density matrix corresponding to the independent-quasiparticle
many-body state $|\Psi\rangle$, Eq.{\ }(\ref{eq118a}), and the
eigenvalues $v^2_\mu$ are the corresponding natural occupation
numbers.
One may now easily repeat the previous analysis of probabilities
of finding a particle, or a pair of particles, in the
canonical-basis single-particle state
$\breve\psi_{\mu}(\bbox{r}\sigma)$. The result, analogous to
Eqs.{\ }(\ref{eq132a}) and (\ref{eq132b}), is
${\cal{P}}_1(\mu)$=$v^2_\mu$, and ${\cal{P}}_2(\mu)$=$u^2_\mu
v^2_\mu$+$v^4_\mu$. In this case, due to the normalization
condition $u^2_\mu$+$v^2_\mu$=1,
${\cal{P}}_1(\mu)$=${\cal{P}}_2(\mu)$. This result means that
the particles in the canonical states with indices $\mu$ and
$\bar{\mu}$ are strongly correlated spatially; i.e., the
probability of finding the canonical pair,
$u^2_\mu v^2_\mu$=$v^2_\mu$$-$$v^4_\mu$, is rigidly tied to the
probability of finding two independent
canonical fermions, $v^4_\mu$. However, as discussed above, a
similar direct relation between ${\cal{P}}_1(\bbox{x})$ and
${\cal{P}}_2(\bbox{x})$ does not exist. In particular
${\cal{P}}_1(\bbox{x})$$\ne$${\cal{P}}_2(\bbox{x})$.
\subsubsection{Examples of particle and pairing densities}
\label{sec2aca}
{}Figures{\ }\ref{FIG16} and \ref{FIG17} display the particle
and pairing local spherical neutron HFB densities $\rho(r)$ and
$\tilde\rho(r)$, Eq.{\ }(\ref{eq320}), as functions of the
radial coordinate $r$=$|\bbox{r}|$. Results are shown for
several tin isotopes across the stability valley. {}For
particle densities, the results obtained with the SkP and
SkP$^\delta$ interactions are almost indistinguishable.
Therefore, Fig.{\ }\ref{FIG16} (middle panel) shows results
for the SIII$^\delta$ interaction. For pairing densities,
Fig.{\ }\ref{FIG17} compares results for the SkP,
SkP$^\delta$, and D1S effective interactions.
The particle densities obtained with these three effective
interactions are qualitatively very similar. One can see that
adding neutrons results in a simultaneous increase of the
central neutron density, and of the density in the surface
region. The relative magnitude of the two effects is governed by
a balance between the volume and the surface asymmetry energies
of effective interactions. Since all three forces considered
have been fitted in a similar way to bulk nuclear properties,
including the isospin dependence, the resulting balance between
the volume and the surface isospin effects is similar. Of
course, this does not exclude some differences which are seen
when a more detailed comparison is carried out.
The pairing densities shown in Fig.{\ }\ref{FIG17} reflect
different characters of the interactions used in the p-p
channel. The contact force (the SkP$^\delta$ results) leads to
pairing densities which are, in general, largest at the
origin and decrease towards the surface. (This general trend is
slightly modified by shell fluctuations resulting from
contributions from orbitals near the Fermi level.) At the
surface, the isospin dependence of SkP$^\delta$ is fairly weak.
For example, there is very little difference between the pairing
densities in $^{150}$Sn and $^{172}$Sn. These results are
characteristic of volume-type pairing correlations.
A different pattern appears for the SkP results, where the
density dependence renders the p-p interaction strongly peaked
at the surface. In this case, the pairing densities tend to
increase when going from the center of the nucleus towards its
surface. Again, the shell fluctuations are superimposed on top
of this general behavior. In particular, the central bump in
the pairing density in $^{120}$Sn is due to a contribution from
the 3$s_{1/2}$ state. A more pronounced dependence on the
neutron excess is seen in the surface region. Especially near
the drip line, the pairing density develops a long tail
extending towards large distances.
The results obtained for the finite-range interaction D1S
exhibit intermediate features between the surface and the volume
type of pairing correlations. In particular, in the nuclear
interior one observes a fairly large region of relatively
constant pairing density. The overall magnitude of the pairing
densities is very similar in all three approaches. In
particular, it is interesting to see that at the nuclear surface
($r$$\sim$5\,fm) all three pairing densities in $^{120}$Sn are
very close to 0.018\,fm$^{-3}$.
\section{Hartree-Fock-Bogoliubov equations}
\label{sec2b}
We begin this section by presenting basic definitions and
equations of the HFB approach. The HFB theory is discussed in
many textbooks and review articles (see
Refs.{\ }\cite{[Goo79],[RS80]}, for example), while its
aspects pertaining to the coordinate representation have been
presented in Ref.{\ }\cite{[Dob84]}. An earlier discussion of
the coordinate-representation HFB formalism has been given by
Bulgac, whose work is available only in preprint form
\cite{[Bul80]}. Recently, similar methods have also been
applied to a description of light nuclei \cite{[Ost92],[Boh93]}.
It is also worth mentioning that the Green function approach in
the coordinate representation (the Gor'kov method
\cite{[Gor58]}), is formally equivalent to HFB -- cf. discussion
in Refs.{\ }\cite{[Zve84],[Zve85]}. The only difference
between the methods lies in the explicit energy dependence of
the quasiparticle mass operator, an analog of the p-h
single-particle HF Hamiltonian (see below).
\subsection{HFB energy and HFB potentials}
\label{sec2ba}
The two-body effective Hamiltonian of a nuclear system
can be written in the coordinate representation as
\widetext
\begin{eqnarray}
\hat H &=& \int\text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}
T(\bbox{r}\sigma,\bbox{r}'\sigma')
a^+_{\bbox{r}\sigma} a_{\bbox{r}'\sigma'}
\label{eq135} \\
&+& \frac{1}{4}
\int\text{d}^3\bbox{r}_1 \text{d}^3\bbox{r}_2
\text{d}^3\bbox{r}_1'\text{d}^3\bbox{r}_2'
\sum_{\sigma_1\sigma_2\sigma_1'\sigma_2'}
V(\bbox{r}_1 \sigma_1 ,\bbox{r}_2 \sigma_2 ;
\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')
a^+_{\bbox{r}_1 \sigma_1 }a^+_{\bbox{r}_2 \sigma_2 }
a_{\bbox{r}_2'\sigma_2'} a_{\bbox{r}_1'\sigma_1'}
\nonumber .
\end{eqnarray}
The first term represents the kinetic energy, while the second
one is the two-body interaction. In the following, we assume that
$V(\bbox{r}_1\sigma_1,\bbox{r}_2 \sigma_2;
\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')$ includes the
exchange terms.
The average energy of the Hamiltonian (\ref{eq135}) in a time-even
HFB vacuum (\ref{eq105}) reads
\begin{eqnarray}
E_{\text{HFB}} &=& \int\text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}
T(\bbox{r}\sigma,\bbox{r}'\sigma')
\rho(\bbox{r}'\sigma',\bbox{r}\sigma)
\label{eq136} \\
&+& \frac{1}{2}
\int\text{d}^3\bbox{r}_1 \text{d}^3\bbox{r}_2
\text{d}^3\bbox{r}_1'\text{d}^3\bbox{r}_2'
\sum_{\sigma_1\sigma_2\sigma_1'\sigma_2'}
V(\bbox{r}_1 \sigma_1 ,\bbox{r}_2 \sigma_2 ;
\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')
\rho(\bbox{r}_1'\sigma_1',\bbox{r}_1\sigma_1)
\rho(\bbox{r}_2'\sigma_2',\bbox{r}_2\sigma_2)
\nonumber \\
&-& \frac{1}{4}
\int\text{d}^3\bbox{r}_1 \text{d}^3\bbox{r}_2
\text{d}^3\bbox{r}_1'\text{d}^3\bbox{r}_2'
\sum_{\sigma_1\sigma_2\sigma_1'\sigma_2'}
4\sigma_1\sigma_2'
V(\bbox{r}_1,-\sigma_1 ,\bbox{r}_2 \sigma_2 ;
\bbox{r}_1'\sigma_1',\bbox{r}_2',-\sigma_2')
\tilde\rho(\bbox{r}_1 \sigma_1 ,\bbox{r}_2 \sigma_2 )
\tilde\rho(\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')
\nonumber .
\end{eqnarray}
The last two terms are the interaction energies in the
particle-hole (p-h) and in the particle-particle (p-p) channels,
respectively. Equivalently, one can define the p-h and p-p
single-particle Hamiltonians,
$h(\bbox{r}\sigma,\bbox{r}'\sigma')$ =
$T(\bbox{r}\sigma,\bbox{r}'\sigma')$ +
$\Gamma(\bbox{r}\sigma,\bbox{r}'\sigma')$ and $\tilde
h(\bbox{r}\sigma,\bbox{r}'\sigma')$, respectively:
\begin{mathletters}\label{eq141}\begin{eqnarray}
\Gamma(\bbox{r}\sigma,\bbox{r}'\sigma') &=&
\int\text{d}^3\bbox{r}_2
\text{d}^3\bbox{r}_2'
\sum_{\sigma_2\sigma_2'}
V(\bbox{r} \sigma ,\bbox{r}_2 \sigma_2 ;
\bbox{r}'\sigma',\bbox{r}_2'\sigma_2')
\rho(\bbox{r}_2'\sigma_2',\bbox{r}_2\sigma_2)
, \label{eq141a} \\
\tilde h(\bbox{r}\sigma,\bbox{r}'\sigma') &=&
\int\text{d}^3\bbox{r}_1'\text{d}^3\bbox{r}_2'
\sum_{\sigma_1'\sigma_2'}
2\sigma'\sigma_2'
V(\bbox{r}\sigma ,\bbox{r}',-\sigma' ;
\bbox{r}_1'\sigma_1',\bbox{r}_2',-\sigma_2')
\tilde\rho(\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')
, \label{eq141b}
\end{eqnarray}\end{mathletters}%
which gives the HFB energy in the form:
\begin{equation}\label{eq142}
E_{\text{HFB}} = \frac{1}{2}
\int\text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}\left(
T(\bbox{r}\sigma,\bbox{r}'\sigma')
\rho(\bbox{r}'\sigma',\bbox{r}\sigma)
+ h(\bbox{r}\sigma,\bbox{r}'\sigma')
\rho(\bbox{r}'\sigma',\bbox{r}\sigma)
+ \tilde h(\bbox{r}\sigma,\bbox{r}'\sigma')
\tilde\rho(\bbox{r}'\sigma',\bbox{r}\sigma)\right) .
\end{equation}
\narrowtext\noindent%
Additional terms coming from the density dependence of the two-body
interaction $V$ have been omitted for simplicity in
Eqs.{\ }(\ref{eq141a}), (\ref{eq141b}), and (\ref{eq142}).
The last term in Eq.{\ }(\ref{eq142}),
\begin{equation}\label{epair}
E_{\text{pair}} = \frac{1}{2}
\int\text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}
\tilde h(\bbox{r}\sigma,\bbox{r}'\sigma')
\tilde\rho(\bbox{r}'\sigma',\bbox{r}\sigma),
\end{equation}
represents the pairing energy. We also define the average
magnitude of pairing correlations by the formula \cite{[Dob84]}
\begin{equation}\label{eq157}
\langle\Delta\rangle = - \frac{1}{N^\tau}
\int\text{d}^3\bbox{r}\text{d}^3\bbox{r}'
\sum_{\sigma\sigma'}
\tilde h(\bbox{r}\sigma,\bbox{r}'\sigma')
\rho(\bbox{r}'\sigma',\bbox{r}\sigma) ,
\end{equation}
where $N^\tau$ is the number of particles (neutrons or protons).
The p-h and p-p mean fields (\ref{eq141}) have particularly
simple forms for the Skyrme interaction \cite{[Dob84]}. In
Appendix \ref{appA} we present the form of the p-h and p-p
mean-field Hamiltonians in the case of a local two-body
finite-range Gogny interaction.
\subsubsection{Examples of the p-h and p-p potentials}
\label{sec2baa}
In this section we aim at comparing the self-consistent
potentials obtained with the Skyrme and Gogny forces. Such a
comparison cannot be carried out directly, because the
corresponding integral kernels
$h(\bbox{r}\sigma,\bbox{r}'\sigma')$ and $\tilde
h(\bbox{r}\sigma,\bbox{r}'\sigma')$ have different structure.
For the Skyrme interaction, they are proportional to
$\delta(\bbox{r}$$-$$\bbox{r}')$ and depend also on the
differential operators (linear momenta) \cite{[Dob84]}, while
for the Gogny interaction they are sums of terms proportional to
$\delta(\bbox{r}$$-$$\bbox{r}')$ and terms which are functions
of $\bbox{r}$ and $\bbox{r}'$ (Appendix \ref{appA}).
Therefore, for the purpose of the present comparison we
introduce operational prescriptions to calculate the local parts
of the integral kernels:
\begin{mathletters}\label{local}\begin{eqnarray}
U(\bbox{r}) &=& {\cal LOC}\left[
\Gamma(\bbox{r}\sigma,\bbox{r'}\sigma')\right]
, \label{locala} \\
\tilde{U}(\bbox{r}) &=& {\cal LOC}\left[
\tilde{h}(\bbox{r}\sigma,\bbox{r'}\sigma')\right]
. \label{localb}
\end{eqnarray}\end{mathletters}%
These formal definitions in practice amount to: (i) disregarding
the momentum-dependent terms of the kernels, (ii) considering
only terms with $\sigma$=$\sigma'$=1/2 (which by time reversal
symmetry are equal to those with $\sigma$=$\sigma'$=$-$1/2), and
(iii) taking into account {\em only} the term proportional to
$\delta(\bbox{r}$$-$$\bbox{r}')$, if such a term is present.
The expressions for ${U}(\bbox{r})$ and $\tilde{U}(\bbox{r})$
can be found in Appendix A of Ref.{\ }\cite{[Dob84]} (Skyrme
interaction) and in Appendix A (Gogny interaction). In the
Skyrme calculations, the contribution of the Coulomb interaction
to $\tilde{U}(\bbox{r})$ has been neglected since it is
estimated to be small.
In the case of finite-range local interactions (such as Gogny or
Coulomb), the corresponding non-local pairing field
$\tilde{h}(\bbox{r}\sigma,\bbox{r'}\sigma')$ does not contain
the term proportional to $\delta(\bbox{r}$$-$$\bbox{r}')$ (see
Appendix{\ }\ref{appA}). Consequently, the local field
$\tilde{U}(\bbox{r})$ cannot be extracted in a meaningful way.
For instance, the diagonal (i.e., $\bbox{r'}$=$\bbox{r}$) part
of the D1S pairing field is positive; i.e., it is dominated
by the short-range repulsive component rather than the
long-range attractive part \cite{[Kuc91],[Rum94]}.
In the spherical case, the potentials ${U}(\bbox{r})$ and
$\tilde{U}(\bbox{r})$ depend on only one radial coordinate
$r$=$|\bbox{r}|$. This facilitates the qualitative comparison
between different forces. Figure{\ }\ref{FIG01a} displays the
self-consistent spherical local p-h potentials ${U}(r)$,
Eq.{\ }(\ref{local}), for several tin isotopes, calculated
with SkP, SIII$^\delta$, and D1S interactions (the results with
SkP$^\delta$ are very close to those with SkP). The terms
depending on the angular momentum, which result from a reduction
to the radial coordinate, are not included. (The general
behavior of the self-consistent p-h potentials has already been
discussed many times in the literature,
e.g.{\ }\cite{[Fuk93],[Dob94],[Deb94a]}, and we include these
results only for completeness and for a comparison with the
corresponding p-p potentials, for which the detailed analysis
does not exist.)
Qualitatively, the results for ${U}(r)$ obtained with different
effective forces are quite similar, which reflects the fact that
all these interactions correctly describe global nuclear
properties. In particular, one sees that with increasing neutron
excess the neutron potentials become shallower in the
interior and wider in the outer region. Interestingly, for
each of these three forces there exists a pivoting point at
which the potential does not depend on the neutron excess. For
the three forces presented, this occurs at $r$=5.9, 4.6, and
5.4\,fm, respectively. The differences in the overall depths of
the average potentials reflect the associated effective masses
(i.e., the non-local contributions of the two-body
interactions).
The analogous results for the p-p potentials $\tilde{U}(r)$
calculated for the SkP and SkP$^\delta$ interactions are shown
in Fig.{\ }\ref{FIG01b}. One can see that the different
character of pairing interactions is directly reflected in the
form of the p-p potentials. Particularly noteworthy is the
fact that the density-dependent pairing interaction in SkP
yields a very pronounced surface-peaked potential (the behavior
of $\tilde{U}(r)$ at large distances is further discussed in
Sec.{\ }\ref{sec3e}). One can easily understand its form by
recalling that this potential is equal to the product of the
pairing density $\tilde\rho(r)$ [Fig.{\ }\ref{FIG17}] and the
function which roughly resembles the behavior of DDDI of
Eq.{\ }(\ref{DDDI}); i.e., small in the interior and large in
the outer region. Of course, values of $\tilde\rho(r)$ and
$\tilde{U}(r)$ depend on each other by the fact that they both
result from a self-consistent solution of the complete HFB
equation in which the p-h and p-p channels are coupled together
(see Sec.{\ }\ref{sec2bc}). Similar results were also
obtained in Refs.{\ }\cite{[Sta91]} (in the HFB+SkP model) and
\cite{[Sta92]} (in the QLM) for the proton-rich rare-earth
nuclei.
Since the p-h channel provides the bulk part of the interaction
energy, the particle densities $\rho(r)$ closely follow the
pattern of the p-h potentials (i.e., the density is large where
the potential is deep). An analogous relation is only partly
true for $\tilde\rho(r)$ and $\tilde{U}(r)$; i.e., even the
dramatic surface character of the SkP p-p potential
(Fig.{\ }\ref{FIG01b}) does not result in the pairing density
being similarly peaked at the surface. Recall that the
contributions to $\tilde\rho(r)$ come mainly from a few wave
functions near the Fermi surface, and that the form of these
wave functions is mainly governed by the p-h channel. Since
these wave functions must have significant components in the
interior, the resulting pairing densities cannot exactly fit
into the surface-peaked p-p potentials. Nevertheless, a clear
tendency towards surface localization is evident in
Fig.{\ }\ref{FIG17}.
In the case of the pure contact interaction (SkP$^\delta$
calculations) the p-p potential is exactly proportional to the
pairing density \cite{[Dob84]} with the proportionality constant
$V_0/2$ equal to $-$80\,MeV\,fm$^3$ \cite{[Dob95c]}. Therefore,
the resulting potential is concentrated at the origin and
increases towards the surface. (Early calculations of p-p
potentials in the QLM with the density-independent delta
interaction can be found in Ref.{\ }\cite{[Smi88]}. The
general behavior of $\tilde{U}(r)$, denoted as $\Delta(r)$
therein, is very similar to our SkP$^\delta$ results.)
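For a pure contact pairing force, the relation quoted above is a simple pointwise proportionality between the p-p potential and the pairing density. A minimal numerical sketch (Python with NumPy; the pairing density below is a toy profile, only the constant $V_0/2$ is the value quoted in the text):

```python
import numpy as np

# Radial mesh (fm); the pairing density is a purely illustrative toy profile.
dr = 0.25
r = np.arange(dr, 20.0, dr)

V0_half = -80.0                                      # V_0/2 in MeV fm^3 (quoted value)
rho_t = 0.02 * np.exp(-0.5 * ((r - 5.0) / 2.0)**2)   # toy pairing density, fm^-3

# For a contact force the p-p potential is exactly proportional to rho~(r).
U_t = V0_half * rho_t
```

Since $\rhõ(r)>0$, the resulting $\tilde{U}(r)$ is everywhere attractive and follows the shape of the pairing density point by point.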
\subsection{HFB equations in the coordinate representation}
\label{sec2bap}
The variation
of the HFB energy with respect to independent
parameters $Z(\bbox{r}\sigma,\bbox{r}'\sigma')$ leads to
the HFB equation \cite{[RS80],[Dob84]},
\widetext
\begin{equation}\label{eq143}
\int\text{d}^3\bbox{r}'
\sum_{\sigma'}\left(\begin{array}{cc}
h(\bbox{r}\sigma,\bbox{r}'\sigma') &
\tilde h(\bbox{r}\sigma,\bbox{r}'\sigma') \\
\tilde h(\bbox{r}\sigma,\bbox{r}'\sigma') &
- h(\bbox{r}\sigma,\bbox{r}'\sigma') \end{array}\right)
\left(\begin{array}{c}
\phi_1 (E,\bbox{r}'\sigma') \\
\phi_2 (E,\bbox{r}'\sigma')
\end{array}\right) =
\left(\begin{array}{cc}
E+\lambda & 0 \\
0 & E-\lambda \end{array}\right)
\left(\begin{array}{c}
\phi_1 (E,\bbox{r}\sigma) \\
\phi_2 (E,\bbox{r}\sigma)
\end{array}\right),
\end{equation}
where $\phi_1(E,\bbox{r}\sigma)$ and $\phi_2(E,\bbox{r}\sigma)$
are upper and lower components of the two-component
single-quasiparticle HFB wave function, and $\lambda$ is the
Fermi energy.
Properties of the HFB equation in the spatial coordinates,
Eq.{\ }(\ref{eq143}), have been discussed in
Ref.{\ }\cite{[Dob84]}. In particular, it has been shown that
the spectrum of eigenenergies $E$ is continuous for
$|E|$$>$$-\lambda$ and discrete for $|E|$$<$$-\lambda$. Since
for $E$$>$0 and $\lambda$$<$0 the lower components
$\phi_2(E,\bbox{r}\sigma)$ are localized functions of
$\bbox{r}$, the density matrices,
\begin{mathletters}\label{eq144}\begin{eqnarray}
\rho(\bbox{r}\sigma,\bbox{r}'\sigma') &=&
\sum_{0<E_n<-\lambda} \phi_2 (E_n,\bbox{r} \sigma )
\phi^*_2(E_n,\bbox{r}'\sigma')
+ \int_{-\lambda}^\infty \text{d}n(E)
\phi_2 (E ,\bbox{r} \sigma )
\phi^*_2(E ,\bbox{r}'\sigma')
, \label{eq144a} \\
\tilde\rho(\bbox{r}\sigma,\bbox{r}'\sigma') &=&
- \sum_{0<E_n<-\lambda} \phi_2 (E_n,\bbox{r} \sigma )
\phi^*_1(E_n,\bbox{r}'\sigma')
- \int_{-\lambda}^\infty \text{d}n(E)
\phi_2 (E ,\bbox{r} \sigma )
\phi^*_1(E ,\bbox{r}'\sigma')
, \label{eq144b}
\end{eqnarray}\end{mathletters}%
are always localized.
{}For the case of a discretized continuum,
Sec.{\ }\ref{sec3a}, the integral over the energy reduces to a
discrete sum \cite{[Dob84]} but one should still carefully
distinguish between contributions coming from the discrete
($E_n$$<$$-\lambda$) and discretized ($E_n$$>$$-\lambda$)
states. The orthogonality relation for the single-quasiparticle
HFB wave functions reads
\begin{equation}\label{orthog}
\int\text{d}^3\bbox{r}
\sum_{\sigma} \left[ \phi^*_1(E_n,\bbox{r} \sigma )
\phi_1 (E_{n'},\bbox{r}\sigma)
+ \phi^*_2(E_n,\bbox{r} \sigma )
\phi_2 (E_{n'},\bbox{r}\sigma)
\right] = \delta_{n,n'}.
\end{equation}
It is seen from Eq.{\ }(\ref{orthog}) that the lower components are
not normalized.
Their norms,
\begin{equation}\label{norms}
N_n = \int\text{d}^3\bbox{r}\sum_{\sigma}
|\phi_2(E_n,\bbox{r} \sigma )|^2,
\end{equation}
define the total number of particles
\begin{equation}\label{Ntot}
N=\int\text{d}^3\bbox{r}\rho(\bbox{r}) = \sum_{n}N_n.
\end{equation}
In the HFB theory, the localization condition (\ref{eq106})
discussed in Sec.{\ }\ref{sec2a} is automatically guaranteed
for any system with negative Fermi energy $\lambda$. This allows
studying nuclei which are near the particle drip lines where the
Fermi energy approaches zero through negative values.
{}For the Skyrme interaction, the HFB equation (\ref{eq143}) is
a differential equation in spatial coordinates \cite{[Dob84]}.
If the spherical symmetry is imposed, which is assumed in the
following, this equation reads
\begin{equation}\label{eq152}
\left[-\frac{\text{d}}{\text{d}r}
\left(\begin{array}{cc} M & \tilde M \\
\tilde M & -M \end{array}\right)
\frac{\text{d}}{\text{d}r}+
\left(\begin{array}{cc} {U}-\lambda & \tilde{U} \\
\tilde{U} & -{U}+\lambda
\end{array}\right)\right]
\left(\begin{array}{c} r\phi_1(E,r) \\ r\phi_2(E,r) \end{array}\right) =
E\left(\begin{array}{c} r\phi_1(E,r) \\ r\phi_2(E,r) \end{array}\right) ,
\end{equation}
\narrowtext\noindent%
where $M$ and $\tilde M$ are p-h and p-p mass parameters,
respectively, and ${U}$ and $\tilde{U}$ are defined in
Sec.{\ }\ref{sec2ba}. Due to the spherical symmetry,
Eq.{\ }(\ref{eq152}) is solved separately for each partial
wave ($j,\ell$). The potentials include also the centrifugal
and spin-orbit terms, and the p-h mass parameter $M$ is
expressed in terms of the effective mass $m^*$; i.e.,
$M$=$\hbar^2/2m^*$, see Ref.{\ }\cite{[Dob84]} for details.
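With constant mass parameters, Eq.{\ }(\ref{eq152}) is a two-component eigenproblem that can be discretized by finite differences on the radial mesh. The sketch below (Python with NumPy; the well and pairing-field parameters are toy stand-ins, not the SkP fields) builds the block matrix and checks a characteristic structural property: if $(\phi_1,\phi_2)$ is an eigenvector with eigenvalue $E$, then $(-\phi_2,\phi_1)$ is one with $-E$, so the quasiparticle spectrum is symmetric about zero.

```python
import numpy as np

# Mesh and toy parameters (MeV, fm); purely illustrative, not SkP values.
dr, Rbox = 0.25, 20.0
r = np.arange(dr, Rbox, dr)
n = r.size
M, Mt = 20.7, 0.5                              # p-h and p-p mass parameters (MeV fm^2)
lam = -8.0                                     # Fermi energy (MeV)
U  = -45.0 * (r < 7.0)                         # square-well stand-in for U(r)
Ut = -2.0 * np.exp(-0.5 * ((r - 6.0) / 1.5)**2)  # surface-peaked stand-in for U~(r)

def kinetic(mass):
    """-d/dr mass d/dr with box boundary conditions (second-order differences)."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) * mass / dr**2

h  = kinetic(M)  + np.diag(U - lam)            # p-h block, Fermi energy subtracted
ht = kinetic(Mt) + np.diag(Ut)                 # p-p block

# HFB block matrix of Eq. (eq152): [[h - lam, h~], [h~, -(h - lam)]]
H = np.block([[h, ht], [ht, -h]])
E = np.linalg.eigvalsh(H)                      # quasiparticle spectrum
```

The symmetry check below holds exactly for any symmetric blocks $h$ and $\tilde h$, so it is a useful sanity test of the discretization.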
Before discussing the properties of the HFB wave functions, we
analyze the structure of the spherical HFB Hamiltonian of
Eq.{\ }(\ref{eq152}). Figure {\ }\ref{FIG20} shows the
behavior of $M(r)$ and $\tilde M(r)$, and ${U}(r)$ and
$\tilde{U}(r)$ (central parts only) obtained for neutrons in
$^{120}$Sn in the HFB+SkP model. The p-h functions, $M(r)$ and
${U}(r)$, are similar to those obtained in other mean-field
theories. $M(r)$ has values close to
$\hbar^2/2m$$\simeq$20\,MeV\,fm$^2$, which corresponds to the
value of the free nucleon mass $m$. In the nuclear interior,
this function has slightly smaller values, because the effective
mass $m^*$ is here slightly larger than $m$. This effect is due
to the non-zero isovector effective mass of the Skyrme SkP
interaction; recall that for this interaction the nuclear-matter
value of the isoscalar effective mass is $m^*$=$m$. The central
potential ${U}(r)$ has the standard depth of about 40\,MeV and
disappears around $r$=7.5\,fm.
The form of the p-p functions, $\tilde M(r)$ and $\tilde{U}(r)$,
characterizes the pairing properties of the system. One may
note that both these functions are essentially peaked at the
nuclear surface. In $^{120}$Sn they also exhibit central bumps
resulting from the fact that in this nucleus the neutron 3$s_{1/2}$
orbital is located near the Fermi surface. Values of $\tilde
M(r)$ are (in the chosen units) an order of magnitude smaller
than those of $\tilde{U}(r)$. This should be compared with the
results obtained for the p-h channel, where the values of $M(r)$
are only about a factor of 2 smaller than those of ${U}(r)$. It
means that, for the SkP parametrization, the kinetic term in
the p-p channel (which simulates the finite-range effects) is
relatively less important than the kinetic energy term in the
p-h channel.
\subsection{Single-quasiparticle wave functions}
\label{sec2bc}
This section contains the discussion of HFB wave functions
$\phi_1(E,r)$ and $\phi_2(E,r)$ (Sec.{\ }\ref{sec2bca}),
canonical-basis wave functions $\breve\psi_\mu(r)$
(Sec.{\ }\ref{sec2bcb}), and HF+BCS wave functions
(Sec.{\ }\ref{sec2bcc}). In the following, the HFB equation
(\ref{eq152}) was solved in the spherical box of the radius
$R_{\text{box}}$=20\,fm for the $j$=1/2 and $\ell$=0 ($s_{1/2}$)
neutron states; i.e., for vanishing centrifugal, Coulomb, and
spin-orbit potentials. The calculations were performed for
$^{120}$Sn.
\subsubsection{Examples of the single-quasiparticle wave functions}
\label{sec2bca}
The neutron single-quasiparticle wave functions are presented in
Fig.{\ }\ref{FIG21}. The upper components $r\phi_1(E_n,r)$,
and the lower components $r\phi_2(E_n,r)$, are plotted in the
left and right columns, respectively. Because a box of a finite
radius was used, the particle continuum is discretized. The
positive quasiparticle eigenenergies $E_n$ are in increasing
order numbered by the index $n$, and their values are tabulated
in the left portion of Table \ref{TAB01}, together with the
norms of the lower components (\ref{norms}),
$N_n$=$4\pi\int{r^2}\text{d}r|\phi_2(E_n,r)|^2$. Since the
lower components define the particle density matrix
[Eq.{\ }(\ref{eq144a})] the numbers $(2j$+1)$N_n$ (i.e.,
2$N_n$ for the $j$=1/2 case considered) constitute contributions
of a given quasiparticle state to the total number of neutrons
(see Eq.{\ }(\ref{Ntot})).
Wave functions in Fig.{\ }\ref{FIG21}, and the entries in
Table \ref{TAB01}, have been ordered from the bottom to the top
not according to the excitation-energy index $n$, but rather
according to numbers of nodes of the {\em large} component.
(The large component is the lower component for hole states and
the upper component for particle states -- see
Fig.{\ }\ref{FIG21}.) The lower component of the $n$=8 state
is large, and it has zero nodes; therefore it is plotted at the
bottom of the figure. Next comes the $n$=5 state, whose lower
component has one node, and the $n$=1 state with two nodes.
Lower components of these three states are larger than their
upper components and they contribute almost 2 particles each to
the total number of neutrons. Consequently, these quasiparticle
states should be associated with the 1$s_{1/2}$, 2$s_{1/2}$, and
3$s_{1/2}$, single-hole states.
{}For all other calculated $s_{1/2}$ states the upper components
are larger than the lower ones, and these states contribute
small fractions to the particle number, see Table \ref{TAB01}.
Consequently, these quasiparticle states should be associated
with the $s_{1/2}$ single-particle states. The behavior of
these wave functions differs in the nuclear interior (i.e., for
$r<R$ where $R$$\sim$7{\ }fm is the nuclear radius) and
outside ($r>R$). Since the wavelength of the upper component is
roughly proportional to $1/\sqrt{E_n+\lambda-{U}(r)}$, the ratio
of the corresponding wavelengths behaves as
\begin{equation}\label{wave}
\frac{\lambda_{\text{out}}}{\lambda_{\text{in}}}
\approx \sqrt{1+\frac{|{U}(0)|}
{E_n+\lambda}},
\end{equation}
where ${U}(0)$ is the depth of the neutron potential well. For
the $s_{1/2}$ neutron states in $^{120}$Sn the excitation
energy, $E_n+\lambda$, can be found from Table{\ }\ref{TAB01}
($\lambda$=$-$7.94\,MeV), and ${U}(0)$$\sim$$-$45\,MeV (see
Fig.{\ }\ref{FIG20}).
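As a rough numerical check, Eq.{\ }(\ref{wave}) can be evaluated directly with the estimates quoted in the text (Python; only these two rough input numbers are taken from the discussion):

```python
import math

# Rough numbers quoted in the text for s_1/2 neutrons in 120Sn:
U0 = -45.0          # depth of the neutron potential well, MeV
excitation = 0.97   # E_n + lambda for the lowest discretized continuum state, MeV

# Eq. (wave): lambda_out / lambda_in = sqrt(1 + |U(0)| / (E_n + lambda))
ratio = math.sqrt(1.0 + abs(U0) / excitation)
# The ratio is close to 7: a state just above threshold oscillates
# much faster inside the well than outside.
```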
The upper component of the $n$=2 state has three nodes.
However, for $r>R$ the exterior part of the wave function
corresponds to a half-wave; i.e., it represents the
lowest-energy discretized continuum state. Since $E_n+\lambda$
is only 0.97\,MeV, the wavelength in the nuclear interior is
$\sim$6.5 times shorter than $\lambda_{\text{out}}$. The next
two wave functions have four and five nodes in their upper
components. Compared to the $n$=2 state, they exhibit shorter
wavelengths both outside and inside the nucleus (the
corresponding excitation energies are larger), and the ratio
${\lambda_{\text{out}}}/{\lambda_{\text{in}}}$ decreases
according to Eq.{\ }(\ref{wave}).
The quasiparticle states with $n$=2, 3, and 4 should be
associated with the 4$s_{1/2}$, 5$s_{1/2}$, and 6$s_{1/2}$
states in the particle continuum. Of course, the values of their
quasiparticle energies strongly depend on the size of the box,
because the wavelength of their exterior parts will increase
with increasing $R_{\text{box}}$ (it is roughly proportional to
$R_{\text{box}}$).
{}From the above discussion, one can see that the structure of
large components closely resembles that of the HF wave
functions. Moreover, the small components are very small
compared to the large ones; in order to plot both of them in the
same scale (Fig.{\ }\ref{FIG21}) they have to be multiplied by
factors from 10 to 25. Only the lowest quasiparticle state
($n$=1), which corresponds to the 3$s_{1/2}$ state near the
Fermi surface, has the two components of a similar magnitude.
It is to be noted, however, that the detailed structure of small
components is decisive for a description of the pairing
correlations. Indeed, both components are coupled in the HFB
equations by the pairing fields
$\tilde{h}(\bbox{r}\sigma,\bbox{r}'\sigma')$ or $\tilde{U}$.
In agreement with general asymptotic properties of the upper and
lower components \cite{[Bul80],[Dob84]}, one sees in
Fig.{\ }\ref{FIG21} that the lower components vanish at large
distances for all quasiparticle states, regardless of the
excitation energy. Consequently, the resulting density matrix
is localized. It is interesting to observe (Table \ref{TAB01})
that the norms of the lower components $N_n$ do not behave
monotonically with quasiparticle energy. Namely, $N_n$ is about
0.0002 for $n$=2; then it increases to 0.0019 at $n$=6, and only
then it decreases to about 0.0001 at $n$=11. This means that
the pairing correlations couple states with very high
quasiparticle excitations and short-wavelength upper components;
i.e., located high up in the particle continuum. In the
considered example, the pairing coupling to the continuum
states is exhausted only at energies as high as 50\,MeV.
Apart from the $n$=1 state which has the quasiparticle energy
$E$ smaller than $-\lambda$, for all other quasiparticle states
the upper components oscillate at large distances; i.e., these
states belong to the HFB continuum. This seems natural for the
4$s_{1/2}$, 5$s_{1/2}$, and 6$s_{1/2}$ states discussed above,
but it also holds for the deep-hole states 1$s_{1/2}$ and
2$s_{1/2}$. This illustrates the physical property of the
deep-hole states that once such a state is excited, it is
coupled to the particle continuum and acquires some particle
width. Of course, before such a hole is created (e.g.,
one-quasiparticle excitation in the neighboring nucleus) the
nucleus (i.e., quasiparticle vacuum) is perfectly particle-bound
and the contributions from the deep-hole-like quasiparticle
states to the density matrix are localized in space.
\subsubsection{Examples of the canonical-basis wave functions}
\label{sec2bcb}
By solving the integral eigenequation for the density matrix
(\ref{eq122a}), one obtains the canonical-basis wave functions
$\breve\psi_\mu(r)$. Actually, when the HFB equation
(\ref{eq152}) is solved by a discretization method on a spatial
mesh, as is done here, the density matrix is represented by a
matrix and the integral eigenequation becomes the usual matrix
eigenproblem. In the present application to $^{120}$Sn, the mesh
of equally spaced points with $\Delta r$=0.25\,fm was used and
then the canonical-basis wave functions were obtained on the
same mesh of points. These wave functions are plotted in
Fig.{\ }\ref{FIG22}, while other characteristics of the
canonical states are listed on the right-hand side portion of
Table \ref{TAB01}. Here the states are ordered from bottom to
top according to their occupation probabilities $v_\mu^2$.
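On a discretized mesh, the integral eigenequation for the density matrix becomes an ordinary symmetric matrix eigenproblem, as described above. A minimal sketch (Python with NumPy; the density matrix is built from two toy localized orbitals with assumed occupations, not from an actual HFB solution):

```python
import numpy as np

# Radial mesh (fm), matching the 0.25 fm spacing used in the text.
dr = 0.25
r = np.arange(dr, 20.0, dr)

# Model density matrix rho(r,r') from two toy orbitals with illustrative
# occupations 0.95 and 0.60 -- cf. the structure of Eq. (eq144a).
phi = np.array([np.exp(-0.5 * (r / 2.0)**2),
                (1.0 - (r / 2.5)**2) * np.exp(-0.5 * (r / 2.5)**2)])
phi /= np.sqrt((phi**2).sum(axis=1, keepdims=True) * dr)   # unit norm on mesh
rho = 0.95 * np.outer(phi[0], phi[0]) + 0.60 * np.outer(phi[1], phi[1])

# Eigenvalues of rho*dr are the occupations v_mu^2; eigenvectors are the
# canonical-basis wave functions on the same mesh.
occ, psi = np.linalg.eigh(rho * dr)
occ, psi = occ[::-1], psi[:, ::-1]      # sort by decreasing occupation
```

The trace of the occupation spectrum reproduces the total occupation put into the model density matrix, and all but two eigenvalues vanish because the toy matrix has rank two.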
When $\mu$ increases from 1 to 5, the number of nodes of the
canonical-basis wave functions increases from zero to four.
Therefore, these states represent the 1$s_{1/2}$ to 5$s_{1/2}$
single-particle states. The first three of them have large
occupation probabilities $v_\mu^2$, negative average values
$\epsilon_\mu$ of the p-h Hamiltonian, and positive pairing gaps
$\Delta_\mu$ [see Eq.{\ }(\ref{eq148})]. These states have all
the characteristics of bound single-particle states, and their
wave functions strongly resemble the large components of the
$n$=8, 5, and 1 quasiparticle states shown in
{}Fig.{\ }\ref{FIG21}. It is interesting to note that the two
states $\mu$=4 and 5 follow exactly the same pattern of
localized wave functions, despite the {\em positive} values of
$\epsilon_\mu$. Therefore, these two states can be understood
as the representatives of the positive-energy spectrum in the
ground-state of $^{120}$Sn. We purposely avoid using the term
``particle continuum'', because these orbitals represent
discrete and localized eigenstates of the density matrix.
Table \ref{TAB01} shows that the occupation probabilities of the
canonical-basis states with $\mu$=4,$\ldots$,7 decrease very
rapidly. In fact only states with $\mu$=4 and 5 have tangible
occupation probabilities; one can say that the remaining
orbitals are entirely empty. This feature has to be compared
with the sequence of norms of the lower HFB components, $N_n$,
which do not fall off to zero at anywhere near the same pace.
This demonstrates that even if the convergence of the HFB
eigenproblem requires high quasiparticle energies, the number of
physically important single-particle states is quite limited.
Unfortunately, as discussed below in Sec.{\ }\ref{sec2bb}, one
cannot obtain the canonical-basis states without actually
solving the HFB equations up to high energies. For $\mu$=6 and
higher, the occupation probabilities are so small that the
numerical procedure used to diagonalize the density matrix
returns accidental mixtures of almost degenerate eigenfunctions.
This is seen in Fig.{\ }\ref{FIG22}, where the wave function
with $\mu$=6 has six nodes instead of the five expected from
the regular sequence. Also, for these nearly empty states, the
energies $\epsilon_\mu$ are randomly scattered between 40 and 70\,MeV,
while the pairing gaps $\Delta_\mu$ are scattered around zero.
\subsubsection{Examples of the BCS quasiparticle wave functions}
\label{sec2bcc}
The BCS quasiparticle wave functions can be obtained by
enforcing the BCS approximation on the HFB equations. This is
done by setting the pairing Hamiltonian $\tilde h$ to a
constant; i.e., by using $\tilde M(r)$=0 and
$\tilde{U}(r)$=$-$1.232\,MeV. This value of $\tilde{{U}}$ is
equal to minus the HFB average neutron pairing gap, as defined
in Eq.{\ }(\ref{eq157}). As seen in Fig.{\ }\ref{FIG23},
the pattern of large components follows closely that obtained in
the HFB method, while the shapes of small components are
entirely different. Indeed, since in the BCS approximation lower
and upper components are simply proportional, small and large
components have the same asymptotic properties. This leads to
serious inconsistencies, because the small lower components {\em
are not} localized any more, and introduce an unphysical
particle gas in the density matrix, while the small upper
components {\em are} localized and the corresponding deep-hole
states have no particle width.
\subsection{HFB equations in the canonical basis}
\label{sec2bb}
It is seen in Eqs.{\ }(\ref{eq136}) and (\ref{eq141}) that
the two-body interaction enters the p-h and p-p channels in a
different way. This is particularly conspicuous when the
canonical basis (\ref{eq122}) is used; i.e.,
\begin{eqnarray}
E_{\text{HFB}} = \sum_\mu \breve T_{\mu\mu}v^2_\mu
&+&\frac{1}{2}\sum_{\mu\nu}
\breve F_{\mu\nu}v^2_\mu v^2_\nu \nonumber \\
&-&\frac{1}{4}\sum_{\mu\nu}
\breve G_{\mu\nu}u_\mu v_\mu u_\nu v_\nu , \label{eq137}
\end{eqnarray}
where
\begin{mathletters}\label{eq138}\begin{eqnarray}
\breve F_{\mu\nu} &=& \frac{1}{2}\left(
\breve V_{\mu\nu\mu\nu} + \breve V_{\mu\bar\nu\mu\bar\nu}\right)
, \label{eq138a} \\
\breve G_{\mu\nu} &=& -s^*_\mu s_\nu
\breve V_{\mu\bar\mu\nu\bar\nu}
. \label{eq138b}
\end{eqnarray}\end{mathletters}%
The two-body matrix elements in the canonical basis are defined
as usual:
\widetext
\begin{equation}\label{eq139}
\breve{V}_{\mu\nu\mu'\nu'} =
\int\text{d}^3\bbox{r}_1 \text{d}^3\bbox{r}_2
\text{d}^3\bbox{r}_1'\text{d}^3\bbox{r}_2'
\sum_{\sigma_1\sigma_2\sigma_1'\sigma_2'}
V(\bbox{r}_1 \sigma_1 ,\bbox{r}_2 \sigma_2 ;
\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')
\breve\psi^*_\mu (\bbox{r}_1 \sigma_1 )
\breve\psi^*_\nu (\bbox{r}_2 \sigma_2 )
\breve\psi_{\mu'}(\bbox{r}_1'\sigma_1')
\breve\psi_{\nu'}(\bbox{r}_2'\sigma_2').
\end{equation}
\narrowtext\noindent%
Since we include
in $V(\bbox{r}_1\sigma_1,\bbox{r}_2\sigma_2;
\bbox{r}_1'\sigma_1',\bbox{r}_2'\sigma_2')$ the exchange term,
the matrix $\breve{V}_{\mu\nu\mu'\nu'}$ is antisymmetric in $\mu\nu$ and
in $\mu'\nu'$. Due to the hermiticity and the time-reversal
symmetry of the interaction, matrices $\breve{F}_{\mu\nu}$ and
$\breve{G}_{\mu\nu}$ obey the following symmetry relations,
\begin{mathletters}\begin{eqnarray}
\breve F_{\mu\nu} = \breve F^*_{\mu\nu}
= \breve F_{\nu\mu}
&=& \breve F_{\mu\bar\nu}
= \breve F_{\bar\mu\nu}
, \label{eq140a} \\
\breve G_{\mu\nu} = \breve G^*_{\mu\nu}
= \breve G_{\nu\mu}
&=& \breve G_{\mu\bar\nu}
= \breve G_{\bar\mu\nu}
. \label{eq140b}
\end{eqnarray}\end{mathletters}%
The matrix $\breve{F}$ is defined by different matrix elements
of the interaction than the matrix $\breve{G}$. Namely, the
matrix element $\breve{F}_{\mu\nu}$ represents a ``diagonal''
scattering of pairs of states $\mu\nu$$\rightarrow$$\mu\nu$ (or
$\mu\bar\nu$$\rightarrow$$\mu\bar\nu$). This type of scattering
concerns {\em all} pairs of states. The resulting contributions
to the energy, Eq.{\ }(\ref{eq137}), involve the occupation
probabilities of the single-particle states constituting each
pair. On the other hand, the matrix elements
$\breve{G}_{\mu\nu}$ represent a ``non-diagonal'' scattering of
pairs of {\em time-reversed} states
$\nu\bar\nu$$\rightarrow$$\mu\bar\mu$. This scattering concerns
only a very special subset of all pairs.
In principle, an effective interaction should describe both
channels of interaction at the same time. This is, for example,
the case for the Gogny interaction \cite{[Gog73]} and for the
Skyrme SkP interaction \cite{[Dob84]}. However, the fact that
both channels of interaction play a different role in the HFB
theory allows the use of different forms of interaction to model
the p-h and p-p channels. Such an approach is additionally
motivated by the fact that the interaction in the p-h channel,
which defines, e.g., the saturation properties, is much better
known than the p-p interaction. Moreover, the p-h channel
provides a two-orders-of-magnitude larger interaction energy.
Since the canonical-basis wave functions
$\breve\psi_\mu(\bbox{r}\sigma)$ are all localized, it is
instructive to consider the HFB equations in this particular
basis. They read:
\begin{mathletters}\label{eq145}\begin{eqnarray}
(\breve h - \lambda)_{\mu\nu}\eta_{\mu\nu}
+ \breve{\tilde h}_{\mu\nu} \xi_{\mu\nu} &=& 0
, \label{eq145a} \\
(\breve h - \lambda)_{\mu\nu} \xi_{\mu\nu}
- \breve{\tilde h}_{\mu\nu}\eta_{\mu\nu} &=& \breve E_{\mu\nu}
, \label{eq145b}
\end{eqnarray}\end{mathletters}%
where
\begin{mathletters}\begin{eqnarray}
\eta_{\mu\nu} &:=& u_\mu v_\nu + u_\nu v_\mu
, \label{eq146a} \\
\xi_{\mu\nu} &:=& u_\mu u_\nu - v_\nu v_\mu
. \label{eq146b}
\end{eqnarray}\end{mathletters}%
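The combinations (\ref{eq146a}) and (\ref{eq146b}) satisfy $\eta_{\mu\nu}^2+\xi_{\mu\nu}^2=1$ as a direct consequence of $u_\mu^2+v_\mu^2=1$, since $\eta^2+\xi^2$ factorizes as $(u_\mu^2+v_\mu^2)(u_\nu^2+v_\nu^2)$. A quick numerical check of this identity (Python with NumPy, random occupation amplitudes):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(-1.0, 1.0, size=6)   # random occupation amplitudes v_mu
u = np.sqrt(1.0 - v**2)              # enforce u_mu^2 + v_mu^2 = 1

eta = u[:, None] * v[None, :] + u[None, :] * v[:, None]   # Eq. (eq146a)
xi  = u[:, None] * u[None, :] - v[:, None] * v[None, :]   # Eq. (eq146b)
# eta^2 + xi^2 = (u_mu^2 + v_mu^2)(u_nu^2 + v_nu^2) = 1 for every pair mu,nu
```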
Equation (\ref{eq145a}) is equivalent to the variational
condition that the HFB energy is minimized, while
Eq.{\ }(\ref{eq145b}) defines the energy matrix
$\breve{E}_{\mu\nu}$. (The matrix $\breve{E}_{\mu\nu}$
represents the HFB Hamiltonian in the canonical basis.) Since
for every pair of indices $\mu\nu$ it holds that
$(\eta_{\mu\nu})^2$+$(\xi_{\mu\nu})^2$=1,
Eqs.{\ }(\ref{eq145}) can be written as
\begin{mathletters}\begin{eqnarray}
(\breve h - \lambda)_{\mu\nu} &=& \breve E_{\mu\nu} \xi_{\mu\nu}
, \label{eq147a} \\
- \breve{\tilde h}_{\mu\nu} &=& \breve E_{\mu\nu}\eta_{\mu\nu}
. \label{eq147b}
\end{eqnarray}\end{mathletters}%
The occupation probabilities $v_\mu$ are solely determined by
the diagonal matrix elements of the p-h and p-p Hamiltonians,
\begin{mathletters}\label{eq148}\begin{eqnarray}
\epsilon_\mu &:=& \breve h_{\mu\mu}
, \label{eq148a} \\
\Delta_\mu &:=& - \breve{\tilde h}_{\mu\mu}
, \label{eq148b}
\end{eqnarray}\end{mathletters}%
and the result is
\begin{mathletters}\label{eq149}\begin{eqnarray}
v_\mu =& \text{sign}(\Delta_\mu)
&\sqrt{\frac{1}{2}-\frac{\epsilon_\mu-\lambda}{2E_\mu}}
, \label{eq149a} \\
u_\mu =&
&\sqrt{\frac{1}{2}+\frac{\epsilon_\mu-\lambda}{2E_\mu}}
, \label{eq149b}
\end{eqnarray}\end{mathletters}%
where $E_\mu$ are the diagonal matrix elements of the matrix
$\breve{E}_{\mu\nu}$:
\begin{equation}\label{eq150}
E_\mu := \breve E_{\mu\mu} =
\sqrt{(\epsilon_\mu-\lambda)^2 + \Delta_\mu^2} .
\end{equation}
In this representation, the average pairing gap (\ref{eq157})
is given by the average value of $\Delta_\mu$ in the occupied states,
\begin{equation}\label{eq158}
\langle\Delta\rangle = \frac{\sum_\mu\Delta_\mu v^2_\mu}
{\sum_\mu v^2_\mu}
= \frac{1}{N^\tau}\sum_\mu\Delta_\mu v^2_\mu .
\end{equation}
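The BCS-like relations (\ref{eq149})--(\ref{eq150}) and the average
gap (\ref{eq158}) are straightforward to evaluate. A minimal
numerical sketch follows; the canonical matrix elements
$\epsilon_\mu$ and $\Delta_\mu$ used below are illustrative values,
not results of an actual HFB calculation:

```python
import math

def occupations(eps, delta, lam):
    """BCS-like amplitudes, Eqs. (149)-(150):
    E = sqrt((eps-lam)^2 + delta^2),
    v = sign(delta)*sqrt(1/2 - (eps-lam)/(2E)),
    u = sqrt(1/2 + (eps-lam)/(2E))."""
    E = math.sqrt((eps - lam) ** 2 + delta ** 2)
    v = math.copysign(math.sqrt(0.5 - (eps - lam) / (2.0 * E)), delta)
    u = math.sqrt(0.5 + (eps - lam) / (2.0 * E))
    return v, u, E

# illustrative canonical energies eps_mu and gaps Delta_mu (MeV)
lam = -8.0
levels = [(-12.0, 1.0), (-9.0, 1.2), (-6.5, 1.1), (-3.0, 0.8)]
amps = [occupations(eps, d, lam) for eps, d in levels]

# average pairing gap, Eq. (158): occupation-weighted mean of Delta_mu
v2 = [v * v for v, _, _ in amps]
gap = sum(d * w for (_, d), w in zip(levels, v2)) / sum(v2)
```

Note that $\epsilon_\mu$ and $\Delta_\mu$ here are diagonal
canonical-basis matrix elements, not the HF eigenvalues or
state-dependent pairing gaps of the BCS theory.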
Equations (\ref{eq149}) and (\ref{eq150}) misleadingly resemble
those of the simple BCS theory \cite{[RS80]}. However, in the
HFB theory, $\epsilon_\mu$ is not the single-particle energy
(i.e., the eigenvalue of $h$) but the diagonal matrix element of
$h$ in the canonical basis. Similarly, $\Delta_\mu$ does not
represent the pairing gap in the state $\breve\psi_\mu$, and
$E_\mu$ is not the quasiparticle energy $E$. However, since
these quantities define the occupation probabilities, they play
a very important role in an interpretation of the HFB results,
and many intuitive, quantitative, and useful features of the
BCS theory can be reinterpreted in terms of the canonical
representation (cf.{\ }Sec.{\ }\ref{sec3f}).
In particular, the average values of single-particle p-h and
p-p Hamiltonians fulfill the following self-consistency
equations:
\begin{mathletters}\label{eq151}\begin{eqnarray}
\epsilon_\mu = T_{\mu\mu} +&& \frac{1}{2}\sum_\nu \breve F_{\mu\nu}
\left(1-\frac{\epsilon_\nu-\lambda}{E_\nu}\right)
, \label{eq151a} \\
\Delta_\mu = &&\frac{1}{4}\sum_\nu \breve G_{\mu\nu}
\frac{\Delta_\nu}{E_\nu}
. \label{eq151b}
\end{eqnarray}\end{mathletters}%
{}For a given interaction $\breve{F}_{\mu\nu}$ and
$\breve{G}_{\mu\nu}$, Eqs.{\ }(\ref{eq151}) represent a set of
nonlinear equations which determine $\epsilon_\mu$ and
$\Delta_\mu$. Equations for $\epsilon_\mu$ (\ref{eq151a}) and
for $\Delta_\mu$ (\ref{eq151b}) are coupled by the values of
$E_\nu$ (\ref{eq150}), which depend on both $\epsilon_\mu$ and
$\Delta_\mu$. However, it is clear that the interaction in the
p-h channel mainly influences the values of $\epsilon_\mu$,
while that in the p-p channel mainly influences $\Delta_\mu$.
Unfortunately, Eqs.{\ }(\ref{eq151}) cannot replace the
original HFB equations, because they require the knowledge of
the canonical basis to determine the $\breve{F}_{\mu\nu}$ and
$\breve{G}_{\mu\nu}$ matrices. The only way to determine the
canonical basis is to solve the original HFB equation
(\ref{eq143}), and then to diagonalize the density matrix
(\ref{eq144a}). Moreover, solving Eqs.{\ }(\ref{eq151})
ensures only that the $\mu$=$\nu$ subset of the variational
equations (\ref{eq145a}) is satisfied; the minimum of the
energy is obtained by solving the whole set (i.e., for all
indices $\mu$ and $\nu$).
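As an illustration of the structure of Eq.{\ }(\ref{eq151b}),
consider a toy gap equation with a constant pairing matrix element
$\breve{G}_{\mu\nu}$=$G$ and fixed $\epsilon_\mu$ and $\lambda$ (a
state-independent $G$ is an assumption made here for illustration
only; in the HFB theory the matrices depend on the canonical-basis
wave functions). The resulting state-independent gap can be found by
fixed-point iteration:

```python
import math

# toy model: equidistant levels, constant pairing matrix element G
eps = [-10.0 + 2.0 * k for k in range(10)]        # MeV
lam, G = -1.0, 2.0                                # MeV

delta = 1.0                                       # starting guess (MeV)
for _ in range(200):                              # iterate Eq. (151b)
    delta = 0.25 * G * sum(
        delta / math.sqrt((e - lam) ** 2 + delta ** 2) for e in eps)
```

With a constant $G$ the gap equation decouples from
Eq.{\ }(\ref{eq151a}); in the full problem the two sets remain
coupled through the values of $E_\nu$.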
The diagonalization of the energy matrix $\breve{E}_{\mu\nu}$
gives the spectrum of HFB eigenenergies, $E_n$:
\begin{equation}\label{Ediag}
\sum_{\nu}\breve{E}_{\mu\nu}{\cal{U}}_{n\nu}=E_n {\cal{U}}_{n\mu}.
\end{equation}
The matrix ${\cal{U}}_{n\mu}$ represents the unitary
transformation from the canonical to the quasiparticle basis
\cite{[RS80]}. Its matrix elements provide the link between the
quasiparticle energies $E_n$ and the diagonal matrix elements
$E_\mu$ which define the occupation probabilities, i.e.,
\begin{equation}\label{Esumrule}
E_\mu=\sum_n E_n |{\cal{U}}_{n\mu}|^2.
\end{equation}
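The sum rule (\ref{Esumrule}) is simply the statement that a diagonal
matrix element equals the eigenvalue-weighted average over the
eigenvector components. A quick numerical check, with a random
symmetric matrix standing in for $\breve{E}_{\mu\nu}$ (an
illustrative stand-in, not an actual HFB energy matrix), reads:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Emat = rng.standard_normal((n, n))
Emat = 0.5 * (Emat + Emat.T)     # symmetric stand-in for E_mu_nu

E_n, U = np.linalg.eigh(Emat)    # Eq. (Ediag); columns of U are eigenvectors
E_mu = (U ** 2) @ E_n            # Eq. (Esumrule): sum_n E_n |U_{n mu}|^2
```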
\section{Coupling to the positive-energy states}
\label{sec3}
{}For weakly bound nuclei one may expect that the particle
continuum influences the ground-state properties in a
significant way. As discussed in Sec.{\ }\ref{sec2bcb}, the
phase space corresponding to positive single-particle energies
should not be confused with the continuum of scattering states
which asymptotically behave as plane waves, and are significant
for genuine scattering phenomena.
\subsection{Boundary conditions}
\label{sec3a}
Properties of the continuum scattering states are intuitively
well understood in terms of unpaired single-particle orbits.
Shown in Fig.{\ }\ref{FIG12} are the self-consistent HF+SkP
neutron single-particle energies in $^{150}$Sn,
$\epsilon^{\text{HF}}_{nlj}$, as functions of the radius
$R_{\text{box}}$ of the spherical box in which the HF equations
are solved. It is assumed that the following boundary condition
holds for all single-particle wave functions:
\begin{equation}\label{eq153}
\psi_\mu(R_{\text{box}}) = 0.
\end{equation}
{}For bound single-particle states,
$\epsilon^{\text{HF}}_{nlj}$$<$0, the effect of increasing
$R_{\text{box}}$ beyond 10\,fm is insignificant. As seen in
Fig.{\ }\ref{FIG12}, the energies of the least bound 3$p$,
2$f$, 1$h_{9/2}$, and 1$i_{13/2}$ states, which form the
82$\leq$$N$$\leq$126 shell, are independent of $R_{\text{box}}$.
The boundary condition (\ref{eq153}) leads to a discretization
of the continuum by selecting only those states which have a
node at $r$=$R_{\text{box}}$. When $R_{\text{box}}$ increases,
the density of the low-energy continuum states increases as
$R_{\text{box}}^3$. This effect is clearly visible in
Fig.{\ }\ref{FIG12}. Among those states whose energies
decrease with $R_{\text{box}}$, one may easily distinguish some
quasi-bound states, which have energies fairly independent of
$R_{\text{box}}$. In Fig.{\ }\ref{FIG12} these are the
high-$\ell$ states $i_{11/2}$, $j_{13/2}$, $j_{15/2}$, and
$k_{15/2}$. However, at some values of $R_{\text{box}}$ they
are crossed by, and they interact with, the real continuum
states (plane waves) of the same quantum numbers, and their
precise determination is, in practice, very difficult.
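The $R_{\text{box}}$ scaling of the discretized continuum can be made
quantitative for the boundary condition (\ref{eq153}): for $\ell$=0
the box momenta are $k_n$=$n\pi/R_{\text{box}}$, so the number of
states below a fixed energy grows linearly with $R_{\text{box}}$ in
each $\ell$-channel; summing over all contributing $(\ell,m)$
channels yields the overall $R_{\text{box}}^3$ growth quoted above.
A minimal sketch (neutron mass, infinite-wall box):

```python
import math

HBARC, MNC2 = 197.327, 939.565    # MeV fm, MeV (neutron)
C = HBARC ** 2 / (2.0 * MNC2)     # hbar^2/2m ~ 20.7 MeV fm^2

def n_l0_states(r_box, e_max):
    """Count l=0 box states obeying psi(r_box)=0 (k_n = n*pi/r_box)
    with energy below e_max (MeV)."""
    count, n = 0, 1
    while C * (n * math.pi / r_box) ** 2 < e_max:
        count += 1
        n += 1
    return count

counts = [n_l0_states(r, 5.0) for r in (10.0, 20.0, 30.0)]
```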
A solution of the HFB equation (\ref{eq152}) in the spherical
box amounts to using the analogous boundary conditions,
\begin{equation}\label{eq154}
\phi_1(E,R_{\text{box}}) = \phi_2(E,R_{\text{box}}) = 0,
\end{equation}
for both components of the HFB wave function. As a result, the
quasiparticle continuum of states with $|E|$$>$$-\lambda$ is
discretized and becomes more and more dense with increasing
$R_{\text{box}}$. However, as discussed in
Sec.{\ }\ref{sec2bc}, the density matrix depends only on the
localized (lower) components of the quasiparticle wave functions
and, therefore, is very stable with increasing
$R_{\text{box}}$. By the same token, the properties of the
canonical-basis states, which are the eigenstates of the density
matrix, are also asymptotically stable. Of course, the bigger
the value of $R_{\text{box}}$, the larger is the numerical
effort required to solve the HFB equations. Consequently, it is
important to optimize the value of $R_{\text{box}}$; i.e., to
use the smallest box sizes which reproduce all interesting
physical properties of the system.
Apart from ours, there are also other possible approaches to
solving the HFB eigenproblem; in particular: (i) the
diagonalization in the large harmonic oscillator basis, and (ii)
the two-step diagonalization. Scheme (i) has been used, e.g., in
the HFB+Gogny calculations or in the deformed HFB+SkP
calculations of Ref.{\ }\cite{[Sta91]}. Its limitations, due
to the incorrect asymptotics, are discussed in
Sec.{\ }\ref{sec3e} below. In method (ii) one first solves the
HF problem and then diagonalizes the full HFB Hamiltonian in
the HF basis. Such a strategy has been suggested in
Ref.{\ }\cite{[Zve85]} and recently adopted in
Ref.{\ }\cite{[Ter95a]}.
\subsection{Canonical single-particle spectrum}
\label{sec3d}
As discussed in Sec.{\ }\ref{sec2bb}, quantities which
determine the p-h properties of the system are the canonical
energies $\epsilon_\mu$ [Eq.{\ }(\ref{eq148a})]. The neutron
canonical energies in $^{150}$Sn are shown in
Fig.{\ }\ref{FIG10} as functions of the box size
$R_{\text{box}}$. In this figure, the single-particle index
$\mu$ is represented by the spherical quantum numbers
$n{\ell}j$; only the states with occupation probabilities
$v^2_{n{\ell}j}$$>$0.0001 are presented. The canonical states
belonging to the shell 82$\leq$$N$$\leq$126 have negative
$\epsilon_{n{\ell}j}$'s, and they are very close to the HF
single-particle energies displayed in Fig.{\ }\ref{FIG12}.
They do not depend on the values of $R_{\text{box}}$ for
$R_{\text{box}}$$>$10\,fm.
At positive values of $\epsilon_{nlj}$, there are several
orbitals which do not depend on the box size even at
$R_{\text{box}}$$<$15\,fm. These states correspond to the
high-$\ell$ quasibound states $i_{11/2}$, $j_{13/2}$,
$j_{15/2}$, and $k_{15/2}$, already identified in the HF
spectrum of Fig.{\ }\ref{FIG12}. The values of
$\epsilon_{n{\ell}j}$ for these states are only slightly higher
than the corresponding values of
$\epsilon^{\text{HF}}_{n{\ell}j}$. However, these quasibound
canonical-basis states are not accompanied by the sea of
plane-wave scattering states (cf. the $j_{13/2}$, and the
$k_{15/2}$ states in Figs.{\ }\ref{FIG12} and \ref{FIG10}).
One can thus say that the canonical-basis states represent the
quasibound states well decoupled from the scattering continuum.
Many other canonical-basis states, especially those with low
orbital angular momenta $\ell$, significantly depend on the box
size up to about $R_{\text{box}}$=18\,fm, and then stabilize.
Therefore, in all subsequent calculations we use a ``safe''
value of $R_{\text{box}}$=20\,fm, unless stated otherwise.
Above 20\,MeV there appear states with canonical energies
fluctuating with $R_{\text{box}}$. These states have very
small occupation probabilities close to the limiting value of
$v^2_{n{\ell}j}$=0.0001, and their determination as eigenstates
of the density matrix is prone to large numerical uncertainties
(see Sec.{\ }\ref{sec2bcb}). One should note that the physical
observables are calculated directly by using the HFB density
matrices, and the above numerical uncertainties do not affect
the results obtained within the HFB theory.
As pointed out in Ref.{\ }\cite{[Dob94]}, the canonical
spectrum presented in Fig.{\ }\ref{FIG10} can be used to
analyze the shell effects far from stability. In particular, the
size of the $N$=126 gap is very small (a 2\,MeV gap between the
1$i_{13/2}$ and 4$s_{1/2}$ states), and hence it cannot yield
any pronounced shell effect (seen, e.g., in the behavior of the
two-neutron separation energies, Sec.{\ }\ref{sec4c}). This
shell-gap quenching is not a result of a too small value of the
spin-orbit splitting. Indeed, a larger spin-orbit strength would
push the 1$i_{13/2}$ level down in energy, without affecting
the size of the $N$=126 shell gap (several negative-parity
states are nearby). The $N$=126 gap, which is equal to about
4\,MeV at $R_{\text{box}}$=10\,fm, closes up with increasing
$R_{\text{box}}$ due to the several low-$\ell$ states whose
energies steadily decrease. This effect can be attributed to
the pairing-induced coupling with the positive-energy states
(see Sec.{\ }\ref{sec3e}).
In the energy window between 0 and 20\,MeV, the density of
single-particle canonical energies is fairly uniform and no
pronounced shell effects are visible. Since the Fermi energy
must stay at negative values, this region of the phase space
cannot be reached. However, one may say that the influence of
the positive-energy spectrum on the bound states (had we
analyzed it in terms of, e.g., the Strutinsky averaging) is
characterized by a rather structureless distribution of states.
Above 20\,MeV, the occupation probabilities rapidly decrease
(cf. Table \ref{TAB01}), and this part of the phase space can
safely be disregarded, provided one stays in the canonical
basis.
\subsection{Single-quasiparticle spectrum}
\label{sec3f}
The eigenvalues of the HFB equation (\ref{eq152})
(single-quasiparticle energies) carry information on the
elementary modes of the system. The lowest single-quasineutron
energies $E^{\text{HFB}}_{n{\ell}j}$ in tin isotopes between
$N$=50 and $N$=126 are shown in Fig.{\ }\ref{FIG14} (top
panel). Apart from the magic shell gaps at $N$=50 and $N$=82,
where the single-quasiparticle energies exhibit sudden jumps,
they depend rather smoothly on neutron number. For a given
orbital $n{\ell}j$, the minimum of $E^{\text{HFB}}_{n{\ell}j}$
is attained in the isotope where the corresponding
single-particle state is closest to the Fermi energy. Hence,
from Fig.{\ }\ref{FIG14} one can infer the order of
single-particle energies in the beginning of the
50$\leq$$N$$\leq$82 shell as $2d_{5/2}$, $3s_{1/2}$, $2d_{3/2}$,
$1g_{7/2}$, and $1h_{11/2}$. Similarly, the predicted order at
the bottom of the next major shell is $2f_{7/2}$, $3p_{3/2}$,
$3p_{1/2}$, $2f_{5/2}$, $1h_{9/2}$, and $1i_{13/2}$. The order
of spherical single-particle states does vary with $N$. For
instance, according to the HFB+SkP calculations of
Fig.{\ }\ref{FIG14}, the $1g_{7/2}$ shell never becomes lowest
in energy, as it should have done, had the single-particle
energies been $N$-independent.
Noteworthy is the fact that, due to the strong interaction with
the low-$\ell$ continuum (cf.{\ }Sec.{\ }\ref{sec3d}), the
$4s_{1/2}$ excitation becomes lowest at $N$$>$114. Above the
$4s_{1/2}$ state there appear several quasiparticle states with
excitation energies rapidly decreasing with $N$. These orbitals
represent the low-energy continuum states. They are very close
in energy, exhibit small spin-orbit splitting, and the lowest of
them are the low-$\ell$ states: $4p_{1/2}$, $4p_{3/2}$,
$3d_{3/2}$, and $3d_{5/2}$. All these features are
characteristic of the continuum states \cite{[Dob95d]}. Still
higher in energy, one may distinguish a similar doublet of the
$3f_{5/2}$ and $3f_{7/2}$ states, as well as the $2g_{9/2}$
state which represents a high-$\ell$ resonance.
The bottom panel of Fig.{\ }\ref{FIG14} shows similar results
for the BCS-like canonical energies $E_\mu$ defined in
Eq.{\ }(\ref{eq150}), and denoted here by
$E^{\text{can}}_{n{\ell}j}$. A comparison between
$E^{\text{HFB}}_{n{\ell}j}$ and $E^{\text{can}}_{n{\ell}j}$
illustrates the fact that the {\em lowest} elementary
excitations of the nucleus are equally well described by both
these quantities. Indeed, a general pattern and, in most cases,
also the values of $E^{\text{HFB}}_{n{\ell}j}$ and
$E^{\text{can}}_{n{\ell}j}$ are very similar. The differences
mainly concern the $s_{1/2}$ states, and also the low-$\ell$
states in the continuum, which in the canonical representation
appear higher in energy (see Table{\ }\ref{TAB01} for the
direct comparison for $s_{1/2}$ states). On the other hand, the
position of the high-$\ell$ $2g_{9/2}$ resonance is almost
identical in both representations. Such a similarity supports
the supposition (Ref.{\ }\cite{[Dob94]} and
Sec.{\ }\ref{sec3d}) that the canonical single-particle
energies, which are the main ingredients of
$E^{\text{can}}_{n{\ell}j}$, constitute a fair representation of
single-particle and single-quasiparticle properties of the
system.
\subsection{Relation between canonical and single-quasiparticle
wave functions}
\label{canqua}
As discussed in Sec.{\ }\ref{sec2ab}, the canonical states
constitute a basis in which the independent-quasiparticle state
$|\Psi\rangle$ has the form of a product of correlated pairs
[Eq.{\ }(\ref{eq116})]. Therefore, these states can be
considered as fundamental building blocks describing the pairing
correlations in a many-fermion system. On the other hand, the
canonical states are determined by a solution of the HFB
equation -- the single-quasiparticle states.
Since the canonical states constitute an orthonormal ensemble,
the lower and upper HFB components can be expanded as
\begin{mathletters}\label{eqqcan}\begin{eqnarray}
\phi_1 (E_n,\bbox{r} \sigma )&=&\sum_{\mu} {\cal{A}}^{(1)}_{n\mu}
\breve\psi_{\mu}(\bbox{r}\sigma)
, \label{eqqcanb} \\
\phi_2 (E_n,\bbox{r} \sigma )&=&\sum_{\mu} {\cal{A}}^{(2)}_{n\mu}
\breve\psi_{\mu}(\bbox{r}\sigma)
, \label{eqqcana}
\end{eqnarray}\end{mathletters}%
where
\begin{equation}\label{overlaps}
{\cal{A}}^{(i)}_{n\mu}\equiv
\int\text{d}^3\bbox{r}\sum_{\sigma}
\breve\psi^*_{\mu}(\bbox{r}\sigma)
\phi_i(E_n,\bbox{r} \sigma ) ~~~(i=1,2)
\end{equation}
are the associated overlaps. In order to find the relation
between ${\cal{A}}^{(1)}_{n\mu}$ and ${\cal{A}}^{(2)}_{n\mu}$
one can employ Eqs.{\ }(\ref{eq122}) and (\ref{eq144}) for the
HFB densities. This gives the canonical wave functions
expressed as linear combinations of the {\em lower} HFB
components:
\begin{mathletters}\label{eqqcan1}\begin{eqnarray}
v^2_\mu \breve\psi_{\mu}(\bbox{r}\sigma) &=&
\sum_{n}{\cal{A}}^{(2)}_{n\mu}\phi_2 (E_n,\bbox{r} \sigma )
, \label{eqqcan1a} \\
-u_\mu v_\mu \breve\psi_{\mu}(\bbox{r}\sigma) &=&
\sum_{n}{\cal{A}}^{(1)}_{n\mu}\phi_2 (E_n,\bbox{r} \sigma )
. \label{eqqcan1b}
\end{eqnarray}\end{mathletters}%
One should note that the expansions (\ref{eqqcan1}) are valid
regardless of the fact that the lower components
$\phi_2(E_n,\bbox{r}\sigma)$ {\em do not constitute} an
orthogonal ensemble of wave functions. By multiplying both sides
of Eqs.{\ }(\ref{eqqcan1a}) and (\ref{eqqcan1b}) with
$\breve\psi^*_{\nu}(\bbox{r}\sigma)$ and taking the scalar
product, one arrives at the orthogonality relations:
\begin{mathletters}\label{eqqcan2}\begin{eqnarray}
\sum_{n}{\cal{A}}^{(2)}_{n\mu}{\cal{A}}^{(2)*}_{n\nu}
&=& v^2_\mu \delta_{\mu\nu}
, \label{eqqcan2a} \\
\sum_{n}{\cal{A}}^{(1)}_{n\mu}{\cal{A}}^{(2)*}_{n\nu}
&=& -u_\mu v_\mu \delta_{\mu\nu}
. \label{eqqcan2b}
\end{eqnarray}\end{mathletters}%
The above identities express the fact that both
${\cal{A}}^{(2)}_{n\mu}$ and ${\cal{A}}^{(1)}_{n\mu}$ are
related to the transformation matrix ${\cal{U}}_{n\nu}$ defined
in Eq.{\ }(\ref{Ediag}):
\begin{equation}\label{transl}
{\cal{A}}^{(2)}_{n\mu}=v_\mu {\cal{U}}_{n\mu} \quad,\quad
{\cal{A}}^{(1)}_{n\mu}=-u_\mu {\cal{U}}_{n\mu},
\end{equation}
and Eqs.{\ }(\ref{eqqcan2}) reflect the unitarity of
${\cal{U}}_{n\nu}$. Equations (\ref{transl}) can be easily
derived by inserting expansions (\ref{eqqcan}) into the HFB
equation (\ref{eq143}), and then expressing the matrix
$\breve{E}_{\mu\nu}$ (\ref{eq145b}) in its eigensystem
(\ref{Ediag}).
It is instructive to express the upper HFB component in a form
similar to that of Eq.{\ }(\ref{eqqcana}):
\begin{equation}\label{phiB}
\phi_1 (E_n,\bbox{r} \sigma )=-\sum_{\mu} \frac{u_\mu}{v_\mu}
{\cal{A}}^{(2)}_{n\mu}
\breve\psi_{\mu}(\bbox{r}\sigma).
\end{equation}
For $E_n>-\lambda$, the upper component
$\phi_1(E_n,\bbox{r}\sigma)$ is the scattering wave function. It
can be formally expanded in the {\em localized} canonical wave
functions according to Eq.{\ }(\ref{phiB}), but the main
contribution comes from the particle-like states with very small
values of ${v^2_\mu}$. Hence, this relation is not too useful in
practical applications.
\subsection{Spectral distribution for the canonical-basis wave functions}
\label{sec2bcd}
In order to discuss the importance of the particle continuum on
the structure of canonical states, it is interesting to see how
a given canonical state is distributed among the
single-quasiparticle states. For this, it is convenient to
rewrite Eq.{\ }(\ref{eqqcan1a}) in the following way:
\begin{equation}\label{spectral2}
\breve\psi_{\mu}(\bbox{r}\sigma)=
\sum_{0<E_n<E_{\text{max}}} \frac{{\cal{S}}_{n\mu}}{\sqrt{N_n}}
\phi_2 (E_n,\bbox{r} \sigma ).
\end{equation}
The spectral amplitudes ${\cal{S}}_{n\mu}$ define the
distribution of the canonical states among the
single-quasiparticle states. It is important to recall at this
point that the sum in Eq.{\ }(\ref{spectral2}) represents in
fact the discrete ($E_n$$<$$-\lambda$) states and the
discretized ($E_n$$>$$-\lambda$) continuum states, i.e.,
\begin{eqnarray}
\breve\psi_{\mu}(\bbox{r}\sigma) &=&
{\displaystyle\sum_{0<E_n<-\lambda}
\frac{{\cal{S}}_{n\mu}}{\sqrt{N_n}}}
\phi_2 (E_n,\bbox{r} \sigma ) +
\nonumber \\
&& {\displaystyle\int_{-\lambda}^\infty \text{d}n(E)
\frac{{\cal{S}}_{E,\mu}}{\sqrt{N_E}}}
\phi_2 (E ,\bbox{r} \sigma ) ,
\label{spectral}
\end{eqnarray}
with the spectral amplitudes ${\cal{S}}_{n\mu}$ and
${\cal{S}}_{E,\mu}$ pertaining to the discrete and continuous HFB
spectrum, respectively.
The spectral amplitudes can be expressed in terms of matrices
${\cal{A}}^{(2)}_{n\mu}$ or ${\cal{U}}_{n\mu}$ introduced in
Sec.{\ }\ref{canqua}:
\begin{equation}\label{ampli}
{\cal{S}}_{n\mu} = \frac{\sqrt{N_n}}{v_\mu^2}
{\cal{A}}^{(2)}_{n\mu} = \frac{\sqrt{N_n}}{v_\mu}{\cal{U}}_{n\mu}.
\end{equation}
We have included in ${\cal{S}}_{n\mu}$ the norms $N_n$ of the
lower components, Eq.{\ }(\ref{norms}). In this way, the
values of spectral amplitudes measure the influence of
quasiparticle states irrespective of the overall magnitude of
their lower components.
Before discussing the properties of the spectral amplitudes,
let us write down the two sum rules:
\begin{equation}\label{eqqcan4a}
1 = \sum_{\mu}|{\cal{S}}_{n\mu}|^2\frac{v_\mu^2}{N_n}
= \sum_{n} |{\cal{S}}_{n\mu}|^2\frac{v_\mu^2}{N_n},
\end{equation}
\begin{equation}\label{eqqcan4b}
1 = \sum_{\mu}|{\cal{S}}_{n\mu}|^2\frac{v_\mu^4}{N_n^2}.
\end{equation}
The first two sum rules, Eq.{\ }(\ref{eqqcan4a}), come from
the unitarity of ${\cal{U}}_{n\mu}$. The last one,
Eq.{\ }(\ref{eqqcan4b}), expresses the condition defining the
norm of the lower HFB component.
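These relations are easy to verify numerically. The sketch below
builds a random orthogonal ${\cal{U}}_{n\mu}$ and random occupation
amplitudes $v_\mu$ (illustrative stand-ins for an actual HFB
solution), forms the norms $N_n$ and the spectral amplitudes of
Eq.{\ }(\ref{ampli}), and checks the sum rules:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
# random orthogonal U_{n mu} (rows indexed by n) and amplitudes v_mu
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
v = rng.uniform(0.05, 0.95, m)

N = (U ** 2 * v ** 2).sum(axis=1)   # norms N_n of the lower components
S = np.sqrt(N)[:, None] * U / v     # spectral amplitudes, Eq. (ampli)

# Eq. (eqqcan4a): two sum rules from the unitarity of U
r1 = (S ** 2 * v ** 2 / N[:, None]).sum(axis=1)
r2 = (S ** 2 * v ** 2 / N[:, None]).sum(axis=0)
# Eq. (eqqcan4b): normalization of the lower HFB component
r3 = (S ** 2 * v ** 4 / N[:, None] ** 2).sum(axis=1)
```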
In Fig.{\ }\ref{FIG33} are shown the spectral amplitudes for
the $s_{1/2}$ canonical states in $^{120}$Sn
(cf.{\ }Secs.{\ }\ref{sec2bca} and \ref{sec2bcb}). The
phases of the single-quasiparticle wave functions have been
fixed in such a way that all the amplitudes ${\cal{S}}_{n\mu}$
for $\mu$=1 are positive (some of these amplitudes are too
small to be displayed in the figure). This defines the
relative phases of the spectral amplitudes for $\mu$$>$1. Then,
the positive and negative amplitudes are in
Fig.{\ }\ref{FIG33} shown by bars hashed in opposite
directions. Results shown in this figure pertain to the same
single-quasiparticle and canonical states as those shown in
Figs.{\ }\ref{FIG21} and \ref{FIG22}, respectively, and in
Table \ref{TAB01}.
The lowest panel in Fig.{\ }\ref{FIG33} shows that the
$1s_{1/2}$ canonical state ($\mu$=1) is composed mainly of two
components corresponding to the two deep-hole quasiparticles at
$E_8$=31.64\,MeV and $E_5$=17.60\,MeV. Similarly, the $\mu$=2
and $\mu$=3 canonical states are mixtures of the
$E_5$=17.60\,MeV and $E_1$=1.54\,MeV quasiparticles. For all
three of these canonical states, the diagonal amplitudes
dominate.
Another pattern appears for the positive-energy canonical
states; i.e., for $\mu$=4 and $\mu$=5. These two canonical
states contain large components of the hole-like quasiparticles
at $E_5$=17.60\,MeV and $E_1$=1.54\,MeV, but in addition, they
also acquire large components of the particle-type
quasiparticles belonging to the continuum. These continuum
components are centered around 15 and 20\,MeV for $\mu$=4 and
$\mu$=5, respectively. This illustrates the fact that a correct
description of the positive-energy canonical states requires
solving the HFB equation to rather high energies. The widths of
the corresponding distributions are rather large, which
indicates that there is not a single resonance in the particle
continuum which would alone describe the high-energy $s_{1/2}$
canonical states. This can be well understood by recalling that
the $\ell$=0 resonances have usually very large widths.
{}For the drip-line nucleus $^{150}$Sn, the spectral $s_{1/2}$
amplitudes are shown in Fig.{\ }\ref{FIG35}. Similarly to the
case of $^{120}$Sn, the three lowest canonical states for
$\mu$=1, 2, and 3 are mainly composed of the three hole-like
quasiparticles at $E_9$=34.27, $E_7$=22.12, and $E_3$=7.24\,MeV
with dominating diagonal amplitudes. On the other hand, the
low-lying positive-energy canonical $\mu$=4 state has large and
almost equal components coming from the particle-like
quasiparticles at $E_1$=2.40, $E_2$=4.84, and $E_4$=8.93\,MeV.
The following $\mu$=5 canonical state has dominant amplitudes
from the hole-like and particle-like quasiparticles at
$E_3$=7.24 and 8.93\,MeV, respectively. One should note that the
$\mu$=4 and $\mu$=5 canonical $s_{1/2}$ states in $^{150}$Sn
have rather large occupation factors as compared to those in
$^{120}$Sn. Both of them require including the
single-quasiparticle states {\em at least} up to 10\,MeV. The
following $\mu$=6 state (not shown in the figure) has the
occupation probability of $v^2_6$=0.0003 and the spectral
amplitudes extending up to 25\,MeV.
The spectral amplitudes for the $f_{7/2}$ states in $^{120}$Sn
and $^{150}$Sn are shown in Figs.{\ }\ref{FIG34} and
\ref{FIG36}, respectively. An interesting situation appears in
$^{120}$Sn where two quasiparticles, one of the particle type
and another one of the hole type, have rather similar
single-quasiparticle energies of 17.63 and 18.97\,MeV. As a
result, the lowest canonical state ($\mu$=1) acquires a
substantial particle-type quasiparticle component, while both
quasiparticles contribute almost equally to the $\mu$=3
canonical state. In $^{150}$Sn, the positive-energy $f_{7/2}$
canonical states ($\mu$=3 and 4) have large amplitudes from the
hole-like quasiparticles (contributing almost exclusively to the
structure of the negative-energy canonical states with $\mu$=1
(1$f_{7/2}$) and $\mu$=2 (2$f_{7/2}$)), as well as from a wide
distribution of several particle-type quasiparticles extending
up to 20\,MeV.
The spectral amplitudes allow also for a determination of the
asymptotic properties of canonical states. (See
Ref.{\ }\cite{[Van93]} for a discussion of the asymptotic
properties of natural orbits.) The lower components
$\phi_2(E_n,\bbox{r}\sigma)$ behave asymptotically as
$\exp(-r\sqrt{2m(E_n-\lambda)/\hbar^2})$
\cite{[Bul80],[Dob84]}. Therefore, as seen from
Eq.{\ }(\ref{spectral}), the asymptotic properties of
canonical states are governed by the lowest discrete
quasiparticle, provided the corresponding spectral amplitude,
${\cal{S}}_{1\mu}$, is not equal to zero. However, if such a
spectral amplitude is non-zero but very small, the corresponding
asymptotic behavior will be attained only at very large
distances. In practice, the lowest discrete quasiparticle
dominates the asymptotic behavior only if the corresponding
spectral amplitude has a significantly large value. For the
$s_{1/2}$ states in $^{120}$Sn (Fig.{\ }\ref{FIG33}) such a
situation occurs for the canonical states with $\mu$=2--5. On the
other hand, since the value of $|{\cal{S}}_{1,1}|$ is very
small, the asymptotic behavior of the $\mu$=1 canonical state is
dominated by the hole-like quasiparticle at $E_5$=17.60\,MeV. A
similar situation occurs for the $f_{7/2}$ states in $^{120}$Sn.
Namely, only for the $\mu$=2 canonical state the asymptotic
behavior is determined by the lowest discrete quasiparticle.
An entirely different property can occur in drip-line nuclei,
where the Fermi energy is close to zero and there may be no
quasiparticle excitations in the discrete spectrum between 0
and $-\lambda$. In such a situation, shown in
Figs.{\ }\ref{FIG35} and \ref{FIG36}, the canonical states are
represented by superpositions of lower quasiparticle components
belonging to the particle continuum. Consequently, it is the
integral over the lowest continuum quasiparticle states just
above the $E$=$-\lambda$ threshold that determines the asymptotic
properties of the canonical states. In other words, the profile
of the level density, $dn(E)/dE$, around $E=-\lambda$ becomes a
crucial factor. Good examples of a very strong coupling to the
particle continuum are the $\mu$=4 and 5 canonical $s_{1/2}$ and
$f_{7/2}$ states in $^{150}$Sn, where the quasiparticle strength
is distributed in a very wide energy interval ranging from 1.5
to 20\,MeV. On the other hand, the two lowest canonical
$f_{7/2}$ states in $^{150}$Sn can be associated with the two
quasiparticle excitations well localized in energy (see
Fig.{\ }\ref{FIG36}) and their asymptotics is governed by the
energy of the lowest quasiparticle.
An analysis of the spectral distribution, analogous to the one
presented above, has recently been performed \cite{[Pol95]} for
the natural orbits in $^{16}$O determined within the Green's
function method using the $NN$ interaction. This method
accounts for a much more general class of correlations as
compared to the HFB correlations of the pairing type studied
here. However, the general features of the spectral
distributions remain essentially the same. Namely, the
low-occupation-number natural orbits are determined mostly
through high-energy continuum contributions, and large box sizes
(15--20\,fm) and large single-particle bases (20 states per
${\ell}j$-block) have to be used to stabilize the solutions.
This is so even if the studied nucleus ($^{16}$O) is
$\beta$-stable, well-bound, and light; one can expect that for
drip-line nuclei the aforementioned features can only be more
pronounced.
\subsection{Asymptotic properties}
\label{sec3c}
In the limit of weak binding, radial dimensions of atomic nuclei
increase and it becomes exceedingly important to control the
radial asymptotics of many-body wave functions, not only in
reaction studies but also in nuclear structure applications.
Figure{\ }\ref{FIG04} displays the radial dependence of the
neutron density $\rho(r)$ in $^{150}$Sn calculated with the
values of $R_{\text{box}}$ between 10 and 30\,fm. It is seen
that, for every value of $R_{\text{box}}$, $\rho(r)$ follows its
asymptotic behavior up to about $R_{\text{box}}$$-$3\,fm and
then falls to zero as a result of the boundary conditions
(\ref{eq154}). That is, these boundary conditions affect the
density only in a narrow spherical layer of the thickness equal
to about 3\,fm, while inside this layer $\rho(r)$ behaves
independently of the value of $R_{\text{box}}$. Analogous
results for the pairing density $\tilde\rho(r)$ are shown in
Fig.{\ }\ref{FIG04a}.
At very large distances the asymptotic behavior of the particle
density is governed by the square of the lower component of the
single-quasiparticle wave function corresponding to the lowest
quasiparticle energy $E_{\text{min}}$. Similarly, the
asymptotic behavior of the pairing density $\tilde\rho(r)$ is
determined by the product of the upper and the lower components
of quasiparticle $E_{\text{min}}$. Using the asymptotic
properties of the HFB wave functions derived in
\cite{[Bul80],[Dob84]}, one obtains:
\begin{mathletters}\label{eq155}\begin{eqnarray}
\rho(r) \stackrel{\text{large}~r}{\longrightarrow}
& \sim {\displaystyle\frac{\exp(- \chi r)}{r^2}}&
\quad ; \quad
\chi=2\kappa_2 , \label{eq155a} \\
\tilde\rho(r) \stackrel{\text{large}~r}{\longrightarrow}
& \sim {\displaystyle\frac{\exp(-\tilde\chi r)}{r^2}}&
\quad ; \quad
\tilde\chi= \kappa_1 + \kappa_2 , \label{eq155b}
\end{eqnarray}\end{mathletters}%
where
\begin{equation}\label{eq156}
\kappa_1 = \sqrt{\frac{2m(-E_{\text{min}}-\lambda)}{\hbar^2}}\quad,\quad
\kappa_2 = \sqrt{\frac{2m( E_{\text{min}}-\lambda)}{\hbar^2}}.
\end{equation}
In the considered example of $^{150}$Sn the calculated values
are $\lambda$=$-$1.46\,MeV and $E_{\text{min}}$= 1.07\,MeV (a
$p_{1/2}$ state). Consequently, $\chi$$\simeq$0.70\,fm$^{-1}$
and $\tilde\chi$$\simeq$0.49\,fm$^{-1}$. In
Figs.{\ }\ref{FIG04} and \ref{FIG04a} the asymptotic
dependencies given by Eq.{\ }(\ref{eq155}) are shown as shaded
lines. One can see that for $\rho(r)$ the asymptotic regime is
reached only at distances as large as 25\,fm, which means that
the contributions from other quasiparticle states, and/or from
the next-to-leading-order terms in the Hankel functions, still
influence the particle density at rather large values of $r$.
Interestingly, the pairing density approaches the asymptotic
limit already at $r$$\sim$10\,fm.
A rough estimate of $\chi$ and $\tilde\chi$ can be obtained by
substituting the value of a typical pairing gap
($\Delta$=1\,MeV) for the lowest quasiparticle energy
$E_{\text{min}}$. {}For stable nuclei
($\lambda$$\simeq$$-$8\,MeV) one obtains
$\chi$$\simeq$1.32\,fm$^{-1}$, while for the one-neutron drip
nuclei, defined by a vanishing separation energy,
$S_n$$\simeq$$\Delta$+$\lambda$$\simeq$0, the result is
$\chi$$\simeq$0.62\,fm$^{-1}$. This difference illustrates the
increase in the spatial extension of the {\em particle}
densities when going towards the neutron drip line. On the
other hand, for the {\em pairing} densities the corresponding
numbers are $\tilde\chi$$\simeq$1.24\,fm$^{-1}$ and
$\tilde\chi$=$\chi/2$$\simeq$0.31\,fm$^{-1}$. Therefore, in
stable nuclei both types of densities have rather similar
asymptotic behavior, while in drip-line nuclei the pairing
densities have much longer tails.
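All decay constants quoted in this subsection follow directly from Eq.~(\ref{eq156}); a short numerical cross-check (a sketch, assuming $\hbar^2/2m\simeq20.74$\,MeV\,fm$^2$ for the neutron) reproduces them:

```python
import math

HB2M = 20.74  # hbar^2/2m for a neutron, in MeV fm^2 (assumed value)

def decay_constants(e_min, lam):
    """kappa_1, kappa_2 of Eq. (156), in fm^-1."""
    k1 = math.sqrt((-e_min - lam) / HB2M)
    k2 = math.sqrt((e_min - lam) / HB2M)
    return k1, k2

# 150Sn: lambda = -1.46 MeV, E_min = 1.07 MeV (the p1/2 state)
k1, k2 = decay_constants(1.07, -1.46)
chi, chi_tilde = 2.0 * k2, k1 + k2
print(f"150Sn: chi = {chi:.2f} fm^-1, chi~ = {chi_tilde:.2f} fm^-1")

# rough estimates obtained by substituting E_min ~ Delta = 1 MeV
k1s, k2s = decay_constants(1.0, -8.0)  # stable nuclei, lambda ~ -8 MeV
k1d, k2d = decay_constants(1.0, -1.0)  # drip line, S_n ~ Delta + lambda ~ 0
print(f"stable:    chi = {2*k2s:.2f}, chi~ = {k1s + k2s:.2f} fm^-1")
print(f"drip line: chi = {2*k2d:.2f}, chi~ = {k1d + k2d:.2f} fm^-1")
```

Rounded to two digits, the printed values coincide with the estimates quoted above.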
In this context, it is instructive to recall the discussion from
Sec.{\ }\ref{sec2ac} regarding the probabilistic
interpretation of the HFB densities. The probability
${\cal{P}}_1(x)$ (${\cal{P}}_2(x)$) of finding a particle or a
pair of particles at $r$=$x$ is proportional to $\rho(x)$ or
$\rho^2(x)+\tilde\rho^2(x)$, respectively. Consequently, in stable nuclei
${\cal{P}}_2(x)$ decays much faster than ${\cal{P}}_1(x)$ at
large distances. This is not true for drip-line nuclei, where
the asymptotics of ${\cal{P}}_1(x)$ and ${\cal{P}}_2(x)$ is the
same.
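This statement can be made quantitative with the rough $\chi$ and $\tilde\chi$ estimates given above (a sketch; all exponents in fm$^{-1}$, and the more slowly decaying term in ${\cal P}_2$ dominates asymptotically):

```python
# Large-r decay exponents of P1 ~ rho and P2 ~ rho^2 + rho~^2,
# using the rough chi and chi~ values estimated in the text.
for label, chi, chi_t in [("stable", 1.32, 1.24), ("drip line", 0.62, 0.31)]:
    p1_exp = chi                 # P1 ~ exp(-chi r)
    p2_exp = 2.0 * min(chi, chi_t)  # slower term of P2 dominates
    print(f"{label}: P1 ~ exp(-{p1_exp:.2f} r), P2 ~ exp(-{p2_exp:.2f} r)")
```

In the stable case ${\cal P}_2$ falls off nearly twice as fast as ${\cal P}_1$, while at the drip line the two exponents coincide.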
As discussed above, static pairing correlations can influence
dramatically the asymptotic behavior of density distributions in
drip-line nuclei. In addition, a significant modification of the
density tails comes from the dynamical coupling to collective
modes through the particle continuum. Such a coupling can be
treated in terms of the continuum QRPA and has been shown to be
very important for light systems \cite{[Len93],[Sch95]}. An
analysis of the asymptotic behavior of the particle density
$\rho(r)$ has recently been performed \cite{[Ben95]} by finding
the {\em exact} solutions for two weakly bound particles
interacting through a contact force. In that study, the role of
one-particle resonant states on the density asymptotics has been
discussed.
\subsection{Pairing coupling to positive-energy states}
\label{sec3e}
As illustrated in Sec.{\ }\ref{sec3a}, the density of the
scattering continuum states increases with $R_{\text{box}}$. In
the limit of very large values of $R_{\text{box}}$, the set of
discretized continuum states can be considered as a fair
approximation of the real continuum, and the sums over the
positive-energy states can correctly represent integrals over
the continuous energy variable. Therefore, we may consider this
limit in order to study the dynamical coupling between the bound
single-particle states and the positive-energy states. In the
language of pairing correlations, one may think of this coupling
in terms of a virtual scattering of pairs of fermions from the
bound states to positive-energy states, and back. Such a pair
scattering gives rise to an additional pairing contribution to the
ground-state energy.
To illustrate the stability of results with increasing box size,
in Fig.{\ }\ref{FIG03} we show the neutron p-p potentials
$\tilde{U}(r)$ in $^{150}$Sn and $^{172}$Sn calculated in the
HFB+SkP model for several values of $R_{\text{box}}$. In these
two nuclei, the values of $\tilde{U}(r)$ do not change when
$R_{\text{box}}$ is larger than 20 and 22\,fm, respectively, but
at smaller values of $R_{\text{box}}$, one observes significant
variations. A rather unexpected result of this analysis is that
the overall magnitude of pairing correlations, represented by
the average pairing gap $\langle\Delta\rangle$, {\em decreases}
with increasing $R_{\text{box}}$. This occurs in spite of the
fact that the actual density of scattering states dramatically
{\em increases} with increasing $R_{\text{box}}$.
This effect can be understood by noting that the pairing
correlations produced by a density-dependent p-p interaction
(and hence for the SkP force used here) are concentrated at the
nuclear surface; i.e., at a fixed location in space. For small
values of $R_{\text{box}}$, the boundary conditions
(\ref{eq154}) have a tendency to push the continuum wave
functions towards smaller distances, and into the surface
region. This increases the magnitude of pairing correlations. On
the other hand, with increasing $R_{\text{box}}$, the scattering
states spread out uniformly outside the nucleus and effectively
leave the surface region. Hence $\langle\Delta\rangle$
decreases. As a consequence, with increasing $R_{\text{box}}$
the self-consistent attractive pairing potential $\tilde{U}(r)$
decreases in magnitude and significantly spreads out towards
large distances.
The importance of allowing the pairing interaction to couple
properly to the particle continuum is illustrated in
Fig.{\ }\ref{FIG05a}, where the neutron rms radius, the
average pairing gap, and the Fermi energy are shown as
functions of $R_{\text{box}}$. The two upper plots confirm that
stability of the results is attained beyond 20 or 22\,fm, while
the bottom plot indicates that the pairing coupling to the
positive-energy states can be a decisive factor influencing the
nuclear binding. Indeed, below $R_{\text{box}}$$\simeq$20\,fm
the nucleus $^{172}$Sn is unbound, and it becomes bound only
when its ground state is allowed to gain an additional binding
from the pairing correlations at large distances. This
indicates that, for the surface-type pairing interaction, one
has to consider a rather dense particle continuum before the
pairing coupling to positive-energy states is exhausted. (For a
similar discussion in a schematic model see
Ref.{\ }\cite{[Bel87]}. There, it has been pointed out that
because of strong coupling to the continuum, $\lambda$ is
significantly lowered in the case of surface pairing as
compared to the case of volume pairing.)
Since, for the Gogny interaction, the HFB equations are solved
by expansion in the harmonic oscillator basis, one can test the
coupling to the positive-energy states by increasing the number
$N_{\text{sh}}$ of the oscillator shells used in the basis. In
practice, calculations must be restricted to
$N_{\text{sh}}$$\leq$20, which allows one to describe the wave
functions up to about
$R_{\text{max}}$$\simeq$$\sqrt{2N_{\text{sh}}\hbar/m\omega_0}$,
where $\omega_0$ is the frequency of the harmonic oscillator
\cite{[Naz94]}. For $N_{\text{sh}}$=20 this corresponds to
about $R_{\text{max}}$=14\,fm.
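The quoted value of $R_{\text{max}}$ can be checked numerically (a sketch; the oscillator frequency is taken from the standard estimate $\hbar\omega_0\simeq41A^{-1/3}$\,MeV, which is an assumption, not a fitted value):

```python
import math

HBARC = 197.327  # MeV fm
MC2 = 939.0      # nucleon rest mass, MeV

def r_max(n_sh, hw0):
    """Classical turning point of the highest oscillator shell:
    R_max = sqrt(2 N_sh hbar / (m omega_0)) = sqrt(2 N_sh) * b,
    with oscillator length b = hbar c / sqrt(mc^2 * hbar omega_0)."""
    b = HBARC / math.sqrt(MC2 * hw0)
    return math.sqrt(2.0 * n_sh) * b

hw0 = 41.0 * 150.0 ** (-1.0 / 3.0)  # ~7.7 MeV for A ~ 150 (assumed estimate)
R = r_max(20, hw0)
print(f"R_max(N_sh=20) ~ {R:.1f} fm")
```

The result, close to 15\,fm, is consistent with the quoted $R_{\text{max}}\simeq14$\,fm.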
{}Figure{\ }\ref{FIG06} compares the asymptotic behavior of
the neutron particle densities in three neutron-rich tin
isotopes calculated in the spatial coordinates (SkP) or in the
harmonic-oscillator basis (D1S). In the former case one obtains
a clean region of the asymptotic dependence governed by
Eq.{\ }(\ref{eq155a}), which around $r$=18\,fm is perturbed by
the box boundary conditions (\ref{eq154}) at
$R_{\text{box}}$=20\,fm. In the latter case, the region of
proper asymptotic behavior becomes perturbed by the
$\exp(-m{\omega_0}r^2/\hbar)$ dependence characteristic of the
harmonic-oscillator-basis wave functions. The $\omega_0$
values, obtained by minimizing the total energy for the
$N_{\text{sh}}$=17 basis, are equal to 13.4, 6.6, and 6.3\,MeV
in $^{132}$Sn, $^{150}$Sn, and $^{172}$Sn, respectively. Due to
this, a study of the continuum influence using such a basis can
be performed only up to densities of scattering states
corresponding to about $R_{\text{box}}$=14\,fm in the heavier
isotopes and only $R_{\text{box}}$=10\,fm in $^{132}$Sn, as can
be seen in Fig.{\ }\ref{FIG12}. Let us note, however, that the
neutron densities beyond $r$=10\,fm are typically smaller than
10$^{-4}$\,fm$^{-3}$, which explains the stability of the HFB
calculations with increasing size of the basis.
This is illustrated in Fig.{\ }\ref{FIG05b}, which is analogous
to the study presented for the SkP interaction in
Fig.{\ }\ref{FIG05a}. Here, for each value of $N_{\text{sh}}$
and for each nucleus, the value of $\omega_0$ was optimized so
as to minimize the total energy. As can be seen, one obtains
good stability of the results by using $N_{\text{sh}}$=17. This
test corresponds to testing the coordinate-representation
solutions (Fig.{\ }\ref{FIG05a}) in the range of box sizes
between 12\,fm$\leq$$R_{\text{box}}$$\leq$14\,fm. In this
rather narrow region, the SkP results are not stable because of
the dominant surface-type character of its pairing interaction.
Since the p-p Gogny interaction is more of the volume type
(Sec.{\ }\ref{sec2aca}) it requires much smaller distances to
saturate.
\subsection{BCS approximation}
\label{sec3b}
When inspecting Fig.{\ }\ref{FIG12}, it is obvious that by
applying the BCS approximation to the state-independent pairing
force and by allowing the BCS-type pairing correlations to
develop in such a dense spectrum, the result can be disastrous.
The seniority force gives rise to the {\em non-localized pairing
field} \cite{[Dob84]},
\begin{equation}\label{VBCS}
\tilde{h}_{\text{BCS}}
(\bbox{r}\sigma,\bbox{r'}\sigma')=-\Delta_{\text{BCS}}
\delta(\bbox{r}-\bbox{r'})
\delta_{\sigma\sigma'},
\end{equation}
i.e., to a constant pairing gap, identical for all states. The
high density of single-particle states in the particle continuum
immediately results in an unrealistic increase of BCS pairing
correlations \cite{[Naz94]}. One may, in principle,
artificially readjust the pairing strength constant to avoid
such an increase, but then the predictive power of the approach
is lost and, moreover, the spatial asymptotic properties of
the solutions are still going to be incorrect.
To illustrate the latter point, Fig.{\ }\ref{FIG25} (top
panel) shows the neutron densities in $^{150}$Sn calculated for
several values of $R_{\text{box}}$ within the HF+BCS
approximation. In order to avoid the increase of pairing
correlations with increasing density of states, the calculations
have been performed by fixing the values of the pairing gap. For
every box size $R_{\text{box}}$, the value of
$\Delta_{\text{BCS}}$ has been set equal to the average pairing
gap $\langle\Delta\rangle$ obtained within the HFB method. The
corresponding $\langle\Delta\rangle$ values are quoted in
Fig.{\ }\ref{FIG03}.
It is not too surprising to see that the asymptotic behavior of
the density calculated in the HF+BCS+$\langle\Delta\rangle$
method (top panel) is entirely different from that shown in
Fig.{\ }\ref{FIG04}. Due to a nonzero occupation probability
of quasibound states, there appears an unphysical gas of
neutrons surrounding the nucleus. In Fig.{\ }\ref{FIG25} this
gas has a constant density of
$\rho$$\simeq$6$\times$10$^{-5}$\,fm$^{-3}$, independent of
$R_{\text{box}}$. This result means that an external pressure
would have been necessary to keep the neutrons inside the box.
Namely, had the box boundary condition been released, one would
have observed a stream of neutrons escaping the nucleus. This is
a completely artificial (and unwanted) feature of the BCS
approximation, because for a negative value of the Fermi energy,
neutrons cannot be emitted.
In the above example the density of the neutron gas at
$R_{\text{box}}$=25\,fm corresponds to about 4 neutrons
uniformly distributed in a sphere of radius $R$=$R_{\text{box}}$.
Needless to say, by increasing the box radius, the number of
neutrons in the gas grows at the expense of the number of
neutrons constituting the nucleus in the center of the box.
Since the total average number of neutrons is conserved, by
changing $R_{\text{box}}$ one actually performs an unphysical
study of {\em different} nuclei, surrounded by a neutron gas of
a fixed density. Another consequence of the presence of a gas
of particles is that the rms nuclear radius cannot be calculated
in the BCS theory, because the results strongly depend on the
box size (see discussion in Refs.{\ }\cite{[Dob84],[Dob95a]}).
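The quoted number of gas neutrons is simply the gas density multiplied by the box volume; a one-line check:

```python
import math

rho_gas = 6.0e-5  # fm^-3, the gas density read off Fig. 25
R_box = 25.0      # fm

N_gas = rho_gas * (4.0 / 3.0) * math.pi * R_box ** 3
print(f"neutrons in the gas: {N_gas:.2f}")
```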
It has been suggested in the literature \cite{[Ton79]} that the
above deficiencies of the BCS approximation can be cured by
applying the state-dependent-pairing-gap version, where
the pairing gap is calculated for every single-particle state
using an interaction which is not of the seniority type. (The
corresponding BCS equations resemble the canonical-basis
relations (\ref{eq151}).) In such an approach one hopes that the
majority of continuum states would neither contribute to the
pairing field (e.g., because of their very different spatial
character) nor result in the appearance of the unphysical gas.
This conjecture is tested in Fig.{\ }\ref{FIG25} (middle and
bottom panel) where the neutron densities obtained within the
state-dependent version of the BCS approximation using the
SkP$^\delta$ and the SkP interactions are presented. It is seen
that a reduced coupling of some continuum states to the pairing
field does indeed decrease the gas density; however, the
asymptotic behavior of the density is still incorrect.
In the above plots, the shaded lines represent the asymptotic
behavior given by Eq.{\ }(\ref{eq155a}) assuming
$E_{\text{min}}$=0, i.e., that of a single-particle state at the
Fermi energy. It is seen that a surplus density above this
asymptotic limit appears at large distances. However, the
deficiencies of the state-dependent BCS approximation, as used
for example in Refs.{\ }\cite{[Ton79],[Len91],[Nay95]}, are
certainly less acute than those of the seniority-pairing BCS.
For example, in this type of approach one can probably calculate
radii of nuclei much closer to the drip line.
It is clear that the neutron gas appears in the BCS solutions
because of the nonzero occupation probabilities of scattering
states. Therefore, one may think that excluding the scattering
states from the pairing phase space could be a decisive solution
to the problem. However, for drip-line nuclei, where the Fermi
energy is by definition close to zero, the remaining phase space
would then be small, and this would lead to an artificial
quenching of pairing correlations. Moreover, even if the density
obtained in such a method did vanish asymptotically, the
corresponding factor $\chi$ would not be governed by
$\Delta$$-$$\lambda$$\simeq$2\,MeV, as discussed in
Sec.{\ }\ref{sec3c}, but by the single-particle energy,
$\epsilon$$\simeq$0, of the highest-energy single-particle
state considered in the BCS calculations. This again would lead to
densities vanishing at a much slower pace than required by
the HFB theory.
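The gap between the two decay scales can be illustrated numerically (a sketch, again assuming $\hbar^2/2m\simeq20.74$\,MeV\,fm$^2$; the values of $|\epsilon|$ for the highest-energy BCS state are hypothetical):

```python
import math

HB2M = 20.74  # hbar^2/2m, MeV fm^2 (assumed)

# HFB: density decay governed by E_min - lambda ~ Delta - lambda ~ 2 MeV
chi_hfb = 2.0 * math.sqrt(2.0 / HB2M)

# BCS with scattering states excluded: decay governed by the binding
# |eps| of the highest-energy single-particle state kept, which -> 0
for eps in (1.0, 0.1, 0.01):
    chi_bcs = 2.0 * math.sqrt(eps / HB2M)
    print(f"|eps| = {eps:5.2f} MeV: chi_BCS = {chi_bcs:.3f} fm^-1 "
          f"(HFB: {chi_hfb:.2f} fm^-1)")
```

As $|\epsilon|\to0$ the BCS decay constant vanishes, i.e., the tail becomes arbitrarily long compared with the HFB result.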
\section{Physical observables far from stability}
\label{sec4}
In this section we discuss some experimental consequences
of the HFB theory that are particularly important for weakly bound nuclei.
\subsection{Pairing gaps}
\label{sec4a}
Pairing gaps are p-p analogs of single-particle energies. They
carry information about the energies of non-collective
excitations, level occupations, odd-even mass differences, and
other observables. The average neutron canonical pairing gaps
(\ref{eq148b}) are shown in Figs.{\ }\ref{FIG24a}
($^{120}$Sn) and \ref{FIG24b} ($^{150}$Sn) as functions of the
canonical single-particle energies (\ref{eq148a}).
As seen in the middle part of Fig.{\ }\ref{FIG24a}, pairing
gaps obtained with the volume-type pairing interaction exhibit
very weak configuration dependence. In $^{120}$Sn they decrease
slightly with $\epsilon_\mu$ but remain confined between 1.0 and
1.5\,MeV. In general, the values of $\Delta_\mu$ for the
$s_{1/2}$ states are slightly larger than for other orbitals,
which is again related to the volume character of the volume delta
interaction.
The results presented in the bottom part of
Fig.{\ }\ref{FIG24a} nicely illustrate the surface character
of the SkP pairing interaction. Indeed, here the pairing gaps
increase from 0.5\,MeV (deep-hole states) to about
1.25--1.5\,MeV when the single-particle energies increase
towards the Fermi energy, and then they decrease again to about
1.0\,MeV for positive single-particle energies. This is related
to the fact that orbitals near the Fermi level are concentrated
in the surface region.
Still another type of behavior is obtained for the finite range
Gogny interaction (top part of Fig.{\ }\ref{FIG24a}). Here,
the pairing gaps decrease steadily with single-particle energy.
In $^{120}$Sn the values of $\Delta_\mu$ decrease from about
2.5\,MeV for deep-hole states to about 0.75\,MeV for
positive-energy states. (A similar energy dependence of pairing
gaps was obtained in the BCS calculations of
Ref.{\ }\cite{[Del95]} with the renormalized Paris potential.)
Interestingly, the values obtained for the high-$\ell$,
$j$=$\ell$$-$$\frac{1}{2}$ orbitals (antiparallel $L$$-$$S$
coupling) are significantly larger than those for other
orbitals. The different ranges of $\epsilon_\mu$ values for SkP
and D1S in Fig.{\ }\ref{FIG24a} reflect the different
effective masses in the two models. A rather low effective mass in
D1S, $m^*/m$=0.70, gives rise to a reduced level density and a more
deeply bound 1$s_{1/2}$ ground state as compared with the SkP model
($m^*/m$=1). In fact, due to the non-local exchange
contributions to the p-h mean field (Appendix \ref{appA}), the
1$s_{1/2}$ state in the Gogny model has the canonical energy
lower than the bottom of the local potential well, shown in
Fig.{\ }\ref{FIG01a}.
In $^{120}$Sn, the HFB+D1S pairing gaps at the Fermi energy are
of the order of 1.75\,MeV, which slightly overestimates the
values corresponding to the odd-even mass staggering in this
region. However, one should bear in mind that the pairing gaps
at the Fermi energy are rather rough approximations to the
odd-even mass difference. A more accurate description can be
obtained by performing blocked HFB calculations for odd-mass
isotopes. In the vicinity of $^{120}$Sn this method yields the
odd-even mass staggering of 1.6\,MeV \cite{[Dec80]} for the D1S
interaction and of 1.3\,MeV \cite{[Dob84]} for the SkP
interaction. Another contribution to the odd-even mass
difference comes from the coupling to the low-lying collective
modes. Therefore, the D1S parameters have been adjusted
\cite{[Dec80]} to give the pairing gap in tin to be 0.3\,MeV
larger than the experimental one. On the other hand, such a
margin has not been taken into account for the SkP and
SkP$^\delta$ forces. Clearly, a detailed comparison of the
values of pairing gaps for the interactions discussed in
Fig.{\ }\ref{FIG24a} is delicate. Much more information can
actually be derived from the comparison of their dependence on
the single-particle energies, which is markedly different.
The general pattern of $\Delta_\mu$ remains very similar when
going to the neutron-rich nucleus $^{150}$Sn
(Fig.{\ }\ref{FIG24b}). In particular, the magnitude of the
average pairing gap in deep-hole states depends strongly on the
range and density dependence of the pairing interaction.
{}Figure{\ }\ref{FIG07} shows the average neutron pairing
gaps (Eqs.{\ }(\ref{eq157}) and (\ref{eq158})) for SkP,
SIII$^\delta$, and D1S interactions. The large values of
$\langle\Delta\rangle$ obtained in HFB+D1S can be explained by:
(i) an overall larger magnitude of pairing correlations in tin
nuclei, and (ii) strong pairing correlations in deep-hole states
which strongly contribute to the average,
Eq.{\ }(\ref{eq158}). It is to be noted, however, that
despite stronger pairing in D1S, the HFB+D1S pairing gaps vanish
at $N$=126 (near the two-neutron drip line), in contrast to the
HFB+SkP result. This difference may be traced back to a much
larger continuum phase space taken into account in our HFB+SkP
calculations (Sec.{\ }\ref{sec3e}) which are performed in the
coordinate representation, and to a larger $N$=126 shell gap
(4.2\,MeV in $^{168}$Sn) obtained with D1S. (The increase of
proton pairing gaps when approaching the proton drip line has
been calculated previously in Ref.{\ }\cite{[Sta91]} with the
HFB+SkP model and explained in a similar way.) The disappearance
of the neutron pairing at $N$=126 in the HFB+SIII$^\delta$ model
is partly due to the volume character of $\tilde{h}$ (a weaker
coupling to the particle continuum) and partly due to a larger
$N$=126 shell gap \cite{[Dob95c]}.
\subsection{Shell effects}
\label{sec4b}
As discussed in Sec.~\ref{sec2ba}, diffused nucleonic densities
and very strong, surface-peaked, pairing fields obtained with
the density-dependent pairing interaction are expected to lead
to very shallow single-particle potentials in drip-line nuclei.
Because of a very diffuse surface (no flat bottom), the resulting
single-particle spectrum resembles that of a harmonic oscillator
with a spin-orbit term (but with a weakened ${\ell}^2$ term)
\cite{[Dob94]}. Schematically, this effect is illustrated in
the left panel of Fig.~\ref{FIG27}. By comparing with the
situation characteristic of stable nuclei (right panel of
Fig.~\ref{FIG27}), new shell structure emerges with a more
uniform distribution of normal-parity orbits, and the
unique-parity intruder orbit which reverts towards its parent
shell. Such a new shell structure, with no pronounced shell gaps,
would give rise to different kinds of collective phenomena
\cite{[Naz94],[Cho95]}.
The effect of the weakening of shell effects in drip-line
nuclei, first mentioned in the astrophysical context
\cite{[Hae89]}, was further investigated in
Refs.~\cite{[Smo93],[Dob94],[Dob95c]}. First analyses of its
consequences for the nucleosynthesis have also been performed
\cite{[Che95],[Pfe95]}. Microscopically, it can be explained
by: (i) the changes in the mean field itself due to weak
binding (see above), and (ii) a strong pairing-induced coupling
between bound orbitals and the low-$\ell$ continuum.
\subsection{Separation energies}
\label{sec4c}
The weakening of shell effects with neutron number manifests itself
in the behavior of two-neutron separation energies. This is
illustrated in Fig.~\ref{FIG28} which displays the two-neutron
separation energies for the $N$=80, 82, 84, and 86 spherical
even-even isotones. The large $N$=82 magic gap, clearly seen in
the nuclei close to the stability valley and to the proton-drip
line, gradually closes down when approaching the neutron drip
line. A similar effect is seen in the ($Z, N$) map of the
spherical two-neutron separation energies for the particle-bound
even-even nuclei calculated in the HFB+SkP model
(Fig.{\ }\ref{FIG26}). Namely, the neutron magic gaps $N$=20,
28, 50, 82, and 126, clearly seen as cliffs in the $S_{2n}$
surface, disappear for neutron-rich systems.
The gradual disappearance of the neutron shell structure with
$N$ is not a generic property of all effective interactions. As
seen in the plot of $S_{2n}$ and $\lambda_N$ for the tin
isotopes (Fig.~\ref{FIG02}) this effect is seen in the SkP and
SkP$^{\delta\rho}$ models, and, to some degree, also in the
SkP$^{\delta}$ model. (A weak irregularity at $N$=126 reflects
the weaker coupling to continuum for the volume pairing
\cite{[Bel87]}.) The strong shell effect seen in the SIII and
SkM$^*$ results has been discussed in Ref.~\cite{[Dob95c]}; it
can be attributed to the low effective mass in these forces. The
result of the D1S model, both for $S_{2n}$ and $\lambda_N$, is
close to that of the SkP$^{\delta}$ model. It is interesting
to point out that the QLM calculations of Ref.~\cite{[Zve85]}
(with $m^*/m=1$) for the Sn isotopes yield very similar results
to those of HFB+SkP.
The very neutron-rich nuclei, as those shown in
Fig.~\ref{FIG02}, cannot be reached experimentally under present
laboratory conditions. On the other hand, these systems are the
building blocks of the astrophysical r-process; their separation
energies, decay rates, and cross sections are the basic
quantities determining the results of nuclear reaction network
calculations. Consequently, one can learn about properties of
very neutron-rich systems by studying element abundances
\cite{[Kra93]}. The recent r-process network calculations
\cite{[Che95]}, based on several mass formulae, indicate a
quenching of the shell effect at $N$=82 in accordance with the
results of HFB+SkP model.
\subsection{Deep hole states}
\label{sec4d}
The pairing interaction between bound orbitals and the particle
continuum is partly responsible for the appearance of particle
widths of deep-hole states and the term-repulsion phenomenon
(strong repulsion between single-particle levels)
\cite{[Bul80],[Bel87]}. In the DWBA and for the local pairing
field $\tilde{U}$ the particle width is given by
\begin{equation}\label{width}
\Gamma_i = 2\pi \left|\int\text{d}^3 \bbox{r} \varphi_i(\bbox{r})
\tilde{U}(\bbox{r}) \varphi_\epsilon(\bbox{r})
\right|^2.
\end{equation}
Here, $\varphi_i(r)$ is the HF wave function of the bound
deep-hole state $i$ with the single-particle energy
$\lambda-E_i$ in the absence of pairing, while
$\varphi_\epsilon(r)$ is the HF wave function of the unbound
state with the energy $\lambda+E_i$.
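For orientation only, the radial reduction of Eq.~(\ref{width}) can be evaluated by simple quadrature. The sketch below uses purely illustrative model wave functions and pairing fields, not HFB output; it merely shows how surface- and volume-type $\tilde{U}$ enter the matrix element:

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal quadrature (kept explicit for NumPy-version safety)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Radial reduction of Eq. (width) for s states:
# Gamma = 2*pi | int u_i(r) U~(r) u_eps(r) dr |^2.
# All ingredients below are illustrative model forms (assumed shapes).
r = np.linspace(0.0, 20.0, 2001)                 # fm
u_bound = r * np.exp(-0.5 * (r / 3.0) ** 2)      # model deep-hole function
u_bound /= np.sqrt(trapz(u_bound ** 2, r))       # unit norm
u_scatt = np.sin(0.35 * r)                       # model scattering function
U_surf = -1.0 * np.exp(-((r - 6.0) / 1.0) ** 2)  # surface-peaked U~ (MeV)
U_vol = -0.25 * (r < 6.0)                        # volume-type U~ (MeV)

for label, U in [("surface", U_surf), ("volume", U_vol)]:
    me = trapz(u_bound * U * u_scatt, r)
    print(f"{label} pairing: Gamma ~ {2.0 * np.pi * me ** 2:.3e} (model units)")
```

The point of the sketch is structural: the width is controlled by the overlap of the bound and scattering functions weighted by $\tilde{U}$, so its magnitude depends sensitively on where the pairing field is localized.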
Equation (\ref{width}) is obtained by assuming that the p-p
field of the HFB Hamiltonian can be treated perturbatively. A
more consistent way would be to estimate $\Gamma_i$ based on
self-consistent HFB solutions containing pairing correlations.
The proper formulation of the nonperturbative HFB-based theory
of deep hole states and one-particle transfer process still
needs to be developed.
As discussed in Ref.{\ }\cite{[Bel87]}, $\Gamma_i$ is
sensitive to the type of the pairing force. In general, the
widths are larger for surface pairing than for volume pairing.
However, the result for an individual state strongly depends on
its angular momentum and excitation energy.
Experimentally, total widths of deep hole states,
$\Gamma_{\text{tot}}$, are of the order of MeV (see, e.g.,
Refs.{\ }\cite{[Mou76],[Her88],[Gal88],[Van93a]}). That is,
the partial width (\ref{width}), of the order of 10--100\,keV,
constitutes an extremely small fraction of
$\Gamma_{\text{tot}}$. Consequently, the experimental
determination of $\Gamma_i$ alone is very unlikely.
\subsection{Pair transfer form factors}
\label{sec4f}
There are many interesting aspects of the physics of unstable nuclei
which are related to reaction mechanism studies: weak binding,
large spatial dimensions, skins (see, e.g.,
Refs.{\ }\cite{[Mue93],[Das94],[Kim94]}). Below, we discuss
some consequences of surface-peaked pairing fields for pair
transfer studies.
An experimental observable that may probe the character of the
pairing field is the pair transfer form factor, directly related
to the pairing density $\tilde\rho$. The difference in the
asymptotic behavior of single-particle density $\rho$ and pair
density $\tilde\rho$ in a weakly bound system (see
Secs.~\ref{sec2aca} and \ref{sec3c}) can be probed by comparing
the energy dependence of one-particle and pair-transfer cross
sections. Such measurements, when performed for both stable and
neutron-rich nuclei, can shed some light on the asymptotic
properties of the HFB densities and hence on the character of the pairing
field.
{}Figure{\ }\ref{FIG32} displays the pair transfer form
factors $r^2\tilde\rho(r)$ calculated in $^{120}$Sn, $^{150}$Sn,
and $^{172}$Sn with the SkP interaction. These microscopic
results are compared with the macroscopic form factors
$r^2\delta\rho(r)$ \cite{[Das85]} which are determined by using
the derivative of the particle density with respect to the
neutron number:
\begin{equation}\label{eq305}
\delta\rho(r) = 2\frac{-E_{\text{pair}}}{\langle\Delta\rangle}
\frac{\text{d}\rho(r)}{\text{d}N},
\end{equation}
where $E_{\text{pair}}$ is given by Eq.{\ }(\ref{epair}).
This expression can be motivated by the fact that only the
orbitals near the Fermi surface make significant contributions
to the pair density. In the BCS theory, the normalization
constant in $\delta\rho(r)$ is usually chosen \cite{[Bes86]} as
$\Delta/G$=$-$$E_{\text{pair}}/\Delta$. Here, we use neither
the BCS approximation nor the constant pairing strength $G$.
Therefore, the normalization
$-$$E_{\text{pair}}/\langle\Delta\rangle$ is employed. The
derivative in Eq.{\ }(\ref{eq305}) is calculated from the
finite difference between the self-consistent results for the
HFB vacuum corresponding to particle numbers $N$+1 and $N$$-$1.
In these calculations, in order to explore the smooth dependence
on the particle number $N$, the odd-average-particle-number
vacua have been calculated without using the blocking
approximation. It should be mentioned at this point that the
further approximation \cite{[Das85],[Das89]} of the derivative
$\text{d}\rho(r)/\text{d}N$ by the spatial derivative
$\text{d}\rho(r)/\text{d}r$ is not justified, because the
volume-conservation condition is not valid for the neutron
density distribution (see Fig.{\ }\ref{FIG16}).
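A minimal sketch of Eq.~(\ref{eq305}) with the central finite difference is given below; the Fermi-shaped density profiles and the values of $E_{\text{pair}}$, $\langle\Delta\rangle$, and the half-density radii are schematic stand-ins for self-consistent HFB output:

```python
import numpy as np

def fermi_density(r, R, a=0.6, rho0=0.09):
    """Schematic neutron density (fm^-3) with half-density radius R (fm)."""
    return rho0 / (1.0 + np.exp((r - R) / a))

r = np.linspace(0.0, 15.0, 601)
E_pair, gap = -12.0, 1.3                 # MeV (assumed values)
R_minus = 1.12 * 149.0 ** (1.0 / 3.0)    # vacuum for N-1 (schematic radius)
R_plus = 1.12 * 151.0 ** (1.0 / 3.0)     # vacuum for N+1 (schematic radius)

# central finite difference d rho/dN between the N+1 and N-1 vacua
drho_dN = (fermi_density(r, R_plus) - fermi_density(r, R_minus)) / 2.0
delta_rho = 2.0 * (-E_pair / gap) * drho_dN   # Eq. (305)
r_peak = r[np.argmax(r ** 2 * delta_rho)]
print(f"form factor r^2 delta_rho peaks near r = {r_peak:.1f} fm")
```

Even in this schematic form, the derivative is concentrated at the half-density radius, so $r^2\delta\rho(r)$ is surface-peaked, in line with Fig.{\ }\ref{FIG32}.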
The pair transfer form factors in Fig.{\ }\ref{FIG32} clearly
show that this process has a predominantly surface character.
The macroscopic form factors have smaller widths and higher
maxima than the microscopic ones. On the other hand, they are
smaller in the interior of the nucleus as well as in the
asymptotic region. In $\beta$-stable nuclei the macroscopic
approximation works fairly well, while in the drip-line nuclei
the differences between the two form factors are markedly
larger. In general, the corresponding differences are much
larger than those obtained within the BCS and the
particle-number-projected BCS approaches for the seniority
interaction \cite{[Civ92]}.
A comparison of the results obtained for different isotopes
conspicuously shows a significant increase in the pair transfer
form factors in the outer regions of drip-line nuclei. In
$^{120}$Sn, the form factors vanish around 9\,fm, while in
$^{150}$Sn and $^{172}$Sn they extend to much larger distances.
This effect is particularly pronounced for the microscopic pair
transfer form factors.
\subsection{Other observables}
\label{sec4e}
The importance of the HFB treatment for calculations of nuclear
radii has been discussed in several papers
\cite{[Dob84],[Sta91],[Fay94],[Dob95a]}. As mentioned in
Sec.~\ref{forces}, odd-even staggering of rms charge radii is
one of the best experimental indicators of the density-dependent
pairing. The proper treatment of the pairing effect on radii is
especially important for weakly bound systems which exhibit halo
or skin effects \cite{[Ber91],[Sta91],[Dob95a]} (cf. discussion
in Sec.~\ref{sec3e}).
Apart from the information on the nuclear rms radii, one may
also gain some experimental insight into the ratios of neutron
and proton densities at large distances from the center of
nucleus \cite{[Lub94],[Wyc95]}. This is possible due to
experiments on antiproton annihilation from atomic orbits, which
leads to different reaction products depending on whether the
process involves a proton or a neutron.
The role of deformation in neutron drip-line nuclei still needs
to be investigated. One can anticipate that due to: (i) very
diffused surfaces, and (ii) strong pairing correlations, the
geometric concept of collective deformation (defined as a
deviation of nuclear surface from sphericity) should be
revisited. In this context, the symmetry-unrestricted HFB
calculations in coordinate space are called for.
\section{Summary and Conclusions}
\label{sec6}
The advent of radioactive nuclear beams provides many exciting
opportunities to create and study unstable nuclei far from the
$\beta$ stability valley. One of the unexplored areas far from
stability is physics of nuclear pairing in weakly bound nuclei,
especially near the neutron drip line. Contrary to the situation
characteristic of stable nuclei, the coupling between the p-h
field and the p-p field in nuclei with extreme $N/Z$ ratios is
dramatic; i.e., no longer can pairing be treated as a residual
interaction.
The main objective of this study was to perform a detailed
analysis of various facets of pairing fields in atomic nuclei.
The first part contains a comprehensive summary of the HFB
formalism, with particular attention to the physical
interpretation of the underlying densities and fields. Very
little is known about the p-p component of the nuclear effective
interaction; its structure is of considerable importance not
only for nuclear physics but also for nuclear astrophysics and
cosmology. Therefore, the second part of this work focuses on
the differences between various pairing interactions. In
particular, the role of density dependence and finite range of
the p-p force has been illuminated, and the importance of the
coupling to the particle continuum has been emphasized.
Finally, the third part of our study relates the theoretical
formalism to experimental observables; i.e., energy spectra,
masses, radii, and pair transfer form factors. It is
demonstrated that these observables carry invaluable information
that can pin down many basic questions regarding the effective
$NN$ force, and its pairing component in particular. It should
be stressed, however, that in order to see clearly some of the
predicted effects, an excursion far from the valley of
$\beta$-stability is necessary.
The analysis presented in this paper should be viewed as a
useful starting point for future investigations. One of them is
the coupling between collective surface modes (e.g.,
deformation) and pairing fields in weakly bound nuclei. Another
interesting avenue of exploration is the role of dynamics;
e.g., the importance of the particle number conservation and the
coupling to pair vibrations. A fascinating and difficult
research program is the microscopic description of excited
states, especially those lying above the particle emission
threshold, for which the boundary conditions used in this study
(an impenetrable box) have to be modified to account explicitly
for outgoing waves. We are only beginning to explore many
unusual aspects of the nuclear many-body problem offered by
systems with extreme $N/Z$ ratios.
\acknowledgments
Interesting discussions with H. Flocard, P.-H. Heenen, and H. Lenske
are gratefully acknowledged.
Oak Ridge National Laboratory is managed for the U.S. Department
of Energy by Lockheed Martin Energy Systems under Contract No.
DE-AC05-84OR21400. The Joint Institute for Heavy Ion Research
has as member institutions the University of Tennessee,
Vanderbilt University, and the Oak Ridge National Laboratory; it
is supported by the members and by the Department of Energy
through Contract No. DE-FG05-87ER40361 with the University of
Tennessee. We thank the Department of Energy's Institute for
Nuclear Theory at the University of Washington for its
hospitality and partial support during the completion of this
work. This research was supported in part by the U.S.
Department of Energy through Contract No. DE-FG05-93ER40770 and
the Polish Committee for Scientific Research under Contract
No.~2~P03B~034~08.
\section{Introduction}
\subsection {Motivation}
In recent years, relay-assisted transmission has gained significant
attention as a powerful technique to enhance the performance of
wireless networks, combat the fading effect, extend the coverage,
and reduce the amount of interference due to frequency reuse. The
main idea is to deploy some extra nodes in the network to facilitate
the communication between the end terminals. In this manner, these
supplementary nodes act as spatially distributed antennas for the
end terminals. More recently, cooperative diversity techniques have
been proposed as candidates to exploit the spatial diversity offered
by the relay networks (for example, see \cite{laneman, azarian,
yuksel, khisti}). A fundamental measure to evaluate the performance
of the existing cooperative diversity schemes is the
diversity-multiplexing tradeoff (DMT) which was first introduced by
Zheng and Tse in the context of point-to-point MIMO fading channels
\cite{zheng_tse}. Roughly speaking, the diversity-multiplexing
tradeoff identifies the optimal compromise between the ``transmission
reliability" and the ``data rate" in the high-SNR regime.
In spite of all the interest in relay networks, none of the existing
cooperative diversity schemes is proved to achieve the optimum DMT.
The problem has been open even for the simple case of half-duplex
single-relay single-source single-destination single-antenna setup.
Indeed, the only existing DMT achieving scheme for the single-relay
channel reported in \cite{yuksel} requires knowledge of CSI (channel
state information) for all the channels at the relay node.
\subsection{Related Works}
The DMT of relay networks was first studied by Laneman {\em et al.}
in \cite{laneman} for half-duplex relays. In this work, the authors
prove that the DMT of a network with single-antenna nodes, composed
of a single source and a single destination assisted with $K$
half-duplex relays, is upper-bounded by\footnote{Throughout the
paper, for any real value $a$, $a^+\equiv\max\left\{0, a \right\}$.}
\begin{equation}
d(r) = (K+1) (1-r)^{+}. \label{eq:ub}
\end{equation}
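As a quick numerical illustration of this bound, the following minimal Python sketch (the function name is ours, not from the paper) evaluates $d(r)=(K+1)(1-r)^{+}$ for a few multiplexing gains:

```python
def dmt_upper_bound(r, K):
    """Cut-set DMT upper bound d(r) = (K+1)(1-r)^+ for a network
    with K half-duplex single-antenna relays."""
    return (K + 1) * max(0.0, 1.0 - r)

# K = 2 relays: maximum diversity K+1 = 3 at r = 0, zero for r >= 1.
print(dmt_upper_bound(0.0, 2))   # 3.0
print(dmt_upper_bound(0.5, 2))   # 1.5
print(dmt_upper_bound(1.25, 2))  # 0.0
```

Note the linear decay from the maximum diversity gain $K+1$ at $r=0$ down to zero at $r=1$.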
This result can be established by applying either the
multiple-access or the broadcast cut-set bound \cite{cover_book} on
the achievable rate of the system. In spite of its simplicity, this
is still the tightest upper-bound on the DMT of the relay networks.
The authors in \cite{laneman} also suggest two protocols based on
decode-and-forward (DF) and amplify-and-forward (AF) strategies for
a single-relay system with single-antenna nodes. In both protocols,
the relay listens to the source during the first half of the frame,
and transmits during the second half. To improve the spectral
efficiency, the authors propose an incremental relaying protocol in
which the receiver sends a single bit feedback to the transmitter
and to the relay to clarify if it has decoded the transmitter's
message or needs help from the relay for this purpose. However, none
of the proposed schemes are able to achieve the DMT upper-bound.
The non-orthogonal amplify-and-forward (NAF) scheme, first proposed
by Nabar {\em et al.} in \cite{nabar_bolcskei}, has been further
studied by Azarian {\em et al.} in \cite{azarian}. In addition to
analyzing the DMT of the NAF scheme, reference \cite{azarian} shows
that NAF is the best in the class of AF strategies for
single-antenna single-relay systems. The dynamic decode-and-forward
(DDF) scheme has been proposed independently in
\cite{azarian,mitran_tarokh,katz_shamai} based on the DF strategy.
In DDF, the relay node listens to the sender until it can decode the
message, and then re-encodes and forwards it to the receiver in the remaining time.
Reference \cite{azarian} analyzes the DMT of the DDF scheme and
shows that it is optimal for low rates in the sense that it achieves
(\ref{eq:ub}) for the multiplexing gains satisfying $r \leq 0.5$.
However, for higher rates, the relay should listen to the
transmitter for most of the time, reducing the spectral efficiency.
Hence, the scheme is unable to follow the upper-bound for high
multiplexing gains. More importantly, the generalizations of NAF and
DDF for multiple-relay systems fall far from the upper-bound,
especially for high multiplexing gains.
Yuksel {\em et al.} in \cite{yuksel} apply compress-and-forward (CF)
strategy and show that CF achieves the DMT upper-bound for multiple-antenna
half-duplex single-relay systems. However, in their proposed scheme,
the relay node needs to know the CSI of all the channels in the
network which may not be practical.
Most recently, Yang {\em et al.} in \cite{yang_belfiore2} propose a
class of AF relaying scheme called slotted amplify-and-forward (SAF)
for the case of half-duplex multiple-relay ($K>1$) and single
source/destination setup. In SAF, the transmission frame is divided
into $M$ equal length slots. In each slot, each relay transmits a
linear combination of the previous slots. Reference
\cite{yang_belfiore2} presents an upper-bound on the DMT of SAF and
shows that it is impossible to achieve the MISO upper-bound for
finite values of $M$, even with the assumption of full-duplex
relaying. However, as $M$ goes to infinity, the upper-bound meets
the MISO upper-bound. Motivated by this upper-bound, the authors in
\cite{yang_belfiore2} propose a half-duplex sequential SAF scheme.
In the sequential SAF scheme, following the first slot, in each
subsequent slot, one and only one of the relays is permitted to
transmit an amplified version of the signal it has received in the
previous slot. By doing this, the different parts of the signal are
transmitted through different paths by different relays, resulting
in some form of spatial diversity. However, \cite{yang_belfiore2}
could only show that the sequential SAF achieves the MISO
upper-bound for the setup of non-interfering relays, i.e. when the
consecutive relays (ordered by transmission times) do not cause any
interference on one another.
Apart from investigating the optimum diversity-multiplexing tradeoff for relay networks, other aspects of relay
networks have also recently been studied (for example, see \cite{kramer,
xie, madsen, gastpar2, avesti_outage, avesti_wireless_deterministic,
gupta_kumar, xie_kumar, gastpar, nabar, shahab_parallel, nabar3,
bolcskei}). References \cite{kramer, xie} develop new coding schemes
based on Decode-and-Forward and Compress-and-Forward relaying
strategies for relay networks. Avestimehr {\em et al.} in
\cite{avesti_outage} study the outage capacity of the relay channel in the
low-SNR regime and show that in this regime, the bursty
Amplify-and-Forward relaying protocol achieves the optimum outage.
Avestimehr {\em et al.} in \cite{avesti_wireless_deterministic}
present a linear deterministic model for the wireless relay network
and characterize its exact capacity. Applying the capacity-achieving
scheme of the corresponding deterministic model, the authors in
\cite{avesti_wireless_deterministic} show that the capacity of
wireless single-relay channel and the diamond relay channel can be
characterized within 1 bit and 2 bits, respectively, regardless of
the values of the channel gains. The scaling law capacity of large
wireless networks is addressed in \cite{gupta_kumar, xie_kumar,
gastpar, nabar, shahab_parallel, nabar3, bolcskei}. Gastpar {\em et
al.} in \cite{gastpar} prove that employing AF relaying achieves the
capacity of the Gaussian parallel single-antenna relay network for an
asymptotically large number of relays. Bolcskei {\em et al.} in
\cite{nabar} extend the work of \cite{gastpar} to the parallel
multiple-antenna relay network and characterize the capacity of the
network within $O(1)$ for a large number of relays. Oveis Gharan {\em
et al.} in \cite{shahab_parallel} propose a new AF relaying scheme
for parallel multiple-antenna fading relay networks. Applying the
proposed AF scheme, the authors in \cite{shahab_parallel}
characterize the capacity of parallel multiple-antenna relay
networks for the scenario where either the number of relays is large
or the power of each relay tends to infinity.
Recently, in a parallel and independent work by Kumar \textit{et al.}\cite{vkumar}\footnote{After the completion of this work, the authors became aware of \cite{vkumar}.}, the possibility of achieving the optimum DMT is shown in single-antenna half-duplex relay networks with certain graph topologies, including KPP, KPP(I), and KPP(D) graphs for $K \geq 3$. A KPP graph is a directed graph consisting of $K$ vertex-disjoint paths, each of length greater than one, connecting the transmitter to the receiver. A KPP(I) graph is a KPP graph with possible additional edges between different paths, and a KPP(D) graph is a KPP graph with an additional direct path connecting the transmitter to the receiver. It is worth mentioning that in all the above graph topologies, the DMT upper-bound is achieved by a cut-set of the MISO or SIMO form, i.e. all edges crossing the cut originate from or are destined to the same vertex. The authors also show that the maximum diversity gain can be achieved in a general multiple-antenna multiple-relay network.
\subsection{Contributions}
In this paper, we propose a new scheme, which we call random
sequential (RS), based on SAF relaying, for general
multiple-antenna multi-hop networks. The key elements of the
proposed scheme are: 1) signal transmission through sequential paths
in the network, 2) path timing such that no non-causal interference
is caused from the transmitter of the future paths on the receiver
of the current path, 3) multiplication by a random unitary matrix at
each relay node, and 4) no signal boosting in amplify-and-forward
relaying at the relay nodes, i.e. the received signal is amplified by a coefficient with absolute value at most 1.
Furthermore, each relay node knows the CSI of its corresponding
backward channel, and the receiver knows the equivalent end-to-end
channel. We prove that this scheme achieves the maximum diversity
gain in a general multiple-antenna multiple-relay network (no
restriction imposed on the set of interfering node pairs).
Furthermore, we derive the DMT of the RS scheme for general
single-antenna multiple-relay networks. Specifically, we derive: 1)
the exact DMT of the RS scheme under the condition of
``non-interfering relaying'', and 2) a lower-bound on the DMT of the
RS scheme (no conditions imposed). Finally, we prove that for
single-antenna multiple-access multiple-relay networks (with $K>1$
relays) when there is no direct link between the transmitters and
the receiver and all the relays are connected to the transmitter and
to the receiver, the RS scheme achieves the optimum DMT. However,
for two-hop multiple-access single-relay networks, we show that the proposed
scheme is unable to achieve the optimum DMT, while the DDF scheme is shown to be
optimum in this scenario.
It is worth mentioning that the optimality results in this paper can easily be applied to the case of the KPP and KPP(D) graphs introduced in \cite{vkumar}. However, the proof approach we use in this paper is entirely different from that used in \cite{vkumar}: our proofs are based on matrix inequalities, while the proofs of \cite{vkumar} are based on information-theoretic inequalities. Furthermore, \cite{vkumar} shows the achievability of the maximum diversity gain in a general multiple-antenna multiple-relay network by treating a multiple-antenna node as multiple single-antenna nodes and using just one antenna at a time, while our proof shows that the proposed RS scheme can achieve the maximum diversity in the MIMO setting, using all the antennas simultaneously.
Finally, the achievability of the linear DMT between the points $(0,d_{\max})$ and $(1,0)$ in single-antenna layered networks and directed acyclic graph networks with full-duplex relays is shown independently in remarks to Theorems 1 and 4 of our paper, respectively.
The rest of the paper is organized as follows. In section II, the
system model is introduced. In section III, the proposed random
sequential scheme (RS) is described. Section IV is dedicated to the
DMT analysis of the proposed RS scheme. Section V proves the
optimality of the RS scheme in terms of diversity gain in general
multiple-antenna multiple-relay networks. Finally, section VI
concludes the paper.
\subsection{Notations}
Throughout the paper, the superscripts $^T$ and $^H$ stand for
matrix operations of transposition and conjugate transposition, respectively. Capital bold letters
represent matrices, while lowercase bold letters and regular letters
represent vectors and scalars, respectively. $\|\mathbf{v}\|$
denotes the norm of vector $\mathbf{v}$ while $\|\mathbf{A}\|$
represents the Frobenius norm of matrix $\mathbf{A}$.
$|\mathbf{A}|$ denotes the determinant of matrix $\mathbf{A}$. $\log (.)$ denotes the base-2 logarithm.
The notation $\mathbf{A}\preccurlyeq\mathbf{B}$ is
equivalent to $\mathbf{B}-\mathbf{A}$ is a positive semi-definite
matrix. Motivated by the definition in \cite{zheng_tse}, we define the notation $f(P) \doteq g(P)$ as $\lim_{P \to \infty} \frac{f(P)}{\log (P)} = \lim_{P \to \infty} \frac{g(P)}{\log (P)}$. Similarly, $f(P) \dot \leq g(P)$ and $f(P) \dot \geq g(P)$ are equivalent to $\lim_{P \to \infty} \frac{f(P)}{\log (P)} \leq \lim_{P \to \infty} \frac{g(P)}{\log (P)}$ and $\lim_{P \to \infty} \frac{f(P)}{\log (P)} \geq \lim_{P \to \infty} \frac{g(P)}{\log (P)}$, respectively. Finally, we use $A \approx B$ to denote approximate equality between $A$ and $B$, such that substituting $A$ by $B$ does not compromise the validity of the equations.
\section{System Model}
Our setup consists of $K$ relays assisting the transmitter and the
receiver in the half-duplex mode, i.e. at a given time, the relays
can either transmit or receive. Any two nodes are assumed either (i) to be
connected by a quasi-static flat Rayleigh-fading channel,
i.e. the channel gains remain constant during a block of
transmission and change independently from block to block; or (ii) to be
disconnected, i.e. there is no direct link between them. Hence, the
undirected graph $G=(V,E)$ is used to show the connected pairs in
the network\footnote{Note, however, that in Remarks 2 and 6, a directed graph is considered.}. The node set is denoted by $V=\left\{0,1,\dots,{K+1}
\right\}$ where the $i$'th node is equipped with $N_i$ antennas.
Nodes $0$ and ${K+1}$ correspond to the transmitter and the receiver
nodes, respectively\footnote{Throughout the paper, it is assumed
that the network consists of one transmitter. However, in Theorems 5
and 6, we study the two-hop multiple-transmitter single-receiver
scenario.}. The received and the transmitted vectors at the
$k$'th node are shown by $\mathbf y_k$ and $\mathbf x_k$,
respectively. Hence, at the receiver side of the $a$'th node, we
have
\begin{equation}
\mathbf y_a = \sum_{\left\{a, b\right\} \in E} {\mathbf H_{a,b} \mathbf x_b} + \mathbf n_a,
\end{equation}
where $\mathbf H_{a,b}$ shows the $N_a \times N_b$
Rayleigh-distributed channel matrix between the $a$'th and the
$b$'th nodes and $\mathbf n_a \sim \mathcal N \left(\mathbf
0,\mathbf I_{N_a} \right)$ is the additive white Gaussian noise. We
assume reciprocal channels between each two nodes. Hence, $\mathbf
H_{a,b}=\mathbf H_{b,a}^T$. However, it can be easily verified that
all the statements of the paper are valid under the
non-reciprocity assumption. In the scenario of
single-antenna networks, the channel between nodes $a$ and $b$ is
denoted by $h_{\left\{a,b\right\}}$ to emphasize both the SISO and
the reciprocity assumptions. As in \cite{azarian} and
\cite{yang_belfiore2}, each relay is assumed to know the state of
its backward channel, and moreover, the receiver knows the
equivalent end-to-end channel. Hence, unlike the CF scheme in
\cite{yuksel}, no CSI feedback is needed. All nodes have the same
power constraint, $P$. Finally, we assume that the topology of the
network is known by the nodes such that they can perform a
distributed AF strategy throughout the network.
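For concreteness, the single-antenna instance of the received-signal model above can be simulated as follows. This is an illustrative sketch with hypothetical helper names; channels are keyed by unordered node pairs to reflect the reciprocity assumption $h_{\{a,b\}} = h_{\{b,a\}}$:

```python
import math
import random

random.seed(1)

def cn(var=1.0):
    """Sample a circularly-symmetric complex Gaussian CN(0, var);
    its magnitude is Rayleigh-distributed."""
    s = math.sqrt(var / 2.0)
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

def receive(a, E, h, x, noise=None):
    """Single-antenna received signal  y_a = sum_{b:{a,b} in E} h_{a,b} x_b + n_a.
    E: set of frozenset edges; h: dict edge -> complex gain (reciprocal);
    x: dict node -> transmitted symbol.  noise=None draws n_a ~ CN(0,1)."""
    y = cn(1.0) if noise is None else noise
    for edge in E:
        if a in edge:
            (b,) = edge - {a}   # the neighbor on this edge
            if b in x:
                y += h[edge] * x[b]
    return y
```

A node's received sample is thus the superposition of the faded signals of all its neighbors that transmit in the same slot, plus unit-variance noise.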
Throughout the section on diversity-multiplexing tradeoff, we make
some further assumptions in order to prove our statements. First, we
consider the scenario in which nodes with a single antenna are used.
Moreover, in Theorems 2, 3, 5, and 6, where we address DMT optimality
of the RS scheme, we assume that there is no direct link between
the transmitter(s) and the receiver. This assumption is reasonable
when the transmitter and the receiver are far from each other and
the relay nodes establish the connection between the end nodes.
Moreover, we assume that all the relay nodes are connected to the
transmitter and to the receiver through quasi-static flat
Rayleigh-fading channels. Hence, the network graph is two-hop.
Specifically, we denote the output vector at the transmitter as $\mathbf
x$, the input vector and the output vector at the $k$'th relay as
$\mathbf r_k$ and $\mathbf t_k$, respectively, and the input at the
receiver as $\mathbf y$.
\section{Proposed Random Sequential (RS) Amplify-and-Forwarding Scheme}
In the proposed RS scheme, a sequence $\mathrm P \equiv
\left(\mathrm p_1, \mathrm p_2, \dots, \mathrm p_L \right)$ of $L$
paths\footnote{Throughout the paper, a path $\mathrm p$ is defined
as a sequence of the graph nodes $(v_0, v_1, v_2, \dots, v_l)$ such
that for any $i$, $\left\{v_i, v_{i+1} \right\} \in E$, and for all
$i \neq j$, we have $v_i \neq v_j$. The length of the path is
defined as the total number of edges on the path, $l$. Furthermore,
$\mathrm p(i)$ denotes the $i$'th node that $\mathrm p$ visits, i.e.
$\mathrm p(i) = v_i$.} originating from the transmitter and
terminating at the receiver, with lengths $\left(l_1, l_2, \dots,
l_L \right)$, is involved in connecting the transmitter to the
receiver sequentially ($\mathrm p_i(0)=0, \mathrm p_i(l_i)=K+1$).
Note that any path $\mathrm p$ of $G$ can be selected multiple times
in the sequence.
Furthermore, the entire block of transmission is divided into $S$
slots, each consisting of $T'$ symbols. Hence, the entire block
consists of $T=ST'$ symbols. Let us assume the transmitter intends
to send information to the receiver at a rate of $r$ bits per
symbol. To transmit a message $w$, the transmitter selects the
corresponding codeword from a Gaussian random code-book consisting
of $2^{ST'r}$ elements, each of length $LT'$. Starting from the first
slot, the transmitter sequentially transmits the $i$'th portion ($1
\leq i \leq L$) of the codeword through the sequence of relay nodes
in $\mathrm p_i$. More precisely, a timing sequence $\left\{s_{i,j}
\right\}_{i=1,j=1}^{L,l_i}$ is associated with the path sequence.
The transmitter sends the $i$'th portion of the codeword in the
$s_{i,1}$'th slot. Following the transmission of the $i$'th portion of
the codeword by the transmitter, in the $s_{i,j}$'th slot, $1 \leq j
\leq l_i$, the node $\mathrm p_i(j)$ receives the transmitted signal
from the node $\mathrm p_i(j-1)$. Assuming $\mathrm p_i(j)$ is not
the receiver node, i.e. $j<l_i$, it multiplies the received signal in the
$s_{i,j}$'th slot by a
$N_{\mathrm p_i(j)} \times N_{\mathrm p_i(j)}$ random, uniformly
distributed unitary matrix $\mathbf U_{i,j}$ which is known at the
receiver side, amplifies the signal by the maximum possible
coefficient $\alpha_{i,j}$ considering the output power constraint
$P$ and $\alpha_{i,j} \leq 1$, and transmits the amplified signal in
the $s_{i,j+1}$'th slot. Furthermore, the timing sequence $\left\{s_{i,j}
\right\}$ should have the following properties
\begin{enumerate}
\item [(1)] for all $i,j$, we have $1 \leq s_{i,j} \leq S$.
\item [(2)] for $i < i'$, we have $s_{i,1} < s_{i',1}$ (the ordering assumption on the paths)
\item [(3)] for $j < j'$, we have $s_{i,j} < s_{i,j'}$ (the causality assumption)
\item [(4)] for all $i < i'$ and $s_{i,j} = s_{i',j'}$, we have
$\left\{ \mathrm p_i(j), \mathrm p_{i'}(j'-1) \right\} \notin E$
(no noncausal interference assumption). This assumption ensures that the signal of the
future paths causes no interference on the output signal of the
current path. This assumption can be realized by designing the
timing of the paths such that in each time slot, the current running
paths are established through disjoint hops.
\end{enumerate}
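Properties (1)--(4) can be checked mechanically for a candidate timing sequence. The sketch below (function and variable names are ours) uses 0-based hop indices, so \texttt{s[i][k]} is the slot of the $k$'th hop of the $i$'th path, i.e. the paper's $s_{i,k+1}$:

```python
def valid_timing(paths, s, S, E):
    """Check properties (1)-(4) of an RS timing sequence.
    paths[i] : node sequence of the i'th path (p_i(0), ..., p_i(l_i))
    s[i][k]  : slot of the k'th hop of path i (0-based; the paper's s_{i,k+1})
    S        : total number of slots
    E        : set of undirected edges, each a frozenset of two nodes"""
    L = len(paths)
    for i in range(L):
        for k, slot in enumerate(s[i]):
            if not 1 <= slot <= S:                    # property (1): slot range
                return False
            if k > 0 and slot <= s[i][k - 1]:         # property (3): causality
                return False
        if i > 0 and s[i][0] <= s[i - 1][0]:          # property (2): path ordering
            return False
    for i in range(L):                                # property (4): no noncausal
        for ip in range(i + 1, L):                    # interference
            for k, slot in enumerate(s[i]):
                for kp, slot_p in enumerate(s[ip]):
                    if slot == slot_p:
                        # receiver of the current hop vs transmitter of the
                        # future hop must not be connected
                        if frozenset({paths[i][k + 1], paths[ip][kp]}) in E:
                            return False
    return True
```

In property (4), the pair tested is the receiving node $\mathrm p_i(k+1)$ of the current path and the transmitting node $\mathrm p_{i'}(k')$ of the future path, matching the condition $\left\{ \mathrm p_i(j), \mathrm p_{i'}(j'-1) \right\} \notin E$.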
At the receiver side, having received the signal of all paths, the
receiver decodes the transmitted message $w$ based on the signal
received in the time slots $\left\{s_{i,l_i}\right\}_{i=1}^L$. As we
observe in the sequel, the fourth assumption on
$\left\{s_{i,j}\right\}$ converts the equivalent end-to-end channel
matrix to lower-triangular in the case of single-antenna nodes, or
to block lower-triangular in the case of multiple-antenna nodes.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{example.eps}
\caption{An example of a 3 hops network where $N_0=N_5=2, N_1=N_2=N_3=N_4=1$.}
\label{fig:example}
\end{figure}
\begin{table}[b]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
time-slot & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline \hline $\mathrm P_1(1)$ & $0 \to 1 $ & $1 \to 3 $ & $3 \to 5 $ & --- & --- & --- & --- \\
\hline $\mathrm P_1(2)$ & --- & $0 \to 2$ & $2 \to 4$ & $4 \to 5$ & --- & --- & --- \\
\hline $\mathrm P_1(3)$ & --- & --- & --- & $0 \to 1$ & $1 \to 4$ & $4 \to 5$ & --- \\
\hline $\mathrm P_1(4)$ & --- & --- & --- & --- & $0 \to 2$ & $2 \to 3$ & $3 \to 5$ \\
\hline
\end{tabular}
\caption{One possible valid timing for RS scheme with the path sequence
$\mathrm P_1= \left( \mathrm p_1, \mathrm p_2, \mathrm p_3, \mathrm p_4 \right)$.}
\label{tbl:1}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
time-slot & 1 & 2 & 3 & 4 & 5 & 6 \\
\hline \hline $\mathrm P_2(1)$ & $0 \to 1 $ & $1 \to 3 $ & $3 \to 5 $ & --- & --- & --- \\
\hline $\mathrm P_2(2)$ & --- & $0 \to 2$ & $2 \to 4$ & $4 \to 5$ & --- & --- \\
\hline $\mathrm P_2(3)$ & --- & --- & $ 0 \to 1 $ & $1 \to 3 $ & $3 \to 5 $ & ---\\
\hline $\mathrm P_2(4)$ & --- & --- & --- & $0 \to 2$ & $2 \to 4$ & $4 \to 5$ \\
\hline
\end{tabular}
\caption{One possible valid timing for RS scheme with the path sequence
$\mathrm P_2= \left( \mathrm p_1, \mathrm p_2, \mathrm p_1, \mathrm p_2 \right)$.}
\label{tbl:2}
\end{center}
\end{table}
An example of a three-hop network consisting of $K=4$ relays is
shown in figure (\ref{fig:example}). It can easily be verified that
there are exactly 12 paths in the graph connecting the transmitter
to the receiver. Now, consider the four paths $\mathrm
p_1=(0,1,3,5)$, $\mathrm p_2=(0,2,4,5)$, $\mathrm p_3=(0,1,4,5)$ and
$\mathrm p_4=(0,2,3,5)$ connecting the transmitter to the receiver.
Assume the RS scheme is performed with the path sequence $\mathrm
P_1 \equiv (\mathrm p_1, \mathrm p_2, \mathrm p_3, \mathrm p_4)$.
Table \ref{tbl:1} shows one possible valid timing sequence
associated with RS scheme with the path sequence $\mathrm P_1$. As
seen, the first portion of the transmitter's codeword is sent in the
$1$st time slot and is received by the receiver through the nodes of
the path $\mathrm P_1(1)\equiv(0, 1, 3, 5)$ as follows: In the $1$st
slot, the transmitter's signal is received by node $1$. Following
that, in the $2$nd slot, node $1$ sends the amplified signal to node
$3$, and finally, in the $3$rd slot, the receiver receives the
signal from node $3$. As observed, for every $1 \leq i \leq 3$, the signal
of the $i$'th path interferes with the output signal of the $(i+1)$'th
path. However, no interference is caused by the signal of future
paths on the outputs of the current path. The timing sequence
corresponding to Table \ref{tbl:1} can be expressed as
$s_{i,j}=i+\lfloor \frac{i}{3}\rfloor+j-1$ and it results in the
total number of transmission slots to be equal to $7$, i.e. $S=7$.
As another example, consider the RS scheme with the path sequence
$\mathrm P_2 \equiv (\mathrm p_1, \mathrm p_2, \mathrm p_1, \mathrm
p_2)$. Table \ref{tbl:2} shows one possible valid timing-sequence
for the RS scheme with the path sequence $\mathrm P_2$. Here, we
observe that the signal of every path interferes with the outputs of
the next two consecutive paths. However, as in the scenario with
$\mathrm P_1$, no interference is caused by the signal of future
paths on the output signal of the current path. The timing sequence
corresponding to Table \ref{tbl:2} can be expressed as
$s_{i,j}=i+j-1$ and it results in the total number of transmission
slots equal to $6$, i.e. $S=6$.
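Both closed-form timings can be reproduced with a few lines of Python (an illustrative sketch; the variable names are ours). Each entry lists the slots of the three hops of path $i$:

```python
# Closed-form timings for the example network (all four paths have length 3):
#   P1: s_{i,j} = i + floor(i/3) + j - 1   ->  last slot used is S = 7
#   P2: s_{i,j} = i + j - 1                ->  last slot used is S = 6
timing_P1 = {i: [i + i // 3 + j - 1 for j in (1, 2, 3)] for i in (1, 2, 3, 4)}
timing_P2 = {i: [i + j - 1 for j in (1, 2, 3)] for i in (1, 2, 3, 4)}

print(timing_P1)  # {1: [1, 2, 3], 2: [2, 3, 4], 3: [4, 5, 6], 4: [5, 6, 7]}
print(timing_P2)  # {1: [1, 2, 3], 2: [2, 3, 4], 3: [3, 4, 5], 4: [4, 5, 6]}
```

The printed slot lists agree, hop by hop, with Tables \ref{tbl:1} and \ref{tbl:2}.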
It is worth noting that to achieve higher spectral efficiencies
(corresponding to larger multiplexing gains), it is desirable to
have larger values for $\frac{L}{S}$. Indeed, $\frac{L}{S} \to 1$ is
the highest possible value. However, this cannot be achieved in
some graphs (an example is the two-hop single-relay scenario
studied in the next section, where $\frac{L}{S}=0.5$). On the other
hand, to achieve higher reliability (corresponding to larger
diversity gains between the end nodes), it is desirable to utilize
more paths of the graph in the path sequence. It is not always
possible to satisfy both of these objectives simultaneously. As an
example, consider the single-antenna two-hop relay network where
there is a direct link between the end nodes, i.e. $G$ is the
complete graph. Here, all the nodes of the graph interfere with each
other, and consequently, in each time slot only one path can transmit. Hence,
in order to achieve $\frac{L}{S} \to 1$, only the direct path $(0,
K+1)$ should be utilized for almost all the time.
As another example, consider the 3-hop network in figure
(\ref{fig:example}). As we will see in the following sections, the
RS scheme corresponding to the path sequence $\mathrm P_1$ achieves
the maximum diversity gain of the network, $d=4$. However, it can
easily be verified that no valid timing sequence can achieve a smaller
number of transmission slots than the one shown in Table
\ref{tbl:1}. Hence, $\frac{L}{S}=\frac{4}{7}$ is the best the RS scheme
can achieve with $\mathrm P_1$. On the other hand, consider the RS
scheme with the path sequence $\mathrm P_2$. Although, as seen in
the sequel, the scheme achieves the diversity gain $d=2$, which is
below the maximum diversity gain of the network, it utilizes fewer
slots compared to the case using the path sequence
$\mathrm P_1$. Indeed, it achieves $\frac{L}{S}=\frac{4}{6}$.
In the two-hop scenario investigated in the next section, we will
see that for asymptotically large values of $L$, it is possible to
utilize all the paths needed to achieve the maximum diversity gain
and, at the same time, devise the timing sequence such that
$\frac{L}{S} \to 1$. Consequently, it will be shown that in this
setup, the proposed RS scheme achieves the optimum DMT.
\section{Diversity-Multiplexing Tradeoff}
In this section, we analyze the performance of the RS scheme in
terms of the DMT for the single-antenna multiple-relay networks.
First, in subsection \textit{A}, we study the performance of the RS
scheme for the case of non-interfering relays where there exists
neither causal nor noncausal interference between the signals sent
through different paths. In this case, as there exists no
interference between different paths, the amplification
coefficients may take values greater than one, i.e. the
constraint $\alpha_{i,j} \leq 1$ can be omitted. Under the condition
of non-interfering relays, we derive the exact DMT of the RS scheme.
As a result, we show that the RS scheme achieves the optimum DMT for
the setup of non-interfering two-hop multiple-relay ($K>1$)
single-transmitter single-receiver, where there exists no direct
link between the relay nodes and between the transmitter and the
receiver (more precisely, $E=\left\{\left\{ 0, k \right\},\left\{ k,
K+1 \right\} \right\}_{k=1}^K$). To prove this, we assume that the
RS scheme relies on $L=BK$ paths and $S=BK+1$ slots, where $B$ is an integer, and the path
sequence is $\mathrm Q \equiv \left(\mathrm q_1, \dots, \mathrm q_K,
\mathrm q_1, \dots, \mathrm q_K, \dots, \mathrm q_1, \dots,
\mathrm q_K\right)$ where $\mathrm q_k\equiv(0, k, K+1)$. In other
words, every path $\mathrm q_k$ is used $B$ times in the sequence.
Here, each $K$ consecutive slots are called a sub-block. Hence, the
entire block of transmission consists of $B+1$ sub-blocks. The
timing sequence is defined as $s_{i,j}=i+j-1$. It is easy to verify
that the timing sequence satisfies the requirements. Here, we
observe that the spectral efficiency is $\frac{L}{S}=1-\frac{1}{S}$
which converges to 1 for asymptotically large values of $S$. By
deriving the exact DMT of the RS scheme, we prove that the RS scheme
achieves the optimum DMT for asymptotically large values of $S$.
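The spectral-efficiency claim $\frac{L}{S}=1-\frac{1}{S}$ for this two-hop schedule can be checked directly (a sketch; the function name is ours):

```python
def two_hop_schedule(B, K):
    """Slots used by the two-hop RS schedule with L = B*K length-2 paths
    (q_1, ..., q_K repeated B times) and timing s_{i,j} = i + j - 1."""
    L = B * K
    slots = {i: (i, i + 1) for i in range(1, L + 1)}  # (s_{i,1}, s_{i,2})
    S = max(max(pair) for pair in slots.values())
    return L, S

for B, K in [(1, 2), (3, 4), (10, 5)]:
    L, S = two_hop_schedule(B, K)
    # S = L + 1 = B*K + 1, so the spectral efficiency is L/S = 1 - 1/S.
    print(B, K, L, S, L / S)
```

As $B$ grows, $S = BK+1$ grows as well and $\frac{L}{S} \to 1$, which is the property exploited in the DMT analysis.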
In subsection \textit{B}, we study the performance of the RS scheme
for general single-antenna multiple-relay networks. First, we study
the performance of RS scheme for the setup of two-hop
single-transmitter single-receiver multiple-relay ($K > 1$) networks
where there exists no direct link between the transmitter and the
receiver; however, no additional restriction is imposed on the graph
of the interfering relay pairs. We apply the RS scheme with the same
parameters used in the case of two-hop non-interfering networks. We
derive a lower-bound on the DMT of the RS scheme. Interestingly, it
turns out that the derived lower-bound converges to the upper-bound on
the DMT for asymptotically large values of $B$. Next, we generalize our result
and derive a lower-bound on DMT of the RS scheme for general
single-antenna multiple-relay networks.
Finally, in subsection \textit{C}, we generalize our results for the
scenario of single-antenna two-hop multiple-access multiple-relay
($K>1$) networks where there exists no direct link between the
transmitters and the receiver. Here, we apply the RS scheme with the
same parameters as used in the case of single-transmitter
single-receiver two-hop relay networks. Note that here, instead of a
single transmitter sending data, all the transmitters send their data
coherently. By deriving a lower-bound on
the DMT of the RS scheme, we show that in this network the RS scheme
achieves the optimum DMT. However, as studied in subsection
\textit{D}, for the setup of single-antenna two-hop multiple-access
single-relay networks where there exists no direct link between the
transmitters and the receiver, the proposed RS scheme reduces to
naive amplify-and-forward relaying and is not optimum in terms of
the DMT. In this setup, we show that the DDF scheme achieves
the optimum DMT.
\subsection{Non-Interfering Relays}
In this subsection, we study the DMT behavior of the RS scheme in
general single-antenna multi-hop relay networks under the condition
that there exists neither causal nor noncausal interference between
the signals transmitted over different paths. More precisely, we
assume the timing sequence is designed such that if
$s_{i,j}=s_{i',j'}$, then we have $\left\{ \mathrm p_i(j), \mathrm
p_{i'}(j'-1) \right\} \notin E$. This assumption is stronger than
the fourth assumption on the timing sequence (here the condition $i
< i'$ is omitted). We call this the ``non-interfering relaying''
condition. Under this condition, as there exists no interference
between the signals transmitted over different paths, the amplification
coefficients can be allowed to take values greater than one, i.e., the
constraint $\alpha_{i,j} \leq 1$ can be omitted.
First, we need the following definition.
\begin{definition}
For a network with the connectivity graph $G=(V,E)$, a cut-set on
$G$ is defined as a subset $\mathcal S \subseteq V$ such that $0 \in
\mathcal S, K+1 \in \mathcal S^c$. The weight of the cut-set
corresponding to $\mathcal S$, denoted by $w_G(\mathcal S)$, is
defined as
\begin{eqnarray}
w_G(\mathcal S) & = & \sum_{a \in \mathcal S, b \in \mathcal S^c, \left\{a, b \right\} \in E}{N_a \times N_b}.
\end{eqnarray}
\end{definition}
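For small networks, the quantities in this definition can be evaluated by exhaustive enumeration. The following sketch is ours and purely illustrative (single-antenna nodes in the example); it computes $w_G(\mathcal S)$ over all cut-sets and returns $\min_{\mathcal S} w_G(\mathcal S)$:

```python
# Brute-force evaluation of min_S w_G(S) over all cut-sets S with
# 0 in S and K+1 in S^c (illustrative helper, not the paper's code).
from itertools import combinations

def min_cut_weight(K, E, N):
    """E: set of frozensets {a, b}; N: dict node -> antenna count."""
    relays = list(range(1, K + 1))
    nodes = set(range(K + 2))
    best = float("inf")
    for r in range(K + 1):
        for subset in combinations(relays, r):
            S = {0} | set(subset)          # 0 in S, K+1 in S^c by construction
            w = sum(N[a] * N[b] for a in S for b in nodes - S
                    if frozenset((a, b)) in E)
            best = min(best, w)
    return best

# Two-hop parallel network, K = 3 single-antenna relays:
K = 3
E = {frozenset((0, k)) for k in range(1, K + 1)} | \
    {frozenset((k, K + 1)) for k in range(1, K + 1)}
N = {v: 1 for v in range(K + 2)}
print(min_cut_weight(K, E, N))  # -> 3, i.e. K, matching d_opt(0) = K
```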
\begin{thm}
Consider a half-duplex single-antenna multiple-relay network with
the connectivity graph $G=(V,E)$. Assuming ``non-interfering
relaying'', the RS scheme with the path sequence $\left(\mathrm p_1, \mathrm p_2, \dots ,\mathrm p_L \right)$ achieves the diversity
gain corresponding to the following linear programming optimization problem
\begin{equation}
d_{RS, NI}(r) = \displaystyle \min_{\boldsymbol \mu \in \mathcal {\hat R}} \sum_{e \in E} \mu_e, \label{eq:t1_exact}
\end{equation}
where $\boldsymbol \mu$ is a vector defined on edges of $G$ and
$\mathcal {\hat R}$ is a region of $\boldsymbol \mu$ defined as
$$\mathcal {\hat R} \equiv \left\{ \boldsymbol \mu \left| \, \,
\mathbf 0 \leq \boldsymbol \mu \leq \mathbf 1, \displaystyle
\sum_{i=1}^{L} \max_{1 \leq j \leq l_i} \mu_{\left\{ \mathrm p_i(j),
\mathrm p_i(j-1) \right\}} \geq L - Sr \right\} \right..$$ Furthermore, the DMT
of the RS scheme can be upper-bounded as
\begin{equation}
d_{RS, NI}(r) \leq (1-r)^+\min_{\mathcal S} w_G( \mathcal S), \label{eq:t1_ub}
\end{equation}
where $\mathcal S$ is a cut-set on $G$. Finally, by properly
selecting the path sequence, one can always achieve
\begin{equation}
d_{RS, NI}(r) \geq \left(1-l_G r\right)^+ \min_{\mathcal S} w_G(\mathcal S), \label{eq:t1_lb}
\end{equation}
where $\mathcal S$ is a cut-set on $G$ and $l_G$ is the maximum path length between the transmitter and the receiver.
\end{thm}
\begin{proof}
Since the relay nodes are non-interfering, the achievable rate of
the RS scheme for a realization of the channels is equal to
\begin{IEEEeqnarray}{l}
R_{RS,NI}\left(\left\{ h_e\right\}_{e \in E}\right) = \nonumber \\
\frac{1}{S} \sum_{i=1}^{L} \log \left( 1 + P \prod_{j=1}^{l_i}\left|
\alpha_{i,j} \right|^2 \left| h_{\left\{\mathrm p_i(j), \mathrm p_i(j-1) \right\}}
\right|^2 \left(1 + \sum_{j=1}^{l_i-1} \prod_{k=j}^{l_i-1}{\left| \alpha_{i,k} \right|^2 \left|
h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right)^{-1} \right),
\end{IEEEeqnarray}
where $\forall j < l_i: \alpha_{i,j} = \sqrt{\frac {P}{1+\left|
h_{\left\{\mathrm p_i(j-1), \mathrm p_i(j)\right\}} \right|^2P} }$
and $\alpha_{i, l_i}=1$ (since $\mathrm p_i(l_i)=K+1$). In deriving the above equation, we have used the fact that as the paths are non-interfering, the achievable rate can be written as the sum of the rates over the paths, noting that the terms $P \prod_{j=1}^{l_i}\left|
\alpha_{i,j} \right|^2 \left| h_{\left\{\mathrm p_i(j), \mathrm p_i(j-1) \right\}}
\right|^2$ and $1 + \sum_{j=1}^{l_i-1} \prod_{k=j}^{l_i-1}{\left| \alpha_{i,k} \right|^2 \left|
h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2}$ represent the effective signal power and the noise power over the $i$th path, respectively.
Hence, the probability of outage equals
\begin{eqnarray} \label{pout}
\mathbb P\left\{ \mathcal E \right\} & = & \mathbb P \left\{ R_{RS,NI}\left(\left\{ h_e\right\}_{e \in E}\right) \leq r \log \left( P \right) \right\} \nonumber \\
& \stackrel{(a)}{\doteq} & \mathbb P \left\{ \prod_{i=1}^{L} \max \left\{ P^{-1} , \min \left\{ \left| h_{\left\{0, \mathrm p_i(1) \right\}} \right|^2 \prod_{k=1}^{j}{\left| \alpha_{i,k} \right|^2 \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right\}_{j=0}^{l_i-1} \right\} \leq P^{Sr - L} \right\} \nonumber \\
& \stackrel{(b)}{\doteq} & \displaystyle \max_{\begin{subarray}{c} t_1, t_2, \dots, t_L \\ 1 \leq t_i \leq l_i \end{subarray}
} \mathbb P \left\{ \prod_{i=1}^{L} \max \left\{P^{-1}, \left| h_{\left\{0, \mathrm p_i(1) \right\}} \right|^2 \prod_{k=1}^{t_i-1}{\left| \alpha_{i,k} \right|^2 \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right\} \leq P^{Sr - L} \right\} \nonumber \\
& \stackrel{(c)}{\doteq} & \displaystyle \max_{\begin{subarray}{c} \mathcal S_1, \mathcal S_2, \dots, \mathcal S_L \\ \mathcal S_i \subseteq \left\{1,2,\dots, l_i-1 \right\} \end{subarray}
} \displaystyle \max_{\begin{subarray}{c} t_1, t_2, \dots, t_L \\ \max \left\{x \in \mathcal S_i\right\} < t_i \leq l_i \end{subarray}
} \nonumber \\
&&\mathbb P \left\{ \prod_{i=1}^{L} \max \left\{P^{-1}, P^{\left| \mathcal S_i \right|}\left| h_{\left\{\mathrm p_i(t_i), \mathrm p_i(t_i-1) \right\}} \right|^2 \prod_{k \in \mathcal S_i}{ \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2} \right\} \leq P^{Sr - L} \right\}. \label{eq:t1_ni_1}
\end{eqnarray}
Here, $(a)$ follows from the facts that i)~$\forall x\geq 0: \max
\left\{ 1, x \right\} \leq 1+x \leq 2 \max \left\{ 1, x \right\}$, which implies that $1+P \Theta \approx \max (1, P \Theta)$, where $$ \Theta \triangleq \prod_{j=1}^{l_i}\left|
\alpha_{i,j} \right|^2 \left| h_{\left\{\mathrm p_i(j), \mathrm p_i(j-1) \right\}}
\right|^2 \left(1 + \sum_{j=1}^{l_i-1} \prod_{k=j}^{l_i-1}{\left| \alpha_{i,k} \right|^2 \left|
h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right)^{-1},$$
and ii)~for all $x_i \geq 0$, $\frac{1}{M} \min \left\{ \frac{1}{x_i}
\right\}_{i=1}^{M} \leq \left( \sum_{i=1}^{M} x_i\right)^{-1} \leq
\min \left\{ \frac{1}{x_i} \right\}_{i=1}^{M}$, which implies that $$ \left(1 + \sum_{j=1}^{l_i-1} \prod_{k=j}^{l_i-1}{\left| \alpha_{i,k} \right|^2 \left|
h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right)^{-1} \approx \min \left(1, \left\{ \left(\prod_{k=j}^{l_i-1}{\left| \alpha_{i,k} \right|^2 \left|
h_{\left\{\mathrm p_i(k), \mathrm p_i(k+1) \right\}} \right|^2} \right)^{-1}\right\}_{j=1}^{l_i-1}\right).$$ $(b)$ follows from
the fact that for any increasing function $f(\cdot)$, we have
$$ \displaystyle \max_{1 \leq i \leq M} \mathbb P \left\{ f\left( x_i
\right) \leq y \right\} \leq \mathbb P \left\{ f\left( \displaystyle
\min_{1 \leq i \leq M} x_i \right) \leq y \right\} \leq
\displaystyle M \max_{1 \leq i \leq M} \mathbb P \left\{ f\left( x_i
\right) \leq y \right\} .$$ $(c)$ follows from the fact that $$0.5
\min \left\{1, P \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1)
\right\}} \right|^2 \right\} \leq \left| \alpha_{i,k}
h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2
\leq \min \left\{1, P \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2 \right\} ,$$ which implies that $\left| \alpha_{i,k}
h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2
\doteq \min \left\{1, P \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2 \right\}$. In the last line of (\ref{pout}), $\mathcal{S}_i$ denotes the subset of $\{1,2, \cdots, t_i-1\}$ for which $ P \left| h_{\left\{\mathrm p_i(k), \mathrm p_i(k-1) \right\}} \right|^2 \leq 1$.
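The elementary two-sided bounds invoked in steps $(a)$ and $(c)$ can also be sanity-checked numerically. In the snippet below (ours, purely illustrative), $x$ plays the role of $P\left|h\right|^2$ and is sampled over several orders of magnitude:

```python
# Quick numeric check of the elementary bounds behind steps (a) and (c):
# max{1,x} <= 1+x <= 2 max{1,x}, and with x = P|h|^2,
# 0.5 min{1,x} <= x/(1+x) = |alpha h|^2 <= min{1,x}.
import random

random.seed(0)
for _ in range(10_000):
    x = random.expovariate(1.0) * 10 ** random.uniform(-3, 3)
    assert max(1, x) <= 1 + x <= 2 * max(1, x)
    assert 0.5 * min(1, x) <= x / (1 + x) <= min(1, x)
print("bounds hold on all sampled points")
```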
Assuming $\left| h_e
\right|^2=P^{-\mu_e}$, we define the region $\mathcal R \subseteq
\mathbb R^{\left| E \right|}$ as the set of points $\boldsymbol \mu
= [\mu_e]_{e \in E}$ that the outage event occurs. Let us define
$\mathcal R_+ = \mathcal R \cap \left( \mathbb R_+ \cup \left\{ 0
\right\}\right)^{\left| E \right|}$. As the probability density
function diminishes exponentially as $e^{-P^{\mu_e}}$ for positive
values of $\mu_e$, we have $\mathbb P \left\{ \mathcal R_+ \right\}
\doteq \mathbb P \left\{ \mathcal R \right\}$. Hence, we have
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} & \doteq & \mathbb P \left\{ \mathcal R_+ \right\} \nonumber \\
& \stackrel{(a)}{\doteq} & \displaystyle \max_{\begin{subarray}{c} \mathcal S_1,
\mathcal S_2, \dots, \mathcal S_L \\ \mathcal S_i \subseteq \left\{1,2,\dots, l_i-1 \right\} \end{subarray}
} \displaystyle \max_{\begin{subarray}{c} t_1, t_2, \dots, t_L \\ \max \left\{x \in \mathcal S_i\right\} < t_i \leq l_i \end{subarray}
}
\mathbb P \left\{ \mathcal R\left( \boldsymbol{\mathcal S} , \mathbf t \right) \right\} \nonumber \\
& \stackrel{(b)}{\doteq} & \displaystyle \max_{\begin{subarray}{c} \mathbf t \\ 1 \leq t_i \leq l_i \end{subarray}
}
\mathbb P \left\{ \mathcal R_0\left( \mathbf t \right) \right\}, \label{eq:hd_show}
\end{eqnarray}
where
$$\mathcal R\left( \boldsymbol{\mathcal S}, \mathbf t \right) \equiv \left\{ \boldsymbol \mu \in \left( \mathbb R_+ \cup
\left\{ 0 \right\}\right)^{\left| E \right|} \left| \sum_{i=1}^{L}
\min \left\{1, \mu_{\left\{ \mathrm p_i(t_i), \mathrm p_i(t_i-1)
\right\}} + \displaystyle \sum_{k \in \mathcal S_i} \mu_{\left\{
\mathrm p_i(k), \mathrm p_i(k-1) \right\}} - \left| \mathcal S_i
\right| \right\} \geq L - Sr \right\} \right.,$$ $\mathbf t = \left[ t_1,
t_2, \dots, t_L\right]$, $\boldsymbol{\mathcal{S}}= \left[ \mathcal{S}_1, \cdots, \mathcal{S}_L\right]$, and $\mathcal R_0\left( \mathbf t \right)
\equiv \mathcal R\left( \left[\emptyset, \emptyset, \dots, \emptyset\right], \mathbf t \right)$, in which $\emptyset$ denotes the empty set. Here, $(a)$ follows from \eqref{eq:t1_ni_1}. In order to prove $(b)$, we first show that
\begin{equation}
\min \left\{1, \mu_{\left\{ \mathrm p_i(t_i), \mathrm p_i(t_i-1) \right\}} + \displaystyle \sum_{k \in
\mathcal S_i} \mu_{\left\{ \mathrm p_i(k), \mathrm p_i(k-1)
\right\}} - \left| \mathcal S_i \right| \right\} \leq
\displaystyle \max_{t'_i \in \mathcal S_i \cup \left\{ t_i \right\}}
\min \left\{1, \mu_{\left\{ \mathrm p_i(t'_i), \mathrm p_i(t'_i-1) \right\}} \right\}. \label{eq:easy_show}
\end{equation}
In order to verify \eqref{eq:easy_show}, consider two possible
scenarios: i)~for all $t'_i \in \mathcal S_i \cup \left\{ t_i
\right\}$, we have $\mu_{\left\{ \mathrm p_i(t'_i), \mathrm
p_i(t'_i-1) \right\}} \leq 1$. In this scenario, the left-hand side of
the inequality is the sum of $| \mathcal S_i | + 1$ nonnegative terms,
each at most $1$, minus $|\mathcal S_i|$; hence, the left-hand side is
less than or equal to $\mu_{\left\{ \mathrm p_i(t'_i), \mathrm
p_i(t'_i-1) \right\}}$ for any $t'_i \in \mathcal S_i \cup \left\{ t_i
\right\}$, and \eqref{eq:easy_show} is valid; ii)~for at least one
$t'_i \in \mathcal S_i \cup \left\{ t_i \right\}$, we have
$\mu_{\left\{ \mathrm p_i(t'_i), \mathrm p_i(t'_i-1) \right\}} > 1$.
In this scenario, the right-hand side of the inequality equals $1$,
and accordingly, \eqref{eq:easy_show} is valid. According to
\eqref{eq:easy_show}, we have $ \mathcal R \left(\boldsymbol{\mathcal
S}, \mathbf{t} \right) \subseteq \displaystyle
\bigcup_{\begin{subarray}{c} \mathbf t' \\ t'_i \in \mathcal S_i
\cup \left\{ t_i \right\} \end{subarray}} \mathcal R_0\left( \mathbf
t' \right)$, which results in $(b)$ of \eqref{eq:hd_show}.
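Inequality \eqref{eq:easy_show} also admits a quick randomized check. In the sketch below (ours, purely illustrative), the sampled values play the role of the exponents $\mu_{\left\{\mathrm p_i(t'_i), \mathrm p_i(t'_i-1)\right\}}$ for $t'_i \in \mathcal S_i \cup \{t_i\}$:

```python
# Randomized check of (eq:easy_show): for nonnegative mu-values,
# min{1, mu_t + sum_{k in S} mu_k - |S|} <= max over the same set of
# min{1, mu}. The first sampled value plays the role of mu_t.
import random

random.seed(1)
for _ in range(10_000):
    n = random.randint(1, 6)                 # |S_i| + 1 exponents in play
    mu = [random.uniform(0, 3) for _ in range(n)]
    lhs = min(1.0, sum(mu) - (n - 1))
    rhs = max(min(1.0, m) for m in mu)
    assert lhs <= rhs + 1e-12
print("inequality verified on all sampled points")
```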
On the other hand, we know that for $\boldsymbol \mu^0 \geq \mathbf
0$, we have $\mathbb{P} \left\{\boldsymbol \mu \geq \boldsymbol
\mu^0 \right\} \doteq P^{-\mathbf{1} \cdot \boldsymbol \mu^0}$. By
taking derivative with respect to $\boldsymbol \mu$, we have
$f_{\boldsymbol \mu}(\boldsymbol \mu) \doteq P^{- \mathbf 1 \cdot
\boldsymbol \mu}$. Let us define $l_0 \triangleq
\displaystyle \min_{\boldsymbol \mu \in \mathcal R_0(\mathbf t)}
\mathbf 1 \cdot \boldsymbol \mu$ and $\boldsymbol \mu_0 \triangleq
\displaystyle \arg \, \min_{\boldsymbol \mu \in \mathcal
R_0(\mathbf t)} \mathbf 1 \cdot \boldsymbol \mu$, $\mathcal I \triangleq \left[0, l_0 \right]^{\left| E \right|}$, $\mathcal
I_0^c \triangleq [\mu_0(1), \infty ) \times [\mu_0(2), \infty ) \times \dots
\times [\mu_0(\left| E \right|), \infty )$ and for $1 \leq i \leq \left| E \right|$,
$\mathcal{I}_i^c \triangleq [0, \infty )^{i-1} \times [l_0, \infty ) \times [0,
\infty )^{\left| E \right|-i}$. It is easy to verify that $\mathcal I_0^c \subseteq
\mathcal R_0(\mathbf t)$. Hence, we have
\begin{eqnarray}
\mathbb{P} \left\{ \mathcal R_0(\mathbf t) \right\} & \stackrel{(a)}{\doteq}
& \mathbb{P} \left\{ \mathcal I_0^c \right\} + \int_{\mathcal R_0(\mathbf t) \bigcap \mathcal{I}}{f_{\boldsymbol
\mu}\left(\boldsymbol \mu \right) d\boldsymbol \mu
} + \sum_{i=1}^{\left| E \right|}{\mathbb{P} \left\{ \mathcal R_0( \mathbf t ) \cap \mathcal{I}_i^c \right\}}
\nonumber
\\
&\stackrel{(b)}{\doteq} & P^{-l_0}.
\end{eqnarray}
Here, $(a)$ follows from the facts that i)~$\mathbb P \left\{
\bigcup_{i=1}^M \mathcal A_i \right\} \doteq \sum_{i=1}^{M} \mathbb
P \left\{ \mathcal A_i \right\}$, and ii)~$\mathcal I_0^c \subseteq \mathcal
R_0( \mathbf t)$ and $\mathbb R_+^{\left| E \right|} = \mathcal I \bigcup \left(
\bigcup_{i=1}^{\left| E \right|} \mathcal I_i^c \right)$, which imply that $\mathcal R_0(\mathbf t) $ can be written as $\mathcal I_0^c \bigcup \left( \mathcal R_0(\mathbf t) \bigcap \mathcal I\right) \bigcup \left[ \bigcup_{i=1}^{\left| E \right|} \left( \mathcal R_0(\mathbf t) \bigcap \mathcal I_i^c\right)\right]$. $(b)$ follows from the
facts that $\mathbb{P} \left\{ \mathcal I_0^c \right\} = \mathbb P
\left\{ \boldsymbol \mu \geq \boldsymbol \mu_0 \right\} \doteq
P^{-l_0}$, $\int_{\mathcal R_0( \mathbf t) \bigcap
\mathcal{I}}{f_{\boldsymbol \mu}\left(\boldsymbol \mu \right)
d\boldsymbol \mu } \dot \leq \mbox{vol} \left(\mathcal R_0( \mathbf t)
\bigcap \mathcal{I}\right) P^{-l_0}$, noting that $\mbox{vol} \left(\mathcal R_0( \mathbf t)
\bigcap \mathcal{I}\right)$ is a constant number independent of $P$, and $\mathbb{P} \left\{
\mathcal R_0( \mathbf t ) \cap \mathcal{I}_i^c \right\} \leq
\mathbb{P} \left\{ \mathcal{I}_i^c \right\} \doteq P^{-l_0}$. Now,
defining $g_{\mathbf t} (\boldsymbol \mu)=\sum_{i=1}^{L} \min
\left\{1, \mu_{\left\{ \mathrm p_i(t_i), \mathrm p_i(t_i-1)
\right\}} \right\}$ and $ \boldsymbol {\hat \mu} = [\min \left\{
\mu_e, 1\right\}]_{e \in E}$, it is easy to verify that $g_{\mathbf
t}(\boldsymbol {\hat \mu} ) = g_{\mathbf t}(\boldsymbol \mu )$ and
at the same time $\mathbf 1 \cdot \boldsymbol {\hat \mu} < \mathbf 1
\cdot \boldsymbol \mu$ unless $\boldsymbol {\hat \mu} = \boldsymbol
\mu$. Hence, defining $\hat g_{\mathbf t}(\boldsymbol \mu) =
\sum_{i=1}^{L} \mu_{\left\{ \mathrm p_i(t_i), \mathrm p_i(t_i-1)
\right\}} $, we have
\begin{equation}
d_{RS, NI}(r) = \min_{\begin{subarray}{c}
\mathbf t \\
1 \leq t_i \leq l_i
\end{subarray}
} \min_{\begin{subarray}{c}
\boldsymbol \mu \geq \mathbf 0 \\
g_{\mathbf t} (\boldsymbol \mu) \geq L - Sr
\end{subarray}
} \mathbf 1 \cdot \boldsymbol \mu = \min_{\begin{subarray}{c}
\mathbf t \\
1 \leq t_i \leq l_i
\end{subarray}
} \min_{\begin{subarray}{c}
\mathbf 0 \leq \boldsymbol \mu \leq \mathbf 1 \\
\hat g_{\mathbf t}(\boldsymbol \mu) \geq L - Sr
\end{subarray}
} \mathbf 1 \cdot \boldsymbol \mu = \min_{\boldsymbol \mu \in \mathcal {\hat R}} \, \mathbf 1 \cdot \boldsymbol \mu,
\end{equation}
where $\mathcal {\hat R} = \left\{ \boldsymbol \mu \left| \,
\mathbf 0 \leq \boldsymbol \mu \leq \mathbf 1, \displaystyle
\sum_{i=1}^{L} \max_{1 \leq j \leq l_i} \mu_{\left\{ \mathrm p_i(j),
\mathrm p_i(j-1) \right\}} \geq L - Sr \right\} \right.$. This proves the first part of the theorem.
Now, let us define $G_{\mathrm P}=(V, E_{\mathrm P})$ as the
subgraph of $G$ consisting of the edges in the path sequence, i.e.
$E_{\mathrm P}=\left\{ \left\{ \mathrm p_i(j), \mathrm p_i(j-1)
\right\}, \forall i,j: 1 \leq i \leq L, 1 \leq j \leq l_i \right\}$.
Assume $\mathcal {\hat S} = \displaystyle \underset{\mathcal
S}{\mbox{argmin}} \, w_{G_{\mathrm P}}(\mathcal S), $ where $\mathcal
S$ is a cut-set on $G_{\mathrm P}$. We define $\boldsymbol { \hat
\mu}$ as $ \hat \mu_e = \frac{(L-Sr)^+}{L}$ for all $e
\in E_{\mathrm P}$ such that $|e \cap \mathcal{\hat S}|=|e \cap
\mathcal{\hat S}^c| = 1$ and $ \hat \mu_e = 0$ for the
other edges $e \in E$. As all the paths cross the cut-set $\mathcal{\hat S}$ at least once, it follows that $\max_{1 \leq j \leq l_i} \hat \mu_{\left\{ \mathrm p_i(j),
\mathrm p_i(j-1) \right\}} = \frac{(L-Sr)^+}{L}$, which implies that $\boldsymbol {\hat
\mu} \in \mathcal {\hat R}$. Hence, we have
\begin{equation}
d_{RS,NI}(r) \leq \mathbf 1 \cdot \boldsymbol {\hat \mu} = \frac{(L-Sr)^+}{L}
\min_{\mathcal S} w_{G_{\mathrm P}}(\mathcal S) \stackrel{(a)}{\leq} \frac{(L-Sr)^+}{L}
\min_{\mathcal S} w_{G}(\mathcal S) \stackrel{(b)}{\leq} (1-r)^+ \min_{\mathcal S} w_{G}(\mathcal S),
\end{equation}
where $(a)$ follows from the fact that as $G_{\mathrm P}$ is a subgraph of $G$, we have $\min_{\mathcal S} \, w_{G_{\mathrm P}}(\mathcal S) \leq \min_{\mathcal S} \, w_{G}(\mathcal S) $ and $(b)$ results from $S \geq L$. This proves the second part of the theorem.
Finally, we prove the lower-bound on the DMT of the RS scheme. Let
us define $d_G=\min_{\mathcal S} w_G (\mathcal S)$. Consider the
maximum flow algorithm \cite{graph_book} on $G$ from the source
node $0$ to the sink node $K+1$. According to the Ford-Fulkerson
Theorem \cite{graph_book}, the maximum flow, which is equal to the
minimum cut of $G$, can be achieved by a sequence $\left(\mathrm {\hat
p}_1, \mathrm {\hat p}_2, \dots, \mathrm {\hat p}_{d_G} \right)$ of
edge-disjoint paths with the lengths $\left(\hat l_1,\hat l_2, \dots,
\hat l_{d_G}\right)$. Now, consider
the RS scheme with $L=L_0 d_G$ paths and the path sequence
$\left(\mathrm p_1, \mathrm p_2, \dots, \mathrm p_L\right)$
consisting of the paths that achieve the maximum flow of $G$ such
that any path $\mathrm {\hat p}_i$ occurs exactly $L_0$ times in the
sequence. Considering $\left(l_1, l_2, \dots, l_L\right)$ as the
length sequence, we select the timing sequence as
$s_{i,j}=\sum_{k=1}^{i-1}l_k+j$. It is easy to verify that not only
does this timing sequence satisfy the four requirements of the RS
scheme, but the active relays are also non-interfering under it.
Hence, the assumptions of the first part of the theorem are valid.
Moreover, we have $S \leq l_{G} L$. According to
\eqref{eq:t1_exact}, the diversity gain of the RS scheme equals
\begin{equation}
d_{RS, NI}(r)=\min_{\boldsymbol \mu \in \mathcal {\hat R}} \sum_{e \in E} \mu_e. \label{eq:t1_ni_2}
\end{equation}
As $\boldsymbol \mu \in \mathcal {\hat R}$, we have
\begin{equation}
(L-Sr)^+ \leq \sum_{i=1}^{L} \max_{1 \leq j \leq l_i} \mu_{\left\{ \mathrm p_i(j),
\mathrm p_i(j-1) \right\}} \stackrel{(a)}{\leq} L_0 \sum_{e \in E} \mu_e,\label{eq:t1_ni_3}
\end{equation}
where $(a)$ results from the fact that as $\left(\mathrm {\hat
p}_1, \mathrm {\hat p}_2, \dots, \mathrm {\hat p}_{d_G} \right)$
form a valid flow on $G$ (they are non-intersecting over $E$), every $e \in E$ occurs in at most one
$\mathrm {\hat p}_i$, or equivalently, in at most $L_0$ number of
$\mathrm p_i$'s. Combining \eqref{eq:t1_ni_2} and
\eqref{eq:t1_ni_3}, we have
\begin{equation}
d_{RS, NI}(r) \geq \frac{(L-Sr)^+}{L_0} \geq \left(1-l_G r\right)^+d_G = \left(1-l_G r\right)^+ \min_{\mathcal S} w_G(\mathcal S). \label{eq:t1_lb_proof}
\end{equation}
This proves the third part of the theorem.
\end{proof}
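The path-sequence construction in the last part of the proof rests on a unit-capacity max-flow computation. The following is a minimal Ford-Fulkerson sketch (ours; BFS augmenting paths, function names illustrative) that returns $d_G$ edge-disjoint paths from the source $0$ to the sink $K+1$:

```python
# Unit-capacity Ford-Fulkerson (BFS augmenting paths) followed by a path
# decomposition of the integral flow into edge-disjoint source-sink paths.
from collections import deque

def max_flow_paths(K, edges):
    cap, adj = {}, {v: set() for v in range(K + 2)}
    for a, b in edges:                       # undirected edge -> two unit arcs
        for u, v in ((a, b), (b, a)):
            cap[(u, v)] = 1
            adj[u].add(v)
    s, t = 0, K + 1
    while True:                              # augment while an s-t path exists
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        v = t
        while parent[v] is not None:         # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
    out = {v: [] for v in range(K + 2)}      # arcs carrying one unit of flow
    for (u, v), c in cap.items():
        if c == 0 and cap[(v, u)] == 2:
            out[u].append(v)
    paths = []
    while out[s]:                            # follow flow arcs from the source
        path, u = [s], s
        while u != t:
            u = out[u].pop()
            path.append(u)
        paths.append(path)
    return paths

# Two-hop parallel network with K = 3 relays (min cut d_G = 3):
print(max_flow_paths(3, [(0, k) for k in (1, 2, 3)] + [(k, 4) for k in (1, 2, 3)]))
# three edge-disjoint paths of the form (0, k, 4)
```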
\textit{Remark 1-} In scenarios where the minimum-cut on $G$ is
achieved by a cut of the MISO or SIMO form, i.e., the edges crossing
the cut either all originate from or are all destined to the same vertex,
the upper-bound on the diversity gain of the RS scheme derived in
\eqref{eq:t1_ub} meets the information-theoretic upper-bound on the
diversity gain of the network. Hence, in this scenario, any RS
scheme that achieves \eqref{eq:t1_ub} indeed achieves the optimum
DMT.
\textit{Remark 2-} In general, the upper-bound \eqref{eq:t1_ub} can be achieved for certain graph topologies by a careful design of the path sequence and the timing sequence. One example is the case of the layered network \cite{avesti_wireless_deterministic}, in which all the paths from the source to the destination have the same length $l_{G}$. Let us assume that the relays are allowed to operate in the full-duplex manner. In this case, it can easily be observed that the timing sequence corresponding to the path sequence $\left(\mathrm p_1, \mathrm p_2, \dots, \mathrm p_L\right)$ used in the proof of \eqref{eq:t1_lb} can be modified to $s_{i,j}=i+j-1$. Accordingly, the number of slots is decreased to $S=L+l_G-1$. Rewriting \eqref{eq:t1_lb_proof}, we have $d_{RS, NI}(r) \geq \left(1-r-\frac{l_G-1}{L}r\right)^+ \min_{\mathcal S} w_G(\mathcal S)$, which converges to $\left(1-r\right)^+ \min_{\mathcal S} w_G(\mathcal S)$ for large values of $L$.
Next, using Theorem 1, we show that the RS scheme achieves the
optimum DMT in the setup of single-antenna two-hop multiple-relay
networks where there exists no direct link either between the
transmitter and the receiver or between the relay nodes.
\begin{thm}
Assume a single-antenna half-duplex parallel relay scenario with $K$
non-interfering relays. The proposed RS scheme with $L=BK$,
$S=BK+1$, the path sequence $$\mathrm Q \equiv (\mathrm q_1, \dots,
\mathrm q_K, \mathrm q_1, \dots, \mathrm q_K, \dots, \mathrm q_1,
\dots, \mathrm q_K)$$ where $\mathrm q_k\equiv(0, k, K+1)$ and the
timing sequence $s_{i,j}=i+j-1$ achieves the diversity gain
\begin{equation}
d_{RS,NI}(r)=\max \left\{0,
K\left(1-r\right)- \frac{r}{B} \right\}, \label{eq:t1}
\end{equation}
which achieves the optimum DMT curve $d_{opt}(r)=K(1-r)^+$ as $B \to \infty$.
\end{thm}
\begin{proof}
First, according to the cut-set bound theorem \cite{cover_book}, the
point-to-point capacity of the uplink channel (the channel from the
transmitter to the relays) is an upper-bound on the achievable rate
of the network. Accordingly, the diversity-multiplexing curve of a
$1 \times K$ SIMO system which is a straight line (from the multiplexing
gain $1$ to the diversity gain $K$, i.e. $d_{opt}(r)=K(1-r)^+$) is
an upper-bound on the DMT of the network. Now, we prove that the
proposed RS scheme achieves the upper-bound on the DMT for
asymptotically large values of $S$.
As the relay pairs are non-interfering ($1 \leq k\leq K: \left\{k,
(k \mod K)+1 \right\} \notin E$), the result of Theorem 1 can be
applied. As a result
\begin{equation}
d_{RS, NI}(r) = \displaystyle \min_{\boldsymbol \mu \in \mathcal {\hat R}} \sum_{e \in E} \mu_e,
\end{equation}
where $\mathcal {\hat R} = \left\{ \boldsymbol \mu \left| \, \,
\mathbf 0 \leq \boldsymbol \mu \leq \mathbf 1, \displaystyle \sum_{i=1}^{BK}
\max_{1 \leq j \leq 2} \mu_{\left\{ \mathrm q_{(i-1) \mod K + 1}(j), \mathrm q_{(i-1) \mod K + 1}(j-1) \right\}}
\geq BK - (BK+1)r \right\} \right.$. Hence, we have
\begin{equation}
BK \left(1 - r - \frac{1}{BK}r\right)^+ \stackrel{(a)}{\leq} B
\sum_{k=1}^{K} \max \left\{ \mu_{\left\{ 0, k \right\}},
\mu_{\left\{ K+1, k \right\}} \right\} \leq B \displaystyle \sum_{e \in E} \mu_e,
\end{equation}
where $(a)$ results from the fact that every path $\mathrm q_k$ is
used $B$ times in the path sequence. Hence, DMT can be lower-bounded
as
\begin{equation}
d_{RS, NI}(r) \geq K \left(1 - r - \frac{1}{BK}r\right)^+. \label{eq:t11_1}
\end{equation}
On the other hand, considering the vector $\boldsymbol {\hat
\mu}=[\hat \mu_e]_{e \in E}$ where $\forall 1 \leq k \leq K: \hat
\mu_{\left\{0,k\right\}} = \left( 1 - r - \frac{1}{BK}r \right)^+$
and $\forall k,k' \neq 0: \hat \mu_{\left\{k,k'\right\}}=0$, it is
easy to verify that $\boldsymbol {\hat \mu} \in \mathcal {\hat R}$.
Hence,
\begin{equation}
d_{RS,NI}(r) \leq \sum_{e \in E} \hat \mu_e = K \left(1 - r - \frac{1}{BK}r\right)^+. \label{eq:t11_2}
\end{equation}
Combining \eqref{eq:t11_1} and \eqref{eq:t11_2} completes the proof.
\end{proof}
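For the parallel network of Theorem 2, the linear program \eqref{eq:t1_exact} can be evaluated by brute force and compared against the closed form \eqref{eq:t1}. The sketch below is ours and assumes small $K$ and $B$; it exploits the reduction used in the proof of Theorem 1, where for a fixed choice vector $\mathbf t$ (which hop of each path attains the max) the problem becomes a fractional covering solved greedily:

```python
# Brute-force value of the LP (eq:t1_exact) for the two-hop parallel
# network, checked against d(r) = max{0, K(1-r) - r/B} of (eq:t1).
from itertools import product
from collections import Counter

def dmt_lp(K, B, r):
    """Exhaustive evaluation over choice vectors t; greedy fractional cover."""
    L, S = B * K, B * K + 1
    T = L - S * r                                   # right-hand side L - Sr
    if T <= 0:
        return 0.0
    paths = [((0, k), (k, K + 1)) for _ in range(B) for k in range(1, K + 1)]
    best = float("inf")
    for t in product((0, 1), repeat=L):             # which hop attains the max
        mult = Counter(paths[i][t[i]] for i in range(L))
        need, cost = T, 0.0
        for edge, c in mult.most_common():          # fill largest multiplicity
            take = min(1.0, need / c)               # first (cheapest coverage)
            cost += take
            need -= c * take
            if need <= 1e-12:
                break
        best = min(best, cost)
    return best

K, B = 2, 2
for r in (0.0, 0.25, 0.5, 0.75):
    closed_form = max(0.0, K * (1 - r) - r / B)     # (eq:t1)
    assert abs(dmt_lp(K, B, r) - closed_form) < 1e-9
print("LP value matches max{0, K(1-r) - r/B} at all sampled r")
```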
\textit{Remark 3-} Note that as long as the complement\footnote{For every undirected graph $G=(V,E)$, the complement of $G$ is a graph $H$ on the same vertices such that two vertices of $H$ are adjacent if and only if they are non-adjacent in $G$~\cite{graph_book}.} of the induced sub-graph of $G$
on the relay nodes $\left\{1, 2, \dots, K \right\}$ includes a
Hamiltonian cycle \footnote{A Hamiltonian cycle is a simple cycle
$(v_1,v_2,\cdots ,v_K ,v_1)$ that passes through each vertex of the
graph exactly once~\cite{graph_book}.}, the result of Theorem 2
remains valid. However, the paths $\mathrm q_1, \mathrm q_2, \dots,
\mathrm q_K$ should be permuted in the path sequence according to
their orderings in the corresponding Hamiltonian cycle.
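For small $K$, the condition of Remark 3 can be checked by brute force. The sketch below is ours and purely illustrative (exponential in $K$); it searches the complement of the induced relay subgraph for a Hamiltonian ordering of the relays:

```python
# Brute-force search for a Hamiltonian cycle in the complement of the
# induced relay subgraph (consecutive relays must be non-adjacent in G).
from itertools import permutations

def hamiltonian_relay_order(K, relay_edges):
    """relay_edges: interfering relay pairs {k, k'}; returns a cyclic
    relay order whose consecutive relays are non-adjacent, or None."""
    E = {frozenset(e) for e in relay_edges}
    for perm in permutations(range(2, K + 1)):      # fix relay 1 first
        cycle = (1,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % K])) not in E
               for i in range(K)):
            return cycle
    return None

# K = 4 relays; relays {1,2} and {3,4} interfere (as in Figure model):
print(hamiltonian_relay_order(4, [{1, 2}, {3, 4}]))  # -> (1, 3, 2, 4)
```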
According to (\ref{eq:t1}), we observe that the RS scheme achieves
the maximum multiplexing gain $1-\frac{1}{BK+1}$ and the maximum
diversity gain $K$ for the setup of non-interfering relays. Hence, it
achieves the maximum diversity gain for any finite value of $B$.
Also, knowing that no signal is sent to the receiver in the first
slot, the RS scheme achieves the maximum possible multiplexing gain.
Figure (\ref{fig:dm_wi}) shows the DMT of the
scheme for the case of non-interfering relays and various values of
$K$ and $B$.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.8]{dm_wi2.eps}
\caption{DMT of RS scheme in parallel relay network for both ``interfering''
and ``non-interfering'' relaying scenarios and for different values of $K,B$.} \label{fig:dm_wi}
\end{figure}
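The non-interfering curves in Figure (\ref{fig:dm_wi}) follow directly from \eqref{eq:t1}. A short sketch (ours, values only, not the paper's plotting code) evaluating the closed form and the maximum multiplexing gain $1-\frac{1}{BK+1}$:

```python
# Evaluating the non-interfering DMT curve of (eq:t1),
# d(r) = max{0, K(1-r) - r/B}, for a few (K, B) pairs.
def d_rs_ni(r, K, B):
    return max(0.0, K * (1 - r) - r / B)

for K, B in ((2, 1), (2, 4), (4, 1), (4, 4)):
    r_max = B * K / (B * K + 1)   # multiplexing gain at which d(r) hits zero
    print(f"K={K}, B={B}: d(0)={d_rs_ni(0, K, B):.0f}, r_max={r_max:.3f}")
```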
\subsection{General Case}
In this subsection, we study the performance of the RS scheme in
general single-antenna multi-hop wireless networks and derive a
lower bound on the corresponding DMT. First, we show that the RS
scheme with the parameters defined in Theorem 2 achieves the optimum
DMT for the single-antenna parallel-relay networks when there is no
direct link between the transmitter and the receiver. Then, we
generalize the statement and provide a lower-bound on the DMT of the
RS scheme for the more general case.
As stated in the section ``System Model'', throughout the two-hop
network analysis, we slightly modify our notations to simplify the
derivations. Specifically, the output vector at the transmitter, the
input and the output vectors at the $k$'th relay, and the input
vector at the receiver are denoted as $\mathbf x$, $\mathbf r_k$,
$\mathbf t_k$ and $\mathbf y$, respectively. $h_k$ and $g_k$
represent the channel gain between the transmitter and the $k$'th
relay and the channel gain between the $k$'th relay and the
destination, respectively. $(k)$ and $(b)$ are defined as $(k)\equiv
\left((k-2) \mod K\right) +1$ and $(b)\equiv b - \lfloor
\frac{(k)}{K} \rfloor$. Finally, $i_{(k)}$, $\mathbf n_k$, $\mathbf
z$, and $\alpha_k$ denote the channel gain between the $k$'th and
the $(k)$'th relay nodes, the noise at the $k$'th relay, the noise at
the receiver, and the amplification coefficient at the $k$'th relay,
respectively.
Figure (\ref{fig:model}) shows a realization of this setup with $4$ relays. As observed, the relay set $\{1,2\}$ is disconnected from the relay set
$\{3,4\}$. In general, the output signal of any relay node $k'$ such that $\{k, k'\} \in E$ can interfere with the received signal of relay node $k$. However, in Theorem 3, the RS scheme is applied with the same parameters as in Theorem 2. Hence, when the transmitter is sending a signal to the $k$'th relay in a time-slot, only the $(k)$'th relay is transmitting simultaneously and interferes at the $k$'th relay. As an example, for the scenario shown in Figure (\ref{fig:model}), we have
\begin{eqnarray}
\mathbf r_1 & = & h_1 \mathbf x + i_{4} \mathbf t_4 + \mathbf n_1, \nonumber \\
\mathbf r_2 & = & h_2 \mathbf x + \mathbf n_2. \nonumber
\end{eqnarray}
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{model.eps}
\caption{An example of the half-duplex parallel relay network setup, relay nodes $\{1,2\}$ are disconnected from relay nodes $\{3,4\}$.}
\label{fig:model}
\end{figure}
However, for the sake of simplicity, in the proof of the following theorem, we assume that all the relays interfere with each other. Hence, at the $k$'th relay, we have
\begin{equation}
\mathbf{r}_k = h_{k} \mathbf{x} + i_{(k)} \mathbf t_{(k)} + \mathbf n_k.
\end{equation}
According to the output power constraint, the amplification
coefficient is bounded as $\alpha _k \leq \sqrt {\frac{P}{P \left(
\left| h_k \right|^2 + \left| i_{(k)} \right|^2 \right) + 1}}$.
However, according to the signal boosting constraint imposed on the RS scheme, we have $|\alpha_k| \leq 1$. Hence, the amplification coefficient is equal to
\begin{equation}
\alpha_k = \min
\left\{ 1, \sqrt {\frac{P}{P \left( \left| h_k \right|^2 + \left| i_{(k)}
\right|^2 \right) + 1} } \right\}. \label{eq:alpha_constraint}
\end{equation}
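Equation \eqref{eq:alpha_constraint} is straightforward to evaluate. The snippet below (ours, with illustrative channel values) shows the capping at $1$ roughly when $\left|h_k\right|^2+\left|i_{(k)}\right|^2 < 1$, and the power back-off otherwise:

```python
# Direct evaluation of the amplification coefficient of
# (eq:alpha_constraint): alpha_k = min{1, sqrt(P / (P(|h|^2+|i|^2)+1))}.
import math

def alpha(P, h2, i2):
    return min(1.0, math.sqrt(P / (P * (h2 + i2) + 1)))

P = 1e6
print(alpha(P, h2=0.4, i2=0.3))  # capped at 1.0 (sum of gains below 1)
print(alpha(P, h2=1.3, i2=0.9))  # < 1: strong gains force power back-off
```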
In this manner, it is guaranteed that the noise terms of the
different relays are not boosted throughout the network. This is
achieved at the cost of working with the output power less than $P$.
On the other hand, we know that almost surely \footnote{By almost
surely, we mean its probability is greater than $1-P^{-\delta}$, for
any value of $\delta > 0$.} $\left| h_k \right|^2 , \left| i_{(k)}
\right|^2 \dot{\leq} 1$. Hence, almost surely, we have $\alpha _k
\doteq 1$. This point will be elaborated further in the proof of the
theorem. Now, we prove the DMT optimality of the RS scheme for general
single-antenna parallel-relay networks.
\begin{thm} \label{thm:DMT-n-ir}
Consider a single-antenna half-duplex parallel relay network with
$K>1$ interfering relays where there is no direct link between the
transmitter and the receiver. The diversity gain of the RS scheme
with the parameters defined in Theorem 2 is lower-bounded as
\begin{equation}
d_{RS,I}(r) \geq \max \left\{ 0, K \left( 1 - r \right) -
\frac{r}{B} \right\}. \label{eq:t2}
\end{equation}
Furthermore, the RS scheme achieves the optimum DMT
$d_{opt}(r)=K(1-r)^+$ as $B \to \infty$.
\end{thm}
\begin{proof}
First, we show that the entire channel matrix is equivalent to a
lower triangular matrix. Let us define $\mathbf{x}_{b,k},
\mathbf{n}_{b,k}, \mathbf{r}_{b,k}, \mathbf{t}_{b,k},
\mathbf{z}_{b,k}, \mathbf{y}_{b,k}$ as the portion of signals that
is sent or received in the $k$'th slot of the $b$'th sub-block. At
the receiver side, we have
\begin{eqnarray}
\mathbf{y}_{b,k} & = & g_{(k)} \mathbf{t}_{b, k} + \mathbf{z}_{b, k}
\nonumber \\
& = & g_{(k)} \alpha _{(k)} \left( \sum_{\begin{subarray}{c}1 \leq b_1\leq b, 1 \leq k_1 \leq K \\ b_1 K + k_1
< b K + k \end{subarray}}{p_{b-b_1, k, k_1}\left( h_{k_1}\mathbf{x}_{b_1,
k_1} + \mathbf{n}_{b_1, k_1}\right) } \right) + \mathbf{z}_{b, k}.
\end{eqnarray}
Here, $p_{b, k, k_1}$ is given by the recursion $p_{0, k,
k}=1$ and $p_{b, k, k_1}=i_{((k))}\alpha_{((k))}p_{(b), (k), k_1}$. Defining
the square $BK \times BK$ matrices $\mathbf{G}= \mathbf{I}_B
\otimes \textit{diag}\left\{ g_1, g_2, \cdots ,g_K \right\}$,
$\mathbf{H}= \mathbf{I}_B \otimes \textit{diag}\left\{ h_1, h_2,
\cdots ,h_K \right\}$, $\mathbf{\Omega} = \mathbf{I}_B \otimes
\textit{diag}\left\{ \alpha _1, \alpha _2, \cdots ,\alpha _K
\right\}$, and
\begin{equation}
\mathbf{F}= \left(
\begin{array}{ccccc}
1 & 0 & 0 & 0 & \ldots \\
p_{0,2,1} & 1 & 0 & 0 & \ldots \\
p_{0,3,1} & p_{0, 3, 2} & 1 & 0 & \ldots \\
\vdots & \vdots & \vdots & \vdots & \ddots \\
p_{B-1, K, 1} & p_{B-1, K, 2} & \ldots & p_{0, K, K-1} & 1
\end{array}
\right),
\end{equation}
where $\otimes$ is the Kronecker product \cite{matrix_book} of
matrices and $\mathbf{I}_B$ is the $B \times B$ identity matrix, and
the $BK \times 1$ vectors $\mathbf{x}\left(s\right)=[x_{1,1}(s),
x_{1, 2}(s), \cdots ,x_{B, K}(s)]^T$,
$\mathbf{n}\left(s\right)=\left[n_{1,1}\left(s\right), n_{1, 2}(s),
\cdots ,n_{B, K}(s)\right]^T$,
$\mathbf{z}\left(s\right)=[z_{1,2}(s), z_{1, 3}(s), \cdots ,z_{B+1,
1}(s)]^T$, and $\mathbf{y}\left(s\right)=[y_{1,2}(s), y_{1, 3}(s),
\cdots ,y_{B+1, 1}(s)]^T$, we have
\begin{equation}
\mathbf{y}\left(s\right) = \mathbf{G} \mathbf{\Omega} \mathbf{F}
\left( \mathbf{H} \mathbf{x}\left(s\right) +
\mathbf{n}\left(s\right) \right) + \mathbf{z}\left(s\right).
\label{eq:ref1}
\end{equation}
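As a sanity check (ours, not part of the proof), the Kronecker-product construction and the triangular structure of \eqref{eq:ref1} can be verified numerically; the random gains and the placeholder strictly-lower entries of $\mathbf F$ are illustrative stand-ins for the recursive coefficients $p_{b,k,k_1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B, K = 3, 2                        # toy sizes for illustration
g = rng.standard_normal(K) + 1j * rng.standard_normal(K)
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)
alpha = rng.uniform(0.5, 1.0, K)   # |alpha_k| <= 1

# Block-diagonal factors via the Kronecker product with I_B.
G = np.kron(np.eye(B), np.diag(g))
H = np.kron(np.eye(B), np.diag(h))
Omega = np.kron(np.eye(B), np.diag(alpha))

# F is unit lower triangular; its strictly-lower entries are placeholders
# standing in for the recursive coefficients p_{b,k,k1}.
F = np.tril(rng.uniform(-1, 1, (B * K, B * K)), k=-1) + np.eye(B * K)

# The end-to-end matrix H_T = G * Omega * F * H of Eq. (ref1) is a product
# of diagonal and lower-triangular factors, hence lower triangular.
H_T = G @ Omega @ F @ H
print(np.allclose(np.triu(H_T, k=1), 0))   # prints True
```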
Here, we observe that the entire channel is equivalent
to a $BK \times BK$ MIMO channel with a lower triangular matrix
and colored noise. The outage probability of such a
channel for the multiplexing gain $r$ ($r \leq 1$) is defined as
\begin{equation}
\mathbb{P} \left\{ \mathcal{E} \right\}=\mathbb{P} \left\{ \log
\left|\mathbf{I}_{BK} + P \mathbf{H}_{T}\mathbf{H}_{T}^{H}\mathbf{P}_n^{-1} \right| \leq
(BK+1)r \log\left( P \right) \right\},
\end{equation}
where $\mathbf{P}_n=\mathbf{I}_{BK}+\mathbf{G} \mathbf{\Omega}
\mathbf{F} \mathbf{F}^H \mathbf{\Omega}^H \mathbf{G}^H$, and
$\mathbf{H}_T=\mathbf{G} \mathbf{\Omega} \mathbf{F} \mathbf{H}$.
Assume $|h_k|^2=P^{-\mu_k}$, $|g_k|^2=P^{-\nu_k}$,
$|i_k|^2=P^{-\omega_k}$, and $\mathcal{R}$ as the region in
$\mathbb{R}^{3K}$ that defines the outage event $\mathcal{E}$ in
terms of the vector $[\boldsymbol \mu^T, \boldsymbol \nu^T,
\boldsymbol \omega^T]^T$, where $\boldsymbol \mu=\left[ \mu_1 \mu_2
\cdots \mu_K \right]^T, \boldsymbol \nu=\left[ \nu_1 \nu_2 \cdots
\nu_K \right]^T,\boldsymbol \omega=\left[ \omega_1 \omega_2 \cdots
\omega_K \right]^T$. For Rayleigh fading, the probability that any
of these exponents is negative decays exponentially in $P$ (e.g.,
$\mathbb{P}\left\{ \mu_k < -\delta \right\} = \mathbb{P}\left\{
\left| h_k \right|^2 > P^{\delta} \right\} = e^{-P^{\delta}}$ for any
$\delta > 0$). Hence, the outage region $\mathcal R$ is almost surely
equal to $\mathcal{R}_{+}=\mathcal{R} \bigcap \mathbb{R}_{+}^{3K}$.
Now, we have
\begin{eqnarray}
\mathbb{P} \left\{ \mathcal{E} \right\} & \stackrel{(a)}{\leq} &
\mathbb{P} \left\{ \left| \mathbf{H}_T \right|^2 \left| \mathbf{P}_n
\right|^{-1} \leq
P^{-BK \left( 1-r \right) +r}\right\} \nonumber \\
& \stackrel{(b)}{\leq} & \mathbb{P} \left\{ -B
\sum_{k=1}^{K}{\left(\mu_k+\nu_k- \min \left\{ 0, \mu_k, \omega_{(k)}
\right\}\right)} - \frac{BK\log(3) + \log \left| \mathbf{P}_{n} \right|}{\log
\left( P \right)} \leq -BK(1-r)+r
\right\} \nonumber \\
& \stackrel{(c)}{\dot{\leq}} & \mathbb{P} \left\{ - BK \frac{\log
\left[3 \left( B^2K^2+1 \right) \right]}{\log (P)} + BK\left( 1-r
\right) - r \leq B
\sum_{k=1}^{K}{(\mu_k + \nu_k)}, \mu_k,\nu_k,\omega_k \geq 0 \right\}. \label{eq:R_hat_wi}
\end{eqnarray}
Here, $(a)$ follows from the fact that for a positive semidefinite
matrix $\mathbf A$, we have $\left| \mathbf{I} + \mathbf{A} \right|
\geq \left| \mathbf{A} \right|$ and $(b)$ follows from the fact that
\begin{equation}
|\alpha_k|^2= \min \left\{ 1, \frac{P}{P^{1-\mu_k} + P^{1-\omega_{(k)}
}+1} \right\} \geq \frac{1}{3} \min \left\{ 1, P, P^{\mu_k
}, P^{\omega_{(k)}} \right\} \nonumber
\end{equation}
and assuming $P \geq 1$. Finally, $(c)$ is proved as follows:
As $|\alpha_k| \leq 1$, we conclude $|p_{n,k,k_1}| \leq 1$. Hence, the sum of the magnitudes of the entries in each row of $\mathbf{F}\mathbf{F}^H$ is less than $B^2K^2$. Now, consider the matrix $\mathbf{A} \triangleq B^2 K^2 \mathbf I -\mathbf{F}\mathbf{F}^H$. From the above discussion, it follows that for every $i$, we have $A_{i,i} \geq \sum_{j \neq i}|A_{i,j}|$. Hence, for every vector $\mathbf x$, we have $\mathbf x^T \mathbf A \mathbf x \geq \sum_{i < j} |A_{i,j}| \left( x_i^2 + x_j^2 - 2|x_i||x_j| \right) = \sum_{i < j} |A_{i,j}| \left(|x_i| - |x_j| \right)^2 \geq 0$, and as a result $\mathbf A$ is positive semidefinite, which implies that $\mathbf{F}\mathbf{F}^H \preccurlyeq B^2K^2 \mathbf{I}_{BK}$. Consequently, we have $\mathbf{P}_n \preccurlyeq \mathbf I_{BK} + B^2K^2 \mathbf{G} \mathbf{\Omega} \mathbf{\Omega}^H \mathbf{G}^H$. Moreover, knowing that $\mathbb P \left\{ \mathcal{R} \right\} \doteq \mathbb P \left\{
\mathcal{R}_{+} \right\}$, and conditioned on $\mathcal{R}_{+}$, one has $|g_k|^2 \leq 1$, which implies that $\mathbf {GG}^H \preccurlyeq \mathbf I$. Combining this with the fact that $\mathbf{\Omega} \mathbf{\Omega}^H \preccurlyeq \mathbf I$ (as $|\alpha_k|^2 \leq 1$, $\forall k$) yields $\mathbf{P}_n \preccurlyeq \mathbf I_{BK} + B^2K^2 \mathbf{G} \mathbf{\Omega}
\mathbf{\Omega}^H \mathbf{G}^H \preccurlyeq \left(B^2K^2 + 1 \right) \mathbf I_{BK}$. Moreover, conditioned on $\mathcal{R}_{+}$, we have $\min \left\{ 0, \mu_k, \omega_{(k)} \right\} = 0$. This completes the proof of $(c)$.
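The diagonal-dominance step used in the proof of $(c)$ can be checked numerically; this sketch (ours) draws a unit lower-triangular $\mathbf F$ with entries bounded by $1$ in magnitude, as holds conditioned on $\mathcal R_{+}$, and verifies that $\mathbf A = B^2K^2\mathbf I - \mathbf F \mathbf F^H$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
B, K = 2, 3
n = B * K

# Unit lower-triangular F whose entries satisfy |p| <= 1, as in the proof.
F = np.tril(rng.uniform(-1, 1, (n, n)), k=-1) + np.eye(n)

# A = B^2 K^2 I - F F^H is diagonally dominant with nonnegative diagonal,
# hence positive semidefinite; equivalently F F^H <= B^2 K^2 I.
A = (B * K) ** 2 * np.eye(n) - F @ F.T
assert all(A[i, i] >= np.sum(np.abs(A[i])) - np.abs(A[i, i]) for i in range(n))
print(np.linalg.eigvalsh(A).min() >= -1e-9)   # numerically PSD: prints True
```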
On the other hand, for vectors $\boldsymbol{\mu}^0, \boldsymbol
{\nu}^0, \boldsymbol{\omega}^0 \geq \mathbf 0$, we have $\mathbb{P}
\left\{\boldsymbol{\mu} \geq \boldsymbol{\mu}^0, \boldsymbol{\nu}
\geq \boldsymbol{\nu}^0, \boldsymbol{\omega} \geq
\boldsymbol{\omega}^0 \right\} \doteq P^{-\mathbf{1} \cdot \left(
\boldsymbol{\mu}^0 + \boldsymbol{\nu}^0 + \boldsymbol{\omega}^0
\right)}$. Similar to the proof of Theorem 1, by taking derivative
with respect to $\boldsymbol \mu, \boldsymbol \nu$, we have
$f_{\boldsymbol \mu, \boldsymbol \nu}(\boldsymbol \mu, \boldsymbol
\nu) \doteq P^{- \mathbf 1 \cdot \left( \boldsymbol \mu +
\boldsymbol \nu \right)}$. Defining $l_0 \triangleq
- \frac{\log \left[3 \left( B^2K^2+1 \right) \right]}{\log (P)} +
\left( 1-r \right) - \frac{r}{BK} $, $\hat{\mathcal{R}} \triangleq \left\{
\boldsymbol{\mu},\boldsymbol{\nu} \geq \mathbf{0}, \frac{1}{K}
\mathbf{1} \cdot \left( \boldsymbol{\mu} + \boldsymbol{\nu}\right)
\geq l_0 \right\}$, the cube $\mathcal I$ as $\mathcal I \triangleq \left[0,
Kl_0 \right]^{2K}$, and for $1 \leq i \leq 2K$, $\mathcal{I}_i^c \triangleq [0,
\infty )^{i-1} \times [Kl_0, \infty ) \times [0, \infty )^{2K-i}$,
we observe
\begin{eqnarray}
\mathbb{P} \left\{ \mathcal E \right\} & \stackrel{(a)}{\dot{\leq}}
& \mathbb{P} \{ \hat{\mathcal R} \} \nonumber \\
& \stackrel{(b)}{\leq} & \int_{\mathcal{\hat{R}} \bigcap \mathcal{I}}{f_{\boldsymbol
\mu, \boldsymbol \nu}\left(\boldsymbol \mu, \boldsymbol \nu \right) d\boldsymbol \mu
d \boldsymbol \nu} + \sum_{i=1}^{2K}{\mathbb{P} \left\{ [\boldsymbol \mu^T,
\boldsymbol \nu^T]^T \in \mathcal{\hat{R}} \cap \mathcal{I}_i^c \right\}}
\nonumber
\\
&\dot{\leq} & \mbox{vol} (\mathcal{\hat{R}} \cap \mathcal{I})
P^{\displaystyle -\min_{\left[ \boldsymbol{\mu}_0^T, \boldsymbol{\nu}_0^T \right]^T \in
\mathcal{\hat{R}} \bigcap \mathcal{I}} \mathbf{1} \cdot \left(
\boldsymbol{\mu}_0 + \boldsymbol{\nu}_0 \right) } + 2K P^{-Kl_0}
\nonumber \\
& \stackrel{(c)}{\doteq} & P^{-Kl_0} \nonumber \\
& \doteq & P^{-\left[K \left( 1 - r \right) - \frac{r}{B} \right]}.
\label{eq:t2_r_wi}
\end{eqnarray}
Here, $(a)$ follows from (\ref{eq:R_hat_wi}), $(b)$ results from writing $\mathcal{\hat{R}}$ as $\left(\mathcal{\hat{R}} \bigcap \mathcal I \right) \bigcup \left[ \bigcup_{i=1}^{2K} \left( \mathcal{\hat{R}} \bigcap \mathcal I_i^c\right) \right] $ and using the union bound on the probability, and $(c)$ follows from the
fact that $\mathcal{\hat{R}} \bigcap \mathcal{I}$ is a bounded
region whose volume is independent of $P$. (\ref{eq:t2_r_wi})
completes the proof of Theorem 3.
\end{proof}
\textit{Remark 4-} The argument in Theorem 3 is valid regardless of
the induced subgraph of $G$ on the relay nodes. More precisely, the
DMT of the RS scheme can be lower-bounded as \eqref{eq:t2} as long
as $\left\{0, K+1 \right\} \notin E$ and $\left\{0, k\right\},
\left\{K+1, k\right\} \in E$ for all $1 \leq k \leq K$. One special
case, analyzed in Theorem 2, is that in which the complement of the
induced subgraph of $G$ on the relay nodes includes a Hamiltonian
cycle; in that case, Theorem 2 shows that the lower-bound on the DMT
derived in \eqref{eq:t2} is tight.
Figure \ref{fig:dm_wi} shows the DMT of the RS scheme for various
values of $K$ and $B$. Noting the proof of Theorem 3, we can easily
generalize the result of Theorem 3 and provide a lower-bound on the
DMT of the RS scheme for general single-antenna multi-hop
multiple-relay networks.
\begin{thm}
Consider a half-duplex single-antenna multiple-relay network with
the connectivity graph $G=(V, E)$ operated under the RS scheme with
$L$ paths, $S$ slots, and the path sequence $\left(\mathrm p_1,
\mathrm p_2, \dots, \mathrm p_L\right)$. Defining $\beta_e$ for
each $e \in E$ as the number of paths in the path sequence that go
through $e$, then the DMT of the RS scheme is lower-bounded as
\begin{equation}
d_{RS}(r) \geq \frac{L}{\displaystyle \max_{e \in E} \beta_e} \left(1 - \frac{S}{L}r \right)^+. \label{eq:t4_lb}
\end{equation}
\end{thm}
\begin{proof}
First, similar to the proof of Theorem 3, we show that the entire
channel matrix is lower triangular. At the receiver side, we have
\begin{equation}
\mathbf y_{K+1,i} = \prod_{j=1}^{l_i} h_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} \alpha_{i,j} \mathbf x_{0,i} +
\sum_{j < i} f_{i, j} \mathbf x_{0,j} + \sum_{j \leq i, m \leq l_j} q_{i, j, m} \mathbf n_{j, m}.\label{eq:t4_r_side}
\end{equation}
Here, $\mathbf x_{0,i}$ is the vector transmitted at the transmitter
side during the $s_{i,1}$'th slot as the input for the $i$'th path,
$\mathbf y_{K+1,i}$ is the vector received at the receiver side
during the $s_{i,l_i}$'th slot as the output for $i$'th path,
$f_{i,j}$ is the interference coefficient which relates the input of
the $j$'th path ($j < i$) to the output of the $i$'th path, $\mathbf
n_{j,m}$ is the noise vector during the $s_{j,m}$'th slot at the
$\mathrm p_j(m)$'th node, and finally, $q_{i, k, m}$ is the
coefficient which relates $\mathbf n_{k,m}$ to $\mathbf y_{K+1,i}$.
Note that as the timing sequence satisfies the noncausal
interference assumption, the summation terms in \eqref{eq:t4_r_side}
do not exceed $i$. Moreover, for the sake of brevity, we define
$\alpha_{i,l_i}=1$. Defining $\mathbf x(s) = \left[ x_{0,1}
\left(s\right) x_{0,2} \left(s\right) \cdots x_{0,L}\left(s\right)
\right]^T$, $\mathbf y(s) = \left[ y_{K+1,1}\left(s\right)
y_{K+1,2}\left(s\right) \cdots y_{K+1,L}\left(s\right) \right]^T$,
and $\mathbf n(s) = \left[ n_{1,1}\left(s\right)
n_{1,2}\left(s\right) \cdots n_{L,l_L}\left(s\right) \right]^T$, we
have the following equivalent lower-triangular matrix between the
end nodes:
\begin{equation}
\mathbf y(s) = \mathbf H_T \mathbf x(s) + \mathbf Q \mathbf n(s).
\end{equation}
Here,
\begin{equation}
\mathbf H_T= \left(
\begin{array}{cccc}
f_{1,1} & 0 & 0 & \ldots \\
f_{2,1} & f_{2,2} & 0 & \ldots \\
\vdots & \vdots & \vdots & \ddots \\
f_{L,1} & f_{L,2} & \ldots & f_{L,L}
\end{array}
\right),
\end{equation}
where $f_{i,i}=\displaystyle \prod_{j=1}^{l_i} h_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} \alpha_{i,j}$, and
\begin{equation}
\mathbf Q= \left(
\begin{array}{ccccccc}
q_{1,1,1} & \ldots & q_{1,1,l_1} & 0 & 0 & 0 & \ldots \\
q_{2,1,1} & \ldots & q_{2,1,l_1} & \ldots & q_{2,2,l_2} & 0 & \ldots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\
q_{L,1,1} & q_{L,1,2} & \ldots & \ldots& \ldots& q_{L,L,l_L-1} & q_{L,L,l_L}
\end{array}
\right).
\end{equation}
Let us define $\mu_e$ for every $e \in E$ such that
$|h_e|^2=P^{-\mu_e}$. First, we observe that similar to the proof of
Theorem \ref{thm:DMT-n-ir}, it can be shown that i)~$\alpha_{i,j}
\doteq 1$ with probability 1\footnote{More precisely, with
probability greater than $1-P^{-\delta}$, for any $\delta > 0$.}, ii)~we can restrict ourselves to the region $\mathbb R_+$, i.e., the region $\boldsymbol{\mu} > \mathbf 0$. These two facts imply that $|q_{i, j, m}| \dot \leq 1$. This
means there exists a constant $c$ which depends just on
the topology of the graph $G$ and the path sequence such that
$\mathbf P_n \triangleq \mathbf Q \mathbf Q^H \preccurlyeq c \mathbf
I_L$ (by a similar argument as in the proof of Theorem 3). Hence, similar to the arguments in the equation series
\eqref{eq:R_hat_wi}, the outage probability can be bounded as
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} & = & \mathbb P \left\{ \left|
\mathbf I_{L} + P \mathbf H_T \mathbf H_T^H \mathbf P_n^{-1} \right| \leq P^{Sr} \right\}\nonumber \\
& \dot \leq & \mathbb P \left\{ \left| \mathbf H_T \right| \left| \mathbf H_T^H \right| \leq P^{Sr-L} \right\} \nonumber \\
& = & \mathbb P \left\{ \sum_{e \in E} \beta_e \mu_e \geq L - Sr \right\} \nonumber \\
& \doteq & \mathbb P \left\{\boldsymbol \mu \geq \mathbf 0, \sum_{e \in E} \beta_e \mu_e \geq (L - Sr)^+ \right\},
\end{eqnarray}
where $\beta_e$ is the number of paths in the path sequence that
pass through $e$. Knowing that $\mathbb P \left\{ \boldsymbol \mu \geq
\boldsymbol \mu^0 \right\} \doteq P^{-\mathbf 1 \cdot \boldsymbol
\mu^0}$ and computing the derivative, we have $f_{\boldsymbol
\mu}(\boldsymbol \mu) \doteq P^{-\mathbf 1 \cdot \boldsymbol \mu}$.
Defining $\mathcal R = \left\{ \boldsymbol \mu >
\mathbf 0, \sum_{e \in E} \beta_e \mu_e \geq (L - Sr)^+ \right\}$
and applying the results of equation series \eqref{eq:t2_r_wi}, we
obtain
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} & \dot \leq & P^{- \displaystyle \min_{\boldsymbol \mu \in \mathcal R}
\mathbf 1 \cdot \boldsymbol \mu } \stackrel{(a)}{=} P^{\displaystyle - \frac{L}{\max_{e \in E} \beta_e}
\left(1 - \frac{S}{L}r \right)^+}, \label{eq:t4_last_eq}
\end{eqnarray}
where $(a)$ follows from the fact that for every $\boldsymbol \mu
\in \mathcal R$, $\left(L - Sr \right)^+ \leq \sum_{e \in E} \beta_e
\mu_e \leq \max_{e \in E} \beta_e \sum_{e \in E} \mu_e$ which implies that $\sum_{e \in E} \mu_e = \mathbf 1 \cdot \boldsymbol{\mu} \geq \frac{\left(L - Sr \right)^+}{\max_{e \in E} \beta_e}$, and on the
other hand, defining $\boldsymbol \mu^\star$ such that $\boldsymbol
\mu^\star(\hat e) = \frac{(L-Sr)^+}{\beta_{\hat e}}$ where $\hat e =
\underset{e \in E}{\mbox{argmax}}~\beta_e$ and otherwise
$\boldsymbol \mu^\star(e) = 0$, we have $\boldsymbol \mu^\star \in
\mathcal R$ and $\mathbf 1 \cdot \boldsymbol \mu^\star =
\frac{L}{\max_{e \in E} \beta_e} \left(1 - \frac{S}{L}r \right)^+$.
\eqref{eq:t4_last_eq} completes the proof of Theorem 4.
\end{proof}
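The lower-bound \eqref{eq:t4_lb} is straightforward to evaluate for a given path sequence; the following sketch (ours, with a hypothetical toy network) computes $\beta_e$ and the resulting bound:

```python
from collections import Counter

def d_rs_lower_bound(paths, S):
    """Evaluate the right-hand side of Eq. (t4_lb),
    (L / max_e beta_e) * (1 - (S/L) r)^+, as a function of r.
    `paths` is a list of paths, each given as a list of edges."""
    L = len(paths)
    beta = Counter(e for p in paths for e in p)   # beta_e: paths through edge e
    b_max = max(beta.values())
    return lambda r: (L / b_max) * max(0.0, 1.0 - (S / L) * r)

# Hypothetical toy instance: two edge-disjoint two-hop paths, each used
# twice in the path sequence (L = 4), scheduled over S = 5 slots; every
# used edge then has beta_e = 2, so L / max_e beta_e = 2.
paths = [[("s", 1), (1, "d")], [("s", 2), (2, "d")]] * 2
d = d_rs_lower_bound(paths, S=5)
print(d(0.0))   # prints 2.0
print(d(0.4))   # 2 * (1 - 1.25 * 0.4) = 1.0
```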
\textit{Remark 5-} The lower-bound of \eqref{eq:t1_lb} can also be proved by using the lower-bound of \eqref{eq:t4_lb} obtained for DMT of the general RS scheme. In order to prove this, one needs to apply the RS scheme with the same path sequence and timing sequence used in the proof of \eqref{eq:t1_lb} in Theorem 1. Putting $S=L_0 d_G$ and $S\leq l_G L$ in \eqref{eq:t4_lb} and noting that for all $e \in E$, we have $\beta_e \in \left\{0, L_0 \right\}$, \eqref{eq:t1_lb} is easily obtained.
\textit{Remark 6-} It should be noted that \eqref{eq:t1_ub} is still an upper-bound on the DMT of the RS scheme, i.e., even for the case of interfering relays. This is due to the fact that the proof of \eqref{eq:t1_ub} does not use the non-interfering relaying assumption. However, by employing the RS scheme with causal-interfering relaying and applying \eqref{eq:t4_lb}, one can find a larger family of graph topologies that achieve \eqref{eq:t1_ub}. One such example is the two-hop relay network studied in Theorem 3. Another example is the case where $G$ is a directed acyclic graph (DAG)\footnote{A directed acyclic graph $G$ is a directed graph that has no directed cycles.} and the relays operate in the full-duplex mode. Here, the argument is similar to that of \textit{Remark 2}. Assume that each $\mathrm { \hat p_i}$ is used $L_0$ times in the path sequence, in the form $\mathrm p_{(i-1)L_0+j} \triangleq \mathrm { \hat p_i}, 1 \leq j \leq L_0$. Let us modify the timing sequence as $s_{i, j}=i + j - 1 + \displaystyle \sum_{k=1}^{\lceil \frac{i}{L_0} \rceil - 1} \hat l_k$, which results in $S=L+\sum_{i=1}^{d_G} l_i$. It is easy to verify that only non-causal interference exists between the signals corresponding to different paths. However, by considering the paths in the reverse order, or equivalently reversing the time axis, the paths can be observed with causal interference. Hence, the result of Theorem 4 is still valid for such paths. Knowing that $\beta_e \in \left\{0, L_0 \right\}$ for all $e \in E$ and applying \eqref{eq:t4_lb}, we have $d_{RS}(r) \geq d_G \left( 1 - r - \frac{\sum_{i=1}^{d_G} l_i}{L_0d_G} \right)^+$, which achieves \eqref{eq:t1_ub} for asymptotically large values of $L_0$. This fact is also observed in \cite{vkumar}.
\subsection{Multiple-Access Multiple-Relay Scenario}
In this subsection, we generalize the result of Theorem 3 to the
multiple-access scenario aided by multiple relay nodes. Here,
similar to Theorem 3, we assume that there is no direct link between
each transmitter and the receiver. However, no restriction is
imposed on the induced subgraph of $G$ on the relay nodes. Assuming
there are $M$ transmitters, we show that for the rate sequence $r_1
\log (P), r_2 \log (P), \dots, r_M\log (P)$, in the asymptotic case
of $B \to \infty$ ($B$ is the number of sub-blocks), the RS scheme achieves the diversity gain $d_{SM,
MAC}(r_1, r_2, \dots, r_M)=K \left(1 - \sum_{m=1}^{M}{r_m}
\right)^{+}$, which is shown to be optimum by the cut-set bound
applied to the cut between the relays and the receiver. Here, the
notations are slightly modified compared to the ones used in Theorem
3 to emphasize the fact that multiple signals are transmitted from
multiple transmitters. Throughout this subsection and the next one,
$\mathbf{x}_m$ and $h_{m, k}$ denote the transmitted vector at the
$m$'th transmitter and the Rayleigh channel coefficient between the
$m$'th transmitter and the $k$'th relay, respectively. Hence, at the
receive side of the $k$'th relay, we have
\begin{equation}
\mathbf{r}_k = \sum_{m=1}^{M}{h_{m, k} \mathbf{x}_m} + i_{(k)}
\mathbf{t}_{(k)} + \mathbf{n}_k.
\end{equation}
The amplification coefficient at the $k$'th relay is set to
\begin{equation}
\alpha_k = \min \left\{ 1, \sqrt { \frac{P}{P \left( \sum_{m=1}^{M}{\left|
h_{m,k} \right|^2} + \left| i_{(k)} \right|^2 \right) + 1}} \right\}.
\end{equation}
Here, the RS scheme is applied with the same path sequence and
timing sequence as in Theorems 2 and 3. However, it
should be mentioned that in the current case, during the slots in
which the transmitters are supposed to transmit, i.e., the
$s_{i,1}$'th slots, all the transmitters send their signals
simultaneously. Moreover, at the receiver side, after receiving the $BK$
vectors corresponding to the outputs of the $BK$ paths, the
destination node decodes the messages $\omega_1, \omega_2, \dots,
\omega_M$ by joint-typical decoding of the received vectors in the
corresponding $BK$ slots and the transmitted signals of all the
transmitters, i.e., in the same way that joint-typical decoding works
in the multiple access setup~\cite{cover_book}. Now, we prove the
main result of this subsection.
\begin{thm}
Consider a multiple-access channel consisting of $M$ transmitting
nodes aided by $K>1$ half-duplex relays. Assume there is no direct
link between the transmitters and the receiver. The RS scheme with
the path sequence and timing sequence defined in Theorems 2 and 3
achieves a diversity gain of
\begin{equation}
d_{RS,MAC}(r_1, r_2, \dots, r_M) \geq \left[ K \left( 1 -
\sum_{m=1}^{M}{r_m} \right) - \frac{\sum_{m=1}^{M}{r_m}}{B}
\right]^+, \label{eq:t3}
\end{equation}
where $r_1, r_2, \dots, r_M$ are the multiplexing gains corresponding to users
$1,2,\dots,M$. Moreover, as $B \to \infty$, it achieves the optimum
DMT which is $d_{opt, MAC}(r_1, r_2, \dots, r_M)=K \left(1 -
\sum_{m=1}^{M}{r_m} \right)^{+}$.
\end{thm}
\begin{proof}
At the receiver side, we have
\begin{eqnarray}
\mathbf{y}_{b,k} & = & g_{(k)} \mathbf{t}_{b, k} + \mathbf{z}_{b, k}
\nonumber \\
& = & g_{(k)} \alpha _{(k)} \left( \sum_{\begin{subarray}{c} 1
\leq b_1 \leq b, 1 \leq k_1 \leq K \\ b_1 K + k_1 < b K + k \end{subarray}
}{p_{b-b_1, k, k_1}\left(
\sum_{m=1}^{M}{h_{m,k_1}\mathbf{x}_{m, b_1, k_1}} + \mathbf{n}_{b_1,
k_1}\right) } \right) + \mathbf{z}_{b, k}.
\end{eqnarray}
where $p_{b, k, k_1}$ is defined in the proof of Theorem 3 and
$\mathbf{x}_{m, b, k}$ represents the transmitted signal of the
$m$'th sender in the $k$'th slot of the $b$'th sub-block. Similar to
(\ref{eq:ref1}), we have
\begin{equation}
\mathbf{y}\left(s\right) = \mathbf{G} \mathbf{\Omega} \mathbf{F}
\left( \sum_{m=1}^{M}{ \mathbf{H}_m \mathbf{x}_m \left(s\right)} +
\mathbf{n}\left(s\right) \right) + \mathbf{z}\left(s\right),
\end{equation}
where $\mathbf{H}_m= \mathbf{I}_B \otimes \textit{diag}\left\{
h_{m,1}, h_{m,2}, \cdots ,h_{m,K} \right\}$,
$\mathbf{x}_m\left(s\right)=[x_{m,1,1}(s), x_{m,1, 2}(s), \cdots
,x_{m, B, K}(s)]^T$, and $\mathbf {y}\left(s\right), \mathbf
{n}\left(s\right), \mathbf {z}\left(s\right), \mathbf G , \mathbf
\Omega , \mathbf F$ are as defined in the
proof of Theorem 3. Similarly, we observe that the entire channel
from each of the transmitters to the receiver acts as a MIMO channel
with a lower triangular matrix of size $BK \times BK$.
Here, the outage event occurs whenever there exists a subset
$\mathcal S \subseteq \left\{ 1, 2, \dots, M \right\}$ of the
transmitters such that
\begin{equation}
I \left( \mathbf{x}_{\mathcal S}(s) ; \mathbf y(s) |
\mathbf{x}_{\mathcal S^c} (s) \right) \leq \left( BK + 1 \right)
\left( \sum_{m \in \mathcal S}{r_m} \right) \log (P).
\end{equation}
This event is equivalent to
\begin{equation}
\log \left|\mathbf{I}_{BK} + P
\mathbf{H}_{T}\mathbf{H}_{T}^{H}\mathbf{P}_n^{-1} \right| \leq
(BK+1) \left( \sum_{m \in \mathcal S}{r_m} \right) \log\left( P
\right),
\end{equation}
where $\mathbf{P}_n$ is defined in the proof of Theorem 3, $\mathbf{H}_T =
\mathbf G \mathbf \Omega \mathbf F \mathbf{H}_{\mathcal S}$, and
\begin{equation}
\mathbf{H}_{\mathcal S}= \mathbf{I}_B \otimes \textit{diag}\left\{
\sqrt{\sum_{m \in \mathcal S}\left| h_{m,1}\right|^2}, \sqrt{\sum_{m
\in \mathcal S}\left| h_{m,2}\right|^2}, \cdots , \sqrt{\sum_{m \in
\mathcal S}\left| h_{m,K}\right|^2} \right\}.
\end{equation}
Defining such an event as $\mathcal E_{\mathcal S}$ and the outage
event as $\mathcal E$, we have
\begin{eqnarray}
\mathbb {P} \left\{ \mathcal E \right\} & = & \mathbb {P} \left\{
\bigcup_{\mathcal S \subseteq \left\{1, 2, \dots, M \right\}}
\mathcal E_{\mathcal S} \right\} \nonumber \\
& \leq & \sum_{\mathcal S \subseteq \left\{1, 2, \dots, M
\right\}}{\mathbb {P} \left\{ \mathcal E_{\mathcal S} \right\}} \nonumber \\
& \leq & (2^M - 1)\max_{\mathcal S \subseteq \left\{1, 2, \dots, M
\right\}}{\mathbb {P} \left\{ \mathcal E_{\mathcal S} \right\}} \nonumber \\
& \doteq & \max_{\mathcal S \subseteq \left\{1, 2, \dots, M
\right\}}{\mathbb {P} \left\{ \mathcal E_{\mathcal S} \right\}}.
\label{eq:t3_r00}
\end{eqnarray}
Hence, it is sufficient to upper-bound $\mathbb {P} \left\{ \mathcal
E_{\mathcal S} \right\}$ for all $\mathcal S$.
Defining $\hat{\mathbf{H}}_{\mathcal S} = \mathbf{I}_B \otimes
\textit{diag}\left\{ \max_{m \in \mathcal S}\left| h_{m,1} \right|,
\max_{m \in \mathcal S}\left| h_{m,2} \right|, \cdots , \max_{m \in
\mathcal S}\left| h_{m,K} \right| \right\}$, we have
$\hat{\mathbf{H}}_{\mathcal S} \hat{\mathbf{H}}_{\mathcal S}^H
\preccurlyeq \mathbf{H}_{\mathcal S} \mathbf{H}_{\mathcal S}^H$.
Therefore,
\begin{eqnarray}
\mathbb {P} \left\{ \mathcal E_{\mathcal S} \right\} & \leq & \mathbb P \left
\{ \log \left|\mathbf{I}_{BK} + P \mathbf G \mathbf \Omega \mathbf F
\hat{\mathbf{H}}_{\mathcal S}\hat{\mathbf{H}}_{\mathcal S}^{H}
\mathbf F^H \mathbf \Omega^H \mathbf G^H \mathbf{P}_n^{-1} \right|
\leq (BK+1) \left( \sum_{m \in \mathcal S}{r_m} \right) \log\left( P
\right) \right\} \nonumber \\ & \triangleq & \mathbb P \left\{
\hat{\mathcal E}_{\mathcal S} \right\}. \label{eq:t3_r01}
\end{eqnarray}
Assume $\max_{m \in \mathcal S}|h_{m, k}|^2=P^{-\mu_k}$,
$|g_k|^2=P^{-\nu_k}$, and $|i_k|^2=P^{-\omega_k}$, and define $\mathcal{R}$ as
the region in $\mathbb{R}^{3K}$ that defines the outage event
$\hat{\mathcal E}_{\mathcal S}$ in terms of the vector $[\boldsymbol
\mu^T, \boldsymbol \nu^T, \boldsymbol \omega^T]^T$. Similar to the proof of Theorem 3, we have $\mathbb{P} \left\{ \mathcal R \right\} \doteq
\mathbb{P} \left\{ \mathcal{R}_{+} \right\}$ where $\mathcal{R}_{+}
= \mathcal R \bigcap \mathbb{R}_{+}^{3K}$. Rewriting the equation
series of (\ref{eq:R_hat_wi}), we have
\begin{eqnarray}
\mathbb{P} \left\{ \hat{\mathcal E}_{\mathcal S} \right\} &
\dot{\leq} & \mathbb{P} \left\{ - BK \frac{\log \left[ 3 \left(
B^2K^2+1 \right) \right]}{\log (P)} + BK\left( 1- \sum_{m \in
\mathcal S}{r_m} \right) - \sum_{m \in \mathcal S}{r_m} \leq B
\sum_{k=1}^{K}{(\mu_k + \nu_k)},\right. \nonumber \\
&& \mu_k,\nu_k,\omega_k \geq 0 \Bigg\}. \label{eq:t3_r1}
\end{eqnarray}
On the other hand, as $\left\{h_{m,k}\right\}$'s are independent
random variables, we conclude that for $\boldsymbol{\mu}^0,
\boldsymbol{\nu}^0 \geq \mathbf 0$, we have
$\mathbb{P} \left\{\boldsymbol{\mu} \geq \boldsymbol{\mu}^0,
\boldsymbol{\nu} \geq \boldsymbol{\nu}^0 \right\} \doteq P^{-\mathbf{1} \cdot \left(
\left| \mathcal S \right| \boldsymbol{\mu}^0 + \boldsymbol{\nu}^0 \right)}$. Similar to the proof of Theorem 3,
by computing the derivative with respect to $\boldsymbol \mu,
\boldsymbol \nu$, we have $f_{\boldsymbol \mu, \boldsymbol
\nu}(\boldsymbol \mu, \boldsymbol \nu) \doteq P^{- \mathbf 1 \cdot
\left( \left| \mathcal S \right| \boldsymbol \mu + \boldsymbol \nu
\right)}$. Defining $l_0 \triangleq - \frac{\log
\left[ 3 \left(B^2K^2+1 \right) \right] }{\log (P)} + \left( 1-
\sum_{m \in \mathcal S}{r_m} \right) - \frac{\sum_{m \in \mathcal
S}{r_m}}{BK} $, the region $\hat{\mathcal{R}}$ as
$\hat{\mathcal{R}} \triangleq \left\{ \boldsymbol \mu,\boldsymbol \nu \geq
\mathbf{0}, \frac{1}{K} \mathbf{1} \cdot \left( \boldsymbol \mu +
\boldsymbol \nu\right) \geq l_0 \right\}$, the cube $\mathcal I$ as
$\mathcal I \triangleq \left[0, Kl_0 \right]^{2K}$, and for $1 \leq i \leq
2K$, $\mathcal{I}_i^c=[0, \infty )^{i-1} \times [Kl_0, \infty )
\times [0, \infty )^{2K-i}$, we have
\begin{eqnarray}
\mathbb{P} \left\{ \hat{\mathcal E}_{\mathcal S} \right\} &
\stackrel{(a)}{\dot{\leq}}
& \mathbb{P} \{ \hat{\mathcal R} \} \nonumber \\
& \leq & \int_{\mathcal{\hat{R}} \bigcap \mathcal{I}}{f_{\boldsymbol
\mu, \boldsymbol \nu}\left(\boldsymbol \mu, \boldsymbol \nu \right) d\boldsymbol \mu
d \boldsymbol \nu} + \sum_{i=1}^{2K}{\mathbb{P} \left\{ [\boldsymbol \mu^T,
\boldsymbol \nu^T]^T \in \mathcal{\hat{R}} \cap \mathcal{I}_i^c \right\}}
\nonumber
\\
&\dot{\leq} & \mbox{vol} (\mathcal{\hat{R}} \cap \mathcal{I})
P^{\displaystyle -\min_{\left[ \boldsymbol{\mu}, \boldsymbol{\nu} \right] \in
\mathcal{\hat{R}} \bigcap \mathcal{I}} \mathbf{1} \cdot \left(
\left| \mathcal S \right| \boldsymbol{\mu} + \boldsymbol{\nu} \right) }
+ 2K P^{-Kl_0}
\nonumber \\
& \stackrel{(b)}{\doteq} & P^{-Kl_0} \nonumber \\
& \doteq & P^{-\left[K \left( 1 - \sum_{m \in \mathcal S}{r_m}
\right) - \frac{\sum_{m \in \mathcal S}{r_m}}{B} \right]}.
\label{eq:t3_r2}
\end{eqnarray}
Here, (a) follows from (\ref{eq:t3_r1}) and (b) follows from the
fact that $\mathcal{\hat{R}} \bigcap \mathcal{I}$ is a bounded
region whose volume is independent of $P$ and the fact that $\min_{\left[ \boldsymbol{\mu}, \boldsymbol{\nu} \right] \in
\mathcal{\hat{R}} \bigcap \mathcal{I}} \mathbf{1} \cdot \left(
\left| \mathcal S \right| \boldsymbol{\mu} + \boldsymbol{\nu} \right) = K l_0 $, which is achieved by having $\boldsymbol{\mu}=\mathbf 0$. Comparing
(\ref{eq:t3_r00}), (\ref{eq:t3_r01}) and (\ref{eq:t3_r2}), we
observe
\begin{equation}
\mathbb {P} \left\{ \mathcal E \right\} \dot{\leq} \max_{\mathcal S
\subseteq \left\{1, 2, \dots, M \right\}}{\mathbb {P} \left\{
\mathcal E_{\mathcal S} \right\}} \dot{\leq} \max_{\mathcal S \subseteq
\left\{1, 2, \dots, M \right\}}{\mathbb{P} \left\{ \hat{\mathcal
E}_{\mathcal S} \right\}} \dot{\leq} P^{-\left[K \left( 1 -
\sum_{m=1}^{M}{r_m} \right) - \frac{\sum_{m=1}^{M}{r_m}}{B}
\right]}.
\end{equation}
Next, we prove that $K \left( 1 - \sum_{m=1}^{M}{r_m} \right)^+$ is
an upper-bound on the diversity gain of the system corresponding to
the sequence of rates $r_1, r_2, \dots, r_M$. We have
\begin{equation}
\mathbb {P} \left\{ \mathcal E \right\} \geq \mathbb {P} \left\{
\max_{p\left(\mathbf t_1, \mathbf t_2, \dots , \mathbf t_K
\right)}{I \left(\mathbf t_1, \mathbf t_2, \dots , \mathbf t_K ;
\mathbf y \right)} \leq \left( \sum_{m=1}^{M}{r_m} \right) \log (P)
\right\} \stackrel{(a)}{\doteq} P^{-K\left( 1 - \sum_{m=1}^{M}{r_m}
\right)^+}.
\end{equation}
Here, (a) follows from the DMT of the point-to-point MISO channel
proved in \cite{zheng_tse}. This completes the proof.
\end{proof}
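The bound \eqref{eq:t3} and its cut-set limit can be evaluated numerically (function names and sample rates ours):

```python
def d_rs_mac(rates, K, B):
    """Lower bound of Eq. (t3):
    d_RS,MAC(r_1,...,r_M) >= [K(1 - sum_m r_m) - (sum_m r_m)/B]^+."""
    s = sum(rates)
    return max(0.0, K * (1.0 - s) - s / B)

def d_opt_mac(rates, K):
    """Cut-set bound K (1 - sum_m r_m)^+, met by the RS scheme as B -> inf."""
    return K * max(0.0, 1.0 - sum(rates))

rates, K = (0.25, 0.25), 4        # two users, sum of multiplexing gains 0.5
for B in (1, 10, 1000):
    print(B, d_rs_mac(rates, K, B))   # gap to the cut-set bound is 0.5/B
print(d_opt_mac(rates, K))            # prints 2.0
```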
\textit{Remark 7-} The argument of Theorem 5 remains valid in the
general case in which an arbitrary set of relay pairs is non-interfering.
\textit{Remark 8-} In the \textit{symmetric} situation, for which the
multiplexing gains of all the users are equal to, say, $r$, the
lower-bound in \eqref{eq:t3} takes a simple form. First, we observe
that the maximum multiplexing gain which is simultaneously
achievable by all the users is $\frac{1}{M} \cdot \frac{BK}{BK+1}$.
Noting that no signal is sent to the receiver during a $\frac{1}{BK+1}$
fraction of the time, we observe that the RS scheme achieves the
maximum possible symmetric multiplexing gain for all the users.
Moreover, from (\ref{eq:t3}), we observe that the RS scheme achieves
the maximum diversity gain of $K$ for any finite value of $B$, which
turns out to be tight as well. Finally, the lower-bound on the DMT
of the RS scheme is simplified to $\left[ K \left( 1 - Mr\right) -
\frac{Mr}{B} \right]^+$ for the \textit{symmetric} situation.
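The symmetric-rate expressions of this remark can be sketched as follows (function names ours):

```python
def d_rs_mac_sym(r, M, K, B):
    """Symmetric-rate form of Eq. (t3): [K(1 - M r) - M r / B]^+."""
    return max(0.0, K * (1.0 - M * r) - M * r / B)

def max_sym_mux(M, B, K):
    """Largest r simultaneously achievable by all M users:
    (1/M) * BK/(BK+1), since a 1/(BK+1) fraction of the time is idle."""
    return (1.0 / M) * (B * K) / (B * K + 1.0)

M, K, B = 2, 3, 10
print(max_sym_mux(M, B, K))         # just below 1/M = 0.5
print(d_rs_mac_sym(0.0, M, K, B))   # full diversity K at r = 0: prints 3.0
```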
\subsection{Multiple-Access Single Relay Scenario}
As observed, the arguments of Theorems 2, 3 and 5 concerning the DMT
optimality of the RS scheme are valid for the scenario of
multiple relays ($K > 1$). Indeed, for the single relay scenario,
the RS scheme reduces to simple amplify-and-forward relaying,
in which the relay listens to the transmitter in the first half of
the frame and transmits the amplified version of the received signal
in the second half. However, as in the case of non-interfering relays
studied in \cite{yang_belfiore2}, the DMT optimality arguments are
no longer valid. On the other hand, we show that the DDF scheme
achieves the optimum DMT for this scenario.
\begin{thm}
Consider a multiple-access channel consisting of $M$ transmitting
nodes aided by a single half-duplex relay. Assume that all the
network nodes are equipped with a single antenna and there is no
direct link between the transmitters and the receiver. The
amplify-and-forward scheme achieves the following DMT
\begin{equation}
d_{AF,MAC}(r_1, r_2, \dots , r_M) = \left( 1 - 2 \sum_{m=1}^{M}{r_m}
\right)^{+}. \label{eq:mac_1r_af}
\end{equation}
However, the optimum DMT of the network is
\begin{equation}
d_{MAC}(r_1, r_2, \dots , r_M) = \left( 1 -
\frac{\sum_{m=1}^{M}{r_m}}{1-\sum_{m=1}^{M}{r_m}}
\right)^{+}, \label{eq:mac_1r_ddf}
\end{equation}
which is achievable by the DDF scheme of \cite{azarian}.
\end{thm}
\begin{proof}
First, we show that the DMT of the AF scheme follows
(\ref{eq:mac_1r_af}). At the receiver side, we have
\begin{equation}
\mathbf y = g \alpha \left( \sum_{m=1}^{M}{h_m \mathbf x_m} +
\mathbf n \right) + \mathbf z,
\end{equation}
where $h_m$ is the channel gain between the $m$'th transmitter and
the relay, $g$ is the down-link channel gain, and $\alpha = \sqrt{
\frac{P}{P \sum_{m=1}^{M}{\left| h_m \right|^2} + 1} }$ is the
amplification coefficient. Defining the outage event $\mathcal{E_S}$
for a set $\mathcal S \subseteq \{ 1, 2, \dots, M\}$, similar to the
case of Theorem 5, we have
\begin{eqnarray}
\mathbb P \left\{ \mathcal{E_S} \right\} & = & \mathbb P \left\{ I
\left( \mathbf{x}_{\mathcal S} ; \mathbf y | \mathbf{x}_{\mathcal
S^c} \right) < 2 \left( \sum_{m \in \mathcal S}{r_m} \right) \log
(P) \right\}
\nonumber \\
& = & \mathbb P \left\{
\log \left( 1 + P \left( \sum_{m \in \mathcal S}{\left| h_m \right|^2} \right) \left| g \right|^2
\left| \alpha \right|^2 \left( 1 + \left| g \right|^2 \left|
\alpha \right|^2 \right)^{-1} \right) < 2 \left( \sum_{m \in
\mathcal S}{r_m} \right) \log (P) \right\} \nonumber \\
&\doteq & \mathbb{P}\left\{\left( \sum_{m \in \mathcal S}{\left| h_m
\right|^2} \right) |g|^2|{\alpha}|^2 \min \left\{ 1,
\frac{1}{|g|^2|{\alpha}|^2} \right\} \leq
P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)} \right\} \nonumber \\
& \stackrel{(a)}{\doteq} & \mathbb{P} \left\{\sum_{m \in \mathcal S}{\left| h_m
\right|^2} \leq P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)}
\right\} + \nonumber \\
& & \mathbb{P} \left\{ \left( \sum_{m \in \mathcal S}{\left|
h_m \right|^2} \right) |g|^2 |\alpha|^2 \leq
P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)} \right\} \nonumber \\
& \stackrel{(b)}{\doteq} & \mathbb{P} \left\{\sum_{m \in \mathcal S}{\left| h_m
\right|^2} \leq P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)}
\right\} + \nonumber \\
& & \mathbb{P} \left\{ |g|^2 \left( \sum_{m \in \mathcal S}{\left|
h_m \right|^2} \right) \min \left\{P , \frac{1}{
\sum_{m=1}^{M}{\left| h_m \right|^2} }\right\} \leq
P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)} \right\} \nonumber \\
& \stackrel{(a)}{\doteq} & \mathbb{P} \left\{\sum_{m \in \mathcal S}{\left| h_m
\right|^2} \leq P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)}
\right\}
+ \nonumber \\
& & \mathbb P \left\{ \left| g \right|^2 \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right) \leq P^{-2 \left( 1 - \sum_{m
\in \mathcal S}{r_m} \right)} \right\} + \nonumber \\
& & \mathbb P \left\{ \frac {\left| g
\right|^2 \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } }{\sum_{m=1}^{M}{\left| h_{m} \right|^2}} \leq P^{-
\left(1-2\sum_{m \in \mathcal S}{r_m}\right) } \right\} \label{eq:1r_1}.
\end{eqnarray}
In the above equation, $(a)$ comes from the fact that $\mathbb P \{\min (X,Y) \leq z\} = \mathbb P \left\{ (X \leq z) \bigcup (Y \leq z)\right\} \doteq \mathbb P \{X \leq z\} + \mathbb P \{Y \leq z \}$. $(b)$ follows from the fact that $|\alpha|^2$ can be asymptotically written as $\min \left\{P , \frac{1}{
\sum_{m=1}^{M}{\left| h_m \right|^2} }\right\}$.
Since $\{|h_m|^2\}_{m=1}^M$ are i.i.d. random variables with exponential distribution, it follows that $\sum_{m \in \mathcal S}{\left| h_m
\right|^2}$ has Chi-square distribution with $2 |\mathcal S|$ degrees of freedom, which implies that
\begin{eqnarray}
\mathbb{P} \left\{\sum_{m \in \mathcal S}{\left| h_m
\right|^2} \leq P^{-\left(1-2\sum_{m \in \mathcal S}{r_m}\right)}
\right\} \doteq P^{-|\mathcal S|\left(1-2\sum_{m \in \mathcal S}{r_m}\right)}. \label{eq:1}
\end{eqnarray}
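The exponential order in \eqref{eq:1} can be illustrated numerically: for a sum of $k=|\mathcal S|$ i.i.d.\ unit-mean exponential variables, the probability of falling below a small threshold $\epsilon$ behaves like $\epsilon^k/k!$, so the SNR exponent scales linearly with $|\mathcal S|$. The Monte Carlo sketch below (ours, for illustration only) checks this against the closed-form Gamma CDF for $k=2$.

```python
import math
import random

random.seed(1)

# For k i.i.d. unit-mean exponentials (the |h_m|^2 terms), the probability
# of the sum falling below a small eps behaves like eps^k / k!, so the
# outage exponent grows linearly in |S|, matching (eq:1).
k, eps, n = 2, 0.1, 500_000
hits = sum(
    1 for _ in range(n)
    if sum(random.expovariate(1.0) for _ in range(k)) <= eps
)
estimate = hits / n
exact = 1.0 - (1.0 + eps) * math.exp(-eps)  # Gamma(2, 1) CDF at eps
assert abs(estimate - exact) < 1e-3
```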
To compute the second term in (\ref{eq:1r_1}), defining $\epsilon_1 \triangleq P^{-2 \left(1-\sum_{m \in \mathcal S}{r_m}\right)}$, we have
\begin{eqnarray}
\mathbb P \left \{ \left| g \right|^2 \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right) \leq \epsilon_1 \right\} &\stackrel{(a)}{\dot \geq}& \mathbb P \left \{ \left| g \right|^2 \leq \epsilon_1 \right\} \notag\\
&\doteq& \epsilon_1, \label{eq:t4_t1}
\end{eqnarray}
where $(a)$ follows from the fact that as $\sum_{m \in \mathcal S}{\left| h_m
\right|^2}$ has Chi-square distribution, we have $\sum_{m \in \mathcal S}{\left| h_m
\right|^2} \dot \leq 1$ with probability one (more
precisely, with a probability greater than $1 - P^{-\delta}$ for
every $\delta > 0$).
On the other hand, we have
\begin{eqnarray}
\mathbb P \left \{ \left| g \right|^2 \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right) \leq \epsilon_1\right\}
&\leq& \mathbb P \left\{ \left| g \right|^2 |h_m|^2\leq \epsilon_1 \right\} \notag\\
&\doteq& \epsilon_1. \label{eq:t4_t2}
\end{eqnarray}
Putting (\ref{eq:t4_t1}) and (\ref{eq:t4_t2}) together, we have
\begin{equation}
\mathbb P \left \{ \left| g \right|^2 \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right) \leq \epsilon_1 \right\} \doteq \epsilon_1 . \label{eq:t4_t3}
\end{equation}
Now, to compute the third term in (\ref{eq:1r_1}), defining $\epsilon_2 \triangleq P^{- \left(1-2\sum_{m \in \mathcal S}{r_m}\right)}$, we observe
\begin{equation}
\epsilon_2 \doteq \mathbb P \left\{ \left| g \right|^2 \leq \epsilon_2 \right\}
\leq \mathbb P \left\{ \left| g \right|^2 \frac {\sum_{m \in \mathcal S}{ \left| h_m \right|^2 }}
{\sum_{m=1}^{M}{\left| h_m \right|^2 }} \leq \epsilon_2 \right\} \stackrel{(a)}{\dot{\leq}}
\mathbb P \left\{ \left| g \right|^2 \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right)
\leq \epsilon_2 \right\} \stackrel{(b)}{\doteq} \epsilon_2 . \nonumber
\end{equation}
Here, $(a)$ follows from the fact that with probability one, we have $\sum_{m=1}^{M}{\left| h_m \right|^2 } \dot{\leq} 1$ and $(b)$ follows
from (\ref{eq:t4_t3}). As a result
\begin{equation}
\mathbb P \left\{ \left| g \right|^2 \frac {\sum_{m \in \mathcal S}{ \left| h_m \right|^2 }}
{\sum_{m=1}^{M}{\left| h_m \right|^2 }} \leq \epsilon_2 \right\} \doteq \epsilon_2 \label{eq:t4_t4}
\end{equation}
From (\ref{eq:1}), (\ref{eq:t4_t3}), and (\ref{eq:t4_t4}), we have
\begin{equation}
\mathbb P \left\{ \mathcal{E_S} \right\} \doteq P^{- \left| \mathcal S \right|
\left( 1 - 2 \sum_{m \in \mathcal S}{r_m} \right)^+} + P^{-2 \left( 1 - \sum_{m \in \mathcal S}{r_m} \right)^+}
+ P^{- \left( 1 - 2 \sum_{m \in \mathcal S}{r_m} \right)^+} \doteq P^{-\left( 1 - 2 \sum_{m \in \mathcal S}{r_m} \right)^+}. \label{eq:t4_t5}
\end{equation}
Observing (\ref{eq:t4_t5}) and applying the argument of (\ref{eq:t3_r00}), we have
\begin{equation}
\mathbb P \left\{ \mathcal E \right\} \doteq \max_{\mathcal S \subseteq
\left\{1,2,\dots,M \right\}}{\mathbb P \left\{ \mathcal{E_S} \right\} } \doteq P^{-\left( 1 - 2 \sum_{m=1}^{M}{r_m} \right)^+}.
\end{equation}
This completes the proof for the AF scheme. Now, to compute the DMT
of the DDF scheme, let us assume that the relay listens to the
transmitted signal for a fraction $l$ of the time, until it can decode it perfectly. Hence, we have
\begin{equation}
l = \min \left\{ 1, \max_{\mathcal S \subseteq \left\{ 1, 2, \dots, M \right\} }
{\frac { \left( \sum_{m \in \mathcal S}{r_m} \right) \log (P) }
{\log \left( 1+ \left( \sum_{m \in \mathcal S}{ \left| h_m \right|^2 } \right) P \right) } } \right\}.
\end{equation}
The outage event occurs whenever the relay cannot transmit the
re-encoded information in the remaining portion of the time. Hence,
we have
\begin{equation}
\mathbb P \left\{ \mathcal E \right\} \doteq \mathbb P \left\{ \left( 1- l \right)
\log \left( 1 + \left| g \right|^2 P \right) < \left( \sum_{m=1}^{M}{r_m} \right) \log (P) \right\}.
\end{equation}
Assuming $|h_m|^2=P^{-\mu_m}$ and $|g|^2=P^{-\nu}$, at high SNR, we
have
\begin{equation}
l \approx \min \left\{1, \max_{\mathcal S \subseteq \left\{1,2, \dots , M \right\} }
\frac {\sum_{m \in \mathcal S}{r_m}}{1-\min_{m \in \mathcal S}{\mu_m} } \right\} .
\end{equation}
Equivalently, an outage event occurs whenever
\begin{equation}
\left( 1 - \max_{\mathcal S \subseteq \left\{1,2, \dots , M \right\} }
\frac {\sum_{m \in \mathcal S}{r_m}}{1-\min_{m \in \mathcal S}{\mu_m} } \right)
\left( 1- \nu \right) < \sum_{m=1}^{M}{r_m}. \label{eq:t4_out_reg}
\end{equation}
In order to find the probability of the outage event, we first find an upper-bound on the outage probability and then show that this upper-bound is indeed tight. Defining $R=\sum_{m=1}^{M}{r_m}$ and $\mu =
\sum_{m=1}^{M}{\mu_m}$, we have
\begin{equation}
R \stackrel{(a)}{>} \left( 1 - \frac {\sum_{m \in \mathcal S_0}{r_m}}
{1-\min_{m \in \mathcal S_0}{\mu_m} } \right) \left( 1 - \nu \right) >
\left( 1 - \frac {R}{1- \mu } \right) \left( 1 - \nu \right). \label{eq:t4_t6}
\end{equation}
Here, $(a)$ follows from (\ref{eq:t4_out_reg}). Equivalently,
\begin{equation}
R \stackrel{(a)}{>} \frac{(1 - \mu)(1 - \nu)}{(1 - \mu) + (1 - \nu)}
> \frac{1 - \mu - \nu}{(1 - \mu) + (1 - \nu)}, \label{eq:t4_t7}
\end{equation}
where $(a)$ follows from (\ref{eq:t4_t6}). It can be easily checked
that (\ref{eq:t4_t7}) is equivalent to
\begin{equation}
R > ( 1 - R ) ( 1 - \mu - \nu). \label{eq:t4_t8}
\end{equation}
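The equivalence between \eqref{eq:t4_t7} and \eqref{eq:t4_t8} follows by multiplying both sides by $(1-\mu)+(1-\nu)$, which is positive whenever $\mu+\nu<2$. The short sketch below (ours, not from the paper) spot-checks this equivalence on random points in that region.

```python
import random

random.seed(0)

# Spot-check (not a proof): R > (1 - mu - nu) / ((1 - mu) + (1 - nu))
# holds if and only if R > (1 - R)(1 - mu - nu), provided mu + nu < 2
# so that the denominator (1 - mu) + (1 - nu) is positive.
for _ in range(10_000):
    mu = random.uniform(0.0, 0.95)
    nu = random.uniform(0.0, 0.95)
    R = random.uniform(0.0, 1.0)
    t4_t7 = R > (1.0 - mu - nu) / ((1.0 - mu) + (1.0 - nu))
    t4_t8 = R > (1.0 - R) * (1.0 - mu - nu)
    assert t4_t7 == t4_t8
```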
In other words, any vector point $\left[\mu_1, \mu_2, \dots,
\mu_M, \nu \right]$ in the outage region $\mathcal R$, i.e., the
region that satisfies (\ref{eq:t4_out_reg}), also satisfies (\ref{eq:t4_t8}). As a result, defining $\mathcal{R}'$ as the region defined by (\ref{eq:t4_t8}), we have
\begin{eqnarray}
\mathbb P \{\mathcal E\} &\leq& \mathbb P \{\boldsymbol{\pi} \in \mathcal {R}'\},
\end{eqnarray}
where $\boldsymbol{\pi} \triangleq \left[\mu_1, \mu_2, \dots,
\mu_M, \nu \right]$. Similar to the approach used in the proofs of Theorems 3 and 5, $\mathbb P \{\boldsymbol{\pi} \in \mathcal {R}'\}$ can be computed as
\begin{eqnarray}
\mathbb P \{\boldsymbol{\pi} \in \mathcal {R}'\} \doteq P^{-\frac{R}{1-R}}.
\end{eqnarray}
Hence,
\begin{eqnarray} \label{ttttl}
\mathbb P \{ \mathcal E\} \dot \leq P^{-\frac{R}{1-R}}.
\end{eqnarray}
For lower-bounding the outage probability, we note that all the vectors $[\mu_1, \cdots, \mu_M, \nu]$ for which $\mu_m > 0, m=1, \cdots, M$ and $\nu > \frac{R}{1-R}$, lie in the outage region defined in (\ref{eq:t4_out_reg}). In other words,
\begin{eqnarray} \label{ttttu}
\mathbb P \{ \mathcal E\} &\geq& \mathbb P \left \{\boldsymbol{\pi} > \left[0, \cdots,0, \frac{R}{1-R} \right] \right \} \notag\\
&\doteq& P ^{-\frac{R}{1-R}}.
\end{eqnarray}
Combining (\ref{ttttl}) and (\ref{ttttu}) yields
\begin{eqnarray} \label{tttt}
\mathbb P \{ \mathcal E\} &\doteq& P^{-\frac{R}{1-R}} \notag\\
&=& P^{-\frac{\sum_{m=1}^M r_m}{1- \sum_{m=1}^M r_m}},
\end{eqnarray}
which completes the proof for the DMT analysis of the DDF scheme.
Next, we prove that the DDF scheme achieves the optimum DMT. As the
channel from the transmitters to the receiver is a degraded version
of the channel between the transmitters and the relay, similar to
the argument of \cite{cover} for the case of single-source
single-relay, we can easily show that the decode-forward
strategy achieves the capacity of the network for each realization
of the channels. Now, consider the realization in which for all $m$ we
have, $\left |h_m \right|^2 \leq \frac{1}{M}$. As we know, $\mathbb
P \left\{\forall m: \left |h_m \right|^2 \leq \frac{1}{M} \right\}
\doteq 1$. Let us assume that in the optimum decode-and-forward strategy, the relay
spends a fraction $l$ of the time listening to the transmitters. According to Fano's inequality \cite{cover_book}, to make the probability of error in decoding the transmitters' message
at the relay side approach zero, we should have $l \log \left(1 +
\frac{P}{l} \sum_{m=1}^M \left|h_m\right|^2 \right) \geq \left(
\sum_{m=1}^M r_m \right) \log (P)$. Accordingly, we should have
$l \geq \sum_{m=1}^M r_m$. On the other hand, in order that the
receiver can decode the relay's message with a vanishing probability
of error in the remaining portion of the time, we should have
$\left( 1 - l \right) \log \left( 1 + \frac{P}{1-l} \left| g
\right|^2 \right) \geq \sum_{m=1}^M r_m \log (P)$. Hence, we have $\mathbb
P \left\{ \mathcal E \right\} \geq \mathbb P \left\{ \left| g
\right|^2 \leq cP^{-\left( 1 -
\frac{\sum_{m=1}^{M}{r_m}}{1-\sum_{m=1}^{M}{r_m}}
\right)}, \forall m: \left |h_m \right|^2 \leq \frac{1}{M} \right\} \doteq P^{-\left( 1 -
\frac{\sum_{m=1}^{M}{r_m}}{1-\sum_{m=1}^{M}{r_m}}
\right)^+}$, for a constant $c$. This completes the proof.
\end{proof}
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.7]{dm_mac.eps}
\caption{Diversity-Multiplexing Tradeoff of AF scheme versus the
optimum and DDF scheme for multiple access single relay channel
consisting of $M=2$ transmitters assuming \textit{symmetric}
transmission, i.e. $r_1=r_2=r$.} \label{fig:dm_mac}
\end{figure}
Figure \ref{fig:dm_mac} shows the DMT of the AF scheme and the DDF scheme for the multiple-access single-relay setup consisting of $M=2$ transmitters in the \textit{symmetric} situation, i.e., $r_1=r_2=r$. As can be observed in this figure, although the AF scheme achieves the maximum multiplexing gain and the maximum diversity gain, it does not achieve the optimum DMT at any other point of the tradeoff region.
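The two symmetric curves of Figure \ref{fig:dm_mac} are straightforward to reproduce. The sketch below (ours) evaluates $d_{AF}(r)=(1-2Mr)^+$ and $d_{DDF}(r)=\left(1-\frac{Mr}{1-Mr}\right)^+$ for $M=2$ and confirms that the two curves meet only at the two extreme points of the tradeoff region.

```python
# Symmetric (r_1 = r_2 = r) DMT curves of the theorem above, for M = 2.
def d_af(r, M=2):
    # AF scheme: (1 - 2 M r)^+
    return max(1.0 - 2.0 * M * r, 0.0)

def d_ddf(r, M=2):
    # DDF scheme (optimum): (1 - M r / (1 - M r))^+
    s = M * r
    return max(1.0 - s / (1.0 - s), 0.0) if s < 1.0 else 0.0

# Both schemes attain the maximum diversity gain 1 at r = 0 and the
# maximum multiplexing gain r = 1/4 ...
assert d_af(0.0) == d_ddf(0.0) == 1.0
assert d_af(0.25) == d_ddf(0.25) == 0.0
# ... but DDF strictly dominates AF at every interior point.
assert all(d_ddf(r) > d_af(r) for r in (0.05, 0.10, 0.15, 0.20))
```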
\section{Maximum Diversity Achievability Proof in General Multi-Hop Multiple-Antenna Scenario}
In this section, we consider our proposed RS scheme and prove that
it achieves the maximum diversity gain between two end-points in a
general multiple-antenna multi-hop network (no additional
constraints imposed). However, in this general scenario, it cannot
achieve the optimum DMT. Indeed, we show that in order to achieve
the optimum DMT, in some scenarios, multiple interfering nodes have
to transmit together during the same slot.
\begin{thm}
Consider a relay network with the connectivity graph $G=(V,E)$ and
$K$ relays, in which each two adjacent nodes are connected through a
Rayleigh-fading channel. Assume that all the network nodes are equipped with
multiple antennas. Then, by properly choosing the path sequence, the proposed RS scheme achieves the maximum
diversity gain of the network which is equal to
\begin{equation}
d_{G} = \min_{\mathcal S} w_G(\mathcal S),
\end{equation}
where $\mathcal S$ is a cut-set on $G$.
\end{thm}
\begin{proof}
First, we show that $d_G$ is indeed an upper-bound on the
diversity-gain of the network. To show this, we do not consider the
half-duplex nature of the relay nodes and assume that they operate
in full-duplex mode. Consider a cut-set $\mathcal S$ on $G$. We have
\begin{eqnarray} \label{mimo1}
\mathbb P \left\{ \mathcal E \right\} & \stackrel{(a)}{\dot \geq} & \mathbb P \left\{ I\left( X\left( \mathcal S \right);Y\left( \mathcal S^c \right) | X\left( \mathcal S^c \right) \right) < R \right\} \nonumber \\
& \stackrel{(b)}{=} & \mathbb P \left\{\sum_{k \in \mathcal S^c} I\left( X\left( \mathcal S \right);Y_k | Y\left( \mathcal S^c / \left\{1,2, \dots, k \right\} \right),X\left( \mathcal S^c \right) \right) < R \right\} \nonumber \\
& \stackrel{(c)}{\geq} & \prod_{k \in \mathcal S^c} \mathbb P \left\{ I\left( X\left( \mathcal S \right);Y_k | X\left( \mathcal S^c \right) \right) < \frac{R}{\left| \mathcal S^c \right|} \right\} \nonumber \\
& \stackrel{(d)}{\doteq} & \prod_{k \in \mathcal S^c}P^{-\left| \left\{ e \in E | k \in e, e \cap \mathcal S \neq \emptyset \right\} \right|} \nonumber \\
& \doteq & P^{-w_G\left( \mathcal S \right)},
\end{eqnarray}
where $R$ is the target rate which does not scale with $P$ (i.e., $r=0$). Here, $(a)$ follows from the cut-set bound theorem
\cite{cover_book} and the fact that for the rates above the
capacity, the error probability approaches one (according to Fano's
inequality \cite{cover_book}), $(b)$ follows from the chain rule on the mutual
information \cite{cover_book}, $(c)$ follows from the
facts that i)~$\left(Y_k, X\left(\left\{0,1,\dots,K+1\right\}\right),
Y\left( \mathcal S^c / \left\{1,2, \dots, k \right\}
\right)\right)$ form a Markov chain \cite{cover_book} and as a
result, $I\left( X\left( \mathcal S \right);Y_k | Y\left( \mathcal
S^c / \left\{1,2, \dots, k \right\} \right),X\left( \mathcal S^c
\right) \right) \leq I\left( X\left( \mathcal S \right);Y_k |
X\left( \mathcal S^c \right) \right)$, and ii)~$I\left( X\left( \mathcal S \right);Y_k |
X\left( \mathcal S^c \right) \right)$ depends only on the channel matrices between $X\left( \mathcal S \right)$ and $Y_k$ and as all the channels in the network are independent of each other, it follows that the events $$\left\{ I\left( X\left( \mathcal S \right);Y_k | X\left( \mathcal S^c \right) \right) < \frac{R}{\left| \mathcal S^c \right|} \right\}_{k \in \mathcal S^c} $$ are mutually independent, and finally $(d)$ follows
from the diversity gain of the MISO channel. Considering all possible cut-sets on $G$ and using (\ref{mimo1}), we have
\begin{equation}
\mathbb P \left\{ \mathcal E \right\} \dot \geq P^{-\min_{\mathcal S}w_G \left( \mathcal S \right)}.
\end{equation}
Now, we prove that this bound is indeed achievable by the RS scheme.
First, we provide the path sequence needed to achieve the maximum
diversity gain. Consider the graph $\hat G = (V, E, w)$ with the
same set of vertices and edges as the graph $G$ and the weight
function $w$ on the edges as $w_{\left\{a,b \right\}}= N_a N_b$.
Consider the maximum-flow algorithm \cite{graph_book} on $\hat G$
from the source node $0$ to the sink node $K+1$. Since the weight
function is integer over the edges, according to the Ford-Fulkerson
Theorem \cite{graph_book}, one can achieve the maximum flow which is
equal to the minimum cut of $\hat G$ or $d_G$ by the union of
elements of a sequence $\left( \mathrm p_1, \mathrm p_2, \dots,
\mathrm p_{d_G}\right)$ of paths ($L = d_G$). We show that this family of paths
is sufficient to achieve the optimum diversity. Here, we do not
consider the problem of selecting the path timing sequence
$\left\{s_{i,j} \right\}$. We just assume that a timing sequence
$\left\{s_{i,j} \right\}$ with the 4 requirements defined in the
third section exists. However, it should be noted that as we
consider the maximum diversity throughout the theorem, we
are not concerned with $\frac{S}{L}$. Hence, we can select the path
timing sequence such that no two paths cause interference on each
other.
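For concreteness, the path-selection step above can be sketched as a standard max-flow computation on $\hat G$. The code below (ours, using Edmonds-Karp breadth-first augmentation, on an example topology of our choosing) returns the maximum flow, which by the Ford-Fulkerson Theorem equals the minimum cut $d_G$; decomposing the resulting flow yields the paths $\mathrm p_1, \dots, \mathrm p_{d_G}$.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow; nodes 0..n-1, undirected weighted edges."""
    cap = [[0] * n for _ in range(n)]
    for a, b, w in edges:  # weight w_{a,b} = N_a * N_b on each edge
        cap[a][b] += w
        cap[b][a] += w
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path left: flow = min cut
            return flow
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:  # push the bottleneck flow along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Example (ours): source 0 and sink 3 with 2 antennas each, two
# single-antenna relays (nodes 1 and 2), no direct source-sink link.
edges = [(0, 1, 2), (0, 2, 2), (1, 3, 2), (2, 3, 2)]
d_G = max_flow(4, edges, 0, 3)
assert d_G == 4  # min cut {0}: w_{0,1} + w_{0,2} = 4
```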
Noting that the received signal at each node is multiplied by a random isotropically distributed unitary matrix, at the receiver side we have
\begin{eqnarray}
\mathbf y_{K+1,i} & = & \mathbf H_{K+1, \mathrm p_i(l_i -1)} \alpha_{i,l_i-1} \mathbf U_{i,l_i-1}
\mathbf H_{\mathrm p_i(l_i -1), \mathrm p_i(l_i -2)} \alpha_{i,l_i-2} \mathbf U_{i,l_i-2}
\cdots \alpha_{i,1} \mathbf U_{i,1} \mathbf H_{\mathrm p_i(1), 0} \mathbf x_{0,i} + \nonumber \\
& &\sum_{j < i} \mathbf X_{i, j} \mathbf x_{0,j} + \sum_{j \leq i, m
\leq l_j} \mathbf Q_{i, j, m} \mathbf n_{j,
m}.\label{eq:MIMO_div_model}
\end{eqnarray}
Here, $\mathbf x_{0,i}$ is the vector transmitted at the transmitter
side during the $s_{i,1}$'th slot as the input for the $i$'th path,
$\mathbf y_{K+1,i}$ is the vector received at the receiver side
during the $s_{i,l_i}$'th slot as the output for $i$'th path, $\mathbf U_{i,j}$ denotes the multiplied unitary matrix at the $\mathrm p_i (j)$'th node of the $i$th path,
$\mathbf X_{i,j}$ is the interference matrix which relates the input
of the $j$'th path ($j < i$) to the output of the $i$'th path,
$\mathbf n_{j,m}$ is the noise vector during the $s_{j,m}$'th slot
at the $\mathrm p_j(m)$'th node of the network, and finally,
$\mathbf Q_{i, k, m}$ is the matrix which relates $\mathbf n_{k,m}$
to $\mathbf y_{K+1,i}$. Notice that as the timing sequence satisfies
the noncausal interference assumption, the summation terms in
\eqref{eq:MIMO_div_model} do not exceed $i$. Defining $\mathbf x(s)
= \left[ \mathbf x_{0,1}^T\left(s\right) \mathbf
x_{0,2}^T\left(s\right) \cdots \mathbf x_{0,L}^T\left(s\right)
\right]^T$, $\mathbf y(s) = \left[ \mathbf y_{K+1,1}^T\left(s\right)
\mathbf y_{K+1,2}^T\left(s\right) \cdots \mathbf
y_{K+1,L}^T\left(s\right) \right]^T$, and $\mathbf n(s) = \left[
\mathbf n_{1,1}^T\left(s\right) \mathbf n_{1,2}^T\left(s\right)
\cdots \mathbf n_{L,l_L}^T\left(s\right) \right]^T$, we have the
following equivalent block lower-triangular matrix between the end
nodes
\begin{equation} \label{mimo_vec}
\mathbf y(s) = \mathbf H_T \mathbf x(s) + \mathbf Q \mathbf n(s).
\end{equation}
Here,
\begin{equation}
\mathbf H_T= \left(
\begin{array}{cccc}
\mathbf X_{1,1} & \mathbf 0 & \mathbf 0 & \ldots \\
\mathbf X_{2,1} & \mathbf X_{2,2} & \mathbf 0 & \ldots \\
\vdots & \vdots & \vdots & \ddots \\
\mathbf X_{L,1} & \mathbf X_{L,2} & \ldots & \mathbf X_{L,L}
\end{array}
\right),
\end{equation}
where $\mathbf X_{i,i}=\mathbf H_{K+1, \mathrm p_i(l_i -1)} \alpha_{i,l_i-1} \mathbf U_{i,l_i-1}
\mathbf H_{\mathrm p_i(l_i -1), \mathrm p_i(l_i -2)} \alpha_{i,l_i-2} \mathbf U_{i,l_i-2}
\cdots \alpha_{i,1} \mathbf U_{i,1} \mathbf H_{\mathrm p_i(1), 0}$, and
\begin{equation}
\mathbf Q= \left(
\begin{array}{ccccccc}
\mathbf Q_{1,1,1} & \ldots & \mathbf Q_{1,1,l_1} & \mathbf 0 & \mathbf 0 & \mathbf 0 & \ldots \\
\mathbf Q_{2,1,1} & \ldots & \mathbf Q_{2,1,l_1} & \ldots & \mathbf Q_{2,2,l_2} & \mathbf 0 & \ldots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\
\mathbf Q_{L,1,1} & \mathbf Q_{L,1,2} & \ldots & \ldots& \ldots& \mathbf Q_{L,L,l_L-1} & \mathbf Q_{L,L,l_L}
\end{array}
\right).
\end{equation}
Having (\ref{mimo_vec}), the outage probability can be written as
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} & =& \mathbb P \left\{ \left| \mathbf I_{L} + P \mathbf H_T \mathbf H_T^H \mathbf P_n^{-1} \right| < 2^{SR} \right\},
\end{eqnarray}
where $\mathbf P_n = \mathbf Q \mathbf Q^H$. First, similar to the
proof of Theorem \ref{thm:DMT-n-ir}, we can show that $\alpha_{i,j}
\doteq 1$ with probability 1\footnote{More precisely, with
probability greater than $1-P^{-\delta}$ for any $\delta > 0$.}, and
also show that there exists a constant $c$ which depends just on the
topology of graph $G$ and the path sequence such that $\mathbf P_n
\preccurlyeq c \mathbf I_L$. Assume that for each $\{a,b\} \in E$,
$\lambda_{\max} \left( \mathbf H_{a,b} \right)=P^{-\mu_{\left\{a,b
\right\}}}$, where $\lambda_{\max} \left( \mathbf A \right)$ denotes
the greatest eigenvalue of $\mathbf A \mathbf A^H$. Also, assume that
\begin{eqnarray}
\gamma_{i,j} \triangleq \left| \mathbf v_{r,\max}^H\left( \mathbf H_{\left\{\mathrm p_i(j+1),
\mathrm p_i(j)\right\}} \right) \mathbf U_{i,j} \mathbf
v_{l,\max}\left( \mathbf H_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} \mathbf U_{i,j-1} \mathbf H_{\left\{\mathrm p_i(j-1), \mathrm p_i(j-2)\right\}} \dots \mathbf H_{\left\{\mathrm p_i(1), 0 \right\}} \right) \right|^2 = P^{-\nu_{i,j}},
\end{eqnarray}
where $\mathbf
v_{l,\max}\left( \mathbf A \right)$ and $\mathbf v_{r,\max}\left(
\mathbf A \right)$ denote the left and the right eigenvectors of
$\mathbf A$ corresponding to $\lambda_{\max} \left( \mathbf A
\right)$, respectively. The outage probability can be upper-bounded as
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} & \stackrel{(a)}{\leq} & \mathbb P \left\{ \lambda_{\max}\left(\left(\mathbf H_T \mathbf H_T^H \mathbf P_n^{-1} \right)^{\frac{1}{2}} \right) \leq \left( 2^{SR} -1 \right) P^{-1} \right\} \nonumber \\
& \stackrel{(b)}{\leq} & \mathbb P \left\{ \lambda_{\max}\left(\mathbf H_T \right) \leq c \left( 2^{SR} -1 \right) P^{-1} \right\} \nonumber \\
& \stackrel{(c)}{\leq} &\mathbb P \left\{\bigcap_{i=1}^{L}\left( \lambda_{\max}\left(\mathbf X_{i,i} \right) \leq c \left( 2^{SR} -1 \right) P^{-1} \right) \right\} \nonumber \\
& \stackrel{(d)}{\leq} & \mathbb P \left\{ \bigcap_{i=1}^{L} \left( \sum_{j=1}^{l_i}
\mu_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} +
\sum_{j=1}^{l_i-1} \nu_{i,j} \geq 1 - \log\frac{c \left(2^{SR} -1
\right)}{P} \right) \right\} \notag\\
&\stackrel{(e)}{\doteq}& \mathbb P \left\{ \bigcap_{i=1}^{L} \left( \sum_{j=1}^{l_i}
\mu_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} +
\sum_{j=1}^{l_i-1} \nu_{i,j} \geq 1 \right) \right\}.
\end{eqnarray}
In the above equation, $(a)$ follows from the fact that
$ 1+ \lambda_{\max} (\mathbf A^{\frac{1}{2}}) \leq \left| \mathbf I + \mathbf A\right|,$
for a positive semi-definite matrix $\mathbf A$. $(b)$ results from $\mathbf P_n
\preccurlyeq c \mathbf I_L$. $(c)$ follows from the fact that $\lambda_{\max} (\mathbf H_T) \geq \max_i \lambda_{\max} (\mathbf X_{i,i})$. To obtain $(d)$, we first show that
\begin{eqnarray} \label{mimo2}
\lambda_{\max} (\mathbf{AUB}) \geq \lambda_{\max} (\mathbf{A})\lambda_{\max} (\mathbf{B}) \left| \mathbf v_{r,\max}^H (\mathbf A) \mathbf U \mathbf v_{l, \max} (\mathbf B) \right|^2,
\end{eqnarray}
for any matrices $\mathbf A$, $\mathbf U$ and $\mathbf B$. To show this, we write
\begin{eqnarray}
\lambda_{\max} (\mathbf{AUB}) &=& \max_{\begin{subarray}{c}
\mathbf x\\
\|\mathbf x\|^2 =1
\end{subarray}
} \left\| \mathbf x^H \mathbf{AUB} \right\|^2 \notag\\
&\geq& \left\| \mathbf v_{l,\max}^H (\mathbf A) \mathbf{AUB}\right\|^2 \notag\\
&=& \left\| \sigma_{\max} (\mathbf A) \mathbf v_{r, \max}^H (\mathbf A) \mathbf{U} \sum_{i} \mathbf v_{l,i} (\mathbf B)
\sigma_i (\mathbf B) \mathbf v_{r,i}^H (\mathbf B) \right\|^2 \notag\\
&\stackrel{(a)}{=}& \sum_i \left\| \sigma_{\max} (\mathbf A) \mathbf v_{r, \max}^H (\mathbf A) \mathbf{U} \mathbf v_{l,i} (\mathbf B)
\sigma_i (\mathbf B) \mathbf v_{r,i}^H (\mathbf B) \right\|^2 \notag\\
&\geq& \left\| \sigma_{\max} (\mathbf A) \mathbf v_{r, \max}^H (\mathbf A) \mathbf{U} \mathbf v_{l,\max} (\mathbf B)
\sigma_{\max} (\mathbf B) \mathbf v_{r,\max}^H (\mathbf B) \right\|^2 \notag\\
&\stackrel{(b)}{=}& \lambda_{\max} (\mathbf A) \lambda_{\max} (\mathbf B) \left| \mathbf v_{r,\max}^H (\mathbf A) \mathbf U \mathbf v_{l, \max} (\mathbf B) \right|^2,
\end{eqnarray}
where $\sigma_i (\mathbf A)$ denotes the $i$'th singular value of $\mathbf A$, and $\sigma_{\max} (\mathbf A)$ denotes the largest singular value of $\mathbf A$. Here, $(a)$ follows from the fact that as $\left\{ \mathbf v_{r,i} (\mathbf B) \right\}$ are orthogonal vectors, the square-norm of their summation is equal to the summation of their square-norms. $(b)$ results from the fact that $\lambda_{i} (\mathbf A) = |\sigma_{i} (\mathbf A) |^2, \forall i$.
By recursively applying (\ref{mimo2}), it follows that
\begin{eqnarray}
\lambda_{\max} (\mathbf X_{i,i}) &\geq& \lambda_{\max} \left(\mathbf H_{K+1, \mathrm p_i(l_i -1)}\right) \gamma_{i,l_i-1} \lambda_{\max} \left(
\mathbf H_{\mathrm p_i(l_i -1), \mathrm p_i(l_i -2)} \right) \gamma_{i,l_i-2}
\cdots \gamma_{i,1} \lambda_{\max} \left(\mathbf H_{\mathrm p_i(1), 0} \right) \notag\\
&=& \prod_{j=1}^{l_i} \lambda_{\max} \left(
\mathbf H_{\mathrm p_i(j), \mathrm p_i(j-1)} \right) \prod_{j=1}^{l_i-1} \gamma_{i,j}.
\end{eqnarray}
Noting the definitions of $\mu_{\{i,j\}}$ and $\nu_{i,j}$, $(d)$ easily follows. Finally, $(e)$ results from the fact that as $P \to \infty$, the term $\log\frac{c \left(2^{SR} -1
\right)}{P}$ can be ignored.
Since the left and the right unitary matrices resulting
from the SVD of an i.i.d. complex Gaussian matrix are independent of
its singular value matrix \cite{random_matrix_book} and $\mathbf
U_{i,j}$ is an independent isotropically distributed unitary matrix, we
conclude that all the random variables in the set $ \left\{ \left\{\mu_e\right\}_{e \in E},
\left\{\nu_{i,j}\right\}_{1 \leq i \leq L, 1 \leq j < l_i} \right\}$ are
mutually independent. From the
probability distribution analysis of the singular values of circularly symmetric Gaussian matrices in \cite{zheng_tse}, we can easily
prove $\mathbb P \left\{\mu_e \geq \mu_e^0 \right\} \doteq P^{-N_a
N_b\mu_e^0}=P^{-w_e\mu_e^0}$. Similarly, as $\mathbf U_{i,j}$ is
isotropically distributed, it can be shown that $\mathbb P
\left\{\nu_{i,j} \geq \nu_{i,j}^0 \right\} \doteq P^{-\nu_{i,j}^0}$.
Hence, defining $\boldsymbol \mu = [\mu_e]_{e \in E}^T$,
$\boldsymbol \nu = [\nu_{i,j}]_{1 \leq i \leq L, 1 \leq j < l_i}^T$, and $\mathbf w = [w_e]_{e \in E}$,
we have
\begin{equation}
\mathbb P \left\{\boldsymbol \mu \geq \boldsymbol \mu_0, \boldsymbol \nu
\geq \boldsymbol \nu_0 \right\} \doteq P^{-\left( \mathbf 1 \cdot \boldsymbol \nu + \mathbf w \cdot \boldsymbol \mu \right)}.
\end{equation}
Let us define $\mathcal R$ as the region in $\mathbb R^{\left| E
\right| + \sum_{i=1}^{L}{l_i}-L}$ of the vectors $\left[\boldsymbol
\mu^T \boldsymbol \nu^T \right]^T$ such that for all $1\leq i \leq
L$, we have $\sum_{j=1}^{l_i} \mu_{\left\{\mathrm p_i(j), \mathrm
p_i(j-1)\right\}} + \sum_{j=1}^{l_i-1} \nu_{i,j} \geq 1$. Using the
same argument as in the proof of Theorem \ref{thm:DMT-n-ir}, we
conclude that $\mathbb P \left\{ \mathcal R \right\} = \mathbb P
\left\{ \mathcal R \bigcap \mathbb R_+^{\left| E \right| +
\sum_{i=1}^{L}{l_i}-L} \right\}$. Hence, defining $\mathcal R_+ =
\mathcal R \bigcap \mathbb R_+^{\left| E \right| +
\sum_{i=1}^{L}{l_i}-L}$ and $d_0=\displaystyle\min_{[\boldsymbol
\mu^T \boldsymbol \nu^T]^T \in \mathcal R_+} \mathbf w \cdot
\boldsymbol \mu + \mathbf 1 \cdot \boldsymbol \nu$, which can
easily be verified to be bounded, and applying the same argument as in
the proof of Theorem \ref{thm:DMT-n-ir}, we have
\begin{eqnarray}
\mathbb P \left\{ \mathcal E \right\} \dot \leq \mathbb P \left\{ \mathcal R_+ \right\} \doteq P^{-d_0}.
\end{eqnarray}
To complete the proof, we have to show that $d_0=d_G$, or equivalently, $d_0=L$ (note that $L=d_G$). The value of
$d_0$ is obtained from the following linear programming optimization
problem
\begin{IEEEeqnarray}{ll}
\min ~ & \mathbf w \cdot \boldsymbol \mu + \mathbf 1 \cdot \boldsymbol \nu \\
s.t. ~ & \boldsymbol \mu \geq \mathbf 0, \boldsymbol \nu \geq \mathbf 0,
\forall i \sum_{j=1}^{l_i} \mu_{\left\{\mathrm p_i(j), \mathrm p_i(j-1)\right\}} + \sum_{j=1}^{l_i-1} \nu_{i,j} \geq 1. \nonumber
\end{IEEEeqnarray}
By linear programming duality \cite{linear_book},
the solution of the above linear programming problem is equal to the
solution of the dual problem, which is
\begin{IEEEeqnarray}{ll}
\max ~ & \sum_{i=1}^{L} f_i \label{eq:dual_prob}\\
s.t. ~ & \mathbf 0 \leq \mathbf f \leq \mathbf 1, \forall e \in E, \sum_{e \in \mathrm p_i} f_i \leq w_e. \nonumber
\end{IEEEeqnarray}
Let us consider the solution $\mathbf f_0 = \mathbf 1$ for
\eqref{eq:dual_prob}. As the path sequence $\left( \mathrm p_1,
\mathrm p_2, \dots ,\mathrm p_L \right)$ consists of the paths that
form the maximum flow in $\hat G$, we conclude that for every $e \in
E$, we have $\displaystyle \sum_{ e \in \mathrm p_i} 1 \leq w_e$.
Hence, $\mathbf f_0$ is a feasible solution for
\eqref{eq:dual_prob}. On the other hand, as for all feasible
solutions $\mathbf f$ we have $\mathbf f \leq \mathbf 1$, we
conclude that $\mathbf f_0$ maximizes \eqref{eq:dual_prob}. Hence,
we have
\begin{equation}
d_0 = \min ~ \mathbf w \cdot \boldsymbol \mu + \mathbf 1 \cdot \boldsymbol \nu \stackrel{(a)}{=} \max ~ \sum_{i=1}^{L} f_i = L = d_G.
\end{equation}
Here, $(a)$ results from duality of the primal and dual linear
programming problems. This completes the proof.
\end{proof}
\textit{Remark 9-} It is worth noting that according to the proof of
Theorem 7, any RS scheme achieves the maximum diversity of the
wireless multiple-antenna multiple-relays network as long as its
corresponding path sequence includes the paths $\mathrm p_1,\mathrm
p_2,\dots,\mathrm p_{d_G}$ used in the proof of Theorem 7.
Theorem 7 shows that the RS scheme is capable of exploiting the
maximum achievable diversity gain in multiple-antenna multiple-relay
wireless networks. However, as the following example shows, the RS
scheme is unable to achieve the maximum multiplexing gain in a general
multiple-antenna multiple-node wireless network.
\textit{Example-} Consider a two-hop relay network consisting of
$K=4$ relay nodes. The transmitter and the receiver are equipped
with $N_0=N_5=2$ antennas, while each of the relays has a single
receiving/transmitting antenna. There exists no direct link between
the transmitter and the receiver, i.e. $\{0,5\} \notin E$. For the
sake of simplicity, assume that the relays are non-interfering, i.e.
$\{a,b\} \notin E$ for all $1 \leq a, b \leq 4$. Let us
partition the set of relays into $\mathcal S_0=\{1, 2 \}, \mathcal
S_1=\{3, 4 \}$. Consider the following amplify-and-forward strategy:
In the $i$-th time slot, the relay nodes in $\mathcal S_{i \bmod 2}$
transmit what they received in the previous time slot, while the
relay nodes in $\mathcal S_{(i+1) \bmod 2}$ receive the
transmitter's signal. It can be easily verified that this scheme
achieves a maximum multiplexing gain of $r=2$. However, the proposed
RS scheme achieves a maximum multiplexing gain of $r=1$.
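The alternating schedule of the example is easy to tabulate; a minimal sketch (an illustration only, with the group labels $\mathcal S_0=\{1,2\}$, $\mathcal S_1=\{3,4\}$ taken from the example above):

```python
def rs_schedule(num_slots, groups=({1, 2}, {3, 4})):
    """Tabulate the alternating amplify-and-forward schedule of the example:
    in slot i the relays in S_{i mod 2} transmit while the others receive."""
    schedule = []
    for i in range(num_slots):
        transmit = sorted(groups[i % 2])         # S_{i mod 2} forwards
        receive = sorted(groups[(i + 1) % 2])    # S_{(i+1) mod 2} listens
        schedule.append((transmit, receive))
    return schedule
```

In every slot one group of two single-antenna relays forwards while the other listens, which is what lets the two transmit antennas deliver two streams per slot.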
\section{Conclusion}
The setup of a multi-antenna multiple-relay network is studied in this paper. Each
pair of nodes is assumed to be either connected through a
quasi-static Rayleigh fading channel or disconnected. A new scheme
called \textit{random sequential} (RS), based on the
amplify-and-forward relaying, is introduced for this setup. It is
proved that for the general multiple-antenna multiple-relay
networks, the proposed scheme achieves the maximum diversity gain.
Furthermore, bounds on the diversity-multiplexing tradeoff (DMT) of
the RS scheme are derived for a general single-antenna
multiple-relay network. Specifically, 1) the exact DMT of the RS scheme
is derived under the assumption of ``non-interfering relaying''; 2)
a lower-bound is derived on the DMT of the RS scheme (no conditions
imposed). Finally, it is shown that for the single-antenna
two-hop multiple-access multiple-relay network setup where there is no
direct link between the transmitter(s) and the receiver, the RS
scheme achieves the optimum diversity-multiplexing tradeoff.
However, for the multiple-access single-relay scenario, we show that the RS scheme is unable to perform optimally in terms of the DMT,
while the dynamic decode-and-forward scheme is shown to achieve the optimum DMT for
this scenario.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let
$P(z)=a_0z^d+\dots+a_d=a_0\prod_{i=1}^d(z-\alpha_i)$ be a
nonconstant polynomial with (at first) complex coefficients. Then, following
Mahler \cite{Mahl62}, its {\it Mahler measure} is defined to be
\begin{equation}\label{E-0}
M(P):=\exp\left(\int_0^1 \log|P(e^{2\pi it})|dt\right),
\end{equation}
the geometric mean of $|P(z)|$ for $z$ on the
unit circle. However $M(P)$ had appeared earlier in a paper of
Lehmer \cite{Lehm33}, in an alternative form
\begin{equation}\label{E-1}
M(P)=|a_0|\prod_{|\alpha_i|\ge 1} |\alpha_i|.
\end{equation}
The equivalence of
the two definitions follows immediately from Jensen's formula
\cite{Jens1899}
\[
\int_0^1 \log|e^{2\pi i t}-\alpha|dt=\log_+|\alpha|.
\]
Here $\log_+x$ denotes $\max(0,\log x)$. If $|a_0|\ge 1$, then clearly $M(P)\ge 1$. This is the case when $P$ has integer coefficients; we assume
henceforth that $P$ is of this form. Then, from a result of
Kronecker \cite{Kron1857}, $M(P)=1$ occurs only if $\pm P$ is a
power of $z$ times a cyclotomic polynomial.
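The equivalence of (\ref{E-0}) and (\ref{E-1}) is easy to check numerically; a minimal sketch (an illustration, not from the text) for $P(z)=z^2-z-1$, whose roots are $\phi=(1+\sqrt5)/2$ and $-1/\phi$, so that $M(P)=\phi$ by (\ref{E-1}):

```python
import cmath
import math

def mahler_integral(coeffs, n=4096):
    """Approximate exp(int_0^1 log|P(e^{2 pi i t})| dt) from (E-0) by a
    Riemann sum; coeffs lists a_0, ..., a_d from the leading term down."""
    total = 0.0
    for k in range(n):
        z = cmath.exp(2j * math.pi * k / n)
        p = 0j
        for a in coeffs:
            p = p * z + a            # Horner evaluation of P(z)
        total += math.log(abs(p))
    return math.exp(total / n)

phi = (1 + math.sqrt(5)) / 2
approx = mahler_integral([1, -1, -1])    # P(z) = z^2 - z - 1, M(P) = phi
```

Since $P$ has no root on the unit circle, the periodic integrand is analytic and the Riemann sum converges rapidly to $\phi=1.618\dots$\,.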
In \cite{Mahl62} Mahler called $M(P)$ the measure of the polynomial
$P$, apparently to distinguish it from its (na\"\i ve) height. This
was first referred to as Mahler's measure by Waldschmidt
\cite[p.21]{Wald79} in 1979 (`mesure de Mahler'), and soon
afterwards by Boyd \cite{Boyd81} and Durand \cite{Dura81}, in the
sense of ``the function that Mahler called `measure'\,", rather than as
a name. But it soon {\it became} a name. In 1983 Louboutin \cite{Loub83}
used the term to apply to an algebraic number. We shall follow this
convention too
--- $M(\alpha)$ for an algebraic number
$\alpha$ will mean the Mahler measure of the minimal polynomial $P_\alpha$ of $\alpha$,
with $d$ the degree of $\alpha$, having conjugates $\alpha=\alpha_1,\alpha_2,\dots,\alpha_d$. The Mahler measure is
actually a height
function on polynomials with integer coefficients, as there are only a finite number of such polynomials of bounded degree and bounded Mahler measure.
Indeed, in the MR review of \cite{Loub83}, it is called the Mahler height;
but `Mahler measure' has stuck.
For the Mahler measure in the form $M(\alpha)$, there is a third
representation to add to (\ref{E-0}) and (\ref{E-1}). We consider
a complete set of inequivalent valuations $|.|_\nu$ of the field
$\mathbb{Q}(\alpha)$, normalised so that, for $\nu|p$,
$|.|_\nu=|.|_p$ on $\mathbb{Q}_p$. Here $\mathbb{Q}_p$ is the field of $p$-adic numbers, with the usual valuation $|.|_p$.
Then for $a_0$ as in (\ref{E-1}),
\begin{equation}\label{E-4}
|a_0|=\prod_{p<\infty}|a_0|^{-1}_p=\prod_{p<\infty}\prod_{\nu|p}\max(1,|\alpha|_\nu^{d_\nu}),
\end{equation} coming from the product formula, and from considering the Newton polygons of the irreducible
factors (of degree $d_\nu$) of $P_\alpha$ over $\mathbb{Q}_p$ (see e.g.
\cite[p. 73]{Weis63}).
Then \cite[pp. 74--79]{Wald00}, \cite{BG06}, from (\ref{E-1}) and (\ref{E-4}),
\begin{equation}\label{E-2}
M(\alpha)=\prod_{\text{all }\nu} \max(1,|\alpha|^{d_\nu}_\nu),
\end{equation}
and so also \begin{equation}\label{E-5}
h(\alpha):= \frac{\log M(\alpha)}{d}=\sum_{\text{all
}\nu}\log_+|\alpha|_\nu^{d_\nu/d}.
\end{equation}
Here $h(\alpha)$ is called the {\it Weil}, or {\it absolute}, height of $\alpha$.
\section{Lehmer's problem}
While Mahler presumably had applications of his measure to transcendence in
mind, Lehmer's interest was in finding large primes. He sought them
amongst the Pierce numbers $\prod_{i=1}^d(1\pm\alpha_i^m)$, where
the $\alpha_i$ are the roots of a monic polynomial $P$ having integer
coefficients. Lehmer showed that for $P$ with no roots on the unit
circle these numbers grew with $m$ like $M(P)^m$. Pierce
\cite{Pier16} had earlier considered the factorization of these
numbers. Lehmer posed the problem of whether, among those monic
integer polynomials with $M(P)>1$, polynomials could be chosen with
$M(P)$ arbitrarily close to $1$. This has become known as `Lehmer's
problem', or `Lehmer's conjecture', the `conjecture' being that they
could not, although Lehmer did not in fact make this conjecture.\footnote{`Lehmer's conjecture' is also used to refer to a
conjecture on the non-vanishing of Ramanujan's $\tau$-function. But
I do not know that Lehmer actually made that conjecture either: in
\cite[p. 429]{Lehm47} he wrote ``$\dots$ and it is natural to ask
whether $\tau(n)=0$ for any $n>0$."} The smallest value of $M(P)>1$
he could find was
\[\label{E-1.176}
M(L)=1.176280818\dots,
\]
where $L(z)=z^{10}+z^9-z^7-z^6-z^5-z^4-z^3+z+1$ is now called `Lehmer's polynomial'. To this day no-one
has found a smaller value of $M(P)>1$ for $P(z)\in \mathbb{Z}[z]$.
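$M(L)$ can be recomputed directly from (\ref{E-1}); a minimal sketch (an illustration only), using a Durand-Kerner root finder so that no root-finding library is assumed:

```python
import cmath

def prod_others(roots, i, z):
    """Product of (z - w) over all current root estimates w except the i-th."""
    acc = 1 + 0j
    for j, w in enumerate(roots):
        if j != i:
            acc *= z - w
    return acc

def poly_roots(coeffs, iters=500):
    """All roots of a monic polynomial (coefficients listed from the leading
    term down) by the Durand-Kerner simultaneous iteration."""
    def p(z):
        acc = 0j
        for a in coeffs:
            acc = acc * z + a        # Horner evaluation
        return acc
    d = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(d)]   # classic starting points
    for _ in range(iters):
        roots = [z - p(z) / prod_others(roots, i, z) for i, z in enumerate(roots)]
    return roots

# Lehmer's polynomial z^10 + z^9 - z^7 - z^6 - z^5 - z^4 - z^3 + z + 1.
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
roots = poly_roots(lehmer)
# (E-1) with a_0 = 1: multiply the moduli of the roots outside the unit
# circle; the tolerance excludes roots on |z| = 1, which contribute 1 anyway.
measure = 1.0
for r in roots:
    if abs(r) > 1 + 1e-8:
        measure *= abs(r)
```

$L$ is a Salem polynomial, so exactly one root lies outside the unit circle and the product collapses to that single modulus, $1.176280818\dots$\,.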
Lehmer's problem is central to this survey. We concentrate on
results for $M(P)$ with $P$ having integer coefficients. We do not
attempt to survey results for $M(P)$ for $P$ a polynomial in
several variables. For this we refer the reader to
\cite{Bert07}, \cite{Boyd81}, \cite{Vill99}, \cite{Boyd98}, \cite{BV02}, \cite{Boyd02}, \cite[Chapter 3]{EW99}. However,
the one-variable case should not really be separated from the general case, because of the fact that for every
$P$ with integer coefficients, irreducible and in genuinely more than one variable (i.e., its Newton polytope
is not one-dimensional) $M(P)$ is known \cite[Theorem 1]{Boyd81} to be the limit of
$\{M(P_n)\}$ for some sequence $\{P_n\}$ of one-variable integer polynomials. This is part of a far-reaching
conjecture of Boyd \cite{Boyd81} to the effect that the set of all $M(P)$ for $P$ an integer polynomial
in any number of variables is a closed subset of the real line.
Our survey of results related to Lehmer's problem falls into three
categories. We report lower bounds, or sometimes exact infima, for
$M(P)$ as $P$ ranges over certain sets of integer polynomials.
Depending on this set, such lower bounds can either tend to $1$ as
the degree $d$ of $P$ tends to infinity (Section \ref{S-small}),
be constant and greater than $1$ (Section \ref{S-medium}), or
increase exponentially with $d$ (Section \ref{S-big}). We also
report on computational work on the problem (Section
\ref{S-compute}).
In Sections \ref{S-house0} and \ref{S-house1} we discuss the closely-related function $\ho{\alpha}$ and the Schinzel-Zassenhaus conjecture. In Section \ref{S-disc} connections between Mahler measure and the discriminant are covered. In Section \ref{S-algnum} the known properties of $M(\alpha)$ as an
algebraic number are outlined. Section \ref{S-count} is concerned with counting integer polynomials of given Mahler measure, while in Section \ref{S-dynam} a dynamical systems version of Lehmer's problem is presented. In Section \ref{S-variants} variants of Mahler measure are discussed, and finally in Section \ref{S-applications} some applications of Mahler measure are given.
\section{The house $\ho{\alpha}$ of $\alpha$ and the conjecture of Schinzel and Zassenhaus}\label{S-house0}
Related to the Mahler measure of an algebraic integer $\alpha$ is $\ho{\alpha}$\,, the {\it house} of $\alpha$, defined as
the maximum modulus of its conjugates (including $\alpha$ itself). For $\alpha$ with $r>0$ roots of modulus greater
than $1$ we have the obvious inequality
\begin{equation}\label{E-3}
M(\alpha)^{1/d}\le M(\alpha)^{1/r}\le \ho{\alpha}\le M(\alpha)
\end{equation}
(see e.g. \cite{Boyd85}). If $\alpha$ is in fact a unit (which is certainly the case if $M(\alpha)<2$) then $M(\alpha)=M(\alpha^{-1})$
so that
\[
M(\alpha)\le \left(\max(\ho{\alpha}\,,\ho{1/\alpha}\,)\right)^{d/2}.
\]
In 1965 Schinzel and Zassenhaus \cite{SZ65} proved that if $\alpha\neq 0$ is an algebraic integer that is not a root of unity
and if $2s$
of its conjugates are nonreal, then
\begin{equation}\label{E-SZ1}
\ho{\alpha}>1+4^{-s-2}.
\end{equation}
This was the first unconditional result towards solving Lehmer's problem, since by (\ref{E-3}) it implies the same lower
bound for $M(\alpha)$ for such $\alpha$.
They conjectured, however, that a much stronger
bound should hold: that under these conditions in fact
\begin{equation}\label{E-SZ}
\ho{\alpha}\ge 1+c/d
\end{equation}
for some absolute constant $c>0$.
Its truth is implied by a positive answer to Lehmer's
`conjecture'. Indeed, because $\ho{\alpha}\ge M(\alpha)^{1/d}=e^{h(\alpha)}$, where $d=\deg\alpha$, and since $e^x\ge 1+x$, we have
\begin{equation}\label{E-SZ2}
\ho{\alpha}\ge 1+\frac{\log M(\alpha)}{d}=1+h(\alpha),
\end{equation}
so that if $M(\alpha)\ge c_0>1$ then $\ho{\alpha}> 1+\frac{\log c_0}{d}$.
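A quick numerical illustration (not from the survey) of the chain $\ho{\alpha}\ge M(\alpha)^{1/d}\ge 1+h(\alpha)$, taking $\alpha=\phi$, the golden ratio, of degree $d=2$ with conjugates $\phi$ and $-1/\phi$:

```python
import math

phi = (1 + math.sqrt(5)) / 2     # alpha = golden ratio, degree d = 2
d = 2
M = phi                          # only the conjugate phi lies outside |z| = 1
house = phi                      # maximum modulus of the conjugates
h = math.log(M) / d              # Weil height h(alpha) = log M / d
# The chain of inequalities reads: house >= M**(1/d) >= 1 + h.
```

Here $\ho{\phi}=1.618\dots\ge \phi^{1/2}=1.272\dots\ge 1+h=1.240\dots$, as expected.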
Likewise, from this inequality any results in the direction of
solving Lehmer's problem will have a corresponding
`Schinzel-Zassenhaus conjecture' version. In particular, this applies to the results of Section \ref{SS-nonrec} below, including that of Breusch. His inequality appears to be the first, albeit conditional, result in the direction
of the Schinzel-Zassenhaus conjecture or the Lehmer problem.
\section{Unconditional lower bounds for $M(\alpha)$ that tend to $1$\\ as $d\to\infty$}\label{S-small}
\subsection{The bounds of Blanksby and Montgomery, and Stewart.}
The lower bound for $M(\alpha)$ coming from (\ref{E-SZ1}) was dramatically improved in 1971 by Blanksby and Montgomery \cite{BM71}, who showed, again for
$\alpha$ of degree $d>1$ and not a root of unity, that
\[
M(\alpha)>1+\frac{1}{52d\log(6d)}.
\]
Their methods were based on Fourier series in several variables, making use of the nonnegativity of Fej\' er's
kernel
\begin{equation*}
\tfrac{1}{2}+\sum_{k=1}^K\left(1-\tfrac{k}{K+1}\right)\cos(kx)=\tfrac{1}{2(K+1)}
\left(\sum_{j=0}^Ke^{ix\left(\frac{K}{2}-j\right)}\right)^2. \end{equation*}
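The displayed identity (and hence the nonnegativity of the kernel) can be verified numerically; a minimal sketch (an illustration, not from \cite{BM71}), using the fact that the inner exponential sum is real by symmetry:

```python
import cmath
import math

def fejer_lhs(x, K):
    """Left-hand side: 1/2 + sum_{k=1}^K (1 - k/(K+1)) cos(kx)."""
    return 0.5 + sum((1 - k / (K + 1)) * math.cos(k * x) for k in range(1, K + 1))

def fejer_rhs(x, K):
    """Right-hand side: the sum over j of e^{ix(K/2 - j)} is real by the
    symmetry j <-> K - j, so its square over 2(K+1) is nonnegative."""
    s = sum(cmath.exp(1j * x * (K / 2 - j)) for j in range(K + 1))
    return s.real ** 2 / (2 * (K + 1))
```

Both sides equal half the Fej\'er kernel $\tfrac12 F_K(x)$, which is manifestly nonnegative from the right-hand form.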
They also employed a neat geometric lemma
for bounding the modulus of complex numbers near the unit circle: if $0<\rho\le 1$ and $\rho\le|z|\le \rho^{-1}$
then
\begin{equation}
|z-1|\le\rho^{-1}\left|\rho\tfrac{z}{|z|}-1\right|.
\end{equation}
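The lemma can be spot-checked on random points of the annulus $\rho\le|z|\le\rho^{-1}$; a minimal sketch (an illustration only):

```python
import cmath
import math
import random

def lemma_holds(z, rho):
    """Check |z - 1| <= rho^{-1} * |rho * z/|z| - 1| at a single point z,
    allowing a tiny slack for floating-point rounding."""
    return abs(z - 1) <= abs(rho * z / abs(z) - 1) / rho + 1e-12

random.seed(0)
ok = True
for rho in (0.5, 0.9, 0.99):
    for _ in range(200):
        r = random.uniform(rho, 1 / rho)        # modulus in [rho, 1/rho]
        t = random.uniform(0, 2 * math.pi)      # random argument
        ok = ok and lemma_holds(r * cmath.exp(1j * t), rho)
```

Equality occurs for real positive $z$ with $|z|=\rho^{\pm1}$, which is why a small rounding slack is allowed.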
In 1978 Stewart \cite{Stew78a} caused some surprise by obtaining a lower
bound of the same strength $1+\frac{C}{d\log d}$ by the use of a completely different argument. He
based his proof on the construction of an auxiliary function of the type used in transcendence proofs.
In such arguments it is of course necessary to make use of some arithmetic information, namely that
the polynomials one is dealing with, here the minimal polynomials of
algebraic integers, are monic, have integer coefficients, and have no root that is a root of unity. In the three
proofs of the results given above, this is done by making use of the fact that, for $\alpha$ not a root of unity,
the Pierce numbers $\prod_{i=1}^d(1-\alpha_i^m)$ are then nonzero integers for all $m\in\mathbb{N}$. Hence they are at least $1$ in modulus.
\subsection{Dobrowolski's lower bound.} In 1979 a breakthrough was achieved by Dobrowolski, who, like Stewart, used an argument based on an
auxiliary function to get a lower bound
for $M(\alpha)$. However, he also employed more powerful arithmetic information: the fact that for any prime $p$
the resultant of the minimal polynomials of $\alpha$ and of $\alpha^p$ is an integer
multiple of $p^d$. Since this
can be shown to be nonzero for $\alpha$ not a root of unity, it is at least $p^d$ in modulus. Dobrowolski
\cite{Dobr79} was
able to apply this fact to obtain for $d\ge 2$ the much improved lower bound
\begin{equation}\label{E-D}
M(\alpha)> 1 +\frac{1}{1200} \left(\frac{\log\log d}{\log d}\right)^3.
\end{equation}
He also has an asymptotic version of his result, where the constant $1/1200$ can be increased to $1-\varepsilon$ for $\alpha$ of degree $d\ge d_0(\varepsilon)$.
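To get a feel for the strength of (\ref{E-D}), a quick evaluation (an illustration only): even at modest degrees the bound exceeds $1$ by a minute amount, far below Lehmer's value $1.176\dots$, which is why the dependence on $d$ is the crux of the problem.

```python
import math

def dobrowolski_bound(d):
    """Evaluate the right-hand side 1 + (1/1200)(log log d / log d)^3 of (E-D)."""
    return 1 + (math.log(math.log(d)) / math.log(d)) ** 3 / 1200

b10, b1000 = dobrowolski_bound(10), dobrowolski_bound(1000)
# Both values are barely above 1, well below M(L) = 1.17628..., and the
# bound even decreases as d grows.
```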
Improvements in the constant in Dobrowolski's Theorem have been made since that time. Cantor and Straus \cite{CS82} proved the asymptotic version of his result with the larger constant $2-\varepsilon$, by a different method: the auxiliary function was replaced
by the use of generalised Vandermonde determinants. See also \cite{Raus85} for a similar argument
(plus some early references to these determinants). As with Dobrowolski's argument, the large
size of the resultant of $\alpha$ and $\alpha^p$ was an essential ingredient.
Louboutin \cite{Loub83} improved the constant further, to $9/4-\varepsilon$, using the Cantor-Straus method. A different proof of Louboutin's result was given by Meyer \cite{Meye88}. Later Voutier \cite{Vout96},
by a very careful argument based on Cantor-Straus, has obtained the constant $1/4$ valid for all $\alpha$ of
degree $d\ge 2$.
However, no-one has been able to improve the
dependence on the degree $d$ in (\ref{E-D}), so that Lehmer's problem remains unsolved!
\subsection{Generalisations of Dobrowolski's Theorem.}
Amoroso and David \cite{AD98,AD99} have generalised Dobrowolski's
result in the following way. Let $\alpha_1,\dots,\alpha_n$ be $n$
multiplicatively independent algebraic numbers in a number field
of degree $d$. Then for some constant $c(n)$ depending only on $n$,
\begin{equation}
h(\alpha_1)\dots h(\alpha_n)\ge \frac{1}{d \log(3d)^{c(n)}}.
\end{equation}
Matveev \cite{Matv99} also has a result of this type, but using
instead the modified Weil height
$h_*(\alpha):=\max(h(\alpha),d^{-1}|\log\alpha|)$.
Amoroso and Zannier \cite{AZ} have given a version of Dobrowolski's result for $\alpha$, not $0$ or a root of unity, of degree $D$ over a finite abelian extension of a number field. Then
\begin{equation}
h(\alpha)\ge \frac{c}{D}\left(\frac{\log\log{5D}}{\log{2D}}\right)^{13},
\end{equation}
where the constant $c$ depends only on the number field, not on its abelian extension. Amoroso and Delsinne \cite{ADe} have recently improved this result,
for instance essentially reducing the exponent $13$ to $4$.
Analogues of Dobrowolski's Theorem have been proved for elliptic curves by
Anderson and Masser \cite{AM80}, Hindry and Silverman \cite{HS90}, Laurent \cite{Laur83} and Masser \cite{Mass89}.
In particular Masser proved that for an elliptic curve $E$ defined over a number field $K$ and a nontorsion
point $P$ defined over a degree $\le d$ extension
$F$ of $K$, the canonical height $\hat h(P)$ satisfies
$$
\hat h(P)\ge \frac{C}{d^{3}(\log d)^2}.
$$
Here $C$ depends only on $E$ and $K$. When $E$ has non-integral $j$-invariant
Hindry and Silverman improved this bound to $\hat h(P)\ge \frac{C}{d^{2}(\log d)^2}$. In the case where $E$ has complex multiplication, however,
Laurent obtained the stronger bound
$$
\hat h(P)\ge \frac{C}{d}(\log\log d/\log d)^3.
$$
This is completely analogous to the formulation of Dobrowolski's result (\ref{E-D}) in terms of the Weil
height $h(\alpha)=\log M(\alpha)/d$.
\section{Restricted results of Lehmer strength: $M(\alpha)>c>1$.}\label{S-medium}
\subsection{Results for nonreciprocal algebraic numbers and polynomials}\label{SS-nonrec}
Recall that a polynomial $P(z)$ of degree $d$ is said to be {\it reciprocal} if it satisfies $z^dP(1/z)= \pm P(z)$. (With the negative
sign,
clearly $P(z)$ is divisible by $z-1$.) Furthermore an algebraic number $\alpha$ is {\it reciprocal} if it is conjugate to
$\alpha^{-1}$ (as then $P_\alpha$ is a reciprocal polynomial). One might at first think that it should be possible to prove
stronger results on
Lehmer's problem if we restrict our attention to reciprocal polynomials. However, this is far from being the
case:
reciprocal polynomials seem to be the most difficult to work with, perhaps because cyclotomic polynomials are
reciprocal; we can prove stronger results on Lehmer's problem if we restrict our attention to nonreciprocal
polynomials!
The first result in this direction was due to Breusch \cite{Breu51}. Strangely, this paper was unknown to number
theorists until it was recently
unearthed by Narkiewicz.
Breusch proved that for $\alpha$ a nonreciprocal algebraic integer,
\begin{equation}\label{E-Br}
M(\alpha)\ge M(z^3-z^2-\tfrac{1}{4})=1.1796\dots\quad.
\end{equation}
Breusch's argument is based on the study of the resultant of $\alpha$ and $\alpha^{-1}$, for $\alpha$ a
root of
$P$. On the one hand, this resultant must be at least $1$ in modulus. But, on the other hand, this is not possible
if $M(P)$ is too
close
to $1$, because then all the distances $|\alpha_i-\overline{\alpha_i^{-1}}|$ are too small. (Note that $\alpha_i= \overline{\alpha_i^{-1}}$ implies that $P$ is
reciprocal.)
In 1971 Smyth \cite{Smyt71} independently improved the constant in (\ref{E-Br}), showing for $\alpha$ a nonreciprocal algebraic integer that
\begin{equation}\label{E-ta}
M(\alpha)\ge M(z^3-z-1)=\theta_0=1.3247\dots,
\end{equation}
the real root of $z^3-z-1=0$. This constant is best possible here,
$z^3-z-1$ being nonreciprocal. Equality $M(\alpha)=\theta_0$
occurs only for $\alpha$ conjugate to $(\pm \theta_0)^{\pm 1/k}$
for $k$ some positive integer.\footnote{As Boyd \cite{Boyd86b} pointed out, however,
this does not
preclude the possibility of equality for some {\it
reciprocal} $\alpha$. But it was proved by Dixon and Dubickas
\cite[Cor. 14]{DD04} that this could not happen.} Otherwise in fact $M(\alpha)>\theta_0+10^{-4}$ (\cite{Smyt72}), so that $\theta_0$
is an isolated point in the spectrum of Mahler measures of nonreciprocal
algebraic integers. The lower bound $10^{-4}$ for this gap in the spectrum was increased to $0.000260\dots$ by Dixon and Dubickas \cite[Th. 15]{DD04}. It would be interesting to know more about this spectrum. All of its known small
points come from trinomials, or their irreducible factors:
$1.324717959\dots =M(z^3-z-1)=M(\frac{z^5-z^4-1}{z^2-z+1})$;
$1.349716105\dots=M(z^5-z^4+z^2-z+1)=M(\frac{z^7+z^2+1}{z^2+z+1})$;
$1.359914149\dots=M(z^6-z^5+z^3-z^2+1)=M(\frac{z^8+z+1}{z^2+z+1})$;
$1.364199545\dots=M(z^5-z^2+1)$;
$1.367854634\dots=M(z^9-z^8+z^6-z^5+z^3-z+1)=M(\frac{z^{11}+z^4+1}{z^2+z+1})$.
The smallest known limit point of nonreciprocal measures is
$$\lim_{n\to\infty}M(z^n+z+1)=1.38135\dots$$
(\cite{Boyd78b}).
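The constant $\theta_0$ is easy to compute to high precision, e.g. by Newton's method on $z^3-z-1$ (an illustration only):

```python
# Newton iteration for the real root theta_0 of z^3 - z - 1 = 0,
# the smallest Pisot number, 1.32471795724...
theta = 1.5                      # starting guess to the right of the root
for _ in range(60):
    theta -= (theta**3 - theta - 1) / (3 * theta**2 - 1)
```

The other two roots of $z^3-z-1$ form a complex pair of modulus $\theta_0^{-1/2}<1$, so $M(z^3-z-1)=\theta_0$ indeed.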
The spectrum clearly contains the set of all Pisot numbers, except perhaps the reciprocal ones.
But in fact it does contain those too,
a result due to Boyd \cite[Proposition 2]{Boyd86b}. There are
however smaller limit points of reciprocal measures (see \cite{Boyd81}, \cite{BM05}).
The method of proof of (\ref{E-ta}) was based on the Maclaurin expansion of the rational function $F(z)=P(0)P(z)/z^dP(1/z)$,
which has integer coefficients and is nonconstant for $P$ nonreciprocal. This idea had been used in 1944
by Salem \cite{Sale44} in his proof that the set of Pisot numbers is closed, and in the same year by
Siegel \cite{Sieg44} in his proof that $\theta_0$ is the smallest Pisot number. One can write $F(z)$ as a
quotient $f(z)/g(z)$ where $f$ and $g$ are both holomorphic and bounded above by $1$ in modulus in the
disc $|z|<1$. Furthermore, $f(0)=g(0)=M(P)^{-1}$. These functions were first studied by
Schur \cite{Schu17}, who completely specified the conditions on the coefficients of a power series $\sum_{n=0}^\infty c_nz^n$ for it to belong
to this class. Then study of functions of this type, combined
with the fact that the series of their quotient has integer coefficients, enables one to get the required
lower
bound for $M(P)$. To prove that $\theta_0$ is an isolated point of
the nonreciprocal spectrum, it was necessary to consider the quotient
$F(z)/F_1(z)$, where $F_1(z)=P_1(0)P_1(z)/z^dP_1(1/z)$. Here
$P_1$ is chosen as the minimal polynomial of some $(\pm \theta_0)^{\pm
1/k}$ so that, if $F(z)=1+a_kz^k+\dots$ with $a_k\ne 0$, then also $F_1(z)\equiv 1+a_kz^k\pmod {z^{k+1}}$.
Thus this quotient, assumed nonconstant, had a first nonzero term of
higher order, enabling one to show that $M(P)>\theta_0+10^{-4}$.
\subsection{Nonreciprocal case: generalizations}
Soon afterwards Schinzel \cite{Schi73} and then Bazylewicz \cite{Bazy76} generalised Smyth's result to polynomials
over
Kroneckerian fields. (These are fields that are either totally real extensions of the rationals, or totally nonreal quadratic extensions of such fields.) For a further generalisation to polynomials in several variables see
\cite[Theorem 70]{Schi00}. In these generalisations the optimal
constant is obtained. If the field does not contain a primitive
cube root of unity $\omega_3$ then the best constant is again
$\theta_0$, while if it does contain $\omega_3$ then the best constant
is the maximum modulus of the roots $\theta$ of
$\theta^2-\omega_3\theta-1= 0$.
Generalisations to algebraic numbers were proved by Notari \cite{Nota78} and Lloyd-Smith \cite{Lloy85}.
See also Skoruppa's Heights notes \cite{Skor99} and Schinzel
\cite{Schi00}.
\subsection{The case where {\bf $\mathbb{Q}(\alpha)/\mathbb{Q}$} is Galois}
In 1999 Amoroso and David \cite{AD99}, as a Corollary of a far more general result concerning heights of
points on subvarieties of ${\mathbb G}_{\text{m}}^n$, solved Lehmer's
problem for $\mathbb{Q}(\alpha)/\mathbb{Q}$ a Galois extension: they proved that
there is a constant $c>1$ such that if $\alpha$ is not zero or a root of unity and $\mathbb{Q}(\alpha)$ is Galois of
degree $d$ then $M(\alpha)\ge c$.
\subsection{Other restricted results of Lehmer strength} Mignotte \cite[Cor. 2]{Mign78} proved that if $\alpha$
is an algebraic number of degree $d$ such that there is a
prime less than $d\log d$ that is unramified in the field $\mathbb{Q}(\alpha)$ then $M(\alpha)\ge 1.2$.
Mignotte \cite[Prop. 5]{Mign78} gave a very short proof, based on an idea of Dobrowolski, of the fact that for
an irreducible noncyclotomic polynomial $P$ of length $L=||P||_1$,
$M(P)\ge 2^{1/2L}$. For a similar result (where $2^{1/2L}$ is replaced
by $1+1/(6L)$), see Stewart \cite{Stew78b}.
In 2004 P. Borwein, Mossinghoff and Hare \cite{BHM04} generalised the argument in \cite{Smyt71} to nonreciprocal
polynomials $P$ all of whose coefficients are odd, proving that in this case
$$M(P)\ge M(z^2-z-1)=\phi.$$ Here $\phi=(1+\sqrt{5})/2$. This lower bound is clearly
best possible. Recently Borwein, Dobrowolski and
Mossinghoff have been able to drop the requirement of nonreciprocality: they proved
in \cite{BDM07} that for a noncyclotomic irreducible polynomial with all odd
coefficients, \begin{equation} M(P)\ge 5^{1/4}=1.495348\dots\quad. \end{equation} In the other direction,
in a search \cite{BHM04} of polynomials up to degree $72$ with coefficients $\pm 1$ and no cyclotomic factor
the smallest Mahler measure found was $M(z^6+z^5-z^4-z^3-z^2+z+1)=1.556030\dots$\,\,.
Dobrowolski, Lawton and Schinzel \cite{DLS83} first
gave a bound for the Mahler measure of a
noncyclotomic integer polynomial $P$ in terms of the number $k$ of its nonzero coefficients: \begin{equation} M(P)\ge 1+\frac{1}{\exp_{k+1}2k^2}. \end{equation} Here
$\exp_{k+1}$ is the $(k+1)$-fold exponential.
This was later improved by Dobrowolski \cite{Dobr91} to $1+\frac{1}{13911}\exp(-2.27k^k)$, and lately \cite{Dobr06} to
\begin{equation} M(P)\ge 1+\frac{1}{\exp(a 3^{\lfloor(k-2)/4\rfloor}k^2\log k)}, \end{equation}
where $a<0.785$.
Furthermore, in the same paper he proves that if $P$ has no cyclotomic factors then
\begin{equation} M(P)\ge 1+\frac{0.31}{k!}. \end{equation}
With the additional restriction that $P$ is irreducible, Dobrowolski \cite{Dobr80} gave the lower bound
\begin{equation} M(P)\ge
1+\frac{\log(2e)}{2e}(k+1)^{-k}.\end{equation}
In \cite{Dobr06} he strengthened this to
\begin{equation} M(P)\ge 1+\frac{0.17}{2^mm!}, \end{equation}
where $m=\lceil k/2\rceil$.
Recently Dobrowolski \cite{Dobr07} has proved that for an integer symmetric $n\times n$ matrix $A$ with
characteristic polynomial $\chi_A(x)$, the reciprocal polynomial
$z^n\chi_A(z+1/z)$ is either cyclotomic or has Mahler measure at least $1.043$.
The Mahler measure of $A$ can then be
defined
to be the Mahler measure of this polynomial. McKee and Smyth \cite{MS07} have just improved the lower bound
in Dobrowolski's result to the best possible value $\tau_0=1.176\dots$ coming from Lehmer's polynomial.
The adjacency matrix of the graph below is an example of a matrix where this value is attained.
The Mahler measure of a graph, defined as the Mahler measure of its adjacency matrix, has been studied by
McKee and Smyth \cite{MS05}. They showed that its Mahler measure was either $1$ or at
least $\tau_0$, the Mahler measure of the graph
\leavevmode
\hbox{%
\epsfxsize0.8in \epsffile{E10.eps}}.
They further found all numbers in the interval $[1,\phi]$ that were Mahler measures of graphs.
All but one of these numbers is a Salem number.
\section{Restricted results where $M(\alpha}\newcommand{\om}{\omega)>C^d$.}\label{S-big}
\subsection{Totally real {\bf $\alpha$}}
Suppose that $\alpha$ is a totally real algebraic integer of degree $d$, with $\alpha\ne 0, \pm 1$. Then Schinzel \cite{Schi73} proved that
\begin{equation}\label{E-S}
M(\alpha)\ge \phi^{d/2}.
\end{equation}
A one-page proof of this result was later provided by H\"ohn and Skoruppa
\cite{HS93}. The result also holds for any nonzero algebraic number $\alpha$ in a Kroneckerian
field, provided $|\alpha|\ne 1$. Amoroso
and Dvornicich \cite[p. 261]{ADv00} gave the interesting example of $\alpha=\tfrac{1}{2}\sqrt{3+\sqrt{-7}}$, not an
algebraic integer, where $|\alpha|=1$,
$\mathbb{Q}(\alpha)$ is Kroneckerian, but $M(\alpha)=2<\phi^2$.
Smyth \cite{Smyt80-1} studied the spectrum of values $M(\alpha)^{1/d}$, for totally real $\alpha$, in $(1,\infty)$.
He showed that this spectrum was discrete at first, and found its smallest four points. The method used is
semi-infinite
linear programming (continuous real variables and a finite number
of constraints), combined with resultant information. One takes a list of judiciously chosen polynomials $P_i(x)$,
and then finds the largest $c$ such that for some $c_i\ge 0$
\begin{equation}
\log_+|x|\ge c -\sum_ic_i\log|P_i(x)|
\end{equation}
for all real $x$. Then, averaging this inequality over the conjugates of $\alpha$,
one gets that $M(\alpha)\ge e^c$, unless some $P_i(\alpha)=0$.
Two further isolated
points were later found by Flammang \cite{Flam96}, giving the six points comprising the whole of the spectrum
in $(1, 1.3117)$. On the
other hand Smyth also showed that this spectrum was dense in $(\ell,\infty)$, where $\ell=1.31427\dots$\,\,. The
number
$\ell$ is $\lim_{n\to\infty}M(\beta_n)^{1/2^n}$, where $\beta_0=1$ and $\beta_n$, of degree $2^n$, is defined by $\beta_n-\beta_n^{-1}=\beta_{n-1}$ $(n\ge 1)$. The limiting
distribution of the conjugates of $\beta_n$ was studied in detail by Davie and Smyth \cite{DS89}. It is
highly irregular: indeed, the Hausdorff dimension of the associated probability measure is
$0.800611138269168784\dots$\,\,. It is the invariant measure of the map $\mathbb{C}\to\mathbb{C}$ taking $t\mapsto t-1/t$,
whose Julia set (and thus the support of the measure) is $\mathbb{R}$.
Bertin \cite{Bert99} pointed out that from a result of Matveev
(\ref{E-S}) could be strengthened when $\alpha$ was a nonunit.
\subsection{Langevin's Theorem} In 1988 Langevin \cite{Lang88} proved the following general result, which
included
Schinzel's result
(\ref{E-S}) as a special case (though not with the explicit and best constant given by Schinzel). Suppose that
$V$ is an
open subset of $\mathbb C$ that has nonempty intersection with the unit circle $|z|=1$, and is stable under
complex conjugation. Then there is a constant $C(V)>1$ such that for every irreducible monic integer
polynomial $P$ of degree $d$ having all its roots outside $V$ one has $M(P)>C(V)^d$. The proof is based on the
beautiful
result of Kakeya to the effect that, for a compact subset of $\mathbb{C}}\newcommand{\Q}{\mathbb{Q}$ stable under complex conjugation and of transfinite
diameter
less than $1$ there is a nonzero polynomial with integer coefficients whose maximum modulus on this set is
less
than $1$. (Kakeya's result is applied to the unit disc with $V$ removed.) For Schinzel's result
take $V=\mathbb{C}\setminus \R$, for which $C(V)=\phi^{1/2}$; the value of $C(V)$ given here is best possible.
It is of course of interest to find such best possible constants for other sets $V$.
Stimulated by Langevin's Theorem, Rhin and Smyth \cite{RS95} studied the case where the subset of $\mathbb{C}}\newcommand{\Q}{\mathbb{Q}$ was the
sector $V_\theta}\newcommand{\disc}{\operatorname{disc}=\{z\in\mathbb{C}}\newcommand{\Q}{\mathbb{Q}: |\arg z|> \theta}\newcommand{\disc}{\operatorname{disc}\}$.
They found a value $C(V_\theta}\newcommand{\disc}{\operatorname{disc})>1$ for $0\le\theta}\newcommand{\disc}{\operatorname{disc}\le 2\pi/3$, including $9$ subintervals of this range for which
the constants found were best possible. In particular, the best constant $C(V_{\pi/2})$ was evaluated. This implied
that for $P(z)$ irreducible, of degree $d$, having all its roots with positive real part and not equal to
$z-1$ or $z^2-z+1$ we have
\begin{equation}
M(P)^{1/d}\ge M(z^6-2z^5+4z^4-5z^3+4z^2-2z+1)^{1/6}=1.12933793\dots,
\end{equation}
all roots of $z^6-2z^5+4z^4-5z^3+4z^2-2z+1$ having positive real part. Curiously, for
some root $\alpha}\newcommand{\om}{\omega$ of this polynomial, $\alpha}\newcommand{\om}{\omega+1/\alpha}\newcommand{\om}{\omega=\theta}\newcommand{\disc}{\operatorname{disc}_0^2$, where as above $\theta}\newcommand{\disc}{\operatorname{disc}_0$ is the smallest Pisot
number.
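Both facts are easy to confirm numerically; the following sketch (Python with numpy) checks the value of $M^{1/6}$, that all roots lie in the right half-plane, and the identity $\alpha+1/\alpha=\theta_0^2$. The identity is in fact exact: writing $y=z+1/z$ transforms the sextic into $y^3-2y^2+y-1$, and $\theta_0^2$ is a root of this cubic.

```python
import numpy as np

def mahler(coeffs):                       # coefficients, highest degree first
    return abs(np.prod([r for r in np.roots(coeffs) if abs(r) > 1]))

P = [1, -2, 4, -5, 4, -2, 1]              # z^6-2z^5+4z^4-5z^3+4z^2-2z+1
roots = np.roots(P)
assert all(r.real > 0 for r in roots)     # all roots have positive real part
assert abs(mahler(P) ** (1 / 6) - 1.12933793) < 1e-6

# theta_0 = 1.3247..., the real root of z^3 - z - 1 (smallest Pisot number)
theta0 = next(r.real for r in np.roots([1, 0, -1, -1]) if abs(r.imag) < 1e-9)
# some root alpha of P satisfies alpha + 1/alpha = theta_0^2
assert min(abs(a + 1 / a - theta0 ** 2) for a in roots) < 1e-8
```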
Recently Rhin and Wu \cite{RW05} extended these results, so that there are now $13$ known subintervals of
$[0,\pi]$ where the best constant $C(V_\theta}\newcommand{\disc}{\operatorname{disc})$ is known.
It is of interest to see what happens as $\theta}\newcommand{\disc}{\operatorname{disc}$ tends to $\pi$; maybe one could obtain a bound connected to
Lehmer's original problem. Mignotte \cite{Mign89}
has looked at this, and has shown that for $\theta}\newcommand{\disc}{\operatorname{disc}=\pi-\varepsilon$ the smallest limit point of the set $M(P)^{1/d}$
for $P$ having all its roots outside $V_\theta}\newcommand{\disc}{\operatorname{disc}$ is at least $1+c\varepsilon^3$ for some positive constant $c$.
Dubickas and Smyth \cite{DS01a} applied Langevin's Theorem to the annulus
$$V(R^{-\gamma},R)=\{z\in\mathbb{C}}\newcommand{\Q}{\mathbb{Q}\mid R^{-\gamma}<|z|<R\},$$ where $R>1$ and $\gamma>0$, proving that the best constant
$C(V(R^{-\gamma},R))$ is $R^{\gamma/(1+\gamma)}$.
\subsection{Abelian number fields} In 2000 Amoroso and Dvornicich \cite{ADv00} showed that when $\alpha}\newcommand{\om}{\omega$ is a nonzero algebraic number, not a root of unity,
and $\Q(\alpha}\newcommand{\om}{\omega)$
is an abelian extension of $\Q$ then $M(\alpha}\newcommand{\om}{\omega)\ge 5^{d/12}$. They also give an example with $M(\alpha}\newcommand{\om}{\omega)=7^{d/12}$. It would be interesting to find the best constant $c>1$ such that $M(\alpha}\newcommand{\om}{\omega)\ge c^d$ for these numbers.
Baker and Silverman \cite{Bake03}, \cite{Silv04}, \cite{BaSi04}
generalised this lower bound first to elliptic curves, and then to
abelian varieties of arbitrary dimension.
\subsection{Totally {\bf $p$}-adic fields} Bombieri and Zannier \cite{BZ01} proved an analogue of Schinzel's result (\ref{E-S})
for `totally $p$-adic' numbers:
that is, for algebraic numbers $\alpha}\newcommand{\om}{\omega$ of degree $d$ all of whose conjugates lie in $\Q_p$. They showed that then $M(\alpha}\newcommand{\om}{\omega)\ge c_p^d$, for some constant $c_p>1$.
\subsection{The heights of Zagier and Zhang and generalisations.}
Zagier \cite{Zagi93} gave a result that can be formulated as
proving that any irreducible nonconstant
polynomial of degree $d$ in $\mathbb{Z}[x(x-1)]$ has Mahler
measure at least $\phi^{d/2}$, apart from $\pm(x(x-1)+1)$. Doche \cite{Doch01a,Doch01b} studied
the spectrum resulting from the measures of such polynomials, giving
a gap to the right of the smallest point $\phi^{1/2}$, and finding a
short interval where the smallest limit point lies. He used the
semi-infinite linear programming method outlined above. For this
problem, however, finding the second point of the spectrum seems to
be difficult. Zagier's work was motivated by a far-reaching result
of Zhang \cite{Zhan92} (see also \cite[p. 103]{Wald00}) for curves
on a linear torus. He proved that for all such curves, apart from those of the type
$x^iy^j=\om$, where $i,j\in\mathbb{Z}$ and $\om$ is a root of unity, there is a constant $c>0$ such that the curve has only
finitely many algebraic points $(x,y)$ with $h(x)+h(y)\le c$.
Zagier's result was for the curve $x+y=1$.
Following on from Zhang, there have been recent deep and diverse
generalisations in the area of small points on subvarieties of
${\mathbb G}_{\text{m}}^n$. In particular see Bombieri and Zannier
\cite{BZ95}, Schmidt \cite{Schm96} and Amoroso and David
\cite{AD00,AD03,AD04,AD06}.
Rhin and Smyth \cite{RS97} generalised Zagier's result by replacing
polynomials in $\mathbb{Z}[x(x-1)]$ by polynomials in $\mathbb{Z}[Q(x)]$, where
$Q(x)\in\mathbb{Z}[x]$ is not $\pm$ a power of $x$. Their proof used a very
general result of Beukers and Zagier \cite{BZ97} on heights of points
on projective hypersurfaces.
Noticing that Zagier's result has the
same lower bound as Schinzel's result above for totally real $\alpha}\newcommand{\om}{\omega$, Samuels \cite{Samu06} has recently
shown that
the same lower bound holds for a more general height
function. His result includes those of both Zagier and Schinzel.
The proof is also based on \cite{BZ97}.
\section{Lower bounds for $\ho{\alpha}\newcommand{\om}{\omega}$}\label{S-house1}
\subsection{General lower bounds} We know that any lower bound for $M(\alpha}\newcommand{\om}{\omega)$ immediately gives a corresponding lower
bound for $\ho{\alpha}\newcommand{\om}{\omega}$\,, using (\ref{E-SZ2}). For instance, from
\cite{Vout96} it follows that for $\alpha}\newcommand{\om}{\omega$ of degree $d>2$ and not a root of unity
\begin{equation}\label{E-V}
\ho{\alpha}\newcommand{\om}{\omega}\ge 1+\frac{1}{4d}\left(\frac{\log\log d}{\log d}\right)^3.
\end{equation}
Some lower bounds, though asymptotically weaker, are better for small degrees.
For example Matveev \cite{Matv91} has shown that for such $\alpha}\newcommand{\om}{\omega$
\begin{equation}\label{E-M}
\ho{\alpha}\newcommand{\om}{\omega}\ge \exp\frac{\log(d+0.5)}{d^2},
\end{equation}
which is better than (\ref{E-V}) for $d\le 1434$ (see \cite{RW07}).
Recently Rhin and Wu have improved (\ref{E-M}) for $d\ge 13$ to
\begin{equation}
\ho{\alpha}\newcommand{\om}{\omega}\ge \exp\frac{3\log(d/2)}{d^2},
\end{equation}
which is better than (\ref{E-V}) for $d\le 6380$. See also the paper of Rhin and Wu in this volume.
Matveev \cite{Matv91} also proves that if $\alpha}\newcommand{\om}{\omega$ is a reciprocal
(conjugate to $\alpha}\newcommand{\om}{\omega^{-1}$) algebraic integer, not a root of unity,
then $\ho{\alpha}\newcommand{\om}{\omega}\ge (p-1)^{1/(pm)}$, where $p$ is the least prime
greater than $m=d/2\ge 3$, where $d$ is the degree of $\alpha$.
Indeed, Dobrowolski's first result in this area \cite{Dobr78}
was for $\ho{\alpha}\newcommand{\om}{\omega}$ rather than $M(\alpha}\newcommand{\om}{\omega)$: he proved that
\begin{equation*}
\ho{\alpha}\newcommand{\om}{\omega}> 1+\frac{\log d}{6d^2}.
\end{equation*}
His argument is a beautifully simple one,
based on the use of the power sums
$s_k=\sum_{i=1}^d \alpha}\newcommand{\om}{\omega_i^k$, the Newton identities, and the arithmetic fact that, for any prime $p$,
$s_{kp}\equiv s_k\pmod p$.
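The congruence is easy to check in any particular case. A minimal sketch (Python), taking the minimal polynomial $x^3-x-1$ of the smallest Pisot number $\theta_0$, computes the $s_k$ exactly over the integers by Newton's identities and verifies $s_{kp}\equiv s_k\pmod p$:

```python
def power_sums(c, n):
    """s_1..s_n for the monic polynomial x^d + c[0] x^(d-1) + ... + c[d-1],
    computed exactly over the integers via Newton's identities."""
    d = len(c)
    s = [0] * (n + 1)                        # s[k] = sum of k-th powers of roots
    for k in range(1, n + 1):
        acc = sum(c[i - 1] * s[k - i] for i in range(1, min(k, d) + 1) if k > i)
        if k <= d:
            acc += k * c[k - 1]
        s[k] = -acc
    return s

# x^3 - x - 1: s_1, s_2, ... = 0, 2, 3, 2, 5, 5, 7, ...
s = power_sums([0, -1, -1], 100)
for p in (2, 3, 5, 7, 11):
    for k in range(1, 10):
        assert (s[k * p] - s[k]) % p == 0    # the congruence s_{kp} = s_k (mod p)
```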
The strongest asymptotic result to date in the direction of the Schinzel-Zassenhaus conjecture is due
to Dubickas \cite{Dubi93}: that
given $\varepsilon>0$ there is a constant $d(\varepsilon)$ such than any
nonzero algebraic integer $\alpha}\newcommand{\om}{\omega$ of degree $d>d(\varepsilon)$ not a root of unity
satisfies
\begin{equation}\label{E-64}
\ho{\alpha}\newcommand{\om}{\omega}>
1+\left(\frac{64}{\pi^2}-\varepsilon\right)\left(\frac{\log\log d}{\log
d}\right)^3\frac{1}{d}.
\end{equation}
Cassels \cite{Cass66} proved that if an algebraic number $\alpha}\newcommand{\om}{\omega$ of
degree $d$ has the property $\ho{\alpha}\newcommand{\om}{\omega}\le 1+\frac{1}{10d^2}$ then at least one of
the conjugates of $\alpha}\newcommand{\om}{\omega$ has modulus $1$. Although this result has
been superseded by Dobrowolski's work, Dubickas \cite{Dubi00}
applied the inequality \begin{equation} \prod_{k<j}|z_k\overline{z_j}-1|\le
n^{n/2}\left(\prod_{m=1}^n\max(1,|z_m|)\right)^{n-1} \end{equation} for complex
numbers $z_1,\dots,z_n$, a variant of one in \cite{Cass66},
to prove that
$$M(\alpha)^2\left|\prod_i\log|\alpha_i|\right|^{1/d}\ge 1/(2d)$$
for a nonreciprocal algebraic number $\alpha}\newcommand{\om}{\omega$ of degree $d$ with conjugates $\alpha}\newcommand{\om}{\omega_i$.
\subsection{ The house {\bf$\ho{\alpha}\newcommand{\om}{\omega}$} for {\bf$\alpha}\newcommand{\om}{\omega$} nonreciprocal}\label{S-3/2}
The Schinzel-Zassenhaus conjecture (\ref{E-SZ}) restricted to
nonreciprocal polynomials follows from Breusch's result above, with
$c=\log 1.1796\dots=0.165\dots$, using (\ref{E-SZ2}). Independently
Cassels \cite{Cass66} obtained this result with $c=0.1$, improved by
Schinzel to $0.2$
(\cite{Schi69}), and by Smyth \cite{Smyt71} to $\log \theta}\newcommand{\disc}{\operatorname{disc}_0=0.2811\dots$\,\,. He also showed that $c$ could not
exceed $\frac{3}{2}\log\theta}\newcommand{\disc}{\operatorname{disc}_0=0.4217\dots$\,\,.
In 1985 Lind and Boyd (see \cite{Boyd85}), as a result of extensive computation (see Section \ref{S-compute}),
conjectured that, for degree $d$, the extremal $\alpha$ are nonreciprocal and have
approximately $\frac{2}{3}d$ roots outside the unit circle. What a contrast with Mahler measure, where all small
$M(\alpha}\newcommand{\om}{\omega)$ are reciprocal!
This would imply that the best constant $c$ is $\frac{3}{2}\log\theta}\newcommand{\disc}{\operatorname{disc}_0$. In 1997 Dubickas \cite{Dubi97}
proved that $c>0.3096$ in this nonreciprocal case.
\subsection{The house of totally real {\bf $\alpha}\newcommand{\om}{\omega$}.}
Suppose that $\alpha}\newcommand{\om}{\omega$ is a totally real algebraic integer.
If $\ho{\alpha}\newcommand{\om}{\omega}\le 2$ then by \cite[Theorem 2]{Kron1857} $\alpha}\newcommand{\om}{\omega$
is of the form $\om+1/\om$, where $\om$ is a root of unity. If for some $\delta>0$ we have $2<\ho{\alpha}\newcommand{\om}{\omega}\le 2+\delta^2/(1+\delta)$, then, on
defining $\gamma$ by
$\gamma+1/\gamma=\alpha}\newcommand{\om}{\omega$, we see that $\gamma$ and its conjugates are either
real or lie on the unit circle, and $1<\ho{\gamma}\le 1+\delta$. This
fact readily enables us to deduce a lower bound greater than $2$ for $\ho{\alpha}\newcommand{\om}{\omega}$ whenever we have a lower
bound greater than $1$ for $\ho{\gamma}$\,. Thus from (\ref{E-SZ1}) \cite{SZ65} it
follows that for $\alpha}\newcommand{\om}{\omega$ not of the form $2\cos \pi r$ for any $r\in \Q$
\begin{equation}\label{E-SZ1R}
\ho{\alpha}\newcommand{\om}{\omega}\ge 2+4^{-2d-3} \end{equation} \cite{SZ65}. In a similar way
(\ref{E-64}) above implies that for such $\alpha}\newcommand{\om}{\omega$, and $d>d(\varepsilon)$
that
\begin{equation}\label{E-64R}
\ho{\alpha}\newcommand{\om}{\omega}>
2+\left(\frac{4096}{\pi^4}-\varepsilon\right)\left(\frac{\log\log d}{\log
d}\right)^6\frac{1}{{d}^2}
\end{equation}
\cite{Dubi93}.
However Dubickas \cite{Dubi95} managed to improve this lower bound to
\begin{equation} \ho{\alpha}\newcommand{\om}{\omega}>2+3.8\frac{(\log\log d)^3}{d(\log d)^4}.
\end{equation} He improved the constant $3.8$ to $4.6$ in \cite{Dubi97}.
\subsection{The Kronecker constant} Callahan, Newman and Sheingorn \cite{CNS77} define the {\it
Kronecker constant} of a number field $K$ to be the least $\varepsilon>0$ such that
$\ho{\alpha}\newcommand{\om}{\omega}\ge 1+\varepsilon$ for every algebraic integer $\alpha}\newcommand{\om}{\omega\in K$. The truth of the Schinzel-Zassenhaus conjecture
(\ref{E-SZ}) would imply that
the Kronecker constant of $K$ is at least $c/[K:\Q]$. They give \cite[Theorem 2]{CNS77} a sufficient condition
on $K$ for this to be the case.
They also point out, from
considering $\alpha}\newcommand{\om}{\omega\overline{\alpha}\newcommand{\om}{\omega}-1$, that if $\alpha}\newcommand{\om}{\omega$ is a nonzero
algebraic integer not a root
of unity in a Kroneckerian field then $\ho{\alpha}\newcommand{\om}{\omega}\ge \sqrt{2}$ (See also \cite{Mign81}), so
that the Kronecker constant of a Kroneckerian field is at least $\sqrt{2}-1$.
\section{Small values of $M(\alpha)$ and $\ho{\alpha}$}\label{S-compute}
\subsection{Small values of { $ M(\alpha}\newcommand{\om}{\omega)$} } The first recorded computations on Mahler measure were performed by Lehmer in his 1933 paper \cite{Lehm33}. He found the smallest values of $ M(\alpha}\newcommand{\om}{\omega)$ for $\alpha}\newcommand{\om}{\omega$ of degrees $2,3$ and $4$, and the smallest $M(\alpha}\newcommand{\om}{\omega)$ for $\alpha}\newcommand{\om}{\omega$
reciprocal of degrees $2,4,6$ and $8$. Lehmer records the fact that Poulet (apparently unpublished) ``\dots has made a similar investigation of symmetric polynomials with practically the same results''. Boyd has done extensive computations, searching for `small' algebraic integers
of various kinds. His first major published table was of Salem
numbers less than $1.3$
\cite{Boyd77}, with four more found in \cite{Boyd78a}.
Recall that these are positive reciprocal algebraic integers of degree at least $4$ having only one conjugate
(the number itself) outside the unit circle.
These numbers give many examples of small Mahler measures, most
notably (from (\ref{E-1.176})) $M(L)=1.176\dots$ from the Lehmer
polynomial itself, which is the minimal polynomial of a Salem
number. In later computations \cite{Boyd80}, \cite{Boyd89}, he finds all reciprocal
$\alpha}\newcommand{\om}{\omega$ with $M(\alpha}\newcommand{\om}{\omega)\le 1.3$ and degree up to $20$, and those with
$M(\alpha}\newcommand{\om}{\omega)\le 1.3$ and degree up to $32$ having coefficients in
$\{-1,0,1\}$ (`height $1$').
Mossinghoff \cite{Moss98} extended Boyd's tables from degree $20$ to
degree $24$ for $M(\alpha}\newcommand{\om}{\omega)<1.3$, and to degree $40$ for height $1$
polynomials, finding four more Salem numbers less than $1.3$. He
also has a website \cite{Moss} where up-to-date tables of small Salem
numbers and Mahler measures are conveniently displayed (though
unfortunately without their provenance). Flammang, Grandcolas and
Rhin \cite{FGR99} proved that Boyd's table, with the additions by
Mossinghoff, of the $47$ known Salem numbers less than $1.3$ is
complete up to degree $40$. Recently Flammang, Rhin and Sac-\'Ep\'ee
\cite{FRS06} have extended these tables, finding all
$M(\alpha}\newcommand{\om}{\omega)<\theta}\newcommand{\disc}{\operatorname{disc}_0$ for $\alpha}\newcommand{\om}{\omega$ of degree up to $36$, and all $M(\alpha}\newcommand{\om}{\omega)<1.31$
for $\alpha}\newcommand{\om}{\omega$ of degree up to $40$. This latter computation showed that
the earlier tables of Boyd and Mossinghoff for $\alpha}\newcommand{\om}{\omega$ of degree up to
$40$ with $M(\alpha}\newcommand{\om}{\omega)<1.3$ are complete.
\subsection{Small values of { $\ho{\alpha}\newcommand{\om}{\omega}\,$}} Concerning $\ho{\alpha}\newcommand{\om}{\omega}\,$, Boyd \cite{Boyd85} gives tables of
the smallest values of $\ho{\alpha}\newcommand{\om}{\omega}$ for $\alpha}\newcommand{\om}{\omega$ of degree $d$ up to $12$, and for $\alpha}\newcommand{\om}{\omega$ reciprocal of degree up to $16$. Further
computation has recently been done on this problem by Rhin and Wu
\cite{RW07}. They computed the smallest house of algebraic numbers of degree up to $28$. All are nonreciprocal, as predicted by Boyd's conjecture (see Section \ref{S-3/2}). Their data led the authors to conjecture that,
for a given degree, an algebraic number of that degree with minimal house was a root of a polynomial consisting of at most four monomials.
\section{Mahler measure and the discriminant}\label{S-disc}
\subsection{}
Mahler \cite{Mahl64b} showed that for a complex polynomial $$P(z)=a_0z^d+\dots+a_d=a_0(z-\alpha}\newcommand{\om}{\omega_1)\dots(z-\alpha}\newcommand{\om}{\omega_d)$$
its
discriminant $\disc(P)=a_0^{2d-2}\prod_{i<j}(\alpha}\newcommand{\om}{\omega_i-\alpha}\newcommand{\om}{\omega_j)^2$ satisfies
\begin{equation}
|\disc(P)|\le d^dM(P)^{2d-2}.
\end{equation}
From this it follows immediately that if there is an absolute constant $c>1$ such that $|\disc(P)|\ge (cd)^d$
for all irreducible $P(z)\in\mathbb Z[z]$,
then $M(P)\ge c^{d/(2d-2)}$, which would solve Lehmer's problem. This consequence of Mahler's inequality has
been
noticed in various variants by several people, including Mignotte
\cite{Mign78} and Bertrand \cite{Bert82}.
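As a small sanity check of Mahler's inequality (Python with numpy), take $P(z)=z^3-z-1$, with $\disc(P)=-23$ and $M(P)=\theta_0=1.3247\dots$:

```python
import numpy as np
from itertools import combinations

# Check |disc(P)| <= d^d M(P)^(2d-2) for P(z) = z^3 - z - 1
coeffs = [1, 0, -1, -1]
roots = np.roots(coeffs)
d = len(coeffs) - 1
disc = coeffs[0] ** (2 * d - 2) * np.prod(
    [(a - b) ** 2 for a, b in combinations(roots, 2)])
M = abs(np.prod([r for r in roots if abs(r) > 1]))
assert abs(abs(disc) - 23) < 1e-8             # disc(z^3 - z - 1) = -23
assert abs(disc) <= d ** d * M ** (2 * d - 2)  # 23 <= 27 * M^4 = 83.1...
```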
In 1996 Matveev \cite{Matv96} showed that in Dobrowolski's inequality, the
degree $d\ge 2$ of $\alpha}\newcommand{\om}{\omega$ could be replaced by a much smaller (for large
$d$) quantity $$\delta=\max(d/\disc(\alpha}\newcommand{\om}{\omega)^{1/d},\delta_0(\varepsilon))$$ for those $\alpha}\newcommand{\om}{\omega$
for which $\alpha}\newcommand{\om}{\omega^p$ had degree $d$ for all primes $p$. (Such $\alpha}\newcommand{\om}{\omega$ do not include any roots of unity.)
Specifically, he obtained for given $\varepsilon>0$
\begin{equation}
M(\alpha}\newcommand{\om}{\omega)\ge \exp\left((2-\varepsilon)\left(\frac{\log\log \delta}{\log
\delta}\right)^3\right)
\end{equation}
for these $\alpha}\newcommand{\om}{\omega$.
Mahler \cite{Mahl64b} also gives the lower bound
\begin{equation}
\delta(P)>\sqrt{3}|\disc(P)|^{1/2}d^{-(d+2)/2}M(P)^{-(d-1)}
\end{equation}
for the minimum distance $\delta(P)=\min_{i<j}|\alpha}\newcommand{\om}{\omega_i-\alpha}\newcommand{\om}{\omega_j|$ between
the roots of $P$.
\subsection{Generalisation involving the discriminant of Schinzel's lower
bound}\label{S-discRhin}
Rhin \cite{Rhin04} generalised Schinzel's result (\ref{E-S}) by proving, for $\alpha}\newcommand{\om}{\omega$ a totally positive algebraic
integer of degree $d$ at least $2$ that
\begin{equation}
M(\alpha}\newcommand{\om}{\omega)\ge \left(\frac{\delta_1+\sqrt{\delta_1^2+4}}{2}\right)^{d/2}.
\end{equation}
Here $\delta_1=|\disc(\alpha}\newcommand{\om}{\omega)|^{1/d(d-1)}$. This result apparently also follows from an earlier result of Za\"\i mi \cite{Zaim94} concerning a lower bound for a weighted product of the moduli of the conjugates of an algebraic integer --- see the Math Review of Rhin's paper.
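For instance (a numerical check of my own, Python with numpy), the bound is attained for $\alpha=\phi^2$, the larger root of $x^2-3x+1$: here $d=2$, $\disc(\alpha)=5$, $\delta_1=\sqrt 5$, and the right-hand side equals $(\sqrt 5+3)/2=\phi^2=M(\alpha)$ exactly.

```python
import numpy as np

# alpha = phi^2, a root of x^2 - 3x + 1: totally positive, disc(alpha) = 5
r1, r2 = np.roots([1, -3, 1])
d = 2
disc = (r1 - r2) ** 2                         # = 5
M = max(abs(r1), abs(r2))                     # both roots real, only one > 1
delta1 = abs(disc) ** (1 / (d * (d - 1)))     # |disc|^(1/(d(d-1))) = sqrt(5)
bound = ((delta1 + (delta1 ** 2 + 4) ** 0.5) / 2) ** (d / 2)
assert abs(M - (3 + 5 ** 0.5) / 2) < 1e-9     # M = phi^2 = 2.618...
assert M >= bound - 1e-9                      # equality holds in this example
```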
\section{Properties of $M(\alpha}\newcommand{\om}{\omega)$ as an algebraic number}\label{S-algnum}
A {\it Perron number} is an algebraic integer with exactly one
conjugate of maximum modulus. It is clear from (\ref{E-1}) that
$M(\alpha}\newcommand{\om}{\omega)$ is a Perron number for any algebraic integer $\alpha}\newcommand{\om}{\omega$; this
seems to have been first observed by Adler and Marcus \cite{AM79}
(see \cite{Boyd86b}). In the other direction: is the Perron number
$1+\sqrt{17}$ a Mahler measure? See Schinzel \cite{Schi04}, Dubickas
\cite{Dubi05}. Dubickas \cite{Dubi04c} proves that for any Perron
number $\beta$ some integer multiple of $\beta$ is a Mahler measure.
(These papers also contain other interesting properties of the set
of Mahler measures.) Boyd \cite{Boyd86a} proves that if
$\beta=M(\alpha}\newcommand{\om}{\omega)$ for some algebraic integer $\alpha}\newcommand{\om}{\omega$, then all conjugates
of $\beta$ other than $\beta$ itself either lie in the annulus
$\beta^{-1}<|z|<\beta$ or are equal to $\pm\beta^{-1}$.
If $\alpha}\newcommand{\om}{\omega$ were reciprocal, it might be expected that
$M(\alpha}\newcommand{\om}{\omega)$ would be reciprocal too, while if $\alpha}\newcommand{\om}{\omega$ were nonreciprocal,
then $M(\alpha}\newcommand{\om}{\omega)$ would be nonreciprocal. However neither of these need
be the case: in \cite[Proposition 6]{Boyd86b} Boyd exhibits a family
of degree $4$ Pisot numbers that are the Mahler measures of
reciprocal algebraic integers of degree $6$, and in
\cite[Proposition 2]{Boyd86b} he notes that, for $q\ge 3$, if
$\alpha_q$ is a root of the irreducible nonreciprocal polynomial
$z^4-qz^3+(q+1)z^2-2z+1$, then $M(\alpha_q)=\frac{1}{2}(q+\sqrt{q^2-4})$
is reciprocal. In fact, since
$M(\frac{1}{2}(q+\sqrt{q^2-4}))=\frac{1}{2}(q+\sqrt{q^2-4})$, this
also shows that a number can be both a reciprocal and a
nonreciprocal measure. See also \cite{Boyd87}. Dixon and Dubickas
\cite{DD04} prove that the set of all $M(\alpha}\newcommand{\om}{\omega)$ does not form a
semigroup, as for instance $\sqrt{2}+1$ and $\sqrt{3}+2$ are Mahler
measures, while their product is not. (In terms of polynomials, this
set is of course equal to the set of all $M(P)$ for $P$ irreducible.
If instead we take the set of all (reducible and irreducible)
polynomials, then, because of $M(PQ)=M(P)M(Q)$ this larger set {\it
does} form a semigroup.)
In \cite{Dubi04b} Dubickas proves that the additive group generated
by all Mahler measures is the group of all real algebraic numbers,
while the multiplicative group generated by all Mahler measures is
the group of all positive real algebraic numbers.
We know that $M(P(z))=M(P(\pm z^k))$ for either choice of sign, and
any $k\in \mathbb{N}}\newcommand{\R}{\mathbb{R}$. Is this the only way that Mahler measures of
irreducible polynomials can be equal? Boyd \cite{Boyd80} gives some
illuminating examples to show that there can be other reasons that make this
happen. The examples were discovered during his computation of
reciprocal polynomials of small Mahler measure (see Section \ref{S-compute}). For
example, for $P_6=z^6+2z^5+2z^4+z^3+2z^2+2z+1$ and $P_8=z^8+z^7-z^6-z^5+z^4-z^3-z^2+z+1$ we have
\begin{equation*}
M(P_6)=M(P_8)=1.746793\ldots=M,
\end{equation*}
say, where both polynomials are irreducible. Boyd explains how such
examples arise. If $\alpha_i\ (i=1,\dots,8)$ are the roots of $P_8$, then,
for different $i$, $M(\alpha_1\alpha_i)$ can equal $M$, $M^2$ or $M^3$. The
roots of $P_6$ are the three $\alpha}\newcommand{\om}{\omega_1\alpha}\newcommand{\om}{\omega_i$ with $M(\alpha}\newcommand{\om}{\omega_1\alpha}\newcommand{\om}{\omega_i)=M$ and
their reciprocals. Clearly $M(\alpha}\newcommand{\om}{\omega_1^2)=M^2$, while for three other
$\alpha}\newcommand{\om}{\omega_i$ the product $\alpha}\newcommand{\om}{\omega_1\alpha}\newcommand{\om}{\omega_i$ is of degree $12$ and has
$M(\alpha}\newcommand{\om}{\omega_1\alpha}\newcommand{\om}{\omega_i)=M^3$. ($P_8$ has the special property that it has
roots $\alpha}\newcommand{\om}{\omega_1, \alpha}\newcommand{\om}{\omega_2,\alpha}\newcommand{\om}{\omega_3, \alpha}\newcommand{\om}{\omega_4$ with $\alpha}\newcommand{\om}{\omega_1\alpha}\newcommand{\om}{\omega_2=\alpha}\newcommand{\om}{\omega_3\alpha}\newcommand{\om}{\omega_4\ne 1$.)
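A direct numerical check of the coincidence (Python with numpy):

```python
import numpy as np

def mahler(coeffs):
    return abs(np.prod([r for r in np.roots(coeffs) if abs(r) > 1]))

P6 = [1, 2, 2, 1, 2, 2, 1]                 # z^6+2z^5+2z^4+z^3+2z^2+2z+1
P8 = [1, 1, -1, -1, 1, -1, -1, 1, 1]       # z^8+z^7-z^6-z^5+z^4-z^3-z^2+z+1
M = mahler(P8)
assert abs(mahler(P6) - M) < 1e-8          # the two measures coincide
assert abs(M - 1.746793) < 1e-5            # M = 1.746793...
```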
Dubickas \cite{Dubi02} gives a lower bound for the distance of an
algebraic number $\gamma$ of degree $n$ and leading coefficient $c$,
not a Mahler measure, from a Mahler measure $M(\alpha}\newcommand{\om}{\omega)$ of degree $D$:
\begin{equation} |M(\alpha}\newcommand{\om}{\omega)-\gamma|>c^{-D}(2\,\ho{\gamma}\,)^{-nD}. \end{equation}
\section{Counting polynomials with given Mahler measure}\label{S-count}
Let $\#(d,T)$ denote the number of integer polynomials of degree
$d$ and Mahler measure at most $T$. This function has been studied
by several authors. Boyd and Montgomery \cite{BM90} give the
asymptotic formula \begin{equation} c(\log
d)^{-1/2}d^{-1}\exp\left(\frac{1}{\pi}\sqrt{105\zeta(3)d}\right)(1+o(1)),
\end{equation} where $c=\frac{1}{4\pi^2}\sqrt{105\zeta(3)e^{-\gamma}}$, for the
number $\#(d,1)$ of cyclotomic polynomials of degree $d$, as
$d\to\infty$.
Dubickas and Konyagin \cite{DK98} obtain by simple arguments the
lower bound $\#(d,T)>\frac{1}{2}T^{d+1}(d+1)^{-(d+1)/2}$, and
upper bound $\#(d,T)<T^{d+1}\exp(d^2/2)$, the latter being valid
for $d$ sufficiently large. For $T\ge \theta}\newcommand{\disc}{\operatorname{disc}_0$ they derived the upper
bound $\#(d,T)<T^{d(1+16\log\log d/\log d)}$.
Chern and Vaaler \cite{CV01} obtained
the asymptotic formula $V_{d+1}T^{d+1}+O_d(T^d)$ for $\#(d,T)$ for
fixed $d$, as $T\to\infty$. Here $V_{d+1}$ is an explicit constant
(the volume of a certain star body). Recently Sinclair \cite{Sinc07} has produced corresponding estimates for counting functions of reciprocal polynomials.
\section{A dynamical Lehmer's problem}\label{S-dynam}
Given a rational function $f$ of degree $d\ge 2$ defined over a number field $K$, one
can define for $\alpha$ in some extension field of $K$ a canonical height
$$
h_f(\alpha}\newcommand{\om}{\omega) =\lim_{n\to\infty} d^{-n}h(f^n(\alpha}\newcommand{\om}{\omega)),
$$
where $f^n$ is the $n$th iterate of $f$, and $h$ is, as before, the Weil height of $\alpha}\newcommand{\om}{\omega$. Then $h_f(\alpha}\newcommand{\om}{\omega)=0$ if and only if the iterates $f^n(\alpha}\newcommand{\om}{\omega)$ form a finite set, and an analogue of Lehmer's problem would be to decide whether or not
$$
h_f(\alpha}\newcommand{\om}{\omega) \ge \frac{C}{\deg(\alpha}\newcommand{\om}{\omega)}
$$
for some constant $C$ depending only on $f$ and $K$. Taking $f(\alpha}\newcommand{\om}{\omega)=\alpha}\newcommand{\om}{\omega^d$ we retrieve the Weil height and the original Lehmer problem. There seem to be no good estimates, not even of polynomial decay, for any $f$ not associated to an endomorphism of an algebraic group. See \cite[Section 3.4]{Silv07} for more details.
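The defining limit is easy to approximate with exact rational arithmetic. The sketch below (Python; a toy of mine, not taken from \cite{Silv07}) iterates $f$ on a rational starting point and rescales the Weil height; for $f(x)=x^2$ one recovers $h$ itself.

```python
from fractions import Fraction
from math import log

def weil_height(x):
    """Weil height of a rational x = p/q in lowest terms: log max(|p|, q)."""
    return log(max(abs(x.numerator), x.denominator))

def canonical_height(f, deg, x, n=6):
    """Approximate h_f(x) by deg^(-n) * h(f^n(x))."""
    for _ in range(n):
        x = f(x)
    return weil_height(x) / deg ** n

a = Fraction(2, 3)
# f(x) = x^2: the canonical height is the Weil height itself
assert abs(canonical_height(lambda x: x * x, 2, a) - weil_height(a)) < 1e-12
# f(x) = x^2 - 1: successive approximations stabilise rapidly
h5 = canonical_height(lambda x: x * x - 1, 2, a, n=5)
h6 = canonical_height(lambda x: x * x - 1, 2, a, n=6)
assert abs(h5 - h6) < 1e-6
```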
\section{Variants of Mahler measure}\label{S-variants}
Everest and N\'{\i} Fhlath\'uin \cite{EF96} and Everest and Pinner
\cite{EP98} (see also \cite[Chapter 6]{EW99}) have defined the {\it
elliptic Mahler measure}, based on a given elliptic curve $E=\mathbb{C}}\newcommand{\Q}{\mathbb{Q}/L$
over $\mathbb{C}}\newcommand{\Q}{\mathbb{Q}$, where $L=\langle \om_1,\om_2\rangle\subset \mathbb{C}}\newcommand{\Q}{\mathbb{Q}$ is a
lattice, with $\wp_L$ its associated Weierstrass $\wp$-function. Then
for $F\in\mathbb{C}}\newcommand{\Q}{\mathbb{Q}[z]$ the (logarithmic) elliptic Mahler measure $m_E(F)$
is defined as \begin{equation}
\int_0^1\int_0^1\log|F(\wp_L(t_1\om_1+t_2\om_2))|dt_1dt_2. \end{equation} If
$E$ is in fact defined over $\Q$ and has a rational point $Q$ with
$x$-coordinate $M/N$ then often $m_E(Nz-M)=2\hat h(Q)$, showing that
$m_E$ is connected with the canonical height on $E$.
Kurokawa \cite{Kuro04} and Oyanagi \cite{Oyan04} have defined a $q$-analogue
of Mahler measure, for a real parameter $q$. As $q\to 1$ the classical Mahler measure is recovered.
Dubickas and Smyth \cite{DS01b} defined the {\it metric Mahler
measure} ${\mathcal M}(\alpha}\newcommand{\om}{\omega)$ as the infimum of $\prod_i
M(\beta_i)$, where $\prod_i \beta_i=\alpha}\newcommand{\om}{\omega$. They used this to define
a metric on the group of nonzero algebraic numbers modulo torsion
points, the metric giving the discrete topology on this group if
and only if Lehmer's `conjecture' is true (i.e.,~
$\inf_{\alpha}\newcommand{\om}{\omega:M(\alpha}\newcommand{\om}{\omega)>1}M(\alpha}\newcommand{\om}{\omega)>1$).
Very recently Pritsker \cite{Prit08, Prit07} has studied an areal analogue of Mahler measure, defined by replacing the normalised arclength measure on the unit circle by the normalised area measure on the unit disc.
\section{Applications}\label{S-applications}
\subsection{Polynomial factorization}
I first met Andrzej Schinzel at the ICM in Nice in 1970. There he mentioned to me an
application of Mahler measure to irreducibility of polynomials. (After this we had some correspondence about the work leading to \cite{Smyt71}, which was very helpful to me.) If a class of irreducible polynomials
had Mahler measure at least $B$, then any polynomial
of Mahler measure less than $B^2$ can have at most one factor from that class.
For instance, by Vicente Gon\c{c}alves'
inequality \cite{ViGo50}, \cite{Ostr60}, \cite[Th. 9.1.1]{RS02}
$M(P)^2+M(P)^{-2}\le || P||_2^2$, a trinomial $P(z)=z^d\pm z^m\pm 1$, for which $|| P||_2^2=3$, has Mahler measure at most $\phi$.
Here $|| P||_2$ is the $2$-norm of $P$ (the square root of the sum
of the squares of its coefficients).
Since $\phi<\theta_0^2$, by (\ref{E-ta}) such trinomials can have at most one irreducible noncyclotomic factor.
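Numerically (Python with numpy; the particular trinomials below are arbitrary choices):

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2

def mahler(coeffs):
    return abs(np.prod([r for r in np.roots(coeffs) if abs(r) > 1]))

# trinomials z^d + e1*z^m + e2 have ||P||_2^2 = 3, so M^2 + M^(-2) <= 3
for d, m, e1, e2 in [(3, 1, -1, -1), (5, 2, 1, -1), (7, 3, -1, 1), (10, 7, 1, 1)]:
    c = [0.0] * (d + 1)
    c[0], c[d - m], c[d] = 1.0, float(e1), float(e2)
    M = mahler(c)
    assert M ** 2 + M ** -2 <= 3 + 1e-9        # Goncalves' inequality
    assert M <= phi + 1e-9                     # hence M at most the golden ratio
```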
More generally Schinzel (see \cite{Dobr79}) pointed
out the following consequence of (\ref{E-D}): that for any
fixed $\varepsilon>0$ and polynomial $P$ of degree $d$ with integer coefficients, the
number of its noncyclotomic irreducible factors counted with multiplicities is $O(d^\varepsilon||
P||_2^{1-\varepsilon})$.
See also \cite{Schi76}, \cite{Schi83}, \cite{PV93-9}.
\subsection{Ergodic theory}
One-variable Mahler measures have applications in ergodic theory.
Consider an automorphism of the torus $\R^d/\mathbb{Z}^d$ defined by a
$d\times d$ integer matrix of determinant $\pm 1$, with
characteristic polynomial $P(z)$. Then the topological entropy of
this map is $\log M(P)$ (Lind \cite{Lind74} --- see
also \cite{Boyd81}, \cite[Theorem 2.6]{EW99}).
\subsection{Transcendence and diophantine approximation} Mahler measure, or rather the Weil height $h(\alpha}\newcommand{\om}{\omega)=\log
M(\alpha)/d$, plays an important technical r\^ole
in modern transcendence theory, in particular for bounding the
coefficients of a linear form in logarithms known to be dependent.
As remarked by Waldschmidt \cite[p65]{Wald00}, the fact that this
height has three equivalent representations, coming from
(\ref{E-0}), (\ref{E-1}) and (\ref{E-2}) makes it a very versatile
height function for these applications.
If $\alpha}\newcommand{\om}{\omega_1,\dots,\alpha}\newcommand{\om}{\omega_n$ are algebraic numbers such that their logarithms are $\Q$-linearly dependent,
then it is of importance in Baker's transcendence method to get small upper estimates for the size of integers
$m_1,\dots,m_n$ needed so that $m_1\log\alpha}\newcommand{\om}{\omega_1+\dots+m_n\log\alpha}\newcommand{\om}{\omega_n=0$. Such estimates can be given using Weil
heights of the $\alpha}\newcommand{\om}{\omega_i$. See \cite[Lemma 7.19]{Wald00} and the remark after it.
Chapter 3 (`Heights of Algebraic Numbers') of \cite{Wald00} contains
a wealth of interesting material on the Weil height and
other height functions, connections between them, and applications.
For instance, for a polynomial $f\in\mathbb{Z}[z]$ of degree at most $N$ of
which the algebraic number $\alpha$ is not a root, one has \begin{equation*}
|f(\alpha)|\ge\frac{1}{M(\alpha)^N\|f\|_1^{d-1}},
\end{equation*}
where $\|f\|_1$ is the length of
$f$, the sum of the absolute values of its coefficients, and
$d=\deg\alpha$ (\cite[p83]{Wald00}).
In particular, for a rational number $p/q\ne \alpha$ with $q>0$, and
$f(x)=qx-p$ we obtain \begin{equation}\label{E-pq}
\left|\alpha-\frac{p}{q}\right|\ge\frac{1}{M(\alpha)\,q\,(|p|+q)^{d-1}}.
\end{equation}
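For example, taking $\alpha=\sqrt 2$, with minimal polynomial $z^2-2$, we have $d=2$ and $M(\sqrt 2)=2$, so (\ref{E-pq}) reads
\begin{equation*}
\left|\sqrt 2-\frac{p}{q}\right|\ge\frac{1}{2q(|p|+q)}
\end{equation*}
for every rational $p/q\ne\sqrt 2$ with $q>0$; for the convergent $7/5$ the left-hand side is about $0.0142$, while the bound is $1/120$.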
\subsection{Distance of $\alpha$ from $1$.} From (\ref{E-pq}) we immediately
get for $\alpha\ne 1$ \begin{equation} |\alpha-1|\ge \frac{1}{2^{d-1}M(\alpha)}. \end{equation}
Better lower bounds for $|\alpha-1|$ in terms of its Mahler measure
have been given by Mignotte \cite{Mign79}, Mignotte
and Waldschmidt \cite{MW94}, Bugeaud, Mignotte and
Normandin \cite{BMN95}, Amoroso \cite{Amor96}, and Dubickas \cite{Dubi95}, \cite{Dubi98}.
For instance Mignotte and Waldschmidt prove that
\begin{equation}
|\alpha-1|>\exp\{-(1+\varepsilon)(d(\log d)(\log M(\alpha)))^{1/2}\} \end{equation} for
$\varepsilon>0$ and $\alpha$ of degree $d\ge d(\varepsilon)$. Dubickas \cite{Dubi95}
improves the constant $1$ in this result to $\pi/4$, and in the
other direction \cite{Dubi98} proves that for given $\varepsilon>0$ there
is an infinite sequence of degrees $d$ for which some $\alpha$ of degree
$d$ satisfies \begin{equation} |\alpha-1|<\exp\left\{-(c-\varepsilon)\left(\frac{d\log
M(\alpha)}{\log d}\right)^{1/2}\right\}. \end{equation} Here Dubickas uses the
following simple result: if $F\in\mathbb{C}[z]$ has degree $t$ and $F'(1)\ne
0$ then there is a root $a$ of $F$ such that $|a-1|\le
t|F(1)/F'(1)|$.
\subsection{Shortest unit lattice vector}
Let $K$ be a number field with unit lattice of rank $r$, and let $M=\min
M(\alpha)$, the minimum being taken over all units $\alpha\in K$ that are
not roots of unity. Kessler \cite{Kess91} showed that the
shortest vector $\lambda$ in the unit lattice then has length
$\|\lambda\|_2$ at least $\sqrt{\frac{2}{r+1}}\log M$.
\subsection{Knot theory}
Mahler measure of one-variable polynomials arises in knot theory
in connection with Alexander polynomials of knots and reduced
Alexander polynomials of links --- see Silver and Williams
\cite{SW02}. Indeed, in Reidemeister's classic book on the subject
\cite{Reid32}, the polynomial $L(-z)$ appears as the Alexander
polynomial of the $(-2,3,7)$-pretzel knot. Hironaka \cite{Hiro01}
has shown that among a wide class of Alexander polynomials of
pretzel links, this one has the smallest Mahler measure.
Champanerkar and Kofman \cite{CK05} study a sequence of Mahler
measures of Jones polynomials of hyperbolic links $L_m$ obtained
using $(-1/m)$-Dehn surgery, starting with a fixed link. They show
that it converges to the Mahler measure of a $2$-variable
polynomial. (The many more applications of Mahler measures of
several-variable polynomials to geometry and topology are outside
the scope of this survey.)
\section{Final remarks}
\subsection{Other sources on Mahler measure}
Books covering various aspects of Mahler measure include the following:
Bertin and Pathiaux-Delefosse \cite{BP89}, Bertin {\it et al} \cite{BDGPS92}, Bombieri and
Gubler \cite{BG06}, Borwein \cite{Borw02},
Schinzel \cite{Schi82}, Schinzel \cite{Schi00}, Waldschmidt \cite{Wald00}.
Survey articles and lecture notes on Mahler measure include:
Boyd \cite{Boyd78b}, Boyd \cite{Boyd81}, Everest \cite{Ever98},
Hunter \cite{Hunt82}, Schinzel \cite{Schi99}, Skoruppa \cite{Skor99}, Stewart \cite{Stew78b},
Vaaler \cite{Vaal03}, Waldschmidt \cite{Wald81}.
\subsection{Memories of Mahler} As one of a small group of undergraduates at the ANU, Canberra, in
the mid-1960s, I was encouraged to attend graduate courses at the
university's Institute of Advanced Studies, where Mahler had a
research chair. I well remember his lectures on transcendence, with his blackboard copperplate handwriting and
all the technical details carefully spelt out.
\subsection{Acknowledgements} I thank Matt Baker, David Boyd, Art\= uras Dubickas, James McKee, Alf van der Poorten, Georges Rhin, Andrzej Schinzel,
Joe Silverman, Michel Waldschmidt, Susan Williams, Umberto Zannier and the referee
for some helpful remarks concerning an earlier draft of this survey,
which have been incorporated into the final version. This article
arose from a talk I gave in January 2006 at the Mahler Measure in
Mobile Meeting, held in Mobile, Alabama. I would like to thank the
organisers Abhijit Champanerkar,
Eriko Hironaka, Mike Mossinghoff, Dan Silver and Susan Williams for a stimulating meeting.
\section{Introduction and motivation}
The classic Mehler–Heine formula, introduced by Heine in 1861 and by Mehler in 1868 (who was motivated by the problem of determining the distribution of electricity on spherical domains \cite{Mehler}), states that
the Bessel function $J_0$ is a limit of Legendre polynomials $P_N$ of order $N$
in the following sense
\begin{equation*}
{\displaystyle \lim _{N\to \infty }P_{N}{\Bigl (}\cos {\Bigl (}{\frac{z}{N}}{\Bigr )}{\Bigr )}=J_{0}(z)},
\end{equation*}
where the limit is uniform over $z$ in an arbitrary bounded domain in the complex plane.
Observe that the functions on the left side are the spherical functions of the Gelfand pair $(\operatorname{SO}(3),\operatorname{SO}(2))$ and the function on the right side is a spherical function of the Gelfand pair $(\operatorname{SO}(2)\ltimes \mathbb{R}^2,\operatorname{SO}(2))$ (see, e.g., \cite{van Dijk}).
There is a generalization of this formula involving other classical special functions as follows
\begin{equation*}
{\displaystyle \lim _{N\to \infty }\frac{P_{N}^{\alpha ,\beta }\left(\cos ({\frac {z}{N}})\right)}{N^{\alpha }}=\frac{J_{\alpha }(z)}{\left({\frac {z}{2}}\right)^{\alpha }}~},
\end{equation*}
where $P_N^{\alpha,\beta}$ are the Jacobi polynomials and $J_\alpha$ is the Bessel function of the first kind of order $\alpha$ (cf. \cite{Szego}). If $\alpha=\beta= \frac{n-2}{2}$, on the left side we have the Gegenbauer polynomials, which are the orthogonal polynomials corresponding to the spherical functions associated with the Gelfand pair $(\operatorname{SO}(n+1),\operatorname{SO}(n))$, and on the right side the function $\frac{J_{\frac{n-2}{2} }(z)}{\left({\frac {z}{2}}\right)^{\frac{n-2}{2} }}$ is a spherical function associated with the Gelfand pair $(\operatorname{SO}(n)\ltimes \mathbb{R}^n,\operatorname{SO}(n))$ (without normalization). We will denote by $\operatorname{M}(n):=\operatorname{SO}(n)\ltimes \mathbb{R}^n$ the $n$-dimensional euclidean motion group.
\\ \\
In this article we obtain the spherical functions (scalar and matrix-valued) of the strong Gelfand pair $(\operatorname{M}(n),\operatorname{SO}(n))$ as an appropriate limit of spherical functions (scalar and matrix-valued) of the strong Gelfand pair $(\operatorname{SO}(n+1),\operatorname{SO}(n))$ and then as an appropriate limit of spherical functions of the strong Gelfand pair $(\operatorname{SO}_0(n,1),\operatorname{SO}(n))$, where $\operatorname{SO}_0(n,1)$ is the connected component of the
identity of the Lorentz group. We will need the notion of group contraction introduced by Inönü and Wigner in \cite{Wigner}. For our purpose the results given by Dooley and Rice in the papers \cite{Dooley} and \cite{Dooley2} will be extremely useful. Their results show how to approximate matrix coefficients of irreducible representations of $\operatorname{M}(n)$ by a sequence of matrix coefficients of irreducible representations of $\operatorname{SO}(n+1)$ (see \cite{Clerc}), and we will generalize this fact.
\\ \\
The case involving the compact group $\operatorname{SO}(n+1)$ is more difficult than the case of the non-compact group $\operatorname{SO}_0(n,1)$. Indeed, only the last section is devoted to obtaining an asymptotic formula involving the spherical functions of $(\operatorname{SO}_0(n,1),\operatorname{SO}(n))$. Moreover, we can treat that case from a much more general perspective: we will work with Cartan motion groups that arise from non-compact semisimple groups.
\\ \\
For the first part of this work we will follow the same structure as the paper \cite{Dooley} of Dooley and Rice, and
our main result is Theorem \ref{coro}, which states the following:
\\ \\
\textit{Let $(\tau,V_\tau)$ be an irreducible unitary representation of ${\operatorname{SO}(n)}$ and let $\Phi^{\tau,\operatorname{M}(n)}$ be a spherical function of type $\tau$ of the strong Gelfand pair $(\operatorname{M}(n),\operatorname{SO}(n))$.
There exists a sequence $\{\Phi_{\ell}^{\tau, \operatorname{SO}(n+1)}\}_{\ell\in\mathbb{Z}_{\geq 0}}$ of spherical functions of type $\tau$ of the strong Gelfand pair $(\operatorname{SO}(n+1),\operatorname{SO}(n))$ and a contraction
$\{ D_{\ell} \}_{\ell\in\mathbb{Z}_{\geq 0}} $ of $\operatorname{SO}(n+1)$ to $\operatorname{M}(n)$ such that
\begin{equation*}
\lim_{\ell\rightarrow \infty} \Phi_\ell^{\tau, \operatorname{SO}(n+1)}\circ D_{\ell} =\Phi^{\tau, \operatorname{M}(n)},
\end{equation*}
where the convergence is point-wise on $V_\tau$ and uniform on compact sets of $\operatorname{M}(n)$. }
\\ \\
In the last section we obtain an analogous result changing $\operatorname{SO}(n+1)$ by $\operatorname{SO}_0(n,1)$.
\\ \\
\large{\textsc{\textbf{Acknowledgements:}}} To Fulvio Ricci who had the first idea.
\section{Preliminaries}
\subsection{Spherical functions}
Let $(G,K,\tau)$ be a triple where $G$ is a locally compact Hausdorff unimodular topological
group (or just a Lie group), $K$ is a compact subgroup of $G$ and $(\tau,V_\tau)$ is an irreducible unitary representation of $K$ of dimension $d_\tau$. We denote by $\chi_\tau$ the character associated to $\tau$, by $\operatorname{End}(V_\tau)$ the space of endomorphisms of the vector space $V_\tau$ and by
$\widehat{G}$ (respectively, $\widehat{K}$) the set of equivalence classes of
irreducible unitary representations of $G$ (respectively, of $K$). We assume that for each $\pi\in\widehat{G}$, the multiplicity $m(\tau,\pi)$ of $\tau$ in $\pi_{|_{K}}$ is at most one. In this case the triple $(G,K,\tau)$ is said to be \textit{commutative}, because the convolution algebra of $\operatorname{End}(V_\tau)$-valued integrable functions on $G$ that are bi-$\tau$-equivariant (i.e., $f(k_1gk_2)=\tau(k_2)^{-1}f(g)\tau(k_1)^{-1}$ for all $g\in G$ and all $k_1,k_2\in K$) turns out to be commutative. When $\tau$ is the trivial representation we recover the notion of a \textit{Gelfand pair}. It is said that $(G,K)$ is a \textit{strong Gelfand pair} if $(G,K,\tau)$ is a commutative triple for every $\tau\in\widehat{K}$.
\\ \\
Let $\widehat{G}(\tau)$ be the set of those representations $\pi\in\widehat{G}$ which contain $\tau$ upon restriction to $K$. For $\pi\in\widehat{G}(\tau)$, let $\mathcal{H}_\pi$ be the Hilbert space where $\pi$ acts and let $\mathcal{H}_\pi(\tau)$ be the subspace of vectors which transform under $K$ according to $\tau$. Since $m(\tau,\pi)=1$, $\mathcal{H}_\pi(\tau)$ can be identified with $V_\tau$. Let $P_\pi^\tau:\mathcal{H}_\pi\longrightarrow\mathcal{H}_\pi(\tau)$ be the orthogonal projection (see, e.g., \cite[Proposition 5.3.7]{Wallach} and \cite[Section 3]{Camporesi}) given by
\begin{equation}\label{projection}
P_\pi^\tau=d_\tau\pi_{|_{K}}(\overline{\chi_\tau})=d_\tau\int_K {\chi_\tau(k^{-1})}\pi(k)dk.
\end{equation}
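\noindent Note that $P_\pi^\tau$ is indeed idempotent: since $\chi_\tau(k^{-1})=\overline{\chi_\tau(k)}$ and, by the orthogonality relations for characters, $\overline{\chi_\tau}*\overline{\chi_\tau}=\frac{1}{d_\tau}\overline{\chi_\tau}$ (convolution over $K$), we have
\begin{equation*}
(P_\pi^\tau)^2=d_\tau^2 \, \pi_{|_{K}}(\overline{\chi_\tau}*\overline{\chi_\tau})=d_\tau \, \pi_{|_{K}}(\overline{\chi_\tau})=P_\pi^\tau.
\end{equation*}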
\begin{definition}\label{def spherical function of type tau}
Let $\pi\in\widehat{G}$. The function
\begin{equation*}
\Phi_\pi^\tau(g):=P_\pi^\tau \circ \pi(g)\circ P_\pi^\tau \qquad (\forall g\in G)
\end{equation*}
is called a \textit{spherical function of type $\tau$}.
\end{definition}
\begin{remark} \label{clases de conj de func esf} \
\begin{itemize}
\item[$(i)$] Observe that the spherical functions depend only on the equivalence classes of irreducible unitary representations of $G$. That is, if $\pi_1$ and $ \pi_2$ are two equivalent irreducible unitary representations of $G$ with intertwining operator $A:\mathcal{H}_{\pi_1}\longrightarrow\mathcal{H}_{\pi_2}$ (i.e., $A\circ {\pi_1}(g)\circ A^{-1}=\pi_2(g)$ for all $g\in G$), then
$A\circ P_{\pi_{1}}^\tau\circ A^{-1}= P_{\pi_2}^\tau$ and so
\begin{equation*}
A \circ \Phi_{\pi_1}^\tau (g) \circ A^{-1} \ = \ \Phi_{\pi_2}^\tau (g) \qquad \forall g\in G.
\end{equation*}
As a result, $\Phi_{\pi_1}^\tau (g)$ and $\Phi_{\pi_2}^\tau (g)$ are conjugated by the same isomorphism $A$ for all $g\in G$.
\item[$(ii)$] Apart from that, as we said before, given $\pi\in \widehat{G}$ such that $\tau\subset\pi$ as $K$-module and $m(\tau,\pi)=1$, the vector space $\mathcal{H}_\pi(\tau)$ is isomorphic to $V_\tau$. If $T:\mathcal{H}_\pi(\tau)\longrightarrow V_\tau$ is the isomorphism between them, we will not make distinctions between $\Phi_\pi^\tau(g)\in \operatorname{End}(\mathcal{H}_\pi(\tau))$ and $T\circ\Phi_\pi^\tau(g)\circ T^{-1}\in \operatorname{End}(V_\tau)$. \end{itemize}
\end{remark}
\noindent In this work we consider the strong Gelfand pairs $(\operatorname{M}(n),\operatorname{SO}(n))$, $(\operatorname{SO}(n+1),\operatorname{SO}(n))$ and $(\operatorname{SO}_0(n,1),\operatorname{SO}(n))$. For references see, e.g., \cite{Fulvio, Nosotros} for the first pair, \cite{Ignacio 1, Ignacio 2} for the second pair and \cite{Camporesi} for the third pair.
\\ \\
The natural action of $\operatorname{SO}(N)$ on $\mathbb{R}^N$ will be denoted by
\begin{gather*}
\operatorname{SO}(N)\times \mathbb{R}^N\longrightarrow \mathbb{R}^N\\
(k,x)\mapsto k\cdot x.
\end{gather*}
\noindent From now on
we will denote by $K$ the group isomorphic to $\operatorname{SO}(n)$ which is, depending on the context, a subgroup of $\operatorname{SO}(n+1)$ or a subgroup of $\operatorname{M}(n)$. In the first case it must be identified with $\{g\in \operatorname{SO}(n+1) | \ g\cdot e_1=e_1\}$ (where $e_1$ is the canonical vector $(1,0,...,0)\in\mathbb{R}^{n+1}$) and in the second with $\operatorname{SO}(n)\times\{0\}$.
\subsection{The representation theory of $\operatorname{SO}(N)$}\label{rep SO(n+1)}
Let $N$ be an arbitrary natural number. The Lie algebra $\mathfrak{so}(N)$ of $\operatorname{SO}(N)$ is the space of antisymmetric real matrices of order $N$. Its complexification $\mathfrak{so}(N,\mathbb{C})$ is the space of antisymmetric complex matrices of order $N$. Let $M$ be the integral part of $N/2$.
As a maximal torus
$\mathbb{T}$ of $\operatorname{SO}(2M)$ we take
\begin{small}
\begin{equation*}
\left\lbrace \begin{pmatrix}
\cos(\theta_1)& \sin(\theta_1) \\
-\sin(\theta_1) & \cos(\theta_1) \\
& & \ddots &\\
& & & \cos(\theta_M)& \sin(\theta_M) \\
& & & -\sin(\theta_M) & \cos(\theta_M)
\end{pmatrix} | \ \theta_1, ..., \theta_M \in \mathbb{R} \right\rbrace
\end{equation*}
\end{small}
\noindent and for $\operatorname{SO}(2M+1)$ the same matrices but with an extra entry equal to $1$ in the bottom right corner.
In what follows we describe the basic notions of the root system of $\mathfrak{so}(N,\mathbb{C})$, following \cite{Knapp, Fulton y Harris}, in order to fix notation.
\noindent Let $\mathfrak{t}$ denote the Lie algebra of $\mathbb{T}$. A Cartan subalgebra $\mathfrak{h}$ of the complex Lie algebra $\mathfrak{so}(N,\mathbb{C})$ is given by the complexification of $\mathfrak{t}$.
If $N$ is even we consider $\{H_1,...,H_M\}$ the following basis of $\mathfrak{h}$ as a $\mathbb{C}$-vector space
\begin{small}
\begin{equation*}
\left\lbrace H_1:=
\begin{pmatrix}
0& i & \\
-i & 0 \\
& & \ddots &\\
& & & 0&0\\
& & & 0&0
\end{pmatrix},
\text{ ... } , H_M:=
\begin{pmatrix}
0& 0 \\
0 & 0 \\
& & \ddots &\\
& & & 0 & i \\
& & & -i & 0
\end{pmatrix}\right\rbrace
\end{equation*}
\end{small}
\noindent (where $i=\sqrt{-1}$) and if $N$ is odd we consider the same matrices but with an extra zero in the bottom right corner. This basis is \textit{orthogonal} with respect to the Killing form $B$, that is,
\begin{gather*}
B(H_i,H_j)=0 \quad \forall i\neq j \quad \text{and}\\
B(H_i,H_i)=\begin{cases}
4(M-1) & \text{(if } N \text{ is even)}\\
4(M-1)+2 & \text{(if } N \text{ is odd)}.
\end{cases}
\end{gather*}
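These values can be verified from the formula $B(X,Y)=(N-2)\operatorname{tr}(XY)$ for the Killing form of $\mathfrak{so}(N,\mathbb{C})$: since $H_j^2$ is the diagonal matrix with two entries equal to $1$ (in the $j$-th $2\times 2$ block) and zeros elsewhere,
\begin{equation*}
B(H_j,H_j)=(N-2)\operatorname{tr}(H_j^2)=2(N-2),
\end{equation*}
which equals $4(M-1)$ if $N=2M$ and $4(M-1)+2$ if $N=2M+1$.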
Let $\mathfrak{h}^*$ be the dual space of $\mathfrak{h}$ and let $\{L_1,...,L_M\}$ be the dual basis of $\{H_1,...,H_M\}$ (that is, $L_i(H_j)=\delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta).
To each irreducible representation of $\mathfrak{so}(N,\mathbb{C})$ corresponds its highest weight
$\lambda=\sum_{i=1}^M \lambda_iL_i$, where $\lambda_i$ are all integers or all half integers satisfying
\begin{itemize}
\item[$(i)$] $\lambda_1\geq\lambda_2\geq \ ... \ \geq\lambda_{M-1}\geq|\lambda_M|$ if $N$ is even or
\item[$(ii)$] $\lambda_1\geq\lambda_2\geq ... \geq\lambda_M\geq 0$ if $N$ is odd.
\end{itemize}
Thus, we can associate each irreducible representation of $\mathfrak{so}(N,\mathbb{C})$ with an $M$-tuple $(\lambda_1,...,\lambda_M)$ fulfilling the mentioned conditions. We call such tuple a \textit{partition}.
\\ \\
We recall a well known formula regarding the decomposition of a representation of $\mathfrak{so}(N,\mathbb{C})$ under its restriction to $\mathfrak{so}(N-1,\mathbb{C})$ (cf. \cite[(25.34) and (25.35)]{Fulton y Harris})
\begin{itemize}
\item[]
\underline{Case odd to even:}
Let $\rho_\lambda$ be the irreducible representation of $\mathfrak{so}(2M+1,\mathbb{C})$ that is in correspondence with the partition $\lambda=(\lambda_1,...,\lambda_M)$ where $\lambda_1\geq\lambda_2\geq ... \geq\lambda_M\geq 0$. Then,
\begin{equation}\label{decomp so restr caso imp}
(\rho_\lambda)_{|_{\mathfrak{so}(2M,\mathbb{C})}} = \bigoplus_{\overline{\lambda}} \rho_{\overline{\lambda}}
\end{equation}
where the sum runs over all the partitions $\overline{\lambda}=(\overline{\lambda_1},...,\overline{\lambda_{M}})$ that satisfy
$$\lambda_1\geq\overline{\lambda_1}\geq\lambda_2\geq\overline{\lambda_2}\geq ... \geq \overline{\lambda_{M-1}}\geq\lambda_M\geq|\overline{\lambda_M}|,$$
with the $\lambda_i$ and $\overline{\lambda_i}$ simultaneously all integers or all half integers.
\item[] \underline{Case even to odd:}
Let $\rho_\lambda$ be the irreducible representation of $\mathfrak{so}(2M,\mathbb{C})$ that is in correspondence with the partition $\lambda=(\lambda_1,...,\lambda_M)$, where $\lambda_1\geq\lambda_2\geq \ ... \ \geq\lambda_{M-1}\geq|\lambda_M|$. Then,
\begin{equation}\label{decomp so restr caso par}
(\rho_\lambda)_{|_{\mathfrak{so}(2M-1,\mathbb{C})}} = \bigoplus_{\overline{\lambda}} \rho_{\overline{\lambda}}
\end{equation}
where the sum runs over all the partitions $\overline{\lambda}=(\overline{\lambda_1},...,\overline{\lambda_{M-1}})$ that satisfy
$$\lambda_1\geq\overline{\lambda_1}\geq\lambda_2\geq\overline{\lambda_2}\geq \ ... \ \geq \overline{\lambda_{M-1}}\geq|\lambda_M|,$$
with the $\lambda_i$ and $\overline{\lambda_i}$ simultaneously all integers or all half integers.
\end{itemize}
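\noindent As an illustration, take the standard representation of $\mathfrak{so}(5,\mathbb{C})$, with partition $\lambda=(1,0)$ (so $M=2$). By (\ref{decomp so restr caso imp}), its restriction to $\mathfrak{so}(4,\mathbb{C})$ is the sum of the representations with partitions $\overline{\lambda}=(1,0)$ and $\overline{\lambda}=(0,0)$, that is, the standard representation of $\mathfrak{so}(4,\mathbb{C})$ plus the trivial one; the dimensions check out: $5=4+1$.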
\noindent Finally, we recall that each irreducible representation of the group $\operatorname{SO}(N)$ corresponds to a partition $(\lambda_1,...,\lambda_M)$ (with the properties $(i)$ or $(ii)$ depending on the parity of its dimension) where $\lambda_i$ are all integers.
\subsubsection{The Borel-Weil-Bott Theorem}
The Borel-Weil-Bott theorem provides a concrete model for the irreducible representations of the rotation group, since it is a compact Lie group. Let $\mathbb{T}$ be a maximal torus of $\operatorname{SO}(N)$. Given a character $\chi$ of $\mathbb{T}$, the group $\operatorname{SO}(N)$ acts on the space of holomorphic sections of the line bundle $\operatorname{SO}(N)\times_{\chi}\mathbb{C}$ by the left regular representation. This representation is either zero or irreducible; it is irreducible precisely when $\chi$ is dominant integral.
The theorem asserts that each irreducible representation of $\operatorname{SO}(N)$ arises in this way for a unique character $\chi$ of the maximal torus $\mathbb{T}$. (For a reference see \cite[Section 6.3]{Wallach} and \cite[Section 1]{Dooley}.)
\begin{remark}\label{remark holomorphic sections}
The holomorphic sections of the line bundle $\operatorname{SO}(N)\times_{\chi}\mathbb{C}$ may be identified with the $C^\infty$ functions on $\operatorname{SO}(N)$ satisfying the following two conditions:
\begin{enumerate}
\item[$(i)$] $f(gt)=\overline{\chi(t)}f(g) \quad \forall t\in \mathbb{T} \text{ and } g\in \operatorname{SO}(N)$ and
\item[$(ii)$] for each $X\in\mathfrak{n}^+$ (the sum of the positive root spaces), $Xf(g):=\frac{d}{ds}_{|_{s=0}}f(g \ \exp(sX))=0 \quad \forall g\in \operatorname{SO}(N)$.
\end{enumerate}
With this identification, the representation of $\operatorname{SO}(N)$ is given by the left regular action, i.e.,
$L_g(f)(x):=f(g^{-1}x)$ $ \ \forall g \in \operatorname{SO}(N)$.
\end{remark}
\subsubsection{A special character and a special function}\label{A special character and a special function}
For the case $N=n+1$ we will introduce a character $\gamma$ and a function $\psi$ that will play an important role later on. Let $m$ be the integral part of $(n+1)/2$ and let $\mathbb{T}^m$ denote a maximal torus of $\operatorname{SO}(n+1)$ as at the beginning of this section.
\\ \\
Let $\gamma:\mathbb{T}^m\longrightarrow \mathbb{C}$ be the projection onto the first factor, i.e., $\gamma(e^{i\theta_1},...,e^{i\theta_m})=e^{i\theta_1}$.
The irreducible representation of $\operatorname{SO}(n+1)$ associated with $\gamma$ (through the Borel-Weil-Bott theorem) is equivalent to the standard representation \cite[Lemma 1]{Dooley}. Moreover, for each $\ell\in \mathbb{N}$, the irreducible representation of $\operatorname{SO}(n+1)$ associated with the $\ell$-th power of $\gamma$ (i.e. $\gamma^\ell(e^{i\theta_1},...,e^{i\theta_m})=e^{i\ell\theta_1}$) has highest weight $(\ell,0,...,0)$; that is, it is the one that can be realized on the space of harmonic homogeneous polynomials of degree $\ell$ on $\mathbb{R}^{n+1}$ with complex coefficients.
\\ \\
The trivial representation of $K$ appears in the standard representation of $\operatorname{SO}(n+1)$ as a $K$-submodule. As a consequence, we can take a $K$-fixed vector for the standard representation, i.e., a function $\psi:\operatorname{SO}(n+1)\longrightarrow \mathbb{C}$ as in Remark \ref{remark holomorphic sections} satisfying $\psi(k^{-1}g)=\psi(g)$ for all $k\in K$ and $g\in \operatorname{SO}(n+1)$. Moreover, we can choose $\psi$ such that $\psi(k)=1$ for all $k\in K$.
\subsection{The representation theory of $\operatorname{M}(n)$}\label{rep M(n)}
We will follow Mackey's orbital analysis to describe the irreducible representations of $\operatorname{M}(n)$ (for a reference see \cite[Section 14]{Mackey} and \cite[Section 2]{Dooley}). The orbits of the natural action of $\operatorname{SO}(n)$ on $\mathbb{R}^n$ are the spheres of radius $R>0$ and the origin $\{0\}$ (which is a fixed point for the whole group $\operatorname{SO}(n)$). The irreducible representations corresponding to the trivial orbit $\{0\}$ are trivial on the translation part; they are of the form
\begin{equation}\label{rep de Mn de med zero}
(k,x)\mapsto \varrho(k) \qquad \forall (k,x)
\in \operatorname{M}(n),
\end{equation}
for $\varrho\in\widehat{\operatorname{SO}(n)}$.
Since these representations have zero Plancherel measure we are not interested in them. (They will not provide spherical functions appearing in the inversion formula for the spherical Fourier transform.)
\\ \\
The irreducible representations that arise from the non-trivial orbits will be more interesting for us. One must fix a point $R e_1$ on the sphere of radius $R>0$ (where $e_1:=(1,0,...,0)\in\mathbb{R}^n$), take its stabilizer \begin{equation*}
K_{R e_1}:=\{k\in \operatorname{SO}(n)| \ k\cdot Re_1=Re_1\},
\end{equation*}
the character
\begin{equation*}
\chi_R (x):=e^{iR\langle x,e_1\rangle} \qquad (\forall x\in\mathbb{R}^n),
\end{equation*}
where $\langle\cdot,\cdot\rangle$ denotes the canonical inner product on $\mathbb{R}^n$, and a representation $\sigma\in\widehat{\operatorname{SO}(n-1)}$ (note that $K_{Re_1}$ is isomorphic to $\operatorname{SO}(n-1)$). Finally, inducing the representation $\sigma\otimes\chi_R$ from $K_{R e_1}\ltimes \mathbb{R}^n$ to $\operatorname{M}(n)$ one obtains an irreducible representation $\omega_{\sigma,R}$ of $\operatorname{M}(n)$.
\\ \\
It can be seen (using the Borel-Weil-Bott model for $\sigma\in \widehat{\operatorname{SO}(n-1)}$) that this representation can be realized on a subspace of scalar-valued square integrable functions on $\operatorname{SO}(n)$. This space consists of the functions $f\in L^2(\operatorname{SO}(n))$ that satisfy the following two conditions:
\begin{itemize}
\item[(i)] if $\mathbb{T}^{m-1}$ denotes the maximal torus of $K_{R e_1}\simeq \operatorname{SO}(n-1)$, then
\begin{equation*}
f(kt)=\chi_{\sigma}(t^{-1})f(k) \qquad \forall k\in \operatorname{SO}(n) \text{ and } \forall t\in \mathbb{T}^{m-1},
\end{equation*}
where $\chi_\sigma$ is the character associated with $\sigma$;
\item[(ii)] for each $k\in \operatorname{SO}(n)$, the function
\begin{gather*}
\operatorname{SO}(n-1)\longrightarrow \mathbb{C} \\
\tilde{k}\mapsto f(k\tilde{k})
\end{gather*}
satisfies condition $(ii)$ from Remark \ref{remark holomorphic sections} (with $N=n-1$).
\end{itemize}
We denote this space by $\mathcal{H}_{\sigma,R}$. The irreducible representation $\omega_{\sigma, R}$ acts on $\mathcal{H}_{\sigma,R}$ in the following way: let $f\in \mathcal{H}_{\sigma,R}$ and let $(k,x)\in \operatorname{SO}(n)\times \mathbb{R}^n$; then
\begin{equation}\label{principal serie}
(\omega_{\sigma, R}(k,x)(f))(h)=e^{iR\langle h^{-1}\cdot x,e_1\rangle}f(k^{-1}h) \qquad (\forall h\in \operatorname{SO}(n)).
\end{equation}
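One checks directly that (\ref{principal serie}) is multiplicative: since $(k_1,x_1)(k_2,x_2)=(k_1k_2, \, x_1+k_1\cdot x_2)$ in $\operatorname{M}(n)$ and $(k_1^{-1}h)^{-1}\cdot x_2=h^{-1}\cdot(k_1\cdot x_2)$,
\begin{align*}
\big(\omega_{\sigma, R}(k_1,x_1)\,\omega_{\sigma, R}(k_2,x_2)(f)\big)(h)
&=e^{iR\langle h^{-1}\cdot x_1,e_1\rangle} \, e^{iR\langle h^{-1}\cdot(k_1\cdot x_2),e_1\rangle} \, f(k_2^{-1}k_1^{-1}h)\\
&=\big(\omega_{\sigma, R}(k_1k_2, \, x_1+k_1\cdot x_2)(f)\big)(h).
\end{align*}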
It is known that all the irreducible representations of $\operatorname{M}(n)$ are equivalent to the ones given by (\ref{rep de Mn de med zero}) or to the ones given by (\ref{principal serie}).
\subsection{Contraction}
The notion of group contraction was introduced by Inönü and Wigner in \cite{Wigner}. We recall its definition (cf. \cite[p. 211]{Ricci contraction}).
\begin{definition}\label{def contraction}
If $G$ and $H$ are two connected Lie groups of the same dimension, we say that the family
$\{D_{\alpha}\}$ of infinitely differentiable maps
$D_\alpha:H\longrightarrow G$, mapping the identity $e_H$ to the identity $e_G$ of $G$, defines a \textit{contraction of $G$ to $H$} if, given any relatively compact open neighborhood $V$ of $e_H$
\begin{enumerate}
\item[$(i)$] there is $\alpha_{V}\in \mathbb{N}$ such that for $\alpha>\alpha_V$, ${(D_{\alpha})}_{|_{V}}$ is a diffeomorphism,
\item[$(ii)$] if $W$ is such that $W^2\subset V$ and $\alpha>\alpha_V$, then $D_{\alpha}(W)^2\subset D_{\alpha}(V)$ and
\item[$(iii)$] for $h_1, h_2\in W$,
\begin{equation*}
\lim\limits_{\alpha \to \infty} D_\alpha^{-1}\left( D_\alpha(h_1) D_\alpha(h_2)^{-1} \right)=h_1 h_2^{-1}
\end{equation*}
uniformly on $V\times V$.
\end{enumerate}
\end{definition}
\noindent In particular, for $G=\operatorname{SO}(n+1)$ and $H=\operatorname{M}(n)$ we consider the following family of contraction maps $\{D_\alpha\}_{\alpha\in\mathbb{R}_{>0}}$,
\begin{equation} \label{contaction}
\begin{split}
D_\alpha: \operatorname{M}(n)\longrightarrow \operatorname{SO}(n+1) \\
D_{\alpha}(k,x):= \exp\left(\frac{x}{\alpha}\right) \ k,
\end{split}
\end{equation}
where $\exp$ denotes the exponential map $\mathfrak{so}(n+1)\longrightarrow \operatorname{SO}(n+1)$ and we identify (as vector spaces) $\mathbb{R}^n$ with the complement of $\mathfrak{so}(n)$
in $\mathfrak{so}(n+1)$, which is invariant under the adjoint action of $K$. (Note that we are using the so-called Cartan decomposition.)
Writing
\begin{align*}
D_\alpha(k_1,x_1)D_\alpha(k_2,x_2)&= \exp\left(\frac{1}{\alpha}x_1\right) \ k_1 \ \exp\left(\frac{1}{\alpha}x_2\right) \ k_2 \\
&= \exp\left(\frac{1}{\alpha}x_1\right) \left[k_1 \ \exp\left(\frac{1}{\alpha}x_2\right) k_1^{-1}\right]k_1 \ k_2\\
&= \exp\left(\frac{1}{\alpha}x_1\right) \ \exp\left(\operatorname{Ad}(k_1)\frac{1}{\alpha}x_2\right) \ k_2
\end{align*}
(where $\operatorname{Ad}$ denotes the adjoint representation of $\operatorname{SO}(n)$) and using the Baker-Campbell-Hausdorff formula we can derive, in the limit $\alpha\rightarrow \infty$, property $(iii)$ for all $(k_1,x_1), (k_2,x_2)\in \operatorname{M}(n)$.
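In more detail, the Baker-Campbell-Hausdorff formula gives
\begin{equation*}
\exp\left(\frac{1}{\alpha}x_1\right) \exp\left(\operatorname{Ad}(k_1)\frac{1}{\alpha}x_2\right)=\exp\left(\frac{1}{\alpha}\left(x_1+\operatorname{Ad}(k_1)x_2\right)+O(\alpha^{-2})\right),
\end{equation*}
so that, schematically,
\begin{equation*}
D_\alpha^{-1}\big(D_\alpha(k_1,x_1) \, D_\alpha(k_2,x_2)\big)=\big(k_1k_2, \, x_1+k_1\cdot x_2+O(\alpha^{-1})\big)\longrightarrow (k_1,x_1)(k_2,x_2)
\end{equation*}
as $\alpha\rightarrow\infty$.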
\subsection{The contracting sequence of an irreducible representation of $\operatorname{M}(n)$}\label{sec rdos de Dooley}
In this section we summarize the results proved by Dooley and Rice in \cite[Sections 3 and 4]{Dooley} that will be frequently used in the sequel.
\\ \\
Let $R\in\mathbb{R}_{>0}$, let $\sigma\in\widehat{\operatorname{SO}(n-1)}$ corresponding to the partition $(\sigma_1,...,\sigma_{m-1})$ and let $\omega_{\sigma,R}\in\widehat{\operatorname{M}(n)}$ be the irreducible unitary representation given by (\ref{principal serie}). Finally, let $\gamma$ be the character given in Section \ref{A special character and a special function}. The following definition will be very important.
\begin{definition} \cite[Definition 4]{Dooley}
The sequence
$\{\gamma^\ell\chi_\sigma\}_{\ell=1}^{\infty}$ of characters of $\mathbb{T}^m$ defines, for $\ell\geq\sigma_1$, a sequence $\{\rho_{\sigma,\ell}\}_\ell$ of irreducible unitary representations of $\operatorname{SO}(n+1)$ (as in Section \ref{rep SO(n+1)}) and it is called the \textit{contracting sequence} associated with $\omega_{\sigma,R}$. For each non-negative integer $\ell\geq\sigma_1$, we denote by $\mathcal{H}_{\sigma,\ell}$ the space given by Remark \ref{remark holomorphic sections}, which is a model for $\rho_{\sigma,\ell}$.
\end{definition}
\noindent
We will use the following results proved by Dooley and Rice.
\begin{lemma}\cite[Lemma 5]{Dooley}\
\begin{itemize}
\item For each $\ell\in{\mathbb{N}}$, the multiplication by the function $\psi$ (given in Section \ref{A special character and a special function}) defines a linear map from $\mathcal{H}_{\sigma,\ell}$ to $\mathcal{H}_{\sigma,\ell+1}$.
\item If $\tilde{f}\in \mathcal{H}_{\sigma,\ell}$, then the restrictions of $\tilde{f}$ and $\psi \tilde{f}\in \mathcal{H}_{\sigma,\ell+1}$ to $\operatorname{SO}(n)$ are the same (since $\psi_{|_{\operatorname{SO}(n)}}\equiv 1$).
\item The spaces $\{{\mathcal{H}_{\sigma,\ell}}_{|_{\operatorname{SO}(n)}}\}_{\ell\in{\mathbb{N}}}$ of restrictions to $\operatorname{SO}(n)$ form an increasing sequence of subspaces of $\mathcal{H}_{\sigma,R}$.
\end{itemize}
\end{lemma}
\begin{theorem}\cite[Theorem 1 and Corollary 1]{Dooley}.
Let $\psi^\ell$ denote the $\ell$-th power of $\psi$ (i.e., $\psi^\ell=\psi \cdots \psi$, the pointwise product of $\ell$ copies of $\psi$). Let $B$ be a compact subset of $\mathbb{R}^n$. For an arbitrary function $\tilde{f} \in\mathcal{H}_{\sigma,\ell_0}$, it follows that
\begin{enumerate}
\item[$(i)$] for all $s\in \operatorname{SO}(n)$,
\begin{equation}\label{Theorem 1 Dooley}
\lim\limits_{\ell \to \infty} \left( \rho_{\sigma,\ell_0+\ell} (D_{\ell/R}(k,x)) ( \psi^\ell \tilde{f} ) \right) (s) = \left( \omega_{\sigma,R}(k,x) (\tilde{f}_{|_{\operatorname{SO}(n)}}) \right)(s)
\end{equation}
uniformly for $(k,x)\in \operatorname{SO}(n)\times B$;
\item[$(ii)$] and also,
\begin{equation}\label{Corollary 1 Dooley}
\lim\limits_{\ell \to \infty} \left\|\rho_{\sigma,\ell_0+\ell} (D_{\ell/R}(k,x)) ( \psi^\ell \tilde{f} ) - \omega_{\sigma,R}(k,x)
(\tilde{f}_{|_{\operatorname{SO}(n)}}) \right\|_{L^2(\operatorname{SO}(n))} = 0
\end{equation}
uniformly for $(k,x)\in \operatorname{SO}(n)\times B$.
\end{enumerate}
\end{theorem}
\begin{corollary}\cite[Corollary 2]{Dooley}
The increasing union $\bigcup_{\ell={1}}^\infty \left( {\mathcal{H}_{\sigma,\ell}}_{|_{\operatorname{SO}(n)}} \right)$ is dense in $\mathcal{H}_{\sigma,R}$ with respect to the $L^{2}(\operatorname{SO}(n))$-norm.
\end{corollary}
\section{The approximation theorem}
The aim of this section is to prove that the spherical functions of type $\tau$ corresponding to the strong Gelfand pair $(\operatorname{SO}(n)\ltimes\mathbb{R}^n, \operatorname{SO}(n))$ can be obtained as an appropriate limit of spherical functions of type $\tau$ associated to the strong Gelfand pair $(\operatorname{SO}(n+1), \operatorname{SO}(n))$.
\\ \\
Let $(\tau,V_\tau)$ be an arbitrary irreducible unitary representation of ${K}$. We take $(\omega_{\sigma,R},\mathcal{H}_{\sigma,R})\in \widehat{\operatorname{M}(n)}$ such that $\tau\subset \omega_{\sigma,R}$ as $K$-module, that is, $\tau$ appears in the decomposition of $\omega_{\sigma,R}$ into irreducible representations as $K$-module. According to Section \ref{rep M(n)}
$$\omega_{\sigma,R}=\operatorname{Ind}_{\operatorname{SO}(n-1)\ltimes\mathbb{R}^n}^{\operatorname{SO}(n)\ltimes \mathbb{R}^n }(\sigma\otimes\chi_R).$$
From Frobenius reciprocity,
$\tau\subset\omega_{\sigma,R}$ as $\operatorname{SO}(n)$-module if and only if $\sigma\subset\tau$ as $\operatorname{SO}(n-1)$-module. Moreover,
\begin{equation}\label{frobenius}
m(\tau,\omega_{\sigma,R} )=m(\sigma, \tau).
\end{equation}
We denote by $(\tau_1,...,\tau_{m})$ the partition associated to $\tau$ if $n=2m$ and $(\tau_1,...,\tau_{m-1})$ if $n=2m-1$.
\begin{remark}
Let $(\sigma_1,...,\sigma_{m-1})$ be the partition corresponding to the representation $\sigma\in\widehat{\operatorname{SO}(n-1)}$ and assume $\tau\subset\omega_{\sigma,R}$. From \eqref{frobenius} and the branching formulas given in Section \ref{rep SO(n+1)} we have the following:
\begin{itemize}
\item[$(i)$] If $n=2m$, from (\ref{decomp so restr caso par}) we have that
\begin{equation}\label{sigma en tau 1}
\tau_1\geq\sigma_1\geq\tau_2\geq\sigma_2\geq \ ... \ \geq \tau_{m-1}\geq \sigma_{m-1}\geq |\tau_m|.
\end{equation}
\item[$(ii)$] If $n=2m-1$, from (\ref{decomp so restr caso imp})
\begin{equation}\label{sigma en tau 2}
\tau_1\geq\sigma_1\geq\tau_2\geq\sigma_2\geq \ ... \ \geq \sigma_{m-2}\geq\tau_{m-1}\geq |\sigma_{m-1}|.
\end{equation}
\end{itemize}
\end{remark}
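\noindent To illustrate the previous remark, suppose for instance that $n=4$ (so $m=2$) and that $\tau\in\widehat{\operatorname{SO}(4)}$ has partition $(\tau_1,\tau_2)=(3,1)$. Then condition \eqref{sigma en tau 1} reads
\begin{equation*}
3\geq\sigma_1\geq 1,
\end{equation*}
so $\tau\subset\omega_{\sigma,R}$ exactly when the partition $(\sigma_1)$ of $\sigma\in\widehat{\operatorname{SO}(3)}$ satisfies $\sigma_1\in\{1,2,3\}$.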
\noindent Let $\mathcal{H}_{\sigma,R}(\tau)$ be the $\tau$-isotypic component of $\mathcal{H}_{\sigma,R}$.
We fix $\Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)}$ the spherical function of type $\tau$ of $(\operatorname{M}(n),K)$ associated to the representation $\omega_{\sigma,R}$ of $\operatorname{M}(n)$ (see Definition \ref{def spherical function of type tau}).
\\ \\
We will consider a family of irreducible unitary representations of $\operatorname{SO}(n+1)$ that forms a contracting sequence associated to $\omega_{\sigma,R}$.
Let $\chi_\sigma$ denote the character associated to $\sigma$.
The special orthogonal group $\operatorname{SO}(n-1)$ can be embedded in $\operatorname{SO}(n+1)$ by starting with a $2\times 2$ identity block in the top left-hand corner. Let $\mathbb{T}^m$ be the maximal torus of $\operatorname{SO}(n+1)$ and $\mathbb{T}^{m-1}$ be the maximal torus of $\operatorname{SO}(n-1)$ that are of the form given in Section \ref{rep SO(n+1)}. That is, if $n+1$ is even
\begin{small}
\begin{equation*}
\mathbb{T}^m \supset \mathbb{T}^{m-1}=\left\lbrace \begin{pmatrix}
1& 0 \\
0 & 1 \\
& & \cos(\theta_2)& \sin(\theta_2) \\
& & -\sin(\theta_2) & \cos(\theta_2) \\
& & & & \ddots &\\
& & & & & \cos(\theta_m)& \sin(\theta_m) \\
& & & & & -\sin(\theta_m) & \cos(\theta_m)
\end{pmatrix} | \ \theta_2, ..., \theta_m \in \mathbb{R} \right\rbrace .
\end{equation*}
\end{small}
\noindent When $n+1$ is odd they are the same but with a one in the bottom right corner. We consider a Cartan subalgebra of $\mathfrak{so}(n+1,\mathbb{C})$ generated by $\{H_1,H_2,...,H_m\}$ as in Section \ref{rep SO(n+1)}. By the orthogonality relations with respect to the Killing form, we can consider that $\{H_2,...,H_m\}$ is a basis of a Cartan subalgebra of $\mathfrak{so}(n-1,\mathbb{C})$ embedded in $\mathfrak{so}(n+1,\mathbb{C})$.
\\ \\
For each non-negative integer $\ell$, let $(\rho_{\sigma,\ell},\mathcal{H}_{\sigma,\ell})$ be the representation of $\operatorname{SO}(n+1)$ constructed as in Section \ref{rep SO(n+1)} from the character $\gamma^\ell\chi_\sigma$ of $\mathbb{T}^m$. It is easy to see that the corresponding partition is $(\ell,\sigma_1,...,\sigma_{m-1})$. If $\ell\in\mathbb{N}$ is such that $\ell<\sigma_1$, the representation $\rho_{\sigma,\ell}$ is trivial and if $\ell\geq\sigma_1$, the representation $\rho_{\sigma,\ell}$ is irreducible.
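\noindent For instance (just to illustrate this parametrization), if $n+1=5$ and $\sigma\in\widehat{\operatorname{SO}(3)}$ has partition $(\sigma_1)=(2)$, then $\rho_{\sigma,\ell}$ corresponds to the partition
\begin{equation*}
(\ell,2) ,
\end{equation*}
which is dominant, and hence parametrizes an irreducible representation of $\operatorname{SO}(5)$, precisely when $\ell\geq 2=\sigma_1$.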
\begin{lemma}\label{lemma 2}
If $\tau$ appears in the decomposition into irreducible representations of $\omega_{\sigma,R}$ as $K$-module, then $\tau$ appears in the decomposition of $\rho_{\sigma,\ell}$ as $K$-module for all $\ell\geq \tau_1$.
\end{lemma}
\begin{proof}
We will apply the results given in Section \ref{rep SO(n+1)} to our case recalling \eqref{frobenius}.
\begin{itemize}
\item[$(i)$] Let $n+1=2m+1$.
Since $\rho_{\sigma,\ell}$ is in correspondence with the partition $(\ell,\sigma_1,...,\sigma_{m-1})$, it follows from (\ref{sigma en tau 1}) and (\ref{decomp so restr caso imp}) that $\tau$ appears in the decomposition of $\rho_{\sigma,\ell}$ as $K$-module if $\ell\geq\tau_1$.
\item[$(ii)$]
Let $n+1=2m$.
If $\ell\geq\tau_1$, it follows from (\ref{sigma en tau 2}) and (\ref{decomp so restr caso par}) that $\tau$ appears in the decomposition of $\rho_{\sigma,\ell}$ as $K$-module.
\end{itemize}
\end{proof}
\begin{remark}\label{remark rep M(n) para el aprox teo}
Note that, from (\ref{principal serie}) the representation $\omega_{\sigma,R}$ restricted to $K$ acts on $\mathcal{H}_{\sigma,R}$ as the left regular action, i.e., for each $k\in K$,
$$(\omega_{\sigma,R}(k,0)f)(k_0)=(L_k(f))(k_0)=f(k^{-1}k_0) \quad \forall k_0\in K \quad \text{ and } \quad \forall f\in \mathcal{H}_{\sigma,R}.$$
Apart from that, for each $\ell\in\mathbb{Z}_{\geq 0}$, ${\mathcal{H}_{\sigma,\ell}}_{|_{K}}$ is a $K$-submodule of ${\mathcal{H}_{\sigma,R}}$.
Thus,
the restriction operator given by
\begin{equation*}
Res_\ell(\tilde{f}):=\tilde{f}_{|_{K}} \qquad \forall \tilde{f}\in \mathcal{H}_{\sigma,\ell}
\end{equation*}
intertwines $\mathcal{H}_{\sigma,\ell}$ and $\mathcal{H}_{\sigma,R}$ as $K$-modules.
\end{remark}
\begin{lemma} \label{lemma 4}
If $f\in\mathcal{H}_{\sigma,R}(\tau)$, then there exists $\ell'\in\mathbb{N}$ such that $f\in{\mathcal{H}_{\sigma,\ell}}_{|_{K}}$ for all $\ell\geq \ell'$. Moreover, let $\ell_0:=\max\{\tau_1,\ell'\}$, then there exists a unique $\tilde{f}\in\mathcal{H}_{\sigma,\ell_0}(\tau)$ such that $f=\tilde{f}_{|_{K}}$.
\end{lemma}
\begin{proof}
The space $\mathcal{H}_{\sigma,R}(\tau)\simeq V_\tau$ is an invariant factor in the decomposition of $\mathcal{H}_{\sigma,R}$ as $K$-module.
From Section \ref{sec rdos de Dooley}, each ${\mathcal{H}_{\sigma,\ell}}_{|_{K}}$ is a subspace of $\mathcal{H}_{\sigma,R}$, moreover, $\bigcup_{\ell={1}}^\infty \left( {\mathcal{H}_{\sigma,\ell}}_{|_{\operatorname{SO}(n)}} \right)$ is dense in $\mathcal{H}_{\sigma,R}$.
Since the dimension of $V_\tau$ is finite, there exists $\ell'\in \mathbb{N}$ such that $\mathcal{H}_{\sigma,R}(\tau)$ is contained in ${\mathcal{H}_{\sigma,\ell'}}_{|_{K}}$. Furthermore, as ${\mathcal{H}_{\sigma,\ell}}_{|_{K}}\subset {\mathcal{H}_{\sigma,\ell+1}}_{|_{K}}$ for all $\ell\in \mathbb{N}$, it follows that $\mathcal{H}_{\sigma,R}(\tau)\subset {\mathcal{H}_{\sigma,\ell}}_{|_{K}}$ for all $\ell\geq \ell'$.
\\ \\
Apart from that, since the decomposition of $\rho_{\sigma,\ell}$ as $K$-module is multiplicity free (for all $\ell\in\mathbb{N}$),
then the operator $Res_\ell$ is a linear isomorphism that maps the irreducible component $\mathcal{H}_{\sigma,\ell}(\tau)$ into the irreducible component ${\mathcal{H}_{\sigma,\ell}}_{|_{K}}(\tau)$, for all $\ell \geq \tau_1$.
Finally, if $f$ is an arbitrary function in $\mathcal{H}_{\sigma,R}(\tau)$, there is a unique $\tilde{f}\in \mathcal{H}_{\sigma,\ell_0}(\tau)$ such that $\tilde{f}(k)=f(k)$ for all $k\in K$.
\end{proof}
\begin{lemma}\label{lemma 5}
Let $\ell_0\in\mathbb{N}$ be as in Lemma \ref{lemma 4} and let $\tilde{f}\in\mathcal{H}_{\sigma,\ell_0}(\tau)$. It follows that $\psi^{\ell}\tilde{f}\in \mathcal{H}_{\sigma,\ell_0+\ell}(\tau)$ for all $\ell\in \mathbb{N}$.
\end{lemma}
\begin{proof}
From Section \ref{sec rdos de Dooley}, if $\tilde{f}\in\mathcal{H}_{\sigma,\ell_0}$, then $\psi^{\ell}\tilde{f}\in \mathcal{H}_{\sigma,\ell_0+\ell}$ for all $\ell\in\mathbb{N}$. Also, since $\psi$ is a $K$-invariant function (i.e., $\psi(k^{-1}g)=\psi(g)$ for all $ k\in K$ and $g\in \operatorname{SO}(n+1)$),
then the multiplication by $\psi^\ell$ is an intertwining operator between $(\rho_{\sigma,\ell_0},\mathcal{H}_{\sigma,\ell_0})$ and $(\rho_{\sigma,\ell_0+\ell},\mathcal{H}_{\sigma,\ell_0+\ell})$ as $K$-modules.
Since the decomposition of $\rho_{\sigma,\ell}$ as $K$-module is multiplicity free (for all $\ell\in\mathbb{N}$), the multiplication by $\psi^\ell$ maps irreducible component to irreducible component, that is, maps $\tilde{f}\in\mathcal{H}_{\sigma,\ell_0}(\tau)$ into $\psi^\ell\tilde{f}\in\mathcal{H}_{\sigma,\ell_0+\ell}(\tau)$.
\end{proof}
\noindent Let $\ell_0$ be as in Lemma \ref{lemma 4}. We consider the family
\begin{equation}\label{sq sph func de so}
\{\Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)}\}_{\ell\in\mathbb{Z}_{\geq 0}}
\end{equation}
of spherical functions of type $\tau$ of the strong Gelfand pair $(\operatorname{SO}(n+1), K)$ associated with the representations $\rho_{\sigma,\ell_0+\ell}$.
\\ \\
With all the previous notation we state the following result.
\begin{theorem} Let $\tau\in\widehat{\operatorname{SO}(n)}$ and $(\omega_{\sigma,R},\mathcal{H}_{\sigma,R})\in \widehat{\operatorname{M}(n)}$ such that $\sigma\subset\tau$ as $\operatorname{SO}(n-1)$-module. Let $\Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)}$ be the spherical function of type $\tau$ of $(\operatorname{M}(n),\operatorname{SO}(n))$ corresponding to $\omega_{\sigma,R}$. Then, there exists a family $\{\Phi_{\rho_{\sigma,\ell}}^{\tau,\operatorname{SO}(n+1)}\}_{\ell\in \mathbb{Z}_{\geq 0}}$ of spherical functions of type $\tau$ of $(\operatorname{SO}(n+1),\operatorname{SO}(n))$ satisfying the following:
\noindent For each $f\in \mathcal{H}_{\sigma,R}(\tau)$ there exists a unique $\tilde{f}\in \mathcal{H}_{\sigma,\ell_0}(\tau)$ such that for every compact subset $B$ of $\mathbb{R}^n$ it holds that
\begin{itemize}
\item[$(i)$]
\begin{equation*}
\lim\limits_{\ell \to \infty} \left( \Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)} (D_{\ell/R}(k,x) ) ( \psi^\ell \tilde{f} ) \right) (s) = \left( \Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)}(k,x)(f) \right) (s) \quad \text{ for all } s\in \operatorname{SO}(n) \text{ and}
\end{equation*}
\item[$(ii)$]
\begin{equation*}
\lim\limits_{\ell \to \infty} \left\| \left( \Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)} (D_{\ell/R}(k,x) ) ( \psi^\ell \tilde{f} ) \right) _{|_{\operatorname{SO}(n)}} - \Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)}(k,x)(f) \right\|_{L^2(\operatorname{SO}(n))}=0
\end{equation*}
\end{itemize}
where the convergences are uniformly for $(k,x)\in \operatorname{SO}(n)\times B$.
\end{theorem}
\begin{proof}
Let $\tilde{f}$ be given by Lemma \ref{lemma 4}. First of all, note that $\Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)} (g)\in \operatorname{End}(\mathcal{H}_{\sigma,\ell_0+\ell}(\tau))$ and, from Lemma \ref{lemma 5}, $\psi^\ell \tilde{f}\in \mathcal{H}_{\sigma,\ell_0+\ell}(\tau)$.
\\ \\
Since the convergence in (\ref{Theorem 1 Dooley}) and (\ref{Corollary 1 Dooley}) is uniform for $(k,x)\in \operatorname{SO}(n)\times B$, we may convolve (over $K$) with $d_\tau\overline{\chi_\tau}$ and we get that
for all $s\in K$,
\begin{equation*}
\lim\limits_{\ell \to \infty} \left( d_\tau\overline{\chi_\tau} * \left(\rho_{\sigma,\ell_0+\ell} (D_{\ell/R}(k,x)) ( \psi^\ell \tilde{f} )_{|_{K}} \right) \right) (s) = \left( d_\tau\overline{\chi_\tau} * \omega_{\sigma,R}(k,x) (f) \right)(s)
\end{equation*}
and also,
\begin{equation*}
\lim\limits_{\ell \to \infty} \left\|d_\tau\overline{\chi_\tau} *_K (\rho_{\sigma,\ell_0+\ell} (D_{\ell/R}(k,x)) ( \psi^\ell \tilde{f} )) - d_\tau\overline{\chi_\tau} * (\omega_{\sigma,R}(k,x)(f)) \right\|_{L^2(\operatorname{SO}(n))} = 0 .
\end{equation*}
Since it is obvious that
$$d_\tau\overline{\chi_\tau} * \left(\rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )_{|_{K}}\right)=
\left( d_\tau\overline{\chi_\tau} *_K \rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )\right)_{|_{K}} \quad \forall g\in \operatorname{SO}(n+1),$$
then for each $g\in \operatorname{SO}(n+1)$ and for all $s\in K$,
\begin{align*}
\left( d_\tau \overline{\chi_\tau} *_K \rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )\right)(s)&=
d_\tau \int_{K} \overline{\chi_\tau}(k) \left(\rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )\right)(k^{-1}s)dk \\
&= d_\tau \int_{K} \overline{\chi_\tau}(k) L_k\left(\rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )\right)(s)dk\\
&= d_\tau \int_{K} \overline{\chi_\tau}(k) \rho_{\sigma,{\ell_0+\ell}}(k)\left(\rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} )\right)(s)dk\\
&=P_{\rho_{\sigma,\ell_0+\ell}}^\tau \left( \rho_{\sigma,\ell_0+\ell} (g) ( \psi^\ell \tilde{f} ) \right)(s)\\
&=\left( \Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)} (g) ( \psi^\ell \tilde{f} ) \right) (s).
\end{align*}
Similarly, for each $(k,x)\in \operatorname{M}(n)$ and for all $s\in K$,
\begin{align*}
\left( d_\tau\overline{\chi_\tau} * \omega_{\sigma,R}(k,x) (f) \right)(s)&= P_{\omega_{\sigma,R}}^\tau \left( \omega_{\sigma,R} (k,x) ( {f} ) \right)(s)\\
&=\left( \Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)} (k,x) ( {f} ) \right) (s).
\end{align*}
\end{proof}
\noindent We would like to end this paper, as in the last remark given by Dooley and Rice in \cite{Dooley}, by noting that the harmonic analysis (scalar-valued and vector- or matrix-valued) on $\operatorname{M}(n)$ can be obtained as a limit (in an appropriate sense) of the harmonic analysis on $\operatorname{SO}(n+1)$. Indeed,
consider for each $\ell\in\mathbb{Z}_{\geq 0}$ the map
\begin{gather*}
Res_{\ell_0+\ell}:\mathcal{H}_{\sigma,\ell_0+\ell}(\tau)\longrightarrow \mathcal{H}_{\sigma, R}(\tau) \\
\qquad \qquad \qquad h\longmapsto h_{|_{K}}
\end{gather*}
and the map
\begin{gather*}
\mathcal{H}_{\sigma, R}(\tau) \longrightarrow \mathcal{H}_{\sigma,\ell_0+\ell}(\tau)\\
f\longmapsto \psi^{\ell}\tilde{f},
\end{gather*}
where $\tilde{f}$ is as in Lemma \ref{lemma 4}. These two maps are inverses of each other.
From the previous theorem, for each
$f\in\mathcal{H}_{\sigma,R}(\tau)$ it follows that
\begin{equation}\label{conjugar con Res}
\lim_{\ell\rightarrow \infty} \left\| \left[ Res_{\ell_0+\ell}\circ \Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)}(D_{\ell/R}(\cdot)) \circ (Res_{\ell_0+\ell})^{-1}-\Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)} (\cdot)\right](f)\right\|_{L^2(\operatorname{SO}(n))}=0,
\end{equation}
where the convergence is uniform on compact sets of $\operatorname{M}(n)$.
As we saw in Remark \ref{clases de conj de func esf}, for all $\ell\in \mathbb{Z}_{\geq 0}$, the functions $\Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)}$ and $Res_{\ell_0+\ell}\circ [ \Phi_{\rho_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)}(\cdot)] \circ (Res_{\ell_0+\ell})^{-1}$ represent the same spherical function.
Now, using the isomorphism $\mathcal{H}_{\sigma,R}(\tau)\simeq V_\tau$ and again Remark \ref{clases de conj de func esf}, the
limit given in (\ref{conjugar con Res})
can be rewritten as
\begin{equation}
\lim_{\ell\rightarrow \infty} \left\| \left[ \Phi_{{\rho}
_{\sigma,\ell_0+\ell}}^{\tau, \operatorname{SO}(n+1)}(D_{\ell/R}(\cdot)) -\Phi_{\omega_{\sigma,R}}^{\tau, \operatorname{M}(n)} (\cdot)\right](v)\right\|_{V_\tau}=0 \qquad \text{ for all } v\in V_\tau,
\end{equation}
where $\|\cdot\|_{V_\tau}$ is a norm on the finite-dimensional vector space $V_\tau$ and the limit is uniform on compact sets of $\operatorname{M}(n)$.
\\ \\
Therefore we have proved the following theorem.
\begin{theorem} \label{coro}
Let $(\tau,V_\tau)\in\widehat{\operatorname{SO}(n)}$ and let $\Phi^{\tau,\operatorname{M}(n)}$ be a spherical function of type $\tau$ of the strong Gelfand pair $(\operatorname{M}(n),\operatorname{SO}(n))$.
There exists a sequence $\{\Phi_{\ell}^{\tau, \operatorname{SO}(n+1)}\}_{\ell\in\mathbb{Z}_{\geq 0}}$ of spherical functions of type $\tau$ of the strong Gelfand pair $(\operatorname{SO}(n+1),\operatorname{SO}(n))$ and a contraction $\{D_\ell \}_{\ell\in\mathbb{Z}_{\geq 0}}$ of $\operatorname{SO}(n+1)$ to $\operatorname{M}(n)$ such that
\begin{equation*}
\lim_{\ell\rightarrow \infty} \Phi_\ell^{\tau, \operatorname{SO}(n+1)}(D_{\ell}(k,x)) =\Phi^{\tau, \operatorname{M}(n)} (k,x),
\end{equation*}
where the convergence is point-wise on $V_\tau$ and it is uniform on compact sets of $\operatorname{M}(n)$.
\end{theorem}
\begin{remark}
We emphasize that the above result is independent of the model chosen for the representations that parametrize the spherical functions.
\end{remark}
\section{The approximation theorem in the dual case}
In this section we will first consider a general framework. Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$ and let $K$ be a closed subgroup with Lie algebra $\mathfrak{k}$. The coset space $G/K$ is called reductive if $\mathfrak{k}$ admits an $\operatorname{Ad}_G(K)$-invariant complement $\mathop{p}$ in $\mathfrak{g}$.
In this case one can form the semidirect product $K\ltimes\mathop{p}$ with respect to the adjoint action of $K$ on $\mathop{p}$. We will restrict ourselves to the case where $G$ is semisimple with finite center. In particular, let $\theta$ be an analytic involution on $G$ such that $(G,K)$ is a Riemannian symmetric pair, that is, $K$ is contained in the fixed point set $K_\theta$ of the involution $\theta$, contains its identity component, and $\operatorname{Ad}_G(K)$ is compact. The subalgebra $\mathfrak{k}$ is the $+1$ eigenspace of $d\theta_e$ and naturally we can choose $\mathop{p}$ as the $-1$ eigenspace. Furthermore, we will only consider $G$ non compact. In this case $K$ is compact and connected \cite[p. 252]{Helgason} and $d\theta_e$ is a Cartan involution, so $\mathfrak{g}=\mathfrak{k}\oplus\mathop{p}$ is called a Cartan decomposition \cite[p. 182]{Helgason}.
The semidirect product $K\ltimes \mathop{p}$ is called the \textit{Cartan motion group} associated to the pair $(G,K)$.
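To fix ideas, a standard example (stated here only for illustration) is the following: for $G=\operatorname{SL}(n,\mathbb{R})$ with the Cartan involution $\theta(g)=(g^{T})^{-1}$ one has $K=\operatorname{SO}(n)$ and
\begin{equation*}
\mathfrak{k}=\mathfrak{so}(n), \qquad \mathop{p}=\left\{X\in M_n(\mathbb{R}) \ | \ X=X^{T}, \ \operatorname{tr}(X)=0\right\},
\end{equation*}
with $K$ acting on $\mathop{p}$ by conjugation, so the associated Cartan motion group is $\operatorname{SO}(n)\ltimes \mathop{p}$.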
\\ \\
The unitary dual $\widehat{K\ltimes \mathop{p}}$ can be described as the one given in Section \ref{rep M(n)}. First one must fix a character of $ \mathop{p}$.
Any character of $\mathop{p}$ can be uniquely expressed as $e^{i\phi(x)}$ for a linear functional $\phi\in\mathop{p}^{*}$. Then one must consider
$$K_\phi:=\left\{k\in K | \ e^{i\phi(\operatorname{Ad}(k^{-1})x)}=e^{i\phi(x)} \ \forall x\in \mathop{p}\right\}$$
and $(\sigma,H_\sigma)\in \widehat{K_\phi}$.
After that we get an irreducible unitary representation $\omega_{\sigma,\phi}$ of $K\ltimes \mathop{p}$ inducing $\sigma\otimes e^{i\phi(\cdot)}$ from $K_{\phi}\ltimes \mathop{p}$ to $K\ltimes \mathop{p}$.
By definition $\omega_{\sigma,\phi}$ acts by left translations on a space of functions $f:K\ltimes \mathop{p}\longrightarrow H_\sigma$ satisfying
\begin{equation*}
f(gxm)=e^{-i\phi(x)}\sigma(m)^{-1}f(g) \qquad \forall x\in \mathop{p}, \ m\in K_\phi \text{ and } g\in K\ltimes \mathop{p}.
\end{equation*}
Consequently,
\begin{equation*}
f(xk)=f(k\operatorname{Ad}(k^{-1})x)=e^{-i\phi(\operatorname{Ad}(k^{-1})x)}f(k) \qquad \forall x\in \mathop{p}, \ k\in K,
\end{equation*}
so any such $f$ is completely determined by its restriction to $K$. Therefore, for the representation $\omega_{\sigma,\phi}$ we can consider only those functions whose restrictions to $K$ lie in $L^2(K,H_\sigma)$. These restrictions comprise the closed subspace $H_{\omega_{\sigma,\phi}}$ of $L^2(K,H_\sigma)$
\begin{equation*}
H_{\omega_{\sigma,\phi}}:=\left\{f\in L^2(K,H_\sigma) | \ f(km)=\sigma(m)^{-1}f(k) \ \forall m\in K_\phi, k\in K \right\}
\end{equation*}
and $\omega_{\sigma,\phi}$ acts on $H_{\omega_{\sigma,\phi}}$ by
\begin{equation*}
\left(\omega_{\sigma,\phi}(k,x)f\right)(k_0):=e^{i\phi(\operatorname{Ad}(k_0^{-1})x)}f(k^{-1}k_0).
\end{equation*}
Every irreducible unitary representation of $K\ltimes \mathop{p}$ occurs in this way and two irreducible unitary representations $\omega_{\sigma_1,\phi_1}$
and $\omega_{\sigma_2,\phi_2}$ are unitarily equivalent if and only if
\begin{itemize}
\item $\phi_1$ and $\phi_2$ lie in the same coadjoint orbit of $K$ and
\item $\sigma_1$ and $\sigma_2$ are unitarily equivalent.
\end{itemize}
Because $K$ is compact we can endow $\mathop{p}$ with an $\operatorname{Ad}(K)$-invariant inner product $\langle\cdot,\cdot\rangle$ (for example, the Killing form restricted to $\mathop{p}$) and via $\langle\cdot,\cdot\rangle$ we identify $\mathop{p}$ with $\mathop{p^{*}}$ and the adjoint with the coadjoint action of $K$. Let $\mathfrak{a}\subset\mathop{p}$ be a maximal abelian subalgebra of $\mathop{p}$. Every adjoint orbit of $K$ in $\mathop{p}$ intersects $\mathfrak{a}$ (\cite[p. 247]{Helgason}). Hence every irreducible unitary representation of $K\ltimes\mathop{p}$ has the form $\omega_{\sigma,\phi}$ with $\phi(x)=\langle H, x \rangle$ for some $H\in \mathfrak{a}$, that is, we are allowed to suppose $\phi\in\mathfrak{a}^*$. Therefore, $K_\phi$ coincides with the stabilizer of $H$ under the adjoint action of $K$. Let $M$ be the centralizer of $\mathfrak{a}$ in $K$. We say that $\omega_{\sigma,\phi}\in\widehat{K\ltimes \mathop{p}}$ is \textit{generic} if $K_\phi=M$. Since the set of non generic irreducible unitary representations of $K\ltimes \mathop{p}$
has zero Plancherel measure, we shall be concerned with the generic cases. That is, we will consider
\begin{equation}\label{rep generica}
\omega_{\sigma,\phi}=\operatorname{Ind}_{M\ltimes \mathop{p}}^{K\ltimes \mathop{p}}(\sigma\otimes e^{i\phi(\cdot)}) \qquad (\sigma\in\widehat{M}, \phi\in\mathfrak{a}^*).
\end{equation}
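\noindent As an illustration (anticipating the example at the end of this section), take $G=\operatorname{SO}_0(n,1)$ and $K=\operatorname{SO}(n)$. Then $\mathop{p}$ can be identified with $\mathbb{R}^n$ with the standard action of $K$, the subalgebra $\mathfrak{a}=\mathbb{R}H_0$ is one-dimensional (here $H_0$ denotes a fixed unit vector) and $M\simeq\operatorname{SO}(n-1)$ is the stabilizer of $H_0$ in $K$. Writing $\phi(x)=R\langle H_0,x\rangle$ with $R>0$, the representation \eqref{rep generica} becomes
\begin{equation*}
\omega_{\sigma,\phi}=\operatorname{Ind}_{\operatorname{SO}(n-1)\ltimes\mathbb{R}^n}^{\operatorname{SO}(n)\ltimes\mathbb{R}^n}(\sigma\otimes\chi_R),
\end{equation*}
that is, the representation $\omega_{\sigma,R}$ of $\operatorname{M}(n)$ considered in Section \ref{rep M(n)}.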
\noindent On the other hand, let $G=KAN$ be the Iwasawa decomposition of $G$, where $A:=\exp_G(\mathfrak{a})$. Let $(\sigma,H_\sigma)\in\widehat{M}$. Let $\gamma\in \mathfrak{a}^*\otimes \mathbb{C}$ be such that $\gamma=\phi+i\nu$, where $\phi\in \mathfrak{a}^*$ and $\nu\in \mathfrak{a}^*$ is the particular linear map $\nu:=\frac{1}{2}\sum_{r\in P^+}c_rr$, where $P^+$ is the set of positive restricted roots and $c_r$ is the multiplicity of the root $r$. Let $1_N$ denote the trivial representation of $N$. A principal series representation $\rho_{\sigma,\phi}$ of $G$ can be given by inducing $\gamma\otimes\sigma\otimes 1_N$ from $MAN$ to $KAN=G$, that is,
\begin{equation}\label{rep serie ppal}
\rho_{\sigma,\phi}=\operatorname{Ind}_{MAN}^{G}(\gamma\otimes\sigma\otimes 1_N) \qquad (\sigma\in\widehat{M}, \phi\in\mathfrak{a}^*).
\end{equation}
\noindent As such, it is realised on a space of functions $F:G\longrightarrow H_\sigma$ satisfying
\begin{equation}\label{cond noncompact rep}
F(gman)=e^{-i\gamma(\log(a))}\sigma(m)^{-1}F(g) \qquad \forall g\in G, \ man\in MAN.
\end{equation}
By the Iwasawa decomposition such functions are clearly determined by their restrictions to $K$. A principal series representation gives rise to a unitary representation when its representation space $H_{\rho_{\sigma,\phi}}$ consists of functions satisfying \eqref{cond noncompact rep} and whose restrictions to $K$ lie in $L^2(K,H_\sigma)$. These restrictions comprise the subspace of $L^2(K,H_\sigma)$ whose functions $f$ satisfy
\begin{equation*}
f(km)=\sigma(m)^{-1}f(k) \qquad \forall k\in K, m\in M.
\end{equation*}
Note that $H_{\omega_{\sigma,\phi}}$ coincides with $\left(H_{\rho_{\sigma,\phi}}\right)_{|_{K}}$.
\\ \\
Given any generic irreducible unitary representation $\omega_{\sigma,\phi}$ of $K\ltimes \mathop{p}$, we can associate the sequence $\left\{\rho_{\sigma,\ell\phi}\right\}_{\ell=1}^\infty$ of unitary principal series representations of $G$.
As in \eqref{contaction} we consider the contraction maps $\{D_\beta\}_{\beta\in\mathbb{R}_{>0}}$
\begin{gather}\label{contraction 2}
D_\beta: K\ltimes \mathop{p}\longrightarrow G \notag \\
D_{\beta}(k,x):= \exp_G(\frac{1}{\beta}x) \ k.
\end{gather}
As in Section \ref{A special character and a special function} we consider the special function
\begin{gather}\label{s phi}
s_\phi: G\longrightarrow \mathbb{C} \notag \\
s_\phi(kan):= e^{-i\phi(\log(a))},
\end{gather}
which is $K$-invariant and has value $1$ on $K$. We have that, if $f\in H_{\rho_{\sigma,\ell\phi}}$, then $s_\phi f\in H_{\rho_{\sigma,(\ell+1)\phi}}$ and $s_\phi f$ has the same restriction to $K$ as $f$.
The following result, due to Dooley and Rice, shows how
the sequence $\left\{\rho_{\sigma,\ell\phi}\right\}_{\ell=1}^\infty$ approximates $\omega_{\sigma,\phi}$.
\begin{theorem} \cite[Theorem 1 and Corollary (4.4)]{Dooley2}\label{teo dooley 2}
For all $(k,x)\in K\ltimes \mathop{p}$ and $F\in H_{\rho_{\sigma,\phi}}$
\begin{equation}\label{D R paper 2}
\lim_{\ell\rightarrow\infty}\left\|
\left(\rho_{\sigma,\ell\phi} (D_{\ell}(k,x))(s_\phi^\ell F)\right)_{|_{K}} -
\omega_{\sigma, \phi}(k,x)(F_{|_{K}})
\right\|_{L^2(K,H_\sigma)}=0.
\end{equation}
Moreover, if $F$ is a smooth function, the convergence is uniform on compact subsets of $K\ltimes\mathop{p}$.
\end{theorem}
\bigskip
\noindent Let $\tau\in\widehat{K}$. It follows from Frobenius reciprocity that $\tau\subset\left(\omega_{\sigma,\phi}\right)_{|_{K}}$ and that $\tau\subset\left(\rho_{\sigma,\ell\phi}\right)_{|_{K}}$ if and only if $\sigma\subset\tau_{|_{M}}$. In particular,
\begin{equation}\label{multiplicity}
m(\tau,\omega_{\sigma,\phi})=m(\sigma,\tau)=m(\tau,\rho_{\sigma,\phi}).
\end{equation}
\noindent We fix $\omega_{\sigma,\phi}\in \widehat{K\ltimes \mathop{p}}$ such that $\tau$ is a $K$-submodule of $\omega_{\sigma,\phi}$.
\\ \\
Consider the restriction operator $$Res_{\ell\phi}(F):=F_{|_{K}} \qquad \text{for all } F\in H_{\rho_{\sigma,\ell\phi}}.$$ Since the action of $\rho_{\sigma,\ell\phi}$ is by left translations it is obvious that $Res_{\ell\phi}$ intertwines $H_{\rho_{\sigma,\ell\phi}}$ and $H_{\omega_{\sigma,\phi}}$ as $K$-modules.
Moreover, $Res_{\ell\phi}$ sends $H_{\rho_{\sigma,\ell\phi}}(\tau)$ to $H_{\omega_{\sigma,\phi}}(\tau)$.
Apart from that, observe that the multiplication by the function $s_\phi$ is an intertwining operator between $H_{\rho_{\sigma,\ell\phi}}$ and $H_{\rho_{\sigma,(\ell+1)\phi}}$ as $K$-modules (for all $\ell\in \mathbb{N}$).
\\ \\
Now, let $f\in H_{\omega_{\sigma,\phi}}$, we extend it to $G$ by
\begin{equation}\label{extention}
F(g)=F(k_g a_g n_g):=e^{-i\gamma(\log(a_g))}f(k_g),
\end{equation}
where $g=k_g a_g n_g$ with $k_g\in K$, $a_g\in A$ and
$n_g\in N$ is the Iwasawa decomposition of $g\in G$. The inverse of the restriction map defined previously is $Res^{-1}_{\ell\phi}(f):=(s_\phi)^\ell F$ for all $f\in H_{\omega_{\sigma,\phi}}(\tau)$, where $F$ is defined as in \eqref{extention}.
\\ \\
With all this in mind, Theorem \ref{teo dooley 2} can be rewritten in the following way: For all $(k,x)\in K\ltimes\mathop{p}$ and $f\in H_{\omega_{\sigma,\phi}}(\tau)$,
\begin{equation}\label{D R paper 2 reescrita}
\lim_{\ell\rightarrow\infty}\left\|\left(
Res_{\ell\phi}\circ\rho_{\sigma,\ell\phi}\left( D_{\ell}(k,x)\right)\circ Res_{\ell\phi}^{-1} -
\omega_{\sigma, \phi}(k,x)\right)(f)
\right\|_{L^2(K,H_\sigma)}=0.
\end{equation}
Finally, by \eqref{projection},
the projections $P_{\omega_{\sigma,\phi}}^\tau$
and $P_{\rho_{\sigma,\ell\phi}}^\tau$ are given by the same formula, i.e., by the convolution on $K$ with $d_\tau \overline{\chi_\tau}$. Moreover, they are continuous operators. Therefore, from \eqref{D R paper 2 reescrita} we get the asymptotic formula
\begin{equation}\label{D R paper 2 reescrita 2}
\lim_{\ell\rightarrow\infty}\left\|\left(
P_{\rho_{\sigma,\ell\phi}}^\tau\circ Res_{\ell\phi}\circ\rho_{\sigma,\ell\phi}\left( D_{\ell}(k,x)\right)\circ Res_{\ell\phi}^{-1} - P_{\omega_{\sigma,\phi}}^\tau\circ
\omega_{\sigma, \phi}(k,x)\right)(f)
\right\|_{L^2(K,H_\sigma)}=0.
\end{equation}
\begin{proposition}\label{prop con lo de antes}
Let $G$ be a connected, non compact semisimple Lie group and $K$ be a closed subgroup of $G$ such that $(G,K)$ is a Riemannian symmetric pair. Let $K\ltimes \mathop{p}$ be the Cartan motion group associated to $(G,K)$ and let $\tau\in\widehat{K}$. The triple $(G,K, \tau)$ is commutative if and only if $(K\ltimes \mathop{p},K,\tau)$ is commutative. In particular, $(G,K)$ is a strong Gelfand pair if and only if $(K\ltimes \mathop{p},K)$ is a strong Gelfand pair.
\end{proposition}
\begin{proof}
The Plancherel measure of $K\ltimes \mathop{p}$ is concentrated on
the set of generic irreducible unitary representations of $K\ltimes \mathop{p}$. Correspondingly, the Plancherel measure of $G$ is concentrated on
the set of principal series representations.
From \cite[Theorem 3]{Nosotros2}, $(K\ltimes\mathop{p},K,\tau)$ is a commutative triple if and only if $m(\tau,\omega)\leq 1$ for all $\omega$ in the subset of $\widehat{K\ltimes\mathop{p}}$ which has non-zero Plancherel measure. (This result is based on the ideas given in \cite{BJR} for the case of a Gelfand pair.)
So we take arbitrary generic and principal series representations $\omega_{\sigma,\phi}\in\widehat{K\ltimes \mathop{p}}$ and $\rho_{\sigma,\phi}\in\widehat{G}$ as in \eqref{rep generica} and \eqref{rep serie ppal} respectively, for $\sigma\in \widehat{M}$ and $\phi\in\mathfrak{a}^*$. By \eqref{multiplicity}, $m(\tau,\omega_{\sigma,\phi})=m(\tau,\rho_{\sigma,\phi})$ and the conclusion of this proposition follows immediately.
\end{proof}
\begin{theorem}\label{teo dual}
Let $G$ be a connected, non compact semisimple Lie group and $K$ be a maximal compact subgroup of $G$ such that $(G,K)$ is a Riemannian symmetric pair. Let $K\ltimes \mathop{p}$ be the Cartan motion group associated to $(G,K)$ and let $(\tau,V_\tau)\in\widehat{K}$ such that $(K\ltimes\mathop{p},K,\tau)$ is a commutative triple.
Let $\Phi_{\omega_{\sigma,\phi}}^{\tau}:K\ltimes\mathop{p}\longrightarrow\operatorname{End}(V_\tau)$ be the spherical function of type $\tau$ corresponding to $\omega_{\sigma,\phi}$. Then, there exists a family $\{\Phi_{\rho_{\sigma,\ell\phi}}^{\tau}\}_{\ell\in \mathbb{Z}_{\geq 0}}$ where $\Phi_{\rho_{\sigma,\ell\phi}}^{\tau}:G\longrightarrow\operatorname{End}(V_\tau)$ is a spherical function of type $\tau$ corresponding to $\rho_{\sigma,\ell\phi}$ and such that for each $(k,x)\in K\ltimes\mathop{p}$
\begin{equation*}
\lim_{\ell\rightarrow \infty} \Phi_{\rho_{\sigma,\ell\phi}}^{\tau}(D_{\ell}(k,x)) =\Phi_{\omega_{\sigma,\phi}}^{\tau} (k,x),
\end{equation*}
where the convergence is point-wise on $V_\tau$.
\end{theorem}
\begin{proof}
From Proposition \ref{prop con lo de antes}, $(G,K,\tau)$ is also a commutative triple and the proof follows from \eqref{D R paper 2 reescrita 2} and Remark \ref{clases de conj de func esf}.
\end{proof}
\noindent In particular, if we consider $G=\operatorname{SO}_0(n,1)$ the Lorentz group and $K=\operatorname{SO}(n)$, then $K\ltimes \mathop{p}=\operatorname{M}(n)$. The pair $(\operatorname{SO}_0(n,1),\operatorname{SO}(n))$ is a strong Gelfand pair and analogously to the case $(\operatorname{SO}(n+1), \operatorname{SO}(n))$ we have the following result.
\begin{corollary}
Let $(\tau,V_\tau)\in\widehat{\operatorname{SO}(n)}$ and let $\Phi^{\tau,\operatorname{M}(n)}$ be a spherical function of type $\tau$ of the strong Gelfand pair $(\operatorname{M}(n),\operatorname{SO}(n))$.
There exists a sequence $\{\Phi_{\ell}^{\tau, \operatorname{SO}_0(n,1)}\}_{\ell\in\mathbb{Z}_{\geq 0}}$ of spherical functions of type $\tau$ of the strong Gelfand pair $(\operatorname{SO}_0(n,1),\operatorname{SO}(n))$ and a family of contraction maps $\{D_\ell \}_{\ell\in\mathbb{Z}_{\geq 0}}$ between $\operatorname{M}(n)$ and $\operatorname{SO}_0(n,1)$ such that for all $(k,x)\in \operatorname{M}(n)$
\begin{equation*}
\lim_{\ell\rightarrow \infty} \Phi_\ell^{\tau, \operatorname{SO}_0(n,1)}(D_{\ell}(k,x)) =\Phi^{\tau, \operatorname{M}(n)} (k,x),
\end{equation*}
where the convergence is point-wise on $V_\tau$.
\end{corollary}
\section{INTRODUCTION}
Dark matter searches are hybrids of particle physics and astrophysics
in many aspects, and naturally, one can find the infusion of
techniques and research paradigms from one field into the other.
One very important fallout of this hybridization is that the importance
of single purpose dedicated telescopes is being widely recognized.
Baryonic dark matter searches utilize the gravitational microlensing
phenomenon that occurs when a foreground dark object traverses
the line of sight of a target star (in the LMC, SMC and M31).
What is measured in the microlensing experiments is the {\it time
variation} of the brightness of the target stars. Therefore the stars
have to be monitored constantly and that is not unlike tending an
accelerator beam line day and night. (A subtle difference must be that
an observer does not have to stay up during the day.)
This ``new mode" of telescope usage is orthogonal (and complementary) to the
conventional ``time-sharing mode" where ``photon-starved" astronomers build
the largest possible telescopes the funding allows and share them
by allocating each observer a few nights at a time so that many different
astrophysical phenomena are pursued independently and thus incoherently.
This ``new telescope mode" is
expected to become a new tradition in astronomy due to the enormous
success of the current microlensing experiments (See Pratt {\it et al},
Bennett {\it et al} and Lehner {\it et al} in this volume).
It is not hard to imagine the importance of single
purpose dedicated telescopes if one is interested in learning the
{\it dynamics} of the celestial bodies {\it directly} by monitoring their
{\it changes in time}.
The impressive catalog of $\sim 40,000$ variable stars collected by the
MACHO experiment should be only the beginning
of what is to come in the near future.
The bread-and-butter astrodynamics that
single purpose telescopes can bring is boundless.
Here we advocate that one of the most immediate
beneficiaries of the ``new tradition" of dedicated telescopes
in optical and infrared bands
can be the search for low mass planets through microlensing.
The trademark of microlensing signature of baryonic dark matter has been
advocated to be the light curves that are achromatic, symmetric and
non-repetitive. The symmetric light
curve is due to the spherically symmetric gravitational potential of
a point mass. When the mass has a small companion, the distribution
of the gravitational potential changes ever so slightly, but the light curve
can have a substantial deviation from the symmetric shape, even if only
briefly. That is because lensing is a catastrophic phenomenon.
We can capitalize on this catastrophic behavior of
lensing to detect planets as small as the earth.
Microlensing is the only ground-based method capable of detecting earth-mass
planets and providing planetary statistics much needed for the future
space-based planet search programs envisioned by the ExNPS panel.
\begin{figure}[thbp]
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=7.9cm
\epsffile{dm96_fig1.ps}}
\vspace{-0.65cm}
\end{center}
\vspace{-1.2cm}
\caption {The binary lens fits to the GPE: The solid curve is the small
mass fit for $\epsilon = 1 - \epsilon_1 = 0.01$, and the lens parameters
are shown on the right. The dashed curve is the small
separation fit for $\epsilon = \epsilon_1$, whose lens parameters are shown
on the left. $\hat t$ is the transit time
of the Einstein ring radius of the total mass, $|sep|$ is the separation,
$\theta$ is the angle of the source trajectory with respect to the lens
axis where $\epsilon_1$ is the fractional mass to the positive direction
in our convention, and $|u_{\rm min}|$ is the distance of the source
trajectory to the lens axis.}
\label{dm96_fig1}
\end{figure}
\section{The First Planet found through Microlensing?}
The first candidate Jupiter mass planet was ``found" in the first
microlensing candidate event (GPE: Gold Plated Event) of the MACHO experiment.
The microlensing event toward the LMC is achromatic and fits
the symmetric single lens light curve with peak amplification $7.2$
fairly well, and there is no doubt that it is a microlensing event
(Alcock {\it et al.}, 1993). However, there are a couple of ``anomalous data
points" near the peak that are systematically shifted to the left
from the single lens fit curve. One of us found that the `misfit'
can be explained if the lensing object has a companion of the fractional
mass $\epsilon=0.01$ (Rhie, 1994). On the other hand, the assumption of
a binary lens introduces three more fit parameters and that leaves room
for other fit parameter values.
Dominik and Hirshfeld (1994)
found that the data can be fit with $\epsilon = 0.414$.
A binary lens system is characterized by two parameters, namely, the
mass ratio and the separation (transverse distance on the lens plane)
between the two masses, and
a binary lens behaves very closely to a single lens (of
the total mass) when the mass ratio or the separation is small.
In other words, when the GPE is considered as a binary
lensing event, the parameter space is confined to two small regions of
small mass ratio or small separation because of the close proximity to the
single lens behavior. (The binary lenses in the parameter space
of small mass {\it and} small
separation are practically a single lens and can not explain
the ``deviation points".) Our best fit values are $\epsilon=0.01$
and $\epsilon=0.463$ and the corresponding fit curves are shown
in figure~\ref{dm96_fig1}.
One should note that the two fit curves are quite
distinct: The highest amplification point lies on the rising side
of the curve with $\epsilon =0.01$ and on the falling side of the curve with
$\epsilon = 0.463$. In retrospect, one more data point near the peak
might have resolved the dilemma of whether the lensing star has a Jupiter
mass planet or is a dwarf binary star. For this particular event,
we wouldn't ever know because the probability for the lensing object to
lens another star is $\sim R_{E}^2 n \sim 10^{-6}$, where $R_{E}$ is the
Einstein ring radius and $n$ is the surface number density of the LMC.
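As a quick numerical illustration (a sketch using the standard point-lens formula, not part of the original analysis), the quoted peak amplification fixes the impact parameter of the single lens fit: for a point lens $A(u) = (u^2+2)/(u\sqrt{u^2+4})$, which is monotonically decreasing in $u$ and can be inverted by bisection.

```python
def point_lens_amplification(u):
    # Standard point-lens amplification; u = impact parameter in units
    # of the Einstein ring radius.
    return (u * u + 2.0) / (u * (u * u + 4.0) ** 0.5)

def impact_parameter(A_max, lo=1e-6, hi=1.0, iters=100):
    # A(u) decreases monotonically on (0, 1], so bisect on u.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if point_lens_amplification(mid) > A_max:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u_min = impact_parameter(7.2)   # GPE peak amplification gives u_min ~ 0.14
```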
\section {Microlensing Signature of Earth Mass Planets}
If we summarize the lessons from the GPE considered as a binary lensing event:
\ $(1)$ \ When the mass of the companion is small ($\epsilon \ll 1$),
the microlensing light curve is largely that of a single lens.
\ $(2)$ \ However, the small mass companion can produce an unmistakable
signal by modulating the single lens curve substantially.
\ $(3)$ \ The modulation signal lasts only briefly and the unambiguous
detection can be made only through dense sampling of the light curve.
\ In addition, \ $(4)$ \ The separation (transverse distance between
the star and planet on the lens plane) can not be too big because the
planetary signal will be dissociated from the stellar signal.
Therefore, the separation has to be within a certain interval,
and the interval is called the `lensing zone'. Of course, the
``lensing zone" depends on the sensitivity of the detectors, and it
will turn out to be $\approx 0.6 R_{\rm E} - 1.6 R_{\rm E}$ for low
mass planets, where $R_{\rm E}$ is the Einstein ring radius of the total
mass ($\approx$ stellar mass). What should be noted here is that the
``lensing zone" scales with the Einstein ring radius $\propto \sqrt{
\rm stellar\ mass}$. It is a practical ``rule of thumb" that
the ``lensing zone" is given by $\approx a^{-1}R_{\rm E} - aR_{\rm E}$,
where $a (> 1)$ is the fudge factor depending on the mass ratio of the
planet, the detection strategy, etc.
The duration of a microlensing event depends on
many parameters such as the mass, transverse velocity and reduced distance
of the lens, and also the size of the source star.
However, for the current microlensing experiments toward
the Galactic Bulge and the Large Magellanic Cloud, one can estimate
the duration as a function of the mass of the lens by considering
the typical transverse velocity and reduced distance.
The mass dependence goes as $\propto \sqrt{{\rm mass}}$, and
the duration is typically a couple of months for a solar mass object
and a few days for a Jupiter mass brown dwarf.
When the Jupiter mass object is a planet around a star, we can
estimate that the modulation duration is typically a few days.
If we consider the exposure time of a few minutes as is the case in
the current microlensing experiments toward the
Bulge, one can sample a given
modulation due to a Jupiter mass planet about 1400 times in principle.
Of course, the currently active telescopes for microlensing experiments
are survey telescopes and can not afford to follow one event with such a
scrutiny, but it demonstrates that planet search via microlensing is not
an idle idea at all.
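The $\propto \sqrt{\rm mass}$ scaling of the duration is easy to make concrete. The sketch below uses illustrative values that are not from the paper (a Bulge source at $D_s = 8$ kpc, a lens halfway along the line of sight, and a transverse speed of 200 km/s) together with the standard Einstein radius $R_E = \sqrt{(4GM/c^2)\, D_l D_{ls}/D_s}$.

```python
import math

# Assumed typical values (not from the paper): Bulge source at 8 kpc,
# lens halfway along the line of sight, transverse speed 200 km/s.
G, c = 6.674e-11, 2.998e8          # SI units
M_SUN, KPC = 1.989e30, 3.086e19    # kg, m

def einstein_crossing_days(mass_solar, d_l=4 * KPC, d_s=8 * KPC, v=200e3):
    # R_E = sqrt(4GM/c^2 * D_l * D_ls / D_s); crossing time t = R_E / v,
    # so the duration scales as sqrt(mass) at fixed geometry.
    d_ls = d_s - d_l
    r_e = math.sqrt(4 * G * mass_solar * M_SUN / c**2 * d_l * d_ls / d_s)
    return r_e / v / 86400.0
```

With these numbers, a solar mass lens gives a month-scale event and a Jupiter mass ($10^{-3}\,M_\odot$) a day-scale one, consistent with the durations quoted above.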
What one immediately realizes is that earth mass planet search via
microlensing is also a sure possibility. The modulation duration due to
an earth mass planet is a few hours and hence the signal can be sampled
as many as 45 times in principle. The modulation signal typically has
the shape of a {\it wavelet} and one can resolve the peaks and
troughs of the ``modulation wavelet" without ambiguity. Actually,
detecting a Mars mass planet ($0.1 m_{\oplus}$) does not seem to be an impossibility
when estimated along the same line. However, the size of the source
stars begins to affect the signal seriously when the mass of the planet
falls below 10 $m_{\oplus}$ or so. In other words, the size of the
source star (precisely speaking, the size of the star projected onto the
lens plane with the observer at the focus of the projection) becomes
comparable to the variation in the modulation wavelet (of a point source),
and the planetary signal gets smoothed over due to the integration effect.
If the modulation wavelet has comparable troughs and peaks, the planetary
signal can be completely washed out, a case we would like to term
``interference effect". If the peak is dominant over the troughs -- or
the trough is dominant over the peaks -- the
signal will not be averaged to zero, but it gets
broadened and eventually buried below the measurement error, which
we may term ``broadening effect" (meaning smoothing without destructive
interference).
Therefore, it is important to carry out realistic calculations to
decide upon the feasibility of detecting $1 - 10 m_{\oplus}$ planets.
According to our calculations (Bennett and Rhie, 1996),
the probability of detecting the planetary signal of
a $10\, m_{\oplus}$ planet in the lensing zone is $\sim 15\%$,
and for an earth mass planet, the probability drops to
$\sim 2\%$. We expect that the probability will decrease even more
drastically for Mars mass planets, and work is in progress for
the sake of confirmation.
\begin{figure}[thbp]
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=7.9cm
\epsffile{fig_lc_e1m5s08.ps}}
\vspace{-0.6cm}
\hbox{%
\epsfxsize=7.9cm
\epsffile{fig_lc_e1m5s13.ps}}
\end{center}
\caption {The microlensing signature of earth mass planets orbiting
stars of mass $0.3 M_{\odot}$ in the Galactic disk toward the Bulge
with separations $\ell = 0.8$ (upper panel) and $\ell = 1.3$ (lower panel).
The main plots are for a stellar radius of $r_s = 0.003$ while the insets
show the effect of increasing stellar size
($r_s =$ 0.003, 0.006, 0.013 and 0.03). }
\label{dm96_fig2}
\end{figure}
\section {Planetary Binary Lenses}
In order to discuss what we can learn from a given planetary binary
lensing event, it is necessary to know about binary lenses.
When a planetary system falls in the line of sight of a background star,
the planetary system can be considered primarily as a binary lens
because what matters is the planet falling in the lensing zone.
(Of course, more than one planet can fall into the ``lensing zone",
and the signature will be multiple ``modulation wavelets".)
A binary lens simply means that the lens system consists of two bodies
jointly governing the gravitational potential that determines the optical
paths, and the resulting configurational behavior of the images and
their sources are described by the binary lens equation.
If $\omega$ and $z$ denote the source and image
positions in the lens plane as a complex plane, the binary lens equation is
\begin{equation}
\label{eq-bilens}
\omega = z - {1-\epsilon\over \bar z - \bar x_s}
- {\epsilon\over \bar z - \bar x_p} \ ,
\end{equation}
where $\epsilon$ is the fractional mass of the planet, and $x_s$ and $x_p$
are the positions of the star and planet respectively. We work in units
of the Einstein radius, $R_E$, of the total mass $M$. Eq.~(\ref{eq-bilens})
has 3 or 5 solutions ($z$) for a given source location, $\omega$.
If $J_i$ is the Jacobian determinant of (the transformation given by)
the lens equation at the position of the $i$-th image,
the amplification of the image is
given by the size of the image with respect to the size of the source.
Therefore, the microlensing amplification (or total amplification) is given
by
\begin{equation}
\label{eq-Asum}
A = \sum_i |J_i|^{-1} \ .
\end{equation}
The sign of $J$ describes the relative orientation of the area elements,
and hence an image with $J<0$ is a flipped image and an image with
$J>0$ is an unflipped image. $J=0$ not only defines the boundary
between flipped and unflipped images but also marks where the images
brighten enormously because of the inverse relation with the amplification $A$.
This singular (or catastrophic) behavior is at the heart of the
microlensing as one of the most powerful tools in planet search.
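The total amplification above can be evaluated numerically without solving for the individual images: ``inverse ray shooting'' maps a dense grid of image-plane points through the lens equation and estimates $A$ from the ray density in the source plane. The sketch below is illustrative only (grid sizes and lens positions are arbitrary choices, not from the paper).

```python
import numpy as np

def magnification_map(eps, x_s, x_p, half_width=2.0, n_rays=2000, n_bins=200):
    # Shoot a uniform grid of image-plane rays z through the binary lens
    # equation  w = z - (1-eps)/conj(z - x_s) - eps/conj(z - x_p),
    # then bin the landing points w; the ray density in a source-plane
    # bin, relative to the unlensed density, estimates the total
    # amplification A = sum_i |J_i|^{-1} for sources in that bin.
    x = np.linspace(-half_width, half_width, n_rays)
    z = x[None, :] + 1j * x[:, None]
    w = z - (1 - eps) / np.conj(z - x_s) - eps / np.conj(z - x_p)
    w = w[np.isfinite(w)]
    H, xe, ye = np.histogram2d(w.real, w.imag, bins=n_bins,
                               range=[[-1, 1], [-1, 1]])
    area_ratio = (x[1] - x[0]) ** 2 / ((xe[1] - xe[0]) * (ye[1] - ye[0]))
    return H * area_ratio
```

In the limit $\epsilon \to 0$ the map reproduces the point-lens amplification $A(u) = (u^2+2)/(u\sqrt{u^2+4})$, which makes a convenient sanity check.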
The source positions that produce images falling on the critical curve
($J=0$) are called the {\it caustic curve}, and the caustic curve of the
binary lenses changes its shape, size, position, and topology by joining
and splitting as the lens parameter
($\epsilon$ and $l \equiv |x_s - x_p|$) changes.
The caustic of a binary lens consists of one, two, or three
{\it closed cuspy loops}, and this geometric diversity needs to be
understood somewhat rigorously because that is where the planetary
signature lies.
For a planetary binary lens ($\epsilon \ll 1$), one caustic (almost a point)
is always near the star (just as in a single lens -- so, we might as well
call ``stellar caustic"), and hence the planetary signature ``modulation
wavelets" are determined by the distribution of the ``planetary caustics".
When the separation $l > 1$, there is one ``planetary
caustic" (of diamond shape) near the planet, and the
``modulation wavelets" are ``peak-dominated". When the separation
is $l < 1$, there are two triangular shape caustics (reflection symmetric),
and the ``modulation wavelets" tend to be ``trough-dominated".
(See the plates in Bennett and Rhie, 1996.) \
\section {Conclusion}
So, what can we learn about the planet once a planetary microlensing
event is detected? As we have discussed before, the mass ratio of the
planet with respect to the stellar mass can be determined without ambiguity.
The projected distance between the star and the planet can be determined
(in units of the Einstein ring radius) from the time difference between
the peak of the stellar light curve and the appearance of the planetary
signal. The possible two-fold ambiguity -- the separation $l$ or
$l^{-1}$ ($l > 1$) -- can be resolved because of the distinctive nature
of the modulation wavelets for $l > 1$ and $l < 1$.
More than one planet can fall into the ``lensing zone", and the lens
system may have to be considered as an $n$-point lens system where
$n > 2$. However, the gravitational interference between the planets
can be ignored most of the time and hence the signature of two planets,
for example, in the ``lensing zone" will be simply two modulation wavelets
on the symmetric stellar light curve. (The interference becomes
significant when the separation of the planets is of the order of the Einstein
ring radius of the total mass of the planets.)
The only feasible target site for microlensing planet search is the Galactic
Bulge not only because the other galaxies have low microlensing probabilities
(the detection rate by the MACHO experiment toward
the LMC is $3-4$ per year) but also because one is looking for lensing
events by normal stars (main sequence stars) that may host planets.
With a $2$m survey telescope, one can detect
$\approx 250$ microlensing events, and a network of $1.5-2$ m follow-up
telescopes in Australia, Chile, and South Africa (and also the
South Pole) will be able to monitor the events with sufficient precision.
With photometric precision $1\%$, the detection probability of an earth
mass planet in the lensing zone is $\approx 2\%$ and hence one may be
able to detect a couple of earth mass events per year. The frequency
of planets and especially that of earth mass planets constitutes a totally
unknown territory. It is an exciting possibility that one can detect
planets {\it unambiguously} through microlensing from earth mass to
Jupiter mass.
The Galactic Bulge is not visible during the austral summer (Nov., Dec.,
Jan.), and the telescope network can be pointed toward the satellite
galaxies for dark matter search or possible signatures of planets
orbiting the stars in the satellite galaxies.
\section{Introduction}
A matrix is a rook matrix if each entry is $0$ or $1$ and each row and column have at most one $1$. A rook matrix $A$ is {\it planar} or {\it order preserving} if the matrix obtained from $A$ by deleting all the zero rows and all the zero columns is an identity matrix.
The structure and representation theory of the rook monoid, consisting of all rook matrices, are intensively studied \cite{M2, S}. Herbig gives a structure and representation theory of the planar rook monoid \cite{H}. The planar upper triangular rook monoid $B_n$ consists of planar upper triangular rook matrices of size $n$.
It is natural to ask: What are the representation and structure properties of the planar upper triangular rook monoid? More specifically, how do we construct interesting modules over $B_n$, and what do irreducible $B_n$-modules look like? What is the order of $B_n$ and what are the dimensions of the modules of interest? How are the order and the dimensions related to combinatorics? What are the generators and defining relations of $B_n$? These questions are closely related to the theory of linear algebraic monoids, since it was made clear in \cite{LLC14, R1} that we are here dealing with the most familiar interesting case of planar upper triangular Renner monoids of reductive monoids. For more information on Renner monoids, see \cite{LR03, LLC14, P1, R2, S}.
In this paper we answer the questions above, and our discussion goes a little deeper, showing that the $B_n$-module properties of $V$ are dramatically different from those of $V$ as a module over the planar rook monoid.
In Section 2 after gathering basic definitions and concepts related to planar upper triangular rook monoids $B_n$, we give a new interpretation of $B_n$ using generalized reduced echelon matrices. We then calculate in Section 3 the order of $B_n$ in two different ways and show that it is a Catalan number.
Section 4 is devoted to the investigation of $B_n$-modules over a field $F$ of characteristic $0$. Let $V_k$ be a vector space over $F$ generated by a set of elements $v_S$ indexed by the $k$-subsets of $\n=\{1, ..., n\}$. Then $V_k$ is a $B_n$-module under the action (\ref{moddef}). We are particularly interested in $B_n$-submodules of $V_k$ and of $V=\bigoplus_{k=0}^n V_k$. We find that
every nonzero submodule of $V$ is completely decomposable, and that a submodule of $V$ is indecomposable if and only if it lies in some $V_k$. Furthermore, we show that every submodule of $V$ is cyclic, and that each irreducible submodule of $V$ is $1$-dimensional and is contained in all nonzero submodules of some $V_k$. We also show that any two different submodules of $V$ are not isomorphic. Moreover, we give a formula for calculating the dimension of every submodule of $V$ using the inclusion-exclusion principle. In particular, we provide a recursive formula for calculating the dimensions of the modules generated by a single basis vector, and find that some of these dimensions are Catalan numbers again, connecting to combinatorics. Viewed as $B_t$-modules with $t<n$, we are able to decompose some $B_n$-submodules of $V_k$ into indecomposable $B_t$-submodules.
Section 5 describes the generators and defining relations of $B_n$.
{\bf Acknowledgement} {We would like to thank Dr. M. Can for useful email communications and Dr. R. Koo for valuable comments.}
\section{Preliminaries}
\begin{definition}
An {\em injective partial map} $f$ of
$\mathbf{n}$ is a one-to-one map of a subset $D(f)$
of $\mathbf{n}$ onto a subset $R(f)$ of $\mathbf{n}$ where $D(f)$ is
the domain of $f$ and $R(f)$ is the range of $f$.
\end{definition}
We agree that there is a map with empty domain and range and call it 0 map. We can write an injective partial map $f$ of $\mathbf{n}$ in 2-line notation by writing the numbers $s_1,\dots, s_k$ in the top line if $D(f)=\{s_1,\dots, s_k\}$, and then below each number we write its image. Equivalently, we can represent such a map by an $n\times n$ rook matrix, where the entry in the
$i$th row and the $j$th column is 1 if the map takes $j$ to $i$, and is 0 otherwise.
For example, the map $\sigma$ given below is an injective partial map of $\mathbf{5}$,
{
\begin{eqnarray*}
\sigma&=& \left(
\begin{array}{cccc}
1 & 2 & 3 & 5 \\
1 & 2 & 4 & 5 \\
\end{array}
\right)
\\
&=& \left(
\begin{array}{ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{array}
\right)~.
\end{eqnarray*}
}
\begin{definition} The {\em rook monoid} $R_n$ is the monoid of
injective partial maps from $\mathbf{n}$ to $\mathbf{n}$, whose operation is the composition of partial maps and the identity element is the identity map of $\n$.
\end{definition}
Since elements of $R_n$ are not necessarily invertible, $R_n$ is not a group. The map with empty domain and empty range behaves as
a zero element. In matrix form, the composition of $R_n$ is consistent with the usual matrix multiplication. Here is an example: for $g=\left(
\begin{array}{ccc}
2 & 3 & 4 \\
1 & 5 & 2 \\
\end{array}
\right), ~
f=\left(
\begin{array}{cccc}
1 & 3 & 4 & 5 \\
1 & 2 & 3 & 4 \\
\end{array}
\right)\in R_5,
$ we have
$$
gf=
\left(
\begin{array}{ccc}
2 & 3 & 4 \\
1 & 5 & 2 \\
\end{array}
\right)\circ
\left(
\begin{array}{cccc}
1 & 3 & 4 & 5 \\
1 & 2 & 3 & 4 \\
\end{array}
\right)
=\left(
\begin{array}{ccc}
3 &4& 5 \\
1 &5& 2 \\
\end{array}
\right)~.
$$
The corresponding matrix form of the operation reads as
$$
gf= \left(
\begin{array}{ccccc}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
\end{array}
\right) \left(
\begin{array}{ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
\end{array}
\right)= \left(
\begin{array}{ccccc}
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
\end{array}
\right)~. $$
An injective partial map from $\mathbf{n}$ to $\mathbf{n}$ is {\em order preserving} if whenever $a<b$ in the domain of the map, then $f(a)<f(b)$. An injective partial map $f$ is order preserving if and only if the matrix obtained from the matrix form of $f$ by deleting all the zero rows and all the zero columns is an identity matrix; equivalently, the graph obtained from the 2-line notation of $f$ by joining each $a$ in the domain to its image $f(a)$ is a planar graph, which justifies the name in the following definition.
\begin{definition}
The {\em planar rook monoid}, denoted by $PR_n$, is the monoid of {\em order preserving} injective partial maps from $\mathbf{n}$
to $\mathbf{n}$.
\end{definition}
\noindent Obviously, $PR_n$ is a submonoid of $R_n$. The structure and representation theory of the planar rook monoid are studied in Herbig \cite{H}. In particular, $V_k$ (defined in Section 4) is an irreducible $PR_n$-module.
The next definition will give a different interpretation of an order preserving injective partial map.
\begin{definition}
A rectangular matrix is a {\em generalized (row and column) reduced echelon matrix if}
\vspace{-3mm}
\begin{enumerate}[{\rm(1)}]
\item Each leading entry of a row is $1$ and is in a column to the right of the leading entry of the row above it.
\vspace{-3mm}
\item Each leading entry of a column is $1$ and is in a row below the leading entry of the column to the left of it.
\vspace{-3mm}
\item Each leading 1 is the only nonzero entry in its column and its row.
\end{enumerate}
\end{definition}
\noindent
This definition does not require that all nonzero rows are above any zero rows nor all nonzero columns are to the left of any zero columns. Since the row and column reduced echelon form of a matrix is equivalent to the normal form of the matrix, we can consider a generalized reduced echelon matrix to be a generalization of the normal form of a matrix.
An injective partial map is order preserving if and only if its matrix form is a generalized reduced echelon matrix. Thus, the set of all the generalized reduced echelon matrices of size $n$ is a monoid with respect to the multiplication of matrices, and the order of this monoid is $\binom{2n}{n}$, since the order of $PR_n$ is
\[
|PR_n| = \binom{2n}{n}~.
\]
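This count can be sanity-checked in a few lines (an illustrative sketch): an order-preserving partial injection is determined by its domain and range alone, so $|PR_n|=\sum_k \binom{n}{k}^2$, which equals $\binom{2n}{n}$ by the Vandermonde identity.

```python
from math import comb

def planar_rook_count(n):
    # An order-preserving partial injection is determined by its domain
    # and range alone (the i-th smallest domain element must map to the
    # i-th smallest range element), so |PR_n| = sum_k C(n,k)^2.
    return sum(comb(n, k) ** 2 for k in range(n + 1))
```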
An injective partial map is called {\em order decreasing} if for all $a$ in the domain of the map, we have $f(a)\leq a$. Equivalently, an injective partial map is order decreasing if and only if its matrix form is an upper triangular rook matrix, which motivates the name in the following definition.
\begin{definition}
The {\em planar upper triangular rook monoid}, denoted by $B_n$, is the monoid of order preserving, order decreasing injective partial maps from $\mathbf{n}$ to $\mathbf{n}$.
\end{definition}
An injective partial map is in $B_n$ if and only if its matrix form is an upper triangular generalized reduced echelon matrix. In the previous example, we have $f\in B_5$, but $g\notin B_5$.
\section{Order of $B_n$}
We first show that the order of $B_n$ is a Catalan number, which is defined by
$c_0=c_1=1$ and $c_n=\sum_{i=0}^{n-1}c_ic_{n-1-i}$ for
$n>1$ (see \cite{St}).
\begin{proposition}\label{bnCatalan}
Let $n\ge 0$. Then the order of the planar upper triangular rook monoid $B_n$
is the Catalan number $c_{n+1}$, that is, $b_n=c_{n+1}$.
\end{proposition}
\begin{proof}
To prove the proposition, we set up a one-to-one correspondence
between the set $B_n$ and the set $C_{n+1}$ of all sequences
$a_1,a_2,\dots,a_{2n+2}$ of $n+1$ copies of 1's and $n+1$ copies of $-1$'s, such that
$a_1+a_2+\dots+a_l\geq 0$ for all $1\leq l\leq 2n+2$.
Let $f$ be an element of $B_n$
with domain $S=\{s_1<s_2<\dots<s_k\}$ and range
$T=\{t_1<t_2<\dots<t_k\}$. Define $s_1^\p=s_1, \,s_i^\p=s_i-s_{i-1}$
for $2\leq i\leq k$, and $s_{k+1}^\p=n+1-s_k$. Also define
$t_1^\p=t_1, \,t_i^\p=t_i-t_{i-1}$ for $2\leq i\leq k$, and
$t_{k+1}^\p=n+1-t_k$. Let $f^\p$ be the sequence
\begin{equation}\label{sequence}
\underbrace{1,\,\dots,\,1}_{s_1^\p},\, \underbrace{-1,\,\dots,\,-1}_{t_1^\p},\,
\underbrace{1,\,\dots,\,1}_{s_2^\p}, \underbrace{-1,\dots,-1}_{t_2^\p}, \,
\dots, \,
\underbrace{1,\,\dots,\,1}_{s_{k+1}^\p},\, \underbrace{-1,\,\dots,\,-1}_{t_{k+1}^\p} ~.
\end{equation}
Now we prove $f^\p\in C_{n+1}$. By definition of $B_n$, we have
$t_i\leq s_i$ for all $1\leq i\leq k$. Thus
$s_1^\p-t_1^\p=s_1-t_1\geq 0, \,(s_1^\p+s_2^\p+\dots+s_i^\p)
-(t_1^\p+t_2^\p+\dots+t_i^\p)=s_i-t_i\geq 0$ for $2\leq i\leq k$ and
$(s_1^\p+s_2^\p+\dots+s_{k+1}^\p)
-(t_1^\p+t_2^\p+\dots+t_{k+1}^\p)=(n+1)-(n+1)\geq 0$. Denote the
partial sum of the first $l$ terms in (\ref{sequence}) by $p_l$.
Then the argument above shows that $p_l\geq 0$ for
$l=s_1^\p+t_1^\p+s_2^\p+t_2^\p+\dots+s_h^\p+t_h^\p$, where $1\leq h\leq k+1$, and this
implies $p_l\geq 0$ for $1\leq l\leq 2n+2$ by the format of
(\ref{sequence}). Hence, $f^\p\in C_{n+1}$.
We next show that the mapping from $B_n$ to $C_{n+1}$ defined by
\begin{eqnarray*}
\sigma:B_n&\rightarrow& C_{n+1} \\
f &\mapsto& f^\p
\end{eqnarray*}
is bijective. From the definition of the map $\sigma$, it is straightforward to
see that $\sigma$ is injective. Now we prove the onto property of
the map $\sigma$. For arbitrary element
$f^\p=\{a_1,a_2,\dots,a_{2n+2}\}$ of $C_{n+1}$, the condition
$a_1+a_2+\dots+a_l\geq 0$ for all $1\leq l\leq 2n+2$ implies $a_1=1$
and $a_{2n+2}=-1$. As in (\ref{sequence}), let $s_1^\p$ be the number
of the first consecutive 1's in $f^\p$, let $t_1^\p$ be the number of
the consecutive $-1$'s that follow, and define similarly $s_i^\p$ and
$t_i^\p$ as before for $2\leq i\leq k+1$. Let
$s_1=s_1^\p,\, t_1=t_1^\p,\, \,s_i=s_1^\p+s_2^\p+\dots+s_i^\p$, and $t_i=t_1^\p+t_2^\p+\dots+t_i^\p$ for $2\leq i\leq k$. Then the condition $a_1+a_2+\dots+a_l\geq 0$ for all
$1\leq l\leq 2n+2$ implies $t_i\leq s_i$ for $1\leq i\leq k$.
Define $f$ to be the mapping $f(s_i) = t_i$ with domain $s_1,s_2,\dots,s_k$
and range $t_1,t_2,\dots,t_k$. Then
$f\in B_n$ and $\sigma(f)=f^\p$. Hence $\sigma$
is a one-to-one correspondence between the set $B_n$ and the set
$C_{n+1}$. It is well known that the order of $C_{n+1}$ is the
Catalan number $c_{n+1}$, so the proposition is proved.
\end{proof}
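The Catalan count can also be cross-checked by machine (an illustrative sketch, not part of the proof): an element of $B_n$ is determined by its domain $S$ and range $T$, which are equal-size subsets with $T\leq S$ componentwise, so brute-force enumeration of such pairs recovers $b_n = c_{n+1}$ for small $n$.

```python
from itertools import combinations
from math import comb

def planar_upper_triangular_rook_count(n):
    # f in B_n is determined by its domain S and range T: |S| = |T| and,
    # written in increasing order, t_i <= s_i for every i (order
    # preserving sends the i-th smallest of S to the i-th smallest of T;
    # order decreasing then forces t_i <= s_i).
    count = 0
    for k in range(n + 1):
        for S in combinations(range(1, n + 1), k):
            for T in combinations(range(1, n + 1), k):
                if all(t <= s for s, t in zip(S, T)):
                    count += 1
    return count

def catalan(m):
    return comb(2 * m, m) // (m + 1)
```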
\begin{remark}
{\rm
Clearly $B_n$ is a submonoid of the monoid $\mathscr B_n$ consisting of all order decreasing (not necessarily planar) injective partial maps. See \cite{BRR} for the order of $\mathscr B_n$.
}
\end{remark}
The following corollary is immediate.
\begin{corollary}
The number of upper triangular generalized reduced echelon matrices of size $n$ is the Catalan number $c_{n+1}$.
\end{corollary}
Our next proposition provides a recursive formula for calculating the order $b_n$ of
\begin{equation*}\label{bn}
B_n=\{f\in R_n \mid f(j)\leq j \;\mathrm{for}\; j\in D(f);\, f(i)<f(j)
\;\mathrm{for}\; i,j\in D(f) \;\mathrm{and}\; i<j\}~.
\end{equation*}
For $0\leq p,\,q\leq n-1$, let $b_{p,\,q}=|B_{p,\,q}|$ where
\[
B_{p,\,q}=\{f\in B_n \mid D(f)\subseteq \{n-q,\, \dots,\,n\} \text{ and }R(f)\subseteq \{n-p,\, \dots,\, n\}\}~.
\]
\begin{proposition}\label{bnre} Let $n\ge 1$. Then
\vspace{-2mm}
\begin{enumerate}[{\rm(1)}]
\item $b_0=1$,\, $b_1=2$~.
\vspace{-2mm}
\item $b_{p,\,0}=p+2$\hspace{3.5cm} for $0\le p\le n-1$~.
\vspace{-2mm}
\item $b_{p,\,p}=b_{p+1}$\quad\quad\quad\quad\quad\quad\quad\quad\quad\, for $0\le p\le n-1$~.
\vspace{-2mm}
\item $b_n = 2b_{n-1} + 1 + \sum_{q=0}^{n-3}b_{n-2,\,q}$\quad\, for $n\geq 2$~.
\vspace{-2mm}
\item $b_{p,\,q}=1+\sum_{r=0}^q b_{p-1,\,r}$\quad\quad\quad\quad\, for $1\leq q<p\leq n-1$~.
\end{enumerate}
\end{proposition}
\begin{proof}
Parts (1), (2), and (3) are clear. To prove (4), divide the elements of $B_n$ into two groups: the elements whose ranges contain $1$, and those whose ranges do not contain $1$. Part (4) follows from the following three identities:
\vspace{-2mm}
\begin{align*}
b_{n-1}&=|\{f\in B_n \mid f(1)=1\}| = |\{f\in B_n \mid 1\notin R(f)\}|~. \\
b_{n-2,\,q}&=|\{f\in B_n\mid f(n-1-q)=1\}|\quad \text{for}\quad 0\leq q\leq n-3 ~. \\
1 &= |\{f\in B_n\mid f(n)=1\}|~.
\end{align*}
Similarly, for part (5), we divide the elements of $B_{p, q}$ into two groups: the elements whose ranges contain $n-p$, and those whose ranges do not contain $n-p$. Part (5) follows from the following three identities:
\begin{align*}
b_{p-1,\,q}&=|\{f\in B_{p,q}\mid n-p\notin R(f)\}|~.\\
b_{p-1,\,r}&=|\{f\in B_{p,q}\mid f(n-1-r)=n-p\}|\quad\text{for}\quad 0\leq r\leq q-1~.\\
1 &=|\{f\in B_{p,q}\mid f(n)=n-p\}|~.
\end{align*}
\end{proof}
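The recursion is straightforward to implement with memoization, and comparing it against the Catalan values $b_n = c_{n+1}$ for small $n$ gives a useful sanity check (illustrative code, not part of the proof).

```python
from functools import lru_cache

@lru_cache(None)
def b(n):
    # Parts (1) and (4) of the proposition.
    if n == 0:
        return 1
    if n == 1:
        return 2
    return 2 * b(n - 1) + 1 + sum(bpq(n - 2, q) for q in range(n - 2))

@lru_cache(None)
def bpq(p, q):
    # Parts (2), (3), and (5): b_{p,0} = p+2, b_{p,p} = b_{p+1},
    # and b_{p,q} = 1 + sum_{r=0}^{q} b_{p-1,r} for 0 < q < p.
    if q == 0:
        return p + 2
    if q == p:
        return b(p + 1)
    return 1 + sum(bpq(p - 1, r) for r in range(q + 1))
```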
\section{Modules for $B_n$}
A vector space $V$ over a field $F$ of characteristic $0$ is called a $B_n$-module if $B_n$ acts on $V$ satisfying, for all $f, f_1, f_2\in B_n$, $u, v\in V$, and $\lambda\in F$,
\begin{align*}
f\cdot (u+v) &= f\cdot u + f\cdot v, & f_1\cdot (f_2\cdot u) &= (f_1f_2)\cdot u, \\
f\cdot (\lambda u) &= \lambda (f\cdot u), & 1\cdot u &= u.
\end{align*}
From now on, $V$ denotes a vector space with a basis
$
\mathcal{B} = \{v_S\mid S\subseteq\n\}
$
indexed by all the subsets of $\n$. Then $V=\bigoplus_{S\subseteq\n} Fv_S$ as subspaces is a $B_n$-module with respect to the following action: for $f\in B_n$ and $S\subseteq\n$,
\begin{equation}\label{moddef}
f\cdot v_S=
\left\{
\begin{array}{ll}
v_{S^\p}, & \hbox{if\; $S\subseteq D(f)$} \\
0, & \hbox{otherwise,}
\end{array}
\right.
\end{equation}
where $S^\p=\{f(s_1),\dots,f(s_k)\}$ if $S=\{s_1,\dots,s_k\}$. For $0\le k \le n$, let
$$
V_k=\mathrm{span}\{v_S\in\mathcal{B}\mid k=|S|\}.
$$
Then $V=\bigoplus^n_{k=0}V_k$ is a direct sum of $B_n$-submodules.
Every module under consideration is a $B_n$-module over $F$, unless otherwise stated.
Our intent below is to describe the $B_n$-module structure of $V_k$ and $V$. To this end, we
define a partial order on the power set of $\n$. For any $k$-subsets
$S=\{s_1<\dots<s_k\}$ and $T=\{t_1<\dots<t_k\}$ of $\n$, define
\[
T\leq S \quad \Leftrightarrow \quad t_i\leq s_i \quad\text{for all}\quad i\in \mathbf{k}~,
\]
and a $k$-subset is not comparable to any $l$-subset if $k\ne l$.
For $v\in V$ we use $B_nv$ to denote the cyclic submodule of $V$ generated by $v$. If $S$ is a $k$-subset of $\n$, then $B_nv_S$ is a submodule of $V_k$. Indeed, for any $f\in B_n$ if $S\subseteq D(f)$ then $f(S)$ is a $k$-subset, so $f\cdot v_S = v_{f(S)}\in V_k$; if $S$ is not a subset of $D(f)$ then $f\cdot v_S = 0\in V_k$. Some further properties of the module $B_nv_S$ are described in the next result.
\begin{lemma}\label{mod1} Let $S,T$ be $k$-subsets of $\n$.
{\rm(1)} $B_nv_S = \bigoplus_{S'\subseteq\n,\, S'\leq S}Fv_{S'}$ as vector spaces. In particular, $V_k=B_nv_{\{n-k+1,\, \ldots,\, n\}}$.
{\rm(2)} $B_nv_T\subseteq B_nv_S$ if and only if $T\leq S$.
{\rm(3)} $B_nv_S\cap B_nv_T = B_nv_{S\wedge T}$, where $S\wedge T$ is the greatest lower bound of $S$ and $T$.
\end{lemma}
\begin{proof}
To prove (1), notice that $S'\leq S$ if and only if $S=D(f)$ and $S'=R(f)$ for a unique $f\in B_n$. Let $S'\le S$. Then $v_{S'} = f\cdot v_S\in B_nv_S$, so $\bigoplus_{S'\subseteq\n,\, S'\leq S}Fv_{S'}$ is contained in $B_nv_S$. Conversely, let $x=g\cdot v_S \ne 0$ for some $g\in B_n$. Then $S\subseteq D(g)$ and $g(S)\le S$, and hence $x=v_{g(S)}\in \bigoplus_{S'\subseteq\n,\, S'\leq S}Fv_{S'}$. The second part of (1) is now clear.
The proof of (2) follows from (1) since $\{T'\mid T'\subseteq\n,\, T'\leq T\}\subseteq\{S'\mid S'\subseteq\n,\, S'\leq S\}$ if and only if $T\le S$.
To prove (3) let $g\cdot v_S = h\cdot v_T\ne 0$ for some $g, h\in B_n$. Then $g(S)=h(T)$. Suppose
\[
S = \{s_1< \ldots < s_k\}\quad\text{and}\quad T = \{t_1< \ldots < t_k\}~.
\]
Then
$
S \wedge T =\{\min(s_1, t_1),\, \ldots,\, \min(s_k, t_k)\},
$
and $g(s_i)=h(t_i)$. We define $f\in B_n$ with $D(f)=S \wedge T $ and $R(f)=g(S)$ by
$
f(\min(s_i,\, t_i)) = g(s_i),
$
where $1\le i\le k$. Then $g\cdot v_S = f\cdot v_{S\wedge T} \in B_n v_{S\wedge T}$, and hence $B_nv_S\cap B_nv_T \subseteq B_nv_{S\wedge T}$.
Conversely, for any given $0\ne f\cdot v_{S\wedge T} \in B_n v_{S\wedge T}$ define $g(s_i)=h(t_i)= f(\min(s_i,\, t_i))$ for $1\le i\le k$. Then $f\cdot v_{S\wedge T} = g\cdot v_S=h\cdot v_T\in B_nv_S\cap B_nv_T$. The proof of (3) is complete.
\end{proof}
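Since the order $\le$ and the meet $S\wedge T$ are completely explicit, part (3) of the lemma can be checked by brute force for small $n$. The following Python sketch is an illustrative aside, not part of the formal development; it encodes $k$-subsets as sorted tuples and compares the basis $\{v_T\mid T\le S\}$ of $B_nv_S$ from part (1) on both sides of part (3):

```python
from itertools import combinations

def leq(T, S):
    """T <= S in the componentwise partial order on k-subsets (sorted tuples)."""
    return len(T) == len(S) and all(t <= s for t, s in zip(T, S))

def down_set(S, n):
    """All k-subsets T of {1,...,n} with T <= S; a basis of B_n v_S by part (1)."""
    k = len(S)
    return {T for T in combinations(range(1, n + 1), k) if leq(T, S)}

def meet(S, T):
    """Greatest lower bound S ∧ T: the componentwise minimum."""
    return tuple(min(s, t) for s, t in zip(S, T))

# Part (3): B_n v_S ∩ B_n v_T = B_n v_{S ∧ T}, checked for all 3-subsets, n = 7.
n, k = 7, 3
subsets = list(combinations(range(1, n + 1), k))
for S in subsets:
    for T in subsets:
        assert down_set(S, n) & down_set(T, n) == down_set(meet(S, T), n)
```

The componentwise minimum of two strictly increasing tuples is again strictly increasing, so `meet` always returns a valid $k$-subset.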
Let $v=\sum_{S\subseteq \n}\lambda_Sv_S$, $\lambda_S\in F$, be a vector of $V$. The {\em support} of $v$ is defined to be
\[
{\rm supp}(v) = \{S\subseteq\n\mid \lambda_S \ne 0\}~.
\]
\begin{definition}\label{redGen}
A vector of the form $w=\sum_{S\in {\rm supp}(w)}v_S\in V$ is called a {\em reduced generator} of a submodule $W$ of $V$ if $W=B_nw$ and $W$ cannot be generated by any other vector whose support contains fewer elements than {\rm supp(}$w${\rm )}. We agree that $0$ is the reduced generator of the zero submodule.
\end{definition}
The next proposition gives some properties of submodules of $V$.
\begin{proposition}\label{cyclic} Let $v=\sum_{S\in\,{\rm supp(}v{\rm )}}\lambda_Sv_S\in V$.
{\rm (1)} If $S$ is in {\rm supp}$(v)$, then $v_S\in B_nv$~.
{\rm (2)} $B_nv = \bigoplus_{T\in \mathcal{P}(v)} Fv_T$ as subspaces, where $\mathcal{P}(v) = \bigcup_{S\in {\rm supp}(v)}\{T\subseteq\n\mid T\le S\}$.
{\rm (3)} Every submodule of $V$ is cyclic and contains a unique reduced generator.
\end{proposition}
\begin{proof} To prove (1) let $\min \big\{\,|S| \,\big|\, S \in {\rm supp}(v)\big\}=r$. Then there exists an $r$-subset $T=\{t_1<\cdots < t_r\}\subseteq\n$ such that $T\in {\rm supp}(v)$; if $r=0$, then $T=\emptyset$. Let $f\in B_n$ such that $D(f)=R(f)=T$. By the choice of $r$, for every $S\in{\rm supp}(v)$ with $S\neq T$, there is at least one $s\in S$ such that $s\notin T$, so $f\cdot v_S=0$. Hence
$$
f\cdot v=f\cdot \sum_{S\in\,{\rm supp(}v{\rm )}}\lambda_Sv_S=\sum_{S\in\,{\rm supp(}v{\rm )}}\lambda_S(f\cdot v_S)=\lambda_Tv_T~.
$$
Thus $v_T\in B_nv$ since $\lambda_T\neq 0$. It is easily seen that
$$
\sum_{S\in\,{\rm supp(}v{\rm )}\atop |S|>r}\lambda_Sv_S=v-\sum_{S\in\,{\rm supp(}v{\rm )}\atop |S|=r}\lambda_Sv_S\in B_nv~.
$$
Applying the above procedure to
$
\sum_{S\in\,{\rm supp(}v{\rm )},\,|S|>r}\lambda_Sv_S
$
and iterating this procedure, if needed, we get $v_S\in B_nv$ for all $S\in \text{supp}(v)$. The proof of (1) is complete.
From (1) and Lemma \ref{mod1} (1), we have
\begin{eqnarray*}
B_nv&=& \sum_{S \in\text{\rm supp}(v)}B_nv_S \\
&=&\sum_{S \in\text{\rm supp}(v)}\mathrm{span}\,\{v_T\in\mathcal{B}\mid T\leq S\}\\
&=& \bigoplus_{T\in \mathcal{P}(v)} Fv_T, \quad\text{as subspaces}.
\end{eqnarray*}
This completes the proof of (2).
We now prove (3). It is trivial for $W=\{0\}$. Let $W$ be a nonzero submodule of $V$. We claim that $W$ has a basis $\{v_S\in\mathcal{B}\mid S\in \mathcal{P}\}$ for some subset $\mathcal{P}$ of the power set of $\n$. Indeed, suppose $\mathcal{B}_1$ is a basis of $W$ and write every element of $\mathcal B_1$ as a linear combination of basis vectors in $\mathcal{B}=\{v_S\mid S\subseteq \n\}$. Let $\mathcal P$ be the union of the supports of the elements of $\mathcal B_1$. By (1) the set $\{v_S\in\mathcal{B}\mid S\in \mathcal{P}\}$ is a subset of $W$, and hence a basis of $W$ since it is linearly independent and spans $W$. Let
$
w=\sum_{S\in\mathcal{P}} v_S.
$
By (1) again, $W$ is generated by $w$, and hence $W$ is cyclic.
We now show how to deduce a reduced generator of $W$ from $w$. Indeed, if there are $S,\,T\in{\rm supp}(w)$ with $T\le S$ and $T\ne S$, then we can remove the term $v_T$ from $w$, and by Lemma \ref{mod1} (1) the sum of the remaining terms still generates $W$. Repeat this process until we obtain the set
\[
{\rm Red}(w) = \{S\mid S \;\text{is maximal in supp}(w)\},
\]
and then we define the corresponding generator $w_{\rm red}$ of $W$ by
\[
w_{\rm red}=\sum_{S\in{\rm Red}(w)}v_S~.
\]
We claim that $w_{\rm red}$ is a reduced generator of $W$. Let $v = \sum_{S\in\text{supp}(v)}\lambda_Sv_S$ be another generator of $W$. From Definition \ref{redGen} it suffices to show that $|{\rm supp(}v{\rm )}| \ge |{\rm Red(}w{\rm )}|$.
From (2) we find $W = \bigoplus_{T\in \mathcal{P}(v)} Fv_T = \bigoplus_{T\in \mathcal{P}(w)} Fv_T$ where $\mathcal{P}(v)$ and $\mathcal{P}(w)$ are as in (2),
and hence
$
\mathcal{P}(v) = \mathcal{P}(w).
$
Define
\begin{equation}\label{redv}
{\rm Red}(v) = \{S\mid S \text{ is maximal in supp}(v)\}.
\end{equation}
Thus, ${\rm Red}(v) = \{S\mid S \text{ is maximal in }\mathcal{P}(v)\}$ and
${\rm Red}(w) = \{S\mid S \text{ is maximal in }\mathcal{P}(w)\}$.
So, Red($v$) = Red($w$) and $|{\rm supp(}v{\rm )}| \ge |{\rm Red(}v{\rm )}| = |{\rm Red(}w{\rm )}|$, showing that $w_{\rm red}$ is reduced.
Suppose that $v = \sum_{S\in{\rm supp}(v)}v_S$ is another reduced generator of $W$. By the definition of reduced generators we know $|{\rm supp(}v{\rm )}| = |{\rm Red(}w{\rm )}|$. Hence $|{\rm supp(}v{\rm )}| = |{\rm Red(}v{\rm )}|$ since Red($v$) = Red($w$). It follows that ${\rm supp(}v{\rm )} = {\rm Red(}v{\rm )}$. Let $v_{\rm red}=\sum_{S\in{\rm Red}(v)}v_S$. Then $v=v_{\rm red}=w_{\rm red}$. Therefore $w_{\rm red}$ is the unique reduced generator of $W$.
\end{proof}
\begin{definition} The set ${\rm Red}(v)$ in {\rm (\ref{redv})} is called the {\em reduced support} of $v$, and the element $v_{\rm red}=\sum_{S\in{\rm Red}(v)}v_S$ is termed the {\em reduced form} of $v$. The reduced support of $0$ is empty, and the reduced form of $0$ is itself.
\end{definition}
For example, if $n=7$ and $v = v_\emptyset - 2v_{\{1\}} + v_{\{3\}} + 5 v_{\{1, \,2\}} + 3v_{\{4,\, 7\}} - 2v_{\{5, \,6\}} + v_{\{1, \,2, \,3\}}$, then Red($v$) = $\{\emptyset,\,\{3\},\,\{5,\, 6\},\,\{4,\, 7\}, \{1, \,2, \,3\}\}$ is the reduced support of $v$, and its reduced form is $v_{\rm red} = v_\emptyset + v_{\{3\}} + v_{\{4,\, 7\}} + v_{\{5, \,6\}} + v_{\{1, \,2, \,3\}}$.
It is sometimes convenient to call the reduced support of $v$ the {\em reduced support} of the module $B_nv$.
A direct calculation yields that the reduced generator of $V_k$ is $v_{\{n-k+1,\,\ldots,\,n\}}$ for $1\le k\le n$, and the reduced support of $V_k$ is the set $\{n-k+1,\,\ldots,\,n\}$. The module $V_0$ has the element $v_\emptyset$ as its reduced generator, and its reduced support is the set $\{\emptyset\}$.
The next result is a consequence of Lemma \ref{mod1} (1) and Proposition \ref{cyclic} (3).
\begin{corollary}\label{eq}
If $v, w\in V$, then $B_n v = B_n w$ if and only if $v$ and $w$ have the same reduced support, ${\rm Red}(v) = {\rm Red}(w)$, if and only if they have the same reduced generator, $v_{\rm red} = w_{\rm red}$.
\end{corollary}
We can now describe the irreducible submodules of $V_k$ for $0\le k\le n$. Write $\bold k = \{1,\,\dots,\,k\}$. If $k=0$, we agree that $\bold k=\emptyset$ and $v_\bold k = v_\emptyset$.
\begin{proposition}\label{irreVk}
For each $0\le k\le n$, the 1-dimensional submodule $B_nv_{\bold k}$ is the only irreducible submodule of $V_k$, and every nonzero submodule of $V_k$ contains $B_nv_{\bold k}$.
\end{proposition}
\begin{proof} Since ${\bold k}$ is the smallest element of the set of all $k$-subsets and the elements of $B_n$ are order decreasing as well as order preserving injective maps, from action (\ref{moddef}) we find $B_nv_{{\bold k}}=Fv_{\bold k}$ is an irreducible submodule of $V_k$, and it is $1$-dimensional.
If $W$ is another nonzero irreducible submodule of $V_k$, by Proposition \ref{cyclic} (3) there exists a generator $w\in V_k$ such that $W = B_nw$. The irreducibility of $W$ forces that supp($w$) contains only the $k$-subset ${\bold k}$, since if supp($w$) contains another $k$-subset $S$ different from ${\bold k}$, then by Lemma \ref{mod1} (1), $v_S\in W\setminus B_nv_{\bold k}$ and hence $B_nv_{\bold k}$ would be a nonzero proper submodule of $W$. We conclude $W=B_nv_{{\bold k}}$.
Now let $W$ be any nonzero submodule of $V_k$. From Proposition \ref{cyclic} (3) we know that $W$ is generated by a nonzero element $v\in V_k$. Pick any $S\in$ supp($v$). Then $v_S\in W$ by Proposition \ref{cyclic} (1). Since there exists a unique map $f\in B_n$ such that $D(f)=S$ and $R(f)={\bold k}$, we find $v_{\bold k}=f\cdot v_S\in B_nv_S\subseteq W$. Therefore $B_nv_{\bold k}\subseteq W$.
\end{proof}
The next result describes irreducible submodules of $V$ in terms of those of $V_k$.
\begin{proposition}\label{irreV}
If $W$ is an irreducible submodule of $V$, then $W=B_nv_{{\bold k}}$ for some $0\le k\le n$ and $\dim W =1.$ Moreover, $\{B_nv_{{\bold k}} \mid k = 0,\,\ldots,\, n\}$ is a complete set of irreducible submodules of $V$.
\end{proposition}
\begin{proof}
From Proposition \ref{cyclic} (3) we find $W = B_nw$ for a reduced generator $w\in V$. Since $W$ is nonzero, supp($w$) is not empty. Assume that $S,\,T\in$ supp($w$) and $S\ne T$. Since $S,\,T$ are different maximal elements in supp($w$), from Lemma \ref{mod1} (1) we find $v_T\in W\setminus B_nv_S$, and hence $B_nv_S$ is a nonzero proper submodule of $W$, which contradicts the irreducibility of $W$. Therefore, supp($w$) contains only one subset of $\n$, showing that $W$ is a submodule of $V_k$ for some $0\le k\le n$. It follows from Proposition \ref{irreVk} that $W=B_nv_{{\bold k}}$ and $\dim W =1$. The second part of the proposition is now straightforward.
\end{proof}
Recall that a $B_n$-module is {\em indecomposable} if it is nonzero and cannot be written as a direct sum of two nonzero submodules, and that a $B_n$-module is called {\em completely decomposable} if it is nonzero and is a direct sum of indecomposable submodules.
\begin{proposition}\label{indDecom}
Let $W$ be a nonzero submodule of $V$. Then $W$ is indecomposable if and only if $W$ is a submodule of some $V_k$ where $0\le k\le n$.
\end{proposition}
\begin{proof}
If $W$ is a nonzero submodule of $V_k$ where $0\le k\le n$, then by Proposition \ref{irreVk} any two nonzero submodules of $W$ both contain $B_nv_{{\bold k}}$, so their sum cannot be direct, and hence $W$ is indecomposable.
Conversely, if $W$ is a nonzero indecomposable submodule of $V$, from Proposition \ref{cyclic} (3) it follows that $W=B_nv$ for a unique reduced generator $v = \sum_{S\in {\rm Red}(v)}v_S\in V$. Let $\mathcal{P}(i)$ be the set of all $i$-subsets of $\n$ where $0\le i\le n$. For each $i$ let ${\rm Red}_i(v) = {\rm Red}(v)\cap \mathcal{P}(i)$. Discarding the empty sets ${\rm Red}_i(v)$, we obtain a partition
\[
{\rm Red}(v) = {\rm Red}_{i_1}(v) \sqcup \cdots \sqcup {\rm Red}_{i_s}(v), \quad\text{for some } 1\le s\le n+1,
\]
where $0\le i_1 < \cdots < i_s \le n.$ Let $v_{i_j} = \sum_{S\in {\rm Red}_{i_j}(v)} v_S$ be the reduced vector with support Red$_{i_j}(v)$ where $1\le j\le s$. Then
\begin{equation}\label{comDom}
W = \bigoplus_{j=1}^{s} B_nv_{i_j}~, \quad\text{direct sum of submodules}.
\end{equation}
Since $W$ is indecomposable and each $B_nv_{i_j}$ is nonzero, the direct sum (\ref{comDom}) must consist of a single summand; hence $s=1$ and $W = B_nv_{i_1}$, which is a submodule of $V_{i_1}$.
\end{proof}
\begin{corollary}
Every nonzero submodule of $V$ is completely decomposable. In particular, $V$ is completely decomposable and $V=\bigoplus_{k=0}^{n}V_k$ is a direct sum of indecomposable submodules.
\end{corollary}
\begin{proof}
Let $W$ be a nonzero submodule of $V$. Then from Proposition \ref{cyclic} (3) we have $W=B_nv$ for a unique reduced generator $v = \sum_{S\in {\rm Red}(v)}v_S\in V$. A similar argument to that of Proposition \ref{indDecom} shows that $W$ has the decomposition (\ref{comDom}). From Proposition \ref{indDecom} each $B_nv_{i_j}$ in (\ref{comDom}) is indecomposable. The first part of the desired result follows. The second part follows immediately.
\end{proof}
\begin{proposition}
No two different submodules of $V$ are isomorphic.
\end{proposition}
\begin{proof}
Let $W$ and $U$ be two submodules of $V$ and let $\sigma:W\rightarrow U$ be a module isomorphism. Let $x=\sum_{S\in\, \mathcal I} v_S$ be the reduced generator of $W$, with $\mathcal I = {\rm supp}(x)$. Now for $S\in \mathcal I$, suppose $\sigma(v_S)=\sum_{T\in\, \mathcal J} \lambda_T v_T$ for some index set $\mathcal J$, where $\lambda_T\in F$. Take $f_S\in B_n$ with $D(f_S)=R(f_S)=S$. Then
\[
\sigma(v_S)=\sigma(f_S\cdot v_S)=f_S\cdot\sigma(v_S)=f_S\cdot\sum_{T\in \mathcal J} \lambda_T
v_T=\sum_{T\in \mathcal J,\,T\subseteq S} \lambda_T (f_S\cdot v_T)=\sum_{T\in
\mathcal J,\,T\subseteq S} \lambda_T v_T.
\]
We show that $\sigma(v_S)=\lambda_Sv_S$ with $\lambda_S\neq 0$. If in the sum on the right there is some $T^\p\in \mathcal J,\,T^\p\subsetneq S$ with $\lambda_{T^\p}\neq 0$, let $f_{T^\p}\in B_n$ with $D(f_{T^\p})=R(f_{T^\p})=T^\p$. We have
\[
f_{T^\p}\cdot\sigma(v_S) = f_{T^\p}\cdot\sum_{T\in \mathcal J,\,T\subseteq S}\lambda_T v_T=\sum_{T\in \mathcal J,\,T\subseteq S}\lambda_T (f_{T^\p}\cdot v_T)=\sum_{T\in \mathcal J,\,T\subseteq T^\p}\lambda_T v_T\ne 0,
\]
but $f_{T^\p}\cdot\sigma(v_S) = \sigma(f_{T^\p}\cdot v_S) =\sigma(0)=0$, a contradiction. Thus we get $\sigma(v_S)=\lambda_Sv_S$ and $\lambda_S\neq 0$, so $B_nv_S=B_n\sigma(v_S)$. By Proposition \ref{cyclic} (1), we find
\[
W=B_nx=\sum_{S\in \mathcal I} B_nv_S=\sum_{S\in \mathcal I}B_n\sigma(v_S)=\sigma\Big(\sum_{S\in \mathcal I}B_nv_S\Big)=\sigma\Big(B_n\big(\sum_{S\in \mathcal I}v_S\big)\Big)=\sigma(B_nx)=U.
\] \end{proof}
We now describe the dimension of any nonzero submodule of $V$. Proposition \ref{cyclic} (3) assures that the submodule is equal to the module $B_nv$ generated by some $v\in V$.
\begin{proposition}
Let $v\in V$ and ${\rm Red}(v) = \{S_1,\,\ldots,\,S_m\}$. For any nonempty $J\subseteq {\bold m}$ denote by $S_J$ the greatest lower bound of $\{S_j\mid j\in J\}$. Then the dimension of $B_nv$ is given by
\[
\dim B_nv = \sum_{\emptyset\,\ne J\,\subseteq {\bold m}} (-1)^{|J|-1} \dim B_nv_{S_J}.
\]
\end{proposition}
\begin{proof}
From Proposition \ref{cyclic} (2) and (3) the dimension of $B_nv$ is equal to the cardinality of the set
$\mathcal{P}(v) = \bigcup_{S\in {\rm Red(}v{\rm )}}\{T\subseteq\n\mid T\le S\}$. Let $A_j=\{T\subseteq\n\mid T\le S_j\}$, $j\in{\bold m}$. Then $\mathcal{P}(v) = \bigcup_{j\in{{\bold m}}}A_j$, and $\dim B_nv_{S_j} = |A_j|$ by Proposition \ref{cyclic} (2). With Lemma \ref{mod1} (3) in mind and applying the inclusion-exclusion principle to count the cardinality of $\mathcal{P}(v)$, we obtain the desired formula for $\dim B_nv$.
\end{proof}
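The inclusion–exclusion formula admits a quick numerical sanity check. The Python sketch below is illustrative only; for simplicity it takes all elements of ${\rm Red}(v)$ of the same cardinality, so that each meet $S_J$ is simply the componentwise minimum, and it compares the alternating sum with a direct count of $\mathcal{P}(v)$:

```python
from itertools import combinations

def down_set(S, n):
    """All k-subsets T of {1,...,n} with T <= S (componentwise order)."""
    k = len(S)
    return {T for T in combinations(range(1, n + 1), k)
            if all(t <= s for t, s in zip(T, S))}

def meet_all(subs):
    """Greatest lower bound of a family of equal-size subsets."""
    return tuple(min(col) for col in zip(*subs))

def dim_by_inclusion_exclusion(red, n):
    """Right-hand side of the proposition: alternating sum over nonempty J."""
    m = len(red)
    total = 0
    for j in range(1, m + 1):
        for J in combinations(range(m), j):
            S_J = meet_all([red[i] for i in J])
            total += (-1) ** (j - 1) * len(down_set(S_J, n))
    return total

# An antichain of 3-subsets playing the role of Red(v), with n = 8.
n = 8
red = [(2, 5, 8), (3, 4, 7), (1, 6, 8)]
direct = len(set().union(*(down_set(S, n) for S in red)))  # |P(v)|
assert direct == dim_by_inclusion_exclusion(red, n)
```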
We further describe the dimension of $B_nv_S$ for any $S\subseteq\n$. In what follows we agree that if $x>y$, then $\binom{y}{x}=0$ and the empty sum $\sum_{i=x}^y\square_i = 0$.
\begin{theorem}\label{dim}
If $S=\{s_1<\dots<s_k\}$ is a $k$-subset of $\n$, let $d_k$ be the dimension of the module $B_nv_S$. We have $d_1 = s_1$, and for $k\ge 2$,
\begin{align}
d_k &= \sum^{k-1}_{i=1}{s_{k-i+1} \choose k+1-i}\gamma_i -\sum^{k-1}_{i=1}{s_{k-i+1}-s_1 \choose k+1-i}\gamma_i-\sum^{k-2}_{i=1}s_1 {s_{k-i+1}-s_2 \choose k-i}\gamma_i ~, \label{dk}
\end{align}
where $\gamma_1=1$ and for $2\leq j\leq k-1$,
\begin{equation*}\label{}
\gamma_j=-\sum^{j-2}_{i=1}
{s_{k+1-i}-s_{k+2-j} \choose j-i}\gamma_i ~.
\end{equation*}
\end{theorem}
\begin{proof}
By Lemma \ref{mod1} (1) we know that $d_k$ is equal to the number of $k$-subsets $T$ of $\n$ such that $T\leq S$. Let
$
\la_i=s_{k-i+1}-(k-i+1)\text{ for } 1\le i \le k.
$
Then $\lambda_i\ge\lambda_{i+1}$ since $s_{k-i+1}>s_{k-i}$.
Because the smallest $k$-subset is $\{1,\dots,k\}$, we have
\begin{equation}\label{lambdaSequence}
\la_1\geq \dots\geq\la_k\geq 0,
\end{equation}
and the number of $k$-subsets $T$ of $\n$ with $T\leq S$ is equal to the number of all the sequences
\begin{equation}\label{partition}
\mu_1\geq \dots\geq \mu_k\geq 0\quad\text{with}\quad \mu_i\leq \lambda_i\quad\text{for}\quad i=1,\dots,k.
\end{equation}
To find $d_k$ it suffices to compute the number of the sequences in (\ref{partition}) for the given sequence (\ref{lambdaSequence}).
If $k=1$, then $d_1 = \lambda_1 + 1 = s_1$.
If $k\ge 2$, let $2\le j\le k$. For each fixed nonnegative integer $\mu\le\la_j$ we calculate iteratively on $j$ the number $\alpha_j(\mu)$ of the sequences
\begin{equation}\label{aSj}
\mu_1\geq \dots\geq \mu_{j-1}\ge\mu\quad\text{with}\quad \mu_i\leq \lambda_i\quad\text{for}\quad i=1,\dots,j-1,
\end{equation}
and the required dimension $d_k = \sum_{\mu = 0}^{\lambda_k}\alpha_k(\mu)$.
Let $\xi_j=\la_j-\mu$. Then $0\leq \xi_j\leq\la_j$. Our aim now is to prove
\begin{equation}\label{alphaj}
\alpha_j(\mu) = \beta_j + \gamma_j,
\end{equation}
where
$\beta_j=\sum^{j-1}_{i=1}{\la_i-\la_j+\xi_j+j-i \choose j-i}\gamma_i$ and $\gamma_j = -\sum^{j-2}_{i=1}{\la_i-\la_{j-1}+j-i-1 \choose j-i}\gamma_i$ with $\gamma_1=1$.
Notice that $\alpha_j(\mu)$ is a sum of two numbers $\beta_j$ and $\gamma_j$, of which $\gamma_j$ depends on $\la_1,\dots,\la_{j-1}$, whereas $\beta_j$ depends on $\la_1,\dots,\la_j$ and $\xi_j$.
We use induction on $j$ to prove (\ref{alphaj}) for $2 \le j\le k$. If $j=2$, for each fixed nonnegative integer $\mu\le\lambda_2$
we have $\xi_2=\la_2-\mu$ and $0\leq \xi_2\leq\la_2$. Let $\xi_1=\la_1-\mu_1$. To ensure that (\ref{aSj}) holds for this case, namely $\mu_1\geq\mu$ and $\mu_1\leq\lambda_1$, we must have $0\leq \xi_1\leq\la_1-\la_2+\xi_2$, and conversely. So
\begin{equation*}
\alpha_2(\mu)= \la_1-\la_2+\xi_2+1 = \beta_2+\gamma_2
\end{equation*}
where $\beta_2=\la_1-\la_2+\xi_2+1$ and $\gamma_2=0$, and this is (\ref{alphaj}) for $j=2$.
Suppose (\ref{alphaj}) holds for $j=l$ with $2\leq l\leq k-1$, that is, for each fixed nonnegative integer $\mu\le\lambda_l$
we have $\xi_l=\la_l-\mu$ with $0\leq \xi_l\leq\la_l$, and the number of sequences $\mu_1\geq\dots\geq\mu_{l-1}\geq\mu$ with $\mu_i\leq\lambda_i$ for $i=1,\ldots,l-1$ is
\begin{equation}\label{hypothesis}
\alpha_l(\mu) = \beta_l + \gamma_l,
\end{equation}
where
$\beta_l=\sum^{l-1}_{i=1}{\la_i-\la_l+\xi_l+l-i \choose l-i}\gamma_i$ and $\gamma_l = -\sum^{l-2}_{i=1}{\la_i-\la_{l-1}+l-i-1 \choose l-i}\gamma_i$.
We now prove (\ref{alphaj}) for $j=l+1$. For a fixed nonnegative integer $\nu\le\lambda_{l+1}$ we have $\xi_{l+1} = \la_{l+1}-\nu$ with $0\leq \xi_{l+1}\leq\la_{l+1}$. Let $\mu=\la_l-\xi_l$. To ensure that the condition (\ref{aSj})
\[
\mu_1\geq\dots\geq\mu_{l-1}\geq\mu\ge\nu\quad\text{ with}\quad\mu_i\le\lambda_i,\, i=1,\ldots,{l-1}\text{ and } \mu\le\lambda_l
\]
holds here, we must have $0\leq \xi_l\leq\rho_l$ where $\rho_l=\la_l-\la_{l+1}+\xi_{l+1}$, and conversely. Adding all $\alpha_l(\mu)$ up for $\nu\le\mu\le\lambda_l$ and using the induction hypothesis (\ref{hypothesis}), we obtain
\begin{eqnarray}
\nonumber\alpha_{l+1}(\nu)&=&\sum_{\mu = \nu}^{\lambda_{l}} \alpha_l(\mu) = \sum_{\mu = \nu}^{\lambda_{l}} (\beta_l + \gamma_l)\\
&=&\sum^{\rho_l}_{\xi_l=0}\sum^{l-1}_{i=1}{\la_i-\la_l+\xi_l+l-i \choose l-i}\gamma_i+\sum^{\rho_l}_{\xi_l=0}\gamma_l \label{second}\\
\nonumber &=&\sum^{l-1}_{i=1}\left\{{\la_i-\la_{l+1}+\xi_{l+1}+(l+1)-i \choose l+1-i}\gamma_i
-{\la_i-\la_l+l-i \choose l+1-i}\gamma_i\right\}\\
&&\qquad\qquad\qquad\qquad\qquad+{\la_l-\la_{l+1}+\xi_{l+1}+1\choose 1}\gamma_l \label{alp1}\\
\nonumber &=&\sum^l_{i=1}{\la_i-\la_{l+1}+\xi_{l+1}+(l+1)-i \choose l+1-i}\gamma_i-\sum^{l-1}_{i=1}{\la_i-\la_l+l-i \choose l+1-i}\gamma_i\\
\nonumber &=&\beta_{l+1} + \gamma_{l+1},
\end{eqnarray}
where
\begin{eqnarray*}\label{}
\beta_{l+1}&=&\sum^l_{i=1}
{\la_i-\la_{l+1}+\xi_{l+1}+(l+1)-i \choose l+1-i}\gamma_i ~,\\
\gamma_{l+1}&=&-\sum^{l-1}_{i=1}{\la_i-\la_l+l-i \choose
l+1-i}\gamma_i ~.
\end{eqnarray*}
Here we have made use of the identity $\sum_{z=a}^{a+b-1}\binom{z}{p} = \binom{a+b}{p+1} - \binom{a}{p+1}$ in which $a, b, p$ are natural numbers to obtain (\ref{alp1}) from (\ref{second}) by assigning $a=\lambda_i - \lambda_l + l - i,\, b=\lambda_l - \lambda_{l+1} + \xi_{l+1} + 1$ and $p=l-i\ge 1$.
Therefore, (\ref{alphaj}) is valid for $j=l+1$, and we complete the proof of (\ref{alphaj}) by induction.
We are now able to calculate the dimension $d_k$ of the module $B_nv_S$ for $k\ge 2$ by summing all $\alpha_k(\mu)$ in (\ref{alphaj}) up where $\mu$ runs from $0$ to $\la_k$, yielding
\begin{eqnarray*}\label{dimlambda3}
\nonumber d_k &=& \sum^{\la_k}_{\mu=0}\alpha_k(\mu) \\
\nonumber &=&\sum^{\la_k}_{\xi_k=0}\sum^{k-1}_{i=1}
{\la_i-\la_k+\xi_k+k-i \choose k-i}\gamma_i-\sum^{\la_k}_{\xi_k=0}\sum^{k-2}_{i=1}
{\la_i-\la_{k-1}+k-i-1 \choose k-i}\gamma_i\\
\nonumber &=&\sum^{k-1}_{i=1}{\la_i+k-i+1 \choose k+1-i}\gamma_i -\sum^{k-1}_{i=1}{\la_i-\la_k+k-i \choose k+1-i}\gamma_i \\
&&\qquad\qquad -\sum^{k-2}_{i=1}(\la_k+1) {\la_i-\la_{k-1}+k-i-1 \choose k-i}\gamma_i~.
\end{eqnarray*}
From $\la_i=s_{k-i+1}-(k-i+1)$, we conclude that $d_k$ is given by (\ref{dk}).
\end{proof}
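As a check on Theorem \ref{dim}, the closed form (\ref{dk}) can be compared against a brute-force count of $\{T\mid T\le S\}$ over all small subsets $S$. The following Python sketch is an aside, not part of the proof; it transcribes the formula for $d_k$ and the recursion for $\gamma_j$ verbatim:

```python
from itertools import combinations
from math import comb  # comb(a, b) = 0 when b > a, matching the convention above

def d_brute(S, n):
    """Number of k-subsets T of {1,...,n} with T <= S (componentwise)."""
    k = len(S)
    return sum(all(t <= s for t, s in zip(T, S))
               for T in combinations(range(1, n + 1), k))

def d_formula(S):
    """The closed form of the theorem for S = (s_1 < ... < s_k), k >= 2."""
    k = len(S)
    s = (None,) + tuple(S)  # 1-based indexing to match the paper
    gamma = {1: 1}
    for j in range(2, k):   # gamma_j for 2 <= j <= k-1
        gamma[j] = -sum(comb(s[k + 1 - i] - s[k + 2 - j], j - i) * gamma[i]
                        for i in range(1, j - 1))
    d = sum(comb(s[k - i + 1], k + 1 - i) * gamma[i] for i in range(1, k))
    d -= sum(comb(s[k - i + 1] - s[1], k + 1 - i) * gamma[i] for i in range(1, k))
    d -= sum(s[1] * comb(s[k - i + 1] - s[2], k - i) * gamma[i]
             for i in range(1, k - 1))
    return d

n = 9
for k in range(2, 5):
    for S in combinations(range(1, n + 1), k):
        assert d_formula(S) == d_brute(S, n)
```

Since $T\le S$ forces $t_i\le s_i\le\max(S)$, the brute-force count does not depend on $n$ once $n\ge\max(S)$.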
The following four corollaries are consequences of Theorem \ref{dim}. Considering the sequence $\la_1\geq \dots\geq\la_k\geq 0$ given in (\ref{lambdaSequence}) as a partition $\la$ of $d=\sum_{i=1}^k\la_i$ into at most $k$ parts, we obtain the next result.
\begin{corollary}
Let $\la_1\geq\dots\geq\la_k\geq 0$ be a given partition
of some nonnegative integer. Then the number of distinct Young diagrams obtained from the Young diagram of $\la$ by removing zero or more boxes from the rows is $d_k$, which is given in Theorem {\rm \ref{dim}}.
\end{corollary}
\begin{proof}
It is easily seen that the number of distinct Young diagrams obtained from the Young diagram of $\la$ by removing zero or more boxes from the rows is equal to the number of partitions $\mu_1\geq\dots\geq\mu_k\geq 0$ such that $\mu_i\leq \la_i$ for all $i=1,\dots,k$. The desired result follows from the proof of Theorem \ref{dim}.
\end{proof}
\begin{corollary}\label{dimspeci1}
If $S=\{2,4,\ldots,2k\} \subseteq \mathbf{n}$, then the dimension of the submodule $B_nv_S$ is the Catalan number
$c_{k+1}$.
\end{corollary}
\begin{proof}
The sequence $\la_1\geq \dots\geq\la_k\geq 0$ in (\ref{lambdaSequence}) associated to $S$ is now $k \geq k-1\geq\cdots\geq 1$. In this case, it is well known that the number of sequences $\mu_1\ge\mu_2\ge\ldots\ge\mu_k\ge 0$ such that $\mu_i\leq\la_i$ for $1\leq i\leq k$ is the Catalan number $c_{k+1}$. The desired result follows from Theorem \ref{dim}.
\end{proof}
We find a combinatorial identity below for the Catalan number. To our knowledge, the identity is new.
\begin{corollary}\label{comId}
If $k\geq 2$, then
\begin{equation}\label{catalannewform1}
c_{k+1}=\sum^{k-1}_{i=1}{2(k-i+1) \choose k+1-i}\gamma_i
-\sum^{k-1}_{i=1}{2(k-i) \choose k+1-i}\gamma_i-\sum^{k-2}_{i=1}2
{2(k-i-1) \choose k-i}\gamma_i~,
\end{equation}
where $\gamma_1=1$ and for $2\leq i\leq k-1$,
\begin{equation}\label{catalannewform2}
\gamma_i =-\sum^{i-2}_{j=1}
{2(i-j-1) \choose i-j}\gamma_j~.
\end{equation}
\end{corollary}
\begin{proof}
The combinatorial identity is obtained from Theorem \ref{dim} and Corollary \ref{dimspeci1}.
\end{proof}
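Identity (\ref{catalannewform1}) can also be verified numerically. The Python sketch below is illustrative only; it assumes the indexing $c_m=\binom{2m}{m}/(m+1)$, which agrees with $c_3=5$ and $c_4=14$ in Corollary \ref{dimspeci1}:

```python
from math import comb  # comb(a, b) = 0 when b > a, matching our convention

def catalan(m):
    """c_m = binom(2m, m)/(m+1), so c_1 = 1, c_2 = 2, c_3 = 5, c_4 = 14, ..."""
    return comb(2 * m, m) // (m + 1)

def catalan_identity_rhs(k):
    """Right-hand side of the corollary's identity for c_{k+1}, k >= 2."""
    gamma = {1: 1}
    for i in range(2, k):  # gamma_i for 2 <= i <= k-1
        gamma[i] = -sum(comb(2 * (i - j - 1), i - j) * gamma[j]
                        for j in range(1, i - 1))
    rhs = sum(comb(2 * (k - i + 1), k + 1 - i) * gamma[i] for i in range(1, k))
    rhs -= sum(comb(2 * (k - i), k + 1 - i) * gamma[i] for i in range(1, k))
    rhs -= sum(2 * comb(2 * (k - i - 1), k - i) * gamma[i]
               for i in range(1, k - 1))
    return rhs

for k in range(2, 12):
    assert catalan_identity_rhs(k) == catalan(k + 1)
```

For instance, $k=2$ gives $\binom{4}{2}-\binom{2}{2}=5=c_3$, and $k=3$ gives $20-4-2=14=c_4$.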
\begin{corollary}\label{dimspeci2}
For the $k$-subset $S=\{m+1,\ldots,m+k\}$ of $\mathbf{n}$, the
dimension of the submodule $B_nv_S$ is ${m+k \choose k}$.
\end{corollary}
\begin{proof}
The sequence in (\ref{lambdaSequence}) corresponding to $S$ is the constant sequence $m\geq m\geq\cdots\geq m$ of length $k$. A direct calculation of $d_k$ for $k\geq 1$ using the formulas given in Theorem \ref{dim} yields
\begin{equation}\label{}
d_k={m+k \choose k}~,
\end{equation}
which is the desired result.
\end{proof}
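Corollary \ref{dimspeci2} also admits a direct brute-force confirmation: counting the $k$-subsets $T\le\{m+1,\ldots,m+k\}$ inside a strictly larger ground set, so that the order condition is genuinely restrictive. A small Python sketch, illustrative only:

```python
from itertools import combinations
from math import comb

def dim_interval_module(m, k):
    """Brute-force dim of B_n v_S for S = {m+1,...,m+k}: count of T <= S."""
    n = m + k + 3  # any n >= m+k gives the same count, since T <= S bounds T
    S = tuple(range(m + 1, m + k + 1))
    return sum(all(t <= s for t, s in zip(T, S))
               for T in combinations(range(1, n + 1), k))

for m in range(1, 6):
    for k in range(1, 6):
        assert dim_interval_module(m, k) == comb(m + k, k)
```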
We now compute the dimension $d_{k,\, m}$ of the $B_n$-module $B_nv_{S_{k,\,m}}$, where $k\ge m$ and
$$
S_{k,\,m}=\{2,\,4,\,\ldots,\,2m,\,2m+1,\,2m+2,\,\ldots,\,2m+(k-m)\}
$$
is a subset of $\mathbf{n}$, which is a mixture of the two types of subsets in Corollaries \ref{dimspeci1} and \ref{dimspeci2}. The sequence (\ref{lambdaSequence}) associated to $S_{k,\,m}$ is $m\geq m\geq\cdots\geq m\geq m-1\geq m-2\geq\cdots\geq 1$ of length $k$.
Recall that if $x>y$, then $\binom{y}{x}=0$ and the empty sum $\sum_{i=x}^y\square_i = 0$. Without showing the details, from Theorem \ref{dim} we obtain, for $m\ge 2$,
\begin{eqnarray}\label{catagene2}
\nonumber d_{k,\,m}&=&{m+k \choose k}-{m+k-2 \choose k}-2
{m+k-4 \choose k-1} +\sum^{k-1}_{i=k-m+3}{2(k-i+1) \choose k+1-i}\gamma_i \\
&&{}-\sum^{k-1}_{i=k-m+3}{2(k-i) \choose k+1-i}\gamma_i-\sum^{k-2}_{i=k-m+3}2
{2(k-i-1) \choose k-i}\gamma_i
\end{eqnarray}
where $\gamma_1=1,\gamma_2=\gamma_3=\dots=\gamma_{k-m+2}=0,
\gamma_{k-m+3}=-1$, and for $i\geq k-m+4$
\begin{equation}\label{catagene3}
\gamma_i =-{m-k+2i-4\choose i-1}-\sum^{i-2}_{j=k-m+3}
{2(i-j-1) \choose i-j}\gamma_j~.
\end{equation}
Notice that $d_{k,\,k}$ is just the Catalan number $c_{k+1}$ by Corollary \ref{dimspeci1}. Formula (\ref{catagene2}) will be used in Corollary \ref{ccd}.
Identifying an element of $B_t$ $(t< n)$ with an element of $B_n$ that fixes $t+1, \ldots,n$, we can regard $B_t$ as
a submonoid of $B_n$, so we are allowed to view any $B_n$-submodules of $V$, for example $V_k$ and its submodules, as $B_t$-modules.
Our aim below is to investigate decompositions of the $B_n$-module $W^m_k =B_nv_{S^m_k}$ into indecomposable submodules for $S^m_k=\{m+1,\ldots,m+k\}$ where $k$ and $m$ are positive integers and $k+m\le n$. Using the notation above, we obtain the following result.
\begin{proposition}\label{decomposition}
Viewed as a $B_{m+l}$-module $(1\leq l<k)$, $W^m_k =B_nv_{S^m_k}$ decomposes into a direct sum of indecomposable submodules
\vspace {-2mm}
\begin{equation}\label{moddecom1}
W^m_k \downarrow_{B_{m+l}}^{B_{m+k}} \,\cong\, \bigoplus_{a=0}^{k-l} {k-l \choose a}W_{k-a}^{m+l-k+a} ~,
\end{equation}
\vspace {-2mm}
where ${k-l \choose a}$ is the multiplicity of the indecomposable
submodule $W_{k-a}^{m+l-k+a}$.
\end{proposition}
\begin{proof}
By Lemma \ref{mod1} (1) we have
\[
W^m_k = B_{m+k}v_{S^m_k} = \bigoplus_{k\text{-subsets } T\subseteq \{1,\, ...,\, m+k\}} Fv_T~.
\]
To obtain the desired indecomposable $B_{m+l}$-submodules on the right of (\ref{moddecom1}), we group the $1$-dimensional subspaces $Fv_T$ with $k$-subsets $T\subseteq\{1,\, ...,\, m+k\}$ into categories according to the intersection $\{t'_1, \ldots, t'_a\} = T\cap \{m+l+1,\,\ldots,\, m+k\}$ where $t'_1 < \cdots < t'_a$,\, $0\leq a\leq k-l$, and $1\leq l<k$.
Let $\mathcal{T}$ denote the set of all these $k$-subsets $T$. For any $T\in \mathcal{T}$, we have
\[
T=\{t_1,\,\ldots,\,t_{k-a}, \,t'_1, \ldots, t'_a\}~,
\]
for some subset $\{t_1,\,\ldots,\,t_{k-a}\}\subseteq\{1, ..., m+l\}$ with $t_1 < \cdots < t_{k-a}$, so $m+l\ge k-a.$
Write $p = m+l-(k-a)$ and let $T_a = \{p+1,\,\ldots,\, m+l, \,t'_1, \ldots, t'_a\}.$ Then $T_a$ is a $k$-subset of $\{1,\ldots,m+k\}$, and
$T\le T_a$. Define $f\in B_{m+k}$ by
\[
f(p+1)=t_1,\, \ldots,\, f(m+l) = t_{k-a},\, f(m+l+1) = m+l+1,\, \ldots,\, f(m+k) = m+k.
\]
Then $T=f(T_a)$. Identifying $f$ with an element of $B_{m+l}$, we have $v_{T}=f\cdot v_{T_a}\in B_{m+l}v_{T_a}$. From Lemma \ref{mod1} (1) we have
\[
B_{m+l}v_{T_a} = \bigoplus_{T\in\, \mathcal{T}}Fv_T ~.
\]
We next show that
$
B_{m+l}v_{T_a} \cong B_{m+l}v_{S_{k-a}^{p}}
$
as $B_{m+l}$-modules. Notice that
\[
S_{k-a}^{p}=\{p+1,\,\ldots,\, p+(k-a)\} = \{p+1,\,\ldots,\, m+l\} \subseteq \{1, ..., m+l\}~.
\]
Let
$
\mathcal{U} = \{U\subseteq \{1, ..., m+l\}\mid U\le S_{k-a}^{p}\}.
$
By Lemma \ref{mod1} (1) we have
\[
B_{m+l}v_{S_{k-a}^{p}} = \bigoplus_{U\in\, \mathcal{U}}Fv_U~.
\]
Since
$
T_a = S_{k-a}^{p}\cup \{t'_1,\, \ldots, t'_a\},
$
the map of $\mathcal{T}$ to $\mathcal{U}$ defined by
\[
T \mapsto T\cap\{1,\,\cdots,\,m+l\} = \{t_1,\,\ldots,\,t_{k-a}\}
\]
is one-to-one and onto. Since $B_{m+l}$ fixes $\{t'_1,\, \ldots, t'_a\}$ pointwise, the map defined by
$
v_T \mapsto v_{\{t_1,\, \ldots,\, t_{k-a}\}}
$
leads to a $B_{m+l}$-module isomorphism of $B_{m+l}v_{T_a}$ onto $B_{m+l}v_{S_{k-a}^{p}}$, which is indecomposable by Proposition \ref{indDecom}.
Since there are ${k-l \choose a}$ ways to choose $\{t'_1,\,\ldots,\, t'_a\}$
from $\{m+l+1,\ldots,m+k\}$, there are the same number of corresponding
modules $B_{m+l}v_{T_a}$ in $W^m_k$. Thus
\[
W^m_k \downarrow_{B_{m+l}}^{B_{m+k}} \,\cong\, \bigoplus_{a=0}^{k-l} {k-l \choose a}B_{m+l}v_{T_a}~,
\]
and the proof is complete.
\end{proof}
\begin{corollary} Let $k, l, m$ be positive integers with $l<k$. Then
\begin{equation}\label{moddecomform}
{m+k \choose k}=\sum_{a=0}^{k-l} {k-l \choose a}{m+l \choose k-a}~.
\end{equation}
\end{corollary}
\begin{proof}
Corollary \ref{dimspeci2} shows that $\dim W^m_k={m+k \choose k}$ and $\dim W_{k-a}^{m+l-k+a} = {m+l \choose k-a}$. Inserting them into (\ref{moddecom1}), we complete the proof.
\end{proof}
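Identity (\ref{moddecomform}) is a Vandermonde-type convolution of binomial coefficients and is easy to confirm numerically. A quick Python check, included as an aside:

```python
from math import comb  # comb(a, b) = 0 when b > a, so out-of-range terms vanish

def lhs(m, k):
    """Left-hand side of the identity: dim W^m_k."""
    return comb(m + k, k)

def rhs(m, k, l):
    """Right-hand side: sum over the multiplicities in the decomposition."""
    return sum(comb(k - l, a) * comb(m + l, k - a) for a in range(k - l + 1))

for m in range(1, 8):
    for k in range(2, 8):
        for l in range(1, k):
            assert lhs(m, k) == rhs(m, k, l)
```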
We now apply the procedure used in the proof of Proposition \ref{decomposition}, omitting some details, to the decomposition of the $B_{2k}$-module $W_k=B_nv_{S_k}$ into indecomposable $B_{2(k-1)}$-modules, where $S_k=\{2,4,\ldots,2k\}$ and $2k\le n$. We regard $W_k$ as a $B_{2(k-1)}$-module.
Let $S$ be a $k$-subset of $\mathbf{n}$ such that $S\leq S_k$. If $S$ contains $2k$, then $v_S=f\cdot v_{S_k}$ for some $f\in B_{2(k-1)}$ since $B_{2(k-1)}$ fixes $2k$. Thus the module $B_{2(k-1)}v_{S_k}$ contains the basis vectors $v_S$ where $S$ runs through all $k$-subsets $S\le S_k$ containing $2k$, so $B_{2(k-1)}v_{S_k}$ is isomorphic to the submodule
$W_{k-1} = B_{2(k-1)}v_{S_{k-1}}$.
If $S$ does not contain $2k$ but contains $2k-1$, then $S\leq T=\{2, \,4,\,\ldots,\,2(k-1),\,2k-1\}$, and since $B_{2(k-1)}$
fixes $2k-1$, we have $v_S=f\cdot v_{T}$ for some $f\in B_{2(k-1)}$. Thus the module $B_{2(k-1)}v_{T}$ contains the basis vectors $v_S$ where $S$ runs through all $k$-subsets $S\le S_k$ containing $2k-1$ but not $2k$, so it is isomorphic to the submodule $W_{k-1} = B_{2(k-1)}v_{S_{k-1}}$.
If $S$ contains neither $2k$ nor $2k-1$, then $S\leq S_{k,\, k -2}=\{2,4,\ldots,2(k-2),2k-3,2k-2\}$, and we have
$v_S=f\cdot v_{S_{k,\, k -2}}$ for some $f\in B_{2(k-1)}$. The module $B_{2(k-1)}v_{S_{k,\, k -2}}$ then contains the basis vectors $v_S$ where $S$ runs through all $k$-subsets $S\le S_k$ containing neither $2k$ nor $2k-1$. Therefore, we have shown
\begin{proposition}
For the $k$-subset $S_k=\{2,4,\ldots,2k\}$ of $\mathbf{n}$, let
$W_k=B_nv_{S_k}$. Then, viewed as a $B_{2(k-1)}$-module, we have the
decomposition of $W_k$ into a direct sum of indecomposable
submodules
\begin{equation}\label{moddecom2}
W_k\downarrow_{B_{2(k-1)}}^{B_{2k}}\cong 2W_{k-1} \oplus
B_{2(k-1)}v_{S_{k,\,k-2}}~.
\end{equation}
\end{proposition}
\begin{corollary}\label{ccdcor} Let $k\ge 2$. Then the $(k+1)${\rm st} Catalan number satisfies
\begin{equation}\label{ccd} c_{k+1}=2c_k + d_{k,\,k-2}~,
\end{equation}
where $c_k$ is the $k$th Catalan number and $d_{k,\,k-2}$ is the dimension of $B_{2(k-1)}v_{S_{k,\,k-2}}$.
\end{corollary}
\begin{proof}
By Corollary \ref{dimspeci1}, the dimension of $W_k$ is $c_{k+1}$ and that of $W_{k-1}$ is $c_{k}$.
The dimension $d_{k,\,k-2}$ of $B_{2(k-1)}v_{S_{k,\,k-2}}$ is given in (\ref{catagene2}). Putting them into (\ref{moddecom2}), we obtain the desired combinatorial identity (\ref{ccd}).
\end{proof}
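As a hedged numerical illustration (the closed form of $d_{k,\,k-2}$ from (\ref{catagene2}) is not reproduced here), one can tabulate the dimensions $d_{k,\,k-2}=c_{k+1}-2c_k$ forced by (\ref{ccd}):

```python
from math import comb

def catalan(k):
    """k-th Catalan number c_k = binom(2k, k) / (k+1)."""
    return comb(2 * k, k) // (k + 1)

# Dimensions d_{k,k-2} forced by the identity c_{k+1} = 2 c_k + d_{k,k-2};
# these must agree with the dimension formula referenced in the text.
d = {k: catalan(k + 1) - 2 * catalan(k) for k in range(2, 8)}
assert d[2] == 1 and d[3] == 4 and d[4] == 14
assert all(v > 0 for v in d.values())
```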
\section{Presentation on generators and relations}
We use the method of Section 3 of Herbig \cite{H} to describe generators and relations of $B_n$.
Some preparations are needed. We define a new monoid $\hat{B}_n$ generated by symbols $\l_i$ ($1\le i\le n-1$), $\e_i$ ($1\le i\le n$), and $\hat 1$, subject to the relations:
\vspace{-2mm}
\begin{enumerate}[{\rm(i)}]
\item $\e_i^2=\e_i$
\vspace{-2mm}
\item $\l_i\l_{i+1}\l_i=\l_i\l_{i+1}=\l_{i+1}\l_i\l_{i+1}$
\vspace{-2mm}
\item $\l_i\e_i=\l_i=\e_{i+1}\l_i$
\vspace{-2mm}
\item $\l_i\e_{i+1}=\e_i\e_{i+1}=\e_i\l_i=\l_i^3=\l_i^2$
\vspace{-2mm}
\item $\e_i\l_j=\l_j\e_i$ for $i\neq j,j+1$
\vspace{-2mm}
\item $\l_i\l_j=\l_j\l_i$ for $|i-j|\geq 2$
\vspace{-2mm}
\item $\e_i\e_j=\e_j\e_i$ for all $i,j$~.
\end{enumerate}
\vspace{-2mm}
For $a, b\in \mathbf{n}$ with $a\ge b$, let $\L^{a,\,a}=\hat{1}$ and, for $a>b$,
$
\L^{a,\,b}=\l_b\l_{b+1}\cdots \l_{a-2}\l_{a-1}.
$
For any subsets $S=\{s_1<\dots<s_k\}$ and $T=\{t_1<\dots<t_k\}$ of $\mathbf{n}$ satisfying $s_j\geq t_j$ for all $1\leq j\leq k$,
let
\begin{equation}\label{uv}
U=\mathbf{n}-S=\{u_1<\cdots<u_{n-k}\}\quad\text{and}\quad V=\mathbf{n}-T=\{v_1<\cdots<v_{n-k}\}~.
\end{equation}
Define
$
\E_S = \e_{u_1}\cdots \e_{u_{n-k}},\,\L^{S,\,T} =\L^{s_k,\,t_k}\cdots \L^{s_1,\,t_1},\text{ and } \E_T=\e_{v_1}\cdots \e_{v_{n-k}}~,
$
where we agree that $\E_\mathbf{n} = \hat{1}$. The word $W^S_T=\E_T\L^{S,\,T}\E_S$ is called a {\em standard word} of $\hat{B}_n$. The following proposition shows that every element of $\hat{B}_n$ is equivalent, under the relations (i) -- (vii), to one of the standard words. First, we note that the generators $\l_i,\e_i$ and $\hat{1}$ are themselves standard words.
\begin{proposition}\label{ml}
\begin{enumerate}[{\rm(1)}]
\item $W^S_T\l_i=W^{S^\prime}_{T^\prime},$ where
$$(S^\prime,\,T^\prime)=
\left\{
\begin{array}{ll}
(S,\,T) & \text{\rm if }\, i,\,i+1\notin S \\
(S\backslash\{i\},\,T\backslash\{t_{c+1}\}) & \text{\rm if }\, i,\,i+1\in S \\
((S\backslash\{i\})\cup\{i+1\},\,T) & \text{\rm if }\, i\in S,\,i+1\notin S \\
(S\backslash\{i+1\},\,T\backslash\{t_{c+1}\}) & \text{\rm if }\, i\notin S,\,i+1\in S \\
\end{array}
\right.
$$where $i+1$ is mapped to $t_{c+1}$ under $W^S_T$.
\item $W^S_T\e_i=W^{S^\prime}_{T^\prime},$ where
$$(S^\prime,\,T^\prime)=
\left\{
\begin{array}{ll}
(S,\,T) & \text{\rm if }\, i\notin S \\
(S\backslash\{i\},\,T\backslash\{t_c\})& \text{\rm if }\, i\in S \\
\end{array}
\right.
$$where $i$ is mapped to $t_c$ under $W^S_T$.
\item $\l_iW^S_T=W^{S^\prime}_{T^\prime},$ where
$$(S^\prime,\,T^\prime)=
\left\{
\begin{array}{ll}
(S,\,T) & \text{\rm if }\, i,\,i+1\notin T \\
(S\backslash\{s_c\},\,T\backslash\{i+1\}) & \text{\rm if }\, i,\,i+1\in T \\
(S\backslash\{s_c\},\,T\backslash\{i\}) & \text{\rm if }\, i\in T,\,i+1\notin T \\
(S,\,(T\backslash\{i+1\})\cup\{i\}) & \text{\rm if }\, i\notin T,\,i+1\in T \\
\end{array}
\right.
$$where $s_c$ is mapped to $i$ under $W^S_T$.
\item $\e_iW^S_T=W^{S^\prime}_{T^\prime},$ where
$$(S^\prime,\,T^\prime)=
\left\{
\begin{array}{ll}
(S,\,T) & \text{\rm if }\, i\notin T \\
(S\backslash\{s_c\},\,T\backslash\{i\}) & \text{\rm if }\, i\in T \\
\end{array}
\right.
$$where $s_c$ is mapped to $i$ under $W^S_T$.
\end{enumerate}
\end{proposition}
\begin{proof}
We use relations (i)--(vii) repeatedly, sometimes without explicit mention.
We divide the proof of part (1) into four cases.
Case 1.1: neither $i$ nor $i+1$ is in $S$. Then $i,i+1\in U$ and $\e_i,\e_{i+1}$ appear in $\E_S$. Since the terms in $\E_S$ other than $\e_i,\e_{i+1}$ commute with $\l_i$, we have
\begin{eqnarray*}
W^S_T\l_i &=& \E_T\L^{S,\,T}\e_{u_1}\cdots \e_i\e_{i+1}\cdots
\e_{u_{n-k}}\l_i \\
&=& \E_T\L^{S,\,T}\e_{u_1}\cdots\e_i\e_{i+1}\l_i\cdots
\e_{u_{n-k}} \\
&=& \E_T\L^{S,\,T}\e_{u_1}\cdots\e_i\l_i\cdots \e_{u_{n-k}} \quad\quad\text{(by \rm (iii))}\\
&=& \E_T\L^{S,\,T}\e_{u_1}\cdots\e_i\e_{i+1}\cdots \e_{u_{n-k}} \quad\text{(by \rm (iv))} \\
&=& W^S_T~.
\end{eqnarray*}
Case 1.2: both $i$ and $i+1$ are in $S$. We have $i,i+1\notin U$, and every term in $\E_S$ commutes with $\l_i$ by (v).
From (iii) we get
$$
\E_S\l_i=\E_S\l_i\e_i =\l_i\E_S\e_i=\l_i\E_{S^\prime}~,
$$
where $S^\prime=S\backslash\{i\}.$
Let $s_c=i$. Then $s_{c+1}=i+1$ since $i+1\in S$. For $j<c$, by (vi) the terms in $\L^{s_j,t_j}$ commute with $\l_i$, as
their indices are less than or equal to $i-2$. Thus
\begin{eqnarray}\label{wstl}
\nonumber W^S_T\l_i &=& \E_T\L^{S,\,T}\E_S\l_i \\
\nonumber &=& \E_T\L^{S,\,T}\l_i\E_{S^\p} \\
&=& \E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\L^{i+1,\,t_{c+1}}\L^{i,\,t_{c}}\l_i)\L^{s_{c-1},\,t_{c-1}}\cdots\L^{s_1,\,t_1}\E_{S^\p}~.
\end{eqnarray}
We now show
\begin{equation}\label{lll}
\L^{i+1,\,t_{c+1}}\L^{i,\,t_c}\l_i = \L^{i+1,\, t_{c}}~.
\end{equation}
Indeed, by (vi) the term $\l_i$ in $\L^{i+1,\,t_{c+1}}=\l_{t_{c+1}}\cdots\l_{i-1}\l_i$ commutes with all of the terms of $\L^{i,\,t_{c}}=\l_{t_c}\cdots\l_{i-2}\l_{i-1}$ until $\l_{i-1}$, where we use (ii): $\l_i\l_{i-1}\l_i=\l_{i-1}\l_i$ to simplify.
Repeating the same procedure for each of the remaining terms of $\L^{i+1,\,t_{c+1}}$, we conclude
\begin{eqnarray*}
\L^{i+1,\,t_{c+1}}\L^{i,\,t_c}\l_i &=& (\l_{t_{c+1}}\cdots\l_{i-1}\l_i)(\l_{t_c}\cdots\l_{i-1})\l_i \\
&=& (\l_{t_{c+1}}\cdots\l_{i-1})(\l_{t_c}\cdots\l_i\l_{i-1}\l_i) \quad\text{\rm by (vi)} \\
&=& (\l_{t_{c+1}}\cdots\l_{i-1})(\l_{t_c}\cdots\l_{i-1}\l_i) \quad\text{\rm by (ii)}\\
&\vdots& \\
&=& \l_{t_c}\cdots\l_{i-1}\l_i \\
&=& \L^{i+1,\, t_{c}}~.
\end{eqnarray*}
Putting (\ref{lll}) into (\ref{wstl}), we obtain
\begin{eqnarray}\label{wstl1}
\nonumber W^S_T\l_i &=& \E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}\L^{i+1,\,t_{c}}\L^{s_{c-1},\,t_{c-1}}\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=& \E_T\L^{S^\prime,\,T^\p}\E_{S^\prime}
\end{eqnarray}
where $S'=S\setminus\{i\}$ and $T'=T\setminus\{t_{c+1}\}$.
The right hand side of (\ref{wstl1}) is not a standard word yet because the factor $\e_{t_{c+1}}$ needed in $\E_{T^\p}$ is still missing. Since
$t_c\leq t_{c+1}-1\leq i$, the term $\l_{(t_{c+1}-1)}$ appears in $\L^{i+1,\,t_{c}}=\l_{t_c}\cdots\l_i$. Notice that
$\l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)}=\l_{(t_{c+1}-1)}\l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)}$ and $\e_{t_{c+1}}\l_{(t_{c+1}-1)}=\l_{(t_{c+1}-1)}$. We have
\begin{eqnarray*}
\L^{i+1,\, t_{c}} &=& \l_{t_c}\cdots\l_{(t_{c+1}-3)}(\l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)}) \cdots\l_i \\
&=& \l_{t_c}\cdots\l_{(t_{c+1}-3)}(\l_{(t_{c+1}-1)}\l_{(t_{c+1}-2)} \l_{(t_{c+1}-1)}) \cdots\l_i \\
&=& \l_{t_c}\cdots\l_{(t_{c+1}-3)}(\e_{t_{c+1}}\l_{(t_{c+1}-1)})\l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)} \cdots\l_i \\
&=& \l_{t_c}\cdots\l_{(t_{c+1}-3)}\e_{t_{c+1}} \l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)} \cdots\l_i\\
&=& \e_{t_{c+1}}(\l_{t_c}\cdots \l_{(t_{c+1}-3)}\l_{(t_{c+1}-2)}\l_{(t_{c+1}-1)} \cdots\l_i)\\
&=& \e_{t_{c+1}}\L^{i+1,\, t_{c}}~,
\end{eqnarray*}
where we have used that $\e_{t_{c+1}}$ commutes with all terms on its left.
Inserting the above result into (\ref{wstl1}) and knowing that $\e_{t_{c+1}}$ commutes with $\L^{s_k,\,t_k}\cdots\L^{s_{c+2},t_{c+2}}$, we deduce
\begin{eqnarray*}
W^S_T\l_i &=& \E_T (\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}})(\e_{t_{c+1}}\L^{i+1,\, t_{c}}) \L^{s_{c-1},\,t_{c-1}}\cdots\L^{s_1,\,t_1}\E_{S^\prime} \\
&=& \E_T\e_{t_{c+1}}(\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}} \L^{i+1,\, t_{c}} \L^{s_{c-1},\,t_{c-1}}\cdots\L^{s_1,\,t_1})\E_{S^\prime} \\ &=&\E_{T^\p}\L^{S^\p,\,T^\p}\E_{S^\p} \\
&=&W^{S^\p}_{T^\p}~.
\end{eqnarray*}
Case 1.3: $i$ is in $S$, but $i+1$ is not. It follows immediately that $i\notin U,\,i+1\in
U$, so $\e_{i+1}$ appears in $\E_{S}$, but $\e_i$ does not. Using (iii): $\e_{i+1}\l_i=\l_i\e_i$ and (v): $\l_i\e_j=\e_j\l_i$ for $j\ne i, i+1$, we have
\begin{eqnarray*}
\E_S\l_i&=&\e_{u_1}\cdots\e_{i+1}\cdots\e_{u_{n-k}}\l_i\\
&=& \e_{u_1}\cdots\e_{i+1}\l_i\cdots\e_{u_{n-k}} \\
&=& \e_{u_1}\cdots\l_i\e_i\cdots\e_{u_{n-k}} \\
&=& \l_i\E_{S^\p}~,
\end{eqnarray*}
where $S^\p=(S\backslash\{i\})\cup\{i+1\}$. Let $s_c=i$. Then $s_{c+1}>i+1$ since $i+1\notin S$. For
$j<c$, all the terms in $\L^{s_j,t_j}$ commute with $\l_i$ since their indices are at most $i-2$, giving
\begin{eqnarray*}
W^S_T\l_i &=& \E_T\L^{S,\,T}\E_S\l_i \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{i,t_c}
\cdots\L^{s_1,\,t_1}\l_i\E_{S^\p}\\
&=&\E_T\L^{s_k,\,t_k}\cdots(\l_{t_c}\cdots\l_{i-2}\l_{i-1})\l_i
\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_{T^\p}\L^{S^\p,\,T^\p}\E_{S^\p}\\
&=&W^{S^\p}_{T^\p}~,
\end{eqnarray*}
where $T'=T$.
Case 1.4: $i+1$ is in $S$, but $i$ is not. We have $i\in U,\,i+1\notin U$. The term $\e_i$ appears in $\E_S$, but $\e_{i+1}$ does not. By $\e_i\l_i=\l_i\e_{i+1}$, $\l_i=\l_i\e_i$, and $\l_i\e_j=\e_j\l_i$ for $j\ne i, i+1$, we find
\begin{eqnarray*}
\E_S\l_i&=&\e_{u_1}\cdots\e_i\cdots\e_{u_{n-k}}\l_i\\
&=& \e_{u_1}\cdots\e_i\l_i\cdots\e_{u_{n-k}} \\
&=& \e_{u_1}\cdots\l_i\e_{i+1}\cdots\e_{u_{n-k}} \\
&=& \e_{u_1}\cdots\l_i\e_i\e_{i+1}\cdots\e_{u_{n-k}} \\
&=& \l_i\e_{u_1}\cdots\e_i\e_{i+1}\cdots\e_{u_{n-k}} \\
&=& \l_i\E_{S^\p}~,
\end{eqnarray*}
where $S^\p=S\backslash\{i+1\}$.
Let $s_{c+1}=i+1$. Then for $j\le c$, the indices of the terms in $\L^{s_j,\,t_j}$ are at most $i-2$, so they commute with
$\l_i$, leading to
\begin{eqnarray*}
W^S_T\l_i &=& \E_T\L^{S,\,T}\E_S\l_i \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}\L^{i+1,\,t_{c+1}}\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\l_i\E_{S^\p} \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\l_{t_{c+1}}\cdots \l_{i-1}\l_i)\l_i\L^{s_{c},\,t_{c}} \cdots\L^{s_1,\,t_1}\E_{S^\p}~.
\end{eqnarray*}
Since $\l_i^2=\e_i\e_{i+1}$ and $\l_{j-1}\e_j=\e_{j-1}\e_j$ for all
$t_{c+1}+1\leq j\leq i$ (we use them repeatedly below), we obtain
\begin{eqnarray*}
W^S_T\l_i&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\l_{t_{c+1}}\cdots
\l_{i-1}\l_i^2)\L^{s_{c},\,t_{c}} \cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\l_{t_{c+1}}\cdots
\l_{i-1}\e_i\e_{i+1})\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\l_{t_{c+1}}\cdots
\l_{i-2}\e_{i-1}\e_i\e_{i+1})\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&\vdots& \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}(\e_{t_{c+1}}\cdots
\e_{i-1}\e_i\e_{i+1})\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\E_{S^\p}~.
\end{eqnarray*}
As the indices of all the terms $\l$ on the left of $\e_{t_{c+1}}$ are at least $t_{c+1}+1$, it follows from (v) that $\e_{t_{c+1}}$ commutes with $\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}.$ We obtain
\begin{eqnarray*}
W^S_T\l_i &=&\E_T\e_{t_{c+1}}\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}
(\e_{t_{c+1}+1}\cdots\e_i\e_{i+1})
\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_{T^\p}\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}
(\e_{t_{c+1}+1}\cdots\e_i\e_{i+1})
\L^{s_{c},\,t_{c}}\cdots\L^{s_1,\,t_1}\E_{S^\p}~,
\end{eqnarray*}
where $T^\p=T\backslash\{t_{c+1}\}$.
Our aim now is to switch $\e_j$ for $t_{c+1}+1\leq j\leq i+1$ with the terms $\l$ one by one on the right of $\e_j$ until it encounters either $\l_j$ or $\l_{j-1}$, or until it commutes past all of the $\l$. If $t_{c+1}+1\leq j\leq s_c$, then $\e_j$ will run into some $\l_{j-1}$ in $\L^{s_c,\,t_c}=\l_{t_c}\l_{t_c+1}\cdots\l_{s_c-1}$, leading to $\e_j\l_{j-1} = \l_{j-1}$ by (iii).
If $s_c+1\le j\leq i+1$, then $\e_j$ does not run into any $\l_j$ or $\l_{j-1}$ in $\L^{s_c,\,t_c}$ since the maximal index of the $\l$ there is $s_c-1$, and $\e_j$ does not run into any $\l_j$ or $\l_{j-1}$ to the right of $\L^{s_c,\,t_c}$, either, because all the indices of the $\l$ there are at most $s_c-2$. But for $s_c+1\le j\leq i+1$, we have $j\in \mathbf{n}-S^\p$, so $\e_j$ will run into $\e_j$ in $\E_{S^\p}$. By $\e_j\l_{j-1}=\l_{j-1}$ and
$\e^2_j=\e_j$, we get
\begin{eqnarray*}
W^S_T\l_i&=&\E_{T^\p}\L^{s_k,\,t_k}\cdots\L^{s_{c+2},\,t_{c+2}}
\L^{s_c,\,t_c}
\cdots\L^{s_1,\,t_1}\E_{S^\p} \\
&=&\E_{T^\p}\L^{S^\p,\,T^\p}\E_{S^\p} \\
&=&W^{S^\p}_{T^\p}~.
\end{eqnarray*}
This completes the proof of part (1).
We next show part (2). If $i\notin S$, then $i\in U$ and $\e_i$ is in $\E_S$. By $\e^2_i=\e_i$, we have
\begin{eqnarray*}
W^S_T\e_i &=&\E_T\L^{S,\,T}\e_{u_1}
\cdots\e_i\cdots\e_{u_{n-k}}\e_i \\
&=&\E_T\L^{S,\,T}\e_{u_1}\cdots\e_i\e_i\cdots\e_{u_{n-k}} \\
&=&\E_T\L^{S,\,T}\e_{u_1}\cdots\e_i\cdots\e_{u_{n-k}}\\
&=& \E_T\L^{S,\,T}\E_S \\
&=& W^S_T~.
\end{eqnarray*}
If $i\in S$, let $s_c=i$, and then for $j<c$, all of the
indices of the terms in $\L^{s_j,\,t_j}$ are at most $i-2$, so they
commute with $\e_i$. From $\e_i^2=\e_i$, we obtain
\begin{eqnarray*}
W^S_T\e_i&=&\E_T\L^{s_k,\,t_k}\cdots\L^{i,\,t_c}
\cdots\L^{s_1,\,t_1}\E_S\e_i \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{i,\,t_c}
\cdots\L^{s_1,\,t_1}\E_S\e_i\e_i \\
&=&\E_T\L^{s_k,\,t_k}\cdots\L^{i,\,t_c}\e_i
\cdots\L^{s_1,\,t_1}\E_{S^\p}~,
\end{eqnarray*}
where $S^\p=S\backslash\{i\}$.
For $t_c+1\leq j\leq i$, using (iv):
$\l_{j-1}\e_j=\e_{j-1}\e_j$ repeatedly, we get
\begin{eqnarray*}
W^S_T\e_i&=&\E_T\L^{s_k,\,t_k}\cdots(\l_{t_c}\l_{t_c+1}\cdots
\l_{i-1})\e_i\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_T\L^{s_k,\,t_k}\cdots(\l_{t_c}\l_{t_c+1}\cdots\l_{i-2}
\e_{i-1}\e_i)\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&\vdots& \\
&=&\E_T\L^{s_k,\,t_k}\cdots(\e_{t_c}\e_{t_c+1}\cdots
\e_{i-1}\e_i)\cdots\L^{s_1,\,t_1}\E_{S^\p}~.
\end{eqnarray*}
Now the situation is very similar to Case 1.4. We commute $\e_{t_c}$ past all $\l$ on the left and get
$\E_T\e_{t_c}=\E_{T^\p}$, where $T^\p=T\backslash\{t_c\}$.
For
$t_c+1\leq j\leq s_{c-1}$, commute $\e_j$ past all $\l$ on the right
until it reaches a $\l_{j-1}$, then use $\e_j\l_{j-1}=\l_{j-1}$. For $s_{c-1}+1\le j\leq i$, commute $\e_j$ past all $\l$ on the right until it meets an $\e_j$ in $\E_{S^\p}$, then use $\e_j^2=\e_j$. We find
\begin{eqnarray*}
W^S_T\e_i&=&\E_T\L^{s_k,\,t_k}\cdots(\e_{t_c}\cdots
\e_{i-1}\e_i)\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_T\e_{t_c}\L^{s_k,\,t_k}\cdots(\e_{t_c+1}\cdots
\e_{i-1}\e_i)\cdots\L^{s_1,\,t_1}\E_{S^\p}\\
&=&\E_{T^\p}\L^{S^\p,\,T^\p}\E_{S^\p}\\
&=&W^{S^\p}_{T^\p}~.
\end{eqnarray*}
Parts (3) and (4) are similar.
\end{proof}
\begin{remark}
{\rm
One can prove that the standard words can also be chosen as $\E_{S,\,T}\L^{S,\,T}$ where $\E_{S,\,T}=\e_{w_1}\e_{w_2}\dots\e_{w_h}$ if $\mathbf{n}-S-T=\{w_1,w_2,\dots,w_h\}$, and $\L^{S,\,T}$ is the same as in Proposition \ref{ml}.
}
\end{remark}
Let $G$ be the subset of $B_n$ consisting of the elements $1$, $l_i$, and $e_j$, where
\begin{align*}
l_i&=\left(
\begin{array}{ccccccc}
1 & \dots & i-1 & i+1 & i+2 & \dots & n \\
1 & \dots & i-1 & i & i+2 & \dots & n \\
\end{array}
\right)\quad \text{ and} \\
e_j&=\left(
\begin{array}{cccccc}
1 & \dots & j-1 & j+1 & \dots & n \\
1 & \dots & j-1 & j+1 & \dots & n \\
\end{array}
\right)~,
\end{align*}
where $1\le i\leq n-1$ and $1\leq j\leq n$.
Our intention here is to show that $G$ generates $B_n$. For arbitrary $a, b\in \mathbf{n}$ with $a\ge b$, we define $L^{a,\,a}=1$ and, for $a>b$, $L^{a,\,b}=l_b\cdots l_{a-2}l_{a-1}$.
For $f\in B_n$, let
\[
S=\{s_1<\cdots<s_k\} \quad\text{and}\quad T=\{t_1<\dots<t_k\}
\]
be its respective domain and range. Let $U, V$ be as in (\ref{uv}) and let
\[
E_S=e_{u_1}\cdots e_{u_{n-k}},\quad L^{S,\,T}=L^{s_k,\,t_k}\cdots L^{s_1,\,t_1},\quad E_T=e_{v_1}\cdots e_{v_{n-k}}~,
\]
where we agree that $E_\mathbf{n}=1$. Then $f=E_TL^{S,\,T}E_S$, and we have shown
\begin{theorem}\label{gene}
Every element of $B_n$ is a product of elements of $G$.
\end{theorem}
The elements of $G$ satisfy the following relations (we omit the details, which are straightforward), where the indices $i$ and $j$ are such that all expressions in the relations are meaningful.
We use $R$ to denote the set of these relations:
\vspace{-2mm}
\begin{enumerate}[{\rm(1)}]
\item $e_i^2=e_i$~.
\vspace{-2mm}
\item $l_il_{i+1}l_i=l_il_{i+1}=l_{i+1}l_il_{i+1}$~.
\vspace{-2mm}
\item $l_ie_i=l_i=e_{i+1}l_i$~.
\vspace{-2mm}
\item $l_ie_{i+1}=e_ie_{i+1}=e_il_i=l_i^3=l_i^2$~.
\vspace{-2mm}
\item $e_il_j=l_je_i$ \quad for $i\neq j,j+1$~.
\vspace{-2mm}
\item $l_il_j=l_jl_i$ \quad for $|i-j|\geq 2$~.
\vspace{-2mm}
\item $e_ie_j=e_je_i$ \quad for all $i,j$~.
\end{enumerate}
\begin{theorem}\label{main}
The monoid $B_n$ has presentation $\langle\, G \mid R\,\rangle $.
\end{theorem}
\begin{proof}
The mapping $\phi:\hat{B}_n\rightarrow B_n$ defined by $\l_i\mapsto l_i$, $\e_i\mapsto e_i$, and $\hat 1\mapsto 1$ induces a monoid homomorphism of $\hat{B}_n$ onto $B_n$. By Proposition \ref{ml}, every element of $\hat{B}_n$ is equal to a standard word, and distinct standard words have distinct images in $B_n$; hence the mapping is injective.
\end{proof}
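As a mechanical sanity check (not part of the proof), the relations in $R$ can be verified by realizing $l_i$ and $e_j$ as the partial maps displayed above. The sketch below assumes the convention that in a product the right-hand factor acts first; under this convention all seven relations hold for these maps.

```python
from itertools import product

N = 5  # check the relations in B_5; any N >= 3 works the same way

def l(i):
    """l_i: partial map undefined at i, sending i+1 to i, fixing the rest."""
    m = {x: x for x in range(1, N + 1) if x != i}
    m[i + 1] = i
    return m

def e(i):
    """e_i: the identity map restricted to {1,...,N} minus {i}."""
    return {x: x for x in range(1, N + 1) if x != i}

def mul(*fs):
    """Product f1 f2 ... fk of partial maps; the rightmost factor acts first."""
    out = {x: x for x in range(1, N + 1)}  # the identity, playing the role of 1
    for f in fs:
        out = {x: out[f[x]] for x in f if f[x] in out}
    return out

for i in range(1, N):                         # 1 <= i <= N-1
    assert mul(e(i), e(i)) == e(i)                                    # (1)
    assert mul(l(i), e(i)) == l(i) == mul(e(i + 1), l(i))             # (3)
    assert (mul(l(i), e(i + 1)) == mul(e(i), e(i + 1))
            == mul(e(i), l(i)) == mul(l(i), l(i), l(i))
            == mul(l(i), l(i)))                                       # (4)
for i in range(1, N - 1):                     # 1 <= i <= N-2
    assert (mul(l(i), l(i + 1), l(i)) == mul(l(i), l(i + 1))
            == mul(l(i + 1), l(i), l(i + 1)))                         # (2)
for i, j in product(range(1, N + 1), range(1, N)):
    if i != j and i != j + 1:
        assert mul(e(i), l(j)) == mul(l(j), e(i))                     # (5)
for i, j in product(range(1, N), repeat=2):
    if abs(i - j) >= 2:
        assert mul(l(i), l(j)) == mul(l(j), l(i))                     # (6)
for i, j in product(range(1, N + 1), repeat=2):
    assert mul(e(i), e(j)) == mul(e(j), e(i))                         # (7)
```

Rerunning the script with a larger $N$ checks the relations in any $B_N$.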
\section{INTRODUCTION}
\subsection{The hierarchy problem}
Can the Standard Model be recast in light of the recent insights
coming from string theories, where extra dimensions appear so
naturally? The standard model for strong, weak and electromagnetic
interactions, described by the gauge group $SU(3)\times SU(2)\times
U(1)$, owes its success to strong experimental evidence.
However, it has several serious theoretical drawbacks suggesting the
existence of new and unexpected physics beyond what has been
discussed in recent years. One of these problems is the so-called
\textit{gauge hierarchy problem} which is related to the weak
($M_{ew}$) and Planck ($M_{pl}$) scales, the fundamental scales of
the model. The central issue is to explain the
smallness of the hierarchy $M_{ew}/ M_{pl}\sim 10^{-17}$. In the
context of the minimal standard model, this hierarchy of scales is
unnatural since it requires a fine-tuning order by order in the
perturbation theory. The first attempts to solve this problem were
the technicolor scenario \cite{A} and the low energy supersymmetry
\cite{B}. We mention that electroweak interactions have been probed
at distances $\sim M_{ew}^{-1}$, but gravity has only been accurately
measured down to the $\sim 1\,\mathrm{cm}$ range. Note that the Planck length is
$\sim 10^{-33}\,\mathrm{cm}$.
\subsection{Extra dimensions - Randall-Sundrum scenario}
With the string theories, the search of many-dimensional theories
became important. The basic idea is that extra dimensions can be
used to solve the hierarchy problem: the fields of the standard
model must be confined to a $(3+1)$-dimensional subspace, embedded
in a $n$-dimensional manifold. In the seminal works of Arkani-Hamed,
Dimopoulos, Dvali and Antoniadis \cite{C}, the $4$-dimensional
Planck mass is related to $M$, the fundamental scale of the theory,
by the geometry of the extra dimensions. Through Gauss's law, they have
found $M^{2}_{pl}=M^{n+2} V_{n}$, where $V_{n}$ is the extra
dimensions volume. If $V_{n}$ is large enough, $M$ can be of the
order of the weak scale. However, unless there are many extra
dimensions, a new hierarchy is introduced between the
compactification scale, $\mu_{c}= V_{n}^{-\frac{1}{n}}$, and $M$. An
important feature of this model is that the space-time metric is
factorizable, i.e., the full space-time manifold is
approximately a product of the $4$-dimensional space-time by a compact
$n$-dimensional manifold.
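A rough numerical sketch of the relation $M^{2}_{pl}=M^{n+2}V_{n}$ (taking $V_n\sim r^n$ for $n$ equal extra dimensions of size $r$, an assumed fundamental scale $M\sim 1\,$TeV, and the conversion $\hbar c\approx 1.97\times 10^{-14}\,$cm$\cdot$GeV; the numbers are illustrative only):

```python
# Size r of n equal extra dimensions required by M_pl^2 = M^(n+2) V_n,
# with V_n ~ r^n, so r ~ (1/M) * (M_pl / M)^(2/n).
HBARC_CM_GEV = 1.973e-14  # hbar*c in cm*GeV, converts 1/GeV to cm
M_PL = 1.22e19            # 4-dimensional Planck mass in GeV
M = 1.0e3                 # assumed fundamental scale ~ 1 TeV, in GeV

def radius_cm(n):
    return (HBARC_CM_GEV / M) * (M_PL / M) ** (2.0 / n)

# n = 2 gives millimeter-sized dimensions (close to the range where
# gravity had been tested); n = 6 gives dimensions far below it.
assert 1e-3 < radius_cm(2) < 1.0
assert radius_cm(6) < 1e-10
```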
Because of this new hierarchy, Randall and Sundrum \cite{D} have
proposed a higher dimensional scenario that does not require large
extra dimensions, nor the assumption of a factorizable
metric. Working with a single $S^{1}/Z_{2}$ orbifold extra
dimension, with three-branes of opposite tensions localized on the
fixed points of the orbifold and with adequate cosmological
constants as $5$-dimensional sources of gravity, they have shown
that the space-time metric of this model contains a redshift factor
which depends exponentially on the radius $r_{c}$ of the
compactified dimension:
\begin{equation}\label{eq1}
d s^{2}= e^{-2k r_{c}|\phi|}\eta_{\mu\nu} d x^{\mu}d x^{\nu}-r_{c}^{2}
d\phi^{2},
\end{equation}
where $k$ is a parameter of the order of $M$, $x^{\mu}$ are Lorentz
coordinates on the surfaces of constant $\phi$, and
$-\pi\leq\phi\leq\pi$ with $(x,\phi)$ and $(x,-\phi)$ identified.
The two $3$-branes are localized on $\phi=\pi$ and $\phi=0$. In
fact, this scenario is well known in the context of string theory.
The non-factorizable geometry showed in Eq.(\ref{eq1}) has important
consequences. In particular, the $4$-dimensional Planck mass is
given in terms of the fundamental scale $M$ by
\begin{equation}\label{eq2}
M_{pl}^{2}=\frac{M^{3}}{k}[1-e^{-2k r_{c}\pi}],
\end{equation}
in such a way that, even for large $k r_{c}$, $M_{pl}$ is of the
order of $M$. Another success of this scenario is that the
standard model particle masses are scaled by the warp exponential
factor.
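A quick numerical sketch of Eq. (\ref{eq2}) and of the warp suppression: with $k\sim M$ and the illustrative value $k r_{c}\approx 12$, the factor $e^{-k r_{c}\pi}$ already spans the sixteen-to-seventeen orders of magnitude between the weak and Planck scales:

```python
import math

k_rc = 12.0  # k r_c of order 10 suffices in the Randall-Sundrum setup

# Warp factor scaling the masses on the phi = pi brane.
warp = math.exp(-k_rc * math.pi)
assert 1e-17 < warp < 1e-16

# Eq. (2): M_pl^2 = (M^3/k) * (1 - exp(-2 k r_c pi)); with k ~ M the
# bracket is essentially 1, so M_pl stays of order M even for large k r_c.
bracket = 1.0 - math.exp(-2.0 * k_rc * math.pi)
assert abs(bracket - 1.0) < 1e-12
```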
\subsection{Topological gravity: some motivations}
Despite these important developments in classical Einstein gravity,
background independent theories are welcome. As an example it is
worth mentioning the Quantum Loop Gravity, developed mainly by
Asthekar et al \cite{E}. Also the problem of background dependence
of string field theory has not been successfully addressed. The
string field theory has a theoretical problem: it is only
consistently quantized in particular backgrounds, which means that
we have to specify a metric background in order to write down the
field equations of the theory. This problem is fundamental because a
unified description of all string backgrounds would make it possible to
answer questions about the selection of particular string vacua and
in general to give us a more complete understanding of geometrical
aspects of string theory \cite{F}.
Due to these developments, we regard the search for topological
theories in the brane context as an important subject, with the major
objective of adding quantum information to the Randall-Sundrum
scenario. In this sense, as a first step, in this part of the work we
construct topological theories on brane-worlds in several dimensions. The
brane-world is regarded as a kink-like soliton and it appears due to
a spontaneous symmetry breaking of a Peccei-Quinn-like symmetry.
Topological terms are obtained by generalizing to many dimensions
the axion-photon anomalous interaction.
\section{TOPOLOGICAL TERMS IN BRANE-WORLDS}
We implement the theory through the following action in $D=5+1$:
\begin{equation}
S=\int d^{6}x\left( -\frac{1}{2\left( 3!\right) }H_{MNP
}H^{MNP}+g\varepsilon ^{MNPQRS}\phi
\left( z\right) H_{MNP}H_{QRS}+\frac{1}{2}%
\partial _{M }\phi \partial ^{M}\phi -V\left( \phi \right)
\right) \label{eq1-1}
\end{equation}
In this action, $H_{MNP}=\partial _{M}B_{NP }+\partial _{N }B_{PM
}+\partial _{P}B_{MN }\left(M,N ,...=0,...,5\right) $ is the field
strength of an antisymmetric tensor gauge field $B_{MN}$. The field
$B_{\mu \nu }$ has an important function in string theory: it
couples correctly with the string world-sheet in a very similar way
to the coupling of a gauge vector field $A_{M }$ to the world
line of a point particle. The field $\phi $ is a real scalar field,
and $V\left( \phi\right) =\lambda \left( 1-\cos \phi\right)$ is a
potential that provides a phase transition.
The second term in the action (\ref{eq1-1}) is a term that
generalizes the coupling that appears from the anomaly of the
Peccei-Quinn quasisymmetry in $D=3+1$, namely,
$\phi\rightarrow\phi+a$. For such, the space-time dimension is
$D=5+1$ and the hypersurface is a $D=4+1$ world. Now we can work
with the second term of (\ref{eq1-1}) (considering that $\phi$ only
depends of the z-coordinate $z$) in order to obtain new terms by
integration by parts. Considering that the $B_{MN}$ field weakly
depends on the z-coordinate, we obtain:
\begin{equation}
S_{top.}=\int d^{5}x\left( k\varepsilon ^{MNPQ R }B_{MN}H_{PQR
}\right) \label{eq6-1}
\end{equation}
This last equation shows that over the hypersurface an effective
topological term appears with a coupling constant $k$ that
has canonical dimension of mass. The theory over the hypersurface
is completely five-dimensional. This term is very similar to the
Chern-Simons term, that is written in $D=2+1$ with a gauge vector
field $A_{\mu }$. Nevertheless, the term (\ref{eq6-1}) is written
only with antisymmetric tensor fields $B_{\mu \nu }$. Such a term
has been used to explain some peculiarities of the Cosmic Microwave
Background Radiation (CMBR) within the Randall-Sundrum scenario
\cite{indianos}. It is interesting now to observe the properties of
the action (\ref{eq1-1}) in lower dimensional space-times using
dimensional reduction. Thus, supposing that the fields of the action
(\ref{eq1-1}) are independent of the coordinate $x_{M }\equiv x_{5}$
which is not the argument of the field $\phi\left( z\right)$, we
find a new action in $D=4+1$. This action has now a vectorial gauge
field reminiscent of the reduction, and contains yet the real scalar
field $\phi$, that again may give rise to the formation of a lower
dimensional domain wall-brane. In this case, the space-time
dimension is $D=4+1$ and the hypersurface is a $D=3+1$ universe.
Restricting the theory to the solitonic hypersurface, we obtain
\begin{equation}
S_{top.}=k \int d^{4}x\left( \varepsilon ^{4\nu \alpha \rho \sigma
}V_{\nu \alpha }B_{\rho \sigma }\right) \label{eq10-1}
\end{equation}
If the field $V_{\mu }$ is identified with the potential four-vector
$A_{\mu }$ then we obtain the action for the $B\wedge F$ model on
the domain wall-brane. This action, under certain conditions, can
give rise to a mechanism of topological mass generation for the
field $A_{\mu }$ or for the field $B_{\mu \nu }$.
The discussion for lower dimensions ($D=3+1$) and ($D=2+1$), using
the same methods, will lead us to the following topological action:
\begin{equation}
S_{top.}=k \int d^{4}x\left[ \varepsilon ^{\mu \nu \alpha \rho }\phi
\left( z\right) \partial _{\mu }\phi \partial _{\nu }B_{\alpha \rho
}+\varepsilon ^{\mu \nu \alpha \rho }\phi \left( z\right) F_{\mu \nu
}W_{\alpha \rho }\right] \label{eq12-1}
\end{equation}
The fields $\phi$ and $W_{\alpha \rho }=\partial _{\alpha }W_{\rho
}-\partial _{\rho }W_{\alpha }$ emerge as degrees of freedom
reminiscent of the reduction. If we work with the first term of
(\ref{eq12-1}) on the domain wall, we will find a different
topological theory:
\begin{equation}
S=\int d^{3}x\left( g\varepsilon ^{abc}\partial _{a }\phi
B_{bc}\right) \label{eq13-1}
\end{equation}
Identifying the vector field $W_{\mu }$ in the second term of
(\ref{eq12-1}) with the gauge field $A_{\mu }$, we obtain
the anomalous interaction term between the real scalar field $\phi$
and the field $A_{\mu }$. This term, rearranged on the domain wall,
reduces to the Chern-Simons term.
\section{TOPOLOGICAL APPROACH TO THE HIERARCHY PROBLEM}
In this section we review an alternative to the central point of the
Randall-Sundrum model \cite{H}, namely, the particular
nonfactorizable metric. Using a topological theory, we show that the
exponential factor, crucial in the Randall-Sundrum model, appears in
our approach, only due to the brane existence instead of a special
metric background. In order to study the hierarchy problem we choose
to work with topological gravity. Motivated by current searches in
the quantum gravity context, we study topological gravity of
$B\wedge F$ type. Then, we can affirm that our model is purely
topological because $1)$ the brane exists due to the topology of the
parameter space of the model and $2)$ gravity is metric independent.
We will see that these features give us interesting results when
compared to the Randall-Sundrum model.
\subsection{The Model}
The model is based on the following action:
\begin{equation}\label{eq3}
S= \int d^{5} x [\frac{1}{2}\partial_{M}\phi\partial^{M}\phi+
k\varepsilon^{MNPQR}\phi H_{MNP}^{a} F_{QR}^{a}-V(\phi)].
\end{equation}
In this action the $\phi$ field is a real scalar field that is
related to the domain wall. The fields $H_{MNP}^{a}$ and
$F_{QR}^{a}$ are non-abelian gauge fields strengths and will be
related to the gravitational degrees of freedom. Namely, in pure
gauge theory,
$H^{a}_{MNP}=\partial_{M}B^{a}_{NP}+\partial_{N}B^{a}_{PM}+\partial_{P}B^{a}_{MN}+gf^{abc}A^{b}_{M}B^{c}_{NP}$
and
$F^{a}_{MN}=\partial_{M}A^{a}_{N}-\partial_{N}A^{a}_{M}+g'f^{abc}A^{b}_{M}A^{c}_{N}$.
The second term of this action is a topological version of the terms
studied above. The action (\ref{eq3}) is invariant under a
Peccei-Quinn symmetry transformation $\phi\rightarrow\phi+2\pi n$.
The potential is
\begin{equation}\label{eq4}
V(\phi)=\lambda(1-\cos\phi),
\end{equation}
which preserves the Peccei-Quinn symmetry. Nevertheless, it is
spontaneously broken at scales of the order of $M_{PQ}\sim
10^{10}$--$10^{12}\,$GeV. We propose the following potential
\begin{equation}\label{eq5}
V(\phi)=\frac{\lambda}{4}(\phi^{2}-v^{2})^{2},
\end{equation}
which explicitly breaks the $U_{PQ}(1)$ Peccei-Quinn symmetry, in
order to generate a brane at an energy close to the weak scale. With
this particular choice of the potential, the existence of the brane
is put on more consistent grounds. In other words, the brane appears
almost exactly at an energy scale of the universe near the symmetry
breaking scale of the electroweak theory. This feature was assumed
in previous works without a careful justification. However, this
mechanism leads to a large disparity between the Planck mass
$M_{PL}\sim 10^{18}\,{\rm GeV}$ and the scale of explicit breaking of
$U_{PQ}(1)$, which is relatively close to the weak scale, $M_{ew}\sim
10^{3}\,{\rm GeV}$: we regard this disparity as a new version of the
hierarchy problem. Consider now the solution
$\phi(x_{4})=v\tanh(\sqrt{\frac{\lambda}{2}}v x_{4})$.
This solution defines a $3$-brane embedded in a $(4+1)$-dimensional
space-time. The mass scale of this model is $m=\sqrt{\lambda}v$ and
the domain wall-brane thickness is $m^{-1}$. With this information
we can now discuss the effective theory on the domain wall-brane. An
integration by parts of the topological term in the action
(\ref{eq3}) will result in
\begin{equation}\label{eq9}
S\sim \int d^{4} x \varepsilon_{\nu\alpha\rho\lambda}
B_{\nu\alpha}^{a} F_{\rho\lambda}^{a} [\lim_{r_{c}\rightarrow
+\infty} k'\int_{0}^{r_{c}} d x_{4} \partial_{4}\phi(x_{4})],
\end{equation}
where $r_{c}$ represents the extra dimension. This last expression
encodes the domain wall-brane contribution to the effective
four-dimensional theory. We can see that, effectively on the domain
wall-brane, the theory is purely $4$-dimensional (this is important)
and is described by a non-abelian topological $B\wedge F$ term. It
can be shown that, under parameterizations by tetrad fields, a
$B\wedge F$ type action gives us
\begin{equation}\label{eq9a}
\int d^{4} x k\varepsilon^{\nu\alpha\rho\lambda} B_{\nu\alpha}^{a}
F_{\rho\lambda}^{a}\rightarrow k\int d^{4} x \sqrt{g}R,
\end{equation}
which is the Einstein-Hilbert action for the gravitational field,
where $R$ is the scalar curvature and $g$ stands for the space-time
metric. From Eqs. (\ref{eq9}) and (\ref{eq9a}), we can see the
relation between the Planck mass $k_{4}$ in $D=4$ and the extra
dimension:
\begin{equation}\label{eq10}
k_{4}=\lim_{r_{c}\rightarrow +\infty} k'\int_{0}^{r_{c}} d x_{4}
\partial_{4}\phi(x_{4}).
\end{equation}
The limit $r_{c}\rightarrow +\infty$ ensures the topological
stability of the domain wall-brane. By the substitution of the
aforementioned solution $\phi(x_{4})$ in Eq. (\ref{eq10}),
considering a finite $r_{c}$ (which means that the domain wall-brane
is a finite object), we can show that
\begin{equation}
k_{4}=k'v(1-e^{-2y}) (1+e^{-2y})^{-1}\label{eq11},
\end{equation}
where $y=\sqrt{\frac{\lambda}{2}}v r_{c}$ is the scaled extra
dimension. This result is very interesting: since our model is a
topological one, the exponential factor cannot arise from any
special metric. Here, the exponential factor appears only due to the
existence of the domain wall-brane. As in the Randall-Sundrum model, even
for the large limit $r_{c}\rightarrow +\infty$, the $4$-dimensional
Planck mass has a specific value. This is the reason why we believe
that our approach can be useful to treat the hierarchy problem. It
is possible to obtain scaled masses for the confined matter using
zero modes attached to the domain wall-brane. We emphasize that
only the characteristics of the domain wall-brane are used.
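The closed form in Eq.~(\ref{eq11}) follows from inserting the kink profile into Eq.~(\ref{eq10}). A short numerical sanity check (in Python, with illustrative, dimensionless parameter values that are our own choice and not fixed by the model) confirms that $\int_{0}^{r_{c}} \partial_{4}\phi\, d x_{4} = v\tanh(y) = v(1-e^{-2y})(1+e^{-2y})^{-1}$:

```python
import numpy as np

# Illustrative, dimensionless parameter values (not fixed by the model).
lam, v, r_c = 2.0, 1.0, 3.0
a = np.sqrt(lam / 2.0) * v                 # inverse width of the kink

# Integrate d(phi)/dx_4 for phi = v tanh(a x_4) over [0, r_c]
# with a simple trapezoidal rule on a fine grid.
x = np.linspace(0.0, r_c, 200001)
dphi = v * a / np.cosh(a * x) ** 2
integral = np.sum(0.5 * (dphi[1:] + dphi[:-1]) * np.diff(x))

# Closed form of Eq. (11) for k' = 1: v (1 - e^{-2y}) / (1 + e^{-2y}) = v tanh(y).
y = a * r_c
closed_form = v * (1.0 - np.exp(-2.0 * y)) / (1.0 + np.exp(-2.0 * y))
print(integral, closed_form)
```

The two numbers agree to quadrature accuracy, and for $r_{c}\rightarrow +\infty$ the expression saturates at $v$, reproducing the finite Planck mass discussed above.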
\section{ON AXION PHYSICS: some perspectives}
In this section we make a brief review of recent developments about
axion physics and propose some new perspectives in this area. The
motivations for these discussions are only mathematical: the
topological terms studied in the sections above can give us some
insights. Extra dimension physics gives new viewpoints for axion
studies. In particular, an interesting problem is how to generate
the axion scale ($f_{a}=10^{10}\sim 10^{12}$ GeV), which is
intermediate between the Planck scale ($M_{p}\approx 10^{18}$ GeV) and
the electroweak symmetry breaking scale ($M_{ew}\approx 1$ TeV).
Such a problem may be solved naturally by $5D$ orbifold field
theories \cite{I} if the axion originates from a higher-dimensional
parity-odd field $C_{M}$:
\begin{equation}
S_{5D}=\int d^{4}x dy \sqrt{-G}
(\frac{1}{l^{2}}C_{MN}^{2}+\frac{\kappa}{\sqrt{-G}}\epsilon^{MNPQR}C_{M}
F_{NP}^{a} F_{QR}^{a}+\ldots)
\end{equation}
The action contains a Chern-Simons coupling of the $U(1)$ gauge
field $C_{M}$ to the standard-model non-abelian field strength
$F_{NP}^{a}$. The axion appears as the $C_{5}$ field, a component of
$C_{M}$, together with the correct scale $f_{a}$.
On the other hand, interesting studies have been made in gauge
fields localization procedures. For the brane of the
$\delta$-function type, Dvali et al. \cite{J} have shown that, for
gauge fields, localization holds for specific distances but there is
an effect of dissipation of cosmic radiation to extra dimensions for
large distances. The Lagrangian is the following:
\begin{equation}
L=-\frac{1}{4g^{2}}F_{AB}^{2}-\frac{1}{4e^{2}}F_{\mu\nu}^{2}\delta
(y)+\ldots
\end{equation}
The first term refers to the bulk physics while the second one, to
the brane physics ($A,B=1,\ldots ,5$ and $\mu,\nu =1,\ldots ,4$). On
the brane, the $F^{2}$ term can be generated by radiative
corrections due to localized matter fields. An interesting problem
is to analyze axion physics in this context in order to see if the
same phenomenon happens for axions. In $D=4$, the axion field may be
described by an antisymmetric field $B_{\mu\nu}$. A nice way to
study this question would be to start with the following model in
$D=5$:
\begin{equation}
L=H_{ABC}^{2}+H_{\mu\nu\alpha}^{2}\delta (y)+\ldots
\end{equation}
Alternatively (in terms of equivalent theories on branes), we can
try the following model in $D=5$:
\begin{equation}
L=H_{ABC}^{2}-F_{AB}^{2}+\delta
(y)[\varepsilon^{\mu\nu\alpha\beta}B_{\mu\nu}F_{\alpha\beta}+A_{\mu}A^{\mu}]
\end{equation}
On the brane we have a partially topological theory that is
equivalent to an $H^{2}$ theory. Here we must assume that the
equations of motion of the brane physics can be obtained
independently of the bulk physics, which is justified at low
energies in the $D=4$ world. Under this assumption we can construct
a mechanism of axion localization on branes in a different fashion.
\section{CONCLUSIONS}
By a procedure of dimensional reduction, we have constructed several
Chern-Simons-like topological terms, in abelian and in non-abelian
theories. The domain wall-brane has been simulated by a kink-like
soliton embedded in a higher dimensional space-time and it has
emerged due to a spontaneous symmetry breaking of a specific
discrete symmetry, namely, a Peccei-Quinn-like symmetry. We have
shown that a simple topological model in field theory has the
necessary features to solve the Gauge Hierarchy problem in a very
similar way to the one found by Randall and Sundrum. With this model
we have built a stable $3$-brane (a domain wall-brane) that
simulates our four-dimensional Universe, and we have argued for the
possibility of topological gravity localization. Because of these
facts, the exponential factor appears only due to the existence of
the domain wall-brane and not from a special metric. We have
discussed some of our perspectives on axion physics. We have noted
that we can construct a mechanism of axion localization using a
partially topological field theory. Studies related to generation of
axion scales will follow in a forthcoming paper.
\section*{Acknowledgments}
This work was supported in part by Conselho Nacional de
Desenvolvimento Cient\'{\i }fico e Tecnol\'{o}gico-CNPq and
Funda\c{c}\~{a}o Cearense de Amparo \`{a} Pesquisa-FUNCAP.
\section{Introduction}
\label{sec:intro}
The presence or absence of a diagnostic spectroscopic signal can facilitate the elucidation
of a reaction mechanism or the design of a molecular material with specific properties and function.
For example, infrared (IR) spectroscopy and ultraviolet-visible light
(UV/Vis) spectroscopy yield useful information about a molecular system under study.
In UV/Vis spectroscopy, the position of a peak is given by the vertical transition energy
between different electronic states at a specific nuclear configuration.
For vibrational spectroscopy in the harmonic approximation\cite{Wilson1955, Califano1976, Bratoz1958},
the position of peaks can be related to the local shape of the potential energy surface (PES).
Intensities are then usually obtained through transition probabilities by virtue of Fermi's Golden Rule\cite{Heitler1994, Craig1998}.
The quantum chemical calculation of spectroscopic information is often more time consuming than the calculation of an electronic wave function and energy. The computational cost associated with obtaining this information will be very high if large collections or sequences of molecular structures are involved;
examples are the calculation of spectra (i) for molecular dynamics trajectories\cite{Marx2009},
(ii) for molecular conformer ensembles\cite{Hill2012}, and
(iii) in high-throughput virtual screening settings.
Furthermore, efficiency is also decisive in the framework of interactive quantum chemistry\cite{Haag2013, Haag2014, Vaucher2016, Vaucher2018} because here ultra-fast delivery of quantum chemical results is the key to interactivity.
In all these cases, a speed-up of the calculation of spectra would be very beneficial.
The calculation of IR spectra can be accelerated by the determination of only a subset of the vibrational normal modes of a molecular system according to some criterion.
In the mode-tracking approach\cite{Reiher2003, Reiher2004, Hermann2007}, the Davidson algorithm\cite{Davidson1975} is modified to refine iteratively the normal modes that are the most similar to a set of candidate vibrations at a fraction of the cost of the
full vibrational calculation.
A similar approach has been employed in the intensity-tracking algorithm\cite{Kiewisch2008, Kiewisch2009, Luber2009, Kovyrshin2010,Kovyrshin2012, Teodoro2018}, where the most intense vibrational transitions are selectively and iteratively optimized. In the PICVic method\cite{DosSantos2014}, normal modes are calculated with an efficient and inexpensive method and the ones deemed interesting are refined with few single-point calculations with more accurate methods.
Molecular fragmentation was also leveraged to obtain highly accurate vibrational spectra at a fraction of the cost of a full calculation\cite{Sahu2015}.
Vibrational analysis with a partial Hessian matrix\cite{Wang2016} exploits only the block of the full Hessian matrix that corresponds to a molecular substructure of interest, which
is evaluated and diagonalized\cite{Head1997}. This approach was successfully employed in the calculation of changes in reaction enthalpy and entropy for systems in which the changes induced by the reaction are local in nature\cite{Li2002}. The partial Hessian vibrational analysis has been extended by considering the rest of the molecular system as a collection of rigid bodies allowed to rotate and translate relative to the subsystem under scrutiny\cite{Ghysels2007}. This removes spurious negative frequencies caused by freezing the partitioned substructures in their relative positions. In polymer chemistry, for instance, the molecular structure is partitioned into subsystems represented by the monomers of the polymeric chain; the low-frequency vibrations are approximated
by considering the monomers as rigid blocks, subsequently perturbed by the high-frequency vibrations of the monomers\cite{Durand1994, Tama2000}. In the Cartesian tensor transfer method\cite{Bour1997}, the Hessian matrix and the property tensors are efficiently calculated by fragmenting the molecular structure and assembling the resulting matrices and property tensors. Infrared and Raman spectra calculated with this tensor transfer approach are, in general, well reproduced\cite{Bieler2011}.
UV/Vis spectra are calculated by solving the linear response eigenvalue equation. Efficient methods are typically based on local approximations\cite{Kovyrshin2010}, on the reduction of the excitation space\cite{Rueger2015, Grimme2013}, and on the approximation of the required integrals\cite{Grimme2013, Bannwarth2014, Niehaus2001}.
R\"uger and co-workers described a protocol based on a modification of time-dependent density functional theory (TD-DFT) for semi-empirical density functional tight binding (DFTB), namely TD-DFTB\cite{Rueger2015}.
In this protocol, the excited-state linear-response eigenvalue problem is solved in a small subspace of the full excitation space. This subset is determined by an intensity criterion: determinants corresponding to single excitations from the Hartree--Fock reference determinant will be added to the subset if the dipole matrix element for this excitation exceeds a predefined threshold.
However, the effect of this basis reduction on accuracy and reliability is difficult to foresee.
In the simplified TD-DFT (sTD-DFT) and simplified Tamm--Dancoff approximation (sTDA)\cite{Grimme2013, Risthaus2014, Bannwarth2014}, the calculation of the two-electron integrals in the molecular orbital basis required in
the excited-state eigenvalue problem is simplified by approximating the integrals with a multipole expansion truncated after the monopole terms.
In this way, only partial charges and molecular orbital energy differences are needed to solve the excited-state linear-response problem\cite{Grimme2016}. This approach was also adopted for time-dependent density functional tight binding (TD-DFTB)\cite{Niehaus2001}.
The excited-state linear-response matrix is then diagonalized in a subspace defined by all
determinants representing single excitations in which the energy difference between the occupied and virtual orbitals involved is lower
than the maximum energy for which the UV/Vis spectrum is calculated. Excluded basis functions that have a high off-diagonal element in the excitation matrix with basis functions included in the subset are then recovered through a perturbative approach. The accuracy of sTDA and sTD-DFT
can be similar to that of the corresponding TDA and TD-DFT, respectively, but at a fraction of the cost\cite{Grimme2013}.
Neugebauer and co-workers developed a selective TD-DFT solver automatically removing low-lying long-range charge-transfer states\cite{Kovyrshin2012}. This allows one to obtain the relevant states reliably at reduced computational cost.
Furthermore, special hardware such as graphics processing units can accelerate excited-state calculations\cite{Isborn2011}.
Finally, methods employing a small basis (to which semi-empirical methods belong)\cite{Grimme2016, Liu2018} offer an avenue for the accelerated calculation of both UV/Vis and IR spectra.
Approximate electronic structure methods introduce errors in the calculation of spectroscopic signals, the extent of which needs to be assessed with uncertainty quantification\cite{Sullivan2011}. We studied and developed protocols to quantify the uncertainty in the molecular properties calculated by density functionals\cite{Simm2016, Simm2017b, Proppe2017}, to propagate the effect of errors in activation (free) energy barriers from first principles to species concentrations in kinetic modeling\cite{Proppe2016, Proppe2019}, and to estimate the role of uncertainty in the parametrization of dispersion corrections\cite{Weymuth2018,Proppe2019b}. Such approaches can be extended in order to be applicable to spectroscopic signals. Although this is beyond the scope of the present work, we note that Jacob and coworkers have recently published first steps into this direction\cite{Oung2018}.
Even though these developments represent remarkable advances in the efficiency of single-spectrum calculations, none of them allows for interactive spectroscopic feedback describing structural changes of molecules in real time.
In this work, we seamlessly integrate spectroscopic calculations into the ultra-fast quantum mechanical exploration of a molecular system in an automated
fashion. This development was driven by the desire to obtain spectroscopic information on the fly in interactive quantum chemistry\cite{Haag2013, Haag2014, Vaucher2016, Vaucher2018}. Our developments may also be beneficial for a fast analysis of molecular dynamics trajectories\cite{Marx2009}, for the calculation of spectra averaged over ensembles of molecular conformers, and in automated high-throughput calculations such as reaction network explorations\cite{Sameera2016,Dewyer2018,Simm2019,Unsleber2020}.
\section{Theory}
\label{sec:theory}
We first review the essential theory to introduce key notation. All developments presented in this section
are implemented in our open-source C++ software library for semi-empirical methods called \textsc{Sparrow}\cite{Sparrow300}. Hartree atomic units are used throughout if not otherwise stated.
\subsection{Vibrational Spectroscopy}
Vibrational peak positions are obtained as differences between energy eigenvalues of
the time-independent nuclear Schr\"{o}dinger equation, in which the electronic energy $E_{\rm el}$ is approximated as a Taylor series expansion truncated after the second derivatives with respect to the nuclear Cartesian coordinates $\boldsymbol{R}^{\rm(c)}$. For this, the Hessian matrix $\boldsymbol{F}^{\rm(c)}$,
\begin{equation}
\label{eq:diagHessian}
F_{ij}^{\rm(c)} = \left(\frac{\partial^2E_{\rm el}(\boldsymbol{R}^{\rm(c)})}{\partial R_i^{\rm(c)} \partial R_j^{\rm(c)}}\right) ,
\end{equation}
is calculated at a local energy minimum of the PES (the indices $i$ and $j$ run over the $3N$ nuclear Cartesian coordinates).
In the basis of mass-weighted normal coordinates $\boldsymbol{R}^{\rm(q)}$ the nuclear Schr\"{o}dinger equation simplifies to\cite{Neugebauer2002}
\begin{equation}
\label{eq:nucSG}
\left( -\frac{1}{2} \nabla_{\rm nuc}^{\rm(q)\dagger}\nabla_{\rm nuc}^{\rm(q)} + \frac{1}{2}\boldsymbol{R}^{\rm(q)\dagger}\boldsymbol{F}^{\rm(q)}\boldsymbol{R}^{\rm(q)} \right) |v^{\rm tot}\rangle = E^{v^{\rm tot}}_{\rm nuc}|v^{\rm tot}\rangle ,
\end{equation}
where $\nabla_{\rm nuc}^{\rm(q)}$ is the vector corresponding to the nuclear gradient expressed in the basis of mass-weighted normal coordinates
and $|v^{\rm tot} \rangle$ is the nuclear wave function of the system with nuclear energy $E^{v^{\rm tot}}_{\rm nuc}$ (electronic and nuclear state indices have been omitted
for the sake of simplicity).
The Hessian matrix $\boldsymbol{F}^{\rm(q)}$ is diagonal in this representation, and the total nuclear wave function is then a product of $3N$ independent single-mode harmonic oscillator wave functions, with $N$ being the number of atomic nuclei.
The $p$-th peak position is given by the spectroscopic wavenumber $\tilde{\nu}_p$ and
determined by the $p$-th diagonal element $F_{pp}^{\rm(q)}$ of $\boldsymbol{F}^{\rm(q)}$,
\begin{equation}
\label{eq:diagElHessian}
F_{pp}^{\rm(q)} = 4 \pi^2c^2\tilde{\nu}_p^2,
\end{equation}
where $c$ is the speed of light in vacuum.
Eq.~(\ref{eq:nucSG}) is only valid for a vanishing nuclear gradient. In practice, this condition is enforced by a structure optimization of the molecular system, which can require a sizeable fraction of the total computational effort. The Hessian matrix in Cartesian coordinates is then determined analytically or semi-numerically, \textit{i.e.}, as finite differences of analytical gradients. It is transformed to mass-weighted coordinates and its center of mass translation and rotational components are projected out. Subsequent diagonalization then yields the peak positions of a vibrational spectrum in this harmonic approximation according to Eq.~(\ref{eq:diagElHessian}).
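As an illustration of this procedure, the following sketch (in Python, with a made-up force constant for an H$_2$-like stretch; the helper function is ours and not part of the \textsc{Sparrow} interface) mass-weights a toy Cartesian Hessian, diagonalizes it, and converts the non-zero eigenvalue to a wavenumber via Eq.~(\ref{eq:diagElHessian}):

```python
import numpy as np

# Unit conversions (CODATA values).
HARTREE_TO_J = 4.3597447222071e-18
BOHR_TO_M = 5.29177210903e-11
AMU_TO_KG = 1.66053906660e-27
C_CM_PER_S = 2.99792458e10

def harmonic_wavenumbers(hessian_cart, masses_amu):
    """Mass-weight a Cartesian Hessian (hartree/bohr^2), diagonalize it,
    and return the wavenumbers (cm^-1) of the non-zero modes."""
    dof_per_atom = hessian_cart.shape[0] // len(masses_amu)
    m = np.repeat(masses_amu, dof_per_atom)
    mw = hessian_cart / np.sqrt(np.outer(m, m))   # F^(q) in hartree/(bohr^2 amu)
    eigvals = np.linalg.eigvalsh(mw)
    pos = eigvals[eigvals > 1e-8]                 # drop the (numerically) zero modes
    omega_sq = pos * HARTREE_TO_J / (BOHR_TO_M**2 * AMU_TO_KG)   # in s^-2
    return np.sqrt(omega_sq) / (2.0 * np.pi * C_CM_PER_S)

# 1D toy problem: stretch of a homonuclear diatomic with a hypothetical
# force constant k (roughly H2-like; for illustration only).
k = 0.37  # hartree/bohr^2
F = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
print(harmonic_wavenumbers(F, [1.008, 1.008]))  # one mode near 4400 cm^-1
```

For the full $3N$-dimensional case, the translational and rotational components would additionally be projected out before diagonalization, as described above.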
In our interactive molecular exploration framework, the calculation of a vibrational spectrum is started each time the structure approaches a local minimum on the PES indicated by negligible forces on all atoms. For the detection of a local minimum, it is sufficient to have
the quantity $G$, \textit{i.e.}, the sum of the norms of all atomic nuclear gradients, satisfy the condition
\begin{equation}
\label{eq:grad_zero}
G = \sum_a \sqrt{ \sum_{\alpha \in \{x,y,z\}}\left(\frac{\partial E_{\rm el}(\boldsymbol{R}^{\rm(c)})}{\partial R^{\rm(c)}_{a, \alpha}} \right)^2} \leq \epsilon_{\rm grad} ,
\end{equation}
where $\epsilon_{\rm grad}$ is the threshold below which a molecular structure is considered to be close to a local minimum.
Note that this detection threshold $\epsilon_{\rm grad}$ can be orders of magnitude larger than the threshold usually applied
to terminate converged structure optimizations because a subsequent structure refinement will always be possible after detection
of a local minimum. In this work, $\epsilon_{\rm grad}$ was chosen to be 0.55\,hartree $\cdot$ bohr$^{-1}$, but can be modified during an exploration if deemed necessary.
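The detector of Eq.~(\ref{eq:grad_zero}) amounts to a few lines of code; a minimal sketch (our own helper function, not the actual \textsc{Sparrow} interface):

```python
import numpy as np

def near_local_minimum(gradients, eps_grad=0.55):
    """Sum of the Euclidean norms of the per-atom gradient vectors
    (hartree/bohr), compared against the detection threshold."""
    G = np.linalg.norm(np.asarray(gradients).reshape(-1, 3), axis=1).sum()
    return bool(G <= eps_grad)

print(near_local_minimum(np.zeros((4, 3))))      # prints True
print(near_local_minimum(np.full((4, 3), 0.5)))  # large forces: prints False
```

In an interactive session this check is evaluated after every structure update, so its cost is negligible compared to the gradient evaluation itself.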
Along a molecular trajectory, the calculation of a harmonic vibrational spectrum (structure optimization and frequency analysis) is initiated after the automatic detection of a local minimum. For structures that then remain close to the same local minimum of a PES, no vibrational spectrum after the first one is calculated.
In interactive quantum chemical explorations significant computational savings are attainable, because
the structural distortions induced by interactive manipulations are often local in nature.
To exploit this fact in a second approximation, we compare structures corresponding to two subsequent local minima to identify distorted molecular fragments.
Then, only the corresponding Hessian matrix entries that are expected to change have to be updated, which will reduce the computational effort significantly.
We note that the procedure outlined in this section provides peak positions in the harmonic approximation for various types of vibrational spectroscopy such as IR, vibrational circular dichroism, Raman, and Raman Optical Activity to mention only a few.
\subsection{Infrared Intensities}
The double harmonic approximation\cite{Califano1976, Wilson1955, Bratoz1958, Neugebauer2002} is the standard approach to routinely calculate IR spectra in computational chemistry.
Within this approximation, the generation of an IR spectrum involves two steps: the determination of peak positions as described in the previous section and the calculation of the corresponding intensities.
The intensity of the transition associated with the wavenumber $\tilde{\nu}_p$ is given by its integral absorption coefficient $\tilde{\mathcal{A}}_{p}$.
The integral absorption coefficient is proportional to the square
of the derivative of the molecular electric dipole moment $\boldsymbol{\mu}$ with respect to
the $p$-th normal coordinate, $R^{\rm(q)}_p$,\cite{Neugebauer2002}
\begin{equation}
\label{eq:abs_coef}
\tilde{\mathcal{A}}_p = \frac{N_{\rm A}\pi}{3c^2}\left(\frac{\partial\boldsymbol{\mu}}{\partial R^{\rm(q)}_p} \right)^2 ,
\end{equation}
where $N_{\rm A}$ is Avogadro's number. In \textsc{Sparrow}, we implemented the dipole derivative with respect to the nuclear coordinates as a finite difference
for the equilibrium Cartesian coordinates $R_{k, \rm{eq}}^{\rm(c)}$ according to the 3-point central-difference Bickley formula,
\begin{equation}
\left(\frac{\partial \boldsymbol{\mu}}{\partial R_k^{\rm(c)}}\right)_{R_k^{\rm(c)} = R_{k, \rm{eq}}^{\rm(c)}} \approx \frac{\boldsymbol{\mu}(R_{k, \rm{eq}}^{\rm(c)} + \Delta) - \boldsymbol{\mu}(R_{k, \rm{eq}}^{\rm(c)} - \Delta)}{2\Delta} ,
\end{equation}
where $\Delta$ is a step size chosen to be 0.01\,bohr\cite{Neugebauer2002}.
This derivative is subsequently transformed into mass-weighted normal coordinates.
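A minimal sketch of this central-difference step (in Python, with a made-up smooth model dipole standing in for the electronic-structure backend; the function names are ours, not the \textsc{Sparrow} API):

```python
import numpy as np

def model_dipole(coords):
    # Hypothetical smooth dipole function (a.u.) used in place of an
    # actual semi-empirical calculation; for illustration only.
    return np.array([0.1 * coords.sum(), 0.0, 0.05 * coords[0] ** 2])

def dipole_gradient(dipole, coords, delta=0.01):
    """d(mu)/dR^(c) as a (3N, 3) array via 3-point central differences
    with step delta in bohr, as in the formula above."""
    flat = np.asarray(coords, dtype=float).ravel()
    grad = np.zeros((flat.size, 3))
    for k in range(flat.size):
        plus, minus = flat.copy(), flat.copy()
        plus[k] += delta
        minus[k] -= delta
        grad[k] = (dipole(plus) - dipole(minus)) / (2.0 * delta)
    return grad

R = np.array([0.0, 0.0, 0.0, 1.4, 0.0, 0.0])   # toy 2-atom geometry (bohr)
print(dipole_gradient(model_dipole, R))
```

The resulting $(3N,3)$ array is then contracted with the normal-mode vectors to yield the derivative with respect to each $R^{\rm(q)}_p$ in Eq.~(\ref{eq:abs_coef}).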
For single-determinant wave functions, the electric dipole moment vector is defined as the sum of the classical nuclear electric dipole moment and the expectation value of the electric dipole operator $\hat{\boldsymbol{\mu}}_{\rm el} = -\sum_b^{n} \hat{\boldsymbol{r}}_b$ for the electronic ground state. For a Slater determinant $\Phi_0$, an antisymmetrized product of $n$ occupied molecular spin orbitals $\psi_i$,
the total molecular electric dipole moment is obtained as\cite{Szabo1996}
\begin{align}
\boldsymbol{\mu} &= \Big\langle \Phi_0 \Big| - \sum_{b = 1}^n \hat{\boldsymbol{r}}_b \Big| \Phi_0 \Big\rangle + \sum_a^N Z_{a}\boldsymbol{R}^{\rm(c)}_{a} \nonumber\\
&= - \sum_i^{n} \langle \psi_i | \hat{\boldsymbol{r}} | \psi_i \rangle + \sum_a^N Z_{a}\boldsymbol{R}^{\rm(c)}_{a} \nonumber\\
&= - \sum_\mu\sum_\nu P_{\mu\nu}\langle\chi_\mu|\hat{\boldsymbol{r}}|\chi_\nu\rangle + \sum_a^N Z_{a}\boldsymbol{R}^{\rm(c)}_{a}, \label{eq:dipole}
\end{align}
where the Slater--Condon rules have been exploited, the index $b$ refers to the electrons, $n$ is the total number of electrons, and $\langle\chi_\mu|\hat{\boldsymbol{r}}|\chi_\nu\rangle$ is an element of the
dipole matrix expressed in an atomic orbital basis spanned by functions $\chi_\mu$, into which the molecular orbitals $\psi_i$
are expanded. The one-electron reduced density matrix elements $P_{\mu\nu}$ are defined in the same atomic orbital basis.
$Z_{a}$ is the nuclear charge number of the $a$-th atom.
The electronic component of the dipole can be approximated by means of a population analysis such as the Mulliken population analysis\cite{Mulliken1955}. Note that many of its known limitations\cite{Reed1985,Herrmann2005} are mitigated in a minimal basis of a semi-empirical approach.
The electric dipole moment within DFTB can be evaluated as a sum of atomic contributions by means of a Mulliken population analysis as
\begin{equation}
\label{eq:mulliken}
\boldsymbol{\mu} = \sum_a^N \boldsymbol{\mu}_a = \sum_a^N \boldsymbol{R}^\textrm{(c)}_a \left( Z_a - \sum_{\lambda \in a} \sum_\sigma P_{\lambda\sigma} S_{\lambda\sigma} \right) \; ,
\end{equation}
where $\lambda$ is the index for an atomic orbital basis function centered on atom $a$, the index $\sigma$ refers to any atomic orbital basis function, and $\boldsymbol{S}$ is the overlap matrix with elements $S_{\lambda\sigma} = \langle \chi_\lambda | \chi_\sigma \rangle$.
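A compact sketch of Eq.~(\ref{eq:mulliken}) (in Python, with toy H$_2$-like minimal-basis matrices; all numerical values are made up for illustration and do not come from an actual DFTB run):

```python
import numpy as np

def mulliken_dipole(R, Z, P, S, ao_atom):
    """R: (N,3) positions (bohr); Z: (N,) core charges; P, S: density and
    overlap matrices in the AO basis; ao_atom[l]: atom on which AO l sits."""
    pop_ao = np.diag(P @ S)            # sum_sigma P_{lambda,sigma} S_{lambda,sigma}
    q = np.asarray(Z, dtype=float).copy()
    for ao, atom in enumerate(ao_atom):
        q[atom] -= pop_ao[ao]          # Mulliken partial charge q_a = Z_a - pop_a
    return (q[:, None] * R).sum(axis=0)

# Toy H2: two 1s AOs, doubly occupied bonding MO.
s = 0.66                               # made-up overlap between the two AOs
S = np.array([[1.0, s], [s, 1.0]])
c = 1.0 / np.sqrt(2.0 * (1.0 + s))     # bonding-MO coefficient
P = 2.0 * c**2 * np.ones((2, 2))       # P = 2 c c^T
R = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
print(mulliken_dipole(R, [1.0, 1.0], P, S, [0, 1]))  # symmetric -> zero dipole
```

For the symmetric homonuclear case the Mulliken charges vanish and so does the dipole, as expected; the trace of $\boldsymbol{P}\boldsymbol{S}$ recovers the electron count.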
\subsection{UV/Vis Spectroscopy}
\subsubsection{Linear Response Formalism}
In contrast to the solution of the nuclear Schr\"odinger equation within the harmonic approximation, which refers to a local minimum region of the PES, an electronic transition can be induced at every point of a PES. An efficient but approximate method should recover qualitatively correct spectra to reliably highlight characteristic electronic structural features of the system of interest.
The solution of the Roothaan--Hall equation in a Hartree--Fock or Kohn--Sham density functional theory (DFT) formalism yields molecular orbital coefficients as eigenvectors and the molecular orbital energies as eigenvalues. Estimating a vertical transition energy $\omega_{ia\sigma}$ from the ground state to an electronic state assumed to be characterized by a single electron substitution from the occupied orbital $i$ to the virtual orbital $a$ (both of spin $\sigma$), denoted by $a \leftarrow i$, as the difference $\Delta_{ia\sigma}$ of the orbital energy of the virtual, $\varepsilon_{a\sigma}$, and the occupied, $\varepsilon_{i\sigma}$, orbitals,
\begin{equation}
\label{eq:orb_en_diff}
\omega_{ia\sigma} \approx \Delta_{ia\sigma} = \varepsilon_{a\sigma} - \varepsilon_{i\sigma} ,
\end{equation}
will not be reliable in most cases. Nonetheless, it may serve as a good baseline model to improve on.
The need to relax the orbitals in the excited configuration has motivated specific procedures.
The maximum overlap method\cite{Gilbert2008} relaxes the orbitals through an additional self-consistent-field calculation with the electronic occupation corresponding to the $a \leftarrow i$ excitation. Similarly, the restricted open-shell Kohn--Sham theory aims at relaxing the molecular orbitals in an excited state, but, in contrast to the previous method, does so simultaneously for a linear combination of all determinants that are spin partners in a transition, so that it provides an excited state that is a pure spin state\cite{Rohrig2003, Ziegler1977, Frank1998}.
The linear response TD-DFT (LR-TD-DFT)\cite{Runge1984, Casida1998, Burke2005} and the time-dependent Hartree--Fock (TD-HF) methods both derive from the problem of a molecular system perturbed by a small electric field.
They lead to an eigenvalue problem\cite{Casida1995, Dreuw2005}
\begin{equation}
\label{eq:RPA}
\left[ \begin{array}{cc}
\boldsymbol{A} & \boldsymbol{B} \\
\boldsymbol{B}^* & \boldsymbol{A}^*
\end{array} \right]
\left[ \begin{array}{c}
\boldsymbol{X} \\
\boldsymbol{Y}
\end{array} \right]
= \omega
\left[ \begin{array}{cc}
\boldsymbol{1} & \boldsymbol{0} \\
\boldsymbol{0} & -\boldsymbol{1}
\end{array} \right]
\left[ \begin{array}{c}
\boldsymbol{X} \\
\boldsymbol{Y}
\end{array} \right] ,
\end{equation}
with the elements of the matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ expressed as
\begin{equation}
\label{eq:a_matrix_hf}
A_{ia\sigma,jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau}\Delta_{ia\sigma} + \left(ia|jb\right) - \delta_{\sigma\tau} \left(ij|ab\right) ,
\end{equation}
and
\begin{equation}
\label{eq:b_matrix_hf}
B_{ia\sigma,jb\tau} = \left(ia|bj\right) - \delta_{\sigma\tau} \left(ib|aj\right) ,
\end{equation}
for TD-HF, and
\begin{equation}
\label{eq:a_matrix_dft}
A_{ia\sigma,jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau}\Delta_{ia\sigma} + \left(ia|jb\right) + \delta_{\sigma\tau}\left(ia|f^{\sigma\tau}_{\rm xc}|jb\right) ,
\end{equation}
and
\begin{equation}
\label{eq:b_matrix_dft}
B_{ia\sigma,jb\tau} = \left(ia|bj\right) + \delta_{\sigma\tau}\left(ia|f^{\sigma\tau}_{\rm xc}|bj\right) ,
\end{equation}
for TD-DFT, where the labels $\sigma$ and $\tau$ indicate the spin part of the molecular orbitals in the excitations $a \leftarrow i$ and $b \leftarrow j$, respectively. The kernel $f^{\sigma\tau}_{\rm xc}$ represents the second derivative of the exchange--correlation functional $E_{\rm xc}$ with respect to the spin densities $\rho_\sigma$ and $\rho_\tau$, $\delta_{ij}$ denotes a Kronecker delta, and $\boldsymbol{X}$ and $\boldsymbol{Y}$ are the eigenvectors for the excitations and de-excitations, respectively. The two-electron integrals $\left(ia|jb\right)$ are defined as
\begin{equation}
\left(ia|jb\right) = \iint \psi_{i\sigma}(\boldsymbol{r})\psi_{a\sigma}(\boldsymbol{r}) \frac{1}{|\boldsymbol{r}-\boldsymbol{r}'|} \psi_{j\tau}(\boldsymbol{r}')\psi_{b\tau}(\boldsymbol{r}') \, \mathrm{d}^3r \, \mathrm{d}^3r'.
\end{equation}
If real molecular orbitals are assumed, the non-Hermitian eigenvalue problem of Eq.~(\ref{eq:RPA}) can be simplified to a lower-dimensional Hermitian one\cite{Jorgensen1981, Dreuw2005},
\begin{equation}
\label{eq:RPA_Herm}
(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) (\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} \boldsymbol{Z}=\omega^2 \boldsymbol{Z},
\end{equation}
where $\boldsymbol{Z} = (\boldsymbol{A} - \boldsymbol{B})^{-\frac{1}{2}} (\boldsymbol{X} + \boldsymbol{Y})$. If no exact exchange is
present as in pure density functionals, the matrix $(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2}$ will be diagonal, because $\left(ia|f^{\sigma\tau}_{\rm xc}|jb\right)$ is equal to $\left(ia|f^{\sigma\tau}_{\rm xc}|bj\right)$, and its square root is easy to calculate. Where this is not the case, invoking the Tamm--Dancoff approximation\cite{Hirata1999} or working within the configuration interaction (CI) singles approximation allows for the solution of a problem of the same dimension as the one in Eq.~(\ref{eq:RPA_Herm}), but without the need to compute the expensive square root of a matrix\cite{Chantzis2013}.
The CIS and TDA are invoked by neglecting the matrix $\boldsymbol{B}$, therefore simplifying Eq.~(\ref{eq:RPA}) to
\begin{equation}
\label{eq:CIS}
\boldsymbol{A}\boldsymbol{X}=\omega \boldsymbol{X} .
\end{equation}
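To make Eq.~(\ref{eq:CIS}) concrete, a toy TDA problem with three single excitations (hand-picked, illustrative matrix elements of our own choosing, not from a DFTB calculation) shows how the coupling shifts the excitation energies away from the bare orbital-energy differences of Eq.~(\ref{eq:orb_en_diff}):

```python
import numpy as np

# Diagonal of A: orbital-energy differences Delta_ia (hartree); the
# off-diagonal coupling K is a made-up constant for illustration.
delta = np.array([0.30, 0.35, 0.50])
K = 0.02
A = np.diag(delta) + K * (np.ones((3, 3)) - np.eye(3))

omega = np.linalg.eigvalsh(A)   # TDA excitation energies
print(omega)
```

By the variational principle, the lowest root lies below the smallest $\Delta_{ia\sigma}$ while the trace (and hence the sum of the excitation energies) is preserved.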
Expanding the excited states in a singly-excited-determinant or configuration state function (CSF, see below) basis causes a limitation in the description of excited states with a considerable double-excitation character in TD-DFT, TD-HF, CIS and TDA.
For their correct description more refined models are needed, which are currently out of reach
for a high-throughput framework, such as the explicit consideration of double excitations to yield
the configuration interaction with singles and doubles excitations (CISD) wave function
or multireference schemes\cite{Liu2018}, or by improving upon the adiabatic approximation in TD-DFT accounting for the effects of a frequency-dependent exchange--correlation kernel\cite{Maitra2004, Cave2004, Mazur2009, Mazur2011, Huix-Rotllant2011}. Furthermore, TD-DFT based on non-hybrid exchange--correlation functionals suffers from a lack of accuracy in the description of charge-transfer states\cite{Dreuw2004, Dreuw2005}. This problem may be mitigated by range-separated functionals\cite{Risthaus2014, Dreuw2004, Henderson2008, Baer2010, Leininger1997, Iikura2001, Niehaus2012} or by identification and subsequent removal of the offending excited states\cite{Kovyrshin2010, Kovyrshin2012}.
\subsubsection{Subspace Solver}
\label{sec:subspace_solber}
In this work, we calculate the excited states with semi-empirical adaptations of TD-DFT/TDA based on a DFTB framework, \textit{i.e.},
TD-DFTB. In the following paragraphs, we outline the equations needed for the implementation of an iterative diagonalizer based on the Davidson algorithm\cite{Davidson1975, Liu1978} for solving Eq.~(\ref{eq:RPA_Herm}). At the end of this section, we summarize the equations that are specific for TD-DFTB\cite{Niehaus2001}.
In a non-orthogonal modification of the block-Davidson method\cite{Parrish2016, Furche2016}, the solution to the first few roots of the eigenvalue problems in Eq.~(\ref{eq:RPA_Herm}) and Eq.~(\ref{eq:CIS}) is approximated in an incrementally growing Krylov subspace $\Omega$ of the full space. The matrix $\boldsymbol{H}$ is $(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) (\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2}$ in TD-DFTB, and $\boldsymbol{A}$ if the TDA is invoked. In contrast to the original block-Davidson method, the orthogonality of the basis functions is not enforced.
In each iteration, the product of matrix $\boldsymbol{H}$ and matrix $\boldsymbol{\Omega}$ containing the vectors $b^k$ ($k \in \{1, 2, 3, \ldots\}$) spanning the subspace $\Omega$ is calculated to obtain the so-called sigma vectors,
\begin{equation}
\label{eq:sigma}
\boldsymbol{\sigma} = \boldsymbol{H}\boldsymbol{\Omega} .
\end{equation}
In our implementation, in the first iteration $\boldsymbol{\Omega}$ has as many rows as $\boldsymbol{H}$ and a number of columns, $C$, that is defined on input and can range from the number of desired eigenpairs to the number of columns of $\boldsymbol{H}$. The elements of the top $C$ rows of $\boldsymbol{\Omega}$ are given by
\begin{equation}
\label{eq:init_guess}
\Omega_{ia\sigma, jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau} + \Gamma_{ia\sigma, jb\tau} ,
\end{equation}
where $\Gamma_{ia\sigma, jb\tau}$ is a random number between $-1\cdot 10^{-2}$ and $1\cdot 10^{-2}$, and the rest of the matrix is filled with zeroes. We noticed that this choice of $\boldsymbol{\Omega}$ cannot converge to solutions whose eigenvectors have no overlap with the initial $\boldsymbol{\Omega}$. Therefore, we added random numbers between $-1\cdot 10^{-5}$ and $1\cdot 10^{-5}$ to the first column vector of $\boldsymbol{\Omega}$, which solved the problem.
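The construction of this initial guess can be condensed into a few lines. The following NumPy sketch is illustrative only (function name, argument conventions, and the random-number generator are our own choices; the magnitudes $10^{-2}$ and $10^{-5}$ follow the description above):

```python
import numpy as np

def initial_guess(n_full, n_guess, seed=0):
    """Sketch of the initial subspace matrix Omega: an identity-like top
    block perturbed by uniform random numbers in [-1e-2, 1e-2], zeros
    elsewhere, plus extra noise in [-1e-5, 1e-5] on the first column so
    that no exact eigenvector is orthogonal to the initial subspace."""
    rng = np.random.default_rng(seed)
    omega = np.zeros((n_full, n_guess))
    omega[:n_guess, :n_guess] = (np.eye(n_guess)
        + rng.uniform(-1e-2, 1e-2, size=(n_guess, n_guess)))
    omega[:, 0] += rng.uniform(-1e-5, 1e-5, size=n_full)
    return omega
```

The extra noise on the first column guarantees a (tiny) overlap of the starting subspace with every eigenvector of $\boldsymbol{H}$.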
The matrix $\boldsymbol{H}$ is then projected onto the subspace $\Omega$ by
\begin{equation}
\boldsymbol{\tilde{H}} = \boldsymbol{\Omega}^\dagger \boldsymbol{\sigma} ,
\end{equation}
and the subspace generalized eigenvalue problem,
\begin{equation}
\label{eq:GEP}
\boldsymbol{\tilde{H}}v^h = \lambda^h \boldsymbol{S} v^h ,
\end{equation}
with the overlap matrix
\begin{equation}
\boldsymbol{S} = \boldsymbol{\Omega}^\dagger \boldsymbol{\Omega} ,
\end{equation}
is solved, yielding the subspace eigenvector $v^h$ corresponding to the $h$-th solution of Eq.~(\ref{eq:GEP}) and the estimate for the respective eigenvalue, $\lambda^h$. In the Davidson--Liu algorithm\cite{Liu1978}, the overlap matrix is taken to be equal to the identity matrix, as the orthogonality of the vectors $b^k$ spanning the subspace $\Omega$ is enforced. In the non-orthogonal version\cite{Furche2016} this is in general not the case. In particular, the norm of the vectors $b^k$ is allowed to decrease up to the point where the overlap matrix becomes almost singular. In this case, care must be taken when solving Eq.~(\ref{eq:GEP}): standard solvers begin with a Cholesky decomposition of the overlap matrix and therefore guarantee the correct solution only for positive-definite overlap matrices.
Therefore, we implemented a preconditioning step to reduce the condition number of the overlap matrix, and we use the simultaneous diagonalization technique\cite{Fukunaga1990} to obtain a solution to Eq.~(\ref{eq:GEP}) that remains valid even for almost singular overlap matrices.
In this more stable implementation, Eq.~(\ref{eq:GEP}) is solved first by preconditioning the overlap matrix as proposed by Furche and co-workers\cite{Furche2016},
\begin{equation}
\boldsymbol{S'} = \mathrm{diag}(\boldsymbol{S})^{-\frac{1}{2}} \, \boldsymbol{S} \, \mathrm{diag}(\boldsymbol{S})^{-\frac{1}{2}} \, ,
\end{equation}
where $\mathrm{diag}(\boldsymbol{S})^{-\frac{1}{2}}$ is the diagonal matrix containing the inverse square roots of the diagonal elements of $\boldsymbol{S}$. Then, the matrix containing the eigenvectors $v^h$, $\boldsymbol{v}$, and the corresponding diagonal eigenvalue matrix $\boldsymbol{\Lambda}$ are recovered by simultaneously finding a solution to the two problems
\begin{align}
\boldsymbol{v'}^T \, \boldsymbol{\tilde{H'}} \, \boldsymbol{v'} &= \boldsymbol{\Lambda}
\end{align}
and
\begin{align}
\boldsymbol{v'}^T \, \boldsymbol{S'} \, \boldsymbol{v'} &= \boldsymbol{1} \, ,
\end{align}
where
\begin{equation}
\boldsymbol{\tilde{H'}} = \mathrm{diag}(\boldsymbol{S})^{-\frac{1}{2}} \, \boldsymbol{\tilde{H}} \, \mathrm{diag}(\boldsymbol{S})^{-\frac{1}{2}} \,
\end{equation}
and
\begin{equation}
\boldsymbol{v'} = \mathrm{diag}(\boldsymbol{S})^{\frac{1}{2}} \boldsymbol{v}.
\end{equation}
An appropriate matrix $\boldsymbol{v}$ is found by first carrying out an eigenvalue decomposition of the matrix $\boldsymbol{S'}$ by finding the matrix $\boldsymbol{U}$ such that
\begin{equation}
\boldsymbol{U}^T \, \boldsymbol{S'} \, \boldsymbol{U} = \boldsymbol{\Sigma}\, ,
\end{equation}
with $\boldsymbol{\Sigma}$ being a diagonal matrix whose elements correspond to the eigenvalues of $\boldsymbol{S'}$.
A transformation matrix $\boldsymbol{T'}$ is constructed,
\begin{equation}
\boldsymbol{T'} = \boldsymbol{U}_{\rm R} \, \boldsymbol{\Sigma}^{-\frac{1}{2}}_{\rm R} \, ,
\end{equation}
where $\boldsymbol{U}_{\rm R}$ is the matrix whose columns are the columns of $\boldsymbol{U}$ corresponding to a non-zero eigenvalue and $\boldsymbol{\Sigma}^{-\frac{1}{2}}_{\rm R}$ is the diagonal matrix of the inverse square roots of the non-zero eigenvalues. Notably, $\boldsymbol{U}_{\rm R}$ is an $m \times r$ matrix and $\boldsymbol{\Sigma}_{\rm R}$ an $r \times r$ matrix, with $m$
being the dimension of $\boldsymbol{S'}$ and $r$ its rank. This is equivalent to performing the whitening transformation in the linear space of $\boldsymbol{S}$. The matrix $\boldsymbol{\tilde{H'}}$ is transformed with $\boldsymbol{T'}$ to yield the matrix $\boldsymbol{Q}$,
\begin{equation}
\boldsymbol{Q} = \boldsymbol{T'}^T \, \boldsymbol{\tilde{H'}} \, \boldsymbol{T'} \, ,
\end{equation}
which is in turn diagonalized to yield its eigenvector matrix $\boldsymbol{T''}$ and the diagonal matrix $\boldsymbol{\Lambda}_{\rm R}$ containing the non-zero eigenvalues of $\boldsymbol{\Lambda}$,
\begin{equation}
\boldsymbol{T''}^T \, \boldsymbol{Q} \, \boldsymbol{T''} = \boldsymbol{\Lambda}_{\rm R}\, .
\end{equation}
The solution $\boldsymbol{v'}_{\rm R}$ is finally obtained by
\begin{equation}
\boldsymbol{v'}_{\rm R} = \boldsymbol{T'} \, \boldsymbol{T''} \, .
\end{equation}
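The whole preconditioning and simultaneous-diagonalization procedure amounts to a short sequence of dense linear-algebra operations. The following NumPy sketch (function and variable names are our own, not \textsc{Sparrow}'s API) follows the steps above and remains valid for almost singular overlap matrices:

```python
import numpy as np

def solve_subspace_gep(H_sub, S, tol=1e-10):
    """Solve H~ v = lambda S v by preconditioning with diag(S)^(-1/2),
    whitening the overlap matrix, and diagonalizing the transformed H~
    (a sketch of the simultaneous diagonalization described in the text)."""
    d = 1.0 / np.sqrt(np.diag(S))            # diag(S)^(-1/2)
    Sp = d[:, None] * S * d[None, :]         # preconditioned overlap S'
    Hp = d[:, None] * H_sub * d[None, :]     # preconditioned H~'
    sig, U = np.linalg.eigh(Sp)              # eigendecomposition of S'
    keep = sig > tol                         # discard (near-)zero eigenvalues
    T1 = U[:, keep] / np.sqrt(sig[keep])     # T' = U_R Sigma_R^(-1/2)
    lam, T2 = np.linalg.eigh(T1.T @ Hp @ T1) # diagonalize Q = T'^T H~' T'
    v = d[:, None] * (T1 @ T2)               # undo preconditioning of v'_R
    return lam, v
```

For a positive-definite $\boldsymbol{S}$ all eigenvalues are kept and the routine reduces to an ordinary generalized eigensolver; the `keep` mask is what provides the robustness for rank-deficient overlap matrices.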
This method necessitates two eigenvalue decompositions and is therefore slower than the ordinary algorithm employing a Cholesky decomposition of the overlap matrix. The main advantage, however, lies in its robustness, \textit{i.e.}, in the fact that it can handle almost singular overlap matrices. Calculations indicate that our non-orthogonal Davidson--Liu algorithm adaptation with simultaneous diagonalization is often more efficient than the ordinary Davidson--Liu algorithm.
The Ritz estimate for the eigenvector $h$ in the full space is given by
\begin{equation}
\theta^h = \boldsymbol{\Omega} v^{h} .
\end{equation}
At this point, the residual vector $R^h$ is calculated as
\begin{equation}
R^h = \boldsymbol{H}\theta^h - \lambda^h \theta^h = \boldsymbol{\sigma}v^h - \lambda^h \theta^h ,
\end{equation}
and a new preconditioned residual $\delta^h$, defined as
\begin{equation}
\delta^h = \left( \overline{\boldsymbol{H}} - \boldsymbol{1}\lambda^h \right)^{-1} R^h ,
\end{equation}
is added to the subspace $\Omega$ as the new guess vector $b^{{\rm dim}(\Omega)+1}$, where $\overline{\boldsymbol{H}}$ is the matrix containing the exact or approximated diagonal of $\boldsymbol{H}$. In our implementation,
$\overline{H}_{ia\sigma,ia\sigma} = \Delta_{ia\sigma}$. The iterations are repeated until the norm of $R^h$ of the desired roots drops below a user-specified threshold. \\
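The complete iteration may be condensed into a dense-matrix sketch. For brevity, the NumPy sketch below implements the orthogonal Davidson--Liu variant (orthonormal subspace, overlap equal to the identity); the non-orthogonal version described above would instead solve the generalized subspace problem of Eq.~(\ref{eq:GEP}). $\boldsymbol{H}$ is stored explicitly here only for clarity, and all names are our own:

```python
import numpy as np

def davidson_liu(H, n_roots, Omega, tol=1e-6, max_iter=50):
    """Dense sketch of the Davidson--Liu loop: sigma vectors, subspace
    projection, Ritz vectors, residuals, preconditioned expansion.
    With an orthonormal subspace the overlap matrix is the identity."""
    h_diag = np.diag(H)                       # H-bar: diagonal of H
    Omega, _ = np.linalg.qr(Omega)            # enforce orthonormality
    for _ in range(max_iter):
        sigma = H @ Omega                     # sigma vectors
        lam, v = np.linalg.eigh(Omega.T @ sigma)  # subspace problem
        theta = Omega @ v[:, :n_roots]        # Ritz vectors
        new = []
        for h in range(n_roots):
            R = sigma @ v[:, h] - lam[h] * theta[:, h]   # residual
            if np.linalg.norm(R) > tol:
                new.append(R / (h_diag - lam[h]))  # preconditioned residual
        if not new:                           # all roots converged
            return lam[:n_roots], theta
        Omega, _ = np.linalg.qr(np.column_stack([Omega] + new))
    return lam[:n_roots], theta
```

In a production implementation only the matrix--vector products $\boldsymbol{H}b^k$ are formed, never $\boldsymbol{H}$ itself.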
In Eq.~(\ref{eq:sigma}), the matrix $\boldsymbol{H}$ need not be stored; only its product with each vector $b^k$ spanning $\Omega$ is required. How this product is constructed is the main algorithmic difference between CIS, TD-DFT, and TD-DFTB.
In TD-DFTB, the sigma vector is the product of the matrix $(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) (\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2}$ with a trial vector $b^k$. We provide the working equations for the method here and refer to Refs.~\citenum{Niehaus2001, Rueger2015} for a detailed discussion and derivation. By noting that the matrix $(\boldsymbol{A} - \boldsymbol{B})$ is diagonal for the DFTB method based on DFT with a pure functional, Eq.~(\ref{eq:RPA_Herm}) becomes
\begin{equation}
\label{eq:tddftb}
\boldsymbol{\Delta}^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) \boldsymbol{\Delta}^\frac{1}{2} \boldsymbol{Z} = \omega^2 \boldsymbol{Z},
\end{equation}
where $\boldsymbol{\Delta}$ is the diagonal matrix of the orbital energy differences with elements defined in Eq.~(\ref{eq:orb_en_diff}). The matrix $(\boldsymbol{A} + \boldsymbol{B})$ is given according to Eqs.~(\ref{eq:a_matrix_dft}) and (\ref{eq:b_matrix_dft}) by
\begin{equation}
\label{eq:a_plus_b}
(\boldsymbol{A} + \boldsymbol{B})_{ia\sigma,jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau}\Delta_{ia\sigma} + 2\left( \left(ia|jb\right) + \left(ia|f^{\sigma\tau}_{\rm xc}|jb\right) \right) .
\end{equation}
In TD-DFTB, the integrals in Eq.~(\ref{eq:a_plus_b}) are approximated with the Mulliken approximation\cite{Niehaus2001, Niehaus2009}, and Eq.~(\ref{eq:a_plus_b}) simplifies to
\begin{equation}
\label{eq:a_plus_b_dftb}
(\boldsymbol{A} + \boldsymbol{B})_{ia\sigma,jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau}\Delta_{ia\sigma} + 2\left(
\sum_{A}\sum_B q_A^{ia\sigma} q_B^{jb\tau} \left( \gamma_{AB} + \delta_{AB} (2\delta_{\sigma\tau} - 1) m_A\right)
\right) ,
\end{equation}
where $A$, $B$ are atom indices, $\gamma_{AB}$ is an element of the matrix $\boldsymbol{\gamma}$ containing functionals of the distance of two atoms (directly recovered from the ground-state DFTB calculation), $m_A$ is the magnetic Hubbard parameter obtained from atomic DFT calculations\cite{Niehaus2001, Ruger2016}, and the elements of the matrix of Mulliken transition charges $\boldsymbol{q}$ are defined as\cite{Niehaus2001}
\begin{equation}
q^{ia\sigma}_{A} = \frac{1}{2} \sum_{\mu \in A} \sum_\nu \left( C^{(occ), \sigma}_{\mu i}C^{(vir), \sigma}_{\nu a} S_{\mu\nu} + C^{(occ), \sigma}_{\nu i}C^{(vir), \sigma}_{\mu a} S_{\nu\mu} \right) .
\end{equation}
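The transition charges can be evaluated directly from the ground-state quantities. The following NumPy sketch uses illustrative shapes and names of our own choosing (the spin label is suppressed):

```python
import numpy as np

def mulliken_transition_charges(C_occ, C_vir, S, atom_of_ao):
    """Mulliken transition charges q_A^{ia}. C_occ: (n_ao, n_occ) and
    C_vir: (n_ao, n_vir) MO coefficient matrices, S: AO overlap matrix,
    atom_of_ao: atom index of each AO basis function."""
    n_at = max(atom_of_ao) + 1
    SCo, SCv = S @ C_occ, S @ C_vir
    q = np.zeros((n_at, C_occ.shape[1], C_vir.shape[1]))
    for mu, A in enumerate(atom_of_ao):
        # 1/2 [ C_{mu i} (S C_vir)_{mu a} + (S C_occ)_{mu i} C_{mu a} ]
        q[A] += 0.5 * (np.outer(C_occ[mu], SCv[mu])
                       + np.outer(SCo[mu], C_vir[mu]))
    return q
```

A useful consistency check: summed over all atoms, each transition charge vanishes, because occupied and virtual orbitals are orthogonal with respect to $\boldsymbol{S}$.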
In the case of a closed-shell reference, the solution of Eq.~(\ref{eq:CIS}) is conveniently obtained by expressing the $(\boldsymbol{A} + \boldsymbol{B})$ matrix in the basis spanned by the CSFs corresponding to singlet (${}^1\Psi_{ia}$) and triplet (${}^3\Psi_{ia}$) states,
\begin{align}
\label{eq:CSF}
{}^1\Psi_{ia} &= \frac{1}{\sqrt{2}} \left(\Phi_{ia\alpha} + \Phi_{ia\beta}\right) , \\
{}^3\Psi_{ia} &= \frac{1}{\sqrt{2}} \left(\Phi_{ia\alpha} - \Phi_{ia\beta} \right) ,
\end{align}
where $\Phi_{ia\alpha}$ denotes a determinant obtained by the substitution of the orbital $i$ with the orbital $a$, both with spin state $\alpha$. In this representation, the $(\boldsymbol{A} + \boldsymbol{B})$ matrix is block-diagonal, and the eigenvalue problem can be split into two independent smaller problems corresponding to the singlet and the triplet excited states.
The matrix elements in the CSF basis are derived in the supplementary information.
If the elements are expressed in the CSF basis, the spin labels $\sigma$ and $\tau$ are no longer needed,
because CSFs combine determinants corresponding to excitations with opposite spin parts from the HF determinant, as shown in Eq.~(\ref{eq:CSF}). Hence, the sigma vectors can be efficiently calculated in matrix notation by defining the matrix $\tilde{\boldsymbol{q}} = \boldsymbol{\Delta}^\frac{1}{2} \boldsymbol{q}$ as
\begin{align}
{}^1\boldsymbol{\sigma}^k &= \boldsymbol{b}^k \boldsymbol{\Delta}^2 + 4\tilde{\boldsymbol{q}}\boldsymbol{\gamma}\tilde{\boldsymbol{q}}^T\boldsymbol{b}^k \nonumber \\
{}^3\boldsymbol{\sigma}^k &= \boldsymbol{b}^k \boldsymbol{\Delta}^2 + 4\tilde{\boldsymbol{q}}\boldsymbol{m}\tilde{\boldsymbol{q}}^T\boldsymbol{b}^k ,
\end{align}
where $\boldsymbol{m}$ is a diagonal matrix with elements $m_{AA} = m_A$. The matrix products should be carried out from right to left in order to minimize their computational cost\cite{Rueger2015}. For the solution of the TDA problem of Eq.~(\ref{eq:CIS}), the sigma vectors are given in full analogy to the full TD-DFTB problem by
\begin{align}
\label{eq:sigma_TDA}
{}^1\boldsymbol{\sigma}_{\rm TDA}^k &= \boldsymbol{b}^k \boldsymbol{\Delta} + 2\boldsymbol{q}\boldsymbol{\gamma}\boldsymbol{q}^T\boldsymbol{b}^k \nonumber \\
{}^3\boldsymbol{\sigma}_{\rm TDA}^k &= \boldsymbol{b}^k \boldsymbol{\Delta} + 2\boldsymbol{q}\boldsymbol{m}\boldsymbol{q}^T\boldsymbol{b}^k .
\end{align}
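The right-to-left evaluation of these expressions can be sketched as follows (NumPy; the shape conventions, e.g.\ transition charges stored as an $n_{\rm trans} \times n_{\rm atoms}$ matrix, are our own illustrative assumptions):

```python
import numpy as np

def sigma_tda(b, Delta, q, gamma, m, singlet=True):
    """TDA sigma vector of Eq. (sigma_TDA), evaluated strictly right to
    left so that only matrix-vector products occur. b, Delta: vectors
    over transitions ia; q: (n_trans, n_atoms) transition charges;
    gamma, m: (n_atoms, n_atoms) atomic coupling matrices."""
    K = gamma if singlet else m
    return Delta * b + 2.0 * (q @ (K @ (q.T @ b)))

def sigma_full(b, Delta, q, gamma, m, singlet=True):
    """Full TD-DFTB sigma vector with q~ = Delta^(1/2) q."""
    qt = np.sqrt(Delta)[:, None] * q
    K = gamma if singlet else m
    return Delta**2 * b + 4.0 * (qt @ (K @ (qt.T @ b)))
```

Parenthesizing the products right to left keeps the cost linear in the number of transitions, avoiding the explicit $n_{\rm trans} \times n_{\rm trans}$ response matrix.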
The intensity of the electronic transition $I$ is given by its oscillator strength\cite{Casida1995}
\begin{equation}
f_I = \frac{2}{3}\omega_I \sum_{\alpha \in x,y,z} \Big| \sum_{ia\sigma} \langle\phi_{i}|\hat{r}_\alpha|\phi_a\rangle c_{ia\sigma} \Big|^2,
\end{equation}
where $\omega_I$ is the $I$-th electronic transition energy and $c_{ia\sigma} = X_{ia\sigma}$ in the determinant basis for CIS and TDA, and $c_{ia\sigma} = \sqrt{\frac{\Delta_{ia\sigma}}{\omega_I}}Z_{ia\sigma}$ for the full TD-DFT or RPA problem\cite{Casida1995}. In a singlet state, the coefficients of the same spatial orbitals with opposite spin are equal, \textit{i.e.}, $c_{ia\alpha} = c_{ia\beta}$, whereas in a triplet state they are opposite, \textit{i.e.}, $c_{ia\alpha} = -c_{ia\beta}$. Consequently, the oscillator strength vanishes for triplet electronic transitions. The electric dipole moment integral in the molecular orbital basis can be evaluated by approximating the integral with a Mulliken population analysis,
\begin{equation}
\langle\phi_i|\hat{\boldsymbol{r}}|\phi_a\rangle = \sum_A \boldsymbol{R}^{(c)}_A q^{ia\sigma}_{A} .
\end{equation}
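With the Mulliken-approximated dipole integrals, the oscillator strength reduces to a few contractions. The NumPy sketch below folds the spin label into the transition index and uses illustrative names of our own:

```python
import numpy as np

def oscillator_strength(omega_I, c, q, R):
    """Oscillator strength f_I with Mulliken-approximated dipole
    integrals. c: (n_trans,) transition coefficients c_{ia},
    q: (n_trans, n_atoms) transition charges, R: (n_atoms, 3) nuclear
    Cartesian coordinates, omega_I: transition energy."""
    dip = q @ R                 # <phi_i|r|phi_a> = sum_A R_A q_A^{ia}
    mu = dip.T @ c              # transition dipole, one entry per axis
    return (2.0 / 3.0) * omega_I * np.sum(mu**2)
```

Coefficients of opposite sign on degenerate spin pairs, as in a triplet, cancel in the contraction `dip.T @ c`, reproducing the vanishing triplet oscillator strength noted above.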
\subsubsection{Pruning the Excited-State Basis}
The matrices entering the eigenvalue problems for CIS/TDA and TD-DFTB, Eq.~(\ref{eq:CIS}) and Eq.~(\ref{eq:RPA_Herm}), are assumed to be diagonally dominant\cite{Davidson1975}. As a corollary, each basis function (\textit{i.e.}, Slater determinant or CSF) interacts considerably with only a few energetically close basis functions. This fact was exploited to limit the number of basis functions into which the excited states are expanded, with only a modest effect on the accuracy of the excitation energy, the intensity, and the character of the electronic transitions\cite{Grimme2013}.
The major contribution to the electronic transition energy for a transition dominated by the excitation $a \leftarrow i$ of spin $\sigma$ is accounted for by the orbital energy difference $\Delta_{ia\sigma}$. Therefore, one can include only the basis functions with an orbital energy difference smaller than the maximum energy the UV/Vis spectrum should capture. This strategy has the unpleasant characteristic of rapidly degrading the quality of the higher excited states, as more and more basis functions that are important for them are excluded. Grimme\cite{Grimme2013} proposed a scheme based on second-order perturbation theory to mitigate this accuracy loss: one calculates the cumulative second-order interaction of each remaining basis function, corresponding to the excitation $b \leftarrow j$, with the space of the initially included basis functions, corresponding to the excitations $a \leftarrow i$. In practice, the trial basis function is included as an excited-state basis function if its cumulative contribution,
\begin{equation}
\label{eq:pert_crit}
E_{jb\tau}^{(2)} = \sum_{ia\sigma} \frac{| A_{ia\sigma, jb\tau} |^2}{\Delta_{jb\tau} - \Delta_{ia\sigma}} ,
\end{equation}
is larger than a certain threshold, where the matrix $\boldsymbol{A}$ is substituted with the matrix $(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) (\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2}$, and the energy differences in the denominator with ${\Delta^2_{jb\tau} - \Delta^2_{ia\sigma}}$ in the full TD-DFTB problem. In the latter case, $E_{jb\tau}^{(2)}$ is expressed in units of hartree$^2$.
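For the TDA/CIS case, the criterion of Eq.~(\ref{eq:pert_crit}) is a single contraction per candidate configuration; a minimal NumPy sketch (names are our own) is:

```python
import numpy as np

def cumulative_contribution(A_col, Delta_P, Delta_jb):
    """Perturbative selection criterion of Eq. (pert_crit) for one
    candidate configuration jb. A_col: couplings A_{ia,jb} to the
    initially included configurations ia; Delta_P: their orbital-energy
    differences; Delta_jb: that of the candidate."""
    return np.sum(np.abs(A_col)**2 / (Delta_jb - Delta_P))
```

For the full TD-DFTB problem the same routine applies after substituting the matrix elements and using the squared orbital-energy differences in the denominator, as stated above.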
This technique is readily applicable in case of a TD-DFTB or TDA calculation, as the matrices $\boldsymbol{A}$ and $(\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) (\boldsymbol{A} - \boldsymbol{B})^\frac{1}{2}$ can be efficiently constructed. \\
The pruning of the excited-state basis introduces an error in the vertical transition energies. We outline the derivation of this error in case of the TDA, but it is analogous for TD-DFTB. The error $\Delta E_I$ on the energy of an electronic transition $I$ is given by
\begin{equation}
\Delta E_I = E_I^F - E_I^P ,
\end{equation}
where the basis set in which the matrix $\boldsymbol{A}$ is represented, $F$, is partitioned in two parts: (i) the set of basis functions spanning the pruned space, $P$, and (ii) the set of basis functions removed by the pruning, $S$.
Obtaining the transition energy in the full space, $E_I^F$, is impracticable as it would require the solution of the excited-state problem in the $F$ space, nullifying the efficiency gain from the space truncation. $E_I^F$ is approximated with second-order perturbation theory,
\begin{equation}
E_I^F \approx E_I^{(0)} + E_I^{(1)} + E_I^{(2)} \; ,
\end{equation}
and the corrections to the energy are obtained as\cite{Sharma2017}
\begin{equation}
E_I^{(0)} = E_I^P \;
\end{equation}
\begin{equation}
E_I^{(1)} = \sum_{p,q \in P} c^P_{I,p} c^{P, *}_{I,q} A_{pq}^{(1)} = 0 \;
\end{equation}
and
\begin{equation}
\label{eq:pt2_error}
E_I^{(2)} = \sum_{s \in S} \frac{ \left(\sum_{p \in P} c^P_{I,p} A_{ps}^{(1)}\right)^2}{E^P_I - A_{ss}} \; ,
\end{equation}
where we partitioned the matrix $\boldsymbol{A}$ such that
\begin{equation}
\boldsymbol{A} = \left[ \begin{array}{cc}
\boldsymbol{A}^{PP} & \boldsymbol{A}^{PS} \\
\boldsymbol{A}^{SP} & \boldsymbol{A}^{SS}
\end{array} \right] = \left[ \begin{array}{cc}
\boldsymbol{A}^{PP} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0}
\end{array} \right] + \left[ \begin{array}{cc}
\boldsymbol{0} & \boldsymbol{A}^{PS} \\
\boldsymbol{A}^{SP} & \boldsymbol{A}^{SS}
\end{array} \right] = \boldsymbol{A}^{(0)} + \boldsymbol{A}^{(1)}\; ,
\end{equation}
and $c_{I,i}^P$ is the coefficient with which the $i$-th basis function enters in the electronic transition $I$ calculated in the pruned space $P$. An estimate of the error introduced by the pruning is therefore given by
\begin{equation}
\Delta E_I = E_I^F - E_I^P \approx E_I^{(2)} \;.
\end{equation}
A measure for the error is therefore conveniently obtained after the solution of the excited-state problem in the pruned space $P$, as the evaluation of the matrix elements needed in Eq.~(\ref{eq:pt2_error}) is efficiently carried out analogously to Eq.~(\ref{eq:sigma_TDA}).
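The error estimate of Eq.~(\ref{eq:pt2_error}) can be sketched as follows (NumPy; variable names are our own, and the coupling block is passed densely only for illustration):

```python
import numpy as np

def pruning_error_estimate(c_P, E_P, A_PS, A_SS_diag):
    """Second-order estimate of the pruning error, Eq. (pt2_error).
    c_P: eigenvector of transition I in the pruned space, E_P: its
    energy, A_PS: coupling block A^{PS}, A_SS_diag: diagonal of A^{SS}."""
    num = (c_P @ A_PS)**2          # (sum_p c_p A_ps)^2 for every s in S
    return np.sum(num / (E_P - A_SS_diag))
```

For a diagonally dominant matrix, adding this correction to the pruned-space energy recovers the full-space eigenvalue to second order, which is the basis of the error measure.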
In direct methods where a full four-index transformation of the integrals from an atomic orbital basis to a molecular orbital basis is too expensive, as in the case of CIS or TD-DFT, one must develop a contraction scheme that allows one to benefit from the excited-state space pruning. One of the computational bottlenecks within an iteration of the Davidson algorithm is the contraction of the two-electron integrals with the pseudo-density matrix in the atomic orbital basis for the generation of the sigma vectors. Since the atomic orbital basis is unaffected by the pruning described above, we employ a partial transformation of the basis in which the two-electron integrals are expressed. The benefit is twofold: first, the number of integrals is decreased from $\mathcal{O}((O+V)^4)$ to $\mathcal{O}((O+V)^2(OV))$, where $O$ is the number of occupied orbitals and $V$ is the number of virtual orbitals. Second, it allows for the pruning of the indices expressed in the molecular orbital basis. Both factors accelerate the contraction of the two-electron integrals with the trial vectors. This approach consists of the transformation of two of the four indices of the Coulomb $(\mu\nu|\lambda \sigma)$ and exchange $(\mu \sigma|\lambda\nu)$ integrals from the atomic orbital basis into a basis formed by pairs of molecular orbitals corresponding to an electronic transition $a' \leftarrow i'$ of spin $\tau$ still present after pruning,
\begin{align}
\label{eq:1etrans}
(\mu\nu|i'a') &= \sum_{\lambda\sigma} C^{(occ), \tau}_{i'\lambda} C^{(vir), \tau}_{a'\sigma} (\mu \nu | \lambda \sigma) \nonumber\\
(\mu a'|i'\nu) &= \sum_{\lambda\sigma} C^{(occ), \tau}_{i'\lambda} C^{(vir), \tau}_{a'\sigma} (\mu \sigma | \lambda \nu) .
\end{align}
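The Coulomb-type half transformation of Eq.~(\ref{eq:1etrans}) can be illustrated with a dense NumPy sketch (a production code would work with integral batches rather than the full AO tensor; names and the dense layout are our own assumptions):

```python
import numpy as np

def half_transform(eri_ao, C_occ, C_vir, kept_pairs):
    """Two-index transformation (mu nu | i'a') for the transition pairs
    i'a' surviving the pruning. eri_ao: dense (n,n,n,n) AO integrals in
    chemists' notation; kept_pairs: list of (i, a) orbital-pair indices."""
    n = eri_ao.shape[0]
    out = np.empty((n, n, len(kept_pairs)))
    for k, (i, a) in enumerate(kept_pairs):
        # sum_{lambda sigma} C_{lambda i} C_{sigma a} (mu nu | lambda sigma)
        out[:, :, k] = np.einsum('mnls,l,s->mn',
                                 eri_ao, C_occ[:, i], C_vir[:, a])
    return out
```

Only the surviving pairs $i'a'$ are transformed, so the pruning directly shrinks the third dimension of the stored half-transformed integrals.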
\section{Computational Methodology}
\label{sec:comp_method}
In this work, DFTB3\cite{Gaus2011} (with the parameter set ``3ob-3-1'') was employed for ground-state calculations and for the evaluation of the Hessian matrices for IR spectroscopy. In the TD-DFTB method, no specific DFTB3 term is included, and the excited-state calculation is limited to a second-order expansion with respect to the density and, therefore, to terms specific to the DFTB2\cite{Elstner1998} method. However, this was shown not to affect the accuracy\cite{Nishimoto2015}.
Double-harmonic IR spectra were calculated in local minima of the PES. Along a trajectory, the exact local minimum was seldom reached. Therefore, we started an IR spectrum calculation when the molecular structure approached a minimum, \textit{i.e.}, when the sum of the atomic forces was smaller than a threshold $\epsilon_{\rm grad} = 0.55$\,hartree$\cdot$bohr$^{-1}$. At this point, the structure was optimized, unless otherwise specified, with the ``Very Tight'' convergence criteria described in Table~\ref{tab:conv_crit}, and a frequency analysis was carried out\cite{Bosia2020}. The elements of the Hessian matrix were obtained by a seminumerical procedure with a step size of 0.01\,bohr.
For the partial Hessian approach, we devised an iterative algorithm that avoids fitting to parts of the molecule that have been distorted. The algorithm iteratively fits the molecular structure corresponding to the current local minimum to the one of the preceding local minimum. During each iteration, the nuclei are classified, depending on the RMSD given by a quaternion fitting procedure\cite{Coutsias2004}, into three sets: one with the nuclei whose coordinates have strongly diverged between the two structures (in this work, defined as nuclei with an RMSD determined by the fit exceeding 1.0\,bohr), one with nuclei that have an RMSD smaller than $\epsilon_{\rm RMSD}$, and the rest. The nuclei that have strongly diverged are removed from the fitting set, and the next iteration is started. This procedure is repeated until the set of the nuclei with an RMSD smaller than $\epsilon_{\rm RMSD}$ does not change anymore.
How often the excited states are calculated along the trajectory is decided by the user at the start of an exploration. In UV/Vis spectroscopy, we exploit algorithmic acceleration of the excited-state linear-response problem through a non-orthogonal implementation of the Davidson--Liu algorithm.
We studied our approximations on three exemplary trajectories. The trajectories are available in the supplementary information in the concatenated XYZ format. The trajectories \textbf{T1} and \textbf{T2} involve long-chained enols undergoing an interactively induced keto--enol tautomerism. In \textbf{T1}, the reaction is induced at one end of the aliphatic chain; in \textbf{T2}, in the middle of the aliphatic chain. During the interactive exploration session, an external force was applied by the user to the oxygen-bound hydrogen atom of the enol in order to break the bond with the oxygen and form one with the carbon. The electronic structure in the interactive exploration was calculated with the PM6 method\cite{Stewart2007}, and both trajectories were refined with the DFTB3 method in a B-Spline optimization\cite{Vaucher2018}.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.5\textwidth]{figure1.pdf}
\caption{Lewis structures involved in the three trajectories labeled as \textbf{T1}, \textbf{T2}, and \textbf{MD} in this work. }
\label{fig:trajectories}
\end{figure}
The trajectory \textbf{MD} was generated by a force-field molecular dynamics simulation of allylphenylether \textit{in vacuo} with the leap-frog algorithm and an integration step of 1\,fs. One structure was recorded every 250 steps. The force-field parameters were optimized in a system-focused fashion according to the \textsc{SFAM} method\cite{Brunken2020} with RI/PBE-D3BJ/def2-SVP\cite{Perdew1996, Perdew1997, Weigend2005, Grimme2010} and the def2/J auxiliary basis\cite{Weigend2006} and are available in the supplementary information. The molecular dynamics simulation was carried out with the \textsc{SCINE} software package\cite{Bosia2020}. Initial velocities were sampled from a Maxwell--Boltzmann distribution at 300\,K and the system was coupled to a Berendsen thermostat\cite{Berendsen1984} at 300\,K with a coupling time of 10\,fs. The Berendsen thermostat is known not to create a canonical ensemble and to suffer from the ``flying ice cube'' effect\cite{Harvey1998}. However, both limitations are not relevant for the scope of this work, as no thermodynamic data are extracted from the molecular dynamics simulation.
With $\epsilon_{\rm grad} = 0.55$\,hartree$\cdot$bohr$^{-1}$, local minima are detected in the trajectory \textbf{T1} at the first and 177th structures, and in the trajectory \textbf{T2} at the first and 184th structures. These structures are labelled \textbf{T1.I}, \textbf{T1.II}, \textbf{T2.I}, and \textbf{T2.II}, respectively. The acronym of the optimization tightness is added as a suffix to indicate the convergence criterion of the structure optimization. For the convergence criteria summarized in Table~\ref{tab:conv_crit}, the optimized structures for the first minimum of the trajectory \textbf{T1} are labelled \textbf{T1.I.N}, \textbf{T1.I.VL}, \textbf{T1.I.L}, \textbf{T1.I.M}, \textbf{T1.I.T}, and \textbf{T1.I.VT} in order of increasing tightness of the structure optimization.
In order to assess the reliability of our approach for the calculation of UV/Vis spectra, we compared UV/Vis spectra of 200 structures evenly spaced along the \textbf{MD} trajectory, \textit{i.e.}, every twentieth structure in the trajectory, calculated with the TD-DFTB method with the ones calculated with the linear-response SCS-CC2 method\cite{Christiansen1995, Hellweg2008} with default spin-component scaling constants and the cc-pVTZ basis set\cite{Dunning1989, Woon1993}, as implemented in Turbomole 7.4.1\cite{Ahlrichs1989}. A comparison with the more accurate, but prohibitively costly equation-of-motion CC3 method implemented in the $e^T$ 1.0.7 program\cite{Folkestad2020}, demonstrated the accuracy of the SCS-CC2 method as a reference for the description of valence excitations in the system at hand (data available in the supplementary information). The UV/Vis spectra were generated by convolution of the stick spectrum obtained from the linear-response calculation with a Lorentzian with a full width at half maximum of 0.3\,eV. For TD-DFTB, the first 30 excited states were calculated; for linear-response SCS-CC2, the first 10. The difference in the number of calculated excited states can be explained by ``ghost'' states: misrepresented low-lying charge-transfer states present in the GGA exchange--correlation density functionals upon which the TD-DFTB formalism is based\cite{Kovyrshin2012, Goerigk2010, Sundholm2003, Neugebauer2005}. Long-range-corrected TD-DFTB\cite{Niehaus2012, Bold2020} could mitigate this problem.
We compared the efficiency of our implementation against that of the DFTB+ 18.2 software package\cite{Hourahine2020} for the calculation of the first 30 excited states, with an initial guess space of 30 vectors, of the first structure of the \textbf{T1} trajectory. A reference calculation with the DFTB2 model was carried out, and the excited states were calculated with the TD-DFTB2 model with both DFTB+ and \textsc{Sparrow}. The DFTB+ input file as well as the input for the calculation with \textsc{Sparrow} are available in the supplementary information in a compressed folder.
Normal modes were matched with a linear sum assignment\cite{Kuhn1955} as implemented in the SciPy 1.4.1 Python package\cite{Scipy2020} with the element-wise absolute value of the Duschinsky matrix $|\boldsymbol{R}^{(q)\dagger}\boldsymbol{R}^{(q)}|$ as score matrix, with exception of the normal modes calculated in section~\ref{sec:approx_struct_opt}, which were matched according to their energetic ordering, as the different molecular structures involved made the previous assignment unreliable.
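The mode-matching step amounts to one call to SciPy's linear sum assignment on the absolute overlap matrix; a minimal sketch (matrix and function names are ours) is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_modes(L_ref, L_new):
    """Match normal modes of two structures by maximizing the summed
    absolute overlap |L_ref^T L_new| (the elementwise absolute value of
    the Duschinsky-type score matrix). Columns are normal modes.
    Returns, for each reference mode, the index of the matched new mode."""
    score = np.abs(L_ref.T @ L_new)
    _, col = linear_sum_assignment(-score)   # negate to maximize
    return col
```

Negating the score converts SciPy's cost minimization into the desired overlap maximization.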
All calculations were performed on a computer equipped with an Intel Xeon E-2176G CPU (3.70\,GHz base frequency) on 6 parallel threads. A very limited amount of virtual memory is needed for the calculation of systems of this size with semi-empirical methods.
\section{Results}
In this section we analyze the reliability of the approximations needed to carry out ultra-fast calculations of IR and UV/Vis spectra.
\subsection{Infrared Spectroscopy}
First, we inspect the reliability of two prototypical semi-empirical models, PM6 and DFTB3, for the calculation of IR spectra, by evaluating the absolute deviation of the Hessian matrix elements of \textbf{T2.I} calculated with DFT (PBE0/def2-TZVP/D3\cite{Adamo1999,Weigend2005, Grimme2010}, implemented in Orca 4.2.0\cite{Neese2012,Neese2018}), DFTB3, and PM6. Furthermore, the vibrational frequencies of \textbf{T2.I} obtained with these methods were compared to each other. These calculations were carried out on the structure optimized with DFT. Both the Orca input file and the coordinates of the structure analyzed are available in the supplementary information. It is obvious from Fig.~\ref{fig:freq_comparison}, especially when comparing the high-wavenumber modes, that DFTB3 is a good candidate semi-empirical method for the calculation of infrared spectra of a quality similar to those calculated with DFT for the organic molecule under study. The superiority of DFTB3 over the PM6 model is corroborated by the analysis of the absolute deviations of the Hessian matrix elements between the different methods shown in the supplementary information, where the difference between the Hessian matrices calculated with PM6 and either DFT or DFTB3 is considerably larger than that between the Hessian matrices calculated with DFT and DFTB3. Hence, DFTB3 was chosen for all IR spectroscopy calculations in this work.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.9\textwidth]{figure2.pdf}
\caption{Comparison of the vibrational frequencies calculated for the \textbf{T2.I} structure optimized with DFT (PBE0/def2-TZVP/D3), PM6 and DFTB3. Red circles refer to the comparison of DFT and PM6 vibrational frequencies, blue boxes to the one of DFT and DFTB3. The diagonal striped line indicates the identity. The normal modes were matched with a linear sum assignment with the absolute value of the Duschinsky matrix as score matrix.}
\label{fig:freq_comparison}
\end{figure}
Next, we study the two approximations which affect the IR spectrum calculations. First, we assess the loss in accuracy of the position of the peaks and in the elements of the Hessian matrix if the molecular structure is optimized with loose convergence criteria. Second, we evaluate the partial Hessian approach.
\begin{table}[hbpt]
\centering
\caption{Mean computational time and number of DFTB3 steps required for structure optimization over all traversed minima on the PES without any of the approximations presented in this work. Times are separated into the time required for the structure optimizations and the time required to calculate and diagonalize the Hessian matrices and to evaluate the dipole gradient, the sum of which is listed under the column ``Time for Hessian matrix''. Timings are presented for the \textbf{T1} and \textbf{T2} trajectories and are given as the mean calculation time of 3 calculations $\pm$ standard deviation over all minima in the trajectory. Significant digits are given by the standard deviation: if it is larger than 2.5 multiplied by the appropriate power of ten, it is rounded to the first digit, otherwise to the second one. The convergence criteria are listed in Table~\ref{tab:conv_crit} under the ``Tight'' optimization profile. In the structure optimization in the first minimum of the trajectory \textbf{T2}, the internal coordinates break down after iteration 19. Afterwards, the optimization is resumed in Cartesian coordinates.}
\vspace{0.1cm}
\begin{tabular}{l|ccc}
\hline\hline
& Time for & Number of & Time for \\
System & structure optimization [ms] & steps & Hessian matrix [ms] \\
\hline
\textbf{T1} & & & \\
I. minimum & $4830 \pm 30$ & 111 & $2540 \pm 140$ \\
II. minimum & $4580 \pm 100$ & 113 & $2341 \pm 7$ \\
\hline
\textbf{T2} & & & \\
I. minimum & $18500 \pm 120$ & 805 & $1880.7\pm 1.5$ \\
II. minimum & $2886 \pm 23$ & 79 & $2200 \pm 500$ \\
\hline\hline
\end{tabular}
\label{tab:time_normal}
\end{table}
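The rounding convention stated in the caption (and used in all timing tables below) can be made concrete with a short helper; the following Python sketch is only illustrative, with a hypothetical function name:

```python
import math

def round_mean_std(mean, std):
    """Round mean +/- std following the convention of the timing tables:
    if the standard deviation is larger than 2.5 times the appropriate
    power of ten, keep one significant digit of it, otherwise two;
    the mean is rounded to the same decimal place."""
    exponent = math.floor(math.log10(abs(std)))
    n_sig = 1 if std > 2.5 * 10 ** exponent else 2
    decimals = n_sig - 1 - exponent  # decimal place of the last kept digit
    return round(mean, decimals), round(std, decimals)
```

For instance, a standard deviation of 140 keeps two significant digits ($140 < 250$), whereas a standard deviation of 7 keeps only one ($7 > 2.5$).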
The trajectories \textbf{T1} and \textbf{T2} represent challenging targets for ultra-fast infrared spectroscopy because of their size. For these calculations, $\epsilon_{\rm grad} = 0.55$\,hartree$\cdot$bohr$^{-1}$ results in two minima being detected along the trajectories corresponding to the start and end states of the system as shown in Fig.~\ref{fig:trajectories} for \textbf{T1} and \textbf{T2}. The energies and gradients along these trajectories are available in the supplementary information.
\subsubsection{Approximate Structure Optimization}
\label{sec:approx_struct_opt}
The structure optimization represents a sizeable fraction of the computational effort to obtain an IR spectrum, as shown in Table~\ref{tab:time_normal}. Therefore, we explored to what extent a partial structure optimization, \textit{i.e.}, one carried out with looser convergence criteria, affects the elements of the Hessian matrix and the positions of the peaks in the respective IR spectra. To this aim, we define five optimization profiles, \textit{i.e.}, different sets of criteria governing the convergence of the structure optimization, summarized in Table~\ref{tab:conv_crit}.
\begin{table}[hbpt]
\centering
\caption{Convergence criteria for the different optimization profiles defined in the main text. The profile ``None'' indicates no optimization. ``Max Step'' and ``RMS Step'' are the maximum deviation of any Cartesian coordinate and the root mean square deviation of the Cartesian coordinate vector between two iterations, respectively. ``Gradient'' refers to the nuclear gradient of the total energy; ``Max'' and ``RMS'' have the same meaning as above. $\Delta$ is the variation of the total energy between two iterations. \# indicates the number of criteria that must be satisfied besides $\Delta$ in order to reach convergence. All values are given in atomic units.}
\vspace{0.1cm}
\begin{tabular}{l|cccccc}
\hline \hline
Optimization & Max & RMS & Max & RMS & $\Delta$ & \# \\
profile&Step&Step&Gradient&Gradient& &\\
\hline
None & - & - & - & - & - & - \\
Very Loose & $1 \cdot 10^{-2}$ & $5 \cdot 10^{-2}$ & $5 \cdot 10^{-3}$ & $1 \cdot 10^{-2}$ & $1 \cdot 10^{-4}$ & 2 \\
Loose & $5 \cdot 10^{-3}$ & $1 \cdot 10^{-2}$ & $1 \cdot 10^{-3}$ & $5 \cdot 10^{-3}$ & $1 \cdot 10^{-5}$ & 2\\
Medium & $1 \cdot 10^{-4}$ & $5 \cdot 10^{-3}$ & $5 \cdot 10^{-4}$ & $1 \cdot 10^{-4}$ & $1 \cdot 10^{-6}$ &2\\
Tight & $1 \cdot 10^{-4}$ & $5 \cdot 10^{-4}$ & $5 \cdot 10^{-5}$ & $1 \cdot 10^{-5}$ & $1 \cdot 10^{-7}$ & 3\\
Very Tight & $2 \cdot 10^{-5}$ & $1 \cdot 10^{-5}$ & $2 \cdot 10^{-5}$ & $1 \cdot 10^{-5}$ & $1 \cdot 10^{-7}$ & 4\\
\hline \hline
\end{tabular}
\label{tab:conv_crit}
\end{table}
\begin{table}[hbpt]
\centering
\caption{Computational times required for the DFTB3 structure optimization of \textbf{T1.II} and \textbf{T2.II} with different convergence criteria. Timings are given as the mean of the calculation time of 3 calculations $\pm$ standard deviation. Significant digits are given by the standard deviation: if it is larger than 2.5 multiplied by the appropriate power of ten, then it is rounded to the first digit, otherwise to the second one.}
\vspace{0.1cm}
\begin{tabular}{l|ll}
\hline \hline
& Optimization & Optimization \\
Structure & profile & time [ms] \\
\hline
\textbf{T1.II} & None & - \\
& Very Loose & $535.7 \pm 2.0$ \\
& Loose & $1738 \pm 6$ \\
& Medium & $3256 \pm 7$ \\
& Tight & $4580 \pm 100$ \\
& Very Tight & $5473.3 \pm 2.5$ \\
\hline
\textbf{T2.II} & None & - \\
& Very Loose & $503.7 \pm 2.3$ \\
& Loose & $1524.0 \pm 2.2$ \\
& Medium & $2357 \pm 7$ \\
& Tight & $2856 \pm 21$ \\
& Very Tight & $4710 \pm 160$ \\
\hline \hline
\end{tabular}
\label{tab:approx_opt_timings}
\end{table}
We assessed the viability of carrying out an approximate structure optimization before calculating an IR spectrum by two criteria: the gain in efficiency and the loss in accuracy with respect to performing a full structure optimization. We summarized the mean times needed to carry out a structure optimization for \textbf{T1.II} and \textbf{T2.II} in Table~\ref{tab:approx_opt_timings}. \textbf{T1.I} and \textbf{T2.I} are detected at the first structure along the trajectories, and their comparison is therefore skewed by the different starting situations of the trajectories. The acceleration factors range from 1.2 to more than one order of magnitude.
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure3.pdf}
\caption{Top panel: from left to right, the panels correspond to the absolute deviations of the DFTB3 Hessian matrix elements of \textbf{T1.II.N}, \textbf{T1.II.VL}, \textbf{T1.II.L}, \textbf{T1.II.M}, and \textbf{T1.II.T} from the ones of \textbf{T1.II.VT}. Bottom panel: from left to right, the panels correspond to the absolute deviations of the DFTB3 Hessian matrix elements of \textbf{T2.II.N}, \textbf{T2.II.VL}, \textbf{T2.II.L}, \textbf{T2.II.M}, and \textbf{T2.II.T} from the ones of \textbf{T2.II.VT}. Each pixel in a panel corresponds to an element of the Hessian matrix, darker colors correspond to larger deviations. The calculated matrices are sparse, therefore most of the matrix elements have a negligible absolute deviation.}
\label{fig:approxopt_hessian}
\end{figure}
In order to identify the sets of optimization criteria that still yield an acceptable accuracy, we compared the elements of the Hessian matrices obtained for the differently optimized molecular structures with the ones obtained from a structure optimized with the ``Very Tight'' convergence criteria. The deviations in the Hessian matrix elements in Cartesian coordinates of \textbf{T1.II} and \textbf{T2.II} are depicted in Fig.~\ref{fig:approxopt_hessian}. Even though the nature of the rearrangement is different, both molecular systems show a similar trend with increasing tightness of the convergence criteria: the deviations in the Hessian elements are strong in \textbf{T1.II.N} and \textbf{T2.II.N} and similarly present in \textbf{T1.II.VL} and \textbf{T2.II.VL}. In the Hessian matrices calculated from \textbf{T1.II.L} and \textbf{T2.II.L} the deviations diminish, especially noticeably in \textbf{T2.II.L}, and they are negligible in the Hessians calculated from \textbf{T1.II.M} and \textbf{T2.II.M}.
\begin{table}[hbpt]
\centering
\caption{RMSD of the calculated vibrational frequencies obtained for the structure optimized with different convergence criteria and compared to the ones obtained after optimization with the ``Very Tight'' convergence profile. The IR spectral region is divided into a low frequency region ($< 800$\,cm$^{-1}$), a middle frequency region (between 800\,cm$^{-1}$ and 2000\,cm$^{-1}$), and a high frequency region ($> 2000$\,cm$^{-1}$). The vibrational frequencies are calculated for the second minimum along the trajectories \textbf{T1} and \textbf{T2} with the DFTB3 Hamiltonian.}
\vspace{0.1cm}
\tabcolsep=0.07cm
\begin{tabular}{l|lccc}
\hline\hline
& Optimization & Low $\tilde{\nu}_p$ & Middle $\tilde{\nu}_p$ & High $\tilde{\nu}_p$ \\
Structure & profile & RMSD [cm$^{-1}$]& RMSD [cm$^{-1}$] & RMSD [cm$^{-1}$] \\
\hline
\textbf{T1.II} & None & 47.7 & 20.5 & 22.3\\
& Very Loose & 16.1 & 6.0 & 2.1\\
& Loose & 9.8 & 4.6 & 3.1\\
& Medium & 3.14 & 1.5 & 1.6\\
& Tight & 0.3 & 0.2 & 0.2\\
\hline
\textbf{T2.II} & None & 45.9 & 23.0 & 47.0\\
& Very Loose & 12.6 & 5.2 & 2.6\\
& Loose & 7.5 & 3.4 & 1.5\\
& Medium & 1.4 & 0.5 & 0.5\\
& Tight & 1.1 & 0.4 & 0.5\\
\hline\hline
\end{tabular}
\label{tab:freq_rmsd_approxopt}
\end{table}
Comparing the elements of the Hessian matrix allows us to conservatively evaluate the error introduced because the Hessian matrix is calculated from structures that have been optimized differently and are therefore in slightly different conformations. The comparison of the Hessian elements expressed in Cartesian coordinates leads to spurious differences due to local rotations in the molecular structure. In this case, even if the effect on the normal-mode vibrational frequencies is negligible, the effect on the blocks of the Hessian matrix corresponding to the molecular fragments involved in the local rotation is sizeable. We therefore inspected the differences in the vibrational frequencies, summarized in Table~\ref{tab:freq_rmsd_approxopt}, which turned out to be robust with respect to the above-mentioned spurious effects in the Hessian matrix elements.
We assigned the normal modes of the different calculations according to their energetic order, because carrying out a linear sum assignment with the absolute value of the Duschinsky matrix suffers from the fact that the overlap of modes expressed in Cartesian coordinates is not an appropriate similarity metric for different molecular structures.
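Where the structures are similar enough (\textit{cf.} Fig.~\ref{fig:freq_comparison}), the linear sum assignment with the absolute value of the mode-overlap (Duschinsky-like) matrix as score can be sketched as follows. For the few modes of a toy example, a brute-force search over permutations stands in for the Hungarian algorithm used in practice (e.g., \texttt{scipy.optimize.linear\_sum\_assignment}); the function name is illustrative:

```python
from itertools import permutations
import numpy as np

def assign_modes(L_ref, L_new):
    """Match normal modes of two calculations by maximizing the summed
    absolute overlap. Brute-force stand-in for the linear sum assignment
    mentioned in the text; feasible only for small mode counts.

    L_ref, L_new: (3N, n_modes) arrays of normal-mode vectors expressed
    in the same Cartesian frame."""
    score = np.abs(L_ref.T @ L_new)  # |Duschinsky|-like overlap matrix
    n = score.shape[0]
    best_perm, best_val = None, -np.inf
    for perm in permutations(range(n)):
        val = sum(score[i, perm[i]] for i in range(n))
        if val > best_val:
            best_perm, best_val = perm, val
    return best_perm  # best_perm[i]: mode of L_new matched to mode i of L_ref
```

The returned tuple maps each reference mode to its best-matching partner in the new calculation.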
Comparing the vibrational frequencies obtained from structures optimized with different convergence criteria shows that the normal modes in the low and middle spectral regions ($< 2000$\,cm$^{-1}$) behave distinctly differently from the normal modes in the higher frequency region ($> 2000$\,cm$^{-1}$). While for normal modes lying in the lower spectral region an RMSD in the vibrational frequencies smaller than 5\,cm$^{-1}$, suitable for a qualitatively reliable spectrum, is reached only from a structure optimization with the ``Medium'' optimization profile, for the stiff modes this accuracy is reached already with the structure optimized with the ``Very Loose'' convergence criteria. Furthermore, the error in the frequencies in the higher IR spectral region decreases faster than the respective error in the Hessian matrix elements. Molecular fragments involved in localized, stiff modes, such as -CH or -OH stretching, usually found at the high-frequency end of the IR spectrum, are easier to optimize than the fragments involved in low-frequency normal modes, which are often delocalized across the whole molecule or correspond to internal rotations of fragments of comparably large size.
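The per-region frequency RMSDs of Table~\ref{tab:freq_rmsd_approxopt} can be computed in a few lines once the modes are matched; a minimal Python sketch with illustrative names:

```python
import numpy as np

def region_rmsd(freq_a, freq_b, low=800.0, high=2000.0):
    """RMSD between two matched lists of vibrational frequencies (cm^-1),
    split into the low (<800), middle (800-2000), and high (>2000)
    spectral regions used in the tables. Mode i of freq_a is assumed to
    be already assigned to mode i of freq_b."""
    a, b = np.asarray(freq_a, float), np.asarray(freq_b, float)
    out = {}
    for name, mask in (("low", a < low),
                       ("middle", (a >= low) & (a <= high)),
                       ("high", a > high)):
        d = a[mask] - b[mask]
        out[name] = float(np.sqrt(np.mean(d ** 2))) if d.size else None
    return out
```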
This analysis highlights three important aspects of partially optimizing a molecular structure prior to the calculation of an IR spectrum. First, the computational time can be lowered significantly by adopting looser convergence thresholds. Second, if necessary, the error introduced by such optimizations can be reduced efficiently by tightening the structure optimization convergence criteria. Third, for the spectral region including the diagnostic IR spectral bands ($> 2000$\,cm$^{-1}$), speedups by up to one order of magnitude are possible while keeping the RMSD of the frequencies below 5\,cm$^{-1}$.
\subsubsection{Partial Hessian Approach}
\begin{table}[hbpt]
\centering
\caption{Summary of the partial Hessian approach for the \textbf{T1} trajectory. The time needed to calculate the Hessian matrix as well as the number of nuclei for which the second derivatives of the energy with respect to the nuclear Cartesian coordinates need to be evaluated are shown for \textbf{T1.II} for different thresholds $\epsilon_{\rm RMSD}$. The time is indicated as mean $\pm$ standard deviation in milliseconds. The IR spectral region is divided into a low frequency region ($< 800$\,cm$^{-1}$), a middle frequency region (between 800\,cm$^{-1}$ and 2000\,cm$^{-1}$), and a high frequency region ($> 2000$\,cm$^{-1}$). The RMSD between the vibrational frequencies obtained with each $\epsilon_{\rm RMSD}$ and those obtained with $\epsilon_{\rm RMSD} = 0$\,bohr (equivalent to no partial Hessian approach) is given for the three spectral regions.}
\vspace{0.3cm}
\tabcolsep=0.2cm
\begin{tabular}{lccccc}
\hline
\hline
$\epsilon_{\rm RMSD}$ & Time & Nuclei to & Low $\tilde{\nu}_p$ & Middle $\tilde{\nu}_p$ & High $\tilde{\nu}_p$ \\
$[$bohr$]$ & [ms] & Evaluate & RMSD [cm$^{-1}$]& RMSD [cm$^{-1}$] & RMSD [cm$^{-1}$] \\
\hline
0.0 & $2341 \pm 5$ & 60 & - & - & - \\
0.05 & $1408 \pm 4$ & 32 & 108 & 10.6 & 0.77 \\
0.1 & $753 \pm 1$ & 16 & 120 & 11.6 & 1.47 \\
0.2 & $764 \pm 9$ & 13 & 86 & 12.1 & 1.42 \\
0.3 & $511 \pm 2$ & 9 & 18 & 7.0 & 1.17 \\
0.5 & $498 \pm 5$ & 7 & 165 & 145 & 248 \\
\hline
\hline
\end{tabular}
\label{tab:freq_rmsd_partial}
\end{table}
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure4.pdf}
\caption{Comparison of the elements of the Hessian matrices with the partial Hessian approach for the second minimum along trajectory \textbf{T1}. From left to right: increasingly loose thresholds for the detection of structural fragments to recalculate, $\epsilon_{\rm RMSD}$ (0.05\,bohr, 0.1\,bohr, 0.2\,bohr, 0.3\,bohr, 0.5\,bohr). Top row: deviation of the elements of the Hessian matrix calculated in the second minimum along the trajectory compared with the Hessian matrix for the same structure but without the partial Hessian approximation. Bottom row: deviation of the elements of the Hessian matrix calculated in the second minimum along the trajectory compared with the Hessian matrix calculated in the first minimum. A difference is present only for the elements of the Hessian matrix corresponding to molecular fragments that have an RMSD determined by quaternion fitting larger than $\epsilon_{\rm RMSD}$.}
\label{fig:t1_partial_hessian_compared}
\end{figure}
We assessed the partial Hessian approach using the trajectories \textbf{T1} and \textbf{T2} as examples. The first minimum in both trajectories is calculated without any of the approximations introduced in this work. In the second minimum, only the elements of the Hessian matrix corresponding to fragments in the molecular structure that are not similar enough to the previous structure are evaluated. The rest of the Hessian matrix is copied from the one calculated in the previous minimum. A single parameter, $\epsilon_{\rm RMSD}$, controls the similarity threshold between the nuclear coordinates and represents the maximum RMSD of the Cartesian coordinates of each nucleus after an iterative alignment. A single global least-squares quaternion fit of the molecular structures is not an ideal option for the identification of invariant structural fragments. Ideally, a local alignment algorithm would ignore the fragments of the molecule that were distorted and optimally fit the target molecule to the parts of the molecular structure that are not distorted between two neighboring local minima. In this way, the number of Hessian matrix elements that need to be reevaluated is kept at a minimum.
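The flagging of nuclei whose surroundings changed can be sketched as follows. Note that this sketch uses a single global Kabsch (SVD) fit, which yields the same optimal rotation as a least-squares quaternion fit, rather than the iterative local alignment described above; all names are illustrative:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid-body fit of P onto Q (both (N, 3) Cartesian coordinates);
    Kabsch algorithm via SVD, equivalent to a quaternion fit."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    V, _, Wt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])  # guard against an improper rotation
    return Pc @ (V @ D @ Wt) + Q.mean(0)

def flag_moved_nuclei(P, Q, eps_rmsd):
    """Indices of nuclei whose displacement after alignment exceeds
    eps_rmsd (bohr); their Hessian blocks would be recalculated."""
    dev = np.linalg.norm(kabsch_align(P, Q) - Q, axis=1)
    return np.nonzero(dev > eps_rmsd)[0]
```

For structures related by a pure rigid-body motion, no nucleus is flagged and the whole Hessian matrix is inherited.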
The partial Hessian approach consists of two ingredients. First, the iterative alignment algorithm identifies the molecular fragments for which the chemical environment has changed significantly from the previous minimum, \textit{i.e.}, for which the corresponding Hessian matrix elements need to be recalculated. Second, the blocks of the Hessian matrix corresponding to the identified fragments are evaluated as numerical differences of analytical gradients. This is a trivially parallel task and can be implemented by adapting a full semi-numerical Hessian evaluation algorithm. The iterative alignment algorithm ensures that a local distortion between two minima does not contaminate the alignment of the parts of the molecule it leaves unchanged. Common structural fragments may be difficult to identify in the case of local minima on a trajectory connected by a global distortion, and the whole Hessian matrix may then need to be calculated.
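The second ingredient, the block-wise semi-numerical update, can be sketched as central finite differences of gradients restricted to the flagged nuclei; here \texttt{grad} is a placeholder for an analytical gradient routine, and the function name is hypothetical:

```python
import numpy as np

def partial_hessian(grad, x, H_old, moved_atoms, step=1e-3):
    """Semi-numerical partial Hessian update (sketch).

    grad: function returning the (3N,) gradient at Cartesian coords x,
    H_old: (3N, 3N) Hessian inherited from the previous minimum,
    moved_atoms: nuclei flagged by the alignment; only the rows/columns
    of their Cartesian components are recomputed by central differences
    of analytical gradients, the rest is copied from H_old."""
    H = H_old.copy()
    coords = [3 * a + k for a in moved_atoms for k in range(3)]
    for i in coords:
        xp, xm = x.copy(), x.copy()
        xp[i] += step
        xm[i] -= step
        row = (grad(xp) - grad(xm)) / (2.0 * step)
        H[i, :] = row  # update row i ...
        H[:, i] = row  # ... and column i (Hessian symmetry)
    return H
```

Only the flagged coordinates require gradient evaluations (two per coordinate) instead of all $3N$, which is where the savings originate.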
The partial Hessian approach is particularly advantageous for local minima connected to previous ones by localized structural distortions. This is corroborated by the data shown in Fig.~\ref{fig:t1_partial_hessian_compared} for trajectory \textbf{T1}, where the enol undergoing a keto--enol tautomerism is at the end of an aliphatic chain. Most of the molecular structure is largely unaffected by this rearrangement, and only a small fraction of the Hessian elements needs to be updated (the molecular fragments that need to be evaluated for each $\epsilon_{\rm RMSD}$ are indicated in the supplementary information). In the bottom row of Fig.~\ref{fig:t1_partial_hessian_compared}, the Hessian matrix calculated at a minimum along the trajectory \textbf{T1} is compared with the one calculated in the previous minimum. The molecular fragments responsible for the most intense deviations in the Hessian matrix elements with respect to the one of the previous minimum are readily identified and recalculated (Fig.~\ref{fig:t1_partial_hessian_compared}, bottom row), even at comparably high $\epsilon_{\rm RMSD}$. In this favorable example, the time needed to update the Hessian matrix is reduced from 2341\,ms in the full calculation to 511\,ms in the calculation with $\epsilon_{\rm RMSD} = 0.3$\,bohr. Higher thresholds lead to a severe misrepresentation of the normal modes and normal mode frequencies, as summarized in Table~\ref{tab:freq_rmsd_partial}. Normal modes were assigned through a linear sum assignment with the absolute value of the Duschinsky matrix as score matrix. The RMSD in the frequencies in the low and middle IR spectral regions is not minimal at the smallest $\epsilon_{\rm RMSD}$, but rather at a threshold of 0.3\,bohr. This is due to the fact that Hessian matrix elements important for delocalized normal modes are calculated at different minima. A higher fidelity is achieved by taking all the Hessian matrix elements at the same structure. 
Such delocalized normal modes are prominently present in the low-frequency range of the spectra, which explains why this effect is most pronounced for frequencies below 800\,cm$^{-1}$.
In \textbf{T2.II}, the rotation around a central dihedral angle after the tautomerization during the structure optimization causes a global structural rearrangement, and the alignment does not recognize any fragment that is similar enough to a fragment in the previous minimum (the alignment of the optimized structure is depicted in the supplementary information). In this case, the full Hessian matrix is calculated. Hence, if the path connecting two local minima on a PES has the character of a global structural rearrangement, the reliability of the calculated Hessian matrix is not affected; rather, the performance of the approach is. Two avenues could be explored to overcome this current limitation of the iterative alignment algorithm. First, the optimized structures corresponding to multiple local minima can be saved, and the structure at hand can be aligned to multiple previous structures to find the one with the best match. During an exploration, the probability of finding similar structures grows with the number of structures that have already been explored. Second, the iterative alignment algorithm could be improved by first partitioning the molecular structure into local fragments that are aligned independently\cite{Brunken2020}.
The time needed to evaluate the Hessian matrix with $\epsilon_{\rm RMSD} = 0.0$\,bohr may differ slightly from the time given in Table~\ref{tab:time_normal}, even though both result in the same number of matrix elements to calculate, because the algorithm for the evaluation of a partial Hessian matrix is slightly different from the one for the evaluation of the full Hessian matrix.
\subsection{UV/Vis Spectroscopy}
We calculated the UV/Vis spectra for the trajectories \textbf{MD} and \textbf{T1}, \textit{cf.}, Figs.~\ref{fig:md_interactive_static} and~\ref{fig:t1_interactive_static} (in the interactive HTML version of this work, the spectrum of every structure along both trajectories can be inspected). While the spectrum for the \textbf{MD} system behaves rather erratically along the trajectory due to the comparatively large structural changes, the power of real-time UV/Vis spectroscopy for diagnostic purposes is very well illustrated by the \textbf{T1} system. Here, a small peak at about 6\,eV is present for the alcohol. This peak sharply increases in intensity around the transition state (see Fig.~\ref{fig:t1_interactive_static}), only to vanish completely for the ketone at the end of the trajectory.
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure5.png}
\caption{The interactive spectroscopy approach for the structures on the trajectory \textbf{MD}. On the left panel, a molecular structure is displayed. On the right panel, the corresponding UV/Vis spectrum is shown. In the HTML version of this work, this figure is available in an interactive format. One can indicate the desired structure along the trajectory by writing its index in the ``Index'' box or by moving the slider. Both spectra and structures are available as JavaScript arrays and are not calculated on-the-fly.}
\label{fig:md_interactive_static}
\end{figure}
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure6.png}
\caption{The interactive spectroscopy approach for the structures on the trajectory \textbf{T1}. On the left panel, a molecular structure is displayed. On the right panel, the corresponding UV/Vis spectrum is shown. In the HTML version of this work, this figure is available in an interactive format. One can indicate the desired structure along the trajectory by writing its index in the ``Index'' box or by moving the slider. Both spectra and structures are available as JavaScript arrays and are not calculated on-the-fly.}
\label{fig:t1_interactive_static}
\end{figure}
\begin{figure}[hbpt]
\centering
\includegraphics[width=\textwidth]{figure7.pdf}
\caption{Comparison of the UV/Vis spectra of TD-DFTB3 and linear-response SCS-CC2 for a subset of structures along the \textbf{MD} trajectory. Left panel: the first 30 electronic excited states calculated with the TD-DFTB3 method are convoluted with a Lorentzian function with full-width at half-maximum of 0.3\,eV. Right panel: the first 10 electronic excited states calculated with the linear-response SCS-CC2 method. Darker colors correspond to more intense spectral bands, and the color is given by the oscillator strength of the electronic transition normalized to the one of the most intense electronic transition for each method. Every horizontal projection is the UV/Vis spectrum of a single structure. Every twentieth structure along the trajectory was sampled for calculation, resulting in a total of 101 calculated structures.}
\label{fig:dftb3_vs_cc2}
\end{figure}
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.65\textwidth]{figure8.pdf}
\caption{Difference between the UV/Vis spectra of TD-DFTB3 and linear-response SCS-CC2 for a subset of the \textbf{MD} trajectory, with a constant blue-shift of 0.46\,eV applied to the TD-DFTB spectrum. The spectrum of TD-DFTB is calculated by convolution of the first 30 excited states, the SCS-CC2 one by convolution of the first 10 excited states with a Lorentzian function with full-width at half-maximum of 0.3\,eV. Blue parts of the spectrum indicate that the normalized oscillator strength of the SCS-CC2 spectrum is larger than that of TD-DFTB, and vice versa for red colors. Darker colors correspond to larger differences. Every horizontal projection is the difference UV/Vis spectrum of a single structure. Every twentieth structure along the trajectory was sampled for calculation, resulting in a total of 101 calculated structures.}
\label{fig:dftb3_vs_cc2_diff}
\end{figure}
An appropriate method for efficient electronic excited-state calculations must yield results qualitatively comparable to those of more accurate methods at a fraction of the cost. In Fig.~\ref{fig:dftb3_vs_cc2}, we show that, through the calculation of enough states, the linear-response SCS-CC2 spectrum is recovered qualitatively by the TD-DFTB method, albeit red-shifted by 0.46\,eV. The difference spectrum in Fig.~\ref{fig:dftb3_vs_cc2_diff} also shows that the difference between the two spectra after blue-shifting the TD-DFTB spectra by 0.46\,eV is acceptable, in light of the fact that transition properties such as oscillator strengths are particularly sensitive, with a mean of the maximum absolute deviation of the normalized intensity in each spectrum of 0.35. The results of a PBE/def2-TZVP/TD-DFT calculation are similar to the ones obtained with TD-DFTB, and the same calculation with the PBE0 hybrid exchange--correlation functional cures in part the ghost-state problem (data provided in the supplementary information).
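The broadened spectra compared above are obtained by convoluting the stick spectra with Lorentzian line shapes of a given full-width at half-maximum; a minimal sketch (normalization and function name are illustrative):

```python
import numpy as np

def lorentzian_spectrum(energies, strengths, grid, fwhm=0.3):
    """Broaden a stick spectrum (excitation energies in eV with their
    oscillator strengths) with Lorentzian line shapes of the given
    full-width at half-maximum, evaluated on the points of `grid`."""
    gamma = fwhm / 2.0                       # half-width at half-maximum
    x = np.asarray(grid, float)[:, None]
    e = np.asarray(energies, float)[None, :]
    f = np.asarray(strengths, float)[None, :]
    lines = f * (gamma / np.pi) / ((x - e) ** 2 + gamma ** 2)
    return lines.sum(axis=1)                 # one intensity per grid point
```

At a line center, the intensity is $f/(\pi\gamma)$, and it falls to half of that one half-width away from the center.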
The adequacy of the initial guess is important for the convergence properties of the subspace solver. Therefore, we devised two approaches to provide starting vectors to the iterative diagonalizer that are closer to the solution of the excited-state problem. First, the initial guess was provided by the solution of the excited-state problem of the previous structure along the trajectory under study. Second, a linear combination of previous excited-state solutions was attempted, along the lines of the DIIS approach we introduced\cite{Muehlbach2016} for the acceleration of the self-consistent-field convergence in ground-state calculations. Both strategies showed only a limited acceleration of the calculation of the first 30 excited states for each structure along the trajectory \textbf{T1} compared to the standard guess of Eq.~(\ref{eq:init_guess}) (see supplementary information). Moreover, this effect was no longer observed if the same initial subspace was complemented by 90 standard initial vectors for a total of 120 trial vectors.
We compared the efficiency of our implementation against that of the DFTB+ software package for the calculation of the first 30 excited states of the first structure of the \textbf{T1} trajectory with an initial guess space of 30 vectors. The average total wall time required by the DFTB+ program for the excited-state calculation was 1.0\,s, whereas \textsc{Sparrow} required 246\,ms for the same calculation (wall times obtained as an average over 3 calculations).
\subsubsection{Pruning the Excited-State Basis}
Especially for systems with many possible electronic transitions, the improved iterative diagonalizer alone may be insufficient to provide the required acceleration. Therefore, we assessed the suitability of approximate solutions of the excited-state problem obtained by limiting the size of the excited-state basis. This approach exploits the diagonally dominant structure of the matrix $\boldsymbol{\Delta}^\frac{1}{2} (\boldsymbol{A} + \boldsymbol{B}) \boldsymbol{\Delta}^\frac{1}{2}$ in Eq.~(\ref{eq:tddftb}). Including only as many determinants with the lowest diagonal elements as solutions required in the excited-state problem neglects the coupling between these determinants and the rest of the excited-state space. This issue can be tamed by additionally including all determinants whose coupling to the initially selected determinants, estimated by the perturbation-theory criterion of Eq.~(\ref{eq:pert_crit}), exceeds a threshold $\epsilon_{\rm PT2}$. In Fig.~\ref{fig:md_t1_pruned}, we compare UV/Vis spectra obtained with several $\epsilon_{\rm PT2}$ for the first structure of the trajectories \textbf{MD} and \textbf{T1}. The first peaks of the two spectra are less dependent on $\epsilon_{\rm PT2}$ than the ones at the higher end of the spectra. The roots responsible for the first peaks are allowed to couple with the other determinants with a diagonal element lower than the maximally required energy span of the spectrum independently of $\epsilon_{\rm PT2}$. Hence, these states are often well described already without any additional basis function, provided a sufficient number of excited states is to be determined. The spectral features in both spectra in Fig.~\ref{fig:md_t1_pruned} are qualitatively recovered if $\epsilon_{\rm PT2}$ is lower than $1\cdot10^{-4}$\,hartree$^2$.
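The selection step can be sketched as follows; note that a generic squared-coupling estimate stands in for the actual criterion of Eq.~(\ref{eq:pert_crit}), which is not reproduced here, and all names are illustrative:

```python
import numpy as np

def prune_basis(diag, coupling, n_states, eps_pt2):
    """Select a reduced excited-state basis (sketch).

    diag: (M,) diagonal elements of the response matrix (one per singly
    excited determinant); coupling: (M, M) off-diagonal part. The
    n_states determinants lowest in energy are kept as seeds, plus every
    determinant whose squared coupling to a seed exceeds eps_pt2 (a
    placeholder for the perturbation criterion of the paper)."""
    order = np.argsort(diag)
    seed = order[:n_states]
    strong = np.any(np.abs(coupling[seed, :]) ** 2 > eps_pt2, axis=0)
    return np.union1d(seed, np.nonzero(strong)[0])  # sorted index set
```

Lowering \texttt{eps\_pt2} grows the basis toward the full space; raising it recovers the bare diagonal selection.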
In Table~\ref{tab:time_t1_md_pruning}, we provide the timings required to calculate the first 30 excited states of the first structure in the trajectories \textbf{MD} and \textbf{T1} with an initial subspace dimension of 30 and several $\epsilon_{\rm PT2}$. While the accuracy is not very dependent on the system size, the computational gain of pruning the excited-state space is. Calculating the first 30 excited states of a structure in the \textbf{MD} trajectory with sufficient accuracy ($\epsilon_{\rm PT2} = 5\cdot 10^{-5}$\,hartree$^2$) is a modest two times faster than without pruning. The speedup grows if the calculation is carried out for a larger system: the same spectrum for the first structure of the \textbf{T1} trajectory is calculated already 3.4 times faster than the respective calculation in the full space.
The reasons for the speedup are twofold: first, all linear algebra operations, such as the generation of the sigma vectors, are carried out in a smaller space. Second, the number of iterations of the Davidson algorithm is smaller. The reduction of the dimension of the excited-state basis can potentially allow for the efficient non-iterative diagonalization of the matrix in the eigenvalue problem in Eq.~(\ref{eq:tddftb}).
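Once the basis is pruned, the corresponding sub-block of the (Hermitian) response matrix can indeed be diagonalized directly; a sketch of this non-iterative route, with illustrative names:

```python
import numpy as np

def lowest_states_pruned(M_full, keep, n_states):
    """Non-iterative solution of the pruned eigenvalue problem:
    diagonalize only the sub-block of the Hermitian response matrix
    spanned by the kept determinants and return the lowest roots."""
    M_sub = M_full[np.ix_(keep, keep)]
    return np.linalg.eigvalsh(M_sub)[:n_states]
```

When the neglected couplings are small, the lowest roots of the sub-block closely reproduce those of the full problem.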
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure9.pdf}
\caption{Comparison of the UV/Vis spectra calculated with TD-DFTB3 and different $\epsilon_{\rm PT2}$ for the first structures of the trajectories \textbf{MD} and \textbf{T1}. For both systems, the first 30 electronic excited states of the first structure along the respective trajectory are calculated with the TD-DFTB3 method and are convoluted with a Lorentzian function with full-width at half-maximum of 0.3\,eV. Left panel: resulting UV/Vis spectra for the first structure along the \textbf{MD} trajectory. Right panel: resulting UV/Vis spectra for the first structure along the \textbf{T1} trajectory. The inclusion thresholds described in the theory section for each spectrum are indicated in the legend in units of hartree$^2$. If this threshold were zero, the whole excited-state basis would be included.}
\label{fig:md_t1_pruned}
\end{figure}
\begin{table}[hbpt]
\centering
\caption{Dimension of the excited-state basis, mean computational time, and speedup for the calculation of the first 30 transitions of the UV/Vis spectra corresponding to the first structures along the trajectories \textbf{MD} and \textbf{T1} with several $\epsilon_{\rm PT2}$ for the pruning of the excited-state basis. Timings are given as the mean of the time required to calculate a UV/Vis spectrum over 3 calculations. The speedup is relative to the time without pruning in the first column. Significant digits are given by the standard deviation: if it is larger than 2.5 multiplied by the appropriate power of ten, then it is rounded to the first digit, otherwise to the second one.}
\vspace{0.1cm}
\begin{tabular}{l|cccccc}
\hline
\hline
& \multicolumn{6}{c}{$\epsilon_{\rm PT2}$ [hartree$^2$]} \\
System & 0 & $1\cdot 10^{-5}$ & $5\cdot 10^{-5}$ & $1\cdot 10^{-4}$ & $5\cdot 10^{-4}$ & $1\cdot 10^{-3}$\\
\hline
\textbf{MD} &&&&&&\\
\hline
Dimension & 634 & 471 & 288 & 182 & 53 & 38 \\
Time [ms] &$39.3 \pm 0.9$ & $40 \pm 3$ & $29.3 \pm 0.5$ & $29.3\pm 1.2$ & $12 \pm 3$ & $9.7 \pm 0.4$ \\
Speedup &1x&1.3x&2.0x& 2.5x &9.0x&8.8x\\
\hline
\textbf{T1} &&&&&&\\
\hline
Dimension & 4352 & 848 & 252 & 151 & 45 & 35\\
Time [ms] & $259.0 \pm 0.8$ & $116 \pm 6$ & $84.0 \pm 0.8$ & $52 \pm 6$ & $30.0 \pm 0.0$ & $29.0 \pm 0.0$\\
Speedup & 1x & 2.2x & 3.1x & 5.0x & 8.6x & 8.9x\\
\hline
\hline
\end{tabular}
\label{tab:time_t1_md_pruning}
\end{table}
\clearpage
\section{Conclusions}
Computational spectroscopy in high-throughput and interactive quantum chemistry settings is challenging due to its high computational cost. Even with suitably parametrized models, such as semi-empirical Hamiltonians, obtaining spectroscopic information with sufficient accuracy at a high rate is a formidable task that requires the development of tailored approaches for the reduction of computational hurdles.
The approaches discussed in this work allow for the efficient calculation of spectroscopic signals in high-throughput and interactive quantum chemistry. While some of these methods are specific to the calculation of closely related structures, others are of more general applicability.
Vibrational spectroscopy in the harmonic approximation presents two computational bottlenecks: structure optimization and Hessian-matrix calculation. We pursued two options to accelerate these calculations. First, we assessed the viability of incomplete structure optimizations for the calculation of vibrational spectra.
With an example, we characterized how the tightness of the convergence thresholds for structure optimization affects the error in the spectroscopic peak positions and intensities and in the Hessian matrix elements. We identified a set of convergence criteria that were sufficient to reduce the computational time at a limited toll on accuracy. In particular, the diagnostic high-frequency spectral bands were well reproduced already with an approximate structure optimization due to the localized nature of the corresponding normal modes.
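A loose convergence check on gradient and step norms, of the kind tightened or relaxed in the analysis above, can be sketched as follows. The criterion names and threshold values are illustrative and do not correspond to any specific program package.

```python
import math

def is_converged(gradient, step, max_grad_thresh=1e-3,
                 rms_grad_thresh=5e-4, max_step_thresh=1e-2):
    """Check convergence criteria on the gradient and the last step.

    gradient, step: flat lists of Cartesian components (a.u.).
    The three thresholds are hypothetical defaults; loosening them
    terminates the optimization earlier at the cost of accuracy.
    """
    n = len(gradient)
    max_grad = max(abs(g) for g in gradient)
    rms_grad = math.sqrt(sum(g * g for g in gradient) / n)
    max_step = max(abs(s) for s in step)
    return (max_grad < max_grad_thresh
            and rms_grad < rms_grad_thresh
            and max_step < max_step_thresh)
```

Checking the maximum and root-mean-square gradient components separately guards against a small average gradient masking one strongly strained coordinate.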
Second, we introduced a partial Hessian approach to reduce the number of Hessian matrix elements to be calculated by leveraging the similarities between the structures corresponding to the local minima on the PES for which a spectrum is required. In order to do so, the structure corresponding to the local minimum for which a spectrum needs to be evaluated is compared with the one of the previous minimum. A local iterative alignment scheme, controlled by a single parameter, was designed to identify the invariant parts of the molecule. The elements of the Hessian matrix corresponding to parts of the molecular structure that have been successfully aligned and are therefore sufficiently similar are not recalculated but inherited from the previous structure.
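The bookkeeping behind such a partial Hessian update can be sketched as follows; the iterative alignment step itself is omitted, and all names and the displacement tolerance are illustrative.

```python
def invariant_atoms(old_coords, new_coords, tol=1e-3):
    """Indices of atoms whose (aligned) position changed by less than tol.

    old_coords, new_coords: lists of (x, y, z) tuples for the previous
    and the current structure, assumed to be already aligned.
    """
    keep = []
    for i, (a, b) in enumerate(zip(old_coords, new_coords)):
        disp = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        if disp < tol:
            keep.append(i)
    return keep

def blocks_to_recalculate(n_atoms, keep):
    """Atom pairs (i, j), i <= j, whose 3x3 Hessian block must be
    recalculated; blocks between two invariant atoms are inherited
    from the previous structure."""
    keep_set = set(keep)
    return [(i, j) for i in range(n_atoms) for j in range(i, n_atoms)
            if i not in keep_set or j not in keep_set]
```

Only the blocks involving at least one displaced atom are recomputed, so the savings grow with the fraction of the molecule that the alignment identifies as invariant.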
The application of these two approaches allowed for the acceleration of vibrational spectroscopy under control of the tolerable error.
The approximations introduced for the calculation of infrared spectra are particularly reliable for high-frequency, stiff normal modes. The localization of these vibrational modes also makes the two approaches more transferable to different molecular systems, as these modes are then less dependent on their chemical environment.
UV/Vis spectroscopy requires efficient methods for recovering sufficiently accurate vertical electronic transition energies and corresponding oscillator strengths.
To tackle the high computational cost of the linear-response excited-state calculation, we implemented a non-orthogonal modification of the Davidson algorithm\cite{Parrish2016, Furche2016}. Furthermore, we devised a strategy to leverage the similarity between consecutive structures by improving the initial guess of the iterative diagonalization, which we obtained as a linear combination of previous solutions of the excited-state problem with the DIIS algorithm\cite{Muehlbach2016}. However, the improved initial guess did not consistently decrease the time needed to reach a solution. A complementary approach is to solve the excited-state problem in a limited excited-state determinant space. By neglecting all determinants that do not couple considerably with the solution subspace, the size of the excited-state problem could be reduced massively, with limited accuracy losses that can be controlled by a single parameter.
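The selection of the reduced determinant space can be sketched as follows. This is a schematic version of a coupling-based criterion controlled by a single threshold (cf. $\epsilon_{\rm PT2}$ in units of hartree$^2$); the exact selection rule used in practice may differ.

```python
def prune_basis(couplings, threshold=1e-4):
    """Schematic pruning of an excited-state determinant basis.

    couplings[i] is a list of squared coupling matrix elements
    (hartree^2) between determinant i and the current solution
    subspace. A determinant is kept only if its summed squared
    coupling exceeds the threshold; this criterion is illustrative.
    """
    return [i for i, c2 in enumerate(couplings) if sum(c2) > threshold]
```

Raising the threshold shrinks the basis (and the cost of each Davidson iteration) at the price of the accuracy losses quantified in the table above.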
Even though the approaches implemented in this work are primarily intended for ultra-fast application with semi-empirical Hamiltonians, they are agnostic to the electronic structure model; i.e., they can also be applied to accelerate calculations with more accurate and computationally expensive methods. The application to semi-empirical models based on the neglect of diatomic differential overlap, such as MNDO, AM1, RM1, PM3, and PM6, does not, however, yield reliable vibrational and electronic spectra (the latter calculated with the configuration interaction singles method). The extension of the approaches discussed in this work to modern semi-empirical models, such as the extended tight-binding method family (GFNn-xTB, n = 0, 1, 2) and orthogonalization-corrected methods (OMn, n = 1, 2, 3), is rather straightforward and will therefore be considered in future work.
\section*{Acknowledgments}
We gratefully acknowledge financial support by the Swiss National Science Foundation (Project No.~200021\_182400).
We thank Dr.~Alain Vaucher for discussions at the beginning of this work in 2018.
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{120}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Marx and Hutter(2009)Marx, and Hutter]{Marx2009}
Marx,~D.; Hutter,~J. \emph{{Ab Initio Molecular Dynamics: Basic Theory and
Advanced Methods}}; Cambridge University Press: Cambridge, United Kingdom,
2009\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hill(2012)]{Hill2012}
Hill,~T.~L. \emph{{An Introduction to Statistical Thermodynamics}}; Dover:
Newburyport, 2012\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Haag and Reiher(2013)Haag, and Reiher]{Haag2013}
Haag,~M.~P.; Reiher,~M. {Real-Time Quantum Chemistry}. \emph{Int. J. Quantum
Chem.} \textbf{2013}, \emph{113}, 8--20\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Haag \latin{et~al.}(2014)Haag, Vaucher, Bosson, Redon, and
Reiher]{Haag2014}
Haag,~M.~P.; Vaucher,~A.~C.; Bosson,~M.; Redon,~S.; Reiher,~M. {Interactive
Chemical Reactivity Exploration}. \emph{ChemPhysChem} \textbf{2014},
\emph{15}, 3301--3319\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vaucher and Reiher(2016)Vaucher, and Reiher]{Vaucher2016}
Vaucher,~A.~C.; Reiher,~M. {Molecular Propensity as a Driver for Explorative
Reactivity Studies}. \emph{J. Chem. Inf. Model.} \textbf{2016}, \emph{56},
1470--1478\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vaucher and Reiher(2018)Vaucher, and Reiher]{Vaucher2018}
Vaucher,~A.~C.; Reiher,~M. {Minimum Energy Paths and Transition States by Curve
Optimization}. \emph{J. Chem. Theory Comput.} \textbf{2018}, \emph{14},
3091--3099\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wilson \latin{et~al.}(1955)Wilson, Decius, and Cross]{Wilson1955}
Wilson,~E.~B.; Decius,~J.~C.; Cross,~P.~C. \emph{{Molecular Vibrations}};
McGraw-Hill: New York, 1955\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Califano(1976)]{Califano1976}
Califano,~S. \emph{{Vibrational States}}; John Wiley and Sons Ltd: New York,
1976\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Brato{\v{z}}(1958)]{Bratoz1958}
Brato{\v{z}},~S. {Le calcul non empirique des constantes de force et des
d{\'{e}}riv{\'{e}}es du moment dipolaire}. {Calcul des fonctions d'onde
mol\'{e}culaire}. Paris, 1958; pp 287--301\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Heitler(1994)]{Heitler1994}
Heitler,~W. \emph{{The Quantum Theory Of Radiation}}; Dover: New York, United
States of America, 1994\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Craig and Thirunamachandran(1998)Craig, and
Thirunamachandran]{Craig1998}
Craig,~D.~P.; Thirunamachandran,~T. \emph{{Molecular Quantum Electrodynamics}};
Dover: New York, United States of America, 1998\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reiher and Neugebauer(2003)Reiher, and Neugebauer]{Reiher2003}
Reiher,~M.; Neugebauer,~J. {A mode-selective quantum chemical method for
tracking molecular vibrations applied to functionalized carbon nanotubes}.
\emph{J. Chem. Phys.} \textbf{2003}, \emph{118}, 1634--1641\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reiher and Neugebauer(2004)Reiher, and Neugebauer]{Reiher2004}
Reiher,~M.; Neugebauer,~J. {Convergence characteristics and efficiency of
mode-tracking calculations on pre-selected molecular vibrations}. \emph{Phys.
Chem. Chem. Phys.} \textbf{2004}, \emph{6}, 4621--4629\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Herrmann \latin{et~al.}(2007)Herrmann, Neugebauer, and
Reiher]{Hermann2007}
Herrmann,~C.; Neugebauer,~J.; Reiher,~M. {Finding a needle in a haystack:
direct determination of vibrational signatures in complex systems}. \emph{New
J. Chem.} \textbf{2007}, \emph{31}, 818--831\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Davidson(1975)]{Davidson1975}
Davidson,~E.~R. {The Iterative Calculation of a Few of the Lowest Eigenvalues
and Corresponding Eigenvectors of Large Real-Symmetric Matrices}. \emph{J.
Comput. Phys.} \textbf{1975}, \emph{17}, 87--94\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kiewisch \latin{et~al.}(2008)Kiewisch, Neugebauer, and
Reiher]{Kiewisch2008}
Kiewisch,~K.; Neugebauer,~J.; Reiher,~M. {Selective calculation of
high-intensity vibrations in molecular resonance Raman spectra}. \emph{J.
Chem. Phys.} \textbf{2008}, \emph{129}, 204103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kiewisch \latin{et~al.}(2009)Kiewisch, Luber, Neugebauer, and
Reiher]{Kiewisch2009}
Kiewisch,~K.; Luber,~S.; Neugebauer,~J.; Reiher,~M. {Intensity Tracking for
Vibrational Spectra of Large Molecules}. \emph{Chimia} \textbf{2009},
\emph{63}, 270--274\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Luber \latin{et~al.}(2009)Luber, Neugebauer, and Reiher]{Luber2009}
Luber,~S.; Neugebauer,~J.; Reiher,~M. {Intensity tracking for theoretical
infrared spectroscopy of large molecules}. \emph{J. Chem. Phys.}
\textbf{2009}, \emph{130}, 64105\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kovyrshin and Neugebauer(2010)Kovyrshin, and
Neugebauer]{Kovyrshin2010}
Kovyrshin,~A.; Neugebauer,~J. {State-selective optimization of local excited
electronic states in extended systems}. \emph{J. Chem. Phys.} \textbf{2010},
\emph{133}, 174114\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kovyrshin \latin{et~al.}(2012)Kovyrshin, {De Angelis}, and
Neugebauer]{Kovyrshin2012}
Kovyrshin,~A.; {De Angelis},~F.; Neugebauer,~J. {Selective TDDFT with automatic
removal of ghost transitions: application to a perylene-dye-sensitized solar
cell model}. \emph{Phys. Chem. Chem. Phys.} \textbf{2012}, \emph{14},
8608--8619\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Teodoro \latin{et~al.}(2018)Teodoro, Koenis, Galembeck, Nicu, Buma,
and Visscher]{Teodoro2018}
Teodoro,~T.~Q.; Koenis,~M. A.~J.; Galembeck,~S.~E.; Nicu,~V.~P.; Buma,~W.~J.;
Visscher,~L. {Frequency Range Selection Method for Vibrational Spectra}.
\emph{J. Phys. Chem. Lett.} \textbf{2018}, \emph{9}, 6878--6882\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{dos Santos} \latin{et~al.}(2014){dos Santos}, Proenza, and
Longo]{DosSantos2014}
{dos Santos},~M. V.~P.; Proenza,~Y.~G.; Longo,~R.~L. {PICVib: An accurate, fast
and simple procedure to investigate selected vibrational modes and evaluate
infrared intensities}. \emph{Phys. Chem. Chem. Phys.} \textbf{2014},
\emph{16}, 17670--17680\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sahu and Gadre(2015)Sahu, and Gadre]{Sahu2015}
Sahu,~N.; Gadre,~S.~R. {Accurate vibrational spectra via molecular tailoring
approach: A case study of water clusters at MP2 level}. \emph{J. Chem. Phys.}
\textbf{2015}, \emph{142}, 014107\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2016)Wang, Ozhgibesov, and Hirao]{Wang2016}
Wang,~R.; Ozhgibesov,~M.; Hirao,~H. {Partial Hessian Fitting for Determining
Force Constant Parameters in Molecular Mechanics}. \emph{J. Comput. Chem.}
\textbf{2016}, \emph{37}, 2349--2359\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Head(1997)]{Head1997}
Head,~J.~D. {Computation of Vibrational Frequencies for Adsorbates on
Surfaces}. \emph{Int. J. Quantum Chem.} \textbf{1997}, \emph{65},
827--838\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Li and Jensen(2002)Li, and Jensen]{Li2002}
Li,~H.; Jensen,~J.~H. {Partial Hessian vibrational analysis: the localization
of the molecular vibrational energy and entropy}. \emph{Theor. Chem. Acc.}
\textbf{2002}, \emph{107}, 211--219\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ghysels \latin{et~al.}(2007)Ghysels, {Van Neck}, {Van Speybroeck},
Verstraelen, and Waroquier]{Ghysels2007}
Ghysels,~A.; {Van Neck},~D.; {Van Speybroeck},~V.; Verstraelen,~T.;
Waroquier,~M. {Vibrational modes in partially optimized molecular systems}.
\emph{J. Chem. Phys.} \textbf{2007}, \emph{126}, 224102\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Durand \latin{et~al.}(1994)Durand, Trinquier, and
Sanejouand]{Durand1994}
Durand,~P.; Trinquier,~G.; Sanejouand,~Y.-H. {A New Approach for Determining
Low-Frequency Normal Modes in Macromolecules}. \emph{Biopolymers}
\textbf{1994}, \emph{34}, 759--771\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tama \latin{et~al.}(2000)Tama, Gadea, Marques, and
Sanejouand]{Tama2000}
Tama,~F.; Gadea,~F.~X.; Marques,~O.; Sanejouand,~Y.-H. {Building-Block Approach
for Determining Low-Frequency Normal Modes of Macromolecules}.
\emph{Proteins} \textbf{2000}, \emph{41}, 1--7\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bou\v{r} \latin{et~al.}(1997)Bou\v{r}, Sopkov{\'{a}},
Bedn{\'{a}}rov{\'{a}}, Malo\v{n}, and Keiderling]{Bour1997}
Bou\v{r},~P.; Sopkov{\'{a}},~J.; Bedn{\'{a}}rov{\'{a}},~L.; Malo\v{n},~P.;
Keiderling,~T.~A. {Transfer of Molecular Property Tensors in Cartesian
Coordinates: A New Algorithm for Simulation of Vibrational Spectra}. \emph{J.
Comput. Chem.} \textbf{1997}, \emph{18}, 646--659\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bieler \latin{et~al.}(2011)Bieler, Haag, Jacob, and
Reiher]{Bieler2011}
Bieler,~N.~S.; Haag,~M.~P.; Jacob,~C.~R.; Reiher,~M. {Analysis of the Cartesian
Tensor Transfer Method for Calculating Vibrational Spectra of Polypeptides}.
\emph{J. Chem. Theory Comput.} \textbf{2011}, \emph{7}, 1867--1881\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[R{\"{u}}ger \latin{et~al.}(2015)R{\"{u}}ger, van Lenthe, Lu, Frenzel,
Heine, and Visscher]{Rueger2015}
R{\"{u}}ger,~R.; van Lenthe,~E.; Lu,~Y.; Frenzel,~J.; Heine,~T.; Visscher,~L.
{Efficient Calculation of Electronic Absorption Spectra by Means of
Intensity-Selected Time-Dependent Density Functional Tight Binding}. \emph{J.
Chem. Theory Comput.} \textbf{2015}, \emph{11}, 157--167\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Grimme(2013)]{Grimme2013}
Grimme,~S. {A simplified Tamm-Dancoff density functional approach for the
electronic excitation spectra of very large molecules}. \emph{J. Chem. Phys.}
\textbf{2013}, \emph{138}, 244104\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bannwarth and Grimme(2014)Bannwarth, and Grimme]{Bannwarth2014}
Bannwarth,~C.; Grimme,~S. {A simplified time-dependent density functional
theory approach for electronic ultraviolet and circular dichroism spectra of
very large molecules}. \emph{Comput. Theor. Chem.} \textbf{2014},
\emph{1040-1041}, 45--53\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Niehaus \latin{et~al.}(2001)Niehaus, Suhai, {Della Sala}, Lugli,
Elstner, Seifert, and Frauenheim]{Niehaus2001}
Niehaus,~T.~A.; Suhai,~S.; {Della Sala},~F.; Lugli,~P.; Elstner,~M.;
Seifert,~G.; Frauenheim,~T. {Tight-binding approach to time-dependent
density-functional response theory}. \emph{Phys. Rev. B} \textbf{2001},
\emph{63}, 085108\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Risthaus \latin{et~al.}(2014)Risthaus, Hansen, and
Grimme]{Risthaus2014}
Risthaus,~T.; Hansen,~A.; Grimme,~S. {Excited states using the simplified
Tamm--Dancoff-Approach for range-separated hybrid density functionals:
development and application}. \emph{Phys. Chem. Chem. Phys.} \textbf{2014},
\emph{16}, 14408--14419\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Grimme and Bannwarth(2016)Grimme, and Bannwarth]{Grimme2016}
Grimme,~S.; Bannwarth,~C. {Ultra-fast computation of electronic spectra for
large systems by tight-binding based simplified Tamm-Dancoff approximation
(sTDA-xTB)}. \emph{J. Chem. Phys.} \textbf{2016}, \emph{145}, 054103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Isborn \latin{et~al.}(2011)Isborn, Luehr, Ufimtsev, and
Mart{\'{i}}nez]{Isborn2011}
Isborn,~C.~M.; Luehr,~N.; Ufimtsev,~I.~S.; Mart{\'{i}}nez,~T.~J. {Excited-State
Electronic Structure with Configuration Interaction Singles and Tamm--Dancoff
Time-Dependent Density Functional Theory on Graphical Processing Units}.
\emph{J. Chem. Theory Comput.} \textbf{2011}, \emph{7}, 1814--1823\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu and Thiel(2018)Liu, and Thiel]{Liu2018}
Liu,~J.; Thiel,~W. {An efficient implementation of semiempirical
quantum-chemical orthogonalization-corrected methods for excited-state
dynamics}. \emph{J. Chem. Phys.} \textbf{2018}, \emph{148}, 154103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sullivan(2011)]{Sullivan2011}
Sullivan,~T.~J. \emph{{Introduction to Uncertainty}}, 1st ed.; Springer: New
York, 2011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simm and Reiher(2016)Simm, and Reiher]{Simm2016}
Simm,~G.~N.; Reiher,~M. {Systematic Error Estimation for Chemical Reaction
Energies}. \emph{J. Chem. Theory Comput.} \textbf{2016}, \emph{12},
2762--2773\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simm \latin{et~al.}(2017)Simm, Proppe, and Reiher]{Simm2017b}
Simm,~G.~N.; Proppe,~J.; Reiher,~M. {Error Assessment of Computational Models
in Chemistry}. \emph{Chimia} \textbf{2017}, \emph{71}, 202--208\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Proppe and Reiher(2017)Proppe, and Reiher]{Proppe2017}
Proppe,~J.; Reiher,~M. {Reliable Estimation of Prediction Uncertainty for
Physicochemical Property Models}. \emph{J. Chem. Theory Comput.}
\textbf{2017}, \emph{13}, 3297--3317\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Proppe \latin{et~al.}(2016)Proppe, Husch, Simm, and
Reiher]{Proppe2016}
Proppe,~J.; Husch,~T.; Simm,~G.~N.; Reiher,~M. {Uncertainty quantification for
quantum chemical models of complex reaction networks}. \emph{Faraday
Discuss.} \textbf{2016}, \emph{195}, 497--520\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Proppe and Reiher(2019)Proppe, and Reiher]{Proppe2019}
Proppe,~J.; Reiher,~M. {Mechanism Deduction from Noisy Chemical Reaction
Networks}. \emph{J. Chem. Theory Comput.} \textbf{2019}, \emph{15},
357--370\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Weymuth \latin{et~al.}(2018)Weymuth, Proppe, and Reiher]{Weymuth2018}
Weymuth,~T.; Proppe,~J.; Reiher,~M. {Statistical Analysis of Semiclassical
Dispersion Corrections}. \emph{J. Chem. Theory Comput.} \textbf{2018},
\emph{14}, 2480--2494\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Proppe \latin{et~al.}(2019)Proppe, Gugler, and Reiher]{Proppe2019b}
Proppe,~J.; Gugler,~S.; Reiher,~M. {Gaussian Process-Based Refinement of
Dispersion Corrections}. \emph{J. Chem. Theory Comput.} \textbf{2019},
\emph{15}, 6046--6060\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Oung \latin{et~al.}(2018)Oung, Rudolph, and Jacob]{Oung2018}
Oung,~S.~W.; Rudolph,~J.; Jacob,~C.~R. {Uncertainty quantification in
theoretical spectroscopy: The structural sensitivity of X-ray emission
spectra}. \emph{Int. J. Quantum Chem.} \textbf{2018}, \emph{118},
e25458\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sameera \latin{et~al.}(2016)Sameera, Maeda, and Morokuma]{Sameera2016}
Sameera,~W. M.~C.; Maeda,~S.; Morokuma,~K. {Computational Catalysis Using the
Artificial Force Induced Reaction Method}. \emph{Acc. Chem. Res.}
\textbf{2016}, \emph{49}, 763--773\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dewyer \latin{et~al.}(2018)Dewyer, Arg{\"u}elles, and
Zimmerman]{Dewyer2018}
Dewyer,~A.~L.; Arg{\"u}elles,~A.~J.; Zimmerman,~P.~M. {Methods for exploring
reaction space in molecular systems}. \emph{Wiley Interdiscip. Rev. Comput.
Mol. Sci.} \textbf{2018}, \emph{8}, e1354\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simm \latin{et~al.}(2019)Simm, Vaucher, and Reiher]{Simm2019}
Simm,~G.~N.; Vaucher,~A.~C.; Reiher,~M. {Exploration of Reaction Pathways and
Chemical Transformation Networks}. \emph{J. Phys. Chem. A} \textbf{2019},
\emph{123}, 385--399\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Unsleber and Reiher(2020)Unsleber, and Reiher]{Unsleber2020}
Unsleber,~J.~P.; Reiher,~M. {The Exploration of Chemical Reaction Networks}.
\emph{Annu. Rev. Phys. Chem.} \textbf{2020}, \emph{71}, 121--142\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bosia \latin{et~al.}(2021)Bosia, Husch, M\"{u}ller, Polonius, Sobez,
Steiner, Unsleber, Vaucher, Weymuth, and Reiher]{Sparrow300}
Bosia,~F.; Husch,~T.; M\"{u}ller,~C.~H.; Polonius,~S.; Sobez,~J.-G.;
Steiner,~M.; Unsleber,~J.~P.; Vaucher,~A.~C.; Weymuth,~T.; Reiher,~M.
{qcscine/sparrow: Release 3.0.0}. 2021; Zenodo\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Neugebauer \latin{et~al.}(2002)Neugebauer, Reiher, Kind, and
Hess]{Neugebauer2002}
Neugebauer,~J.; Reiher,~M.; Kind,~C.; Hess,~B.~A. {Quantum Chemical Calculation
of Vibrational Spectra of Large Molecules---Raman and IR Spectra for
Buckminsterfullerene}. \emph{J. Comput. Chem.} \textbf{2002}, \emph{23},
895--910\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Szabo and Ostlund(1996)Szabo, and Ostlund]{Szabo1996}
Szabo,~A.; Ostlund,~N.~S. \emph{{Modern Quantum Chemistry: Introduction to
Advanced Electronic Structure Theory}}, 1st ed.; Dover Publications: New
York, 1996\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mulliken(1955)]{Mulliken1955}
Mulliken,~R.~S. {Electronic Population Analysis on LCAO-MO Molecular Wave
Functions. I}. \emph{J. Chem. Phys.} \textbf{1955}, \emph{23},
1833--1840\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reed \latin{et~al.}(1985)Reed, Weinstock, and Weinhold]{Reed1985}
Reed,~A.~E.; Weinstock,~R.~B.; Weinhold,~F. {Natural population analysis}.
\emph{J. Chem. Phys.} \textbf{1985}, \emph{83}, 735--746\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Herrmann \latin{et~al.}(2005)Herrmann, Reiher, and Hess]{Herrmann2005}
Herrmann,~C.; Reiher,~M.; Hess,~B.~A. {Comparative analysis of local spin
definitions}. \emph{J. Chem. Phys.} \textbf{2005}, \emph{122}, 034102\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gilbert \latin{et~al.}(2008)Gilbert, Besley, and Gill]{Gilbert2008}
Gilbert,~A. T.~B.; Besley,~N.~A.; Gill,~P. M.~W. {Self-Consistent Field
Calculations of Excited States Using the Maximum Overlap Method (MOM)}.
\emph{J. Phys. Chem. A} \textbf{2008}, \emph{112}, 13164--13171\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[R{\"{o}}hrig \latin{et~al.}(2003)R{\"{o}}hrig, Frank, Hutter, Laio,
VandeVondele, and Rothlisberger]{Rohrig2003}
R{\"{o}}hrig,~U.~F.; Frank,~I.; Hutter,~J.; Laio,~A.; VandeVondele,~J.;
Rothlisberger,~U. {QM/MM Car-Parrinello Molecular Dynamics Study of the
Solvent Effects on the Ground State and on the First Excited Singlet State of
Acetone in Water}. \emph{ChemPhysChem} \textbf{2003}, \emph{4},
1177--1182\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ziegler \latin{et~al.}(1977)Ziegler, Rauk, and Baerends]{Ziegler1977}
\unskip.}
\end{mcitethebibliography}
\end{document}
There is a rich interplay between supersymmetry and geometry in non-linear sigma models. Supersymmetric sigma models have led to the discovery of a wide class of complex geometries.
Our purpose here is to revisit this story, and we find that past results readily extend to the construction of some interesting new geometries with $(4,q)$ supersymmetry.
The sigma model in 2 dimensions with $(1,1)$ supersymmetry has a target space with a metric $g$ and closed 3-form $H$ given locally in terms of a 2-form potential $B$, $H=dB$ \cite{Gates:1984nk}.
The action can be written in $(1,1)$ superspace with coordinates
$(x^{\mu},\theta^\alpha)$
where
$x^\mu = (x^\+,x^=)$ are null coordinates, $x^\+=\tau+\sigma$, $x^==\tau-\sigma$,
and $\theta ^\alpha =( \theta^+, \theta ^-)$.
We use spinor indices $+,-$ so that $\psi ^+$ is a positive chirality (right-handed) 1-component Weyl spinor and $\psi ^-$ a negative chirality (left-handed) one, for any spinor $\psi$.
If the target space coordinates are $X^i$, $i=1,...,n$, the map from the worldsheet superspace to the target space is given locally by scalar superfields $X^i(x,\theta)$ and the action is %
\begin{eqnarray}
S=\frac 1 2 \int d^2x
d^2 \theta \,
D_-X^i(g+B)_{ij}(X)D_+X^j~.
\end{eqnarray}
For particular geometries, the sigma model can have extended supersymmetry.
The conditions for $(2,2)$ and $(4,4)$ supersymmetry were found in \cite{Gates:1984nk}
and the conditions for $(2,0)$ supersymmetry were found in \cite{Hull:1985jv}.
This was generalised to the case of $(p,q)$ supersymmetry in \cite{Hull:1986hn} and
the geometry was further studied in \cite{Howe:1988cj}.
The $(1,1)$ theory will in fact have $(p,q)$ supersymmetry (with $p,q=1,2$ or $4$) if the target space has $p-1$ complex structures $J_{(+)}$ and $q-1$ complex structures $J_{(-)}$
satisfying
\begin{eqnarray}
J^t_{(\pm)}gJ_{(\pm)}=g~,~~~(J_{(\pm)})^2=-\mathbb{1}~,~~~\nabla^{(\pm)}J_{(\pm)}=0~,
\end{eqnarray}
where
\begin{eqnarray}
\nabla^{(\pm)}:=(\nabla^{(0)}\pm{\textstyle{\frac12}} g^{-1}H)
\end{eqnarray}
is the connection with torsion $\pm \frac 1 2 g^{il}H_{ljk}$ added to the Levi-Civita connection
$\nabla^{(0)}$.
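Spelled out in components (a standard-conventions sketch; index placements follow the rest of this section), the connection coefficients and the covariant constancy condition $\nabla^{(\pm)}J_{(\pm)}=0$ read
\begin{eqnarray}\nonumber
\Gamma^{(\pm)i}_{~~jk}&=&\Gamma^{(0)i}_{~~jk}\pm{\textstyle{\frac12}} g^{il}H_{ljk}~,\\
\nabla^{(\pm)}_k \left(J_{(\pm)}\right)^i_{~j}&=&\partial_k \left(J_{(\pm)}\right)^i_{~j}+\Gamma^{(\pm)i}_{~~kl}\left(J_{(\pm)}\right)^l_{~j}-\Gamma^{(\pm)l}_{~~kj}\left(J_{(\pm)}\right)^i_{~l}=0~,
\end{eqnarray}
where $\Gamma^{(0)}$ denotes the Levi-Civita connection coefficients.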
Then the extra supersymmetry transformations are given in terms of these complex structures by
\begin{eqnarray}\label{nis1tfs}
\delta X^i=\epsilon_A^+\left(J_{(+)}^{(A)}\right)^i_{~j}D_+X^j+\epsilon_{\tilde A}^-\left(J_{(-)}^{(\tilde A)}\right)^i_{~j}D_-X^j~,
\end{eqnarray}
where $A=1,...,p\!-\!1$ and $\tilde A=1,...,q\!-\!1$ label the complex structures.
Closure of the algebra requires that $J^{(A)}$ is an almost complex structure, $(J^{(A)})^2=-\mathbb{1}$ and that it is integrable, i.e. the Nijenhuis tensor vanishes, ${\cal N}(J^{(A)})=0$, so that it is a complex structure. Similarly, the $J^{(\tilde A)}$ are also complex structures.
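For reference (in one standard convention; conventions elsewhere may differ by an overall factor), the Nijenhuis tensor of an almost complex structure $J$ is
\begin{eqnarray}
{\cal N}(J)^i_{~jk}=J^l_{~j}\partial_l J^i_{~k}-J^l_{~k}\partial_l J^i_{~j}-J^i_{~l}\left(\partial_j J^l_{~k}-\partial_k J^l_{~j}\right)~,
\end{eqnarray}
and by the Newlander--Nirenberg theorem ${\cal N}(J)=0$ is equivalent to the existence of complex coordinates in which $J$ takes its standard constant form.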
When $p>1$ and/or $q>1$, the commutator of supersymmetries $[\delta _{\epsilon_A} ,\delta _{\epsilon_B} ]$ gives a term
involving a tensor ${\cal N}(J^{(A)}, J^{(B)})$ constructed from the complex structures, known as
the Nijenhuis concomitant,
so that closure requires that this tensor vanishes.
For three anticommuting almost complex structures $I,J,K$ satisfying the algebra of the quaternions it was shown in \cite{Yano:1972} that the vanishing of the Nijenhuis tensor of any two of the complex structures implies the vanishing of that of the third, and of all of the concomitants, so the integrability of the three complex structures $J^{(A)}$ is sufficient for closure, and in particular implies the vanishing of the Nijenhuis concomitant.
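The quaternionic algebra referred to above is easy to exhibit in the flat model $\mathbb{R}^4$ (an illustrative example, not taken from the text): left multiplication by the imaginary quaternions $i,j,k$ on $\mathbb{H}\simeq\mathbb{R}^4$ gives three constant, hence trivially simultaneously integrable, anticommuting complex structures. A minimal numerical check:

```python
import numpy as np

# Left multiplication by i and j on H ~ R^4 in the basis (1, i, j, k).
# This is one standard realization of three constant anticommuting
# complex structures; the conventions are illustrative only.
I = np.array([[0, -1, 0,  0],
              [1,  0, 0,  0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]], dtype=float)
J = np.array([[0,  0, -1, 0],
              [0,  0,  0, 1],
              [1,  0,  0, 0],
              [0, -1,  0, 0]], dtype=float)
K = I @ J  # for left multiplications, L_i L_j = L_{ij} = L_k

one = np.eye(4)
for A in (I, J, K):                  # each squares to -1, so each is an
    assert np.allclose(A @ A, -one)  # almost complex structure
assert np.allclose(I @ J + J @ I, 0)  # pairwise anticommutation
assert np.allclose(J @ K + K @ J, 0)
assert np.allclose(K @ I + I @ K, 0)
# quaternion algebra: IJ = K cyclically, and IJK = -1
assert np.allclose(J @ K, I) and np.allclose(K @ I, J)
assert np.allclose(I @ J @ K, -one)
```

Since these structures are constant, all Nijenhuis tensors and concomitants vanish identically, consistent with the curved-space integrability statements quoted above.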
In what follows we shall be particularly interested in cases when there is a coordinate system (atlas) for which all the complex structures are constant in all coordinate patches, i.e. they are simultaneously integrable.
Three anticommuting complex structures $I,J,K$ are simultaneously integrable when a certain curvature formed from the three of them vanishes, $R(I,J,K)=0$ \cite{Obata 56, Yano:1973, Howe:1988cj}.
For two complex structures, $J^{(+)}$ and $J^{(-)}$ that commute, $[J^{(+)},J^{(-)}]=0$, it is instead the vanishing of the Magri-Morosi concomitant, ${\cal M}(J^{(+)},J^{(-)})$ that signals simultaneous integrability. For details see \cite{Howe:1988cj}.
If $H=0$, then there are equal numbers of left and right handed supersymmetries, $p=q$, and the target space is K\"ahler for $(2,2)$ supersymmetry and hyperk\"ahler for $(4,4)$ supersymmetry.
For the $(2,2)$ case, the supersymmetry algebra closes off-shell and the theory can be formulated in terms of chiral superfields, while for $(4,4)$ supersymmetry, the supersymmetry algebra closes on-shell only, or after introducing an infinite number of auxiliary fields\footnote{E.g. using projective or harmonic superspace.}, as the three complex structures are not simultaneously integrable.
For $H\ne 0$, there is a richer structure. For $(2,1)$ supersymmetry, the supersymmetry algebra closes off-shell and the theory can be formulated in terms of chiral superfields, while for $(2,2)$ supersymmetry the
supersymmetry algebra closes off-shell only once suitable auxiliary fields are introduced.
The theory can then be formulated in terms of chiral superfields, twisted chiral superfields, and semi-chiral superfields \cite{Lindstrom:2005zr}.
For $(4,q)$ supersymmetry, the
supersymmetry algebra only closes on-shell in general, but there are interesting cases in which the algebra closes off-shell, and the three complex structures $J^{(+)}$ are simultaneously integrable, i.e. there is a coordinate system where all of them are constant \cite{Howe:1988cj}.
One example of this is the (4,4) supersymmetric model found in \cite{Gates:1984nk} that generalises that obtained from the dimensional reduction of $N=2$ super-Yang-Mills theory in 4 dimensions.
The aim of this paper is to investigate such cases with off-shell $(4,q)$ supersymmetry, with simultaneously integrable complex structures $J^{(+)}$.
In such cases, there is an off-shell superfield formulation, and a superspace formulation of the action that gives a general local construction of the geometry in terms of
certain potentials.
In this paper, a number of new multiplets will be found and analysed.
Actions for these multiplets will then be constructed using projective superspace. Projective superspace has a long history
\cite{Karlhede:1984vr, Lindstrom:1987ks, Buscher:1987uw, Lindstrom:1994mw, Lindstrom:1989ne, GonzalezRey:1997qh, Lindstrom:2008gs}
paralleling and complementing that of harmonic superspace \cite{Galperin:1984av,Galperin:1985an,Delduc:1989gx,Galperin:2001uw}. General superspaces of this type have been described in \cite{Howe:1995md,Hartwell:1994rp}. For detailed reviews of projective $(4,4)$ superspace see \cite{Lindstrom:2008gs} and the lectures \cite{Kuzenko:2010bd}.
The plan of the paper is as follows. In Section 2 we define an off-shell $(4,1)$ multiplet that will play a key role in what follows. Its $(2,1)$ superspace formulation is given in Section 3, and general $(4,1)$ sigma model actions written in $(2,1)$ superspace are introduced in Section 4, where the geometric conditions for $(4,1)$ supersymmetry are studied.
Related $(4,2)$ multiplets are discussed in Section 5. General
$(4,2)$ supersymmetric sigma models are studied in $(2,2)$ superspace in Section 6 and the conditions for $(4,2)$ supersymmetry are analysed. The relationship to the $(4,4)$ hypermultiplet is discussed in Section 7, while Section 8 contains results on the $(4,1)$ superspace action. In Section 9 we introduce $(4,q)$ projective superspace and use it to formulate multiplets and actions, giving explicit constructions of target space geometries.
\section{$(4,1)$ Off-Shell Supermultiplets}
\label{Off}
In \cite{Gates:1984nk}, a $(4,4)$ off-shell multiplet was
found by dimensional reduction of $N=2$ super Yang-Mills theory from 4 dimensions.
Truncating this gives an off-shell $(4,1)$ supermultiplet that can be formulated as follows. We use
$(4,1)$ superspace with coordinates $x^\+,x^=, \theta ^+_a, \bar \theta ^{+a},\theta^-$ where the index $a=1,2$ is an $SU(2)$ index. Here $\theta ^+_a$ are complex and $\theta^-$ is real.
There are
two right-handed complex spinorial covariant derivatives $\bbD{+}^a$ and a real
left-handed spinorial covariant derivative $D_-$, satisfying
\begin{eqnarray}\nonumber\label{talg1}
&&\{\bbD{+a},\bbDB{+}^b\}=~2i\delta^b_a\partial_\+~, ~~~a,b=1,2~,\\[1mm]
&&(D_-)^2=i\partial_=~.
\end{eqnarray}
The $(4,1)$ multiplet obtained by truncating the $(4,4)$ multiplet of \cite{Gates:1984nk}
consists of a pair of $(4,1)$ superfields $\phi, \chi$ satisfying the constraints
\begin{eqnarray}\nonumber\label{constr2}
&&\bbDB{+}^1\phi = 0=\bbD{+2}\phi~,~~~\bbDB{+}^1 \chi =0=\bbD{+2}\chi~,\\[1mm]
&&\bbDB{+}^2\chi=-i\bbDB{+}^1\bar \phi~,~~~\bbDB{+}^2\phi=i\bbDB{+}^1\bar\chi~.
\end{eqnarray}
The supersymmetry transformations can be put into the form \re{nis1tfs} by expanding in $(1,1)$ superspace.
The $(4,1)$ multiplet in \re{constr2} can be formulated in $(1,1)$ superspace
by defining
\begin{eqnarray}\label{1comp}
\phi \big\vert _{\theta _2^+=0,\theta _1^+=\bar \theta _1^+} = \tilde \phi , \qquad \chi \big \vert _{\theta _2^+=0,\theta _1^+=\bar \theta _1^+} = \tilde \chi~.
\end{eqnarray}
The constraints \re{constr2} then determine the terms in $\phi, \chi $ of higher order in $\theta_2^+,\theta _1^+-\bar \theta _1^+$
in terms of $\tilde \phi, \tilde \chi $
and give the supersymmetry transformations under the non-manifest supersymmetries.
We define four real $(4,1)$ superspace spinor derivatives $D_+$ and $\check{D}_+^{(A)}~,~A=1,2,3$ by
\begin{eqnarray}\nonumber\label{8}
&&\bbD{+1}=: D_+ - i\check{D}_+^{(1)}\\[1mm]
&&\bbD{+2}=: \check{D}_+^{(2)} - i\check{D}_+^{(3)}~.
\end{eqnarray}
Then $D_+$ is
the $(1,1)$ superspace spinor derivative and the three differential operators $\check{D}_+^{(A)}$, $A=1,2,3$, determine the generators of nonmanifest supersymmetries $Q_+^{(A)}$ via the constraint \re{constr2}
\begin{eqnarray}
\check{D}^{(A)}_+\phi\Big\vert _{\theta _1^+ = \bar \theta _1^+,\theta _2^+=0}& =&Q^{(A)}_+ \tilde \phi~,
\\
\check{D}^{(A)}_+\chi\Big\vert _{\theta _1^+ = \bar \theta _1^+,\theta _2^+=0}& =&Q^{(A)}_+ \tilde \chi~.
\end{eqnarray}
For $d$ such multiplets, this results in the following relation for the extended supersymmetries:
\begin{eqnarray}
Q_+^{(A)}\left(\begin{array}{c}\tilde\phi\\
\tilde\chi\\
\bar{\tilde\phi}\\
\bar{\tilde\chi}\end{array}\right)=:\mathbb{J}^{(A)}D_+\left(\begin{array}{c}\tilde\phi\\
\tilde\chi\\
\bar{\tilde\phi}\\
\bar{\tilde\chi}\end{array}\right)~,\end{eqnarray}
where the complex structures
\begin{eqnarray}\label{comstr}
\mathbb{J}^{(A)}= \mathbb{I}^{(A)}\otimes \mathbb{1}_{d\times d}
\end{eqnarray}
with
\begin{eqnarray}\label{comstr1}
\mathbb{I}^{(1)}=\left(\begin{array}{cc}i\mathbb{1}&0\\
0&-i\mathbb{1}\end{array}\right)~,~~~
\mathbb{I}^{(2)}=\left(\begin{array}{cc}0& i\sigma_2\\ i\sigma_2&0\end{array}\right)
~,~~~
\mathbb{I}^{(3)}=\left(\begin{array}{cc}0&-\sigma_2\\
\sigma_2&0\end{array}\right)
\end{eqnarray}
are constant in this coordinate system and satisfy the quaternion algebra
\begin{eqnarray}
\mathbb{J}^{(A)}\mathbb{J}^{(B)}=-\delta^{AB}\mathbb{1}+\epsilon^{ABC}\mathbb{J}^{(C)}~.
\end{eqnarray}
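As an illustrative aside (ours, not part of the original construction), the quaternion algebra of the explicit matrices \re{comstr1} can be verified numerically, taking $\epsilon^{123}=+1$:

```python
import numpy as np

# Explicit matrices from (comstr1): I^(1) block-diagonal, I^(2), I^(3)
# block-off-diagonal, built from 2x2 blocks with sigma_2 the second Pauli matrix.
one = np.eye(2)
Z = np.zeros((2, 2))
s2 = np.array([[0, -1j], [1j, 0]])

I = {
    1: np.block([[1j * one, Z], [Z, -1j * one]]),
    2: np.block([[Z, 1j * s2], [1j * s2, Z]]),
    3: np.block([[Z, -s2], [s2, Z]]),
}

# Totally antisymmetric epsilon with eps[1,2,3] = +1
eps = np.zeros((4, 4, 4))
for a, b, c in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

# Check I^(A) I^(B) = -delta^{AB} 1 + eps^{ABC} I^(C) for all A, B
for A in (1, 2, 3):
    for B in (1, 2, 3):
        rhs = -(A == B) * np.eye(4) + sum(eps[A, B, C] * I[C] for C in (1, 2, 3))
        assert np.allclose(I[A] @ I[B], rhs)
```

In particular $\mathbb{I}^{(1)}\mathbb{I}^{(2)}=\mathbb{I}^{(3)}$ and each matrix squares to $-\mathbb{1}$; the tensor product with $\mathbb{1}_{d\times d}$ in \re{comstr} does not affect the algebra.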
Then this gives transformations for $\tilde \phi, \tilde \chi $ of the form \re{nis1tfs}.
\section{$(2,1)$ Superspace Formulation}
The general $(2,1)$ sigma model action can be written in $(2,1)$ superspace as \cite{Hull:1985jv,AbouZeid:1997cw}
\begin{eqnarray}\label{2,1}
S=
\int d^2x d^3 \theta
\left(k_ \alpha D_-\varphi^\alpha
+
\bar k_{\bar \alpha} D_-\bar \varphi^{\bar \alpha}
\right)~.
\end{eqnarray}
The fields $\varphi^\alpha$ are $(2,1)$ chiral
\begin{eqnarray}
\bbDB{+}\varphi^\alpha=0~,
\end{eqnarray}
and $\bar \varphi^{\bar \alpha} $ are their complex conjugates $\bar \varphi^{\bar \alpha} =
(\varphi^\alpha)^*$.
The
theory is defined locally by a 1-form potential $k_\alpha (\varphi, \bar \varphi)$ with
$
\bar k_{\bar \alpha} = (k_\alpha)^*$, which is
defined up to the addition of the gradient of a function $h(\varphi,\bar\varphi) $ and a holomorphic 1-form $f_\alpha (\varphi)$,
\begin{eqnarray}\label{kfreedom}
k_\alpha(\varphi,\bar\varphi) \to k_\alpha(\varphi,\bar\varphi)+\partial_\alpha h(\varphi,\bar\varphi) + f_\alpha (\varphi)~.
\end{eqnarray}
The metric $g$ and $B$ field for the model \re{2,1} are (in a particular gauge) \cite{AbouZeid:1997cw}
\begin{eqnarray}\nonumber\label{gB}
&&g_{\alpha \bar \beta}=i(\partial_{ \alpha} \bar k_{\bar \beta}-\partial_{\bar \beta} k_{\alpha} ) \\[1mm]\nonumber
&&B^{(2,0)}_{\alpha\beta}= i(
\partial_{\alpha} k_{\beta} -\partial_{ \beta} k_{\alpha} )
\\[1mm]
&&B=B^{(2,0)}+B^{(0,2)}
\end{eqnarray}
as may be verified by reducing to the $(1,1)$ superspace formulation \cite{Hull:1985jv,AbouZeid:1997cw}.
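As a check of the freedom \re{kfreedom} (our illustration, with an arbitrarily chosen sample potential and shift functions, for a single chiral field, treating $\varphi,\bar\varphi$ as independent variables):

```python
import sympy as sp

ph, phb = sp.symbols('varphi varphibar')

def metric(k, kb):
    # g_{varphi varphibar} = i (d_varphi kbar - d_varphibar k), cf. (gB) for one field
    return sp.I * (sp.diff(kb, ph) - sp.diff(k, phb))

k, kb = ph**2 * phb, phb**2 * ph     # sample potential with kbar = (k)^*
h = ph * phb + ph**3 + phb**3        # a real function h(varphi, varphibar)
f, fb = ph**4, phb**4                # a holomorphic 1-form and its conjugate

g0 = metric(k, kb)
g1 = metric(k + sp.diff(h, ph) + f, kb + sp.diff(h, phb) + fb)
assert sp.simplify(g0 - g1) == 0     # metric unchanged by (kfreedom)
```

The metric is invariant; $B^{(2,0)}$ shifts only by the exact piece $i(\partial_\alpha f_\beta-\partial_\beta f_\alpha)$ (which vanishes identically for a single field), so the torsion $H=dB$ is also unchanged.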
The $(4,1)$ multiplet \re{constr2} can be expanded into $(2,1)$ superspace
by writing\footnote{We temporarily use the tilde notation for the $(2,1)$ components in this section, just as we did for the $(1,1)$ components in section \ref{Off}.}
\begin{eqnarray}
\phi \vert _{\theta _2^+=0} = \tilde \phi , \qquad \chi \vert _{\theta _2^+=0} = \tilde \chi~.
\end{eqnarray}
The constraints \re{constr2} then define the terms in $\phi, \chi $ of higher order in $\theta_2$
and give the supersymmetry transformations under the non-manifest supersymmetries.
The $(4,1)$ derivative $\bbD{+1}$ survives as the (2,1) derivative $\bbD{+}$
{while
$\bbD{+2}$ gives the generator $Q$ of non-manifest supersymmetries, acting as:
\begin{eqnarray}\label{quis}
\bar Q_+\tilde \phi=(\bbDB{+}^2 \phi )\Big| _{\theta^2=0} \, , \qquad \bar Q_+ \tilde \chi=(\bbDB{+}^2 \chi )\Big|_{\theta^2=0}
\, , \qquad \bar Q_+ \bar {\tilde \phi }=0
\, , \qquad \bar Q_+ \bar {\tilde \chi }=0
~.
\end{eqnarray}
Complex conjugation then gives the action of the generator $Q$.
The action for $d$ $(4,1)$ multiplets must take the form \re{2,1} when written in $(2,1)$ superspace, with
$(2,1)$ chiral superfields $\varphi^\alpha = (\tilde \phi ^i, \tilde \chi ^i)$ with $i=1,\dots , d$.
We will henceforth drop the tildes on $\tilde \phi ^i, \tilde \chi ^i$.
Then using the constraints \re{constr2}, \re{quis} gives the non-manifest supersymmetry transformations
\begin{eqnarray}\label{spec}
\bar Q_+ \phi= i\bbDB{+} \bar {\chi }, \qquad \bar Q_+ \chi
= -i\bbDB{+} \bar {\phi } \, , \qquad \bar Q_+ \bar { \phi }=0
\, , \qquad \bar Q_+ \bar { \chi }=0~.
\end{eqnarray} }
The potential has components
$k_\alpha = (k_{\phi^i},k_{\chi^i})$
and the variation of the action \re{2,1} under the non-manifest supersymmetries generated by
$\bar Q_+$ takes the form
\begin{eqnarray}
\label{erterwf}
\delta S=\int d^2x \bbD{+}\bbDB{+}D_- \Delta
\end{eqnarray}
where
\begin{eqnarray}\nonumber
\Delta=
&&iD_-\phi^i\left(k_{\phi^i ,\chi^j}\bbDB{+}\bar\phi^j-k_{\phi^i, \phi^j}\bbDB{+}\bar\chi^j\right)\\[1mm]\nonumber
&&-iD_-\bar\phi^i\left[(\bar k_{\bar\phi^i, \phi^j}+k_{\chi^i,\bar\chi^j})\bbDB{+}\bar\chi^j-(\bar k_{\bar\phi^i ,\chi^j}-k_{\chi^i , \bar\phi^j})\bbDB{+}\bar\phi^j\right]
\\[1mm]
&&-(\phi\leftrightarrow\chi)~.
\end{eqnarray}
The second line vanishes if $k$ satisfies
\begin{eqnarray}\nonumber\label{vpotcs}
&&k_{\phi^i,\bar \phi^{ j}}+\bar k_{\bar\chi^i, \chi^{ j}}=0~,\\[1mm]
&&k_{\phi^i,\bar \chi^ j}-\bar k_{\bar\chi^i, \phi^j}=0~,
\end{eqnarray}
where the comma denotes a partial derivative, so that e.g. $k_{\phi^i,\bar \phi^{ j}}= \partial
k_{\phi^i} / \partial \bar \phi^{ j}
$.
Then $\bbDB{+}\Delta $
gives expressions that vanish after repeated use of \re{vpotcs} and their derivatives.
Thus
\re{vpotcs} implies that the variation (\ref{erterwf}) of the action under the extra supersymmetries vanishes.
We note that the vanishing of $\Delta$ and $\bbDB{+}\Delta $ is sufficient for invariance, but not necessary. For invariance, it is only necessary that they reduce to terms that vanish when integrated,
so that $\bbD{+}\bbDB{+}D_- \Delta $ is a total derivative with
$\int d^2x \bbD{+}\bbDB{+}D_- \Delta=0$ (up to a boundary term).
This is essentially the condition that the variation of the action under the non-manifest supersymmetries can be cancelled by transformations of the form
\re{kfreedom}.
The full necessary and sufficient conditions for supersymmetry
will be given in the next section, from a
geometric analysis.
We will return to the $(4,1)$ superspace formulation of these actions
in sections \ref{general} and \ref{GEN}.
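Anticipating the relation to a scalar potential in later sections, a useful way to generate solutions of \re{vpotcs} is the gradient ansatz $k_{\phi^i}=iK_{,\phi^i}$, $k_{\chi^i}=-iK_{,\chi^i}$ with $K$ real. The following sketch (our illustration, for a single multiplet, with an arbitrary function $K$) checks symbolically that the second condition in \re{vpotcs} then holds identically, while the first reduces to a Laplace-type equation on $K$:

```python
import sympy as sp

p, pb, c, cb = sp.symbols('phi phibar chi chibar')
K = sp.Function('K')(p, pb, c, cb)       # arbitrary scalar potential

k_phi = sp.I * sp.diff(K, p)             # gradient ansatz k_phi = i K_{,phi}
kb_chib = sp.I * sp.diff(K, cb)          # conjugate of k_chi = -i K_{,chi}

# second condition of (vpotcs) holds identically for any K:
assert sp.simplify(sp.diff(k_phi, cb) - sp.diff(kb_chib, p)) == 0

# first condition of (vpotcs) reduces to i (K_{,phi phibar} + K_{,chi chibar}):
lhs = sp.diff(k_phi, pb) + sp.diff(kb_chib, c)
assert sp.simplify(lhs - sp.I*(sp.diff(K, p, pb) + sp.diff(K, c, cb))) == 0
```

Thus, for this ansatz, \re{vpotcs} amounts to the single condition $K_{,\phi\bar\phi}+K_{,\chi\bar\chi}=0$.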
\section{General $(4,1)$ Sigma Models}
\label{general}
We now consider the general conditions for the $(2,1)$ superspace action (\ref{2,1}) to be $(4,1)$ supersymmetric so that it is invariant under two further supersymmetries.
Following \cite{Hull:1985pq} and \cite{Goteman:2009ye}, we make the ansatz
\begin{eqnarray}\nonumber
\label{fvar}
&&\delta\varphi^\alpha =\bar\epsilon^+\bbDB{+}f^\alpha(\varphi,\bar\varphi)\\[1mm]
&&\delta\bar\varphi^{\bar\alpha} = \epsilon^+\bbD{+}\bar f^{\bar\alpha}(\varphi,\bar\varphi)
\end{eqnarray}
for the additional supersymmetries of the action \re{2,1}. Up to central charge transformations, this is the most general ansatz compatible with the chirality properties \cite{Goteman:2009ye}.
Expanding in components and comparing with (\ref{nis1tfs}), we can read off the form of the complex structures.
The manifest (2,1) supersymmetry involves
the canonical complex structure
\begin{eqnarray}
\mathbb{J}^{(1)}=\left(\begin{array}{cc}i\mathbb{1}&0\\
0&-i\mathbb{1}\end{array}\right)\, ,
\end{eqnarray}
while the transformation (\ref{fvar}) yields second and third ones
\begin{eqnarray}\label{compl2}
\mathbb{J}^{(2)}=\left(\begin{array}{cc}0&f^\alpha_{~\bar\beta}\\\bar f^{\bar\alpha}_{~\beta}&0\end{array}\right)~,~~~\mathbb{J}^{(3)}=\left(\begin{array}{cc}0&if^\alpha_{~\bar\beta}\\-i\bar f^{\bar\alpha}_{~\beta}&0\end{array}\right)~.
\end{eqnarray}
Here, the lower index on $f$ denotes a derivative,
\begin{eqnarray}
f^\alpha _{~\bar\beta}:=\frac {\partial f^\alpha} {\partial \bar\varphi^{\bar\beta}}~.
\end{eqnarray}
From off-shell closure of the algebra,
\begin{eqnarray}\nonumber
[\delta_1,\delta_2]\varphi^\alpha=2i\epsilon^+_{[2}\bar\epsilon^+_{1]}\partial_\+\varphi^\alpha~,
\end{eqnarray}
we deduce that (c.f.\cite{Hull:1985pq})
\begin{eqnarray}\nonumber
&&f^\alpha_{~\bar\beta}\bar f^{\bar\gamma}_{~\alpha}=-\delta^{\bar\gamma}_{~\bar\beta}~,~~~\bar f^{\bar\alpha}_{~\beta}f^{\gamma}_{~\bar\alpha}=-\delta^{\gamma}_{~\beta}~,\\[1mm]
&& f^\alpha_{~[\bar\alpha}f^\beta_{~\bar\beta]\alpha}=0~,~~~ \bar f^{\bar\alpha}_{~[\alpha}\bar f^{\bar\beta}_{~\beta]\bar\alpha}=0~.
\end{eqnarray}
(See \cite{Hull:1985pq}
for similar relations for $N=2$ in $d=4$.)
Here $f^\beta_{~\bar\beta\alpha}=
\partial^2 f^\beta/ \partial \bar\varphi^{\bar\beta}\partial \varphi^\alpha$ etc.
Then the matrices $\mathbb{J}^{(1)},\mathbb{J}^{(2)},\mathbb{J}^{(3)}$
satisfy the quaternion algebra and have vanishing Nijenhuis tensors
\begin{eqnarray}\label{nij}
{\cal N}^i_{jk}(\mathbb{J}^{(A)})=0~,
\end{eqnarray}
so that they are each complex structures.
The remaining geometric constraints follow from invariance of the action.
Varying the action we find
\begin{eqnarray}
\delta S= \int d^2x d^3 \theta \, \Delta
\end{eqnarray}
where
\begin{eqnarray}\label{varS}
\Delta=
\bar\epsilon^+\bbDB{+}f^\beta\left(B_{\beta\alpha}D_-\varphi^\alpha+g_{\beta\bar\alpha}D_-\bar\varphi^{\bar\alpha}\right)~.
\end{eqnarray}
Pushing in $\bbDB{+}$ from the measure yields\footnote{If we performed the full reduction to $(1,1)$ this would parallel the calculation in \cite{Gates:1984nk} for $(2,2)$ supersymmetry.}
\begin{eqnarray}\label{mess}
\bbDB{+} \Delta=
\bar\epsilon^+\bbDB{+}f^\beta\bbDB{+}\bar\varphi^{\bar\beta}\left(B_{\beta\alpha ,\bar\beta}D_-\varphi^\alpha+g_{\beta\bar\alpha, \bar\beta}D_-\bar\varphi^{\bar\alpha}\right)-\bar\epsilon^+\bbDB{+}f^\beta D_-\bbDB{+}\bar\varphi^{\bar\alpha}g_{\beta\bar\alpha}~.
\end{eqnarray}
Integrating $D_-$ by parts and defining
\begin{eqnarray}
\omega_{\bar\beta\bar\alpha}:=f^\beta_{~[\bar\alpha}g_{\bar\beta]\beta}
=\frac 1 2 \left(f^\beta_{~\bar\alpha}g_{\beta\bar\beta}-f^\beta_{~\bar\beta}g_{\beta\bar\alpha}\right)
\end{eqnarray} we rewrite the last term as
\begin{eqnarray}\label{messi}
-\bar\epsilon^+\left(\bbDB{+}\bar\varphi^{\bar\beta}D_-\bbDB{+}\bar\varphi^{\bar\alpha}f^\beta_{~(\bar\beta}g_{\beta\bar\alpha)}+{\textstyle{\frac12}} D_-\omega_{\bar\beta\bar\alpha}\bbDB{+}\bar\varphi^{\bar\alpha}\bbDB{+}\bar\varphi^{\bar\beta}\right)
-{\textstyle{\frac12}} D_- \Bigl( \epsilon^+\omega_{\bar\beta\bar\alpha}\bbDB{+}\bar\varphi^{\bar\alpha}\bbDB{+}\bar\varphi^{\bar\beta}
\Bigr)
\end{eqnarray}
and drop the final term here as it is a total derivative.
Then the condition for supersymmetry is
\begin{eqnarray}\label{messy}\nonumber
&&\bar\epsilon^+\bbDB{+}f^\beta\bbDB{+}\bar\varphi^{\bar\beta}\left(B_{\beta\alpha ,\bar\beta}D_-\varphi^\alpha+g_{\beta\bar\alpha, \bar\beta}D_-\bar\varphi^{\bar\alpha}\right)
\\
&&-\bar\epsilon^+\left(\bbDB{+}\bar\varphi^{\bar\beta}D_-\bbDB{+}\bar\varphi^{\bar\alpha}f^\beta_{~(\bar\beta}g_{\beta\bar\alpha)}+{\textstyle{\frac12}} D_-\omega_{\bar\beta\bar\alpha}\bbDB{+}\bar\varphi^{\bar\alpha}\bbDB{+}\bar\varphi^{\bar\beta}\right)=0~.
\end{eqnarray}
Then the independent terms in \re{messy} give the equations \footnote{{Pushing in additional $\bbD{}$s from the measure and/or partial integration of bosonic derivatives does not relate these terms or lead to any further simplifications.}}
\begin{eqnarray}\label{geom}
&&f^\beta_{~(\bar \alpha}g_{\bar\gamma)\beta}=0~,~~~\Rightarrow f^\beta_{~\bar \alpha}g_{\bar\gamma\beta}=
\omega_{\bar \gamma\bar\alpha}~,
\end{eqnarray}
together with
\begin{eqnarray}
\label{geom2}
\nonumber
&& {\textstyle{\frac12}} \omega_{\bar\alpha\bar\gamma,\bar\beta}-g_{\beta\bar\beta,[\bar\alpha}
f^\beta_{~\bar\gamma]}=0~,~~~\Rightarrow \nabla^{(+)}_{\bar\beta}\omega_{\bar\alpha\bar\gamma}=0~,\\[1mm]
&&{\textstyle{\frac12}}\omega_{\bar\alpha\bar\gamma,\beta}-B^{(2,0)}_{\sigma\beta ,[\bar\alpha}f^{\sigma}_{~\bar\gamma]}=0~ ~,~~~\Rightarrow \nabla^{(+)}_{\beta}\omega_{\bar\alpha\bar\gamma}=0~,
\end{eqnarray}
where we have used the geometric constraints on the connection and torsion that follow from the underlying $(2,1)$ geometry, as well as the definitions \re{gB}. Some of this structure is described in Appendix \ref{app1}.
The conditions \re{geom} imply that the metric is hermitian with respect to the complex structures \re{compl2} while \re{geom2} implies
that these complex structures are covariantly constant with respect to the
connection with torsion $\Gamma^{(+)}=\Gamma^{(0)}+T$:
\begin{eqnarray} \label{nabf}
\nabla^{(+)}_i f^\kappa{}_{\bar\lambda}=0~,
\end{eqnarray}
where $\Gamma^{(0)}$ is the Levi-Civita connection and the torsion is formed from the $B$ field strength as $T={\textstyle{\frac12}} g^{-1}H$.
We note that this geometry is sometimes referred to as hyperk\"ahler with torsion.
{Finally, the vanishing of the Nijenhuis tensor \re{nij} in conjunction with the covariant constancy conditions in \re{geom2} leads to
\begin{eqnarray}
H=
d^{(A)} \omega^{(A)}~,
\end{eqnarray}
for each $A$, where $\omega^{(A)}$ is the 2-form with components $\omega^{(A)}_{ij} =g_{ik} (\mathbb{J}^{(A)})^k{}_j$, and $d^{(A)}$ is the $i(\bar\partial -\partial)$ operator for the complex structure $\mathbb{J}^{(A)}$. This can also be derived from $\nabla^{(+)}J^{(A)}=0$ and ${\cal N}^{~~i}_{jk}(J^{(A)})=0$.}
The transformations \re{fvar} correspond to generalising the constraints \re{constr2} to
\begin{eqnarray}\nonumber\label{constr22}
&&\bbDB{+}^1\varphi^\alpha = 0 ~,\\[1mm]
&&\bbDB{+}^2\varphi^\alpha = f^\alpha _{~\bar\beta}\bbDB{+}^1\bar \varphi ^{\bar\beta}
\end{eqnarray}
in $(4,1) $ superspace.
Note that the constraints \re{fvar} require the existence of a local product structure in addition to the structure required for $(4,1)$ geometry, as this is necessary to split the coordinates into two sets, $\varphi = (\phi, \chi)$. For $(4,2)$ or $(4,4)$ supersymmetry, the existence of this product structure follows from the conditions for extended supersymmetry.
For constant complex structures $f^\alpha_{~\bar\beta}$, the covariant constancy condition \re{nabf} implies that
\begin{eqnarray}
\Gamma^{(+)}_{i\sigma\bar\kappa}f^\sigma{}_{\bar\lambda}+ \Gamma^{(+)}_{i\bar\lambda\sigma}f^\sigma{}_{\bar\kappa}=0~,
\end{eqnarray}
where $i=(\beta,\bar\beta)$, we have lowered $\kappa$ to $\bar\kappa$ and used the antisymmetry of the two-forms $\omega$.
This (non-covariant) condition can be rewritten using formulae from the appendix as
\begin{eqnarray}\nonumber\label{noncov}
&&f^\sigma_{~\bar\lambda}g_{\sigma\bar\kappa,\bar\beta}+2g_{\sigma \bar\beta,[\bar\lambda}f^\sigma_{~\bar\kappa]}=0\\[1mm]
&&f^\sigma_{~\bar\kappa}g_{\bar \lambda\sigma,\beta}+2f^\sigma_{~[\bar\lambda}g_{\bar\kappa]\beta,\sigma}=0~.
\end{eqnarray}
For the constant complex structures \re{comstr},
we have
\begin{eqnarray}
f^\alpha_{~\bar\beta}= i( \sigma _2) ^\alpha_{~\bar\beta}
\end{eqnarray}
and
the hermiticity condition \re{geom} becomes
\begin{eqnarray}\label{herm1}\nonumber
&&\bar k_{\bar\phi^i,\phi^j}-k_{\phi^j,\bar\phi^i}-\bar k_{\bar\chi^j,\chi^i}+k_{\chi^i,\bar\chi^j}=0\\[1mm]
&&\bar k_{\bar\chi^{(i},\phi^{j)}}-k_{\phi^{(i},\bar\chi^{j)}}=0~,\end{eqnarray}
while the covariant constancy conditions
\re{geom2} or \re{noncov}
become
\begin{eqnarray}\nonumber\label{herm2}
&&{\textstyle{\frac12}}\left(k_{\phi^{[j} ,\bar\chi^{k]}}-\bar k _{\bar\chi^{[j},\phi^{k]}}\right)_{, \bar\beta}-\bar k_{\bar\beta, \phi^{[j} \bar\chi^{k]}}=0\\[1mm]\nonumber
&&{\textstyle{\frac12}}\left(\bar k_{\bar\phi^k,\phi^j}+ k_{\chi^k, \bar\chi^j}+\bar k_{\bar\chi^j,\chi^k}+ k_{\phi^j, \bar\phi^k}\right)_{, \bar\beta}
-\bar k_{\bar\beta, \phi^j\bar \phi^k}-\bar k_{\bar\beta, \chi^k \bar\chi^j}=0\\[1mm]
&&{\textstyle{\frac12}}\left(k_{\chi^{[j}, \bar\phi^{k]}}-\bar k _{\bar\phi^{[j}, \chi^{k]}}\right)_{,\bar\beta}-\bar k_{\bar\beta, \chi^{[j}\bar\phi^{k]}}=0~.
\end{eqnarray}
We note that if \re{vpotcs} is satisfied, then \re{herm1} and \re{herm2} are satisfied. The converse is not true, and \re{vpotcs} gives a special case of the general conditions \re{herm1} and \re{herm2}. For example, \re{vpotcs} requires that $k_{\phi^i,\bar \phi^{ j}}+\bar k_{\bar\chi^i, \chi^{ j}}$ vanishes whereas \re{herm1} only sets it equal to its hermitian conjugate.
\section{$(4,2)$ Off-Shell Supermultiplets}
Truncating the
$(4,4)$ off-shell multiplet of \cite{Gates:1984nk} to $(4,2)$ superspace gives
an off-shell $(4,2)$ supermultiplet that can be formulated as follows. We use
$(4,2)$ superspace with coordinates $x^\+,x^=, \theta ^{+a}, \bar \theta ^{+}_a,\theta^-, \bar \theta^-$ where $a=1,2$ is an $SU(2)$ index.\footnote{ There is a possible confusion between the $SU(2)$ index $2$ and a $2$ indicating the square. This is resolved by noting that a bold face $\bbD{\pm} $ never appears squared.} All fermionic coordinates are complex.
There are
two complex right-handed spinorial covariant derivatives $\bbD{+}^a$ and a complex
left-handed spinorial covariant derivative $\bbD{-}$, satisfying
\begin{eqnarray}\nonumber\label{talg}
&&\{\bbD{+a},\bbDB{+}^b\}=~2i\delta^b_a\partial_\+~, ~~~a,b=1,2~,\\[1mm]
&&\{\bbD{-}, \bbDB{-}\}= 2i\partial_=~.
\end{eqnarray}
The $(4,2)$ multiplet obtained from truncating the $(4,4)$ multiplet of \cite{Gates:1984nk}
consists of a pair of $(4,2)$ superfields $\phi, \chi$ satisfying the constraints
\begin{eqnarray}\nonumber\label{constr21}
&&\bbDB{+}^1\phi = 0=\bbD{+2}\phi~,~~~\bbDB{+}^1 \chi =0=\bbD{+2}\chi~,\\[1mm]\nonumber
&&\bbDB{+}^2\chi=-i\bbDB{+}^1\bar \phi~,~~~\bbDB{+}^2\phi=i\bbDB{+}^1\bar\chi,\\[1mm]
&&\bbDB{-} \phi =0 ~,~~~ \bbD{-} \chi =0.
\end{eqnarray}
An alternative truncation has the $\bbD{-}$ constraints on the two fields switched. The two multiplets are related by interchanging $ \theta_- \leftrightarrow \bar \theta _-$, so a theory written in terms of one multiplet is equivalent to one written in terms of the other. Indeed, we show in section \ref{4.2} that their projective superspace formulations are isomorphic.
However, just as for the $(2,2)$ chiral and twisted chiral multiplets, one might suspect that there could be new non-trivial theories that have both kinds of supermultiplet. As far as we have been able to ascertain, this is not the case (as long as no further superfields are involved) as no supersymmetric interaction between the two kinds of multiplets seems possible\rd{\footnote{Added in proof: The referee informs us that this is in agreement with the results of \cite{Ivanov:2004re} derived using bi-harmonic superspace}}.
\section{$(4,2)$ Supersymmetry in $(2,2)$ Superspace}
In $(2,2)$ superspace, chiral superfields
$\varphi$
satisfy
\begin{eqnarray}
\bbDB{\pm} \varphi =0~,
\end{eqnarray}
while twisted chiral superfields $\psi$ satisfy
\begin{eqnarray}
\bbDB{+} \psi =0~, \qquad \bbD{-}\psi=0~.
\end{eqnarray}
There are other possible $(2,2)$ multiplets such as semichiral multiplets \cite{Buscher:1987uw}, but here we shall restrict ourselves to
these two.
The general action for chiral and twisted chiral multiplets is given
by \cite{Gates:1984nk}
\begin{eqnarray}
S= \int d^2x d^4\theta \,K ( \varphi, \bar \varphi, \psi, \bar \psi )
\end{eqnarray}
in terms of an unconstrained scalar potential $K( \varphi, \bar \varphi, \psi, \bar \psi ) $.
Expanding in $(2,1)$ superfields
by writing
\begin{eqnarray}
\varphi \vert _{\theta _2^-=0} = \tilde \varphi , \qquad \psi \vert _{\theta _2^-=0} = \tilde \psi
\end{eqnarray}
one finds the action
\re{2,1} and the
vector potentials are gradients of the scalar potential $K$ \cite{Hull:2012dy},
\begin{eqnarray}
k_{\varphi}= i \partial_{\varphi}K~,~~~k_{\psi}=-i \partial_{\psi}K~,
\label{kKcom}
\end{eqnarray}
where the tildes and indices enumerating multiplets have been suppressed.
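Combining \re{kKcom} with \re{gB} for a single pair $(\varphi,\psi)$ gives the familiar opposite-sign metric blocks $g_{\varphi\bar\varphi}=2K_{,\varphi\bar\varphi}$ and $g_{\psi\bar\psi}=-2K_{,\psi\bar\psi}$ (signs depending on the conventions in \re{gB}); a quick symbolic sketch (our illustration) confirms this:

```python
import sympy as sp

vp, vpb, ps, psb = sp.symbols('varphi varphibar psi psibar')
K = sp.Function('K')(vp, vpb, ps, psb)   # arbitrary real potential

# (kKcom): k_varphi = i K_{,varphi}, k_psi = -i K_{,psi}; conjugates follow from K real
k = {vp: sp.I * sp.diff(K, vp), ps: -sp.I * sp.diff(K, ps)}
kb = {vpb: -sp.I * sp.diff(K, vpb), psb: sp.I * sp.diff(K, psb)}

def g(al, alb):
    # g_{alpha betabar} = i (d_alpha kbar_{betabar} - d_betabar k_alpha), eq. (gB)
    return sp.I * (sp.diff(kb[alb], al) - sp.diff(k[al], alb))

assert sp.simplify(g(vp, vpb) - 2 * sp.diff(K, vp, vpb)) == 0   # chiral block
assert sp.simplify(g(ps, psb) + 2 * sp.diff(K, ps, psb)) == 0   # twisted-chiral block
```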
We now turn to the off-shell $(4,2)$ supermultiplet introduced in the last section.
It contains a $(2,2)$ chiral superfield $\phi$ and a twisted chiral superfield $\chi$ with the transformation under the {extra supersymmetries $Q, \bar Q$ given by
\begin{eqnarray}
\bar Q_+ \phi = i\bbDB{+} \bar \chi, \qquad \bar Q_+ \chi = -i\bbDB{+} \bar \phi
, \qquad \bar Q_+ \bar \phi=0
, \qquad \bar Q_+ \bar \chi=0
~,
\end{eqnarray}
together with the complex conjugate expressions.}
Consider a model with $d$ multiplets $\phi^i, \chi^i$, so the action is
\begin{eqnarray}
S= \int d^2x d^4\theta \,K ( \phi^i, \chi^i,\bar\phi^i, \bar\chi^i ) ~.
\end{eqnarray}
Then under the
$\bar Q$ transformation
\begin{eqnarray}
\delta S=\int d^2x \bbD{+}\bbDB{+}\bbD{-}\bbDB{-} \Delta
\end{eqnarray}
where
\begin{eqnarray}
\Delta = \bar QK =i K,_{\phi ^i} \bbDB{+} \bar \chi ^i -iK,_{\chi ^i} \bbDB{+} \bar \phi ^i ~.
\end{eqnarray}
Then acting with $\bbDB{+}$ gives
\begin{eqnarray}
\delta S=\int d^2x \bbD{+}\bbD{-}\bbDB{-} (\bbDB{+} \Delta)
\end{eqnarray}
where
\begin{eqnarray}
\bbDB{+}\Delta = \bbDB{+}\bar QK =\bbDB{+}(i K,_{\phi ^i} \bbDB{+} \bar \chi ^i -iK,_{\chi ^i} \bbDB{+} \bar \phi ^i )~.
\end{eqnarray}
This gives
\begin{eqnarray}\nonumber
\bbDB{+}\Delta =
&&
i K,_{\phi ^i \bar \phi ^j} \bbDB{+} \bar \phi ^j \bbDB{+} \bar \chi ^i
+i K,_{\phi ^i \bar \chi ^j } \bbDB{+} \bar \chi ^j \bbDB{+} \bar \chi ^i
\\ &&-iK,_{\chi ^i \bar \phi ^j} \bbDB{+} \bar \phi ^j\bbDB{+} \bar \phi ^i
-iK,_{\chi ^i \bar \chi ^j } \bbDB{+} \bar \chi ^j \bbDB{+} \bar \phi ^i
\\
\nonumber
=
&&
i
(K,_{\phi ^i \bar \phi^j} + K,_{\chi ^j \bar \chi^i})
\bbDB{+} \bar \phi ^j \bbDB{+} \bar \chi ^i
\\ &&+i K,_{\phi ^i \bar \chi ^j } \bbDB{+} \bar \chi ^j \bbDB{+} \bar \chi ^i
-iK,_{\chi ^i \bar \phi ^j} \bbDB{+} \bar \phi ^j\bbDB{+} \bar \phi ^i .
\end{eqnarray}
The first term vanishes if
\begin{eqnarray}\label{4.1cnd}
K,_{\phi ^i \bar \phi^j} + K,_{\chi ^j \bar \chi^i} =0~.
\end{eqnarray}
This is a sufficient condition for full invariance since, using it, one finds that the remaining terms vanish when $\bbDB{-}$ or $\bbD{-}$ from the remaining measure acts on them:
\begin{eqnarray}\nonumber
&&\bbDB{-} (K,_{\phi ^i \bar \chi ^j } \bbDB{+} \bar \chi ^j \bbDB{+} \bar \chi ^i)=0\\[1mm]
&&\bbD{-} (K,_{\chi ^i \bar \phi ^j} \bbDB{+} \bar \phi ^j\bbDB{+} \bar \phi ^i )=0~.
\end{eqnarray}
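As an illustration (our example, not from the original analysis), the condition \re{4.1cnd} is a Laplace-type equation; for a single multiplet it is solved, away from the origin, by $K=1/(\phi\bar\phi+\chi\bar\chi)$, while a generic potential fails it:

```python
import sympy as sp

p, pb, c, cb = sp.symbols('phi phibar chi chibar')
K = 1 / (p*pb + c*cb)     # sample potential, singular only at the origin

# condition (4.1cnd) for d = 1:  K_{,phi phibar} + K_{,chi chibar} = 0
cond = sp.diff(K, p, pb) + sp.diff(K, c, cb)
assert sp.simplify(cond) == 0

# a generic potential fails the condition, e.g. K' = phi phibar chi chibar:
Kbad = p*pb*c*cb
assert sp.simplify(sp.diff(Kbad, p, pb) + sp.diff(Kbad, c, cb)) != 0
```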
To find the necessary and sufficient conditions for $(4,2)$ supersymmetry, we
start with the conditions for $(4,1)$ supersymmetry given by \re{herm1} and \re{herm2}.
For the sigma model to have $(4,2)$ supersymmetry requires in addition the condition (\ref{kKcom}) which here implies that the $(4,1)$ potential $k$ is given by derivatives of a scalar potential $K$:
\begin{eqnarray}\label{VecScal}
k_{\phi ^i}=iK,_{\phi ^i}, ~~~k_{\chi ^i} =-iK,_{\chi ^i} ~.
\end{eqnarray}
Then the hermiticity condition \re{herm1} together with \re{VecScal} gives precisely the condition \re{4.1cnd}, and then the remaining conditions \re{herm2} are all satisfied identically using
\re{4.1cnd} and \re{VecScal}, and give no further constraints.
Thus \re{4.1cnd} is the necessary and sufficient condition for a $(2,2)$ model to have $(4,2)$ supersymmetry.
In section 3, we considered $(4,1)$ models whose potentials satisfied the conditions (\ref{vpotcs}). These models will have $(4,2)$ supersymmetry if \re{VecScal} is satisfied, which implies \re{4.1cnd} together with
\begin{eqnarray}
K,_{\phi ^i \bar \chi^j} =K,_{\phi ^j \bar \chi^i} ~.
\end{eqnarray}
This gives a special class of $(4,2)$ models.
\section{$(4,4)$ Supermultiplet and Action}
\label{4,4}
The
$(4,4)$ off-shell multiplet of \cite{Gates:1984nk} is formulated in (4,4) superspace
with
two complex right-handed spinorial covariant derivatives $\bbD{+a}$ and two complex
left-handed spinorial covariant derivatives $\bbD{-a}$, satisfying
\begin{eqnarray}\nonumber\label{talg44}
&&\{\bbD{+a},\bbDB{+}^b\}=~2i \delta^b_a\partial_\+~, ~~~a,b=1,2~,\\[1mm]
&&\{\bbD{-a}, \bbDB{-}^b\}= 2i \delta^b_a \partial_=~.
\end{eqnarray}
The $(4,4)$ multiplet of \cite{Gates:1984nk}
consists of a pair of superfields $\phi, \chi$ satisfying the constraints
\begin{eqnarray}\nonumber\label{constr44}
&&\bbDB{+}^1\phi = 0=\bbD{+2}\phi~,~~~\bbDB{+}^1 \chi =0=\bbD{+2}\chi~,~~~ \bbD{-a} \chi =0~,\\[1mm]\nonumber
&&\bbDB{+}^2\chi=-i\bbDB{+}^1\bar \phi~,~~~\bbDB{+}^2\phi=i\bbDB{+}^1\bar\chi,\\[1mm]
&&\bbDB{-}^2\chi= i\bbD{-1} \phi~,~~~\bbD{-2}\phi=i\bbDB{-}^1\chi~.
\end{eqnarray}
As before, the action can be written in (2,2) superspace in terms of
$d$ (2,2) chiral multiplets $\phi^i$ and $d$ twisted chiral multiplets $\chi^i$, so the action is
\begin{eqnarray}
S= \int d^2x d^4\theta \,K ( \phi^i, \chi^i,\bar\phi^i, \bar\chi^i )
\end{eqnarray}
with the non-manifest supersymmetry transformations given by
{
\begin{eqnarray}
\bar Q_+ \phi = i\bbDB{+} \bar \chi, \qquad \bar Q_+ \chi = -i\bbDB{+} \bar \phi
, \qquad \bar Q_+ \bar \phi=0
, \qquad \bar Q_+ \bar \chi=0
~,
\end{eqnarray}
and
\begin{eqnarray}
Q_- \phi = i \bbDB{-} \chi, \qquad Q_- \bar \chi = -i \bbDB{-} \bar \phi
, \qquad Q_-\bar \phi=0
, \qquad Q_- \chi=0
~,
\end{eqnarray}
together with the complex conjugate expressions.}
Then under the
$\bar Q_+$ transformation
\begin{eqnarray}
\label{vardel}
\delta S=\int d^2x \bbD{+}\bbDB{+}\bbD{-}\bbDB{-} \Delta
\end{eqnarray}
where
\begin{eqnarray}
\Delta = \bar Q_+K =i K,_{\phi ^i} \bbDB{+} \bar \chi ^i -iK,_{\chi ^i} \bbDB{+} \bar \phi ^i
\end{eqnarray}
and, as in the last section, the action is invariant if
\begin{eqnarray}
\label{erty}
K,_{\phi ^i \bar \phi^j} + K,_{\chi ^j \bar \chi^i} =0~.
\end{eqnarray}
Under the $Q_-$ transformation we obtain (\ref{vardel}) but with
\begin{eqnarray}
\Delta = Q_-K =i K,_{\phi ^i} \bbDB{-} \chi ^i -iK,_{\bar \chi ^i} \bbDB{-} \bar \phi ^i~.
\end{eqnarray}
Then a similar analysis to the above gives that the action is invariant under the $Q_-$ transformation if
\begin{eqnarray}
\label{ertyb}
K,_{\phi ^i \bar \phi^j} + K,_{\chi ^i \bar \chi^j} =0~.
\end{eqnarray}
Then the necessary and sufficient conditions for $(4,4)$ supersymmetry are (\ref{erty}) and
(\ref{ertyb}).
Subtracting (\ref{ertyb}) from (\ref{erty}) shows that $K,_{\chi ^i \bar \chi^j}$ is symmetric in $i,j$; combined with (\ref{ertyb}) this implies
\begin{eqnarray}
\label{ertyc}
K,_{\phi ^i \bar \phi^j} =K,_{\phi ^j \bar \phi^i}~.
\end{eqnarray}
We can then instead take the necessary and sufficient conditions for $(4,4)$ supersymmetry to be (\ref{ertyc}) and
(\ref{ertyb}), which are precisely
the conditions that were found in \cite{Gates:1984nk}.
\section{$(4,1)$ Superspace Action}
\label{GEN}
\subsection{General}
\label{gen}
A superspace action for ${\cal N}$ supersymmetries in $D$ dimensions
involves integration over the $d=s{\cal N}$ fermionic coordinates $\theta$, where $s$ is the dimension of the spinor representation in $D$ dimensions
(e.g. $s=4$ in $D=4$).
This picks out the highest $\theta$ component from the superspace Lagrangian ${\cal L}$. Equivalence between Berezin integration and differentiation means that the integration may be written schematically as
\begin{eqnarray}
\int d^Dx d^d\theta{\cal L}=\int d^Dx \frac {\partial^d{\cal L}}{\partial\theta^d}=\int d^Dx D^d{\cal L}\Big| _{\theta =0}~,
\end{eqnarray}
where the vertical bar denotes the $\theta$-independent part of the expression; here we have used the fact that the spinorial covariant derivatives $D$ differ from the partial spinorial derivatives by $\theta$ terms involving a spacetime derivative, and total derivative terms are dropped from the spacetime integral. Since the product $DD\sim\partial$, with $\partial$ a spacetime derivative, it is clear that, even if the Lagrangian ${\cal L}$ contains no derivatives, the action is physical (i.e.\ its bosonic part is quadratic in spacetime derivatives) only if $d \leq 4$ in spacetime dimensions $D\ge 3$. In $D=2$ dimensions with $(p,q)$ supersymmetry, $D_-D_-\sim\partial _=$ and $D_+D_+\sim\partial _\+$, and a similar argument shows that $p \le 2$ and $q \le 2$ for the action to be physical.
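The identification of Berezin integration with differentiation can be made concrete in a toy model (our illustration, independent of any specific superspace conventions): represent a function of anticommuting coordinates by its expansion coefficients and let integration strip off one $\theta$ at a time,

```python
# Toy Berezin calculus: a function of Grassmann coordinates theta_1, theta_2
# is stored as {ordered index tuple: coefficient}, e.g. (1, 2) ~ theta_1 theta_2.

def berezin(F, i):
    """Integrate over theta_i (convention: int d(theta_i) theta_i = 1)."""
    out = {}
    for mono, coef in F.items():
        if i not in mono:
            continue                      # theta_i-independent terms integrate to zero
        pos = mono.index(i)
        sign = (-1) ** pos                # anticommute theta_i to the front
        rest = mono[:pos] + mono[pos + 1:]
        out[rest] = out.get(rest, 0) + sign * coef
    return out

# F = a + b1 theta_1 + b2 theta_2 + c theta_1 theta_2, with a,b1,b2,c = 1,2,3,4
F = {(): 1, (1,): 2, (2,): 3, (1, 2): 4}

# int d(theta_1) d(theta_2) F picks out (a sign times) the top component c:
top = berezin(berezin(F, 2), 1)
assert top == {(): -4}
```

Only the highest $\theta$ component survives the full integration, which is the statement used in the dimension counting above.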
This bound on $d$ or $(p,q)$
can be circumvented by finding subspaces that are invariant under supersymmetry and integrating constrained Lagrangians over those.
The prime example of such subspaces are the chiral and antichiral subspaces of $D=4, ~{\cal N}=1$ superspace, where
the complex superfields $\phi$ obey the chirality condition $\bar D\phi=0$,
and a chiral Lagrangian is
integrated with the chiral measure $D^2$, and
an anti-chiral Lagrangian is integrated with the anti-chiral measure
$\bar D^2$.
The projective superspace construction described in
section \ref{proj} below provides a systematic method of constructing such
constrained superfields and the corresponding
invariant subspaces, but we first describe the approach of \cite{Gates:1984nk}.
\subsection{The GHR approach}
\label{GHR}
In \cite{Gates:1984nk} a general invariant action for an off-shell $(4,4)$ multiplet was found. Here we adapt this to our $(4,1)$ models.
In constructing an action for $(4,1)$ multiplets we face the problem discussed above in section \ref{gen}.
The algebra involves four real or two complex positive chirality derivatives
$\bbD{+a}, \bbDB{+}^a$, and so the full $(4,1)$ superspace measure has too large a dimension. We then seek an invariant subspace and corresponding subintegration, similar to the chiral subspaces in ${\cal N}=1, D=4$ superspace. We use the procedure of \cite{Gates:1984nk} and define two linear combinations of positive chirality spinor derivatives:
\begin{eqnarray}\nonumber\label{NAdefs}
&&\nabla_+=\beta\bbD{+1}+i\alpha\bbD{+2}\\[2mm]
&&\Delta_+=\alpha\bbDB{+}^1+i\beta\bbDB{+}^2
\end{eqnarray}
for some choice of complex parameters $\alpha , \beta$.
For a given choice of parameters $\alpha , \beta$, the $(4,1)$ superfields
$\eta, \breve\eta$
given by
\begin{eqnarray} \label{mult}
\label{etaetab}
\eta:=\alpha \phi +\beta\bar\chi~,~~~\breve{\eta}:=\beta\bar\phi-\alpha \chi
\end{eqnarray}
are annihilated by $\nabla_+$ and $\Delta_+$
\begin{eqnarray}\nonumber\label{etaetaba}
\nabla_+\eta =\Delta_+\eta=0~,~~~
\nabla_+\breve{\eta} =\Delta_+\breve{\eta}=0~.
\end{eqnarray}
Then for a Lagrangian constructed from these constrained superfields, a $(4,1)$ supersymmetric action is given using the conjugate operators $\bar \nabla_+$ and $\bar\Delta_+$ to define the supermeasure.
The action is then
\begin{eqnarray}\label{4,1}
i\int d^2x \bar \nabla_+\bar \Delta_+D_-L_-+h.c.~,
\end{eqnarray}
where $h.c.$ denotes hermitian conjugate, and we take
\begin{eqnarray}
L_-:=\lambda_i(\eta,\breve{\eta})D_-\eta^i+\tilde\lambda_i(\eta,\breve{\eta})D_-\breve{\eta}^i~,
\end{eqnarray}
for a set of multiplets labelled by the index $i$, for some potentials $\lambda_i,\tilde\lambda_i$.
A general action will be a linear superposition of actions of the form \re{4,1}. We then
allow the potentials $\lambda_i,\tilde\lambda_i$ to depend explicitly on $\alpha,\beta$ and
integrate over all possible values of $\alpha,\beta$.
The $(4,1)$ supersymmetric action constructed from the constrained superfields in \re{etaetab} is then
\begin{eqnarray}\nonumber\label{fullact}
&&i\int d^2x\left[\int d\alpha d\beta \bar \nabla_+\bar \Delta_+D_- L_-\right]~+h.c.\\[2 mm]
&&L_-:=\lambda_i(\eta,\breve{\eta}; \alpha,\beta)D_-\eta^i+\tilde\lambda_i(\eta,\breve{\eta}; \alpha,\beta)D_-\breve{\eta}^i~,
\end{eqnarray}
where the operators $\bar \nabla_+$ and $\bar\Delta_+$ define the supermeasure.
The parameter integration must be specified as some contour integral.
In the special case when the action is a reduction of the $(4,4)$ action of \cite{Gates:1984nk} which has a scalar function $L$ as its Lagrangian, one finds
\begin{eqnarray}
-\tilde \lambda_i=\lambda_i=i\partial_{\eta^i+\breve{\eta}^i}L(\eta+\breve{\eta})~.
\end{eqnarray}
The measure in \re{4,1} can be rewritten in a form suitable for reduction to $(2,1)$ superspace using
\begin{eqnarray}\label{NaDe}
\bar\Delta_+=-\frac{\bar \beta} \alpha \nabla_++\frac 1 \alpha (|\alpha |^2+|\beta |^2) \bbD{+1}~.
\end{eqnarray}
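A companion identity, which follows from the definitions \re{NAdefs} in the same way, expresses $\bar\nabla_+$ in terms of $\Delta_+$:
\begin{eqnarray}
\bar\nabla_+=-\frac{\bar \alpha} \beta \Delta_++\frac 1 \beta (|\alpha |^2+|\beta |^2) \bbDB{+}^1~.
\end{eqnarray}
Together with \re{NaDe}, this makes the proportionality in \re{measure} below manifest, up to terms that vanish on the constrained Lagrangian or reduce to total derivatives.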
Since $\nabla_+$ and $\Delta_+$ annihilate the Lagrangian, the measure becomes
\begin{eqnarray}\label{measure}
\bar\nabla_+\bar\Delta_+D_-\propto \bbD{+1}\bbDB{+}^1D_-~.
\end{eqnarray}
In the reduction we identify $\bbD{+1}\to\bbD{+}$ which gives the $(2,1)$ measure when the second $\theta^+$ is set to zero
\begin{eqnarray}\label{measure2}
\bbD{+1}\bbDB{+}^1D_-(\dots)|=\bbD{+}\bbDB{+}D_-(\dots)|~.
\end{eqnarray}
This gives rise to an expression for the potential $k_\alpha$ in terms of an integral of an expression constructed from the $\lambda_i,\tilde \lambda_i $; we will give similar forms explicitly in later sections.
By construction, the potential $k_\alpha$ will necessarily satisfy the conditions \re{herm1},\re{herm2} for $(4,1) $ supersymmetry.
The form of $\eta,\breve{\eta}$ given in
\re{etaetab} implies that any function $f(\eta,\breve{\eta})$ will automatically satisfy
\begin{eqnarray}\label{ohmy}
\frac{\partial^2f}{\partial\phi^i\partial\bar\phi^{\bar k}}+\frac{\partial^2f}{\partial\chi^k\partial\bar\chi^{\bar i}}=0~.
\end{eqnarray}
For the multiplet \re{mult}, this implies that the potential $k_\alpha$ constructed in this way
will satisfy
\begin{eqnarray}
\label{asda}
k_{\alpha,\phi^k \bar \phi ^j}+k_{\alpha,\chi^j\bar \chi ^k}=0~,
\end{eqnarray}
and its complex conjugate, in addition to the conditions \re{herm1},\re{herm2} for $(4,1)$ supersymmetry.
Further, the potentials may be written
\begin{eqnarray}\nonumber
&& k_{\phi^i}= i\left(\int d\alpha d\beta\alpha\lambda_i-\int d\bar\alpha d\bar\beta\bar\beta\bar{\tilde\lambda}_i\right) \\[1mm]
&& k_{\chi^i}= -i\left(\int d\alpha d\beta\alpha\tilde\lambda_i+\int d\bar\alpha d\bar\beta\bar\beta\bar{\lambda}_i\right)~,
\end{eqnarray}
along with their complex conjugates. Using this form, it is easy to show that the potentials actually satisfy the stronger conditions \re{vpotcs}. Thus the models constructed in this way
constitute a subclass of the possible $(4,1)$ models.
\section{Projective superspace}
\label{proj}
{The procedure from \cite{Gates:1984nk} used in the derivation of the action \re{fullact} was introduced to construct an action for a particular multiplet.
It was later realised that there is a generalisation that works the other way:
the superspace can be enlarged by an extra coordinate or coordinates in such a way that
superfields and actions in this enlarged superspace automatically
have extended supersymmetry. This is the Projective Superspace construction
\cite{Karlhede:1984vr}, \cite{Lindstrom:1987ks}, \cite{Buscher:1987uw}, \cite{Lindstrom:1994mw}, a useful tool for finding new multiplets and constructing actions in various dimensions. We begin by making contact with the discussion in the previous section.}
\subsection{Relation of the GHR construction to Projective superspace.}
In the previous section we summed over theories parameterised by
complex variables $(\alpha,\beta)$. The overall scale is unimportant, so they can be viewed as homogeneous coordinates on $ \mathbb{CP}^1$.
It is useful to instead use an inhomogeneous coordinate
\begin{eqnarray}
\zeta =i \alpha/\beta
\end{eqnarray}
in the region where $\beta \ne 0$, or
{$\zeta ' =-i \beta/ \alpha $} in the patch where $\alpha\ne 0$.
Then the summation over theories corresponds to a contour integral on
$ \mathbb{CP}^1$,{ covered by two patches, one with inhomogeneous coordinate $\zeta $ and
one with inhomogeneous coordinate $\zeta '$.
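For instance, in terms of $\zeta$ the operators \re{NAdefs} factorise as
\begin{eqnarray}\nonumber
&&\nabla_+=\beta\left(\bbD{+1}+\zeta\bbD{+2}\right)~,\\[1mm]
&&\Delta_+=\alpha\left(\bbDB{+}^1-\zeta^{-1}\bbDB{+}^2\right)~,
\end{eqnarray}
so that, up to overall factors, the GHR pair $(\nabla_+,\Delta_+)$ of \re{NAdefs} coincides with the projective pair $(\nabla_+,\breve{\nabla}_+)$ of \re{defN} below.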
We now discuss Projective Superspace in more detail.}
\subsection{$(4,q)$ Projective superspace defined}
Projective superspace is designed to deal with the limitations outlined in section \ref{gen} and at the same time provides a constructive method for finding new multiplets.
We shall be concerned with $(4,q)$ superspace for $q=4,2,1$.
In all these cases a full superspace measure has more spinorial derivatives than allowed {and so we seek invariant subintegrations.
Part of the construction is the same for all $q$; the difference is mainly in the form of the actions.
We start from the positive chirality part of the $D$ algebra given in the first line of \re{talg1} or \re{talg}.
A projective coordinate $\zeta$ on $ \mathbb{CP}^1$ is used to construct the combinations\footnote{The conventions have varied over time. The present choices are those of \cite{GonzalezRey:1997qh}, up to an unimportant overall $\zeta$ factor multiplying $\breve{\nabla}$.}
\begin{eqnarray}\nonumber\label{defN}
&&\nabla_+:=\bbD{+1}+\zeta\bbD{+2}~,\\[1mm]
&&\breve{\nabla}_+:=\bbDB{+}^1-\zeta^{-1}\bbDB{+}^2~.
\end{eqnarray}
We introduce a conjugation acting on meromorphic functions $f(\zeta)$ by
\begin{eqnarray}
f(\zeta)\to \breve{f}(\zeta)
\end{eqnarray}
given by the composition of complex conjugation
\begin{eqnarray}
f(\zeta)\to f^*(\bar \zeta)\equiv ( f(\zeta))^*
\end{eqnarray}
and the antipodal map
\begin{eqnarray}
\zeta \to -\bar \zeta^{-1}
\end{eqnarray}
so that\footnote{Projective superspace uses complex conjugation composed with the antipodal map on $\mathbb{CP}^1$ \cite{Lindstrom:1987ks}, as described here. It is the relevant conjugation in projective superspace, and in the literature it is often denoted by just a bar. A closely related conjugation in harmonic superspace was earlier introduced in \cite{Galperin:1984av}.}
\begin{eqnarray}
\breve{f}(\zeta)
=
f^*(-\zeta^{-1}) ~.
\end{eqnarray}
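For instance, on a Laurent series the conjugation acts coefficient-wise:
\begin{eqnarray}
f(\zeta)=\sum_\mu f_\mu\zeta^\mu~~~\Rightarrow~~~\breve{f}(\zeta)=\sum_\mu(-1)^\mu\bar f_\mu\zeta^{-\mu}~,
\end{eqnarray}
so that, e.g., $\eta=\bar\phi+\zeta\chi$ has $\breve{\eta}=\phi-\zeta^{-1}\bar\chi$, as in \re{simpeta} below.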
The derivatives \re{defN} are related by this conjugation.}
We shall be interested in projectively chiral superfields $\eta$ that satisfy
\begin{eqnarray}\label{projchir}
\nabla_+\eta=0~,~~~\breve{\nabla}_+\eta=0~,
\end{eqnarray}
as well as being $(4,q)$ superfields.
We assume that they have the $\zeta$-expansion
\begin{eqnarray}
\eta=\sum_{\mu= -m}^n \zeta^\mu\eta_\mu~,
\end{eqnarray}
where the $\eta_\mu$ are the coefficient superfields of the $\mu$'th power of $\zeta$ in the expansion. {The constraints \re{projchir} then lead to the following conditions on the fields $\eta_\mu$:}
\begin{eqnarray}\nonumber\label{cons}
&&\bbD{+1}\eta_\mu+\bbD{+2}\eta_{\mu-1}=0\\[1mm]
&&\bbDB{+}^1\eta_\mu-\bbDB{+}^2\eta_{\mu+1}=0~.
\end{eqnarray}
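These follow by inserting the $\zeta$-expansion into \re{projchir} and collecting powers of $\zeta$; for instance,
\begin{eqnarray}
\nabla_+\eta=\sum_{\mu}\zeta^\mu\left(\bbD{+1}\eta_\mu+\bbD{+2}\eta_{\mu-1}\right)=0~,
\end{eqnarray}
and since the different powers of $\zeta$ are independent, each coefficient must vanish separately; the second condition arises in the same way from $\breve{\nabla}_+\eta=0$.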
{Here
$\eta_\mu=0$ for $\mu <-m$ and $\mu>n$, so that the highest and lowest components are constrained
\begin{eqnarray}\nonumber\label{consasd}
&&\bbD{+1}\eta_ {-m} =0\\[1mm]
&&\bbDB{+}^1\eta_n=0~.
\end{eqnarray}}
To be able to write actions, two independent orthogonal derivatives are needed.
The following pair can be used for the supermeasure for fields annihilated by the operators (\ref{defN}):
\begin{eqnarray}\nonumber\label{defM}
&&\Delta_+:=\bbD{+1}-\zeta\bbD{+2}~,\\[1mm]
&&\breve{\Delta}_+:=\bbDB{+}^1+\zeta^{-1}\bbDB{+}^2~.
\end{eqnarray}
The algebra obeyed by the $\nabla$'s and $\Delta$'s is
\begin{eqnarray}\nonumber\label{nadealg}
&&\{\nabla_+,\nabla_+\}=\{\breve{\nabla}_+,\breve{\nabla}_+\}=\{\Delta_+,\Delta_+\}=\{\breve{\Delta}_+,\breve{\Delta}_+\}=\{\nabla_+,\Delta_+\}=\{\breve{\nabla}_+,\breve{\Delta}_+\}=0\\[1mm]
&&\{\nabla_+,\breve{\Delta}_+\}=\{\breve{\nabla}_+,{\Delta}_+\}=4i\partial_\+~.
\end{eqnarray}
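These relations follow directly from the first line of \re{talg}, $\{\bbD{+a},\bbDB{+}^b\}=2i\delta^b_a\partial_\+$; for example,
\begin{eqnarray}\nonumber
&&\{\nabla_+,\breve{\nabla}_+\}=\left(1-\zeta\,\zeta^{-1}\right)2i\partial_\+=0~,\\[1mm]
&&\{\nabla_+,\breve{\Delta}_+\}=\left(1+\zeta\,\zeta^{-1}\right)2i\partial_\+=4i\partial_\+~.
\end{eqnarray}
The vanishing of $\{\nabla_+,\breve{\nabla}_+\}$ is what makes it consistent to impose both constraints in \re{projchir} simultaneously.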
\subsection{$(4,1)$ projective superspace}
For the $(4,1)$ theories the algebra is \re{talg1}.
The $(2,1)$ content of \re{cons} is then obtained as discussed previously in Sec.~\ref{Off}, by identifying the $(2,1)$ derivative as $\bbD{+}=\bbD{+1}$ and the generator of the non-manifest extra supersymmetries\footnote{See the comments following \re{8}.} as ${\mathbb{Q}}_{+}=\bbD{+2}$. Most of the relations in \re{cons} will just give the $\mathbb{Q}_+$ action of the second supersymmetry on the coefficient fields $\eta_\mu$. {Only the first and last fields in the $\zeta$-expansion in \re{cons} will be constrained\footnote{{We suppress the tildes that we previously used to denote $(2,1)$ superfields.}}
\begin{eqnarray}\nonumber
&&\bbD{+}\eta_{-m}=0\\[1mm]
&&\bbDB{+}\eta_n=0~.
\end{eqnarray}}
{The rest of the fields $\eta_\mu$ are unconstrained, with the conditions \re{cons}
giving relations between $\eta_\mu$ and $\eta_{\mu\pm1}$.}
A $(4,1)$ Lagrangian is
\begin{eqnarray}\label{exex}
i\oint_C\frac{d\zeta}{2\pi i \zeta} \Delta_+\breve{\Delta}_+D_-\left(\lambda_\alpha(\eta,\breve{\eta};\zeta)D_-\eta^\alpha+\breve{\lambda}_{\alpha}(\eta, \breve{\eta};\zeta)D_-\breve{\eta}^{\alpha}\right)~.
\end{eqnarray}
{The potentials $\lambda, \breve{\lambda}$ can depend explicitly on $\zeta$, and we perform a contour integration over a suitable contour $C$. In many examples, $C$ will be a small contour encircling the origin.}
Since it follows from \re{defN} and \re{defM} that $\Delta$ anticommutes with $D_-$, and that
\begin{eqnarray}
\Delta_+=2\bbD{+}-\nabla_+~,
\end{eqnarray}
and since further $\nabla$ annihilates $\eta$, we may make the following replacement in reducing a Lagrangian to $(2,1)$ superspace:
\begin{eqnarray}\label{rep}
i\oint_C\frac{d\zeta}{2\pi i \zeta} \Delta_+\breve{\Delta}_+D_-{\cal L}_-(\eta,\breve{\eta})\to i\oint_C\frac{d\zeta}{2\pi i \zeta} \bbD{+}\bbDB{+}D_-{\cal L}_-(\eta,\breve{\eta})~.
\end{eqnarray}
The relation of $\lambda_\alpha$ to $k_\alpha$ in \re{2,1} depends on the form of $\eta$, as illustrated in the examples below.
After the reduction, \re{rep} gives a $(4,1)$ supersymmetric action written in $(2,1)$ superspace with the non-manifest supersymmetry ensured by the construction.
For the multiplet \re{etaetab},
this will lead to constraints on ${\cal L}_- $ of the type \re{ohmy}. As before, these
lead to a potential $k$ satisfying \re{asda} in addition to the conditions \re{herm1},\re{herm2} for $(4,1)$ supersymmetry.
Thus for this multiplet,
the models constructed in projective superspace represent a subclass of the general $(4,1)$ models.
\subsubsection{Examples}
If we consider $\eta$'s with $m=0, n=1$, and denote $\eta_0=\bar \phi$, $\eta_1=\chi$, we have
\begin{eqnarray}\label{simpeta}\nonumber
&&\eta^i=\bar\phi^i+\zeta \chi^i\\[1mm]
&&\breve{\eta}^i=\phi^i-\zeta^{-1}\bar\chi^i~,
\end{eqnarray}
{with $i=1\dots d$ for $d$ fields $\eta^i$.}
From \re{cons} we find that the coefficients obey
\begin{eqnarray}\nonumber
&&\bbDB{+} \phi^i=0~,~~~\bbDB{+}\chi^i=0~,~~~{\mathbb{Q}}_+\phi^i=0~,~~~{\mathbb{Q}}_+\chi^i=0\\[1mm]
&&\bar{\mathbb{Q}}_+\phi^i=-\bbDB{+}\bar\chi^i~,~~~\bar{\mathbb{Q}}_+\chi^i=\bbDB{+}\bar\phi^i~.
\end{eqnarray}
For each $i$, this is \re{constr2} with $i\bbD{+2}=\mathbb{Q}_+$. From \re{exex}, the $(2,1)$ Lagrangian is
\begin{eqnarray}\label{etact1}
i\oint_C\frac{d\zeta}{2\pi i \zeta} \bbD{+}\bbDB{+}D_-
\left(\lambda_{\eta^i}
(\eta,\breve{\eta})D_-\eta^i
+\breve{ \lambda}_{\breve{\eta}^i}
(\eta,\breve{\eta})
D_-{\breve{\eta}}^i \right)~.
\end{eqnarray}
In this case, the relation of $\lambda_i$ to $k_i$ in \re{2,1} is given by\footnote{Note that the $\zeta$ measure is invariant under conjugation.}
\begin{eqnarray}\nonumber\label{ks}
&&k_{\phi^i}=\oint_C\frac{d\zeta}{2\pi i \zeta}\breve{\lambda}_i~,~~~\bar k_{\bar\phi^i}=\oint_C\frac{d\zeta}{2\pi i \zeta}{\lambda}_i\\[1mm]
&&k_{\chi^i}=\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta{\lambda}_i~,~~~~\bar k_{\bar\chi^i}=-\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta^{-1}\breve{\lambda}_i~.
\end{eqnarray}
By construction, these potentials satisfy \re{asda}
as well as
\re{herm1} and \re{herm2}. In fact, again, a direct calculation using \re{ks} shows they satisfy the stronger condition \re{vpotcs}. As a result, the Lagrangian \re{exex} is not the most general one with $(4,1)$ supersymmetry.
To see that the vector potentials in \re{ks} satisfy \re{vpotcs}, we form their derivatives, using \re{simpeta},
\begin{eqnarray}\nonumber
&&k_{\phi^i,\bar\phi^j}=\oint_C\frac{d\zeta}{2\pi i \zeta}\breve{\lambda}_{i,\eta^j}~,~~~\bar k_{\bar\chi^i,\chi^j}=-\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta^{-1}\breve{\lambda}_{i,\eta^j}\,\zeta~,\\[1mm]
&&k_{\phi^i,\bar\chi^j}=-\oint_C\frac{d\zeta}{2\pi i \zeta}\breve{\lambda}_{i,\breve{\eta}^j}\zeta^{-1}~,~~~\bar k_{\bar\chi^i,\phi^j}=-\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta^{-1}\breve{\lambda}_{i,\breve{\eta}^j}~.
\end{eqnarray}
They clearly satisfy \re{vpotcs}.
Consider now the example of a quadratic Lagrangian for $d$ multiplets ${\eta}^i$ given by
\begin{eqnarray}
{\cal L}_-=i\oint_C\frac{d\zeta}{2\pi i \zeta}\left(\eta^iD_-\breve{\eta}^i-\breve{\eta}^iD_-\eta^i\right)~,
\end{eqnarray}
where the contour $C$ is a small circle around the origin. Using (\ref{simpeta})
and
performing the $\zeta$ integration results in the following action
\begin{eqnarray}\label{ex103}
&&-\int d^2x\bbD{+}\bbDB{+}D_-\left(\bar\phi^i D_-\phi^i+\bar\chi^iD_-\chi^i-\phi^i D_-\bar\phi^i-\chi^iD_-\bar\chi^i\right)~
\end{eqnarray}
with
\begin{eqnarray}
k_{\phi^i}=i\bar\phi^i~, ~~~k_{\chi^i}=i\bar\chi^i~,
\end{eqnarray}
in agreement with \re{ks}.
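To verify this, compare with \re{etact1} and read off $\lambda_i=-i\breve{\eta}^i$ and $\breve{\lambda}_i=i\eta^i$. Then, for example,
\begin{eqnarray}
k_{\phi^i}=\oint_C\frac{d\zeta}{2\pi i \zeta}\breve{\lambda}_i=i\oint_C\frac{d\zeta}{2\pi i \zeta}\left(\bar\phi^i+\zeta \chi^i\right)=i\bar\phi^i~,
\end{eqnarray}
and similarly $k_{\chi^i}=\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta{\lambda}_i=i\bar\chi^i$.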
A more interesting example arises if we take a general real function $L(\eta+\breve{\eta})$ and set
$\lambda=-\breve{\lambda}=iL$. The vector potentials may be immediately read off from \re{ks} using these expressions:
\begin{eqnarray}\nonumber\label{ks1}
&&k_{\phi}=\oint_C\frac{d\zeta}{2\pi i \zeta}\breve{\lambda}=-iL_0~,~~~~~\bar k_{\bar\phi}=\oint_C\frac{d\zeta}{2\pi i \zeta}{\lambda}=iL_0\\[1mm]
&&k_{\chi}=\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta{\lambda}=iL_{-1}~,~~~~\bar k_{\bar\chi}=-\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta^{-1}\breve{\lambda}=iL_1~,
\end{eqnarray}
where $L_\mu$ are the coefficients in an expansion of $L$ in powers of $\zeta$.
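For a small contour $C$ encircling the origin, these integrals simply pick out Laurent coefficients: writing $L=\sum_\mu L_\mu\zeta^\mu$, one has
\begin{eqnarray}
\oint_C\frac{d\zeta}{2\pi i \zeta}\,\zeta^{n} L=\sum_\mu L_\mu\oint_C\frac{d\zeta}{2\pi i }\,\zeta^{\mu+n-1}=L_{-n}~,
\end{eqnarray}
which is how $L_0$ and $L_{\mp 1}$ arise in \re{ks1}.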
This will lead to a metric $g$ and $B$ field given by the $\zeta^1, \zeta^0$ and $\zeta^{-1}$ components of the derivative of $L$ according to
\begin{eqnarray}
E=g+B=\left(\begin{array}{cccc}
0&L'_0& L'_{-1}&0\\
L'_0&0&0&-L'_1\\-L'_{-1}&0&0&L_0'\\
0&L'_1&L'_0&0
\end{array}\right)~,
\end{eqnarray}
with prime denoting derivative with respect to the argument and rows and columns ordered as $(\phi,\bar\phi,\chi,\bar\chi)$. As an example, a function
\begin{eqnarray}
L=\frac 1 {3!} (\eta+\breve{\eta})^3
\end{eqnarray}
gives
\begin{eqnarray}\nonumber
&&g_{\phi\bar\phi}=g_{\chi\bar\chi}=(\phi+\bar\phi)^2-2\chi\bar\chi\\[1mm]\nonumber
&&B_{\phi\chi}=-2(\phi+\bar\phi)\bar\chi~,\\[1mm]
&&B_{\bar\phi\bar\chi}=-2(\phi+\bar\phi)\chi~.
\end{eqnarray}
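These expressions follow from the Laurent expansion of $L'=\frac 1 2 (\eta+\breve{\eta})^2$: with $\eta+\breve{\eta}=(\phi+\bar\phi)+\zeta\chi-\zeta^{-1}\bar\chi$, the relevant coefficients are
\begin{eqnarray}\nonumber
&&L'_0=\frac 1 2 (\phi+\bar\phi)^2-\chi\bar\chi~,\\[1mm]
&&L'_{-1}=-(\phi+\bar\phi)\bar\chi~,~~~L'_1=(\phi+\bar\phi)\chi~,
\end{eqnarray}
which reproduce the metric and $B$-field components above up to the overall normalization of the action.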
An example involving unconstrained fields arises when $m\ne 0$. We then consider
\begin{eqnarray}\label{genex}
\eta=\sum_{\mu=-m}^n\zeta^\mu\eta_\mu~
\end{eqnarray}
where the top coefficient $\eta _n\equiv \chi$ and the bottom component $\eta_{-m}\equiv\bar\phi$ give chiral fields $\phi, \chi$ in the $(2,1)$ reduction, while the rest of the fields $\eta^i_\mu$
for $ -m<\mu <n$
are unconstrained. The $(4,1)$ transformations that follow from the constraints are
\begin{eqnarray}\nonumber
&&\bbD{+} \eta_{-m}=\bbD{+} \bar \phi=0~, ~~~\bbDB{+} \eta_{n}=\bbDB{+} \chi=0\\[1mm]\nonumber
&&\bar{\mathbb{Q}}_+\eta_{\mu+1}=\bbDB{+} \eta_{\mu}~,~~~{\mathbb{Q}}_+\eta_{\mu-1}=-\bbD{+} \eta_{\mu}~,~~~\mu=n-1,...,-m+1\\[1mm]
&&\bar{\mathbb{Q}}_+\eta_{-m}=\bar{\mathbb{Q}}_+\bar\phi=0~,~~~{\mathbb{Q}}_+ \eta_n={\mathbb{Q}}_+ \chi=0~.
\end{eqnarray}
This last example goes beyond the models described by the action \re{2,1}, and introduces new unconstrained superfields.
In particular, consider the following $\eta$ with $m=1=n$:
\begin{eqnarray}\nonumber
&&\eta=\zeta^{-1}\bar\phi+X+\zeta \chi\\[1mm]
&&\breve{\eta}=-\zeta\phi+\bar X -\zeta^{-1}\bar\chi~.
\end{eqnarray}
The fields satisfy
\begin{eqnarray}\nonumber
&&\bbD{+} \bar \phi=0~, ~~~\bbDB{+} \chi=0~,\\[1mm]\nonumber
&&\bar{\mathbb{Q}}_+ X=\bbDB{+} \bar \phi~,~~~{\mathbb{Q}}_+X=-\bbD{+} \chi~,\\[1mm]\nonumber
&&{\mathbb{Q}}_+\bar\phi=-\bbD{+} X~,~~~\bar {\mathbb{Q}}_+\chi=\bbDB{+} X\\[1mm]
&&\bar{\mathbb{Q}}_+\bar\phi=0~,~~~{\mathbb{Q}}_+ \chi=0~,
\end{eqnarray}
which leaves $X$ unconstrained. A quadratic Lagrangian is
\begin{eqnarray}
{\cal L}_-=i\oint_C\frac{d\zeta}{2\pi i \zeta}\left(\eta D_-\breve{\eta}-\breve{\eta}D_-\eta\right)~.
\end{eqnarray}
Performing the $\zeta$ integration results in the $(2,1)$ action taking the form
\begin{eqnarray}\label{flatact}
S=-\int d^2x\bbD{+}\bbDB{+}D_-\left(\bar\phi D_-\phi-\phi D_-\bar\phi+XD_-\bar X-\bar XD_-X-\bar\chi D_-\chi+\chi D_-\bar\chi\right)~.
\end{eqnarray}
The superfields $\phi,\chi$ are chiral and satisfy the standard free field equations
\begin{equation}
\bbD{+} D_- \phi=0,
\qquad
\bbD{+} D_- \chi=0~.
\end{equation}
However, note that their kinetic terms in the action have opposite sign.
The superfield
$X$ is unconstrained and its
field equation is $D_-X=0$, which, since $D_-D_-\sim\partial_=$, implies
$ \partial _=X=0$.
The components $X \vert, \bbD{+}X \vert, \bbDB{+}X \vert, \bbD{+}\bbDB{+}X \vert$
are all right-moving, i.e. are independent of $x^=$, while the remaining components
$D_-X \vert, \bbD{+}D_-X \vert, \bbDB{+}D_-X \vert, \bbD{+}\bbDB{+}D_-X \vert$
are all set to zero by the field equations.
\subsection{$(4,2)$ projective superspace}
\label{4.2}
For $(4,2)$ {superspace}
the derivative algebra is \re{talg}:
\begin{eqnarray}\nonumber
&&\{\bbD{+a},\bbDB{+} ^b\}=~2i\delta^b_a\partial_\+~, ~~~a,b=1,2~,\\[1mm]
&&\{\bbD{-},\bbDB{-}\}=2i \partial_=~.
\end{eqnarray}
As before, we introduce projectively chiral superfields $\eta$, now in $(4,2)$ superspace, that satisfy
\begin{eqnarray}\label{projchir1}
\nabla_+\eta=0~,~~~\breve{\nabla}_+\eta=0~,
\end{eqnarray}
which have the $\zeta$-expansion
\begin{eqnarray}
\eta=\sum_{\mu=-m}^n \zeta^\mu\eta_\mu~.
\end{eqnarray}
In addition, we impose chirality constraints, to obtain irreducible multiplets
\begin{eqnarray}\label{dminus}
\bbD{-}\eta=0~,~~\Rightarrow ~~~\bbDB{-}\breve{\eta}=0~.
\end{eqnarray}
Then
the top coefficient $\eta _n\equiv \chi$ and the bottom component $\eta_{-m}\equiv\bar\phi$ give fields $\phi, \chi$ in the $(2,2)$ reduction, where $\phi$ is chiral and $\chi$ is twisted chiral.
An invariant action is
\begin{eqnarray}\nonumber \label{actar}
&&\oint_C\frac{d\zeta}{2\pi i \zeta}\Delta_+\breve{\Delta}_+\bbD{-}\bbDB{-}L(\eta,\breve{\eta};\zeta)=\oint_C\frac{d\zeta}{2\pi i \zeta}\bbD{+}\bbDB{+}\bbD{-}\bbDB{-}L(\eta,\breve{\eta};\zeta)\\[2mm]
&&=:\bbD{+}\bbDB{+}\bbD{-}\bbDB{-}K|
\end{eqnarray}
where $L$ and its $\zeta$ integral $K$ are real potentials. Note that terms of the form $f(\eta)+\breve{f}(\breve{\eta})$ integrate to zero in the action and thus
{shifts $L \to L+ f(\eta)+\breve{f}(\breve{\eta})$}
constitute ``K\"ahler gauge transformations''.
The non-manifest supersymmetry transformations are
\begin{eqnarray}
\bar{\mathbb{Q}}_+\eta=\zeta\bbDB{+}\eta~
\end{eqnarray}
The reduction of \re{actar} to $(2,2)$ superspace gives a potential $K$ which
automatically satisfies
\begin{eqnarray}\label{ohmysd}
\frac{\partial^2K}{\partial\phi^i\partial\bar\phi^{\bar k}}+\frac{\partial^2K}{\partial\chi^k\partial\bar\chi^{\bar i}}=0~.
\end{eqnarray}
This is precisely the condition \re{4.1cnd} for $(4,2)$ supersymmetry, so in this case projective superspace gives the most general $(4,2)$ supersymmetric model.
Note that a variant multiplet $\hat \eta$ arises if we replace \re{dminus} by
\begin{eqnarray}\label{dminus2}
\bbDB{-}\hat\eta=0~,~~\Rightarrow ~~~\bbD{-}\breve{\hat\eta}=0~
\end{eqnarray}
which corresponds to $\theta_1^- \leftrightarrow \bar\theta^{1-}$. However, it is easy to see that $\hat \eta(\bar\phi,\chi) $ is equivalent to $ \breve{\eta}(-\bar\phi,\chi)$ for $\eta=\bar\phi+\zeta\chi$.
\subsubsection{Example}
A simple example of a $(4,2)$ multiplet is
\begin{eqnarray}\label{42mult}
&&\eta=\bar\phi+\zeta\chi~.
\end{eqnarray}
The
projective chirality constraints {result in $(2,2)$ superfields $\phi,\chi$
with $\phi$ chiral and $\chi$ twisted chiral.} They also yield the transformations
\begin{eqnarray}\label{2var}
&&\bar{\mathbb{Q}}_+\bar\phi=0~,~~~\bar{\mathbb{Q}}_+\bar\chi=0~,~~~\bar{\mathbb{Q}}_+\chi=\bbDB{+}\bar\phi~,~~~~\bar{\mathbb{Q}}_+\phi=-\bbDB{+}\bar\chi
\end{eqnarray}
corresponding to the truncation of the $(4,4)$ multiplet. The formulae in \re{2var} are those in \re{constr21} with
$\mathbb{Q}_+ \to i\bbD{+2}$.
We obtain the scalar potential
\begin{eqnarray}
K=i\oint_C\frac{d\zeta}{2\pi i \zeta}L~.
\end{eqnarray}
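For this multiplet, the condition \re{ohmysd} can be checked directly: since $\partial\eta/\partial\bar\phi=1$, $\partial\eta/\partial\chi=\zeta$, $\partial\breve{\eta}/\partial\phi=1$ and $\partial\breve{\eta}/\partial\bar\chi=-\zeta^{-1}$, the mixed second derivatives are
\begin{eqnarray}
K_{\phi\bar\phi}=i\oint_C\frac{d\zeta}{2\pi i \zeta}L_{,\eta\breve{\eta}}=-K_{\chi\bar\chi}~,
\end{eqnarray}
so the two terms in \re{ohmysd} cancel.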
From the action \re{actar}, reducing to $(2,1)$ and setting $\lambda_{\eta^i} \to -iL_{,\eta^i}$ we find
\begin{eqnarray}\nonumber
&&k_{\phi^i}=i\oint_C\frac{d\zeta}{2\pi i \zeta}L_{,\breve{\eta}^i}~,~~~\bar k_{\bar\phi^i}=-i\oint_C\frac{d\zeta}{2\pi i \zeta}L_{,{\eta}^i}\\[1mm]
&&k_{\chi^i}=-i\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta L_{,{\eta}^i}~,~~~\bar k_{\bar\chi^i}=-i\oint_C\frac{d\zeta}{2\pi i \zeta}\zeta^{-1}L_{,\breve{\eta}^i}~.
\end{eqnarray}
\subsubsection{Flat space}
In particular, considering a quadratic function of $d$ multiplets
\begin{eqnarray}
L=\eta^i\breve{\eta}^i~,
\end{eqnarray} with the contour $C$ a small circle around the origin, gives the following $(2,1)$ action
\begin{eqnarray}\nonumber
&&\int d^2x\oint_C\frac{d\zeta}{2\pi i \zeta}\bbD{+}\bbDB{+}D_-\left(\breve{\eta}^iD_-\eta^i-\eta^iD_-\breve{\eta}^i\right)~.
\end{eqnarray}
Performing the $\zeta$ integration reproduces the component action \re{ex103}, but with the fields now being chiral and twisted chiral. The target space geometry is $2d$ dimensional, flat with zero $B$-field.
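This can be seen directly: for a single multiplet \re{simpeta},
\begin{eqnarray}
\oint_C\frac{d\zeta}{2\pi i \zeta}\,\eta\breve{\eta}=\oint_C\frac{d\zeta}{2\pi i \zeta}\left(\phi\bar\phi-\chi\bar\chi+\zeta\phi\chi-\zeta^{-1}\bar\phi\bar\chi\right)=\phi\bar\phi-\chi\bar\chi~,
\end{eqnarray}
so that, up to overall normalization, $K\propto\phi\bar\phi-\chi\bar\chi$, the flat generalized K\"ahler potential with the twisted chiral field entering with the opposite sign.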
\subsubsection{Curved space}
We can construct a more interesting model using the propeller contour $\Gamma$ in Fig.~1; a similar construction was used in \cite{Karlhede:1984vr}.
\captionsetup{width=0.8\textwidth}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{propeller}
\caption{A propeller contour encircling two singularities of $\ln(\bullet )$. The zeros are depicted as lying on the real axis, but in our example they lie on a line tilted at an angle $\theta$ to the real axis, see \re{zeros}.}
\end{figure}
We use the $(4,2)$ multiplet $\eta$ in \re{42mult} and consider the Lagrangian given by the following integral over the contour $\Gamma$:
\begin{eqnarray}
-\oint_\Gamma \frac{d\zeta}{2\pi i \zeta}(\eta+\breve{\eta})\,\ln (\eta\breve{\eta} )~.
\end{eqnarray}
Regarding $\eta\breve{\eta}$ as a function of $\zeta$, it has two zeroes
\begin{eqnarray}\label{zeros}
\zeta_1= -\frac{\bar \phi} \chi=:-\frac 1 r e^{i\theta}~,~~~~\zeta_2= \frac{\bar \chi}\phi =: r e^{i\theta}~,
\end{eqnarray}
and these are branch points of $\ln (\eta\breve{\eta})$. We take one branch cut to go from $\zeta _1$ to $-\infty$ on the real axis, and the other to go from $\zeta _2$ to $+\infty$ on the real axis.
For any $f(\zeta) $, the integral
$$\frac 1 {2\pi i } \, \int _\Gamma{d\zeta}\, f(\zeta) \ln (\eta\breve{\eta})$$
gives
the definite integral
$$\int _{\zeta_1}^{\zeta_2}{d\zeta} f(\zeta) $$
along the straight line between $\zeta_1$ and $\zeta_2$.
\footnote{{This can be seen as follows. The real and imaginary axes divide the $\zeta$-plane into four quadrants.
Choose a branch where the integral is $\int {d\zeta} f(\zeta) \ln (\eta\breve{\eta})$ along the part of the curve below the negative real axis,
i.e. in the bottom left quadrant.
Above the negative and below the positive real axes (i.e. in the upper left and lower right quadrants)
we then have $\int {d\zeta} f(\zeta) (\ln(\eta\breve{\eta}) +2\pi i)$, and above the positive real axis (i.e. in the upper right quadrant), changing sheet in the opposite direction, it is $\int {d\zeta} f(\zeta) \ln (\eta\breve{\eta})$ again. Combining the integrals and paying attention to the directions of integration, the net result is $\int _{\zeta_1}^{\zeta_2} d\zeta f(\zeta)$.
}}
For $f(\zeta)= \frac{1}{ \zeta}(\eta+\breve{\eta})$,
the resulting $(2,2)$ scalar potential is then
\begin{eqnarray}
K=-(\phi+\bar\phi)\left( \frac{\chi\bar\chi}{\phi\bar\phi}+\ln\Big(\frac{\chi\bar\chi}{\phi\bar\phi}\Big)\right)~,
\end{eqnarray}
(up to K\"ahler gauge transformations), which is indeed invariant under the additional supersymmetry in \re{2var}. The geometry has a conformally flat metric $g$ and is given by
\begin{eqnarray}\nonumber
&&g_{a\bar b}=\left(\begin{array}{cc}g_{\phi\bar\phi}&0\\
0&g_{\chi\bar\chi}\end{array}\right)=\frac{(\phi+\bar\phi)}{\phi\bar\phi}\left(\begin{array}{cc}\mathbb{1}&0\\
0&\mathbb{1}\end{array}\right)\\[2mm]\nonumber
&&H=\phi^{-2}d\chi\wedge d\bar\chi\wedge d\phi-\bar\phi^{-2}d\chi\wedge d\bar\chi\wedge d\bar\phi\\[1mm]
&& R= {\frac 3 2}
\frac {\phi\bar\phi} {(\phi+\bar\phi)^3}
\end{eqnarray}
where $H=dB$ and $R$ is the curvature scalar (see, e.g., \cite{Brendle}).
Note that the vector field $\partial/\partial \chi$ generates an isometry.
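As a check of the contour prescription, the line integral can be evaluated explicitly for $f(\zeta)=\frac 1 \zeta (\eta+\breve{\eta})$:
\begin{eqnarray}
\int _{\zeta_1}^{\zeta_2}\frac{d\zeta}{\zeta} (\eta+\breve{\eta})=\Big[(\phi+\bar\phi)\ln \zeta+\chi\zeta+\bar\chi\zeta^{-1}\Big]_{\zeta_1}^{\zeta_2}=(\phi+\bar\phi)\left(\frac{\chi\bar\chi}{\phi\bar\phi}+\ln \frac{\chi\bar\chi}{\phi\bar\phi}\right)+\ldots~,
\end{eqnarray}
where the ellipsis denotes terms linear in $\phi$ and $\bar\phi$, removable by a K\"ahler gauge transformation; with the overall minus sign in the Lagrangian this reproduces $K$ above.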
\subsubsection{Semichiral Superfields}
When we want to generalise the constructions along the lines of the second $(4,1)$ example \re{genex} above, we run into an interesting problem. When $n\leq 1$, and $\eta$ is a series as in \re{genex} with the additional condition $D_-\eta=0$, { its reduction to $(2,2)$ superspace contains right semichiral fields rather than unconstrained superfields, } e.g.,
\begin{eqnarray}
\eta=\zeta^{-1}\eta_{-1}+\eta_0+\zeta\eta_1 = \zeta^{-1}\bar\phi+\bar r +\zeta\chi~,
\end{eqnarray}
where the constraints imply that $\phi$ is chiral, $\chi$ twisted chiral and $r$ right semichiral: $\bbDB{-}r=0$. {However, to construct a sigma model with a non-degenerate kinetic term, one needs an equal number of left and right semichiral superfields. Here by necessity we get right semichiral superfields but no left semichiral superfields. Such a model typically contains right-moving multiplets
\cite{Buscher:1987uw}. The $(4,2)$ projective superfields are thus restrictive when it comes to constructing sigma models.
To construct a sigma model with non-degenerate kinetic term, the multiplets considered here would need to be combined with other multiplets.}
\subsection{$(4,4)$ projective superspace}
This case is well documented in the literature \cite{Karlhede:1984vr}-\cite{Lindstrom:2008gs}, {and we make no claim of completeness for the following brief presentation. The construction is always off-shell, typically involving auxiliary fields (sometimes an infinite number).} The application to our present type of multiplets, notably the $(4,4)$ twisted multiplet, requires use of the doubly projective superspace based on $\mathbb{CP}^1 \otimes\mathbb{CP}^1$. The two coordinates on these are labeled $\zeta_L$ and $\zeta_R$, respectively. The linear combinations of the four $(4,4)$ derivatives are
\begin{eqnarray}\nonumber
&&\nabla_+:=\bbD{+1}+\zeta_L\bbD{+2}~, ~~~\nabla_-:=\bbD{-1}+\zeta_R\bbD{-2}\\[1mm]
&&\Delta_+=\bbD{+1}-\zeta_L\bbD{+2}~, ~~~\Delta_-=\bbD{-1}-\zeta_R\bbD{-2}~,
\end{eqnarray}
and their
conjugates. Now the anticommutation relations are \re{nadealg} for
positive chirality derivatives $\nabla_+,\Delta_+$
with similar relations for the negative chirality ones $\nabla_-,\Delta_-$.
A projectively chiral superfield $\eta$ satisfies
\begin{eqnarray}\label{fetadef}
\nabla_\pm \eta=0~.
\end{eqnarray}
We consider the real multiplet
\begin{eqnarray}
\eta=\bar\phi+\zeta_L\chi+\zeta_R\bar\chi-\zeta_L\zeta_R\phi~,
\end{eqnarray}
where the components and transformations are those of the $(4,4)$ twisted multiplet and the reality condition is
\begin{eqnarray}
\eta=-\zeta_L^{-1}\zeta_R^{-1}\breve{\eta}~.
\end{eqnarray}
A $(4,4)$ Lagrangian is
\begin{eqnarray}
\oint _{C_L}\frac{d\zeta_L}{2\pi i \zeta_L}\oint_{C_R} \frac{d\zeta_R}{2\pi i \zeta_R} \Delta_+\Delta_-\breve{\Delta}_+\breve{\Delta}_-L(\eta)~,
\end{eqnarray}
where $C_L$ and $C_R$ are some suitable contours.
By construction,
this will be invariant under the full $(4,4)$ supersymmetry. In fact, $L=L(\eta^i)$ ensures that the potential $K$ satisfies the general $(4,4)$ conditions \re{ertyb} and \re{ertyc}, where the indices now refer to a set of $\eta^i~\!\!$s.
Other multiplets involving semichirals and auxiliaries may be constructed as in \cite{Buscher:1987uw} and \cite{Lindstrom:1994mw}.
{Finally, we mention that other extended superspaces, such as Harmonic Superspace \cite{Galperin:1984av}-\cite{Galperin:2001uw}, have also been used to describe off-shell $(4,4)$ multiplets and actions. The construction closest to what we describe in this section uses bi-harmonic superspace, as described in, e.g., \cite{Ivanov:1995jb}.}
\bigskip
\section{Conclusion}
In this paper we introduce new $(4,1)$ and $(4,2)$ multiplets and construct actions for them using new projective superspaces and their progenitors in the GHR formalism. We find the conditions for additional supersymmetries as conditions on the geometric objects: the vector or scalar potentials for the metric and $B$-field. Our multiplets and actions display off-shell supersymmetry and simultaneously integrable complex structures.
The general conditions for a $(2,1)$ model to have $(4,1)$ symmetry are given in \re{herm1} and \re{herm2}. The conditions for a $(2,2)$ model to have $(4,2)$ symmetry are \re{4.1cnd}, and the conditions for a $(2,2)$ model to have $(4,4)$ symmetry are the well known relations \re{ertyc} and \re{ertyb}.
We also consider a stronger condition \re{vpotcs} that is sufficient but not necessary for a $(2,1)$ model to have $(4,1)$ symmetry.
Actions for the $(4,1)$ multiplet \re{constr2} as well as for $(4,2)$ multiplets are constructed both using the GHR approach and novel $(4,1)$ and $(4,2)$ projective superspaces.
We also briefly reviewed the $(4,4)$ models. General $(4,4)$ models were formulated in $(4,4)$ superspace using the GHR approach in \cite{Gates:1984nk}, and later using projective superspace actions. In both approaches, the scalar potential satisfies certain conditions by construction. The full conditions for $(4,4)$ supersymmetry arise when we combine the conditions for $(4,2)$ supersymmetry with those for $(2,4)$ supersymmetry.
Examining the $(4,p)$ supersymmetric actions constructed in $(4,p)$ superspace using both the projective superspace and GHR constructions, we find that
they give the most general $(4,p)$ supersymmetric sigma models for both the $(4,2)$ and $(4,4)$ cases, but for the $(4,1)$ case we obtain only the special class of models for which the constraint \re{constr2} is satisfied.
This can be viewed as follows.
The $(4,1)$ actions we have constructed are based on superfields that depend on additional parameters apart from the worldsheet superspace coordinates. The additional parameters enter in such a way that the second derivative conditions \re{ohmy} are satisfied. In addition, the form of the actions leads to vector potentials that satisfy \re{vpotcs}. Together these conditions are stronger than the general conditions \re{herm1} and \re{herm2} for extra supersymmetry of a $(2,1)$ action. This is in contrast to the $(4,2)$ case, where the scalar potential constructed with extra parameters satisfies precisely the general condition \re{4.1cnd} for $(4,2)$ supersymmetry. At present we do not fully understand this discrepancy, but perhaps there is a more general construction which gives a manifest formulation of the general $(4,1)$ case.
\bigskip
\noindent{\bf Acknowledgement}:\\
During the final stage of writing this article, we were informed of interesting results by Naveen Prabhakar and Martin Ro\v cek on $(0,4)$ and $(0,2)$ multiplets and projective superspace that partly overlap, but mostly complement our presentation. We hope to return to the topic in a joint publication.
Discussions with both these authors, as well as the stimulating atmosphere at the 2016 Simons workshop on geometry and physics, are gratefully acknowledged. UL gratefully acknowledges the hospitality of the theory group at Imperial College, London, as
well as partial financial support by the Swedish Research Council through VR grant 621-2013-4245. This work was supported by the EPSRC programme grant ``New Geometric
Structures from String Theory'', EP/K034456/1.
\section{Introduction}\label{sec1}
A dynamical system $(X,d,f)$ always means that $(X,d)$ is a compact metric space and $f$ is a continuous self-map on $X$. One of the fundamental issues in dynamical systems is how points with similar asymptotic behavior influence or determine the system's complexity. Topological entropy is a quantity used to describe the dynamical complexity of a dynamical system. {Using a definition analogous to that of Hausdorff dimension, Bowen \cite{MR338317} and Pesin \cite{MR1489237} extended the concept of topological entropy to non-compact sets. On the basis of this theory, researchers are paying more attention to the Hausdorff dimension or topological entropy of non-compact sets, for example, multifractal spectra and saturated sets of dynamical systems, see \cite{MR2322186,MR3436391,MR1971209, MR1439805, MR2931333, MR1759398, MR3963890, MR3833343, MR2158401}. In particular, recent work on non-compact sets with full entropy has attracted considerable attention, see \cite{ MR4200965, MR3963890, MR3833343, MR3436391,MR2158401}.}
As a result, the size of the topological entropy of ``periodic-like'' recurrent level sets is always in focus, see \cite{MR3963890,DongandTian1,DongandTian2,MR3436391,MR4200965}.
Many different recurrent level sets have been considered. The concepts of periodic point, recurrent point, almost periodic point and non-wandering point are described in \cite{MR648108}; the concepts of (quasi)weakly almost periodic point (sometimes called (upper) lower density recurrent point) can be found in \cite{MR1223083,MR2039054,1993Measure,1995Level}. Moreover, Zhou \cite{1995Level} was the first to link the recurrent frequency with the support of the invariant measures that arise as limit points of the empirical measures, which has proved very useful to researchers.
The Birkhoff ergodic average \cite{MR1439805,MR1489237} (or Lyapunov exponents \cite{MR2645746}) is a well-known tool to distinguish the asymptotic behavior of a dynamical orbit. Let $\varphi: X\to\mathbb{R}$ be a continuous function and $a$ a real number. Consider a level set of Birkhoff averages
$$
R_\varphi(a):=\left \{x\in X: \lim_{n\to \infty}\frac{1}{n}\sum_{j=0}^{n-1}\varphi\left (f^j(x)\right )=a\right \}.
$$
These $R_\varphi(a)$ form a multifractal decomposition and the function
$$
a\to h_{R_\varphi(a)}(f)
$$
is an entropy spectrum, where $h_{R_\varphi(a)}(f)$ denotes the topological entropy of $R_\varphi(a)$ defined in \cite{MR338317}. Takens and Verbitskiy \cite{MR1971209} proved that transformations satisfying the specification property have the following variational principle
\begin{equation}\label{1.1}
h_{R_\varphi(a)}\left(f\right)=\sup \left\{ h_{\mu}(f): \mu\text{ is invariant and }\int \varphi d \mu=a\right\}
\end{equation}
where $h_\mu(f)$ is the measure-theoretic (or Kolmogorov-Sinai) entropy of the invariant measure $\mu$. In \cite{MR2322186}, formula (\ref{1.1}) is shown to hold under the $\mathbf{g}$-almost product property, which was introduced by Pfister and Sullivan. Consider the $\varphi$-irregular set:
$$
I_\varphi:=\left \{x\in X:\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\varphi \left (f^j(x)\right )\text{ diverges}\right \}.
$$
By Birkhoff's ergodic theorem, the irregular set is not detectable from the standpoint of invariant measures. However, in systems with certain properties, the irregular set may carry full topological entropy, see \cite{MR2931333,MR3833343,MR2158401,MR775933,MR1759398} and references therein.
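As a concrete numerical illustration (not taken from the paper), Birkhoff averages can be approximated by truncating the orbit sum. The sketch below uses a hypothetical choice of system and observable: the irrational rotation $f(x)=x+\alpha \bmod 1$ on $[0,1)$ with $\varphi(x)=\cos(2\pi x)$. Since the rotation is uniquely ergodic, every point belongs to the level set $R_\varphi(0)$ and the irregular set $I_\varphi$ is empty for this system.

```python
import math

def birkhoff_average(x0, phi, n, alpha):
    # (1/n) * sum_{j=0}^{n-1} phi(f^j(x0)) for the rotation f(x) = x + alpha mod 1;
    # the j-th iterate is computed directly as x0 + j*alpha mod 1
    return sum(phi((x0 + j * alpha) % 1.0) for j in range(n)) / n

alpha = (math.sqrt(5.0) - 1.0) / 2.0   # an irrational rotation number
phi = lambda x: math.cos(2.0 * math.pi * x)

# Unique ergodicity: for EVERY x0 the Birkhoff average converges to the
# space average, here the integral of cos(2*pi*x) over [0,1), which is 0.
avg = birkhoff_average(0.2, phi, 50000, alpha)
print(abs(avg) < 1e-2)
```

For a system with several ergodic measures the same truncated sums would depend on the starting point, which is exactly the dichotomy the sets $R_\varphi(a)$ and $I_\varphi$ encode.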
Based on these results, Tian \cite{MR3436391} distinguished various periodic-like recurrences and discovered that they all carry full topological entropy, as do their gap-sets, under the $\mathbf{g}$-almost product property and other conditions. Furthermore, the author \cite{MR3436391} combined the periodic-like recurrences with (ir)regularity to obtain a large number of generalized multifractal analyses for all continuous observable functions. Later on, Huang, Tian and Wang \cite{MR3963890} introduced an abstract version of multifractal analysis applicable to more general functions. Let $\alpha :\mathcal{M}(X,f)\to\mathbb{R}$ be a continuous function, where $\mathcal{M}(X,f)$ denotes the set of all $f$-invariant probability measures on $X$. Define more general regular and irregular sets as follows
$$
R_\alpha:=\left \{x\in X : \inf_{\mu\in M_x}\alpha (\mu)= \sup_{\mu\in M_x}\alpha (\mu)\right \},
$$
$$
I_\alpha:=\left \{x\in X : \inf_{\mu\in M_x}\alpha (\mu)< \sup_{\mu\in M_x}\alpha (\mu)\right \},
$$
respectively, where $M_x$ stands for the set of all limit points of the empirical measures. Furthermore, the authors of \cite{MR3963890} established an abstract framework combining {Banach upper recurrence,} transitivity and the multifractal analysis of general observable functions. Drawing on \cite{MR1601486} for maps and \cite{MR1716564} for flows, Dong and Tian \cite{DongandTian1} defined statistical $\omega$-limit sets to describe, via concepts of density, the different statistical structures of dynamical orbits, and considered multifractal analysis on various non-recurrence and Birkhoff ergodic averages. Furthermore, the authors of \cite{MR3963890} used statistical $\omega$-limit sets to obtain results on the topological entropy of the refined orbit distribution of
{Banach upper recurrence.}
On the other hand, Ghys et al \cite{MR926526} proposed a definition of topological entropy for finitely generated pseudo-groups of continuous maps.
Later on, the topological entropy of free semigroup actions on a compact metric space was defined by Bi\'{s} \cite{MR2083436} and Bufetov \cite{MR1681003}, respectively. Rodrigues and Varandas \cite{MR3503951} and Lin et al. \cite{MR3774838} extended the work of Bufetov \cite{MR1681003} from various perspectives. Following the methods of Bowen \cite{MR338317} and Pesin \cite{MR1489237}, Ju et al. \cite{MR3918203} introduced the topological entropy and upper capacity topological entropy of free semigroup actions on non-compact sets, which extended the results obtained in \cite{MR1681003,MR338317,MR1489237}. Zhu and Ma \cite{MR4200965} investigated the upper capacity topological entropy of free semigroup actions for certain non-compact sets. More related results can be found in \cite{MR3503951,MR3828742,MR3784991,MR3774838,MR3918203,MR4200965,MR3592853,MR1767945}.
{
The above results raise the question of whether similar sets exist in dynamical systems of free semigroup actions. Further, one can ask whether these sets have full topological entropy or full upper capacity topological entropy, and whether an abstract framework for general observable functions can be established. To answer these questions, we introduce different asymptotic behaviors of points and a more general irregular set. Our results show that various subsets characterized by distinct asymptotic behavior may carry full upper capacity topological entropy of free semigroup actions. Our analysis continues the work of \cite{MR4200965} and generalizes the results obtained in \cite{MR3963890} and \cite{MR2322186}.}
This paper is organized as follows. In Sec. \ref{main results}, we give our main results. In Sec. \ref{Section-Preliminaries-2}, we give some preliminaries.
{
In Sec. \ref{3}, we introduce some concepts of transitive points, quasiregular points, upper recurrent points and Banach upper recurrent points with respect to a certain orbit of free semigroup actions.
On the other hand, the g-almost product property of free semigroup actions,
which is weaker than the specification property of free semigroup actions, is introduced.
}
In Sec. \ref{Irregular and regular set}, the upper capacity topological entropy of free semigroup actions on the more general irregular and regular sets is considered, and some examples satisfying the assumptions of our main results are described. In Sec. \ref{BR and QW}, we give the proofs of our main results.
\section{Statement of main results}\label{main results}
Let $(X,d)$ be a compact metric space and $G$ the free semigroup generated by $m$ generators $f_0,\cdots,f_{m-1}$ which are continuous maps on $X$. Let $\Sigma^+_m$ denote the one-side symbol space generated by the digits $\{0,1,\cdots,m-1\}$. Recall that the skew product transformation {is given as follows}:
$$
F:\Sigma^+ _m \times X\to\Sigma^+ _m \times X,\:\, (\iota,x)\mapsto \big(\sigma(\iota),f_{i_0}(x)\big),
$$
where $\iota=(i_0, i_1,\cdots)\in\Sigma_{m}^+$, $x\in X$ and $\sigma$ is the shift map of $\Sigma^+ _m $. Let $\mathcal{M}(\Sigma_{m}^+\times X,F)$ denote the set of invariant measures of the skew product $F$. Denote by $\mathrm{Tran}(\iota,G)$, $\mathrm{QR}(\iota,G)$, $\mathrm{QW}(\iota,G)$ and $\mathrm{BR}(\iota,G)$ the sets of the transitive points, the quasiregular points, the upper recurrent points and the Banach upper recurrent points of free semigroup action $G$ with respect to $\iota\in\Sigma_{m}^+$, respectively (see Sec. \ref{3}).
Let $\alpha: \mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ be a continuous function. Given $(\iota, x) \in \Sigma_{m}^+\times X$, let $M_{(\iota,x)}(F)$ be the set of all limit points of $\{\frac{1}{n}\sum_{j=0}^{n-1}\delta_{F^j(\iota,x)}\}$ in the weak$^*$ topology. Let $L_\alpha:=[\inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu),\, \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)]$ and let $\mathrm{Int}(L_\alpha)$ denote the interior of the interval $L_\alpha$.
{
For possible applicability to more general functions, \cite{MR3963890} defined three conditions on $\alpha$ to introduce an abstract version of multifractal analysis. To understand our main results, we list the three conditions} (see \cite{MR3963890} for details):
\begin{itemize}
\item [A.1] For any $\mu, \nu \in \mathcal{M}(\Sigma_{m}^+\times X, F)$, $\beta(\theta):=\alpha(\theta \mu+(1-\theta) \nu)$ is strictly monotonic on $[0,1]$ when $\alpha(\mu) \neq \alpha(\nu)$.
\item [A.2] For any $\mu, \nu \in \mathcal{M}(\Sigma_{m}^+\times X, F)$, $\beta(\theta):=\alpha(\theta \mu+(1-\theta) \nu)$ is constant on $[0,1]$ when $\alpha(\mu)=\alpha(\nu)$.
\item [A.3] For any $\mu, \nu \in \mathcal{M}(\Sigma_{m}^+\times X, F)$, $\beta(\theta):=\alpha(\theta \mu+(1-\theta) \nu)$ is not constant over any subinterval of $[0,1]$ when $\alpha(\mu) \neq \alpha(\nu)$.
\end{itemize}
Given $\iota\in\Sigma_{m}^+$, we introduce the general regular set and irregular set with respect to $\iota$ as follows:
$$
R_\alpha(\iota,G):=\left \{x\in X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)= \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \},
$$
$$
I_\alpha(\iota,G):=\left \{x\in X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)< \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \}.
$$
Let $C_{(\iota,x)}=\overline{\cup_{\nu \in M_{(\iota,x)}(F)} S_{\nu}}$, where $S_\nu$ denotes the support of measure $\nu$.
Let $\mathrm{BR}^{\#}(\iota,G):=\mathrm{BR}(\iota,G) \backslash \mathrm{QW} (\iota,G)$. For fixed $\iota\in \Sigma_{m}^+$, the following sets are introduced to characterize the recurrence of the orbit more precisely:
$$
\begin{aligned}
W(\iota,G)&:=\left\{x\in X :S_{\nu}=C_{(\iota,x)} \text { for every } \nu \in M_{(\iota,x)}(F)\right\}, \\
V(\iota,G)&:=\left\{x\in X: \exists \nu \in M_{(\iota,x)}(F) \text { such that } S_{\nu}=C_{(\iota, x)}\right\}, \\
S(\iota,G)&:=\left\{x\in X: \cap_{\nu \in M_{(\iota,x)}(F)} S_{\nu} \neq \emptyset\right\} .
\end{aligned}
$$
More specifically, $\mathrm{BR}^{\#}(\iota,G)$ is subdivided into the following levels with different asymptotic behavior:
$$
\begin{aligned}
&\mathrm{BR}_{1}(\iota,G):=\mathrm{BR}^{\#}(\iota,G)\cap W(\iota,G), \\
&\mathrm{BR}_{2}(\iota,G):=\mathrm{BR}^{\#}(\iota,G)\cap(V(\iota,G) \cap S(\iota,G)), \\
&\mathrm{BR}_{3}(\iota,G):=\mathrm{BR}^{\#}(\iota,G)\cap V(\iota,G),\\
&\mathrm{BR}_{4}(\iota,G):=\mathrm{BR}^{\#}(\iota,G)\cap (V(\iota,G) \cup S(\iota,G)),\\
&\mathrm{BR}_{5}(\iota, G):=\mathrm{BR}^{\#}(\iota,G).
\end{aligned}
$$
Then $\mathrm{BR}_{1}(\iota,G) \subseteq \mathrm{BR}_{2}(\iota,G) \subseteq \mathrm{BR}_{3}(\iota,G) \subseteq \mathrm{BR}_{4}(\iota,G) \subseteq \mathrm{BR}_{5}(\iota,G)$.
Analogously, $\mathrm{QW}(\iota,G)$ is also subdivided into the following levels with varying asymptotic behavior:
$$
\begin{aligned}
&\mathrm{QW}_{1}(\iota,G):=\mathrm{QW}(\iota,G)\cap W(\iota,G), \\
&\mathrm{QW}_{2}(\iota,G):=\mathrm{QW}(\iota,G)\cap (V(\iota,G) \cap S(\iota,G)), \\
&\mathrm{QW}_{3}(\iota,G):=\mathrm{QW}(\iota,G)\cap V(\iota,G),\\
&\mathrm{QW}_{4}(\iota,G):=\mathrm{QW}(\iota,G)\cap (V(\iota,G) \cup S(\iota,G)),\\
&\mathrm{QW}_{5}(\iota, G):=\mathrm{QW}(\iota,G).
\end{aligned}
$$
Note that $\mathrm{QW}_{1}(\iota,G) \subseteq \mathrm{QW}_{2}(\iota,G) \subseteq \mathrm{QW}_{3}(\iota,G) \subseteq \mathrm{QW}_{4}(\iota,G) \subseteq \mathrm{QW}_{5}(\iota,G)$.
Let $Z_0(\iota,G),Z_1(\iota,G),\cdots,Z_k(\iota,G)$ and $Y(\iota,G)$ be the subsets of $X$ with respect to $\iota\in\Sigma_{m}^+$. Consider the sets, for $j=1,\cdots,k$,
$$
M_j(Y):=\cup_{\iota\in\Sigma_{m}^+}\big(Y(\iota,G)\cap(Z_j(\iota,G)\setminus Z_{j-1}(\iota,G))\big),
$$
then we say that the sets $M_1(Y), M_2(Y),\cdots,M_k(Y)$ are the unions of gaps of $\{Z_0(\iota,G),Z_1(\iota,G),\cdots,Z_k(\iota,G)\}$ with respect to $Y(\iota,G)$ for all $\iota\in\Sigma_{m}^+$.
Now we start to state our main theorems.
\begin{theorem}\label{entropy of BR-1}
Suppose that $G$ has the $\mathbf{g}$-almost product property and that there exists a $\mathbb{P}$-stationary measure (see Sec. \ref{Stationary measure}) on $X$ with full support, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\alpha: \mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ be a continuous function.
\begin{itemize}
\item [(1)] If $\alpha$ satisfies A.3 and $\mathrm{Int}(L_\alpha)\neq\emptyset$, then the unions of gaps of
$$
\left \{\mathrm{QR}(\iota, G), \mathrm{BR}_1(\iota,G), \mathrm{BR}_2(\iota,G),\mathrm{BR}_3(\iota,G),\mathrm{BR}_4(\iota,G),\mathrm{BR}_5(\iota,G)\right \}
$$
with respect to $I_\alpha (\iota,G)\cap\mathrm{Tran}(\iota,G)$ for all $\iota\in\Sigma_{m}^+$ have full upper capacity topological entropy of free semigroup action $G$.
\item [(2)] If the skew product $F$ is not uniquely ergodic, then the unions of gaps of
$$
\left \{\emptyset, \mathrm{QR}(\iota, G)\cap \mathrm{BR}_1(\iota,G),\mathrm{BR}_1(\iota,G), \mathrm{BR}_2(\iota,G),\mathrm{BR}_3(\iota,G),\mathrm{BR}_4(\iota,G),\mathrm{BR}_5(\iota,G)\right \}
$$
with respect to $\mathrm{Tran}(\iota,G)$ for all $\iota\in\Sigma_{m}^+$ have full upper capacity topological entropy of free semigroup action $G$.
\item [(3)] If the skew product $F$ is not uniquely ergodic and $\alpha$ satisfies A.1 and A.2, then the unions of gaps of
$$
\left \{\emptyset, \mathrm{QR}(\iota, G)\cap \mathrm{BR}_1(\iota,G),\mathrm{BR}_1(\iota,G), \mathrm{BR}_2(\iota,G),\mathrm{BR}_3(\iota,G),\mathrm{BR}_4(\iota,G),\mathrm{BR}_5(\iota,G)\right \}
$$
with respect to $R_\alpha(\iota,G)\cap\mathrm{Tran}(\iota,G)$ for all $\iota\in\Sigma_{m}^+$ have full upper capacity topological entropy of free semigroup action $G$.
\end{itemize}
\end{theorem}
\begin{theorem}\label{entropy of QW-1}
Suppose that $G$ has the $\mathbf{g}$-almost product property and is positively expansive, and that there exists a $\mathbb{P}$-stationary measure with full support on $X$, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\alpha: \mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ be a continuous function. If the skew product $F$ is not uniquely ergodic, then the unions of gaps of
$$
\left \{\emptyset, \mathrm{QR}(\iota,G)\cap\mathrm{QW}_1(\iota,G),\mathrm{QW}_1(\iota,G),\mathrm{QW}_2(\iota,G),\mathrm{QW}_3(\iota,G),\mathrm{QW}_4(\iota,G),\mathrm{QW}_5(\iota,G)\right \}
$$
with respect to
$\mathrm{Tran}(\iota,G)$, $R_\alpha(\iota,G)\cap\mathrm{Tran}(\iota,G)$ for all $\iota\in\Sigma_{m}^+$ have full upper capacity topological entropy of free semigroup action $G$, respectively.
If $I_\alpha(\iota,G)$ is non-empty for some $\iota\in\Sigma_{m}^+$, the analogous statement holds with respect to $I_\alpha (\iota,G)\cap\mathrm{Tran}(\iota,G)$.
\end{theorem}
For $S\subseteq\mathbb{N}$, let $\overline{d}(S) $, $\underline{d}(S)$, $B^{*}(S)$ and $B_*(S)$ denote the upper density, lower density, Banach upper density and Banach lower density of $S$, respectively. Given $(\iota, x)\in\Sigma_{m}^+\times X$ and $\xi\in\{\,\overline{d}, \underline{d},B^{*}, B_{*} \}$, denote by $\omega_\xi\left ((\iota,x),F\right )$ the $\xi$-$\omega$-limit set of $(\iota, x)$. The notions will be given in more
detail later in Section \ref{Section-Preliminaries-2-1} (also see \cite{MR3963890, DongandTian1,DongandTian2}). If $\omega_{B_*}\left ((\iota,x),F\right )=\emptyset$ and $\omega_{B^*}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$, then from \cite{DongandTian1} one has that $(\iota,x)$ satisfies exactly one of the following six cases:
\begin{enumerate}
\item [(1)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right ) \subsetneq \omega_{\underline{d}}\left ((\iota,x),F\right )=\omega_{\overline{d}}\left ((\iota,x),F\right )=\omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$;
\item [(2)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right ) \subsetneq \omega_{\underline{d}}\left ((\iota,x),F\right )=\omega_{\overline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$;
\item [(3)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right )=\omega_{\underline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{\overline{d}}\left ((\iota,x),F\right )=\omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$;
\item [(4)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right ) \subsetneq \omega_{\underline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{\overline{d}}\left ((\iota,x),F\right )=\omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$;
\item [(5)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right )=\omega_{\underline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{\overline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$;
\item [(6)]
$\emptyset=\omega_{B_{*}}\left ((\iota,x),F\right ) \subsetneq \omega_{\underline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{\overline{d}}\left ((\iota,x),F\right ) \subsetneq \omega_{B^{*}}\left ((\iota,x),F\right )=\omega\left ((\iota,x),F\right )$.
\end{enumerate}
Consider the two sets as follows:
$$
T_j(\iota,G):=\left \{x\in \mathrm{Tran} (\iota,G):(\iota,x) \text{ satisfies Case } (j)\right \},
$$
$$
B_j(\iota,G):=\left \{x\in \mathrm{BR} (\iota,G):(\iota,x) \text{ satisfies Case } (j)\right \},
$$
where $j=1,\cdots,6$. Let
$$
T_j(G):=\cup_{\iota\in\Sigma_{m}^+}T_j(\iota,G),\quad B_j(G):=\cup_{\iota\in\Sigma_{m}^+}B_j(\iota,G).
$$
\begin{theorem}\label{entropy of omega limit set}
Suppose that $G$ has the $\mathbf{g}$-almost product property, there exists a $\mathbb{P}$-stationary measure with full support on $X$ where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. If the skew product $F$ is not uniquely ergodic, then $T_j(G)\neq\emptyset$ and $B_j(G)\neq\emptyset$.
Moreover, they all have full upper capacity topological entropy of free semigroup action $G$, that is,
$$
\overline{Ch}_{T_j(G)}(G)=\overline{Ch}_X(G)=h(G),
$$
$$
\overline{Ch}_{B_j(G)}(G)=\overline{Ch}_X(G)=h(G),
$$
for all $j=1,\cdots,6$, where $\overline{Ch}_Z(G)$ denotes the upper capacity topological entropy on any subset $Z\subseteq X$ in the sense of \cite{MR3918203}, $h(G)$ denotes the topological entropy in the sense of Bufetov \cite{MR1681003}. If $Z=X$, we have $\overline{Ch}_{X}(G)=h(G)$ from Remark 5.1 of \cite{MR3918203}.
\end{theorem}
\section{Preliminaries}\label{Section-Preliminaries-2}
\subsection{Some notions} \label{Section-Preliminaries-2-1}
Let $(X,d)$ be a compact metric space and $f$ be a continuous map on $X$.
For $S\subseteq \mathbb{N}$, the upper density and the Banach upper density of $S$ are defined by
$$
\overline{d}(S):=\limsup_{n \rightarrow \infty} \frac{\sharp\left \{S \cap\{0,1, \cdots, n-1\}\right \}}{n},\quad B^{*}(S):=\limsup _{\sharp I \rightarrow \infty} \frac{\sharp \left \{S \cap I\right \}}{\sharp I},
$$
respectively, where $\sharp Y$ denotes the cardinality of the set $Y$ and $I \subseteq \mathbb{N}$ ranges over finite intervals of consecutive integers. Similarly, one can define
the lower density and the Banach lower density of $S$, denoted by $\underline{d}(S)$ and $B_{*}(S)$, respectively. Let $U\subset X$ be a nonempty open set and $x\in X$. Define the set of visiting times,
$$
N(x,U):=\left \{ n\ge 0: f^n(x)\in U\right \}.
$$
Recall that a point $x\in X$ is called Banach upper recurrent if for any $\varepsilon>0$, the set of visiting times $N(x,B(x,\varepsilon))$ has positive Banach upper density, where $B(x,\varepsilon)$ denotes the ball centered at $x$ with radius $\varepsilon$. Similarly, a point $x\in X$ is called upper recurrent if for any $\varepsilon>0$, the set of visiting times $N(x,B(x,\varepsilon))$ has positive upper density. Let us denote by $\mathrm{BR}(f)$ and $\mathrm{QW}(f)$ the sets of Banach upper recurrent points and upper recurrent points of $f$, respectively. It is immediate that
$$
\mathrm{QW}(f)\subseteq \mathrm{BR}(f).
$$
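To make the difference between $\overline{d}$ and $B^{*}$ concrete, here is a small numerical sketch with a hypothetical set (not from the paper): for $S=\bigcup_k [2^{2k},2^{2k+1})$ the upper density is $2/3$, while the Banach upper density is $1$, since $S$ contains arbitrarily long intervals. The functions below compute empirical versions truncated at a finite horizon $N$.

```python
def upper_density_prefix(S, N):
    # empirical version of \bar d(S): best ratio #(S cap [0,n)) / n over
    # prefixes with n up to N, ignoring small n where fluctuations dominate
    s = set(S)
    best, count = 0.0, 0
    for n in range(1, N + 1):
        if n - 1 in s:
            count += 1
        if n >= N // 2:
            best = max(best, count / n)
    return best

def banach_upper_density_window(S, N, L):
    # empirical version of B*(S): best ratio #(S cap I) / #I over all
    # windows I of length L contained in [0, N), via a sliding sum
    s = set(S)
    indicator = [1 if i in s else 0 for i in range(N)]
    window = sum(indicator[:L])
    best = window / L
    for i in range(L, N):
        window += indicator[i] - indicator[i - L]
        best = max(best, window / L)
    return best

# S = union of blocks [2^{2k}, 2^{2k+1}): upper density 2/3, Banach upper density 1
N = 1 << 14
S = [n for n in range(1, N) if (n.bit_length() - 1) % 2 == 0]
print(banach_upper_density_window(S, N, 256))  # a window inside a long block of S
```

A window of length 256 fits entirely inside the block $[4096,8192)\subset S$, so the empirical Banach upper density is already $1$, while the prefix ratios stay near $2/3$.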
A {point} $x\in X$ is called transitive {if its orbit $\{x,f(x),f^2(x),\cdots\}$} is dense in $X$. Let us denote by $\mathrm{Tran}(f)$ the set of transitive points of $f$.
{We recall} that several concepts were introduced in \cite{MR3963890}. For $x \in X$ and $\xi\in\{\,\overline{d}, \underline{d}, B_{*}, B^{*} \}$, a point $y \in X$ is called $x$-$\xi$-accessible, if for any $\varepsilon>0$, the set of visiting time $N\left(x, B(y,\varepsilon)\right)$ has positive density with respect to $\xi$. Let
$$
\omega_{\xi}(x):=\{y \in X: y \text { is } x\text{-}\xi\text{-accessible }\}.
$$
For convenience, it is called the $\xi$-$\omega$-limit set of $x$.
The set of invariant measures under $f$ will be denoted by $\mathcal{M}(X,f)$. For $\mu\in\mathcal{M}(X,f)$, a point $x \in X$ is called $\mu$-generic if
$$
\lim _{n \rightarrow \infty} \frac{1}{n} \sum_{j=0}^{n-1} \delta_{f^{j}(x)}=\mu
$$
where $\delta_{y}$ denotes the Dirac measure at $y$. We will use $G_\mu(f)$ to denote the set of $\mu$-generic points. Let $\mathrm{QR}(f):=\bigcup_{\mu\in\mathcal{M}(X,f)}G_\mu(f)$. The points in $\mathrm{QR}(f)$ are called quasiregular points of $f$.
\subsection{The topological entropy and others concepts of free semigroup actions} \label{the concepts of free semigroup actions}
In this paper, we use the topological entropy and upper capacity topological entropy of free semigroup actions {defined by \cite{MR1681003} and \cite{MR3918203}, respectively.} Let $(X,d)$ be a compact metric space and $G$ the free semigroup action on $X$ generated by $f_0,\cdots,f_{m-1}$. For convenience, we first recall the notion of words.
Let $F_m^+$ be the set of all finite words of symbols $0,1,\cdots,m-1$. For any $w\in F_m^+$, $\lvert w\rvert$ stands for the length of $w$, that is, the number of symbols in $w$. Obviously, $F^+_m$ with respect to the law of composition is a free semigroup with $m$ generators. We write $w'\leq w$ if there exists a word $w''\in F^+_m$ such that $w=w''w'$. Remark that $\emptyset\in F_m^+$ and $\emptyset\leq w$. For $w=i_0i_1\cdots i_k\in F^+_m$, denote $\overline{w}=i_k\cdots i_1i_0$.
Denote by $\Sigma^+_m$ the set of all one-side infinite sequences of symbols $\{0,1,\cdots,m-1\}$, that is,
$$\Sigma^+_m=\left\{\iota=(i_0,i_1,\cdots)\, : \,i_k\in \{0,1,\cdots,m-1\},\: k\in\mathbb{N}\right\}.
$$
{The metric on $\Sigma^+_m$ is given by
$$d'(\iota,\iota'):=2^{-j},\quad j=\inf\{n\, : \,i_n\neq i'_n\}.
$$}It is easy to check that $\Sigma^+_m$ is compact with respect to this metric. The shift $\sigma:\Sigma^+_m\to \Sigma^+_m $ is given by the formula, for each $\iota=(i_0,i_1,\cdots)\in\Sigma^+_m$,
$$
\sigma(\iota)=(i_1,i_2,\cdots).
$$
Suppose that $\iota\in\Sigma^+_m$, and $a,b\in \mathbb{N}$ with $a\leq b$. We write $\iota\lvert_{[a,b]}=w$ if $w=i_ai_{a+1}\cdots i_b$.
To each $w\in F^+_m$, {$w=i_{0}\cdots i_{k-1}$, let us write $f_w=f_{i_{0}}\circ\cdots \circ f_{i_{k-1}}$} if $k>0$, and $f_w=\mathrm{Id}$ if $k=0$, where $\mathrm{Id}$ is the {identity map}. Obviously, $f_{ww'}=f_wf_{w'}$.
For $w\in F^+_m$, we assign a metric $d_w$ on $X$ by setting
$$
d_w(x_1,x_2)=\max_{w^{\prime} \leq \overline{w}}d\left (f_{w^{\prime}}(x_1),f_{w^{\prime}}(x_2)\right ).
$$
Given a number $\delta>0$ and a point $x \in X$, define the $(w, \delta)$-Bowen ball at $x$ by
$$
B_{w}(x, \delta):=\left\{y \in X:{d_w\left(x, y\right) < \delta}\right\} .
$$
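The metric $d_w$ and the Bowen ball $B_w(x,\delta)$ can be illustrated with a toy free semigroup on the circle; the choice of generators here is hypothetical, not from the paper: $f_0$ the doubling map and $f_1$ the tripling map on $[0,1)$. Note that ranging over suffixes $w'\le\overline{w}$ amounts to applying the generators in the order $f_{i_0}, f_{i_1},\ldots$ along the word $w=i_0\cdots i_{k-1}$.

```python
# Two hypothetical generators on the circle [0, 1)
def f0(x): return (2.0 * x) % 1.0   # doubling map
def f1(x): return (3.0 * x) % 1.0   # tripling map
GEN = {"0": f0, "1": f1}

def circle_dist(x, y):
    # distance on the circle R/Z
    a = abs(x - y) % 1.0
    return min(a, 1.0 - a)

def d_w(x, y, w):
    # d_w(x, y) = max over suffixes w' of \bar w of d(f_{w'}(x), f_{w'}(y)),
    # i.e. the maximum separation along the orbit segment driven by w
    best = circle_dist(x, y)
    for symbol in w:
        x, y = GEN[symbol](x), GEN[symbol](y)
        best = max(best, circle_dist(x, y))
    return best

def in_bowen_ball(y, x, w, delta):
    # membership in the (w, delta)-Bowen ball B_w(x, delta)
    return d_w(x, y, w) < delta

print(in_bowen_ball(0.2 + 1e-9, 0.2, "0101", 1e-3))  # nearby orbits stay delta-close
```

Because both generators expand, initially close points separate along long enough words, which is the mechanism behind positive expansiveness below.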
Recall that the free semigroup action $G$ is said to be positively expansive if there exists $\delta>0$ such that for any $x,y\in X$ with $x\neq y$ and any $\iota\in\Sigma_{m}^+$ there exists $n\ge 1$ satisfying $d\left (f_{\overline{\iota \lvert_{[0,n-1]}}}(x),f_{\overline{\iota \lvert_{[0,n-1]}}}(y)\right )\ge \delta$; this notion was introduced by Zhu and Ma \cite{MR4200965}.
The specification property of free semigroup actions was introduced by Rodrigues and Varandas \cite{MR3503951}. We say that $G$ has the specification property if for any $\varepsilon>0$, there exists $\mathfrak{p}(\varepsilon)>0$, such that for any $k>0$, any points $x_{1},\cdots, x_{k} \in X$, any positive integers $n_{1}, \cdots, n_{k}$, any $p_{1},\cdots, p_{k} \geq \mathfrak{p}(\varepsilon)$, any $w_{(p_{j})} \in F_{m}^{+}$ with $\lvert w_{(p_{j})}\rvert =p_{j}$, $w_{(n_{j})} \in F_{m}^{+}$ with $\lvert w_{(n_{j})}\rvert =n_{j}, 1 \leq j \leq k$, one has
$$
B_{w_{(n_1)}}\left (x_1,\varepsilon\right )\cap\left (\bigcap_{j=2}^k {f^{-1}_{\overline{w_{(p_{j-1})}}\, \overline{w_{(n_{j-1})}}\cdots\overline{w_{(p_1)}}\, \overline{w_{(n_1)}}}}B_{w_{(n_j)}}\left (x_j,\varepsilon\right )\right )\neq\emptyset.
$$If $m=1$, the specification property of free semigroup actions coincides with the classical definition introduced by Bowen \cite{MR282372}.
{We recall the definition of topological entropy} for free semigroup actions introduced by \cite{MR1681003}.
A subset $K$ of $X$ is called a $(w, \varepsilon, G)$-separated subset if, for any $x_{1}, x_{2} \in K$ with $x_{1} \neq x_{2}$, one has $d_{w}\left(x_{1}, x_{2}\right) \geq \varepsilon$. The maximum cardinality of a $(w, \varepsilon, G)$-separated subset of $X$ is denoted by $N(w, \varepsilon, G)$.
The topological entropy of free semigroup actions is defined by the formula
$$
h (G):=\lim _{\varepsilon \rightarrow 0} \limsup_{n\rightarrow\infty} \frac{1}{n} \log \left (\frac{1}{m^{n}} \sum_{\lvert w\rvert =n} N(w, \varepsilon, G)\right ).
$$
\begin{remark}
If $m=1$, this definition coincides with the topological entropy of a single map defined by \cite{MR175106}. For more information, see Chapter 7 of \cite{MR648108}.
\end{remark}
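The counting quantity $N(w,\varepsilon,G)$ and Bufetov's averaged growth rate can be estimated numerically. The sketch below is a hypothetical example (not from the paper), reusing doubling/tripling generators on the circle: it computes a greedy lower bound for the maximal cardinality of a $(w,\varepsilon,G)$-separated set and averages it over all words of a fixed length, as in the entropy formula above.

```python
import itertools

def f0(x): return (2.0 * x) % 1.0   # hypothetical generator: doubling map
def f1(x): return (3.0 * x) % 1.0   # hypothetical generator: tripling map
GEN = {"0": f0, "1": f1}

def circle_dist(x, y):
    a = abs(x - y) % 1.0
    return min(a, 1.0 - a)

def d_w(x, y, w):
    # the Bowen metric d_w along the word w (generators applied i_0 first)
    best = circle_dist(x, y)
    for symbol in w:
        x, y = GEN[symbol](x), GEN[symbol](y)
        best = max(best, circle_dist(x, y))
    return best

def separated_count(w, eps, grid=400):
    # greedy lower bound for N(w, eps, G): scan grid points on the circle and
    # keep those (w, eps)-separated from every previously kept point
    kept = []
    for i in range(grid):
        x = i / grid
        if all(d_w(x, y, w) >= eps for y in kept):
            kept.append(x)
    return len(kept)

def bufetov_average(n, eps):
    # (1/m^n) * sum over all words w of length n of N(w, eps, G), with m = 2
    words = ["".join(t) for t in itertools.product("01", repeat=n)]
    return sum(separated_count(w, eps) for w in words) / len(words)

print(bufetov_average(3, 0.2) > bufetov_average(2, 0.2))  # counts grow with |w|
```

Words containing the more strongly expanding generator $f_1$ contribute larger separated sets, which is why the definition averages $N(w,\varepsilon,G)$ over all $m^n$ words rather than fixing a single one.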
The dynamical systems given by free semigroup actions have a strong connection with skew products, which have been analyzed to obtain properties of free semigroup actions through the fibers associated with the skew product (see for instance \cite{MR4200965, MR3784991}). Recall that the skew product transformation is given as follows:
$$
F:\Sigma^+ _m \times X\to\Sigma^+ _m \times X,\:\, (\iota,x)\mapsto \big(\sigma(\iota),f_{i_0}(x)\big),
$$
where $\iota=(i_0, i_1,\cdots)$ and $\sigma$ is the shift map of $\Sigma^+ _m $. {The metric $D$ on} $\Sigma^+ _m \times X$ is given by the formula
$$
D\left (\left (\iota,x\right ),\left (\iota',x'\right )\right ):=\max\left \{d'\left (\iota,\iota'\right ),d\left (x,x'\right )\right \}.
$$
\begin{theorem}
\label{Topological entropy skew product}
(\cite{MR1681003}, Theorem 1) The topological entropy of the skew product transformation $F$ satisfies
$$
h(F)=\log m +h(G),
$$
where $h(F)$ denotes the topological entropy of $F$.
\end{theorem}
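The fiber dynamics of $F$ can be checked directly in code: iterating the skew product $n$ times applies $f_{i_{n-1}}\circ\cdots\circ f_{i_0}=f_{\overline{\iota|_{[0,n-1]}}}$ in the second coordinate. The sketch below verifies this for a short word, again using hypothetical doubling/tripling generators.

```python
def f0(x): return (2.0 * x) % 1.0   # hypothetical generator
def f1(x): return (3.0 * x) % 1.0   # hypothetical generator
GEN = [f0, f1]

def skew_product(iota, x):
    # F(iota, x) = (sigma(iota), f_{i_0}(x)); iota is a long-enough finite prefix
    return iota[1:], GEN[iota[0]](x)

def iterate_F(iota, x, n):
    for _ in range(n):
        iota, x = skew_product(iota, x)
    return iota, x

iota = (0, 1, 1, 0, 1)              # prefix of a point of Sigma_m^+, m = 2
tail, x4 = iterate_F(iota, 0.1, 4)
# the second coordinate of F^4 equals f_{\bar w}(x) with w = iota|[0,3] = 0110
direct = f0(f1(f1(f0(0.1))))
print(tail == (1,) and x4 == direct)
```

This fiberwise bookkeeping is what lets statements about $F$, such as the entropy formula $h(F)=\log m + h(G)$, be transported back to the action $G$.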
Now, let us recall the topological entropy and upper capacity topological entropy of free semigroup actions for non-compact sets defined in \cite{MR3918203}. Fix $\delta>0$ and define the collection of subsets
$$
\mathcal{F}:=\left \{B_{w}(x, \delta): \,x\in X,\, w\in F^+_m, \,\lvert w\rvert =n \text{ and } n\in\mathbb{N}\right \}.
$$
Given a subset $Z\subset X$, we define, for $\gamma\ge 0$, $N>0$ and $w\in F_m^+$ with $\lvert w\rvert =N$,
$$
{M}_{w}(Z, \gamma, \delta, N):=\inf _{\mathcal{G}_{w}}\left \{\sum_{B_{w'}(x, \delta) \in \mathcal{G}_{w}} \exp {\left (-\gamma \cdot\left(\lvert w'\rvert +1\right)\right )}\right\},
$$
where the infimum is taken over all finite or countable subcollections $\mathcal{G}_w\subseteq\mathcal{F}$ covering $Z$ (i.e. for any $B_{w'}(x,\delta)\in\mathcal{G}_w$, $\overline{w}\le \overline{w'}$ and $\bigcup_{B_{w'}(x,\delta)\in\mathcal{G}_w}B_{w'}(x,\delta)\supseteq Z$). Let
$$
{M}(Z, \gamma, \delta, N):=\frac{1}{m^{N}} \sum_{\lvert w\rvert =N} {M}_{w}(Z, \gamma, \delta, N).
$$
It is easy to verify that the function ${M}(Z, \gamma, \delta, N)$ is non-decreasing in $N$. Therefore, the following limit exists:
$$
{m}(Z, \gamma, \delta)=\lim _{N \rightarrow \infty} {M}(Z, \gamma, \delta, N).
$$
Furthermore, we can define
$$
{R}_{w}(Z, \gamma, \delta, N) :=\inf _{\mathcal{G}_{w}}\left \{\sum_{B_{w}(x, \delta) \in \mathcal{G}_{w}} \exp{\left ( -\gamma \cdot(N+1)\right )}\right \},
$$
where the infimum is taken over all finite or countable subcollections $\mathcal{G}_w\subseteq\mathcal{F}$ covering $Z$ such that the word lengths corresponding to every ball in $\mathcal{G}_w$ are all equal to $N$. Let
$$
{R}(Z, \gamma, \delta, N) :=\frac{1}{m^{N}} \sum_{\lvert w\rvert=N} {R}_{w}(Z, \gamma, \delta, N) ,
$$
and set
$$
\overline{r}(Z, \gamma, \delta):=\limsup _{N \rightarrow \infty} {R}(Z, \gamma, \delta, N).
$$
The critical values $h_Z(G,\delta)$ and $\overline{Ch}_Z(G,\delta)$ are defined as
$$
\begin{aligned}
h_Z(G,\delta)&:=\inf\{\gamma:{m}(Z,\gamma,\delta)=0\}=\sup \{\gamma:{m}(Z,\gamma,\delta)=\infty\},\\
\overline{Ch}_{Z}(G,\delta)&:=\inf \{\gamma:\,\, \overline{r}(Z, \gamma, \delta)=0\}=\sup \{\gamma: \,\,\overline{r}(Z, \gamma, \delta)=\infty\}.
\end{aligned}
$$
The topological entropy and upper capacity topological entropy of $Z$ of free semigroup action $G$ are then defined as
$$
\begin{aligned}
h_Z(G)&:=\lim _{\delta \rightarrow 0}h_{Z}(G,\delta),\\
\overline{Ch}_{Z}(G)&:=\lim _{\delta \rightarrow 0} \overline{Ch}_{Z}(G,\delta).
\end{aligned}
$$
\begin{remark}
Let $f: X \to X$ be a continuous transformation and $G$ the free semigroup generated by the map $f$. Then $h_{Z}(G)=h_{Z}(f)$, $\overline{C h}_{Z}(G)=\overline{C h}_{Z}(f)$, for any set $Z \subset X$, where $h_{Z}(f)$ and $\overline{C h}_{Z}(f)$ are the topological entropy and upper capacity topological entropy defined by Pesin \cite{MR1489237}. If $Z=X$, then $h(G)=h(f)=h_X(f)=\overline{Ch}_X(f)$, i.e., the classical topological entropy defined by Adler et al \cite{MR175106}.
\end{remark}
In \cite{MR3918203}, it was proved that the upper capacity topological entropy of the skew product $F$ satisfies the following result for any subset $Z\subseteq X$.
\begin{theorem}(\cite{MR3918203}, Theorem 5.1)
\label{Topological upper capacity entropy skew product}
For any subset $Z \subset X$,
$$
\overline{Ch}_{\Sigma_{m}^+ \times Z}(F)=\log m+\overline{Ch}_{Z}(G) .
$$
\end{theorem}
\begin{remark}
If $Z=X,$ the authors of \cite{MR3918203} proved that $h(G)=\overline{Ch}_X(G)$. Hence, we have
$$
\overline{Ch}_{\Sigma_{m}^+ \times X}(F)=\log m+\overline{Ch}_{X}(G) .
$$
Since $h(F)=h_{\Sigma_{m}^+ \times X}(F)=\overline{Ch}_{\Sigma_{m}^+ \times X}(F)$, Theorem \ref{Topological entropy skew product} can be restated as
$$
h_{\Sigma_{m}^+ \times X}(F)=\log m+h(G) .
$$
\end{remark}
\subsection{Stationary measure}\label{Stationary measure}
Let $\mathbf{p}:=(p_0,\cdots,p_{m-1})$ be a probability vector with non-zero entries (i.e., $p_j>0$ for each $j$ and $\sum_{j=0}^{m-1}p_j=1$). The Bernoulli measure $\mathbb{P}$ on $\Sigma_{m}^+$ generated by the probability vector
$\mathbf{p}$ is $\sigma$-invariant and ergodic. Given a point $x\in X$ and a measurable set $A\subseteq X$, the transition probabilities are defined by the formula
$$
\mathcal{P}(x, A)=\int \chi_{A}\left(f_{j}(x)\right) \mathbf{p}(\mathrm{d} j),
$$
where $\chi_{A}$ denotes the indicator function of the set $A$. Let $\mathcal{M}(X)$ denote the set of all probability measures on $X$. For every probability measure $\mu\in \mathcal{M}(X)$, the adjoint operator $\mathcal{P}^{*}$ is defined as follows:
$$
\begin{aligned}
\mathcal{P}^{*} \mu(A)=\int \mathcal{P}\left(x, {A}\right) d \mu(x)=\int \int \chi_{A}\left(f_{j}(x)\right)\mathbf{p}(\mathrm{d} j) \mu(\mathrm{d}x)
=\sum_{j=0}^{m-1}p_j\mu\left (f_j^{-1}A\right ).
\end{aligned}
$$
A Borel probability measure $\mu\in\mathcal{M}(X)$ is said to be $\mathbb{P}$-stationary if $\mathcal{P}^{*} \mu=\mu$.
As $X$ is a compact metric space, the set of $\mathbb{P}$-stationary probability measures is a nonempty compact convex set with respect to the weak$^{*}$ topology for every $\mathbb{P}$. Its extreme points are called $\mathbb{P}$-ergodic. For more information, see \cite{MR884892}. When convenient, we will use the following criterion:
\begin{proposition} \label{invariant measure of random}
(\cite{MR884892}, Lemma I.2.3)
Let $\mathbb{P}$ be a Bernoulli measure on $\Sigma_{m}^+$, and $\mu$ be a probability measure on $X$, then
\begin{itemize}
\item [(1)] \label{invariant} $\mu$ is $\mathbb{P}$-stationary if and only if the product probability measure $\mathbb{P} \times \mu$ is $F$-invariant.
\item [(2)] $\mu$ is $\mathbb{P}$-stationary and ergodic if and only if the product probability measure $\mathbb{P} \times \mu$ is $F$-invariant and ergodic.
\end{itemize}
\end{proposition}
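On a finite state space the adjoint operator reduces to the finite sum $\mathcal{P}^{*}\mu(A)=\sum_{j}p_j\,\mu(f_j^{-1}A)$, which is easy to check numerically. The following sketch is our own illustration (the maps $f_0,f_1$ and the vector $\mathbf{p}$ below are hypothetical choices, not taken from the paper); it verifies that the uniform measure is $\mathbb{P}$-stationary when all generators are bijections.

```python
def push_forward(mu, f, n):
    """Return the vector of (f_* mu)({y}) = mu(f^{-1}{y}) for y = 0,...,n-1."""
    out = [0.0] * n
    for x in range(n):
        out[f(x)] += mu[x]   # the mass at x lands on f(x)
    return out

def P_star(mu, maps, p):
    """Adjoint transition operator: (P* mu)(A) = sum_j p_j mu(f_j^{-1} A)."""
    n = len(mu)
    res = [0.0] * n
    for f, pj in zip(maps, p):
        for y, mass in enumerate(push_forward(mu, f, n)):
            res[y] += pj * mass
    return res

n = 5
f0 = lambda x: (x + 1) % n      # a cyclic rotation of {0,...,4}
f1 = lambda x: (2 * x) % n      # multiplication by 2 mod 5 (a bijection)
p = (0.3, 0.7)                  # probability vector with nonzero entries
uniform = [1.0 / n] * n
image = P_star(uniform, (f0, f1), p)
# Both generators are bijections, so P* preserves the uniform measure.
```

Since each $f_j$ here is a bijection, $\mu(f_j^{-1}A)=\mu(A)$ for the uniform $\mu$, and `image` coincides with `uniform` up to rounding.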
\section{Periodic-like recurrence and $\mathbf{g}$-almost product property of free semigroup actions}\label{3}
In this section, we introduce the new concept of $\mathbf{g}$-almost product property of free semigroup actions, and some concepts of transitive points, quasiregular points, upper recurrent points and Banach upper recurrent points with respect to a certain orbit of free semigroup actions.
We obtain that the $\mathbf{g}$-almost product property is weaker than the specification property under free semigroup actions. The results in this section are inspired by \cite{MR2322186}. Throughout this section we assume that $X$ is a compact metric space, $G$ is the free semigroup generated by $m$ generators $f_0,\cdots,f_{m-1}$ which are continuous maps on $X$ and $F$ is the skew product map corresponding to the maps $f_0,\cdots,f_{m-1}$.
Let us introduce the definitions of recurrence for free semigroup actions.
\begin{definition}
Given $\iota=(i_0,i_1,\cdots)\in\Sigma_{m}^+$, a point $x\in X$ is called {a transitive point}
with respect to $\iota$ of free semigroup action $G$ if the orbit of $x$ under $\iota$,
$$
orb(x,\iota,G):=\{x, f_{i_0}(x),f_{i_1i_0}(x),\cdots\}
$$
is dense in $X$.
\end{definition}
\begin{definition}
Given $\iota=(i_0,i_1,\cdots)\in\Sigma_{m}^+$, a point $x\in X$ is called a quasiregular point
with respect to $\iota$ of free semigroup action $G$ if the sequence
$$
\frac{1}{n}\sum_{j=0}^{n-1}\delta_{F^j(\iota,x)}
$$
converges in the weak$^{*}$ topology.
\end{definition}
Denote by $\mathrm{Tran}(\iota,G)$ and $\mathrm{QR}(\iota,G)$ the sets of transitive points and quasiregular points with respect to $\iota$ of free semigroup action $G$,
respectively. We write $\mathrm{Tran}(G)$ and $\mathrm{QR}(G)$ for the unions of $\mathrm{Tran}(\iota,G)$ and $\mathrm{QR}(\iota,G)$ over all $\iota$, respectively.
Let $U\subset X$ be a nonempty open set, $x\in X$ and $\iota=(i_0,i_1,\cdots) \in\Sigma_{m}^+$. The set of visiting times with respect to $\iota$ is defined by
$$
N_\iota (x,U):=\left \{n\in\mathbb{N}: f_{i_{n-1}\cdots i_0}(x)\in U \right \}.
$$
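For intuition, the visiting-time set can be computed directly in small examples. The sketch below is our own illustration (the two circle maps $f_0(t)=2t \bmod 1$, $f_1(t)=3t \bmod 1$ and the point $x$ are hypothetical choices): it iterates a word $\iota$ over the generators and records the times $n$ at which $f_{i_{n-1}\cdots i_0}(x)$ returns to the ball $B(x,\varepsilon)$.

```python
def visiting_times(x, iota, eps, n_max):
    """Return the list of n <= n_max with f_{i_{n-1}} ... f_{i_0}(x) in B(x, eps),
    for two circle maps on [0, 1): f0 = doubling, f1 = tripling."""
    maps = (lambda t: (2 * t) % 1.0, lambda t: (3 * t) % 1.0)
    times, y = [], x
    for n in range(1, n_max + 1):
        y = maps[iota[n - 1]](y)           # apply the n-th letter of the word
        d = min(abs(y - x), 1.0 - abs(y - x))   # distance on the circle R/Z
        if d < eps:
            times.append(n)
    return times

# x = 0 is fixed by both maps, so every n is a visiting time:
ts = visiting_times(0.0, [0, 1] * 50, 0.1, 100)
```

For this fixed point the set $N_\iota(x,B(x,\varepsilon))$ is all of $\{1,\dots,100\}$, so its upper (and Banach upper) density is $1$.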
\begin{definition}
Given $\iota=(i_0,i_1,\cdots)\in\Sigma_{m}^+$, a point $x\in X$ is called an upper recurrent point
with respect to $\iota$ of free semigroup action $G$ if for any $\varepsilon>0$, the set of visiting time $N_\iota\left (x,B(x,\varepsilon)\right )$ has a positive upper density.
\end{definition}
\begin{definition}
Given $\iota=(i_0,i_1,\cdots)\in\Sigma_{m}^+$, a point $x\in X$ is called {a Banach upper} recurrent point
with respect to $\iota$ of free semigroup action $G$ if for any $\varepsilon>0$, the set of visiting time $N_\iota\left (x,B(x,\varepsilon)\right )$ has a positive Banach upper density.
\end{definition}
Denote by $\mathrm{QW}(\iota,G)$ and $\mathrm{BR}(\iota,G)$ the sets of the upper recurrent points and the Banach upper recurrent points with respect to $\iota$ of free semigroup action $G$, respectively. Let
$$
\mathrm{QW}(G):=\bigcup_{\iota\in\Sigma_{m}^+}\mathrm{QW}(\iota,G), \,\mathrm{BR}(G):=\bigcup_{\iota\in\Sigma_{m}^+}\mathrm{BR}(\iota,G).
$$
Let us call $\mathrm{QW}(G)$ and $\mathrm{BR}(G)$ the sets of the upper recurrent points and the Banach upper recurrent points of free semigroup action, respectively. It is easy to check that $\mathrm{QW}(G)$ coincides with the set of the quasi-weakly almost periodic points of free semigroup action defined by Zhu and Ma \cite{MR4200965}. Clearly,
$$
\mathrm{QW}(\iota,G) \subseteq \mathrm{BR}(\iota,G).
$$
The notion of specification, introduced by Bowen \cite{MR282372}, says that one can always find a single orbit to interpolate between different pieces of orbits. In the case of $\beta$-shifts it is known that the specification property holds only for a set of $\beta$ of Lebesgue measure zero (see \cite{MR1452189}). In \cite{MR2322186}, the authors studied a new condition, called the $\mathbf{g}$-almost product property, which is weaker than the specification property, and proved that the $\mathbf{g}$-almost product property always holds for $\beta$-shifts.
Next we introduce the concept of the $\mathbf{g}$-almost product property of free semigroup actions:
\begin{definition}
Let $\mathbf{g}: \mathbb{N} \rightarrow \mathbb{N}$ be a given nondecreasing unbounded map with the properties
$$
\mathbf{g}(n)<n \quad \text { and } \quad \lim _{n \rightarrow \infty} \frac{\mathbf{g}(n)}{n}=0 .
$$
The function $\mathbf{g}$ is called a blowup function.
\end{definition}
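A standard example of a blowup function is $\mathbf{g}(n)=\lfloor\sqrt{n}\rfloor$ (for $n\ge 2$): it is nondecreasing and unbounded, satisfies $\mathbf{g}(n)<n$, and $\mathbf{g}(n)/n\to 0$. The sketch below is our own numerical sanity check of these properties over a finite range; the choice of $\mathbf{g}$ is illustrative, not one fixed in the paper.

```python
# Check that g(n) = floor(sqrt(n)) behaves as a blowup function for n >= 2:
# nondecreasing, g(n) < n, and g(n)/n small for large n.
from math import isqrt

def g(n):
    return isqrt(n)   # floor of the integer square root

ns = range(2, 100_001)
nondecreasing = all(g(n) <= g(n + 1) for n in ns)
dominated = all(g(n) < n for n in ns)
ratio_tail = g(100_000) / 100_000   # small, illustrating g(n)/n -> 0
```

Of course a finite check does not prove the limit $\mathbf{g}(n)/n\to 0$; here it follows from $\lfloor\sqrt{n}\rfloor/n\le n^{-1/2}$.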
Fix $\varepsilon>0$, $w \in F_{m}^{+}$ and $x \in X$, and define the $\mathbf{g}$-blowup of $B_{w}(x, \varepsilon)$ as the closed set
$$
B_{w}(\mathbf{g} ; x, \varepsilon):=\Big \{y\in X : \sharp\left \{ w'\le\overline{w}:d\left (f_{w'}(x),f_{w'}(y)\right )>\varepsilon\right \}<\mathbf{g}\left (\lvert w\rvert+1\right )\Big\}.
$$
\begin{definition}
We say $G$ satisfies the $\mathbf{g}$-almost product property with the blowup function $\mathbf{g}$, if there exists a nonincreasing function $\mathfrak{m}:\mathbb{R}^+\to\mathbb{N}$, such that for $k\ge 2$, any $k$ points $x_1,\cdots,x_k\in X$, any positive $\varepsilon_1,\cdots,\varepsilon_k$, and any words $w_{(\varepsilon_1)},\cdots,w_{(\varepsilon_{k})}\in F_m^+$ with $\lvert w_{(\varepsilon_1)}\rvert \ge \mathfrak{m}(\varepsilon_{1}),\cdots,\lvert w_{(\varepsilon_{k})}\rvert \ge \mathfrak{m}(\varepsilon_{k})$,
$$
B_{w_{(\varepsilon_{1})}}(\mathbf{g};x_1,\varepsilon_{1})\cap\left (\bigcap_{j=2}^{k} f^{-1}_{\overline{w_{(\varepsilon_{j-1})}}\cdots \overline{w_{(\varepsilon_1)}}} B_{w_{(\varepsilon_{j})}}\left(\mathbf{g} ; x_{j}, \varepsilon_{j}\right)\right ) \neq \emptyset.
$$
\end{definition}
Under the $\mathbf{g}$-almost product property, the topological entropy of periodic-like recurrent sets was studied in \cite{MR3963890}, but the topological entropy of such sets had not been studied for dynamical systems of free semigroup actions. In this paper, we focus on the topological entropy of the analogous sets for free semigroup actions and obtain more extensive results. It is therefore important and necessary to introduce the $\mathbf{g}$-almost product property of free semigroup actions.
If $m=1$, the $\mathbf{g}$-almost product property of free semigroup actions coincides with the definition introduced by Pfister and Sullivan \cite{MR2322186,MR2109476}.
The next proposition asserts the relationship between specification property and $\mathbf{g}$-almost product property of free semigroup actions.
\begin{proposition}\label{proposition almost}
Let $\mathbf{g}$ be any blowup function and suppose that $G$ satisfies the specification property. Then $G$ has the $\mathbf{g}$-almost product property.
\end{proposition}
\begin{proof}
This proof extends the method of Proposition 2.1 in \cite{MR2322186} to free semigroup actions; we provide the complete proof for the reader's convenience.
Let $\mathfrak{p}(\varepsilon)$ be the positive integer in the definition of specification property of $G$ (see Sec. \ref{the concepts of free semigroup actions}) for $\varepsilon>0$. It is no restriction to suppose that the function $\mathfrak{p}(\varepsilon)$ is nonincreasing. Let $\left\{x_{1}, \cdots, x_{k}\right\}$ and $\left\{\varepsilon_{1}, \cdots, \varepsilon_{k}\right\}$ be given. Let $\delta_{r}:=2^{-r}, r \in \mathbb{N}$. Next, we may define
{a nonincreasing function $\mathfrak{m} :\mathbb{R}^+\to\mathbb{N}$}
as follows: $$\mathfrak{m}(\varepsilon):=\widetilde{\mathfrak{m}}(2\delta_r),$$
where $r=\min\{i:2 \delta_{i} \leq \varepsilon \}$ and $\widetilde{\mathfrak{m}}(2\delta_r):= \min\left \{m: \mathbf{g}(m)\ge 2 \mathfrak{p}(\delta_r)\right \}$.
It is sufficient to prove the statement for $\varepsilon_{j}$ of the form $2 \delta_{r_j}$, $j=1, \cdots, k$, where, as above, $r_j=\min\{i: 2\delta_i\le \varepsilon_{j}\}$. Precisely, if $\varepsilon_{j}$ is not of that form, we change it into $2 \delta_{r_j}$. From now on we assume that, for all $j$, $\varepsilon_{j}$ is of the form $2 \delta_{r_j}$. Let $w_{(\varepsilon_1)},\cdots,w_{(\varepsilon_{k})}$ be words with lengths not less than $\mathfrak{m}(\varepsilon_{1}),\cdots,\mathfrak{m}(\varepsilon_{k})$, respectively, and let $n_1,\cdots,n_k$ denote the lengths of $w_{(\varepsilon_1)},\cdots,w_{(\varepsilon_{k})}$, respectively.
We prove the proposition by an iterative construction. Let $\Delta\left(x_{j}\right):=\varepsilon_{j} / 2=\delta_{r_j}$, $w(x_j):=w_{(\varepsilon_j)}$, $p(x_j):=\mathfrak{p}\left(\Delta(x_j)\right)$ and $n(x_{j}):=n_{j}$. The sequence $\{x_{1}, \cdots, x_{k}\}$ is considered as an ordered sequence; its elements are called original points. The possible values of $\Delta\left(x_{j}\right)$ are relabeled $\Delta_{1}>\Delta_{2}>\cdots>\Delta_{q}$. A level-$i$ point is an original point $x_{j}$ such that $\Delta\left(x_{j}\right)=\Delta_{i}$.
At step 1 we consider the level-1 points labeled by
$$
S_{1}:=\left\{j \in[1, k]: \Delta\left(x_{j}\right)=\Delta_{1}\right\} .
$$
If $S_{1}=[1, k]$, then by the specification property there exists $y$ such that
{$$d\left (f_{\overline{w{(x_1)}\lvert_{[0,i]}}}(x_1),f_{\overline{w{(x_1)}\lvert_{[0,i]}}}(y)\right )\le\Delta_{1},\quad i=p(x_1),\cdots, n(x_1)-p(x_1)-1,$$
$$d\big(f_{\overline{w{(x_j)}\lvert_{[0,i]}}}(x_j) ,f_{\overline{w{(x_j)}\lvert_{[0,i]}}\,\overline{w{(x_{j-1})}}\cdots\overline{w{(x_1)}}}(y)\big )\le\Delta_{1},i=p(x_j),\cdots, n(x_j)-p(x_j)-1,
$$}where $j=2,\cdots,k$, which proves this case by the definition of the function $\mathfrak{m}$. If $S_{1} \neq[1, k]$, then we decompose it into maximal subsets of consecutive points, called components. (The components are defined with respect to the whole sequence.) Let $J$ be a component, say $[r, s]$ with $r<s$. By the specification property there exists $y$ such that
{$$d\left (f_{\overline{w{(x_r)}\lvert_{[0,i]}}}(x_r),f_{\overline{w{(x_r)}\lvert_{[0,i]}}}(y)\right )\le\Delta_{1},\quad i=p(x_r),\cdots, n(x_r)-p(x_r)-1,$$
$$
d\big (f_{\overline{w{(x_j)}\lvert_{[0,i]}}}(x_j) ,f_{\overline{w{(x_j)}\lvert_{[0,i]}}\,\overline{w{(x_{j-1})}}\cdots\overline{w{(x_r)}}}(y)\big)\le\Delta_{1}, i=p(x_j),\cdots, n(x_j)-p(x_j)-1,
$$}where $j=r+1,\cdots,s$. Hence,
$$
y\in
B_{w{(x_r)}}(\mathbf{g};x_r,\varepsilon_r)\cap\left (\bigcap_{j=r+1}^s f^{-1}_{\overline{w{(x_{j-1})}}\cdots\overline{w{(x_r)}}}B_{w{(x_j)}}\left (\mathbf{g};x_j,\varepsilon_j\right )\right ).
$$
We replace the sequence $\left\{x_{1}, \cdots, x_{k}\right\}$ by the (ordered) sequence
$$
\left\{x_{1}, \cdots, x_{r-1}, y, x_{s+1} \cdots, x_{k}\right\}
$$
and, for the concatenated point $y$, set
$$
\Delta(y):=\Delta_{1},\, p(y):= \mathfrak{p}(\Delta(y)), \,n(y):=n(x_{r})+\cdots+n(x_{s}),\,w(y):=w(x_r)\cdots w(x_s).
$$
We do this operation for all components which are not singletons. After these operations we have a new (ordered) sequence $\left\{z_{1}, \cdots, z_{k_{1}}\right\}, k_{1} \leq k$, where the point $z_{i}$ is either a point of the original sequence, or a concatenated point. This ends the construction at step 1.
Let
$$
S_{2}:=\left\{j \in\left[1, k_{1}\right]: \Delta\left(z_{j}\right) \geq \Delta_{2}\right\} .
$$
We decompose this set into components. Let $[r, s]$ be a component which is not a singleton $(r<s)$. We replace that component by a single concatenated point $y$ such that, if $z_r$ is a concatenated point of $S_1$,
$$
d\left (f_{\overline{w{(z_r)}\lvert_{[0,i]}}}(z_r),f_{\overline{w{(z_r)}\lvert_{[0,i]}}}(y)\right )\le\Delta_{2},\quad i=0,\cdots, n(z_r)-1,
$$
otherwise,
$$
d\left (f_{\overline{w{(z_r)}\lvert_{[0,i]}}}(z_r),f_{\overline{w{(z_r)}\lvert_{[0,i]}}}(y)\right )\le\Delta_{2},\quad i=p(z_r),\cdots, n(z_r)-p(z_r)-1;
$$
and, for $j=r+1,\cdots,s$, if $z_j$ is a concatenated point of $S_1$,
$$
d\left (f_{\overline{w{(z_j)}\lvert_{[0,i]}}}(z_j),f_{\overline{w{(z_j)}\lvert_{[0,i]}}\,\overline{w(z_{j-1})}\cdots\overline{w(z_r)}}(y)\right )\le\Delta_{2},\quad i=0,\cdots, n(z_j)-1,
$$
otherwise,
{ $$
d\big (f_{\overline{w{(z_j)}\lvert_{[0,i]}}}(z_j),f_{\overline{w{(z_j)}\lvert_{[0,i]}}\,\overline{w(z_{j-1})}\cdots\overline{w(z_r)}}(y)\big )\le\Delta_{2}, i=p(z_j),\cdots, n(z_j)-p(z_j)-1.
$$}Existence of such a $y$ is a consequence of the specification property. We set
$$
\Delta(y):=\Delta_{2},\, p(y):= \mathfrak{p}(\Delta(y)),\, n(y):=n(z_{r})+\cdots+n(z_{s}), \,w(y):=w(z_r)\cdots w(z_s).
$$
The construction of $y$ involves consecutive points of the original sequence (via the concatenated points), say points $x_{j}$, $j \in[u, t]$. Since $\delta_{j}=\sum_{i>j} \delta_{i}$,
$$
y\in
B_{w{(x_u)}}(\mathbf{g};x_u,\varepsilon_u)\cap\left (\bigcap_{j=u+1}^t f^{-1}_{\overline{w{(x_{j-1})}}\cdots\overline{w{(x_u)}}}B_{w{(x_j)}}\left (\mathbf{g};x_j,\varepsilon_j\right )\right ).
$$
We do these operations for all components of $S_{2}$, which are not singletons. We get a new ordered sequence, still denoted by $\left\{z_{1}, \cdots, z_{k_{2}}\right\}$. This ends the construction at level 2.
The construction at level 3 is similar to the construction at level 2, using
$$
S_{3}:=\left\{j\in\left[1, k_{2}\right]: \Delta\left(z_{j}\right) \geq \Delta_{3}\right\} .
$$
Once step $q$ is completed, we have a single concatenated point $y$ such that
{
$$
d\left (f_{\overline{w{(x_1)}\lvert_{[0,i]}}}(x_1),f_{\overline{w{(x_1)}\lvert_{[0,i]}}}(y)\right )\le\varepsilon_{1},\quad i=p(x_1),\cdots, n(x_1)-p(x_1)-1,
$$
$$
d\big (f_{\overline{w{(x_j)}\lvert_{[0,i]}}}(x_j) ,f_{\overline{w{(x_j)}\lvert_{[0,i]}}\,\overline{w{(x_{j-1})}}\cdots\overline{w{(x_1)}}}(y)\big )\le\varepsilon_j, i=p(x_j),\cdots, n(x_j)-p(x_j)-1,
$$}where $j=2,\cdots,k.$ Observe that, for all $j$, $\mathbf{g}(n_j)\ge 2p(x_j)$. As a consequence,
\begin{equation}\label{Prop-1}
y\in B_{w_{(\varepsilon_{1})}}(\mathbf{g};x_1,\varepsilon_{1})\cap\left (\bigcap_{j=2}^{k} f^{-1}_{\overline{w_{(\varepsilon_{j-1})}}\cdots \overline{w_{(\varepsilon_1)}}} B_{w_{(\varepsilon_{j})}}\left(\mathbf{g} ; x_{j}, \varepsilon_{j}\right)\right ).
\end{equation}
\end{proof}
\begin{remark}\label{saturated}
If $m=1$, this recovers Proposition 2.1 of \cite{MR2322186}.
\end{remark}
In \cite{MR3503951}, Rodrigues and Varandas proved that if $X$ is a compact Riemannian manifold and $G$ is the free semigroup generated by expanding maps $f_0,\cdots,f_{m-1}$, then $G$ satisfies the specification property; by Proposition \ref{proposition almost}, it therefore has the $\mathbf{g}$-almost product property.
Next, we describe an example to help us interpret the $\mathbf{g}$-almost product property of free semigroup actions.
{
\begin{example}\label{example-1}
Let $M$ be a compact Riemannian manifold and $G$ the free semigroup generated by $f_0,\cdots,f_{m-1}$ on $M$, which are $C^1$-local diffeomorphisms such that for any $j=0,\cdots,m-1$, $\|Df_j(x)v\|\ge\lambda_j\|v\|$ for all $x\in M$ and all $v\in T_x M$, where $\lambda_j$ is a constant larger than 1. It follows from \cite{MR4200965} and Theorem 16 of \cite{MR3503951} that $G$ is positively expansive and satisfies the specification property. Let $\mathbf{g}$ be a blowup function. Consider the nonincreasing function $\mathfrak{m}:\mathbb{R}^+\to\mathbb{N}$ given by Proposition \ref{proposition almost}. For $k\ge 2$, let $x_1,\cdots,x_k\in M$, $\varepsilon_1,\cdots,\varepsilon_k>0$, and $w_{(\varepsilon_1)},\cdots,w_{(\varepsilon_{k})}\in F_m^+$ with $\lvert w_{(\varepsilon_1)}\rvert \ge \mathfrak{m}(\varepsilon_{1}),\cdots,\lvert w_{(\varepsilon_{k})}\rvert \ge \mathfrak{m}(\varepsilon_{k})$ be given. By formula (\ref{Prop-1}) in Proposition \ref{proposition almost}, we have that
$$
B_{w_{(\varepsilon_{1})}}(\mathbf{g};x_1,\varepsilon_{1})\cap\left (\bigcap_{j=2}^{k} f^{-1}_{\overline{w_{(\varepsilon_{j-1})}}\cdots \overline{w_{(\varepsilon_1)}}} B_{w_{(\varepsilon_{j})}}\left(\mathbf{g} ; x_{j}, \varepsilon_{j}\right)\right )\neq\emptyset.
$$
Hence $G$ satisfies the $\mathbf{g}$-almost product property for any blowup function $\mathbf{g}$.
\end{example}}
\begin{proposition}
\label{2g-almost product property}
If $G$ satisfies the $\mathbf{g}$-almost product property, then the skew product map $F$ corresponding to the maps $f_0,\cdots,f_{m-1}$ has the $2\mathbf{g}$-almost product property.
\end{proposition}
\begin{proof}
The shift map $\sigma:\Sigma_m^+\to\Sigma_m^+$ has the specification property (see \cite{MR0457675}). Let $\mathfrak{p}(\varepsilon)$ be the positive integer in the definition of the specification property of $\sigma$ for $\varepsilon>0$. Let $\mathfrak{m}_G$ denote the function in the $\mathbf{g}$-almost product property for $G$. Let $\delta_{r}:=2^{-r}, \,r\in \mathbb{N}$. It is no restriction to suppose that $\mathfrak{p}(\delta_r)>r$ and that $\mathfrak{p}(\delta_r)$ is increasing in $r$. Next, we may define a nonincreasing function $\mathfrak{m}_F :\mathbb{R}^+\to\mathbb{N}$ as follows:
$$\mathfrak{m}_F(\varepsilon):=\widetilde{\mathfrak{m}}(2\delta_r)$$
where $r=\min\{i :2\delta_i\le\varepsilon\}$ and $\widetilde{\mathfrak{m}}(2\delta_r):= \min \left\{n: \mathbf{g}(n) \geq 2 \mathfrak{p}\left(\delta_{r}\right)\,\text{and } n\ge \mathfrak{m}_G\left(\delta_{r}\right)\right\}$.
For $k\ge 2$, let $(\iota^{(1)},x_1),\cdots, (\iota^{(k)},x_k)\in\Sigma_m^+\times X$ and $\varepsilon_{1},\cdots,\varepsilon_{k}>0$ be given. It is sufficient to prove the statement for $\varepsilon_{j}$ of the form $2 \delta_{r_j}, j=1, \cdots, k$, where, as above $r_j=\min\{i :2 \delta_{i} \leq \varepsilon_{j}\}$. Precisely, if $\varepsilon_{j}$ is not of that form, we change it into $2 \delta_{r_j}$. From now on we assume that, for all $j, \varepsilon_{j}$ is of the form $2 \delta_{r_j}$. For convenience, write $p_j:=\mathfrak{p}(\delta_{r_j})$ for all $j=1,\cdots,k$.
For any $n_1\ge \mathfrak{m}_F(\varepsilon_{1}),\cdots, n_k\ge \mathfrak{m}_F(\varepsilon_{k})$, let $\iota\in \Sigma_{m}^+$ satisfy the following condition:
$$
\iota\lvert_{[n_1+n_2+\cdots +n_{j-1},\, n_1+n_2+\cdots +n_j-1]}=\iota^{(j)}\lvert_{[0,\, n_j-1]}, \quad j=1,\cdots, k,
$$
where $n_0=0$. We now use the fact that $p_j>r_j$ for all $j$ to obtain
\begin{equation}\label{3.1}
d^\prime\left(\sigma^{n_{1}+n_2+\cdots+n_{j-1}+r} (\iota), \sigma^{r} (\iota^{(j)})\right) \leq \varepsilon_{j}, \quad r=p_{j},p_j+1, \cdots, n_{j}-p_{j}-1.
\end{equation}
Let
$$
\begin{aligned}
w_{(\varepsilon_{1})}:&=\iota\lvert_{[0, \,n_1-1]}, \\
w_{(\varepsilon_2)}:&=\iota\lvert_{[n_1,\, n_1+n_2-1]}, \\
&\vdots \\
w_{(\varepsilon_{k})}:&=\iota\lvert_{[n_1+\cdots+n_{k-1},\, n_1+\cdots +n_k-1]}.
\end{aligned}
$$
Observe that $\lvert w_{(\varepsilon_j)}\rvert \ge \mathfrak{m}_G(\varepsilon_{j})$ for each $j=1,\cdots, k$. The $\mathbf{g}$-almost product property of $G$ implies that
$$
B_{w_{(\varepsilon_{1})}}(\mathbf{g};x_1,\varepsilon_{1})\cap\left (\bigcap_{j=2}^{k} f^{-1}_{\overline{w_{(\varepsilon_{j-1})}}\cdots \overline{w_{(\varepsilon_1)}}} B_{w_{(\varepsilon_{j})}}\left(\mathbf{g} ; x_{j}, \varepsilon_{j}\right)\right ) \neq \emptyset.
$$
Take an element $x$ from the left set. For $j=1,\cdots, k$, define
{$$
\Gamma_j:=\bigg \{p_j\le r< n_j-p_j:
d\big(f_{\overline{w_{(\varepsilon_j)}\lvert_{[0,r]}}\overline{w_{(\varepsilon_{j-1})}}\cdots\overline{w_{(\varepsilon_{1})}}}(x),
f_{\overline{w_{(\varepsilon_j)}\lvert_{[0,r]}}}(x_j)
\big ) \le\varepsilon_j \bigg\}.
$$}To be more precise, for any $r\in\Gamma_j$,
\begin{equation}\label{3.2}
d\left (f_{\overline{\iota\lvert_{[0,\, n_1+n_2+\cdots +n_{j-1}+r]}}}(x),
f_{\overline{\iota^{(j)}\lvert_{[0,r]}}}(x_j)
\right )\le\varepsilon_{j}.
\end{equation}
Accordingly,
{$$
\begin{aligned}
&D\Big (F^{n_1+n_2+\cdots +n_{j-1}+r} (\iota,x ) , F^r (\iota^{(j)}, x_j )\Big )\\
=&D\Big (\big (\sigma^{n_1+n_2+\cdots +n_{j-1}+r}(\iota),\, f_{\overline{\iota\lvert_{[0,\, n_1+n_2+\cdots +n_{j-1}+r]}}}(x) \big), \big(\sigma^r(\iota^{(j)}), f_{\overline{\iota^{(j)}\lvert_{[0,r]}}}(x_j) \big )\Big)\\
=&\max\Big \{d^\prime\big ( \sigma^{n_1+n_2+\cdots +n_{j-1}+r}(\iota),\sigma^r(\iota^{(j)}) \big ),
d\big( f_{\overline{\iota\lvert_{[0,\, n_1+n_2+\cdots +n_{j-1}+r]}}}(x),f_{\overline{\iota^{(j)}\lvert_{[0,r]}}}(x_j) \big )\Big \}\\
\le& \varepsilon_{j}, \quad\text{by inequalities (\ref{3.1}) and (\ref{3.2})}.
\end{aligned}
$$}
Observe that $\sharp\left (\Gamma_j\right )\ge n_j-2p_j-\mathbf{g}(n_j)\ge n_j-2\mathbf{g}(n_j)$.
As a consequence,
{
$$
(\iota,x)\in B_{n_1}\big (2\mathbf{g};(\iota^{(1)},x_1 ),\varepsilon_{1}\big) \cap \bigg (\bigcap_{j=2}^{k} F^{-(n_1+n_2+\cdots +n_{j-1})} B_{n_{j}}\big(2\mathbf{g} ; (\iota^{(j)}, x_{j}), \varepsilon_{j}\big)\bigg).
$$
}This proves that $F$ has the $2\mathbf{g}$-almost product property.
\end{proof}
\section{General (ir)regularity}\label{Irregular and regular set}
In this section, we study the more general irregular and regular sets of free semigroup actions and calculate the upper capacity topological entropy of the irregular and regular sets of free semigroup actions. The results in this section are inspired by \cite{MR3963890, MR4200965}. Throughout this section we assume that $X$ is a compact metric space, $G$ is the free semigroup generated by $m$ generators $f_0,\cdots,f_{m-1}$ which are continuous maps on $X$ and $F$ is the skew product map corresponding to the maps $f_0,\cdots,f_{m-1}$.
Let
$$
R_\alpha(G):=\bigcup_{\iota\in\Sigma_{m}^+} R_\alpha(\iota, G),\quad I_\alpha (G):=\bigcup_{\iota\in\Sigma_{m}^+}I_\alpha(\iota, G).
$$
Let us call $R_\alpha(G)$ and $I_\alpha (G)$ the $\alpha$-regular set and $\alpha$-irregular set of free semigroup actions, respectively.
\begin{theorem}\label{full entropy 1-1}
Let $(X,d)$ be a compact metric space and $G$ the free semigroup action on $X$ generated by $f_0,\cdots,f_{m-1}$. Let $\alpha: \mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ be a continuous function. Then,
$$
\overline{Ch}_{R_\alpha(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{theorem}
\begin{proof}
Consider a set
$$
R_\alpha(F):=\left \{(\iota,x)\in\Sigma_{m}^+\times X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)= \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \}.
$$
It follows from Theorem 4.1(4) of \cite{MR3963890} that
\begin{equation}\label{full entropy1-1-1}
h_{R_\alpha(F)}(F)=h_{\Sigma_{m}^+\times X}(F).
\end{equation}
For $(\iota,x)\in R_\alpha(F)$, it is immediate that $x\in R_\alpha(\iota,G)$, then $x\in R_\alpha(G)$. This implies that
$$
R_\alpha(F)\subseteq \Sigma_{m}^+\times R_\alpha(G)\subseteq \Sigma_{m}^+\times X.
$$
In this way we conclude from the formula (\ref{full entropy1-1-1}) that
\begin{equation}\label{full entropy1-1-2}
h_{\Sigma_{m}^+\times X}(F)=h_{R_\alpha(F)}(F)\le \overline{Ch}_{\Sigma_{m}^+\times R_\alpha(G)}(F).
\end{equation}
From Theorem \ref{Topological entropy skew product}, we obtain that
\begin{equation}\label{full entropy1-1-4}
\log m +h(G)=h_{\Sigma_{m}^+\times X}(F).
\end{equation}
By Theorem \ref{Topological upper capacity entropy skew product}, one has
\begin{equation}\label{full entropy1-1-3}
\overline{Ch}_{\Sigma_{m}^+\times R_\alpha(G)}(F)= \log m +\overline{Ch}_{R_\alpha(G)}(G).
\end{equation}
Combining the equations (\ref{full entropy1-1-2}), (\ref{full entropy1-1-4}) and (\ref{full entropy1-1-3}), we get that
$$
\begin{aligned}
\log m+h(G)&=h_{R_\alpha(F)}(F)\\
&\le\overline{Ch}_{\Sigma_{m}^+\times R_\alpha(G)}(F)\\
&=\log m +\overline{Ch}_{R_\alpha(G)}(G).
\end{aligned}
$$
Hence,
$$
\overline{Ch}_X(G)=h(G)\le \overline{Ch}_{R_\alpha(G)}(G).
$$
Obviously,
$$
\overline{Ch}_{R_\alpha(G)}(G)\le \overline{Ch}_X(G)=h(G).
$$
Consequently,
$$
\overline{Ch}_{R_\alpha(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{proof}
\begin{theorem}\label{full entropy 2-2}
Suppose that $G$ has the $\mathbf{g}$-almost product property and that there exists a $\mathbb{P}$-stationary measure with full support, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\alpha: \mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ be a continuous function satisfying condition A.3. If $\inf_{\nu \in \mathcal{M}(\Sigma_{m}^+\times X, F)}\alpha(\nu)<\sup_{\nu \in \mathcal{M}(\Sigma_{m}^+\times X, F)}\alpha (\nu)$, then
$$
\overline{Ch}_{I_\alpha(G)}(G)=\overline{Ch}_{E(I_\alpha,\mathrm{Tran})} (G)=\overline{Ch}_X(G)=h(G),
$$
where $E(I_\alpha,\mathrm{Tran}):=\cup_{\iota\in\Sigma_{m}^+}\left ( I_\alpha(\iota,G)\cap \mathrm{Tran}(\iota,G)\right )$. In particular,
$$
\overline{Ch}_{I_\alpha(G)}(G)=\overline{Ch}_{I_\alpha (G)\cap\mathrm{Tran}(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{theorem}
\begin{proof}
Suppose $\mu$ is a $\mathbb{P}$-stationary measure with full support. Then Proposition \ref{invariant measure of random} ensures that $\mathbb{P}\times \mu$ is an invariant measure under the skew product $F$ with support $\Sigma_{m}^+\times X$. By Proposition \ref{2g-almost product property}, the skew product $F$ has the $2\mathbf{g}$-almost product property. Consider the set
$$
I_\alpha(F):=\left \{(\iota,x)\in\Sigma_{m}^+\times X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)< \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \}.
$$
Hence, from Theorem 4.1 (2) of \cite{MR3963890}, one has
\begin{equation}\label{full entropy2-2-1}
h_{I_\alpha(F)}(F)=h_{I_\alpha (F)\cap \mathrm{Tran}(F)}(F) =h_{\Sigma_{m}^+\times X}(F).
\end{equation}
It is clear that if $(\iota,x)\in I_\alpha (F)\cap \mathrm{Tran}(F)$, then $x\in I_\alpha (\iota,G)\cap \mathrm{Tran}(\iota,G)$. Accordingly, $x\in E(I_\alpha,\mathrm{Tran})$. This yields that
$$
I_\alpha (F)\cap \mathrm{Tran}(F) \subseteq \Sigma_{m}^+\times E(I_\alpha,\mathrm{Tran}) \subseteq \Sigma_{m}^+\times X.
$$
In this way we conclude from the formula (\ref{full entropy2-2-1}) that
\begin{equation}\label{full entropy2-2-2}
h_{\Sigma_{m}^+\times X}(F)=h_{I_\alpha (F)\cap \mathrm{Tran}(F)}(F)\le \overline{Ch}_{\Sigma_{m}^+\times E(I_\alpha,\mathrm{Tran})}(F).
\end{equation}
By Theorem \ref{Topological upper capacity entropy skew product}, one has
\begin{equation}\label{full entropy2-2-3}
\overline{Ch}_{\Sigma_{m}^+\times E(I_\alpha,\mathrm{Tran})}(F)= \log m +\overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G).
\end{equation}
Combining the equations (\ref{full entropy1-1-4}), (\ref{full entropy2-2-2}) and (\ref{full entropy2-2-3}), we get that
$$
\begin{aligned}
\log m+h(G)&=h_{I_\alpha (F)\cap \mathrm{Tran}(F)}(F)\\
&\le\overline{Ch}_{\Sigma_{m}^+\times E(I_\alpha,\mathrm{Tran})}(F)\\
&=\log m +\overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G).
\end{aligned}
$$
Hence,
$$
\overline{Ch}_X(G)=h(G)\le \overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G).
$$
Obviously,
$$
\overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G)\le \overline{Ch}_X(G)=h(G).
$$
Consequently,
$$
\overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G).
$$
Since $E(I_\alpha,\mathrm{Tran})\subseteq I_\alpha(G)$, we obtain that
$$
\overline{Ch}_{I_\alpha(G)}(G)=\overline{Ch}_{E(I_\alpha,\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G).
$$
It is easy to check that
$$
\begin{aligned}
I_\alpha(G)\cap \mathrm{Tran}(G)&=\left ( \bigcup_{\iota\in\Sigma_{m}^+} I_\alpha(\iota,G)\right )\cap \left ( \bigcup_{\iota^\prime\in\Sigma_{m}^+} \mathrm{Tran}(\iota^\prime,G)\right )\\
&= \bigcup_{\iota,\iota^\prime\in\Sigma_{m}^+}\left ( I_\alpha(\iota,G)\cap \mathrm{Tran}(\iota^\prime,G)\right )\\
&\supseteq \bigcup_{\iota\in\Sigma_{m}^+} \left (I_\alpha(\iota,G)\cap \mathrm{Tran}(\iota,G)\right )=E(I_\alpha,\mathrm{Tran}).
\end{aligned}
$$
Hence,
$$
\overline{Ch}_{I_\alpha(G)}(G)=\overline{Ch}_{I_\alpha(G)\cap \mathrm{Tran}(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{proof}
\begin{remark}
Theorems \ref{full entropy 1-1} and \ref{full entropy 2-2} both extend Theorem 4.1 of \cite{MR3963890}.
\end{remark}
For $\iota=(i_0,i_1,\cdots)\in \Sigma^+_m$, consider a set
$$
I_\varphi(\iota,G):=\left \{ x\in X: \lim_{n\to \infty}\frac{1}{n}\sum_{j=0}^{n-1}\varphi\left (f_{i_{j-1}\cdots i_0}(x)\right )\text{ does not exist}\right \}.
$$
Let $R_\varphi(\iota,G):= X \backslash I_\varphi(\iota,G)$, and
$$
R_\varphi(G):=\bigcup_{\iota\in\Sigma_{m}^+} R_\varphi(\iota,G),\quad I_\varphi(G):=\bigcup_{\iota\in\Sigma_{m}^+} I_\varphi(\iota,G).
$$
It is easy to see that $I_\varphi(G)$ coincides with the $\varphi$-irregular set of free semigroup actions defined by Zhu and Ma \cite{MR4200965}.
For convenience, we call $R_\varphi(G)$ the $\varphi$-regular set of the free semigroup action.
For a continuous function $\varphi:X\to\mathbb{R}$, define $\psi:\Sigma_{m}^+\times X\to\mathbb{R}$ by $\psi(\iota,x):=\varphi(x)$ for any $\iota=(i_0,i_1,\cdots)\in\Sigma_{m}^+$; then $\psi$ is continuous. Define the continuous function $\alpha :\mathcal{M}(\Sigma_{m}^+\times X, F)\to\mathbb{R}$ by $\alpha(\nu)=\int \psi \,\mathrm{d}\nu$. It is easy to check that the function $\alpha$ satisfies conditions A.1, A.2 and A.3. It follows from the definition of $\psi$ that the limit
$$
\lim_{n\to \infty}\frac{1}{n}\sum_{j=0}^{n-1}\varphi\left (f_{i_{j-1}\cdots i_0}(x)\right )
$$
exists if and only if
$$
\inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)= \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu).
$$
Hence, $R_{\alpha}(\iota,G)=R_{\varphi}(\iota,G)$. Analogously, $I_\alpha (\iota,G)=I_{\varphi}(\iota,G)$.
\begin{corollary}\label{full entropy 1}
Let $(X,d)$ be a compact metric space and $G$ the free semigroup action on $X$ generated by $f_0,\cdots,f_{m-1}$. Then the $\varphi$-regular set of the free semigroup action
carries full upper capacity topological entropy, that is,
$$
\overline{Ch}_{R_\varphi(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{corollary}
If $m = 1$, the above corollary coincides with Theorem 4.2 proved by Tian in \cite{MR3436391}.
\begin{corollary}\label{full entropy 2}
Suppose that $G$ has the $\mathbf{g}$-almost product property and that there exists a $\mathbb{P}$-stationary measure with full support, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\varphi :X\to\mathbb{R}$ be a continuous function. If $I_\varphi(\iota,G)$ is non-empty for some $\iota\in\Sigma_{m}^+$, then
$$
\overline{Ch}_{I_\varphi(G)}(G)=\overline{Ch}_{E(I_\varphi,\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G),
$$
where ${E(I_\varphi,\mathrm{Tran})}:=\cup_{\iota\in\Sigma_{m}^+}\left ( I_\varphi(\iota,G)\cap \mathrm{Tran}(\iota,G)\right )$. In particular,
$$
\overline{Ch}_{I_\varphi(G)}(G)=\overline{Ch}_{I_\varphi (G)\cap\mathrm{Tran}(G)}(G)=\overline{Ch}_X(G)=h(G).
$$
\end{corollary}
\begin{remark}
The previous corollary generalizes Theorem 2
obtained by Zhu and Ma \cite{MR4200965}. Indeed, by Lemma 3.3 of \cite{MR4200965}, if the free semigroup action $G$ has the specification property, then the skew product $F$ has the specification property. It then follows from \cite{MR646049} that there exists an $F$-invariant probability measure on $\Sigma_{m}^+\times X$ with full support. On the other hand, from Proposition \ref{proposition almost} we know that specification implies the $\mathbf{g}$-almost product property for free semigroup actions. Therefore, the specification property of the free semigroup action $G$ implies the hypotheses of Corollary \ref{full entropy 2}, as we wanted to prove.
\end{remark}
We now provide an example that satisfies the assumptions of Theorems \ref{entropy of BR-1}, \ref{entropy of QW-1} and \ref{entropy of omega limit set}.
\begin{example}\label{example 3}
Given $q\in\mathbb{N}$, for $j=0,\cdots,m-1$ let $A_j\in\mathrm{GL}(q,\mathbb{Z})$ be an integer matrix with nonzero determinant whose eigenvalues all have absolute value greater than one. Let $f_{A_j}:\mathbb{T}^q\to\mathbb{T}^q$ be the linear endomorphism of the torus induced by the matrix $A_j$. Then the transformations $f_{A_0},f_{A_1},\cdots,f_{A_{m-1}}$ are all expanding (see Sec. 11.1 of \cite{MR3558990} for details). Let $G$ be the free semigroup action generated by $f_{A_0},f_{A_1},\cdots,f_{A_{m-1}}$. It follows from \cite{MR3503951} that $G$ is positively expansive with the $\mathbf{g}$-almost product property.
Suppose that $F:\Sigma_m^+\times X \to\Sigma_m^+\times X$ is the skew product map corresponding to the maps $f_{A_0},f_{A_1},\cdots,f_{A_{m-1}}$.
From Section 4.2.5 of \cite{MR3558990}, we have that $f_{A_j}$ preserves the Lebesgue measure
$\mu$ on $\mathbb{T}^q$ for all $j=0,\cdots,m-1$.
Hence the Lebesgue measure is stationary with respect to any Bernoulli measure. Moreover, the skew product $F$ is not uniquely ergodic. This may be seen as follows. Let $\mathbb{P}_a$ and $\mathbb{P}_b$ be two Bernoulli measures on $\Sigma_{m}^+$ generated by distinct probability vectors $a=(a_0,a_1,\cdots,a_{m-1})$ and $b=(b_0, b_1,\cdots,b_{m-1})$, respectively. It follows from Proposition \ref{invariant measure of random} that the distinct product measures $\mathbb{P}_a\times\mu$ and $\mathbb{P}_b\times\mu$ are both invariant under the skew product $F$. This shows that $F$ is not uniquely ergodic. Hence, $G$ satisfies the hypotheses of Theorems \ref{entropy of BR-1}, \ref{entropy of QW-1} and \ref{entropy of omega limit set}, as we wanted to prove.
\end{example}
\section{Proofs of the main results}\label{BR and QW}
In this section, we complete the proofs of Theorems \ref{entropy of BR-1}, \ref{entropy of QW-1} and \ref{entropy of omega limit set}. Let $(X,f)$ be a dynamical system and let $Z_{1}, Z_{2}, \cdots, Z_{k} \subseteq X$ ($k \ge 2$) be a collection of subsets of $X$. Recall that $\left\{Z_{i}\right\}$ has full entropy gaps with respect to $Y \subseteq X$ if
$$
h_{(Z_{i+1} \backslash Z_{i}) \cap Y}(f)=h_{Y}(f) \text { for all } 1 \le i<k.
$$
Throughout the rest of this section we assume that $X$ is a compact metric space, $G$ is the free semigroup generated by $m$ continuous maps $f_0,\cdots,f_{m-1}$ on $X$, and $F$ is the skew product map corresponding to the maps $f_0,\cdots,f_{m-1}$.
In \cite{MR3963890}, Tian defined the recurrent level sets of the upper Banach recurrent points for a single map. For the skew product map $F$, let $\mathrm{BR}^{\#}(F):=\mathrm{BR}(F) \backslash \mathrm{QW}(F)$,
$$
\begin{aligned}
W(F) &:=\left\{(\iota,x) \in \Sigma_{m}^+\times X : S_{\mu}=C_{(\iota,x)} \text { for every } \mu \in M_{(\iota,x)}(F)\right\}, \\
V(F) &:=\left\{(\iota, x) \in \Sigma_{m}^+\times X: \exists \mu \in M_{(\iota,x)}(F) \text { such that } S_{\mu}=C_{(\iota,x)}\right\}, \\
S(F)&:=\left\{(\iota,x) \in \Sigma_{m}^+\times X : \cap_{\mu \in M_{(\iota,x)}(F)} S_{\mu} \neq \emptyset\right\} .
\end{aligned}
$$
More precisely, $\mathrm{BR}^{\#}(F)$ is divided into the following levels, with different asymptotic behaviour:
$$
\begin{aligned}
&\mathrm{BR}_{1}(F):=\mathrm{BR}^{\#}(F)\cap W(F), \\
&\mathrm{BR}_{2}(F):=\mathrm{BR}^{\#}(F)\cap(V(F) \cap S(F)), \\
&\mathrm{BR}_{3}(F):=\mathrm{BR}^{\#}(F)\cap V(F),\\
&\mathrm{BR}_{4}(F):=\mathrm{BR}^{\#}(F)\cap(V(F) \cup S(F)),\\
&\mathrm{BR}_{5}(F):=\mathrm{BR}^{\#}(F).
\end{aligned}
$$
Then $\mathrm{BR}_{1}(F) \subseteq \mathrm{BR}_{2}(F) \subseteq \mathrm{BR}_{3}(F) \subseteq \mathrm{BR}_{4}(F) \subseteq \mathrm{BR}_{5}(F)$, since $W(F)\subseteq V(F)\cap S(F)\subseteq V(F)\subseteq V(F)\cup S(F)$.
\begin{proof}[Proof of Theorem \ref{entropy of BR-1}]
Suppose $\mu$ is the $\mathbb{P}$-stationary measure with full support. Then, Proposition \ref{invariant measure of random} ensures that $\mathbb{P}\times \mu$ is an invariant measure under the skew product $F$ with support $\Sigma_{m}^+\times X$. From Lemma \ref{2g-almost product property}, the skew product $F$ has the 2$\mathbf{g}$-almost product property.
(1) Consider a set
$$
I_\alpha(F):=\left \{(\iota,x)\in\Sigma_{m}^+\times X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)< \sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \}.
$$
If $\alpha$ satisfies A.3 and $\mathrm{Int}(L_\alpha)\neq\emptyset$, it follows from Theorem 6.1(1) of \cite{MR3963890} that
$$
\left \{\mathrm{QR}(F), \mathrm{BR}_1(F),\mathrm{BR}_2(F),\mathrm{BR}_3(F),\mathrm{BR}_4(F),\mathrm{BR}_5(F)\right \}
$$
has full entropy gaps with respect to $I_\alpha(F)\cap\mathrm{Tran}(F)$. Hence,
$$
h_{I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right )}(F)=h_{I_\alpha(F)\cap\mathrm{Tran}(F)}(F)=h_{\Sigma_{m}^+\times X}(F)
$$
and
$$
h_{I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right )}(F)=h_{I_\alpha(F)\cap\mathrm{Tran}(F)}(F)=h_{\Sigma_{m}^+\times X}(F),
$$
for $j=2,\cdots,5$. From Theorem \ref{Topological entropy skew product}, we get that
\begin{equation}\label{entropy of BR-1-1-3}
\log m +h(G)=h_{I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right)}(F),
\end{equation}
and
\begin{equation}\label{entropy of BR-1-1-4}
\log m +h(G)=h_{I_\alpha(F)\cap \mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}\right )}(F)\quad\text{for } j=2,\cdots,5.
\end{equation}
By the definitions of the sets, if $(\iota,x)\in I_\alpha(F)\cap\mathrm{Tran}(F)\cap \left ( \mathrm{BR}_1(F)\setminus\mathrm{QR}(F)\right )$ then
$$x\in I_\alpha(\iota,G)\cap\mathrm{Tran}(\iota, G)\cap \left ( \mathrm{BR}_1(\iota,G)\setminus\mathrm{QR}(\iota,G)\right ).$$ This shows that
\begin{equation}\label{entropy of BR-1-1-1}
I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right )\subseteq\Sigma_{m}^+\times M_1(I_\alpha,\mathrm{Tran})\subseteq\Sigma_{m}^+\times X,
\end{equation}
where
$$
M_1(I_\alpha, \mathrm{Tran}):=\cup_{\iota\in\Sigma_{m}^+}\big \{I_\alpha(\iota,G)\cap\mathrm{Tran}(\iota, G) \cap\left (\mathrm{BR}_1(\iota, G)\setminus \mathrm{QR}(\iota, G)\right )\big \}.
$$
It follows from the inclusion (\ref{entropy of BR-1-1-1}) and Theorem \ref{Topological upper capacity entropy skew product} that
\begin{equation}\label{entropy of BR-1-1-2}
\begin{aligned}
h_{\Sigma_{m}^+\times X}(F)&=h_{I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times M_1(I_\alpha,\mathrm{Tran})}(F)\\
&=\log m + \overline{Ch}_{ M_1(I_\alpha,\mathrm{Tran})}(G).
\end{aligned}
\end{equation}
Combining these two relations (\ref{entropy of BR-1-1-3}) and (\ref{entropy of BR-1-1-2}), we find that
$$
\overline{Ch}_{ M_1(I_\alpha,\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G).
$$
Denote
$$
M_j (I_\alpha,\mathrm{Tran}):=\cup_{\iota\in\Sigma_{m}^+} \left \{I_\alpha(\iota,G)\cap \mathrm{Tran}(\iota, G) \cap\left (\mathrm{BR}_j(\iota, G)\setminus \mathrm{BR}_{j-1}(\iota, G)\right )\right \},
$$ for $j=2,\cdots,5.$
Similarly, if $(\iota,x)\in I_\alpha(F)\cap\mathrm{Tran}(F)\cap \left ( \mathrm{BR}_j(F)\setminus\mathrm{BR}_{j-1}(F)\right )$, we obtain that $x\in I_\alpha(\iota,G)\cap\mathrm{Tran}(\iota, G)\cap \left ( \mathrm{BR}_j(\iota,G)\setminus\mathrm{BR}_{j-1}(\iota,G)\right )$. This shows that
$$
I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right )\subseteq\Sigma_{m}^+\times M_j(I_\alpha,\mathrm{Tran})\subseteq\Sigma_{m}^+\times X.
$$
It follows from Theorem \ref{Topological upper capacity entropy skew product} that
\begin{equation}\label{entropy of BR-1-1-5}
\begin{aligned}
h_{\Sigma_{m}^+\times X}(F)
&=h_{I_\alpha(F)\cap\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times M_j(I_\alpha,\mathrm{Tran})}(F)\\
&=\log m + \overline{Ch}_{ M_j(I_\alpha,\mathrm{Tran})}(G).
\end{aligned}
\end{equation}
Combining these two relations (\ref{entropy of BR-1-1-3}) and (\ref{entropy of BR-1-1-5}), we find that
$$
\overline{Ch}_{ M_j(I_\alpha, \mathrm{Tran})}(G) =\overline{Ch}_X(G)=h(G),\,\text{for }j=2,\cdots,5.
$$
This completes the proof of part (1).
(2) By the hypothesis on the skew product, it follows from Theorem 6.1(3) of \cite{MR3963890} that
$$
\left \{\emptyset,\mathrm{QR}(F)\cap \mathrm{BR}_1(F), \mathrm{BR}_1(F),\mathrm{BR}_2(F),\mathrm{BR}_3(F),\mathrm{BR}_4(F),\mathrm{BR}_5(F)\right \}
$$
has full entropy gaps with respect to $\mathrm{Tran}(F)$. Hence,
$$
h_{\mathrm{Tran}(F)\cap \mathrm{QR}(F)\cap \mathrm{BR}_1(F)}(F)=h_{\mathrm{Tran}(F)}(F)=h_{\Sigma_{m}^+\times X}(F),
$$
$$
h_{\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right )}(F)=h_{\mathrm{Tran}(F)}(F)=h_{\Sigma_{m}^+\times X}(F),
$$
$$
h_{\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right )}(F)=h_{\mathrm{Tran}(F)}(F)=h_{\Sigma_{m}^+\times X}(F),\,\text{for } j=2,\cdots,5.
$$
Applying Theorem \ref{Topological entropy skew product}, we obtain that
\begin{equation}\label{entropy of BR-1-2-4}
\log m +h(G)=h_{\mathrm{Tran}(F)\cap \mathrm{QR}(F)\cap \mathrm{BR}_1(F)}(F),
\end{equation}
\begin{equation}\label{entropy of BR-1-2-2}
\log m +h(G)=h_{\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right)}(F),
\end{equation}
\begin{equation}\label{entropy of BR-1-2-3}
\log m +h(G)=h_{\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right )}(F),\,\text{for } j=2,\cdots,5.
\end{equation}
By the definitions of the sets, if $(\iota,x)\in\mathrm{Tran}(F)\cap \mathrm{QR}(F)\cap \mathrm{BR}_1(F)$, then $x\in\mathrm{Tran}(\iota, G)\cap \mathrm{QR}(\iota,G)\cap \mathrm{BR}_1(\iota,G)$. This shows that
$$
\mathrm{Tran}(F)\cap \mathrm{QR}(F)\cap \mathrm{BR}_1(F)\subseteq\Sigma_{m}^+\times M_0(\mathrm{Tran})\subseteq\Sigma_{m}^+\times X,
$$
where $M_0(\mathrm{Tran}):=\bigcup_{\iota\in\Sigma_{m}^+}\{\mathrm{Tran}(\iota, G)\cap \mathrm{QR}(\iota,G)\cap \mathrm{BR}_1(\iota,G)\}$.
It follows from Theorem \ref{Topological upper capacity entropy skew product} that
\begin{equation}\label{entropy of BR-1-2-5}
\begin{aligned}
h_{\Sigma_{m}^+\times X}(F)
&=h_{\mathrm{Tran}(F)\cap \mathrm{QR}(F)\cap \mathrm{BR}_1(F)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times M_0(\mathrm{Tran})}(F)\\
&=\log m + \overline{Ch}_{ M_0(\mathrm{Tran})}(G).
\end{aligned}
\end{equation}
Combining these two relations (\ref{entropy of BR-1-2-4}) and (\ref{entropy of BR-1-2-5}), we find that
$$
\overline{Ch}_{ M_0(\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G).
$$
By the definitions of the sets, if $(\iota,x)\in\mathrm{Tran}(F)\cap \left ( \mathrm{BR}_1(F)\setminus\mathrm{QR}(F)\right )$, then $x\in\mathrm{Tran}(\iota, G)\cap \left ( \mathrm{BR}_1(\iota,G)\setminus\mathrm{QR}(\iota,G)\right )$. This shows that
$$
\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right )\subseteq\Sigma_{m}^+\times M_1(\mathrm{Tran})\subseteq\Sigma_{m}^+\times X,
$$
where $M_1(\mathrm{Tran}):=\bigcup_{\iota\in\Sigma_{m}^+} \left \{\mathrm{Tran}(\iota, G) \cap\left (\mathrm{BR}_1(\iota, G)\setminus \mathrm{QR}(\iota, G)\right )\right \}$. It follows from Theorem \ref{Topological upper capacity entropy skew product} that
\begin{equation}\label{entropy of BR-1-2-6}
\begin{aligned}
h_{\Sigma_{m}^+\times X}(F)
&=h_{\mathrm{Tran}(F) \cap \left (\mathrm{BR}_1(F)\setminus \mathrm{QR}(F)\right)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times M_1(\mathrm{Tran})}(F)\\
&=\log m + \overline{Ch}_{ M_1(\mathrm{Tran})}(G).
\end{aligned}
\end{equation}
Combining these two relations (\ref{entropy of BR-1-2-2}) and (\ref{entropy of BR-1-2-6}), we find that
$$
\overline{Ch}_{ M_1(\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G).
$$
Denote
$$
M_j (\mathrm{Tran}):=\cup _{\iota\in\Sigma_{m}^+} \left \{\mathrm{Tran}(\iota, G) \cap\left (\mathrm{BR}_j(\iota, G)\setminus \mathrm{BR}_{j-1}(\iota, G)\right )\right \},
$$
for $j=2,\cdots,5$.
Similarly, if $(\iota,x)\in\mathrm{Tran}(F)\cap \left ( \mathrm{BR}_j(F)\setminus\mathrm{BR}_{j-1}(F)\right )$, we obtain that $x\in\mathrm{Tran}(\iota, G)\cap \left ( \mathrm{BR}_j(\iota,G)\setminus\mathrm{BR}_{j-1}(\iota,G)\right )$. This shows that
$$
\mathrm{Tran}(F) \cap \left (\mathrm{BR}_j(F)\setminus \mathrm{BR}_{j-1}(F)\right )\subseteq\Sigma_{m}^+\times M_j(\mathrm{Tran})\subseteq\Sigma_{m}^+\times X.
$$
Analogously, using Theorem \ref{Topological upper capacity entropy skew product} and formula (\ref{entropy of BR-1-2-3}), we conclude that
$$
\overline{Ch}_{ M_j(\mathrm{Tran})}(G)=\overline{Ch}_X(G)=h(G), \,\text{for }j=2,\cdots,5.
$$
This completes the proof of part (2).
(3) Define the set
$$
R_{\alpha}(F):=\left \{(\iota,x)\in\Sigma_{m}^+\times X: \inf_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)=\sup_{\nu\in M_{(\iota,x)}(F)}\alpha(\nu)\right \}.
$$
If the skew product $F$ is not uniquely ergodic and $\alpha$ satisfies A.1 and A.2, from Theorem 6.1(4) of \cite{MR3963890}, we obtain that
\begin{equation}\label{entropy of BR-1-2-1}
\left \{\emptyset,\mathrm{QR}(F)\cap \mathrm{BR}_1(F), \mathrm{BR}_1(F),\mathrm{BR}_2(F),\mathrm{BR}_3(F),\mathrm{BR}_4(F),\mathrm{BR}_5(F)\right \}
\end{equation}
has full entropy gaps with respect to $R_\alpha(F)\cap\mathrm{Tran}(F)$. One can adapt the proofs of parts (1) and (2) of Theorem \ref{entropy of BR-1} to obtain the corresponding statements for $R_\alpha (\iota,G)\cap \mathrm{Tran}(\iota,G)$; we omit the details.
\end{proof}
\begin{proof}[Proof of Theorem \ref{entropy of QW-1}]
By Lemma 3.4 of \cite{MR4200965}, $G$ is expansive if and only if the skew product $F$ is expansive. In the same spirit, one can adapt the proof of Theorem \ref{entropy of BR-1}, using Theorem 7.1 of \cite{MR3963890}, to complete the proof; we omit the details.
\end{proof}
\begin{remark}
We extend some results of \cite{MR3963890} to dynamical systems of free semigroup actions.
\end{remark}
\begin{corollary}\label{entropy of BR}
Suppose that $G$ has the $\mathbf{g}$-almost product property and that there exists a $\mathbb{P}$-stationary measure on $X$ with full support, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\varphi: X\to\mathbb{R}$ be a continuous function.
\begin{itemize}
\item [(1)] If $I_\varphi (\iota,G)\neq \emptyset$ for some $\iota\in\Sigma_{m}^+$, then the unions of gaps of
$$
\left \{\mathrm{QR}(\iota, G), \mathrm{BR}_1(\iota,G), \mathrm{BR}_2(\iota,G),\mathrm{BR}_3(\iota,G),\mathrm{BR}_4(\iota,G),\mathrm{BR}_5(\iota,G)\right \}
$$
with respect to $I_\varphi (\iota,G)\cap\mathrm{Tran}(\iota,G)$ for all $\iota\in\Sigma_m^+$ have full upper capacity topological entropy of the free semigroup action $G$.
\item [(2)] If the skew product $F$ is not uniquely ergodic, then the unions of gaps of
$$
\left \{\emptyset, \mathrm{QR}(\iota, G)\cap \mathrm{BR}_1(\iota,G),\mathrm{BR}_1(\iota,G), \mathrm{BR}_2(\iota,G),\mathrm{BR}_3(\iota,G),\mathrm{BR}_4(\iota,G),\mathrm{BR}_5(\iota,G)\right \}
$$
with respect to $\mathrm{Tran}(\iota,G)$ and $R_\varphi(\iota,G)\cap\mathrm{Tran}(\iota,G)$, respectively, for all $\iota\in\Sigma_{m}^+$ have full upper capacity topological entropy of the free semigroup action $G$.
\end{itemize}
\end{corollary}
\begin{corollary}\label{entropy of QW}
Suppose that $G$ has the $\mathbf{g}$-almost product property and is positively expansive, and that there exists a $\mathbb{P}$-stationary measure with full support on $X$, where $\mathbb{P}$ is a Bernoulli measure on $\Sigma_{m}^+$. Let $\varphi: X\to\mathbb{R}$ be a continuous function. If the skew product $F$ is not uniquely ergodic, then the unions of gaps of
$$
\left \{\emptyset, \mathrm{QW}_1(\iota,G),\mathrm{QW}_2(\iota,G),\mathrm{QW}_3(\iota,G),\mathrm{QW}_4(\iota,G),\mathrm{QW}_5(\iota,G)\right \}
$$
with respect to
$\mathrm{Tran}(\iota,G)$ and $R_\varphi(\iota,G)\cap\mathrm{Tran}(\iota,G)$, respectively, for all $\iota\in\Sigma^+_m$ have full upper capacity topological entropy of the free semigroup action $G$. If $I_\varphi(\iota,G)$
is non-empty for some $\iota\in\Sigma_{m}^+$, similar arguments hold with respect to $I_\varphi (\iota,G)\cap\mathrm{Tran}(\iota,G)$.
\end{corollary}
\begin{proof}[Proof of Theorem \ref{entropy of omega limit set}]
Suppose that $\mu$ is the $\mathbb{P}$-stationary measure with full support. Then, Proposition \ref{invariant measure of random} ensures that $\mathbb{P}\times \mu$ is an invariant measure under the skew product transformation $F$ with support $\Sigma_{m}^+\times X$. From Lemma \ref{2g-almost product property}, the skew product $F$ has the 2$\mathbf{g}$-almost product property. For $j=1,\cdots,6$, denote:
$$
T_j(F):=\left \{(\iota,x)\in \mathrm{Tran} (F):(\iota,x) \text{ satisfies Case } (j)\right \},
$$
and
$$
B_j(F):=\left \{(\iota,x)\in \mathrm{BR} (F):(\iota,x) \text{ satisfies Case } (j)\right \}.
$$
It follows from Theorem 1.3 of \cite{MR3963890} that
$$
T_j(F)\neq\emptyset,\quad B_j(F)\neq\emptyset,
$$
and they all have full topological entropy for all $j=1,\cdots,6$.
For $j=1,\cdots,6$, if $(\iota,x)\in T_j(F)$ with $\iota=(i_0,i_1,\cdots)$, then the orbit of $(\iota,x)$ under $F$, that is, $\{F^k(\iota,x):k\in\mathbb{N} \}$, is dense in $\Sigma_{m}^+\times X$. This implies that $\mathrm{orb}(x,\iota,G)$ is dense in $X$, hence $x\in \mathrm{Tran}(\iota,G)$. Since $(\iota,x)$ satisfies Case $(j)$, we have $x\in T_j(G)$, and so $T_j(G)\neq\emptyset$. In particular, one has that
$$
T_j(F)\subseteq\Sigma_{m}^+\times T_j(G) \subseteq\Sigma_{m}^+\times X.
$$
Combining Theorems \ref{Topological entropy skew product} and \ref{Topological upper capacity entropy skew product}, we get that
$$
\begin{aligned}
\log m + h(G)=h_{\Sigma_{m}^+\times X}(F)
&=h_{T_j(F)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times T_j(G)}(F)\\
&=\log m +\overline{Ch}_{T_j(G)}(G).
\end{aligned}
$$
Since $h(G)=\overline{Ch}_X(G)$, this proves that
$$
h(G)=\overline{Ch}_X(G)\le\overline{Ch}_{T_j(G)}(G).
$$
Finally, we conclude that
$$
h(G)=\overline{Ch}_X(G)=\overline{Ch}_{T_j(G)}(G).
$$
On the other hand, for $j=1,\cdots,6$, if $(\iota,x)\in B_j(F)$ with $\iota=(i_0,i_1,\cdots)$, then for any $\varepsilon>0$ the set of visiting times $N\left ((\iota,x), B\left ((\iota,x),\varepsilon\right )\right )$ has positive Banach upper density, and so does $N_\iota(x,B(x,\varepsilon))$. Hence, we get that $x\in \mathrm{BR}(\iota,G)$. Since $(\iota,x)$ satisfies Case $(j)$, we have $x\in B_j(G)$, so $B_j(G)\neq\emptyset$. In particular, one has that
$$
B_j(F)\subseteq\Sigma_{m}^+\times B_j(G) \subseteq\Sigma_{m}^+\times X.
$$
Combining Theorems \ref{Topological entropy skew product} and \ref{Topological upper capacity entropy skew product}, we get that
$$
\begin{aligned}
\log m + h(G)=h_{\Sigma_{m}^+\times X}(F)
&=h_{B_j(F)}(F)\\
&\le \overline{Ch}_{\Sigma_{m}^+\times B_j(G)}(F)\\
&=\log m +\overline{Ch}_{B_j(G)}(G).
\end{aligned}
$$
Since $h(G)=\overline{Ch}_X(G)$, this proves that
$$
h(G)=\overline{Ch}_X(G)\le\overline{Ch}_{B_j(G)}(G).
$$
Finally, we conclude that
$$
h(G)=\overline{Ch}_X(G)=\overline{Ch}_{B_j(G)}(G).
$$
\end{proof}
\begin{remark}
Theorem \ref{entropy of omega limit set} is a generalization of Theorem 1.3 of \cite{MR3963890}.
\end{remark}
\textbf{Acknowledgements}
The authors are grateful to the referees for their valuable remarks and suggestions, which helped to improve the paper.
\section{Introduction}
To interpret the surface composition of minor bodies it is necessary to observe
their spectra over the widest possible spectral range.
This is typically achieved by observing the target with
different instruments, techniques, and calibrations,
which results in a lack of simultaneous datasets and often in having to rely on other parameters
(e.g., albedo, photometric magnitudes) to scale the complete spectrum.
Fortunately, new instrumentation allows us to overcome some of these problems. One such
instrument is X-Shooter (XS hereafter), the first second-generation instrument
developed for the ESO Very Large Telescope (D'Odorico et al. \cite{dodor06}), currently
mounted on its unit 2 (Kueyen).
A more detailed description of XS is given in Section 2 but, as an overview, XS is a spectrograph
that obtains spectra from 300 to 2480 nm in a single exposure at high resolving powers (from 3,000
up to 10,000).
This wavelength range covers absorption features of many known ices and silicates.
At the time of its science
verification we decided to test its capabilities by
observing a well-known trans-Neptunian object with absorption features across most of this spectral range:
(136199) Eris (hereafter Eris).
Eris, formerly known as 2003 UB$_{313}$, is one of the largest bodies in the
trans-Neptunian region, with a diameter $<2340$ km (B. Sicardy, personal communication),
and, together with the other two thousand-km-sized objects
[(134340) Pluto and (136472) Makemake], shows CH$_4$ absorption bands in its spectrum,
from the visible to the near infrared (see Brown et al. \cite{brown05},
Licandro et al. \cite{lica06b}).
Rotational light-curve studies of Eris have had little success in obtaining a reliable rotational period
(e.g., Carraro et al. \cite{carra06} or Duffard et al. \cite{duffa08}). Roe et al. (\cite{roe08})
proposed a rotational period of 1.08 days. The light-curve seems to be related not to Eris' shape,
but probably to an albedo patch. Nevertheless, given the noise in the data, even if this albedo
patch indeed exists, it is clear that there are no large albedo heterogeneities.
A closer look at the spectroscopic data shows that a subtle heterogeneity may exist.
Abernathy et al. (\cite{abern09}) compared their spectra with that of Licandro et al.
(\cite{lica06b}), finding different central wavelengths and different depths for a few
selected methane absorption bands. Nevertheless, recent results point otherwise:
Tegler et al. (\cite{tegle10}), using their own results and those of Quirico
and Schmitt (\cite{quir97a}), showed that a single component, CH$_4$ diluted in N$_2$, could
explain the wavelength shifts, because those shifts increase with increasing
wavelength.
The orbit of Eris is
that of a scattered disk object (Gladman et al. \cite{gladm08}). It has a
perihelion distance of 49.5 AU and is currently located at 97 AU,
approaching perihelion. At 97 AU, its equilibrium temperature is $\lesssim30$ K,
as estimated from the Stefan-Boltzmann law assuming conservation
of energy.
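The quoted temperature can be reproduced with a back-of-the-envelope energy-balance calculation; the solar luminosity, the fast-rotator geometry, and the zero-albedo limit assumed below are our illustrative choices, made only to recover the $\lesssim30$ K figure:

```python
# Order-of-magnitude equilibrium temperature at 97 AU via energy balance
# (Stefan-Boltzmann law): absorbed sunlight = re-emitted thermal flux.
# Constants and the zero-albedo assumption are ours, not the paper's.
import math

L_SUN = 3.828e26          # solar luminosity [W]
SIGMA = 5.670e-8          # Stefan-Boltzmann constant [W m^-2 K^-4]
AU = 1.496e11             # astronomical unit [m]

def equilibrium_temperature(d_au, albedo=0.0):
    """Equilibrium temperature of a grey body at heliocentric distance d_au.

    Energy balance: (1 - A) * L_sun / (4 pi d^2) * pi R^2 = 4 pi R^2 sigma T^4,
    so the body radius R cancels.
    """
    d = d_au * AU
    return (L_SUN * (1.0 - albedo) / (16.0 * math.pi * SIGMA * d * d)) ** 0.25

T = equilibrium_temperature(97.0)   # ~28 K, consistent with the <~30 K in the text
```

With a nonzero albedo the estimate only drops further, so the $\lesssim30$ K bound is conservative.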
Based on models of volatile retention (see Schaller and Brown \cite{schal07})
and on its similarities with Pluto, Eris' surface is expected to be covered in N$_2$,
CO, and CH$_4$ ices. Of these species, only CH$_4$ has been clearly detected on Eris.
its high resolving power to detect features of CO and N$_2$, and to obtain Eris’ spectrum
simultaneously between 300 and 2480 nm.
The article is organized as follows: in Sec. 2 we describe the instrument and the observations.
In Sec. 3 we present the results
obtained from our analysis, while the discussion and conclusions are presented in Secs. 4 and 5
respectively.
\section{Observation and data reduction}
\subsection{X-Shooter}
The instrument is an {\it echelle} spectrograph with an almost fixed
spectral setup. The observer can choose between SLIT (and
slit width) and IFU (Integral Field Unit) modes. We used the SLIT mode. A detailed
description of the instrument is available at ESO's
webpage\footnote{{\tt http://www.eso.org/sci/facilities/paranal/instruments/xshooter/}}.
This spectrograph can simultaneously obtain data over the entire $300-2480$ nm
spectral range by splitting the incoming
light from the telescope into three beams, each sent to a different arm: ultra-violet---blue (UVB),
visible (VIS), and near-infrared (NIR).
Using two dichroics, the light is first sent to the UVB arm,
then to the VIS arm, and the remaining light arrives at the NIR arm. The disadvantage
of this optical path is the high thermal background in the K region of the NIR spectrum.
Each arm operates independently of the others, with the sole exception
of the two CCDs (UVB and VIS), which are controlled by the same FIERA
(Fast Imaging Electronic Readout Assembly) controller.
Therefore, if the two exposures finish at the same time, their read-outs are performed
sequentially. The read-out mode of the NIR detector
is totally independent of the others, being operated by an IRACE
(Infra Red Array Control Electronics) controller.
\subsection{The observational setup}
We observed Eris on two nights of the XS Science Verification,
August 14$^{\rm th}$ and September 29$^{\rm th}$, 2009. As mentioned
above, we used the SLIT mode, selecting the high-gain readout mode and
$2\times1$ binning for the UVB and VIS detectors. The read-out mode and binning of
the NIR detector are fixed.
The slit widths were 1.0\arcsec, 0.9\arcsec, and 0.9\arcsec\ for the UVB, VIS, and NIR
arms, respectively, giving a resolving power of about 5,000 per arm. As usual in
NIR observations, we nodded along the slit to remove the sky contribution.
As the nod is made by the telescope, we ended up with two exposures per arm per night.
To remove the solar and telluric signals from the Eris spectra,
we observed the G5 star SA93-101 (Landolt \cite{lando92}) with the same observational setup as
Eris and at a similar airmass to minimize the effects of differential refraction.
Details of the observations are presented in Table \ref{table1}.
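The effect of nodding along the slit can be illustrated with a toy one-dimensional example (all numbers below are synthetic): the sky level is common to the two nod positions, so differencing the frames cancels it, leaving the target as a positive beam at one position and a negative beam at the other.

```python
# Toy illustration of A-B nodding: the sky cancels in the difference frame.
npix = 64                      # pixels along the slit (synthetic)
sky = [100.0] * npix           # uniform sky level, identical in both frames
frame_a = sky[:]
frame_a[20] += 50.0            # target flux at nod position A
frame_b = sky[:]
frame_b[44] += 50.0            # target flux at nod position B

# Difference frame: sky subtracts out; target appears twice, opposite signs
diff = [a - b for a, b in zip(frame_a, frame_b)]

# Recombine the positive (A) beam with the negated negative (B) beam
target_flux = diff[20] - diff[44]    # recovers the full target signal
```

In practice the sky varies between exposures and the beams are extracted as 2D apertures, but the cancellation principle is the same.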
\begin{table*}
\caption{Observational Circumstances.
We show the exposure time per each arm, t$_{\rm ARM}$.}
\label{table1}
\centering
\begin{tabular}{c c c c c c c}
\hline\hline
Date$^a$ & Airmass$^b$ & Seeing$^b$ (\arcsec)&t$_{\rm UVB}$ (s)&t$_{\rm VIS}$ (s)&t$_{\rm NIR}$ (s)& Star (Airmass)\\
\hline
2009-08-14UT07:24 & 1.073-1.146 & 0.7-0.8 & $2\times1680$ & $2\times1680$ & $2\times1800$ & SA93-101 (1.113) \\
2009-09-29UT05:50 & 1.065-1.163 & 0.9-1.2 & $2\times1680$ & $2\times1680$ & $2\times1800$ & SA93-101 (1.153) \\
\hline
\end{tabular}
\smallskip
$^a$ Beginning of the observation.\\
$^b$ Minimum and maximum values during the span of the observation.
\end{table*}
\subsection{Data reduction}
The data presented here were reduced using the XS pipeline (version 0.9.4). Newer versions
of the pipeline exist, but we verified, after processing the spectra,
that they yield no significant improvement in terms of SNR; we therefore used version 0.9.4.
The pipeline accounts for flat-fielding, wavelength calibration, merging of different orders,
and further extraction of the spectra.
The whole reduction relies on calibration files taken during daytime
and on a set of static files provided as part of the pipeline release.
The data were wavelength and spatially calibrated by creating a two-dimensional wave map.
This is necessary due to the curvature of the {\it echelle} orders. The map is created
by taking images of ThAr lamps combined with pinhole masks for all three arms.
The final accuracy of the calibration is better than 1 \AA\ over
the complete spectral range.
After careful evaluation we decided not to use the one-dimensional spectra produced by the pipeline,
but instead to make our own extraction using the merged two-dimensional spectrum generated during
the process. The extraction was made in the usual way with {\tt apall.iraf}.
Once all spectra were extracted, we divided those of Eris by the corresponding star, used
as a telluric and solar analogue
(Table \ref{table1}).
After the division we removed
the remaining bad pixels from the spectra using a median filtering technique
(as in Alvarez-Candal et al. \cite{alcan07}).
This last step left us
with three separate spectra, one per arm. As there is a small overlap between their
spectral ranges, the construction of the final spectrum is straightforward.
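The median-filter rejection of residual bad pixels can be sketched as a running median that replaces strongly deviant points; the window size and rejection threshold below are illustrative choices of ours, not necessarily those of Alvarez-Candal et al. (\cite{alcan07}):

```python
# Generic sketch of median-filter despiking: compare each point with the
# running median of its neighbours and replace it when it deviates by more
# than a fractional threshold. Window and threshold are illustrative.
def despike(flux, window=5, threshold=0.2):
    half = window // 2
    cleaned = list(flux)
    for i in range(len(flux)):
        lo, hi = max(0, i - half), min(len(flux), i + half + 1)
        neighbours = sorted(flux[lo:i] + flux[i + 1:hi])  # exclude the point itself
        med = neighbours[len(neighbours) // 2]
        if abs(flux[i] - med) > threshold * abs(med):
            cleaned[i] = med          # replace the outlier with the local median
    return cleaned

spectrum = [1.0] * 20
spectrum[7] = 5.0                     # injected cosmic-ray-like spike
cleaned = despike(spectrum)           # the spike is replaced; the rest is untouched
```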
The resulting spectra, between 350 and 2350 nm, normalized to unity at 600 nm, are presented in
Fig. \ref{fig1}.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Eris_xs.ps}
\caption{Eris spectra obtained with X-Shooter.
The resolving power is about 5,000 for the spectral
range. The spectra were arbitrarily
normalized to unity at 600 nm. The spectrum of September 2009 was shifted by 1 in the
flux scale for clarity. Note that we removed parts with strong atmospheric absorption.}
\label{fig1}
\end{figure}
A few of the {\it echelle} orders were not well merged, in particular in the visible.
We checked that none of these defects removed important information (e.g., the
minimum or the edge of an absorption band) and decided to ignore them.
Two other features, at 1576 and 1583 nm, each in a different spectrum,
are artifacts introduced by the star and by a poor telluric correction, respectively.
Another feature, at 2160 nm, is due to an order badly merged during the reduction process.
Finally, the feature at about 400 nm is introduced by the star (Alvarez-Candal et al. \cite{alcan08})
and does not correspond to any compound present on Eris' surface. All these features are marked
in Figs. \ref{fig2} to \ref{fig4}.
\section{Spectroscopic Analysis}
The spectra presented in Fig. \ref{fig1} are similar throughout the complete wavelength range,
as shown by the small residuals in the ratio of the two spectra (Figs.
\ref{fig2}, \ref{fig3}, and \ref{fig4}).
Both present clear CH$_4$ absorptions from the visible up to the near infrared and
have similar spectral slopes in the visible (see first row of Table \ref{table3} below).
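A visible spectral slope such as that quoted in Table \ref{table3} is commonly measured as the slope of a linear fit to the reflectance after normalizing it at a reference wavelength (600 nm, following the normalization used in Fig. \ref{fig1}); the implementation below is a generic sketch of ours, not the exact procedure of the paper.

```python
# Generic sketch of a visible spectral-slope measurement:
# normalize the reflectance at 600 nm, fit a straight line by least
# squares, and express the slope in percent per 100 nm.
def spectral_slope(wavelengths_nm, reflectance, ref_nm=600.0):
    # normalize at the sampled point closest to the reference wavelength
    k = min(range(len(wavelengths_nm)),
            key=lambda i: abs(wavelengths_nm[i] - ref_nm))
    r = [y / reflectance[k] for y in reflectance]
    n = len(wavelengths_nm)
    mx = sum(wavelengths_nm) / n
    my = sum(r) / n
    sxx = sum((x - mx) ** 2 for x in wavelengths_nm)
    sxy = sum((x - mx) * (y - my) for x, y in zip(wavelengths_nm, r))
    return 100.0 * 100.0 * sxy / sxx   # fractional change per nm -> % per 100 nm

# synthetic featureless red continuum, reddened by 5% per 100 nm
wl = [500.0 + 10.0 * i for i in range(31)]            # 500-800 nm grid
refl = [1.0 + 0.0005 * (w - 600.0) for w in wl]
slope = spectral_slope(wl, refl)
```

On real data the fit would be restricted to continuum windows free of the CH$_4$ bands.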
\begin{figure}
\centering
\includegraphics[width=8.5cm]{ratio_uvb.ps}
\caption{
Ratio of the X-Shooter spectra obtained in the ultra-violet---blue region.
The artifacts mentioned in the text are marked with vertical dashed lines.
Normalization and offsets are the same as in Fig. \ref{fig1}. The ratio of the
two spectra is shown at the bottom, offset by $-1$ for clarity.
}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{ratio_vis.ps}
\caption{
Same as Fig. \ref{fig2}, but only showing the visible region.
}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{ratio_nir.ps}
\caption{
Same as Fig. \ref{fig2}, but showing the near-infrared region. The normalization is the same as in
Fig. \ref{fig1}, but we applied offsets of 0.5 and 1.5 in the flux scale to the August and September
2009 spectra, respectively, for clarity. The ratio of the spectra is not shown in the K region due to
low SNR.
}
\label{fig4}
\end{figure}
One interesting absorption detected on Eris spectra is located at 620 nm.
This band has been observed in all the giant planets and in Titan's spectrum
and it appears in other previously published spectra of Eris, but has not been discussed.
The band has been assigned to CH$_4$ in its gaseous (Giver \cite{giver78}) and liquid
(Ramaprasad et al. \cite{ramap78}, Patel et al. \cite{patel80}) phases, in laboratory experiments.
The band strength is $\sim1$ order of magnitude smaller than that of the 730 nm band; thus,
its appearance certainly suggests relatively long path-lengths, and likely high concentrations of CH$_4$.
Also, at first glance, these data resemble those already available in the literature, for example:
Licandro et al. (\cite{lica06b}), Alvarez-Candal et al. (\cite{alcan08}) in the visible;
Dumas et al. (\cite{dumas07}), Guilbert et al. (\cite{guilb09}), and Merlin et al. (\cite{merli09})
in the near infrared. These similarities indicate that there are no spectral differences
within the uncertainties of the data, suggesting an {\it a priori} homogeneous surface.
With this in mind, and along with the datasets mentioned above, we will use a combined spectrum
of Eris, obtained as the average of the two XS spectra, unless explicitly mentioned otherwise.
\subsection{Quantitative interpretation of the spectra}
In order to quantify the information in the Eris spectrum we have determined a number of
spectral parameters and these are listed in Table \ref{table3}.
All the measurements presented in the following sections have been made by our team.
We preferred, for the sake of keeping the data as homogeneous as possible, to measure
all quantities rather than use published values.
Along with our Eris data we present values obtained for other Eris spectra, see below, and
Pluto (column labeled $h$); these last data are from Merlin et al.
(\cite{merli10}, their April 13, 2008 spectrum).
The Eris spectra we use are: {\it visible} from
Licandro et al. (\cite{lica06b}), Alvarez-Candal et al. (\cite{alcan08}, their October 2006 spectrum),
and Abernathy et al. (\cite{abern09}, both spectra); and {\it near-infrared} from Dumas et al. (\cite{dumas07}),
Guilbert et al. (\cite{guilb09}), and Merlin et al. (\cite{merli09}, their December 2007 spectrum).
In this paper we use the shorthand term organics to mean the relatively refractory solid material
consisting of complex macromolecular carbonaceous material that is similar to the insoluble organic matter
found in most carbonaceous meteorites. This material contains both aromatic and aliphatic hydrocarbons,
amorphous carbon, and other materials of undetermined structure.
Organics of this general kind characteristically have very low albedo ($\sim0.02-0.06$)
and distinctive red color in the spectral region $300-1000$ nm.
We measured the spectral slope, which provides an indication of the presence of organic materials.
To compute it we fit the continuum with a linear function between 600 and 800 nm,
ignoring absorption features. The process was repeated many times, each time randomly
removing points from the dataset. The average value was adopted as the spectral slope and the
standard deviation as the error. The results are shown in the first row of Table \ref{table3}.
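As a minimal sketch of this procedure (an illustrative reimplementation, not the authors' code: the 600--800 nm window is from the text, while the normalization at 700 nm, the number of trials, and the fraction of points dropped per trial are our assumptions):

```python
import random
import statistics

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def spectral_slope(wave_nm, refl, n_trials=500, drop_frac=0.2, seed=1):
    """Slope in % (100 nm)^-1: repeat the linear continuum fit between
    600 and 800 nm, each time randomly dropping a fraction of the
    points; return (mean slope, standard deviation)."""
    rng = random.Random(seed)
    pts = [(w, r) for w, r in zip(wave_nm, refl) if 600.0 <= w <= 800.0]
    slopes = []
    for _ in range(n_trials):
        sub = [p for p in pts if rng.random() > drop_frac]
        if len(sub) < 3:
            continue
        x, y = zip(*sub)
        a, b = linear_fit(x, y)
        f700 = a + b * 700.0          # normalize at 700 nm (our assumption)
        slopes.append(100.0 * 100.0 * b / f700)
    return statistics.mean(slopes), statistics.stdev(slopes)
```

On a noiseless synthetic spectrum with a built-in 3 \% $(100~{\rm nm})^{-1}$ slope, the routine recovers the input value with a near-zero scatter.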
\begin{table*}[]
\caption{Comparison of spectral parameters: Spectral slope in the visible, $S'_{v}$, in \% $(100~{\rm nm})^{-1}$,
and percentile depth of the absorption feature, D$_{\lambda}$ at a given wavelength $\lambda$.
}\label{table3}
\centering
\begin{tabular}{r c c c c c c c c}
\hline\hline
Band (nm)& $a$ (\%) & $b$ (\%) & $c$ (\%) & $d$ (\%) & $e$ (\%) & $f$ (\%) & $g$ (\%) & $h$ (\%) \\
\hline
$S'_{v}$ &$ 3.8\pm1.1$& (\ldots) &$ 3.4\pm0.5$&$ 2.9\pm0.6$& (\ldots) & (\ldots) & (\ldots) &$12.5\pm1.2$\\
\hline
620 &$ 5.3\pm1.0$& (\ldots) &$ 3.9\pm3.7$&$ 4.7\pm1.0$& (\ldots) & (\ldots) & (\ldots) & Undet. \\
730 &$15.8\pm1.8$&$15.0\pm4.6$&$15.7\pm5.2$&$15.1\pm2.0$& (\ldots) & (\ldots) & (\ldots) &$ 5.6\pm1.1$\\
790 &$ 9.7\pm0.7$&$ 8.1\pm3.6$&$ 8.8\pm3.3$&$ 7.2\pm0.8$& (\ldots) & (\ldots) & (\ldots) & Undet. \\
840 &$ 5.9\pm1.4$& Undet. &$ 8.2\pm5.4$&$ 5.4\pm1.1$& (\ldots) & (\ldots) & (\ldots) & $2.7\pm0.9$\\
870 &$13.2\pm1.8$&$15.1\pm1.4$&$19.0\pm7.5$&$14.1\pm1.1$& (\ldots) & (\ldots) & (\ldots) &$ 3.8\pm3.3$\\
890 &$46.3\pm1.4$&$35.1\pm7.6$&$55.9\pm8.8$&$41.8\pm3.9$& (\ldots) & (\ldots) & (\ldots) &$16.2\pm1.1$\\
1000 &$20.2\pm8.3$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1015 &$31.2\pm4.3$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1135 &$39.3\pm2.1$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1160 &$54.5\pm5.0$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1190 &$34.4\pm2.0$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1240 &$ 9.0\pm0.9$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1335 &$72.2\pm2.6$& (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1485 & (\ldots) & (\ldots) & (\ldots) & (\ldots) &$18.9\pm 5.8$&$32.1\pm 1.7$&$31.1\pm0.9$&$12.9\pm0.9$\\
1670 &$64.4\pm7.3$& (\ldots) & (\ldots) & (\ldots) &$61.2\pm 9.5$&$77.7\pm 7.5$&$82.1\pm8.3$&$59.1\pm2.9$\\
1690 &$17.4\pm4.8$& (\ldots) & (\ldots) & (\ldots) &$15.5\pm 6.1$&$16.3\pm 6.8$&$14.6\pm2.2$&$ 5.3\pm0.2$\\
1720 &$66.2\pm7.3$& (\ldots) & (\ldots) & (\ldots) &$53.6\pm10.9$&$68.3\pm 5.4$&$59.2\pm3.0$&$50.2\pm1.2$\\
1800 & Undet. & (\ldots) & (\ldots) & (\ldots) &$66.8\pm 4.8$&$72.4\pm13.7$&$70.3\pm3.4$&$56.0\pm0.6$\\
2200 & Undet. & (\ldots) & (\ldots) & (\ldots) &$89.5\pm29.2$&$81.1\pm18.1$&$60.4\pm8.4$&$50.7\pm1.1$\\
\hline
\end{tabular}
\smallskip
Undet. = Undetermined ; References:
($a$) This work;
($b$) Abernathy et al. (\cite{abern09});
($c$) Licandro et al. (\cite{lica06b});
($d$) Alvarez-Candal et al. (\cite{alcan08});
($e$) Guilbert et al. (\cite{guilb09});
($f$) Merlin et al. (\cite{merli09});
($g$) Dumas et al. (\cite{dumas07});
($h$) Merlin et al. (\cite{merli10}), Pluto
\end{table*}
The depth of an absorption feature is computed following $$D_{\lambda}[\%]=(1-f_{\lambda})\times100,$$
where $f_{\lambda}$ is the normalized reflectance at the wavelength $\lambda$.
This formula gives the percentile value of the absorption depth of a given band and
information about the quantity of the absorbing material, as well as a possible
indication of the path-length traveled by the light.
To compute it we first normalized the band by fitting a linear function to the band's borders and dividing
the band by that function. Note that this does not change the position of the minimum, while making it easier
to measure $D$ for multiple, overlapping bands (such as the 1690 nm one). The values
are reported in Table \ref{table3}. All values for Eris are compatible to within the errors.
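A minimal sketch of this band-depth measurement (our illustration, not the authors' code; the continuum anchors `left` and `right` stand for the band borders mentioned above):

```python
def interp(wave, refl, w):
    """Linear interpolation of the spectrum at wavelength w."""
    for i in range(len(wave) - 1):
        if wave[i] <= w <= wave[i + 1]:
            t = (w - wave[i]) / (wave[i + 1] - wave[i])
            return refl[i] + t * (refl[i + 1] - refl[i])
    raise ValueError("wavelength outside spectral range")

def band_depth(wave, refl, left, right):
    """Percentile depth D = (1 - f_min) * 100 of the band between the
    continuum anchors, after dividing the band by the straight line
    joining the border reflectances (this does not move the minimum)."""
    rl, rr = interp(wave, refl, left), interp(wave, refl, right)
    f_min = min(
        r / (rl + (w - left) * (rr - rl) / (right - left))
        for w, r in zip(wave, refl) if left <= w <= right
    )
    return (1.0 - f_min) * 100.0
```

For a band dipping to a normalized reflectance of 0.842 under a flat continuum, this returns $D = 15.8$ \%.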
Information about the physical state of the near-surface ices was obtained by measuring band
shifts of the Eris data, in several small spectral windows, with respect to
synthetic spectra calculated
using Hapke models describing the light reflected from particulate surfaces. We used the optical
constants of pure CH$_4$ at 30 K (Grundy et al. \cite{grund02}), the most likely temperature at Eris'
surface (J. Cook, personal communication), to model the Eris spectrum, and evaluated the goodness
of the fit using a $\chi^2$ criterion.
Because the CH$_4$ bands at different wavelengths are caused by differing path-lengths,
we used two different grain sizes as free parameters in the models. The grain sizes range
from tens of micrometers to the scale of meters (for the weakest bands).
Using such a large range of particle sizes
was necessary because we are only using pure CH$_4$; a more complete modeling
(beyond the scope of this article), including
neutral darkening and reddening material, would likely result in smaller particle sizes.
The spectral model uses only the optical constants
of CH$_4$ ice from Grundy et al. (\cite{grund02}) at full resolution.
The optical constants were shifted from $-2$ to $+1$ nm at 0.1 nm intervals.
At each spectral shift, a best fit model spectrum is derived and $\chi^2$ is evaluated. The $\chi^2$
measurements are then fit to a quadratic curve to estimate the best fit shift.
To evaluate the error, we
calculated a correlation matrix around the minimum and added quadratically the uncertainty
of the wavelength calibration, as mentioned in the corresponding works:
1 \AA\ for this work's data and Licandro et al. (\cite{lica06b}),
0.3 \AA\ for Abernathy et al. (\cite{abern09}),
and 4 \AA\ for all the rest, as estimated by Merlin et al. (\cite{merli09}).
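The shift-estimation step can be sketched as follows: a simplified three-point version of the quadratic refinement described above, assuming a uniform shift grid such as the 0.1 nm stepping; the grid values in the test are hypothetical, and the quadrature combination mirrors how the calibration uncertainty is added:

```python
import math

def quad_min(shifts, chi2):
    """Refine the chi^2 minimum: take the grid minimum and its two
    neighbours and return the vertex of the parabola through them
    (assumes a uniform grid, as in the 0.1 nm stepping above)."""
    i = min(range(1, len(shifts) - 1), key=lambda k: chi2[k])
    h = shifts[i] - shifts[i - 1]
    c0, c1, c2 = chi2[i - 1], chi2[i], chi2[i + 1]
    return shifts[i] + 0.5 * h * (c0 - c2) / (c0 - 2.0 * c1 + c2)

def total_shift_error(fit_err, calib_err):
    """Combine the fit uncertainty with the wavelength-calibration
    uncertainty in quadrature."""
    return math.sqrt(fit_err ** 2 + calib_err ** 2)
```

For instance, `total_shift_error` combines a 3 \AA\ fit error with a 4 \AA\ calibration error into a total of 5 \AA.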
\begin{table*}
\caption{Shifts of CH$_4$ bands, with respect to the spectrum of pure CH$_4$ at 30 K.
}
\label{table2}
\centering
\begin{tabular}{r c c c c c c c c }
\hline\hline
Band (nm) &$a$ (\AA) & $b$ (\AA) & $c$ (\AA) & $d$ (\AA) & $e$ (\AA) & $f$ (\AA) & $g$ (\AA) & $h$ (\AA) \\
\hline
730 &$ -4.1\pm1.3$ &$-13.9\pm1.1$&$ -0.3\pm1.6$&$-8.0\pm4.2$& (\ldots) & (\ldots) & (\ldots) &$-19.7\pm4.2$\\
790 &$ -0.2\pm1.3$ &$ -6.0\pm1.0$&$ -4.3\pm1.7$&$+0.9\pm4.1$& (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
840 &$ +3.9\pm1.6$ & (\ldots) &$ -5.9\pm3.4$&$-3.3\pm4.5$& (\ldots) & (\ldots) & (\ldots) &$ -9.5\pm5.4$\\
870 &$ -3.3\pm2.0$ &$ -5.9\pm1.1$&$ -8.7\pm2.8$&$-3.0\pm5.0$& (\ldots) & (\ldots) & (\ldots) &$-21.0\pm5.2$\\
890 &$ -4.2\pm1.3$ &$ -4.7\pm1.0$&$-12.4\pm1.5$&$-8.1\pm4.3$& (\ldots) & (\ldots) & (\ldots) &$-17.6\pm4.2$\\
1000 &$ -2.0\pm1.6$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1015 &$ -2.8\pm3.3$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1135 &$ -5.6\pm1.6$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1160 &$ -4.5\pm1.5$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1190 &$ -5.4\pm2.0$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1240 &$ -1.4\pm1.8$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1335 &$ -6.7\pm2.1$ & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) & (\ldots) \\
1670 &$ -9.4\pm1.9$ & (\ldots) & (\ldots) & (\ldots) &$-1.7\pm5.2$&$-0.3\pm5.0$&$ -7.7\pm4.5$ &$-26.3\pm4.3$\\
1690 &$ -7.1\pm3.2$ & (\ldots) & (\ldots) & (\ldots) &$-6.8\pm5.6$&$-1.4\pm5.1$&$ -5.5\pm5.0$ &$ +2.6\pm5.3$\\
1720 &$-10.5\pm2.2$ & (\ldots) & (\ldots) & (\ldots) &$+3.5\pm5.4$&$+3.7\pm4.7$&$ -8.4\pm4.7$ &$-24.8\pm4.2$\\
1800 & (\ldots) & (\ldots) & (\ldots) & (\ldots) &$-3.5\pm4.3$& Undet. &$-10.6\pm4.5$ &$-23.4\pm4.1$\\
2200 & (\ldots) & (\ldots) & (\ldots) & (\ldots) & Undet. & Undet. &$ -4.3\pm6.9$ &$-34.9\pm4.2$\\
\hline
\end{tabular}
\smallskip
Undet. = Undetermined ; References:
($a$) This work;
($b$) Abernathy et al. (\cite{abern09});
($c$) Licandro et al. (\cite{lica06b});
($d$) Alvarez-Candal et al. (\cite{alcan08});
($e$) Guilbert et al. (\cite{guilb09});
($f$) Merlin et al. (\cite{merli09});
($g$) Dumas et al. (\cite{dumas07});
($h$) Merlin et al. (\cite{merli10}), Pluto
\end{table*}
Table \ref{table2} lists the measured shifts in wavelength, and Fig. \ref{fig6}
illustrates the relationship between shifts and wavelength.
Eris observations indicate an average
blue shift of $\sim5$ \AA, while Pluto's is $\sim20$ \AA.
Note that, considering all the available datasets, no apparent relation seems to exist. Nevertheless,
if we consider one isolated spectrum, like XS for instance, it is possible to see that the shifts
increase towards longer wavelengths (black diamonds in Fig. \ref{fig6}),
as shown by Tegler et al. (\cite{tegle10}) and Licandro et al. (\cite{lica06b},
light-brown triangles), albeit derived from smaller datasets, while
other spectra do not show any tendency, unless subsets of data are used.
Licandro et al.'s band at 890 nm has a blue shift comparable to that measured for Pluto.
There are four Eris absorption bands which
show redshifts: Alvarez-Candal et al. (\cite{alcan08}) at 790 nm, this work's spectrum at 840 nm,
and the spectra of Guilbert et al. (\cite{guilb09}) and Merlin et al. (\cite{merli09}) at 1720 nm;
nevertheless, all of them are within three sigma of a null shift.
The 1690 nm absorption feature, which only appears in pure CH$_4$, also shows non-zero
wavelength shifts; this might be due to a temperature difference between the models
and the actual ice on Eris' surface. In any case, the shift is always within three
sigma of the laboratory position.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{shifts_xshoo.ps}
\caption{Wavelength shifts for the spectra analyzed in this work.}
\label{fig6}
\end{figure}
As mentioned above, we used only pure CH$_4$ at 30 K in the models. Methane ice at this
temperature is at its phase I. During the modeling, we determined that some bands could not be
fit with pure CH$_4$(I), but could be better explained by a mix of CH$_4$(I) and CH$_4$(II).
This phase of methane ice, CH$_4$(II), occurs at temperatures below 20.4 K. The detailed results and implications are
beyond the scope of the current report.
Using all the CH$_4$ bands in the Eris data and comparison with similar bands in data reported for
Pluto (Merlin et al. \cite{merli10}), we conclude that at least some of the CH$_4$ on Eris appears to
be diluted in another
material. By analogy to Pluto we assume N$_2$ is the main diluting agent.
\subsection{Qualitative interpretation of the spectra}
Other than the overall blue shift of the bands discussed in the previous section, at first glance
there does not appear to be any systematic behavior of the individual bands analyzed.
Licandro et al. (\cite{lica06b}) suggested that different measured
shifts indicated different levels of dilution of CH$_4$ in a N$_2$ matrix:
pure CH$_4$ at the bottom, with the CH$_4$ concentration decreasing towards the surface.
The proposed mechanism is based on the fact that N$_2$ condenses at a lower temperature than
CH$_4$. Therefore, while approaching aphelion CH$_4$ condensed first while N$_2$ was still in
its gaseous phase. As Eris gets farther away from the Sun, its surface temperature decreases
and N$_2$ starts condensing as well, thus the mixing of CH$_4$ in N$_2$ increases.
This was later supported by Merlin et al. (\cite{merli09}), who also added a
top layer of pure CH$_4$, as a result of the null shift they measured in near-infrared bands.
In contrast, Abernathy et al. (\cite{abern09}) measured blue shifts for five selected absorptions
and identified a
correlation between the shift and the geometric albedo at the wavelength of maximum band absorption,
opposite to the behavior reported by Licandro et al. (\cite{lica06b}). They proposed the difference
was due to a heterogeneous surface.
More recent data presented by Tegler et al. (\cite{tegle10}),
on Pluto and Eris, suggests that this stratigraphic analysis of the data has overlooked the
fact that the blue-shifts increase as CH$_4$ becomes more diluted, i.e., less abundant, in N$_2$.
Our results support Abernathy et al.'s (\cite{abern09}) suggestion that Eris' surface is heterogeneous.
Considering all the data sets, there is no apparent correlation between the shift of central
wavelengths of bands and the supposed depths where they form (deeper bands closer to the
surface, shallower bands deeper on the surface, Fig. \ref{fig7}).
The XS data alone shows increasing blue shifts towards longer wavelength (shallower depths),
which supports the condensation mechanism proposed in Licandro et al. (\cite{lica06b}),
but using all the data we cannot confirm this hypothesis. In conclusion,
any stratigraphic analysis should be regarded as local, representing the part of the surface
observed during the exposure.
\subsection{Comparison between X-Shooter spectra}
Methane ice is optically very active. It has four fundamental vibrational modes
in the infrared between 3000 and 8000 nm, but all of them lead to a huge number of overtones and
combinations, which can be observed with progressively diminishing absorbances into the
near-infrared and visible wavelength range (Grundy et al. \cite{grund02}).
We can see several of these bands in our two XS spectra. Although both spectra look very similar and
display the same bands, some of them display subtle differences as can be seen in Figs. \ref{fig2},
\ref{fig3}, and \ref{fig4} (up to 1800 nm where the noise in the spectra does not allow detailed
comparisons).
There are several factors that can be responsible for these differences. In pure methane ice,
differences in the optical path-lengths followed by the scattered light result in different depths
and widths for the bands. Moreover, as investigated by Grundy et al. (\cite{grund02}),
temperature changes in the ice produce slight differences in the central peaks of these absorption bands.
The situation is a bit different when CH$_4$ is diluted in other ices, as it could be for Eris, and
the shape of the bands is influenced by the concentration of CH$_4$ in the matrix.
According to the classic treatment of molecules trapped in matrices, the guest CH$_4$ molecules can
exist as isolated molecules or as clusters of various sizes in the nitrogen matrix (dimer, trimer, etc).
Quirico and Schmitt (\cite{quir97a}) made a detailed study for the case of methane diluted in nitrogen.
In the specific case of an isolated CH$_4$ molecule (i.e., which has only N$_2$ molecules as first neighbors),
the motion of the molecule is a slightly hindered rotation, thus, the CH$_4$ bands show a characteristic
fine structure that can be seen in comparison with the shape of these bands for pure ice. However, for a
cluster, the different CH$_4$ molecules interact with each other, the molecular rotational motion is
perturbed or no longer exists, and the profile of a band can be strongly modified with respect to that of
the isolated molecule (monomer). Consequently, a CH$_4$ absorption band can be the sum of several different bands
corresponding to the monomer and other various clusters all present in the sample.
Its dilution alters the shape and central frequency of the bands.
\subsection{Comparison with Pluto}
The immediate comparison for Eris is Pluto: both objects present CH$_4$-dominated spectra and have similar sizes.
Pluto's surface is made of N$_2$ (the major constituent of the surface), CH$_4$, and CO ices.
Eris' content of pure CH$_4$ seems higher than Pluto's,
as witnessed by the deeper band at 1690 nm (and also the rest) and the smaller spectral
shifts of the absorption bands when compared to pure CH$_4$.
Eris' lower visible spectral slope indicates a lower content of organics than on Pluto.
The other ices, N$_2$ and CO, were observed and identified on Pluto by absorption bands at
2150 and 2350 nm, respectively. Thus we would expect to find them on Eris.
Unfortunately, due to the low SNR in the K region of our spectra, we did not find any of them.
Note that an overtone of CO is observed on Pluto at 1579 nm (Dout\'e et al. \cite{doute99}) that
we do not detect. The SNR of our spectra in that region is about 15, while the spectral
resolution is more than five times higher than that reported by Dout\'e et al. (750); therefore,
if the CO content and physical-chemical properties were similar on Eris, that absorption should
have been detected.
Even if not directly detected, N$_2$ can be inferred on Eris' surface. The shifts measured above are evidence
of the mixture of CH$_4$ with another ice. On Pluto that ice is N$_2$, and probably CO.
The measured blue-shifts for Eris are not as large as those measured for Pluto,
possibly indicating a lower dilution level of CH$_4$ in N$_2$ than for Pluto.
\section{Discussion}
Comparing the different datasets of Eris we do not find any evidence of major heterogeneities on its surface.
For instance, the values of spectral slopes and depth of absorption bands presented in
Table \ref{table3} are all compatible within the uncertainties.
As a comparison, Pluto's surface has variations in albedo of up to 35 \% (Stern et al. \cite{stern97}).
The visible slope is indicative of the presence of organics, i.e., the redder the slope, the larger
the amount of organics and the lower the albedo in the visible.
In the case of Eris all slopes have a value close to 3 \% $(100~{\rm nm})^{-1}$, with error bars in the range of 1 \% $(100~{\rm nm})^{-1}$\
(see Table \ref{table3}), smaller than that for Pluto (12 \% $(100~{\rm nm})^{-1}$), thus pointing to a larger fraction of
organics on Pluto than on Eris. This is compatible with a higher albedo
in the visible for Eris (over 0.7, e.g., Stansberry et al. \cite{stans08})
compared to Pluto (averaging 0.6, Young et al. \cite{young01}).
One mechanism to form organic material is the polymerization of carbon bearing molecules, such as CH$_4$.
Laboratory experiments show that long-term irradiation of astrophysical ice mixtures (with the presence
of simple hydrocarbons and nitrogen) results in the selective loss of hydrogen and the formation of an
irradiation mantle of carbon residues (Moore et al. \cite{moore83}; Johnson et al \cite{johns84};
Strazzulla et al. \cite{straz91}).
They also show how as the radiation dose increases, more complex polymers, poor in hydrogen content, form
and an initially neutral-colored and high-albedo ice becomes red as the albedo in the visible decreases. Further
irradiation gradually reduces the albedo at all wavelengths and, finally, the material becomes very dark,
neutral in color, and spectrally featureless (Andronico et al. \cite{andro87}; Thompson et al. \cite{thomp87}).
Since
TNOs are exposed to ultraviolet solar radiation, the solar wind, and the flux
of galactic cosmic-rays, it is reasonable to expect that an irradiation mantle with a red color
could be formed on their surfaces. A
continuous deposition of fresh volatiles, via sublimation and condensation, acts to keep a
neutral-colored surface with high albedo.
Assuming the lower visible spectral slope of Eris is due purely to a lower content of organics, then the
CH$_4$ on its surface is either younger than that of Pluto, or it is somehow protected against polymerization.
One possible explanation would be the recent collapse of Eris' atmosphere, which covered its surface, masking the
organics below.
Schaller and Brown (\cite{schal07}) showed, using a simple model of loss of volatiles
as a function of the surface temperature and size of the objects, that Eris, as well
as Pluto, should have retained CH$_4$, N$_2$, and CO, therefore we expect to detect them. Methane
is easily seen, while N$_2$ can be inferred from the wavelength shifts measured in the visible.
The case of CO is more complicated: as mentioned above, we do not have any detection. But,
building on {\it (i)} the Schaller and Brown (\cite{schal07}) results, {\it (ii)} the fact that N$_2$ and CO are
very similar molecules and therefore likely to co-exist (Scott \cite{scott76}), and {\it (iii)} the observation
of CO on Pluto, CO should be present on Eris' surface. We discuss below the possible cause
of the lack of a direct detection of either N$_2$ or CO.
At a more subtle level, that of the wavelength shifts of absorption features,
we find evidence of variable shifts, both along a single spectrum and when comparing the
same band in different spectra, which indicates changing mixing ratios of CH$_4$ in another ice,
probably N$_2$ and/or CO. Different spectra sample different regions on Eris, so,
assuming that a given absorption is always produced at the same depth, then
most of the layers sampled by the spectra show evidence of heterogeneity (Fig. \ref{fig7}).
Therefore, the surface of Eris is covered by a heterogeneous mix of ices:
patches of pure CH$_4$ mixed with a dilution of CH$_4$ in N$_2$ and/or CO.
In both cases, the CH$_4$ is so active optically that its features dominate the appearance of the spectrum.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{shifts_d.ps}
\caption{Measured wavelength shifts {\it vs.} averaged D$_{\lambda}$. The averaged D$_{\lambda}$\ is
the average value as obtained from the Eris entries in Table \ref{table3}, the error bar is
just the standard deviation around the average. D$_{\lambda}$\ is used as a proxy of the real depth in the surface,
as indicated by the legends at the top of the figure. Note that, with the exception of the XS spectrum,
filled symbols are used for absorption bands in the visible and open ones for bands in the near
infrared ($\lambda>1000$ nm).}
\label{fig7}
\end{figure}
Figure \ref{fig7} was constructed combining the information contained in Tables \ref{table3} and \ref{table2}.
As mentioned above, each absorption band maps a determined depth on Eris' surface, therefore,
we can picture it as composed of layers. For simplicity
we adopt an average value of D$_{\lambda}$, computed from every Eris entry, as a proxy of real depth: topmost
layers (shorter path-lengths) have larger values of D$_{\lambda}$, while deeper layers (large path-lengths) have smaller
values of D$_{\lambda}$.
Given all of the data presented, Fig.~\ref{fig7} shows
variability between datasets, suggesting that Eris' surface is
heterogeneous. If we consider individual spectra,
or restricted datasets, relations appear. If we concentrate
only on the XS data (black diamonds), the topmost layers show on average the larger blue shifts,
and probably a higher level of dilution of CH$_4$ in N$_2$ (Brunetto et al. \cite{brune08}),
while the deepest layers mapped by the
spectra show, on average, purer CH$_4$ (smaller shifts).
It is important to keep in mind that the relationship between the depth of an absorption band and the path-lengths
traveled by a photon is not a direct one: multiple scattering could account for large path-lengths without the need
to traverse large physical depths.
The uncertainty in Eris' rotational period unfortunately precludes any attempt at rotationally resolved
spectroscopy with our current dataset. Considering the Roe et al. (\cite{roe08}) measurement of
$1.08\pm0.02$ days for the rotation of Eris and the 48-day interval between our spectra,
the indetermination on the rotational
phase adds up to about one day, i.e., almost one rotational period.\\
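The quoted phase indetermination follows from simple error propagation: over an interval $T$ the number of rotations is $N = T/P$, so $\sigma_N = T\,\sigma_P/P^2$. A one-line check with the numbers in the text:

```python
def phase_uncertainty(interval_days, period_days, period_err_days):
    """Uncertainty, in rotations, of the phase accumulated over the
    interval: N = T / P, hence sigma_N = T * sigma_P / P**2."""
    return interval_days * period_err_days / period_days ** 2
```

With $T=48$ d and $P=1.08\pm0.02$ d this gives about 0.82 rotations, i.e., almost one rotational period.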
Why was it not possible to directly detect either N$_2$ or CO in the Eris spectra?
At the temperature predominating at Eris' orbit (less than 36 K), N$_2$ should be in
its $\alpha$ state and therefore show an absorption band at 2160 nm, the only band covered by our K region spectra.
However, this band is
too narrow for the resolution of the spectra and the SNR is insufficient.
Previous spectra of Eris in the near-infrared (Brown et al. \cite{brown05},
Dumas et al. \cite{dumas07}, Guilbert et al. \cite{guilb09},
Merlin et al. \cite{merli09}) have had better SNR than our XS spectra in the K region,
and resolving power sufficient to detect $\beta$N$_2$, but
have failed to detect it. One possibility is that the majority of the N$_2$,
if present on Eris' surface, is in the $\alpha$ state.
Quirico and Schmitt (\cite{quir97b}) showed that CO diluted in $\alpha$N$_2$ has a narrow transition band.
Its $0-2$ transition, at 2352 nm, is the strongest one and should appear in our data; unfortunately, the low SNR
in that region of our spectra does not allow us to impose any constraint on it. However,
as mentioned in Sec. 3.3, its $0-3$ transition at 1579 nm is in a
region with good SNR, but we do not detect it. According to the quoted experimental setup in
Quirico and Schmitt's work, this transition would need a resolving power of over 12,000 to be observable,
if mixed with $\alpha$N$_2$. This resolving power is about
twice that of XS, and the band is therefore undetectable with our observational setup.
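The resolution argument amounts to comparing resolution elements, $\Delta\lambda = \lambda/R$ (a sketch; the XS resolving power of $\sim5{,}000$ is the value quoted in the Conclusions):

```python
def resolution_element_nm(wavelength_nm, resolving_power):
    """Smallest resolvable wavelength element: d_lambda = lambda / R."""
    return wavelength_nm / resolving_power
```

At 1579 nm, $R = 12{,}000$ corresponds to $\approx0.13$ nm (1.3 \AA), while $R = 5{,}000$ only resolves $\approx0.32$ nm, so a band narrower than its resolution element is washed out.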
To detect the features at 1579 and 2352 nm of CO in a dilution with N$_2$ in $\alpha$-state,
we would need to have resolving power of more than 10000, which could be achieved with XS
using its smallest slit width, 0.4\arcsec.
Given a similar setup, and that Eris has $m_v < 17$, we would need several hours
to gather enough signal to resolve the bands.
The alternative is to wait until Eris enters a region where the surface temperature rises
above that of the phase transition from $\alpha$N$_2$ to $\beta$N$_2$. The features then get wider and
easier to resolve at lower resolution; on the other hand, they also decrease in intensity, but a proper
rebinning of the spectrum should be enough to avoid increasing the exposure times beyond practicality.
However, this will not happen until the year 2166.
\section{Conclusions}
Using X-Shooter, a new instrument at the ESO-Very Large Telescope, we obtained
two new spectra of Eris. The spectra were obtained at high resolving power, $\lambda/\Delta\lambda\sim5,000$,
covering the 300 to 2480 nm range at once. We compared these datasets with those available
in the literature and with a dataset of Pluto.\\
\noindent
The main results are summarized below:
\begin{itemize}
\item The deeper CH$_4$ absorption bands on Eris indicate a higher content of CH$_4$ than on Pluto.
This interpretation is also supported by the smaller shift of the CH$_4$ features, from the positions of
pure CH$_4$, when compared to Pluto.
\item Neither N$_2$ nor CO is directly detected in our spectra, whereas both have been reported for Pluto.
\item CH$_4$ is probably diluted in another ice, likely N$_2$ and/or CO. This can be inferred from the
systematic wavelength shift of the CH$_4$ absorption bands in the Eris spectra from all individual observations.
\item We do not see major differences between the Eris spectra, indicating that there are no
large heterogeneities on the regions of its surface sampled by the two observations.
\item We do observe indications of heterogeneity at a more subtle level. Central wavelengths of
individual CH$_4$ absorption bands have different shifts when independent spectra of Eris are
compared. Considering only the XS data for Eris, there is an indication of an increasing blue
shift with increasing wavelength. This nicely illustrates the advantage of the XS design for
simultaneously obtaining data over a broad wavelength range.
\end{itemize}
\vspace{0.5cm}
\begin{acknowledgements}
We would like to thank the X-Shooter team who made it possible that we have these data to work on.
NPA wants to acknowledge the support from NASA Postdoctoral Program administered by Oak Ridge
Associated Universities through a contract with NASA.
JL gratefully acknowledges support from the Spanish ``Ministerio de Ciencia e Innovaci\'on''
project AYA2008-06202-C03-02. JC acknowledges support of the NPP program.
Also we thank C. Dumas and F. Merlin who kindly made their (reduced) data available to us,
and an anonymous referee for the comments that helped to improve the quality of the
manuscript.
\end{acknowledgements}
\section{Introduction}
As we know, for a given matrix $A\in \mathbb{C}^{n \times n}$, the computation of its eigenvalues and eigenvectors is easy when its dimension is small, but it becomes difficult when the dimension of the matrix is large. For matrices of large dimension, many researchers have contributed different numerical methods to compute their eigenvalues and eigenvectors; as examples, we cite the QR method (see \cite{kubla, fran}), the power method \cite{muntz}, and the Sturm sequences method, which can be found in the book of Quarteroni {\em et al.} \cite{quart}; for a real symmetric matrix one usually uses the Jacobi method \cite{jacobi}.\\
This paper deals with the computation of the eigenvalues and eigenvectors of a real symmetric matrix $A\in \mathbb{R}^{n \times n}$ by replacing the Givens matrix used in the Jacobi method with another orthogonal matrix, i.e., we will replace the matrix:
\[ G = \begin{pmatrix}
1 & 0 & & & &0\\
0 & \ddots & & & & \\
& & \cos(\theta) & -\sin(\theta)& & \\
& & \sin(\theta) & \cos(\theta)& & \\
& & & &\ddots&\\
0& & & & &1
\end{pmatrix} \]
by the following matrix $H$:
\[ H = \begin{pmatrix}
1 & 0 & & & &0\\
0 & \ddots & & & & \\
& & \sqrt{x+\delta} & -\sqrt{-x-\delta+1}& & \\
& & \sqrt{-x-\delta+1} &\sqrt{x+\delta} & & \\
& & & &\ddots&\\
0& & & & &1
\end{pmatrix} \]
such that \quad $\delta \in \mathbb{R}$ \quad and \quad $-\delta \leq x \leq 1-\delta$.
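A quick numerical check (ours, not part of the derivation) confirms that $H$ is orthogonal for every admissible $x$: since $(x+\delta)+(-x-\delta+1)=1$, the non-trivial $2\times2$ block has unit, mutually orthogonal columns, exactly like the Givens block with $\cos\theta=\sqrt{x+\delta}$:

```python
import math

def h_block(x, delta=0.5):
    """Non-trivial 2x2 block of H for -delta <= x <= 1 - delta."""
    c = math.sqrt(x + delta)
    s = math.sqrt(1.0 - x - delta)
    return [[c, -s], [s, c]]

def is_orthogonal(m, tol=1e-12):
    """Check m^T m = I for a 2x2 matrix m."""
    (a, b), (c, d) = m
    return (abs(a * a + c * c - 1.0) < tol
            and abs(b * b + d * d - 1.0) < tol
            and abs(a * b + c * d) < tol)
```

The check passes for any sample of admissible values of $x$, including the endpoints $x=-\delta$ and $x=1-\delta$.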
\vspace{0.5cm}
The paper is organized as follows: in Section $2$ we present our main results, and in Section $3$ we give a MATLAB program for this new method. In the remainder of this paper, and without loss of generality, we choose $\delta=\frac{1}{2}$.\\
\section{The computation of eigenvalues and eigenvectors}
\vspace{0.3cm}
In this section, in order to find the eigenvalues and eigenvectors of a real symmetric matrix $A\in \mathbb{R}^{n \times n}$, we repeat the procedure of the Jacobi method using the matrix $H$ introduced above, and we give all the steps of the computation. \\
Let $A\in \mathbb{R}^{n \times n}$ be a real symmetric matrix of dimension $n$ with coefficients $(a_{ij})_{1 \leq i,j \leq n}$.\\
The Jacobi method \cite{quart} is an iterative method which builds a sequence of matrices $A^{(k)}$, orthogonally similar to $A$, such that at the $k$-th iteration we have
\begin{center}
$A^{(k)}=H_{pq}^{T} A^{(k-1)} H_{pq},\quad (A^{(0)}=A)$
\end{center}
where $a^{(k)}_{pq}=0$, and the sequence $A^{(k)}$ converges to a diagonal matrix whose entries are the eigenvalues. \\
\vspace{0.2cm}
For the $2\times 2$ block of $A^{(k-1)}$ in rows and columns $p$ and $q$, we find
\[
\mathbf{\begin{pmatrix}
a^{(k)}_{pp} & a^{(k)}_{pq} \\ a^{(k)}_{pq}& a^{(k)}_{qq} \end{pmatrix}} = \begin{pmatrix}
\sqrt{x+\frac{1}{2}} & \sqrt{-x+\frac{1}{2}} \\ -\sqrt{-x+\frac{1}{2}} & \sqrt{x+\frac{1}{2}} \end{pmatrix}
\begin{pmatrix}
a^{(k-1)}_{pp} & a^{(k-1)}_{pq} \\ a^{(k-1)}_{pq}& a^{(k-1)}_{qq} \end{pmatrix}
\begin{pmatrix}
\sqrt{x+\frac{1}{2}} & -\sqrt{-x+\frac{1}{2}} \\\sqrt{-x+\frac{1}{2}} & \sqrt{x+\frac{1}{2}} \end{pmatrix}
\]
\[
= \begin{pmatrix}
a_{pp}(x+\frac{1}{2})+2a_{pq}\sqrt{-x^{2}+\frac{1}{4}}+(-x+\frac{1}{2})a_{qq}* & (-a_{pp} +a_{qq})\sqrt{-x^{2}+\frac{1}{4}}+2a_{pq}x \\ (-a_{pp}+a_{qq})\sqrt{-x^{2}+\frac{1}{4}}+2a_{pq}x & a_{pp}(-x+\frac{1}{2})+2a_{pq}\sqrt{-x^{2}+\frac{1}{4}}+(x+\frac{1}{2})a_{qq}** \end{pmatrix}
\]
where $1 \leq p < q \leq n$\\
Now, when we solve the equation
\begin{equation}\label{e1}
(a_{qq}-a_{pp})\sqrt{-x^{2}+\frac{1}{4}}+2a_{pq}x=0
\end{equation}
we find that
\begin{equation*}
x_{0}=\pm \frac{|a_{qq}-a_{pp}|}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
i.e., if $a_{pq} >0$, we have
\begin{equation*}
x_{0}=\frac{a_{pp}-a_{qq}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
then by substitution in $(*)$ and $(**)$ we find
\begin{equation*}
\lambda_{*}=\frac{a_{qq}+a_{pp}}{2}+\frac{{(a_{pp}-a_{qq})}^{2}+4a^{2}_{pq}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
\begin{equation*}
\lambda_{**}=\frac{a_{qq}+a_{pp}}{2}+\frac{-{(a_{pp}-a_{qq})}^{2}-4a^{2}_{pq}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
and if $a_{pq} <0$, the root of the equation is
\begin{equation*}
x_{0}=\frac{a_{qq}-a_{pp}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
by substituting $x_{0}$ by its value in $(*)$ and $(**)$ we find
\begin{equation*}
\lambda_{*}=\frac{a_{qq}+a_{pp}}{2}+\frac{-{(a_{pp}-a_{qq})}^{2}-4a^{2}_{pq}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
\begin{equation*}
\lambda_{**}=\frac{a_{qq}+a_{pp}}{2}+\frac{{(a_{pp}-a_{qq})}^{2}+4a^{2}_{pq}}{2\sqrt{(a_{qq}-a_{pp})^{2}+4a^{2}_{pq}}}
\end{equation*}
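These closed-form expressions can be checked numerically on a $2\times 2$ block. The sketch below (NumPy; the function name and branch structure are ours, for illustration) reproduces $x_{0}$ and the two eigenvalues and compares them with a library eigensolver:

```python
import numpy as np

def block_eigen(app, aqq, apq):
    """x0 and the two eigenvalues of [[app, apq], [apq, aqq]] from the
    closed-form expressions above (apq != 0; its sign selects the branch
    of x0, and decides which diagonal entry receives which eigenvalue)."""
    D = np.sqrt((aqq - app)**2 + 4.0 * apq**2)
    x0 = (app - aqq) / (2.0 * D) if apq > 0 else (aqq - app) / (2.0 * D)
    lam_plus  = (app + aqq) / 2.0 + D / 2.0
    lam_minus = (app + aqq) / 2.0 - D / 2.0
    return x0, lam_plus, lam_minus

x0, lp, lm = block_eigen(1.0, 4.0, 2.0)
print(x0, lp, lm)                                    # -0.3 5.0 0.0
print(np.linalg.eigvalsh([[1.0, 2.0], [2.0, 4.0]]))  # ~ [0. 5.]
```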
\subsection{Existence and uniqueness of the solution of Equation (\ref{e1})}
In this section we show that Equation (\ref{e1}) has a unique solution $x_{0}$. The idea of the proof consists in dividing the interval $[-\frac{1}{2},\frac{1}{2}]$ into two open intervals, $]-\frac{1}{2},0[$ and $]0,\frac{1}{2}[$.\\
\vspace{0.5cm}
Let
\begin{center}
$f(x)=(a_{qq}-a_{pp})\sqrt{-x^{2}+\frac{1}{4}}+2a_{pq}x$
\end{center}
with derivative
\begin{center}
$f^{\prime}(x)=(a_{qq}-a_{pp})\frac{-x}{\sqrt{-x^{2}+\frac{1}{4}}}+2a_{pq}$
\end{center}
Now, for $a_{qq}-a_{pp} >0$, $a_{pq}>0$ and $x \in ]-\frac{1}{2},0[$, we find
\begin{equation}\label{e2}
-a_{pq} < f(x) < \frac {a_{qq}-a_{pp}}{2}
\end{equation}
and for $x \in ]0,\frac{1}{2}[$ we find
\begin{equation}\label{e3}
0 < f(x) < \frac {a_{qq}-a_{pp}}{2}+a_{pq}
\end{equation}
By (\ref{e2}), since $f(-\frac{1}{2})=-a_{pq}<0$ and $f(0)=\frac{a_{qq}-a_{pp}}{2}>0$, the intermediate value theorem gives at least one root of $f$ in the interval $ ]-\frac{1}{2},0[$, and since $f^{\prime}$ is strictly positive on this interval, this root is unique. The expression (\ref{e3}) shows that $f$ keeps a positive sign on the interval $]0,\frac{1}{2}[$, so $f$ has no root there. From the above we deduce that $f$ has a unique root in the interval $[-\frac{1}{2},\frac{1}{2}]$.
\vspace{0.4cm}
Proceeding in the same manner we find:
\begin{enumerate}
\item if $a_{pq} >0$ and $a_{qq}-a_{pp}<0$ we have respectively on the intervals $ ]-\frac{1}{2},0[$ and $]0,\frac{1}{2}[$
\begin{equation*}
\frac {a_{qq}-a_{pp}}{2}-a_{pq} < f(x) < 0
\end{equation*}
\begin{equation*}
\frac {a_{qq}-a_{pp}}{2} < f(x) < a_{pq}
\end{equation*}
then $x_{0} \in ]0,\frac{1}{2}[$
\item if $a_{pq} <0$ and $a_{qq}-a_{pp}>0$ we have respectively on the intervals $ ]-\frac{1}{2},0[$ and $]0,\frac{1}{2}[$
\begin{equation*}
0 < f(x) < \frac {a_{qq}-a_{pp}}{2}-a_{pq}
\end{equation*}
\begin{equation*}
a_{pq} < f(x) < \frac {a_{qq}-a_{pp}}{2}
\end{equation*}
then $x_{0} \in ]0,\frac{1}{2}[$
\item if $a_{pq} <0$ and $a_{qq}-a_{pp}<0$ we have respectively on the intervals $ ]-\frac{1}{2},0[$ and $]0,\frac{1}{2}[$
\begin{equation*}
\frac {a_{qq}-a_{pp}}{2}< f(x) < -a_{pq}
\end{equation*}
\begin{equation*}
\frac {a_{qq}-a_{pp}}{2}+a_{pq} < f(x) < 0
\end{equation*}
then $x_{0} \in ]-\frac{1}{2},0[$
\end{enumerate}
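The case analysis above can be confirmed numerically: for $a_{pq}\neq 0$, $f$ changes sign exactly once on $[-\frac{1}{2},\frac{1}{2}]$ (note $f(-\frac{1}{2})=-a_{pq}$ and $f(\frac{1}{2})=a_{pq}$), so a plain bisection locates $x_{0}$. A NumPy sketch (our own helper, for illustration):

```python
import numpy as np

def f(x, app, aqq, apq):
    # f(x) = (a_qq - a_pp) sqrt(1/4 - x^2) + 2 a_pq x, as in Equation (1)
    return (aqq - app) * np.sqrt(0.25 - x * x) + 2.0 * apq * x

def bisect_root(app, aqq, apq, lo=-0.5, hi=0.5, iters=200):
    """Plain bisection on [-1/2, 1/2]; the endpoint values of f have
    opposite signs and f has a single sign change on the interval."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo, app, aqq, apq) * f(mid, app, aqq, apq) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# a_qq - a_pp > 0 and a_pq > 0: the root lies in ]-1/2, 0[ as shown above
print(bisect_root(1.0, 4.0, 2.0))   # ~ -0.3
# a_qq - a_pp < 0 and a_pq < 0: the root again lies in ]-1/2, 0[ (case 3)
print(bisect_root(4.0, 1.0, -2.0))  # ~ -0.3
```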
\section{MATLAB Program }
Although this method is very similar to the Jacobi method, which is of course convergent, we nevertheless provide an associated program for it. In what follows we give the program of this new method. Our approach is based directly upon the program of the cyclic Jacobi method given in \cite{quart} (Program 23-33, 35-37). A few changes were made since the entries of the orthogonal matrix were changed.\\
The numerical estimates of the Jacobi method remain unchanged here. First, let us define the quantity
$$
\Psi(A)=\left(\sum_{\substack{i,j=1\\ i\neq j}}^{n}a_{ij}^2\right)^{1/2}=\left(\|A\|_{F}^{2}-\sum_{i=1}^{n}a_{ii}^{2}\right)^{1/2}
$$
where $\|\cdot\|_{F}$ is the Frobenius norm. It is well known that at the $k$-th iteration we have
$$
\Psi(A^{(k)}) \leq \Psi (A^{(k-1)}), \, \, \, \text{ for } k \geq 1
$$
We also have the following estimate
$$
\Psi(A^{(k+N)}) \leq \frac{1}{\delta \sqrt{2}}(\Psi(A^{(k)}))^{2}, \, \, \, k=1, 2,\cdots
$$
which is obtained for the cyclic Jacobi method, where $N=n(n-1)/2$ and $\delta$ (which here denotes the minimal eigenvalue separation, not the parameter of $H$), by hypothesis, satisfies the following inequality
$$
|\lambda_{i}-\lambda_{j}|\geq \delta \, \, \, \text{ for } i \not=j
$$
Now, we give the MATLAB program with the changes required.
\begin{itemize}
\item Let us start with the program that allows us to calculate the product $H(i,k,x)^{T}M$\\
function [M]=pro1(M,irr1,irr2,i,k,j1,j2)\\
for j=j1:j2\\
t1=M(i,j);\\
t2=M(k,j);\\
M(i,j)=irr1.*t1+irr2.*t2;\\
M(k,j)=-irr2.*t1+irr1.*t2;\\
end\\
return\\
such that $irr1=\sqrt{x+1/2}$ and $irr2=\sqrt{-x+1/2}$
\item Secondly, we give the program for the product $MH(i,k,x)$\\
function[M]=pro2(M,irr1,irr2,j1,j2,i,k)\\
for j=j1:j2\\
t1=M(j,i);\\
t2=M(j,k);\\
M(j,i)=irr1*t1+irr2*t2;\\
M(j,k)=-irr2*t1+irr1*t2;\\
end\\
return\\
\item Now, we give the program which allows us to evaluate $\Psi(A)$ in the cyclic new method\\
function[psi]=psinorm(A)\\
$[n,m]$=\text{size}(A);\\
if n$\not=$m, error('only for square matrix'); end\\
psi$=$0;\\
for i=1:n-1\\
j=$[i+1:n]$;\\
psi=psi+sum($A(i,j).^2$+$A(j,i).^2$')\\
end\\
psi=sqrt(psi);\\
return\\
\item Afterwards, the program which allows us to evaluate $irr1$ and $irr2$\\
function[irr1,irr2]=symschur2(A,p,q)\\
if A(p,q)==0\\
irr1=1;irr2=0;\\
else\\
if A(p,q)$>=$0\\
z1=(A(p,p)-A(q,q));\\
z2=((A(q,q)-$A(p,p)).^2$)+(4.*$(A(p,q)).^2$);\\
z3=sqrt(z2);\\
z4=2.*z3;\\
x=z1./z4;\\
else\\
v1=(A(q,q)-A(p,p));\\
v2=((A(q,q)-$A(p,p)).^2$)+(4.*$(A(p,q)).^2$);\\
v3=sqrt(v2);\\
v4=2.*v3;\\
x=v1./v4;\\
end\\
irr1=sqrt(x+(1/2)); irr2=sqrt(-x+(1/2));\\
end\\
return\\
\item Finally, here is the program of the new method\\
function[D,sweep,psi]=cycjacobi2(A,tol,nmax)\\
$[n,m]$=size(A);\\
if n$\not=$m, error('only for the square matrix'); end\\
D=A;\\
psi=norm(A,'fro');\\
epsi=tol*psi;\\
psi=psinorm(D);\\
sweep=0;\\
iter=0;\\
while psi$>$epsi \& iter$<=$nmax\\
iter=iter+1;\\
sweep=sweep+1;\\
for p=1:n-1\\
for q=p+1:n\\
$[irr1,irr2]$=symschur2(D,p,q);\\
$[D]$=pro1(D,irr1,irr2,p,q,1,n);\\
$[D]$=pro2(D,irr1,irr2,1,n,p,q);\\
end\\
end\\
psi=psinorm(D);\\
end\\
return\\
such that tol is the tolerance and nmax is the maximum number of iterations.
\end{itemize}
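For readers without MATLAB, the same cyclic scheme can be sketched in a few lines of NumPy (a port under the choice $\delta=\frac{1}{2}$; function names mirror the listing above, and convergence is monitored through the off-diagonal norm $\Psi$):

```python
import numpy as np

def symschur2(A, p, q, delta=0.5):
    """Return (irr1, irr2) = (sqrt(x0 + delta), sqrt(1 - x0 - delta)),
    with the branch of x0 chosen by the sign of A[p, q]."""
    if A[p, q] == 0.0:
        return 1.0, 0.0
    r = np.sqrt((A[q, q] - A[p, p])**2 + 4.0 * A[p, q]**2)
    x = (A[p, p] - A[q, q]) / (2*r) if A[p, q] > 0 else (A[q, q] - A[p, p]) / (2*r)
    return np.sqrt(x + delta), np.sqrt(1.0 - x - delta)

def cycjacobi2(A, tol=1e-12, nmax=100):
    D = np.array(A, dtype=float)
    n = D.shape[0]
    eps = tol * np.linalg.norm(D, 'fro')
    for _ in range(nmax):
        psi = np.sqrt(max(np.sum(D**2) - np.sum(np.diag(D)**2), 0.0))  # Psi(D)
        if psi <= eps:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                c, s = symschur2(D, p, q)
                H = np.eye(n)
                H[p, p] = H[q, q] = c
                H[p, q] = -s
                H[q, p] = s
                D = H.T @ D @ H          # annihilates D[p, q]
    return np.sort(np.diag(D))

A = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0], [2.0, 0.0, 4.0]])
print(cycjacobi2(A))   # ~ [0. 3. 5.]
```

Forming the full matrix $H$ at every step is wasteful; the MATLAB routines \texttt{pro1}/\texttt{pro2} above apply the rotation row by row and column by column instead.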
\section{Example}
Let
\[ A = \begin{pmatrix}
1 & 0 & 2\\
0 & 3 & 0\\
2 & 0 & 4
\end{pmatrix} \]
and let $A^{(0)}=A$, then we have
\begin{center}
$A^{(1)}=H^{T}AH$
\end{center}
such that
\[ H = \begin{pmatrix}
\sqrt{x+\frac{1}{2}} &0& -\sqrt{-x+\frac{1}{2}}\\
0&1&0 \\
\sqrt{-x+\frac{1}{2}} &0& \sqrt{x+\frac{1}{2}}
\end{pmatrix}\]
Using the expressions derived in Section 2 (page \pageref{e1}) we get
\begin{center}
$x_{0}=\frac{1-4}{2\sqrt{(4-1)^{2}+4 \times 2^{2}}}=\frac{-3}{10}$
\end{center}
\begin{center}
$\lambda_{1}=\frac{5}{2}+\frac{{(1-4)}^{2}+4 \times 2^{2}}{2\sqrt{(4-1)^{2}+4 \times 2^{2}}}=5$
\end{center}
\begin{center}
$\lambda_{2}=3$
\end{center}
\begin{center}
$\lambda_{3}=\frac{5}{2}+\frac{-{(1-4)}^{2}-4 \times 2^{2}}{2\sqrt{(4-1)^{2}+4 \times 2^{2}}}=0$
\end{center}
And the corresponding (normalized) eigenvectors, read off from the columns of $H$ at $x_{0}$, are
\[v_{1}=\frac{1}{\sqrt{5}}\begin{pmatrix} 1\\ 0\\ 2 \end{pmatrix}, \quad v_{2}=\begin{pmatrix}0\\ 1\\ 0 \end{pmatrix}, \quad v_{3}=\frac{1}{\sqrt{5}}\begin{pmatrix} -2\\ 0\\ 1 \end{pmatrix} \]
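As a numerical cross-check of this example (a NumPy sketch; assumption: the eigenvectors are read off from the columns of $H$ at $x_{0}=-\frac{3}{10}$, i.e. $c=\sqrt{1/5}$, $s=\sqrt{4/5}$):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0], [2.0, 0.0, 4.0]])
w, V = np.linalg.eigh(A)
print(w)                       # ~ [0. 3. 5.]

c, s = np.sqrt(0.2), np.sqrt(0.8)   # columns of H at x0 = -3/10
v5 = np.array([c, 0.0, s])     # proportional to (1, 0, 2): eigenvector for 5
v0 = np.array([-s, 0.0, c])    # proportional to (-2, 0, 1): eigenvector for 0
print(np.allclose(A @ v5, 5.0 * v5), np.allclose(A @ v0, 0.0 * v0))  # True True
```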
\section{Conclusion}
In this paper, we gave another method for the computation of eigenvalues and eigenvectors of a real symmetric matrix, and we noted the relationship between the two orthogonal matrices: both allow us to compute the same eigenvalues of a real symmetric matrix, but with two different parameters. Indeed, in the Jacobi method the parameter is $\theta \in ] -\pi/4,\pi/4[$, while in this new method it is $x \in ]-\delta,1-\delta[$ with $\delta \in \mathbb{R}$, so we can deduce that there is a bijection between these two intervals.
\newpage
\section{Introduction}
The {\it r}-modes are large-scale currents in neutron stars (NSs) that couple to
gravitational radiation and remove energy and angular momentum from
the star in the form of gravitational waves
\citep{Andersson:1997xt,Friedman:1997uh,Friedman:1978hf,Lindblom:1998wf}. The
physics of these oscillations is important in relating the microscopic
properties of dense matter--such as its viscosity and neutrino
emissivity--to the macroscopic, observable properties of NSs
such as their spin frequency, temperature, mass and radius. The
{\it r}-modes are unstable to gravitational radiation and their amplitudes
can grow exponentially if viscous and other possible damping
mechanisms are not large enough. Because they can affect the dynamic
properties of the star--its spin frequency and temperature
evolution--they are potentially important probes of the phases of
dense matter. For example, the boundary of the {\it r}-mode instability
region in the spin frequency - temperature plane, and in particular
its minimum, which may determine the final rotation frequency of the
star, is very different for stars with different interior compositions
\citep{Alford:2010fd,Ho:2011tt,2012MNRAS.424...93H}.
Whether or not the {\it r}-mode instability limits the spin rates of some
NSs it is nevertheless important to explore its
potential astrophysically observable signatures. While mass-radius
measurements of NSs are important for constraining the
equation of state (EOS) of dense matter, observations of their dynamic
properties such as spin and thermal evolution are important and
potentially more efficient in discriminating between different phases
of dense matter. That is because dynamic properties are affected by the
transport and thermodynamic properties of dense matter inside the
star, such as viscosity, heat conductivity and neutrino
emissivity which depend on low energy degrees of freedom and are very
different depending on the phase of dense matter present
\citep{Alford:2010fd,Alford:2012jc,Alford:2012yn}.
Pulse timing observations of accreting millisecond X-ray pulsars
(AMXPs) made with NASA's {\it Rossi X-ray Timing Explorer} ({\it RXTE}) are
beginning to reveal the long-term spin evolution of low mass X-ray
binary (LMXB) NSs. The pulsar recycling hypothesis whereby
millisecond radio pulsars acquire their fast spins via accretion,
requires that at least some of these stars are spun-up to hundreds of
Hz--the current NS spin record being 716 Hz
\citep{2006Sci...311.1901H}. Still, it remains somewhat puzzling that
the spin frequency distribution of AMXPs appears to cut-off well below
the mass-shedding limit of essentially all realistic NS EOS
\citep{2003Natur.424...42C}, and it has been suggested that the {\it r}-mode
instability may play a role in limiting NS spin rates
\citep{Andersson:1998qs}. Alternatively, recent work supports the idea
that at least some of the AMXP population are in or close to ``spin
equilibrium,'' essentially determined by the physics of magnetized
accretion, and that an additional mechanism, such as the {\it r}-mode
torque, is not required to halt their spin-up
\citep{1997ApJ...490L..87W,2012ApJ...746....9P}.
While pulse timing noise in AMXPs has made the interpretation of the X-ray
pulsation data difficult, there are now several convincing
measurements of spin-down during the quiescent phases between
accretion outbursts. For example, both SAX J1808.4-3658 (hereafter SAX
J1808) and IGR J00291+5934 (hereafter IGR J00291) show spin-down of
the NS between outbursts, at rates, $\dot\nu$, of
approximately $1-3 \times 10^{-15}$ Hz s$^{-1}$
\citep{Patruno:2010qz,2009ApJ...702.1673H}. The most convincing
evidence for an accretion-induced spin-up during outbursts is in IGR
J00291, for which a peak value of about $3 \times 10^{-13}$ Hz
s$^{-1}$ has now been inferred \citep{Patruno:2010qz}. The magnitude
of the ``instantaneous'' spin-up torque is larger in this source than
the spin-down torque by about a factor of 100. The long-term
evolution is then a competition between the accretion outbursts that
spin it up and the quiescent spin-down intervals. If the outbursts
are frequent enough then the star will be spun-up, if not, then
long-term spin-down occurs. As the spin-up torque is proportional to
the mass accretion rate, the outcome can also be expressed in terms of
the long-term, average mass accretion rate.
The observed quiescent spin-down rate, $\dot{\nu}_{sd}$, puts an upper
limit on the torque that can be present due to any {\it r}-mode, as it has
to be less than the observed torque of $2\pi I \dot\nu_{sd}$, where
$I$ is the moment of inertia of the NS. Now, spin-down of
pulsars is typically ascribed to the magnetic-dipole torque that is
almost certainly present at some level due to the large scale magnetic
field of the NS. Indeed, magnetic field estimates are
usually obtained by equating the {\it entire} observed spin-down to
that expected theoretically for magnetic-dipole radiation, but the
observed spin-down is due to the sum total of any torques present.
Assuming that both {\it r}-mode and magnetic-dipole torques are present,
then the magnitude of the {\it r}-mode torque is equal to the observed
torque minus the magnetic-dipole torque. Since the magnetic
field strengths of AMXPs are typically not known independently from
spin-down estimates, the precise value of the magnetic-dipole torque
is not known a priori. However, to the extent that the magnetic
torque accounts for the majority of the observed spin-down, as is
typically assumed, then the {\it r}-mode torque is likely to be much less
than the measured spin-down torque.
In addition to torquing the NS, the viscous damping of the
{\it r}-modes acts as an internal source of heat. Calculations of the
coupled thermal and spin evolution of accreting NSs
including the effects of {\it r}-mode heating and gravitational radiation
have been carried out by several authors \citep{2000ApJ...536..915B,
2011ApJ...738L..14H, Ho:2011tt, 2012MNRAS.424...93H}. The primary
assumption in these analyses has been that the long-term accretion
(spin-up) torque is balanced by the {\it r}-mode torque due to the emission
of gravitational radiation that carries away angular momentum, that
is, the long-term average accretion torque is in equilibrium with the
{\it r}-mode torque. As noted above, the recent pulse timing observations of
AMXPs indicate that this equilibrium assumption is likely not realized
in practice. Moreover, \citet{2000ApJ...536..915B} showed that if the
average, fiducial accretion torque given by $N_{acc} = \langle \dot M
\rangle (G M R)^{1/2}$, where $\langle \dot M \rangle$ is the
long-term average mass accretion rate, were balanced by the {\it r}-mode
torque then the quiescent luminosities of some accreting NS
transients should be substantially larger than observed due to heating
from the {\it r}-modes. Here we use a similar argument to place upper
bounds on the {\it r}-mode amplitudes that can be present in accreting
NSs assuming that {\it r}-mode heating provides the source of
NS luminosity in the absence of accretion. We then use
these amplitude limits to assess the level of {\it r}-mode spin-down that
can be present, its relation to observed spin-down rates when
available, and the expected strength of gravitational radiation.
The paper is organized as follows. In Section 2 we review the basic theory
of the {\it r}-mode instability and how it couples to the thermal and spin
evolution of NSs. In Section 3 we describe our methods for
constraining the {\it r}-mode amplitudes using the observed properties of
quiescent, LMXB NSs, and we assess the implications for
{\it r}-mode spin-down and the emission of gravitational radiation. We
provide a summary and discussion of the implications of our findings
in Section 4.
\section{Spin-down due to unstable {\it\lowercase{r}}-modes}
The ``flow pattern'' of the {\it r}-modes is prograde in the inertial frame
and retrograde in the rotating frame, which means that it moves in the
same direction as the star's rotation as seen by an observer at
infinity, but in the opposite direction as seen by an observer at rest
on the star. Any mode that is retrograde in the co-rotating frame and
prograde in the inertial frame grows as a result of its emitting
gravitational waves. This is the well-known
Chandrasekhar-Friedman-Schutz mechanism, and it means that
gravitational radiation drives the mode rather than damping
it. Viscosity, on the other hand, tends to damp the {\it r}-mode by
transferring angular momentum from the mode to the rigid (unperturbed)
star. The total angular momentum of a perturbed star can be written
as
\begin{equation}
J=I\Omega- J_c
\end{equation}
where $I$ is the moment of inertia and $\Omega$ is the angular
rotation frequency of the star and $J_c$ is the canonical angular
momentum of the mode, given by
\begin{equation}
J_c=-\frac{3}{2} \tilde{J} M R^2 \Omega \alpha^2\label{eq:J_c}
\end{equation}
where $\alpha$ is the amplitude of the {\it r}-mode and $\tilde{J}$ is a
dimensionless constant defined by \citep{Owen:1998xg}
\begin{equation}
\tilde{J} \equiv \frac{1}{M R^4}\int_0^R \rho r^6 dr\label{eq:J-tilde}
\end{equation}
where $\rho$ is the run of density within the neutron
star. The moment of inertia of the star, $I$, can also be written as
$I=\tilde{I} M R^2$ where $\tilde {I}$ is defined by
\begin{equation}
\tilde{I} \equiv \frac{8 \pi}{3 M R^2}\int_0^R \rho r^4 dr \; .
\end{equation}
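As a quick sanity check of these dimensionless integrals, a toy uniform-density star ($\rho=\mathrm{const}$, an illustrative assumption; the realistic APR profiles give the values in Table~\ref{tab:viscosity-parameters} instead) yields $\tilde{I}=2/5$, the familiar uniform-sphere moment of inertia, and $\tilde{J}=3/(28\pi)$. A NumPy sketch:

```python
import numpy as np

# Midpoint-rule evaluation of the radial integrals for rho = const
# (a toy density profile for illustration only; realistic EOS profiles differ).
R, n = 1.0, 100000
dr = R / n
r = (np.arange(n) + 0.5) * dr
rho = np.ones_like(r)

M = 4.0 * np.pi * np.sum(rho * r**2) * dr            # M = (4/3) pi rho R^3
I_tilde = 8.0 * np.pi / (3.0 * M * R**2) * np.sum(rho * r**4) * dr
J_tilde = 1.0 / (M * R**4) * np.sum(rho * r**6) * dr
print(I_tilde)   # ~ 0.4 = 2/5
print(J_tilde)   # ~ 0.0341 = 3/(28 pi)
```

The Table~\ref{tab:viscosity-parameters} values of $\tilde{I}$ and $\tilde{J}$ are smaller than these uniform-density results, reflecting the central condensation of realistic profiles.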
To derive the equations for the dynamical evolution of the star, we use
the following argument by \citet{Ho:1999fh}.
The canonical angular momentum of the mode increases through
gravitational radiation and decreases by transferring angular momentum
to the star through viscosity
\begin{equation}
\frac{dJ_c}{dt}=-\frac{2}{\tau_{G}}J_c-\frac{2}{\tau_V}J_c\label{eq:dJ_c}
\end{equation}
where the viscous damping time $\tau_V$ is given by
$\frac{1}{\tau_V}=\frac{1}{\tau_S} + \frac{1}{\tau_B}+ ...$. Here,
$\tau_S$ and $\tau_B$ refer to shear and bulk viscosities,
respectively, and the ellipsis denotes other possible dissipative
mechanisms, such as boundary layer effects
\citep{Wu:2000qy,Bildsten:2000ApJ...529L..33B}.
The second evolution equation is obtained by writing the evolution of
the total angular momentum $J$ of the perturbed star, which decreases
due to gravitational radiation and increases due to accretion
\begin{equation}
\frac{dJ}{dt}=-\frac{2J_c}{\tau_G}+N_{acc}\label{eq:dOmega} \;,
\end{equation}
where $N_{acc}$ is the accretion torque. For a fiducial torque
$N_{acc}$ can be written as $\dot{M}(G M R)^{1/2}$ which assumes that
each accreted particle transfers to the star an angular momentum equal
to the Keplerian value at the stellar radius $R$
\citep{2000ApJ...536..915B}. In the previous equations the quantities
$\frac{1}{\tau_i}$, where $i$ is either $G$, $B$, or $S$ for
gravitational radiation, bulk viscosity or shear viscosity timescales,
respectively, are given by,
\begin{equation}
\frac{1}{\tau_i}=-\frac{P_i}{2E_c}
\end{equation}
where $E_c$ is the canonical energy of the {\it r}-mode, $P_G$ is the power
radiated by gravitational waves and $P_B$ and $P_S$ are the powers
dissipated due to bulk and shear viscosity, respectively (in natural
units where $c=\hbar=k_B=1$) \citep{Owen:1998xg,Alford:2010fd},
\begin{align}
E_c &=\frac{1}{2} \alpha^2 \Omega^2 \tilde{J} M R^2\\
P_G &=\frac{32\pi(m-1)^{2m}(m+2)^{2m+2}}{((2m+1)!!)^2(m+1)^{2m+2}}\tilde{J}_m^2GM^2R^{2m+2}\alpha^2\Omega^{2m+4}\\
P_B &=-\frac{16m}{(2m+3)(m+1)^5\kappa^2}\frac{\tilde{V}_m\Lambda_{{\rm QCD}}^{9-\delta}R^7\alpha^2\Omega^4T^\delta}{\Lambda_{EW}^4}\\
P_S &=-\frac{(m-1)(2m+1)\tilde{S}_m\Lambda_{{\rm QCD}}^{3+\sigma}R^3\alpha^2\Omega^2}{T^\sigma} \; . \end{align}
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\caption{Parameters of the Neutron Star Models\label{tab:viscosity-parameters}}
\scalebox{0.95}{
\begin{tabular}{crrrrrrrrrrrrrr}
\tableline\tableline
Neutron Star & Shell & $R$ (km) &$\Omega_K$ (Hz) &$\tilde{I}$& $\tilde{J}$ & $\tilde{S}$ & $\tilde{V}$&$\tilde{C}_V$ & $\tilde{L}$&$\sigma$&$\delta$&$v$&$\theta$\\
\tableline
NS $1.4\, M_{\odot}$ & Core & $11.5$&$6020$&$0.283$ & $1.81\times10^{-2}$ & $7.68\times10^{-5}$ & $1.31\times10^{-3}$&$2.36\times10^{-2}$ & $1.91\times10^{-2}$&$\frac{5}{3}$&$6$ &$1$&$8$ \\
NS $2.0\, M_{\odot}$ &Core & $11.0$&$7670$& $0.300$&$2.05\times10^{-2}$ &$2.25\times10^{-4}$ & $1.16\times10^{-3}$&$2.64\times10^{-2}$ & $1.69\times10^{-2}$&$''$&$''$ &$''$&$''$\\
NS $2.21\, M_{\odot}$ & m.U. core &$10.0$&$9310$&$0.295$& $2.02\times10^{-2}$ & $5.05\times10^{-4}$ & $9.34\times10^{-4}$&$2.62\times10^{-2}$ & $1.29\times10^{-2}$&$''$&$''$ &$''$&$''$\\
& d.U. core && & & & &$1.16\times10^{-8}$& & $2.31\times10^{-5}$&&$4$ &&$6$ \\
\tableline
\end{tabular}}
\tablecomments{Radius, Kepler frequency and radial integral parameters
that appear in the moment of inertia, angular momentum of the mode,
dissipative powers due to shear viscosity (from leptonic interactions)
and bulk viscosity (due to Urca processes), specific heat and neutrino
luminosity for different NS models considered in this work
\citep{Alford:2012yn,Alford:2010fd}. For the $2.21\, M_{\odot}$ model
the bulk viscosity and neutrino luminosity parameters are different in
the inner core where direct Urca processes are allowed, therefore
these values are given separately in the last row.}
\end{table*}
Here we consider $m=2$ {\it r}-modes, and $\Lambda_{{\rm QCD}}$ and
$\Lambda_{{\rm EW}}$ are characteristic strong and electroweak scales
introduced to make $\tilde{V}$ and $\tilde{S}$ dimensionless. In our
calculations we have used $\Lambda_{QCD}=1$ GeV and $\Lambda_{EW}=100$
GeV. The dimensionless parameters $\tilde{V}$, $\tilde{S}$,
$\tilde{I}$ and $\tilde{J}$, which involve radial integration over the
star, and $\delta$ and $\sigma$ are given in
Table~\ref{tab:viscosity-parameters} for three different NS
models that we study in this paper. All three models here are made of
non-superfluid, hadronic {\it npe} matter with the APR EOS
\citep{Akmal:1998cf}, which generates a reasonable NS mass-radius
relation that is consistent with observational constraints
(labeled AP4 in \citet{Hebeler:2013nza} and
\citet{Lattimer:2012nd}), but they have different masses ($1.4\,
M_{\odot}$, $2.0\, M_{\odot}$ and $2.21\, M_{\odot}$) and radii
\citep{Alford:2010fd}. The two models with masses of $1.4\, M_{\odot}$
and $2.0\, M_{\odot}$ only allow modified Urca neutrino emission in
the core, but the one with a mass of $2.21\, M_{\odot}$ allows direct
Urca neutrino emission in a core of radius $5.9$ km. Direct Urca
processes are very sensitive to the proton fraction of dense
matter. The required proton fraction is roughly $14 \%$ in the case of
the APR EOS, reached at the relatively high density $n \sim 5n_0$, where
$n_0$ is the nuclear saturation density; this could be different for
other EOSs \citep{Alford:2010fd}.
The evolution equations for the amplitude of the {\it r}-mode, $\alpha$, and
spin frequency of the star, $\Omega$, can be written by using
Equations \ref{eq:dJ_c} and \ref{eq:dOmega} and substituting $J_c$
from Equation~\ref{eq:J_c}. The third equation --which describes the
temperature evolution-- can be obtained by noting
that the temperature of the star decreases due to thermal emission
from the surface and neutrino emission from the interior, which in an
average mass hadronic star is dominated by modified Urca processes (in
a massive star direct Urca processes can also occur in the core and
should be included in the neutrino emissivity as well), and it
increases due to the viscous dissipation of the {\it r}-mode energy, $P_V$.
This gives the following equations for the evolution of spin
frequency, {\it r}-mode amplitude and temperature,
\begin{subequations}
\begin{align}
\frac{d\Omega}{dt}&=-2Q\frac{\Omega\alpha^2}{\tau_V}+\frac{N_{acc}}{I}\label{eq:evolution-Omega}\\
\frac{d\alpha}{dt}&=-\frac{\alpha}{\tau_G}-\frac{\alpha}{\tau_{V}}(1-\alpha^2Q)-\frac{\alpha}{2 \Omega}\frac{N_{acc}}{I}\label{eq:evolution-alpha}\\
\frac{dE}{dt}&=C_V\frac{dT}{dt}=-L_{\nu}-L_{\gamma}+|P_V|+H \; , \label{eq:evolution-T}
\end{align}\label{eq:evolution}
\end{subequations}
where $Q\equiv\frac{3\tilde{J}}{2\tilde{I}}$, and the viscous
dissipated power is $P_V = P_S + P_B + \ldots$, and again the ellipsis
denotes other possible dissipative processes. In
Equation~(\ref{eq:evolution-T}), $H$ represents other heating mechanisms that
might be present in the star, but are not related to the {\it r}-mode
dissipation, such as deep crustal heating due to nuclear reactions in
the NS crust \citep{2003A&A...404L..33H, 2007ApJ...662.1188G}. Since
our goal in this paper is to obtain {\it upper} limits on {\it r}-mode
amplitudes we can safely set $H=0$. We discuss this further in the
next section. Here, $L_{\nu}$, $L_{\gamma}$ and $C_V$, which are the
total neutrino luminosity, photon luminosity and specific heat of the
star, are given by (in natural units)
\begin{align}
L_{\nu} &=\frac{4\pi R^3\Lambda_{QCD}^{9-\theta}\tilde{L}}{\Lambda_{EW}^4}T^{\theta}\label{eq:neutrino-luminosity}\\
L_{\gamma} &=4\pi R^2 \sigma T_{eff}^4\label{eq:thermal-luminosity}\\
C_V &=4\pi\Lambda_{QCD}^{3-v}R^3\tilde{C}_VT^v \; ,\end{align}
where $T$ is the core temperature of the star and $T_{eff}$ is the
surface temperature. The dimensionless parameters $\tilde{L}$
and $\tilde{C}_V$ (which involve radial integration over the star \citep{Alford:2012yn}) and
the exponents $\theta$ and $v$ are given in Table~\ref{tab:viscosity-parameters} for
the different stellar models.
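With the convention $\frac{1}{\tau_i}=-\frac{P_i}{2E_c}$, the gravitational term has $\frac{1}{\tau_G}<0$ (driving) while the viscous term has $\frac{1}{\tau_V}>0$ (damping), so Equation~(\ref{eq:evolution-alpha}) gives exponential amplitude growth whenever the driving rate exceeds the damping rate. A toy forward-Euler integration with made-up constant timescales (illustrative numbers only, not from any stellar model; accretion term dropped and $\alpha^{2}Q\ll 1$) shows this:

```python
import numpy as np

inv_tau_G = -1.0 / 30.0    # s^-1: gravitational driving (negative by convention)
inv_tau_V = +1.0 / 100.0   # s^-1: viscous damping
alpha, dt = 1e-8, 0.01     # seed amplitude, time step in seconds

for _ in range(10000):     # evolve d(alpha)/dt = -alpha/tau_G - alpha/tau_V for 100 s
    alpha += dt * (-alpha * inv_tau_G - alpha * inv_tau_V)

# net growth rate 1/30 - 1/100 = 7/300 s^-1, so alpha grows by ~ e^{7/3} ~ 10.3
print(alpha / 1e-8)
```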
Figure~\ref{fig:Omega-T} shows the {\it r}-mode instability window computed
for a $1.4\, M_{\odot}$ NS using the APR EOS (the
modification to the instability window is rather modest in the case of
the two other stellar models considered in this paper
\citep{Alford:2010fd}). In the region above the curve the
gravitational radiation timescale is smaller than the viscous damping
timescales, therefore {\it r}-modes are unstable and their amplitudes grow
exponentially in this region. As can be seen in this figure, most of
the LMXBs are in the unstable region for normal hadronic stars, where
the damping is due to shear viscosity from leptonic scattering
(i.e. electron-electron and electron-proton scatterings, which are the
dominant contributions to the shear viscosity of normal hadronic
matter in NS cores) and bulk viscosity due to Urca processes,
therefore, there must be some non-linear mechanism that saturates the
amplitude of the unstable {\it r}-modes at a finite value. Supra-thermal
bulk viscosity \citep{Alford:2010gw} is one of these non-linear
mechanisms, but in the case that only the core of the star is
considered and the effects of the crust are ignored, it can only
saturate the {\it r}-mode at large amplitudes, ($\alpha \sim 1$)
\citep{Alford:2011pi}. Magnetohydrodynamic coupling to the stellar
magnetic field is another mechanism that can damp the {\it r}-mode
instability, but it can only saturate the {\it r}-mode at large amplitudes
($\alpha \gtrsim 0.01$) in the presence of a magnetic field
significantly larger than $\sim 10^{8}$ G that is characteristic of
the LMXB sources considered here
\citep{2000ApJ...531L.139R,2001PhRvD..64j4014R}. Mode coupling can
saturate the {\it r}-mode at smaller amplitudes ($\alpha \sim 10^{-4}$), but
those values are still very large compared to the upper limits we find
in this work
\citep{Arras:2002dw,2007PhRvD..76f4019B,2009PhRvD..79j4003B}. At this
point it is not entirely clear which mechanism is actually responsible
for saturating the {\it r}-mode amplitude (none of the saturation mechanisms
proposed so far can saturate {\it r}-modes at the low amplitudes we find
here), however, in this paper our primary interest is to obtain upper
bounds on {\it r}-mode amplitudes from observations of NS
transients and this does not require a precise understanding of the
saturation mechanism. Understanding the detailed physics of the
saturation mechanism is an important issue, but it is beyond the scope
of this paper.
\begin{figure}
\includegraphics[scale=0.9]{Omega_T_Plot3.eps}%
\caption{\label{fig:Omega-T} The {\it r}-mode instability region for a $1.4\,
M_{\odot}$ NS constructed with the APR EOS
in the spin frequency vs. core temperature plane. Also shown are some of
the LMXBs which have been considered in
this work. The horizontal line extending rightward from the temperature
symbols (the black squares) shows the difference between two models for
relating the surface temperature to the core temperature (i.e. the difference
between a fully or partially accreted envelope). The difference
between the core temperatures in these two cases gets larger as the
surface temperature increases, but even for the highest surface
temperature considered in this work the difference is not large enough
to change our results.\\
(A color version of this figure is available in the online journal.)}
\end{figure}
Here we note that if we were to consider the existence of exotic forms
of matter, such as strange quark matter in the star, then the shape of
the instability window would change due to their different shear and
bulk viscosities, and it is possible that the LMXBs considered here
might fall outside of the resulting instability window
\citep{Alford:2010fd,2012MNRAS.424...93H,Schwenzer:2012ga}. The shape
of the instability window may also be different due to crust boundary
layer effects \citep{Ho:2011tt}, but even in that case for realistic
boundary layer models most of the LMXB sources are in the unstable
region. Therefore, we assume that these sources have unstable {\it r}-modes
which are emitting gravitational waves but the amplitude of the {\it r}-mode
is not growing exponentially, and there is a mechanism that can
saturate the growth of the {\it r}-mode. In addition, we assume that all of
these stars are made of normal hadronic matter (constructed with the
APR EOS) and all of the sources that are in the unstable region for
hadronic stars in Figure~\ref{fig:Omega-T} are emitting gravitational
waves at a constant amplitude.
Since we do not know which mechanism is actually responsible for
saturating the {\it r}-mode amplitude, we simply assume that there is a
nonlinear mechanism that saturates the mode. When $\alpha$ hits the
saturation amplitude, the right hand side of
Equation~(\ref{eq:evolution-alpha}) becomes zero which implies that
\begin{align}
\frac{1}{\tau_V} &=\frac{1}{\tau_G}\frac{1}{1-\alpha^2 Q} \; ,
\end{align}
where we have neglected the last term in Equation~(\ref{eq:evolution-alpha})
since it is much smaller than the other terms.
Therefore, when the amplitude is saturated, in all of the evolution
equations $\frac{1}{\tau_V}$ should be replaced by
$\frac{1}{\tau_G}(\frac{1}{1-\alpha^2 Q})$ and $|P_V|$ by
$\frac{P_G}{1-\alpha^2 Q}$. Since viscosity alone cannot stop the
growth of the {\it r}-mode and an extra mechanism is needed to do so,
at saturation $\frac{1}{\tau_G}(\frac{1}{1-\alpha^2 Q})$ is larger
than $\frac{1}{\tau_V}$, and the reheating term on the right hand side
of the temperature evolution equation
(Equation~(\ref{eq:evolution-T})) will be larger than the reheating due to
bulk and shear viscosity \citep{Alford:2012jc}.
\begin{table}
\renewcommand{\arraystretch}{1.5}
\caption{Constraints from Spin Equilibrium\label{tab:spin-equilibrium alpha}}
\scalebox{0.75}{
\begin{tabular}{crrrrr}
\tableline\tableline
Source & $\Delta=\frac{t_o}{t_r}$& $\dot{\nu}_{su}$& $\alpha_{sp.eq} $ & $\alpha_{sp.eq} $ & $\alpha_{sp.eq} $ \tabularnewline
&& (Hz s$^{-1}$) &$(1.4\, M_{\odot})$& $(2.0\, M_{\odot})$ & $ (2.21\, M_{\odot})$\tabularnewline
\tableline
IGR J$00291$&$\frac{13}{1363} $& $5\times 10^{-13}$&$1.46 \times 10^{-7}$ & $1.22\times 10^{-7}$ &$1.41\times 10^{-7}$ \\
SAX J$1808$-$3658$ &$\frac{40}{2\times 365}$& $2.5\times 10^{-14}$ &$3.20 \times 10^{-7}$ & $2.66 \times 10^{-7}$ &$3.08 \times 10^{-7}$\\
XTE J$1814$-$338$ &$\frac{50}{19 \times 365}$&$1.5\times 10^{-14}$ &$2.12\times 10^{-7}$ & $1.76 \times 10^{-7}$&$2.04 \times 10^{-7}$\\
\tableline
\end{tabular}}
\tablecomments{Duty cycle $\Delta$, spin-up rate $\dot{\nu}_{su}$
\citep{Patruno:2012ab}, and upper limits on {\it r}-mode amplitudes from the
``spin-equilibrium'' condition $\alpha_{sp.eq} $, for three neutron star transients with
assumed masses of $1.4\, M_{\odot}$, $2.0\, M_{\odot}$ and $2.21\,
M_{\odot}$.}
\end{table}
\section{Constraining the {\it\lowercase{r}}-mode amplitude}
\subsection{Constraints from ``Spin Equilibrium''}
Here we compare two different methods for constraining the {\it r}-mode
amplitude, $\alpha$, from observations of LMXB NS
transients. The first one, which gives larger values for $\alpha$, is
based on the spin equilibrium assumption
\citep{2000ApJ...536..915B,Ho:2011tt,2011ApJ...738L..14H,Watts:2008qw}
where we assume that in an outburst-quiescence cycle all the spin-up
torque due to accretion during the outburst is balanced by the {\it r}-mode
spin-down torque due to gravitational radiation in the whole
cycle. This is similar to the prescription considered by previous
authors \citep{2000ApJ...536..915B,Ho:2011tt}, but rather than using a
``fiducial'' torque estimated from the long-term average $\dot{M}$, we
can now use the observed spin-up rates and outburst properties to
directly constrain the torque. Therefore we have
\begin{equation}
2\pi I \dot{\nu} \Delta= \frac{2 J_c}{\tau_G}\label{eq:spin-equilibrium}
\end{equation}
where $\dot{\nu}$ is the spin-up rate during outburst and
$\Delta=\frac{t_o}{t_r}$ is the ratio of the outburst duration, $t_o$,
to the recurrence time, $t_r$. We can estimate the left hand side of
Equation~(\ref{eq:spin-equilibrium}) from X-ray observations of LMXBs and
since the right hand side is a function of $\alpha$ through $J_c$ (see
Equation~(\ref{eq:J_c})) we can determine $\alpha$. In
Table \ref{tab:spin-equilibrium alpha} we give the results for $\alpha$
for three sources for which there are now estimates of the spin-up
rate due to accretion. The values of $\alpha$ are given for the three
different NS models considered in this work, and they are
all in the range of $\approx 1 - 3 \times 10^{-7}$.
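As a rough numerical cross-check of Equation~(\ref{eq:spin-equilibrium}), the sketch below solves the spin-equilibrium condition for $\alpha$ for IGR J$00291$. The stellar parameters ($R$, $I$, $\tilde{J}$) and the gravitational-radiation timescale below are the standard $n=1$ polytrope fiducials of Lindblom, Owen \& Morsink (1998) rather than the APR-based models used in this paper, and the canonical angular-momentum convention $|J_c|=\frac{3}{2}\alpha^2\tilde{J}MR^2\Omega$ is an assumption, so only order-of-magnitude agreement with Table~\ref{tab:spin-equilibrium alpha} should be expected:

```python
import math

# Physical constants (cgs)
G = 6.674e-8        # cm^3 g^-1 s^-2
MSUN = 1.989e33     # g

# Fiducial n=1 polytrope model (assumptions; the paper uses APR-based models)
M = 1.4 * MSUN
R = 12.53e5                 # cm
I = 0.261 * M * R**2        # moment of inertia, Owen et al. (1998) fiducial
Jtilde = 1.635e-2           # dimensionless r-mode constant

def tau_G(nu):
    """Gravitational-radiation growth time of the l=m=2 r-mode (s), using
    the fiducial scaling 1/tau_G = (1/47 s) * (Omega^2/(pi*G*rhobar))^3."""
    Omega = 2.0 * math.pi * nu
    rhobar = 3.0 * M / (4.0 * math.pi * R**3)
    return 47.0 / (Omega**2 / (math.pi * G * rhobar))**3

def alpha_spin_eq(nu, nudot_su, Delta):
    """Solve 2*pi*I*nudot*Delta = 2*J_c/tau_G for alpha, with the assumed
    canonical r-mode angular momentum |J_c| = (3/2)*alpha^2*Jtilde*M*R^2*Omega."""
    Omega = 2.0 * math.pi * nu
    alpha2 = 2.0 * math.pi * I * nudot_su * Delta * tau_G(nu) \
             / (3.0 * Jtilde * M * R**2 * Omega)
    return math.sqrt(alpha2)

# IGR J00291: nu = 599 Hz, spin-up rate 5e-13 Hz/s, duty cycle 13/1363
alpha = alpha_spin_eq(599.0, 5e-13, 13.0 / 1363.0)
print(f"alpha ~ {alpha:.1e}")   # order 1e-7
```

With these fiducials the sketch returns $\alpha\sim5\times10^{-7}$, within a factor of a few of the tabulated $1.46\times10^{-7}$; the residual difference reflects the APR stellar structure used in the paper.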
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\caption{Estimates of NS Temperatures and Luminosities\label{tab:temperature-luminosities}}
\scalebox{0.99}{
\begin{tabular}{crrrrrrrrr}
\tableline\tableline
Source & $\nu_s$ (Hz) & Distance & $kT_{eff}$ & $T_{core}$ (K) & $T_{core}$ (K) & $L_{\gamma}$ (erg s$^{-1}$) & $L_{\nu}$ (erg s$^{-1}$) & $L_{\gamma}$ (erg s$^{-1}$) & $L_{\nu}$ (erg s$^{-1}$)\\
& & (kpc) & (eV) & $(1.4\, M_{\odot})$ & $(2.21\, M_{\odot})$ & $(1.4\, M_{\odot})$ & $(1.4\, M_{\odot})$ & $(2.21\, M_{\odot})$ & $(2.21\, M_{\odot})$\\
\tableline
$4$U $1608$-$522$ & $620$&$4.1\pm 0.4$&$170$& $1.25\times 10^{8}$&$1.34 \times 10^8$ &$1.21\times 10^{34}$&$2.16\times 10^{32}$&$3.03\times 10^{34}$&$2.20\times 10^{39}$\\
IGR J$00291$+$5934$ & $599$&$5\pm 1$&$71$& $2.96\times 10^{7}$&$3.18 \times 10^7$ &$3.65\times 10^{32}$&$2.06\times 10^{27}$&$9.27\times 10^{32}$&$3.87\times 10^{35}$\\
MXB $1659$-$29$&$556$&$11.5\pm 1.5$&$55$&$1.96 \times 10^{7}$&$2.07 \times 10^{7}$ & $1.35\times 10^{32}$&$7.64\times 10^{25}$&$3.29\times 10^{32}$&$2.96\times 10^{34}$\\
Aql X-1& $550$&$4.55\pm 1.35$&$94$& $4.70\times 10^{7}$ & $5.06 \times 10^7$&$1.12\times 10^{33}$&$8.40\times 10^{28}$&$2.87\times 10^{33}$&$6.35\times 10^{36}$\\
KS $1731$-$260$&$524$&$7.2\pm 1.0$& $70$&$2.89 \times 10^{7}$&$3.12 \times 10^{7}$ &$3.44\times 10^{32}$&$1.70\times 10^{27}$&$8.88\times 10^{32}$&$3.47\times 10^{35}$\\
XTE J$1751$-$305$&$435$&$9 \pm 3$&$<71$& $2.96\times 10^{7}$&$3.18\times 10^7 $ &$3.65\times 10^{32}$&$2.06\times 10^{27}$&$9.27\times 10^{32}$&$3.87\times 10^{35}$\\
SAX J$1808$-$3658$ & $401$&$3.5\pm 0.1$&$<30$& $7.23\times 10^{6}$& $7.69 \times 10^6$ &$1.21\times 10^{31}$&$2.63\times 10^{22}$&$2.99\times 10^{31}$&$7.78\times 10^{31}$\\
XTE J$1814$-$338$&$314$&$6.7 \pm 2.9$&$<69$&$2.82\times 10^{7}$&$3.01 \times 10^7 $&$3.24\times 10^{32}$&$1.39\times 10^{27}$&$8.12\times 10^{32}$&$2.78\times 10^{35}$\\
NGC $6440$&$205$&$8.5 \pm 0.4$&$87$&$4.11 \times 10^{7}$&$4.46 \times 10^{7}$ & $8.10\times 10^{32}$&$2.88\times 10^{28}$&$2.11\times 10^{33}$&$2.97\times 10^{36}$\\
XTE J$1807$-$294$&$191$&$8.35 \pm 3.65$&$<51$&$1.72\times 10^{7}$&$1.83\times 10^7 $ &$9.84\times 10^{31}$&$2.71\times 10^{25}$&$2.46\times 10^{32}$&$1.44\times 10^{34}$\\
XTE J$0929$-$314$&$185$&$7.8 \pm 4.2$&$<50$&$1.66 \times 10^{7}$& $1.79 \times 10^7 $ &$9.06\times 10^{31}$&$2.06\times 10^{25}$&$2.31\times 10^{32}$&$1.23\times 10^{34}$\\
\tableline
\end{tabular}}
\tablecomments{Spin frequency, distance
to the source \citep{Watts:2008qw}, effective temperature at the
surface of the star \citep{Heinke:2006ie,Heinke:2008vj,Tomsick:2004pf},
core temperature, photon luminosity at the surface of the star and
neutrino luminosity for both $1.4\, M_{\odot}$ and $2.21\, M_{\odot}$
NSs. Note that $kT_{eff} $ given in this table is for a $1.4\,
M_{\odot}$ NS with a radius of $10$ km but in computing the core
temperatures and luminosities for different NS models the
appropriate redshifts have been used. We note that for the sake of
brevity the core temperatures for the $2\, M_{\odot}$ models are not
included in the table; however, their values are all $\approx 5\%$
less than those for the $2.21\, M_{\odot}$ model.}
\end{table*}
At these amplitudes the inferred {\it r}-mode spin-down rate would be
competitive with the magnetic dipole spin-down rate which almost
certainly exists in these LMXBs, and which is quite likely the
dominant spin-down mechanism. Moreover, the amplitudes are also
comparable to those deduced assuming {\it r}-mode spin equilibrium with the
``fiducial'' accretion torques estimated by
\citet{2000ApJ...536..915B}. Those authors also demonstrated that at
such amplitudes some of these objects should have significantly higher
quiescent luminosities due to {\it r}-mode reheating than observed. These
results suggest that the {\it r}-mode torque does not balance the
accretion-driven spin-up torque and that {\it r}-mode amplitude estimates
based on the ``spin equilibrium'' assumption will overestimate the
true amplitude.
\subsection{Constraints from ``Thermal Equilibrium''}
Here, we use the same thermal equilibrium argument outlined by Brown
\& Ushomirsky, but rather than estimating the quiescent luminosity
using the {\it r}-mode amplitude deduced from ``spin equilibrium,'' we use
observations of the quiescent luminosity of LMXBs to directly
constrain the amplitude of the {\it r}-mode. This works because in a
steady state, gravitational radiation pumps energy into the {\it r}-mode at
a rate given by $W_d = (1/3) \Omega \dot J_c = -2E_c/ \tau_G$. This
expression has the familiar form of the power delivered by an
applied torque, which in this case is simply the {\it r}-mode torque due to
gravitational radiation. In a thermal steady-state all of this energy
must be dissipated in the star. Some fraction of this heat will be
lost from the star due to neutrino emission and the rest will be
radiated at the surface. It should be mentioned that the thermal
steady-state is not an assumption, but a rigorous result when the mode
is saturated, and in particular it is independent of the cooling
mechanism \citep{Alford:2012yn}. We further assume that all of the
energy emitted from the star during quiescence is due
to the {\it r}-mode dissipation inside the star. This is equivalent to
setting $H=0$ in Equation (12c). The resulting {\it r}-mode amplitude limits are
upper bounds in the sense that the observed luminosity reflects the
contribution from {\it r}-mode heating as well as any additional sources of
heat that are present, such as for example due to accretion and the
nuclear processing of accreted material, so-called deep crustal
heating \citep{2003A&A...404L..33H, 2007ApJ...662.1188G}. If any
such sources of heat are present then the actual {\it r}-mode amplitude will
be less than the upper bounds given here. For the sources that we
study in this work, since we know the values of surface temperatures
and quiescent luminosities from observations, we can estimate the core
temperature and therefore determine the neutrino luminosities to
estimate the total amount of heat deposited in the core of these
systems by gravitational radiation.
To compute the core temperatures we use Equation (A8) in
\citet{Potekhin:1997mn}, which relates the effective surface temperature
of the star $T_{eff}$ to the internal temperature $T_b$, which is the
temperature at a fiducial boundary at $\rho_b=10^{10}$ g cm$^{-3}$ for
a fully accreted envelope and is valid at not too high temperatures
$(T_b \leq 10^8 K)$,
\begin{equation}
\left(\frac{T_{eff}}{10^6\,{\rm K}}\right)^4=\left(\frac{g}{10^{14}\,{\rm cm\,s^{-2}}}\right)\left(18.1\,\frac{T_b}{10^9\,{\rm K}}\right)^{2.42}\label{eq:T_b potekhin}
\end{equation}
where $g=GM/(R^2 \sqrt{1-r_g/R})$ is the surface gravity and
$r_g=2GM/c^2$. Here we assume that the NS's core is
isothermal and since the thermal conductivity of the crust is high
\citep{Brown:2009kw} we have $T_{core}=T_b$ to good approximation. If
we instead use Equation (A9) in \citet{Potekhin:1997mn}, which gives
$T_b$ for a partially accreted envelope with a column depth of
$\frac{P}{g}=10^9$ g cm$^{-2}$ \citep{2012MNRAS.424...93H}, we get core
temperatures slightly higher than those derived from Equation
(\ref{eq:T_b potekhin}). The rightward extent of the error bars on the
temperatures in Figure~\ref{fig:Omega-T} shows this difference. It is
really only relevant for a single source, 4U 1608-522, but even in this
case it is less than a $50\%$ increase, and the difference is always
small enough that it does not qualitatively change our results in the
remainder of the paper. To compute $T_b$ we have used the effective surface
temperatures, $T_{eff}$, given by
\citet{Heinke:2006ie,Heinke:2008vj} and \citet{Tomsick:2004pf}\footnote{It should
be emphasized that the temperatures given in Table 2 of
\citet{Heinke:2006ie,Heinke:2008vj} are effective temperatures at the
surface of the star and not redshifted surface temperatures seen by an
observer at infinity. We note that they have been incorrectly assumed
to be redshifted temperatures in \citet{Degenaar:2012tw} and
\citet{2012MNRAS.424...93H}. }. Note that those temperatures are
computed for a $1.4\, M_{\odot}$ NS with a $10$ km radius and we have
used the appropriate redshifts to compute $T_{core}$ for our neutron
star models.
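The inversion of Equation~(\ref{eq:T_b potekhin}) used here can be sketched numerically. The $1.4\, M_{\odot}$, $10$ km configuration matches the convention used for the tabulated $kT_{eff}$ values; the CGS constants are standard:

```python
import math

# Constants (cgs)
G = 6.674e-8
C = 2.998e10
MSUN = 1.989e33
KB_EV = 8.617e-5     # Boltzmann constant in eV/K

def surface_gravity(M, R):
    """g = GM / (R^2 sqrt(1 - r_g/R)) with r_g = 2GM/c^2, in cm s^-2."""
    rg = 2.0 * G * M / C**2
    return G * M / (R**2 * math.sqrt(1.0 - rg / R))

def core_temperature(kTeff_eV, M, R):
    """Invert the T_eff-T_b relation (Potekhin et al. 1997, Eq. A8, fully
    accreted envelope) for the internal temperature T_b ~= T_core."""
    Teff = kTeff_eV / KB_EV                     # K
    g = surface_gravity(M, R)
    lhs = (Teff / 1e6)**4 / (g / 1e14)          # = (18.1 T_b / 1e9 K)^2.42
    return 1e9 / 18.1 * lhs**(1.0 / 2.42)       # K

# 4U 1608-522: kT_eff = 170 eV, quoted for a 1.4 Msun, 10 km star
Tb = core_temperature(170.0, 1.4 * MSUN, 1.0e6)
print(f"T_core ~ {Tb:.2e} K")   # ~1.2e8 K
```

For 4U 1608-522 this gives $T_{core}\approx1.2\times10^{8}$ K, close to the $1.25\times10^{8}$ K of Table~\ref{tab:temperature-luminosities}; the small difference presumably comes from the redshift treatment used for the individual stellar models.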
Having surface and core temperatures for these sources we can use
Equations~(\ref{eq:neutrino-luminosity}) and (\ref{eq:thermal-luminosity}) to
evaluate their neutrino and thermal luminosities for different stellar
models. The values of the core temperatures, as well as the photon and
neutrino luminosities for different sources for the $1.4\, M_{\odot}$
and $2.21\, M_{\odot}$ NS models are computed and given in
Table~\ref{tab:temperature-luminosities}. Note that in the case of
$1.4\, M_{\odot}$ NSs the neutrino luminosity is only due to the
modified Urca reactions, but for the $2.21\, M_{\odot}$ NS it is due
to both direct and modified Urca reactions. By comparing the neutrino
and thermal luminosities in Table \ref{tab:temperature-luminosities}
one can see that at temperatures relevant for the LMXBs the standard
neutrino cooling from modified Urca reactions is negligible compared
to the photon emission from the surface of the star. However, if the
star is massive enough to enable direct Urca reactions in the core
(such as for our $2.21\, M_{\odot}$ NS model) then the
neutrino emission will dominate the cooling process for surface
temperatures higher than about $34$ eV \citep{Brown:1998qv}.
The thermal equilibrium condition can be written as $W_d =
L_{\nu}+L_{\gamma}$, where reheating due to {\it r}-mode dissipation is
given by $W_d=\frac{-2E_c}{\tau_{GR}}$ and is a function of {\it r}-mode
amplitude, $\alpha$. Therefore, $\alpha$ can be written in terms of
luminosities as
\begin{equation}
\alpha=\frac{5\times 3^4}{2^8 \tilde{J} M R^3 \Omega^4}\left(\frac{L_{\gamma}+L_{\nu}}{2 \pi G}\right)^{1/2}\label{eq:alpha_thermal}
\end{equation}
where $L_{\gamma} = 4\pi R^2 \sigma T_{eff}^4$ is the thermal photon
luminosity at the surface of the star. Here, $R$ and $T_{eff}$ are the
stellar radius and surface temperature, respectively, and the neutrino
luminosity is given by
\begin{equation}
L_{\nu}=\frac{4 \pi R_{DU}^3\Lambda_{QCD}^3 \tilde{L}_{DU}}{\Lambda_{EW}^4}T^6+\frac{4 \pi R^3 \Lambda_{QCD} \tilde{L}_{MU}}{\Lambda_{EW}^4}T^8
\end{equation}
where $T$ is the core temperature, $R_{DU}$ is the radius of the core
region where direct Urca neutrino emission is allowed, and
$\tilde{L}_{DU}$ and $\tilde{L}_{MU}$ are dimensionless parameters
given in Table~\ref{tab:viscosity-parameters}. The thermal equilibrium
condition for an NS with standard neutrino cooling ($1.4\, M_{\odot}$
and $2.0\, M_{\odot}$ NSs in this study) can be approximated as $W_d
\simeq L_{\gamma}$, since the neutrino cooling in this case is
negligible compared to the surface photon luminosity.
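A numerical sketch of Equation~(\ref{eq:alpha_thermal}) is given below. The printed formula appears to be written in units with $c=1$, so factors of $c$ are restored here by dimensional analysis; the radius and $\tilde{J}$ are fiducial assumptions rather than the paper's APR values, so the result should track Table~\ref{tab:alpha} only to order of magnitude:

```python
import math

# Constants (cgs)
G = 6.674e-8
C = 2.998e10
MSUN = 1.989e33

# Hypothetical 1.4 Msun model parameters (assumptions; the paper's APR-based
# values of R and Jtilde are tabulated elsewhere in the text)
M = 1.4 * MSUN
R = 11.5e5               # cm
Jtilde = 1.635e-2

def alpha_thermal_eq(nu, L_total):
    """Thermal-equilibrium amplitude with factors of c restored:
    alpha = (5*3^4 / (2^8 Jtilde M R^3 Omega^4)) * sqrt(L c^7 / (2 pi G))."""
    Omega = 2.0 * math.pi * nu
    pref = 5.0 * 3**4 / (2**8 * Jtilde * M * R**3 * Omega**4)
    return pref * math.sqrt(L_total * C**7 / (2.0 * math.pi * G))

# SAX J1808-3658: nu = 401 Hz, L_gamma + L_nu ~ 1.21e31 erg/s
alpha = alpha_thermal_eq(401.0, 1.21e31)
print(f"alpha < {alpha:.1e}")   # a few times 1e-8
```

For SAX J$1808$ this yields $\alpha$ of a few $\times10^{-8}$, the same order as the tabulated $1.28\times10^{-8}$.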
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\caption{Upper Bounds on {\it\lowercase{r}}-mode Amplitudes and NS Spin-down Rates\label{tab:alpha} }
\begin{center}
\begin{tabular}{crrrrrrr}
\tableline\tableline
Source & $\alpha_{th.eq}$ & $\alpha_{th.eq}$ & $\alpha_{th.eq} $ & $\dot{\nu}$ (Hz s$^{-1}$) &$\dot{\nu}$ (Hz s$^{-1}$) &$\dot{\nu}$ (Hz s$^{-1}$) &$\dot{\nu}_{sd}$ (Hz s$^{-1}$) \tabularnewline
&$(1.4\, M_{\odot})$&$(2.0\, M_{\odot})$&$(2.21\, M_{\odot})$ &$(1.4\, M_{\odot})$&$(2.0\, M_{\odot})$&$(2.21\, M_{\odot})$&observation \\
\tableline
$4$U $1608$-$522$ &$7.15\times 10^{-8}$ &$6.60\times 10^{-8}$ &$2.61\times 10^{-5}$ &$-1.44\times 10^{-15}$ &$-1.78\times 10^{-15}$ &$-2.08\times 10^{-10}$ &\\
IGR J$00291$+$5934$ & $1.41\times 10^{-8}$ &$1.32\times 10^{-8}$ &$3.99\times 10^{-7}$ & $-4.42\times 10^{-17}$ &$-5.59\times 10^{-17}$ &$-3.82\times 10^{-14}$ &$-3\times 10^{-15}$ \\
MXB $1659$-$29$&$1.16\times 10^{-8}$ &$1.07\times 10^{-8}$ &$1.49\times 10^{-7}$ & $-1.78\times 10^{-17}$ &$-2.18\times 10^{-17}$ &$-3.16\times 10^{-15}$ &\\
Aql X-1&$3.49\times 10^{-8}$ &$3.27\times 10^{-8}$ &$2.26\times 10^{-6}$ & $-1.49\times 10^{-16}$ &$-1.89\times 10^{-16}$ &$-6.74\times 10^{-13}$ &\\
KS $1731$-$260$&$2.35\times 10^{-8}$ &$2.20\times 10^{-8}$ &$6.44\times 10^{-7}$ & $-4.81\times 10^{-17}$&$-6.09\times 10^{-17}$&$-3.90\times 10^{-14}$&\\
XTE J$1751$-$305$&$5.09\times 10^{-8}$ &$4.76\times 10^{-8}$ &$1.44\times 10^{-6}$ & $-6.13\times 10^{-17}$ &$-7.74\times 10^{-17}$ &$-5.29\times 10^{-14}$ &$-5.5\times 10^{-15}$\\
SAX J$1808$-$3658$ &$1.28\times 10^{-8}$ &$1.19\times 10^{-8}$ &$3.30\times 10^{-8}$ & $-2.19\times 10^{-18}$ &$-2.74\times 10^{-18}$ &$-1.57\times 10^{-17}$ &$-5.5\times 10^{-16}$\\
XTE J$1814$-$338$ &$1.76\times 10^{-7}$ &$1.67\times 10^{-7}$ &$4.49\times 10^{-6}$ & $-7.49\times 10^{-17}$ &$-9.73\times 10^{-17}$ &$-5.26\times 10^{-14}$ &\\
NGC $6440$&$1.54\times 10^{-6}$ &$1.45\times 10^{-6}$ &$8.03\times 10^{-5}$ & $-2.90\times 10^{-16}$ &$-3.71\times 10^{-16}$ &$-8.50\times 10^{-13}$ &\\
\tableline
\end{tabular}
\end{center}
\tablecomments{Upper bounds on the {\it r}-mode amplitude from the ``thermal
equilibrium'' condition that are consistent with quiescent luminosity
data are given for different neutron star models. The
gravitational radiation induced spin-down rates due to unstable
{\it r}-modes as well as the observed spin-down rate for some of the sources
are also given \citep{Patruno:2010qz,Patruno:2012ab,Patruno:2011gu}.}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.7]{nu_ndot.eps}%
\end{center}
\caption{\label{fig:nu-ndot} Limits on the spin-down rates due to an
{\it r}-mode torque for nine LMXB systems and a range of NS masses
are shown in the $\dot{\nu}$ vs. $\nu$ plane. The $\dot\nu$ limits
for $1.4$, $2.0$, and $2.21\, M_{\odot}$ NS models obtained from the
{\it r}-mode amplitude limits derived from observations of quiescent
luminosities and temperatures (see the discussion in Section 3) are
marked by the $\times$, square, and circle symbols, respectively. The
vertical green lines connecting the symbols show the full range of
$\dot{\nu}$ for each source (labeled). Also shown are two pairs of
parallel lines representing magnetic-dipole (solid) and {\it r}-mode
(dashed) braking laws. Lines are drawn for two values of the magnetic
field, $10^8$ G (lower), and $10^9$ G (upper), as well as two values
of $\alpha$, $5\times 10^{-8}$ (lower) and $5\times 10^{-6}$
(upper). For systems with measured, quiescent spin-down rates, these
values are marked with the red diamond symbols. For additional
context, millisecond pulsars from the ATNF pulsar database
\citep{Manchester:2004bp} are shown and denoted by the black $+$
symbols.\\
(A color version of this figure is available in the online journal.)}
\end{figure*}
As can be seen in Figure~\ref{fig:Omega-T}, of the $11$ sources
considered in this paper all but two are likely to have unstable
{\it r}-modes, meaning that they lie above the {\it r}-mode instability
curve. The two most slowly rotating sources, XTE J$1807$ and XTE
J$0929$, are outside the instability region for our NS
models, which means they likely can no longer spin down due to
gravitational radiation from an {\it r}-mode\footnote{It has been shown by
\citet{Alford:2010fd} that the boundary of the instability region is
insensitive to the quantitative details of the microscopic
interactions that induce viscous damping in a given phase of dense
matter.}. Therefore, we only evaluate the upper bounds on the {\it r}-mode
amplitude for those nine sources within the instability window. Using
Equation~(\ref{eq:alpha_thermal}), we have evaluated $\alpha$ for all of
those sources using the three different NS models considered in this
work. The values of $\alpha$ are given in Table~\ref{tab:alpha}. As
can be seen for the $1.4\, M_{\odot}$ and $2.0\, M_{\odot}$ NSs, where
there is no enhanced neutrino emission, the values of $\alpha$ range
from $1.07\times 10^{-8}$ to $1.54\times 10^{-6}$, where NGC
$6440$--which has the lowest spin frequency--has the highest {\it r}-mode
amplitude, as expected. In the case of $2.21\, M_{\odot}$ NSs the
upper bounds on $\alpha$ are larger because the direct Urca neutrino
emission allows $W_d$ to be larger, and $\alpha$ in this case
ranges from $3.30\times 10^{-8}$ to $8.03\times 10^{-5}$. $4$U
$1608$-$522$ has the highest temperature among the sources considered
which implies a very large neutrino luminosity for the $2.21\,
M_{\odot}$ NS model and as a result a large {\it r}-mode amplitude. The
large value of the {\it r}-mode amplitude in this source may be ruled out by measurement of its spin-down rate, as
is explained further in the next section. The high temperature
of $4$U $1608$-$522$ could be explained if it has a lower mass. It
should also be noted that this system has been accreting for a long
time, so both its high quiescent luminosity and surface temperature
may be due to the long-term accretion, in which case our assumption of
ascribing all the quiescent luminosity to heat that comes from inside
the star may not be a good estimate for this system; here, however, we
are only interested in obtaining upper limits on the {\it r}-mode amplitude.
\subsection{{\it r}-mode Spin-down}
To see whether or not these results are plausible and what fraction of
the quiescent spin-down of these sources can be due to gravitational
radiation from {\it r}-mode oscillations, we use our results for $\alpha$
from the ``thermal equilibrium'' condition and insert them into the
right hand side of Equation~(\ref{eq:evolution-Omega}) to determine
$\dot{\nu}$ in the absence of accretion ($N_{acc} = 0$). The derived
spin-down values for different NS models are given in
Table~\ref{tab:alpha}, and are shown graphically in
Figure~\ref{fig:nu-ndot} (vertical green lines). Comparing these results
with the observed spin-down rates, which exist for IGR J$00291$, XTE
$1751$-$305$ and SAX J$1808$ (red diamonds in Figure~\ref{fig:nu-ndot}),
we find that in the case of $1.4\, M_{\odot}$ ($\times$ symbols) and
$2.0\, M_{\odot}$ (square symbols) NSs, where there is no fast
neutrino cooling present in the star, the {\it r}-mode spin-down can only
provide about $1\%$ of the observed spin-down rate, which means that
other spin-down mechanisms such as magnetic-dipole radiation are
responsible for spinning down a $1.4\, M_{\odot}$ or $2.0\, M_{\odot}$
hadronic star with no fast cooling process. For the two AMXPs with
relatively ``slow'' spin frequencies, NGC 6440 and XTE J1814-338, we
find {\it r}-mode spin-down limits that are more competitive with observed
values. Indeed, our limit for NGC 6440 is comparable to the measured,
quiescent spin-down rate for SAX J1808, and thus spin-down measurements
for this source would be particularly interesting.
On the other hand, the {\it r}-mode amplitudes we obtain for the $2.21\,
M_{\odot}$ (circle symbols) hadronic model with direct Urca neutrino
emission are only consistent with observations for SAX J1808 (and
perhaps MXB 1659), as the inferred spin-down rates are either less
than the observed rate for the source--in the case of SAX J1808--or
similar to the other observed rates, as for MXB 1659. For the
remaining sources considered here the $2.21\, M_{\odot}$ limits are
likely not consistent with the observations since such large
amplitudes imply very large {\it r}-mode spin-down rates, and in the case of
IGR J$00291$ and XTE J$1814$ they are in fact larger than the observed
values. If the neutrino luminosity from these sources was indeed as
large as estimated with our $2.21\, M_{\odot}$ model, then in thermal
equilibrium there must be a heat source that can supply it. Since the
spin-down measurements for these sources indicate that {\it r}-mode heating
(for this model) would be insufficient, several possibilities
remain. First, there could be some additional source of heat other
than {\it r}-mode dissipation that supplies the needed energy. However, we
note that it would need to supply a substantial luminosity, as the
direct Urca neutrino emission for this model outshines the photon
luminosity by more than an order of magnitude, and we are not aware of
any simple mechanisms that could provide the required
luminosity. Second, the actual mass of these systems could be less
than that of the model in question ($2.21\, M_{\odot}$). Indeed, if it
could be demonstrated that {\it r}-mode dissipation were the only mechanism
that could produce such a large luminosity then an upper limit on the
mass would follow, and the limit would be the mass for which the
neutrino plus photon luminosity matched the {\it r}-mode heating produced
when the amplitude is large enough to produce a spin-down rate equal
to the observed quiescent rate or our theoretical value for a
high mass NS model, whichever is smaller. This would be a
conservative limit in the sense that it is likely that the {\it r}-mode
torque does not account for all of the observed spin-down. Finally,
our model assumptions, for example the EOS and core composition,
could be incorrect. One possibility is the existence of exotic matter
in the core, such as kaon or pion condensates or quark matter, which
has smaller neutrino emissivities than nucleon direct Urca processes;
another is that the pairing gaps for $^3P_2$ neutrons and $^1S_0$
protons are larger than current theoretical values (this is explained
in more detail in the next paragraph). Interestingly, if the masses of
these systems were known then one of the possibilities outlined above
would be precluded, and the observationally derived {\it r}-mode
limits would become sensitive to properties of the core, either the
presence of exotic matter or perhaps additional heating physics. In
this sense further spin-down measurements and, where possible, mass
constraints could provide interesting new insights into the physics of
dense NS matter.
Our theoretical treatment of neutrino emission processes in this
study, namely modified and direct Urca, spans the plausible range
between ``slow'' and ``fast'' neutrino cooling processes. Here we have
considered NS models made of non-superfluid hadronic matter with the
APR EOS, but more realistically it is likely that
neutrons and protons will be in a superfluid phase inside
NSs. Therefore, a natural question would be whether or not our conclusions
will still hold in the presence of superfluidity. Superfluidity in
these sources, assuming that they remain inside the unstable region
for {\it r}-modes, could affect their cooling in two ways. The first
is neutrino emission due to Cooper pair breaking and formation (PBF)
just below the superfluid critical temperature \citep{Page:2004fy},
and the second is the suppression of direct and modified Urca neutrino
cooling at temperatures below the critical temperature. Here we explain why our
qualitative results, and our argument about setting upper bounds on
the masses of these sources, are not changed by considering the effect
of superfluidity on the neutrino cooling of these sources. In our low
mass NS models ($M<2~M_{\odot}$) where there is no fast cooling
mechanism in the core, the thermal luminosity is much larger than the
modified Urca neutrino luminosity, in fact by more than five orders of
magnitude in all of the sources considered here except 4U
1608. Therefore, even if the temperature of these sources were just
below the critical temperature of superfluidity, neutrino emission due
to PBF, which is only about 10 times stronger than modified Urca
neutrino emission, would still be much smaller than the thermal
emission. Therefore, suppression of neutrino emission, or neutrino
emission due to PBF is not important for our low mass NS models. What
about high mass NSs where fast cooling can happen in the core? Since
direct Urca neutrino emission is much stronger than neutrino emission
due to PBF, if direct Urca is not suppressed by superfluidity, it will
be the dominant cooling mechanism. Now the question is whether or not neutron
and proton pairing can suppress direct Urca neutrino emission in the
core. Current theoretical results for the pairing gaps in
$^3P_2$ neutron superfluid \citep{Schwenk:2003bc, Dong:2013sqa} and
$^1S_0$ proton superconducting phases (see for example
\citet{Page:2004fy,Yakovlev:2004iq}), which are relevant in the core
of NSs, suggest that both of these
pairing gaps are vanishingly small in the inner core, where the direct Urca process can operate. Therefore, superfluidity is unlikely to
suppress fast cooling processes in the core of NSs and thus neglecting
the effect of superfluidity does not change our qualitative results,
assuming that {\it r}-modes are still unstable in the presence of
superfluidity in these sources.
We also note that the density profile (density versus radius) for an NS
made of hadronic matter with the APR EOS is almost flat at the center
of the star, which means that as the mass of the star increases above
$2 M_{\odot}$ (above which direct Urca processes can operate in the
core of an NS made of hadronic matter with the APR EOS), there will be
a sizable region in the core where direct Urca processes may happen,
which can make it easier to obtain an upper limit on the mass of these
sources using spin-down measurements.
With typical values of a few $\times 10^{-8}$, our derived amplitude
upper limits suggest that for many LMXB NSs the {\it r}-modes are
likely not excited to sufficient amplitudes to substantially affect
their spin evolution. This raises the question of whether or not
unstable, steady-state {\it r}-modes actually exist in these NSs.
One possibility is that additional damping mechanisms are at work and
modify the instability window so as to render these NSs stable to
{\it r}-mode excitation; examples include viscous friction at the
crust-core boundary due to the coupling between core {\it r}-modes and
crustal torsional modes \citep{Levin:2000vq}, superfluid mutual
friction \citep{Ho:2011tt}, and the existence of exotic matter in the
core of NSs \citep{Alford:2010fd,Schwenzer:2012ga}. An interesting related question is
whether the existence of {\it r}-modes at the amplitudes estimated here can
be inferred directly from observations. Figure~\ref{fig:nu-ndot} shows
both {\it r}-mode (dashed parallel lines) and magnetic-dipole (solid
parallel lines) spin-down laws. The {\it r}-mode spin-down braking index,
$n=7$, is larger than that for magnetic dipole spin-down ($n=3$);
thus at high enough spin frequencies (well above a kHz) one might
expect that the {\it r}-mode torque would eventually become competitive with
or dominate the magnetic dipole torque. However, as of yet there are
no known NSs spinning fast enough for this effect to become
dominant, and depending on the EOS the mass-shedding limit might be
reached before the {\it r}-mode torque becomes competitive with the magnetic
torque.
As discussed above, quiescent spin-down measurements have been
typically attributed to the magnetic torque. For a number of sources
considered here our {\it r}-mode amplitude limits support this
presumption. Any spin-down contribution from an {\it r}-mode torque would be
more easily identifiable if the magnetic field strengths of these
NSs were constrained independently from the magnetic
spin-down estimate. Moreover, identifying an {\it r}-mode spin-down would,
in principle, be simpler for those NSs with the lowest
magnetic torques, and thus field strengths. At present, the lowest
inferred dipolar magnetic field strengths are $\approx 6
\times 10^{7}$ G. At this level the magnetic spin-down is of the order
of $\dot{\nu} \approx 5 \times 10^{-17}$ Hz s$^{-1}$, which is comparable
to our derived {\it r}-mode spin-down limits for a number of sources
considered here, assuming that their NSs are $< 2\, M_{\odot}$. In
this regard, quiescent spin-down measurements for more of the sources
considered here, in particular the AMXPs XTE J1814 and NGC
6440, would be extremely valuable.
\subsection{Gravitational Wave Amplitudes}
{\it r}-modes in NSs are one of the possible mechanisms for gravitational
wave (GW) emission and they can be observationally interesting in
newborn NSs and perhaps accreting NSs in
LMXBs \citep{Owen:2010ng}. Continuous GW emission from {\it r}-modes is
dominated by $l=m=2$ current quadrupole emission
\citep{Lindblom:1998wf}. The gravitational wave amplitude $h_0$
(strain tensor amplitude) is related to the {\it r}-mode amplitude $\alpha$
by the following equation \citep{Owen:2010ng,Owen:2009tj}
\begin{equation}
h_0=\sqrt{\frac{8\pi}{5}}\frac{G}{c^5}\frac{1}{r}\alpha\omega_r^3MR^3\tilde{J},
\end{equation}
where $r$ is the distance to the source, $M$ and $R$ are the mass and
radius of the NS, $\tilde{J}$ is the dimensionless parameter defined
by Equation~(\ref{eq:J-tilde}) and $\omega_r$ is the frequency of the {\it r}-mode,
which is related to the angular spin frequency of the star $\Omega$
(for the $m=l=2$ {\it r}-mode) by the following equation
\begin{equation}
\omega_r\approx \frac{4}{3}\Omega \; .
\end{equation}
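As an illustrative numerical check of these two relations (not part of the paper's analysis), the Python sketch below evaluates $h_0$ for fiducial inputs; the mass, radius, distance, and $\tilde{J}$ defaults are placeholder assumptions of ours, not the stellar models used in this work.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def gw_strain(alpha, nu_spin, M=1.4 * M_SUN, R=11.5e3, r=5.0 * KPC, J_tilde=1.635e-2):
    """h0 = sqrt(8*pi/5) * (G/c^5) * (1/r) * alpha * omega_r^3 * M * R^3 * J_tilde,
    with omega_r ~ (4/3) * Omega = (4/3) * 2*pi*nu_spin for the m=l=2 r-mode.
    Default M, R, r, J_tilde are illustrative fiducial values."""
    omega_r = (4.0 / 3.0) * 2.0 * math.pi * nu_spin
    return (math.sqrt(8.0 * math.pi / 5.0) * (G / c**5) / r
            * alpha * omega_r**3 * M * R**3 * J_tilde)

# Example: alpha = 1e-7 at nu = 400 Hz and d = 5 kpc gives h0 of order 1e-28,
# consistent with the range of limits quoted in the text.
print(f"h0 ~ {gw_strain(1e-7, 400.0):.2e}")
```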
Using the upper limits on the {\it r}-mode amplitude of NSs in LMXBs derived
above, we can obtain upper limits on the amplitude of the GWs emitted
from these sources due to unstable {\it r}-modes. For the sources considered
in this work, upper limits on the GW strain amplitude $h_0$ for the
$1.4\, M_{\odot}$ and $2.0\, M_{\odot}$ NS models are in the range of
$1.8\times 10^{-29}$ to $4.9\times 10^{-28}$ which is below the
anticipated detectability threshold of Advanced LIGO
\citep{Collaboration:2009rfa,Watts:2008qw}. In the case of the $2.21\,
M_{\odot}$ NS model, since the {\it r}-mode amplitudes are larger than those
for the low mass NSs, we get larger values of $h_0$, but even in this
case for most of the sources $h_0$ is still below the detectability
threshold of Advanced LIGO. The highest values of $h_0$ for a massive
star are obtained for NGC 6440 and Aql X-1 with an amplitude of
the order of $8.5 \times 10^{-27}$, and 4U 1608$-$522 with an
amplitude of $1.59 \times 10^{-25}$. However, it should be noted that
the large {\it r}-mode amplitudes in these sources, which would cause very
large spin-down rates, may eventually be ruled out by future spin-down
measurements. In this context it is important to restore the X-ray
timing capability that was lost when {\it RXTE} was decommissioned in
January 2012. Missions planned or currently in development which could
provide such a capability include India's {\it ASTROSAT}
\citep{2012hcxa.confE..95S}, ESA's {\it Large Observatory for X-ray Timing}
\citep{2012SPIE.8443E..2DF}, and NASA's {\it Neutron Star Interior
Composition Explorer} \citep{2012SPIE.8443E..13G}.
The upper limits on GW amplitudes discussed here are related to the GW
emission due to unstable {\it r}-modes but our results do not exclude the
possibility of having larger GW amplitudes in LMXBs from other GW
emission mechanisms such as NS mountains \citep{Ushomirsky:2000ax,
Haskell:2006sv}. It is worth mentioning that indirect upper limits on
GW amplitude can be obtained for sources with observed spin-down
rates, $\dot{\nu}$, by assuming that all of the observed spin-down is
due to GW emission \citep{Owen:2010ng},
\begin{equation}
h_0^{sd}=\frac{1}{r}\sqrt{\frac{45G I \dot{P}}{8c^3P}},
\end{equation}
where $P=\frac{2\pi}{\Omega}$ is the observed pulse period and
$|\frac{\dot{P}}{P}|=|\frac{\dot{\nu}}{\nu}|$. Using this equation
for the three sources with measured $\dot{\nu}$ and different NS
models (i.e. different masses and radii) we obtain $h_0^{sd}$ values that
range from $ 4.14 \times 10^{-28}$ to $ 6.53 \times 10^{-28}$.
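To reproduce the order of magnitude of such indirect limits, the formula above can be sketched in Python as follows; the moment of inertia $I = 10^{38}$ kg m$^2$ (the common fiducial $10^{45}$ g cm$^2$), as well as the example frequency, spin-down rate, and distance, are illustrative assumptions rather than values taken from the paper.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
KPC = 3.086e19  # kiloparsec, m

def spin_down_limit(nu, nu_dot, r, I=1e38):
    """Indirect limit h0_sd = (1/r) * sqrt(45*G*I*Pdot / (8*c^3*P)),
    assuming all observed spin-down goes into GW emission and using
    |Pdot/P| = |nu_dot/nu|.  I = 1e38 kg m^2 is a fiducial value."""
    ratio = abs(nu_dot) / nu  # = |Pdot| / P
    return (1.0 / r) * math.sqrt(45.0 * G * I * ratio / (8.0 * c**3))

# Example: nu = 400 Hz, nu_dot = -5e-16 Hz/s, d = 3.5 kpc
print(f"h0_sd ~ {spin_down_limit(400.0, -5e-16, 3.5 * KPC):.2e}")
```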
\section{Conclusions}
In this paper we have presented upper limits on {\it r}-mode amplitudes in
LMXB NSs using their observed quiescent luminosities,
temperatures and spin-down rates. We calculated results for NS models
constructed with the APR EOS (normal hadronic matter) with masses of
1.4, 2.0, and 2.21 $M_{\odot}$, where our highest mass model (2.21
$M_{\odot}$) can support enhanced, direct Urca neutrino emission in
the core. We have used two different methods to calculate {\it r}-mode
amplitudes. The first is based on the assumption that in an
outburst-quiescence cycle all the spin-up torque due to accretion
during the outburst is balanced by the {\it r}-mode spin-down torque due to
gravitational radiation. This method gives amplitudes in the range of
$\approx (1$--$3) \times 10^{-7}$ for the sources with measured spin-up
rates. Since in reality there are other sources of spin-down such as
magnetic-dipole radiation that may be the dominant spin-down source,
we use another method for computing the {\it r}-mode amplitude that does not
ascribe all of the spin-down of the star to gravitational radiation
and therefore gives tighter bounds on the amplitudes. This second
method is based on the assumption that in a thermal steady-state some
fraction of the heat that is generated in the star due to {\it r}-mode
dissipation will be lost from the star by neutrino emission and the
rest will be radiated at the surface. This assumes that all of the
heat emitted from the surface of the star during quiescence is due to
the {\it r}-mode dissipation inside the star, and thus provides an upper
bound on the {\it r}-mode amplitude. We have computed core temperatures as
well as neutrino and thermal (photon) luminosities for LMXB sources
using measurements of the quiescent luminosities and surface
temperatures and showed that at temperatures relevant for LMXB neutron
stars, when there is no enhanced cooling mechanism, the cooling of the
star is dominated by photon emission from the surface (for $1.4$ and
$2.0\, M_{\odot}$ NS models), but in a massive star where direct Urca
neutrino emission is allowed, the cooling is dominated by neutrino
emission (for $T_{\rm eff} \gtrsim 34$ eV). For the lower mass NS models
(1.4 and 2 $M_{\odot}$) we find dimensionless {\it r}-mode amplitudes in the
range from about $1\times 10^{-8}$ to $1.5\times 10^{-6}$.
We note that none of the saturation mechanisms proposed so far can
saturate {\it r}-modes at these low amplitudes. Alternatively, the enhanced
dissipation that would result from
the existence of exotic matter in NS interiors could shift the
instability window such that the LMXBs are perhaps stable to {\it r}-mode
excitation \citep{Alford:2010fd,Schwenzer:2012ga}.
For the AMXP sources with known quiescent spin-down rates these limits
suggest that $\lesssim 1\%$ of the observed rate can be due to an
unstable {\it r}-mode. Interestingly, the AMXP with the highest amplitude
limit, NGC 6440, could have an {\it r}-mode spin-down rate comparable to the
observed, quiescent rate for SAX J1808. Thus, quiescent spin-down
measurements for this source would be particularly interesting.
Having enhanced, direct Urca neutrino emission in the core of our
highest mass model ($2.21\, M_{\odot}$) means that the dissipated heat
in the star can be larger and therefore it can have higher {\it r}-mode
amplitudes. Indeed, the inferred {\it r}-mode spin-down rates at these
higher amplitudes are inconsistent with the observed spin-down rates
for some of the LMXB sources, such as IGR J00291 and XTE J1751-305. If
{\it r}-mode dissipation were the only mechanism available to produce this
high luminosity, then this could be used to put an upper limit on the
masses of these sources if they were made of hadronic matter. Alternatively, it could be used to probe the
existence of exotic matter in them if the NS mass in these systems
were known. In this way, future spin-down and NS mass measurements for
the LMXB systems considered here, as well as for yet to be discovered
systems, could open a new window on dense matter in NS interiors. For
this as well as other reasons, we regard the re-establishment of a
sensitive X-ray timing capability as vital to the use of NSs as
natural laboratories for the study of dense matter. Using the results
for {\it r}-mode amplitudes, the upper limits on gravitational wave
amplitude due to {\it r}-modes have been computed. The upper limits on the
GW strain amplitude $h_0$ for the $1.4\, M_{\odot}$ and $2.0\,
M_{\odot}$ NS models are in the range of $1.8\times 10^{-29}$ to
$4.9\times 10^{-28}$ which is below the anticipated detectability
threshold of Advanced LIGO. In the case of the $2.21\, M_{\odot}$ NS
model, we obtain larger values for $h_0$, but even in this case for most
of the sources considered in this work, $h_0$ is still below the
detectability threshold of Advanced LIGO. Gravitational waves due to
other mechanisms such as NS mountains may have larger amplitudes in
these systems.
\acknowledgments
SM thanks Mark Alford, Andrew Cumming, Cole Miller and Kai Schwenzer
for helpful discussions. TS acknowledges NASA's support for high
energy astrophysics. SM acknowledges the support of the
U.S. Department of Energy through grant No.~DE-FG02-93ER-40762.
\section{Introduction}
Recently, domain adaptation has gained significant attention, as it enables efficient training without the need to collect ground-truth labels in the target domain.
Existing methods have made significant progress in image-based tasks, such as classification \cite{long2015learning,ganin2016domain,tzeng2017adversarial,Saito_CVPR_2018}, semantic segmentation \cite{Hoffman_ICML_2018,Tsai_DA4Seg_ICCV19,Vu_CVPR_2019,Li_CVPR_2019,Paul_daseg_eccv20} and object detection \cite{YChenCVPR18,SaitoCVPR19,KimCVPR19,Hsu_uda_det_eccv20}.
While several works have sought to extend this success to video-based tasks like action recognition by aligning appearance (\eg, RGB) features through adversarial learning \cite{Chen_da_iccv19,drone_wacv20,Pan_aaai20}, challenges persist in video adaptation tasks due to the greater complexity of the video data.
Moreover, unlike image data, domain shifts in videos for action recognition often involve more complicated environments, which makes adaptation more difficult.
For example, the ``fencing'' action usually happens in a stadium, but it can also happen in other places, such as at home or outdoors. Conversely, different actions can take place against the same background.
Therefore, relying purely on aligning RGB features can bias the model toward the background and degrade performance.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{fig/teaser.png}
\caption{
We propose a cross-modal contrastive learning framework for video domain adaptation. Our framework consists of two contrastive learning objectives: (1) cross-modal contrastive learning to align cross-modal representations from the same video, and (2) cross-domain contrastive learning to align representations between the source and target domains in each modality.
}
\label{fig:teaser}
\vspace{-4mm}
\end{figure}
In addition to the appearance cue, other modalities such as motion, audio, and text are considered in (self-)supervised learning methods on the video data \cite{Simonyan_nips_14,Arandjelovic_iccv17,Korbar_nips18,Piergiovanni_cvpr20}.
In this work, we focus on appearance and motion as the two most common modalities in the cross-domain action recognition task, in which the motion modality (\ie, optical flow) is shown to be more domain-invariant (\eg, to background changes) than RGB \cite{Munro_cvpr20}.
As a result, motion can better capture background-irrelevant information, while RGB can identify semantically meaningful information under different camera setups, e.g., camera perspective.
As shown in Figure \ref{fig:teaser}, with two modalities across two domains, adaptation becomes a task of how to explore the relationship of cross-modal and cross-domain features, to fully exploit the multi-modal property for video domain adaptation.
That is, given either the source video $V_s$ or the target one $V_t$, they can be associated to either the appearance feature $F^a$ or the motion feature $F^m$, which results in four combinations of feature spaces, i.e., $F_s^a$, $F_t^a$, $F_s^m$, $F_t^m$.
Thus, the ensuing task is to design an effective adaptation mechanism for dealing with these four feature spaces.
Since each modality has its characteristics and benefit (e.g., flow is more domain-invariant and RGB can capture semantic cues), it is of great interest to enable feature learning across the two modalities.
Our key contribution stems from the observation that typical adversarial feature alignment schemes, as used in \eg \cite{Chen_da_iccv19,Choi_eccv20}, may not be directly applicable in the cross-modal setting. For example, it is not reasonable to directly align the RGB feature $F_s^a$ in the source domain with the flow feature $F_s^m$ or $F_t^m$ in either domain.
To tackle this issue, motivated by the recent advancements in self-supervised multi-view learning \cite{cmc_eccv20} that achieves powerful feature representations, we propose to treat each modality as a view, while introducing the cross-domain video data in our multi-modal learning framework.
To this end, we leverage the contrastive learning objectives for performing feature regularization mutually among those four feature spaces (see Figure \ref{fig:teaser}) under the video domain adaptation setting.
We note that the prior work \cite{Munro_cvpr20} also adopts a multi-modal framework, but it focuses on typical adversarial alignment and a self-supervised objective to predict whether the RGB/flow modality comes from the same video clip, without the exploration of jointly regularizing cross-modal and cross-domain features like our work.
More specifically, our framework is able to contrast features across modalities within a domain (e.g., between $F_s^a$ and $F_s^m$) or across domains using one modality (e.g., between $F_s^a$ and $F_t^a$). Two kinds of loss functions are designed accordingly: 1) a cross-modal loss that considers each modality as one view of a video while contrasting views of other videos from the same domain; 2) a cross-domain loss that considers one modality at a time and contrasts features based on the (pseudo) class labels of videos across the two domains.
There are several benefits of the proposed contrastive learning-based feature regularization strategies: 1) it is a unified framework that allows the interplay across features in different modalities and domains, while still enjoying the benefits of each modality; 2) it enables sampling strategies of selecting multiple positive and negative samples in the loss terms, coupled with memory banks to record large variations in video clips; 3) our cross-domain loss can be considered as a soft version of pseudo-label self-training with the awareness of class labels, which performs more robustly than typical adaptation methods.
We conduct experiments in video action recognition benchmark datasets, including the UCF \cite{ucf} $\leftrightarrow$ HMDB \cite{hmdb} setting, and the EPIC-Kitchens \cite{epic-kitchen,Munro_cvpr20} dataset.
We show that including either our cross-modal or cross-domain contrastive learning objective improves accuracy while combining these two strategies in a unified framework obtains the best results.
Moreover, our method performs favorably against state-of-the-art domain adaptation techniques, e.g., adversarial feature alignment \cite{Chen_da_iccv19,Munro_cvpr20}, self-learning scheme \cite{Choi_eccv20}, and pseudo-label self-training.
The main contributions of this work are summarized as follows.
\begin{itemize}
\item We propose a new multi-modal framework for video domain adaptation that leverages the property in four different feature spaces across modalities and domains.
\item We leverage the contrastive learning technique with well-designed sampling strategies and demonstrate the application to adaptation for cross-domain action recognition by exploiting appearance and flow modalities.
\item We show the effectiveness of both the cross-modal and cross-domain contrastive objectives, by achieving state-of-the-art results on UCF-HMDB and EPIC-Kitchens adaptation benchmarks with extensive analysis.
\end{itemize}
\section{Related Work}
In this section, we discuss existing fully-supervised and domain adaptation methods for action recognition, as well as methods on unsupervised learning for video representations.
\vspace{-4mm}
\paragraph{Supervised Action Recognition.}
Action recognition is one of the important tasks for understanding video representations. With the recent advancements in deep learning, early works either adopt 2D \cite{Karpathy_cvpr14} or 3D \cite{Ji_pami13} convolutional networks on RGB video frames, which achieve significant progress.
To improve upon the single-modal framework, optical flow is commonly used as the temporal cue to greatly improve the action recognition accuracy \cite{Simonyan_nips_14}.
Following this multi-modal pipeline, several methods are proposed to further incorporate the long-term temporal context \cite{non-local,slowfast} and structure \cite{Tran_cvpr17,Zhou_eccv18,Wang_eccv16}, or extend to the 3D convolutional networks \cite{Carreira_cvpr17,Tran_iccv15}.
Moreover, recent approaches show the benefit of adopting 1D/2D separable convolutional networks \cite{Tran_cvpr17,Xie_eccv18}, while other methods \cite{slowfast,Jiang_iccv19} focus on improving the 3D convolutional architecture for action recognition, to be computationally efficient.
Despite the promising performance of these methods in a fully-supervised manner, our focus is to develop an effective action recognition framework under the unsupervised domain adaptation setting.
\begin{figure*}[!t]
\centering
\vspace{-0.4cm}
\includegraphics[width=1.0\linewidth]{fig/framework.png}
\caption{
An overview of our cross-modal contrastive learning framework. We use a two-stream network for RGB and flow. Each stream takes video clips and outputs feature vectors for each domain and modality ($F_s^a, F_t^a, F_s^m$, $F_t^m$). For cross-modal contrastive learning, we add the projection head ($h$) to learn an embedding where the flow and RGB features from the same video are matched (\eg, $h(F_{s_i}^a)$, $h(F_{s_i}^m)$). For cross-domain contrastive learning, we match the cross-domain features of the same class without the projection head (\eg $F_{s_i}^a$, $F_{t_j}^a$) in the same modality. For the unlabeled target domain, we use high-confidence pseudo-labels $\hat{Y}_t$ to find positive samples in the source domain.
}
\label{fig:overview}
\vspace{-4mm}
\end{figure*}
\vspace{-4mm}
\paragraph{Domain Adaptation for Action Recognition.}
Due to the convenience of recording videos under various conditions, there is an increasing demand for developing approaches for cross-domain action recognition.
Previous methods focus on the setting of cross-domain transfer learning \cite{Sultani_cvpr14,Zhang_aaai19,Xu_ivc16} or tackle the problem of view-point variance in videos \cite{Kong_tip17,Li_nips18,Rahmani_cvpr15,Sigurdsson_cvpr18}.
However, unsupervised domain adaptation (UDA) for action recognition has received less attention until recently.
Early attempts align distributions across the source and target domains using hand-crafted features \cite{Cao_cvpr10,Jamal_bmvc13}, while recent deep learning based methods \cite{Jamal_bmvc18,Pan_aaai20,drone_wacv20,Chen_da_iccv19,Choi_eccv20,Munro_cvpr20} leverage the insight from UDA on image classification and extend it to the video case.
For instance, approaches \cite{Chen_da_iccv19,Pan_aaai20} utilize adversarial feature alignment \cite{ganin2016domain,tzeng2017adversarial} and propose a temporal version with attention modules.
Moreover, self-supervised learning strategies are adopted by considering the video properties, such as clip orders \cite{Choi_eccv20}, sequential domain prediction \cite{Chen_cvpr20}, and modality correspondence prediction \cite{Munro_cvpr20} in videos.
Similar to \cite{Munro_cvpr20}, our method also considers the multi-modal property, but focuses on a different problem regime. To be specific, we propose a contrastive learning framework that can better exploit the multi-modality to regularize the feature spaces simultaneously across modalities and across domains, which is previously unstudied.
\begin{table}[t]
\centering
\small
\caption{Summary of notations.}
\label{tab:notation}
\begin{tabular}{l|c}
\toprule
Notation & Meaning\\
\hline
$\{V_s, V_t\}$ & $\{$Source, Target$\}$ video clips \\
$F_{s}^{a}$ & Source appearance feature\\
$F_{s}^{m}$ & Source motion feature \\
$F_{t}^{a}$ & Target appearance feature\\
$F_{t}^{m}$ & Target motion feature \\
$h(\cdot)$ & Shared projection head \\
$\hat{Y_t}$ & Target pseudo-label \\
\bottomrule
\end{tabular}
\vspace{-4mm}
\end{table}
\vspace{-4mm}
\paragraph{Self-supervised Learning for Video Representation.}
Learning from unlabeled videos is beneficial for video representations as video labeling is expensive. For instance, numerous approaches are developed via exploiting the temporal structure in videos \cite{Fernando_cvpr17,Wei_cvpr18}, e.g., temporal order verification \cite{Misra_eccv16} and sorting sequences \cite{Lee_iccv17}.
By leveraging the temporal connection across frames, patch tracking over time \cite{Wang_iccv15} or prediction of future frames \cite{Srivastava_icml15} also facilitates feature learning in videos.
Moreover, to incorporate the multi-modal information into learning, RGB frames, audio, and optical flow are used to align with each other for self-learning \cite{Arandjelovic_iccv17,Korbar_nips18,Piergiovanni_cvpr20,Han_nips20,alwassel2019self,morgado2020learning,piergiovanni2020evolving}.
After the learning process, such methods are usually served as a pre-training step for the downstream tasks.
In this paper, we study the UDA setting for cross-domain and cross-modal action recognition, which involves a labeled source dataset and unlabeled target videos.
\section{Proposed Method}
In this section, we first introduce the overall pipeline of the proposed approach for video domain adaptation. Then we describe individual modules for cross-modal and cross-domain feature regularization, followed by the complete objective in a unified framework using contrastive learning.
\subsection{Algorithm Overview}
Given the source dataset that contains videos $\{s_i\} \in V_{s}$ with its action label set $Y_{s}$, our goal is to learn an action recognition model that is able to perform reasonably well on the unlabeled target video set $\{t_i\} \in V_{t}$.
Since we aim to investigate an effective way to construct a domain adaptive model that leverages the benefit of multi-modal information (i.e., RGB and flow) across domains, we utilize a two-stream network \cite{Munro_cvpr20} that takes the RGB and flow images as the input.
As a result, the two-stream network would output the RGB modality feature $F^a$ and the flow modality feature $F^m$, which forms four different feature spaces across the modality and domain, i.e., $F_s^a$, $F_t^a$, $F_s^m$, $F_t^m$.
In our contrastive learning framework, we jointly consider the relationship of these four spaces via two kinds of contrastive loss functions to regularize features as shown in Figure~\ref{fig:teaser}. First, we treat each modality as a view, extract the RGB/flow features from the same domain (either source or target), and contrast them based on whether the features come from the same video, e.g., the cross-modal features of one video, $F_s^a$ and $F_s^m$, should be closer to each other in an embedding space than others extracted from different video clips.
Second, for features across the domains, e.g., $F_s^a$ and $F_t^a$, but within the same modality, we contrast them based on whether the videos are likely to share the same action label. To this end, we calculate the pseudo-labels $\hat{Y}_{t}$ on target videos and form positive/negative samples to perform contrastive learning.
Figure \ref{fig:overview} illustrates our overall framework and Table~\ref{tab:notation} summarizes the notations.
\subsection{Cross-modal Regularization}
\label{sec:mo}
Motivated by the unsupervised multi-view feature learning method \cite{cmc_eccv20}, we treat each modality as a view and form positive training samples within the same video, as well as negative samples from different videos. However, the difference is that we cannot directly apply negative pairing to all the videos, as in our problem, the videos from two different domains under the same view could still be largely different because of the domain gap.
Therefore, it would not be proper to mix source and target videos, and instead, we form a contrastive objective in each domain separately.
\vspace{-4mm}
\paragraph{Sampling Strategy.}
Prior work has shown that the sampling strategy is crucial in image-based contrastive learning \cite{Kalantidis_nips_20}.
Considering videos from one domain in our case, we select positive training samples from the same video but with different modalities, while sampling the negative ones when the RGB and flow frames are from different videos, regardless of their action labels. An illustration of cross-modal sampling for the source domain is in Figure \ref{fig:sampling_mo}, and a similar strategy is used for the target domain.
\begin{figure}[!t]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.9\linewidth]{fig/sampling_mo.png}
\caption{An overview of cross-modal contrastive learning. We pull an RGB feature and a flow feature from the same video clip as positives but push cross-modal features from different video clips.
}
\label{fig:sampling_mo}
\vspace{-4mm}
\end{figure}
In addition, since one video clip contains many frames, each time we randomly sample a window of consecutive frames within the clip, following the setting in \cite{Choi_eccv20,Munro_cvpr20}.
To account for the large intra-variation within a video clip, we do not assume that the RGB and flow modality need to have the same window of frames. For example, given a video clip, the RGB frames can be randomly sampled from the time window $t\sim t+15$, while the flow frames can be different, e.g., $t+5\sim t+20$. Empirically, we find that such a sampling strategy is especially beneficial for our contrastive learning objective, which is aware of the variation within video clips in the embedding space.
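The decoupled window sampling described above can be sketched as follows; note that `window` and `max_offset` are illustrative hyperparameters of our own choosing, not values reported in the paper.

```python
import random

def sample_modality_windows(clip_len, window=16, max_offset=5):
    """Sample (possibly different) windows of consecutive frame indices for
    the RGB and flow streams of one clip, so the two views need not be
    temporally synchronised.  Returns two lists of frame indices."""
    start_rgb = random.randint(0, clip_len - window)
    # Shift the flow window by a small random offset, clamped to the clip.
    shift = random.randint(-max_offset, max_offset)
    start_flow = min(max(start_rgb + shift, 0), clip_len - window)
    rgb_idx = list(range(start_rgb, start_rgb + window))
    flow_idx = list(range(start_flow, start_flow + window))
    return rgb_idx, flow_idx
```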
\vspace{-4mm}
\paragraph{Similarity Between Samples.}
Another important aspect in contrastive learning is the feature similarity.
Taking the source domain as one example, we have features from RGB and flow, \ie, $F_s^a$ and $F_s^m$.
Since each modality maintains its own feature characteristics, directly contrasting these two features may negatively impact the feature representation and reduce the recognition accuracy. To this end, given source features $F_{s_i}$ and $F_{s_j}$ from two videos $\{s_i, s_j\} \in V_s$, we apply an additional projection head $h(\cdot)$ in a way similar to SimCLR \cite{simclr}, and then we can define the similarity function $\phi^s(\cdot)$ between samples with a temperature parameter $\tau$ as:
\begin{equation}
\label{eq:sim_s}
\phi^s(F_{s_i}^k, F_{s_j}^l)_{k,l \in \{a, m\}} = \text{exp}( h(F_{s_i}^k)^\top h(F_{s_j}^l) / \tau ),
\end{equation}
where $\{a, m\}$ represents either the appearance (RGB) or motion (flow) modality.
\vspace{-4mm}
\paragraph{Loss Function.}
Based on the aforementioned sampling strategy and similarity measurement as depicted in Figure \ref{fig:sampling_mo}, the loss function for the source domain is written as:
\begin{equation}
\label{eq:mo}
\mathcal{L}_{mo}^s = - \text{log} \frac{\sum\limits_{s_i \in V_s} \phi^s(F_{s_i}^k, F_{s_i+}^l)_{k\neq l} }{ \sum\limits_{s_i \in V_s} \phi^s(F_{s_i}^k, F_{s_i+}^l)_{k\neq l} + \phi^s(F_{s_i}^k, F_{s_j-}^l)_{k\neq l}},
\end{equation}
where $F_{s_i+}$ is the positive sample with a different view (modality) from the same video clip $F_{s_i}$, while $F_{s_j-}$ is the negative sample with another view of a different video from $F_{s_i}$, regardless of their action labels. Here, we omit the notation $k,l \in \{a, m\}$ to have a concise presentation.
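A minimal NumPy sketch of this cross-modal objective for one domain is given below. It is an InfoNCE-style approximation with one positive per anchor (the same clip in the other modality) and in-batch negatives; the projection head `proj`, the batch construction, and the L2 normalization are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def cross_modal_loss(feat_a, feat_m, proj, tau=0.07):
    """Cross-modal contrastive loss for one domain (illustrative sketch).
    feat_a, feat_m: (N, D) RGB / flow features from the same N clips,
    so row i of each modality forms the positive pair.  `proj` is the
    shared projection head h(.), any callable mapping (N, D) -> (N, d)."""
    za = proj(feat_a)
    zm = proj(feat_m)
    # L2-normalise so the dot product becomes a cosine similarity.
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zm = zm / np.linalg.norm(zm, axis=1, keepdims=True)
    logits = za @ zm.T / tau                     # (N, N) cross-modal similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal (same clip, other modality).
    return -np.mean(np.diag(log_prob))
```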
\begin{figure}[!t]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.9\linewidth]{fig/sampling_do_v2.png}
\caption{Based on pseudo-labels, we pull source and target features sharing the same labels but push cross-domain features otherwise.
}
\label{fig:sampling_do}
\vspace{-4mm}
\end{figure}
On the other hand, for videos in the target domain, we construct another loss $\mathcal{L}_{mo}^t$ similar to \eqref{eq:mo} with the same projection head $h(\cdot)$, where the similarity measurement $\phi^t$ between features of target videos $\{t_i, t_j\} \in V_t$ is defined as:
\begin{equation}
\label{eq:sim_t}
\phi^t(F_{t_i}^k, F_{t_j}^l)_{k,l \in \{a, m\}} = \text{exp}( h(F_{t_i}^k)^\top h(F_{t_j}^l) / \tau ).
\end{equation}
Here, to consider individual domain characteristics, we find that the key is to form the cross-modal loss for each domain separately, while these two loss functions can still share the same projection head $h$\footnote{We empirically find that using two individual projection heads, one per domain, produces a performance similar to sharing a single projection head, so we use the shared projection head as a way for analyzing this embedding space later.}. The projection head helps to prevent overfitting to this regularization. Without it, the RGB and flow features would be aligned to be identical, so that they would no longer be complementary to each other.
Therefore, by combining these two loss functions in each of the source and target domains (i.e., $\mathcal{L}_{mo}^s$ and $\mathcal{L}_{mo}^t$), features within the same video but from different modalities are drawn closer in an embedding space, which also serves as a feature regularization on unlabeled target videos.
\subsection{Cross-domain Regularization}
\label{sec:cross_domain}
In addition to the cross-modal regularization introduced in the previous section, we find that there is a missing connection between features across domains.
To further exploit the interplay between four feature spaces ($F_s^a$, $F_t^a$, $F_s^m$, $F_t^m$), we propose to use another contrastive learning objective for cross-domain samples.
\vspace{-4mm}
\paragraph{Sampling Strategy via Pseudo-labels.}
Taking one modality, RGB, as the example, we consider cross-domain features of $F_s^a$ and $F_t^a$ in a similar contrastive learning setup as described in Section \ref{sec:mo}. Here, an intuitive way to form positive samples is to find the videos with the same label across domains.
However, since we do not know the action label in the target domain, we first apply our two-stream action recognition model and obtain the prediction score of the target-domain video.
Then, if the score is larger than a certain threshold $T$ (e.g., $T=0.8$ in our setting), we obtain the pseudo-label of this target sample and take source videos with the same action label as positive samples (otherwise they are negative samples).
The procedure for the RGB modality is illustrated in Figure~\ref{fig:sampling_do}, while a similar way is used for the flow modality.
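The pseudo-label filtering and positive sampling described above can be sketched as follows. This is a simplified illustration: `target_probs` stands for the softmax outputs of the two-stream model, and the helper names are hypothetical.

```python
import numpy as np

def confident_pseudo_labels(target_probs, threshold=0.8):
    """Keep only target clips whose maximum softmax score exceeds the
    threshold T (T = 0.8 in the paper's setting).
    Returns (kept indices, their pseudo-labels)."""
    conf = target_probs.max(axis=1)
    keep = np.where(conf > threshold)[0]
    return keep, target_probs[keep].argmax(axis=1)

def match_positives(pseudo_labels, source_labels):
    """For each confident target clip, list the source clips sharing its
    (pseudo) action label; all other source clips act as negatives."""
    return [np.where(source_labels == y)[0] for y in pseudo_labels]
```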
\vspace{-4mm}
\paragraph{Similarity Between Samples.}
To measure the sample similarity in our contrastive objective, we adopt $\phi^{st}$ to calculate the similarity between cross-domain features:
\begin{equation}
\label{eq:sim_st}
\phi^{st}(F_{t_i}^k, F_{s_i}^k)_{k \in \{a, m\}} = \text{exp}( (F_{t_i}^k)^\top F_{s_i}^k / \tau ),
\end{equation}
where the modality $k$ can be appearance or motion in our work.
Note that, for cross-domain feature regularization, in order to align features, we do not use an additional projection head $h(\cdot)$ as in Section \ref{sec:mo} or Figure \ref{fig:sampling_mo} (explained as follows).
\paragraph{Loss Function.}
The corresponding loss function with respect to the sampling strategy (Figure \ref{fig:sampling_do}) and the similarity measurement $\phi^{st}$ for the RGB modality is defined as:
\begin{equation}
\label{eq:do_a}
\mathcal{L}_{do}^a = - \text{log} \frac{\sum\limits_{t_i \in \hat{V_t}} \phi^{st}(F_{t_i}^a, F_{s_i+}^a) }{ \sum\limits_{t_i \in \hat{V_t}} \phi^{st}(F_{t_i}^a, F_{s_i+}^a) + \phi^{st}(F_{t_i}^a, F_{s_i-}^a)},
\end{equation}
where $\hat{V_t}$ is the set of target videos with confident pseudo-labels $\hat{Y_t}$. $F_{s_i+}^a$ denotes the positive samples, \ie, source videos $s_i \in V_s$ with the same class label as $\hat{Y_t}$, while $F_{s_i-}^a$ denotes the negative samples with different class labels.
Similarly, when the modality is flow, the loss function is:
\begin{equation}
\label{eq:do_m}
\mathcal{L}_{do}^m = - \text{log} \frac{\sum\limits_{t_i \in \hat{V_t}} \phi^{st}(F_{t_i}^m, F_{s_i+}^m) }{ \sum\limits_{t_i \in \hat{V_t}} \phi^{st}(F_{t_i}^m, F_{s_i+}^m) + \phi^{st}(F_{t_i}^m, F_{s_i-}^m)}.
\end{equation}
We also note that the choice without using the projection head $h(\cdot)$ is reasonable as our objectives in \eqref{eq:do_a} and \eqref{eq:do_m} essentially try to make features with the same action labels closer to each other, which is consistent with the final goal for performing action recognition.
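As a concrete reading of \eqref{eq:sim_st} and \eqref{eq:do_a}, the following pure-Python sketch aggregates exponentiated dot-product similarities over confident target videos. The function names and the exact aggregation of negatives in the denominator are our interpretation of the formula, not the released code; features are assumed unit-normalized.

```python
import math


def sim(f1, f2, tau=0.1):
    # phi^{st}: exponentiated (temperature-scaled) dot product.
    dot = sum(a * b for a, b in zip(f1, f2))
    return math.exp(dot / tau)


def cross_domain_loss(target_feats, pos_feats, neg_feats, tau=0.1):
    # -log( sum_i phi(t_i, s_i+) / (sum_i phi(t_i, s_i+) + phi(t_i, s_i-)) )
    pos = sum(sim(t, p, tau) for t, p in zip(target_feats, pos_feats))
    neg = sum(sim(t, n, tau) for t, n in zip(target_feats, neg_feats))
    return -math.log(pos / (pos + neg))
```

When a target feature matches its positive source sample better than its negative one, the loss is small; swapping the pairing increases it, which is the behavior the objective optimizes for.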
\vspace{-4mm}
\paragraph{Connections to Pseudo-label Self-training.}
Using pseudo labels on the target sample to self-train the model is one commonly used approach in domain adaptation \cite{Zou_iccv19,Zou_ECCV_2018,Lian_ICCV19}.
In the proposed cross-domain contrastive learning, we also adopt pseudo labels to form positive samples. However, the two methods are distinct in terms of how they reshape the feature space.
Given the target video $V_t$, one can produce a pseudo-label $\hat{Y_t}$ and use it for training the action recognition network with the standard cross-entropy loss.
Therefore, such supervision is a strong signal that forces the feature $F_t$ to map into the space of action label $\hat{Y_t}$, making it sensitive to noisy labels such as pseudo-labels.
On the contrary, using contrastive learning with pseudo-labels is similar in spirit to the soft nearest-neighbor loss \cite{pmlr-v2-salakhutdinov07a,Wu_eccv18}, which encourages soft feature alignments, rather than enforcing the hard final classification, hence more robust to potential erroneous pseudo-labels.
Similar observations are also presented in recent work \cite{Khosla_nips_20}, which shows that the supervised contrastive loss is more robust than the cross-entropy loss in image classification.
In our case, we utilize this property and show that cross-domain contrastive learning can be achieved by using pseudo-labels in video domain adaptation, and that it is more robust than pseudo-label self-training. More empirical comparisons follow in the experiments.
\subsection{A Contrastive Learning Framework}
In previous sections, we have introduced how we incorporate cross-modal and cross-domain contrastive loss functions to regularize features extracted from RGB/flow branches across the source and target domains. Next, we introduce the entire objective.
\vspace{-4mm}
\paragraph{Overall Objective.}
Overall, we include the loss functions in Sections \ref{sec:mo} and \ref{sec:cross_domain}, which do not require any supervision, and a standard supervised cross-entropy loss $\mathcal{L}_{src}$ on source videos $V_s$ with action labels $Y_s$. To obtain the final output from the two-stream network, we average the outputs from the individual classifiers of the RGB and flow branches (see Figure \ref{fig:overview}).
\begin{equation}
\label{eq:obj}
\begin{split}
\mathcal{L}_{all} & = \mathcal{L}_{src}(V_s, Y_s) + \\
& \lambda (\mathcal{L}_{mo}^s(V_s) + \mathcal{L}_{mo}^t(V_t) + \mathcal{L}_{do}(V_s, V_t,\hat{Y}_t) ),
\end{split}
\end{equation}
where $\lambda$ is the weight to balance the terms. Here, we treat the loss functions in \eqref{eq:do_a} and \eqref{eq:do_m} as one term, $\mathcal{L}_{do} = \mathcal{L}_{do}^a + \mathcal{L}_{do}^m$, as they share the same form but use different modalities.
Since all the loss terms have a similar form, heavy tuning of each term is not required, so we use the same $\lambda$ for the cross-modal and cross-domain losses (\ie, $\lambda = 1.25$ in this paper).
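A minimal sketch of how the terms in \eqref{eq:obj} combine, assuming the individual scalar losses have already been computed (the helper name is ours):

```python
def overall_objective(l_src, l_mo_s, l_mo_t, l_do_a, l_do_m, lam=1.25):
    # L_all = L_src + lambda * (L_mo^s + L_mo^t + L_do),
    # where L_do = L_do^a + L_do^m shares the single weight lambda.
    l_do = l_do_a + l_do_m
    return l_src + lam * (l_mo_s + l_mo_t + l_do)
```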
\vspace{-4mm}
\paragraph{Leveraging Memory Banks.}
To compute the cross-modal and cross-domain loss functions, the summations run over all feature representations from the video sets $V_s$ and $V_t$. However, it is impractical to recompute all the features at every training iteration. Therefore, we store the features in memory banks following \cite{wu2018unsupervised}, \ie, one memory bank per domain-modality combination, giving four banks $M_s^a, M_s^m, M_t^a,$ and $M_t^m$. Given the features in a batch (\ie, $F_{s_i}^a, F_{s_i}^m, F_{t_j}^a,$ and $F_{t_j}^m$), we pull positive and negative features from the memory banks (\eg, $F_{s_i+}^a$ and $F_{s_i-}^a$ in \eqref{eq:do_a} are replaced by $M_{s_i+}^a$ and $M_{s_i-}^a$, respectively). Then, the memory bank features are updated with the features in the batch at the end of each iteration. We use a momentum update with $\delta=0.5$ following \cite{wu2018unsupervised}:
\begin{equation}
M_{s_i}^a = \delta M_{s_i}^a + (1-\delta)F_{s_i}^a.
\end{equation}
The other memory banks, $M_s^m, M_t^a,$ and $M_t^m$, are also updated in the same way. Using the momentum update also encourages smoothness in training dynamics~\cite{wu2018unsupervised}.
In our case, during the training process, we randomly sample consecutive frames in a video clip. Therefore, by using the memory banks, our model can also encourage the temporal smoothness within each clip in feature learning.
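The momentum update of a memory-bank entry is applied element-wise; a one-line sketch (the function name is ours):

```python
def momentum_update(bank_entry, batch_feature, delta=0.5):
    # M <- delta * M + (1 - delta) * F, applied per dimension.
    return [delta * m + (1.0 - delta) * f
            for m, f in zip(bank_entry, batch_feature)]
```

With $\delta = 0.5$ the bank keeps half of its history at each step; $\delta$ close to 1 changes the bank slowly, which is what yields the smooth training dynamics mentioned above.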
\section{Experimental Results}
In this section, we show performance comparisons on numerous domain adaptation benchmark scenarios for action recognition, followed by comprehensive ablation studies to validate the effectiveness of our cross-domain and cross-modal feature regularization. Please refer to the supplementary material for more results and analysis.
\subsection{Datasets and Experimental Setting}
We use three standard benchmark datasets for video domain adaptation: UCF \cite{ucf}, HMDB \cite{hmdb}, and EPIC-Kitchens \cite{epic-kitchen}. We then show that our method is a general framework that works for different types of action recognition settings: UCF $\leftrightarrow$ HMDB for human activity recognition, as well as EPIC-Kitchens for fine-grained action recognition in egocentric videos.
\vspace{-4mm}
\paragraph{UCF $\leftrightarrow$ HMDB.}
Chen et al. \cite{Chen_da_iccv19} release the UCF $\leftrightarrow$ HMDB dataset for studying video domain adaptation. This dataset has $3209$ videos with 12 action classes.
All the videos come from the original UCF \cite{ucf} and HMDB \cite{hmdb} datasets, covering the 12 classes that overlap between the 101/51 classes of UCF/HMDB, respectively. There are two settings of interest, UCF $\rightarrow$ HMDB and HMDB $\rightarrow$ UCF. We show the performance of our method in both settings following the official split provided by the authors \cite{Chen_da_iccv19}.
\vspace{-4mm}
\paragraph{EPIC-Kitchens.}
This dataset contains fine-grained action classes with videos recorded in different kitchens from the egocentric view. We follow the train/test split used in \cite{Munro_cvpr20} for the domain adaptation task. There are 8 action categories in the three largest kitchens, \ie, D1, D2, and D3, and we use all pairs of them as source/target domains. Note that, compared to UCF $\leftrightarrow$ HMDB, EPIC-Kitchens is more challenging as it has more fine-grained classes (\eg, ``Take'', ``Put'', ``Open'', ``Close'') and imbalanced class distributions. We report the top-1 accuracy on the test set averaged over the last 9 epochs following \cite{Munro_cvpr20}.
\begin{table}[!t]
\caption{
Performance comparisons on UCF $\leftrightarrow$ HMDB.
}
\label{table:ucf}
\small
\centering
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{3pt}
\resizebox{0.4\textwidth}{!}
{\begin{tabular}{lccc}
\toprule
Setting & Two-stream & UCF $\rightarrow$ HMDB & HMDB $\rightarrow$ UCF \\
\midrule
Source-only \cite{Chen_da_iccv19} & & 80.6 & 88.8 \\
TA$^3$N \cite{Chen_da_iccv19} & & 81.4 & 90.5 \\
Supervised-target \cite{Chen_da_iccv19} & & 93.1 & 97.0 \\
\midrule
TCoN \cite{Pan_aaai20} & & \textbf{87.2} & 89.1 \\
\midrule
Source-only \cite{Choi_eccv20} & & 80.3 & 88.8 \\
SAVA \cite{Choi_eccv20} & & 82.2 & \textbf{91.2} \\
Supervised-target \cite{Choi_eccv20} & & 95.0 & 96.8 \\
\midrule
Source-only & $\surd$ & 82.8 & 90.7 \\
MM-SADA \cite{Munro_cvpr20} & $\surd$ & 84.2 & 91.1 \\
Ours (cross-modal) & $\surd$ & \textbf{84.7} & 92.5 \\
Ours (cross-domain) & $\surd$ & 83.6 & 91.1 \\
Ours (final) & $\surd$ & \textbf{84.7} & \textbf{92.8} \\
Supervised-target & $\surd$ & 98.8 & 95.0\\
\bottomrule
\end{tabular}}
\vspace{-4mm}
\end{table}
\begin{table*}[!t]
\begin{minipage}[t]{.7\linewidth}
\caption{
Performance comparisons on EPIC-Kitchens.
}
\label{table:kitchen}
\small
\centering
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{3pt}
\resizebox{0.95\textwidth}{!}
{\begin{tabular}{lcccccccc}
\toprule
Setting & D2 $\rightarrow$ D1 & D3 $\rightarrow$ D1 & D1 $\rightarrow$ D2 & D3 $\rightarrow$ D2 & D1 $\rightarrow$ D3 & D2 $\rightarrow$ D3 & Mean & Gain\\
\midrule
Source-only & 42.5 & 44.3 & 42.0 & 56.3 & 41.2 & 46.5 & 45.5 \\
\midrule
AdaBN \cite{adabn} & 44.6 & 47.8 & 47.0 & 54.7 & 40.3 & 48.8 & 47.2 & +1.7 \\
MMD \cite{long2015learning} & 43.1 & 48.3 & 46.6 & 55.2 & 39.2 & 48.5 & 46.8 & +1.3 \\
MCD \cite{Saito_CVPR_2018} & 42.1 & 47.9 & 46.5 & 52.7 & 43.5 & 51.0 & 47.3 & +1.8 \\
MM-SADA \cite{Munro_cvpr20} & 47.4 & 48.6 & 50.8 & \textbf{56.9} & 42.5 & \textbf{53.3} & 49.9 & +4.4 \\
Ours (modality) & 44.3 & 50.2 & 49.5 & 56.6 & 43.0 & 48.8 & 48.7 & +3.2\\
Ours (domain) & 47.4 & \textbf{52.8} & \textbf{52.4} & 56.1 & 41.7 & 49.9 & 50.1 & +4.6 \\
Ours (final) & \textbf{49.5} & 51.5 & 50.3 & 56.3 & \textbf{46.3} & 52.0 & \textbf{51.0} & \textbf{+5.5} \\
\midrule
Supervised-target & 62.8 & 62.8 & 71.7 & 71.7 & 74.0 & 74.0 & 69.5 \\
\bottomrule
\end{tabular}}
\end{minipage}
\hfill
\begin{minipage}[t]{.3\linewidth}
\caption{
Ablation study on EPIC-Kitchens.
}
\label{table:kitchen_ablation}
\small
\centering
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{3pt}
\resizebox{\textwidth}{!}
{\begin{tabular}{lcccc}
\toprule
Setting & Modality & Domain & Mean & Gain \\
\midrule
Source-only & & & 45.5 & \\
\midrule
MM-SADA \cite{Munro_cvpr20} & $\surd$ & & 47.9 & +2.4 \\
Ours & $\surd$ & & \textbf{48.7} & \textbf{+3.2} \\
\midrule
MM-SADA \cite{Munro_cvpr20} & & $\surd$ & 49.4 & +3.9 \\
Pseudo-labeling & & $\surd$ & 49.0 & +3.5 \\
Ours & & $\surd$ & \textbf{50.1} & \textbf{+4.6} \\
\midrule
MM-SADA \cite{Munro_cvpr20} & $\surd$ & $\surd$ & 49.9 & +4.4 \\
Ours & $\surd$ & $\surd$ & \textbf{51.0} & \textbf{+5.5} \\
\bottomrule
\end{tabular}}
\end{minipage}
\vspace{-4mm}
\end{table*}
\vspace{-4mm}
\paragraph{Implementation Details.}
Our entire framework is implemented in PyTorch using 2 TITANXP GPUs. We use an I3D two-stream network \cite{Carreira_cvpr17} composed of an RGB stream and a flow stream, where the network is pre-trained on Kinetics following \cite{Choi_eccv20}. During training, we use the same setting as \cite{Choi_eccv20, Munro_cvpr20} to randomly sample 16 consecutive frames out of a video clip. Then each RGB and flow stream takes these 16 frames with a size of 224$\times$224. Each stream is followed by a fully-connected layer to compute individual output logits. Then the logits from each stream are averaged to predict the final class scores. To optimize the entire network, we use the SGD optimizer with a learning rate of 0.01. We set the temperature $\tau=0.1$ and $\delta=0.5$ for all experiments following \cite{wu2018unsupervised}. For UCF $\leftrightarrow$ HMDB, we follow the setting as \cite{Choi_eccv20} for batch size, total training epochs, learning rate, etc. For EPIC-Kitchens, we implement it upon the official code of~\cite{Munro_cvpr20} but set the batch size as 32 to fit the memory of 2 GPUs and train the model for 6K iterations. The learning rate decreases by a factor of 10 every 3K iterations.
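The EPIC-Kitchens learning-rate schedule described above (base rate 0.01, divided by 10 every 3K of the 6K training iterations) can be written as a simple step-decay function; the helper name is hypothetical:

```python
def step_lr(base_lr, iteration, decay_every=3000, factor=0.1):
    # Divide the learning rate by 10 every `decay_every` iterations.
    return base_lr * (factor ** (iteration // decay_every))
```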
\subsection{Results on UCF $\leftrightarrow$ HMDB}
We show experimental results for UCF $\rightarrow$ HMDB and HMDB $\rightarrow$ UCF in Table \ref{table:ucf}, comparing with state-of-the-art methods --- TA$^3$N \cite{Chen_da_iccv19}, TCoN~\cite{Pan_aaai20}, SAVA \cite{Choi_eccv20}, and MM-SADA \cite{Munro_cvpr20}.
\vspace{-4mm}
\paragraph{Comparisons with State-of-the-art Methods.}
In each group of Table \ref{table:ucf}, in addition to the result for each method, we show the ``Source-only'' model that only trains on videos in the source domain, and the ``Supervised-target'' model that trains on target videos with ground truths, which serves as an upper bound. We also implement~\cite{Munro_cvpr20} on UCF-HMDB in the same setup for fair comparisons.
Different from TA$^3$N, TCoN, and SAVA, which only exploit a single modality via adversarial feature alignment or self-learning schemes, our method leverages both the RGB and flow modalities in a domain adaptation framework and achieves state-of-the-art performance.
We also notice that our source-only model performs slightly better than the other source-only baselines, due to the usage of the flow stream. Although the domain gap is already reduced by leveraging the flow modality, our approach still obtains comparable performance gains over the source-only model and performs better than MM-SADA \cite{Munro_cvpr20}, which adopts the same two-stream model.
For example, on UCF $\rightarrow$ HMDB, the gain for TA$^3$N and SAVA is 0.8\% and 1.9\%, respectively, while our gain is the same as SAVA and is much higher than TA$^3$N.
\vspace{-4mm}
\paragraph{Ablation Study.}
In the fourth group of Table \ref{table:ucf}, we show model variants to validate the usefulness of individual components in our contrastive learning framework, \ie, cross-modal and cross-domain feature regularization.
From the results, the two modules consistently improve the performance over the source-only baseline. By combining both, it provides the highest accuracy.
Here, interestingly, we find that the cross-domain module is less helpful than the cross-modal one. One reason is that the two domains in UCF $\leftrightarrow$ HMDB already share a high similarity, which reduces the impact of the cross-domain loss. However, this also shows the importance of incorporating both proposed modules, since the cross-modal loss can still provide effective regularization even when the domain gap is smaller. In the next section, we will show a different scenario, where both modules are important.
\begin{figure*}[!t]
\centering
\vspace{-0.3cm}
\includegraphics[width=0.9\linewidth]{fig/fig_tsne.pdf}
\vspace{-1mm}
\caption{
t-SNE visualization on cross-modal and cross-domain features after the projection head $h(\cdot)$ on UCF $\rightarrow$ HMDB, \ie $h(F^a_s)$, $h(F^m_s)$, $h(F^a_t)$, $h(F^m_t)$. In (a)(b), we show the visualization for individual domains, where each domain contains the multi-modality features. In (c)(d), we visualize features for each modality, and each plot uses the features from two domains. (e) includes all the features from two domains and two modalities, where each color represents one action class.
}
\label{fig:tsne}
\vspace{-3mm}
\end{figure*}
\subsection{Results on EPIC-Kitchens}
We present results on the EPIC-Kitchens benchmark for domain adaptation \cite{Munro_cvpr20}, including comparisons with state-of-the-art methods, ablation study, and more analysis.
\vspace{-4mm}
\paragraph{Comparisons with State-of-the-art Methods.}
In Table \ref{table:kitchen}, we present several domain adaptation methods, including distribution alignment via maximum mean discrepancy \cite{long2015learning}, maximum classifier discrepancy \cite{Saito_CVPR_2018}, adaptive batch normalization \cite{adabn}, and a recently proposed method that uses a self-learning objective \cite{Munro_cvpr20}.
We note that these results are reported from \cite{Munro_cvpr20} using the same two-stream feature extractor as ours, and they share the same ``Source-only'' model and ``Supervised-target'' upper bound in Table \ref{table:kitchen}.
For fair comparisons, we reproduce results of MM-SADA~\cite{Munro_cvpr20} using their official implementation and the same computing resources as ours, and show that our final model performs better than MM-SADA by 1.1\% on average. The results show the advantages of our contrastive learning framework. More detailed analysis is provided as follows.
\vspace{-4mm}
\paragraph{Ablation Study.}
In Table \ref{table:kitchen_ablation}, we ablate the two components of our cross-modal and cross-domain loss functions against other approaches that consider similar aspects.
For fair comparisons, we use the same two-stream backbone, implementation, and computing resources for generating all the results.
Considering only modality or domain, our method consistently performs better than MM-SADA~\cite{Munro_cvpr20}, which uses a self-learning module to predict whether the RGB/flow modalities come from the same video clip, and a typical adversarial learning scheme to align cross-domain features.
Combining these two factors, our method improves the ``Source-only'' model the most, which shows the effectiveness of the proposed unified framework using contrastive learning.
In addition, it is worth mentioning that our cross-domain loss performs better than pseudo-label self-training by $1.1$\%, which validates the discussion in Section \ref{sec:cross_domain} on the different ways of leveraging pseudo-labels.
\vspace{-4mm}
\paragraph{Sampling Strategy.}
In Table \ref{table:kitchen_sampling}, we present the ablation for the sampling strategy in the cross-modal loss (see Section \ref{sec:mo}), where we do not assume that the RGB and flow modalities share the same window of frames, which handles the large variation within a video clip.
When applying this strategy to MM-SADA~\cite{Munro_cvpr20}, where it acts as a form of data augmentation, the performance gain is smaller than ours (\ie, 0.5\% vs. 1.1\%).
This validates our sampling strategy with the proposed contrastive learning objective that enriches feature regularization under the domain adaptation setting.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{fig/retrieval.pdf}
\caption{Cross-domain retrievals using the RGB feature. Given the target feature $F_t^a$, we retrieve the closest neighbor $F_s^a$ in the source domain. Our model correctly aligns videos of the same class under view-angle (1st row) and background (2nd row) differences.
}
\label{fig:retreival}
\vspace{-4mm}
\end{figure}
\subsection{More Results and Analysis}
We present more analysis including the feature visualizations using t-SNE \cite{tsne}, and example results for cross-domain retrieval to understand our model predictions.
\vspace{-4mm}
\paragraph{t-SNE Visualizations.}
In this paper, we use a projection head $h(\cdot)$ to project RGB/flow features to an embedding space for the cross-modal loss in our framework.
Therefore, it is of great interest to understand how the features behave in this embedding space. To this end, we sample features from both domains across the two modalities and perform t-SNE visualizations on UCF $\rightarrow$ HMDB in Figure \ref{fig:tsne}.
Here, although our cross-modal loss is calculated in each domain, the projection head is shared across the domains.
Therefore, we provide different combinations of the feature spaces to visualize what the four feature spaces look like, \ie, $h(F^a_s)$, $h(F^m_s)$, $h(F^a_t)$, $h(F^m_t)$.
First, in Figure \ref{fig:tsne}(a)(b), it is not surprising to observe that in each domain, features from different modalities are aligned together (\eg, Source RGB/Flow in Figure \ref{fig:tsne}(a)), as it is exactly what the cross-modal objective in \eqref{eq:mo} optimizes for.
\begin{table}[!t]
\caption{
Ablation study on the sampling strategy.
}
\label{table:kitchen_sampling}
\small
\centering
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{3pt}
\resizebox{0.35\textwidth}{!}{\begin{tabular}{lccc}
\toprule
Setting & Sampling & Mean & Gain via sampling \\
\midrule
\multirow{2}{*}{MM-SADA \cite{Munro_cvpr20}} & & 49.4 & \\
& $\surd$ & 49.9 & +0.5 \\
\midrule
\multirow{2}{*}{Ours} & & 49.9 & \\
& $\surd$ & 51.0 & +1.1 \\
\bottomrule
\end{tabular}}
\vspace{-5mm}
\end{table}
More interestingly, if we consider one modality at a time in Figure \ref{fig:tsne}(c)(d), \eg, Source RGB and Target RGB, their features are also well aligned, even though we do not explicitly have an objective that aligns them in the embedding space via $h(\cdot)$.
This shows the merit of our framework that enables feature regularization and interplay across four feature spaces.
Also, we present a visualization of the distribution for each class, including all the source and target features across the two modalities, in Figure \ref{fig:tsne}(e). This illustrates that features from the same category are aligned well.
\vspace{-5mm}
\paragraph{Cross-domain Retrievals.} In Figure~\ref{fig:retreival}, we show cross-domain video retrievals using the RGB feature. Given a target feature from HMDB, we show its nearest neighbor from UCF. Our method correctly retrieves videos of the same class, either sharing the same background context but seen from a different view angle, or showing a similar movement against a different background.
\section{Conclusions}
We investigate the video domain adaptation task with our cross-modal contrastive learning framework.
To this end, we leverage multi-modal information, \ie, RGB and flow, and exploit their relationships.
In order to handle feature spaces across modalities and domains, we propose two objectives to regularize such feature spaces, namely cross-modal and cross-domain contrastive losses, that learn better feature representations for domain adaptive action recognition.
Moreover, our framework is modular, so it can be applied to other multi-modal domain adaptation applications, which we leave for future work.
\section*{Appendix}
\section{Implementation Details}
We use 2 TITANXP GPUs in our implementation. We also reproduce the results of MM-SADA~\cite{Munro_cvpr20}\footnote{https://github.com/jonmun/MM-SADA-code} using their released code with the same 2-GPU setup and the same batch size as our method.
\section{Ablation study}
In Table~\ref{table:ablation_hyper_params}, we show the sensitivity analysis on the weight $\lambda$ (Eq. (7) in the main paper) and the confidence threshold $T$ for pseudo-labels (Section 3.3 in the main paper).
In Table~\ref{table:ablation_projection_head}, we first show the benefit of having the projection head $h(\cdot)$ for multi-modal embedding space. We observe a 2\% performance gain by adding the projection head $h(\cdot)$, which demonstrates the importance of using $h(\cdot)$ for multi-modal regularization described in Section 3.2 of the main paper.
Moreover, we provide another ablation study where we add the projection head $h(\cdot)$ in the cross-domain module, while having $h(\cdot)$ for the cross-modal module as in our final model. Adding $h(\cdot)$ shows slightly worse results than our final model. One reason is that this scheme has less influence on the features that are supposed to be aligned for performing action recognition.
In Table \ref{table:ablation_setting}, we provide experimental results when different feature alignment methods are used in either cross-modal or cross-domain learning. In general, using the proposed contrastive learning method in both modules obtains the best performance, which shows the importance of having a unified contrastive learning framework for cross-modal and cross-domain learning.
\begin{table}[!h]
\caption{
Ablation study on hyper-parameters on Epic-Kitchens. In the second group, we fix $T = 0.8$, while in the third group, we fix $\lambda = 1.25$.
}
\label{table:ablation_hyper_params}
\small
\centering
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lc}
\toprule
Setting & Mean \\
\midrule
Source-only & 45.5 \\
Ours ($\lambda=1.25, T=0.8$) & 51.0\\
\midrule
Ours ($\lambda=1.0$) & 50.1\\
Ours ($\lambda=1.5$) & 49.5\\
\midrule
Ours ($T=0.9$) & 49.6\\
Ours ($T=0.6$) & 49.8\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{
Ablation study on the projection head $h(\cdot)$ for EPIC-Kitchens.
}
\label{table:ablation_projection_head}
\small
\centering
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{3pt}
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{lcc}
\toprule
Setting & $h(\cdot)$ in cross-modal module & Mean \\
\midrule
Ours (modality) & \checkmark & 48.7\\
Ours (modality) & \ding{55} & 46.7\\
\midrule
Setting & $h(\cdot)$ in cross-domain module & Mean \\
\midrule
Ours (modality + domain) & \checkmark & 50.1\\
Ours-final (modality + domain) & \ding{55} & 51.0\\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[!t]
\caption{
Ablation study of different feature alignment methods on EPIC-Kitchens. ``Con.'' indicates our proposed contrastive learning approach, and ``Adv.'' denotes the adversarial learning scheme.
}
\label{table:ablation_setting}
\small
\centering
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lllc}
\toprule
Setting & Modality & Domain & Mean \\
\midrule
Con. (our final model) & Con. & Con. & 51.0 \\
Adv. & Adv. & Adv. & 49.5 \\
Adv. + Con. & Con. & Adv. & 50.1 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig/fig_tsne_2.pdf}
\caption{
t-SNE visualization on cross-modal and cross-domain features before the projection head $h(\cdot)$ on UCF $\rightarrow$ HMDB, \ie $F^a_s$, $F^m_s$, $F^a_t$, $F^m_t$. In (a)(b), we show the visualization for individual domains, where each domain contains the multi-modal features. In (c)(d), we visualize features for each modality, and each plot uses the features from two domains. (e) includes all the features from two domains and two modalities, where each color represents one action class.
}
\label{fig:tsne_supp}
\vspace{-3mm}
\end{figure*}
\subsection{t-SNE Feature Visualizations}
Figure~\ref{fig:tsne_supp} shows different combinations of the feature spaces before the projection head $h(\cdot)$, \ie, $F^a_s$, $F^m_s$, $F^a_t$, $F^m_t$. Figures~\ref{fig:tsne_supp}-(a,b) show the RGB and flow features in each domain. While the RGB and flow features are almost completely aligned after the projection head in Figure 3 of the main paper, here the RGB and flow features still keep their respective information before the projection head $h(\cdot)$, which is useful for the final action predictions. Figures~\ref{fig:tsne_supp}-(c,d) show how the source and target features are aligned in each modality, where our method learns domain-invariant features.
\subsection{Visualizations for Cross-domain Retrievals}
In Figure~\ref{fig:retreival_supp}-(a), given a target feature from HMDB, we show its nearest neighbor from UCF. Similarly, we show retrievals from EPIC-Kitchens D1 and D2 in Figure~\ref{fig:retreival_supp}-(b). EPIC-Kitchens is more challenging than UCF-HMDB as different action classes share more background context (\eg, similar kitchen backgrounds) and objects (\eg, frying pans, utensils). Our method still shows better results, retrieving videos of the same class.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig/fig_viz_3.pdf} \\
(a) UCF$\rightarrow$HMDB \\
\includegraphics[width=0.9\linewidth]{fig/fig_viz_4.pdf} \\
(b) EPIC-Kitchen D1$\rightarrow$D2
\vspace{2mm}
\caption{Cross-domain retrievals in the RGB embedding space. Given the target feature $F_t^a$, we retrieve the closest neighbor $F_s^a$ in the source domain. With our contrastive learning framework, our model correctly aligns videos of the same class, while the source-only model is more likely to be biased toward the background context.
}
\vspace{-4mm}
\label{fig:retreival_supp}
\end{figure*}
\section{Introduction}
Let ${\mathcal E}$ be the set of transcendental entire functions $f : \mathbb{C} \rightarrow \mathbb{C}$. For $f \in {\mathcal E}$, we write $f^n=f\circ f^{n-1}$ for the $n$-th iterate of $f$, $n \in \mathbb{N}$, and $f^0= Id$, where the symbol $\circ$ denotes composition. When $f^n(z_0)=z_0$ for some $n\in \mathbb{N}$, the point $z_0$ is called a periodic point. If $n$ is the minimal positive integer for which this equality holds, we say that $z_0$ has period $n$. If $n=1$, $z_0$ is called a fixed point. A periodic point $z_0$ of period $n$ of $f \in {\mathcal E}$ is classified as attracting, super-attracting, rationally indifferent, irrationally indifferent, or repelling.
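For completeness, we recall that this classification is determined by the multiplier of the cycle: if $z_0$ is a periodic point of period $n$, set
$$\lambda := (f^n)'(z_0).$$
Then $z_0$ is attracting if $0<|\lambda|<1$, super-attracting if $\lambda=0$, rationally indifferent if $\lambda=e^{2\pi i \theta}$ with $\theta \in \mathbb{Q}$, irrationally indifferent if $\lambda=e^{2\pi i \theta}$ with $\theta \in \mathbb{R}\setminus\mathbb{Q}$, and repelling if $|\lambda|>1$.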
Given $f \in {\mathcal E}$, the \textit{Fatou set} $\mathfrak{F}(f)$ is defined as the set of all points $z \in\mathbb{C}$ such that the sequence of iterates $(f^n)_{n\in \mathbb{N}}$ forms a normal family in some neighborhood of $z$. The \textit{Julia set}, denoted by $\mathfrak{J}(f)$, is the complement of the Fatou set.
Some properties of the Julia and Fatou sets for functions in class ${\mathcal E}$ are mentioned below:
(i) $\mathfrak{F}(f)$ is open, so $\mathfrak{J}(f)$ is closed.
(ii) $\mathfrak{J}(f)$ is perfect and non-empty.
(iii) The sets $\mathfrak{J}(f)$ and $\mathfrak{F}(f)$ are completely invariant under $f$.
(iv) $\mathfrak{F}(f)=\mathfrak{F}(f^n)$ and $\mathfrak{J}(f)=\mathfrak{J}(f^n)$ for all $n \in \mathbb{N}$.
(v) The repelling periodic points are dense in $\mathfrak{J}(f)$. \\
See \cite{berg}, \cite{eremenko1}, \cite{eremenko2} and \cite{eremenko3} for definitions, proofs and more details concerning the Fatou and Julia sets. \\
We denote by $CV$ the set of critical values and by $OV$ the set of omitted values of a function $f \in {\mathcal E}$.
We recall that a Fatou component $G$ of $f$ is {\sl completely invariant} if $f^{-1}(G)=G$. Also, for any two points $z_{1},z_{2} \in G$, there is a path contained in $G$ that joins the two points. For $f$ a transcendental entire function, by Picard's theorem, every completely invariant Fatou component of $f$ is unbounded. If $f$ has $k$ completely invariant components $G_{k}$, with $k \in {\mathbb N}$, then for every point $z \in \mathfrak{J}(f)$ and any neighborhood $N_{z}$ of $z$, we have $G_{k} \bigcap N_{z} \neq \emptyset$. \\
\noindent {\bf Observation 1.} {\it Let $f \in {\mathcal E}$, let $G$ be a completely invariant Fatou component of $f$, let $w \in G$ be a regular value of $f$, and let $z(w)$ be any pre-image point of $w$. Then there exists an oriented curve $\Gamma \subset G$ beginning at $w$ such that: (i) $\Gamma$ intersects any neighborhood of infinity and $\Gamma \bigcap OV=\emptyset$; and (ii) $\Gamma$ has a pre-image $\Gamma'$ beginning at $z(w)$, so $f(\Gamma')=\Gamma$.
The construction of the curve $\Gamma$ can be obtained by successive applications of a generalization of the Gross star theorem due to Kaplan \cite{ka}, Theorem 3. In a few words, Kaplan proves, in particular, that for a (star) family of non-intersecting bounded curves beginning at a regular value $w$, the pre-images based at any $z(w)$ exist and can be continued indefinitely for almost all of the curves; see details in \cite{ka} (compare with Iversen's theorem in \cite{eremenko4}).
To construct $\Gamma$, we consider any neighborhood $N_{1}$ of $\infty$ and any pre-image $w_{1}$ of $w$ in $N_{1}$; then let $\tau_{1}$ be any path contained in $G$ joining $w$ and $w_{1}$ (with that orientation). By Kaplan's theorem, for any small enough neighborhood $B(w_{1})$ of $w_{1}$, there is a curve $\tau_{1}'$ beginning at $w$ and ending at some $w_{1}' \in B(w_{1})$ which has a pre-image $\tau_{2}$ beginning at $w_{1}'$.
Now, proceed inductively by choosing neighborhoods $N_{i} \subset N_{i-1}$, with $N_{i}$ converging to $\infty$ as $i$ tends to $\infty$, and choosing points $w_{i} \in N_{i}$, with $w_{i}$ a pre-image of $w_{i-1}'$; also take paths $\tau_{i}$ joining $w_{i-1}'$ with $w_{i}$ and modify them to $\tau_{i}'$ according to Kaplan's theorem. \\
In conclusion, $\Gamma:=\bigcup_{i=1}^{\infty} \tau_{i}'$ is a curve satisfying (i) and (ii), and its closure $\overline{\Gamma}$ is a continuum in the sphere.}
An important result on completely invariant components of transcendental entire functions was given by Baker in \cite{baker1e}; it is stated as follows.
\begin{theorem}
If $f \in {\mathcal E}$, then there is
at most one completely invariant component of $\mathfrak{F}(f)$.
\label{ba}
\end{theorem}
As mentioned in the abstract, there is a missing case in Baker's proof; in this paper we follow Baker's ideas and give some alternative arguments to resolve the missing case.\\
It is interesting to note that a recent paper by Rempe and Sixsmith \cite{rpe} studies the connectivity of the pre-images of simply connected domains under a transcendental entire function. The paper describes in detail the error in Baker's proof and mentions Duval's example, which is equivalent to the case of Figure \ref{fig4} in this article. They also prove that if infinity is accessible from some Fatou component, then at least one of the pre-images of some component is disconnected. Since infinity is accessible in Baker domains, they conclude that if the function has two completely invariant Fatou components, both components must be attracting or parabolic basins. Their article includes a list of papers which use Baker's result.
While we were making final corrections to this paper, we learned through personal communication of the results obtained by Rempe and Sixsmith in \cite{rpe}.
\section{Proof of Theorem}
The proof is by contradiction, assuming that there are at least two completely invariant open components $G_{1}$ and $G_{2}$. We begin by considering Baker's cut system in Step 1, which is an open disc $D_{1}$ whose boundary consists of the simple curves $\widehat\gamma_{1}$, $\beta_{1}, \widehat\gamma_{2}, \beta_{2}$, with the properties that $\beta_{1} \subset G_{1}$ and $\beta_{2} \subset G_{2}$, and such that $f(\widehat\gamma_{1}) \subset \gamma$ and $f(\widehat\gamma_{2}) \subset \gamma$ are conformal injections, for $\gamma$ a segment with endpoints in $G_{1}$ and $G_{2}$. In Step 2, we consider the image $f(D_{1})$ of the disc, which has to be bounded, and state some of its properties. In Step 3 we extend the curves $f(\beta_{1})$ and $f(\beta_{2})$ to infinity as in Observation 1, creating two unbounded curves $\Gamma \subset G_{1}$ and $\Theta \subset G_{2}$. Such curves can be very complicated inside $f(D_{1})$, so we consider their intersection with the complement of $f(D_{1})$, which we call $B$. Then we study their pre-images in the complement of $D_{1}$. By adding a certain path $\sigma$ (Case A) or $\Sigma$ (Case B) between those pre-images in the same component, we show that one of the regions $G_{1}$ or $G_{2}$ is disconnected, which is a contradiction. The cut system is important for the proof since it gives certain control on the pre-images of $\Gamma \bigcap B = \Gamma_0$ and $\Theta \bigcap B = \Theta_0$. The difference between Case A and Case B lies in the way the pre-images of $\Gamma_0$ and $\Theta_0$ intersect the cut system, as indicated in Step 3.\\
\noindent {\bf Proof.} Suppose that $\mathfrak{F}(f)$ has at least two mutually disjoint completely invariant
domains $G_1$ and $G_2$.\\
\noindent {\bf Step 1. The Cut System and the cancellation procedure.}
Take a value $\alpha$ in $G_1$ such that $f(z) = \alpha$ has infinitely many
simple roots $z_i$ \, ($f'(z) = 0$ at only countably many $z$ so we have to
avoid only countably many choices of $\alpha$). All $z_i$ are in $G_1$. Similarly take $\beta$ in $G_2$ such that $f(z) = \beta$ has infinitely many simple roots $z_i'$ in $G_2$. By Gross' star theorem
\cite{gross} we can continue
all the regular branches $g_i$ of $f^{-1}$ such that $g_i(\alpha) = z_i$,
along almost every ray to $\infty$ without meeting any singularity (even algebraic).
Thus we can move $\beta \in G_2$ slightly if necessary so that all $g_i$
continue to $\beta$ analytically along the line $\gamma$, which joins $\alpha$
and $\beta$. The images $g_i(\gamma)$ are disjoint curves joining $z_i$ to $z_i'$.
Denote $g_i(\gamma) = \gamma_i$. Note that $\gamma_i$ is oriented from
$z_i$ to $z_i'$, see Figure \ref{fig0}.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi0,height=2in,width=2in}}}
\caption{The images $g_i(\gamma) = \gamma_i$}
\label{fig0}
\end{figure}
The branches of $f^{-1}$ are univalent, so the $\gamma_i$ are simple arcs.
Different $\gamma_i$ are disjoint, since $\gamma_i$ meets $\gamma_j$ at,
say, $w_0$ only if two different
branches of $f^{-1}$ take the same value $w_0$, which can occur only if
$f^{-1}$ has a branch point at $f(w_0)$ in $\gamma$; but this does not occur.
Take $\gamma_1$ and $\gamma_2$. Since $G_1$ is a domain we can join
$z_1$ to $z_2$ by an arc $\delta_1$ in $G_1$ and similarly $z_1'$ to $z_2'$
by an arc $\delta_{2}$ in $G_2$.
If $\delta_2$ is oriented from $z_1'$ to $z_2'$, let
$p'$ be the point where, for the last time, $\gamma_1$ meets $\delta_2$
and $q'$ be the point where, for the first time, $\gamma_2$ meets $\delta_2$.
If $\delta_1$ is oriented from $z_1$ to $z_2$, let
$p$ be the point where, for the last time, $\gamma_1$
meets $\delta_1$ and $q$ be the point where, for the first time,
$\gamma_2$ meets $\delta_1$, these might look like Figure \ref{fig1}.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi1,height=2.2in,width=2.2in}}}
\caption{The points $p$, $p'$, $q$, $q'$ and the curves $\delta_1$, $\delta_2$, $\gamma_1$ and $\gamma_2$}
\label{fig1}
\end{figure}
Now we denote by $\beta_1$ the part of $\delta_1$ which joins
the points $p$ and $q$, by
$\beta_2$ the part of $\delta_2$ which
joins the points $p'$ and $q'$, by $\widehat{\gamma_1}$ the part of
$\gamma_1$ which joins the points $p$ and $p'$, oriented from $p$ to $p'$,
and by $\widehat{\gamma_2}$ the part of $\gamma_2$ which joins
the points $q$ and $q'$, oriented from $q$ to $q'$. Then
$\widehat{\gamma_1} \beta_2 \widehat{\gamma_2}^{-1} \beta_{1}^{-1}$
is a simple closed curve with an interior $D_1$, see Figure \ref{fig2}. \\
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi2,height=2.2in,width=2.2in}}}
\caption{The arcs $\beta_1$, $\beta_2$, $\widehat{\gamma_1}$ , $\widehat{\gamma_2}$ and $D_1$}
\label{fig2}
\end{figure}
\noindent {\bf Step 2. The map on the Disc $D_{1}$.}
Recall that the disc $D_{1}$ has boundary ${\beta_{1}} \cup {\beta_{2}}\cup {\widehat{\gamma_1}} \cup {\widehat{\gamma_2}}$, the end points of the curve ${\beta_{1}}$ are the points $p$ and $q$, the end points of the curve ${\beta_{2}}$ are the points $p'$ and $q'$, the end points of the curve ${\widehat{\gamma_1}}$ are the points $p$ and $p'$ and the end points of the curve ${\widehat{\gamma_2}}$ are the points $q$ and $q'$, see Figure \ref{fig2}. The function $f$ maps ${\widehat{\gamma_i}}$ injectively into the cut $\gamma$, for $i=1,2$, and we consider $f({\beta_{1}})$ and $f({\beta_{2}})$ two non-intersecting curves (with possible self-intersections) with ends at $f(p),f(q)$ and $f(p'),f(q')$ respectively.
A natural question arises: where is $D_1$ mapped under $f$?
A priori, $f(D_1)$ could be either unbounded or bounded. If $f(D_{1})$ were unbounded, there would be a pole in $D_{1}$, which is impossible since $f$ is entire; thus we rule out this case. Necessarily $f(D_{1})$ must be bounded, and $f(\beta_{1})$ or $f(\beta_{2})$ need not be closed curves. This is the missing case in Baker's proof. \\
Recall that the orientations of $\gamma_{1}$ and $\gamma_{2}$ are given by the chosen orientation of $\gamma$, as in Step 1 above. Two main possibilities arise when we consider the orientation of $\gamma_{1}$ together with the order of the set of points $\{p, p'\}$ and the orientation of $\gamma_{2}$ together with the order of the set of points $\{q, q'\}$. Let us define $a<b$, for $a, b$ points on an oriented curve $\gamma (t)$, if $\gamma(t_{1})=a$, $\gamma(t_{2})=b$ and $t_{1}<t_{2}$. The possibilities are: (a) $\gamma_{1}$ and $\gamma_{2}$ preserve the same order, that is, if $p<p'$, then $q<q'$, see Figure \ref{fi8}, or (b) $\gamma_{1}$ and $\gamma_{2}$ reverse the order, that is, $p < p'$ but $q > q'$, or $q < q'$ but $p > p'$, see Figure \ref{fi9}.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi8.eps,height=2.3in,width=2.1in}}}
\caption{ (a) $\gamma_1$ and $\gamma_2$ have the same orientations}
\label{fi8}
\end{figure}
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi9.eps,height=2.3in,width=2.1in}}}
\caption{ (b) $\gamma_1$ and $\gamma_2$ have opposite orientations }
\label{fi9}
\end{figure}
On the other hand, the curves $f({\beta_{i}})$ have winding number either $+1$, $0$ or $-1$ with respect to the points $\alpha$ and $\beta$. So several possibilities occur for the topology of $f(D_{1})$, according to how the intervals $f( {\widehat{\gamma_1}})$ and $f( {\widehat{\gamma_2}})$ are placed in the cut $\gamma$. In Figure \ref{fig4} there are two examples, one where $f( {\widehat{\gamma_1}}) \cap f( {\widehat{\gamma_2}}) \neq \emptyset$ and the other where $f( {\widehat{\gamma_1}}) \cap f( {\widehat{\gamma_2}}) = \emptyset$. \\
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fi4.eps,height=2.6in,width=2.3in}}}
\caption{$f({\widehat{\gamma_1}} )\cap f({\widehat{\gamma_2}}) \neq \emptyset$ and $f({\widehat{\gamma_1}} )\cap f({\widehat{\gamma_2}}) = \emptyset$}
\label{fig4}
\end{figure}
In the next step we will finish the proof of the theorem. The construction indicated works for both (a) and (b), but it is important to know that the difference exists.\\
\noindent {\bf Step 3. Unbounded curves and their pre-images.}
From now on, we will assume without loss of generality that $f({\beta}_{2})$ surrounds $f({\beta}_{1})$; it may look like Figures \ref{fi9} or \ref{fig4}. We also assume that $\gamma_{1}$, $\gamma_{2}$ and $\gamma$ are compatibly oriented, as in Step 1.
For $x,w \in {\C}$, we denote by $\overline{xw}$ the oriented segment from $x$ to $w$ and by $T_{x}(\tau)$ the tangent at $x$ of some parametrization of a curve $\tau$.
The step consists in considering certain unbounded curves in the regions $G_{1}$ and $G_{2}$ and their pre-images. We recall that ${\beta}_{1} \subset G_{1}$ and $\beta_{2} \subset G_{2}$. The curve $f(\beta_{1})$ has end points at $f(p)$ and $f(q)$ in $\gamma$, and the curve $f(\beta_{2})$ has end points at $f(p')$ and $f(q')$ in $\gamma$. So we define their pre-images on $\gamma_{1}$ and on $\gamma_{2}$ as follows: $f^{-1}(f(q')) \cap {\gamma}_{1}=p_{1}'$, $f^{-1}(f(q')) \cap {\gamma}_{2}=q'$, $f^{-1}(f(p')) \cap {\gamma}_{1}=p'$, $f^{-1}(f(p')) \cap {\gamma}_{2}=q_{1}'$, $f^{-1}(f(p)) \cap {\gamma}_{1}=p$, $f^{-1}(f(p)) \cap {\gamma}_{2}=q_{1}$ and $f^{-1}(f(q)) \cap {\gamma}_{1}=p_{1}$, $f^{-1}(f(q)) \cap {\gamma}_{2}=q$, see Figure \ref{fig8} as an example. The point $p_{1}'$ is the end point of some pre-image of $f(\beta_{1})$, $q_{1}'$ is the beginning of another pre-image of $f(\beta_{1})$, $p_{1}$ is the end point of some pre-image of $f(\beta_{2})$ and $q_{1}$ is the beginning of another pre-image of $f(\beta_{2})$.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fig8.eps,height=1.9in,width=2.3in}}}
\caption{ The pre-images of $f(p)$, $f(q)$, $f(p')$ and $f(q')$}
\label{fig8}
\end{figure}
For brevity, we define $I_{1}$ as the interval $\overline{p'p'_{1}}$ and $I_{2}$ as the interval $\overline{q'_{1}q'}$. Thus $I_{1}$ and $I_{2}$ are pre-images of the interval $I_{0}=\overline{f(p')f(q')}$ in $\gamma_{1}$ and $\gamma_{2}$ respectively. We have two situations.
(i) Let us consider an unbounded oriented curve ${\Gamma} \subset G_{1}$ beginning at $f(q)$, and an unbounded oriented curve ${\Theta} \subset G_{2}$ beginning at $f(q')$, as in Observation 1 in Section 1; see for instance Figure \ref{fig100}. More precisely, we are interested in the pieces of these curves complementary to $f(D_{1})$. Denote by $B$ the complement of $f(D_{1})$ in the sphere and let $\Gamma_{0}={\Gamma} \bigcap B$ and $\Theta_{0}={\Theta} \bigcap B$.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fig100.eps,height=2.3in,width=2.3in}}}
\caption{The curves $\Gamma$ and $\Theta$}
\label{fig100}
\end{figure}
(ii) The curves $\Gamma$ and $\Theta$ may oscillate and may intersect the interval $I_{0}$ in many points, so in this case $\Gamma_{0}$ and $\Theta_{0}$ are unions of curves beginning at points in $I_{0}$. By applying Kaplan's theorem to each of these curves, we consider their pre-images beginning at points in $I_{1}$, denoted by $\Gamma_{1}$ and $\Theta_{1}$ respectively, and pre-images beginning at points in $I_{2}$, denoted by $\Gamma_{2}$ and $\Theta_{2}$ respectively; none of these curves intersects $D_{1}$ or, more generally, $f^{-1}f(D_{1})$. It may look like Figure \ref{fig99}. If $N_{\infty}$ is any neighborhood of infinity, we have $\Gamma_{i} \bigcap N_{\infty} \neq \emptyset$ and $\Theta_{i} \bigcap N_{\infty} \neq \emptyset$, $i=0,1,2$; that is, they are unbounded.
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fig99.eps,height=2.4in,width=2.5in}}}
\caption{ The curves $\Gamma_{1}$, $\Theta_{1}$, $\Gamma_{2}$ and $\Theta_{2}$ }
\label{fig99}
\end{figure}
We now have two cases: either (A) the intersection of the set $\Gamma_{1}$ or $\Theta_{1}$ with $I_{1}$ is finite, and consequently the same holds for $\Gamma_{2}$ or $\Theta_{2}$; or (B) both intersections are infinite.
Case A. Assume without loss of generality that $\Gamma_{i}$ intersects $I_{i}$ in a finite set, $i=1,2$. We consider the component of $\Gamma_{1}$ which is unbounded and denote it by $\Gamma_{1}'$; similarly we have an unbounded component $\Gamma_{2}'$. Both curves are in $G_{1}$, and recall that their closure in the sphere is a continuum that contains infinity. Let us denote $x_{1}=\Gamma_{1}' \bigcap I_{1}$ and $x_{2}=\Gamma_{2}' \bigcap I_{2}$, and observe that the pairs $(T_{x_{1}}(I_{1}),T_{x_{1}}(\Gamma'_{1}))$ and $(T_{x_{2}}(I_{2}),T_{x_{2}}(\Gamma'_{2}))$ are sent conformally by $f'$ to the corresponding pair $(T_{f(x_{1})}(I_{0}), T_{f(x_{1})}(\Gamma_{0}))$. Under such conditions, for any path $\sigma$ joining $x_{1}$ with $x_{2}$ which intersects neither $\Theta_{1}$ nor $\Theta_{2}$, the curve $-\Gamma_{1}' \bigcup \sigma \bigcup \Gamma_{2}'$ disconnects $\Theta_{1}$ from $\Theta_{2}$; it may look like Figure \ref{fig999}. Therefore, if $\sigma \subset G_{1}$, then $G_{2}$ is disconnected.\\
\begin{figure}[h!]
\centerline{\hbox{\psfig{figure=fig999.eps,height=3.2in,width=2.4in}}}
\caption{ The curves $\Gamma_{0}$, $\Theta_{0}$ and the points $z_{i}$ and $w_{j}$, $i=0,1,2$.}
\label{fig999}
\end{figure}
Case B. In this case the intersection $I_{i} \bigcap \Gamma_{i}$ is an infinite collection of points $\{x^{j}_{i}\}$, and similarly $I_{i} \bigcap \Theta_{i}$ is an infinite collection of points $\{w^{j}_{i}\}$, for $i=1,2$ and $j \in {\mathbb N}$. Since $I_{1}$ and $I_{2}$ are compact, the sequence $\{x^{j}_{i}\}$ has at least one accumulation point, say $x_{i}$, and let $w_{i}$ be an accumulation point of the sequence $\{w^{j}_{i}\}$, $i=1,2$. Again we have two situations: either (1) at least one of the points $x_{1}$ or $w_{1}$ is in the Fatou set, or (2) both points $x_{1}$ and $w_{1}$ are in the Julia set.\\
(1) Assume without loss of generality that $x_{1}$, and hence $x_{2}$, are in the Fatou set. Consider the closure on the sphere $\overline{\Gamma_{i}}$ of $\Gamma_{i}$, $i=0,1,2$. Let $\sigma$ be a path between $x_{1}$ and $x_{2}$ that does not intersect $\Theta_{i}$, $i=1,2$. As in Case A, the pairs $(T_{x_{1}}(I_{1}),T_{x_{1}}(\overline{\Gamma_{1}}))$ and $(T_{x_{2}}(I_{2}),T_{x_{2}}(\overline{\Gamma_{2}}))$ are sent conformally by $f'$ to the corresponding pair $(T_{f(x_{1})}(I_{0}), T_{f(x_{1})}(\overline{\Gamma_{0}}))$, where $(T_{x_{i}}(I_{i}),T_{x_{i}}(\overline{\Gamma_{i}}))$ means $\lim_{x_{i}^{j} \rightarrow x_{i}}(T_{x_{i}}(I_{i}),T_{x_{i}}(\Gamma_{i}))$, $i=0,1,2$.
Under such conditions, for any path $\sigma$ joining $x_{1}$ with $x_{2}$ which intersects neither $\Theta_{1}$ nor $\Theta_{2}$, the set $\overline{\Gamma_{1}} \bigcup \sigma \bigcup \overline{\Gamma_{2}}$ disconnects $\Theta_{1}$ from $\Theta_{2}$. Therefore, if $\sigma \subset G_{1}$, then $G_{2}$ is disconnected.\\
(2) Assume that $x_{i}$ and $w_{i}$ are in the Julia set, $i=1,2$. Consider paths $\sigma_{j}$ between $x_{1}^{j}$ and $x_{2}^{j}$, $j \in {{\mathbb N}}$ and let $\Sigma=\overline{\bigcup_{j}{\sigma}_{j}}$ be the closure of the union of all the paths $\{\sigma_{j}\}$ in the sphere. Also, the pairs $(T_{x_{1}}(I_{1}),T_{x_{1}}(\overline{\Gamma_{1}}))$ and $(T_{x_{2}}(I_{2}),T_{x_{2}}(\overline{\Gamma_{2}}))$ are sent conformally by $f'$ to the corresponding pair $(T_{f(x_{1})}(I_{0}), T_{f(x_{1})}(\overline{\Gamma_{0}}))$. As explained in (ii), the intersection of the sets $\overline{\Gamma_{1}},\overline{\Gamma_{2}},\overline{\Theta_{1}}$ and $\overline{\Theta_{2}}$ with $D_{1}$ is empty.
Observe that the set $\Sigma \bigcup \overline{\Gamma_{1}} \bigcup \overline{\Gamma_{2}}$ disconnects $G_{2}$, since any neighborhood of $x_{1}$ and $x_{2}$ contains points that belong to $G_{2}$. Now, if all $\sigma_{j} \subset G_{1}$, then $\Sigma \bigcap G_{2}=\emptyset$ and $\Sigma \bigcup \overline{\Gamma_{1}} \bigcup \overline{\Gamma_{2}}$ is disjoint from $G_{2}$; therefore in this case $G_{2}$ is disconnected.\\
In all these cases $G_{2}$ is disconnected, which is a contradiction. This finishes the proof of Theorem \ref{ba}. \\
\noindent {\bf Remark.} The above proof also applies to the case of a transcendental meromorphic map with a finite number of poles, see \cite{pat}, since we can choose a disc $D_{1}$ without poles exactly as in the proof of the Theorem, and the argument then proceeds as above.\\
\noindent {\bf Acknowledgments.} The authors would like to thank P. Rippon, G. Stallard, M. Montes de Oca and the members of the holomorphic dynamics seminar at UNAM for their comments and support while we were working through different drafts and ideas of the proof. We especially thank J. Kotus for her patience in listening to our arguments and for very interesting discussions of the proof.
\section{Introduction}
It is widely believed that the center of nearly every galaxy contains a
supermassive black hole. In particular, the increase of the astronomical
observations in recent years strongly indicates the presence of a
supermassive black hole at the center of our galaxy (Sgr A*). According to
the current model of cosmology, dark matter makes up about $27\%$ of the matter-energy composition of the
Universe, although, as of today, there is no direct experimental detection
of dark matter. Nevertheless, indirect observations strongly
suggest that dark matter reveals its presence in many astrophysical
phenomena; especially important in this context are the problem of
galactic rotation curves \cite{dm1} and the dynamics of galaxy clusters \cite{dm2},
while further evidence for dark matter comes from measurements on
cosmological scales of anisotropies in the cosmic microwave background
by PLANCK \cite{dm3}.
Therefore, it is extremely important to study black hole physics in the
presence of dark matter. Li and Yang investigated the possibility of a static black hole immersed in dark matter \cite{SCindark}. Their model of dark matter is based on a single parameter $\alpha$, which is the limitation of the model. Furthermore, their model corresponds to a specific case studied for the first time by Kiselev \cite{kiselev}. In particular, the logarithmic dependence was introduced to explain the asymptotic rotation curves of dark matter in terms of quintessential matter at large distances, i.e. in the halo-dominated region. That being said, one possible limitation of this model is the fact that no interaction between dark matter and other fields (say, a dark energy field) is assumed. One can certainly modify the distribution of dark matter in a galaxy by considering an interaction between those fields. In other words, one may consider a more general scenario with the surrounding matter given as a combination of more complicated fields with more dark matter parameters.
Quite recently, a new Kerr black hole solution with
the dark matter effects has been reported in the literature \cite{KerrDM}.
This solution modifies the Kerr metric due to the presence of dark matter
encoded via the perfect fluid dark matter (PFDM) parameter $\alpha$, which, among
other things, implies a modification of the ergosphere structure of the
black hole. This solution allows one to study the effect of PFDM in different astrophysical problems. Very recently, this solution was used in \cite{shadow1,shadow2} to
study the effect of PFDM and the cosmological constant on the
size of the black hole shadow and the deflection angle, as well as the black hole emission rate, which is related to the idea that, for a far distant observer located at infinity, the observed area of the black hole shadow approximately equals the high-energy absorption cross section.
According to the general theory of relativity, there is a rotational
dragging of inertial frames near a rotating black hole, known
as Lense-Thirring (LT) precession \cite{lt1,lt2,lt3}. Basically, we can
explore the dragging effects with the help of a gyroscope (or a test gyro)
using the fact that a gyroscope tends to keep its spin axis rigidly pointed
in a fixed direction in space, say fixed relative to a given star. In a
rotating spacetime, due to the frame dragging effects, it is shown that the
precession of the gyroscope frequency is proportional to the spin parameter
of the rotating object and inversely proportional to cube of the distance
from the central object. In addition to that, there is a second effect
related to the gyroscopic precession due to the spacetime curvature which is
known as geodetic precession \cite{gdp}. LT precession of a test gyroscope
has been extensively studied in recent years; along this line of thought, in
\cite{rotw} the authors study the LT precession frequency in a rotating
traversable Teo wormhole, while in \cite{strong} the frame-dragging effect in a
strong gravity regime is considered (see also references therein). It is worth
noting that, great effort has been made to actually test the frame dragging
effect and geodetic effect in the Earth's gravitational field by the Gravity
Probe B experiment \cite{Everitt}.
The concept of the spacetime singularity is well known in general relativity
mainly due to the famous Penrose-Hawking singularity theorem.
A naked singularity, on the other hand, is defined as a gravitational singularity without an event horizon. According to the cosmic censorship conjecture, spacetime
singularities that arise in gravitational collapse are always hidden inside
of black holes and therefore can not be observed in nature \cite%
{nsing1,nsing2}. Whether naked singularities exist or not is an open
question, however, one can naturally raise the following intriguing question
concerning the nature of the final product of gravitational collapse: How
can we distinguish a naked singularity from a black hole? In this context,
the problem of naked singularities has attracted a great interest in recent
years \cite{new1,new2,new3}.
From the no-hair theorem we know that a Kerr solution is completely
characterized by the black hole mass $M$ and the black hole angular momentum
$J$. If the following condition $M \geq a$ holds, where the angular momentum
parameter $a$ is defined by the angular momentum per unit mass, then the
Kerr solution represents a black hole. On the other hand, if $M<a$, then a
naked singularity is recovered. In a very interesting work, Chakraborty et
al \cite{NS,LTkerr} argued that one can basically use the spin precession
frequency of a test gyroscope attached to both static and stationary
observers, to distinguish black holes from naked singularities. Afterwards,
a new spin forward to this idea was put by Rizwan et al \cite{RMA} who
studied the problem of distinguishing a rotating Kiselev black hole from a
naked singularity using spin precession of test gyroscope. In some recent
papers \cite{bambi0}, authors study the idea of distinguishing black holes
and naked singularities with iron line spectroscopy, rotating naked
singularities are studied in the context of gravitational lensing \cite%
{galin}, while in \cite{kimet}, authors study the problem of distinguishing
rotating naked singularities from Kerr-like wormholes by their deflection
angles of massive particles.
It is known that the process of matter accretion towards rotating neutron
stars and black holes is followed by the emission of electromagnetic waves,
mainly X and gamma-rays \cite{xray1}.
The phenomenon of quasi-periodic oscillations (QPOs) is linked with high-frequency X-ray binaries \cite{stella1,stella2}. In particular, there are known high-frequency (HF) QPOs and three types of low-frequency (LF) QPOs. It is quite remarkable that the LT effect can be
linked with these phenomena and may perhaps explain the QPOs of accretion
disks around rotating black holes, provided the disk is slightly misaligned
with the equatorial plane of the black hole \cite{xray2}.
In the present paper, we shall first
examine the critical value of the spin parameter $a_c$ that differentiates a
Kerr-like black hole from a naked singularity with PFDM. Then, we shall
calculate the spin precession frequency of a test gyroscope attached to a
stationary observer to differentiate a Kerr-like black hole from a naked
singularity with PFDM.
The outline of this paper is as follows. In Section II we determine the
critical value of spin parameter to differentiate a Kerr-like black hole
from a naked singularity in PFDM. In Section III, we calculate the spin
precession frequency of a test gyroscope in a Kerr-like black hole with PFDM;
in particular we examine in detail the LT precession of a gyroscope in a
Kerr-like black hole with PFDM. In Section IV, we specialize our results to
elaborate the geodetic precession in Schwarzschild black hole spacetime in
PFDM. In Section V, we shall focus on the problem of distinguishing black
holes from naked singularities. In Section VI, we study the effect of PFDM on the Keplerian frequency (KF), the vertical epicyclic frequency (VEF), and the nodal plane precession frequency (NPPF). Section VII is devoted to some concluding remarks.
\newpage
\section{Kerr-like black hole in perfect fluid dark matter\label{secGE}}
The line element of the Kerr-like black hole in PFDM is
given as \cite{KerrDM}
\begin{eqnarray}
ds^{2} &=&-\left( 1-\frac{2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right)
}{\Sigma }\right) dt^{2}+\frac{\Sigma }{\Delta }dr^{2}+\Sigma d\theta ^{2}-2a%
\frac{\left( 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right) }{%
\Sigma }dtd\phi \notag \label{LE} \\
&&+\sin ^{2}\theta \left( r^{2}+a^{2}+a^{2}\sin ^{2}\theta \frac{2Mr-\alpha
r\ln \left( \frac{r}{|\alpha |}\right) }{\Sigma }\right) d\phi ^{2},
\end{eqnarray}%
where%
\begin{equation}
\Delta \equiv r^{2}-2Mr+a^{2}+\alpha r\ln \left( \frac{r}{|\alpha |}\right) ,\text{
}\Sigma\equiv r^{2}+a^{2}\cos ^{2}\theta .
\end{equation}%
Here $M$ and $a$ are the mass and the angular momentum per unit mass parameters of the black hole. Using the
Komar integral, the total mass $M_{T}$ of the black hole interior to the surface $%
r=r_{0}$ and the corresponding angular momentum $J_{T}$ around the axis of
rotation of the stationary spacetime are obtained as
\begin{equation}
M_{T}=M-\frac{\alpha \ln \left( \frac{r_{0}}{|\alpha |}\right) }{2ar_{0}}%
\left[ ar_{0}+\left( r_{0}^{2}+a^{2}\right) \tan ^{-1}\left( \frac{a}{r_{0}}%
\right) \right],
\end{equation}%
\begin{equation}
J_{T}=aM+\frac{\alpha }{4a^{2}r_{0}}\left[ \left( r_{0}^{2}+a^{2}\right)
^{2}\tan ^{-1}\left( \frac{a}{r_{0}}\right) -ar_{0}\left(
r_{0}^{2}+a^{2}\right) -2a^{3}r_{0}\ln \left( \frac{r_{0}}{|\alpha |}\right) %
\right].
\end{equation}
In the absence of PFDM ($\alpha =0$), the line element %
\eqref{LE} represents a Kerr black hole. The PFDM stress-energy tensor in
the standard orthogonal basis of the Kerr-like black hole can be written in
the diagonal form $\mathrm{diag}[-\rho,p_{r},p_{\theta},p_{\phi}]$ \cite{KerrDM}, where
\begin{equation}
-\rho=p_{r}=\frac{\alpha r}{8\pi \Sigma^2}, \qquad p_{\theta}=p_{\phi}=\frac{\alpha
r}{8\pi \Sigma^2}\left(r-\frac{\Sigma}{2r}\right).
\end{equation}
The location of the black hole horizons can be obtained by solving the
horizon equation
\begin{equation} \label{Horizneq}
\Delta =r^{2}-2Mr+a^{2}+\alpha r\ln \left( \frac{r}{|\alpha |}\right) =0.
\end{equation}%
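The number of real roots of \eqref{Horizneq} can be probed with a rough numerical sketch (in Python, with $M=1$; the routine and the sample parameters are ours, chosen only for illustration), counting the sign changes of $\Delta$ on a grid:

```python
import math

def Delta(r, a, alpha, M=1.0):
    # Horizon function: Delta = r^2 - 2Mr + a^2 + alpha*r*ln(r/|alpha|)
    return r*r - 2.0*M*r + a*a + alpha*r*math.log(r/abs(alpha))

def count_horizons(a, alpha, n=20000, r_max=10.0):
    # Count sign changes of Delta on a fine grid in (0, r_max]; each
    # sign change signals one horizon crossing.
    step = r_max/n
    roots, prev = 0, Delta(step, a, alpha)
    for i in range(2, n + 1):
        cur = Delta(i*step, a, alpha)
        if prev*cur < 0:
            roots += 1
        prev = cur
    return roots

assert count_horizons(a=0.5, alpha=-0.5) == 2   # two horizons: a black hole
assert count_horizons(a=1.5, alpha=-0.5) == 0   # no horizon: a naked singularity
```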
Note that, depending on the choice of the parameters $a$ and $\alpha$, %
\eqref{Horizneq} has no solution, one solution or two solutions. In each
case the line element \eqref{LE} represents a naked singularity, an extremal
black hole, or a black hole with inner $(r_-)$ and outer $(r_+)$ horizons,
respectively. To find the critical value $a_{c}$ of the spin parameter (the
maximum value for which \eqref{LE} can represent a black hole), in this
section we express the black hole parameters and
the radial distance in units of the black hole mass, that is, $a/M\rightarrow
a$, $\alpha /M\rightarrow \alpha $ and $r/M\rightarrow r$. Regarding the spin
parameter $a$ as a function of $r$ and $\alpha $, we can write
\begin{equation}
a^{2}(r,\alpha )=2r-r^{2}-\alpha r\ln \left( \frac{r}{|\alpha |}\right) .
\label{a2}
\end{equation}%
Now, to find the extreme value of the spin parameter $a$, we use the
extremum condition for $a^{2}$, that is, $da^{2}/dr=0$, which yields
\begin{equation}
f(r,\alpha )\equiv 2-2r-\alpha \ln \left( \frac{r}{|\alpha |}\right) -\alpha =0. \label{extreme}
\end{equation}%
Moreover, for any fixed $\alpha $,
\begin{equation}
\frac{df}{dr}=-2-\frac{\alpha }{r},
\end{equation}%
Note that,
\begin{eqnarray}
\text{For any }\alpha &<&0,\text{ }\frac{df}{dr}>0\text{ for }0<r<-\frac{
\alpha }{2}\text{ and }\frac{df}{dr}<0\text{ for }-\frac{\alpha }{2}<r. \\
\text{For any }\alpha &>&0,\text{ }\frac{df}{dr}<0\text{ for all }r.
\end{eqnarray}
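As a small sanity check (a Python sketch of our own; the sample values and tolerances are arbitrary), one can verify that $f(r,\alpha)$ in \eqref{extreme} agrees with a centered finite difference of $a^{2}(r,\alpha)$ from \eqref{a2}, and that for $\alpha<0$ the derivative $df/dr$ changes sign at $r=-\alpha/2$:

```python
import math

def a_squared(r, alpha):
    # Eq. (a2): a^2 = 2r - r^2 - alpha * r * ln(r/|alpha|)
    return 2.0*r - r*r - alpha*r*math.log(r/abs(alpha))

def f(r, alpha):
    # Eq. (extreme): f = d(a^2)/dr = 2 - 2r - alpha*ln(r/|alpha|) - alpha
    return 2.0 - 2.0*r - alpha*math.log(r/abs(alpha)) - alpha

def dfdr(r, alpha):
    # df/dr = -2 - alpha/r
    return -2.0 - alpha/r

alpha, r, h = -0.5, 1.2, 1e-6
# f agrees with a centered finite difference of a^2 ...
fd = (a_squared(r + h, alpha) - a_squared(r - h, alpha))/(2.0*h)
assert abs(fd - f(r, alpha)) < 1e-6
# ... and for alpha < 0, df/dr changes sign at r = -alpha/2 (= 0.25 here).
assert dfdr(0.1, alpha) > 0 and dfdr(1.0, alpha) < 0
```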
The above conditions show that the function $f(r,\alpha)$ behaves
differently for negative and positive values of $\alpha$. So we will discuss
these two cases separately.
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.4cm]{1.pdf}\newline
(a) \label{rea} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{2.pdf}\newline
(b) \label{reb} \endminipage\hfill
\caption{{\protect\footnotesize The horizon of the extremal black hole $r_e$
(black line) and critical values of the spin parameter $a_c$ (blue line)
versus negative and positive $\protect\alpha$ are plotted in panels (a) and
(b). Here $\overline{\protect\alpha} \approx-0.581977$, $\overline{a}
_c\approx1.25776655499709$, $\tilde{\protect\alpha}=1/(1+e)$ and $\tilde{a}
_c\approx0.855$.}}
\label{reandac}
\end{figure}
\subsection{Negative $\alpha$}
For any fixed $\alpha<0$, the function $f(r,\alpha)$ has a
maximum at $r=-\alpha/2$. The function increases on the interval $0< r <-\alpha/2$
and decreases for $-\alpha/2<r$. Thus, depending on the value of $%
\alpha $, $f(r,\alpha)$ has either no zeros or two zeros, say $r_1$ and $r_2$,
such that $r_1\leq -\frac{\alpha}{2} \leq r_2$. If $\alpha_\text{min}$ is
the minimum value of $\alpha$ for which $f(r,\alpha)$ has a zero, then
\begin{equation}
r_1=r_2 \quad \text{for} \quad \alpha=\alpha_\text{min}.
\end{equation}
That is, $r_1$ is a zero of $f(r,\alpha)$ of multiplicity $2$. Solving %
\eqref{extreme} for negative $\alpha$, we find that one zero of $f(r,\alpha)$
is
\begin{equation} \label{r1-}
r_1=\frac{\alpha }{2}\,\mathrm{ProductLog}\left( {-2e^{-1+\frac{2}{\alpha }}}\right),
\end{equation}
where $\mathrm{ProductLog}(x)$ denotes the Lambert \textit{W}-function. Now, if $r_1$ is a
zero of $f(r,\alpha)$ of multiplicity $2$, then it must also be a zero of $%
df/dr$, which gives
\begin{equation}
\alpha_\text{min}=-\frac{2}{\ln 2}\approx-2.88539.
\end{equation}
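This value can be checked directly (a short numerical sketch in Python, our own verification): at $\alpha_{\min}=-2/\ln 2$ the maximum of $f(\cdot,\alpha_{\min})$, located at $r=-\alpha_{\min}/2$, touches zero, so $f$ and $df/dr$ vanish simultaneously there, i.e. the two zeros merge:

```python
import math

def f(r, alpha):
    # f(r, alpha) = 2 - 2r - alpha*ln(r/|alpha|) - alpha
    return 2.0 - 2.0*r - alpha*math.log(r/abs(alpha)) - alpha

alpha_min = -2.0/math.log(2.0)    # approx -2.88539
r_star = -alpha_min/2.0           # location of the maximum of f for alpha < 0

assert abs(alpha_min + 2.88539) < 1e-5
assert abs(f(r_star, alpha_min)) < 1e-12       # f vanishes at its maximum ...
assert abs(-2.0 - alpha_min/r_star) < 1e-12    # ... together with df/dr: a double root
```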
Note that for any value of $\alpha$ in the range $\alpha_\text{min}\leq
\alpha<0$, the corresponding extremal values of $a$ are
\begin{equation}
a^2=r_1(r_1+\alpha) \quad \text{and} \quad a^2=r_2(r_2+\alpha).
\end{equation}
As $r_1\leq -\alpha/2$, in this case $a^2$ is negative, which implies that $r_1$
cannot be the horizon of the extremal black hole, and thus $r_2$ can be the
horizon of the extremal black hole. Further, for $\alpha=-2$, $r_2=2$ and $%
a=0$. If $\alpha$ is in the range $\alpha_\text{min}\leq \alpha<-2$, the
corresponding solution $r_2$ gives $a^2$ negative. Thus, for negative $\alpha$
the line element \eqref{LE} represents a black hole only if $-2\leq\alpha<0$%
. Solving \eqref{extreme} numerically for $-2\leq \alpha<0 $ gives the horizon
of the extremal black hole $r_2$, henceforth denoted by $r_e$.
Using $r_e$ in \eqref{extreme}, we get the critical value of the spin parameter as
\begin{equation}
a_c=\sqrt{r_e(r_e+\alpha)}.
\end{equation}
The graphs of $r_e$ and $a_c$ for $-2\leq \alpha <0$ are plotted in FIG:\ref{reandac} (a), which shows that $r_e$ decreases with increasing $\alpha$,
while $a_c$ attains its maximum value $\overline{a}_c\approx1.2577$ at
$\overline{\alpha} \approx-0.58197$. If $\alpha$ is in the range $-2\leq\alpha< \overline{\alpha}$, the critical value of the spin parameter $a_c$
increases, and if $\overline{\alpha}<\alpha<0$, then $a_c$ decreases.
Further, since $r_-\leq r_e\leq r_+$, we can say that for $-2\leq \alpha <0$
the size of the inner horizon $r_-$ decreases with increasing $\alpha$.
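The quoted maximum $\overline{a}_c$ at $\overline{\alpha}$ can be reproduced by a brute-force scan. This is a rough sketch under the same assumptions as before ($M=1$, the explicit form $f(r,\alpha)=2-2r-\alpha-\alpha\ln(r/|\alpha|)$, and $a_c^2=r_e(r_e+\alpha)$):

```python
import numpy as np
from scipy.optimize import brentq

def f(r, alpha):
    # assumed extremality condition with M = 1
    return 2 - 2*r - alpha - alpha*np.log(r / abs(alpha))

def a_c(alpha):
    # r_e is the larger zero r_2, lying beyond the maximum at r = -alpha/2
    r_e = brentq(f, -alpha/2, 10.0, args=(alpha,))
    return np.sqrt(r_e * (r_e + alpha))

alphas = np.linspace(-1.9, -0.05, 2000)
vals = np.array([a_c(al) for al in alphas])
i = vals.argmax()
print(alphas[i], vals[i])   # ~ (-0.582, 1.2578)
```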
\begin{figure}[th]
\centering
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{3.pdf}\newline
(a) \label{a} \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{4.pdf}\newline
(b) \label{b} \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{5.pdf}\newline
(c) \label{c} \endminipage\hfill
\par
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{6.pdf}\newline
(d) \label{6} \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{7.pdf}\newline
(e) \label{e} \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.3in,height=1.6in]{8.pdf}\newline
(f) \label{f} \endminipage\hfill
\caption{{\protect\footnotesize The graphs of $y=r^{2}-2r+a^{2}$ for
different values of $a$, $a<a_{c}$ (dash-dotted line), $a=a_{c}$ (dashed
line) and $a>a_{c}$ (dotted line), and of $y=-\protect\alpha r\ln (\frac{r}{|\protect\alpha |})$ (solid line) are plotted. For any $\protect\alpha $ and
$a<a_{c}$, the points of intersection of \emph{solid lines} and
\emph{dash-dotted parabolas} in (a)-(f) give the locations of the inner horizon
$r_{-}$ and the event horizon $r_{+}$. For $a=a_{c}$, the point of intersection
of \emph{solid lines} and \emph{dashed parabolas} gives the location of the
horizon of the extremal black hole $r_{e}$. For $a>a_{c}$, the \emph{solid line}
and the \emph{dotted line} do not intersect each other, indicating that there
does not exist a black hole and the line element \eqref{LE} represents a
naked singularity.}}
\label{Fig4}
\end{figure}
\subsection{Case II: Positive $\alpha $}
To discuss the critical value $a_{c}$ for any positive $\alpha $, we first
find the zeros of the function $f(r,\alpha )$. Since for any chosen positive
$\alpha $ the function $f(r,\alpha )$ is a decreasing function of $r$, it has
at most one zero. Further,
\begin{equation}
f(\overline{r}_{1},\alpha )=2>0\text{ \ with \ }\overline{r}_{1}=\frac{
\alpha }{2}\,\mathrm{ProductLog}\left( \frac{2}{e}\right) ,
\end{equation}%
and
\begin{equation}
f\left( \overline{r}_{2},\alpha \right) =-2\alpha e^{-1+\frac{2}{\alpha }}<0
\text{ \ with \ }\overline{r}_{2}=\alpha e^{-1+\frac{2}{\alpha }}.
\end{equation}%
By the intermediate value theorem, we can conclude that for any $\alpha >0$,
$f\left( r,\alpha \right) $ has exactly one zero (say $r_{e}$) such that
$\overline{r}_{1}<r_{e}<\overline{r}_{2}$. Solving \eqref{extreme} for $r$ yields
\begin{equation}
r_{e}=\frac{\alpha }{2}\,\mathrm{ProductLog}\left( {2e^{-1+\frac{2}{\alpha }}}\right),
\end{equation}%
and the corresponding extreme value of spin parameter $a_{c}$ is obtained as
\begin{equation}
a=\sqrt{r_{e}\left( 2-r_{e}-\alpha \ln \left( \frac{r_{e}}{\alpha }\right)
\right) },
\end{equation}%
or
\begin{equation}\label{ac}
a_{c}=\frac{\alpha }{2}\sqrt{\mathrm{ProductLog}\left( 2e^{-1+\frac{2}{\alpha }
}\right) \left[ 2+\mathrm{ProductLog}\left( 2e^{-1+\frac{2}{\alpha }}\right) \right] }.
\end{equation}%
The horizon of the extremal black hole $r_{e}$ and the critical value of the spin
parameter $a_{c}$ versus $\alpha $ are plotted in FIG:\ref{reandac} (b), which
shows that the size of the extremal black hole has its minimum value at $\alpha
=2/3 $: it decreases for $0<\alpha <2/3$ and increases for $2/3<\alpha $. For
$\tilde{\alpha}=1/(1+e)$, $a_{c}$ has its minimum value $\tilde{a}_{c}\approx
0.855 $. Further, for $0<\alpha <\tilde{\alpha}$, $a_{c}$ decreases, while for
$\tilde{\alpha}<\alpha $ it increases.
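These special values follow directly from Eq. \eqref{ac} and can be checked numerically with SciPy's `lambertw` standing in for ProductLog; the closed form $\tilde{a}_c=\sqrt{e/(1+e)}$ noted in the comment below is our own simplification, not stated in the text.

```python
import numpy as np
from scipy.special import lambertw

def r_e(alpha):
    # horizon of the extremal black hole for alpha > 0 (M = 1)
    return 0.5 * alpha * lambertw(2*np.exp(-1 + 2/alpha)).real

def a_c(alpha):
    # Eq. (ac)
    W = lambertw(2*np.exp(-1 + 2/alpha)).real
    return 0.5 * alpha * np.sqrt(W * (W + 2))

print(r_e(2/3))                        # 2/3, since ProductLog(2 e^2) = 2
at = 1/(1 + np.e)                      # alpha-tilde
print(a_c(at))                         # ~0.85502 (= sqrt(e/(1+e)), our simplification)
print(a_c(at - 0.05), a_c(at + 0.05))  # both larger: a_c is minimal near alpha-tilde
```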
We have plotted $y=r^{2}-2r+a^{2}$ for different values of $a$ and
$y=-\alpha r\ln \left( \frac{r}{|\alpha |}\right) $ for negative $\alpha $ in
FIG:\ref{Fig4}(a)-(c) and for positive $\alpha$ in FIG:\ref{Fig4}(d)-(f). In
each case, the values of $r$ at which these curves intersect are the horizons of the
black hole. It is seen that for any value of $\alpha\geq-2$, if $a<a_{c}$
the curves intersect at two values of $r$, which are the locations of the inner
horizon ($r_{-}$) and the event horizon ($r_{+}$). If $a=a_{c}$, the horizons
merge into a single horizon $r_{e}$, the horizon of the extremal black hole, and
if $a>a_{c}$ the curves do not intersect each other, that is, the
horizon equation has no solution. Thus, we conclude that for any fixed $\alpha\geq-2$ the
line element \eqref{LE} represents a black hole with two horizons $r_{-}$
and $r_{+}$ only if $a<a_{c}$. For $a=a_{c}$, the two horizons $r_{+}$ and
$r_{-}$ merge into a single horizon $r_{e}$ and \eqref{LE} is an extremal
black hole. However, for any $a>a_{c}$, the line element represents a naked
singularity.
\section{Spin Precession Frequency}
In this section, we will discuss the spin precession frequency of a test
gyroscope attached to an observer that is stationary with respect to some fixed star,
arising from the frame-dragging effects of the Kerr-like black hole in the PFDM. The precession frequency ($\Omega _{p}$) of a test
gyroscope attached to a stationary observer having $4$-velocity $
u=(-K^{2})^{-1/2}K$ in a stationary spacetime with timelike Killing vector
field $K=\partial _{0}+\Omega \partial _{c}$, where $x^{0}=t$ and $x^{c}=\phi $, is defined by \cite{NS}
\begin{equation}
\vec{\Omega}_{p}=\frac{\varepsilon _{ckl}}{2\sqrt{-g}\left( 1+2\Omega \frac{
g_{0c}}{g_{00}}+\Omega ^{2}\frac{g_{cc}}{g_{00}}\right) }\left[ \left(
g_{0c,k}-\frac{g_{0c}}{g_{00}}g_{00,k}\right) +\Omega \left( g_{cc,k}-\frac{
g_{cc}}{g_{00}}g_{00,k}\right) +\Omega ^{2}\left( \frac{g_{0c}}{g_{00}}
g_{cc,k}-\frac{g_{cc}}{g_{00}}g_{0c,k}\right) \right] \partial _{l},
\label{SPD}
\end{equation}%
where $\varepsilon _{ckl}$ is the Levi-Civita symbol and $g$ is the
determinant of the metric $g_{\mu \nu }$. Using the metric coefficients from %
\eqref{LE} in \eqref{SPD} yields
\begin{equation}
\vec{\Omega}_{p}=\frac{\left( F\sqrt{\Delta }\cos \theta \right) \hat{r}%
\text{ }+\left( H\sin \theta \right) \hat{\theta}\text{ }}{\Sigma ^{3/2}%
\left[ \Sigma -\left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right)
\right\} \left( 1-2\Omega a\sin ^{2}\theta \right) -\Omega ^{2}\sin
^{2}\theta \left\{ \left( r^{2}+a^{2}\right) \Sigma +a^{2}\sin ^{2}\theta
\left( 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right) \right\} %
\right] }, \label{Op}
\end{equation}%
where
\begin{eqnarray}
F &=&a\left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right\} -%
\frac{\Omega }{8}\left\{ 3a^{4}+8r^{4}+8a^{2}r\left( 2M+r\right)
+a^{2}\left( a^{2}\cos 4\theta -8\alpha r\ln \left( \frac{r}{|\alpha |}%
\right) +4\cos 2\theta \left( 2\Delta -a^{2}\right) \right) \right\} \notag
\\
&&+\Omega ^{2}a^{3}\left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right)
\right\} \sin ^{4}\theta , \label{F} \\
H &=&a\left[ M\left( r^{2}-a^{2}\cos ^{2}\theta \right) +\frac{\alpha }{2}%
\left\{ \Sigma -\left( r^{2}-a^{2}\cos ^{2}\theta \right) \ln \left( \frac{r%
}{|\alpha |}\right) \right\} \right] \notag \\
&&+\Omega \left[ a^{4}r\cos ^{4}\theta +r^{2}\left( r^{3}-a^{2}M\left(
1+\sin ^{2}\theta \right) -3Mr^{2}\right) +a^{2}\cos ^{2}\theta \left\{
2r^{3}+a^{2}M\left( 1+\sin ^{2}\theta \right) -Mr^{2}\right\} \right. \notag
\\
&&\left. -\frac{\alpha }{16}\left\{ a^{2}\left( 5a^{2}+16r^{2}\right)
+8r^{4}+\left( 5a^{4}-16a^{2}r^{2}-24r^{4}\right) \ln \left( \frac{r}{%
|\alpha |}\right) +a^{4}\left( 4\cos 2\theta -\cos 4\theta \right) \left\{
1+\ln \left( \frac{r}{|\alpha |}\right) \right\} \right\} \right] \notag \\
&&+a\Omega ^{2}\sin ^{2}\theta \left[ M\left\{ r^{2}\left(
3r^{2}+a^{2}\right) +a^{2}\cos ^{2}\theta \left( r^{2}-a^{2}\right) \right\}
+\frac{\alpha }{2}\left\{ a^{2}\cos ^{2}\theta \left\{ r^{2}+a^{2}+\left(
a^{2}-r^{2}\right) \ln \left( \frac{r}{|\alpha |}\right) \right\} \right.
\right. \notag \\
&&\left. \left. +r^{2}\left\{ r^{2}+a^{2}-\left( 3r^{2}+a^{2}\right) \ln
\left( \frac{r}{|\alpha |}\right) \right\} \right\} \right] , \label{G}
\end{eqnarray}%
and $\hat{r}$, $\hat{\theta}$ are the unit vectors in the $r$ and $\theta $
directions, respectively. In the limiting case $\alpha =0$, the spin
precession frequency of a Kerr black hole is recovered \cite{NS}. Note
that the above expression for the precession frequency \eqref{SPD} is valid
only for a timelike observer at fixed $r$ and $\theta $, which places the
following restriction on the angular velocity $\Omega $ of the observer:
\begin{equation}
\Omega _{-}(r,\theta )<\Omega (r,\theta )<\Omega _{+}\left( r,\theta \right)
, \label{range}
\end{equation}%
with
\begin{equation}
\Omega _{\pm }=\frac{a\sin \theta \left\{ 2Mr-\alpha r\ln \left( \frac{r}{%
|\alpha |}\right) \right\} \pm \Sigma \sqrt{\Delta }}{\sin \theta \left[
\left( r^{2}+a^{2}\right) \Sigma +a^{2}\sin ^{2}\theta \left\{ 2Mr-\alpha
r\ln \left( \frac{r}{|\alpha |}\right) \right\} \right] }. \label{omegapm}
\end{equation}%
At the black hole horizons, $\Omega _{+}$ and $\Omega _{-}$ coincide and no timelike
observer can exist there; hence the expression for the precession frequency
$\Omega _{p}$ is not valid at the horizons, but we can still study the
behavior of the precession frequency near the black hole horizon.
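This closing of the allowed range at the horizon can be checked numerically. The sketch below is ours: it assumes the standard Kerr-like metric functions $\Sigma=r^2+a^2\cos^2\theta$ and $\Delta=r^2-2Mr+a^2+\alpha r\ln(r/|\alpha|)$ with $M=1$ (consistent with the horizon analysis above), and the illustrative parameters $a=0.5$, $\alpha=1$ for which \eqref{LE} is a black hole.

```python
import numpy as np
from scipy.optimize import brentq

M, a, alpha = 1.0, 0.5, 1.0                      # a < a_c: black hole case
Sigma = lambda r, th: r**2 + a**2*np.cos(th)**2
Delta = lambda r: r**2 - 2*M*r + a**2 + alpha*r*np.log(r/abs(alpha))
mass_term = lambda r: 2*M*r - alpha*r*np.log(r/abs(alpha))

def Omega_pm(r, th, sign):
    # Eq. (omegapm); Delta is clipped at 0 to absorb root-finding round-off
    num = a*np.sin(th)*mass_term(r) + sign*Sigma(r, th)*np.sqrt(max(Delta(r), 0.0))
    den = np.sin(th)*((r**2 + a**2)*Sigma(r, th) + a**2*np.sin(th)**2*mass_term(r))
    return num/den

r_plus = brentq(Delta, 1.0, 3.0)                 # event horizon
th = np.pi/3
print(Omega_pm(r_plus, th, +1) - Omega_pm(r_plus, th, -1))   # ~0: the range closes
```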
\subsection{Lense-Thirring precession frequency}
The expression for the precession frequency \eqref{Op} is valid for all
stationary observers inside and outside the ergosphere whose angular
velocity $\Omega$ lies in the restricted range given by \eqref{range}. The
precession frequency contains the effects of the spacetime rotation
(LT precession) as well as of the spacetime curvature (geodetic precession).
If we set $\Omega =0$ in \eqref{Op}, the expression for the LT precession
frequency for the Kerr-like black hole in PFDM is obtained
as
\begin{figure}[!ht]
\centering
\minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{9.pdf}\newline
(a) \endminipage\hfill \minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{10.pdf}\newline
(b) %
\par
\endminipage\hfill \minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{11.pdf}\newline
(c) %
\par
\endminipage\hfill
\caption{{\protect\footnotesize The LT precession frequency $\Omega_{LT}$
(in $M^{-1}$) versus $r$ (in $M$) for different parameters is plotted.}}
\label{LTFig}
\end{figure}
\begin{equation} \label{LTvf}
\vec{\Omega}_{LT}=a\frac{\left[ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}
\right) \right] \sqrt{\Delta }\cos \theta \,\hat{r}+\sin \theta \left[
M\left( r^{2}-a^{2}\cos ^{2}\theta \right) +\frac{\alpha }{2}\left\{ \Sigma
-\left( r^{2}-a^{2}\cos ^{2}\theta \right) \ln \left( \frac{r}{|\alpha |}
\right) \right\} \right] \hat{\theta}}{\Sigma ^{3/2}\left[ \Sigma
-\left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right\} \right] }.
\end{equation}
\begin{figure}[!ht]
\centering
\minipage{0.30\textwidth} %
\includegraphics[width=2.0in,height=2.0in]{12.pdf}\newline
(a) \label{7} \endminipage\hfill \minipage{0.30%
\textwidth} \includegraphics[width=2.0in,height=2.0in]{13.pdf}%
\newline
(b) \endminipage\hfill \minipage{0.30%
\textwidth} \includegraphics[width=2.0in,height=2.0in]{14.pdf}%
\newline
(c) \endminipage\hfill
\caption{{\protect\footnotesize The vector field of the LT precession
frequency \eqref{LTvf} (in the Cartesian plane corresponding to $(r,\protect
\theta)$) is plotted for black holes in panels (a) and (b), for negative and positive
$\protect\alpha$, and for a naked singularity in panel (c). The field
lines show that for a black hole the vector field is defined outside the
ergosphere only, while for a naked singularity it is finite up to the ring
singularity along all directions.}}
\label{VF}
\end{figure}
The magnitude of the LT precession frequency is given by
\begin{equation} \label{LT}
\Omega _{LT}=a\frac{\sqrt{\left[ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}
\right) \right] ^{2}|\Delta |\cos ^{2}\theta \text{ }+\left[ M\left(
r^{2}-a^{2}\cos ^{2}\theta \right) +\frac{\alpha }{2}\left\{ \Sigma -\left(
r^{2}-a^{2}\cos ^{2}\theta \right) \ln \left( \frac{r}{|\alpha |}\right)
\right\} \right] ^{2}\sin ^{2}\theta }\text{ }}{\Sigma ^{3/2}\left\vert
\Sigma -\left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right\}
\right\vert }.
\end{equation}
The magnitude of the LT precession frequency $\Omega_{LT}$ is plotted against $r$
in FIG: (\ref{LTFig}), which indicates that the LT precession frequency
increases with increasing rotation $a$ of the black hole as well as with the
PFDM parameter $\alpha$. Further, for $\alpha>0$ the LT precession
frequency is minimum near the polar axis ($\theta=0$) and increases towards the
equatorial plane ($\theta=\pi/2$), whereas for $\alpha<0$ it is minimum in the
equatorial plane and increases towards the polar axis. The vector field of
the LT precession frequency for the black hole and the naked singularity is shown in FIG:(%
\ref{VF}), which shows that the LT precession frequency for the black hole remains
finite outside the ergoregion and diverges at its boundary, while for the
naked singularity it remains finite up to the ring singularity.
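The divergence at the ergosphere boundary, where $g_{tt}=0$, i.e. $\Sigma=2Mr-\alpha r\ln(r/|\alpha|)$, can be seen directly from \eqref{LT}. The following is a minimal numerical sketch, under our assumptions $\Sigma=r^2+a^2\cos^2\theta$, $\Delta=r^2-2Mr+a^2+\alpha r\ln(r/|\alpha|)$, $M=1$, with the illustrative black-hole parameters $a=0.5$, $\alpha=1$:

```python
import numpy as np
from scipy.optimize import brentq

M, a, alpha, th = 1.0, 0.5, 1.0, np.pi/4
Sigma = lambda r: r**2 + a**2*np.cos(th)**2
mterm = lambda r: 2*M*r - alpha*r*np.log(r/abs(alpha))
Delta = lambda r: r**2 - 2*M*r + a**2 + alpha*r*np.log(r/abs(alpha))

def Omega_LT(r):
    # magnitude of Eq. (LT) at fixed theta
    A = mterm(r)**2 * abs(Delta(r)) * np.cos(th)**2
    B = (M*(r**2 - a**2*np.cos(th)**2)
         + 0.5*alpha*(Sigma(r) - (r**2 - a**2*np.cos(th)**2)*np.log(r/abs(alpha))))**2 \
        * np.sin(th)**2
    return a*np.sqrt(A + B) / (Sigma(r)**1.5 * abs(Sigma(r) - mterm(r)))

r_sl = brentq(lambda r: Sigma(r) - mterm(r), 1.0, 3.0)   # static limit at this theta
for eps in (1e-1, 1e-2, 1e-3):
    print(Omega_LT(r_sl + eps))    # grows without bound as r -> static limit
```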
\section{Geodetic precession}
For $a=0$, the line element \eqref{LE} reduces to the Schwarzschild black
hole in PFDM \cite{SCindark}. The Schwarzschild black
hole in PFDM is non-rotating and has zero precession due to frame-dragging effects.
However, the precession frequency $\Omega_{p}$ of
a test gyroscope is still nonzero because of the curvature of spacetime; this is
called the geodetic precession frequency. The geodetic precession effects can be
obtained as
\begin{equation}
\vec{\Omega}_{p}|_{a=0}=\Omega \frac{\left( -\cos \theta \sqrt{
r^{2}-2Mr+\alpha r\ln \left( \frac{r}{|\alpha |}\right) }\right) \hat{r}+\sin
\theta \left( r-3M-\frac{\alpha }{2}\left\{ 1-3\ln \left( \frac{r}{|\alpha |}
\right) \right\} \right) \hat{\theta}}{r-2M+\alpha \ln \left( \frac{r}{
|\alpha |}\right) -\Omega ^{2}r^{3}\sin ^{2}\theta }.
\end{equation}%
Due to the spherically symmetric geometry of the static black hole in the PFDM, the geodetic precession frequency is the same over any
spherically symmetric surface around the black hole. Thus, without loss of
generality, we can set $\theta=\pi/2$ and study the geodetic frequency in the
equatorial plane. In this plane, for any observer in a circular orbit the
magnitude of the precession frequency is equal to the Kepler frequency given by
\begin{equation}
{\Omega}_{p}|_{a=0}\equiv\Omega _{\text{Kep}}=\left[ \frac{M}{r^{3}}+\frac{\alpha
}{2r^{3}}\left\{ 1-\ln \left( \frac{r}{|\alpha |}\right) \right\} \right]
^{1/2}.
\end{equation}
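This expression can be cross-checked against the standard relation $\Omega_{\text{Kep}}^2=f'(r)/(2r)$ for a static metric. The sketch below is ours: it assumes the lapse $f(r)=1-2M/r+(\alpha/r)\ln(r/|\alpha|)$, the PFDM form implied by the horizon function used above.

```python
import numpy as np

M, alpha = 1.0, 0.5

def f_lapse(r):
    # assumed g_tt lapse of the static PFDM solution
    return 1 - 2*M/r + (alpha/r)*np.log(r/abs(alpha))

def Omega_Kep(r):
    return np.sqrt(M/r**3 + alpha/(2*r**3)*(1 - np.log(r/abs(alpha))))

r, h = 8.0, 1e-6
fprime = (f_lapse(r + h) - f_lapse(r - h)) / (2*h)     # numerical f'(r)
print(Omega_Kep(r)**2 - fprime/(2*r))                  # ~0: Omega_Kep^2 = f'/(2r)
```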
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.4cm]{15.pdf}\newline
(a) \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{16.pdf}\newline
(b) \endminipage\hfill
\caption{{\protect\footnotesize The geodetic precession frequency $\Omega_{
\text{geodetic}}$ versus $r$ for $\protect\alpha<0$ in panel (a) and for $
\protect\alpha> 0$ in panel (b) is plotted, which shows that the geodetic
precession frequency increases with increasing $\protect\alpha$. }}
\label{geo}
\end{figure}
The above expression for the precession frequency is valid in a Copernican frame, i.e., a frame that does not rotate relative to the inertial frame at asymptotic infinity (the fixed stars),
computed with respect to the proper time $\tau$, which is related to the coordinate
time $t$ via
\begin{equation}
d\tau =\sqrt{1-\frac{3M}{r}-\frac{\alpha }{2r}\left\{ 1-3\ln \left( \frac{r}{
|\alpha |}\right) \right\} }dt.
\end{equation}%
Using this transformation, the geodetic precession frequency, associated with
the change in the angle of the spin vector after one complete revolution of
the observer around the black hole, is given in the coordinate basis by
\begin{equation}
\Omega _{\text{geodetic}}=\left[ \frac{M}{r^{3}}+\frac{\alpha }{2r^{3}}
\left\{ 1-\ln \left( \frac{r}{|\alpha |}\right) \right\} \right]
^{1/2}\left( 1-\sqrt{1-\frac{3M}{r}-\frac{\alpha }{2r}\left\{ 1-3\ln \left(
\frac{r}{|\alpha |}\right) \right\} }\right).
\end{equation}
In the absence of PFDM around the black hole, that is,
for $\alpha=0$, the geodetic precession frequency of a Schwarzschild black
hole is recovered \cite{geosc1,geosc2}. The geodetic precession
frequency is plotted against $r$ for negative and positive $\alpha$ in panels
(a) and (b) of FIG: (\ref{geo}), which shows that with increasing PFDM parameter $\alpha$ the magnitude of the geodetic precession
frequency increases. Further, for any fixed $\alpha$ the geodetic precession
frequency decreases with increasing radius of the circular orbit.
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{17.pdf}\newline
(a) \endminipage%
\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{18.pdf}\newline
(b) \endminipage%
\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{19.pdf}\newline
(c) \endminipage%
\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{20.pdf}\newline
(d) %
\endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{21.pdf}\newline
(e) \endminipage%
\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.3cm,height=5.8cm]{22.pdf}\newline
(f) %
\endminipage\hfill
\caption{{\protect\footnotesize The magnitude of the spin
precession frequency $\Omega_p$ (in $M^{-1}$) versus $r$ (in $M$) is plotted
for the black hole in the left column and for the naked singularity in the right column. For
the black hole we take $a=0.5$, $\protect\alpha=1$, and for the naked singularity we
take $a=1.5$, with $k=0.1,0.5,0.9$ in the first, second and third row,
respectively. For the black hole the precession frequency $\Omega_p$ diverges
for $k=0.1,0.9$ and remains finite for $k=0.5$ as the observer approaches the
event horizon along any direction, whereas in the naked singularity case it
remains finite along all directions except at the ring singularity ($r=0$,~$
\protect\theta=\protect\pi/2$).}}
\label{opfig}
\end{figure}
\section{Distinguishing black hole from naked singularity}
In this section, using the precession frequency of a gyroscope, we will
differentiate a Kerr-like black hole in PFDM from a naked
singularity and verify our results obtained in section II. For this we
first express the angular velocity of the timelike observer in terms of a
parameter $k$ such that
\begin{equation}
\Omega=k\Omega_{+}+(1-k)\Omega_{-},
\end{equation}
where $0<k<1$ and $\Omega_{\pm }$ are given by \eqref{omegapm}. Thus, for any
timelike observer the angular velocity is defined as
\begin{equation}
\Omega =\frac{a\sin \theta \left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}
\right) \right\} -\left( 1-2k\right) \Sigma \sqrt{\Delta }}{\sin \theta %
\left[ \left( r^{2}+a^{2}\right) \Sigma +a^{2}\sin ^{2}\theta \left\{
2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right\} \right] }.
\end{equation}
Note that an observer with angular velocity parameter $k=1/2$ is known as a
zero-angular-momentum observer (ZAMO) and has the angular velocity
\begin{equation}
\Omega=-\frac{g_{t\phi}}{g_{\phi\phi}}=\frac{a \left\{ 2Mr-\alpha r\ln
\left( \frac{r}{|\alpha |} \right) \right\} }{ \left( r^{2}+a^{2}\right)
\Sigma +a^{2}\sin ^{2}\theta \left\{ 2Mr-\alpha r\ln \left( \frac{r}{|\alpha
|}\right) \right\} }.
\end{equation}
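That $k=1/2$ reproduces $-g_{t\phi}/g_{\phi\phi}$ is immediate, since the $\sqrt{\Delta}$ term drops out of the parametrized angular velocity. A quick numerical confirmation, under our assumptions $\Sigma=r^2+a^2\cos^2\theta$, $\Delta=r^2-2Mr+a^2+\alpha r\ln(r/|\alpha|)$, $M=1$, at a sample point outside the horizon:

```python
import numpy as np

M, a, alpha, r, th = 1.0, 0.5, 1.0, 4.0, np.pi/3   # sample point outside the horizon
Sigma = r**2 + a**2*np.cos(th)**2
mterm = 2*M*r - alpha*r*np.log(r/abs(alpha))
Delta = r**2 - 2*M*r + a**2 + alpha*r*np.log(r/abs(alpha))
den = np.sin(th)*((r**2 + a**2)*Sigma + a**2*np.sin(th)**2*mterm)

def Omega_k(k):
    # Omega = k*Omega_+ + (1-k)*Omega_-
    return (a*np.sin(th)*mterm - (1 - 2*k)*Sigma*np.sqrt(Delta)) / den

Omega_zamo = a*mterm / ((r**2 + a**2)*Sigma + a**2*np.sin(th)**2*mterm)
print(Omega_k(0.5) - Omega_zamo)   # ~0: k = 1/2 is the ZAMO
```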
The gyroscope attached to a ZAMO is locally nonrotating and useful for
studying physical precession near astrophysical objects \cite{Bardeen}.
Further, the behavior of the precession frequency of a gyroscope attached to a ZAMO
is different from that of all other observers; this situation is explained in FIG:
(\ref{opfig}). Now the magnitude of the precession frequency $\Omega_{p}$ in terms of the
parameter $k$ is written as
\begin{figure}[!ht]
\centering
\minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{23.pdf}\newline
(a) \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{24.pdf}\newline
(b) \endminipage\hfill %
\minipage{0.31\textwidth} %
\includegraphics[width=2.2in,height=1.6in]{25.pdf}\newline
(c) \endminipage\hfill
\caption{{\protect\footnotesize The precession frequency $\Omega_p$ (in $
M^{-1}$) versus $r$ (in $M$) for different parameters is plotted. The graphs
show that for black holes $\Omega_p$ diverges near the horizons, while for
naked singularities it remains finite. }}
\label{opf}
\end{figure}
\begin{equation}
\Omega _{p}=\frac{\left\vert \left( r^{2}+a^{2}\right) \Sigma +a^{2}\sin
^{2}\theta \left( 2Mr-\alpha r\ln \left( \frac{r}{|\alpha |}\right) \right)
\right\vert }{4\Sigma ^{7/2}k\left( 1-k\right) \left\vert \Delta \right\vert
}\left[ F^{2}|\Delta |\cos ^{2}\theta +H^{2}\sin ^{2}\theta \right] ^{1/2},
\end{equation}
where $F$ and $H$ are given by \eqref{F} and \eqref{G}. From the denominator
of the above equation one can see that it vanishes at the horizons of the
black hole and at the ring singularity. Thus, we study in detail the behavior of $\Omega_{p}$ for
different values of the spacetime parameters $a$ and $\alpha$ and of the observer's
angular velocity parameter $k$.
In FIG: (\ref{opfig}), we have plotted $\Omega_{p}$ for the black hole with $a=0.5$ and $
\alpha=1$ in the left column and for the naked singularity with $a=1.5$ and $\alpha=1$
in the right column, for observers with angular velocity parameter $
k=0.1,0.5,0.9$ in the first, second and third row, respectively. It is seen that
for a black hole the precession frequency $\Omega_{p}$ of a gyroscope
attached to any observer except a ZAMO diverges whenever the observer approaches the
horizons along any direction. However, it remains finite for a ZAMO
at the horizon. On the other hand, for a naked singularity the precession
frequency remains finite even as the observer approaches $r=0$ along any
direction except $\theta=\pi/2$. Further, if the observer approaches $r=0$
along $\theta=\pi/2$, the precession frequency $\Omega_p$ becomes very large
because of the ring singularity. In FIG: (\ref{opf}), we further illustrate that for
all other choices of the parameters the behavior of the precession
frequency $\Omega_{p}$ remains the same as for the black hole with
$a=0.5$, $\alpha=1$ and for the naked singularity with $a=1.5$, $\alpha=1$. That
is, for a black hole with any $a$ and $\alpha$ the precession frequency $
\Omega_{p}$ of a gyroscope attached to any observer except a ZAMO diverges near the
horizons, while for a naked singularity it remains finite up to $r=0$ along any
direction except $\theta=\pi/2$.
Finally, using the spin precession frequency of a test gyroscope attached to
a stationary observer, we can differentiate a black hole from a naked
singularity. The four-velocity of an observer in the spacetime of the line
element \eqref{LE} is timelike if the azimuthal component of the velocity
(equal to the angular velocity) $\Omega$ at fixed $(r,\theta)$ remains between
$\Omega_{-}$ and $\Omega_{+}$. Further, the angular velocity can be
parameterized by $k$ such that $0<k<1$. Consider two observers with
different angular velocities $\Omega_{1}$ and $\Omega_{2}$ approaching the
astrophysical object in the PFDM of line element %
\eqref{LE} along two different directions $\theta_1$ and $\theta_2$. If (a)
the precession frequency $\Omega_p$ of the test gyroscope of at least one
observer becomes arbitrarily large as the observer approaches the central object
along any direction, then the object is a black hole, and (b) if the
precession frequency of either observer becomes arbitrarily large along at most
one of the two directions as it approaches the central object,
then the object is a naked singularity.\\
\section{Observational aspects}
Experimental observations obtained with the Rossi X-ray Timing Explorer (RXTE) reveal the phenomenon of quasi-periodic oscillations (QPOs) upon analyzing the power spectrum of the time series
of the X-rays \cite{stella1,stella2,xray2,Revnew}.
There are various sources of cosmic X-rays, and one of them is accretion onto stellar-mass compact objects like black holes and neutron stars.
Careful monitoring identifies two types of QPOs, namely the high-frequency quasi-periodic oscillations (HF QPOs) and the low-frequency ones
(LF QPOs). Although the theoretical explanation behind this effect is not
yet well understood, QPOs are often linked with the relativistic precession
of the accretion disk near black holes or neutron stars. QPOs may potentially be a useful tool in astrophysics for investigating new features related to the accretion process near black holes. For example, within a certain model, X-ray timing measurements of QPOs can be used to estimate the spin angular momentum and the mass of the black hole, which is of significant importance in astrophysics \cite{motta}. Experimental data show that the observed HF QPOs belong to the interval $50 - 450$ Hz. Furthermore, there are three classes of LF QPOs, known as type-A, type-B, and type-C LF QPOs, whose
typical frequencies belong to the intervals $6.5-8$ Hz, $0.8-6.4$ Hz, and $0.01-30$ Hz,
respectively.
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{26.pdf}\newline
(a) \label{ISCOpstv} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{27.pdf}\newline
(b) \label{ISCOngtv} \endminipage\hfill
\caption{{\protect\footnotesize The ISCO of the black hole (BH) and naked singularity (NS) (in units of $M$) for positive and negative PFDM parameter $\alpha$ is plotted in panels (a) and (b), respectively. For $\alpha=-0.5,-0.75,-1.0$, the critical values of the spin parameter are $a_c=1.25486557,1.246502349,1.190826462$, and for positive values of $\alpha$ the critical value $a_c$ is obtained from \eqref{ac}.}}
\label{ISCO}
\end{figure}
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{Onodngtv.pdf}\newline
(a) \label{Onodngtv} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{Omganodpstv.pdf}\newline
(b) \label{Omganodpstv} \endminipage\hfill
\caption{{\protect\footnotesize We plot $\Omega_{nod}$ (in units of $M^{-1}$) as a function of $r$ (in units of $M$) with $\alpha=-1$ and $\alpha=1$. As can be seen, for black holes the NPPF $\Omega_{nod}$
decreases as we increase $r$. From the plots we can further observe that a naked singularity is characterized by a peak value of $\Omega_{nod}$. Furthermore, $\Omega_{nod}$ vanishes at some radius $r_0$. The negative values of $\Omega_{nod}$ can physically be interpreted as a reversal of the precession direction.}}
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{Onodn5.pdf}\newline
(c) \label{Onodngtv5} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{Omganod5.pdf}\newline
(d) \label{Omganodpstv5} \endminipage\hfill
\caption{{\protect\footnotesize We plot the NPPF $\Omega_{nod}$ as a function of $r$ for the black hole case with $\alpha=-0.5$ and $\alpha=0.5$. Clearly $\Omega_{nod}$ always decreases with the increase of $r$. }}
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{Onodnsn5.pdf}\newline
(e) \label{Onodngtv51} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{Onodnsp5.pdf}\newline
(f) \label{Omganodpstv52} \endminipage\hfill
\caption{{\protect\footnotesize The $\Omega_{nod}$ as a function of $r$ is plotted in the case of naked singularities with $\alpha=-0.5$ and $\alpha=0.5$. From the plots we can see that $\Omega_{nod}$ increases initially, reaches a particular peak value, and finally decreases with the increase of $r$. The negative values of $\Omega_{nod}$ show that the precession direction has changed. }}
\label{fig8}
\end{figure}
\begin{figure}[!ht]
\centering
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{Omgarn5.pdf}\newline
(a) \label{Onodngtvv} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{Omgar5.pdf}\newline
(b) \label{Omgar} \endminipage\hfill
\minipage{0.50\textwidth} %
\includegraphics[width=8.2cm,height=5.6cm]{Omgaralfa5.pdf}\newline
(c) \label{Onodr2} \endminipage\hfill \minipage{0.50\textwidth} %
\includegraphics[width=8.0cm,height=5.6cm]{Omgaralfan.pdf}\newline
(d) \label{Omgar3} \endminipage\hfill
\caption{{\protect\footnotesize We plot $\Omega_{r}$ versus $r$ with PFDM parameter values (upper panel) $\alpha=-1$ and $\alpha=1$, and spin parameter (lower panel) $a=-0.7$ and $a=0.7$. We can see that $\Omega_{r}$ vanishes at some particular value of $r$, depending on the particular value of the PFDM parameter. Note that $\Omega_{r}$ reaches a particular peak value, then decreases with the increase of $r$. }}
\label{fig0}
\end{figure}
\begin{table}[tbp]
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{2}{|c|}{ } & \multicolumn{3}{c|}{ $\alpha=1$ (in $M$) } & \multicolumn{3}{c|}{ $\alpha=0.5$ (in $M$) } & \multicolumn{3}{c|}{ $\alpha=0$ (in $M$) }\\\hline
$a/M$ & $r$ (in $M$) & $\nu^{\alpha}_{\phi}$ (in Hz)& $\nu^{\alpha}_{\theta}$
(in Hz) & $\nu^{\alpha}_{\text{nod}}$ (in Hz)& $\nu^{\alpha}_{\phi}$ (in Hz) & $\nu^{\alpha}_{\theta}$ (in Hz)& $\nu^{\alpha}_{\text{nod}}$
(in Hz) & $\nu^{0}_{\phi}$ (in Hz)& $\nu^{0}_{\theta}$
(in Hz) & $\nu^{0}_{\text{nod}}$
(in Hz) \\ \hline
0.1 & 5.67 & 220 & 217 & 3 & 226 & 223 & 3 & 234 & 231 & 3 \\ \hline
0.2 & 5.45 & 232 & 226 & 6 & 238 & 231 & 7 & 246 & 239 & 7 \\ \hline
0.3 & 5.32 & 239 & 229 & 10 & 245 & 234 & 10 & 253 & 242 & 11\\ \hline
0.4 & 4.61 & 292 & 273 & 19 & 299 & 279 & 20 & 309 & 287 & 22 \\ \hline
0.5 & 4.30 & 320 & 293 & 27 & 333 & 302 & 31 & 344 & 310 & 34 \\ \hline
0.6 & 3.82 & 375 & 331 & 44 & 383 & 336 & 47 & 395 & 342 & 53 \\ \hline
0.7 & 3.45 & 427 & 363 & 64 & 435 & 366 & 69 & 448 & 371 & 77 \\ \hline
0.8& 2.85 & 544 & 428 & 116 & 553 & 428 & 125 & 567 & 429 & 138\\ \hline
0.9 & 2.32 & 693 & 491 & 202 & 702 & 483 & 219 & 718 & 472 & 246 \\ \hline
0.98 & 1.85 & 884 & 544 & 340 & 893 & 521 & 372 & 910 & 485 & 425 \\ \hline
0.99 & 1.45 & 1137 & 575 & 562 & 1146 & 520 & 626 & 1163 & 420 & 743 \\ \hline
0.9999 & 1.20 & 1350 & 593 & 757 & 1358 & 498 & 860 & 1375 & 276 & 1099\\ \hline
0.999999 & 1.05 & 1509 & 635 & 874 & 1517 & 507 & 1010 & 1533 & 89 & 1444 \\ \hline
1.0 & 1 & 1568 & 669 & 899 & 1575 & 532 & 1043 & 1591 & 0 & 1591\\ \hline
1.001 & 0.95 & 1629 & 724 & 905 & 1636 & 583 & 1053 & 1652 & 133 & 1519 \\ \hline
1.01 & 0.80 & 1825 & 1093 & 732 & 1830 & 968 & 862 & 1844 & 679 & 1165 \\ \hline
1.02 & 0.75 & 1888 & 1339 & 549 & 1954 & 1428 & 526 & 1967 & 1199 & 768 \\ \hline
1.04 & 0.65 & 2020 & 2025 & -5 & 2024 & 1941 & 83 & 2035 & 1753 & 282 \\ \hline
1.08 & 0.667 & 1944 & 2127 & -183 & 1948 & 2054 & -106 & 1959 & 1894 & 65 \\ \hline
1.2 & 0.8 & 1646 & 1863 & -217 & 1650 & 1808 & -158 & 1662 & 1696 & -34 \\ \hline
2 & 1.26 & 919 & 1619 & -700 & 923 & 1609 & -686 & 932 &1588 & -656 \\ \hline
3 & 3.20 & 353 & 453 & -100 & 357 & 453 & -96 & 365 & 453 & -88 \\ \hline
4 & 4.00 & 256 & 371 & -115 &260 & 373 &-113 & 265 & 375 & -110 \\ \hline
\end{tabular}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{2}{|c|}{ } & \multicolumn{3}{c|}{ $\alpha=-1$ (in $M$) } & \multicolumn{3}{c|}{ $\alpha=-0.5$ (in $M$) } & \multicolumn{3}{c|}{ $\alpha=-0.1$ (in $M$) }\\\hline
$a/M$ & $r$ (in $M$) & $\nu^{\alpha}_{\phi}$ (in Hz)& $\nu^{\alpha}_{\theta}$
(in Hz) & $\nu^{\alpha}_{\text{nod}}$ (in Hz) & $\nu^{\alpha}_{\phi}$ (in Hz) & $\nu^{\alpha}_{\theta}$ (in Hz)& $\nu^{\alpha}_{\text{nod}}$
(in Hz) & $\nu^{\alpha}_{\phi}$ (in Hz)& $\nu^{\alpha}_{\theta}$
(in Hz) & $\nu^{\alpha}_{\text{nod}}$
(in Hz) \\ \hline
0.1 & 5.67 & 247 & 243 & 4 & 242 & 238 & 4 & 236 & 233 & 3 \\ \hline
0.2 & 5.45 & 259 & 251 & 8 & 254 & 246 & 8 & 248 & 241 & 7 \\ \hline
0.3 & 5.32 & 267 & 254 & 13 & 261 & 249 & 12 & 255 & 244 & 11\\ \hline
0.4 & 4.61 & 325 & 299 & 26 & 318 & 294 & 24 & 312 & 289 & 23 \\ \hline
0.5 & 4.30 & 354 & 317 & 37 & 348 & 312 & 35 & 341 & 307 & 34 \\ \hline
0.6 & 3.82 & 412 & 352 & 60 & 406 & 349 & 57 & 398 & 344 & 54 \\ \hline
0.7 & 3.45 & 467 & 378 & 89 & 460 & 376 & 84 & 451 & 372 & 79 \\ \hline
0.8& 2.85 & 589 & 427 & 162 & 581 & 429 & 152 & 571 & 429 & 141\\ \hline
0.9 & 2.32 & 741 & 449 & 292 & 733 & 460 & 273 & 722 & 469 & 253 \\ \hline
0.98 & 1.85 & 935 & 410 & 525 & 926 & 443 & 483 & 925 & 474 & 441 \\ \hline
0.99 & 1.45 & 1188 & 113 & 1075 & 1180 & 281 & 899 & 1169 & 387 & 782 \\ \hline
0.9999 & 1.60 & 1070 & 315 & 762 & 1069 & 384 & 685 & 1058 & 455 & 613\\ \hline
0.999999 & 1.50 & 1147 & 224 & 923 & 1138 & 330 & 808 & 1127 & 415 & 712 \\ \hline
1.0 & 1.55 & 1111 & 277 & 834 & 1138 & 330 & 808 & 1089 & 432 & 660\\ \hline
1.001 & 1.95 & 879 & 423 & 455 & 870 &450 & 420 & 859 & 475 & 384 \\ \hline
1.01 & 1.80 & 954 & 398 & 555 & 945 & 436 & 509 & 934 & 472 & 462 \\ \hline
1.02 & 1.75 & 978 & 386 & 592 & 970 & 429 & 541 & 959 & 470 & 489 \\ \hline
1.04 & 1.65 & 1032 & 357 & 675 & 1023 & 413 & 610 & 1012 & 464 & 548 \\ \hline
1.08 & 1.667 & 1008 & 378 & 630 & 1000 & 430 & 570 & 980 & 478 & 512 \\ \hline
1.2 & 1.8 & 902 & 437 & 465 & 895 & 472 & 422 & 884 & 506 & 378 \\ \hline
2 & 1.26 & 944 & 1550 & -606 & 941 & 1565 & -624 & 935 & 1581 & -646 \\ \hline
3 & 3.20 & 376 & 449 & -73 & 372& 451 & -79 & 367 & 457 & -85 \\ \hline
4 & 4.00 & 274 & 377 & -103 &271 & 377 &-106 & 267 & 375 & -108 \\ \hline
\end{tabular}
\caption{We have considered an object with mass $M=10 {M}_\odot$ to
calculate the KF $\protect\nu^{\protect\alpha}_{\phi}$, the VEF $\protect\nu^{\protect\alpha}_{\theta}$, and the NPPF $\protect\nu^{\protect\alpha}_{\text{nod}}$, respectively. We have chosen the ISCO radius $r=r_{ISCO}$, which lies in the interval $M\leq r_{ISCO} \leq 6 M$.}
\end{table}
There are three characteristic frequencies attributed to a test particle orbiting the black hole: the KF $\Omega _{\phi }$, the VEF $\Omega _{\theta }$, and the radial epicyclic frequency (REF) $\Omega _{r}$. Taking into account the effect of PFDM, we have calculated the following expressions for the characteristic frequencies \cite{kepfreq}:
\begin{eqnarray}
\Omega _{\phi } &=&\pm \frac{\sqrt{M+\frac{\alpha }{2}\left( 1-\ln \left(
\frac{r}{|\alpha |}\right) \right) }}{r^{3/2}\pm a\sqrt{M+\frac{\alpha }{2}%
\left( 1-\ln \left( \frac{r}{|\alpha |}\right) \right) }}, \\
\Omega _{r} &=&\frac{\Omega _{\phi }}{r}\left[ r^{2}-6Mr-3a^{2}-\alpha
r\left\{ 1-3\ln \left( \frac{r}{|\alpha |}\right) \right\} \pm 8a\sqrt{rM+%
\frac{\alpha r}{2}\left\{ 1-\ln \left( \frac{r}{|\alpha |}\right) \right\} }-%
\frac{\alpha \left\{ a^{2}+ar+r^{2}\right\} }{\left\{ 2M+2\alpha \left(
1-\ln \left( \frac{r}{|\alpha |}\right) \right) \right\} }\right] ^{\frac{1}{%
2}}, \\
\Omega _{\theta } &=&\frac{\Omega _{\phi }}{r}\left[ r^{2}+3a^{2}\mp 4a\sqrt{%
rM+\frac{\alpha r}{2}\left\{ 1-\ln \left( \frac{r}{|\alpha |}\right)
\right\} }-\frac{a\alpha \left\{ a\mp 2\sqrt{rM+\frac{\alpha r}{2}\left\{
1-\ln \left( \frac{r}{|\alpha |}\right) \right\} }\right\} }{\left\{ M+\frac{%
\alpha }{2}\left( 1-\ln \left( \frac{r}{|\alpha |}\right) \right) \right\} }%
\right] ^{\frac{1}{2}}.
\end{eqnarray}
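As a quick sanity check (our own sketch in Python, not from the paper; geometric units $G=c=1$ and the prograde sign are assumed), one can verify numerically that $\Omega_\phi$ reduces to the Kerr value $\sqrt{M}/(r^{3/2}+a\sqrt{M})$ as $\alpha \to 0$:

```python
import math

def omega_phi(r, M=1.0, a=0.0, alpha=1e-12):
    """Keplerian frequency Omega_phi in geometric units (G = c = 1),
    prograde branch of the PFDM-corrected expression in the text."""
    # effective mass term: M + (alpha/2) * (1 - ln(r/|alpha|))
    m_eff = M + 0.5 * alpha * (1.0 - math.log(r / abs(alpha)))
    root = math.sqrt(m_eff)
    return root / (r ** 1.5 + a * root)

# Kerr limit: alpha -> 0 recovers sqrt(M)/(r^{3/2} + a*sqrt(M))
r, a = 6.0, 0.5
kerr = math.sqrt(1.0) / (r ** 1.5 + a * math.sqrt(1.0))
print(abs(omega_phi(r, a=a) - kerr) < 1e-10)  # True
```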
One can easily show that in the limiting case when $\alpha$ vanishes, the characteristic frequencies of the Kerr geometry are recovered \cite{NS,mottanew,bambi}. With these results in mind, we can extract further information by defining the following two quantities
\begin{equation}
\Omega_{\text{nod}}=\Omega _{\phi }-\Omega _{\theta },
\end{equation}
and
\begin{equation}
\Omega_{\text{per}}=\Omega _{\phi }-\Omega _{r }.
\end{equation}
Here $\Omega_{\text{nod}}$ measures the precession of the orbital plane and is usually known as the NPPF (or Lense-Thirring precession frequency), while $\Omega_{\text{per}}$ measures the precession of the orbit and is known as the periastron precession frequency. From FIGS. (9)-(11) we can see that in the black hole case the NPPF $\Omega_{\text{nod}}$ always decreases with $r$, so we can write the following condition
\begin{equation}
\frac{d\Omega_{\text{nod}}}{dr}<0.
\end{equation}
An interesting feature arises in the case of naked singularities: $\Omega_{\text{nod}}$ initially increases, reaches a particular peak value, and finally decreases with increasing $r$. Therefore, in the case of naked singularities there is a region where $\Omega_{\text{nod}}$ increases with $r$, namely
\begin{equation}
\frac{d\Omega_{\text{nod}}}{dr}>0.
\end{equation}
Finally, we point out that negative values of $\Omega_{\text{nod}}$ can be interpreted as a reversal of the precession direction.
We provide a detailed analysis of our results in Table I, where we highlight the observational aspects by calculating the impact of the PFDM parameter on the different frequencies: the KF $\nu^{\alpha}_{\phi}$, the VEF $\nu^{\alpha}_{\theta}$, and the NPPF $\nu^{\alpha}_{\text{nod}}$. Our results reveal that typical values of the PFDM parameter $\alpha$ significantly affect these frequencies. In particular, we find that with the increase of positive $\alpha$ all frequencies decrease, while with the decrease of negative $\alpha$ all frequencies increase.
Our results further indicate that the effects of PFDM grow stronger with increasing spin angular momentum parameter $a$. From Table I, we see that one can identify LF QPOs with $\nu _{\text{nod}}^{\alpha }$ for slowly rotating black holes, i.e., $a/M<0.5$. A significant difference between $\nu^{\alpha}_{\text{nod}}$ and $\nu^{0}_{\text{nod}}$ occurs when $a/M>0.5$; clearly, in this range, we can identify $\nu^{\alpha}_{\text{nod}}$ with HF QPOs.
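For orientation, the Hz values in Table I follow from the geometric-unit ($G=c=1$) frequencies through $\nu = \Omega\, c^3/(2\pi G M)$. A hedged check of our own, using the $\alpha = 0$ (Kerr) Keplerian column for $a/M = 0.1$ at $r = 5.67M$:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def geometric_to_hz(omega_geo, m_solar):
    """Convert Omega in units of 1/M (G = c = 1) to a frequency in Hz."""
    return omega_geo * c ** 3 / (2.0 * math.pi * G * m_solar * M_SUN)

# alpha = 0 (Kerr) Keplerian frequency at r = 5.67 M for a = 0.1 M, with M = 1
r, a = 5.67, 0.1
omega = 1.0 / (r ** 1.5 + a)         # prograde branch, sqrt(M) = 1
nu = geometric_to_hz(omega, 10.0)    # M = 10 M_sun, as in Table I
print(nu)  # about 237 Hz; Table I lists 234 Hz for this entry
```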
In the near future we plan to investigate the relativistic precession model to get constraints on $\alpha$ using the data of GRO J1655-40 and by following the analysis presented by Bambi \cite{bambi}.
\section{Conclusion}
In this paper, we have studied a rotating object in PFDM %
\eqref{LE} to differentiate a Kerr-like black hole from a naked singularity.
For a black hole we find the lower bound of the dark matter parameter $\alpha$, namely $-2\leq \alpha$, and obtain the critical value of the spin parameter $a_c$. For any fixed $\alpha$, the line element \eqref{LE} represents a black hole with two horizons if $a<a_c$, an extremal black hole with one horizon if $a=a_c$, and a naked singularity if $a>a_c$. It is seen that for $\alpha< 0$, $a_c$ has a maximum at $\alpha=\overline\alpha$, while for $\alpha>0$ it has a minimum at $\alpha=\tilde{\alpha}$. Further, for large values of $\alpha$, $a_c$ grows without bound, and thus a highly spinning black hole can form. We also study the horizon of the extremal black hole and find that for $\alpha<0$ the size of the extremal horizon $r_e$ decreases, whereas for $\alpha>0$, $r_e$ has a minimum at $\alpha=2/3$. Further, since $r_-\leq r_e \leq r_+$, we can conclude that for any fixed $a$ and $-2\leq \alpha < 2/3$, the size of the black hole horizons decreases with increasing $\alpha$, while for $2/3<\alpha$ it increases.\\
We also studied the spin precession frequency $\Omega_p$ of a test
gyroscope attached to a stationary timelike observer. For a timelike observer
we find the restricted domain of the angular velocity $\Omega$ of the
observer. From the precession frequency $\Omega_{p}$, by setting $\Omega=0$,
we obtained the LT precession frequency for a static observer, which
can exist only outside the ergosphere. We also find the geodetic precession
frequency, which is due to the spacetime curvature of the Schwarzschild black
hole in PFDM. It is seen that for $\alpha<0$ the
geodetic precession frequency increases, while for $\alpha>0$ it decreases.
Using the spin precession frequency criterion we differentiate a black hole from
a naked singularity. We parameterize the angular velocity of the observer
and study the spin precession frequency for different choices of the
parameter along different directions. If the precession frequency of a
gyroscope attached to at least one of two observers with different angular
velocities blows up as they approach the central object in PFDM along any direction, then the object is a black hole. If the
precession frequencies of all observers diverge as the observers
approach the center of the spacetime along at most one direction, then it is a
naked singularity.
We have summarized our results by computing the effect of PFDM on the KF, VEF, and NPPF given in Table I. We observe that the frequencies depend on the value of $a/M$, the radial distance $r$, and the PFDM parameter $\alpha$, yielding notable differences between the corresponding frequencies of black holes and naked singularities. We have shown that, with the increase of the positive PFDM parameter, all frequencies become smaller; conversely, with the decrease of the negative PFDM parameter, all frequencies become bigger. Following our results, we conclude that LF QPOs can be identified for $a/M<0.5$, while the most significant changes are observed in the interval $a/M>0.5$, whose frequencies can be identified with HF QPOs.
Given the fact that the accretion disk changes with time, say, when the accretion disk approaches the black hole/naked singularity, we need to study the evolution of QPO frequencies to distinguish black holes from naked singularities. In particular, for a given value of the PFDM parameter $\alpha$ and $a/M$, if the accretion disc approaches $r_{ISCO}$, we see that $\Omega_{\text{nod}}$ always increases, reaching its maximum value. Interestingly, in the case of naked singularities, if the accretion disc approaches $r_{ISCO}$, we find that $\Omega_{\text{nod}}$ first increases, then reaches its peak, and finally decreases. In fact, contrary to black holes, in the case of a naked singularity $\Omega_{\text{nod}}$ can be zero, as can be seen from FIG. (9). Finally, we anticipate that future experiments could produce constraints on the PFDM parameter $\alpha$. For example, Bhattacharyya has recently proposed a model of fast radio bursts from neutron stars plunging into black holes that implies the existence of the event horizon, LT effects, and the emission of gravitational waves from a black hole \cite{bhattacharyya}. In this context, it would certainly be interesting to explore the possible impact of the PFDM parameter $\alpha$ on the gravitational wave signatures.
\acknowledgments
This work is supported by National University of Modern
Languages, H-9, Islamabad, Pakistan (M. R). We would like to thank the referees for their valuable comments which helped to improve the manuscript.
\section{Introduction}
A bursting radio source, GCRT J1745-3009, was discovered at 0.33
GHz in a radio monitoring program of the Galactic
center region made on September 30, 2002 (Hyman et al. 2005a).
Five $\sim 10$ min bursts with peak flux of $\sim 1.67$ Jy were
detected at an apparently regular period of $\sim77$ min from
the source.
Activity (a single $\sim 0.5$ Jy burst) was detected again by the GMRT
at 330 MHz on 2003 September 28 (Hyman et al. 2005b).
The source appears to be transient because it was not active at
the 1998 September 25 and 26 epochs of VLA observation, and had
not been detected in some other epochs of observation in 2002 and
2003.
Observations indicate (Hyman et al. 2005b) that the burst detected
in 2003 is an isolated one, although additional undetected bursts
recurring with the 77 min period of the 2002 bursts cannot be
completely ruled out.
Assuming that the 2003 burst is an isolated one, Hyman et al.
(2005b) estimated crudely that the duty cycle of the transient
behavior is about 10\%.
Given that (1) the source's brightness temperature would exceed
$10^{12}$ kelvin if it is farther than 100 pc, and (2) the source's
observational properties are not directly compatible with those of
known coherent emitters like white dwarfs or pulsars,
Hyman et al. (2005a) concluded that it is not likely to be an
incoherent emitter but rather might be one of a new class of
coherent emitters.
Kulkarni \& Phinney (2005) argued that the source could be a
nulling radio pulsar, like PSR J1752+2359 which has quasi periodic
nulling behavior (Lewandowski et al. 2004).
It is pointed out (Turolla, Possenti \& Treves 2005) that the
phenomenon is compatible with what is
expected from the interaction of wind and magnetosphere of two
pulsars in a binary system.
This scenario predicts: (1) a pulsar should be detectable at frequencies higher
than 1 GHz; (2) the X-ray luminosity from the shock should be $10^{32}$ ergs s$^{-1}$, which is too low to be detectable by contemporary facilities.
The source could be a white dwarf
(Zhang \& Gil 2005), which may actually behave like a pulsar and
create the activity observed.
This scenario predicts that deep IR exposure with large telescope
may lead to the discovery of the counterpart of GCRT J1745-3009.
A conclusive understanding, however, has not been achieved yet,
and could only be accomplished through further observation.
An alternative effort is made in this paper to explain the
observational features of GCRT J1745-3009. We propose that the
source could simply be a spinning pulsar precessing with a period
of $\sim 77$ min.
The duration and period of the bursts can be explained with a
broad choice of parameters, as long as the precession angle is not
very small ($>15$ degrees).
It is worth noting that the wobble angle of the pulsar could be
typically of tens of degrees (Melatos 2000) if the free precession
period is close to the radiation-driven precession period.
Given that the brightness temperature could be as high as
$10^{28}-10^{30}$ K, a pulsar could reproduce the observed flux
even if it is as far as 10 kpc away.
The transient nature of the source would be understandable if the
pulsar is an extremely nulling radio pulsar
(Backer 1970; Ritchings 1976; Manchester).
Some of the discovered nulling pulsars could have a huge nulling
fraction. PSR 0826-34 is a case in point, whose nulling fraction
is $70\pm35$ percent (Biggs 1992).
PSR B1931+24 switches off for $\sim 90\%$ of the time, and it appears
quasi-periodically at $\sim 40$ days (Cordes et al. 2004; O'Brien
2005).
Such a high fraction of nulling might be consistent with the 10\%
duty-cycle estimated by Hyman et al. (2005b).
A similar idea was presented by Heyl \& Hernquist (2002) who
applied a precessing pulsar model to explain the 6 hours periodic
modulation of X-ray flux from 1E 161348-5055, a neutron star
candidate in the center of the supernova remnant RCW 103.
The model is introduced in \S2. Its application to the pulsar is
discussed in \S3. An extensive discussion on the population of
nulling and precessing pulsars as well as a comparison between our
model and other contemporary models are provided in \S4. The
results are summarized in \S5.
\section{The model}
A precessing pulsar scenario is shown in the observer's
rest frame in Fig. \ref{fig1}. The pulsar's
spin axis itself is rotating around a precession axis (which lies
along the direction of total angular momentum).
We denote the magnetic inclination angle as $\alpha$, the angle
between line of sight and the precession axis as $\beta$, and the
precession angle as $\gamma$.
One can also consider another frame, called the precessing
frame, which rotates about $\Omega_{\rm p}$ with the precession
period.
In this precessing frame, both $\Omega_{\rm p}$ and $\Omega_{\rm
s}$ axes are fixed, and the line of sight rotates about
$\Omega_{\rm p}$.
When the line of sight passes through the emission pattern (shaded
region in Fig. 1), the observer detects burst activity.
The points ``S'' and ``T'' represent the beginning and end of
the observed burst activity.
$\delta$ is the angle between ``S'' and ``T'' along the trajectory
of the line of sight.
Let's consider the parameter space of $\alpha$, $\beta$, and
$\gamma$, in which the observed flux variation can be successfully
reproduced.
The pulsar's radio emissivity is assumed to be $f(\theta)=f_0
e^{-\theta/\theta_p}$, where $\theta$ is the angular distance from
the magnetic axis $\mu$, and $\theta_p$ is a parameter
characterizing the width of the emission beam.
The observation was sampled every 30 seconds in the original
observation of Hyman et al. (2005a).
This sampling time is much shorter than the precession period (77 min);
if it is also longer than the spin period of the pulsar,
then the 30 s sampled flux, $F_{30}$, can be regarded as a function of $\phi$
(i.e., the angle between the line of sight and the pulsar's spin axis).
To simplify the problem, $F_{30}(\phi)$ is assumed to be
proportional to the maximum flux possible in a spin period,
$f(\phi-\alpha)$,\footnote{%
Note that $\phi > \alpha$ if one observes single-peak bursts.
Otherwise, an observer should detect double-peak bursts if $\phi <
\alpha$.
}%
\begin{equation}
F_{30}(\phi_1)/F_{30}(\phi_2)\sim
f(\phi_1-\alpha)/f(\phi_2-\alpha).
\end{equation}
Given that the peak flux observed is $1.67$ Jy, and the undetected
limit is 15 mJy, the ratio of the minimum to maximum fluxes should
thus be $F_{30}(\phi_{\rm max})/F_{30}(\phi_{\rm min})\sim $(15
mJy)/(1.67 Jy)$\simeq 0.01$, where $\phi_{\rm max}$ and $\phi_{\rm
min}$ are the maximum and minimum values of $\phi$ during bursts
(i.e., $F_{30}>15$ mJy), respectively.
Therefore, $f(\phi_{\rm
max}-\alpha)/f(\phi_{\rm min}-\alpha)=\exp[(\phi_{\rm min}-
\phi_{\rm max})/\theta_p]\sim0.01$.
We have then $\phi_{\rm
max}-\phi_{\rm min}=4.7\theta_p$, which is chosen to be $\sim 0.1$
rad $=6^{\rm o}$ since the typical beam width of a normal pulsar
is $\sim 10^{\rm o}$ (Tauris \& Manchester 1998).
The consequence of choosing a larger $\theta_p$ will be discussed
later.
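The factor 4.7 is simply the natural log of the dynamic range between the detected peak and the detection threshold; a one-line check (our own restatement):

```python
import math

peak_jy, threshold_jy = 1.67, 0.015  # 1.67 Jy peak, 15 mJy detection limit
dynamic_range = math.log(peak_jy / threshold_jy)
print(dynamic_range)  # ~4.71, i.e. phi_max - phi_min = 4.7 * theta_p
```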
The angle $\delta$ should be set to $\delta=2\pi(10/77)$ in order
to fit the observed ratio of the burst duration to the precession
period.
One has $\phi_{\rm min}=\beta-\gamma$ and $\phi_{\rm
max}=\arccos(\cos\beta\cos\gamma+\cos(\delta/2)\sin\beta\sin\gamma)$,
according to spherical geometry. Therefore we have,
\begin{equation}
\begin{array}{ll}
\phi_{\rm max}-\phi_{\rm min} &
=\arccos(\cos\beta\cos\gamma+\cos(\delta/2)
\sin\beta\sin\gamma) \\
&+\gamma-\beta =4.7\theta_p.\\
\end{array}
\label{thetap}
\end{equation}
The $\gamma$ value can be found from Eq.(\ref{thetap}) for given
$\alpha$, $\beta$, $\theta_p$.
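Eq. (\ref{thetap}) is transcendental in $\gamma$, but for fixed $\beta$, $\delta$, and $\theta_p$ it can be solved by a one-dimensional bisection. A sketch (not the authors' code) that recovers the example quoted later in the text, $\beta \simeq 44^{\rm o}$, $\gamma \simeq 30^{\rm o}$ for $4.7\theta_p = 0.1$ rad:

```python
import math

DELTA = 2.0 * math.pi * (10.0 / 77.0)   # burst duration / precession period
TARGET = 0.1                            # 4.7 * theta_p, in radians

def mismatch(gamma, beta):
    """Left-hand side of Eq. (2) minus the target 4.7*theta_p."""
    phi_max = math.acos(math.cos(beta) * math.cos(gamma)
                        + math.cos(DELTA / 2.0) * math.sin(beta) * math.sin(gamma))
    return phi_max + gamma - beta - TARGET

def solve_gamma(beta, lo=1e-4, hi=None):
    """Bisection for the precession angle gamma at a given beta."""
    hi = hi if hi is not None else beta   # need gamma <= beta (phi_min >= 0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mismatch(mid, beta) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = math.radians(44.0)
gamma = math.degrees(solve_gamma(beta))
print(gamma)  # close to 30 degrees, as in the worked example
```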
The calculated result is shown in Fig. \ref{fig2}. No $\gamma$
solution can be found for $\alpha$ and $\beta$ in the shaded
region of Fig. \ref{fig2}.
The vertical solid lines in Fig. \ref{fig2} are the contours of the
resulting $\gamma$ for given $\alpha$ and $\beta$, choosing
$\theta_p=0.1/4.7$ rad.
With the assumption that the pulsar's brightness temperature is
$10^{30}$ K, contours (the dashed lines in Fig. 2) of pulsar
distance can be calculated, provided that the 30s-sampled burst
peak flux is 1.67 Jy.
The distance is computed precisely by simulating the pulsar
emission and integrating the flux over 30 s numerically.
The smallest precessing angle $\gamma$ with which a pulsar can
reproduce the observed bursts is found to be $\sim 16$ degrees in
this calculation, with an uncertainty of $\sim 1$ degree.
Note that the above calculation is based on an assumed structure
of pulsar beam.
Without this assumption, one can also crudely estimate the
smallest possible precessing angle (14 degree if $4.7\theta_p=0.1$
rad and 7 degree for $4.7\theta_p=0.05$ rad) by letting
$\gamma=\beta$ in Eq.(\ref{thetap}).
Thus, we conclude that in our model the pulsar should have a
precessing angle larger than $\sim 15$ degrees if its beam width is
larger than $10$ degrees.
An example of simulated burst profiles is shown in Fig. 3, where
the parameters are $\alpha\simeq 10^{\rm o}$, $\beta\simeq 44^{\rm
o}$, $\gamma\simeq 30^{\rm o}$, and pulsar distance $\simeq 24$
kpc.
\section{The pulsar}
We propose that the enigmatic source, GCRT J1745-3009, could be a
precessing radio pulsar. A radio burst should be detected when the
pulsar's emission beam precesses through the line of sight.
In the model, the distance to the source could be even larger than
10 kpc if the brightness temperature of the pulsar is $\sim
10^{30}$ K.
We find that the precession angle, $\gamma$, must be rather large
($> \sim 15^{\rm o}$) in order to reproduce the general observed
behavior.
Higher values of the beam radius ($4.7 \theta_p>0.1$) have also been
considered. We find that, as the beam radius increases, the lower
limit of precession angle and the upper limit of the source
distance also increase.
GCRT J1745-3009 was discovered at 0.33 GHz in September 2002, but
was not detected at 1.4 GHz, with a threshold of 35 mJy, in
January 2003 (Hyman et al. 2005a).
Xiang Liu and Huaguang Song also tried to observe the source at 5
GHz with the 25 m radio telescope of the Urumqi station in
Xinjiang, China. They did not detect the source (upper limit of
50 mJy) in observations from 21:20 to 23:55 UT, March 20, 2005,
with an integration time of 30 s.
If the source's bursting behavior at 0.33 GHz persisted to this
observation, then its spectral index $\alpha$ should be smaller
than $-1.29$.
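The quoted limit follows from comparing the 0.33 GHz peak flux with the 5 GHz non-detection under an assumed power law $S_\nu \propto \nu^\alpha$ (our own restatement of the arithmetic):

```python
import math

s_low, nu_low = 1.67, 0.33     # Jy at 0.33 GHz (2002 bursts)
s_high, nu_high = 0.050, 5.0   # 50 mJy upper limit at 5 GHz (Urumqi)
alpha_limit = math.log(s_high / s_low) / math.log(nu_high / nu_low)
print(alpha_limit)  # ~ -1.29; the true index must be smaller (steeper)
```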
This value is somewhat smaller than that of the Galactic center
radio transients ($\alpha=-1.2$).
The estimated index ($\alpha < -1.29$) of the source is in
agreement with the typical pulsar spectrum ($\alpha = -1.75$)
obtained using statistics of 285 radio pulsars' spectral indices
between 400 MHz and 1400 MHz (Seiradakis \& Wielebinski 2004).
For a conventional neutron star, a 15-degree precession angle will
induce a significant Magnus force and unpin the superfluid inside the
neutron star's crust (Link \& Cutler 2002).
In this case, the relative deformation of the neutron star crust
is $\epsilon\sim P_s/P_p$, where $P_s$ is the spin period, and
$P_p$ is the 77 min precession period.
Such a deformation would be too large for a conventional neutron
star unless the star's spin period is $\sim1$ ms, because
Owen (2005) derived that the maximum elastic deformation,
$\epsilon_{\rm max}$, of a conventional neutron star induced by
shear stresses (Ushomirsky et al. 2000) is only $6.0\times10^{-7}$
(assuming fiducial values of mass and radius, and a breaking strain
of $10^{-2}$).
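With $\epsilon \sim P_s/P_p$ and Owen's $\epsilon_{\rm max} = 6.0\times10^{-7}$, the 77 min precession period caps the spin period at a few milliseconds (a back-of-the-envelope check of our own):

```python
eps_max = 6.0e-7                # maximum elastic deformation (Owen 2005)
p_prec = 77.0 * 60.0            # 77 min precession period, in seconds
p_spin_max = eps_max * p_prec   # from epsilon ~ P_s / P_p
print(p_spin_max)  # ~2.8e-3 s: only a millisecond pulsar qualifies
```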
A neutron star with a period of 1 ms and $\epsilon\sim 10^{-7}$
would emit gravitational waves that are potentially observable by
long-baseline interferometers like LIGO (Cutler \& Jones 2001;
Melatos \& Payne 2005; Payne \& Melatos 2005).
Ostriker \& Gunn (1969) and Heyl \& Hernquist (2002) derived that
the maximum deformation of a conventional neutron star induced by the
magnetic field is $\epsilon\simeq
4\times10^{-6}(3<B^2_{p,15}>-<B^2_{\phi,15}>)$, where $B_{15}$ is
the magnetic field in units of $10^{15}$ G.
This means that if the pulsar has a magnetar-like magnetic field,
then its period can be as large as tens of milliseconds.
It is pointed out by Jones \& Andersson (2002) that the upper limit of the
precession angle for a neutron star crust is $\gamma_{\rm max}\sim
0.45(100~{\rm Hz}/f)^2 (u_{\rm break}/10^{-3})$, where $u_{\rm
break}$ is the breaking strain that the solid crust can withstand
prior to fracture.
This means that the value of $u_{\rm break}$ of this pulsar should
be at least $10^{-2}$, which is consistent with the value chosen
by Owen (2005) to derive the maximum elastic deformation from shear stresses.
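Plugging numbers into the Jones \& Andersson bound shows the breaking strain required (a rough check under our own assumptions: the model's minimum precession angle $\gamma \simeq 15^{\rm o}$ and the millisecond spin period implied by the deformation argument):

```python
import math

# gamma_max ~ 0.45 * (100 Hz / f)^2 * (u_break / 1e-3)   (Jones & Andersson 2002)
gamma_req = math.radians(15.0)        # minimum precession angle from Sec. 2
p_spin = 6.0e-7 * 77.0 * 60.0         # ~2.8 ms, from eps ~ P_s/P_p at eps_max
f_spin = 1.0 / p_spin                 # ~360 Hz spin frequency
u_break_req = (gamma_req / 0.45) * (f_spin / 100.0) ** 2 * 1.0e-3
print(u_break_req)  # ~7e-3, i.e. a breaking strain of order 1e-2 is needed
```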
In conclusion, a precessing normal neutron star may reproduce the
observational features only if it is a millisecond pulsar with a
$\sim10^{-2}$ breaking strain.
It has been found that the precession of
normal neutron stars may be damped quickly (on a timescale of
$10^{2\sim 4}$ precession periods) via various coupling mechanisms
between the solid crust and the fluid core (Shaham 1977; Levin \& D'Angelo 2004).
If the fast-rotating neutron star we are considering here were
also to damp that fast ($10^{6-8}$ s), then the dissipated energy
(about $\sin\gamma\, I_{\rm crust}\omega_s^2\sim10^{50}$ erg) would be too
large to have gone unseen in the X-ray band (Hyman et al. 2005a).
Therefore, if the bursting activity was produced by a precessing
millisecond pulsar, then the pulsar should still be precessing and
possibly detectable by future observations.
If its existence is confirmed by future observations, then
the damping timescale of large-amplitude precession in millisecond
pulsars should be reconsidered.
Alternatively, it is not necessary for the pulsar to rotate very
fast if the pulsar is a solid quark star (Xu 2003, Zhou et al.
2004), because a solid quark star could have a larger elastic
deformation, $\epsilon_{\rm max}\sim 10^{-4}$ (Owen 2005).
Supposing there is no dissipation mechanism other than
gravitational-wave radiation, the typical damping timescale of
precession is $\tau_{\theta}^{\rm rigid}=1.8\times10^6{\rm
yr}(\epsilon/10^{-7})^{-2}(P/0.001\rm s)^4(I/10^{45} \rm g\cdot
cm^2)^{-1} \sim 10^{6-12}\rm yr$ (Bertotti \& Anile 1973; Cutler
\& Jones 2001).
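The scaling is steep in both $\epsilon$ and $P$; evaluating the quoted formula (a sketch of our own, $\tau$ in years):

```python
def tau_damp_yr(eps, p_s, inertia=1.0e45):
    """Gravitational-wave damping timescale of precession, in years, per the
    scaling tau = 1.8e6 yr (eps/1e-7)^-2 (P/1 ms)^4 (I/1e45 g cm^2)^-1."""
    return (1.8e6 * (eps / 1.0e-7) ** -2
            * (p_s / 1.0e-3) ** 4 * (inertia / 1.0e45) ** -1)

print(tau_damp_yr(1.0e-7, 1.0e-3))  # 1.8e6 yr at the fiducial values
print(tau_damp_yr(1.0e-7, 1.0e-2))  # 1.8e10 yr for a 10 ms spin period
```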
Therefore, the bursting activity should retain approximately the same
period and duration, provided that the source is a solid quark star.
\section{Discussions}
The source has been observed in activity only twice. The first
time was from 2002 September 30 to October 1, when five 10-min
duration bursts were detected with a period of 77 minutes (Hyman et
al. 2005a).
The second detection was on 2003 September 28, when only one burst
was detected, in its decay phase (Hyman et al. 2005b).
The source was likely in a quiescent state during other observation
epochs, such as the 1998 September 25 and 26 epochs (Hyman et al.
2005a; 2005b). The total observing time for GCRT J1745-3009
from 1989 to 2005 is only 70 hours.
According to this sparse sampling, Hyman et al. (2005b) made their
first crude estimation on the duty-cycle of the source activity
(i.e., $\sim 10\%$).
We propose GCRT J1745-3009 to be a precessing nulling radio pulsar
for the following reasons.
On the one hand, as we have demonstrated in \S2, a precessing
pulsar with a set of slightly constrained parameters could act
like a bursting radio source if the time resolution of the
observation is not high enough to resolve the pulsar's spin
period. The intriguing source's period and duration, intensity and
distance, as well as the current limitation on its spectra could
be understood in this picture (\S2). The transient nature of the
source could be accounted for if the pulsar is an extremely
nulling pulsar.
Additionally, there is a possible link between the source and a
supernova remnant, since the image shows that the
source is only $10'$ away from the center of the shell-type SNR
G359.1-0.5 (Hyman et al. 2005a; 2005b). The transverse velocity of the
source inferred from the supernova's age is $\sim 225 ~\rm
km/s$, consistent with the typical kick velocities of
neutron stars.
This latter observation supports the idea that GCRT J1745-3009 is
related to a neutron star.
On the other hand, the probability that such a pulsar exists would
not be too low.
Precession is rare in pulsars: only a few pulsars
show tentative evidence for it (Lyne et al. 1998;
Cadez et al. 1997; Jones \& Andersson 2001; Heyl \& Hernquist
2002), namely the Crab pulsar, the Vela pulsar, PSR B1642-03, PSR B1828-11,
the remnant of SN 1987A, Her X-1, and 1E 161348-5055.
Extreme nulling with a $\sim 10\%$ duty cycle is also not
common among known pulsars.
Within the old data (Biggs 1992), we can find only two pulsars
that show extreme nulling (PSR 0826-34 and PSR
1944+17).
PSR B1931+24 is suggested to be in a nulling state in about 90\%
of time (Cordes et al. 2004).
Ali (2004) discovered extreme nulling phenomena (nulling
fraction $\sim 70\% - 95\%$) in five pulsars (PSR J1502-5653,
PSR J1633-5102, PSR J1853-0505, PSR J1106-5911, and PSR J1738-2335)
and 25 more candidates by analyzing Parkes Multibeam survey data.
Accordingly, one could estimate the probability of a pulsar being
precessing (or extremely nulling) to be $7/2000\simeq 0.0035$, since the
total number of discovered pulsars is $\sim 2000$.
Therefore, there should be one precessing and extremely nulling
pulsar in every $10^5$ pulsars if the two phenomena are completely
independent ($0.0035^2\sim 10^{-5}$).
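The population estimate is simple arithmetic (our own restatement):

```python
p_single = 7.0 / 2000.0   # fraction of known pulsars that precess (or null extremely)
p_both = p_single ** 2    # if the two phenomena are independent
print(p_single, p_both)   # 0.0035 and ~1.2e-5: one in every ~1e5 pulsars
```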
However, the above probability might have been underestimated,
because of the following two arguments.
(1). Precessing pulsars and nulling pulsars are more difficult to
detect than ordinary pulsars. Long-term and precise timing is
necessary to confirm precession, and special searching methods
must be applied to discover an extremely nulling pulsar.
This selection effect should thus significantly reduce the
observed percentage of such pulsars.
(2). The source's $\sim 10\%$ duty-cycle is a rough estimate, owing
to the sparse sampling (Hyman et al. 2005b), based on the
assumption that the burst in the 2003 September 28 observation is
isolated.
But in the second activity, additional undetected bursts other
than the one detected still cannot be completely ruled out.
It is therefore possible that the nulling fraction is
smaller (even much smaller) than $\sim 90\%$.
Therefore, it is not unreasonable that a radio pulsar showing both
precession and extreme nulling has now been detected.
Is our model less likely than the others presented (models of a double
neutron star system and of a pulsar-like white dwarf)?
Note that the double neutron star model also needs one of the
neutron stars to precess in order to account for the transient
nature. The geodesic precession in the model predicts a 3-year
period of transient behavior, which was not confirmed by the
re-detection of the source in 2003 \cite{hy05b}.
Furthermore, the double neutron star model requires (1) an orbital
eccentricity of $\sim0.3-0.6$ in order to change the distance
between the stars significantly; and (2) the period of one of the
neutron stars to be close to 0.3 s so that the shock distance from
it can be close to its light-cylinder radius, in order to trigger
the ON/OFF switch of the shock emission (Turolla, Possenti \&
Treves 2005).
This would reduce significantly the population of such double
neutron star systems.
Our model, in contrast, allows the period, the inclination angle (i.e.,
$\alpha$), and the angle of the line of sight (i.e., $\beta$) to vary over
very large domains.
It would be hasty to conclude that the double neutron star model
is more likely than ours.
The pulsar-like white dwarf model presented by Zhang \& Gil (2005)
is interesting.
However, we have never before seen any evidence for the activity
of a pulsar-like white dwarf in the large population of white
dwarfs observed. The peculiarity, origin, and population of
pulsar-like white dwarfs need further investigation.
Future observations may uncover the nature of the source.
Predictions for confirming or falsifying our model are provided
below.
It is predicted that a normal or millisecond pulsar should be
detected if the bursting activity is observed with much higher
timing resolution.
Detecting such a pulsar may be somewhat difficult given the small
duty-cycle of the source and the low frequency of the burst activity.
It has been argued that, in the direction of the Galactic center,
scattering would prevent the detection of pulsed radio signals at a
frequency of 330 MHz if the distance of the pulsar is in the range
of ($6\sim 12$) kpc (Turolla, Possenti \& Treves 2005 and
references therein).
However, it is still possible that pulsed signals could be observed,
for the following reasons.
(1) The distance of the source could be $<6$ kpc in our model, so
the scattering effect may not be strong enough to smear the pulses.
(2) The pulsar might be detected by a gamma-ray detector if it has
strong magnetospheric activity.
(3) Pulsed X-ray emission from the magnetosphere (due to
magnetospheric activity) and/or the surface (due to polar cap
heating) could be bright enough to be detected by future instruments
with larger collecting areas ($10^{-3}\dot{E}_{\rm rot}$ of a pulsar
with a 10 ms period and a $10^{12}$ G surface magnetic field gives an
unabsorbed X-ray flux of $0.2$ mcrab at a distance of 8 kpc).
In our model, the bursts induced by precession should rise at
almost the same time at different frequencies if the radio beam is
nearly frequency-independent.
One would then observe the bursting activity beginning almost
simultaneously in different channels once the DM delay is accounted
for.
The single-pulse searching technique developed by Cordes et al.
(2005) is also a good method for checking this prediction;
this new method is expected to find radio transients (like GCRT
J1745-3009) and a significant number of pulsars that are not easily
identifiable through the period-searching technique (Cordes et al.
2005).
Finally, if the source is a precessing pulsar, its bursts should
be {\em statistically} symmetric, since the emissivity of radio
pulsars is generally variable.
If future observations confirm the asymmetric fitting of the burst
profile by Hyman et al. (2005a) and statistically rule out the
possibility of an average symmetric profile, then our model would be
falsified.
If pulsed signals are detected by future observations, one could
distinguish our model from that of Turolla, Possenti \& Treves
(2005), because a precessing pulsar behaves differently from a
pulsar in a binary system in many respects.
Our model predicts that: (1) the frequency shift induced by
precession should be $\Delta\nu/\nu\sim P_s/P_p\sim 10^{-4}$ if
$P_s\sim 0.1$ s (while the shift due to orbital motion in a binary
is $\Delta \nu/\nu\sim 10^{-3}$); (2) the pulse width of the pulsar
should vary as the line of sight moves into and out of the pulsar's
beam; and (3) the timing residual of the pulsar should vary over the
precession period, with an amplitude on the scale of the neutron star
radius\footnote{%
Timing residuals and the neutron star radius have the same dimension
if one sets $c=1$.
} %
($<10$ km$/c$, where $c$ is the speed of light), which is much smaller
than the timing residual induced by orbital motion ($10^5$
km$/c$).
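The two amplitudes quoted above are easy to express in time units. The
following is a back-of-the-envelope conversion only (our arithmetic, not a
fit to any data):

```python
C_KM_S = 299_792.458  # speed of light in km/s

# Precession: residual amplitude on the scale of the stellar radius (< 10 km).
precession_residual_s = 10 / C_KM_S
# Orbital motion in a binary: amplitude on the scale of the orbit (~1e5 km).
orbital_residual_s = 1e5 / C_KM_S

print(f"precession: {precession_residual_s * 1e6:.0f} microseconds")  # ~33
print(f"orbital:    {orbital_residual_s:.2f} seconds")                # ~0.33
```

The precession signature is thus roughly four orders of magnitude smaller
than the orbital one, as stated.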
A fit to the observed timing data could distinguish between these
two models.
In summary, it will not be difficult to falsify our model once more
observations are taken in the future.
\section{Summary}
It is shown in this paper that the observed features of GCRT
J1745-3009 can be explained by a precessing nulling radio pulsar
with a precession angle larger than 15 degrees.
No observation known hitherto allows one to rule out the model
presented here or the others (e.g., wind-magnetosphere interaction in
a neutron star binary, a pulsar-like white dwarf).
We also provide theoretical predictions of the model and possible
ways to falsify our idea, which could be tested by future
observations.
The discovery of a precessing pulsar with a large precession angle
would be interesting, as it could provide evidence for a solid quark
star if the pulsar spins at a period of $\ga 10$ ms. This would
certainly be very helpful for understanding the nature of matter at
supranuclear density.
{\em Acknowledgments}:
The authors thank Dr. Xiang Liu and Mr. Huagang Song for taking
the observation in Nanshan. We are indebted to Prof. Joel Weisberg
for his stimulating comments and suggestions, and for improving
the language. The helpful suggestions from an anonymous referee
are sincerely acknowledged. This work is supported by NSFC
(10273001, 10573002), the Special Funds for Major State Basic
Research Projects of China (G2000077602), and by the Key Grant
Project of Chinese Ministry of Education (305001).
\section{Introduction}
The \emph{Sid Meier's Civilization}\footnote{All products, company names, brand names, trademarks, and images are properties of their respective owners. The images are used here under Fair Use for the educational purpose of illustrating mathematical theorems.} series is a collection of turn-based strategy video games, with a focus on building and expanding an empire through technological, social, and diplomatic development.
The scale and scope of these games involve intricate rules and mechanics, along with multiple victory conditions, all of which make them interesting from a computational point of view.
In this paper we focus on leveraging the mechanics inherent to these games to build universal Turing machines. Universal Turing machines, or UTMs, are mathematical abstractions used in the rigorous study of mechanical computation. By definition, a UTM is able to execute any program that can be run on a computer, and is effectively equivalent to a computer with unbounded memory.
We introduce explicit constructions of UTMs for three installments of the \emph{Civilization} series: \emph{Sid Meier's Civilization: Beyond Earth} (Civ:BE from now on), \emph{Sid Meier's Civilization V} (Civ:V), and \emph{Sid Meier's Civilization VI} (Civ:VI). Our only strong constraint is that the map size and turn limits must be infinite, which is in line with the requirements of a UTM.
For the constructions in Civ:BE and Civ:V, we design UTMs that are relatively easy to visualize, although "large"--that is, the transition function has many instructions. We also describe, but do not prove, smaller constructions. Finally, we note that an immediate consequence of the existence of an internal game state representing a UTM is the undecidability of said games under the stated constraints. We provide constructions of each UTM in their respective games, and conclude by providing an example implementation and execution of the three-state Busy Beaver game \cite{BusyBeaver} in the Turing machine built for Civ:BE.
\section{Background}\label{sec:background}
\subsection{Turing machines and Turing completeness}
A Turing machine can be physically visualized as an automaton that executes instructions as dictated by a function $\delta$, and that involve reading or writing to a tape of infinite length via a head. Following the definition from Hopcroft et al. \cite{HopcroftUllman}, they are defined as a 7-tuple $\langle Q, \Gamma, \Sigma, \delta, b, q_0, F \rangle$, where:
\begin{itemize}
\item $Q$ is a set of possible \emph{states} that can be taken internally by the Turing machine. Namely, $q_0 \in Q$ is the initial state, and $F \subseteq Q$ is the set of final (or accepting) states.
\item $\Gamma$ is the set of tape \emph{symbols} on which the Turing machine can perform read/write operations. In particular, $b \in \Gamma$ is the blank symbol, and $\Sigma \subseteq \Gamma$, $b \not\in \Sigma$ is the set of symbols which may be initially placed in the tape.
\item $\delta : (Q \setminus F) \times \Gamma \nrightarrow Q \times \Gamma \times \{L, R\}$ is the \emph{transition function}. It is a partial function that takes the current (non-final) state of the machine, along with the symbol currently read from the tape; it determines the next state to be taken by the Turing machine, the symbol to write, and whether to move the head left ($L$) or right ($R$) along the tape. Note that not moving the head can be trivially encoded in this definition.
\end{itemize}
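The 7-tuple definition above can be made concrete with a small simulator.
The following Python sketch (ours, not part of any formal construction in
this paper) encodes $\delta$ as a dictionary and emulates the unbounded tape
with a default-valued mapping; the helper name `run_tm` and its signature
are our own illustrative choices.

```python
from collections import defaultdict

def run_tm(delta, q0, final, blank, tape=None, max_steps=10_000):
    """Simulate a Turing machine <Q, Gamma, Sigma, delta, b, q0, F>.

    delta: dict mapping (state, symbol) -> (next_state, write_symbol, move),
           with move in {"L", "R"} -- a partial transition function.
    Returns (halted, tape_contents, steps_taken).
    """
    tape = defaultdict(lambda: blank, tape or {})  # unbounded tape
    state, pos = q0, 0
    for step in range(max_steps):
        if state in final:                     # reached a final state
            return True, dict(tape), step
        key = (state, tape[pos])
        if key not in delta:                   # undefined transition: hang
            return False, dict(tape), step
        state, tape[pos], move = delta[key]
        pos += 1 if move == "R" else -1
    return False, dict(tape), max_steps        # step budget exhausted
```

For instance, the one-instruction machine with $\delta(q_0, 0) = (q_f, 1, R)$
writes a single 1 and halts after one step.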
Turing machines are a mathematical model, and are limited in practice due to the infinite length requirement on the tape. On the other hand, the expressive power behind the theory of Turing machines allows for the rigorous study of computational processes.
Concretely, since Turing machines are finite objects, it is possible to encode a Turing machine $M_1$ and use it as an input to a separate machine $M_2$, so that $M_2$ simulates $M_1$. A $(m, n)$-\emph{universal Turing machine}, or $(m, n)$-UTM, is a Turing machine that is able to simulate any other Turing machine, and such that $\size{Q} = m, \size{\Gamma} = n$. The Church-Turing Thesis states that such a construct models precisely the act of computation--concretely, the set of functions that can be computed by a UTM is precisely the set of computable functions \cite{RogersComputability}.
A collection of rules and objects under which a simulation of a UTM is possible is said to be \emph{Turing complete}. Turing complete models of computation are varied, from mathematical models such as the $\lambda$-calculus and the theory of $\mu$-recursive functions; to other rule-based systems like Conway's Game of Life \cite{ConwaysGOL} and the card game \emph{Magic: The Gathering} \cite{Churchill}; and even video games and other software such as \emph{Minesweeper} \cite{Minesweeper} and Microsoft PowerPoint \cite{PowerPoint}.
\subsection{General Mechanics of \emph{Sid Meier's: Civilization}}
All of the \emph{Sid Meier's: Civilization} installments are turn-based, alternating, multiplayer games. The players are in charge of an empire,\footnote{In the case of Civ:BE, an extrasolar colony.} and good management of resources and relationships with other players is key to achieving one of the multiple possible victory conditions available. These victories (e.g., Diplomatic, Militaristic, Scientific) and their specific mechanics are of interest when analyzing the computational complexity of the game, but not when designing a Turing machine.
In this paper we limit ourselves to describing the rules governing the actions of units on the map that are common to the games we will be analyzing. Since the naming conventions for the components of the games vary across them--even though these components may serve the same purpose--we also introduce a unified notation.
The games that we will be discussing (Civ:BE, Civ:V, and Civ:VI) are played on a finite map divided in hexagonal tiles, or \emph{hexes}. Hexes are composed of different geographical types (e.g., oceans, deserts, plains), and may contain certain features (e.g., forests, oil) dependent on the initial conditions of the game.
All tiles have an inherent \emph{resource} yield, which is used by the player to maintain the units and buildings that they own, and to acquire more advanced \emph{technologies} and \emph{social policies}.
Some examples of these resources are Food, Production, Culture, and Science.
One or more player-controlled units known as \emph{Workers} are usually tasked to build \emph{improvements} on a tile, which alters their yields.
As an example, technologies are acquired with Science, and Workers can build improvements on tiles (e.g. an Academy) to increase their Science yield.
With some notable exceptions, described in the following section, Workers may only build improvements in tiles owned by a \emph{City}. A City can be seen as a one-tile special improvement, and has three main duties: to produce ("train") units, to control and expand the boundaries of the owner's frontiers, and to produce \emph{Citizens} to work the tiles and their improvements. Citizens may only work the tiles owned by the City, but players are able to micromanage the placement of Citizens to alter their resource yield.
It is important to note that on every turn, the player executes all or some of the actions available to them, such as building improvements, researching technologies and social policies, and focusing on the overall management of the empire.
On the other hand, the specific types and mechanics around Cities, resources, features, available improvements, technologies and social policies vary from game to game. They will be discussed in detail in the next section, as they will be the key for building our UTMs. All of the UTMs are built inside a running game session, and leverage the concepts described here.
\section{Civ:BE, Civ:V, and Civ:VI are Turing Complete}\label{sec:turinguniv}
For this section, the following two assumptions hold:
\begin{assumption}\label{assn:assumption1}
The number of turns in Civ:BE, Civ:V, and Civ:VI is infinite, and there is only one player in the game.
\end{assumption}
\begin{assumption}\label{assn:assumption2}
The map extends infinitely in either direction.
\end{assumption}
The rationale behind \assnref{assumption1} is related to the fact that another player may interfere with the computation by--say--attacking the Workers, or reaching any of the victory conditions before the Turing machine terminates. Likewise, \assnref{assumption2} is due to the fact that we intend to utilize the map as the tape for the UTM. We will discuss the feasibility of these assumptions in \secref{conclusion}. For now, it suffices to note that \assnref{assumption1} is easily achievable within the game itself; and \assnref{assumption2} is a requirement present in all UTM constructions, and inherent to the definition of a Turing machine.
\subsection{A $(10, 3)$-UTM in Civ:BE}
For the construction of this UTM, we rely on two Workers carrying out instructions on separate, non-overlapping sections of the map: one keeps track of states, and another acts as the head. The latter operates over the tape, which we consider to be the rest of the map. We do not require the player to own any of the tiles on the tape. However, the state hexes are finite and must be owned by the player. Building any improvement in Civ:BE takes a certain amount of turns, but the Workers may build and repair any number of them.
\subsubsection{The Tape:}
The tape is a continuous strip over the entire map, save for the state tiles, and without loss of generality, we assume that it is comprised of flat, traversable hexes. Flatness is needed because irregular terrain requires more movement points to traverse.
The set of symbols $\Gamma$ is based off specific improvements that can be added and removed indefinitely by the tape Worker anywhere on the map--namely, Roads. The tape Worker adds and removes Roads according to the transition function, and a fast-moving unit (in this case, a Rover) pillages them. Note that for this setup to work, the Worker and the Rover must move at the same time. Then, $\Gamma = \{\text{No Improvement}, \text{Road}, \text{Pillaged Road}\}$.
\subsubsection{The States:}
We assume that the player has at least nine flat, desert, unimproved tiles which will act as the states. Desert is required because we map states to the resource yields provided by a specific improvement--the \emph{Terrascape}. Income from other tile types may interfere with state tracking.
Moreover, we assume the player has unlocked the \emph{civic} (social policy) \emph{Ecoscaping}, which provides $+1$ Food, Production, and Culture for every Terrascape built, along with the relevant technology (\emph{Terraforming}) required to build them. The Worker in charge of the states builds and removes Terrascapes on the state hexes, and the normalized change in Culture income serves as the state value $q_i \in Q$, $Q = \{ 0, \dots, 9 \}$.
See \figref{civbe} for a sample construction.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{img/InGameCivBETM.png}
\caption{A $(10, 3)$-UTM built within Civ:BE. The tape is located to the left of the image, and runs from the top middle of the screen to the bottom left. It currently has Roads (symbols in $\Gamma$) on every hex, except one pillaged Road. The head (the tape Worker and the Rover) are positioned in the middle of the screen, reading the symbol "Road"--the read is implicit, but can also be seen from the "Build Actions" menu, which does not allow the player to build a Road on an existing Road. The UTM is in state $q_2$, since the total Culture yield in this turn is $C_t = 7$ (the purple number, top right on the image). Each Terrascape (lush green tiles on the right) contributes $3$ Culture, and the base Culture yield is $C_* = 1$, so $q_2 = (C_t - C_*)/3 = 2$.
For this particular map, desert tiles appear as solid ice.}
\label{fig:civbe}
\end{figure}
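As a sanity check on the state readout used in the caption of \figref{civbe}, the decoding rule can be written out explicitly. This is an illustrative sketch of the arithmetic only; the per-Terrascape Culture value of $+3$ (base yield plus the Ecoscaping bonus) follows the figure, and the function name is our own.

```python
def civbe_state(total_culture, base_culture, per_terrascape=3):
    """Recover the UTM state q_i from the per-turn Culture readout.

    Assumes each worked Terrascape contributes `per_terrascape` Culture
    per turn, as in the figure. Raises if the readout is inconsistent
    with the nine-tile state encoding.
    """
    diff = total_culture - base_culture
    if diff % per_terrascape != 0 or not 0 <= diff <= 9 * per_terrascape:
        raise ValueError("Culture yield inconsistent with the encoding")
    return diff // per_terrascape

# The situation shown in the figure: C_t = 7, C_* = 1  ->  state q_2.
assert civbe_state(7, 1) == 2
```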
\begin{theorem}\label{thm:civbeutm}
Suppose \assntworef{assumption1}{assumption2} hold.
Also suppose that there is a section of the map disjoint from the rest, owned by the player, and comprised of at least nine desert, unimproved tiles; and that the player is able to build and maintain at least nine worked Terrascapes, has a constant base per-turn Culture yield $C_*$, and has the Ecoscaping civic.
Then the following is a $(10, 3)$-UTM:
\begin{align}
Q &= \{ 0, \dots, 9 \}, \\
\Gamma &= \{ \text{No Improvement}, \text{Road}, \text{Pillaged Road} \},
\end{align}
where $q_i \in Q$ is the normalized difference in Culture yield in turn $t$, $q_i = (C_t - C_*)/3$, and the transition function $\delta$ is as described in \tabref{civbe}.
\end{theorem}
\begin{proof}
A full proof can be found in \appref{civbeapp}. It consists of two parts: building a bijection between the transition function and an existing $(10, 3)$-UTM, and then showing that the time delay between our construction and the aforementioned UTM is bounded by a constant.
\end{proof}
\begin{remark}
This is not the only possible UTM that can be built within Civ:BE. Unlocking the ability of a Worker to add and remove Miasma expands the set of tape symbols, and yields a $(7, 4)$-UTM with a smaller set of instructions.
\end{remark}
There are a few things to highlight from this construction, and that will apply to the rest of the UTMs presented in this paper.
First, while the constant base yield is hard to achieve in-game, it is relatively simple to account for: simply execute every move at the beginning of the turn, record $C_*$, and then execute the transition function for the UTM. Alternatively, counting the number of Terrascapes on the state tiles would work just as well.
Second, remark that this UTM, along with the others presented in the paper, may be fully automated by using the API provided by the publisher. Manual simulation is, although tiresome, also possible, as displayed in \secref{busybeaver}.
Finally, it is well-known in theoretical computer science that it is possible to have a UTM with no states, by simply increasing the number of heads and using a different section of the tape as scratch work. Such a construction would be equivalent to ours.
\subsection{A $(10, 3)$-UTM in Civ:V}
We follow the same approach as in \thmref{civbeutm}: one Worker builds improvements (Roads and Railroads) on tiles to simulate the symbols on a tape, and another Worker improves and unimproves tiles to encode the internal state of the machine based on the relative yield of a specific resource. Just as in Civ:BE, building any improvement in Civ:V takes a certain amount of turns, but the Workers may build any number of them.
\subsubsection{The Tape:}
As in \thmref{civbeutm}, the tape is a continuous strip over the entire map, and it is comprised of flat, traversable, hexes.
The set of symbols $\Gamma$ is the two improvements that can be built by the tape Worker anywhere on the map: Roads and Railroads. Building a Railroad requires that the player has unlocked the \emph{Engineering} technology. Then $\Gamma = \{ \text{No Improvement}, \text{Road}, \text{Railroad} \}$.
\subsubsection{The States:}
In Civ:V it is no longer possible to remove improvements outside of Roads and Railroads, so we must rely on these improvements for state tracking.
The Worker in charge of the states builds and removes Railroads to encode the state. This is carried out in a reserved section of the map (e.g., inside a City), and the total number of Railroads serves as the state value $q_i \in Q$, $Q = \{ 0, \dots, 9 \}$.
\begin{theorem}\label{thm:civvutm}
Suppose \assntworef{assumption1}{assumption2} hold.
Also suppose that there is a section of the map disjoint from the rest, owned by the player, and comprised of at least nine tiles.
Assume that the player has
the Engineering technology.
Then the following is a $(10, 3)$-UTM:
\begin{align}
Q &= \{ 0, \dots, 9 \}, \\
\Gamma &= \{ \text{No Improvement}, \text{Road}, \text{Railroad} \},
\end{align}
where $q_i \in Q$ is the total number of Railroads in the nine tiles,
and the transition function $\delta$ is as described in \tabref{civv}.
\end{theorem}
\begin{proof}
In \appref{civvapp}; it is similar to the proof of \thmref{civbeutm}.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{img/InGameCivVTM.png}
\caption{A $(10, 3)$-UTM built within Civ:V. It is in the state $q_2$ since the City (left) has two Railroads built: one on the tile with the elephant and another on the plantation. In this case, the area reserved for state tracking is every tile owned by the City. The head (the rightmost Worker, near the top) is positioned over a Railroad hex with two Roads adjacent to it.}
\label{fig:civv}
\end{figure}
\begin{remark}
Just as in \thmref{civbeutm}, it is possible to use a fast-moving pillaging unit (e.g., a Knight) and the ability of Workers to build Forts, to expand the symbol set of the machine to a $(5, 5)$-UTM. This machine would have an even smaller set of instructions.
\end{remark}
\subsection{A $(48, 2)$-UTM in Civ:VI}
In Civ:VI, Worker units are unable to build improvements indefinitely. It is still possible to build a UTM in Civ:VI if we use the Cities as the head \emph{and} the tape, and leave the Worker to encode only the position of the head. The Citizens are micromanaged to "write" symbols to the tape, by putting them to work specific hexes. As before, we maintain a separate set of tiles to act as the states.
In Civ:VI, there is no upper limit on the number of Cities founded by the player, as long as they are all placed at least $4$ hexes away from other Cities.
Under \assnref{assumption1}, this allows us to found an infinite number of Cities via a unit called a \emph{Settler}. These units are trained by the City, and once a Settler has been created, the City loses a Citizen. Provided there is enough Food, a new Citizen is born in the City after a certain number of turns. No Settlers can be trained if the City has only one Citizen.
Founding a City consumes the Settler, and the City takes its place. The tiles owned at this time are set to be the six closest hexes,\footnote{This action technically builds a City Center, but we will forgo this distinction. Players who play as Russia will found Cities with five extra hexes.} and two Citizens spawn: one works the tile with the highest-possible yield of combined resources, and another--which cannot be retasked--the City itself. All of this occurs on the same turn.
We will utilize the available tiles and the Citizens to build our $(48, 2)$-UTM. A sample construction can be seen in \figref{civvi}. Although our construction is more complicated than the previous UTMs, we note that all of this is achievable with the built-in mechanics and rules of the game.
\subsubsection{The Tape:}
Assume, for simplicity, that the map is comprised of flat hexes, all of which are of the floodplains category. A single, uninterrupted, grassland strip is the only exception.
The tape is then the grassland tiles owned by the player. Cities are placed three hexes away from one another along the grassland strip.
The symbols on the tape ($\Gamma = \{$"is being worked", "is not being worked"$\}$) are determined by whether a Citizen is working a given grassland tile.
Our design is unconventional in that the tape is not laid out in advance, but instead grows on demand. We consider a transition to be concluded only once the City has at least three Citizens. We cap the growth of any new City at four Citizens: one that works in the City (and cannot be moved), two to work the tape tiles, and one that will act as a Settler if need be. If a Settler has been created, then the growth of the City is capped at three. Note that this doubles as a marker for the end of the tape.
Every time the transition function issues a "move right" (resp. "move left") command, the controller first checks the population size of the City and the position of the Worker unit. There are two possible cases:
\begin{itemize}
\item The City has three Citizens, and the Worker is on the rightmost (resp. leftmost) hex. Then the Worker moves right (left), to the next City, and is placed on the leftmost (rightmost) hex.
\item The City has four Citizens. This is the end of the tape and a Settler must be trained to expand the tape. After it is spawned, the City is capped to three, and the Settler is issued the command to move right (resp. left) four hexes, build a City, and grow it to four population. The Worker is issued the command to move right (left) three hexes, and wait until the City is grown.
\end{itemize}
The "move left" command is symmetric to the above.
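The "move right" handling above amounts to a small controller. The sketch below is a hypothetical abstraction of that logic (Cities reduced to their population caps, a three-hex tape segment per City as an illustrative choice, and in-game orders returned as strings); it is not game code, and all names are our own.

```python
def move_right(cities, city, hexpos, segment=3):
    """One 'move right' of the head over the City-backed tape.

    cities: list of City populations along the strip. A population of 4
    marks the current end of the tape; 3 means the City to the right
    already exists. Returns (new_city, new_hex, commands), where commands
    are the in-game orders issued during this transition.
    """
    commands = []
    if hexpos < segment - 1:              # stay within this City's segment
        return city, hexpos + 1, commands
    if cities[city] == 4:                 # end of tape: extend it first
        cities[city] = 3                  # cap the old end-of-tape City
        commands += ["train Settler", "found City four hexes to the right",
                     "grow the new City to four Citizens"]
        cities.append(4)                  # new end-of-tape marker
    return city + 1, 0, commands          # step onto the next City's segment
```

The "move left" controller would mirror this function with the directions reversed.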
\subsubsection{The States:}
Just as in \thmstworef{civbeutm}{civvutm}, we rely on a set of hexes disjoint to the tape for our state tracking, and we utilize the relative yield of a specific resource (Faith) to encode it. We assume the player has built $23$ Monasteries without any adjacent districts. Monasteries, when worked under these conditions, yield $+2$ Faith per turn, and are only buildable if the player has ever allied with the Armagh City-State. We also require the player to have built $23$ Farms without any adjacent districts.\footnote{As in the other constructions, we impose specific requirements for our proofs and their generalizability. In practice, any resource yield (e.g., Culture) or improvement (e.g., Ethiopia's Rock-Hewn Church; Egypt's Sphinx; India's Stepwell) would work.}
Transitioning from one state to another requires the player to reassign Citizens between the Farms and the Monasteries, adjusting the difference in Faith yield relative to the base yield; this difference in turn realizes (part of) the state $q_i \in Q$.
\begin{theorem}\label{thm:civviutm}
Suppose \assntworef{assumption1}{assumption2} hold. Also suppose that the map contains an infinite strip of grassland tiles, with floodplains above and below, and that the player owns one City placed in the grassland strip.
Also assume that there is a set of tiles disjoint to the strip, owned by the player, and with $23$ Monasteries and $23$ Farms without any adjacent districts. Let $F_*$ be the constant base Faith yield per turn for this player.
Then, if there are no in-game natural disasters occurring during the time of the execution, the following Turing machine is a $(48, 2)$-UTM:
\begin{align}
Q &= \{ 0n, \dots, 23n \} \cup \{ 0b, \dots, 23b \} , \\
\Gamma &= \{ \text{Is Being Worked}, \text{Is Not Being Worked} \},
\end{align}
where $q_ix \in Q$ is read off from the difference in Faith yields at turn $t$, $q_i = (F_t - F_*)/2$; $x\in \{n, b\}$ indicates whether a new Settler needs to be built, and the transition function $\delta$ is as described in \tabref{civvi}.
\end{theorem}
\begin{proof}
In \appref{civviapp}.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{img/CivVITMScheme.png}
\caption{Diagram of a $(48, 2)$-UTM built with Civ:VI components and rules. The bottom two Cities act as the state of the machine, and each of the Cities on the top strip contain two cells from the tape. The head position is tracked by the Worker. Refer to \figref{igcivvi} for a working in-game construction.}
\label{fig:civvi}
\end{figure}
In \appref{civviappalt} we discuss an alternative implementation involving two players at war with each other, and which requires fewer cities being built.
\begin{remark}
This construction works with every civilization and update released up to and including the April 2021 Update. It is, however, much more brittle than the other two UTMs introduced in this paper.
\end{remark}
\begin{corollary}
Civ:BE, Civ:V, and Civ:VI are undecidable under \assntworef{assumption1}{assumption2}.
\end{corollary}
\begin{proof}
Immediate from the proofs of \thmsthreeref{civbeutm}{civvutm}{civviutm}. The existence of an in-game state that simulates a UTM implies undecidability.
\end{proof}
\section{Example: The Busy Beaver in Civ:BE}\label{sec:busybeaver}
\begin{table}\label{tab:bbtm}
\begin{center}
\caption{Transition function for the three-state Turing machine corresponding to BB-$3$, and the equivalent Civ:BE construction. For the Turing machine, the notation is read as $(state, \;read;\; write,\;move,\;set\;state)$. For the Civ:BE machine, $\Delta$ corresponds to the normalized change in Culture per turn.}
\def1.05{1.05}
\begin{tabular}{|l|l||c|}
\hline
Civ:BE state ($\Delta;\; tape\; read$) \;&\; Command to Workers $(tape;\; state)$ & BB-$3$ TM
\\ \hline
$\;0$; No Improvement & \;Build a Road and move $R$; Build a Terrascape \;& $q_00;1 R q_1$ \\
$\;0$; Road & \;Build a Road and move $L$; Build $2$ Terrascapes \;& $q_01;1 L q_2$ \\
\hline
$\;1$; No Improvement & \;Build a Road and move $L$; Remove a Terrascape \;& $q_10;1 L q_0$ \\
$\;1$; Road & \;Build a Road and move $R$; No build \;& $q_11;1 R q_1$ \\
\hline
$\;2$; No Improvement & \;Build a Road and move $L$; Remove a Terrascape \;& $q_20;1 L q_1$ \\
$\;2$; Road & \;HALT \;& $\;q_21;1 R$ $HALT$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The Busy Beaver game, as described by Rad\'o \cite{BusyBeaver}, is a game in theoretical computer science.
It challenges the "player" (a Turing machine or a person) to find a halting Turing machine with a specified number of states $n$ that writes the largest number of $1$s on the tape out of all possible $n$-state Turing machines. The winner is referred to as the $n^{\text{th}}$ Busy Beaver, or BB-$n$. Note that all the Turing machines playing the Busy Beaver game have, by definition, only two symbols, $\Gamma = \{0, 1\}$.
Finding whether a given Turing machine is a Busy Beaver is undecidable, but small Busy Beavers are often used to demonstrate how a Turing machine works. In particular, BB-$3$ is a three-state ($Q = \{q_0, q_1, q_2\}$), two-symbol ($\Gamma = \{0, 1\}$) Turing machine with a transition function as described in \tabref{bbtm}.
Given the Turing completeness of Civ:BE, we can build BB-$3$ within the game itself. An equivalent construction with our Civ:BE $(10, 3)$-UTM has the states $Q = \{0, 1, 2\}$, symbols $\Gamma = \{\text{No Improvement}, \text{Road}\}$, and with a transition function as described in \tabref{bbtm}. A sample execution of BB-$3$ within Civ:BE can be seen in \figref{civbebb}.\footnote{An animated version of this execution can be found in \url{https://adewynter.github.io/notes/img/BB3.gif}.}
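The table can also be executed mechanically outside the game. The following self-contained sketch (ours) runs the BB-$3$ transition function from \tabref{bbtm} on a blank tape and counts the 1s written; the final count is the well-known busy-beaver value $\Sigma(3) = 6$.

```python
# BB-3 transition table from the text: (state, read) -> (write, move, next).
DELTA = {
    ("q0", 0): (1, +1, "q1"), ("q0", 1): (1, -1, "q2"),
    ("q1", 0): (1, -1, "q0"), ("q1", 1): (1, +1, "q1"),
    ("q2", 0): (1, -1, "q1"), ("q2", 1): (1, +1, "HALT"),
}

tape, pos, state = {}, 0, "q0"             # blank tape, head at the origin
while state != "HALT":
    write, move, state = DELTA[(state, tape.get(pos, 0))]
    tape[pos] = write
    pos += move

print(sum(tape.values()))  # 6, the busy-beaver value Sigma(3)
```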
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{img/BB3.png}
\caption{Sample execution of BB-$3$ with a Civ:BE Turing machine. The machine executes $11$ instructions $t_1, \dots, t_{11}$ before halting. \emph{Top left}: state corresponding to $t_1 = q_00;1Rq_1$. \emph{Top right}: state for the machine at $t_2 = q_10;1Lq_0$. \emph{Bottom left}: state for the machine at $t_{10} = q_01;1Lq_2$. \emph{Bottom right}: state for the machine at $t_{11} = q_21;HALT$.}
\label{fig:civbebb}
\end{figure}
\section{Discussion}\label{sec:conclusion}
We introduced UTMs for Civ:BE, Civ:V, and Civ:VI, and showed their ability to execute an arbitrary algorithm by running BB-$3$ on our Civ:BE UTM. We also proved that, as an immediate consequence of the existence of a UTM within their internal state, these games are undecidable under the unbounded turn limit and map size assumptions.
Turing completeness in a system goes beyond a mere curiosity, however: the work here showed that there might exist states of the game where it is undecidable to determine whether there exists a sequence of moves that yields a specific state; and, more importantly, that it is possible to write and execute programs in-game.
That being said, any realistic construction of a Turing machine is physically limited by the hardware. It follows that if these assumptions were relaxed, Civ:BE, Civ:V, and Civ:VI would be equivalent to a linear bounded automaton. Even then, the games are characterized by long sessions, complex rules, multiple victory conditions, varying scales, and imperfect information, all of which leave room for further analysis from a computational complexity perspective. While other video games have been studied, such as classic Nintendo games \cite{NintendoNP} and traditional board games (both of which led to the development of powerful theoretical tools \cite{DemaineGames}), most 4X video games have not been analyzed from this angle.
A good understanding of the computational complexity associated with these games is important for the development of more efficient and engaging AI systems; and their inherent long-term simulative nature at the macro- and micro-management levels connects them to contemporary problems in computer science, such as multiple-scale forecasting and online learning.
\begin{abstract}
In this paper, we provide a theoretical understanding of word embedding and its dimensionality.
Motivated by the unitary-invariance of word embedding, we propose the Pairwise Inner Product (PIP) loss, a novel metric on the dissimilarity between word embeddings. Using techniques from matrix perturbation theory, we reveal a fundamental bias-variance trade-off in dimensionality selection for word embeddings. This bias-variance trade-off sheds light on many empirical observations which were previously unexplained, for example the existence of an optimal dimensionality. Moreover, new insights and discoveries, like when and how word embeddings are robust to over-fitting, are revealed. By optimizing over the bias-variance trade-off of the PIP loss, we can explicitly answer the open question of dimensionality selection for word embedding.
\end{abstract}
\section{Introduction}
Word embeddings are useful and versatile tools, serving as key components in many fundamental problems across NLP research \citep{turney2010frequency}. To name a few, word embeddings are widely applied in information retrieval \citep{salton1971smart,salton1988term,sparck1972statistical}, recommendation systems \citep{breese1998empirical,yin2017deepprobe}, image description \citep{frome2013devise}, relation discovery \citep{NIPS2013_5021} and word level translation \citep{mikolov2013exploiting}. Furthermore, numerous important applications are built on top of word embeddings. Some prominent examples are long short-term memory (LSTM) networks \citep{hochreiter1997long} that are used for language modeling \citep{bengio2003neural}, machine translation \citep{sutskever2014sequence,bahdanau2014neural}, text summarization \citep{nallapati2016abstractive} and image caption generation \citep{xu2015show,vinyals2015show}. Other important applications include named entity recognition \citep{lample2016neural} and sentiment analysis \citep{socher2013recursive}.
However, the impact of dimensionality on word embedding has not yet been fully understood. As a critical hyper-parameter, the choice of dimensionality for word vectors has a huge influence on the performance of a word embedding. First, it directly impacts the quality of word vectors: a word embedding with a small dimensionality is typically not expressive enough to capture all possible word relations, whereas one with a very large dimensionality suffers from over-fitting. Second, the number of parameters for a word embedding or a model that builds on word embeddings (e.g. recurrent neural networks) is usually a linear or quadratic function of dimensionality, which directly affects training time and computational costs. Therefore, large dimensionalities tend to increase model complexity, slow down training speed, and add inferential latency, all of which are constraints that can potentially limit model applicability and deployment \citep{wu2016google}.
Dimensionality selection for embedding is a well-known open problem. In most NLP research, dimensionality is either selected ad hoc or by grid search, either of which can lead to sub-optimal model performance. For example, 300 is perhaps the most commonly used dimensionality in various studies \citep{mikolov2013efficient,pennington2014glove,bojanowski2017enriching}. This is possibly due to the influence of the groundbreaking paper that introduced the skip-gram Word2Vec model and chose a dimensionality of 300 \citep{mikolov2013efficient}. A better empirical approach used by some researchers is to first train many embeddings of different dimensionalities, evaluate them on a functionality test (like word relatedness or word analogy), and then pick the one with the best empirical performance. However, this method suffers from 1) greatly increased time complexity and computational burden, 2) inability to exhaust all possible dimensionalities, and 3) lack of consensus between different functionality tests, whose results can differ. Thus, we need a universal criterion that can reflect the relationship between the dimensionality and quality of word embeddings in order to establish a dimensionality selection procedure for embedding methods.
In this regard, we outline a few major contributions of our paper:
\begin{enumerate}
\item We introduce the PIP loss, a novel metric on the dissimilarity between word embeddings;
\item We develop a mathematical framework that reveals a fundamental bias-variance trade-off in dimensionality selection. We explain the existence of an optimal dimensionality, a commonly observed phenomenon that previously lacked an explanation;
\item We quantify the robustness of embedding algorithms using the exponent parameter $\alpha$, and establish that many widely used embedding algorithms, including skip-gram and GloVe, are robust to over-fitting;
\item We propose a mathematically rigorous answer to the open problem of dimensionality selection by minimizing the PIP loss. We perform this procedure and cross-validate the results with grid search for LSA, skip-gram Word2Vec and GloVe on an English corpus.
\end{enumerate}
For the rest of the paper, we consider the problem of learning an embedding for a vocabulary of size $n$, which is canonically defined as $\mathcal V=\{1,2,\cdots, n\}$. Specifically, we want to learn a vector representation $v_i\in \mathbb R^d$ for each token $i$. The main object is the \textbf{embedding matrix} $E\in\mathbb{R}^{n\times d}$, consisting of the stacked vectors $v_i$, where $E_{i,\cdot}=v_i$. All matrix norms in the paper are Frobenius norms unless otherwise stated.
\section{Preliminaries and Background Knowledge}\label{sec:background}
Our framework is built on the following preliminaries:
\begin{enumerate}
\item Word embeddings are unitary-invariant;
\item Most existing word embedding algorithms can be formulated as low rank matrix approximations, either explicitly or implicitly.
\end{enumerate}
\subsection{Unitary Invariance of Word Embeddings} \label{subsec:unitary_invariance}
The unitary-invariance of word embeddings has been discovered in recent research \citep{hamilton2016diachronic, artetxe2016learning, smith2017offline}. It states that two embeddings are essentially identical if one can be obtained from the other by performing a unitary operation, e.g., a rotation. A unitary operation on a vector corresponds to multiplying the vector by a unitary matrix, \textit{i.e.} $v'=vU$, where $U^TU=UU^T=Id$. Note that a unitary transformation preserves the relative geometry of the vectors, and hence defines an \textit{equivalence class} of embeddings. In Section \ref{sec:PIP}, we introduce the Pairwise Inner Product loss, a unitary-invariant metric on embedding similarity.
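As a quick numerical sanity check (a minimal sketch with a random toy embedding, not an experiment from this paper), one can verify that multiplying an embedding by a random orthogonal matrix leaves all pairwise inner products, and hence the relative geometry, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
E = rng.standard_normal((n, d))                 # a toy embedding matrix

# A random orthogonal matrix U (so U^T U = U U^T = Id), obtained via QR.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
E_rot = E @ U                                   # v' = vU for every row v

# Pairwise inner products -- hence cosine similarities and row norms --
# are preserved exactly: E U U^T E^T = E E^T.
assert np.allclose(E @ E.T, E_rot @ E_rot.T)
assert np.allclose(np.linalg.norm(E, axis=1), np.linalg.norm(E_rot, axis=1))
```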
\subsection{Word Embeddings from Explicit Matrix Factorization}
A wide range of embedding algorithms use explicit matrix factorization, including the popular Latent Semantic Analysis (LSA). In LSA, word embeddings are obtained by truncated SVD of a signal matrix $M$ which is usually based on co-occurrence statistics, for example the Pointwise Mutual Information (PMI) matrix, positive PMI (PPMI) matrix and Shifted PPMI (SPPMI) matrix \citep{levy2014neural}. Eigen-words \citep{dhillon2015eigenwords} is another example of this type.
\citet{caron2001experiments,bullinaria2012extracting,turney2012domain, levy2014neural} described a generic approach of obtaining embeddings from matrix factorization. Let $M$ be the signal matrix (e.g. the PMI matrix) and $M = UDV^T$ be its SVD. A $k$-dimensional embedding is obtained by truncating the left singular matrix $U$ at dimension $k$, and multiplying it by a power of the truncated diagonal matrix $D$, i.e. $E=U_{1:k}D_{1:k,1:k}^\alpha$ for some $\alpha\in [0,1]$. \citet{caron2001experiments,bullinaria2012extracting} discovered through empirical studies that different values of $\alpha$ work for different language tasks. In \citet{levy2014neural}, where the authors explained the connection between skip-gram Word2Vec and matrix factorization, $\alpha$ is set to $0.5$ to enforce symmetry. We discover that $\alpha$ controls the robustness of embeddings against over-fitting, as will be discussed in Section \ref{subsec:robustness}.
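This generic recipe takes only a few lines to implement. A minimal sketch (with a made-up symmetric matrix standing in for a real signal matrix such as PMI):

```python
import numpy as np

def embed(M, k, alpha=0.5):
    """Generic SVD embedding: E = U[:, :k] @ diag(s[:k]**alpha)."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k] ** alpha  # broadcasting = right-mult by diagonal

rng = np.random.default_rng(1)
G = rng.standard_normal((50, 50))
M = G + G.T                       # a symmetric toy "signal" matrix
E = embed(M, k=10, alpha=0.5)     # alpha = 0.5 is the symmetric choice
print(E.shape)                    # (50, 10)
```

Each column of $E$ then has squared norm equal to the corresponding singular value raised to the power $2\alpha$.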
\subsection{Word Embeddings from Implicit Matrix Factorization}
In NLP, the two most widely used embedding models are skip-gram Word2Vec \citep{NIPS2013_5021} and GloVe \citep{pennington2014glove}. Although they learn word embeddings by optimizing over some objective functions using stochastic gradient methods, both have been shown to implicitly perform matrix factorizations.
\paragraph{Skip-gram} Skip-gram Word2Vec maximizes the likelihood of co-occurrence of the center word and context words. The log likelihood is defined as
\begin{small}
\[\sum_{i=0}^{n}\sum_{j=i-w,j\ne i}^{i+w} \log(\sigma(v_j^Tv_i)),\text{ where } \sigma(x)=\frac{e^x}{1+e^x}\]
\end{small}
\citet{levy2014neural} showed that skip-gram Word2Vec's objective is an implicit symmetric factorization of the Pointwise Mutual Information (PMI) matrix:
\begin{small}
\[\text{PMI}_{ij}=\log\frac{p(v_i,v_j)}{p(v_i)p(v_j)}\]
\end{small}
Skip-gram is sometimes enhanced with techniques like negative sampling \citep{mikolov2013exploiting}, where the signal matrix becomes the Shifted PMI matrix \citep{levy2014neural}.
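For concreteness, a sketch of how the PMI and Shifted PPMI matrices are computed from a word-context co-occurrence count matrix (the counts below are made up; real signal matrices are estimated from corpus statistics):

```python
import numpy as np

def pmi(counts, shift=0.0):
    """PMI_ij = log p(i,j)/(p(i)p(j)), optionally shifted by log k."""
    total = counts.sum()
    p_joint = counts / total
    p_row = counts.sum(axis=1, keepdims=True) / total
    p_col = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide='ignore'):          # zero counts give -inf
        return np.log(p_joint / (p_row * p_col)) - shift

# A tiny, made-up 3-word co-occurrence count matrix.
C = np.array([[10., 2., 0.],
              [ 2., 8., 3.],
              [ 0., 3., 6.]])
M_pmi = pmi(C)
M_sppmi = np.maximum(pmi(C, shift=np.log(5)), 0.0)  # Shifted PPMI, k = 5
```

Taking the positive part after shifting by $\log k$ yields the SPPMI matrix of \citet{levy2014neural}, which is finite everywhere even when some counts are zero.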
\paragraph{GloVe} \citet{levy2015improving} pointed out that the objective of GloVe is implicitly a symmetric factorization of the log-count matrix. The factorization is sometimes augmented with bias vectors and the log-count matrix is sometimes raised to an exponent $\gamma\in[0,1]$ \citep{pennington2014glove}.
\section{PIP Loss: a Novel Unitary-invariant Loss Function for Embeddings} \label{sec:PIP}
How do we know whether a trained word embedding is good enough? Questions of this kind cannot be answered without a properly defined loss function. For example, in statistical estimation (\textit{e.g.} linear regression), the quality of an estimator $\hat \theta$ can often be measured using the $l_2$ loss $\mathbb E[\|\hat \theta-\theta^*\|_2^2]$ where $\theta^*$ is the unobserved ground-truth parameter. Similarly, for word embedding, a proper metric is needed in order to evaluate the quality of a trained embedding.
As discussed in Section \ref{subsec:unitary_invariance}, a reasonable loss function between embeddings should respect the unitary-invariance. This rules out choices like direct comparisons, for example using $\|E_1-E_2\|$ as the loss function. We propose the Pairwise Inner Product (PIP) loss, which naturally arises from the unitary-invariance, as the dissimilarity metric between two word embeddings:
\begin{definition}[PIP matrix]
Given an embedding matrix $E\in\mathbb R^{n\times d}$, define its associated Pairwise Inner Product (PIP) matrix to be
\[\text{PIP}(E)=EE^T\]
\end{definition}
It can be seen that the $(i,j)$-th entry of the PIP matrix corresponds to the inner product between the embeddings for word $i$ and word $j$, i.e. $\text{PIP}_{i,j}=\langle v_i,v_j\rangle$. To compare $E_1$ and $E_2$, two embedding matrices on a common vocabulary, we propose the \textbf{PIP loss}:
\begin{definition}[PIP loss]
The PIP loss between $E_1$ and $E_2$ is defined as the norm of the difference between their PIP matrices
\begin{small}
\[\|\text{PIP}(E_1)-\text{PIP}(E_2)\|=\|E_1E_1^T-E_2E_2^T\|=\sqrt{\sum_{i,j}(\langle v_i^{(1)},v_j^{(1)}\rangle-\langle v_i^{(2)}, v_j^{(2)}\rangle)^2}\]
\end{small}
\end{definition}
Note that the $i$-th row of the PIP matrix, $v_iE^T=(\langle v_i, v_1\rangle,\cdots, \langle v_i, v_n\rangle)$, can be viewed as the relative position of $v_i$ anchored against all other vectors $\{v_1,\cdots,v_n\}$. In essence, the PIP loss measures the vectors' \textit{relative position shifts} between $E_1$ and $E_2$, thereby removing their dependencies on any specific coordinate system. The PIP loss respects the unitary-invariance. Specifically, if $E_2=E_1U$ where $U$ is a unitary matrix, then the PIP loss between $E_1$ and $E_2$ is zero because $E_2E_2^T=E_1E_1^T$. In addition, the PIP loss serves as a metric of {\it functionality} dissimilarity. A practitioner may only care about the usability of word embeddings, for example, using them to solve analogy and relatedness tasks \citep{schnabel2015evaluation, baroni2014don}, which are the two most important properties of word embeddings. Since both properties are tightly related to vector inner products, a small PIP loss between $E_1$ and $E_2$ leads to a small difference in $E_1$ and $E_2$'s relatedness and analogy as the PIP loss measures the difference in inner products\footnote{A detailed discussion on the PIP loss and analogy/relatedness is deferred to the appendix}. As a result, from both theoretical and practical standpoints, the PIP loss is a suitable loss function for embeddings.
Furthermore, we show in Section \ref{sec:perturbation} that this formulation opens up a new angle to understanding the effect of embedding dimensionality with matrix perturbation theory.
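The PIP loss takes one line to implement. A sketch, which also checks numerically that the loss vanishes under a unitary rotation, as claimed (toy random embeddings, for illustration only):

```python
import numpy as np

def pip_loss(E1, E2):
    """Frobenius norm of the difference between the two PIP matrices."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T)

rng = np.random.default_rng(2)
E = rng.standard_normal((100, 20))
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # random unitary matrix

print(pip_loss(E, E @ U))    # ~0: the PIP loss is unitary-invariant
print(pip_loss(E, E + 0.1 * rng.standard_normal((100, 20))))  # > 0
```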
\section{How Does Dimensionality Affect the Quality of Embedding?}\label{sec:perturbation}
With the PIP loss, we can now study the quality of trained word embeddings for any algorithm that uses matrix factorization. Suppose a $d$-dimensional embedding is derived from a signal matrix $M$ with the form $f_{\alpha,d}(M)\overset{\Delta}{=}U_{\cdot,1:d}D_{1:d,1:d}^\alpha$, where $M = UDV^T$ is the SVD. In the ideal scenario, a genie reveals a clean signal matrix $M$ (\textit{e.g.} PMI matrix) to the algorithm, which yields the \textbf{oracle embedding} $E=f_{\alpha,d}(M)$. However, in practice, there is no magical oil lamp, and we have to estimate $\tilde M$ (\textit{e.g.} empirical PMI matrix) from the training data, where $\tilde M=M+Z$ is perturbed by the estimation noise $Z$. The \textbf{trained embedding} $\hat E=f_{\alpha,k}(\tilde M)$ is computed by factorizing this noisy matrix. To ensure $\hat E$ is close to $E$, we want the PIP loss $\|EE^T-\hat E\hat E^T\|$ to be small. In particular, this PIP loss is affected by $k$, the dimensionality we select for the trained embedding.
\citet{arora2016blog} discussed, in a blog post, a mysterious empirical observation about word embeddings: ``\textit{... A striking finding in empirical work on word embeddings is that there is a sweet spot for the dimensionality of word vectors: neither too small, nor too large}"\footnote{\url{http://www.offconvex.org/2016/02/14/word-embeddings-2/}}. He proceeded by discussing two possible explanations: low dimensional projection (like the Johnson-Lindenstrauss Lemma) and the standard generalization theory (like the VC dimension), and pointed out why neither is sufficient for explaining this phenomenon. While some may argue that this is caused by underfitting/overfitting, the concept itself is too broad to provide any useful insight. We show that this phenomenon can be explicitly explained by a bias-variance trade-off in Sections \ref{subsec:tradeoff_0}, \ref{subsec:tradeoff_generic} and \ref{subsec:tradeoff_main}. Equipped with the PIP loss, we give a mathematical presentation of the bias-variance trade-off using matrix perturbation theory. We first introduce a classical result in Lemma \ref{lemma:2}. The proof is deferred to the appendix, and can also be found in \citet{stewart1990matrix}.
\begin{lemma}\label{lemma:2}
Let $X$, $Y$ be two orthogonal matrices in $\mathbb{R}^{n\times n}$, partitioned as $X=[X_0,X_1]$ and $Y=[Y_0,Y_1]$, where $X_0, Y_0\in\mathbb{R}^{n\times k}$ consist of the first $k$ columns of $X$ and $Y$ respectively, with $k\le n$. Then
\[\|X_0X_0^T-Y_0Y_0^T\|=c\|X_0^TY_1\|\]
where $c$ is a constant depending on the norm only. $c=1$ for 2-norm and $\sqrt{2}$ for Frobenius norm.
\end{lemma}
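Lemma \ref{lemma:2} is easy to verify numerically. A sketch with random orthogonal matrices and the Frobenius norm (so $c=\sqrt{2}$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 8
X, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
Y, _ = np.linalg.qr(rng.standard_normal((n, n)))
X0, Y0, Y1 = X[:, :k], Y[:, :k], Y[:, k:]

lhs = np.linalg.norm(X0 @ X0.T - Y0 @ Y0.T)       # Frobenius norm
rhs = np.sqrt(2) * np.linalg.norm(X0.T @ Y1)
print(lhs, rhs)   # the two sides agree up to floating-point error
```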
As pointed out by several papers \citep{caron2001experiments,bullinaria2012extracting,turney2012domain,levy2014neural}, embedding algorithms can be generically characterized as $E=U_{\cdot,1:k}D^\alpha_{1:k,1:k}$ for some $\alpha\in[0,1]$. For illustration purposes, we first consider a special case where $\alpha=0$.
\subsection{The Bias Variance Trade-off for a Special Case: $\alpha=0$}\label{subsec:tradeoff_0}
The following theorem shows how the PIP loss can be naturally decomposed into a bias term and a variance term when $\alpha=0$:
\begin{theorem}\label{theorem:1}
Let $E\in\mathbb{R}^{n\times d}$ and $\hat E\in\mathbb{R}^{n\times k}$ be the oracle and trained embeddings, where $k\le d$. Assume both have orthonormal columns. Then the PIP loss has a bias-variance decomposition
\begin{small}
\[\|\text{PIP}(E)-\text{PIP}(\hat E)\|^2=d-k+2\|\hat{E}^TE^\perp\|^2\]
\end{small}
\end{theorem}
\begin{proof}
The proof utilizes techniques from matrix perturbation theory. To simplify notations, denote $X_0=E$, $Y_0=\hat E$, and let $X=[X_0,X_1]$, $Y=[Y_0,Y_1]$ be the complete $n$ by $n$ orthogonal matrices. Since $k\le d$, we can further split $X_0$ into $X_{0,1}$ and $X_{0,2}$, where the former has $k$ columns and the latter $d-k$. Now, the PIP loss equals
\begin{small}
\begin{align*}
\|EE^T-\hat E\hat E^T\|^2
=&\|X_{0,1}X_{0,1}^T-Y_0Y_0^T+X_{0,2}X_{0,2}^T\|^2\\
=&\|X_{0,1}X_{0,1}^T-Y_0Y_0^T\|^2+\|X_{0,2}X_{0,2}^T\|^2+2\langle X_{0,1}X_{0,1}^T-Y_0Y_0^T, X_{0,2}X_{0,2}^T\rangle\\
\overset{(a)}{=}&2\|Y_0^T[X_{0,2},X_1]\|^2+d-k-2\langle Y_0Y_0^T, X_{0,2}X_{0,2}^T\rangle\\
=&2\|Y_0^TX_{0,2}\|^2+2\|Y_0^TX_1\|^2+d-k-2\langle Y_0Y_0^T, X_{0,2}X_{0,2}^T\rangle\\
=&d-k+2\|Y_0^TX_1\|^2=d-k+2\|\hat E^TE^\perp\|^2
\end{align*}
\end{small}
where in equality (a) we used Lemma \ref{lemma:2}.
\end{proof}
The observation is that the right-hand side now consists of two parts, which we identify as bias and variance. The first part $d-k$ is the amount of lost signal, which is caused by discarding the rest $d-k$ dimensions when selecting $k\le d$. However, $\|\hat E^TE^\perp\|$ increases as $k$ increases, as the noise perturbs the subspace spanned by $E$, and the singular vectors corresponding to smaller singular values are more prone to such perturbation. As a result, the optimal dimensionality $k^*$ which minimizes the PIP loss lies in between 0 and $d$, the rank of the matrix $M$.
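Since Theorem \ref{theorem:1} is an exact identity, it can also be checked numerically. A sketch with random orthonormal matrices (here $E^\perp$ denotes an orthonormal basis of the complement of the span of $E$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 40, 12, 7
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
E, E_perp = Q[:, :d], Q[:, d:]          # oracle embedding and its complement
E_hat, _ = np.linalg.qr(rng.standard_normal((n, k)))  # "trained" embedding

lhs = np.linalg.norm(E @ E.T - E_hat @ E_hat.T) ** 2
rhs = d - k + 2 * np.linalg.norm(E_hat.T @ E_perp) ** 2
print(lhs, rhs)   # the two sides agree up to floating-point error
```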
\subsection{The Bias Variance Trade-off for the Generic Case: $\alpha\in (0, 1]$}\label{subsec:tradeoff_generic}
In this generic case, the columns of $E$, $\hat E$ are no longer orthonormal, which does not satisfy the assumptions in matrix perturbation theory. We develop a novel technique where Lemma \ref{lemma:2} is applied in a telescoping fashion. The proof of the theorem is deferred to the appendix.
\begin{theorem}\label{theorem:2}
Let
\begin{small}
$M=UDV^T$, $\tilde M=\tilde U\tilde D\tilde V^T$
\end{small}
be the SVDs of the clean and estimated signal matrices. Suppose $E=U_{\cdot,1:d}D_{1:d,1:d}^\alpha$ is the oracle embedding, and $\hat E=\tilde U_{\cdot,1:k}{\tilde D}_{1:k,1:k}^\alpha$ is the trained embedding, for some $k\le d$. Let $D=diag(\lambda_i)$ and $\tilde D=diag(\tilde \lambda_i)$, then
\begin{small}
\begin{align*}
\|\text{PIP}(E)-\text{PIP}(\hat E)\|\le&\sqrt{\sum_{i=k+1}^d \lambda_i^{4\alpha}}+\sqrt{\sum_{i=1}^k (\lambda_i^{2\alpha}-\tilde\lambda_{i}^{2\alpha})^2}+\sqrt{2}\sum_{i=1}^k (\lambda_i^{2\alpha}-\lambda_{i+1}^{2\alpha})\|\tilde U_{\cdot,1:i}^T U_{\cdot,i:n}\|
\end{align*}
\end{small}
\end{theorem}
As before, the three terms in Theorem \ref{theorem:2} can be categorized into bias and variance. The first term is the bias, as we lose part of the signal by choosing $k\le d$. Notice that the embedding matrix $E$ consists of signal directions (given by $U$) and their magnitudes (given by $D^\alpha$). The second term is the variance on the \textit{magnitudes}, and the third term is the variance on the \textit{directions}.
\subsection{The Bias-Variance Trade-off Captures the Signal-to-Noise Ratio}\label{subsec:tradeoff_main}
We now present the main theorem, which shows that the bias-variance trade-off reflects the ``signal-to-noise ratio'' in dimensionality selection.
\begin{theorem}[Main theorem]\label{theorem:main}
Suppose $\tilde M=M+Z$, where $M$ is the signal matrix, symmetric with spectrum $\{\lambda_i\}_{i=1}^d$. $Z$ is the estimation noise, symmetric with iid, zero mean, variance $\sigma^2$ entries. For any $0\le \alpha \le 1$ and $k\le d$, let the oracle and trained embeddings be
\begin{small}
\[ E=U_{\cdot,1:d}D_{1:d,1:d}^\alpha,\ \hat E=\tilde U_{\cdot,1:k}\tilde D_{1:k,1:k}^\alpha\]
\end{small}
where
\begin{small}
$M=UDV^T$, $\tilde M=\tilde U\tilde D\tilde V^T$
\end{small}
are the SVDs of the clean and estimated signal matrices. Then
\begin{enumerate}
\item When $\alpha=0$,
\begin{small}
\begin{equation*}
\mathbb E[\|EE^T-\hat E\hat E^T\|]\le\sqrt{d-k+2\sigma^2\sum_{r\le k, s>d}(\lambda_{r}-\lambda_{s})^{-2}}
\end{equation*}
\end{small}
\item When $0<\alpha\le 1$,
\begin{small}
\begin{align*}
\mathbb E[\|EE^T-\hat E\hat E^T\|]\le\sqrt{\sum_{i=k+1}^d \lambda_i^{4\alpha}}+2\sqrt{2n}\alpha\sigma\sqrt{\sum_{i=1}^k \lambda_i^{4\alpha-2}}+\sqrt{2}\sum_{i=1}^k (\lambda_i^{2\alpha}-\lambda_{i+1}^{2\alpha})\sigma\sqrt{\sum_{r\le i<s}(\lambda_{r}-\lambda_{s})^{-2}}
\end{align*}
\end{small}
\end{enumerate}
\end{theorem}
\begin{proof}
We sketch the proof for part 2, as the proof of part 1 is simpler and can be done with the same arguments. We start by taking expectation on both sides of Theorem \ref{theorem:2}:
\begin{small}
\begin{align*}
\mathbb E[\|EE^T-\hat E\hat E^T\|]\le&\sqrt{\sum_{i=k+1}^d \lambda_i^{4\alpha}}+\mathbb E\sqrt{\sum_{i=1}^k (\lambda_i^{2\alpha}-\tilde\lambda_{i}^{2\alpha})^2}+\sqrt{2}\sum_{i=1}^k (\lambda_i^{2\alpha}-\lambda_{i+1}^{2\alpha})\mathbb E[\|\tilde U_{\cdot,1:i}^T U_{\cdot,i:n}\|],
\end{align*}
\end{small}
The first term involves only the spectrum, which is the same after taking expectation. The second term is upper bounded using Lemma \ref{lemma:bias1} below, derived from Weyl's theorem. We state the lemma, and leave the proof to the appendix.
\begin{lemma}\label{lemma:bias1}
Under the conditions of Theorem \ref{theorem:main},
\begin{small}
\begin{align*}
\mathbb E\sqrt{\sum_{i=1}^k (\lambda_i^{2\alpha}-\tilde\lambda_{i}^{2\alpha})^2}\le 2\sqrt{2n}\alpha\sigma\sqrt{\sum_{i=1}^k \lambda_i^{4\alpha-2}}
\end{align*}
\end{small}
\end{lemma}
For the last term, we use the Sylvester operator technique by \citet{stewart1990matrix}. Our result is presented in Lemma \ref{lemma:T}, and the proof of which is discussed in the appendix.
\begin{lemma}\label{lemma:T}
For two matrices $M$ and $\tilde M=M+Z$, denote their SVDs as $M=UDV^T$ and $\tilde M=\tilde U\tilde D \tilde V^T$. Write the left singular matrices in block form as $U=[U_0,U_1]$, $\tilde U=[\tilde U_0,\tilde U_1]$, and similarly partition $D$ into diagonal blocks $D_0$ and $D_1$. If the spectrum of $D_0$ and $D_1$ has separation
\begin{small}
\[\delta_k\overset{\Delta}{=}\min_{1\le i\le k,k< j\le n}\{\lambda_{i}-\lambda_{j}\}=\lambda_k-\lambda_{k+1}>0,\]
\end{small}
and $Z$ has iid, zero mean entries with variance $\sigma^2$, then
\begin{small}
\[
\mathbb E[\|\tilde U_1^TU_0\|]\le\sigma\sqrt{\sum_{\substack{1\le i\le k<j\le n}}(\lambda_{i}-\lambda_{j})^{-2}}\]
\end{small}
\end{lemma}
Now, combining the results of Lemma \ref{lemma:bias1} and Lemma \ref{lemma:T}, we obtain an upper bound on the expected PIP loss:
\begin{small}
\begin{align*}
&\mathbb E[\|EE^T-\hat E\hat E^T\|]\le \sqrt{\sum_{i=k+1}^d \lambda_i^{4\alpha}}+2\sqrt{2n}\alpha\sigma\sqrt{\sum_{i=1}^k \lambda_i^{4\alpha-2}}+\sqrt{2}\sum_{i=1}^k (\lambda_i^{2\alpha}-\lambda_{i+1}^{2\alpha})\sigma\sqrt{\sum_{r\le i<s}(\lambda_{r}-\lambda_{s})^{-2}}
\end{align*}
\end{small}
which completes the proof.
\end{proof}
Theorem \ref{theorem:main} shows that when dimensionality is too small, too much signal power (specifically, the spectrum of the signal $M$) is discarded, causing the first term to be too large (high bias). On the other hand, when dimensionality is too large, too much noise is included, causing the second and third terms to be too large (high variance). This explicitly answers the question of \citet{arora2016blog}.
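The bias-variance trade-off above can be made concrete by evaluating the three terms of the bound for every candidate dimensionality $k$. The sketch below uses a synthetic power-law spectrum and a noise level of our own choosing (not estimated from any corpus); the bias term shrinks with $k$ while both variance terms grow:

```python
import numpy as np

def pip_bound_terms(lam, sigma, alpha, n):
    """Evaluate the three terms of the PIP-loss upper bound for k = 1..d-1.

    lam: spectrum lambda_1 >= ... >= lambda_d, with lam[i-1] = lambda_i."""
    d = len(lam)
    bias, var1, var2 = [], [], []
    for k in range(1, d):
        # bias: signal power discarded beyond dimension k
        bias.append(np.sqrt(np.sum(lam[k:] ** (4 * alpha))))
        # first variance term: noise absorbed into the kept singular values
        var1.append(2 * np.sqrt(2 * n) * alpha * sigma
                    * np.sqrt(np.sum(lam[:k] ** (4 * alpha - 2))))
        # second variance term: subspace perturbation weighted by singular gaps
        t = 0.0
        for i in range(1, k + 1):
            gaps = sum((lam[r - 1] - lam[s - 1]) ** -2
                       for r in range(1, i + 1) for s in range(i + 1, d + 1))
            t += (lam[i - 1] ** (2 * alpha) - lam[i] ** (2 * alpha)) * np.sqrt(gaps)
        var2.append(np.sqrt(2) * sigma * t)
    return np.array(bias), np.array(var1), np.array(var2)

lam = 5.0 / np.arange(1, 31)               # synthetic power-law spectrum, d = 30
bias, var1, var2 = pip_bound_terms(lam, sigma=0.05, alpha=0.5, n=30)
bound = bias + var1 + var2
```

Plotting `bound` against $k$ reproduces the characteristic U-shape from which the optimal dimensionality is read off.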
\iffalse
Now assume the noise matrix $N$ is a random symmetric matrix, with entries that are bounded, iid and zero mean, with variance $\sigma^2$. Using elementary results from random matrix theory, specifically the unitary invariance property and the Tracy–Widom law, the above bound can be written as
\begin{align*}
\|EE^T-\hat E\hat E^T\|\le& \sqrt{\sum_{i=k}^d \lambda_i^{4\alpha}}+4\alpha \sqrt{n}\sigma\sqrt{\sum_{i=1}^k \hat\lambda_i^{4\alpha-2}}\\
&+\sqrt{2}\sum_{i=1}^k (\lambda_i^{2\alpha}-\lambda_{i+1}^{2\alpha})\frac{\sqrt{i(n-i)}\sigma}{\lambda_i-\lambda_{i+1}}\\
\approx & \sqrt{\sum_{i=k}^d \lambda_i^{4\alpha}}+4\alpha \sqrt{n}\sigma\sqrt{\sum_{i=1}^k \hat\lambda_i^{4\alpha-2}}\\
&+2\sqrt{2}\alpha\sigma\sum_{i=2}^{k+1} \lambda_i^{2\alpha-1}\sqrt{i(n-i)}
\end{align*}
The last step is to substitute the unobserved quantities in the above inequality by the estimates from the data. Specifically, we need the estimates $\hat\sigma$ and $\{\hat\lambda_i\}_{i=1}^d$. The final form is
\begin{align*}
\|EE^T-\hat E\hat E^T\|&\le\sqrt{\sum_{i=k}^d \hat\lambda_i^{4\alpha}}\\
&+2\alpha\hat\sigma\Big( 2\sqrt{n}\sqrt{\sum_{i=1}^k \hat\lambda_i^{4\alpha-2}}+\sqrt{2}\sum_{i=2}^{k+1} \hat\lambda_i^{2\alpha-1}\sqrt{i(n-i)}\Big)
\end{align*}
Such an upper bound only require two things from the data, namely the estimates of noise variance and the spectrum of the signal matrix.
\fi
\section{Two New Discoveries}\label{sec:discovery}
In this section, we introduce two more discoveries regarding the fundamentals of word embedding. The first is the relationship between the robustness of embedding and the exponent parameter $\alpha$, with a corollary that both skip-gram and GloVe are robust to over-fitting. The second is a dimensionality selection method by explicitly minimizing the PIP loss between the oracle and trained embeddings\footnote{Code can be found on GitHub: https://github.com/ziyin-dl/word-embedding-dimensionality-selection}. All our experiments use the Text8 corpus \citep{mahoney2011large}, a standard benchmark corpus used for various natural language tasks.
\iffalse
We use three datasets, the Text8 \citep{mahoney2011large}, Semantic Text Similarity (STS) \citep{marelli2014semeval} and Wikipedia dataset. The statistics are as follows:
\begin{small}
\begin{tabular}{ |c| c| c| c|}
\hline
& Text8 & Wikipedia & STS\\
\hline
\# Tokens & 17M & 2B & 86K\\
\hline
$\hat \sigma$ & 0.35 & 0.24 & 0.028\\
\hline
Matrix Size & 10k by 10k & 10k by 10k & 2k by 9k\\
\hline
Matrix Type & PMI/PPMI & PMI/PPMI & TF/TF-IDF\\
\hline
Purpose & Word Embed & Word Embed & Doc Embed\\
\hline
\end{tabular}
\end{small}
\subsection{Theoretical Upper Bound Quality}
We took the estimated spectrum and noise in the three datasets, and plotted theorem \ref{theorem:main}'s bound against ground truth, where we used symmetric factorization and randomly generated singular vectors as the ground truth signal directions. Figure \ref{fig:theory_vs_gt} shows our bound is close to the unobserved actual PIP loss, and captures the bias-variance trade-off. Randomly generated singular vectors do not affect the PIP loss. Each of the plots was repeated 10 times, and we observe the ground truth PIP losses concentrate. This observation suggests that besides the upper bound, Monte-Carlo methods can be used to estimate the actual PIP loss.
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{figs/bound_Text8_0_5.pdf}
\caption[]%
{{\small Text8}}
\label{subfig:pip_error_Text8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{figs/bound_wiki_0_5.pdf}
\caption[]%
{{\small Wikipedia}}
\label{subfig:pip_error_wiki}
\end{subfigure}
\hspace*{\fill}%
\begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{figs/bound_sts_0_5.pdf}
\caption[]%
{{\small STS}}
\label{subfig:pip_error_sts}
\end{subfigure}
\caption{PIP loss: Upper Bound vs Ground Truth}
\label{fig:theory_vs_gt}
\end{figure}
\fi
\subsection{Word Embeddings' Robustness to Over-Fitting Increases with Respect to $\alpha$} \label{subsec:robustness}
Theorem \ref{theorem:main} provides a good indicator for the sensitivity of the PIP loss with respect to over-parametrization. \citet{vu2011singular} showed that the approximations obtained by matrix perturbation theory are minimax tight. As $k$ increases, the bias term
\begin{small}
$\sqrt{\sum_{i=k+1}^d \lambda_i^{4\alpha}}$
\end{small}
decreases, which can be viewed as a \textit{zeroth-order} term because the arithmetic means of singular values are dominated by the large ones. As a result, when $k$ is already large (say, the singular values retained contain more than half of the total energy of the spectrum), increasing $k$ has only marginal effect on the PIP loss.
On the other hand, the variance terms demonstrate a \textit{first-order} effect, as they contain the differences of singular values, or singular gaps. Both variance terms grow at the rate of $\lambda_k^{2\alpha-1}$ with respect to the dimensionality $k$ (the analysis is left to the appendix). For small $\lambda_k$ (i.e., $\lambda_k<1$), the rate $\lambda_k^{2\alpha-1}$ increases as $\alpha$ decreases: when $\alpha<0.5$, this rate can be very large; when $0.5 \le \alpha \le 1$, the rate is bounded and sub-linear, in which case the PIP loss is robust to over-parametrization. In other words, as $\alpha$ becomes larger, the embedding algorithm becomes less sensitive to over-fitting caused by selecting an excessively large dimensionality $k$. To illustrate this point, we compute the PIP loss of word embeddings (approximated by Theorem \ref{theorem:main}) for the PPMI LSA algorithm and plot it for different $\alpha$'s in Figure \ref{subfig:pip_bound_theory}.
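A quick numerical illustration of this rate (the small singular value here is an arbitrary choice of ours, not taken from data): for $\lambda_k=0.01$, the factor $\lambda_k^{2\alpha-1}$ spans four orders of magnitude across $\alpha\in\{0,0.5,1\}$.

```python
lam_k = 0.01                               # a hypothetical small singular value
rates = {alpha: lam_k ** (2 * alpha - 1) for alpha in (0.0, 0.5, 1.0)}
# alpha = 0   -> lam_k**(-1) ~ 100   (very sensitive to over-parametrization)
# alpha = 0.5 -> lam_k**0     = 1    (bounded)
# alpha = 1   -> lam_k**1     = 0.01 (robust)
```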
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/bounds_pip_theory.pdf}
\caption[]%
{{\small Theorem 3}}
\label{subfig:pip_bound_theory}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/corr_wordsim353.pdf}
\caption[]%
{{\small WordSim353 Test}}
\label{subfig:ws353}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/corr_mturk771.pdf}
\caption[]%
{{\small MTurk771 Test}}
\label{subfig:mturk771}
\end{subfigure}
\hspace*{\fill}%
\caption{Sensitivity to over-parametrization: theoretical prediction versus empirical results}
\label{fig:corr}
\end{figure}
Our discussion that over-fitting hurts algorithms with smaller $\alpha$ more can be empirically verified. Figures \ref{subfig:ws353} and \ref{subfig:mturk771} display the performance (measured by the correlation between vector cosine similarity and human labels) of word embeddings of various dimensionalities from the PPMI LSA algorithm, evaluated on two word relatedness tests: WordSim353 \citep{wordsim353} and MTurk771 \citep{mturk771}. These results validate our theory: the performance drop due to over-parametrization is more significant for smaller $\alpha$.
For the popular skip-gram \citep{mikolov2013exploiting} and GloVe \citep{pennington2014glove}, $\alpha$ equals $0.5$, as they implicitly perform a symmetric factorization. Our previous discussion suggests that they are robust to over-parametrization. We empirically verify this by training skip-gram and GloVe embeddings. Figure \ref{fig:word2vec} shows the empirical performance on three word functionality tests. Even with extreme over-parametrization (up to $k=10000$), skip-gram still performs within 80\% to 90\% of its optimal performance, on both the analogy test \citep{mikolov2013efficient} and the relatedness tests (WordSim353 \citep{wordsim353} and MTurk771 \citep{mturk771}). This observation holds for GloVe as well, as shown in Figure \ref{fig:glove}.
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/question_accuracy.png}
\caption[]%
{{\small Google Analogy Test}}
\label{subfig:w2v_compositionality}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/word2vec_similarity_wordsim353.png}
\caption[]%
{{\small WordSim353 Test}}
\label{subfig:w2v_wordsim353}
\end{subfigure}
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/word2vec_similarity_mturk771.png}
\caption[]%
{{\small MTurk771 Test}}
\label{subfig:w2v_mturk771}
\end{subfigure}
\caption{skip-gram Word2Vec: over-parametrization does not significantly hurt performance}
\label{fig:word2vec}
\end{figure}
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/glove_analogy_scores.pdf}
\caption[]%
{{\small Google Analogy Test}}
\label{subfig:glove_analogy}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/glove_sim_scores_wordsim353.pdf}
\caption[]%
{{\small WordSim353 Test}}
\label{subfig:glove_wordsim353}
\end{subfigure}
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/glove_sim_scores_mturk771.pdf}
\caption[]%
{{\small MTurk771 Test}}
\label{subfig:glove_mturk771}
\end{subfigure}
\caption{GloVe: over-parametrization does not significantly hurt performance}
\label{fig:glove}
\end{figure}
\subsection{Optimal Dimensionality Selection: Minimizing the PIP Loss}
The optimal dimensionality can be selected by finding the $k^*$ that minimizes the PIP loss between the trained embedding and the oracle embedding. With a proper estimate of the spectrum $D=\{\lambda_i\}_{i=1}^d$ and the noise variance $\sigma^2$, we can use the approximation in Theorem \ref{theorem:main}. Another approach is to use the Monte-Carlo method, where we simulate the clean signal matrix $M=UDV^T$ and the noisy signal matrix $\tilde M=M+Z$.
By factorizing $M$ and $\tilde M$, we can simulate the oracle embedding $E=UD^\alpha$ and the trained embeddings $\hat E_k=\tilde U_{\cdot,1:k}\tilde D_{1:k,1:k}^\alpha$, in which case the PIP loss between them can be directly calculated. We found empirically that the Monte-Carlo procedure is more accurate, as the simulated PIP losses concentrate tightly around their means across different runs. In the following experiments, we demonstrate that dimensionalities selected using the Monte-Carlo approach achieve near-optimal performance on various intrinsic word tests. As a first step, we demonstrate how one can obtain good estimates of $\{\lambda_i\}_{i=1}^d$ and $\sigma$ in Section~\ref{subsec:estimation}.
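The Monte-Carlo procedure can be sketched as follows (a minimal numpy version with a made-up three-dimensional spectrum; we take the norm in the PIP loss to be the Frobenius norm):

```python
import numpy as np

def mc_pip_loss(lam, sigma, alpha, k, trials=5, seed=0):
    """Simulate E = U D^alpha from M = U D V^T and hat E_k from tilde M = M + Z,
    then average the PIP loss over independent noise draws."""
    rng = np.random.default_rng(seed)
    d = len(lam)
    losses = []
    for _ in range(trials):
        U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random singular vectors
        V, _ = np.linalg.qr(rng.standard_normal((d, d)))
        M = U @ np.diag(lam) @ V.T
        M_tilde = M + rng.normal(0.0, sigma, (d, d))
        U_t, s_t, _ = np.linalg.svd(M_tilde)
        E = U @ np.diag(lam ** alpha)                      # oracle embedding
        E_k = U_t[:, :k] @ np.diag(s_t[:k] ** alpha)       # trained embedding
        losses.append(np.linalg.norm(E @ E.T - E_k @ E_k.T))
    return float(np.mean(losses))

lam = np.array([3.0, 2.0, 1.0])                            # made-up spectrum
```

With no noise, keeping all dimensions gives zero PIP loss, and truncating recovers exactly the bias term of the theorem.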
\subsubsection{Spectrum and Noise Estimation from Corpus} \label{subsec:estimation}
\paragraph{Noise Estimation}\label{sec:noise_est}
We note that for most NLP tasks, the signal matrices are estimated from counts or transformations of counts, such as taking logarithms or normalizing. This holds for word embeddings that are based on co-occurrence statistics, \textit{e.g.}, LSA, skip-gram and GloVe.
We use a count-twice trick to estimate the noise: we randomly split the data into two equally large subsets, and get matrices $\tilde{M}_1=M+Z_1$, $\tilde{M}_2=M+Z_2$ in $\mathbb R^{m\times n}$, where $Z_1,Z_2$ are two independent copies of noise with variance $2\sigma^2$. Now, $\tilde{M}_1-\tilde{M}_2=Z_1-Z_2$ is a random matrix with zero mean and variance $4\sigma^2$. Our estimator is the sample standard deviation, a consistent estimator:
\begin{small}
\[\hat\sigma=\frac{1}{2\sqrt{mn}}\|\tilde{M}_1-\tilde{M}_2\|\]
\end{small}
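A minimal sketch of the count-twice estimator on synthetic data (sizes, signal and noise level are our own choices; we interpret $\|\cdot\|$ here as the Frobenius norm, so that $\hat\sigma$ is exactly the zero-mean sample standard deviation of the entries of $\tilde M_1-\tilde M_2$ divided by two):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma = 200, 300, 0.3
M = rng.standard_normal((m, n))                      # any fixed "signal" matrix

# splitting the corpus in two halves doubles the noise variance of each count
M1 = M + rng.normal(0.0, np.sqrt(2.0) * sigma, (m, n))
M2 = M + rng.normal(0.0, np.sqrt(2.0) * sigma, (m, n))

# M1 - M2 = Z1 - Z2 has zero-mean entries with variance 4*sigma^2
sigma_hat = np.linalg.norm(M1 - M2, 'fro') / (2.0 * np.sqrt(m * n))
```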
\paragraph{Spectral Estimation}\label{sec:spectrum_est}
Spectral estimation is a well-studied subject in statistical literature \citep{cai2010singular,candes2009exact,kong2017spectrum}. For our experiments, we use the well-established universal singular value thresholding (USVT) proposed by \citet{chatterjee2015matrix}.
\begin{small}
\[\hat\lambda_i=(\tilde{\lambda}_i-2\sigma\sqrt{n})_+\]
\end{small}
where $\tilde{\lambda}_i$ is the $i$-th empirical singular value and $\sigma$ is the noise standard deviation. This estimator is shown to be minimax optimal \citep{chatterjee2015matrix}.
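A sketch of the USVT estimator on synthetic rank-one-plus-noise data (all sizes and levels below are arbitrary choices of ours):

```python
import numpy as np

def usvt_spectrum(M_tilde, sigma):
    """Soft-threshold the empirical singular values at 2*sigma*sqrt(n) (USVT)."""
    n = M_tilde.shape[1]
    s = np.linalg.svd(M_tilde, compute_uv=False)
    return np.maximum(s - 2.0 * sigma * np.sqrt(n), 0.0)

rng = np.random.default_rng(0)
m = n = 100
u = rng.standard_normal(m); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
M_tilde = 50.0 * np.outer(u, v) + rng.normal(0.0, 0.5, (m, n))  # signal + noise
lam_hat = usvt_spectrum(M_tilde, sigma=0.5)
```

The threshold sits at the edge of the noise bulk, so the rank-one component survives while the noise singular values are driven to (near) zero.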
\subsubsection{Dimensionality Selection: LSA, Skip-gram Word2Vec and GloVe}
After estimating the spectrum $\{\lambda_i\}_{i=1}^d$ and the noise $\sigma$, we can use the Monte-Carlo procedure described above to estimate the PIP loss. For three popular embedding algorithms, LSA, skip-gram Word2Vec and GloVe, we find the optimal dimensionalities $k^*$ that minimize their respective PIP losses. We define the sub-optimality of a particular dimensionality $k$ as the additional PIP loss compared with $k^*$: $\|E_kE_k^T-EE^T\|-\|E_{k^*}{E_{k^*}}^T-EE^T\|$. In addition, we define the \textit{$p\%$ sub-optimal interval} as the interval of dimensionalities whose sub-optimality is no more than $p\%$ of that of a 1-D embedding. In other words, if $k$ is within the $p\%$ interval, then the PIP loss of a $k$-dimensional embedding is at most $p\%$ worse than that of the optimal embedding. We show an example in Figure \ref{fig:level_sets}.
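Both quantities can be computed directly from a PIP-loss curve; a small sketch on a toy curve (the numbers are made up for illustration):

```python
import numpy as np

def suboptimal_interval(pip, p):
    """pip[k-1] is the (estimated) PIP loss of a k-dimensional embedding.

    Returns k* and the p% sub-optimal interval: the dimensionalities whose
    sub-optimality is at most p% of that of a 1-D embedding."""
    k_star = int(np.argmin(pip)) + 1
    budget = (p / 100.0) * (pip[0] - pip[k_star - 1])
    ok = np.flatnonzero(pip - pip[k_star - 1] <= budget) + 1
    return k_star, (int(ok.min()), int(ok.max()))

pip = np.array([10.0, 4.0, 2.0, 3.0, 6.0])   # toy PIP-loss curve for k = 1..5
```

On this toy curve, $k^*=3$, and widening $p$ widens the reported interval around it.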
\paragraph{LSA with PPMI Matrix} For the LSA algorithm, the optimal dimensionalities and sub-optimal intervals around them ($5\%$, $10\%$, $20\%$ and $50\%$) for different $\alpha$ values are shown in Table \ref{table:dimensions}. Figure \ref{fig:level_sets} shows how PIP losses vary across different dimensionalities.
From the shapes of the curves, we can see that models with larger $\alpha$ suffer less from over-parametrization, as predicted in Section \ref{subsec:robustness}.
We further cross-validated our theoretical results with intrinsic functionality tests on word relatedness. The empirically optimal dimensionalities that achieve the highest correlations with human labels on the two word relatedness tests (WordSim353 and MTurk771) lie close to the theoretically selected $k^*$'s. All of them fall in the 5\% interval, except when $\alpha=0$, in which case they fall in the 20\% sub-optimal interval.
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/pip_gt_0_0.pdf}
\caption[]%
{{\small $\alpha=0$}}
\label{subfig:pip_gt_0}
\end{subfigure}
\iffalse
\begin{subfigure}[b]{0.09\textwidth} \includegraphics[width=\textwidth]{figs/pip_gt_0_25.pdf}
\caption[]%
{{\small $0.25$}}
\label{subfig:pip_gt_025}
\end{subfigure}
\fi
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/pip_gt_0_5.pdf}
\caption[]%
{{\small $\alpha=0.5$}}
\label{subfig:pip_gt_05}
\end{subfigure}
\iffalse
\begin{subfigure}[b]{0.14\textwidth} \includegraphics[width=\textwidth]{figs/pip_gt_0_75.pdf}
\caption[]%
{{\small $0.75$}}
\label{subfig:pip_gt_075}
\end{subfigure}
\fi
\begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figs/pip_gt_1_0.pdf}
\caption[]%
{{\small $\alpha=1$}}
\label{subfig:pip_gt_1}
\end{subfigure}
\hspace*{\fill}%
\caption{PIP loss and its bias-variance trade-off allow for explicit dimensionality selection for LSA}
\label{fig:level_sets}
\end{figure}
\captionof{table}{Optimal dimensionalities for word relatedness tests are close to PIP loss minimizing ones} \label{table:dimensions}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ |c| c |c|c|c| c|c|c|}
\hline
$\alpha$ & $\text{PIP} \arg\min$ & 5\% interval & 10\% interval & 20\% interval & 50\% interval & WS353 opt. & MT771 opt.\\
\hline
$0$ & 214 & [164,289] & [143,322] & [115,347] & [62,494] & 127 & 116 \\
\hline
$0.25$ & 138 & [95,190] & [78,214] & [57,254] & [23,352] & 146 & 116 \\
\hline
$0.5$ & 108 & [61,177] & [45,214] & [29,280] & [9,486] & 146 & 116 \\
\hline
$0.75$ & 90 & [39,206] & [27,290] & [16,485] & [5,1544] & 155 & 176 \\
\hline
$1$ & 82 & [23,426] & [16,918] & [9,2204] & [3,2204] & 365 & 282\\
\hline
\end{tabular}
}
\paragraph{Word2Vec with Skip-gram}
For skip-gram, we use the PMI matrix as its signal matrix \citep{levy2014neural}. On the theoretical side, the PIP loss-minimizing dimensionality $k^*$ and the sub-optimal intervals ($5\%$, $10\%$, $20\%$ and $50\%$) are reported in Table \ref{tab:word2vec}. On the empirical side, the optimal dimensionalities for WordSim353, MTurk771 and Google analogy tests are 56, 102 and 220 respectively for skip-gram. They agree with the theoretical selections: one is within the 5\% interval and the other two are within the 10\% interval.
\captionof{table}{PIP loss minimizing dimensionalities and intervals for Skip-gram on Text8 corpus} \label{tab:word2vec}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ | c|c| c| c| c|c|c|c|c|}
\hline
Surrogate Matrix & $ \arg\min$ & $+5\%$ interval & $+10\%$ interval & $+20\%$ interval & $+50\%$ interval & WS353 & MT771 & Analogy\\
\hline
Skip-gram (PMI) & 129 & [67,218] & [48,269] & [29,365] & [9,679] &56&102&220\\
\hline
\end{tabular}
}
\paragraph{GloVe}
For GloVe, we use the log-count matrix as its signal matrix \citep{pennington2014glove}. On the theoretical side, the PIP loss-minimizing dimensionality $k^*$ and sub-optimal intervals ($5\%$, $10\%$, $20\%$ and $50\%$) are reported in Table \ref{tab:glove}. On the empirical side, the optimal dimensionalities for the WordSim353, MTurk771 and Google analogy tests are 220, 860, and 560, respectively. Again, they agree with the theoretical selections: two are within the 5\% interval and the other is within the 10\% interval.
\captionof{table}{PIP loss minimizing dimensionalities and intervals for GloVe on Text8 corpus} \label{tab:glove}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ | c|c| c| c| c|c|c|c|c|}
\hline
Surrogate Matrix & $ \arg\min$ & $+5\%$ interval & $+10\%$ interval & $+20\%$ interval & $+50\%$ interval & WS353 & MT771 & Analogy\\
\hline
GloVe (log-count)& 719 & [290,1286] & [160,1663] & [55,2426] & [5,2426] &220&860&560\\
\hline
\end{tabular}
}
\vspace{7pt}
The above three experiments show that our method is a powerful tool in practice: the dimensionalities selected by empirical grid search agree with the PIP loss-minimizing criterion, which can be evaluated simply from the spectrum and the noise standard deviation.
\iffalse
Finally, we demonstrate that the PIP loss minimizing method is much faster than the empirical method. Experiments are done on a server with dual Xeon E5-2630Lv2 CPUs and 64GB of RAM\footnote{evaluation details are listed in the appendix due to space limitation}.
\captionof{table}{Time needed to run dimensionality selection on the Text8 corpus} \label{tab:runtime}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ | c|c| c| c| c|}
\hline
Dimensionality Selection Time & PPMI LSA & skip-gram Word2Vec (1-400) dims & GloVe (1-400) dims\\
\hline
Empirical grid search & 1.7 hours & 2.5 days & 1.8 days \\
\hline
Minimizing PIP loss & 1.2 hours & 42 minutes & 42 minutes \\
\hline
\end{tabular}
}
\fi
\iffalse
\subsubsection{Document Embedding}
We tested TF-IDF document embedding on the STS dataset, and compared the result with human labeled document similarities. For symmetric decomposition ($\alpha=0.5$), the theoretical PIP minimizing dimension is 144, and between 100 to 206 the PIP loss is within 5\%. The empirical argmax on the STS dataset is 164 with human label correlation 0.653, within the 5\% interval predicted by the theory.
\begin{figure}[htb]
\centering
\hspace*{\fill}%
\begin{subfigure}[b]{0.19\textwidth}
\includegraphics[width=\textwidth]{figs/pip_gt_0_5_sts.pdf}
\caption[]%
{{\small Dim vs PIP Loss}}
\label{subfig:pip_gt_0_5_sts}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{figs/gt_sts.pdf}
\caption[]%
{{\small Dim vs Human Label}}
\label{subfig:gt_sts}
\end{subfigure}
\label{fig:sts}
\caption{TF-IDF Document Similarity on STS}
\end{figure}
\fi
\iffalse
\subsubsection{Large Dateset: Wikipedia Corpus}
In this experiment, we obtain word embeddings from the Wikipedia Corpus, retrieved on Jun 23, 2017. We build the PPMI matrix using most frequent 10000 words, estimate the noise and spectrum as described in section \ref{sec:noise_est} and \ref{sec:spectrum_est}.
Our theory predicts 391 as the PIP loss-minimizing dimension, with symmetric embeddings ($\alpha=0.5$).
\captionof{table}{Optimal Dimensions and Regions} \label{table:dimensions_wiki}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ |c| c| c| c|c|c|c|}
\hline
$\text{PIP} \arg\min$ &\text{PIP} $+5\%$ &\text{PIP} $+10\%$ &\text{PIP} $+20\%$ &\text{PIP} $+50\%$ & WS353 & MT771\\
\hline
391 & [211,658] & [155,839] & [94,1257] & [24,2110] & 355 & 557\\
\hline
\end{tabular}
}
\\
Table \ref{table:dimensions_wiki} explains why word embeddings are so successful even people randomly choose dimensions: from dimension 211 to 658, the PIP loss stays within 5\% of the optimal. Such a flat plateau near optimality is the key to the robustness. This stability result helps rule out a possibly disastrous scenario of the empirical methods people use to find the optimal dimensionality. People train a handful of dimensions -- say 100 to 1000 at an increment of 100 -- and pick the empirically optimal model. Our result show that the curvature of PIP loss is small with respect to large dimensionality. As a result, although evaluating a few models will not give the best result, it will be close to optimal. On the other hand, by using methods provided in this paper, we can directly find the theoretically optimal dimensionality by minimizing the PIP loss, which is accurate and fast. Although embeddings are robust to over-parameterization, practical performance is another important factor. Too large a dimension hurts training and inference time, and can severely affect downstream applications (RNN/LSTM for machine translation or language modeling) in terms of responsiveness. As a result, carefully picking the optimal dimensionality is still important.
\fi
\section{Conclusion}
In this paper, we present a theoretical framework for understanding vector embedding dimensionality. We propose the PIP loss, a metric of dissimilarity between word embeddings. We focus on embedding algorithms that can be formulated as explicit or implicit matrix factorizations including the widely-used LSA, skip-gram and GloVe, and reveal a bias-variance trade-off in dimensionality selection using matrix perturbation theory. With this theory, we discover the robustness of word embeddings trained from these algorithms and its relationship to the exponent parameter $\alpha$. In addition, we propose a dimensionality selection procedure, which consists of estimating and minimizing the PIP loss. This procedure is theoretically justified, accurate and fast. All of our discoveries are concretely validated on real datasets.
\paragraph{Acknowledgements} The authors would like to thank John Duchi, Will Hamilton, Dan Jurafsky, Percy Liang, Peng Qi and Greg Valiant for the helpful discussions. We thank Balaji Prabhakar, Pin Pin Tea-mangkornpan and Feiran Wang for proofreading an earlier version and for their suggestions. Finally, we thank the anonymous reviewers for their valuable feedback and suggestions.
\iffalse
We compared our prediction with three human labeled similarity datasets \citep{wordsim353,rg65,mturk771}.
\begin{tabular}{ |c| c| c| c|}
\hline
WS353 & MT771 &RG65\\
\hline
0.640@355 & 0.603@556 & 0.809@562\\
\hline
\end{tabular}
\begin{tabular}{ |c| c| c| c|}
\hline
WS353 & MT771 &RG65\\
\hline
0.606@761 & 0.567@1145 & 0.768@562\\
\hline
\end{tabular}
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/PPMI_scores_0_5.pdf}
\caption[]%
{{\small PPMI, $\alpha=0.5$}}
\label{fig:ws353}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/PPMI_scores_1_0.pdf}
\caption[]%
{{\small PPMI, $\alpha=1.0$}}
\label{fig:rg65}
\end{subfigure}
\caption[]
{\small Comparison of Similarity Given by Word Embeddings with Human Labels}
\label{fig:simtest}
\end{figure}
\fi
\section{Introduction}
Neural networks with logistic sigmoid activations may look very similar to Bayesian networks of the same structure with logistic conditional distributions, aka {\em sigmoid belief networks}~\cite{Neal:1992}. However, hidden units in NNs are deterministic and take on real values, while hidden units in Bayesian networks are binary random variables with an associated distribution. Given enough capacity and training data, both models can estimate the posterior distribution of their output arbitrarily well. Besides somewhat different modelling properties, there is a principled difference: with stochastic models it is possible to pose, at least theoretically, a number of inference and learning problems with missing data by marginalizing over latent and unobserved variables. Unfortunately, even forward inference in Bayesian networks requires methods such as sampling or optimization of a variational approximation. Likewise, in more tightly coupled graphical models such as the {\em deep Boltzmann machine} (DBM)~\cite{salakhutdinov09a} or deep belief networks (DBN)~\cite{Hinton:NC2006,Lee:ICML2009}, practically all computations needed, e.g., for computing marginal and posterior probabilities, are intractable in the sense that approximations typically involve sampling or optimization.
In this paper we propose the stacked (deep) conditional independent model (DCIM). There are two views of how the model can be defined: one, as a Bayesian network with logistic conditional distributions; the other, by assuming that the conditional probabilities of a general Bayesian network factor over the parent nodes up to a normalising factor. With binary units this necessarily implies that the conditional probabilities are logistic. It is noteworthy that we find the same form of conditional probabilities in most neural probabilistic models: the restricted Boltzmann machine, DBM~\cite{salakhutdinov09a}, DBN~\cite{Hinton:NC2006}, etc. When the units are arranged in layers, as in a typical DNN, the layered Bayesian network can be viewed as a Markov chain with a state space of all variables in a layer. In this view, all necessary assumptions can be summarized in one property, termed {\em strong conditional independence}: the forward conditional transition probabilities between the layers factorize over the dimensions of both the input and output state spaces of the transition up to a normalising factor. Making only this assumption, we show that a simple approximate Bayesian inference in DCIM recovers the main constructive elements of DNNs: sigmoid and softmax activations, with real-valued variables corresponding to expectations of binary random variables. With this interpretation we can view a DNN as a way of performing approximate inference very efficiently. To our knowledge, this relationship has not been established before. Under this approximation, conditional likelihood learning of a DCIM is equivalent to that of a DNN and can be performed by back-propagation.
Our second objective is to propose an alternative, generative learning approach for DNNs. In a number of recent works~\cite{KingmaW13,Rezende:ICML2014,Salakhutdinov:ARSA2015,Mnih-2014}, and in the prior work~\cite{HintonDayanFreyEtAl95,Hinton:NC2006}, a pair of a deterministic recognition (encoder) network and a stochastic generator (decoder) network is trained. Let us denote by $\x^0$ the input of the recognition network, to be specific, an image, and by $\x^d$ (the random variables at layer $d$) the output of the recognition network, the latent state. Although the two networks are often taken to have a symmetric structure~\cite{HintonDayanFreyEtAl95,KingmaW13}, their parameters are decoupled. The stochastic generator network can typically only generate samples, but cannot directly evaluate its posterior distribution or the gradient thereof, requiring variational or sampling-based approximations. The methods proposed in~\cite{KingmaW13,Rezende:ICML2014} assume that samples generated from the same latent state $\x^d$ must fall close together in the image space. This prohibits using categorical latent spaces such as the digit class in MNIST, because digits of the same class may naturally look different. Instead, a model's own continuous latent space is used, such that fixing a point in it defines both the class and the shape of the digit. Works~\cite{KingmaW13,Rezende:ICML2014} are thus restricted to unsupervised learning.
Given full training data pairs $(\x^0, \x^d)$, the recognition network could learn a distribution $p(\x^d \mid \x^0)$ and the generator network could in principle learn a distribution $q(\x^0 \mid \x^d)$. With our link between DNNs and DCIMs, we ask when the two DCIMs, modelling the two conditional distributions $p(\x^d \mid \x^0)$ and $q(\x^0 \mid \x^d)$, are consistent, i.e., correspond to some implicitly modelled joint distribution $p^*(\x^{0,\dots, d})$ of the data and all hidden layers. This is the case when $p(\x^{1,\dots d} \mid \x^0)/ q(\x^{0,\dots d-1}\mid \x^d) = A(\x^0)B(\x^d)$ for some functions $A,B$. While this cannot be strictly satisfied under our strong conditional independence assumptions, we observe that most of the terms in the ratio cancel when we set the weights of the recognition network $p$ to be the transpose of those of the network $q$. Both models can therefore be efficiently represented by one and the same DNN, share its parameters and can be learned simultaneously by using an estimator similar to the pseudo-likelihood. We further use the link between DNNs and DCIMs to approximately compute the posterior distribution in the generator network, $q(\x^d \mid \x^k)$, given a sample of an inner layer $\x^k$. The approximation is reasonable when this posterior is expected to have a single mode, such as when reconstructing an image from lower-level features. We thus can fully or partially avoid sampling in the generator model. We demonstrate in a tentative experiment that such a coupled pair can be learned simultaneously, modelling the full distribution of the data, and has enough capacity to perform well in both recognition and generation.
\paragraph{Outline}
In~\cref{sec:approx} we consider a strongly conditional independent model with two layers and derive the NN from it.
This model will serve as a building block of DCIM formally introduced in~\cref{sec:DCIM}.
In~\cref{subsec:gen_learn} we consider coupled pairs of DCIMs and propose a novel approach for generative learning.
In~\cref{sec:related} we discuss more connections with related work. In~\cref{sec:experiments} we propose our proof-of-concept experiments of generative learning using only DNN inference or sampling only a single layer.
\section{Deep conditional independent models}
\subsection{An approximation for conditional independent models}\label{sec:approx}
Let $\mvec{x}=(x_1,\ldots,x_n)$ and $\mvec{y}=(y_1,\ldots,y_m)$ be two collections of discrete random variables with a joint distribution $p(\mvec{x}, \mvec{y}) = p(\mvec{y} \mid \mvec{x}) \: p(\mvec{x})$.
The conditional distribution $p(\mvec{y} \mid \mvec{x})$ is {\em strongly conditional independent} if it factors as $p(\mvec{y} \mid \mvec{x}) = \frac{1}{Z(\mvec{x})} \prod_{i,j}g_{ij}(x_i, y_j)$.
Without loss of generality, it can be written as
\begin{equation} \label{eq:cim}
p(\mvec{y} \mid \mvec{x}) = \frac{1}{Z(\mvec{x})}
\exp \sum_{i,j} u_{ij}(x_i, y_j) =
\prod_{j=1}^{m} p(y_j \mid \mvec{x}) =
\prod_{j=1}^{m} \frac{1}{Z_j(\mvec{x})}
\exp \sum_{i=1}^{n} u_{ij}(x_i, y_j) .
\end{equation}
The functions $u_{ij}$ are arbitrary and $Z_j(\x)$ denote the corresponding normalization constants.
While sampling from $p(\mvec{y} \mid \mvec{x})$ is easy, computing $p(\mvec{y})$ by marginalizing over $\mvec{x}$ is not tractable even if $p(\mvec{x})$ factorizes over the components of $\mvec{x}$.
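The exact factorization in \eqref{eq:cim} is easy to verify numerically for a small binary model (a sketch with randomly drawn potentials $u_{ij}$; all names and sizes are our own): normalizing the joint exponent over all $\mvec{y}$ gives exactly the product of per-coordinate conditionals.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 3, 2
u = rng.standard_normal((n, m, 2, 2))        # u[i, j, x_i, y_j], binary states
x = np.array([1, 0, 1])                      # condition on an arbitrary x

def score(y):
    return np.exp(sum(u[i, j, x[i], y[j]] for i in range(n) for j in range(m)))

# left-hand side: normalize the joint exponent over the whole space of y
ys = list(product([0, 1], repeat=m))
Z = sum(score(y) for y in ys)
p_joint = {y: score(y) / Z for y in ys}

# right-hand side: product of per-coordinate conditionals p(y_j | x)
def p_coord(j, yj):
    e = [np.exp(sum(u[i, j, x[i], v] for i in range(n))) for v in (0, 1)]
    return e[yj] / (e[0] + e[1])

p_prod = {y: np.prod([p_coord(j, y[j]) for j in range(m)]) for y in ys}
```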
We consider the following approximation for the marginal distribution $p(\y)$:
\begin{equation} \label{eq:pyapprox}
p(\mvec{y}) \approx \frac{1}{Z} \exp
\sum_{ij} \sum_{x_i} p(x_i) u_{ij}(x_i, y_j) .
\end{equation}
One of the possible ways to derive it is to consider Taylor expansions for moments of functions of random variables. Using a linear embedding $\mvec{\Phi}$ such that $\sum_{i,j}u_{ij}(x_i,y_j) = \scalp{\mvec{\Phi}(\mvec{x},\mvec{y})}{\mvec{u}}$, where $\mvec{u}$ is a vector with components $u_{ij}(k,l)$, we can write $p(\mvec{y} \mid \mvec{x})$ in the form
\begin{equation}
p(\mvec{y} \mid \mvec{x}) = \frac{\exp \scalp{\mvec{\Phi}(\mvec{x},\mvec{y})}{\mvec{u}}}
{\sum_{\mvec{y}'} \exp \scalp{\mvec{\Phi}(\mvec{x},\mvec{y}')}{\mvec{u}}} .
\end{equation}
The marginal distribution $p(\y)$ is obtained from
\begin{equation}
\E_{\mvec{x}\sim p(\mvec{x})} p(\mvec{y} \mid \mvec{x}) =
\E_{\mvec{x}\sim p(\mvec{x})}
\frac{\exp \scalp{\mvec{\Phi}(\mvec{x},\mvec{y})}{\mvec{u}}}
{\sum_{\mvec{y}'} \exp \scalp{\mvec{\Phi}(\mvec{x},\mvec{y}')}{\mvec{u}}} .
\end{equation}
Taking the first order Taylor expansion w.r.t.~the random variable $\mvec{\Phi}(\mvec{x},\mvec{y})$ for fixed $\y$ around
\begin{equation}
\overline{\Phi}(\mvec{y}) = \E_{\mvec{x}\sim p(\mvec{x})} \mvec{\Phi}(\mvec{x},\mvec{y})
\end{equation}
and computing its expectation w.r.t.~$\mvec{x}$ gives \eqref{eq:pyapprox} as the constant term, while the first order term vanishes.
\begin{Example} \label{ex:sigmoid}
Assume that the random variables in the two collections $\x=(x_1,\ldots,x_n)$ and $\y=(y_1,\ldots,y_m)$ are $\{0,1\}$-valued. Any function $u_{ij}(x_i, y_j)$ of binary variables can be written as $u_{ij}(x_i, y_j) = y_j W_{ji}x_i + b_j y_j + c_i x_i + d$. The terms $c_i x_i + d$ cancel in the normalization of $p(\mvec{y} \mid \mvec{x})$ and can thus be omitted. The approximation \eqref{eq:pyapprox} then reads $\hat{p}(\mvec{y}) = \prod_{j} \hat{p}(y_j)$ with
\begin{equation}\label{approx-binary1}
\hat{p}(y_j) = \frac{ e^{(\scalp{\mvec{w}^j}{\bar{\mvec{x}}} + b_j)y_j}}{
e^{\scalp{\mvec{w}^j}{\bar{\mvec{x}}} + b_j} + 1},
\end{equation}
where $\mvec{w}^j$ denotes the $j$-th row of the matrix $W$ and $\bar{\mvec{x}} = \E_{\mvec{x}\sim p(\mvec{x})} \x$. This in turn leads to the following approximation for the expectation of $\mvec{y}$:
\begin{equation}
\bar{y}_j = \hat p(y_j{=}1)
= \frac{1}{1 + e^{-\<\mvec{w}^j,\bar\x\> - b_j}
} = \mathcal{S}(\<\mvec{w}^j,\bar \x\> + b_j),
\end{equation}
where $\mathcal{S}$ is the logistic (sigmoid) function.
If $\pm 1$ encoding is used instead to represent states of $\x$ and $\y$, then the corresponding expressions read
\begin{equation}\label{approx-binary2}
\hat p(y_j) = \frac{\exp\bigl[
y_j \bigl( \scalp{\mvec{w}^j}{\bar{\mvec{x}}} + b_j \bigr)\bigr]}
{2 \cosh \bigl[\scalp{\mvec{w}^j}{\bar{\mvec{x}}} + b_j\bigr]} ,
\end{equation}
and
\begin{equation}
\bar{y}_j = \tanh [ \scalp{\mvec{w}^j}{\bar{\mvec{x}}} + b_j ] .
\end{equation}
\end{Example}
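To make the quality of the approximation concrete, here is a small numerical check; the weights and input marginals are hypothetical toy values, not taken from any experiment in the paper. For a single binary output unit, the exact marginal $p(y_j{=}1)$ obtained by enumerating all states of $\mvec{x}$ is compared with the sigmoid-of-the-mean approximation \eqref{approx-binary1}.

```python
import itertools
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# toy strongly conditional independent model: x in {0,1}^3, one output unit y_j
w = [0.2, -0.1, 0.3]      # row w^j of W (small weights -> near-linear regime)
b = 0.05
p_x = [0.6, 0.3, 0.8]     # independent Bernoulli marginals p(x_i = 1)

# exact marginal: p(y_j = 1) = sum_x p(x) * sigmoid(<w, x> + b)
exact = 0.0
for x in itertools.product([0, 1], repeat=3):
    px = math.prod(p if xi else 1 - p for p, xi in zip(p_x, x))
    exact += px * sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# approximation: push the expectation inside the sigmoid, x_bar = E[x]
x_bar = p_x
approx = sigmoid(sum(wi * xb for wi, xb in zip(w, x_bar)) + b)

print(exact, approx)
```

With small weights the sigmoid is nearly linear over the relevant range, so the first-order Taylor argument above predicts a small gap between the two values.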
\begin{Remark}
Let $p(\mvec{x}, \mvec{y})$ be a joint model such that $p(\mvec{y} \mid \mvec{x})$ is strongly conditional independent as above. Then $p(\mvec{x} \mid \mvec{y})$ is not in general conditional independent.
If both conditional distributions are strongly conditional independent, then the joint model is a restricted Boltzmann machine or, what is the same, an MRF on a bipartite graph.
\end{Remark}
\subsection{Deep conditional independent models}\label{sec:DCIM}
Let $\mathcal{X} = \bigl(\mvec{x}^0,\mvec{x}^1,\ldots,\mvec{x}^d\bigr)$ be a sequence of collections of binary valued random variables
\begin{equation} \label{eq:seq_coll}
\mvec{x}^k = \bigl\{ x_i^k = \pm 1 \bigm | i = 1,\ldots, n_k \bigr\} .
\end{equation}
The next observation highlights the difficulties that we run into when considering the deep Boltzmann machine model.
\begin{Observation} \label{obs:non-cim}
Let us assume that the joint distribution $p(\mathcal{X})$ is a deep Boltzmann machine, i.e.~an MRF on a layered, $d$-partite graph. Then neither the forward conditionals $p(\mvec{x}^k \mid \mvec{x}^{k-1})$ nor the backward conditionals $p(\mvec{x}^{k-1} \mid \mvec{x}^{k})$ are (conditionally) independent. This can be seen as follows. The joint distribution of such a model can be written w.l.o.g.~as
\begin{equation}
p(\mathcal{X}) = \frac{1}{Z} \exp \Bigl[
\sum_{k=0}^{d} \scalp{\mvec{b}^k}{\mvec{x}^k} +
\sum_{k=1}^{d} \scalp{\mvec{x}^k}{W^k \mvec{x}^{k-1}} \Bigr]
\end{equation}
and it follows that
\begin{equation}
p(\mvec{x}^k \mid \mvec{x}^{k-1}) =
\frac{F(\mvec{x}^k)}{Z(\mvec{x}^{k-1})} \exp \Bigl[
\scalp{\mvec{b}^k}{\mvec{x}^k} +
\scalp{\mvec{x}^k}{W^k \mvec{x}^{k-1}} \Bigr]
\end{equation}
where $F(\mvec{x}^k)$ results from marginalisation over $\mvec{x}^{k+1},\ldots,\mvec{x}^d$ and can be arbitrarily complex.
\end{Observation}
We will therefore consider a similar, but different class, for which the forward conditionals are (conditionally) independent.
\begin{Definition} \label{def:dcim}
A joint distribution $p(\mathcal{X})$ for a sequence of collections of binary valued random variables \eqref{eq:seq_coll} is a deep conditional independent model (DCIM) if it has the form
\begin{equation} \label{eq:DCIM}
p(\mvec{x}^0,\ldots,\mvec{x}^d) = p(\mvec{x}^0)
\prod_{k=1}^{d} p(\mvec{x}^k \mid \mvec{x}^{k-1})
\end{equation}
and its forward conditionals $p(\mvec{x}^k \mid \mvec{x}^{k-1})$ are strongly conditional independent, i.e.~have the form \eqref{eq:cim} for all $k=1,2,\ldots,d$.
\end{Definition}
A DCIM can thus be seen as an inhomogeneous Markov chain model with high dimensional state spaces $\mvec{x}^k \in \mathcal{X}^k$, however with a yet unspecified distribution for $\mvec{x}^0$. Given such a model and a realisation of $\mvec{x}^0$, the posterior conditional distribution $p(\mvec{x}^d \mid \mvec{x}^0)$ can be computed by applying the approximation \eqref{eq:pyapprox} recursively (see Example~\ref{ex:sigmoid}). This leads to the recursion
\begin{equation}
\bar{x}_i^k = \tanh \bigl(
\scalp{\mvec{w}^k_i}{\bar{\mvec{x}}^{k-1}} + b^k_i \bigr)
\text{,\hspace{.5em}$k=1,2,\ldots,d-1$},
\end{equation}
where $\bar{\mvec{x}}^k$ denotes the expectation of $\mvec{x}^k$ w.r.t.~$p(\mvec{x}^k \mid \mvec{x}^0)$. Finally, we obtain
\begin{equation}
p(x^d_i \mid \mvec{x}^0) =
\frac{\exp(x^d_i a^d_i)}{2 \cosh(a^d_i)}
\text{\hspace{.5em}with\hspace{.5em}}
a^d_i = \scalp{\mvec{w}^d_i}{\bar{\mvec{x}}^{d-1}} + b^d_i .
\end{equation}
It is obvious that the (approximate) computation of $p(\mvec{x}^d \mid \mvec{x}^0)$ for a DCIM is exactly the forward computation for a corresponding DNN.
Hence, discriminative learning of a DCIM by maximizing the conditional likelihood $p(\x^d \mid \x^0)$ of training samples $(\x^d, \x^0)$ amounts to learning the corresponding DNN with the same loss.
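The correspondence can be sketched in a few lines of code; the two-layer network and all weights below are hypothetical toy values. The approximate computation of $p(\mvec{x}^d \mid \mvec{x}^0)$ is literally a tanh forward pass, followed by converting the final activations into probabilities via $p(x^d_i{=}1 \mid \x^0) = (1 + \tanh a^d_i)/2$.

```python
import math

def forward(x0, layers):
    """Approximate inference in a +/-1-valued DCIM: the recursion
    xbar^k = tanh(W^k xbar^{k-1} + b^k) is exactly a DNN forward pass."""
    xbar = [float(v) for v in x0]   # realisation of x^0, so xbar^0 = x^0
    for W, b in layers:
        xbar = [math.tanh(sum(wij * xj for wij, xj in zip(row, xbar)) + bi)
                for row, bi in zip(W, b)]
    # xbar now holds tanh(a^d_i); hence p(x^d_i = 1 | x^0) = (1 + xbar_i) / 2
    return [(1.0 + v) / 2.0 for v in xbar]

# hypothetical 2-layer toy network: 2 -> 2 -> 1 units
layers = [([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]),
          ([[0.3, 0.3]], [-0.1])]
probs = forward([1, -1], layers)
print(probs)
```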
\par
With 0-1 state variables the model is equivalent to a Bayesian network (a directed graphical model), in which the set of parents of a variable $x^k_j$ is the preceding layer $\x^{k-1}$ (or its subset, depending on the structure of the weights) and the conditional distributions are of the form~\eqref{eq:cim}, \ie, logistic. Such networks were proposed by~\citet{Neal:1992} under the name sigmoid belief networks as a simpler alternative to Boltzmann machines. When we derive the model from the strong conditional independence assumption, it establishes a link to deep Boltzmann machines~\cite{salakhutdinov09a}, which are graphical models factorizing over a $d$-partite graph.
\subsection{Generative learning of deep networks} \label{subsec:gen_learn}
To learn a DCIM generatively from given i.i.d.~training data $\mathcal{T}$, it is necessary to specify the joint model $p(\mvec{x}^0,\ldots,\mvec{x}^d)$ and to choose an appropriate estimator. In theory, the DCIM can be completed to a joint model by specifying a prior distribution $p(\mvec{x}^0)$ over images, and then the maximum likelihood estimator can be applied. It is, however, not realistic to model this complex multi-modal distribution in closed form. We propose to circumvent this problem by applying the following bidirectional conditional likelihood estimator:
\begin{equation} \label{eq:pplik}
\frac{1}{\abs{\mathcal{T}}}
\sum_{(\mvec{x}^0,\mvec{x}^d)\in \mathcal{T}} \bigl[
\log p_{\theta}(\mvec{x}^d \mid \mvec{x}^0) +
\log p_{\theta}(\mvec{x}^0 \mid \mvec{x}^d) \bigr]
\rightarrow \max_{\theta},
\end{equation}
because it avoids modelling $p(\mvec{x}^0)$ and $p(\mvec{x}^d)$ explicitly.
From the definition of a DCIM, and in particular from the fact that a DCIM is also a Markov chain model, it follows that its reverse conditional distribution factorises similarly to \eqref{eq:DCIM}, i.e.
\begin{equation}
p(\mvec{x}^0,\ldots,\mvec{x}^{d-1} \mid \mvec{x}^d) =
\prod_{k=0}^{d-1} p(\mvec{x}^k \mid \mvec{x}^{k+1}) .
\end{equation}
This holds for any $p(\mvec{x}^0)$ completing the DCIM to a joint distribution. Unfortunately, however, the reverse conditionals $p(\mvec{x}^k \mid \mvec{x}^{k+1})$ of a DCIM are {\em not} (conditionally) independent. This follows from an argument similar to the one used in Observation~\ref{obs:non-cim}. In order to resolve the problem, we propose to consider pairs of tightly connected DCIMs: a forward model and a reverse (i.e.~backward) model
\begin{equation} \label{eq:twins}
p(\mvec{x}^d,\ldots,\mvec{x}^1 \mid \mvec{x}^0 ) =
\prod_{k=1}^{d} p(\mvec{x}^k \mid \mvec{x}^{k-1})
\text{\hspace{1em}and\hspace{1em}}
q(\mvec{x}^0,\ldots,\mvec{x}^{d-1} \mid \mvec{x}^d) =
\prod_{k=0}^{d-1} q(\mvec{x}^k \mid \mvec{x}^{k+1})
\end{equation}
such that the forward conditionals of $p$ and the backward conditionals of $q$ are strongly (conditional) independent. Such a pair would give rise to a single joint distribution if
\begin{equation} \label{eq:wish}
\frac{p(\mvec{x}^d,\ldots,\mvec{x}^1 \mid \mvec{x}^0 )}
{q(\mvec{x}^0,\ldots,\mvec{x}^{d-1} \mid \mvec{x}^d)} =
\frac{A(\mvec{x}^d)}{B(\mvec{x}^0)}
\end{equation}
holds for any realisation of $\mathcal{X}$ for some functions $A,B$. We already know that this is impossible to achieve while retaining conditional independence of both the forward and backward conditionals. Therefore, we propose to choose the parameters of the two models such that the equation is fulfilled as closely as possible. Substituting \eqref{eq:cim} for the binary case in the lhs of \eqref{eq:wish} gives
\begin{equation}
\frac{\prod_{k=0}^{d-1}\tilde{Z}_k(\mvec{x}^{k+1})}{\prod_{k=1}^{d} Z_k(\mvec{x}^{k-1})}
\frac{\exp \Bigl[
\sum_{k=1}^{d} \scalp{\mvec{x}^k}{W^k \mvec{x}^{k-1} + \mvec{b}^k} \Bigr]}
{\exp \Bigl[
\sum_{k=0}^{d-1} \scalp{\mvec{x}^k}{V^k \mvec{x}^{k+1} + \mvec{a}^k} \Bigr]}
\overset{?}{=}
\frac{A(\mvec{x}^d)}{B(\mvec{x}^0)} ,
\end{equation}
where $Z_k$ and $\tilde{Z}_k$ denote the partition functions of $p(\mvec{x}^k \mid \mvec{x}^{k-1})$ and $q(\mvec{x}^k \mid \mvec{x}^{k+1})$ respectively. It is readily seen that all terms in the exponentials involving the intermediate layers cancel out if $(V^{k-1})^T = W^k$ holds for the weights and $\mvec{b}^k = \mvec{a}^{k}$ holds for the biases for $k=1,\ldots,d-1$; the remaining bias terms on $\mvec{x}^0$ and $\mvec{x}^d$ are absorbed into $B$ and $A$. The remaining, non-cancelling terms in the lhs of \eqref{eq:wish} are then the partition functions of the conditional distributions.
Summarising, a pair of such DCIMs share the same structure as well as their weights and biases. They are therefore represented by a single DNN and can be learned simultaneously by the estimator \eqref{eq:pplik}, which now reads as
\begin{equation} \label{eq:pplik2}
\frac{1}{\abs{\mathcal{T}}}
\sum_{(\mvec{x}^0,\mvec{x}^d)\in \mathcal{T}} \bigl[
\log p_{\theta}(\mvec{x}^d \mid \mvec{x}^0) +
\log q_{\theta}(\mvec{x}^0 \mid \mvec{x}^d) \bigr]
\rightarrow \max_{\theta}.
\end{equation}
Since both the forward and the backward conditionals are strongly independent, approximation \eqref{eq:pyapprox} can be applied to compute the probabilities $p(\mvec{x}^d \mid \mvec{x}^0)$ and $q(\mvec{x}^0 \mid \mvec{x}^d)$.
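The parameter sharing between the paired models can be sketched as follows; the layer sizes, weights, and backward biases below are hypothetical. The backward layer simply reuses the transposed forward weight matrix, so one parameter set serves both mean-field updates.

```python
import math

def tanh_layer(W, b, x):
    # mean-field update: xbar_out = tanh(W xbar_in + b)
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# hypothetical forward parameters for one layer: W^k (2x3) and b^k
W = [[0.4, -0.3, 0.2],
     [0.1,  0.5, -0.2]]
b = [0.05, -0.1]

x_prev = [1.0, -1.0, 1.0]          # realisation of layer k-1
x_k = tanh_layer(W, b, x_prev)     # forward model p: layer k from layer k-1

# backward model q reuses the same parameters: V^{k-1} = (W^k)^T
W_T = [list(col) for col in zip(*W)]
a = [0.0, 0.0, 0.0]                # hypothetical backward biases a^{k-1}
x_back = tanh_layer(W_T, a, x_k)   # backward mean field: layer k-1 from layer k
print(x_k, x_back)
```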
\begin{Remark}
If the model consists of one layer only ($d=1$), then the pair of $p(\mvec{x}^1 \mid \mvec{x}^0)$ and $q(\mvec{x}^0 \mid \mvec{x}^1)$ define a single joint distribution, which is an RBM, i.e. an MRF on a bipartite graph. The estimator \eqref{eq:pplik} becomes the pseudo-likelihood estimator.
\end{Remark}
\subsection{Relations to other models}\label{sec:related}
We briefly discuss relations between the proposed model and other models not mentioned in the introduction.
The connection between Bayesian networks and neural networks with injected noise, used in~\cite{Kingma:2014,Rezende:ICML2014} is different from ours. It relates two equivalent representations of stochastic models, while we relate stochastic and deterministic models.
Similarly to our work, \citet{Hinton:NC2006} use the constraint that the generator and recognition networks are related by transposition of weights in the unsupervised, layer-by-layer pre-training phase, motivated by their study of deep Boltzmann machines. For the supervised fine-tuning, the two models are decoupled.
The proposed model is of course related to classical auto-encoders \cite{Rumelhart:Nature1986} at least technically. Our learning approach is generative, but in contrast to auto-encoders it is supervised. Moreover, the ``decoding'' part (the reverse model) is in our case constrained to have the same parameters as the ``encoding'' part (the forward model).
Our model and the proposed generative learning approach are undoubtedly related to generative adversarial networks (GANs) \cite{Goodfellow:NIPS2014,Radford:ICLR2016}. As in GANs, the reverse part of our model aims at generating data. In contrast to them, however, our model uses no dedicated ``noise space'' and conditions the image distribution on the class. The randomness in the generating part comes solely through the reverse Markov chain. We believe that this should suffice for modelling rich distributions in the input space. It is, however, to be expected that the approximation \eqref{eq:pyapprox} used for learning imposes limitations on the expressive power of this model part compared with GANs (see experiments). Another difference is that our model is learned jointly with a ``twin'', i.e.~the forward part, rather than in competition with an ``adversary''.
\section{Experiments}\label{sec:experiments}
\subsection{MNIST dense}
The first experiment is a simple proof of concept. We trained a small network (three fully connected layers with 512, 512 and 10 nodes) on the standard MNIST dataset \cite{LeCun:MNIST2010} with both learning approaches, the discriminative and the generative one. Sigmoid activation was chosen for all but the last layer, for which we used soft-max activation, i.e.~treating its nodes as a categorical variable. A standard optimiser without drop-out or any other regularisers was used to maximise the objective function. Fig.~\ref{fig:mnist-mlp-learning} shows the training and validation accuracies for both approaches as functions of the iterations. It is clearly seen that the combined objective \eqref{eq:pplik} of the generative learning imposes only a rather insignificant drop in classification accuracy. Fig.~\ref{fig:mnist-mlp-learning} also shows the training and validation loss of the backward model.
To further assess the reverse model learned by the generative approach, we sampled 10 images for each of the classes from both models. The results are shown in Fig.~\ref{fig:mnist-mlp-sampling}. For each sampled image we also applied the forward model to classify it. The class with the highest probability, along with the value of that probability, is shown on top of each sampled image. It is clearly seen that the generative learning approach has found a set of model parameters that yields simultaneously good classification accuracy and the ability to generate images of average digits. On the other hand, it is also visible from Fig.~\ref{fig:mnist-mlp-sampling} that the generative model part learned only the mean shapes of the digits.
\begin{figure}[htb]
{\centering
\includegraphics[width=0.45\textwidth]{mnist-mlp/learning_acc-crop} \hfil
\includegraphics[width=0.45\textwidth]{mnist-mlp/generative_loss-crop}\\
}
\caption{Learning simple network on MNIST.
Left: accuracy for discriminative and generative learning (dashed: training, solid: validation).
Right: loss of the reverse model in generative learning (dashed: training, solid: validation).}
\label{fig:mnist-mlp-learning}
\end{figure}
\begin{figure}[htb]
{\centering
\includegraphics[width=0.4\textwidth]{mnist-mlp/sample_all-crop} \hspace{0.06\textwidth}
\includegraphics[width=0.4\textwidth]{mnist-mlp/sample_last-crop}\\
}
\caption{Images sampled from the reverse model part along with their classification and probability values (see text). Each row shows images sampled from one class. Left: images obtained by recursive sampling through the net from the class layer down to the image layer. Right: images obtained by sampling through one layer only and then recursively computing probabilities down to the image layer.}
\label{fig:mnist-mlp-sampling}
\end{figure}
\subsection{MNIST CNN} \label{subsec:mnist_cnn}
The second experiment analyses the behaviour of the proposed generative learning approach for convolutional neural networks. We learned a deep CNN (eight layers) with architecture ((3,1,32), (3,1,32), (2,2,32), (3,1,64), (3,1,64), (2,2,64), (50), (10)) on MNIST data. The components of the triplets denote the window size, stride and number of kernels for convolutional layers. The singletons denote the size of dense layers. Sigmoid activation was used for all but the last layer. When using the objective \eqref{eq:pplik2}, the net achieves 0.989 validation accuracy for the forward part and 0.23 loss for the reverse model part. We observed neither over-fitting nor ``vanishing gradient'' effects. We conjecture that the reverse learning task serves as a strong regulariser. The left tableau in Fig.~\ref{fig:mnist-cnn-sampling} shows images sampled from the reverse model part. Again, for better visibility, we sampled from the distribution $q(\mvec{x}^{d-1} \mid \mvec{x}^{d})$ and then recursively computed the probabilities down to the image layer by applying the approximation \eqref{eq:pyapprox}. It is clearly seen that the learned generative model part is not able to capture the multi-modal image distribution. We conjecture that one possible reason is the simplicity of the approximation \eqref{eq:pyapprox}, especially when applied for learning the reverse model part.
To analyse the problem, we considered a somewhat different learning objective for this model part
\begin{equation} \label{eq:pplik3}
\frac{1}{\abs{\mathcal{T}}}
\sum_{(\mvec{x}^0,\mvec{x}^d)\in \mathcal{T}} \bigl[
\log p_{\theta}(\mvec{x}^d \mid \mvec{x}^0) +
\log q_{\theta}(\mvec{x}^0 \mid \mvec{x}^{d-1}) \bigr]
\rightarrow \max_{\theta},
\end{equation}
where $\mvec{x}^{d-1}$ is sampled from $p_{\theta}(\mvec{x}^{d-1} \mid \mvec{x}^0)$ for the current $\theta$. The model learned with this objective achieved 0.990 validation accuracy and 0.14 validation loss for the reverse model part. Images sampled from this model in the same way as described above are shown in the right tableau of Fig.~\ref{fig:mnist-cnn-sampling}. It is clearly seen that this model captured modes of the image distribution, i.e.~different writing styles of the digits.
\begin{figure}[ht]
{\centering
\includegraphics[width=0.4\textwidth]{mnist-cnn/mnist_cnn_sample0-crop} \hspace{0.05\textwidth}
\includegraphics[width=0.4\textwidth]{mnist-cnn/mnist_cnn_sample-crop}\\
}
\caption{Images sampled from generatively learned CNNs on MNIST data. For better visibility the images were obtained by sampling through one layer only and then recursively computing probabilities till the image layer. Left: learning method as described in Sec.~\ref{subsec:gen_learn}. Right: alternative learning method described in Sec.~\ref{subsec:mnist_cnn}.}
\label{fig:mnist-cnn-sampling}
\end{figure}
All experiments presented in this paper were carried out by using Keras \cite{chollet:keras2015} on top of Tensorflow \cite{tensorflow2015a}.
\section{Conclusions}
We proposed a class of probabilistic models that leads to a new statistical interpretation of DNNs in which all neurons of the net become random variables. In contrast to Boltzmann machines, this class allows for an efficient approximation when computing the distribution of the output variables conditioned on a realisation of the input variables. The computation itself becomes identical to the recursive forward computation for a DNN.
The second objective of the paper, to design a generative learning approach for DCIMs, has been reached only partially. The proposed approach and its variants do allow learning the forward and backward model parts simultaneously. However, the achievable expressive power of the backward part of the model is still inferior to GANs.
Notwithstanding this, we believe that the presented approach opens interesting research directions. One of them is to analyse the proposed approximation and to search for better ones, leading to new and better activation functions. Another question in that direction is a statistical interpretation of the ReLU activation as the expectation of a possibly continuous random variable.
A different direction calls for generalising DCIMs by introducing interactions between neurons within layers. This would lead to deep CRFs. We envision learning such models, at least discriminatively, by using e.g.~recent results on efficient marginal approximations of log-supermodular CRFs.
\subsubsection*{Acknowledgements}
A.~Shekhovtsov was supported by Toyota Motor Europe HS, O.~Fikar was supported by Czech Technical University in Prague under grant SGS17/185/OHK3/3T/13 and B.~Flach was supported by Czech Science Foundation under grant 16-05872S.
\bibliographystyle{apa}
\section{Introduction}
\blfootnote{
%
%
%
%
\hspace{-0.65cm}
Accepted for publication in the Proceedings of the 14\textsuperscript{th}
International Workshop on Semantic Evaluation (SemEval-2020) \\
This work is licensed under a Creative Commons
Attribution 4.0 International Licence.
Licence details:
\url{http://creativecommons.org/licenses/by/4.0/}.
%
}
Offensive language is pervasive in social media in this day and age. It is so common, and so often used for emphasis rather than for its literal meaning, that truly offensive content can be hard to identify. Individuals frequently take advantage of the perceived anonymity of computer-mediated communication, engaging in behaviour that many of them would not consider in real life. Online communities, social media platforms, and technology companies have been investing heavily in ways to cope with offensive language and to prevent abusive behavior in social media. One of the most effective strategies for tackling this problem is to use computational methods to identify offense, aggression, and hate speech in user-generated content (e.g.~posts, comments, microblogs, etc.).
The SemEval 2020 task on abuse detection \cite{2006.07235} aims to study both the target and the type of offensive language, aspects not covered by previous work on offenses such as hate speech and cyberbullying. In this paper, we focus on the first two sub-tasks. \emph{Sub-task A} focuses on offensive language and profanity detection, a binary classification problem wherein the objective is to determine whether a tweet is offensive or not. \emph{Sub-task B} focuses on the identification of target presence, also a binary classification problem, wherein we are required to determine whether or not a tweet is targeted at someone or something.
\subsection{Boosting Pre-trained Representations}\label{section:intro-boosting}
Recent Natural Language Processing (NLP) systems have focused on deep learning methods that take word embeddings as input. While these methods have been extremely successful on several tasks, we believe that information pertaining to the importance of individual words at the corpus level might not be effectively captured by models that use pre-trained embeddings, especially given the small number of training epochs (usually 3) used. We hypothesise that deep learning models, especially those that use pre-trained embeddings and are therefore trained for a small number of epochs, can benefit from corpus-level count information. We test this on Sub-task A using an ensemble of BERT and TF-IDF which outperforms both of the individual models (Section \ref{results:subtaska}).
For sub-task B, we hypothesise that these sentence representations can benefit from having POS information to help identify the presence of a target. To test this hypothesis, we integrate the count of part-of-speech (POS) tags with BERT. While this combination did outperform BERT, we found that a simpler modification to BERT (i.e. cost weighting, Section \ref{section:costweighting}) outperforms this combination.
\section{Related Work}
The Offensive Language Identification Dataset (OLID) was designed by \newcite{zampieri2019semeval} for the 2019 version of this task, as there was no prior work on this task before then. OLID is made up of 14,100 tweets annotated by experienced annotators, but it suffers from limited size and, in particular, class imbalance. To get around this, OffensEval 2020 made use of the Semi-Supervised Offensive Language Identification Dataset (SOLID)~\cite{rosenthal2020large}.
\subsection{Prior OffensEval Systems}
The results of OffensEval 2019 suggest that BERT is itself very powerful and does relatively well on all three sub-tasks. In this section, we examine the techniques of some of the best performing systems, which we draw on for our own methods.
\newcite{nikolov-radivchev-2019-nikolov} used a large variety of models and combined the best ones in ensembles. They pre-processed the tweets by splitting hashtagged tokens into separate words at camel-case boundaries. Stop words for the second and third sub-tasks were filtered, because certain nouns and pronouns could contain useful information for the models to detect targets.
Due to the class imbalance in the second and third sub-tasks, they used a variety of techniques to deal with it: oversampling, by duplicating examples from the poorly represented classes; changing the class weights, to give more weight to the poorly represented classes; and modifying the classification thresholds, shifting the decision boundary away from the equal binary split of 0.5 to accommodate the imbalance. Ensemble models were found to over-fit the training data compared to BERT, which generalised best. Their BERT submissions placed $2^{nd}$ in the first sub-task and $1^{st}$ in the last sub-task.
Similarly, the work of \newcite{liu-etal-2019-nuli} mostly looked at pre-processing inputs before feeding them to BERT; pre-processing appears to work very well for improving BERT's results. Besides hashtag segmentation, their techniques include emoji substitution: they used a Python library that converts emoji Unicode into phrases to increase the semantic content of tweets. With pre-processing alone they were able to achieve $1^{st}$ place in the first sub-task.
A significantly different method was used by \newcite{han-etal-2019-jhan014}, who evaluated tweets with a rule-based sentence-offensiveness score. Tweets with high and low offensiveness values are automatically classified as offensive or non-offensive respectively; otherwise the decision follows a probabilistic distribution. On sub-task B, their sentence-offensiveness model outperformed other systems that used deep learning or non-neural machine learning. This is an interesting finding, as it shows that traditional, rule-based models for target classification can be very successful compared to deep learning methods.
\section{System Overview}
For sub-task A, we test three models: a standard neural network that uses TF-IDF features, BERT and the ensemble of these two. For sub-task B we use noun counts, BERT and the ensemble of both.
\subsection{TF-IDF}
In order to incorporate global information into our model, we need a technique that captures corpus-level statistics, and TF-IDF does this well. Using TF-IDF, we can identify keywords that help distinguish offensive from non-offensive tweets: offensive tweets tend to contain more offensive words, while non-offensive tweets usually contain more neutral-toned words. Since the TF-IDF features are to be combined with BERT, we feed them into a neural network, so that, unlike with non-neural machine learning techniques, the TF-IDF component continues to learn when the combined model is trained.
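As a concrete illustration of the corpus-level information TF-IDF provides, here is a minimal sketch over a toy corpus; the documents are invented, and our actual system uses NLTK tokenisation and only the top 6000 terms.

```python
import math
from collections import Counter

# toy corpus: each document is a list of tokens
docs = [["this", "tweet", "is", "awful"],
        ["lovely", "day"],
        ["awful", "awful", "weather"]]

# document frequency: in how many documents does each word occur?
df = Counter(w for d in docs for w in set(d))
N = len(docs)

def tfidf(doc):
    # term frequency scaled by inverse document frequency
    tf = Counter(doc)
    return {w: (tf[w] / len(doc)) * math.log(N / df[w]) for w in tf}

weights = tfidf(docs[0])
print(weights)
```

Words that occur in many documents ("awful" here) receive lower weights than words specific to one document, which is exactly the global signal we want to add to the sentence-level BERT features.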
\subsection{BERT}
We pick BERT \cite{DBLP:journals/corr/abs-1810-04805} because it has outperformed similar techniques that provide sentence-level embeddings, such as BiLSTMs and ELMo \cite{Peters:2018}. It was also shown to be very effective on all the sub-tasks in the previous year's evaluation \cite{zampieri2019semeval}. It is strong both in generalisation and in handling context-dependent evaluation.
\subsection{Ensemble Model}
Ensemble techniques have been shown to be effective in reducing the variance of predictions and in making better predictions, which for neural networks can be achieved by combining multiple sources of information \cite{brownlee2018better}. We use an ensemble model to combine the individual models into one. BERT alone provides sentence-level information, but combining BERT features with TF-IDF features gives us access to both sentence-level and corpus-level information, which is the goal of our hypothesis. The ensemble model is created by concatenating the sentence representation of BERT with the features generated by the TF-IDF model and using this combined vector for classification. In practice, this translates into calculating the TF-IDF vector for each sentence and concatenating it to the corresponding BERT output. This vector is then fed to a fully connected classification layer. Both BERT and the TF-IDF weights are updated during training.
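A minimal sketch of the combination step; all dimensions and numeric values are hypothetical (real BERT sentence vectors are 768-dimensional), and the classifier is reduced to a single linear unit for illustration.

```python
# per-sentence features from the two models (hypothetical values)
bert_repr = [0.12, -0.45, 0.80]      # e.g. BERT [CLS] output, dim 768 in practice
tfidf_vec = [0.0, 0.27, 0.0, 0.10]   # TF-IDF features for the same tweet

# concatenate into one joint feature vector for the classification layer
combined = bert_repr + tfidf_vec

# a single linear classification unit on top (weights hypothetical);
# during training, gradients flow back into both BERT and the TF-IDF branch
w = [0.1, -0.2, 0.3, 0.05, -0.1, 0.2, 0.0]
bias = 0.01
logit = sum(wi * xi for wi, xi in zip(w, combined)) + bias
print(len(combined), logit)
```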
\subsection{Noun Count As Features}
We have seen the success of a rule-based method on sub-task B, which achieved significant performance compared to machine learning techniques: \newcite{han-etal-2019-jhan014} showed that a manually annotated list of offensive words providing a measure of offensiveness strength is effective. Since targets are very likely to appear as nouns or pronouns in tweets, we can indicate the presence of a target with counts of part-of-speech tags such as `PRP' and `NP'.
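The feature extraction can be sketched as follows; the example tweet and its tags are invented, and we assume the tweet has already been POS-tagged (in our pipeline this is done with NLTK's tagger), so the input here is a list of (token, tag) pairs.

```python
from collections import Counter

# a hypothetical tagged tweet: (token, Penn Treebank tag) pairs
tagged = [("you", "PRP"), ("are", "VBP"), ("all", "DT"),
          ("idiots", "NNS"), ("!", ".")]

def target_tag_counts(tagged_tokens, tags=("PRP", "NNS")):
    """Count target-indicative POS tags; the resulting small feature
    vector is appended to the BERT sentence representation."""
    counts = Counter(tag for _, tag in tagged_tokens)
    return [counts[t] for t in tags]

print(target_tag_counts(tagged))
```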
\subsection{Cost Weight Adjustments}
\label{section:costweighting}
An analysis of the datasets showed large class imbalances for all sub-tasks. We follow the method described by \newcite{tayyar-madabushi-etal-2019-cost} and modify the cost function so that poorly represented classes have more impact when calculating the cost of errors. They show that other techniques, such as data augmentation through oversampling, do not improve the performance of BERT. We use cost weighting for both tasks.
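The idea behind cost weighting can be sketched with the sub-task A counts from our analysis; the inverse-frequency weighting scheme shown here is a standard variant and is assumed for illustration rather than taken verbatim from the cited work.

```python
import math

# class counts for sub-task A (from our dataset analysis)
n_per_class = [1446768, 7628650]
total = sum(n_per_class)

# inverse-frequency class weights: rare classes get weights above 1
weights = [total / (len(n_per_class) * n) for n in n_per_class]

def weighted_nll(p_correct, label):
    """Negative log-likelihood scaled by the weight of the true class."""
    return -weights[label] * math.log(p_correct)

# the same confident mistake costs more on the rare class (label 0)
print(weights, weighted_nll(0.1, 0), weighted_nll(0.1, 1))
```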
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Sub-task & Total & Class 1 & Class 2 \\
\hline
A & 9075418 & 1446768 (15\%) & 7628650 (85\%) \\
\hline
B & 188974 & 39424 (20\%) & 149550 (80\%) \\
\hline
\end{tabular}
\caption{Class imbalance analysis.}
\end{table}
\section{Experimental Setup}
For each of the sub-tasks we participate in, we split our training set into a training set and development set in a 4:1 ratio. Our test set is the evaluation set for SemEval-2019. We submit the best version of these experiments to SemEval-2020. Also, in each case, we first experiment with BERT, then by adding additional parameters to BERT and finally by use of cost-weights. All ensemble models were created at the embedding layer by appending additional features to BERT embeddings before then using a fully connected layer for classification.
\subsection{Sub-Task A}
Our setup for sub-task A pre-processes tweets using stemming and NLTK's tweet tokenizer for the TF-IDF features, where we only consider the 6000 highest term-frequency words to accommodate memory limitations. With BERT, we found that stemming or lemmatization does not improve the results, so our input uses BERT's default tokenizer with a maximum sequence length of 64. We used the English dataset provided for training, which consists of nine million examples. Unfortunately, due to memory constraints, we were unable to use this entire dataset, and the final model used just 10\% of it. We also applied cost weighting to account for class imbalances. We used a learning rate of 5e-6 and a batch size of 32, and found the best results within 1 to 2 epochs.
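The TF-IDF feature pipeline with a capped vocabulary can be sketched as follows. This is an illustrative pure-Python version with function names of our choosing; the actual system additionally uses NLTK's tweet tokenizer and stemming.

```python
import math
from collections import Counter

def build_vocab(docs, k):
    """Keep only the k highest term-frequency words (memory cap)."""
    tf = Counter(w for doc in docs for w in doc)
    return [w for w, _ in tf.most_common(k)]

def tfidf_vector(doc, vocab, docs):
    """TF-IDF features for one tokenized document over the capped vocab."""
    n = len(docs)
    counts = Counter(doc)
    vec = []
    for w in vocab:
        df = sum(1 for d in docs if w in d)   # document frequency
        vec.append(counts[w] * (math.log(n / df) if df else 0.0))
    return vec

docs = [["a", "b", "a"], ["b", "c"]]   # stand-ins for tokenized tweets
vocab = build_vocab(docs, 2)           # top-2 here instead of top-6000
features = tfidf_vector(docs[0], vocab, docs)
```

A word that occurs in every document receives zero weight, while frequent in-document words that are rare across the corpus dominate the feature vector.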
\subsection{Sub-Task B}
Our setup for sub-task B uses the OLID dataset, as we found that the newly provided dataset had a high rate of misclassified labels. We used NLTK's POS tagger to extract the tags for our noun count features, extracting the counts of `NNS' and `PRP' tags as they give the most information about target presence. We used a learning rate of 5e-5 and a batch size of 32. As the dataset is small, we found the best results within 20 epochs, adjusting for the decrease in step size.
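The target-presence features reduce to counting the relevant tags. A minimal sketch follows; in practice the tags come from NLTK's POS tagger, whereas here the input is assumed to be already tagged.

```python
from collections import Counter

def target_pos_counts(tagged_tokens):
    """Count 'NNS' and 'PRP' tags, used as features indicating
    whether a target of the offense is present in the tweet."""
    tags = Counter(tag for _, tag in tagged_tokens)
    return [tags["NNS"], tags["PRP"]]

feats = target_pos_counts([("they", "PRP"), ("targets", "NNS"), ("run", "VBP")])
# feats == [1, 1]
```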
\section{Results and Analysis}
We present our overall rankings on each of the two sub-tasks in Table \ref{table:results}. While our rank on the first task is not very high, we note that our score is within 2 points of the top-scoring team; it should be emphasised that we achieve this result using only 10\% of the available training data, due to GPU memory limitations. We rank much closer to the top on sub-task B, placing 4th among a total of 44 submissions.
\begin{table}[ht]
\begin{subtable}[t]{.48\linewidth}
\centering
\begin{tabular}{|@{\extracolsep{1pt}}lll|}
\hline
Rank & System & Macro F1 \\
\hline
1 & ltuhh2020 & 0.92226 \\
2 & gwiedemann & 0.92040 \\
3 & Galileo & 0.91985 \\
\multicolumn{3}{|c|}{\dots} \\
38 & \textbf{wml754 (This work)} & \textbf{0.90901} \\
\hdashline
& All NOT & 0.4193 \\
& All OFF & 0.2174 \\
\hline
\end{tabular}
\caption{Sub-task A}
\end{subtable}
\begin{subtable}[t]{.48\linewidth}
\centering
\begin{tabular}{|@{\extracolsep{1pt}}lll|}
\hline
Rank & System & Macro F1 \\
\hline
1 & Galileo & 0.74618 \\
2 & tracypg & 0.73623 \\
3 & pochunchen & 0.69063 \\
4 & \textbf{wml754 (This Work)} & \textbf{0.67336} \\
\hdashline
& All TIN & 0.3741 \\
& All UNT & 0.2869 \\
\hline
\end{tabular}
\caption{Sub-task B}
\end{subtable}
\caption{\label{table:results}Rankings on Sub-task A and Sub-task B}
\end{table}
\subsection{Sub-Task A}
As described in Section \ref{section:intro-boosting}, we hypothesise that deep learning models, especially those that use pre-trained embeddings and are hence trained for a small number of epochs, can benefit from corpus-level count information. To test this hypothesis we create three different models: a Simple Neural Network (SNN) consisting of a single layer fed with TF-IDF features, BERT, and an ensemble of the two. The ensemble model is constructed by removing the classification layer from BERT and from the SNN model, concatenating their outputs, and passing this concatenated vector through a new fully connected layer for classification. We perform our experiments using 10\% of the training data as our training set (we were unable to train on more) and last year's SemEval test set as our test set, and we pick the best-performing model to generate results for our submission. Our experiments show that corpus-level count information captured by TF-IDF can indeed boost the performance of BERT. Table \ref{table:experiments-a-noweights} details the results of our experiments with the three models.
\label{results:subtaska}
\begin{table}[ht]
\centering
\begin{tabular}{|l|r|r|}
\hline
Model & \multicolumn{1}{l|}{Macro F1 (Train)} & \multicolumn{1}{l|}{Macro F1 (Dev)} \\
\hline
SNN & 0.9227 & 0.7329 \\
BERT & 0.9641 & 0.7378 \\
Ensemble & 0.9179 & \textbf{0.7819} \\
\hline
\end{tabular}
\caption{A comparison of a Simple Neural Network (SNN) fed with TF-IDF features, BERT and the ensemble of the two. }\label{table:experiments-a-noweights}
\end{table}
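The ensemble construction above (concatenating the two sub-models' representations and classifying with a single new layer) can be sketched in pure Python. Dimensions and parameter values are illustrative; in the actual model the fully connected layer is trained jointly with the rest of the network.

```python
def ensemble_logits(bert_vec, snn_vec, weights, bias):
    """Concatenate the BERT and SNN representations and apply one
    fully connected layer to obtain classification logits."""
    combined = bert_vec + snn_vec  # list concatenation = feature concat
    return [sum(w * x for w, x in zip(row, combined)) + b
            for row, b in zip(weights, bias)]

# 2-dim "BERT" vector + 1-dim "SNN" vector -> 3 inputs, 2 classes.
logits = ensemble_logits([1.0, 0.0], [0.5],
                         [[1.0, 1.0, 2.0], [0.0, 0.0, 0.0]], [0.0, 1.0])
# logits == [2.0, 1.0]
```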
Our analysis of the training and test data using the Wilcoxon signed-rank test as described by \newcite{tayyar-madabushi-etal-2019-cost} shows that the training and development sets are different enough to warrant the use of cost-weighting (Section \ref{section:costweighting}). To this end we introduce cost weighting to each of the three models described above and the results of these experiments are presented in Table \ref{table:experiments-a-yesweights}.
\begin{table}[ht]
\centering
\arrayrulecolor{black}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|}
\arrayrulecolor[rgb]{0.8,0.8,0.8}\cline{1-1}\arrayrulecolor{black}\cline{2-9}
\multicolumn{1}{!{\color[rgb]{0.8,0.8,0.8}\vrule}l|}{} & \multicolumn{2}{l|}{Cost Weight} & \multicolumn{3}{l|}{Train} & \multicolumn{3}{l|}{Dev} \\
\hline
Model & \multicolumn{1}{l|}{OFF} & \multicolumn{1}{l|}{NOT} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F1} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F1} \\
\hline
SNN & 10 & 1 & 0.8233 & 0.8923 & 0.8512 & 0.7619 & 0.7448 & 0.7523 \\
\hline
BERT & 50 & 1 & 0.9267 & 0.9615 & 0.9430 & 0.8123 & 0.8050 & 0.8085 \\
\hline
Ensemble & 100 & 1 & 0.8896 & 0.9604 & 0.9197 & 0.8095 & 0.8165 & \textbf{0.8128} \\
\hline
\end{tabular}
\caption{A Cost Weight Adjusted comparison of a Simple Neural Network (SNN) fed with TF-IDF features, BERT and the ensemble of the two.}\label{table:experiments-a-yesweights}
\end{table}
We observe that adding the optimal cost weights to poorly represented classes significantly improves the performance of all models. The ensemble model, however, still outperforms either of SNN or BERT, despite a large increase in performance for BERT after adding cost weights.
As mentioned, we were only able to train our BERT and ensemble models with 10\% of the training data. The performance of our models can be improved further given additional GPU resources, as shown in Table \ref{table:needgpu}.
\begin{table}[ht]
\centering
\arrayrulecolor{black}
\begin{tabular}{|r|r|r|r|r|r|r|}
\arrayrulecolor[rgb]{0.8,0.8,0.8}\cline{1-1}\arrayrulecolor{black}\cline{2-7}
\multicolumn{1}{!{\color[rgb]{0.8,0.8,0.8}\vrule}l|}{} & \multicolumn{3}{l|}{Train} & \multicolumn{3}{l|}{Dev} \\
\hline
\multicolumn{1}{|l|}{Data Size} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F1} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & \multicolumn{1}{l|}{F1} \\
\hline
100000 & 0.8896 & 0.9604 & 0.9197 & 0.8095 & 0.8165 & 0.8128 \\
\hline
800000 & 0.9652 & 0.9413 & 0.9527 & 0.8364 & 0.8095 & \textbf{0.8212} \\
\hline
\end{tabular}
\caption{We show that our models can perform better if trained on more of the training data.} \label{table:needgpu}
\end{table}
\subsection{Sub-Task B}
\label{results:subtaskb}
For sub-task B, as mentioned in Section \ref{section:intro-boosting}, we hypothesise that these sentence representations can benefit from having POS information to help identify the presence of a target. To test this hypothesis, we integrate the count of part-of-speech (POS) tags with BERT. We use the OLID dataset for training and last year's evaluation set as the test set. The best performing model is used to make predictions for submission to this year's competition. We present the results of our experiments in Table \ref{table:experiments-b}. Our experiments show that while noun counts do improve the accuracy of BERT, cost weighting BERT is more effective.
\begin{table}[h!]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\cline{2-9}
\multicolumn{1}{l|}{} & \multicolumn{2}{l|}{Cost Weight} & \multicolumn{3}{l|}{Train} & \multicolumn{3}{l|}{Dev} \\
\hline
Model & TIN & UNT & Precision & Recall & F1 & Precision & Recall & F1 \\
\hline
BERT & 1 & 1 & 0.8285 & 0.7582 & 0.7875 & 0.7533 & 0.6849 & 0.7115 \\
\hline
BERT & 1 & 4 & 0.8283 & 0.7347 & 0.7707 & 0.7604 & 0.7520 & \textbf{0.7561} \\
\hline
BERT + NC & 1 & 1 & 0.9177 & 0.8805 & 0.8979 & 0.7504 & 0.7173 & 0.7321 \\
\hline
BERT + NC & 1 & 4 & 0.8043 & 0.7767 & 0.7896 & 0.7175 & 0.8026 & 0.7485 \\
\hline
\end{tabular}
\caption{Comparison before and after adding Noun Count}
\label{table:experiments-b}
\end{table}
\section{Conclusion}
We show that incorporating corpus level information does help improve the performance of BERT. We achieve competitive results using just 10\% of the available dataset and would like to test the limits by training with the full dataset. Our experiments also show that noun counts do help boost the performance of BERT, but not as much as cost-weighting.
\bibliographystyle{coling}
\section{Introduction}
Quantum computers promise to outperform classical processors for particular tasks \cite{nielsen00, Montanaro2016, Arute2019, PRXQuantum.2.017001}.
Solving problems beyond the reach of classical computers with a universal quantum computer requires the implementation of quantum error correction (QEC) protocols \cite{Terhal_2015} to mitigate faulty operational building blocks. In QEC codes, logical qubits are encoded into entangled states of several physical qubits. Error syndrome readout permits detection of errors through quantum non-demolition (QND) parity check measurements (PCM) on the logical qubits \cite{PhysRevLett.99.120502, Lupascu2007, Barreiro2011, Corcoles2015}. A QND PCM requires performing a sequence of entangling gates between a set of data qubits and an ancilla qubit, to which the parity information is mapped \cite{Devitt_2013}. Projective measurements on the ancillae discretize eventual errors and thus allow for their detection and subsequent correction. However, PCM circuits themselves consist of faulty gate operations and may corrupt the qubit register. \textit{Fault-tolerant} (FT) QEC schemes are therefore needed to prevent the uncontrolled proliferation of errors through the quantum register \cite{shor1996faulttolerant}.
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth,trim={2cm 0cm 3cm 0cm},clip]{./figs/ftcircuit_mm_ugp2.pdf}
\caption{\textbf{a)} Sketch of the topological $[[7,1,3]]$ color code, highlighting a plaquette comprised of data qubits $d_1-d_4$. \textbf{b)} Segmented microchip ion trap used in this work, with the enlarged picture showing how the six ion qubits are distributed at the beginning of the gate sequence. \textbf{c)} Quantum circuit for a fault-tolerant parity check measurement on four data qubits. Entangling gates map the parity of the data qubits to the syndrome qubit $s$, while two additional gates serve for detecting potentially uncorrectable faults using the flag qubit $f$. An error $E_s$ (red) propagates through subsequent gates and results in a weight-2 error on the data qubit register, which is detected by the flag qubit. Initialization (yellow, blue) and analysis (green) rotations are carried out on all qubits at the beginning of the sequence and immediately before projective readout (grey).
}
\label{fig:circuit}
\end{center}\end{figure}
Previously conceived FT PCM schemes demand as many additional ancilla qubits as the weight of the largest parity check generator \cite{PhysRevLett.77.3260,PhysRevLett.98.020501}. More recent FT PCM schemes, based on so-called flag qubits, substantially reduce the overhead in terms of qubits and gate operations \cite{PhysRevLett.121.050502, Chamberland2018flagfaulttolerant, Chamberland2019faulttolerantmagic,PhysRevA.100.062307, Reichardt_2020, PhysRevA.101.012342, Chamberland_2020,PhysRevA.101.012342,PRXQuantum.1.010302}. In particular, for distance-three codes implemented in fully connected quantum registers, a total of only two ancilla qubits is sufficient to maintain the one-fault detection and correction condition \cite{Chamberland2018flagfaulttolerant}, i.e.~to guarantee the correctability of one arbitrary error occurring on any of the qubits or operations involved in the logical qubit.\\
To date, several QEC protocols and components have been demonstrated, using trapped ions \cite{NIGG2014,PhysRevX.6.031030,SchindlerScience1059,kielpinski2001recent,Chiaverini2004,Negnevitsky2018,Stricker2020}, superconducting circuits \cite{Kelly2015, ofek2016demonstrating, Andersen2020,chen2021exponential}, nuclear magnetic resonance \cite{s-zhang-prl-109-100503, s-knill-prl-86-5811}, or nitrogen-vacancy centers \cite{Waldherr_2014,Unden2016}. Increasing gate fidelities for different platforms \cite{Srinivas2021, CraigClark2021,kjaergaard2020programming,Browaeys2020} render QEC circuit noise thresholds \cite{raussendorf-prl-98-190504} to be within reach of experimental capabilities. So far, with regard to FT QEC elements, FT state preparation and detection on
primitives of topological surface codes have been realized with superconducting circuits \cite{Corcoles2015, PhysRevLett.119.180501,Kelly2015} or trapped ions \cite{Linkee1701074}. Recently, FT preparation and FT operations of an encoded qubit on a distance-3 Bacon-Shor code was demonstrated \cite{egan2021faulttolerant}, where the FT syndrome extraction was realised using four ancilla qubits in addition to the nine data qubits. \\
\subsection{Fault-tolerant parity check measurement}
In this work, we employ a trapped ion quantum processor to demonstrate a flag-based FT weight-4 PCM scheme, which reduces the overhead for FT syndrome readout to two extra \textit{syndrome} and \textit{flag} qubits. The flag qubit detects \textit{hook errors}, i.e. faults occurring on the syndrome qubit that proliferate onto two errors on the data qubit register. They would remain undetectable in a non-FT PCM scheme and eventually result in a logical error. In general, a weight-4 FT PCM circuit represents a key building block of the smallest distance-3 topological color code, which is equivalent to the $[[n=7,k=1,d=3]]$ Steane code \cite{PhysRevLett.77.793, PhysRevLett.97.180501}, as well as of FT circuit constructions for larger 2D topological QEC codes \cite{Chamberland2018flagfaulttolerant, PhysRevA.101.012342, PhysRevA.101.032333}. This stabilizer code \cite{stabilisers} encodes $k=1$ logical qubit into $n=7$ physical qubits with a code distance $d=3$ and can therefore correct up to $t=(d-1)/2=1$ arbitrary error on any of the physical qubits, provided that QEC cycles are realized via fault-tolerant circuit constructions, based e.g.~on the flag-qubit based FT PCM measurement demonstrated in this work. The physical qubits of the code can be arranged in a 2D triangular lattice structure formed by three interconnected 4-qubit plaquettes, as displayed in Fig.~\ref{fig:circuit}a. The set of parity check or stabilizer generators $\{g_i\}$ of the code generate the stabilizer group ${\cal S}$
and are 4-qubit Pauli operators defined on vertices $v(p)$ of each its plaquettes $p$:
\begin{equation}
g_x^{(p)}=\bigotimes_{i\in v(p)}X_i,\qquad g_z^{(p)}=\bigotimes_{i\in v(p)}Z_i,
\end{equation}
with the Pauli matrices $X_i,Y_i,Z_i,\mathbb{1}_i$ pertaining to qubit $i$.
The \textit{code space} ${\cal L}$ hosting the logical qubit is fixed as the common two-dimensional eigenspace of eigenvalue $+1$ of all generators $g_i$ (and combinations thereof),
\begin{equation}
{\cal L}:= \{\ket{\psi}_{{\cal L}}:g_i\ket{\psi}_{{\cal L}}=+\ket{\psi}_{{\cal L}}\quad \forall g_i
\}.
\end{equation}
Here, we focus on the experimental verification of a flag-based FT weight-4 parity check, $g_z=Z_1Z_2Z_3Z_4$, according to the circuit shown in Fig.~\ref{fig:circuit}c. The $g_x$ parity check is equivalent, as it merely requires mapping the data qubits to the $X$ basis by local rotations before syndrome readout. Four entangling gates of type $Z_i\otimes Z_j$ lead to a $\pi$ phase shift on the syndrome for odd parity of the four data qubits, which is detected upon readout. Two additional entangling gates between the syndrome and flag qubits serve to catch error events throughout the PCM that would otherwise result in weight-2 errors on the data qubit register (see Fig.~\ref{fig:circuit}c).
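Since each $Z_i\otimes Z_s$ gate acts on the syndrome as a $Z$ rotation conditioned on the data bit, the ideal parity mapping can be followed on the syndrome qubit alone. The pure-Python sketch below (noiseless, ignoring the echoed gate decomposition used in the experiment, and assuming the syndrome is prepared in $\ket{-}$) reproduces the expected behaviour: the syndrome stays in $\ket{-}$ for even input parity and is rotated to $\ket{+}$ for odd parity.

```python
import cmath

def syndrome_outcome(data_bits):
    """Ideal weight-4 parity mapping: the syndrome starts in |->, and
    each ZZ(pi/2) gate with a data bit b applies the phase rotation
    exp(i*(pi/4)*(-1)**b * Z) to the syndrome.  An X-basis measurement
    then returns -1 for even input parity and +1 for odd parity."""
    a0, a1 = 1 / cmath.sqrt(2), -1 / cmath.sqrt(2)   # amplitudes of |->
    for b in data_bits:
        phase = cmath.exp(1j * (cmath.pi / 4) * (-1) ** b)
        a0, a1 = a0 * phase, a1 / phase
    p_minus = abs((a0 - a1) / cmath.sqrt(2)) ** 2    # overlap with |->
    return -1 if p_minus > 0.5 else +1

even = syndrome_outcome([0, 1, 1, 0])   # -> -1 (even parity)
odd = syndrome_outcome([0, 1, 1, 1])    # -> +1 (odd parity)
```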
QEC is intimately linked to multipartite entanglement \cite{Preskilnotes, PhysRevA.54.3824, entanglementassistQECC,Almheiri2015,PRXQuantum.2.020304}. Several works reveal explicit connections between QEC and the production of maximally entangled states and, equivalently, between entanglement fidelities of the encoded states and the weight distribution of a code \cite{PhysRevA.69.052330,Raissi_2018,PhysRevLett.78.1600,681316}. Since non-classical correlations are an inherent prerequisite for QEC, the generation and verification of genuinely multipartite entangled (GME) states is a suitable benchmarking protocol for FT QEC building blocks. Here, we verify GME between the data and ancilla qubits in order to demonstrate the correct functioning of our FT PCM, and to benchmark the capabilities of our trapped-ion processor in the context of FT QEC.
\section{Shuttling-Based Trapped-Ion Platform}
Quantum computer platforms based on trapped atomic ions arranged in static linear registers with laser addressing have seen substantial progress \cite{Blatt14Qubit,Debnath2016,Linke3305}. On such platforms, QEC building blocks have been demonstrated, such as repeated syndrome extraction and correction \cite{Schindler1059,Negnevitsky2018}, encoding, readout and gate operations for the $[[7,1,3]]$ code \cite{Nigg302}, and entanglement of encoded logical qubits \cite{Erhard2021}. However, QEC protocols impose stringent demands on the scalability of the underlying hardware platform. The shuttling-based ``Quantum-CCD'' approach offers a route to increased scalability \cite{KIELPINSKI2002,Lekitsch2017,Kaushal2020,Pino2021}. Here, the qubit ions are kept in the form of small subsets within a microstructured trap array, and the register is dynamically reconfigured via shuttling operations. This way, the excellent degree of control can be retained for increasing register sizes. In this work, we implement a shuttling-based FT PCM protocol. Between subsequent gate operations on two qubits, the register is reconfigured via shuttling operations. A special feature of our protocol is that we establish the required effective all-to-all connectivity by reordering the register via physical rotation of two commonly confined ions. This operation is equivalent to a unit-fidelity SWAP logic gate \cite{SWAPGATE} and contrasts with faulty radiation-driven SWAP gates. This, together with the inherently low cross-talk of the shuttling-based architecture, makes it possible to maintain the one-fault QEC condition.\\
\begin{figure*}[h!tp]\begin{center}
\includegraphics[width=\textwidth,trim={0cm 0.3cm 0cm 0cm},clip]{./figs/6IonFTR_Shuttling.pdf}
\caption{Shuttling schedule of the fault-tolerant parity readout measurement sequence, indicating how the ion qubits are moved between different storage sites of the segmented ion trap. The fixed laser interaction zone is located at segment 19. The arrows indicate laser-driven gate interactions. A distance of at minimum two empty segments between sets of ion qubits is maintained throughout the sequence. The maximum spatial extent of the register is 24 segments (4.8~mm).}
\label{fig:6IonFTRShuttling}
\end{center}
\end{figure*}
We employ a micro-structured, segmented radio frequency ion trap \cite{Kaushal2020}, consisting of 32 uniform segment pairs which are linearly arranged along a \textit{trap axis}. Each segment pair can generate a confining potential well. All qubit operations - initialization, gates and readout - are carried out using laser beams, which are directed to segment 19 - henceforth referred to as \textit{laser interaction zone (LIZ)}. Shuttling operations are carried out by supplying suitable voltage waveforms to the trap electrodes. Potential wells containing one or two qubits can be moved along the trap axis \cite{WALTHER2012,BOWLER2012}, two commonly confined ions can be separated into potential wells \cite{RUSTER2014,KAUFMANN2014}, two separately confined ions can be merged into one well, and two commonly confined ions can be rotated such that their ordering along the trap axis is reversed \cite{SWAPGATE}. The \textit{separate / merge} and \textit{swap} shuttling operations are limited to the LIZ.\\
The qubits are encoded in the spin of the valence electron of atomic $^{40}$Ca$^+$ ions \cite{POSCHINGER2009,RusterLongLived2016}, with the assignment $\ket{0}\equiv \ket{S_{1/2},m_J=+1/2}, \ket{1}\equiv \ket{S_{1/2},m_J=-1/2}$. Gate operations are driven by a pair of beams detuned by about $2\pi \times 1.0$~THz from the $S_{1/2}\leftrightarrow P_{1/2}$ electric dipole transition. Local qubit rotations are realized via stimulated Raman transitions, allowing for arbitrary local rotations on qubit $i$ of the form
\begin{equation}
R_i(\theta,\phi)=\exp\left[-i\frac{\theta}{2}\left(\cos\phi\;X_i+\sin\phi\;Y_i\right)\right].
\end{equation}
Local rotations can also be carried out simultaneously on two qubit ions commonly confined in the LIZ, in which case the Pauli operators are to be replaced by the respective tensor sum operators. Entangling gates between any two qubits $i$ and $j$ are realized via spin-dependent optical dipole forces \cite{LEIBFRIED2003A}, effecting a phase shift $\Phi$ between even and odd parity (with respect to the $Z$ basis) states, represented by the unitary
\begin{eqnarray}
ZZ_{ij}(\Phi)&=&e^{\frac{i}{2}\Phi Z_i \otimes Z_j }.
\label{eq:entanglinggatesPhi}
\end{eqnarray}
We employ a maximally entangling gate with total phase $\Phi = \pi/2$, accumulated from two separate gate pulses, interspersed by a rephasing $\pi$ pulse. This leads to the total gate unitary
\begin{eqnarray}
G_{ij}&=&ZZ_{ij}(\pi/4)R(\pi,-\pi/2)ZZ_{ij}(\pi/4).
\label{eq:entanglinggates}
\end{eqnarray}
The rephasing pulses serve to maintain coherence \cite{Biercuk2009}, especially on the syndrome and flag qubits, which undergo multiple entangling gates. Upon gate operations, the potential well in the LIZ features single-ion secular frequencies of $2\pi\times\{1.49,3.88,4.64\}$~MHz, with the lowest frequency pertaining to the trap axis. Entangling gates are carried out using the transverse in-phase collective vibrational mode at $2\pi \times 4.64$~MHz as the gate-mediating mode. The laser beam geometry is chosen such that the gate operations are insensitive to the collective modes oscillating along the trap axis, which accumulate excitation from shuttling operations \cite{PhysRevLett.119.150503}. \\
The shuttling schedule realizing the FT PCM is constructed from the primitive operations described above, such that the total count of shuttling operations and the maximum spatial extent of the register are minimized, while additional constraints such as the minimum number of empty trap segments between two qubit sets are always fulfilled. Initially, the qubits are stored pairwise in order $\{d_2,d_1\},\{s,f\}$ and $\{d_3,d_4\}$. The ion pairs are sequentially moved to the LIZ, where all four transverse modes are cooled close to the ground state via resolved-sideband cooling \cite{POSCHINGER2009}, and the qubits are initialized to $\ket{0}$ via optical pumping. Then, the data qubit sets $\{d_2,d_1\},\{d_3,d_4\}$ are moved to the LIZ, where they are separated. Each data qubit is again moved into the LIZ, where optional $\pi$-flips allow for preparation of any desired logical basis state. A similar procedure is then carried out for the syndrome and flag qubits, which are prepared in superposition states via $\pi/2$ rotations. Then, the parity mapping sequence is carried out: the qubits undergo pairwise entangling gates according to Eq. (\ref{eq:entanglinggates}) in the sequence $d_1s,sf,d_2s,d_3s,sf,d_4s$. Before and after each gate, an optional $\pi/2$ rotation can be carried out on the participating data qubit in order to change the basis. Between two consecutive gates, a sequence of movement, separate / merge and position-swap operations is carried out, bringing the qubit pair on which the following gate operation is to be carried out to the LIZ, see Fig. \ref{fig:6IonFTRShuttling}. Upon completion of the gate sequence, the syndrome and flag qubits are separately moved to the LIZ and each undergo an analysis $\pi/2$ rotation. Qubit phases accumulated from positioning in the inhomogeneous magnetic field have been calibrated via previous Ramsey-type measurements \cite{WALTHER2012,PhysRevX.7.031050} and are corrected for.
Upon completion of the gate sequence, the qubits are kept pairwise in order $\{d_2,d_1\},\{s,f\}$ and $\{d_3,d_4\}$. These pairs are sequentially moved to the LIZ in reverse order, where laser-driven population transfer from $\ket{0}$ to the metastable $D_{5/2}$ state takes place. Then, the qubits are singled at the LIZ, where state-dependent laser-induced fluorescence is detected. Thresholding the number of detected photons allows for assigning detection events to logical ($Z$) basis states, and equivalently to eigenvalues $M_i=\pm 1$ of the Pauli operator $Z_i$ of qubit $i$:
\begin{eqnarray}
\text{`dark'} \rightarrow\ket{0} \Leftrightarrow M_i^{(Z)}=+1 \nonumber \\
\text{`bright'} \rightarrow\ket{1} \Leftrightarrow M_i^{(Z)}=-1
\end{eqnarray}
Logical results on rotated bases $M_i^{(X)}=\pm 1$ ($M_i^{(Y)}=\pm 1$) are acquired by performing an analysis rotation $R(\pi/2,-\pi/2)$ ($R(\pi/2,0)$) on the respective qubit before shelving and fluorescence detection. As the population transfer on all qubits is carried out before fluorescence detection, cross-talk errors throughout readout are avoided. Details on qubit and shuttling operations and the sequences for register preparation and readout can be found in the supplemental material \cite{supplemental}.
\section{Measurement Results}
\subsection{Parity readout in the logical basis}
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth]{./figs/6IonFTR_Syndrome_FlagSelect.pdf}
\caption{Fault-tolerant parity readout. The syndrome $M_s^{(X)}=-1$ event rate is shown for each computational basis input state of the data qubits, for all valid shots (blue) and post-selected on the flag qubit (green), versus ideal rate (white). 960 shots per input state are measured, the average shot noise error per input state is about 7$\times 10^{-3}$. The flag $M_f^{(X)}=-1$ readout rate is shown separated at the right (red). }
\label{fig:6IonFTR}
\end{center}
\end{figure}
We first verify the functionality of the FT PCM protocol by carrying out the sequence shown in Fig. \ref{fig:6IonFTRShuttling}, while preparing the data qubits in all 16 computational basis states. Syndrome and flag qubit are initialized to $\ket{-} = \frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$ by means of an $R(\pi/2,-\pi/2)$ rotation on the initial state $\ket{0}$. The measurement results $M_s^{(X)}$ of the syndrome are compared to the parity of the input state. We define the parity fidelity as
\begin{eqnarray}
\mathcal{P}&=&\frac{1}{2}\big[p(M_s^{(X)}=-1\;|\;P_{in}=+1) \nonumber \\
&&+p(M_s^{(X)}=+1\;|\;P_{in}=-1)\big],
\end{eqnarray}
i.e. the probability for the correct syndrome readout result $M_s^{(X)}$ conditioned on the input parity $P_{in}$ of the data qubits. For 960 shots per input state, we measure $\mathcal{P}=$~92.3(2)\%, see Fig.~\ref{fig:6IonFTR}. For 93.7(2)\% of all shots, the flag qubit is detected as $M_f^{(X)}=-1$, indicating a low rate of weight-2 errors. Post-selecting the syndrome measurement on the flag readout, we obtain a conditional parity fidelity of $\mathcal{P}=$~93.2(2)\%. It exceeds the bare parity fidelity by 4.5 standard errors, thus showing that the FT scheme operates in the regime where it can catch native errors occurring throughout the PCM sequence. A discussion on the relevant error sources can be found in the supplemental material \cite{supplemental}.
\renewcommand{\arraystretch}{1.6}
\begin{table*}
\centering
\begin{tabular}{ |wc{0.7cm}|p{4cm}||wc{5.5cm}|wc{6.5cm}| }
\hline
& & $n=4$ & $n=6$ \\
\hline \hline
1 & GME state $\ket{\psi}^{(n)}_{out}$ & ${1\over \sqrt{2}}\left(\ket{\psi}_{out}^{(4)}\ket{-}_{s}+\ket{\psi^{\perp}}_{out}^{(4)}\ket{+}_{s}\right)$ & ${1\over \sqrt{2}}\left(\ket{\psi}_{out}^{(5)}\ket{-}_{f}+i\ket{\psi^{\perp}}_{out}^{(5)}\ket{+}_{f}\right)$ \\
\hline
2 & Substate $M_{s/f}^{(X)} = +1$ & $\ket{\psi}_{out}^{(4)}={1\over \sqrt{2}}\left(\ket{--++}+\ket{++--}\right)$ & $\ket{\psi}_{out}^{(5)}={1\over \sqrt{2}}\left(\ket{--++-}+\ket{++--+}\right)$ \\
\hline
3 & Substate $M_{s/f}^{(X)} = -1$ & $\ket{\psi^{\perp}}_{out}^{(4)}={1\over \sqrt{2}}\left(\ket{--++}-\ket{++--}\right)$ & $\ket{\psi^{\perp}}_{out}^{(5)}={1\over \sqrt{2}}\left(\ket{----+}+\ket{++++-}\right)$ \\
\hline
& & $\{ g_1 = X_1X_2, g_2=-X_2X_3,$ & $\{ g_1 = -X_3X_s, g_2=-X_4X_s, $\\
4 & Stabilizer generator set $\mathcal{S}_n$ & $g_3=X_3X_4, g_4=\pm Z_1Z_2Z_3Z_4\}$ & $g_3=-X_1X_sX_f, g_4=-X_2X_sX_f, $\\
&&& $g_5=Z_1Z_2Y_f,g_6=Z_1Z_2Z_3Z_4Z_s\}$\\
\hline
5 & Witness operator $W_n$ & $l_4\mathbb{1}-{1\over 4}\sum_{i=1}^{4}g_i$ & $ l_6\mathbb{1}-{1\over 6}\sum_{i=1}^{6}g_i$\\
\hline
6 & Bound $l_n$ & 3/4 & 5/6\\
\hline
\end{tabular}
\caption{Properties of the entangled states generated by the PCM circuit for suitable input states. We distinguish the cases of the $n=4$ data qubits and all $n=6$ qubits involved in the respective GME state. The GME state vectors (before any measurements) are shown in line \textbf{1}. The corresponding substates are shown in lines \textbf{2} and \textbf{3}. The set of stabilizer generators with eigenvalues $+1$ which fix the four-qubit states $\ket{\psi}_{out}^{(4)}$ and $\ket{\psi^{\perp}}_{out}^{(4)}$, as well as the six-qubit state $\ket{\psi}^{(6)}_{out}$ are shown in line \textbf{4}, witness operators $W_n$ constructed based on these generator sets, according to Eq.~(\ref{eq:the-witness-we-use}), are displayed in line \textbf{5} together with the corresponding threshold values $l_n = (n-1)/n$ in line \textbf{6}.}
\label{tab:GMEeqs}
\end{table*}
\subsection{Error Injection}
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth]{./figs/6IonFTR_ArtificialError_SyndromeFlag.pdf}
\caption{Fault-tolerant parity readout including an injected error between gates $d_2s$ and $d_3s$. The syndrome $M_s^{(X)}=-1$ event rate is shown versus the logical input state of the data qubits. 140 shots per input state are measured, the average shot noise error per input state is about 2.5$\times 10^{-2}$. \textbf{a)} pertains to a $Y$-type error, which does not affect the syndrome readout. \textbf{b)} pertains to an $X$-type error, which also flips the logical result of the syndrome. In both cases, the flag qubit is detected predominantly in $M_f^{(X)}=+1$, corresponding to detection of the error.}
\label{fig:6IonFTR_Error}
\end{center}
\end{figure}
In order to explicitly demonstrate that the FT PCM scheme can reliably detect hook errors, we deliberately inject errors $R_s(\pi,0) \equiv X_s$ or $R_s(\pi,\pi/2) \equiv Y_s$ on the syndrome qubit (equivalence up to a global phase), between gates $d_2s$ and $d_3s$, see Fig. \ref{fig:circuit}. The resulting $M_{s,f}^{(X)}=-1$ event rates for syndrome and flag are shown in Fig. \ref{fig:6IonFTR_Error}. The injected $Y_s$, which corresponds to the simultaneous occurrence of a bit and a phase flip error, does not commute with the subsequent entangling gates, Eq. (\ref{eq:entanglinggates}). The three subsequent entangling gates involving the syndrome qubit lead to a final $X_s$ error, which is not detected upon syndrome measurement. It also proliferates to the data qubits in the form of $Z_3$ and $Z_4$ errors and would therefore compromise the encoded state. However, as it propagates to the flag as a $Z_f$ error via entangling gate $sf$, it can still be detected. We observe an error detection rate of 90.6(6)\% $M_f^{(X)}=+1$ events on the flag. The syndrome still corresponds to the logical input state's parity with $\mathcal{P}=$~88.3(7)\%. By contrast, the injected $X_s$ error results in a final $Y_s$ error, such that the final syndrome $M_s^{(X)}=-1$ events anti-correlate with the input parity, yielding $\mathcal{P}=$~14.7(7)\%. Thus, as expected, the injected error results in a PCM measurement error. Similarly to the $Y_s$ error, an error detection rate of 89.7(6)\% $M_f^{(X)}=+1$ events is observed on the flag qubit. The latter indicates that the flag qubit again reliably detects the propagation of potentially detrimental weight-2 $Z$ errors onto the data qubits, as is required to preserve the fault-tolerance of the scheme.
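The error propagation just described can be verified with a simple Pauli-frame sketch: conjugation by a $ZZ(\pi/2)$ gate multiplies $Z$ onto both qubits exactly when one of the two local Pauli components anticommutes with $Z$ (global phases dropped). The pure-Python code below is an illustrative check, not part of the experiment; qubit labels and the gate list follow the circuit of Fig.~\ref{fig:circuit}c.

```python
TIMES_Z = {"I": "Z", "X": "Y", "Y": "X", "Z": "I"}  # Pauli * Z, up to phase

def propagate(frame, gates):
    """Push a Pauli error frame through a list of ZZ(pi/2) gates."""
    for a, b in gates:
        flips = [q for q in (a, b) if frame.get(q, "I") in ("X", "Y")]
        if len(flips) == 1:          # error anticommutes with Z_a Z_b
            frame[a] = TIMES_Z[frame.get(a, "I")]
            frame[b] = TIMES_Z[frame.get(b, "I")]
    return {q: p for q, p in frame.items() if p != "I"}

# Gates remaining after the injection point between d2s and d3s:
remaining = [("d3", "s"), ("s", "f"), ("d4", "s")]
print(propagate({"s": "Y"}, remaining))
# -> {'s': 'X', 'd3': 'Z', 'f': 'Z', 'd4': 'Z'}: an undetected X_s on the
#    syndrome, a weight-2 Z error on the data, and a Z_f caught by the flag.
```

Running the same tracker with an injected $X_s$ yields a final $Y_s$ together with $Z_3 Z_4 Z_f$, matching the observed anti-correlation of the syndrome readout with the input parity.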
\subsection{Generation of Genuine Multipartite Entanglement}
The PCM scheme can be used to generate maximally entangled states. We verify four-qubit GME involving only the data qubits and extend it to the case of six-qubit GME, including the syndrome and flag qubits. The verification of $n$-qubit GME is carried out efficiently via measurement of witness operators \cite{GUEHNE09,BOUR04}, as the measurement overhead for complete state tomography \cite{ROOS04} would scale detrimentally with $n$.
In contrast to the measurements discussed before, now all data qubits are initialized in $\ket{+}={1 \over \sqrt{2}}(\ket{0}+\ket{1})$ via local rotations $R_{d_i}(3\pi/2,-\pi/2)$ applied to $\ket{0}$. The GME states generated by the PCM
are listed in Table \ref{tab:GMEeqs} (lines 1-3). The GME states are verified via entanglement witnessing~\cite{GUEHNE09,Friis2019}.
An entanglement witness $W$ is an observable whose expectation value is by construction non-negative for all separable states, $\text{tr}(W\rho) \geq 0$, and negative for specific entangled states, $\text{tr}(W\rho) < 0$. The four- and six-qubit output states $\ket{\psi}^{(n)}_{out}$ of the PCM circuit belong to the class of stabilizer states, for which we use entanglement witness operators of the form
\begin{equation}
W_n = l_n\mathbb{1}-{1\over n}\sum_{i=1}^{n}g_i
\label{eq:the-witness-we-use}
\end{equation}
with the constant $l_n = (n-1)/n$. These witnesses correspond -- up to a normalization factor $1/n$ -- to the witnesses proposed in Refs.~\cite{Toth2005,PhysRevLett.94.060501}. They can be efficiently evaluated as they require the measurement of only the $n$ stabilizer generators $g_i$ (see Table \ref{tab:GMEeqs}, line 4). The expected ideal output states can be uniquely defined as eigenstates of the stabilizer generators with eigenvalue $+1$:
\begin{equation}
g_i \ket{\psi}^{(n)}_{out} = + \ket{\psi}^{(n)}_{out}
\end{equation}
Thus, GME in the experimentally prepared $n$-qubit states is signalled by a negative witness expectation value, $\langle W_n \rangle < 0$, which is the case if the average generator expectation value $\frac{1}{n}\sum_{i=1}^{n}\langle g_{i}\rangle$ exceeds the threshold value of $l_n=(n-1)/n$, amounting to thresholds of $l_4=3/4$ and $l_6 =5/6$ for the verification of four- and six-qubit GME, respectively. Each generator expectation value $\langle g_i\rangle$ is determined by measuring the qubit register in a measurement setting where each qubit is subjected to appropriate analysis pulses before readout, which feature drive phases corrected for systematic phases acquired throughout the PCM sequence.
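As a concrete illustration (our own bookkeeping, not part of the published analysis), evaluating the witness of Eq.~\ref{eq:the-witness-we-use} reduces to a simple average once the $n$ generator expectation values have been measured:

```python
import numpy as np

def witness_value(gen_expvals):
    """<W_n> = (n-1)/n - (1/n) * sum_i <g_i> for the stabilizer witness."""
    g = np.asarray(gen_expvals, dtype=float)
    n = g.size
    return (n - 1) / n - g.mean()

# Ideal four-qubit stabilizer state: all generators at +1
print(witness_value([1.0, 1.0, 1.0, 1.0]))   # -0.25, the minimum value for n = 4

# GME is certified only if the mean generator value exceeds l_n = (n-1)/n
assert witness_value([0.9] * 4) < 0   # mean 0.9 > 3/4: certified
assert witness_value([0.7] * 4) > 0   # mean 0.7 < 3/4: not certified
```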
For preparation of a four-qubit GME state, the entangling gate pulses $sf$ between the syndrome and flag qubit are switched off, while the respective rephasing pulses are retained. The measured expectation values of the four stabilizer generators defined on the data qubits are shown in Fig.~\ref{fig:4QBGME}, each conditioned on the syndrome measurement result $M_s^{(X)}$. Consistent with the four-qubit GME state in Table \ref{tab:GMEeqs}, $X_1X_2$ and $X_3X_4$ display even parity, while $X_2X_3$ displays odd parity. The parity of $Z_1Z_2Z_3Z_4$ depends on the syndrome readout result $M_s^{(X)}$, as upon syndrome readout, the four data qubits are projected into the respective $\pm 1$ eigenspace of the measured parity check operator. For the input product state $\ket{+}^{\otimes 4}$ of the data qubits, they are projected into one of two four-qubit GHZ-type states. As the substates of the data qubits forming the GME state feature opposite parities in the $X$ basis, this shows that the PCM circuit reliably measures the parity in this basis upon initial basis change of the data qubits. Evaluating the entanglement witness expectation value conditionally on the syndrome measurement result, we obtain $\expec{W_{4}}=-0.14(1)$ for $M_s^{(X)}=+1$ and $\expec{W_{4}}=-0.11(1)$ for $M_s^{(X)}=-1$, respectively. Both values fall below zero by more than 10 standard errors; we therefore certify conditional four-qubit GME.
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=0.9\columnwidth]{./figs/4QB_GME_ExpValues_all.pdf}
\caption{Expectation values of stabilizer generators of the four-qubit GME state. About 330 shots per $X$-type stabilizer and 990 shots for the $Z$-type stabilizer are acquired. The results are conditioned on the readout result $M_s^{(X)}$ of the syndrome. The average shot-noise errors are about 2$\times 10^{-2}$ for all stabilizer expectation values.}
\label{fig:4QBGME}
\end{center}
\end{figure}
For the generation of six-qubit GME, we introduce additional rotations on the syndrome qubit, $R_s(\pi/2, 0)$, between the $d_2s$ and $d_3s$ entangling gates \cite{PRXQuantum.2.020304} and $R_s(3\pi/2, -\pi/2)$ directly before the analysis rotation. Note that the first of these rotations on the syndrome qubit can be interpreted in the QEC context as a coherent rotation error, which propagates through the subsequent gates and results in a six-qubit equal-weighted coherent superposition state. Here, the first component corresponds to the state where the error has propagated into two data qubit errors, captured by the flag qubit (in $\ket{-}_f$), and the second corresponds to the fault-free component (in $\ket{+}_f$). The measured stabilizer expectation values are shown in Fig. \ref{fig:6QBGME}, from which we compute the expectation value of the witness $W_6$ (see Table \ref{tab:GMEeqs}, line 5), obtaining $\expec{W_{6}}=-0.031(8)$. Falling below zero by 3.8 standard errors, we certify the capability of the flag-based PCM circuit to generate a six-qubit GME state.
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth]{./figs/6QB_GME_ExpValues_all.pdf}
\caption{Expectation values of stabilizer generators $g_i$ for verification of six-qubit GME. Each $X$-type stabilizer is evaluated from 500 shots, while the $Z$-type stabilizers are evaluated from 1000 shots each. The average shot-noise errors are about 2$\times 10^{-2}$ for all stabilizer expectation values.}
\label{fig:6QBGME}
\end{center}
\end{figure}
\section{Discussion and Outlook}
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth]{./figs/6IonFTR_Timing_Barplot.pdf}
\caption{Timing budget of the FT PCM sequence. The sequence is subdivided into register preparation, the actual PCM gate sequence shown in Fig. \ref{fig:6IonFTRShuttling}, and the readout and verification of all qubits. For each block we show the timing overhead for shuttling operations (green) and laser-driven gate operations (blue).}
\label{fig:6IonFTRTiming}
\end{center}
\end{figure}
We have successfully demonstrated a low qubit-overhead FT PCM scheme on a shuttling-based trapped-ion quantum processor. To that end, we have verified a parity measurement with high single-shot fidelity, which increases when taking the flag ancilla into account. By introducing errors deliberately, we have shown that the flag ancilla reliably detects the occurrence of errors which would otherwise proliferate into uncorrectable weight-2 errors on the data qubit register. This verifies the FT operation of the PCM as a key building block for QEC protocols.
Furthermore, we have efficiently and holistically benchmarked the proper operation of the FT PCM scheme by witnessing four- and six-qubit GME generation for suitable input states. A key enabling feature of the FT PCM scheme is the virtually complete absence of crosstalk errors during gate operations. Beyond their relevance in the context of FT QEC, our results demonstrate the capability of realizing multi-qubit quantum protocols with effective all-to-all connectivity on a shuttling-based quantum processor architecture.
The timing budget of the protocol is shown in Fig. \ref{fig:6IonFTRTiming}. About 23\% of the duty cycle pertains to the PCM gate sequence, while the remainder pertains to register initialization -- mostly cooling -- and fluorescence readout. Within the actual PCM gate sequence, 5\% of the execution time is given by laser-driven gate interactions, while 95\% is consumed by shuttling operation overhead. While the preparation and readout overheads scale linearly with the register size, the shuttling overhead pertaining to the gate sequences can scale up to quadratically with the qubit number, depending on the connectivity required by the underlying protocol. This overhead can be mitigated by improving the control hardware to decrease the time required for qubit register reconfigurations, but also by using multiple manipulation sites for parallel processing \cite{Mehta2020}. We have carried out a circuit of moderate depth \textit{without} the requirement of in-sequence cooling for removing excitation of vibrational modes of the ions incurred from the shuttling operations. Future extensions will include dual-species operation for sympathetic cooling, allowing for further increase of sequence depths, and for non-destructive in-sequence readout \cite{Ballance2015,Tan2015}. These improvements will pave the way for using the FT PCM demonstrated here as a building block for FT QEC cycles executed on complete logical qubits \cite{PhysRevX.7.041061,PhysRevA.100.062307}.
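The quoted timing fractions can be combined into an overall duty-cycle estimate. The short sketch below is our own bookkeeping with the numbers taken from the text; it makes explicit that laser-driven gates occupy only about 1\% of the full cycle:

```python
# Timing fractions quoted in the text (cf. the timing-budget figure)
pcm_fraction = 0.23       # PCM gate-sequence share of the full duty cycle
gate_share = 0.05         # laser-driven gates within the PCM block
shuttle_share = 0.95      # shuttling overhead within the PCM block

overall_gates = pcm_fraction * gate_share          # roughly 1.2% of the full cycle
overall_shuttling = pcm_fraction * shuttle_share   # roughly 22% of the full cycle
prep_and_readout = 1.0 - pcm_fraction              # remaining ~77%: cooling, readout

print(f"gates: {overall_gates:.1%}, shuttling: {overall_shuttling:.1%}, "
      f"prep/readout: {prep_and_readout:.1%}")
```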
\begin{acknowledgments}
We acknowledge former contributions of Thomas Ruster, Henning Kaufmann and Christian Schmiegelow to the experimental apparatus. The research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the U.S. Army Research Office grant W911NF-16-1-0070. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the U.S. Army Research Office. We gratefully acknowledge funding from the EU H2020-FETFLAG-2018-03 program under Grant Agreement no. 820495, the European Research Council (ERC) via ERC StG QNets Grant Number 804247, and by the German Federal Ministry of Education and Research (BMBF) via the VDI within the projects VERTICONS and IQuAn.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Qubit operations}
\subsection{$^{40}$Ca$^+$ spin qubit}
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=0.8 \columnwidth,trim={4cm 0cm 8cm 0cm},clip]{./figs/levelscheme.pdf}
\caption{Relevant atomic levels of a $^{40}$Ca$^+$ ion encoding the spin qubit employed in this work.}
\label{fig:levelscheme}
\end{center}
\end{figure}
\begin{figure*}[h!tp]\begin{center}
\includegraphics[width=\textwidth]{./figs/6IonFTR_Shuttling_RegisterPrep.pdf}
\caption{Shuttling of the register preparation sequence part described in Sec. \ref{sec:init}. The two-qubit building blocks sequentially undergo Doppler laser cooling (light blue arrows, up to 6~ms), pulsed sideband cooling on all transverse modes (purple arrows, up to 23~ms), optical pumping (light blue arrows, at 24~ms) and optional qubit rotations for preparation of arbitrary logical states of the data qubits (pink arrows, up to 26~ms). }
\label{fig:6IonFTRShuttling_RegisterPrep}
\end{center}
\end{figure*}
To store and process quantum information, the spin of the valence electron of $^{40}$Ca$^+$ ions is used. The qubit is encoded in the Zeeman sublevels of the $S_{1/2}$ ground state, assigning $\ket{0}\equiv \ket{S_{1/2},m_J=+1/2}$ and $\ket{1}\equiv \ket{S_{1/2},m_J=-1/2}$ \cite{POSCHINGER2009,RusterLongLived2016}. The relevant atomic transitions of $^{40}$Ca$^+$ are shown in Fig. \ref{fig:levelscheme}. Permanent magnets are used to produce a highly stable magnetic field of around 3.7~G, leading to a qubit state Zeeman splitting of around $2\pi\times10$~MHz \cite{RusterLongLived2016}. This splitting is smaller than the natural linewidth of the $S_{1/2}\leftrightarrow P_{1/2}$ transition, which facilitates Doppler cooling and detection.
\subsection{Initialization}
\label{sec:init}
At the beginning of every measurement cycle, all ions are laser cooled (see Sec. \ref{sec:cooling}) and initialized to $\ket{0}\equiv \ket{S_{1/2},m_J=+1/2}$. This initialization is performed by moving the three pairs of commonly confined ions sequentially into the LIZ and applying an optical pumping sequence. The sequence is a combination of two pumping stages. First, the qubits in the LIZ are exposed to a $\sigma_+$ polarized 397~nm beam for a duration of 1~\si{\micro\second}, depleting the $\ket{1}$ state. To increase the state preparation fidelity, a second, frequency-selective, pumping stage is employed. Here, four cycles of optical pumping are used, consisting of a $\pi$-pulse on the dipole-forbidden $\ket{S_{1/2},m_J=-1/2}\leftrightarrow\ket{D_{5/2},m_J=+3/2}$ transition near 729~nm, at a duration of about 10~\si{\micro\second}. Each $\pi$-pulse is followed by exposure to the 854~nm 'quench' laser for 4~\si{\micro\second}, depleting the $D_{5/2}$ state again. This pumping scheme initializes all qubits to $\ket{0}$ with infidelity $<0.1$~\%.\\
In order to prepare different logical input states of the data qubits, the data qubits are moved into the LIZ again after initialization to $\ket{0}$, where an optional rotation $R(\pi,-\pi/2)$ is performed. This finalizes the register preparation part of the measurement sequence, which can be found in Fig. \ref{fig:6IonFTRShuttling_RegisterPrep}. The syndrome and flag are initialized to $\ket{-}$ by a single-qubit rotation $R(\pi/2,-\pi/2)$ applied at the beginning of the gate sequence.
Readout rotation pulses $R(\pi/2,-\pi/2)$ on syndrome and flag are performed at the end of the gate sequence.
\subsection{Qubit rotations}
Local qubit rotations are carried out by driving a stimulated Raman transition with two beams near 397~nm, whose frequency difference matches the qubit frequency. The drive field is detuned from the $S_{1/2} \leftrightarrow P_{1/2}$ transition by about $2\pi\times$~1~THz. The beams are co-propagating, therefore the effective wavevector is zero and the qubit drive does not couple to the motion of the qubit ions. Single-qubit Clifford error-per-gate rates as low as 10$^{-4}$ have been measured via randomized benchmarking.
Local qubit rotations occurring after initial rotations are corrected for systematic phases. The analysis pulses on the syndrome and flag qubits are corrected by a calibrated phase, for taking into account additional systematic phases resulting from the dynamical repositioning of the qubit ions in an inhomogeneous magnetic field, see Sec. \ref{sec:sip}. For the data qubits only, the analysis pulse phases are shifted by $-\pi/2$ to take phase shifts from the entangling gates into account. For the GME generation and verification presented in the main manuscript, we apply the initial $3\pi/2$ rotation on each data qubit directly before the respective entangling gate to the syndrome, and the analysis rotation directly after the gate. This way, the data qubits spend a minimum amount of time in a superposition state, which mitigates errors from dephasing.
\subsection{Entangling gates}
Entanglement between two qubits is realized using a geometric phase gate. Two laser beams at around 397~nm with a red detuning of about $2\pi\times$1.0~THz from the $S_{1/2}\leftrightarrow P_{1/2}$ transition are aligned such that the effective $\Vec{k}$-vector is oriented perpendicularly to the trap axis. The beams are arranged in lin-$\perp$-lin polarization geometry, the beat pattern therefore has a polarization gradient and leads to a spin-dependent optical dipole force on the two ions. The frequency difference of the beams is tuned close to the transverse in-phase (gate) mode at $2\pi\times4.64$~MHz, up to a detuning of $\delta \approx 2\pi\times20$~kHz. This almost-resonant drive force leads to transient oscillatory excitation of the gate mode, returning to rest at a duration of $T=2\pi/\delta \approx 50~\si{\micro\second}$. A geometric phase $\Phi$ proportional to the enclosed phase space area will be acquired, which can be tuned by the laser power. To realize a maximally entangling gate, a phase of $\Phi = \pi/2$ is required. The unitary describing phase accumulation on the even parity qubit states is
\begin{eqnarray}
ZZ_{ij}(\Phi)&=&e^{\frac{i}{2}\Phi Z_i \otimes Z_j }.
\label{eq:entanglinggatesPhi}
\end{eqnarray}
The actual entangling gate operation consists of two gate pulses, each leading to a phase accumulation of $\Phi=\pi/4$, interspersed by an additional rephasing pulse $R(\pi, -\pi/2)$ with a typical duration of around $4~\si{\micro\second}$ after half the phase accumulation, which leads to a total gate unitary of
\begin{eqnarray}
G_{ij}&=&ZZ_{ij}(\pi/4)R(\pi,-\pi/2)ZZ_{ij}(\pi/4).
\label{eq:entanglinggates}
\end{eqnarray}
The gate pulses feature a Tukey-type shape, ensuring adiabatic switching of the gate interaction. The reduced bandwidth of the gate pulses leads to suppression of errors from off-resonant excitation of spectator motional modes \cite{BALLANCE2014}. Typical two-qubit gate fidelities of around 99.6(2)~\% are reached at a total gate duration of $120~\si{\micro\second}$, verified via subspace cycle benchmarking \cite{PhysRevResearch.2.013317}.
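As a cross-check of Eqs.~\ref{eq:entanglinggatesPhi} and \ref{eq:entanglinggates}, the composite gate can be built numerically and shown to be maximally entangling. The snippet below is an illustrative sketch under two assumptions not spelled out in the text: the rotation convention $R(\theta,\phi)=\exp[-i\tfrac{\theta}{2}(\cos\phi\,X+\sin\phi\,Y)]$, and that the rephasing pulse acts on both co-trapped ions.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def R(theta, phi):
    """Single-qubit rotation about an equatorial axis (assumed convention)."""
    return (np.cos(theta / 2) * I2
            - 1j * np.sin(theta / 2) * (np.cos(phi) * X + np.sin(phi) * Y))

def ZZ(phase):
    """Geometric phase gate exp(i*phase/2 * Z(x)Z), diagonal in the computational basis."""
    return np.diag(np.exp(1j * phase / 2 * np.array([1, -1, -1, 1])))

# Composite gate: two half-gates with a rephasing pi-pulse on both qubits in between
G = ZZ(np.pi / 4) @ np.kron(R(np.pi, -np.pi / 2), R(np.pi, -np.pi / 2)) @ ZZ(np.pi / 4)

# Apply to |++>: a maximally entangling gate leaves each qubit maximally mixed
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = (G @ np.kron(plus, plus)).reshape(2, 2)
rho = psi @ psi.conj().T                  # reduced state of the first qubit
purity = np.trace(rho @ rho).real
print(f"reduced-state purity: {purity:.2f}")  # 0.50
```

The echo pulse flips $Z\to -Z$ on both qubits, so the two half-gates add up to a full $ZZ(\pi/2)$ interaction rather than cancelling; the resulting reduced-state purity of $1/2$ confirms a maximally entangling operation.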
\subsection{Phases induced by ion positioning}
\label{sec:sip}
Throughout the dynamical reconfiguration of the qubits, ions are moved along the trap axis into different storage positions and acquire additional phases due to a small inhomogeneity of the magnetic field. The maximum difference of the qubit frequency is about $2\pi\times$~7~kHz across the entire trap. Accumulated phases can be described by local $Z$ rotations, which commute with the entangling gates and therefore do not perturb these. However, the analysis rotations ought to be corrected accordingly. We calibrate the phases from additional measurements, where the respective qubit is initialized in $\ket{-}$ and the shuttling sequence is carried out without executing entangling gates, but with the rephasing pulses retained. Instead of the final analysis pulses, $X$ and $Y$ measurements are carried out. From the respective expectation values, we obtain the positioning-induced phases via maximum likelihood estimation. With 40 shots per operator and qubit, we obtain a phase estimation accuracy of about 0.15~rad. For the measurement with injected errors, the phase $\phi_{err}$ of the error pulses on the syndrome is calibrated by scanning the phase of a local rotation $R(\pi, \phi_{err})$ at the error position, to perform a full spin-flip on the final syndrome readout.
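The phase extraction from $X$ and $Y$ expectation values can be sketched as follows. This is our own simplified moment-based estimator (in place of the full maximum-likelihood fit) assuming unit contrast and an arbitrary example phase; at 40 shots per operator its shot-noise-limited spread comes out of order 0.1~rad, comparable to the 0.15~rad quoted above, with the remaining difference plausibly due to finite experimental contrast.

```python
import numpy as np

rng = np.random.default_rng(7)

def phase_estimate(x_shots, y_shots):
    """Estimate the qubit phase from lists of +1/-1 X- and Y-measurement outcomes."""
    return np.arctan2(np.mean(y_shots), np.mean(x_shots))

true_phi, n_shots = 0.8, 40          # assumed positioning-induced phase; shots per operator
p_x = (1 + np.cos(true_phi)) / 2     # probability of +1 for an X measurement
p_y = (1 + np.sin(true_phi)) / 2     # probability of +1 for a Y measurement

errors = []
for _ in range(2000):                # repeat the calibration to estimate its spread
    x = np.where(rng.random(n_shots) < p_x, 1, -1)
    y = np.where(rng.random(n_shots) < p_y, 1, -1)
    errors.append(phase_estimate(x, y) - true_phi)
print(f"phase uncertainty: {np.std(errors):.2f} rad")
```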
\section{Register management}
\subsection{Loading}
The six qubits are loaded in pairs of two commonly confined ion qubits. To that end, three potential wells are formed using the dc electrodes of the segmented trap. The wells are spaced by at least three empty segments to reduce the probability of unintentional additional loading events. $^{40}$Ca$^+$ ions are obtained by resonant two-photon ionization from an effusive beam of $^{40}$Ca atoms, using two laser beams near 374~nm and 423~nm. The ions are usually trapped by applying a trapping voltage of -2.4~V at the LIZ, while potential wells in storage regions use a trapping voltage of -6~V. Upon successful trapping, the ions are moved from the loading region to a storage position, and the next empty potential well is moved into the LIZ for loading. The potential wells are cycled through the LIZ until all qubits are stored at their desired location.
\subsection{Cooling}
\label{sec:cooling}
In each measurement cycle, the ions are cooled close to the motional ground state via multiple cooling stages. First, Doppler cooling is performed using the $S_{1/2}\leftrightarrow P_{1/2}$ transition near 397~nm and an exposure time of 2~\si{\milli\second} per two commonly confined ions. The ion pairs are cooled in sequence $\{d_3,d_4\}$, $\{s,f\}$ and $\{d_2,d_1\}$. The Doppler-cooled ions are further cooled using pulsed resolved sideband cooling, driving the stimulated Raman transition on the red sidebands of the corresponding transverse motional modes. Each cooling pulse realizes an approximate $\pi$~pulse on the transition
\begin{equation}
\ket{0}\ket{n} \rightarrow \ket{1}\ket{n-1},
\end{equation}
such that the phonon number $n$ of the driven secular mode is reduced. After each pulse, optical pumping using a circularly polarized laser near 397~nm at a pulse duration of 1~\si{\micro\second} resets the state as
\begin{equation}
\ket{1}\ket{n-1} \rightarrow \ket{0}\ket{n-1}.
\end{equation}
The ion pairs are sideband cooled sequentially, in order $\{d_2,d_1\}$, $\{s,f\}$ and $\{d_3,d_4\}$. First, all pairs undergo a cooling sequence of a total duration of 4~ms, covering the 2nd and the 1st red sideband of all transverse modes for both axes perpendicular to the trap axis, in-phase $2\pi\times\{3.88,4.64\}$~MHz and out-of-phase $2\pi\times\{3.57,4.37\}$~MHz. All ions which have already been cooled accumulate a small amount of excitation due to anomalous heating, mostly on the in-phase modes. This is mitigated by a second, much shorter, round of sideband cooling only on the in-phase modes, performed in the same cooling order.
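A toy model of the pulsed cooling dynamics illustrates how a thermal phonon distribution is pumped toward $n=0$. This is our own sketch: it assumes a fixed per-pulse transfer efficiency and ignores the $n$-dependence of the sideband Rabi frequency, so the numbers are illustrative only.

```python
import numpy as np

def cooling_pulse(p, eff=0.9):
    """One red-sideband pi-pulse plus repump: |n> -> |n-1> with probability eff."""
    q = np.zeros_like(p)
    q[0] = p[0] + eff * p[1]                  # ground state only gains population
    q[1:-1] = (1 - eff) * p[1:-1] + eff * p[2:]
    q[-1] = (1 - eff) * p[-1]                 # truncation at the highest level
    return q

nmax, nbar = 60, 5.0                          # assumed initial thermal mean phonon number
n = np.arange(nmax)
p = (nbar / (1 + nbar)) ** n / (1 + nbar)     # truncated thermal distribution
p /= p.sum()

for _ in range(30):                           # 30 cooling pulses
    p = cooling_pulse(p)
print(f"mean phonon number after cooling: {np.dot(n, p):.3f}")
```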
\subsection{Multi-qubit readout}
\label{sec:mqreadout}
\begin{figure*}[h!tp]\begin{center}
\includegraphics[width=\textwidth]{./figs/6IonFTR_Shuttling_Readout.pdf}
\caption{Shuttling of the multi-qubit readout sequence part described in Sec. \ref{sec:mqreadout}. The two-qubit building blocks sequentially undergo electron shelving (red arrows, up to 43~ms). Then, state-dependent fluorescence is detected after separating each pair (light blue arrows, up to 51~ms). Another round of detection is performed including qubit reset to verify register integrity (light blue arrows, up to 59~ms). }
\label{fig:6IonFTRShuttling_Readout}
\end{center}
\end{figure*}
The sequence covers two rounds of detection for each qubit, to read out the state of the qubit and to ensure the validity of the measurement run. First, readout of the spin qubit requires electron shelving, i.e. selective population transfer from $\ket{0}$ to the metastable state $D_{5/2}$, which possesses a radiative lifetime of about 1~s. This transfer is achieved by using rapid adiabatic passage pulses on the sub-transitions $\ket{0\equiv S_{1/2},m_J=+1/2}\leftrightarrow\ket{D_{5/2},m_J=+1/2}$ and $\ket{0\equiv S_{1/2},m_J=+1/2}\leftrightarrow\ket{D_{5/2},m_J=-3/2}$. For both transitions, the ions are exposed to a chirped Gaussian-shaped laser pulse, varying the optical frequency by $\pm$60~\si{\kHz} around resonance within a duration of 200~\si{\micro\second}. The pulse parameters are chosen to maximize the transfer probability from $\ket{0}$ and to minimize parasitic transfer from $\ket{1}$. The qubits undergo electron shelving pairwise, in order $\{d_3,d_4\}$, $\{s,f\}$ and $\{d_2,d_1\}$. After pairwise shelving of all ions, a nearly identical detection sequence is executed twice, in reverse order. Here, all ion pairs are sequentially moved to the LIZ. Then, the pairs are separated, and the single ions are consecutively moved to the LIZ. They are exposed to a laser beam resonantly driving the cycling transition near 397~nm at about one-fold saturation. Scattered photons are collected and detected on a photomultiplier tube. Comparing the number of photons detected within 800~\si{\micro\second} to a threshold allows for discrimination of $\ket{0}$ and $\ket{1}$. The same detection sequence is repeated, including an additional laser beam near 854~nm, which depletes the metastable state via the $D_{5/2} \leftrightarrow P_{3/2}$ electric dipole transition. A complete set of 'bright' events at the second detection verifies that no ion losses have occurred during the measurement cycle.
For the FT PCM shuttling sequence, we obtain a valid measurement cycle ratio of around 83\%. Finally, the ions are moved back to the initial loading positions.
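Threshold-based state discrimination of this kind can be sketched with Poisson counting statistics. The count rates below are illustrative assumptions, not the measured values; the sketch shows how an optimal count threshold and the resulting misclassification probability follow from the bright and dark count means.

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for Poisson-distributed photon counts with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def discrimination_error(thr, dark_mean, bright_mean):
    """Mean misclassification probability for count threshold thr:
    a 'dark' ion counted as bright (N > thr) or a 'bright' ion as dark (N <= thr)."""
    return (1 - poisson_cdf(thr, dark_mean) + poisson_cdf(thr, bright_mean)) / 2

dark_mean, bright_mean = 1.0, 20.0   # assumed mean counts in the 800 us window
best = min(range(1, 15), key=lambda t: discrimination_error(t, dark_mean, bright_mean))
print(f"optimal threshold: {best} counts, "
      f"error: {discrimination_error(best, dark_mean, bright_mean):.1e}")
```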
\section{Shuttling operations}
\subsection{Movement}
The microstructured, segmented radio frequency ion trap consists of 32 pairs of dc electrodes, referred to as trap segments. At most two ions are stored in one potential, to harness the lower number of motional modes in a smaller ion crystal. In order to move ions to a neighboring segment, the voltage at the target segment is gradually put to negative bias, while the negative trapping voltage at the original segment is slowly increased at the same time. This way, the confining electrostatic potential well shifts from its original position to the destination segment. The voltage ramps are optimized to minimize the final motional excitation of the ions after the movement over a distance of one segment. The movement between two neighboring segments is performed within 20.9~\si{\micro\second}. Transport over larger segment ranges is realized by concatenated application of single-segment movements. A waiting time of 50.6~\si{\micro\second} is inserted after the last shuttling operation, before any laser-driven qubit operation is to be carried out. This ensures that the ions settle to the rest position in the LIZ.
\subsection{Separate / merge}
In order to obtain the required effective all-to-all connectivity within the six-qubit register, separation and merging of two qubit ions from / to common confinement is required. Separation is realized by dynamic control of the confining potential, transferring from a single-well to a double-well potential. In order to avoid excessive motional excitation from this operation, precise calibration of the process parameters is required. A 'tilt' voltage difference between the neighboring segments is required to compensate a stray field along the trap axis, which needs to be calibrated to 1~mV precision to ensure low residual motion after the separation. This voltage is automatically recalibrated throughout data acquisition, as its value may drift due to UV light induced charge accumulation at the trap surfaces. A total separation duration of about 100~\si{\micro\second} is used. The merging process is merely the time-reversed operation, employing the time-reversed voltage ramps with the same calibration parameters. The harmonic confinement along the trap axis is reduced throughout separation / merge processes, down to a minimum value of about $2\pi\times$~220~kHz.
\subsection{Ion swap}
The swap operation is realized via physical rotation of two commonly confined ion qubits, i.e. flipping the positions of the two ions along the trap axis. The rotation is controlled using the neighboring segments of the LIZ. We apply optimized voltage waveforms, minimizing the secular frequency deviation during the process to less than 300~kHz on all in-phase modes $2\pi\times\{1.49,3.88,4.64\}$~MHz and out-of-phase modes $2\pi\times\{2.57,3.57,4.37\}$~MHz of the two commonly confined ions. This avoids spectral crossing of the secular modes, and therefore suppresses transfer of motional excitation between highly excited axial modes and transverse modes. This is necessary as all transverse modes are required to have low motional excitations $\lesssim$~1 phonon in order to realize high-fidelity entangling gates. The rotation process is carried out within a duration of 60~\si{\micro\second}.
\section{Error analysis}
In this section, we discuss the relevant error sources limiting the single-shot fidelity of the parity check measurement. State preparation and measurement (SPAM) errors are of particular relevance in the context of QEC protocols: Many ancilla preparation and readout operations have to be performed per QEC cycle, and eventual feedback operations are conditioned on measurement results of these.\\
State preparation via two-stage optical pumping features infidelities of $<$0.1\% and is thus dwarfed by measurement errors. Here, the fidelity bottleneck is the electron shelving operation for readout of the spin qubit. The laser beam near 729~nm used for population transfer couples to the transverse secular motion, with Lamb-Dicke factors of around $0.1$ on the four transverse modes. The shuttling operations also lead to build-up of transverse motion, which impairs the electron shelving via dispersion of the coupling strength. Measurement fidelities on all qubits are listed in Table \ref{tab:SPAM}. These values are measured by initializing all qubits in logical states $\ket{0}$ ('dark' readout) or $\ket{1}$ ('bright' readout) and performing the shuttling sequence without any further gates. Readout infidelities ranging between 0.3(1)\% and 2.9(5)\% are observed, depending on qubit and prepared state. For reference, a single ion qubit without any shuttling operations before detection yields combined preparation and readout infidelities of 9(4)$\times$~10$^{-4}$ and 5(3)$\times$~10$^{-4}$ for $\ket{0}$ and $\ket{1}$, respectively.
\renewcommand{\arraystretch}{1.4}
\begin{table}
\centering
\begin{tabular}{|wc{3.0cm}||wc{2.0cm}|wc{2.0cm}|}
\hline
FT PCM Sequence & 'bright' $\equiv \ket{1}$ & 'dark' $\equiv \ket{0}$ \\
\hline \hline
$s$ & 98.8(2)\% & 98.6(3)\% \\
\hline
$f$ & 98.8(2)\% & 98.1(3)\% \\
\hline \hline
GME Sequence & 'bright' $\equiv \ket{1}$ & 'dark' $\equiv \ket{0}$ \\
\hline \hline
$d_1$ & 99.7(1)\% & 98.2(4)\% \\
\hline
$d_2$ & 99.6(1)\% & 98.7(3)\% \\
\hline
$d_3$ & 99.2(2)\% & 99.0(3)\% \\
\hline
$d_4$ & 98.6(2)\% & 98.6(3)\% \\
\hline
$s$ & 99.6(1)\% & 97.1(5)\% \\
\hline
$f$ & 99.6(1)\% & 98.4(3)\% \\
\hline
\end{tabular}
\caption{State preparation and detection fidelity including the shuttling sequence. For the data pertaining to the GME, early readout and shelving of the data qubits are carried out.}
\label{tab:SPAM}
\vspace{0.5cm}
\end{table}
\begin{figure}[h!tp]\begin{center}
\includegraphics[width=\columnwidth]{./figs/6IonFTR_Contrast.pdf}
\caption{Readout syndrome contrast after last gate with increasing number of gates executed, based on 160 shots per Pauli operator $X,Y$.}
\label{fig:ContrastLossGates}
\end{center}
\end{figure}
The infidelity of the two-qubit gates exhibits linear scaling behavior with respect to the mean phonon numbers of the transverse modes. Transverse secular motion leads to dispersion of the coupling strength of the gate driving force, which in turn leads to dispersion of the accumulated geometric phase, which finally manifests in the form of dephasing. To characterize this effect, we measure the syndrome contrast for different gates toggled on or off, performing the complete shuttling sequence. The contrast is obtained from measuring expectation values $\langle X_s\rangle$ and $\langle Y_s\rangle$ and maximum-likelihood estimation. Contrast values ranging between 93(5)\% and 78(5)\% are observed, see Fig. \ref{fig:ContrastLossGates}. A statistically significant dependence on the number of performed two-qubit gates is not visible. We therefore conclude that this error source does not represent a relevant contribution to the infidelity of the parity check measurement. The minimum observed contrast loss is due to fluctuations of the ambient magnetic field, and consistent with the observed parity measurement fidelity of 93.2(2)\%.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Knowledge of the phase of matter waves is crucially important in studies of interferometry, entanglement, and precision measurement. Pertinent to all of these areas is the pseudospin-$\tfrac{1}{2}$ condensate, which can be realized with trapped neutral atoms in a superposition of two hyperfine ground states. Most commonly, the magnetically trappable $\ket{F=1,m_F=-1}$ and $\ket{F=2,m_F=1}$ states of $^{87}$Rb have been used in experiments \cite{Hall98a,Hall98b,Matthews99,McGuirk03,Mertes07}. Early studies demonstrated spatial separation of the components \cite{Hall98a} and interferometric detection of relative phase for a region where both components remained overlapped following strongly damped center of mass motion \cite{Hall98b}. Also, the effect of phase winding throughout a two-component condensate was studied for continuous electromagnetic coupling \cite{Matthews99}. Following this, images of spin excitations in an ultracold uncondensed gas \cite{McGuirk02} enabled the study of spin domain growth for mixtures of condensed and uncondensed atoms \cite{McGuirk03}. The interference between two vortex lattices comprised of each component has also been examined \cite{Schweikhard04}. More recently, long-lived ringlike excitations of the binary condensate system have been observed \cite{Mertes07}. This kind of dynamical instability is accompanied by spatially dependent relative phase dynamics, which we investigate here. In particular, we consider the temporal decay of the interference signal obtained with a Ramsey-like measurement of the pseudospin-$\tfrac{1}{2}$ condensate.
The mechanism of phase diffusion for a two-component quantum gas has been recently studied \cite{Widera08}, whereby the evolution of a coherent spin state results in the decay of Ramsey visibility \cite{Sinatra00}. For the close inter- and intrastate interaction strengths in our system, phase diffusion is negligible \cite{Sinatra00}. Rather, we consider mean-field driven spatial inhomogeneities of the relative phase, which also act to decrease the interferometric contrast even without significant spatial separation between the components. This is relevant in the context of proposals to squeeze the macroscopic pseudospin in two-component quantum degenerate gases \cite{Sorensen01,Jin07,Rey07,Li08}, which are based on using the mean-field interaction as a source of entanglement, and development of a trapped atomic clock using an atom chip \cite{Treutlein04,Rosenbusch09}.
This paper is organized as follows. In Sec.~\ref{sec:experiment}, our experimental procedure is described up to the point of initialization of a two-component Bose-Einstein condensate (BEC). In Sec.~\ref{sec:Ramsey_interferometry}, we report on the spatiotemporal relative phase dynamics of the two-component condensate using Ramsey interferometry. Simulations of coupled Gross-Pitaevskii equations including atomic loss and electromagnetic driving terms yield striking agreement with observed matter wave interference. In Sec.~\ref{sec:dual_state}, we present the demonstration of a simultaneous state selective imaging technique. The method enables improved measurement of the longitudinal spin projection in a single experimental run, when classical fluctuations in relative and total atom numbers exist between different realizations of an experiment. Finally, in Sec.~\ref{sec:phase_interferometric}, we propose a combination of the dual state imaging method and Ramsey interferometry to directly image the spatial inhomogeneity and quantum fluctuations of relative phase.
\section{Realization of two-component BEC}
\label{sec:experiment}
A detailed description of our apparatus, including experiments performed with a \ket{F=2,m_F=2} condensate on a perpendicularly magnetized film atom chip, has been given elsewhere \cite{Hall06,Whitlock07,Hall07}. In brief, we use an atom chip with a machined Ag foil structure that allows currents to be passed in U- and Z-shape paths for surface magneto-optical trapping and Ioffe-Pritchard magnetic trapping \cite{Reichel99}. We begin by preparing a condensate in the \ket{F=1,m_F=-1} state, hereafter referred to as state \ket{1}. This is achieved in a manner similar to previous work, with two main differences. First, we optically pump the atoms into \ket{1} prior to magnetic trapping using a $1\unit{ms}$ duration pulse of $\sigma^-$ polarized light, tuned to the D$_2$~($F =2 \rightarrow F' = 2$) transition, in the absence of repumping ($F = 1 \rightarrow F' = 2$) radiation. To ensure the purity of \ket{1} during the magnetic trapping stage, a $2\unit{ms}$ pulse of optical pumping light illuminates the trapped cloud, completely removing residual magnetically trapped atoms in the $F = 2$ level. Second, the BEC is imaged after magnetic trapping via optical absorption using a $100\unit{\mu s}$ pulse of $\sigma^+$ light, tuned to the D$_2$~($F = 2 \rightarrow F' = 3$) transition. This is immediately preceded by a $1\unit{ms}$ pulse of repumping light that transfers all the atoms into the $F = 2$ manifold for imaging, while the short repumping-imaging delay ensures image blurring (from the recoil of a single repumping photon by each atom) is minimized. A charge-coupled device (CCD) camera records the absorption image using an achromat doublet lens, with a resolution of $7.5\unit{\mu m}$/pixel. Using this sequence, a pure \ket{1} BEC of $\sim 2 \times 10^5$ atoms is routinely created in a cycle time of $40\unit{s}$.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{Figure_two_photon_Rabi.eps}
\caption{
(a) Hyperfine ground states of $^{87}$Rb\,. Two Zeeman levels are coupled by a two-photon microwave radio-frequency field (see text for description). (b) Two-photon Rabi oscillations of a pseudospin-$\tfrac{1}{2}$ BEC as measured by simultaneous detection of the populations of \ket{1} and \ket{2} (Sec.~\ref{sec:dual_state}).
\label{fig:two_photon_Rabi}
}
\end{figure}
Using a \ket{1} condensate, the radial and axial trap frequencies of the magnetic trap were measured to be \mbox{$f_{\rho} = 97.6(2)\unit{Hz}$} and \mbox{$f_z = 11.96(2)\unit{Hz}$}. We observe a \ket{1} condensate lifetime of $1.7\unit{s}$, well below the observed magnetic trap lifetime for a \ket{1} thermal cloud ($30\unit{s}$) and the expected three-body collisional-loss-dominated lifetime ($17\unit{s}$). Since the total atom number, BEC plus thermal cloud, is conserved, we attribute the observed lifetime to be limited by heating due to technical noise. The rate of this heating was measured to be $12(1)\unit{nK/s}$ which is small compared to the transition temperature of $120\unit{nK}$, yet ultimately of consequence. We prepare a two-component condensate of $1.5\times10^5$ atoms using a two-photon microwave radio-frequency field to couple $\ket{1}$ to $\ket{F = 2, m_F = 1}$, hereafter referred to as state $\ket{2}$. This is schematically presented in Fig.~\ref{fig:two_photon_Rabi}(a) and was initially demonstrated and described in Ref.~\onlinecite{Matthews98}. We use a magnetic trap with the potential minimum at $3.23\unit{G}$ (for which the relative Zeeman shift is independent of magnetic field to first order), as determined by microwave spectroscopy of a trapped condensate. The $\sim 6.8\unit{GHz}$ microwave radiation is derived from a signal generator (Agilent E8257D), pulsed using a fast ns switch (Agilent), then amplified (MA.Ltd $10\unit{W}$) and transmitted to the BEC using a helical antenna (gain $19\unit{dB}$) which resides in air $10\unit{cm}$ from the condensate. We simultaneously drive a radio-frequency magnetic field at $\sim 2.01\unit{MHz}$ with the atom chip wires used for evaporative cooling, albeit with an independent signal generator (SRS DS345) and an amplifier (MA.Ltd $2\unit{W}$). These fields couple \ket{1} and \ket{2} via the intermediate state \ket{F = 2, m_F = 0} with a detuning of $250\unit{kHz}$. 
This detuning ensures that no significant population is transferred to \ket{F = 2,m_F = 0}, a magnetically untrapped state, such that a two-level formalism adequately describes the multi-component dynamics of the system. We measure a two-photon Rabi frequency of $416\unit{Hz}$ [Fig.~\ref{fig:two_photon_Rabi}(b)] which is much faster than the characteristic time scale for the condensate to change shape. The high quality of these data is a result of the simultaneous state detection method described in Sec.~\ref{sec:dual_state}.
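As a check of the two-level description, the expected population dynamics can be sketched numerically. This is only a sketch: the single measured input is the two-photon Rabi frequency $\Omega/2\pi = 416$ Hz, and the detuning used below is illustrative, not an experimental value.

```python
import numpy as np

# Two-level Rabi model of the two-photon coupling. The measured two-photon
# Rabi frequency is Omega/2pi = 416 Hz; delta is an illustrative detuning.
OMEGA = 2 * np.pi * 416.0   # two-photon Rabi frequency (rad/s)

def p2(t, delta=0.0):
    """Population transferred to |2> after driving for time t (seconds),
    for an effective two-photon detuning delta (rad/s)."""
    w = np.sqrt(OMEGA**2 + delta**2)           # generalized Rabi frequency
    return (OMEGA / w)**2 * np.sin(w * t / 2.0)**2

t_pi2 = (np.pi / 2) / OMEGA   # pi/2-pulse duration, ~0.6 ms
t_pi = np.pi / OMEGA          # pi-pulse duration
```

On resonance a $\pi$ pulse transfers all the population and a $\pi/2$ pulse transfers half; a nonzero detuning caps the maximum transfer at $\Omega^2/(\Omega^2+\Delta^2)$, consistent with the negligible population left in the intermediate state at $250\unit{kHz}$ detuning.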
\section{Ramsey interferometry using two-component BEC}
\label{sec:Ramsey_interferometry}
Here we use Ramsey interferometry to study the relative phase evolution of a two-component condensate. The non-equilibrium dynamics observed in this system \cite{Mertes07} are accompanied by spatially dependent dynamics of the relative phase. We describe the evolution of the system using a pseudospinor formalism, whereby the internal and external states of the condensate are represented by a two-component order parameter
\begin{equation}\label{eq:spinor_definition}
\ket{\Psi ( \vec{r} ,t)} \equiv \left (
\begin{array} {c}
\Psi_1 ( \vec{r} ,t)\\
\Psi_2 ( \vec{r} ,t)
\end{array}
\right)
= \left (
\begin{array} {c}
\sqrt{n_{1}(\vec{r},t)} \, e^{i \phi_1 (\vec{r},t)}\\
\sqrt{n_{2}(\vec{r},t)} \, e^{i \phi_2 (\vec{r},t)}
\end{array}
\right) \; ,
\end{equation}
where $n_1$ and $n_2$ are the atomic densities of each state in the condensate. We begin with a \ket{1} condensate in the ground state of the combined mean-field and external potentials, with density $n_0(\vec{r})$. This corresponds to the pseudospinor representation
\begin{equation}\label{eq:spinor_0}
\ket{\Psi ( \vec{r} ,0)} =
\left(
\begin{array} {c}
1\\
0
\end{array}
\right)
\sqrt{n_0(\vec{r})}
\; .
\end{equation}
Application of the first $\pi/2$ pulse (of length $t_{\pi/2}$) prepares the two-component superposition \cite{footnote1}
\begin{equation}\label{eq:spinor_initial}
\ket{\Psi ( \vec{r} , t_{\pi/2})} =
\frac{1}{\sqrt{2}} \left (
\begin{array} {c}
1\\
-i
\end{array}
\right)
\sqrt{n_0(\vec{r})}
\; .
\end{equation}
This is no longer the ground state of the two-component system, as the mean-field interaction of component \ket{1} with component \ket{2}, and of component \ket{2} with itself, differs from that of component \ket{1} alone. This is due to a slight difference in the $s$-wave scattering lengths $a_{11} = 100.40\,a_0$, $a_{22} = 95.00\,a_0$, and $a_{12} = 97.66\,a_0$, where $a_0$ is the Bohr radius \cite{Mertes07}. Through this state interconversion, we modify the mean-field energy of the system by $\sim 1 \%$ for our experimental parameters and a condensate with $1.5\times10^5$ atoms. This is enough to drive hundreds of milliseconds of weakly damped collective excitations and coherent relative phase evolution. The system is allowed to evolve for a time $T$, after which we observe the condensate with or without the application of a second $\pi/2$ pulse:
\begin{subequations}
\label{eq:spinor_final}
\begin{align}
\ket{\Psi ( \vec{r} , t_{\pi/2} + T)} &=
\left (
\begin{array} {c}
\sqrt{n_{1}(\vec{r})} \, e^{i \phi_1(\vec{r})}\\
\sqrt{n_{2}(\vec{r})} \, e^{i \phi_2(\vec{r})}
\end{array}
\right)
\; ; \label{eq:spinor_final_a} \\
\ket{\Psi ( \vec{r} , t_{\pi/2} + T + t_{\pi/2})} &=
\left (
\begin{array} {c}
\sqrt{n_{1}'(\vec{r})} \, e^{i \phi_1'(\vec{r})}\\
\sqrt{n_{2}'(\vec{r})} \, e^{i \phi_2'(\vec{r})}
\end{array}
\right)
\; , \label{eq:spinor_final_b}
\end{align}
\end{subequations}
where primes denote the densities and phases after the optional second pulse. We define a spatially dependent relative phase as
\begin{equation}\label{eq:phase_definition}
\phi(\vec{r}) \equiv \phi_2 (\vec{r}) - \phi_1 (\vec{r}) \; .
\end{equation}
In the frame rotating at the effective frequency of the two-photon coupling, this phase evolves to
\begin{equation}\label{eq:phase_evolved}
\phi(\vec{r}) = \Delta \, T + \phi_{\text{mf}}(\vec{r}) + \delta \phi
\; ,
\end{equation}
after evolution time $T$, where $\Delta$ is the detuning of the two-photon field from the atomic resonance, $\phi_{\text{mf}}$ is the spatially dependent phase whose evolution is driven by the mean field, and $\delta \phi$ is any phase shift intentionally applied to the coupling field before $t = t_{\pi/2} + T$. Varying $\Delta$, $T$, or $\delta \phi$ and applying a second $\pi/2$ pulse yields a Ramsey interference signal $\alpha$, which we define as the expectation value of the longitudinal pseudospin projection, scaled to vary as the normalized population difference
\begin{equation}\label{Sz_definition}
\alpha \equiv (N \hbar)^{-1} \langle \hat{S_z} \rangle
= \left( N_2' - N_1' \right)/N \; ,
\end{equation}
where $N_i' = \int n_i' \, d \vec{r} \, , (i = 1,2)$ and $N$ is the total atom number.
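The $\sim 1\%$ mean-field energy change quoted above can be estimated crudely. The sketch below uses a uniform-density model, which only captures the order of magnitude; the figure in the text comes from the full inhomogeneous-density treatment.

```python
# Crude, uniform-density estimate of the fractional mean-field energy change
# produced by the pi/2 pulse. Interaction energy density scales as g_ij, i.e.
# as the scattering length a_ij, so a_ij can stand in for g_ij here.
a11, a22, a12 = 100.40, 95.00, 97.66            # s-wave scattering lengths (a0)

# Before the pulse: all atoms in |1>, energy density ~ (1/2) g11 n^2.
# After an equal superposition: each component at density n/2.
e_before = 0.5 * a11
e_after = 0.125 * a11 + 0.125 * a22 + 0.25 * a12
frac_change = (e_after - e_before) / e_before   # ~ -2.7% in this crude model
```

The uniform-density model gives a change of a few percent; including the real Thomas-Fermi density profile and its rearrangement reduces this toward the $\sim 1\%$ figure, but the sign (a lowering of the interaction energy, since $a_{22}, a_{12} < a_{11}$) and order of magnitude already follow from the scattering lengths alone.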
A typical Ramsey signal obtained by varying the evolution time $T$ is shown in Fig.~\ref{fig:Ramsey_data_cf_GP_2.5Hz}. We observe a nonexponential decay and a frequency chirp of the Ramsey fringes in the time domain. These observations are consistent with the spatial evolution of a coherent two-component order parameter, as described below. Decoherence does not significantly contribute to the decay of Ramsey contrast we observe \cite{footnote2}. It is the spatial dependence of the mean-field-driven phase that leads to a decrease in the contrast of the Ramsey fringes over time. It should be noted that this dephasing is unlike the inhomogeneous dephasing that occurs in statistical ensembles of uncondensed thermal atomic samples. Rather, it is the variation of the relative phase across a many-body wave function.
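A minimal one-dimensional illustration of this dephasing mechanism: a coherent but spatially varying relative phase reduces the fringe envelope even though no decoherence occurs anywhere. The Gaussian axial density and quadratic phase profile below are illustrative assumptions, not fits to the data.

```python
import numpy as np

# Ramsey fringe envelope |integral n(z) exp(i phi(z)) dz| for a fully coherent
# condensate with a spatially varying relative phase phi(z).
z = np.linspace(-1.0, 1.0, 2001)
dz = z[1] - z[0]
n = np.exp(-z**2 / 0.2)      # illustrative axial density profile
n /= n.sum() * dz            # normalize to unit atom number

def contrast(phase_spread):
    """Fringe envelope for an assumed quadratic profile phi(z) = spread * z^2."""
    return abs(np.sum(n * np.exp(1j * phase_spread * z**2)) * dz)
```

For a uniform phase the envelope is unity; as the phase spread across the cloud grows toward $2\pi$, the density-weighted average of $e^{i\phi(z)}$ shrinks, which is the fringe decay seen in Fig.~\ref{fig:Ramsey_data_cf_GP_2.5Hz} even while each atom remains in a pure superposition.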
\begin{figure}
\includegraphics[width=1.0\columnwidth]{Figure_Ramsey_integrated.eps}
\caption{The Ramsey interference signal of a pseudospin $^{87}$Rb BEC where the detuning of the two-photon field from the atomic resonance $\Delta/2\pi = 1 \unit{Hz}$. The points are experimental measurements and the solid line is calculated using the coupled Gross-Pitaevskii equation (CGPE) theory, with an additional exponential decay due to decoherence \cite{footnote2}. The $\sim 16\unit{Hz}$ oscillation, decay, and frequency chirp of $\alpha$ are predominantly due to spatial evolution of the relative phase.
\label{fig:Ramsey_data_cf_GP_2.5Hz}}
\end{figure}
To simulate the evolution of density and relative phase, the dynamics throughout electromagnetic coupling pulses, and the Ramsey interferometry signal, we solve the coupled Gross-Pitaevskii equations (CGPE) for both components with decay terms corresponding to inter- and intrastate many-body loss processes. In their most basic form (without loss or electromagnetic coupling terms), these equations were introduced in \cite{Ho96,Zeng95}. The inclusion of electromagnetic coupling terms has appeared in \cite{Ballagh97} (multicomponent condensates) and \cite{Marzlin97,Dum98,Williams99,Blakie99,Merhasin05,Gordon08} (specifically pseudospin-$\tfrac{1}{2}$ condensates). Addition of nonlinear terms to describe many-body loss processes was used in \cite{Yurovsky99} and applied to a binary condensate system of $^{87}$Rb\, in \cite{Mertes07}. Combining all of the above features, we arrive at
\begin{subequations}
\label{eq:CGPEs}
\begin{align}\label{eq:CGPE1}
\lefteqn{ i \hbar \partialD{\Psi_1}{t} = \frac{\hbar \, \Omega}{2}\Psi_2+}
\\ & &
\left[ -\frac{\hbar^2 \nabla^2}{2m} +
V_1 + g_{11} |\Psi_1|^2 + g_{12} |\Psi_2|^2
- i \hbar \, \Gamma_1 \right] \Psi_1
\, , \notag
\end{align}
and
\begin{align}\label{eq:CGPE2}
\lefteqn{ i \hbar \partialD{\Psi_2}{t} = \frac{\hbar \, \Omega}{2}\Psi_1 - \hbar \, \Delta \Psi_2 +}
\\ & &
\left[ -\frac{\hbar^2 \nabla^2}{2m} +
V_2 + g_{22} |\Psi_2|^2 + g_{12} |\Psi_1|^2
- i \hbar \, \Gamma_2 \right] \Psi_2
\, , \notag
\end{align}
\end{subequations}
where $m$ is the mass of $^{87}$Rb\,, $V_i$ is the magnetic trapping potential experienced by component $i$, and \mbox{$g_{ij} = 4\pi \hbar^2 a_{ij}/m$} are the mean-field interaction parameters. The dominant two- and three-body loss processes are described by the terms \mbox{$\Gamma_1=(\gamma_{111}n_1^2 + \gamma_{12} n_2)/2$} and \mbox{$\Gamma_2=(\gamma_{22} n_2 +\gamma_{12} n_1)/2$}, with loss rates $\gamma_{i..j}$ measured and described in Ref.~\onlinecite{Mertes07}.
To solve Eqs.~(\ref{eq:CGPEs}) numerically, we exploit the cylindrical symmetry of our experimental geometry, and assume that the wave function takes the form \mbox{$\Psi(\vec{r}) \rightarrow e^{i \, m \, \varphi} \Psi_m(\rho , z)$,} where $m$ is an integer. Since the ground state of the system has $m = 0$, we exclusively solve for this case and assume the time evolution does not break this symmetry. We use the discrete Hankel-Fourier transform to solve the CGPE and have adapted the procedure used in \cite{Ronen06} for multicomponent systems. Essentially, the technique facilitates the solution of second-order partial differential equations in cylindrical coordinates using spectral methods, where the Fourier transform fails due to divergence of the Laplacian for zero radial momentum. Implementation of the discrete Hankel transform is greatly simplified by sampling the fields at zeros of the $m^{\text{th}}$ order Bessel function of the first kind, rather than using a Cartesian grid.
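The Bessel-zero sampling underlying this approach can be sketched as follows. This is a generic construction of the zeroth-order quasi-discrete Hankel transform, not the paper's actual solver: grid parameters are illustrative, and the scale factors that map the transformed vector to physical amplitudes are omitted. The key numerical property is that the (symmetric) transform matrix is approximately its own inverse.

```python
import numpy as np
from scipy.special import jn_zeros, j0, j1

# Quasi-discrete Hankel transform, order 0: sample the field at the zeros of
# J0 and build a symmetric transform matrix T that is approximately unitary
# (T @ T ~ identity), which is what makes split-step spectral propagation
# in cylindrical coordinates practical.
N, R = 128, 10.0                       # grid size, radial extent (illustrative)
alpha = jn_zeros(0, N + 1)             # first N+1 zeros of J0
S = alpha[-1]
r = alpha[:N] * R / S                  # radial sample points
k = alpha[:N] / R                      # conjugate (momentum) sample points

J1 = np.abs(j1(alpha[:N]))
T = 2.0 * j0(np.outer(alpha[:N], alpha[:N]) / S) / (S * np.outer(J1, J1))

f = np.exp(-r**2)                      # smooth test field, well inside [0, R]
f_scaled = f * R / J1                  # standard QDHT sample scaling
roundtrip = T @ (T @ f_scaled)         # forward + inverse transform
err = np.max(np.abs(roundtrip - f_scaled))
```

The nonuniform grid concentrates points where $J_1(\alpha_n)$ is largest, and, unlike a naive Fourier treatment, the radial Laplacian becomes simple multiplication by $-k_m^2$ in the transformed basis.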
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figure_phase_evolution.eps}
\caption{Simulated in-trap column densities $\int n_i \, dx$ and relative phase $\phi(\rho=0,z)$ [see Eqs.~(\ref{eq:spinor_final}) and (\ref{eq:phase_definition})] for various evolution times of the initial state in Eq.~(\ref{eq:spinor_initial}). The initially uniform phase exhibits inhomogeneous evolution, accompanied by density excitations in the axial direction $(z)$. An offset of $14\, \pi\unit{rad\,s^{-1}}$ has been added to the relative phase for clarity.
\label{fig:phase_evolution}}
\end{figure}
In Fig.~\ref{fig:phase_evolution}, we show the simulated column densities of each component for various evolution times following preparation of the initial superposition in Eq.~(\ref{eq:spinor_initial}). The relative phase is seen to vary across the axial dimension of the condensate. This relative phase modulation eventually becomes manifest as relative density modulation, such that the components spatially separate along the axial direction, consistent with the sign of $a_{11} a_{22} - a_{12}^2$ that governs the miscibility of two-component condensates \cite{Ho96}. This is evident in Fig.~\ref{fig:phase_evolution} at $90\unit{ms}$, where \ket{1} predominantly resides further from the origin than \ket{2}. In the absence of loss, the moments $\int \vert z \vert n_i \, d\vec{r}$ would continue to oscillate out of phase, with \ket{2} occupying a larger volume than \ket{1} at $\sim 200 \unit{ms}$. However, atomic loss dampens the collective oscillations on this time scale.
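The miscibility combination mentioned above is easy to evaluate. For these scattering lengths it is extremely small compared with $a_{11}a_{22}$, so the mixture sits essentially on the miscible-immiscible boundary, which is why the weak mean-field asymmetry can drive such slow, large-scale component dynamics.

```python
# Miscibility criterion for a binary condensate [Ho96]: the mixture is
# miscible when a11*a22 - a12**2 > 0 and phase-separates otherwise. For this
# pair of |1>,|2> states the combination is tiny, i.e. the system is marginal.
a11, a22, a12 = 100.40, 95.00, 97.66    # scattering lengths in units of a0
misc = a11 * a22 - a12**2               # ~0.52 a0^2
rel = misc / (a11 * a22)                # dimensionless distance from boundary
```

The relative distance from the boundary is below $10^{-4}$, far smaller than the few-percent differences among the individual scattering lengths.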
We present here measurements of the density of state \ket{2} for various evolution times $T$, followed by a second $\pi/2$ pulse. We observe striking agreement between these results and those of CGPE simulations. In Fig.~\ref{fig:Exp_Vs_GP}, the column density $\int n_2 \, dx$ is shown, where $x$ is the imaging axis, a component of the radial coordinate (of tight confinement) $\rho$. The only free parameter used in the simulation is the detuning $\Delta$ of the two-photon field from the atomic transition. We adjust this parameter to optimize the agreement (in this case $\Delta/2\pi = 1\unit{Hz}$), consistent with a measurement of the detuning using a cold thermal cloud. In particular, the three distinct density maxima at $40$ and $115\unit{ms}$, corresponding to antinodes of the state \ket{2} wave function, are represented accurately by the simulation. This prominent density modulation along the $z$ axis (of weak confinement) reveals the nonuniform relative phase variation along this direction. This is enabled by the second $\pi/2$ pulse, which converts information about the relative phase into (relative) density. In the absence of the second pulse, density modulation still exists but is of lower contrast (the $90\unit{ms}$ panel of Fig.~\ref{fig:phase_evolution} shows the most structure we observe in simulations).
\begin{figure*}
\includegraphics[width=2.05\columnwidth]{Figure_Exp_vs_GP.eps}
\caption{Column density of state \ket{2} as a function of Ramsey interferometry time (inset for each image in milliseconds). Application of the second $\pi/2$ pulse locally converts the relative phase into relative population. (Left) Experimental: single shot absorption images, taken after a $20\unit{ms}$ fall time, reveal the mean-field driven phase evolution. (Right) Theoretical: column density plots obtained by solving the coupled Gross-Pitaevskii equations show excellent agreement with experimental results.
\label{fig:Exp_Vs_GP}}
\end{figure*}
Furthermore, the Ramsey contrast decays significantly faster than the components spatially separate. We emphasize that dissimilar spatial modes of the components are neither necessary nor sufficient for loss of interferometric visibility. For example, consider the ground state of the two-component system, whose density consists of a central feature of state \ket{2}, enclosed by two lobes of state \ket{1}. Of course, no inhomogeneous relative phase evolution occurs for the ground state, and the mean-field-limited Ramsey visibility is $80 \%$. This remains true even in the presence of loss, which we have confirmed from simulations that show the condensate wave functions adiabatically follow the time-dependent ground state.
We attribute residual differences in the precise spatial structure of the measured and simulated macroscopic wave functions to the finite resolution of our imaging system, photon recoil blurring during the imaging pulse, imperfect cylindrical symmetry of the trap, and interaction of the condensate with an undetectable yet significant thermal component for longer times. The repeatability of the observed interference fringes (both run-to-run \textit{and} day-to-day) is indicative of the well-defined initial-state preparation and phase evolution in this system.
\section{Dual State Imaging}
\label{sec:dual_state}
We present a new imaging technique to simultaneously detect states $\ket{1}$ and $\ket{2}$ with spatial resolution. This represents the first unambiguous state selective measurement of a \mbox{pseudospin-$\tfrac{1}{2}$} condensate that preserves the spatial mode. Pseudospin-$\tfrac{1}{2}$ condensates of $^{87}$Rb\, suitable for interferometry are commonly comprised of states whose magnetic moments are similar, complicating spatial separation of the constituent states with a magnetic field gradient. Various methods exist to individually image either the $F=1$ or $F=2$ populations, using resonant light in combination with optical or microwave repumping pulses. An alternative method uses a radio-frequency sweep to distribute the $\ket{1}$ and $\ket{2}$ populations among all eight Zeeman sublevels, which are then spatially resolved using Stern-Gerlach separation prior to imaging. The original \ket{1} and \ket{2} populations are inferred from estimates of how the adiabatic rapid passage distributes population among the sublevels \cite{Streed06}. Phase-contrast imaging can also be used to image the difference in populations \cite{Matthews99} and has enabled the observation of vortex dynamics \cite{Anderson00,Anderson01,Schweikhard04}.
\begin{figure}
\includegraphics[width=0.75\columnwidth]{Figure_dual_scheme.eps}
\caption{
Transfer of \ket{1} to the magnetic field insensitive state \ket{2,0} is achieved by adiabatically sweeping the atomic resonance in the presence of a fixed frequency microwave pulse. (a) Spatial separation is performed by pulsing $I_z$ (see text for description) to induce a Stern-Gerlach force that accelerates \ket{2} away from the atom chip and \ket{1}. (b)~Absorption imaging using the D$_2$ ($F = 2 \rightarrow F' = 3$) transition yields a single shot image of a two-component condensate, \ket{1} (above) and \ket{2} (below), where the \ket{1} structure arises during a Ramsey interference experiment described in Sec.~\ref{sec:Ramsey_interferometry}.
\label{fig:dual_scheme}}
\end{figure}
Our dual state detection method consists of three stages. First, the magnetic trap is switched off by reducing the Z-wire current, $I_z$, to zero in $\sim 200 \unit{\mu s}$. The uniform bias field that completes the trapping potential is concurrently ramped off in $3\unit{ms}$ and a fixed frequency microwave field is applied. The Zeeman splitting of the magnetic sublevels decreases, facilitating rapid passage from \ket{1} to \ket{F = 2, m_F = 0}. This population transfer method is more robust than a resonant microwave $\pi$ pulse, owing to the less stringent requirements of rapid passage on the stability of magnetic fields and microwave power. The same microwave field is used for the two-photon pulses, obviating the need for a second microwave frequency synthesizer and enabling detection immediately after a two-photon pulse, as in Sec.~\ref{sec:Ramsey_interferometry}. For our initial experiments demonstrating this technique, we transferred $44(1)\%$ of \ket{1} population to \ket{F = 2, m_F = 0} (rapid bias field switching was desired at the expense of adiabaticity). At this point, both spin components of the condensate are spatially overlapped and fall freely under gravity. However, those in the \ket{F = 2, m_F = 0} state are largely insensitive to magnetic fields.
The second step [Fig.~\ref{fig:dual_scheme}(a)] produces spatial separation
of the two components. This is achieved using a $30\unit{A}$, $1\unit{ms}$ $I_z$ pulse which induces a Stern-Gerlach force in the direction of gravity to displace \ket{2} such that both components are separated by $\sim~300\unit{\mu m}$ after $20\unit{ms}$ of free fall. Finally, an absorption image is taken
[Fig.~\ref{fig:dual_scheme}(b)] yielding the column density of \ket{2} directly below \ket{1}. This image is from a Ramsey interferometry experiment described in Sec.~\ref{sec:Ramsey_interferometry} and highlights the ability of this dual state imaging method to preserve spatial information about the wave functions. This is due to the minuscule recoil velocity ($5 \times 10^{-9} \unit{mm/s}$) associated with microwave photon absorption, which eliminates the image blur associated with optical repumping techniques.
While conservation of spatial information is crucial for dynamical studies of two-component BEC, the detection method should also determine the atom number accurately. We investigate the effects of optical pumping during absorption imaging on $\eta$, the relative efficiency of detecting atoms originating in \ket{F = 2, m_F = 0} to those starting in \ket{2}. The absorption of a $\sigma^+$ polarized probe beam was modeled using the optical Bloch equations \cite{Maguire06}, including the Zeeman splitting of the ground and excited states. We evaluated the absorption by integrating the total excited-state population ($F' = 1,2,3 \, ; m_F' = - F',..,F'$) during the imaging pulse for a given initial state. Results are shown in Fig.~\ref{fig:detection_efficiency}(a) for different saturation parameters $s = I_0 / 1.68 \unit{mW\,cm^{-2}}$ of the $\ket{F = 2, m_F = 2} \rightarrow \ket{F' = 3, m_F' = 3}$ cycling transition, with $I_0$ the intensity of the probe beam. As the imaging pulse duration increases, optical pumping acts to populate the \ket{F=2,m_F=2} state and more time is spent driving the cycling transition. As such, the relative detection efficiency asymptotes to unity and does so faster for higher imaging intensities. For our typical imaging conditions, we predict a relative detection efficiency of $\eta = 0.98$.
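The trend in Fig.~\ref{fig:detection_efficiency}(a) can be reproduced qualitatively with a far cruder model than the full optical Bloch treatment used above: a rate-equation toy in which each scattering event pumps the atom toward the stretched state. The $\sigma^+$ absorption strengths are the standard $F=2 \rightarrow F'=3$ factors; the equal-weight decay branching is a deliberate simplification, and time is in units of the inverse cycling-transition scattering rate.

```python
import numpy as np

ms = np.arange(-2, 3)                   # ground sublevels m = -2..2 of F=2
s = (ms + 3) * (ms + 4) / 30.0          # relative sigma+ strengths to F'=3
                                        # (cycling transition m=2 -> m'=3 is 1)

def scattered_photons(m_start, t_final, dt=0.01):
    """Toy optical-pumping model: absorption from ground m excites m'=m+1,
    which is assumed to decay with equal weight to ground m, m+1, m+2
    (clipped at the manifold edge) -- a crude stand-in for the true
    Clebsch-Gordan branching ratios."""
    P = np.zeros(5)
    P[m_start + 2] = 1.0
    photons = 0.0
    for _ in range(int(round(t_final / dt))):
        rates = s * P                   # photon scattering rate per sublevel
        photons += rates.sum() * dt
        newP = P - rates * dt
        for i in range(5):
            targets = [j for j in (i, i + 1, i + 2) if j <= 4]
            for j in targets:
                newP[j] += rates[i] * dt / len(targets)
        P = newP
    return photons

# Relative efficiency: atoms arriving in m=0 (transferred from |1>) versus
# atoms starting in |2> = |F=2, m=1>, for short and long imaging pulses.
eta_short = scattered_photons(0, 5.0) / scattered_photons(1, 5.0)
eta_long = scattered_photons(0, 50.0) / scattered_photons(1, 50.0)
```

The toy model shows the same qualitative behavior as the full calculation: the photon deficit accumulated while the atom is pumped toward the cycling transition is fixed, so the relative efficiency climbs toward unity as the pulse lengthens.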
\begin{figure}
\includegraphics[width=1.0\columnwidth]{Figure_detection_efficiency.eps}
\caption{(a) The detection efficiency $\eta$ for dual state absorption imaging as a function of imaging pulse duration, for three typical saturation parameters, $s=0.50$ (solid), $s=0.10$ (dash-dotted) and $s=0.01$ (dashed). (b) Simultaneous measurement of atoms in \ket{1} and \ket{2} after a two-photon coupling pulse showing high correlations for a range of detected BEC atom numbers. An imaging pulse of $100\unit{\mu s}$ duration with $s = 0.1$ was used.
\label{fig:detection_efficiency}}
\end{figure}
The dual state imaging method minimizes fluctuations in measured relative number that arise when detecting individual populations in different experimental realizations. This is due to common mode rejection of both probe-laser frequency noise and irreproducibility of total atom number between experimental runs. This is demonstrated in Fig.~\ref{fig:detection_efficiency}(b) which shows a strong correlation between \ket{1} and \ket{2} populations for a wide range of detected total condensate atom number. The method allows the relative population to be measured with a subpercent standard deviation, as opposed to single-component measurements of the absolute population which can typically vary by $\pm 10 \%$. This method improves our measurement of phenomena such as Rabi oscillations [Fig.~\ref{fig:two_photon_Rabi}(b)] and Ramsey fringes (Fig.~\ref{fig:Ramsey_data_cf_GP_2.5Hz}). Moreover, it will see important application in atom chip clock research, allowing normalization of total atom number fluctuations, a practice used to enhance the accuracy of precision fountain atomic clocks \cite{Rosenbusch09,Santarelli99}. It also represents a significant step to achieving quantum-limited detection of the total spin projection in pseudospin-$\tfrac{1}{2}$ condensates.
\section{Phase reconstruction using Ramsey interferometry}
\label{sec:phase_interferometric}
In this system, imaging the relative phase amounts to resolving the transverse component of the local Bloch-vector corresponding to the pseudospinor in Eq.~(\ref{eq:spinor_definition}). For a sample of ultracold uncondensed $^{87}$Rb\, atoms, this has been achieved in Ref.~\onlinecite{McGuirk02}. There, the Ramsey interference signal was binned in both space and time, recovering a phase spectrogram that illustrated collective spin excitations driven by exchange scattering. This requires the repetition of many experimental cycles and gives an averaged relative phase with reduced temporal resolution. We propose a technique to spatially resolve the relative phase of the pseudospin-$\tfrac{1}{2}$ condensate from two dual state absorption images and present theoretical results that investigate the method's applicability and limitations. Performing an identical experiment with and without the presence of the second $\pi/2$ pulse permits four column densities $\tilde{n} = \int n \, dx$ to be imaged, where $n \in \left\{ n_1, n_2, n'_1, n'_2 \right\}$ and the densities are defined in Eqs.~(\ref{eq:spinor_final}). The densities can be used to recover the relative phase $\phi(\vec{r})$ using
\begin{equation}\label{eq:phase_interferometric}
n'_2(\vec{r})-n'_1(\vec{r})= 2\sqrt{n_1(\vec{r})\,n_2(\vec{r})} \sin(\phi(\vec{r})) \, .
\end{equation}
There is little variation of the relative phase along the radial direction, and the densities of both components have similar radial dependence. This is a consequence of the tight radial confinement, as the energy of the system is not sufficient for significant radial excitations to be induced. As such, the radially integrated densities, the \textit{line densities} $\bar{n} = \int n \, 2\pi \rho \, d\rho = \int \tilde{n} \, dy$, are representative of the relative phase evolution, such that
\begin{equation}\label{eq:phase_interferometric_LD}
\bar{n}'_2(z)-\bar{n}'_1(z) = 2\sqrt{\bar{n}_1(z)\bar{n}_2(z)} \sin(\bar{\phi}(z))
\end{equation}
defines a phase $\bar{\phi}(z)$ that is a good estimate of the phase along the center of the condensate $\phi(\rho = 0, z)$. We have confirmed this by evaluating Eq.~(\ref{eq:phase_interferometric_LD}) with line densities attained from simulations, as shown in Fig.~\ref{fig:phase_reconstruct}. For $90\unit{ms}$ of evolution, where the range of relative phase is $1.9 \, \pi$ across the condensate, the agreement is within $78\unit{mrad}$.
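The inversion can also be demonstrated end-to-end on synthetic data: build a spinor with a known relative phase, apply the second $\pi/2$ pulse in one common phase convention (the experimental convention fixes only the overall sign), and invert Eq.~(12). The density and phase profiles below are illustrative assumptions.

```python
import numpy as np

# Synthetic check of the phase-retrieval relation.
z = np.linspace(-1.0, 1.0, 401)
n = np.exp(-z**2 / 0.2)                 # total line density (arbitrary units)
n1, n2 = 0.5 * n, 0.5 * n               # equal populations before the pulse
phi_true = 1.2 * np.sin(2.5 * z)        # assumed relative phase, |phi| < pi/2

psi1 = np.sqrt(n1)
psi2 = np.sqrt(n2) * np.exp(1j * phi_true)

# pi/2 pulse in one common convention:
# (psi1, psi2) -> ((psi1 + i psi2)/sqrt2, (i psi1 + psi2)/sqrt2)
psi1p = (psi1 + 1j * psi2) / np.sqrt(2)
psi2p = (1j * psi1 + psi2) / np.sqrt(2)
n1p, n2p = np.abs(psi1p)**2, np.abs(psi2p)**2

# Invert n2' - n1' = 2 sqrt(n1 n2) sin(phi) for the relative phase.
phi_rec = np.arcsin((n2p - n1p) / (2 * np.sqrt(n1 * n2)))
```

The $\arcsin$ inversion is single-valued only for $|\phi| < \pi/2$; larger phase windings, such as the $1.9\,\pi$ range quoted above, require unwrapping using the continuity of the profile or a second measurement with a shifted coupling phase $\delta\phi$.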
In more elongated geometries or where tunable miscibility can be realized, radial excitations are further suppressed and the approximation improves. In these cases there is increased spin locking along the imaging axis and the integration of density inherent to absorption imaging corrupts the phase signal less.
\begin{figure}
\includegraphics[bb=16 0 325 306,width=0.9\columnwidth,clip=true]
{Figure_phase_reconstruct.eps}
\caption{Simulation of interferometric phase retrieval
method. (a) Two absorption images are captured for each evolution time,
yielding the line densities immediately prior to, and after the second $\pi/2$ pulse: $\bar{n}_1$, $\bar{n}_2$, and $\bar{n}'_1$, $\bar{n}'_2$
respectively. (b) Point-by-point arithmetic is used [Eq.~(\ref{eq:phase_interferometric_LD})] to extract the relative phase profile along the axial direction.
\label{fig:phase_reconstruct}}
\end{figure}
The requirements of this technique are more demanding than those of the experimental investigation presented in Sec.~\ref{sec:Ramsey_interferometry}. For successful application of Eq.~(\ref{eq:phase_interferometric_LD}) via point-by-point arithmetic on multiple absorption images, we are currently pursuing improved detection methods with higher signal-to-noise ratio, spatial resolution, and dynamic range. This technique may be useful for investigations of spin squeezing, in the context of recent proposals which suggest using the nonlinear interaction to reduce the spin projection noise below the standard quantum limit \cite{Sorensen01,Jin07,Rey07,Li08}. It is interesting to consider the spatial dependence of the relative phase \textit{fluctuations} of a macroscopic pseudospinor. This technique may allow such quantum noise to be studied with spatial resolution.
\section{\label{sec:conclusion}Conclusion}
We have quantitatively studied the spatial evolution of the relative phase of a pseudospin-$\tfrac{1}{2}$ $^{87}$Rb BEC. Preparing the system with a $\pi/2$ pulse yields a non-equilibrium state due to small differences in the inter- and intra-state scattering lengths $a_{11}$, $a_{22}$, and $a_{12}$. We demonstrate the resulting spatially inhomogeneous evolution of the collective relative phase. This acts to decrease the Ramsey contrast before significant spatial separation of the components. Spatial dephasing of the many-body wave function dominates the effects of decoherence and phase diffusion on the interferometric contrast. A second $\pi/2$ pulse maps the non-uniform phase distribution onto the population distribution of each spin state. Measurement of the longitudinal pseudospin projection is enhanced by a new dual-state imaging method that combines microwave adiabatic rapid passage and Stern-Gerlach spin filtering. This yields a single-shot measurement of \ket{1} \textit{and} \ket{2}, enabling the relative populations to be measured with subpercent precision. Moreover, this novel method does not corrupt the spatial modes of either component. We propose extending this technique to image the relative phase profile of an elongated \mbox{pseudospin-$\tfrac{1}{2}$} BEC to study the spatial dependence of relative phase fluctuations.
\begin{acknowledgments}
We wish to thank Chris Vale, Peter Hannaford, Simon Haine, Mattias Johnsson, and \mbox{Peter Drummond} for useful discussions. This project is supported by the ARC Centre of Excellence for Quantum-Atom Optics, and an ARC LIEF grant LE0668398.
\end{acknowledgments}
\section{Introduction}
\label{Introduction}
The description of the physics of quantum many-body systems suffers from the ``curse of dimensionality'',
that is, the size of the parameter set that is required to achieve an exact description of the physical
state of a quantum many-body system grows exponentially in the number of its subsystems. Therefore, the
simulation of quantum-many-body systems by classical means appears to require an exponential amount of
computational resources which, in turn, would impose severe limitations on the size of the quantum
many-body systems that are amenable to classical simulation.
On the other hand, exactness of description may be traded for an approximate representation of the
state and dynamics of a quantum many-body system for as long as the quality of the approximation can
be controlled and increased at will and whenever such an approximate treatment results in a polynomial
scaling of the resources with the system size. Clearly, this will not be possible for all system dynamics
as it would imply the classical simulability of quantum computers which is generally not believed to
be the case.
One specific setting of considerable practical importance that allows for efficient approximate
description of quantum many-body systems concerns systems whose entanglement content is limited.
Indeed, pure states that factor, at all times, into a product of
states, each involving a number of qubits that is bounded from above for all times by a constant,
can be simulated efficiently on a classical device \cite{jozsa03}. Going beyond this, it is well-known
that the states of 1-D quantum systems often obey an area law \cite{Audenaert2002,Plenio2005,eisert08,horodecki13,horodecki15}
which represents a severe limitation of their entanglement content. This can be made use of, as
the state of such a slightly entangled 1-D quantum many-body system in a pure state can be described
efficiently by means of the Density Matrix Renormalization Group (DMRG) \cite{white92}. It was
noticed early on \cite{ref:rommer1997} that DMRG amounts to the approximation of the state of the
system by a matrix product state \cite{ref:perezgarcia2007} which can be constructed systematically
in terms of a consecutive Schmidt decomposition.
This approach can be extended to the dynamics of one-dimensional quantum many-body systems. The
time-dependent DMRG (t-DMRG) \cite{white04}, the time-dependent Matrix Product States (t-MPS)
\cite{garcia06}, and the Time-Evolving Block-Decimation (TEBD) \cite{ref:vidal2003,ref:vidal2004}
are all algorithms based on the idea of evolving an MPS \cite{ref:perezgarcia2007} in time. In
settings in which only a restricted amount of entanglement is present in the system, these methods
are very efficient and the cost of simulation scales merely polynomially in the system size. A very
clear presentation of these three algorithms as well as a discussion of the main differences
between them can be found in \cite{ref:schollwoeck2011}.
All these algorithms rely in an essential way on the singular value decomposition (SVD) which, due
to its computational complexity, represents a bottleneck in their implementations. In this work we
show how to address this issue within the TEBD framework by replacing the SVD which is used to restrict
the Hilbert space of quantum states relevant to the system dynamics, with the Reduced-Rank Randomized
SVD (RRSVD) proposed in \cite{halko11}. We show that when the Schmidt coefficients decay exponentially,
the polynomial complexity of the TEBD decimation step can be reduced from $\mathcal{O} (n^3)$ to
$\mathcal{O}(n^2)$ with $n$ indicating the linear dimension of the square matrix to decompose. This
results in a considerable speed-up of the TEBD algorithm for real-world problems, enabling access
to larger parameter spaces and faster simulation times, and thus opening up new regimes that were previously
inaccessible.
The paper is organized as follows: Section \ref{sec:tebd} provides a brief description of the TEBD algorithm for
pure and mixed state dynamics in order to make the manuscript self-contained for the non-specialist
and to identify and highlight the crucial step in which the RRSVD routine can be applied for considerable
benefit. Section \ref{sec:tedopa} introduces a specific and challenging application of TEBD, the TEDOPA scheme, which
maps an open quantum system on a one-dimensional configuration, allowing for the efficient simulation of
its dynamics. Section \ref{sec:RRSVD} then proceeds with a description of the salient features of the RRSVD
algorithm and a discussion of the speed-up that it provides over the standard SVD. Benchmarking in the
TEBD context with applications to TEDOPA along with stability analysis are presented in section \ref{sec:rrsvdtedopa}.
The last section is devoted to conclusions and outlook.
\section{Reduced-rank Randomized SVD} \label{sec:RRSVD}
The Singular Value Decomposition (SVD) is at the heart of the MPS representation and MPS-based algorithms,
such as TEBD. The efficiency of TEBD comes from the possibility of approximating states living in an
exponentially large Hilbert space with states defined by a number of parameters that grows only polynomially
with the system size. In order to understand why the SVD plays such a crucial role, we introduce the following
problem: given a complex $m \times n$ matrix $A$, provide the best rank-$k$ ($k\leq n$) approximation of $A$.
Without loss of generality we suppose $m \geq n$ and $rank(A)=n$. The solution to this problem is well known
\cite{golub96}: first compute the Singular Value Decomposition of $A = U\Sigma V^\dagger$, where
$U=\llrr{U^{(1)},U^{(2)},\ldots,U^{(n)}}$, $V=\llrr{V^{(1)},V^{(2)},\ldots,V^{(n)}}$ are the left and right
singular vectors of $A$ respectively and $\Sigma = diag\llrr{\sigma_1,\sigma_2,\ldots,\sigma_n}$ with
$\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n$. Then retain the first $k$ largest singular values
$\llrr{\sigma_1,\sigma_2,\ldots,\sigma_k}$ and build the matrices $U_k = \llrrq{U^{(1)},U^{(2)},\ldots,U^{(k)}}$
and $V_k = \llrrq{V^{(1)},V^{(2)},\ldots,V^{(k)}}$. The matrix $\widetilde{A}_k = U_k \Sigma_k V_k^\dagger$ satisfies
\begin{align}
||A-\widetilde{A}_k||_F = \sqrt{\sum_{i=k+1}^{n} \sigma_i^2} = \min_{rank(A') = k}||A-A'||_F ,
\end{align}
where $|| \cdot ||_F$ indicates the Frobenius norm
\begin{align}\label{eq:frobNorm}
||A||_F = \sqrt{\sum_{i=1}^{n} \sigma_i^2}.
\end{align}
In other words, $\widetilde{A}_k$ provides the best rank-$k$ approximation of $A$. This result justifies the
application of SVD for the TEBD decimation step.
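As a concrete illustration, the optimality statement above can be checked numerically. The following NumPy sketch (a toy example of ours, not part of any TEBD code) builds the rank-$k$ truncation and verifies that its Frobenius error equals the square root of the sum of the discarded squared singular values.

```python
import numpy as np

# Eckart-Young theorem, numerically: the truncated SVD gives the best
# rank-k approximation in Frobenius norm, with error equal to the
# root of the sum of the discarded squared singular values.
rng = np.random.default_rng(0)
m, n, k = 40, 20, 5
A = rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]   # rank-k truncation

err = np.linalg.norm(A - A_k, 'fro')
tail = np.sqrt(np.sum(s[k:] ** 2))            # sqrt(sum_{i>k} sigma_i^2)
assert np.isclose(err, tail)
```

The same identity is what makes the discarded-weight bookkeeping of the TEBD decimation step possible.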
The computational complexity of the SVD of $A$ is $\mathcal{O}(m \cdot n^2)$.
For large matrices, the SVD can therefore require a significant amount of time. This is a crucial point
since every single TEBD simulation step of an $N$-sites spin chain requires $\mathcal{O}\llrr{N}$ SVDs
which usually consume about 90\% of the total simulation time.
As discussed in \sref{sec:tebd}, fixing the bond dimension $\chi$ requires discarding $n-\chi$ singular values
(and corresponding left-/right- singular vectors). In the TEBD two-site update step, for example, we keep
only $\chi$ singular values out of $n=d \cdot \chi$ (pure states) or $n=d^2 \cdot \chi$ (mixed states).
Most of the singular values and left-/right-singular vectors are therefore discarded. This means that we are
investing time and computational resources to compute information that is then thrown away.
It is possible to avoid the \emph{full} SVD of $A$ and compute only the first $k$ singular values
and corresponding singular vectors by using \emph{Truncated SVD} methods; such methods are standard tools
in data-classification algorithms \cite{hastie09}, signal-processing \cite{candes09} and other research fields.
The Implicitly Restarted Arnoldi Method \cite{sorensen98,sorensen02} and the Lanczos-Iteration \cite{larsen98}
algorithms, both belonging to the Krylov-subspace iterative methods \cite{saas92,hochbruck97}, are two examples.
The Reduced-Rank Singular Value Decomposition (RRSVD), originally presented by N. Halko \emph{et al.}
\cite{halko11}, is a \emph{randomized} truncated SVD. It is particularly well suited to decompose structured
matrices, such as those appearing in the TEBD simulation of non-critical quantum systems. Most interestingly,
the algorithm is insensitive to the quality of the random number generator used, delivers highly accurate results
and is, despite its random nature, very stable: the probability of failure can be made arbitrarily small with
only minimal impact on the necessary computational resources.\\ In what follows we will describe the RRSVD
algorithm and report the main results on stability and accuracy. For a full account on RRSVD we refer the reader
to \cite{halko11}.
The RRSVD algorithm is a two-step procedure. The first step constructs an orthogonal matrix $Q$ whose columns
constitute a basis for the approximated range of the input matrix $A$. In the second step, the approximated
SVD of $A$ is computed by performing a singular value decomposition of $Q^\dagger A$.
The approximation of the range can be done either for fixed \emph{error tolerance} $\epsilon$, namely by finding a $Q_\epsilon$ such that
\begin{align} \label{eq:fixerrprob}
||A-Q_\epsilon Q_\epsilon^\dagger A|| \leq \epsilon,
\end{align}
or for a fixed rank $k$ of the approximation, that is by finding $Q_k$ such that
\begin{align}
\min_{rank(X)\leq k} || A- X|| \approx ||A-Q_kQ_k^\dagger A||.
\end{align}
The first one is known as the \emph{fixed-precision approximation problem}, whereas the second is known as the \emph{fixed-rank approximation problem}.
Here and in what follows, we indicate by $||A||$ the operator norm, corresponding to the largest singular value of $A$.
If $\sigma_j$ is the $j$-th largest singular value of $A$ then
\begin{align}
& \min_{rank(X)\leq k(\epsilon)} ||A-X||_F = \nonumber \\ & = ||A - Q_{k(\epsilon)}Q_{k(\epsilon)}^\dagger A||_F =
\sqrt{\sum_{j=k(\epsilon)+1}^n \sigma_j^2}.
\end{align}
where $k(\epsilon)$ is the largest index such that $\sigma_{k(\epsilon)} \geq \epsilon$ and the columns of the rank-$k(\epsilon)$ matrix $Q$ are the first $k(\epsilon)$ left singular vectors of $A$.
However, this would require the knowledge of the first $k(\epsilon)$ singular values and vectors of $A$.
For the sake of simplicity, let us focus initially on the fixed-rank approximation problem.
In order to determine $Q_k$, we resort to randomness.
More precisely, we use a sample of $k+p$ random vectors $\omega^{(i)}$, whose components are independently drawn from a standard Gaussian distribution $\mathcal{N}_{(\mu=0,\sigma=1)}$.
The set $\left \{ \omega^{(i)} \right \}_{i=1}^{k+p}$ will, with \emph{very high probability}, be a set of linearly independent vectors.
The parameter $p$ determines the amount of \emph{oversampling} needed to make the rank-$k$ approximation of $A$ more precise.
The $m \times (k+p)$ matrix
\[
Y=A\Omega,
\]
where $\Omega = (\omega^{(1)},\omega^{(2)},\ldots,\omega^{(k+p)})$, will therefore have full rank $(k+p)$.
By re-orthogonalizing the columns of the matrix $Y$ we obtain a basis for the rank-$(k+p)$ approximation of the range of $A$.
The re-orthogonalization can be done by using the $QR$-decomposition $Y=Q R$.
If $(k+p) < n$, the computational cost of the first step of this algorithm is dominated by the matrix multiplication $A\Omega$: this operation has an asymptotic complexity of $\mathcal{O}(mn(k+p))$, whereas the QR decomposition of the $m \times (k+p)$ matrix $Y$ has asymptotic complexity $\mathcal{O}(m(k+p)^2)$.
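This first, randomized stage can be sketched in a few lines of NumPy; the test matrix with exponentially decaying singular values is our own stand-in for the structured matrices arising in TEBD simulations.

```python
import numpy as np

# Randomized range finder (first RRSVD stage): sample the range of A
# with a Gaussian test matrix, then re-orthogonalize via QR.
rng = np.random.default_rng(1)
m, n, k, p = 200, 100, 10, 10

# Synthetic test matrix with exponentially decaying singular values.
sigma = 2.0 ** -np.arange(n)
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U0 @ np.diag(sigma) @ V0.T

Omega = rng.standard_normal((n, k + p))   # Gaussian sample matrix
Y = A @ Omega                             # dominant O(m*n*(k+p)) cost
Q, _ = np.linalg.qr(Y)                    # orthonormal basis for range(Y)

# When the spectrum decays fast, ||A - Q Q^T A|| is already tiny.
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)
```

With such a spectrum, the residual is close to $\sigma_{k+p+1}$, far below the fixed-rank target $\sigma_{k+1}$.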
When the input matrix $A$ is very large, the singular vectors associated with small singular values may interfere with the calculation.
In order to reduce their weight relative to the dominant singular values it is expedient to take powers of the original matrix $A$.
So, instead of computing $Y=A\Omega$ we compute
\begin{align} \label{eq:Z}
Z =B\Omega = (AA^\dagger)^qA\Omega.
\end{align}
The singular vectors of $B =(AA^\dagger)^q A $ are the same as the singular vectors of $A$; for the singular values of $B$, on the other hand, it holds:
\begin{align} \label{eq:powSV}
\sigma_j(B) = \llrr{\sigma_j(A)}^{2q+1},
\end{align}
which leads to the desired reduction of the influence on the computation of the singular vectors associated to small singular values.
This ``stabilizing'' step, also referred to as \emph{Power Iteration} step (PI), increases the computational cost of the first step of the RRSVD algorithm by a constant factor $(2q+1)$.
A side effect of the PI is the loss of all information pertaining to singular vectors associated with small singular values, due to the finite precision of the floating point number representation.
In order to avoid such losses, we use intermediate re-orthogonalizations (see Algorithm \ref{alg:RSI}).
We point out that in the typical context where TEBD is successfully applied, the singular values of the according matrices decay very fast, so the PI scheme must be applied.
Since the RRSVD is a randomized algorithm, the quality of the approximation comes in the form of expectation values, standard deviations and failure probabilities.
We report pertinent results that can be found (including proofs) in \cite{halko11}, as well as some particular results tuned to cases of specific relevance to applications for TEBD.
\begin{theorem} \label{th:errexpiter}
(Corollary 10.10 in \cite{halko11})
Given a target rank $k \geq 2$ and an oversampling parameter $p\geq2 $, let $Q_Z$ be the orthogonal $m \times (k+p)$ matrix consisting of the first $k+p$ left singular vectors of the matrix $Z$ defined in \eref{eq:Z}.
We define $P_Z = Q_Z Q_Z^\dagger$. Then
\begin{align}\label{eq:errexpiter}
&\mathbb{E}\llrr{||\llrr{\mathbb{I} - P_Z }A||} \leq \\
& \leq \llrrq{\llrr{1+\sqrt{\frac{k}{p-1}}} \sigma_{k+1}^{2q+1} +\frac{e \sqrt{k+p}}{p} \llrr{\sum_{j>k} \sigma_j^{2\llrr{2q+1}}}^{\!\!\frac{1}{2}}}^{\frac{1}{2q+1}} \nonumber \\
& \leq \llrrq{ 1+\sqrt{\frac{k}{p-1}}+\frac{e \sqrt{k+p}}{p} \sqrt{n-k} }^{1/\llrr{2q+1}} \sigma_{k+1}. \nonumber
\end{align}
\end{theorem}
The application of the PI scheme reduces the average error exponentially in $q$.
In order to quantify the deviations from the expected estimation error we use the following facts:
\begin{align} \label{eq:relAB}
|| \llrr{\mathcal{I} - P_Z}A||^{2q+1} \leq || \llrr{\mathcal{I} - P_Z}B||
\end{align}
(Theorem 9.2 \cite{halko11}) and
\begin{align} \label{eq:perr}
&|| \llrr{\mathcal{I} - P_Y}A|| \leq \\
&1+6 \sqrt{(k+p) \cdot p \log(p)} \sigma_{k+1} +3 \sqrt{k+p} \llrr{\sum_{j>k} \sigma_j^2}^\frac{1}{2} , \nonumber
\end{align}
with probability greater than or equal to $1- \frac{3}{p^p}$ (Corollary 10.9 \cite{halko11}).
We can now state the following
\begin{corollary} \label{th:tamaErr}
Under the same hypotheses of Theorem \ref{th:errexpiter} it is
\begin{align}
P \llrr{|| \llrr {\mathcal{I} - P_Z}A|| \leq \alpha^\frac{1}{2q+1} \sigma_{k+1} + \beta^\frac{1}{2q+1} \llrr{\sum_{j>k}\sigma_j^2}^{\!\frac{1}{2}}} \geq 1- \frac{3}{p^p},
\end{align}
with $\alpha = (1+6 \sqrt{(k+p) \cdot p \log(p)})$ and $\beta = 3 \sqrt{k+p}$.
\end{corollary}
\begin{proof}
By applying \eref{eq:perr} to $B =(AA^\dagger)^q A $, and using \eref{eq:powSV} we have
\[
P \llrr{|| \llrr{\mathcal{I} - P_Z}B|| \leq\alpha \sigma_{k+1}^{2q+1} +\beta \llrr{\sum_{j>k} (\sigma_j^{2q+1})^2}^\frac{1}{2} } \geq 1- \frac{3}{p^p}.
\]
Using the relation \eref{eq:relAB} we have that
\[
|| \llrr{\mathcal{I} - P_Z}A|| \leq \llrr{\alpha \sigma_{k+1}^{2q+1} +\beta \llrr{\sum_{j>k} (\sigma_j^{2q+1})^2}^\frac{1}{2} }^{1/(2q+1)}
\]
since the function $f(t) = t^{1/(2q+1)}$, $t \geq 0$, is concave with $f(0)=0$ and hence subadditive, it holds
\begin{align}
&\llrr{\alpha \ \sigma_{k+1}^{2q+1} +\beta \llrr{\sum_{j>k} (\sigma_j^{2q+1})^2}^\frac{1}{2} }^{1/(2q+1)} \nonumber \\
& \leq \llrr{\alpha \ \sigma_{k+1}^{2q+1}}^\frac{1}{(2q+1)} + \llrr{\beta \sqrt{\sum_{j>k} (\sigma_j^{2q+1})^2}}^\frac{1}{2q+1} \nonumber \\
&\leq \alpha^\frac{1}{2q+1} \sigma_{k+1} +\beta^\frac{1}{2q+1}\sqrt{ \sum_{j>k} \sigma_j^2} \nonumber \\
&= \alpha^\frac{1}{2q+1} ||A - \widetilde{A}_k|| +\beta^\frac{1}{2q+1}||A - \widetilde{A}_k||_F.
\end{align}
\end{proof}
The results delivered by the algorithm are usually much closer to the average value than to the estimate
given by the bound \eref{eq:errexpiter}, which therefore does not appear to be tight. However, the important
message of the preceding results is that by applying PI we obtain much better approximations of $A$
than those provided by the original scheme while error expectations and deviations are under full control.
\begin{algorithm}
\caption{\bf{Randomized SVD with Power Iterations}}
\label{alg:RSI}
\begin{algorithmic}[1]
\Require $m \times n$ matrix $A$; integers $l = k+p$ (rank of the approximation) and $q$ (number of iterations).
\State Draw an $ n \times l$ Gaussian matrix $\Omega$.
\State Form $Y_0 = A \Omega$.
\State Compute the QR factorization $Y_0 = Q_0 R_0$.
\For {$j=1, 2,\dots, q$}
\State Form $\widetilde{Y}_j = A^\dagger Q_{j-1} $.
\State Compute the QR factorization $\widetilde{Y}_j = \widetilde{Q}_j \widetilde{R}_j$.
\State Form $Y_j = A \widetilde{Q}_{j} $.
\State Compute the QR factorization $Y_j = Q_j R_j$.
\EndFor
\State \Return $Q = Q_q$.
\end{algorithmic}
\end{algorithm}
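For concreteness, a minimal NumPy transcription of Algorithm \ref{alg:RSI} might read as follows; the function name and the synthetic test spectrum are our own choices, not part of the reference implementation.

```python
import numpy as np

def randomized_range_finder(A, l, q, rng):
    """Algorithm 1: rank-l range approximation of A with q power
    iterations, re-orthogonalizing between applications of A and
    A^dagger to avoid the floating-point washout discussed in the text."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, l))        # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)             # Y_0 = A Omega, then QR
    for _ in range(q):
        Qt, _ = np.linalg.qr(A.conj().T @ Q)   # tilde-Y_j = A^dagger Q_{j-1}
        Q, _ = np.linalg.qr(A @ Qt)            # Y_j = A tilde-Q_j
    return Q

# Self-test on a matrix with exponentially decaying singular values.
rng = np.random.default_rng(2)
m, n = 300, 150
s = 2.0 ** -np.arange(n)
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U0 @ np.diag(s) @ V0.T

Q = randomized_range_finder(A, l=20, q=2, rng=rng)
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)     # small: fast spectral decay
```

The intermediate QR factorizations inside the loop play exactly the stabilizing role described above.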
It would be most convenient to have some means to check how close $QQ^\dagger A$ is to the original input matrix $A$.
With such a tool, we could check the quality of the approximation; moreover, we would be able to solve the fixed-error approximation problem \eref{eq:fixerrprob}.
In the TEBD setting, this would allow us to determine the bond dimension $\chi$ for an assigned value $\epsilon$ of the truncation error.
The solution to this problem comes from this result:
\begin{theorem} \label{th:accCheck} (equation 4.3 supported by Lemma 4.1 \cite{halko11}) With $M=(I-QQ^\dagger)A$
\begin{align}
&P\llrr{||M|| \leq 10 \sqrt{\frac{2}{\pi}} \max_{i=1,2,\ldots,r} ||M \omega^{(i)}||} \geq 1-10^{-r},
\end{align}
where $\omega^{(i)},i=1,\ldots,r$ are standard normal random vectors.
\end{theorem}
Suppose that we have completed the first three steps of Algorithm \ref{alg:RSI}.
Set $Q = Q_0$, with $rank(Q) = l=k+p$ and choose the size $r$ of the sample.
The \emph{Accuracy Check} algorithm (Algorithm \ref{alg:RAC}) takes in input $A,Q,r$ and $\epsilon$ and returns a new matrix $Q'$ that satisfies the accuracy bound with probability $1-10^{-r}$.
\begin{algorithm}[h]
\caption{\bf{Accuracy check}}
\label{alg:RAC}
\begin{algorithmic}[1]
\Require $m \times n$ matrix $A$; rank-$k+p$ projector $P_Q=QQ^\dagger$; integer $r$; tolerance $\epsilon$.
\Do
\State Set $l=k+p$.
\State Draw an $ n \times r$ Gaussian matrix $\Omega_r$.
\State Compute $B=A\Omega_r$.
\State Compute $D =(I-P_Q) A \Omega_r = \llrr{d^{(1)}, d^{(2)},\ldots,d^{(r)}}$.
\State Set MAX $= \max\left \{ ||d^{(i)}|| \right \}_{i=1}^r$.
\If{$( \text{MAX} > \epsilon) $}
\State Build $\widetilde{Q} = Q|B$.
\State Set $l = l+r$.
\State Compute the QR decomposition $\widetilde{Q} = \widetilde{Q}'\widetilde{R}'$.
\State Set $Q = \widetilde{Q}'$.
\EndIf
\doWhile($\text{MAX} >\epsilon$ and $l \leq n-r$ )
\State \Return $Q$.
\end{algorithmic}
\end{algorithm}
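The randomized error estimate of Theorem \ref{th:accCheck} is cheap to evaluate, since it never forms the residual norm exactly. The NumPy sketch below (with a test matrix of our own) compares the probe-based bound with the true operator norm of the residual.

```python
import numpy as np

# Posterior error estimate: with probability >= 1 - 10^{-r}, the norm of
# M = (I - Q Q^T) A is bounded by 10*sqrt(2/pi)*max_i ||M w_i|| over
# r Gaussian probe vectors w_i.
rng = np.random.default_rng(3)
m, n, l, r = 120, 80, 15, 10
A = rng.standard_normal((m, n)) @ np.diag(0.5 ** np.arange(n))

Q, _ = np.linalg.qr(A @ rng.standard_normal((n, l)))
M = A - Q @ (Q.T @ A)                       # residual (I - Q Q^T) A

probes = rng.standard_normal((n, r))        # r standard normal vectors
estimate = 10 * np.sqrt(2 / np.pi) * np.max(
    np.linalg.norm(M @ probes, axis=0))     # never forms ||M|| exactly
true_norm = np.linalg.norm(M, 2)
```

Evaluating the estimate costs only $r$ matrix--vector products, compared with a full SVD of the residual for the exact norm.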
The computational cost of the Accuracy Check depends on different parameters.
The cost of each iteration step is $\mathcal{O}(m \cdot l^2)$.
Then we have to consider the number of iterations.
If we take $r$ too small (e.g. $r=1$), then if the starting rank-$l$ approximation is not good enough,
we might need many iterations to converge to the desired accuracy. As a rule of thumb, we suggest
doubling the rank of the approximation at each iteration. This will likely lead to oversampling, but still
delivers a good performance balance.
Since the reference metric in TEBD is the Frobenius norm \eref{eq:frobNorm}, some estimate of
the Frobenius norm via the operator norm is required. To this end we observe that the TEBD is successful
when the correlations in the simulated system are sufficiently short-ranged, i.e. the ``singular values decay fast enough''.
If the entanglement between distant parts of the system is non-vanishing, the decimation step will lead
to an important loss of relevant information. As an example let us consider a spin-chain of
size $n$ and a bipartition $A=\{1,2, \ldots l\}, \ B = \{l+1,\ldots, n\} $ of the chain. Let $ \ket{k}_A$
and $\ket{j}_B$ be orthonormal bases for subsystems $A$ and $B$ respectively. Then the spin-chain state
\ket{\psi} can be Schmidt-decomposed as
\[
\ket{\psi} = \sum_{k,j} c_{k,j} \ket{k}_A \ket{j}_B = \sum_i \sigma_i \ket{i}_A \ket{i}_B,
\]
where in the last equality we used the SVD decomposition of the matrix $ C= \left ( c_{j,k}\right ) = U \Sigma V^\dagger$
to perform the Schmidt decomposition of \ket{\psi}. The Schmidt coefficients $\sigma_i$ are the singular values of
$C$, i.e. the diagonal elements of $\Sigma$ \cite{nielsen11}. The number of non-zero singular values is the Schmidt number.
The amount of entanglement between the subsystems $A$ and $B$ can be quantified by the von Neumann, or entanglement, entropy
\cite{PlenioVirmani2007} of the reduced density matrices $\rho_A$ and $\rho_B$
\begin{align}
S(\rho_A) = -\sum_i \sigma_i^2 \log(\sigma_i^2) = S(\rho_B).
\end{align}
The decay rate of the singular value is therefore directly related to the amount of entanglement
shared between two parts of a system. If the system is highly entangled, the singular values will
decay ``slowly''; in the limiting case where the system is maximally entangled, the reduced density
matrices will describe completely mixed states and the singular values will be all equal to each other.
When the system is only slightly entangled, the singular values will decay very fast; in particular,
if the state \ket{\psi} is separable, the Schmidt number is equal to 1 (and $\sigma_1 = 1$).
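The two limiting cases just described (separable versus maximally entangled) can be checked directly from the singular values of the coefficient matrix; the toy example below is our own illustration.

```python
import numpy as np

# Schmidt coefficients of a bipartite state are the singular values of
# its coefficient matrix c_{k,j}; the entanglement entropy is
# S = -sum_i sigma_i^2 log(sigma_i^2).
rng = np.random.default_rng(4)
dA, dB = 4, 4

# A product state: Schmidt number 1, zero entanglement entropy.
psi_A, psi_B = rng.standard_normal(dA), rng.standard_normal(dB)
C_prod = np.outer(psi_A, psi_B)
C_prod /= np.linalg.norm(C_prod)            # normalize the state
s_prod = np.linalg.svd(C_prod, compute_uv=False)

# A maximally entangled state: flat Schmidt spectrum, S = log(dA).
C_max = np.eye(dA) / np.sqrt(dA)
s_max = np.linalg.svd(C_max, compute_uv=False)

def entropy(s, eps=1e-12):
    p = s[s > eps] ** 2                     # squared Schmidt coefficients
    return -np.sum(p * np.log(p))
```

The flat spectrum of the maximally entangled state is precisely the regime in which any decimation discards a large amount of weight.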
The behavior of the entanglement entropy in systems at, or close to, the critical regime has been
thoroughly studied (see \cite{eisert08} and references therein) together with its dependence on the
decay rate of the eigenvalues of the reduced density matrices $\rho_{A,B}$ \cite{calabrese08, schuch08, horodecki13, horodecki15}.
By oversimplifying the problem, we observe that if the singular values decay as $\sigma_j = 1/\sqrt{j}$
the entanglement entropy shows, when the system size $n \to \infty$, a divergence $\log^2(n)$,
whereas if they decay as $\sigma_j = 1/j$ the entanglement entropy does converge, since an area law
holds \cite{eisert08}.
When the singular values decay as $1/j$, however, the truncation error $||A - \widetilde{A_k}||_F$ decreases slowly in $k$ and the TEBD scheme becomes inefficient since any decimation will lead to considerable approximation errors (see Eq.~\eref{eq:truncerr}).
For these reasons we consider the case of linearly decreasing singular values as an extremal case for the range of applicability of the TEBD scheme.
This observation provides a useful tool to estimate the Frobenius norm, which plays a central role in TEBD, through the operator norm.
For any matrix $A$ it is:
\begin{align}
||A|| \leq ||A||_F \leq \sqrt{rank(A)} ||A||. \nonumber
\end{align}
This result holds in general.
The inequalities are saturated, in particular, when $rank(A)=1$.
The upper bound on the Frobenius norm becomes tighter as the singular values of $A$ approach one another.
If the singular values decay at least as $1/j$, on the other hand, we have the following result.
\begin{fact}
Given a rank-$n$ matrix $A$ with singular values $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n$ with $\sigma_j \leq \sigma_1/j, j=1,\ldots,n$, it holds:
\begin{align}
||A||_F \leq \frac{\pi}{\sqrt{6}} ||A|| = \sigma_1 \frac{\pi}{\sqrt{6}}
\end{align}
where $||A||$ indicates the operator norm.
\end{fact}
\begin{proof}
\begin{align}
||A||_F = \sqrt{Tr\llrr{A^\dagger A}} = \sqrt{\sum_{i=1}^n \sigma_i^2} \le \sigma_1 \sqrt{\sum_{i=1}^n \frac{1}{i^2}}.
\end{align}
The last term is upper bounded by
\[
\sigma_1 \lim_{n \to \infty} \sqrt{\sum_{i=1}^n \frac{1}{i^2}} = \sigma_1 \frac{\pi}{\sqrt{6}}.
\]
\end{proof}
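A quick numerical sanity check of this Fact, taking $\sigma_1 = 1$ and the extremal decay $\sigma_j = 1/j$:

```python
import numpy as np

# Sanity check of the Fact: if sigma_j <= sigma_1 / j, then
# ||A||_F <= (pi / sqrt(6)) * sigma_1, because sum_j 1/j^2 <= pi^2 / 6.
n = 1000
sigma = 1.0 / np.arange(1, n + 1)      # extremal case sigma_j = sigma_1 / j
frob = np.sqrt(np.sum(sigma ** 2))     # Frobenius norm of diag(sigma)
bound = np.pi / np.sqrt(6)             # operator-norm bound with sigma_1 = 1
assert frob <= bound
```

For $n=1000$ the partial sum is already within the $\mathcal{O}(1/n)$ tail of $\pi^2/6$, so the bound is nearly saturated.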
This result finds application in the Accuracy Check routine. If we set $P_Q= QQ^\dagger$ we find
\begin{align}
||(I -P_Q)A||_F &= \sqrt{Tr\llrr{\llrr{A-P_Q A}^\dagger \llrr{A-P_Q A}}} \nonumber \\
& = \sqrt{Tr \llrr{A^\dagger A} - Tr \llrr{A^\dagger P_Q A}} \nonumber \\
&= \sqrt{||A||_F^2 - ||A^\dagger Q||_F^2}\nonumber \\
& \leq ||A||_F.
\end{align}
Therefore, when the singular values of $(I-P_Q)A$ decay at least as $1/j$,
\begin{align}
||(I-P_Q)A||_F &\leq \frac{\pi}{\sqrt{6}} ||(I-P_Q)A|| \\
& \leq 10 \sqrt{\frac{\pi}{3}} \max_{i=1,2,\ldots,r}||(I-QQ^\dagger)A \omega^{(i)}|| \nonumber
\end{align}
with probability $1-10^{-r}$.
We point out that it is not really necessary to \emph{estimate} the Frobenius norm of the error: given $Q$ we can compute $ ||(I-QQ^\dagger)A||_F$ directly.
However, one should note that the computation of $QQ^\dagger A$ requires $2 (m \cdot n \cdot k)$ floating point operations, instead of the $2 (m \cdot k \cdot r)+ m \cdot n \cdot r$ operations required to obtain the error estimate.
Now that we have the orthogonal matrix $Q$ whose columns constitute a basis for the approximated range of the input matrix $A$, we can directly compute the approximate SVD of $A$ by:
\begin{enumerate}[i)]
\item Form the matrix $B = Q^\dagger A$.
\item Compute the SVD of $B$: $B=\widetilde{U}\Sigma V^\dagger$.
\item Form the orthonormal matrix $U = Q\widetilde{U} $.
\end{enumerate}
The product $Q^\dagger A$ requires $\mathcal{O}((k+p) \cdot n \cdot m)$ floating point operations.
The SVD of $B$, under the reasonable assumption that $k+p\leq n$, requires $\mathcal{O}(n \cdot (k+p)^2)$ operations.
The product $Q\widetilde{U}$ requires $\mathcal{O}(m \cdot (k+p)^2)$ operations.
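The three steps above admit a direct NumPy sketch; the stage-one basis $Q$ is rebuilt here so that the fragment is self-contained, and the test matrix is our own.

```python
import numpy as np

# Second RRSVD stage: given the basis Q from stage one, the SVD of the
# small matrix B = Q^dagger A yields A ~ (Q U~) Sigma V^dagger at
# O(n*(k+p)^2) cost for the small SVD.
rng = np.random.default_rng(5)
m, n, l = 200, 100, 15
A = rng.standard_normal((m, n)) @ np.diag(0.3 ** np.arange(n, dtype=float))

Q, _ = np.linalg.qr(A @ rng.standard_normal((n, l)))   # stage one
B = Q.conj().T @ A                                     # l x n, small
U_t, s, Vh = np.linalg.svd(B, full_matrices=False)     # SVD of B
U = Q @ U_t                                            # lift back to m x l

approx = U @ np.diag(s) @ Vh
err = np.linalg.norm(A - approx, 2)
```

Note that $U \Sigma V^\dagger = QQ^\dagger A$ exactly, so the accuracy of the final factorization is entirely determined by the quality of the range approximation $Q$.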
We conclude this section by presenting some results on the computational cost of the whole
procedure and on some more technical aspects related to the implementation of the algorithm.
The asymptotic computational cost of partial steps has been given at various places in this section.
In summary, the real bottleneck of the complete RRSVD algorithm is the first matrix multiplication $Y=A \Omega$ which has complexity $\mathcal{O}(m \cdot n \cdot(k+p))$.
The value of $p$ can be set in advance or determined by the Accuracy Check method described above.
All remaining operations, such as QR decompositions and error estimation, have smaller computational complexity.
If the TEBD scheme is applied in a non-adaptive way, i.e. the bond dimension is kept fixed at a given value $\chi$, we use RRSVD to solve the fixed-rank problem.
In this case RRSVD has complexity $\mathcal{O}(m\cdot n \cdot \chi)$.
If the bond dimension $\chi$ is set independently of the input matrix size, the replacement of the standard SVD by RRSVD will therefore result in a speed-up linear in $n$.
On the other hand, if we use an adaptive TEBD simulation where the bond dimension is set such that some bound on the approximation error is satisfied, the cost of RRSVD will (strongly) depend on the structural properties of the input matrices.
If the singular values decay exponentially (short-range correlations), then the expected speed-up is roughly the same as for the non-adaptive scheme.
If the simulated system presents long-range correlations, then the speed-up provided by RRSVD will be less than linear in $n$, possibly even vanishing.
However, TEBD itself is not the ideal tool to deal with systems exhibiting long-range correlations, so this is only a minor limitation.
Another crucial observation, related to the implementation, is that all the operations required by RRSVD are standard functions of either the Basic Linear Algebra Subprograms (BLAS) \cite{blas} or the Linear Algebra PACKage (LAPack \cite{lapack}) libraries.
Both libraries are heavily optimized and are available for single-core, multi-core (e.g. Intel Math Kernel Library (MKL) \cite{mkl}) and Kilo-processor architectures (e.g. CuBLAS \cite{cublas} and CULA \cite{cula}).
Since TEBD simulations are usually run on many cores (on current cluster architectures often 8 or 16), RRSVD can take full advantage of the optimized libraries.
\section{RRSVD case-study and profiling of real TEBD simulations}
\label{sec:rrsvdtedopa}
We start by showing that, despite its random nature, RRSVD produces very accurate results with surprisingly small fluctuations.
To this end we test the described algorithm on a sample of relatively small structured matrices extracted from pure-state TEBD simulations of our standard system from section~\ref{sec:tedopa}, subsequently continuing to larger matrices from mixed-state TEBD simulations of the same system.
We analyze the accuracy of RRSVD and its time performance, and conclude the section by presenting how RRSVD impacts full TEBD simulations.
\subsection{Stability analysis} \label{sec:stability}
In order to benchmark the stability of the RRSVD algorithm we consider a set of $7$ diagonal $750 \times 750$ matrices $\Sigma_m,\ m=1,2,\ldots,7$.
The matrices $\Sigma_m$ are extracted from an actual pure-state TEBD simulation of our benchmark system described in section \ref{sec:tedopa}.
For every $\Sigma_m$ we generate a set of $n_A=20$ matrices $\left \{ A_{m,i}\right \}_{i=1}^{n_A}$, with $A_{m,i} = U_{m,i} \Sigma_m V_{m,i}^\dagger$, by drawing $1500 \times 750$ random orthonormal matrices $U_{m,i}$ and $750 \times 750$ random orthonormal matrices $V_{m,i}$; each $A_{m,i}$ has dimensions $1500 \times 750$.
In this way we check for the dependence of both LaPack SVD and RRSVD on different matrices with the same structural properties.
In order to take the random nature of RRSVD into account, for \emph{each} $A_{m,i}$ we perform $n_R=20$ executions of RRSVD.
In this first benchmark we provide a rank-$50$ approximation of the rank-$750$ original matrices $A_{m,i}$ and show how the
accuracy is related to the number of subspace iterations $q$. Motivated by the theorems reported in the previous section, we
set $p=50$. We compare the accuracy and timing results for different values of the iteration parameter $q=2,4,6$. The \emph{accuracy-check}
part of the algorithm is not included here: we do not estimate the difference between the original matrix $A$ and its projection on the reduced space.
By running the RRSVD on a set $\left \{A_{m,i} \right \}_{i=1}^{n_A}$ of random realizations of matrices exhibiting the same structure, i.e. the same singular values $\Sigma_m$, we check that the accuracy of RRSVD depends only, for fixed number of iterations $q$, approximation rank $k$ and oversampling parameter $p$, on the structural properties of the matrices.
Therefore, in what follows we present an analysis referring to the instance corresponding to the random realization $A_{2,1}$, which is, in every respect, a good representative of the kind of matrices we deal with when performing a TEBD simulation of a pure quantum system far from the critical regime.
In \fref{fig:figure1a} we plot the singular values of $A_{2,1}$.
It is important to notice that some of the largest singular values are very close to each other.
This is a typical situation in which truncated-SVD methods belonging to the family of Krylov-subspace iterative methods are likely to require more iterations in order to accurately resolve the singular values.
RRSVD, on the other hand, is completely insensitive to this peculiarity of the spectrum and provides very good results for the whole range of retained singular values ($k=50$) starting from $q=4$: the approximation error is comparable to that of the state-of-the-art LAPack SVD routine.
Most noticeably, none of the $n_R$ executions of RRSVD on the instance $A_{2,1}$ presents \emph{outliers}, that is to say singular values that are computed with anomalous inaccuracy (\fref{fig:figure1b}).
The behavior of the approximation error as a function of $q$ is compatible with the theoretical results stated in the previous section.
In Table \ref{tab:tabSpeedUp1} we show the speed-up $t_{SVD}/t_{RRSVD}$ when both MKL-SVD and MKL-based RRSVD (see section \ref{sec:profiling} for more information about the implementation) are executed on an Intel Xeon X5570@2.93GHz by 8 concurrent threads.
The speed-up over the MKL-SVD is obviously decreasing as $q$ and $k$ increase: for $k=p=100$ and $q=6$ almost no advantage remains in applying RRSVD instead of the standard SVD.
\begin{table}[t]
\begin{center}
\begin{tabular}{||c|| | c | c | c | c | c | c ||}
\hline \hline
k/q & 0 & 2 & 4 & 6 & 8 & 10\\ \hline \hline
50 & 11.6 & 5.4 & 3.5 & 2.6 & 2.04 & 1.7 \\ \hline
100 & 4.7 & 2.3 & 1.5 &1.1 & 0.89 & 0.69 \\ \hline \hline
\end{tabular}
\end{center}
\caption{$A_{2,1}$: RRSVD Speed-up $t_{SVD}/t_{RRSVD}$. LAPack SVD time: 3.84 s}
\label{tab:tabSpeedUp1}
\end{table}%
It is worth stressing here that RRSVD is meant to deliver a substantial advantage only for very large matrices and a comparatively small number of retained dimensions, as we will show later.
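For reference, the randomized algorithm benchmarked here can be sketched in NumPy as follows. This is our own minimal transcription of the basic scheme of Halko et al. \cite{halko11} (range finding with oversampling $p$ and $q$ power iterations, followed by a small deterministic SVD); it omits the accuracy check and all optimizations of the actual BL-/MKL-/GPU-implementations:

```python
import numpy as np

def rrsvd(A, k, p=None, q=2, rng=None):
    """Sketch of a randomized rank-k SVD: range finding with oversampling
    p and q power iterations, then an exact SVD of the small projected matrix."""
    rng = np.random.default_rng() if rng is None else rng
    p = k if p is None else p                # p = k, as in the benchmarks above
    m, n = A.shape
    omega = rng.standard_normal((n, k + p))
    Y = A @ omega                            # sample the range of A
    Q, _ = np.linalg.qr(Y)
    for _ in range(q):                       # power iterations sharpen the
        Q, _ = np.linalg.qr(A.conj().T @ Q)  # separation of the spectrum
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.conj().T @ A                       # (k+p) x n projected matrix
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vh[:k, :]
```

On test matrices with rapidly decaying spectra, moderate values of $q$ already reproduce the leading singular values to an accuracy comparable with a deterministic SVD, consistent with the benchmarks reported above.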
\begin{figure}[h]
\subfigure[]{\label{fig:figure1a} \includegraphics[width=\columnwidth]{graphSV2-eps-converted-to.pdf}}
\subfigure[]{\label{fig:figure1b} \includegraphics[width=\columnwidth]{qDep.pdf}}
\caption{Instance $A_{2,1}$: $m=1500, n=750$; $k=p=50$.
Discarded weight: $w=4 \cdot 10^{-4}$. (a) Base-10 logarithmic plot of the singular values $\Sigma_2$: the decay appears roughly exponential.
The inset shows the first $10$ singular values: some of these SVs are very close to each other (separations of about $10^{-5}$). (b) The errors $\log_{10}(|\sigma_i - \sigma_i^{RRSVD}|)$ of the RRSVD for each singular value and for all $n_R=20$ executions of RRSVD on the same instance matrix $A_{2,1}$, for different values of the iteration number $q$.
The standard MKL SVD routine errors are shown as a thick black line. }
\label{fig:figure1}
\end{figure}
\begin{figure}[h]
\subfigure[]{\label{fig:figure2a} \includegraphics[width=\columnwidth]{errCritical.pdf}}
\caption{Instance $A_c$: $m=1500,n=750$; $k=p=50$. The errors $\log_{10}(|\sigma_i - \sigma_i^{RRSVD}|)$
of the RRSVD as in \fref{fig:figure1b} but referring to the singular values of the matrix $A_c$ and for
$q \in \{0,2,4,6,8,10\}$. The discarded weight $w$ for the chosen value of $k$ is $w=1 \cdot 10^{-1}$. }
\label{fig:figure2}
\end{figure}
We now turn our attention to the TEBD extremal case discussed previously.
We consider an $m=1500$, $n=750$ random matrix $A_c$ generated, as described at the beginning of this subsection, starting from the singular values $\Sigma_c = \mathrm{diag}(\sigma_1^c,\sigma_2^c,\ldots,\sigma_n^c)$ with $\sigma_i^c = 1/i$.
As shown in \fref{fig:figure2a}, in order to provide the same accuracy delivered by LAPack on the first $k$ singular values, we need to increase the number of iterations.
For $k=50$ and $q=10$ RRSVD is still able to provide some speed-up over the LAPack SVD.
The real problem, however, is the truncation error: for $k=50$ it amounts to $\epsilon\approx10^{-1}$, while in order to achieve $\epsilon<10^{-2}$, about $650$ singular values need to be retained. This results in a major loss of efficiency of the TEBD simulation scheme.
Therefore we can claim that RRSVD is indeed a fast and reliable method, able to successfully replace the
standard SVD in the TEBD algorithm \emph{in all situations} where TEBD itself can be applied.
\subsection{Performance on larger matrices and TEBD profiling} \label{sec:profiling}
Now that the basic properties of the algorithm are established, we test it on larger
matrices sampled from mixed-state TEBD simulations of our benchmark system from section \ref{sec:tedopa}.
Given the bond dimension $\chi$ and the local dimension $d$ of the sites involved in
the two-site-update, the size of the matrix given as input to the SVD is $d^2 \chi \times d^2 \chi$.
In the following example we set $k=\chi=100$ and let the dimension of the local oscillators
take the values $3,4,5,6$ and $7$. We therefore present results for matrices of dimensions
$d_3=900 \times 900, d_4= 1600 \times1600, d_5 =2500 \times 2500$, $d_6 = 3600 \times 3600$
and $d_7 = 4900 \times 4900$. The structural properties of the test matrices considered
are similar to the non-critical instances considered in the previous subsection.
We first analyzed the results provided by the RRSVD routine on a large sample of matrices (200 instances for each dimension).
We determined that RRSVD reaches the LAPack accuracy already for $q=2$ power iteration (PI) steps.
We have developed three versions of the RRSVD algorithm: BL-, MKL- and GPU-RRSVD; each one
uses a different implementation of the BLAS and LAPack libraries. BL-RRSVD is based on a
standard single-thread implementation (CBLAS \cite{cblas}, LAPACKE \cite{lapacke}); MKL-RRSVD
uses the Intel$^ \circledR$ implementation Math Kernel Library (MKL \cite{mkl}); GPU-RRSVD
exploits CUBLAS\cite{cublas} and CULA \cite{cula}, i.e. the BLAS and LAPack libraries for
Nvidia$^\circledR$ Graphics Processing Units (GPUs). RRSVD is available for single/double
precision real/complex matrices in each version. We refer the reader to \cite{RRSVDgit}
for more details about our RRSVD implementations.
During the completion of this work, another implementation of RRSVD from one of the
authors of \cite{halko11} has been reported in \cite{martinsson15}. There the authors
present three variants of RRSVD (essentially RRSVD with and without the PI and the
Accuracy check) and discuss their performance on large (up to $6000 \times 12000$) real
matrices with slowly decaying singular values. The implementation described in \cite{martinsson15}
is available for single-, multi- and Kilo-processor architectures, as ours is, but is limited
to double precision real matrices. We are currently working on a full comparison between
our version of RRSVD and this one.
In Table \ref{tab:baretimes} we show the time required to perform the SVD/RRSVD of $d_3,d_4,d_5,d_6$ and $d_7$ double-precision complex matrices for the three implementations.
%
\begin{table*}[t]
\begin{center}
\begin{tabular}{||c|| | c |c | c | c | c | c | c | c ||}
\hline \hline
& 1-BL-SVD& 1-BL-RRSVD & 1-MKL-SVD & 1-MKL-RRSVD & 16-MKL-SVD &16-MKL-RRSVD & GPU-SVD & GPU-RRSVD\\ \hline \hline
$d_3$ & 13.27 & 6.62 & 1.47 &0.71 & 0.46 & 0.14 & 1.08 & 0.25 \\ \hline
$d_4$ & 262.47 & 21.38 & 9.31 & 1.69 & 1.92 & 0.36 & 3.92 & 0.41 \\ \hline
$d_5$ & 449.97 & 37.19 & 31.87 & 3.62 & 6.07 & 0.54 & 9.95 & 0.61 \\ \hline
$d_6$ & 1464.67 & 74.48 & 97.37 &6.97& 22.93 & 0.84 & 21.97 &0.88 \\ \hline
$d_7$ & 1973.23& 99.9 & 241.01 & 11.49 & 61.51 & 1.43 & 49.00 &1.48 \\ \hline\hline
\end{tabular}
\end{center}
\caption{Execution time, in seconds, for the SVD of matrices of different sizes with LAPack SVD and RRSVD.
The parameters for RRSVD are $q=2$, $k=p=100$.
1-BL: single-thread CBLAS-LAPACKE-based implementation, executed on a Intel i7@2.66GHz processor.
MKL: MKL-based implementation; 1-MKL: with one MKL thread, 16-MKL: with 16 MKL-threads, executed on one and two 8-core Xeon X5570@2.93GHz respectively.
GPU: CUBLAS-CULA implementation executed on a NVIDIA K20s; the timing in this case includes host-to-device and device-to-host memory transfers.}
\label{tab:baretimes}
\end{table*}%
RRSVD provides a speed-up (\fref{fig:speedup}) which is consistent with the predictions except in the 16-MKL case, where
it grows faster than linearly for matrix sizes in the range $d_3$--$d_6$. This peculiar behavior is due to the low impact
of the initial matrix multiplication $Y = A \Omega$ (and the subsequent ones) for matrices of such sizes: this operation is
heavily optimized for multi-threaded execution. The operations that determine the computational cost are then the QR and
final SVD decompositions, which have complexity $\mathcal{O}\llrr{m \cdot(k+p)^2}$. Since $k+p$ is kept constant, we observe
a quadratic speed-up. This explanation is supported, for example, by the speed-up scaling of the 1-MKL case.
However, as the matrix size increases, the speed-up will tend to become linear (see the $d_7$ case).
The performance of RRSVD provided by GPUs is comparable to that delivered by the 16-MKL setup.
Indeed, the standard SVD is faster on the GPU starting from size $d_6$.
This behavior was expected, since our test matrices are still too small to take full advantage of the Kilo-processor architecture.
Finally, we perform full TEBD simulations: for each dimension $d_i, \ i=3,4,5,6,7$ we executed one
TEBD simulation with the standard SVD routine, and another with RRSVD. We ran all the jobs on
the same cluster node, equipped with two Xeon X5570@2.93GHz with 8 cores each, so as to ensure a fair
comparison.
In \fref{fig:profile} we show the average time required by the TEBD two-site update when the standard
SVD and RRSVD are used. The overall speed-up of the two-site update (inset of \fref{fig:profile})
agrees with Amdahl's law \cite{amdahl67}. The average is taken over all maximum-sized two-site
updates performed in the TEBD simulation (more than 1500 for every dimension). When the standard
SVD is applied, it takes more than 90\% of the overall computation time; if RRSVD is employed instead,
the SVD time is drastically reduced: its computational cost is of the same order as that of the building
of the $\widetilde{\Theta}$ matrix (cf. \fref{fig:profile}). The fluctuations around the average RRSVD
time are due to the action of the Accuracy-Check: from time to time the bond dimension $\chi$ must
be increased in order to keep the approximation error below the threshold value $\epsilon = 10^{-3}$
required in the simulation. According to Corollary \ref{th:tamaErr}, for the choice $p=100$ and $q=2$,
such an increase is motivated only by the decay rate of the singular values: the failure probability
is smaller than $1/10^{100}$. This is confirmed by an \emph{a posteriori} analysis of the matrices
produced during our test TEBD simulations that required an extension of the bond dimension. The
rank of the approximation proposed by RRSVD, for assigned tolerance $\epsilon$, can therefore be
used to detect either an entanglement build-up in the system or an (unlikely) anomalous behavior of the RRSVD itself.
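The Accuracy-Check invoked here can be realized with a handful of matrix-vector products. The sketch below is our own transcription of the random-probe a posteriori estimator of Halko et al. \cite{halko11}; the actual check in the RRSVD-Package may differ in its details:

```python
import numpy as np

def posterior_error_bound(A, Q, r=10, rng=None):
    """Probabilistic upper bound on ||(I - Q Q*) A||_2 from r Gaussian probes:
    the true norm exceeds the returned value with probability at most 10^(-r)
    (the a posteriori estimator of Halko et al.)."""
    rng = np.random.default_rng() if rng is None else rng
    omega = rng.standard_normal((A.shape[1], r))
    Aw = A @ omega
    resid = Aw - Q @ (Q.conj().T @ Aw)       # (I - Q Q*) A applied to probes
    return 10.0 * np.sqrt(2.0 / np.pi) * np.linalg.norm(resid, axis=0).max()
```

Whenever such a bound exceeds the tolerance $\epsilon$, the number of retained dimensions is enlarged, exactly as described for the bond dimension $\chi$ above.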
\begin{figure}[t]
\includegraphics[width=\columnwidth]{speedup-eps-converted-to.pdf}
\caption{$k=p=100$; $q=2$. Scaling of the SVD/RRSVD speed-up for different platforms as a function of the matrix size, as obtained from the data reported in Table \ref{tab:baretimes}.}
\label{fig:speedup}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{componentspeed.pdf}
\caption{\label{fig:profile} Profile of the average time required by the SVD during a TEBD update step; left bars are for the RRSVD,
right bars for the standard SVD. The data refers to the MKL implementation executed with $16$ MKL-threads. The (unbiased) standard deviation
of the execution time for the singular value decomposition part of the update step is shown as an error bar on top of each bar.
The inset presents the overall speed-up for one complete TEBD two-site update achieved by the RRSVD.}
\end{figure}
\section{Conclusions and Outlook}
The TEBD algorithm, an important tool for the numerical description of one-dimensional quantum many-body
systems, depends essentially on the SVD which, in current implementations, represents the bottleneck. We
have demonstrated that this bottleneck can be addressed by the successful application of the RRSVD algorithm
to TEBD simulations. The block decimation step of the TEBD update procedure is now approximately one order
of magnitude faster than with the standard SVD without incurring additional accuracy losses. We note that
in our test cases we have always chosen the RRSVD parameters such that we obtain singular values (and vectors)
which were as precise as those provided by the standard (LAPack) SVD routine. By relaxing this requirement,
the speed-up can be increased further. Moreover, by augmenting RRSVD with the Accuracy Check feature we are
able not only to detect any very unlikely deviations from the average behavior, but also to recognize whether
the system is experiencing an entanglement build-up, which would require an enlargement of the retained Hilbert space.
In this paper we focused on the TEBD algorithm and its application to the one-dimensional system obtained
through a TEDOPA mapping of an open quantum system (section \ref{sec:tedopa}). In this context, RRSVD makes
it possible to increase the dimension of the local oscillators with a much reduced impact on the computational
time, thus allowing for the efficient simulation of the system at higher temperatures. However, all MPS algorithms
relying on the SVD to select the relevant Hilbert space can greatly benefit from the use of the RRSVD routine,
as long as the ratio between the number of retained and total singular values is sufficiently small.
The real scope and impact of this new computational tool are still to be fully understood. To this end, we
prepared the \emph{RRSVD-Package}, a library that provides the RRSVD routine for single-/double-precision
and real-/complex-matrices. The package includes all the different implementations (BL, MKL, GPU) of RRSVD
and has been designed to be plugged into existing codes through very minor code modifications: in principle,
it suffices to replace each call to SVD by a call to RRSVD. This should allow for a very quick test of RRSVD
in different simulation codes and scenarios. The library is written in C++; a Fortran wrapper that allows
the RRSVD routine to be called from Fortran code is included as well. Some of the matrices used for the analysis
of RRSVD in this paper are available, together with some stand-alone code that exemplifies the use of RRSVD
and how to reproduce some of the results reported in this paper. The RRSVD-Package is freely available at
\cite{RRSVDgit}.
\\[5pt]
The results obtained for the GPU implementation are rather promising: for the largest matrices considered
($d_5,d_6,d_7$) the GPU performs as well as the cluster node. Preliminary results on even larger matrices
show that a GPU can become a valid alternative means to perform TEBD simulations of systems with high local
dimensions, or when the number of retained dimensions must be increased because of larger correlation lengths.
Moreover, if operated in the right way, a GPU can act as a large cluster of machines \cite{tama14} without
the difficulties stemming from the need to distribute the workload among different computational nodes
via the Message Passing Interface (MPI). A full GPU version of TEBD can make access to super-computing facilities
superfluous: a typical laboratory workstation equipped with one or two GPUs would be sufficient. We are
currently re-analyzing the TEBD algorithm to expose further options for parallelization, as for example in
the construction of the $\widetilde{\Theta}$ matrix. It could be computed by an ad-hoc designed CUDA-kernel
and is a valid target for improvement now that its computational complexity is similar to that of the SVD.
\section*{Acknowledgements}
This work was supported by an Alexander von Humboldt-Professorship, the EU Integrating project SIQS, the
EU STREP projects PAPETS and EQUAM, and the ERC Synergy grant BioQ. The simulations were performed on the
computational resource bwUniCluster funded by the Ministry of Science, Research and Arts and the Universities
of the State of Baden-W\"urttemberg, Germany, within the framework program bwHPC.
\section{An algorithm for one-dimensional quantum systems}
\label{sec:tebd}
The time evolving block decimation (TEBD) algorithm efficiently generates
an approximation to the time evolution of a one-dimensional
system subject to a nearest-neighbor Hamiltonian. Under the condition
that the amount of entanglement in the system is bounded, a high-fidelity
approximation requires only polynomially scaling computational resources. TEBD
achieves this by dynamically restricting the exponentially large Hilbert space
to its most relevant subspace, whose size scales polynomially in the
system size, thus rendering the computation feasible \cite{ref:schollwoeck2011, ref:vidal2004}.
TEBD is essentially a combination of an MPS description for a one-dimensional
quantum system and an algorithm that applies two-site gates that are necessary
to implement a Suzuki-Trotter time evolution. Together with MPS operations such
as the application of measurements this yields a powerful simulation framework
\cite{ref:perezgarcia2007}.
While originally formulated for pure states, an extension to mixed states is
possible by introducing a matrix product operator (MPO) to describe the density
matrix, in complete analogy to an MPS describing a state \cite{ref:zwolak2004}.
The simulation procedure remains unchanged, except for details such as a squaring
of the local dimension on each site, the manner in which two-site gates are
built, the procedures to achieve normalisation as well as the implementation
of measurements \cite{ref:zwolak2004, ref:schollwoeck2011}. While standard
MPO formulations cannot ensure positivity of the state efficiently, recent
reformulations can account for this feature too \cite{Montangero14}.
Important for the present work are the implications of these modifications for the
two-site update: while the numerical recipe does not change, its scaling {\em does}.
\subsection{Introduction to MPS}
The remarkable usefulness and broad range of applications of the MPS
description for quantum states has been widely recognized early on in
the development of DMRG algorithms \cite{ref:rommer1997}. To better
understand the full extent of the presented work, we highlight the
relevant key features of MPS, referring to references~\cite{ref:perezgarcia2007,ref:schollwoeck2011}
for a full account.
Let us introduce an MPS for a pure state of $N$ sites. For simplicity
we assume that each site has the same local dimension~$d$; the
extension to varying dimensions is straightforward. The MPS then
relates the expansion coefficients $c_{i_1 i_2 \ldots i_N}$ in the Fock
basis to a set of $N\cdot d$~matrices $\Gamma$ and $N-1$ matrices
$\lambda$
\begin{align}
\ket{\psi} &= \sum_{i_1, i_2, \ldots i_N}
c_{i_1 i_2 \ldots i_N}
\ket{i_1 i_2 \cdots i_N}
\label{eq:defstate} \\
&= \sum_{i_1, i_2, \ldots i_N}
\Gamma^{\llrrq{1}i_1} \cdot \lambda^{\llrrq{1}}
\cdot \Gamma^{\llrrq{2}i_2} \cdot \ldots \cdot \nonumber \\
& \hspace{2.cm}\ldots \lambda^{\llrrq{N-1}} \Gamma^{\llrrq{N}i_N}
\ket{i_1 i_2 \cdots i_N}.
\label{eq:defmps}
\end{align}
Each of the $N$ sites is assigned a set of $d$ matrices~$\Gamma$ which
have dimension $\chi_l\times\chi_r$. The index~$k$ in square brackets
denotes the corresponding site and the $i_k$ the corresponding physical
state. The diagonal $\chi_b\times\chi_b$ matrices $\lambda$ are assigned to
the bond~$k$ between sites~$k$ and $k+1$. The structure of the MPS is
such that the matrices $\lambda$ contain the Schmidt values for a
bipartition at this bond. The matrices~$\Gamma$ and $\lambda$ are
related to the coefficients~$c$ by
\begin{equation}
c_{i_1 i_2 \ldots i_N} = \Gamma^{\llrrq{1}i_1} \cdot
\lambda^{\llrrq{1}} \cdot \Gamma^{\llrrq{2}i_2} \cdot
\ldots \cdot \lambda^{\llrrq{N-1}} \cdot
\Gamma^{\llrrq{N}i_N}
\label{eq:relcoefmat}
\end{equation}
with matrix dimensions $\chi_l$, $\chi_r$, and $\chi_b$ for all $\Gamma$
and $\lambda$ such that building the product yields a number. The main
reason for employing an MPS description is the reduction from $d^N$
coefficients~$c$ to only $\mathcal{O}\llrr{d N \chi^2}$ when the matrices
$\Gamma$ and $\lambda$ are at most of size $\chi\times\chi$. This description
is efficient, provided that the matrix size $\chi$ (also known as the
{\em bond dimension}) is restricted - which it is, if the amount of
entanglement in the system is bounded. Further, the MPS structure entails that
local gates applied to the whole state only change the matrices of the
sites they act on - thus updates are inexpensive, as opposed to a full
state vector description.
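The gain is easy to quantify. The following sketch (parameter values chosen only for illustration) counts the coefficients of a full state vector against those of the MPS form above:

```python
# Parameter count of a full state vector versus an MPS with N sites,
# uniform local dimension d and bond dimension chi (illustrative values).
def full_state_parameters(d, N):
    return d ** N                          # one coefficient c per basis state

def mps_parameters(d, N, chi):
    gammas = N * d * chi * chi             # N sets of d Gamma matrices
    lambdas = (N - 1) * chi                # N - 1 diagonal lambda matrices
    return gammas + lambdas

d, N, chi = 2, 50, 100
print(full_state_parameters(d, N))         # 1125899906842624 (~ 10^15)
print(mps_parameters(d, N, chi))           # 1004900 (~ 10^6)
```

For a spin chain of $50$ sites the reduction is thus roughly nine orders of magnitude, which is what makes the MPS description computationally viable in the first place.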
\subsection{Two-site gates}
The crucial ingredient in this simulation scheme is the SVD. It is the
solution to the question of how to apply a two-site gate to an MPS or MPO.
The fact that a gate $G$ acting on the two sites $k$ and $k+1$ only
changes the matrices local to these sites can easily be seen from
\begin{align}
G_{k,k+1} \ket{\psi}
= &\sum_{i_1 \ldots i_N}
\Gamma^{\llrrq{1}i_1} \cdot \lambda^{\llrrq{1}} \cdot \ldots
\cdot \Gamma^{\llrrq{N}i_N} \\
& \ket{i_1 \ldots i_{k-1}} G_{k,k+1} \ket{i_k i_{k+1}} \ket{i_{k+2} \cdots i_N} \nonumber \\
=& \sum_{i_1 \ldots i_N} \sum_{j_k, j_{k+1}}
\Gamma^{\llrrq{1}i_1} \cdot \lambda^{\llrrq{1}} \cdot \ldots \cdot \Gamma^{\llrrq{N}i_N}\nonumber \\
& \bra{i_k i_{k+1}}
G_{k,k+1} \ket{j_k j_{k+1}} \ket{i_1 \ldots i_N}.
\label{eq:deftsgate}
\end{align}
Above we first introduced the completeness relation for $j_k$ and
$j_{k+1}$, followed by switching indices $i_k$ with $j_k$ and $i_{k+1}$
with $j_{k+1}$. Identifying all terms related to $j_k$ and $j_{k+1}$
now defines a tensor $\Theta$ of fourth order
\begin{equation}
\Theta\llrr{i_k, i_{k+1}, \alpha, \beta} =
\sum_{\gamma} \lambda_{\alpha}^{\llrrq{k-1}}
\Gamma_{\alpha,\gamma}^{\llrrq{k}i_k} \cdot
\lambda_{\gamma}^{\llrrq{k}} \cdot
\Gamma_{\gamma,\beta}^{\llrrq{k+1}i_{k+1}} \cdot
\lambda_{\beta}^{\llrrq{k+1}}
\label{eq:deftheta}
\end{equation}
which can be built at a numerical cost of $\mathcal{O}\llrr{d_k \cdot
d_{k+1} \cdot \chi^3}$ basic operations. This tensor needs to be
updated when applying the gate~$G$. The update rule from
Eq.~\eqref{eq:deftsgate} yields the relation
\begin{align}
\tilde{\Theta}\llrr{i_k, i_{k+1}, \alpha, \beta} =&
\sum_{j_k, j_{k+1}}
\bra{i_k i_{k+1}} G_{k,k+1} \ket{j_k j_{k+1}} \cdot \nonumber \\
&\cdot \Theta\llrr{j_k, j_{k+1}, \alpha, \beta}.
\label{eq:defupdatetheta}
\end{align}
This sum is performed for all $\alpha$ and $\beta$ - which in general
run from $1$ to $\chi$. Thus there are $d_k\times d_{k+1}$ products,
which gives the number of basic operations for the update of $\Theta$
as $\mathcal{O}\llrr{d_k^2 \cdot d_{k+1}^2 \cdot \chi^2}$
\cite{ref:vidal2003}. This formula however only enables the update of
the complete matrix $\Theta$ and not of the single entities
$\Gamma^{\llrrq{k}}$, $\lambda^{\llrrq{k}}$ and $\Gamma^{\llrrq{k+1}}$.
They still have to be extracted from these updated products. To do
this, $\Theta$ is written in a blocked index form
$\Theta_{\llrr{d_k\chi},\llrr{d_{k+1}\chi}}$ which then is singular
value-decomposed \cite{ref:vidal2003}. The general singular value
decomposition (SVD) scales as $\mathcal{O}\llrr{m\cdot n^2}$ for an
$m\times n$ matrix, thus resulting in an overall computational cost of
$\mathcal{O}\llrr{d_k \cdot d_{k+1}^2 \cdot \chi^3}$. This makes the SVD
the real bottleneck in the simulation, consuming by far the most resources,
and therefore the first target for improvements.
Here we stick to the (unofficial) standard notation for MPS parameters
in the context of the TEBD algorithm and denote the diagonal matrices
as well as their entries by $\lambda$. During our discussion of the
singular value decomposition though we switch to the respective
notational standards where the $i$'th singular value will be denoted by
$\sigma_i$ - which however is the same quantity as the $\lambda_i$ in
this section.
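Putting the pieces of this subsection together, a bare-bones NumPy version of the two-site update might look as follows. This is a schematic sketch under simplified conventions (the $\lambda$ matrices are assumed to be already absorbed into $\Theta$, and the reabsorption of their inverses into the new $\Gamma$ tensors after the SVD is omitted):

```python
import numpy as np

def two_site_update(theta, G, chi_max):
    """Schematic TEBD two-site update: apply the gate to the order-4 tensor
    Theta[i_k, i_k1, a, b], reshape to the blocked (d*chi) x (d*chi) matrix,
    perform a truncated SVD and return the new tensors and Schmidt values."""
    dk, dk1, chl, chr = theta.shape
    # Apply the gate: contract over (j_k, j_k1) as in the update rule above.
    G4 = G.reshape(dk, dk1, dk, dk1)
    theta_t = np.einsum('ijkl,klab->ijab', G4, theta)
    # Blocked index form Theta_{(i_k a),(i_k1 b)} and its SVD.
    M = theta_t.transpose(0, 2, 1, 3).reshape(dk * chl, dk1 * chr)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    chi = min(chi_max, s.size)
    w = (s[chi:] ** 2).sum() / (s ** 2).sum()      # discarded weight
    s_t = s[:chi] / np.sqrt((s[:chi] ** 2).sum())  # renormalized Schmidt values
    A = U[:, :chi].reshape(dk, chl, chi)           # new left tensor
    B = Vh[:chi, :].reshape(chi, dk1, chr)         # new right tensor
    return A, s_t, B, w
```

The truncated SVD in the middle is precisely the call that RRSVD is designed to replace when $\chi_{\max}$ is small compared to the blocked matrix dimensions.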
\subsection{Error analysis}
During a TEBD simulation, the two main error sources are the Trotter
and the truncation error. Other small error sources, often depending on
implementational or algorithmical choices, are neglected in the
following analysis.
TEBD relies heavily on the nearest-neighbor structure of the underlying
Hamiltonian to implement time evolution. Traditionally this is done by
standard Suzuki-Trotter decompositions \cite{ref:suzuki1990} where the
total Hamiltonian is split into two terms $H=F+G$ with each $F$ and $G$
are the sum over all even and odd (respectively) terms of the
Hamiltonian. This way all terms within $F$ ($G$) commute with each
other, incurring no error while applying an operator of the form
$\text{exp}\llrr{\alpha F}$ ($\alpha \in \mathbb{C}$). The most
straightforward and illustrative example is the standard $3$rd-order
expansion
\begin{align}\label{eq:defst3rd}
&\text{exp}\llrr{i H \delta t} =
\text{exp}\llrr{i \llrr{F+G} \delta t} = \\
&=\text{exp}\llrr{i \frac{1}{2} F \delta t} \cdot
\text{exp}\llrr{i G \delta t} \cdot
\text{exp}\llrr{i \frac{1}{2} F \delta t} +
\mathcal{O}\llrr{\llrr{\delta t}^3}. \nonumber
\end{align}
This leads to three sweeps ($\frac{1}{2}F$, $G$, $\frac{1}{2}F$) of
non-overlapping gate applications, comprising one time step. Various
higher-order schemes for such decompositions exists, differing in the
number of sweeps and the order of the resulting error \cite{ref:suzuki2005}.
However, these kinds of schemes may introduce non-orthogonal components
since the order of gate applications is not successive. This can be
circumvented by resorting to schemes that produce sweeps with
ordered, successive gate applications \cite{ref:sornborger1999}.
A decomposition to order $p$ introduces an error of order
$\epsilon_{\delta t}=\llrr{\delta t}^{p+1}$. The error incurred in one
time step in general scales linearly with the system size~$N$. This is
due to the nested commutators occurring in the error term of the
Suzuki-Trotter decomposition as can be seen when applying the
Baker-Campbell-Hausdorff formula. Since the number of time steps taken
is the total time $T$ divided by the step size, $T/\delta t$,
the total Trotter error $\epsilon_{trotter}$ is of order
$\mathcal{O}\llrr{\llrr{\delta t}^{p}NT}$ \cite{ref:gobert2005}.
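The stated scaling is easy to check numerically. In the toy example below (our own construction, with Pauli matrices standing in for the even and odd parts $F$ and $G$), halving $\delta t$ reduces the per-step error of the symmetric decomposition in Eq.~\eqref{eq:defst3rd} by a factor of about $2^3=8$:

```python
import numpy as np

def expm_h(H, z):
    """Matrix exponential exp(z * H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(z * w)) @ V.conj().T

# Toy stand-ins for the even/odd parts F and G of the Hamiltonian
# (Pauli matrices; any pair of non-commuting Hermitian matrices works).
F = np.array([[0.0, 1.0], [1.0, 0.0]])
G = np.array([[1.0, 0.0], [0.0, -1.0]])

def trotter_step(dt):
    """Symmetric splitting exp(i F dt/2) exp(i G dt) exp(i F dt/2)."""
    half = expm_h(F, 0.5j * dt)
    return half @ expm_h(G, 1j * dt) @ half

def step_error(dt):
    """Spectral-norm distance to the exact propagator exp(i (F+G) dt)."""
    return np.linalg.norm(trotter_step(dt) - expm_h(F + G, 1j * dt), 2)

# A local error of order (dt)^3 means halving dt shrinks it by about 8.
print(step_error(0.1) / step_error(0.05))
```

The same experiment with a first-order splitting would give a reduction factor of about $4$, reflecting its local error of order $\llrr{\delta t}^2$.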
The second considered error source is the truncation error. It stems
from the truncation of the Schmidt values during the application of a
two-site gate. Employing the Schmidt decomposition, a bipartite state
can be written as
\begin{align}
\ket{\psi} &= \sum_{i=1}^{\chi '} \lambda_i
\ket{\psi_i^{\text{left}}} \ket{\psi_i^{\text{right}}} +
\sum_{i=\chi '+1}^{\chi} \lambda_i \ket{\psi_i^{\text{left}}} \ket{\psi_i^{\text{right}}} \nonumber \\
&= \ket{\psi_{\text{trunc}}} + \ket{\psi_{\bot}}
\label{eq:bipartite}
\end{align}
where the left sum until $\chi '$ denotes the kept part
$\ket{\psi_{\text{trunc}}}$ and the right sum starting from $\chi '+1$
denotes the discarded part $\ket{\psi_{\bot}}$. Due to the fact
that the $\ket{\psi_i^{\text{left}}}$ are mutually orthogonal (as are
those of the right subsystem), the discarded part is orthogonal to the
retained. Given that the squared Schmidt values sum up to $1$, this
truncation leads to a deviation in the norm of the state
\begin{equation}
\braket{\psi_{\text{trunc}}}{\psi_{\text{trunc}}} =
1 - \sum_{i=\chi '+1}^{\chi} \lambda_i^2 = 1 - w
\label{eq:defdiscweigt}
\end{equation}
where we defined the {\em discarded weight} $w=\sum_{i=\chi '+1}^{\chi}
\lambda_i^2$. When renormalizing $\ket{{\psi_{\text{trunc}}}}$ we thus
pick up a factor of $1/\llrr{1-w}$, and after $n_t$ truncations we are
off by a factor of about $\llrr{1-w}^{n_t}$, with $n_t$ being the number
of truncations performed. Truncating each bond in each time step
results in $n_t \propto \frac{NT}{\delta t}$ and thus the truncation
error is about
\begin{equation}
\epsilon_{\text{trunc}} = \llrr{1-w}^{\frac{NT}{\delta t}} =
\text{exp}\llrr{\frac{NT}{\delta t}\text{ln}\llrr{1-w}}
\label{eq:truncerr}
\end{equation}
Thus we end up with a careful balancing of the two errors, depending on
the size of the time step $\delta t$. For smaller $\delta t$ we have a
smaller Trotter error. Yet this requires more truncations due to the
larger number of time steps taken and thus results in a larger truncation
error.
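The norm deviation of Eq.~\eqref{eq:defdiscweigt} can be illustrated directly (a small self-contained example with an arbitrary random bipartite state; the sizes are ours):

```python
import numpy as np

# A random (normalized) bipartite state psi on a 32 x 32 Hilbert space;
# its SVD yields the Schmidt values lambda_i.
rng = np.random.default_rng(1)
psi = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
psi /= np.linalg.norm(psi)

U, lam, Vh = np.linalg.svd(psi)
chi_t = 8                                       # retained Schmidt rank chi'
w = (lam[chi_t:] ** 2).sum()                    # discarded weight
psi_trunc = (U[:, :chi_t] * lam[:chi_t]) @ Vh[:chi_t, :]

# <psi_trunc|psi_trunc> = 1 - w, and the discarded part is orthogonal to it.
print(np.isclose(np.linalg.norm(psi_trunc) ** 2, 1.0 - w))
print(np.isclose(abs(np.vdot(psi - psi_trunc, psi_trunc)), 0.0))
```

Both checks hold to machine precision, since the discarded Schmidt vectors are exactly orthogonal to the retained ones.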
\section{An advanced application of the TEBD algorithm}
\label{sec:tedopa}
The TEBD algorithm is remarkably useful also in scenarios which at
first seem to be quite different from quantum many-body systems. One
such example is its usage in the time evolving density matrix using
orthogonal polynomials algorithm (TEDOPA), which is capable of treating open quantum
systems. We briefly present the TEDOPA scheme to show in which regimes
RRSVD proves to be most useful and how it speeds up previous simulations,
and refer to \cite{ref:prior2010, ref:chin2010, ref:woods2014} for a
more detailed presentation of the algorithm.
TEDOPA is a certifiable and numerically exact method to treat open
quantum system dynamics \cite{ref:prior2010, ref:woods2014,
ref:woods2015}. It acts upon a spin-boson model description of an open
quantum system where a central spin interacts linearly with an
environment modelled by harmonic oscillators. In a two-stage process
TEDOPA then first employs a unitary transformation reshaping the
spin-boson model into a one-dimensional configuration. In a second step
this emerging configuration is treated by TEBD, exploiting its full
simulation power for one-dimensional systems.
The total Hamiltonian is split into system, environment and interaction
part
\begin{align}
&H = H_{\text{sys}} + H_{\text{env}} + H_{\text{int}},
\label{eq:H_sb1}\\
&H_{\text{env}} = \int_0^{x_{\text{max}}} \!\!\!\!dx~g\llrr{x}
a_x^\dagger a_x,
\label{eq:H_sb2}\\
& H_{\text{int}} = \int_0^{x_{\text{max}}} \!\!\!\!dx~h\llrr{x}
\llrr{a_x^\dagger + a_x} A .
\label{eq:H_sb3}
\end{align}
The bosonic creation and annihilation operators $a_x^\dagger$ and $a_x$
fulfill the usual bosonic commutation relations for the environmental
mode~$x$. The function $g\llrr{x}$ can be identified with the
environmental dispersion relation; the function $h\llrr{x}$ gives the
system-environment coupling strength for mode $x$ between its
displacement $\llrr{a_x^\dagger + a_x}$ and the operator $A$ acting on the
system.
Here the functions $g\llrr{x}$ and $h\llrr{x}$, together with the
temperature, uniquely characterize an environment and define the
spectral density $J\llrr{\omega}$ given by
\begin{equation}
J\llrr{\omega} = \pi h^2\llrrq{g^{-1}\llrr{\omega}}
\frac{dg^{-1}\llrr{\omega}}{d\omega}.
\label{eq:def-sd}
\end{equation}
The interpretation of the quantity
$\llrr{dg^{-1}\llrr{\omega}/d\omega}\delta\omega$ is the number of
quantised modes with frequencies between $\omega$ and $\omega +
\delta\omega$ (for $\delta\omega\rightarrow0$). Further, $g$ satisfies
the relations $g^{-1}\llrrq{g\llrr{x}} = g\llrrq{g^{-1}\llrr{x}} = x$.
Then new oscillators with creation and annihilation operators
$b_n^\dagger$ and $b_n$ can be obtained by defining the analytical
transformation~$U_n\llrr{x}$ as
\begin{align}
&U_n\llrr{x} = h\llrr{x} p_n\llrr{x}, \\
&b_n^\dagger =
\int_0^{x_{\text{max}}} \!\!\!\!dx U_n\llrr{x} a_x^\dagger.
\label{eq:chainmap}
\end{align}
It utilizes the orthogonal polynomials $p_n\llrr{x}$ defined with
respect to the measure $d\mu\llrr{x}=h^2\llrr{x}dx$. While in certain
cases it is possible to perform this transformation analytically, in
general a numerically stable procedure is used
\cite{ref:chin2010, ref:prior2010, ref:gautschi1994}.
This transformation yields a semi-infinite one-dimensional
nearest-neighbor Hamiltonian
\begin{align}
H =&
H_{\text{sys}} +
t_0 A \llrr{b_0 + b_0^\dagger} +
\sum_{n=0}^\infty \omega_n b_n^\dagger b_n \nonumber \\
& \hspace*{1.5cm} +\sum_{n=0}^\infty t_{n+1}
\llrr{b_n^\dagger b_{n+1} + b_n b_{n+1}^\dagger}
\label{eq:H_1D}
\end{align}
whose nearest-neighbor geometry (which is necessary for the
application of TEBD) as well as its coefficients $\omega_n$ and
$t_n$ are directly related to the recurrence coefficients of the
three-term recurrence relation satisfied by the orthogonal polynomials
$p_n\llrr{x}$ with respect to the measure $d\mu\llrr{x}=h^2\llrr{x}dx$
\cite{ref:chin2010}.
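The recurrence coefficients can be generated with the numerically stable Stieltjes procedure mentioned above. The following sketch (our own minimal implementation; the discretization of the measure and all names are assumptions) computes the coefficients $\alpha_n,\beta_n$ of the orthonormal polynomials for a measure given on a grid; $\omega_n$ and $t_n$ are then obtained from $\alpha_n,\beta_n$ following the conventions of \cite{ref:chin2010}.

```python
import numpy as np

def stieltjes(x, w, n_max):
    """Recurrence coefficients alpha_n, beta_n of the orthonormal polynomials
    for the discrete measure sum_i w[i] * delta(x - x[i]); for the chain
    mapping one would take w[i] ~ h(x[i])**2 * dx."""
    alphas = np.empty(n_max)
    betas = np.empty(n_max + 1)
    betas[0] = w.sum()                        # beta_0: total mass of the measure
    p_prev = np.zeros_like(x)                 # p_{-1} = 0
    p = np.ones_like(x) / np.sqrt(betas[0])   # normalized p_0
    for n in range(n_max):
        alphas[n] = np.sum(w * x * p * p)     # alpha_n = <x p_n, p_n>
        q = (x - alphas[n]) * p - np.sqrt(betas[n]) * p_prev
        betas[n + 1] = np.sum(w * q * q)      # beta_{n+1} = ||q||^2
        p_prev, p = p, q / np.sqrt(betas[n + 1])
    return alphas, betas
```

For a flat coupling $h\equiv 1$ on $[0,1]$ one recovers the recurrence coefficients of the shifted Legendre polynomials, $\alpha_n=1/2$, $\beta_1=1/12$, $\beta_2=1/15$.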
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=0.9\columnwidth]{tedopa}
\caption{Illustration of the spin-boson model's transformation
into a one-dimensional configuration in which the system is only
coupled to the environment's first site.}
\label{fig:tedopa}
\end{center}
\end{figure}
This transformation of the configuration is depicted in Fig.~\ref{fig:tedopa},
from the spin-boson model on the left to a one-dimensional geometry on the right.
As a last step it is necessary to adjust this configuration further to suit
numerical needs. The number of levels of the environment's oscillator on
site~$k$ can be restricted to $d_{k\text{,max}}$ to reduce the required computational
resources. A suitable value for $d_{k\text{,max}}$ is related to this
site's average occupation, which depends on the environment's structure and
temperature. The number of sites required to faithfully represent
the environment has to be large enough for the chain to act as a
``large'' reservoir -- one that avoids unphysical back-action on the system
due to finite-size effects, which lead to reflections at the chain boundary
(see \cite{Rosenbach15} for extensions of TEDOPA that alleviate this
problem considerably). It should be noted that these truncations, while numerically
expedient and easy to justify heuristically, can
also be {\em rigorously certified} by analytical bounds \cite{ref:woods2015}.
These adjustments yield an emerging system-environment configuration which is
now conveniently accessible by TEBD.
\section{Introduction}
In this article we study the problem of robust exponential utility maximization
in discrete time.
Here the term robust reflects uncertainty about the true probabilistic model
and the consideration of a whole family of models as a consequence.
This is not a new concept and since the seminal papers
\cite{gilboa1989maxmin} and \cite{maccheroni2006ambiguity}
it has gained a lot of attention,
see e.g.~\cite{acciaio2013model,beiglbock2013model,beiglbock2015complete,bouchard2015arbitrage,burzoni2016pointwise,cheridito2015representation,denis2013optimal,hobson1998robust,neufeld2015robust,nutz2014utility,peng2007g}
and \cite{cheridito2016duality} for an overview.
To state our problem more precisely, given the exponential utility function
\[ U(x):=-\exp(-\gamma x) \]
with risk-aversion parameter $\gamma>0$, a possibly non-dominated set of probabilistic models
$\mathcal{P}$ and the agent's random endowment $X$,
we are interested in the optimization problem
\begin{align}
\label{eq:intro.primal}
\sup_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}\inf_{P\in\mathcal{P}} E_P[ U(X+(\vartheta\cdot S)_T+\alpha(g-g_0))].
\end{align}
Here $g^1,\dots,g^e$ are traded options available for buying and selling at time 0 for the prices $g_0^1,\dots,g_0^e$, the set $\Theta$ consists of all predictable dynamic trading strategies for the (discounted) stock $S$,
and $(\vartheta\cdot S)_T+\alpha(g-g_0)$ is the outcome of a semi-static trading strategy
$(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e$.
The first immediate question when investigating the optimization problem \eqref{eq:intro.primal}
is whether an optimal strategy $(\vartheta,\alpha)$
(which should be defined simultaneously under all models $P\in\mathcal{P}$) exists.
Due to the absence of a dominating measure capturing all zero sets and the consequent
failure of classical arguments such as Komlos' theorem, this is non-trivial.
Our second interest lies in the validity of a dual representation with respect to linear pricing measures,
namely if \eqref{eq:intro.primal} is equal to
\[ -\exp\big(-\inf_{Q\in\mathcal{M}} \big( \gamma E_Q[X] + H(Q,\mathcal{P}) \big) \big),\]
where $\mathcal{M}$ denotes the set of all martingale measures $Q$ for the stock $S$ calibrated to the options
(i.e.~$E_Q[g^i]=g^i_0$ for $1\leq i\leq e$) under which the robust entropy $H(Q,\mathcal{P})$ is finite.
Finally we study whether \eqref{eq:intro.primal} satisfies the dynamic programming
principle (in the case without options),
meaning that it is possible to analyze the problem locally
and later ``glue" everything together. In particular this implies that a strategy
which is optimal at time 0 remains optimal if one starts to solve the optimization
problem at some positive time $t$.
\vspace{0.5em}
The main contribution of this paper is to show that positive answers to all three questions,
namely the existence of an optimal strategy, duality, and dynamic programming
can be given under weak assumptions, see Theorem \ref{thm:main} and Theorem \ref{thm:main.options}.
Further, it is shown that a scaled version of \eqref{eq:intro.primal} converges to the
minimal superhedging price of $X$ if the risk-aversion parameter $\gamma$ tends to infinity,
see Theorem \ref{thm:limit.superhedg}.
In fact, we adopt the setting suggested by Bouchard and Nutz in
the milestone paper \cite{bouchard2015arbitrage}
and show by means of optimal control, that for any unbounded measurable (lower semianalytic)
random endowment $X$ (regardless of whether the optimization problem \eqref{eq:intro.primal}
is finite or not), existence, duality, and the dynamic programming principle hold true.
Due to the unboundedness of $X$ there are several technical obstacles for dynamic programming.
This is bypassed by showing duality and dynamic programming simultaneously:
In each local (i.e.~the one-period case) problem, trading strategies are vectors in $\mathbb{R}^d$
and finite dimensional arguments may be applied to show duality.
This duality then gives enough regularity for arguments needed to apply dynamic programming.
\vspace{0.5em}
Needless to say, utility maximization is an important topic in mathematical finance
starting with \cite{kramkov1999asymptotic,merton1971optimum}.
In case of exponential utility function
(though in a continuous-time and non-robust setting)
\cite{frittelli2000minimal} and \cite{delbaen2002exponential} were the first to prove
duality and existence, which lead to further analysis, for example
a BSDE characterization of the optimal value and solution in an incomplete
market and under trading constraints is given in \cite{hu2005utility},
and the dynamics and asymptotics in the risk-aversion parameter $\gamma$
are studied in \cite{mania2005dynamic}.
In the presence of uncertainty, starting with \cite{quenez2004optimal} and \cite{schied2004risk},
most results are obtained under the assumption that
$\mathcal{P}$ is dominated, see e.g.~\cite{backhoff2015robust,gundel2005utility,owari2011robust}.
The literature focusing on a non-dominated set $\mathcal{P}$ is still rather limited, and in
continuous-time results are given in \cite{denis2013optimal,matoussi2015robust,neufeld2015robust}.
In the present setting (that is discrete-time and a non-dominated set $\mathcal{P}$),
the dynamic programming principle and the existence of an optimal strategy are first shown in \cite{nutz2014utility},
where the author considers a random utility function $U$ defined on $\Omega\times\mathbb{R}_+$
satisfying a certain boundedness
(which would correspond to a random endowment that is bounded from below in our setting).
More recently, there are two papers generalizing the result of \cite{nutz2014utility}.
In \cite{carassus2016robust} the boundedness of the random utility
(still defined on the positive real line)
is replaced by a certain integrability condition and dynamic programming as well as the
existence of an optimal strategy is shown.
In \cite{neufeld2016robust} the random utility function is no longer defined on the positive real line,
but satisfies certain boundedness similar to \cite{nutz2014utility}.
Moreover the market is more general and includes e.g.~trading constraints or proportional transaction cost.
Duality on the other hand is shown in Section 4.2 of \cite{cheridito2016duality}
under a compactness condition on the set $\mathcal{P}$ and (semi-)continuity of the
random endowment $X$.
\vspace{0.5em}
In order to lighten notation, we will assume without loss of generality that the prices
of the traded options are 0 and, instead of \eqref{eq:intro.primal}, consider the equivalent problem
\[ \inf_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}
\sup_{P\in\mathcal{P}} \log E_P[\exp(X+(\vartheta\cdot S)_T + \alpha g)].\]
It is clear that both problems are in one-to-one correspondence, except that all results for an
endowment $X$ in the original problem hold for $-X$ in the transformed one, and vice versa.
The remainder of this paper is organized as follows:
Section \ref{sec:main} contains the setting, all main results,
a discussion of the assumptions, and some examples.
Sections \ref{sec:one.period} and \ref{sec:multi.period} are devoted to the proofs for the one-period and the general case, respectively.
Finally, technical proofs are given in Appendix \ref{sec:app.proofs} and a brief introduction
to the theory of analytic sets is given in Appendix \ref{sec:app.analytic}.
\section{Main results}
\label{sec:main}
\subsection{Setting}
Up to a minor change regarding the no-arbitrage condition, we work in the setting
proposed by Bouchard and Nutz \cite{bouchard2015arbitrage}, which is briefly summarized below.
Analytic sets and the general terminology are shortly discussed in Appendix \ref{sec:app.analytic}.
Let $\Omega_0$ be a singleton and $\Omega_1$ be a Polish space.
Fix $d,T\in\mathbb{N}$, let $\Omega_t:=\Omega_1^t$, and define
$\mathcal{F}_t$ to be the universal completion of the Borel $\sigma$-field on $\Omega_t$
for each $0\leq t\leq T$.
To simplify notation, we denote $(\Omega,\mathcal{F})=(\Omega_T,\mathcal{F}_T)$
and often consider $\Omega_t$ as a subset of $\Omega$.
For $s<t$, some fixed $\omega\in\Omega_s$, and a function $X$ with domain $\Omega_t$, we consider
$X(\omega,\cdot)$ as a function with domain $\Omega_{t-s}$, i.e.~$\Omega_{t-s}\ni\omega'\mapsto X(\omega,\omega')$.
For each $0\leq t\leq T-1$ and $\omega\in\Omega_t$, there is a given convex and non-empty
set of probabilities $\mathcal{P}_t(\omega)\subset\mathfrak{P}(\Omega_1)$,
which can be seen as all possible probability scenarios
for the price of the stock at time $t+1$, given the history $\omega$.
The assumption throughout is that the stock $S_t\colon\Omega_t\to\mathbb{R}^d$ is Borel
and that the set-valued mapping $\mathcal{P}_t$ has analytic graph.
The latter in particular ensures that
\begin{align}
\label{eq:P.time.consistent}
\mathcal{P}:=\{ P=P_0\otimes\cdots\otimes P_{T-1} : P_t(\cdot)\in\mathcal{P}_t(\cdot) \}
\end{align}
is not empty.
Here each $P_t$ is a selector of $\mathcal{P}_t$, i.e.~a universally measurable function
$P_t\colon\Omega_t\to\mathfrak{P}(\Omega_1)$ satisfying $P_t(\omega)\in\mathcal{P}_t(\omega)$
for each $\omega$, and the probability $P$ on $\Omega$ is defined by
$P(A):=\int_{\Omega_1}\cdots\int_{\Omega_1} 1_A(\omega_1,\dots,\omega_T)
P_{T-1}(\omega_1,\dots,\omega_{T-1},d\omega_T)\cdots P_0(d\omega_1)$.
The set of all dynamic trading strategies is denoted by $\Theta$ and an element $\vartheta\in\Theta$
is a vector $\vartheta=(\vartheta_1,\cdots,\vartheta_T)$ consisting of $\mathcal{F}_{t-1}$-measurable
mappings $\vartheta_t\colon\Omega_{t-1}\to\mathbb{R}^d$.
The outcome at time $t$ of trading according to the dynamic strategy $\vartheta$ starting at time $s\leq t$ is given by
\[ (\vartheta\cdot S)_s^t:=\vartheta_{s+1}\Delta S_{s+1} +\cdots + \vartheta_t\Delta S_t,\quad
\text{where}\quad \Delta S_u:=S_u-S_{u-1} \]
and $\vartheta_u\Delta S_u:=\sum_{i=1}^d \vartheta_u^i\Delta S_u^i$ is the inner product.
We assume that for every $0\leq t\leq T-1$ and $\omega\in\Omega_t$ the no arbitrage condition
NA($\mathcal{P}_t(\omega)$) is satisfied, i.e.~$h\Delta S_{t+1}(\omega,\cdot)\geq 0$
$\mathcal{P}_t(\omega)$-q.s.~implies $h\Delta S_{t+1}(\omega,\cdot)= 0$
$\mathcal{P}_t(\omega)$-q.s.~for every $h\in\mathbb{R}^d$.
Up to polar sets, this is the same as to require the NA($\mathcal{P}$) condition, namely that
$(\vartheta\cdot S)_0^T\geq 0$ $\mathcal{P}$-q.s.~implies
$(\vartheta\cdot S)_0^T= 0$ $\mathcal{P}$-q.s.~for every $\vartheta\in\Theta$,
see \cite[Theorem 4.5]{bouchard2015arbitrage}.
A discussion on why we ask for this (slightly stronger) condition is given in
Remark \ref{rem:main.discussion}. Finally define
\[ \mathcal{M}
=\big\{ Q\in\mathfrak{P}(\Omega):
S \text{ is a martingale under $Q$ and } H(Q,\mathcal{P})<+\infty\big\}, \]
to be the set of martingale measures with finite robust entropy
\[ H(Q,\mathcal{P}):=\inf_{P\in\mathcal{P}} H(Q,P)
\quad\text{where}\quad
H(Q,P):=\begin{cases}
E_P\big[\frac{dQ}{dP}\log\frac{dQ}{dP}\big]&\text{if }Q\ll P,\\
+\infty &\text{else.}
\end{cases} \]
Further, define $E_P[X]:=E_P[X^+]-E_P[X^-]$ with $E_P[X]:=-\infty$ if $E_P[X^-]=+\infty$,
so that $X$ is integrable with respect to $P$ if and only if $E_P[X]\in\mathbb{R}$.
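On a finite sample space the relative entropy and the robust entropy reduce to finite sums and a minimum over the family. The following sketch (hypothetical names, finite family $\mathcal{P}$) merely illustrates the definitions above.

```python
import numpy as np

def rel_entropy(q, p):
    """H(Q, P) = E_P[(dQ/dP) log(dQ/dP)] on a finite space; +infinity
    unless Q << P."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    if np.any((p == 0.0) & (q > 0.0)):
        return np.inf                 # Q not absolutely continuous w.r.t. P
    m = q > 0.0                       # 0 * log 0 = 0 convention
    return float(np.sum(q[m] * np.log(q[m] / p[m])))

def robust_entropy(q, family):
    """H(Q, P_family) = inf_{P in family} H(Q, P) for a finite family."""
    return min(rel_entropy(q, p) for p in family)
```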
\subsection{Main results}
\begin{theorem}[Without options]
\label{thm:main}
Let $X\colon\Omega\to(-\infty,+\infty]$ be upper semianalytic.
Then
\begin{align}
\label{eq:optim.problem}
\inf_{\vartheta\in\Theta}\sup_{P\in\mathcal{P}}
\log E_P\big[\exp\big(X + (\vartheta\cdot S)_0^T\big)\big]
=\sup_{Q\in\mathcal{M}} \big(E_Q[X] -H(Q,\mathcal{P}) \big)
\end{align}
and both terms are not equal to $-\infty$.
Moreover, the infimum over $\vartheta\in\Theta$ is attained and the optimization
problem satisfies the dynamic programming principle,
see Theorem \ref{thm:multiperiod} for the precise formulation of the last statement.
\end{theorem}
In addition to the previous setting, assume that there are $e\in\mathbb{N}\cup\{0\}$ options
($e=0$ corresponding to the case without options),
i.e.~Borel functions $g^1,\dots,g^e\colon\Omega\to\mathbb{R}$,
available at time $t=0$ for price zero.
The outcome of a semi-static trading strategy $(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e$
equals $(\vartheta\cdot S)_0^T+ \alpha g$, where $\alpha g:=\sum_{i=1}^e \alpha_ig^i$ again denotes the inner product.
In addition to the already imposed no arbitrage condition, assume that
$(\vartheta\cdot S)_0^T+ \alpha g\geq 0$ $\mathcal{P}$-q.s.~implies
$(\vartheta\cdot S)_0^T+ \alpha g= 0$ $\mathcal{P}$-q.s.~for every strategy $(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e$.
\begin{theorem}[With options]
\label{thm:main.options}
Fix a Borel function $Z\colon\Omega\to[0,+\infty)$ such that $|g^i|\leq Z$ for $1\leq i\leq e$
and let $X\colon\Omega\to\mathbb{R}$ be an upper semianalytic function satisfying $|X|\leq Z$.
Then it holds
\begin{align*}
\inf_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}\sup_{P\in\mathcal{P}}
\log E_P\big[\exp\big(X + (\vartheta\cdot S)_0^T + \alpha g\big)\big]
=\sup_{Q\in\mathcal{M}_g} \big(E_Q[X] -H(Q,\mathcal{P}) \big),
\end{align*}
where $\mathcal{M}_g$ denotes the set of all $Q\in\mathcal{M}$
with $E_Q[Z]<+\infty$ and $E_Q[g^i]=0$ for $1\leq i\leq e$.
Moreover, the infimum over $(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e$ is attained.
\end{theorem}
\begin{remark}
\label{rem:main.assumptions}
{\rule{0mm}{1mm}\\[-3.25ex]\rule{0mm}{1mm}}
\begin{enumerate}[1)]
\item
The no-arbitrage condition {\rm NA}$(\mathcal{P})$ is essential.
Indeed, even if both sides in \eqref{eq:optim.problem} do not take the value
$-\infty$, the condition {\rm NA}$(\mathcal{P})$ does not need to hold --
nor does equation \eqref{eq:optim.problem}, see Appendix \ref{sec:app.proofs}.
\item
In general, due to the supremum over $P\in\mathcal{P}$,
the minimizer $\vartheta$ in \eqref{eq:optim.problem} is not unique
and the supremum over $Q$ is not attained.
\item In Theorem \ref{thm:main} the set $\mathcal{M}$ can be replaced by
$\mathcal{M}(Y):=\{Q\in\mathcal{M} : E_Q[Y]<+\infty\}$, where $Y\colon\Omega\to[0,+\infty)$
is an arbitrary function such that $-Y$ is upper semianalytic.
The same holds true for Theorem \ref{thm:main.options}, i.e.~one can replace
$\mathcal{M}_g$ by $\mathcal{M}_g(Y):=\{Q\in\mathcal{M}_g : E_Q[Y]<+\infty\}$.
\end{enumerate}
\end{remark}
Another interesting problem is the study of the asymptotic behavior of the optimization
problem in the risk-aversion parameter $\gamma$, see e.g.~\cite{mania2005dynamic}.
Let us give some motivation: Typically the superhedging price
\[\pi(X):=\inf\{ m\in\mathbb{R} : m+(\vartheta\cdot S)_0^T +\alpha g \geq X\,
\mathcal{P}\text{-q.s.~for some } (\vartheta,\alpha)\in\Theta\times\mathbb{R}^e \}\]
is extremely high.
A natural way of shrinking $\pi$ is to allow $m+(\vartheta\cdot S)_0^T +\alpha g<X$ with positive
probability in a ``controlled'' way, see e.g.~\cite{cheridito2016duality,follmer1999quantile}.
More precisely, define
\[ \pi_\gamma(X):=\inf\Big\{ m \in\mathbb{R}:
\begin{array}{l}
\sup_{P\in\mathcal{P}}\frac{1}{\gamma}\log
E_P[\exp(\gamma(X-m-(\vartheta\cdot S)_0^T-\alpha g))]\leq0\\
\text{for some } (\vartheta,\alpha)\in\Theta\times\mathbb{R}^e
\end{array}\Big\} \]
for each risk-aversion parameter $\gamma>0$.
Then $\pi_\gamma(X)\leq\pi(X)$ by definition and since $\exp(\gamma x)/\gamma\to+\infty\cdot 1_{(0,+\infty]}(x)$
as $\gamma\to+\infty$, an evident question is whether $\pi_\gamma(X)$ converges to the superhedging price $\pi(X)$.
\begin{theorem}[Entropic hedging]
\label{thm:limit.superhedg}
In the setting of Theorem \ref{thm:main.options} it holds
\[ \pi(X)=\lim_{\gamma\to+\infty}\pi_\gamma(X) \]
and the limit in $\gamma$ is a supremum over $\gamma>0$.
\end{theorem}
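As a sanity check of Theorem \ref{thm:limit.superhedg}, one can evaluate $\pi_\gamma$ numerically in a one-period binomial market with a singleton $\mathcal{P}=\{P\}$ and no options, where the superhedging price equals $E_Q[X]$ under the unique martingale measure. The sketch below (our own illustration; names and the brute-force grid over $h$ are assumptions) exhibits the monotone convergence $\pi_\gamma(X)\uparrow\pi(X)$.

```python
import numpy as np

def pi_gamma(gamma, p, a, b, xu, xd, h_grid=np.linspace(-5.0, 5.0, 200001)):
    """(1/gamma) inf_h log E_P[exp(gamma (X - h dS))] in a one-period
    binomial model: dS = b with prob. p and dS = a with prob. 1 - p
    (a < 0 < b); X equals xu in the up- and xd in the down-state."""
    vals = np.logaddexp(np.log(p) + gamma * (xu - h_grid * b),
                        np.log(1.0 - p) + gamma * (xd - h_grid * a))
    return vals.min() / gamma
```

For a singleton $\mathcal{P}$ one finds $\pi_\gamma(X)=E_Q[X]-H(Q,P)/\gamma$ with $Q$ the martingale measure ($q=-a/(b-a)$), increasing to $\pi(X)=E_Q[X]$ as $\gamma\to+\infty$.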
\begin{remark}
\label{rem:main.discussion}
{\rule{0mm}{1mm}\\[-3.25ex]\rule{0mm}{1mm}}
\begin{enumerate}[1)]
\item
Besides using dynamic programming as a tool to transform an infinite dimensional problem
into a family of finite dimensional ones, the technique used in this paper differs from
\cite{carassus2016robust,neufeld2016robust,nutz2014utility} in that duality and dynamic programming
are shown simultaneously and strongly rely on each other.
For instance, without duality it is -- due to the unboundedness of $X$ -- not clear whether the
local version of the optimization problem (see \eqref{eq:local.optimization} for the precise definition)
satisfies the crucial measurability needed for dynamic programming.
Similarly, without assumptions on compactness of $\mathcal{P}$ and continuity of $X$,
it is not clear whether duality holds in the multi-period case.
This is only possible since one may analyze the problem locally (i.e.~in a one-period model)
and rely on finite-dimensional techniques.
The drawback is that {\rm NA}$(\mathcal{P}_t(\omega))$ has to hold for every
$t$ and $\omega\in\Omega_t$ (and not only for $\mathcal{P}$-quasi every $\omega$ as usual),
which, however, does not seem to be restrictive regarding applications, see Section \ref{sec:examples}.
Note that in case of $\mathcal{P}$ being a singleton, it is always possible to choose
$\mathcal{P}_t$ such that {\rm NA}$(\mathcal{P}_t(\omega))$ holds for every $t$ and
$\omega\in\Omega_t$, see Appendix \ref{sec:app.proofs}.
\item
Additionally to point 1), duality in the multi-period setting also provides the non-trivial
regularity of the optimization problem in the endowment $X$.
This e.g.~allows for accessible proofs of Theorem \ref{thm:main.options} and
Theorem \ref{thm:limit.superhedg}.
\item
The (technical) assumption that the graph of $\mathcal{P}_t$ is an analytic set has two consequences:
It allows for measurable selection arguments and enables to define pointwise
conditional sublinear expectations, i.e.~ensure that
\begin{align}
\label{eq:sublin.expectation}
\mathcal{E}_t(X)(\omega):=\sup_{P\in\mathcal{P}_t(\omega)} E_P[X(\omega,\cdot)]
\end{align}
is upper semianalytic as a mapping of $\omega\in\Omega_t$
whenever $X\colon\Omega_{t+1}\to\mathbb{R}$ is,
see \cite{bouchard2015arbitrage,nutz2013constructing}.
The converse holds true as well: Given an arbitrary sublinear conditional expectation
$\mathcal{E}_t$, there always exists (under some assumptions) a set-valued mapping $\mathcal{P}_t$
with analytic graph such that \eqref{eq:sublin.expectation} holds true,
see \cite[Corollary 2.5]{bartl2016pointwise} for the precise statement.
In a similar manner, the ``time-consistency'' \eqref{eq:P.time.consistent} of $\mathcal{P}$ is
equivalent to the tower-property of the sublinear expectations, see \cite[Corollary 2.10]{bartl2016pointwise}.
\end{enumerate}
\end{remark}
\subsection{Examples}
\label{sec:examples}
In this section we discuss a general method of robustifying a given probabilistic
model and also give applications to financial models.
All nontrivial claims are proven at the beginning of Appendix \ref{sec:app.proofs}.
In many cases, the physical measure is not known a priori, but rather a result of collecting data
and estimation. In particular the estimator is not equal to, but only ``converges'' (as the data grow richer)
to the actual unknown physical measure. A canonical way of taking this into account therefore consists of adding
some sort of ``neighborhood'' to the estimator $P^\ast=P^\ast_0\otimes\cdots\otimes P_{T-1}^\ast$,
i.e.~to define
\begin{align}
\label{eq:P.robust.general}
\mathcal{P}_t(\omega):=\{ P\in\mathfrak{P}(\Omega_1) :
\mathop{\mathrm{dist}}(P,P_t^\ast(\omega))\leq\varepsilon_t(\omega) \}.
\end{align}
Here, as the name suggests,
\[\mathop{\mathrm{dist}}\colon\mathfrak{P}(\Omega_1)\times\mathfrak{P}(\Omega_1)\to[0,+\infty] \]
can be thought of as a distance and $\varepsilon_t\colon\Omega_t\to[0,+\infty]$
as the size of the neighborhood.
If $\mathop{\mathrm{dist}}$, $\varepsilon_t$ (and $P^\ast_t$) are Borel
-- from now on a standing assumption -- then $\mathcal{P}_t$
has analytic graph.
If $\mathop{\mathrm{dist}}$ is in fact a metric
or at least fulfills $\mathop{\mathrm{dist}}(P,P)=0$, the values of $\mathcal{P}_t$ are also non-empty.
Since the distance should be compatible with estimation,
natural choices include the Wasserstein distances of order $p$ or, more generally,
the cost of transportation, i.e.~
\[ \mathop{\mathrm{dist}}(Q,P):=\inf\Big\{ \int_{\Omega_1\times\Omega_1} c(x,y)\Pi(dx,dy) : \Pi \Big\} \]
where the infimum is taken over all measures on the product $\Pi\in\mathfrak{P}(\Omega_1\times\Omega_1)$
with $\Pi(\cdot\times\Omega_1)=Q$ and $\Pi(\Omega_1\times\cdot)=P$, and
$c\colon\Omega_1\times\Omega_1\to[0,+\infty]$ is a given lower semicontinuous function (the ``cost'').
This includes the Wasserstein distance of order $p$; then the cost $c$ equals a metric
on $\Omega_1$ to the power $p$, see e.g.~Chapter 5 and 6 in \cite{villani2008optimal}.
This tractable distance has many advantages: besides metrizing weak convergence,
it controls the integrability of the tails. In this case $\mathcal{P}_t$ has convex values.
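On the real line and for empirical measures with the same number of atoms, the Wasserstein distance of order $1$ has the well-known closed form obtained by matching sorted samples, which makes neighborhoods of the type \eqref{eq:P.robust.general} easy to evaluate in practice. The sketch below is a hypothetical illustration; the function names are our own.

```python
import numpy as np

def wasserstein_1(xs, ys):
    """W_1 between two empirical measures on R with equally many atoms:
    in one dimension the optimal coupling matches the sorted samples."""
    xs, ys = np.sort(np.asarray(xs, float)), np.sort(np.asarray(ys, float))
    assert xs.shape == ys.shape
    return float(np.mean(np.abs(xs - ys)))

def in_neighborhood(sample, reference, eps):
    """Membership in the ball dist(P, P*) <= eps around the estimator."""
    return wasserstein_1(sample, reference) <= eps
```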
The above method can also be applied when a certain model for the dynamics of the underlying is fixed
and only the parameters are uncertain. For simplicity assume
that $\Omega=\mathbb{R}^T$ is the canonical space of a one-dimensional stock
and that $S_t(\omega)=\omega_t$ is the canonical process.
We illustrate this with two concrete examples: the Binomial model, which reads as
\begin{align}
\label{eq:dyn.S.binomal}
S_{t+1}(\omega,\cdot)=S_t(\omega) +B(\cdot)
\end{align}
for every $t$ and $\omega\in\Omega_t$ where $B\colon\Omega_1\to\mathbb{R}$ is binomially distributed,
and a discrete version of the Black-Scholes model, which reads as
\begin{align}
\label{eq:dyn.S.black.scholes}
S_{t+1}(\omega,\cdot)=S_t(\omega)\big(\mu \Delta t + \sigma\Delta W(\cdot)\big)
\end{align}
where $\mu\in\mathbb{R}$, $\sigma,\Delta t>0$, and $\Delta W\colon\Omega_1\to\mathbb{R}$ is normally distributed
with mean 0 and variance $\Delta t$; we write $\Delta W\sim N(0,\Delta t)$.
Defining $f_t(\omega,x):=S_t(\omega)+x$, $X:=B$ in case of the Binomial, and
$f_t(\omega,x):=S_t(\omega)x$, $X:=\mu\Delta t+ \sigma \Delta W$ in case of the Black-Scholes model,
it follows that both can be written in the more general form
\begin{align}
\label{eq:dyn.S.general}
S_{t+1}(\omega,\cdot)=f_t(\omega, X(\cdot)),
\end{align}
where $f_t\colon\Omega_t\times\mathbb{R}\to\mathbb{R}$ and $X\colon\Omega_1\to\mathbb{R}$ are Borel.
In terms of distributions, \eqref{eq:dyn.S.general} means nothing but
\[ \mathop{\mathrm{law}} S_{t+1}(\omega,\cdot) = R \circ f_t(\omega,\cdot)^{-1},
\quad \text{where } R:=\mathop{\mathrm{law}} X. \]
Therefore a canonical way of robustifying a given model of the form
\eqref{eq:dyn.S.general} is to replace $R$ in the equation above by a set
$\mathcal{R}_t(\omega)\subset\mathfrak{P}(\Omega_1)$, and to define
\[\hat{\mathcal{P}}_t(\omega)
:=\{R \circ f_t(\omega,\cdot)^{-1}:
R\in\mathcal{R}_t(\omega)\}.\]
For example, in line with the first part of this section, one can take some neighborhood
\begin{align}
\label{eq:Rt.dist}
\mathcal{R}_t(\omega)=\{ R\in\mathfrak{P}(\Omega_1) :
\mathop{\mathrm{dist}}(R,\mathop{\mathrm{law}} X)\leq\varepsilon_t(\omega) \},
\end{align}
or, if there are even fewer data, one might argue that
\begin{align}
\label{eq:Rt.phitn}
\mathcal{R}_t(\omega):=\{ R\in\mathfrak{P}(\Omega_1) : E_R[\phi_t^i(\omega, \cdot)]\leq 1
\text{ for } 1\leq i\leq n \}
\end{align}
for some given Borel functions $\phi_t^1,\dots,\phi_t^n\colon \Omega_t\times\mathbb{R}^d\to\mathbb{R}$
is a good choice.
Here, if $\inf_{x} \phi_t^i(\omega,x)\leq 0$ for all $i$ and
$f_t$ in \eqref{eq:dyn.S.general} is such that
$S_t(\omega)$ lies in the relative interior of $f_t(\omega,\mathbb{R}^d)$
for every $\omega\in\Omega_t$ -- an assumption which is usually fulfilled --
then the resulting model of \eqref{eq:Rt.phitn} satisfies NA$(\mathcal{P}_t(\omega))$ for every $t$ and $\omega$.
The same holds true for $\mathcal{R}_t$ defined by \eqref{eq:Rt.dist} under the mentioned
assumption on $f_t$ if e.g.~$\mathop{\mathrm{dist}}$ is the Wasserstein distance of order $p$
and $X$ has a finite $p$-th moment.
On a technical level, $\mathcal{R}_t$ defined by \eqref{eq:Rt.phitn} has analytic graph
and so do $\hat{\mathcal{P}}_t$ and $\mathcal{P}_t$, the latter being defined as
$\mathcal{P}_t(\omega):=\mathop{\mathrm{conv}} \hat{\mathcal{P}}_t(\omega)$,
the convex hull of $\hat{\mathcal{P}}_t$.
The same holds true for $\mathcal{R}_t$ defined by \eqref{eq:Rt.dist}.
\begin{example}[Binomial model]
Besides what was mentioned above, another natural generalization of the Binomial model
is to allow for the jump size and probability
to take values in some intervals (which may depend on the time $t$ and past $\omega\in\Omega_t$).
This corresponds to
\[ \mathcal{R}_t(\omega):=\big\{ p\delta_a +(1-p)\delta_b : p\in[\underline{p}_t(\omega),\overline{p}_t(\omega)],
a\in[\underline{a}_t(\omega), \overline{a}_t(\omega)], b\in[\underline{b}_t(\omega),\overline{b}_t(\omega)]
\big\}\]
where $0<\underline{p}_t\leq \overline{p}_t<1$,
$\underline{a}_t\leq \overline{a}_t<0<\underline{b}_t\leq \overline{b}_t$ are Borel functions.
Here $\delta_a$ denotes the Dirac measure at point $a$.
Note that {\rm NA}$(\mathcal{P}_t(\omega))$ is trivially satisfied
for every $t$ and $\omega$.
\end{example}
Regarding the Black-Scholes model in continuous time, there is a popular and well-studied
way of robustification, see e.g.~\cite{peng2007g}:
Consider all models \eqref{eq:dyn.S.black.scholes} with $\mu$ and volatility $\sigma$ in some given intervals.
This can be done as in the previous example, however, then each $\mathcal{P}_t(\omega)$ and therefore
also the resulting family $\mathcal{P}$ is dominated (by the Lebesgue measure).
In the present discrete-time setting, it seems more interesting to discard the assumption of
normality of $\Delta W$ in \eqref{eq:dyn.S.black.scholes}.
\begin{example}[Black-Scholes]
Fix two Borel functions
$\mu_t\colon\Omega_1\to\mathbb{R}$ and $\sigma_t\colon\Omega_1\to(0,+\infty)$,
and let $\varepsilon_t$ and $\mathop{\mathrm{dist}}$ be as above.
Now define
\[ \mathcal{R}_t(\omega):=\big\{ R\ast \delta_{\mu\Delta t} :
\mu\in[\underline{\mu}_t(\omega),\overline{\mu}_t(\omega)]
\text{ and } \mathop{\mathrm{dist}}(R,N(0,\sigma_t^2(\omega)\Delta t))\leq\varepsilon_t(\omega) \big\},\]
where $R\ast \delta_{\mu\Delta t}$ denotes the convolution $R\ast \delta_{\mu\Delta t}(A):=R(A-\mu\Delta t)$.
The set $\hat{\mathcal{P}}_t$ therefore corresponds to the Black-Scholes model with drift and
volatility uncertainty in the sense that one considers all models
\[ S_{t+1}(\omega,\cdot)=S_t(\omega)\big(\mu \Delta t + Y\big),
\quad
\begin{array}{l}
\text{$\mu\in[\underline{\mu}_t(\omega),\overline{\mu}_t(\omega)]$ and the law of}\\
\text{$Y$ is $\varepsilon_t(\omega)$ close to $N(0,\sigma_t^2(\omega)\Delta t)$}
\end{array}\]
simultaneously.
To be more in line with the original model, one can also require that $R$ (resp.~$Y$) has mean 0
in the definition of $\mathcal{R}_t$.
Note that for any reasonable choice for the distance (e.g.~Wasserstein),
the set $\mathcal{P}_t(\omega)$ satisfies all of our assumptions.
\end{example}
\section{Proof for the one-period setting}
\label{sec:one.period}
Let $(\Omega,\mathcal{F})$ be a measurable space equipped with a family of probability
measures $\mathcal{P}\subset\mathfrak{P}(\Omega)$.
Further let $S_0\in\mathbb{R}^d$ and $S_1\colon\Omega\to\mathbb{R}^d$ be measurable.
We write $h\in\Theta=\mathbb{R}^d$ for trading strategies and assume the no-arbitrage
condition NA$(\mathcal{P})$, i.e.~$h\Delta S\geq 0$ $\mathcal{P}$-q.s.~implies
$h\Delta S= 0$ $\mathcal{P}$-q.s.~for every $h\in\mathbb{R}^d$.
Given some random variable $Z\colon\Omega\to[0,+\infty)$, denote by
\[\mathcal{M}(Z)
=\{ Q\in\mathfrak{P}(\Omega) : E_Q[|\Delta S|+Z]+H(Q,\mathcal{P})<+\infty
\text{ and } E_Q[\Delta S]=0\}\]
the set of martingale measures that have finite entropy and integrate $Z$.
The following is the main result of this section.
\begin{theorem}
\label{thm:1peroiod}
Fix a random variable $X\colon\Omega\to(-\infty,+\infty]$.
Then one has
\begin{align}
\label{eq:problem.1period}
\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P\big[\exp(X+h\Delta S)\big]
=\sup_{Q\in\mathcal{M}(Y)} \big( E_Q[X]-H(Q,\mathcal{P}) \big)
\end{align}
for every random variable $Y\colon\Omega\to[0,+\infty)$
and both terms are not equal to $-\infty$.
Moreover, the infimum over $h\in\mathbb{R}^d$ is attained.
\end{theorem}
The following lemma, which turns out to be helpful in the multi-period case,
is shown in the course of the proof of Theorem \ref{thm:1peroiod}.
\begin{lemma}
\label{lem:1period.cont.below}
Let $X_n\colon\Omega\to(-\infty,+\infty]$ be a sequence of random variables
increasing point-wise to $X$. Then it holds
\[\sup_n \inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P\big[\exp(X_n+h\Delta S)\big]
=\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P\big[\exp(X+h\Delta S)\big],\]
i.e.~the optimization problem is continuous from below.
\end{lemma}
\begin{lemma}
\label{lem:rep.exponential}
Fix a random variable $X\colon\Omega\to\mathbb{R}$.
Then one has
\[\sup_{P\in\mathcal{P}}\log E_P[\exp(X+h\Delta S)]
=\sup_{Q\in\mathcal{C}}\big(E_Q[X+h\Delta S] - H(Q,\mathcal{P}) \big)\]
for every $h\in\mathbb{R}^d$, where
\[ \mathcal{C}:=\{Q\in\mathfrak{P}(\Omega) :
E_Q[|X|+|\Delta S|+Y]+H(Q,\mathcal{P})<+\infty\} \]
and $Y\colon\Omega\to[0,+\infty)$ is an arbitrary random variable.
\end{lemma}
\begin{proof}
(a)
Define $Z:=X+h\Delta S$ and fix a measure $P\in\mathcal{P}$.
It follows from the well known representation of expected exponential
utility and the monotone convergence theorem that
\begin{align}
\label{eq:rep.exponential.set.smaller}
\log E_P[\exp(Z)]
=\sup_{Q\in\mathcal{A}_P}(E_Q[Z]-H(Q,P)),
\end{align}
where
\[\mathcal{A}_P:=\{Q\in\mathfrak{P}(\Omega) : E_Q[Z^-] + H(Q,P)<+\infty\}.\]
For the sake of completeness, a proof is provided in Lemma \ref{lem:rep.exponential.dominated}.
We claim that one can replace $\mathcal{A}_P$ with $\mathcal{C}_P$
in \eqref{eq:rep.exponential.set.smaller}
without changing the value of the supremum, where
\[ \mathcal{C}_P:=\{ Q\in\mathfrak{P}(\Omega): E_Q[|X|+|\Delta S|+Y]+H(Q,P)<+\infty\}. \]
Since $\mathcal{C}_P$ is a subset of $\mathcal{A}_P$, it suffices
to show that for any $Q\in\mathcal{A}_P$, there exists a sequence
$Q_n\in\mathcal{C}_P$ such that $E_{Q_n}[Z]-H(Q_n,P)$ converges to $E_Q[Z]-H(Q,P)$.
To that end, fix some $Q\in\mathcal{A}_P$ and define
\[ Q_n:=Q(\,\cdot\,|B_n) \quad\text{where}\quad
B_n:=\{ |X|+|\Delta S| + Y\leq n \}\]
for all $n$ large enough such that $Q(B_n)>0$.
Then it holds
\[ \frac{dQ_n}{dP}=\frac{1_{B_n}}{Q(B_n)}\frac{dQ}{dP}\]
and since $B_n\uparrow\Omega$, a straightforward computation shows that
\[H(Q_n,P)
=E_P\Big[\frac{1_{B_n}}{Q(B_n)}\frac{dQ}{dP}\log\frac{dQ}{dP}\Big] -\log Q(B_n)
\to H(Q,P).\]
In particular $H(Q_n,P)<+\infty$ and since
$X$, $\Delta S$, and $Y$ are integrable with respect to
$Q_n$, it follows that $Q_n\in\mathcal{C}_P$.
Further, the integrability of $Z^-$ with respect to $Q$ guarantees
the convergence of $E_{Q_n}[Z]$ to $E_Q[Z]$ and therefore
\[E_Q[Z]-H(Q,P)
=\lim_n (E_{Q_n}[Z]-H(Q_n,P))
\leq\sup_{Q\in\mathcal{C}_P}(E_Q[Z]-H(Q,P)).\]
Taking the supremum over all $Q\in\mathcal{A}_P$ yields the claim.
(b)
To conclude the proof, observe that
$\mathcal{C}$ equals the union of the sets $\mathcal{C}_P$, where $P$ runs through $\mathcal{P}$.
This implies that
\[\sup_{P\in\mathcal{P}} \log E_P[\exp(Z)]
=\sup_{P\in\mathcal{P}} \sup_{Q\in\mathcal{C}_P}(E_Q[Z]-H(Q,P))
=\sup_{Q\in\mathcal{C}}(E_Q[Z]-H(Q,\mathcal{P})),\]
where the first equality follows from step (a).
\end{proof}
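For the reader's convenience, the computation behind the convergence $H(Q_n,P)\to H(Q,P)$ used in step (a) can be sketched as follows; this is merely an expanded version of the argument above, with $B_n=\{|X|+|\Delta S|+Y\leq n\}$ as in the proof.

```latex
% Using dQ_n/dP = (1_{B_n}/Q(B_n)) dQ/dP and E_P[1_{B_n} dQ/dP] = Q(B_n):
\begin{align*}
H(Q_n,P)
&=E_P\Big[\frac{dQ_n}{dP}\log\frac{dQ_n}{dP}\Big]
 =E_P\Big[\frac{1_{B_n}}{Q(B_n)}\frac{dQ}{dP}
   \Big(\log\frac{dQ}{dP}-\log Q(B_n)\Big)\Big]\\
&=E_P\Big[\frac{1_{B_n}}{Q(B_n)}\frac{dQ}{dP}\log\frac{dQ}{dP}\Big]
  -\log Q(B_n).
\end{align*}
% Since B_n increases to the whole space and Q(B_n) -> 1, the negative part
% of (dQ/dP)log(dQ/dP) is bounded below by -1/e and hence dominated, while
% the positive part converges monotonically; the first term thus tends to
% H(Q,P) and the second to 0.
```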
\begin{lemma}
\label{lem:convex.H.C}
The relative entropy $H$ is jointly convex.
Moreover, the function $H(\cdot,\mathcal{P})$
and the set $\mathcal{C}$ defined in Lemma \ref{lem:rep.exponential} are convex.
\end{lemma}
\begin{proof}
It follows from \cite[Lemma 3.29]{follmer2011stochastic} that
\[ H(Q,P)=\sup\{ E_Q[Z] -\log E_P[\exp(Z)] : Z\text{ is a bounded random variable} \}. \]
For any such $Z$, the function $(Q,P)\mapsto E_Q[Z] -\log E_P[\exp(Z)]$ is convex.
Thus $H$, as the supremum over convex functions, is itself convex.
Furthermore, the convexity of $\mathcal{P}$ yields that
$H(\cdot,\mathcal{P})$ and $\mathcal{C}$ are convex.
\end{proof}
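The convexity of the function inside the supremum can also be checked by hand; the following sketch fixes a bounded random variable $Z$ and uses only the concavity of the logarithm.

```latex
% For lambda in [0,1], Q := lambda Q_1 + (1-lambda) Q_2 and
% P := lambda P_1 + (1-lambda) P_2, one has
\begin{align*}
E_Q[Z]-\log E_P[\exp(Z)]
&=\lambda E_{Q_1}[Z]+(1-\lambda)E_{Q_2}[Z]\\
&\quad-\log\big(\lambda E_{P_1}[\exp(Z)]+(1-\lambda)E_{P_2}[\exp(Z)]\big)\\
&\leq \lambda\big(E_{Q_1}[Z]-\log E_{P_1}[\exp(Z)]\big)\\
&\quad+(1-\lambda)\big(E_{Q_2}[Z]-\log E_{P_2}[\exp(Z)]\big),
\end{align*}
% where the inequality follows from concavity of the logarithm applied
% to the mixture lambda E_{P_1}[exp(Z)] + (1-lambda) E_{P_2}[exp(Z)].
```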
In the proof of Theorem \ref{thm:1peroiod} it will be important that
$0\in\mathop{\mathrm{ri}}\{E_Q[\Delta S] :Q\in\mathcal{C}\}$
where $\mathcal{C}$ was defined in Lemma \ref{lem:rep.exponential} and $\mathop{\mathrm{ri}}$ denotes
the relative interior.
To see why this holds, assume for simplicity that $d=1$ and that $\Delta S$
is not $\mathcal{P}$-quasi surely equal to 0.
Then, by the no-arbitrage condition, there exist two measures
$P^\pm$ such that $P^\pm(\pm\Delta S>0)>0$.
Now define
\[Q_\lambda:=\lambda P^+(\,\cdot\,|\,0< \Delta S,|X|,Y <n)+(1-\lambda) P^-(\,\cdot\,|-n<\Delta S,-|X|,-Y<0)\]
for $n$ large enough and every $\lambda\in[0,1]$.
Then $X$, $\Delta S$, and $Y$ are integrable with respect to $Q_\lambda$ and since
$E_{Q_0}[\Delta S]<0$, $E_{Q_1}[\Delta S]>0$ it follows that
$0\in\mathop{\mathrm{int}} \{ E_{Q_\lambda}[\Delta S] : \lambda\in [0,1]\}$.
As the density of $Q_\lambda$ with respect to $(P^++P^-)/2\in\mathcal{P}$ is bounded,
it holds $H(Q_\lambda,\mathcal{P})<+\infty$ and thus $Q_\lambda\in\mathcal{C}$.
\begin{lemma}[\text{\cite[Lemma 3.3]{bouchard2015arbitrage}}]
\label{lem:fundamental.lem}
Let $X,Y\colon\Omega\to\mathbb{R}$ be random variables and assume that $Y$ is non-negative.
Then one has $0\in\mathop{\mathrm{ri}} \{ E_Q[\Delta S] : Q\in\mathcal{C}\}$
where $\mathcal{C}$ was defined in Lemma \ref{lem:rep.exponential}.
\end{lemma}
\begin{proof}
Even though \cite[Lemma 3.3]{bouchard2015arbitrage} states that
$0\in\mathop{\mathrm{ri}} \{ E_Q[\Delta S] : Q\in\Theta\}$
for the set
$\Theta=\{Q: E_Q[|X|+|\Delta S|+Y]<+\infty\text{ and } Q\ll P\text{ for some }P\in\mathcal{P}\}$,
the constructed measures $Q$ have bounded densities $dQ/dP$ with respect to some $P\in\mathcal{P}$,
in particular $H(Q,\mathcal{P})$ is finite.
The proof can be copied word for word.
\end{proof}
Before turning to the proof of the main theorem, one last observation on the
decomposition of $\mathbb{R}^d$ into relevant and irrelevant strategies $h$ needs to be made.
Denote by $\mathop{\mathrm{supp}}_\mathcal{P}\Delta S$ the smallest closed subset of $\mathbb{R}^d$
such that $\Delta S(\omega)\in\mathop{\mathrm{supp}}_\mathcal{P}\Delta S$ for $\mathcal{P}$-quasi every $\omega$,
see \cite[Lemma 4.2]{bouchard2015arbitrage}.
Further write $\mathop{\mathrm{lin}}A$ for the smallest linear space which contains
a given set $A\subset\mathbb{R}^d$,
and $L^\perp:=\{h\in\mathbb{R}^d : hl=0\text{ for all }l\in L\}$ for the orthogonal
complement of a linear space $L\subset\mathbb{R}^d$.
\begin{lemma}[\text{\cite[Lemma 2.6]{nutz2014utility}}]
\label{lem:char.space.L}
Define $L:=\mathop{\mathrm{lin}}\mathop{\mathrm{supp}}_\mathcal{P}\Delta S$.
Then one has $h\in L^\perp$ if and only if $h\Delta S=0$ $\mathcal{P}$-quasi surely.
\end{lemma}
\begin{proof}[{\bf Proof of Theorem \ref{thm:1peroiod} and Lemma \ref{lem:1period.cont.below}}]
In step (a) duality is shown under the assumption that $X$ is bounded from above.
The existence of an optimizer $h\in\mathbb{R}^d$
as well as continuity from below are proven simultaneously in step (b).
Finally, the results from (a) and (b) are combined to extend
to unbounded random endowment $X$ in step (c).
(a)
Throughout this step assume that $X$ is bounded from above,
meaning that there exists some constant $k$ such that $X(\omega)\leq k$ for every $\omega$.
The goal is to show the following dual representation
\begin{align}
\label{eq:1peroid.dual.Xintegr}
\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X+h\Delta S)]
=\sup_{Q\in\mathcal{M}(|X|+Y)} (E_Q[X]-H(Q,\mathcal{P})).
\end{align}
By Lemma \ref{lem:rep.exponential} it holds
\begin{align*}
\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X+h\Delta S)]
=\inf_{h\in\mathbb{R}^d}\sup_{Q\in\mathcal{C}}\big( E_Q[X+h\Delta S]-H(Q,\mathcal{P}) \big)
\end{align*}
where
\[ \mathcal{C}:=\{Q\in\mathfrak{P}(\Omega) : E_Q[|X|+|\Delta S|+Y]+H(Q,\mathcal{P})<+\infty \}.\]
Thus, if interchanging the infimum over $h\in\mathbb{R}^d$
and the supremum over $Q\in\mathcal{C}$ were possible,
\eqref{eq:1peroid.dual.Xintegr} would follow
since $\inf_{h\in\mathbb{R}^d}E_Q[h\Delta S]=-\infty$ whenever $Q$ is not a martingale measure.
In what follows we argue why one can in fact interchange the infimum and the supremum.
Define
\[\Gamma:=\mathop{\mathrm{lin}}\{ E_Q[\Delta S] :Q\in\mathcal{C} \}\]
and notice that if $\Gamma=\{0\}$, then $\mathcal{C}=\mathcal{M}(|X|+Y)$
and $E_Q[h\Delta S]=0$ for all $h\in\mathbb{R}^d$ and $Q\in\mathcal{C}$ so that there is nothing to prove.
Therefore assume in the sequel that $\Gamma\neq\{0\}$ and let
\[\{e_1,\dots,e_r\} \text{ be an orthonormal basis of } \Gamma.\]
Further, to simplify notation, define the function $J\colon \mathcal{C}\times\mathbb{R}^d\to\mathbb{R}$,
\[ J(Q,h):=hE_Q[\Delta S]+ E_Q[X]-H(Q,\mathcal{P}). \]
By Lemma \ref{lem:convex.H.C} the set $\mathcal{C}$ and the function
$H(\cdot,\mathcal{P})$ are convex, which shows that $J(\cdot,h)$ is concave for all $h\in\mathbb{R}^d$.
Further, $J(Q,\cdot)$ is convex for all $Q\in\mathcal{C}$.
Therefore, \cite[Theorem 4.1]{sion1958general} gives a sufficient condition for
\[ \inf_{h\in\mathbb{R}^d}\sup_{Q\in\mathcal{C}} J(Q,h)
=\sup_{Q\in\mathcal{C}}\inf_{h\in\mathbb{R}^d} J(Q,h) \]
to hold true, namely that
\begin{align}
\label{eq:supinfcondition}
\begin{cases}
\text{for every }
c<\inf_{h\in\mathbb{R}^d}\sup_{Q\in\mathcal{C}} J(Q,h)
\text{ one can find a finite set }F\subset \mathcal{C}\\
\text{such that for every }
h\in\mathbb{R}^d \text{ there exists } Q\in F \text{ satisfying } J(Q,h)>c.
\end{cases}
\end{align}
To prove \eqref{eq:supinfcondition}, fix such $c$ and notice that
\[\{h\in\mathbb{R}^d: J(Q,h)>c\}
=\{h\in \Gamma : J(Q,h)>c\}+ \Gamma^\perp\]
since $hE_Q[\Delta S]=0$ for every $h\in\Gamma^\perp$ and $Q\in\mathcal{C}$.
Therefore we can assume without loss of generality that $h\in\Gamma$ in the sequel.
In fact, we shall distinguish between elements in $\Gamma$ with large and small (Euclidean) length.
From Lemma \ref{lem:fundamental.lem} it follows that
\[ 0\in\mathop{\mathrm{ri}}\{ E_Q[\Delta S] : Q\in\mathcal{C}\} \]
which implies that there exist $a^\pm_i>0$ and $Q_i^\pm\in\mathcal{C}$ satisfying
\[E_{Q_i^\pm}[\Delta S]=\pm a^\pm_ie_i \quad\text{for } 1\leq i\leq r.\]
We claim that
\begin{align}
\label{eq:for.existence.minimizer}
\begin{cases}
\max\{J(Q_i^\pm,h) : 1\leq i\leq r\}>c+1>c\\
\text{for all } h\in\Gamma\text{ such that }
|h|>m\sqrt{r}/\delta
\end{cases}
\end{align}
where
\[ m:=\max\big\{c+1 - E_{Q^\pm_i}[X] + H(Q^\pm_i,\mathcal{P}) : 1\leq i\leq r\big\}\in\mathbb{R}\]
and
\[ \delta:=\min\{ a^\pm_i : 1\leq i\leq r \}>0.\]
Indeed, since $\sum_{i=1}^r (he_i)^2=|h|^2> r(m/\delta)^2$, it follows that
$|he_j|> m/\delta$ for some $1\leq j\leq r$. If $he_j>m/\delta$ it holds
\[hE_{Q^+_j}[\Delta S]
=h a^+_je_j
>\frac{m a_j^+}{\delta}
\geq m
\geq c+1- E_{Q^+_j}[X] +H(Q^+_j,\mathcal{P})\]
and a rearrangement of the appearing terms yields $J(Q_j^+,h)>c+1$.
If $he_j<-m/\delta$, the same argument shows that $J(Q_j^-,h)>c+1$.
Further, as
\[ J(Q,\cdot) \text{ is continuous and } c<\inf_{h\in\Gamma}\sup_{Q\in\mathcal{C}} J(Q,h),\]
the collection
\[U_Q :=\{h \in \Gamma: J(Q,h) >c \},\]
where $Q\in\mathcal{C}$, forms an open cover of $\Gamma$.
By compactness of the set $\{h\in \Gamma : |h|\leq m\sqrt{r}/\delta\}$,
there exists a finite family $F'\subset\mathcal{C}$ such that
\[\{h\in \Gamma : |h|\leq m\sqrt{r}/\delta\}\subset \bigcup \{U_Q : Q\in F'\}.\]
Then $F:=F'\cup\{Q^\pm_i : 1\leq i\leq r\}$ is still finite and it holds
\[ \Gamma=\bigcup \{ U_Q : Q\in F\},\]
which is a reformulation of \eqref{eq:supinfcondition}.
Putting everything together it follows that
\begin{align*}
&\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X+h\Delta S)]
=\inf_{h\in\mathbb{R}^d}\sup_{Q\in\mathcal{C}} J(Q,h)\\
&=\sup_{Q\in\mathcal{C}}\inf_{h\in\mathbb{R}^d} J(Q,h)
=\sup_{Q\in\mathcal{M}(|X|+Y)} (E_Q[X]-H(Q,\mathcal{P})).
\end{align*}
In particular, since $\mathcal{M}(|X|+Y)$ is not empty by Lemma \ref{lem:fundamental.lem},
it follows that the optimization problem does not take the value $-\infty$.
(b)
We proceed to show that the optimization problem is continuous from below
(Lemma \ref{lem:1period.cont.below})
and that an optimal strategy $h\in\mathbb{R}^d$ exists.
Recall that $X_n$ is a sequence increasing point-wise to $X$.
For the existence of an optimal strategy for a fixed function $X$, consider
the constant sequence $X_n:=X$ in the following argument.
For each natural number $n$, let $h_n\in\mathbb{R}^d$ such that
\begin{align}
\label{eq:h.nearly.optimal}
\inf_{h\in\mathbb{R}^d} \sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h\Delta S)]
\geq \sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h_n\Delta S)]- \frac{1}{n}.
\end{align}
By step (a) this is possible, i.e.~the left hand side of \eqref{eq:h.nearly.optimal}
is not equal to $-\infty$.
By Lemma \ref{lem:char.space.L} we may assume without loss of generality that
every $h_n$ is an element of
$L:=\mathop{\mathrm{lin}}\mathop{\mathrm{supp}}\nolimits_\mathcal{P}\Delta S$.
First assume that the sequence $h_n$ is unbounded,
i.e.~$\sup_n|h_n|=+\infty$.
Then, possibly after passing to a subsequence, $h_n/|h_n|$ converges to some limit $h^\ast$.
Since $|h^\ast|=1$ and $h^\ast\in L$,
it follows from Lemma \ref{lem:char.space.L} and the NA($\mathcal{P}$)-condition, that
$P'(A)>0$ for some $P'\in\mathcal{P}$ where $A:=\{h^\ast\Delta S>0\}$.
However, since
\[\exp(X_n+h_n\Delta S)1_A\to+\infty 1_A,\]
an application of Fatou's lemma yields
\[ \inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h\Delta S)]
\geq \log E_{P'}[\exp(X_n+h_n\Delta S)]-\frac{1}{n}
\to +\infty.\]
But then, since the sequence $X_n$ is increasing, it follows that
\begin{align*}
&\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X+h\Delta S)]\\
&\geq \lim_n \inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h\Delta S)]
=+\infty.
\end{align*}
Hence the optimization problem is continuous from below
and every $h\in\mathbb{R}^d$ is optimal for $X$.
If the sequence $h_n$ is bounded, again possibly after passing to a subsequence,
$h_n$ converges to some limit $h^\ast\in\mathbb{R}^d$.
Now it follows that
\begin{align*}
\sup_{P\in\mathcal{P}} \log E_P[\exp(X+h^\ast\Delta S)]
&\leq \liminf_n\Big( \sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h_n\Delta S)]-\frac{1}{n}\Big)\\
&\leq \liminf_n \inf_{h\in\mathbb{R}^d} \sup_{P\in\mathcal{P}} \log E_P[\exp(X_n+h\Delta S)] \\
&\leq \inf_{h\in\mathbb{R}^d} \sup_{P\in\mathcal{P}} \log E_P[\exp(X+h\Delta S)],
\end{align*}
where the first inequality follows from Fatou's lemma, the second one
since $h_n$ was chosen optimal up to an error of $1/n$, and the last one
since $X_n$ is an increasing sequence.
This shows both that the optimization problem is continuous from below
and that $h^\ast$ is optimal for $X$.
(c)
In the final step, the duality established in (a) is extended to general
random endowment. Let $X\colon\Omega\to(-\infty,+\infty]$ be measurable and observe that
\[\mathcal{M}(X^- + Y)=\mathcal{M}(|X\wedge n| + Y)\quad\text{for all }n\in\mathbb{N}\]
since $X^-$ is integrable if and only if $(X\wedge n)^-$ is.
Moreover, for any $Q\in\mathcal{M}(X^- + Y)$ the monotone convergence theorem applies
and yields $\sup_n E_Q[X\wedge n]=E_Q[X]$.
But then it follows that
\begin{align*}
&\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X + h\Delta S)]
=\sup_n \inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}} \log E_P[\exp(X\wedge n+ h\Delta S)]\\
&=\sup_n \sup_{Q\in\mathcal{M}(X^- + Y)} \big( E_Q[X\wedge n]-H(Q,\mathcal{P})\big)
=\sup_{Q\in\mathcal{M}(X^- + Y)} \big( E_Q[X]-H(Q,\mathcal{P})\big)
\end{align*}
where the first and second equality follow from steps (b) and (a), respectively, and the last
one by interchanging two suprema.
\end{proof}
\section{Proofs for the multi-period case}
\label{sec:multi.period}
\subsection{The case without options}
In this section, measurable selection arguments are used to show that the global analysis
can be reduced to a local one wherein the results of the one-period case are used.
For each $0\leq t \leq T-1$ and $\omega\in\Omega_t$ define
\[\mathcal{P}_t^T(\omega)=\{P_t\otimes\cdots\otimes P_{T-1}:
P_s(\cdot)\in\mathcal{P}_s(\omega,\cdot) \text{ for }t\leq s\leq T-1\},\]
where each $P_s(\cdot)$ is a universally measurable selector of $\mathcal{P}_s(\omega,\cdot)$.
Thus $\mathcal{P}_t^T(\omega)$ corresponds to the set of all possible probability scenarios
for the future stock prices $S_{t+1},\dots, S_T$, given the past $\omega\in\Omega_t$.
In particular it holds $\mathcal{P}=\mathcal{P}_0^T$, in line with the setting of Bouchard and Nutz \cite{bouchard2015arbitrage}.
In order to keep the indices to a minimum, fix two functions
\[ X\colon\Omega\to(-\infty,+\infty]\quad\text{and}\quad Y\colon\Omega\to[0,+\infty)\]
such that $X$ and $-Y$ are upper semianalytic, and define the set of all martingale measures
for the future stock prices $S_{t+1},\dots, S_T$ given the past $\omega\in\Omega_t$ by
\[ \mathcal{M}_t^T(\omega)
=\bigg\{ Q\in\mathfrak{P}(\Omega_{T-t}):
\begin{array}{l}
(S_s(\omega,\cdot))_{t\leq s \leq T} \text{ is a $Q$-martingale and}\\
E_Q[X(\omega,\cdot)^-+Y(\omega,\cdot)] + H(Q,\mathcal{P}_t^T(\omega))<+\infty
\end{array}\bigg\}\]
for each $0\leq t\leq T-1$ and $\omega\in\Omega_t$.
It is shown in Lemma \ref{lem:graph.Q.analytic} that $\mathcal{M}_t^T$ has analytic graph
and within the proof of Theorem \ref{thm:multiperiod} that its values are not empty.
Note that $\mathcal{M}_0^T=\mathcal{M}(Y)=\{Q\in\mathcal{M} : E_Q[Y+X^-]<+\infty \}$,
where $\mathcal{M}$ was defined in Section \ref{sec:main}.
Further introduce the dynamic version of the optimization problem:
Define
\[\mathcal{E}_T(\omega,x):=X(\omega)+x \]
for $(\omega,x)\in\Omega\times\mathbb{R}$
and recursively
\begin{align}
\label{eq:local.optimization}
\mathcal{E}_t(\omega,x)
:=\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}_t(\omega)}\log
E_P\Big[\exp\Big(\mathcal{E}_{t+1}\big(\omega\otimes_t\cdot,x+h\Delta S_{t+1}(\omega,\cdot)\big)\Big)\Big]
\end{align}
for $(\omega,x)\in\Omega_t\times\mathbb{R}$.
Here we write $\omega\otimes_t\omega':=(\omega,\omega')\in\Omega_{t+s}$
for $\omega\in\Omega_t$ and $\omega'\in\Omega_s$ instead of $(\omega,\cdot)$ to avoid confusion.
It will be shown later that $\mathcal{E}_t$ is well defined,
i.e.~that the term inside the expectation is appropriately measurable.
The following theorem is the main result of this section and includes
Theorem \ref{thm:main} as a special case (corresponding to $t=0$).
\begin{theorem}
\label{thm:multiperiod}
For every $0\leq t\leq T-1$ and $\omega\in\Omega_t$ it holds
\begin{align*}
\mathcal{E}_t(\omega,x)-x
&=\inf_{\vartheta\in\Theta}\sup_{P\in\mathcal{P}_t^T(\omega)}\log
E_P\Big[\exp\Big(X(\omega,\cdot) + (\vartheta\cdot S)_t^T(\omega,\cdot)\Big)\Big] \\
&=\sup_{Q\in\mathcal{M}_t^T(\omega)}
\Big(E_Q[X(\omega,\cdot)] -H(Q,\mathcal{P}_t^T(\omega)) \Big)
\end{align*}
and both terms are not equal to $-\infty$.
Moreover, the infimum over $\vartheta\in\Theta$ is attained.
\end{theorem}
We start by investigating properties of the (robust) relative entropy
and the graph of $\mathcal{M}_t^T$, which will ensure that measurable selection arguments
can be applied.
We then focus on deriving a duality for $\mathcal{E}_t$ and last prove the dynamic
programming principle.
\begin{lemma}[\text{\cite[Lemma 1.4.3.b]{dupuis2011weak}}]
\label{lem:H.is.borel}
The relative entropy $H$ is Borel.
\end{lemma}
\begin{proof}
Any Borel function can be approximated in measure by continuous functions,
so it follows as in the proof of Lemma \ref{lem:convex.H.C} that
\[ H(Q,P)=\sup\{ E_Q[Z] -\log E_P[\exp(Z)] : Z\text{ is bounded and continuous} \}. \]
Hence $H$ is a supremum of (weakly) continuous functions and thus lower semicontinuous; in particular $\{H\leq c\}$ is closed for every real number $c$ and $H$ is Borel.
\end{proof}
The so-called chain rule for the relative entropy is well known; a proof can be found
e.g.~in Appendix C3 of the book by Dupuis and Ellis \cite{dupuis2011weak}.
However, since we are dealing with universally measurable kernels
and also in order to be self-contained, a proof is given in the Appendix.
For the link between dynamic risk measures and this chain rule see
e.g.~\cite{cheridito2011composition} in the dominated,
and \cite{bartl2016pointwise,lacker2015liquidity} in the non-dominated setting.
\begin{lemma}
\label{lem:entropie.sum}
Let $0\leq t\leq T-1$ and $P,Q\in\mathfrak{P}(\Omega_{T-t})$. Then
\[ H(Q,P)=\sum_{s=t}^{T-1} E_Q[H(Q_s(\cdot),P_s(\cdot))] \]
where $Q_s$ and $P_s$ are universally measurable kernels such that
$Q=Q_t\otimes\cdots\otimes Q_{T-1}$ and $P=P_t\otimes\cdots\otimes P_{T-1}$.
\end{lemma}
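As an illustration (not needed for the proofs), the chain rule can be verified by hand in the two-period case $T-t=2$, assuming all densities exist and ignoring integrability issues.

```latex
% Write Q = Q_t (x) Q_{t+1} and P = P_t (x) P_{t+1}, so that the density
% factorizes as
%   dQ/dP(omega_1,omega_2)
%     = (dQ_t/dP_t)(omega_1) (dQ_{t+1}(omega_1)/dP_{t+1}(omega_1))(omega_2).
\begin{align*}
H(Q,P)
&=E_Q\Big[\log\frac{dQ_t}{dP_t}\Big]
 +E_Q\Big[\log\frac{dQ_{t+1}(\cdot)}{dP_{t+1}(\cdot)}\Big]\\
&=H(Q_t,P_t)+E_{Q_t}\big[H(Q_{t+1}(\cdot),P_{t+1}(\cdot))\big],
\end{align*}
% using that the first marginal of Q is Q_t for the first term and
% Fubini's theorem for the second.
```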
\begin{lemma}
\label{lem:entropie.sum.robust}
For any $0\leq t\leq T-1$, the function
\[ \Omega_t\times\mathfrak{P}(\Omega_{T-t})\to[-\infty,0],\quad
(\omega,Q)\mapsto -H(Q,\mathcal{P}_t^T(\omega))\]
is upper semianalytic. Moreover it holds
\begin{align*}
H(Q,\mathcal{P}_t^T(\omega))
&=H(Q_t,\mathcal{P}_t(\omega)) +E_Q[ H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))]
\end{align*}
where $Q'$ is a universally measurable kernel such that $Q=Q_t\otimes Q'$.
\end{lemma}
\begin{proof}
Every probability $Q\in\mathfrak{P}(\Omega_{T-t})$ can be written as
$Q=Q_t\otimes\cdots\otimes Q_{T-1}$ where $Q_s$ are the kernels
from Remark \ref{rem:kernels.measurable.and.derivative.measurable}, i.e.~such that
\[ \Omega_{s-t}\times\mathfrak{P}(\Omega_{T-t})\to\mathfrak{P}(\Omega_1),
\quad(\bar{\omega},Q)\mapsto Q_s(\bar{\omega})\]
is Borel.
(a) We start by showing that
\begin{align}
\label{eq:H.equal.sum}
\Omega_t\times\mathfrak{P}(\Omega_{T-t})\to[-\infty,0],
\quad (\omega,Q)\mapsto \sum_{s=t}^{T-1} -E_Q[ H(Q_s(\cdot),\mathcal{P}_s(\omega,\cdot))]
\end{align}
is upper semianalytic. Fix some $t\leq s\leq T-1$.
In the sequel $\omega$ will refer to elements in $\Omega_t$
and $\bar{\omega}$ to elements in $\Omega_{s-t}$.
Since $(\bar{\omega},Q)\mapsto Q_s(\bar{\omega})$ is Borel by construction
and the entropy $H$ is Borel by Lemma \ref{lem:H.is.borel}, the composition
\begin{align*}
\Omega_t\times\Omega_{s-t}\times\mathfrak{P}(\Omega_{T-t})\times\mathfrak{P}(\Omega_1)
&\to[-\infty,0],\quad
(\omega,\bar{\omega},Q,R)
\mapsto -H(Q_s(\bar{\omega}),R)
\end{align*}
is Borel as well.
As the graph of $\mathcal{P}_s$ is analytic,
it follows from \cite[Proposition 7.47]{bertsekas1978stochastic} that
\begin{align}
\label{eq:H(Q,scrP).is.lsa}
\Omega_t\times\Omega_{s-t}\times\mathfrak{P}(\Omega_{T-t})
\to[-\infty,0],\quad
&(\omega,\bar{\omega},Q)
\mapsto
-H(Q_s(\bar{\omega}),\mathcal{P}_s(\omega,\bar{\omega}))
\end{align}
is upper semianalytic.
Moreover \cite[Proposition 7.50]{bertsekas1978stochastic} guarantees that for any $\varepsilon>0$,
there exists a universally measurable kernel $P^\varepsilon_s$ such that
\begin{align}
\label{eq:selector.Ps'}
\begin{cases}
P^\varepsilon_s(\omega,\bar{\omega},Q)\in\mathcal{P}_s(\omega,\bar{\omega}), \\
H(Q_s(\bar{\omega}),P^\varepsilon_s(\omega,\bar{\omega},Q))
\leq H(Q_s(\bar{\omega}),\mathcal{P}_s(\omega,\bar{\omega}))+\varepsilon
\end{cases}
\end{align}
for all $(\omega,\bar{\omega},Q)$. This will be used in part (b).
Further, since
\begin{align*}
\Omega_t\times\mathfrak{P}(\Omega_{T-t})
\to[-\infty,0],\quad
(\omega,Q)
\mapsto
-E_Q[H(Q_s(\cdot),\mathcal{P}_s(\omega,\cdot))]
\end{align*}
is just \eqref{eq:H(Q,scrP).is.lsa} integrated with respect to $Q(d\bar{\omega})$,
an application of Lemma \ref{lem:integral.is.measurable} shows that this mapping
is upper semianalytic.
Finally, the fact that sums of upper semianalytic functions are again upper semianalytic
(see \cite[Lemma 7.30]{bertsekas1978stochastic}) implies that
\eqref{eq:H.equal.sum} is upper semianalytic as was claimed.
(b)
Fix some $\omega\in\Omega_t$ and $Q\in\mathfrak{P}(\Omega_{T-t})$.
From Lemma \ref{lem:entropie.sum} it follows that
\[ H(Q,\mathcal{P}_t^T(\omega))
=\inf_{P\in\mathcal{P}_t^T(\omega)}\sum_{s=t}^{T-1} E_Q[ H(Q_s(\cdot),P_s(\cdot))]
\geq \sum_{s=t}^{T-1} E_Q\big[ H(Q_s(\cdot),\mathcal{P}_s(\omega,\cdot))\big].\]
For the other inequality, let $\varepsilon>0$ be arbitrary and
$P_s^\varepsilon$ be the kernels from \eqref{eq:selector.Ps'}.
Recall that $Q$ and $\omega$ are fixed so that
\[P'_s\colon\Omega_{s-t}\to \mathfrak{P}(\Omega_1),\quad
\bar{\omega}\mapsto P_s^\varepsilon(\omega,\bar{\omega},Q)\]
is still universally measurable by \cite[Lemma 7.29]{bertsekas1978stochastic}.
Then it follows that
\[P':=P_t'\otimes\cdots\otimes P_{T-1}'\in\mathcal{P}_t^T(\omega)\]
and, using Lemma \ref{lem:entropie.sum} once more, that
\begin{align*}
&\sum_{s=t}^{T-1} E_Q\big[ H(Q_s(\cdot),\mathcal{P}_s(\omega,\cdot))\big]
\geq \sum_{s=t}^{T-1} E_Q\big[ H(Q_s(\cdot),P_s'(\cdot)) - \varepsilon \big]\\
&= H(Q,P') -(T-t)\varepsilon
\geq H(Q,\mathcal{P}_t^T(\omega)) -(T-t)\varepsilon.
\end{align*}
As $\varepsilon$ was arbitrary, this shows the desired inequality.
(c)
Finally, disintegration kernels are almost surely unique, so that
\[Q'=Q_{t+1}\otimes\cdots\otimes Q_{T-1}\quad Q_t\text{-almost surely.}\]
Hence it follows that
\[ H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))
=\sum_{s= t+1}^{T-1} E_{Q'(\cdot)}[H(Q_s(\cdot),\mathcal{P}_s(\omega,\cdot))]
\quad Q_t\text{-almost surely}.\]
It only remains to integrate this equation with respect to $Q_t$.
\end{proof}
Fix a measure $Q=Q_t\otimes\cdots\otimes Q_{T-1}\in\mathfrak{P}(\Omega_{T-t})$
and $\omega\in\Omega_t$.
An elementary computation shows that
$Q$ is a martingale measure for $(S_s(\omega,\cdot))_{t\leq s\leq T}$ if and only if
$E_Q[|\Delta S_{s+1}(\omega,\cdot)|]<+\infty$ and
\[ E_{Q_s(\bar{\omega})}[\Delta S_{s+1}(\omega,\bar{\omega},\cdot)]=0 \quad
\text{for } Q_t\otimes\cdots\otimes Q_{s-1}\text{-almost every }\bar{\omega}\in\Omega_{s-t}\]
and every $t\leq s\leq T-1$. This is used in the sequel without reference.
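The elementary computation alluded to above rests on the following disintegration identity; here $(\mathcal{F}_s)$ denotes the filtration generated by the coordinates on $\Omega_{T-t}$ (an assumption of this sketch, stated only for orientation).

```latex
% For Q = Q_t (x) ... (x) Q_{T-1} and t <= s <= T-1, Fubini's theorem gives
\begin{align*}
E_Q\big[\Delta S_{s+1}(\omega,\cdot)\,\big|\,\mathcal{F}_s\big](\bar{\omega})
=E_{Q_s(\bar{\omega})}\big[\Delta S_{s+1}(\omega,\bar{\omega},\cdot)\big]
\quad\text{for $Q$-almost every }\bar{\omega},
\end{align*}
% so, given integrability of Delta S_{s+1}, the martingale property of
% (S_s(omega,.)) under Q is exactly the requirement that the right-hand
% side vanishes Q_t (x) ... (x) Q_{s-1}-almost surely.
```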
\begin{lemma}
\label{lem:graph.Q.analytic}
The graph of $\mathcal{M}_t^T$ is analytic.
\end{lemma}
\begin{proof}
First notice that $Z:=X\wedge 0-Y$ is upper semianalytic.
This follows from the fact that $\{X\wedge 0 \geq a\}$ equals $\emptyset$ if $a>0$
and $\{X\geq a\}$ otherwise, and from the fact that the sum of upper semianalytic functions remains upper semianalytic.
Therefore an application of Lemma \ref{lem:integral.is.measurable} shows that
\[\Omega_t\times\mathfrak{P}(\Omega_{T-t})\to[-\infty,0],
\quad(\omega,Q)\mapsto E_Q[Z(\omega,\cdot)] \]
is upper semianalytic.
Then, since $(\omega,Q)\mapsto -H(Q,\mathcal{P}_t^T(\omega))$ is upper semianalytic
by Lemma \ref{lem:entropie.sum.robust} and the sum of upper semianalytic
mappings is again upper semianalytic, it follows that
\[ A:=\{(\omega,Q): E_Q[Z(\omega,\cdot)] - H(Q,\mathcal{P}_t^T(\omega))>-\infty\}\]
is an analytic set.
The missing part now is the martingale property.
First notice that
\[\Omega_t\times\mathfrak{P}(\Omega_{T-t})\to[0,+\infty],\quad
(\omega,Q)\mapsto E_Q[|\Delta S_{s+1}(\omega,\cdot)|] \]
is Borel by Lemma \ref{lem:integral.is.measurable}.
As before, for every $Q\in\mathfrak{P}(\Omega_{T-t})$,
we will write $Q=Q_t\otimes\cdots\otimes Q_{T-1}$ for the kernels $Q_s$
from Remark \ref{rem:kernels.measurable.and.derivative.measurable}.
Then, since $(\bar{\omega},Q)\mapsto Q_s(\bar{\omega})$ is Borel,
a twofold application of Lemma \ref{lem:integral.is.measurable} shows that
\[\Omega_t\times\mathfrak{P}(\Omega_{T-t})\to[0,+\infty],\quad
(\omega,Q)\mapsto E_Q[ |E_{Q_s(\cdot)}[\Delta S_{s+1}(\omega,\cdot)]|] \]
is Borel.
Thus
\[ B_s:=\{(\omega,Q): E_Q[|\Delta S_{s+1}(\omega,\cdot)|] <+\infty \text{ and }
E_Q[ |E_{Q_s(\cdot)}[\Delta S_{s+1}(\omega,\cdot)]|]=0\}\]
is Borel. Hence
\[\mathop{\mathrm{graph}}\mathcal{M}_t^T
= \bigcap \{ B_s : t\leq s\leq T-1\}\cap A \]
is a finite intersection of analytic sets and therefore itself analytic
(see \cite[Corollary 7.35.2]{bertsekas1978stochastic}).
\end{proof}
Define
\[ \mathcal{D}_t(\omega):=\sup_{Q\in\mathcal{M}_t^T(\omega)}
\Big( E_Q[X(\omega,\cdot)]-H(Q,\mathcal{P}_t^T(\omega))\Big)\]
for all $0\leq t\leq T-1$ and $\omega\in\Omega_t$, and recall that
\[\mathcal{E}_t(\omega,x)
:=\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}_t(\omega)}\log
E_P\Big[\exp\big(\mathcal{E}_{t+1}\big((\omega,\cdot),x+h\Delta S_{t+1}(\omega,\cdot)\big)\big)\Big]\]
for $(\omega,x)\in\Omega_t\times\mathbb{R}$.
\begin{proof}[\text{\bf Proof of Theorem \ref{thm:multiperiod} -- Duality}]
We claim that
\begin{align}
\label{eq:induction.main.thm}
\bigg\{
\begin{array}{l}
\mathcal{E}_t(\omega,x)=\mathcal{D}_t(\omega)+x\quad\text{and}\quad \mathcal{D}_t(\omega)\in(-\infty,+\infty]\\
\text{for all $\omega\in\Omega_t$, $x\in\mathbb{R}$ and $0\leq t\leq T-1$.}
\end{array}
\end{align}
The proof will be a backward induction.
For $t=T-1$, \eqref{eq:induction.main.thm} is just the statement of Theorem \ref{thm:1peroiod}.
Now assume that \eqref{eq:induction.main.thm} holds true for $t+1$.
First we artificially bound $X$ from above and then pass to the limit.
More precisely, define
\[ \mathcal{D}^n_s(\omega):=\sup_{Q\in\mathcal{M}_s^T(\omega)}
\big( E_Q[X(\omega,\cdot)\wedge n] - H(Q,\mathcal{P}_s^T(\omega))\big)\]
for $s=t,t+1$ and $\omega\in\Omega_s$, and notice that $\mathcal{D}^n_s$ is upper
semianalytic. Indeed, since $X(\omega,\cdot)\wedge n$ is upper semianalytic, the mapping
$(\omega,Q)\mapsto E_Q[X(\omega,\cdot)\wedge n]$
is upper semianalytic by Lemma \ref{lem:integral.is.measurable}.
Then, Lemma \ref{lem:entropie.sum.robust} and the fact that
the sum of upper semianalytic functions stays upper semianalytic implies that
\[ (\omega,Q)\mapsto E_Q[X(\omega,\cdot)\wedge n] - H(Q,\mathcal{P}_s^T(\omega))\]
is upper semianalytic.
Since the graph of $\mathcal{M}_s^T$ is analytic by Lemma \ref{lem:graph.Q.analytic},
it follows from \cite[Proposition 7.47]{bertsekas1978stochastic} that $\mathcal{D}_s^n$ is upper semianalytic.
Moreover, \cite[Proposition 7.50]{bertsekas1978stochastic} guarantees that
for any $\varepsilon>0$ there exists a universally measurable kernel
$Q^\varepsilon(\cdot)\in\mathcal{M}_{t+1}^T(\omega,\cdot)$ such that
\begin{align}
\label{eq:Q.eps.in.dual}
\mathcal{D}^n_{t+1}(\omega\otimes_t\cdot)\leq
E_{Q^\varepsilon(\cdot)}[X(\omega,\cdot)\wedge n]
- H({Q^\varepsilon(\cdot)},\mathcal{P}_{t+1}^T(\omega,\cdot))+\varepsilon.
\end{align}
By interchanging two suprema it holds $\mathcal{D}_s=\sup_n \mathcal{D}_s^n$
(for more details see part (c) of the proof of Theorem \ref{thm:1peroiod}).
In particular $\mathcal{D}_s$ is upper semianalytic,
as the countable supremum over upper semianalytic functions.
Therefore it follows from Lemma \ref{lem:1period.cont.below} that
$\mathcal{E}_t=\sup_n \mathcal{E}_t^n$
where
\[\mathcal{E}_t^n(\omega,x)
:=\inf_{h\in\mathbb{R}^d}\sup_{P\in\mathcal{P}_t(\omega)}
\log E_P[\exp( \mathcal{D}^n_{t+1}(\omega\otimes_t\cdot) + h\Delta S_{t+1}(\omega,\cdot)+x)].\]
The goal now is to show that
$\mathcal{E}_t^n$ equals $\mathcal{D}_t^n$ for all $n$, from which it follows that
\[ \mathcal{E}_t(\omega,x)
=\sup_n \mathcal{E}_t^n(\omega,x)
=\sup_n \mathcal{D}_t^n(\omega)+x
=\mathcal{D}_t(\omega)+x\]
and the proof is complete.
To show that indeed
$\mathcal{E}_t^n(\omega,x)=\mathcal{D}_t^n(\omega)+x$,
fix some $n$, $x$, and $\omega\in\Omega_t$.
By Theorem \ref{thm:1peroiod} it holds
\[ \mathcal{E}^n_t(\omega,x)
=\sup_{Q_t\in\mathcal{M}_t(Z)} \big( E_{Q_t}[\mathcal{D}^n_{t+1}(\omega\otimes_t\cdot)]
-H(Q_t,\mathcal{P}_t(\omega))\big) + x
>-\infty\]
where
\[\mathcal{M}_t(Z):=\bigg\{ Q\in\mathfrak{P}(\Omega_1) :
\begin{array}{l}
E_Q[\mathcal{D}_{t+1}^n(\omega\otimes_t\cdot)^- +|\Delta S_{t+1}(\omega,\cdot)|+Z]<+\infty,\\
H(Q,\mathcal{P}_t(\omega))<+\infty \text{ and } E_Q[\Delta S_{t+1}(\omega,\cdot)]=0
\end{array} \bigg\}\]
and $Z\colon\Omega_1\to[0,+\infty)$ is an arbitrary universally measurable function.
We start by showing that $\mathcal{E}_t^n(\omega,x)\leq\mathcal{D}_t^n(\omega)+x$.
Fix some $\varepsilon>0$, let
$Q^\varepsilon(\cdot)\in\mathcal{M}_{t+1}^T(\omega,\cdot)$
be the kernel from \eqref{eq:Q.eps.in.dual}, and define $Z\colon\Omega_1\to[0,+\infty)$,
\[Z:= E_{Q^{\varepsilon}(\cdot)}\Big[X(\omega,\cdot)^-+Y(\omega,\cdot)
+\sum_{s= t+2}^T |\Delta S_s(\omega,\cdot)|\Big]
+H(Q^\varepsilon(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot)). \]
Then $Z$ is real-valued by the definition of $\mathcal{M}_{t+1}^T(\omega,\cdot)$
and it follows from Lemma \ref{lem:integral.is.measurable},
Lemma \ref{lem:entropie.sum.robust}, and
\cite[Proposition 7.44]{bertsekas1978stochastic}
that $Z$ is universally measurable.
Moreover
\begin{align}
\label{eq:Q.times.Qeps.in.Ms}
Q_t\otimes Q^\varepsilon\in\mathcal{M}_t^T(\omega)\quad\text{for any }Q_t\in\mathcal{M}_t(Z).
\end{align}
To show this, fix some $Q_t\in\mathcal{M}_t(Z)$ and define $Q:=Q_t\otimes Q^\varepsilon$.
Then an application of Lemma \ref{lem:entropie.sum.robust} yields
\begin{align*}
H(Q,\mathcal{P}_t^T(\omega))
&=H(Q_t,\mathcal{P}_t(\omega))+ E_{Q_t}[H(Q^\varepsilon(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))]\\
&\leq H(Q_t,\mathcal{P}_t(\omega)) + E_{Q_t}[Z]
<+\infty.
\end{align*}
Moreover, it holds
\[E_Q\Big[X(\omega,\cdot)^- +Y(\omega,\cdot)+ \sum_{s= t+1}^T|\Delta S_s(\omega,\cdot)|\Big]
\leq E_{Q_t}[ |\Delta S_{t+1}(\omega,\cdot)|+ Z]
<+\infty\]
so that indeed $Q\in\mathcal{M}_t^T(\omega)$ and therefore
\begin{align*}
& E_{Q_t}[\mathcal{D}_{t+1}^n(\omega\otimes_t\cdot)]- H(Q_t,\mathcal{P}_t(\omega))\\
&\leq E_{Q_t}[ E_{Q^\varepsilon(\cdot)}[X(\omega,\cdot)\wedge n]
- H(Q^\varepsilon(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))+\varepsilon]
- H(Q_t,\mathcal{P}_t(\omega))\\
&= E_Q[X(\omega,\cdot)\wedge n]- H(Q,\mathcal{P}_t^T(\omega))+\varepsilon
\leq \mathcal{D}_t^n(\omega)+\varepsilon.
\end{align*}
As $Q_t\in\mathcal{M}_t(Z)$ and $\varepsilon>0$ were arbitrary, it follows that
$\mathcal{E}_t^n(\omega,x)\leq\mathcal{D}_t^n(\omega)+x$.
To show the other inequality,
i.e.~$\mathcal{E}_t^n(\omega,x)\geq\mathcal{D}_t^n(\omega)+x$,
fix some measure $Q\in\mathcal{M}_t^T(\omega)$ which we write as
\[Q=Q_t\otimes Q'\]
for a measure $Q_t$ on $\Omega_1$ and a Borel kernel
$Q'\colon\Omega_1\to\mathfrak{P}(\Omega_{T-t-1})$.
Then
\[Q_t\in\mathcal{M}_t(0)\qquad\text{and}\qquad
Q'(\cdot)\in\mathcal{M}_{t+1}^T(\omega,\cdot)\quad\text{$Q_t$-almost surely}\]
where $\mathcal{M}_t(0)=\mathcal{M}_t(Z)$ for the function $Z\equiv 0$.
Indeed, first notice that
\[ H(Q_t,\mathcal{P}_t(\omega)) + E_{Q_t}[H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))]
=H(Q,\mathcal{P}_t^T(\omega))
<+\infty \]
by Lemma \ref{lem:entropie.sum.robust}, so that
\[H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))<+\infty\quad Q_t\text{-almost surely}.\]
Similarly, we conclude that
$E_{Q'(\cdot)}[X(\omega,\cdot)^-+Y(\omega,\cdot) + |\Delta S_s(\omega,\cdot)|]<+\infty$ $Q_t$-almost surely
for all $t+2\leq s\leq T$.
Thus it holds
\[Q'(\cdot)\in\mathcal{M}_{t+1}^T(\omega,\cdot)\quad\text{$Q_t$-almost surely}.\]
But then it follows from the definition of $\mathcal{D}_{t+1}^n$ that
\[E_{Q'(\cdot)}[X(\omega,\cdot)\wedge n]
- H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))
\leq \mathcal{D}^n_{t+1}(\omega\otimes_t\cdot) \]
$Q_t$-almost surely, so that
\begin{align*}
E_{Q_t}[\mathcal{D}^n_{t+1}(\omega\otimes_t\cdot)^-]
&\leq E_{Q_t}\big[\big(E_{Q'(\cdot)}[X(\omega,\cdot)\wedge n]
- H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))\big)^-\big]\\
&\leq E_Q[X(\omega,\cdot)^-] + H(Q,\mathcal{P}_t^T(\omega))
<+\infty
\end{align*}
by Jensen's inequality and Lemma \ref{lem:entropie.sum.robust}.
Therefore one has $Q_t\in\mathcal{M}_t(0)$ and it follows that
\begin{align*}
&E_Q[X(\omega,\cdot)\wedge n]-H(Q,\mathcal{P}_t^T(\omega))\\
&=E_{Q_t}[ E_{Q'(\cdot)}[ X(\omega,\cdot)\wedge n]
- H(Q'(\cdot),\mathcal{P}_{t+1}^T(\omega,\cdot))] - H(Q_t,\mathcal{P}_t(\omega))\\
&\leq \sup_{R\in\mathcal{M}_t(0)}\big(
E_{R}[\mathcal{D}^n_{t+1}(\omega\otimes_t\cdot)]- H(R,\mathcal{P}_t(\omega))\big)
=\mathcal{E}_t^n(\omega,x)-x
\end{align*}
and as $Q\in\mathcal{M}_t^T(\omega)$ was arbitrary, that indeed
$\mathcal{D}_t^n(\omega)+x\leq \mathcal{E}_t^n(\omega,x)$.
Combined with the reverse inequality shown before, it holds
$\mathcal{E}_t^n(\omega,x)=\mathcal{D}_t^n(\omega)+x$ and the proof is complete.
\end{proof}
The following lemma will be important in the proof of the dynamic programming principle.
Since it was already shown that $\mathcal{E}_t(\omega,x)=\mathcal{D}_t(\omega)+x$,
the proof is almost identical to the one for \cite[Lemma 3.7]{nutz2014utility}.
For the sake of completeness, a proof is given in the Appendix.
\begin{lemma}[\text{\cite[Lemma 3.7]{nutz2014utility}}]
\label{lem:existence.optimizer.measurable}
For every $0\leq t\leq T-1$ and $x\in\mathbb{R}$
there exists a process $\vartheta^\ast\in\Theta$ such that
\begin{align*}
\mathcal{E}_s(\omega,x+(\vartheta^\ast\cdot S)_t^s(\omega))
=\sup_{P\in\mathcal{P}_s(\omega)} \log E_P[\exp(\mathcal{E}_{s+1}(\omega\otimes_s\cdot,
x+(\vartheta^\ast\cdot S)_t^{s+1}(\omega,\cdot) ))]
\end{align*}
for all $t\leq s\leq T-1$ and $\omega\in\Omega_s$.
\end{lemma}
\begin{proof}[\text{\bf Proof of Theorem \ref{thm:multiperiod} -- Dynamic programming}]
We turn to the proof of the dynamic programming principle, i.e.~we show that
\begin{align}
\label{eq:dyn.prog}
C:=\inf_{\vartheta\in\Theta}\sup_{P\in\mathcal{P}_t^T(\omega)}
\log E_P[\exp(X(\omega,\cdot)+x+ (\vartheta\cdot S)_t^T(\omega,\cdot))]
=\mathcal{E}_t(\omega,x)
\end{align}
for all $x$, $\omega\in\Omega_t$, and $0\leq t\leq T-1$ and that the infimum over
$\vartheta\in\Theta$ is attained.
Again, fix some $x$, $t$, and $\omega\in\Omega_t$.
By the first part of the proof of Theorem \ref{thm:multiperiod},
i.e.~the part which focuses on duality,
it holds $\mathcal{E}_t(\omega,x)=\mathcal{D}_t(\omega)+x$.
Therefore $x$ can be subtracted from both sides of \eqref{eq:dyn.prog}
and there is no loss of generality in assuming that $x=0$. This will lighten notation.
First we focus on the inequality $C\geq \mathcal{E}_t(\omega,x)$.
Fix some $\vartheta\in\Theta$, $P\in\mathcal{P}_t^T(\omega)$, and $Q\in\mathcal{M}_t^T(\omega)$.
If
\[ C':=\log E_P[\exp(X(\omega,\cdot)+ (\vartheta\cdot S)_t^T(\omega,\cdot))]
\geq E_Q[X(\omega,\cdot)] - H(Q,P),\]
then the claim follows by taking the supremum over all those $Q$ and $P$,
and in a second step the infimum over all $\vartheta\in\Theta$.
To show this, one may assume that $C'$ and $H(Q,P)$ are finite, otherwise
there is nothing to prove. Define
\[ Z:=X(\omega,\cdot)+(\vartheta\cdot S)_t^T(\omega,\cdot).\]
Applying the elementary inequality
$ab\leq \exp(a)+b\log b$ to ``$a=Z^+$'' and ``$b=dQ/dP$'' yields
\[ E_Q[Z^+]\leq E_P[\exp(Z^+)]+H(Q,P)\leq \exp(C')+1+H(Q,P)<+\infty.\]
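For the reader's convenience we recall that this is the Fenchel--Young inequality for the conjugate pair $a\mapsto e^a$ and $b\mapsto b\log b-b$: for all $a\in\mathbb{R}$ and $b\geq0$,
\[ ab\leq e^a+b\log b-b\leq e^a+b\log b,\]
with the convention $0\log 0=0$; moreover, the middle estimate above uses $\exp(Z^+)\leq \exp(Z)+1$.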
Therefore it holds
\[ E_Q[(\vartheta\cdot S)_t^T(\omega,\cdot)^+]
\leq E_Q[X(\omega,\cdot)^-] + E_Q[Z^+]
<+\infty \]
by the definition of $\mathcal{M}_t^T(\omega)$.
But then it follows from a result on local martingales (see \cite[Theorems 1 and 2]{jacod1998local}) that
$(\vartheta\cdot S)_t^T(\omega,\cdot)$ is actually integrable with respect to $Q$ and has expectation 0.
Hence $E_Q[Z^-]<+\infty$ and therefore Lemma \ref{lem:rep.exponential.dominated} yields
\[C'=\log E_P[\exp(Z)]
\geq E_Q[Z] - H(Q,P)
= E_Q[X(\omega,\cdot)] -H(Q,P)\]
which is what we wanted to show.
We finish the proof by showing that $C\leq \mathcal{E}_t(\omega,0)$
and that an optimal strategy $\vartheta^\ast\in\Theta$ exists.
Let $\vartheta^\ast$ be as in Lemma \ref{lem:existence.optimizer.measurable}, i.e.~such that
\begin{align}
\label{eq:optimal.H}
\mathcal{E}_s(\omega,(\vartheta^\ast \cdot S)_t^s(\omega))
=\sup_{P\in\mathcal{P}_s(\omega)}
\log E_P[\exp(\mathcal{E}_{s+1}(\omega\otimes_s\cdot,(\vartheta^\ast \cdot S)_t^{s+1}(\omega,\cdot)))]
\end{align}
for all $t\leq s\leq T-1$.
Then $\vartheta^\ast$ is optimal and $C\leq \mathcal{E}_t(\omega,0)$.
Indeed, let $P=P_t\otimes\cdots\otimes P_{T-1}\in\mathcal{P}_t^T(\omega)$ and
fix some $t\leq s\leq T-1$.
Then it follows from \eqref{eq:optimal.H} that
\begin{align*}
&\log E_P\big[\exp\big( \mathcal{E}_s(\omega\otimes_t\cdot,(\vartheta^\ast \cdot S)_t^s(\omega,\cdot))\big)\big]\\
&=\log E_{P_t\otimes\cdots\otimes P_{s-1}}\big[\exp(
\mathcal{E}_s(\omega\otimes_t\cdot,(\vartheta^\ast \cdot S)_t^s(\omega,\cdot))\big)\big]\\
&\geq \log E_{P_t\otimes\cdots\otimes P_{s-1}}\Big[\exp\Big(\log
E_{P_s(\cdot)}\big[\exp\big(
\mathcal{E}_{s+1}(\omega\otimes_t\cdot,(\vartheta^\ast \cdot S)_t^{s+1}(\omega,\cdot))\big)\big]\Big)\Big] \\
&=\log E_P\big[\exp\big(\mathcal{E}_{s+1}(\omega\otimes_t\cdot,(\vartheta^\ast \cdot S)_t^{s+1}(\omega,\cdot))\big)\big],
\end{align*}
and an iteration yields
\begin{align*}
\mathcal{E}_t(\omega,0)
&=\log E_P[\exp(\mathcal{E}_t(\omega,(\vartheta^\ast \cdot S)_t^t(\omega)))]\\
&\geq \log E_P[\exp(\mathcal{E}_T(\omega\otimes_t\cdot,(\vartheta^\ast \cdot S)_t^T(\omega,\cdot)))]\\
&=\log E_P[\exp( X(\omega,\cdot) + (\vartheta^\ast\cdot S)_t^T(\omega,\cdot))].
\end{align*}
As $P\in\mathcal{P}_t^T(\omega)$ was arbitrary, it holds
$C\leq \mathcal{E}_t(\omega,0)$ and since $\vartheta^\ast\in\Theta$, it follows from the
previously shown inequality that $\vartheta^\ast$ is optimal.
\end{proof}
\subsection{The case with options}
Fix some function $Y\colon\Omega\to[0,+\infty)$ such that $-Y$ is upper semianalytic and recall that
$\mathcal{M}(Y):=\{Q\in\mathcal{M}: E_Q[Y]<+\infty\}$ and
$\mathcal{M}_g(Y):=\{Q\in\mathcal{M}_g : E_Q[Y]<+\infty\}$, where $\mathcal{M}$ and $\mathcal{M}_g$
were defined in Section \ref{sec:main}.
Moreover, fix some Borel function $Z\colon\Omega\to\mathbb{R}$.
We first claim that for every upper semianalytic function $X\colon\Omega\to\mathbb{R}$
bounded by $Z$, i.e.~$|X|\leq Z$, one has
\begin{align}
\label{eq:superhedge.entropy}
\inf\{ m\in\mathbb{R} : m+(\vartheta\cdot S)_0^T\geq X\,\mathcal{P}\text{-q.s. for some }
\vartheta\in\Theta\}
&=\sup_{Q\in\mathcal{M}(Y)} E_Q[X]
\end{align}
in case of no options and, if $|g^i|\leq Z$ for $1\leq i\leq e$, then
\begin{align}
\label{eq:superhedge.0.in.ri}
0\in\mathop{\mathrm{ri}} \{ E_Q[g^e] : Q\in\mathcal{M}_{\hat{g}}(Y)\}
\end{align}
where $\hat{g}:=(g^1,\dots,g^{e-1})$ and also
\begin{align}
\label{eq:superhedge.options.entropy}
&\inf\Big\{m\in\mathbb{R} :
\begin{array}{l}
m+(\vartheta\cdot S)_0^T+\alpha g \geq X\,\mathcal{P}\text{-q.s.}\\
\text{for some }(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e
\end{array}\Big\}
=\sup_{Q\in\mathcal{M}_g(Y)} E_Q[X].
\end{align}
All these claims are proven in \cite{bouchard2015arbitrage} if one relaxes $\mathcal{M}$
in the sense that the relative entropy does not need to be finite.
In fact, Bouchard and Nutz deduce
\eqref{eq:superhedge.options.entropy} from \eqref{eq:superhedge.0.in.ri}, and
\eqref{eq:superhedge.0.in.ri} from \eqref{eq:superhedge.entropy};
see Theorem 4.9 as well as equation (5.1) and Theorem 5.1 in \cite{bouchard2015arbitrage} respectively.
The same can be done here (with the exact same arguments as in \cite{bouchard2015arbitrage}),
so we shall only give a (sketch of a) proof for \eqref{eq:superhedge.entropy}.
Consider first the one-period case and define
\[\mathcal{C}':=\{ Q\in\mathfrak{P}(\Omega) : E_Q[|\Delta S| + Y]<+\infty
\text{ and } Q\ll P \text{ for some }P\in\mathcal{P} \},\]
and
$\mathcal{M}':=\{ Q\in\mathcal{C}' : E_Q[\Delta S]=0\}$.
Then the following superhedging duality
\[ \inf\{m\in\mathbb{R} : m+h\Delta S\geq X\,\mathcal{P}\text{-q.s.} \text{ for some }h\in\mathbb{R}^d\}
=\sup_{Q\in\mathcal{M}'} E_Q[X],\]
see \cite[Theorem 3.4]{bouchard2015arbitrage}, is a consequence of the fact that
$0\in\mathop{\mathrm{ri}}\{E_Q[\Delta S] :Q\in\mathcal{C}'\}$,
see Lemma 3.5 and Lemma 3.6 in \cite{bouchard2015arbitrage}.
However, since
\[0\in\mathop{\mathrm{ri}} \{ E_Q[\Delta S] : Q\in\mathcal{C} \}\quad\text{for}\quad
\mathcal{C}:=\{ Q \in\mathcal{C}' : H(Q,\mathcal{P})<+\infty\}\]
by Lemma \ref{lem:fundamental.lem},
the same arguments as for \cite[Theorem 3.4]{bouchard2015arbitrage} show that
\[ \inf\{m\in\mathbb{R} : m+h\Delta S\geq X\,\mathcal{P}\text{-q.s.} \text{ for some }h\in\mathbb{R}^d\}
=\sup_{Q\in\mathcal{M}(Y)} E_Q[X],\]
in particular $\sup_{Q\in\mathcal{M}'} E_Q[X]=\sup_{Q\in\mathcal{M}(Y)} E_Q[X]$.
For the transition to the multi-period case define recursively $m_T:=m_T':=X$ and
\[m_t'(\omega):=\sup_{Q\in\mathcal{M}_t'(\omega)} E_Q[m_{t+1}'(\omega,\cdot)]
\quad\text{and}\quad
m_t(\omega):=\sup_{Q\in\mathcal{M}_t^Z(\omega)} E_Q[m_{t+1}(\omega,\cdot)],\]
for $0\leq t\leq T-1$ and $\omega\in\Omega_t$, where
\begin{align*}
\mathcal{M}_t'(\omega)&:=\{Q\in\mathfrak{P}(\Omega_1): Q\ll P \text{ for some }P\in\mathcal{P}_t(\omega)
\text{ and } E_Q[\Delta S_{t+1}(\omega,\cdot)]=0\},\\
\mathcal{M}_t^Z(\omega)&:=\{Q\in\mathcal{M}_t'(\omega) : E_Q[Z]+E_Q[m_{t+1}(\omega,\cdot)^-]+H(Q,\mathcal{P}_t(\omega))<+\infty\}
\end{align*}
and $Z\colon\Omega_1\to[0,+\infty)$ an arbitrary universally measurable function.
A backward induction shows that $m_t=m_t'$ for each $t$.
Moreover, following the exact same arguments as in the part of the proof for Theorem \ref{thm:multiperiod}
which focuses on duality, one can show that
$m_t(\omega)=\sup_{Q\in\mathcal{M}_t^T(\omega)} E_Q[X(\omega,\cdot)]$
where we recall
\[ \mathcal{M}_t^T(\omega)
=\bigg\{ Q\in\mathfrak{P}(\Omega_{T-t}):
\begin{array}{l}
(S_s(\omega,\cdot))_{t\leq s \leq T} \text{ is a $Q$-martingale and}\\
E_Q[X(\omega,\cdot)^-+Y(\omega,\cdot)] + H(Q,\mathcal{P}_t^T(\omega))<+\infty
\end{array}\bigg\} \]
so that $\mathcal{M}_0^T=\mathcal{M}(Y)$.
Since it is shown in (or rather within the proof of)
\cite[Lemma 4.13]{bouchard2015arbitrage} that
\[\inf\{ m\in\mathbb{R} : m+(\vartheta\cdot S)_0^T\geq X\,\mathcal{P}\text{-q.s. for some }
\vartheta\in\Theta\}
=m_0',\]
the claim follows from $m_0'=m_0=\sup_{Q\in\mathcal{M}(Y)}E_Q[X]$.
\begin{proof}[\text{\bf Proof of Theorem \ref{thm:main.options}}]
The proof is an induction over $e$.
For $e=0$ the statement is a special case of Theorem \ref{thm:multiperiod},
so assume that both claims (duality and existence) are true for $e-1\geq0$.
By assumption there is a Borel function $Z$ such that $|X|+|g^i|\leq Z$ for every $1\leq i\leq e$.
Using the induction hypothesis, it follows that
\begin{align}
\label{eq:options.infinf}
&\inf_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}
\sup_{P\in\mathcal{P}} \log E_P[\exp(X+(\vartheta\cdot S)_0^T+\alpha g)]\\
&=\inf_{\beta \in\mathbb{R}}\min_{(\vartheta,\hat{\alpha})\in\Theta\times\mathbb{R}^{e-1}}
\sup_{P\in\mathcal{P}} \log E_P[\exp(X+(\vartheta\cdot S)_0^T+\hat{\alpha}\hat{g}+\beta g^e)]
\label{eq:options.inf.min} \\
&=\inf_{\beta \in\mathbb{R}} \sup_{Q\in\mathcal{M}_{\hat{g}}} \big( E_Q[X]+\beta E_Q[g^e] -H(Q,\mathcal{P}) \big)
=\inf_{\beta \in\mathbb{R}} \sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta )\nonumber
\end{align}
where $\hat{g}=(g^1,\dots,g^{e-1})$ and
\[J\colon \mathcal{M}_{\hat{g}}\times\mathbb{R}\to\mathbb{R},
\quad (Q,\beta )\mapsto E_Q[X]+\beta E_Q[g^e] -H(Q,\mathcal{P}).\]
It is already shown that
$0\in\mathop{\mathrm{ri}} \{ E_Q[g^e] : Q\in\mathcal{M}_{\hat{g}}\}$,
see \eqref{eq:superhedge.0.in.ri},
which can be used exactly as in the proof of Theorem \ref{thm:1peroiod} to prove that
\begin{align}
\label{eq:options.minimax}
\inf_{|\beta |\leq n} \sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta)
=\inf_{\beta\in\mathbb{R}} \sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta)
=\sup_{Q\in\mathcal{M}_{\hat{g}}} \inf_{\beta\in\mathbb{R}} J(Q,\beta)
\end{align}
for some $n\in\mathbb{N}$; see \eqref{eq:for.existence.minimizer} for the first,
and the text below \eqref{eq:supinfcondition} for the second equality.
Hence
\begin{align*}
&\inf_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}
\sup_{P\in\mathcal{P}} \log E_P[\exp(X+(\vartheta\cdot S)_0^T+\alpha g)]\\
&=\inf_{\beta \in\mathbb{R}} \sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta)
=\sup_{Q\in\mathcal{M}_{\hat{g}}} \inf_{\beta \in\mathbb{R}} J(Q,\beta)
=\sup_{Q\in\mathcal{M}_g} \big( E_Q[X]-H(Q,\mathcal{P})\big)
\end{align*}
showing that duality holds.
The first equality in \eqref{eq:options.minimax} together with the lower semicontinuity of
$\sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\cdot)$
imply that there is some $\beta^\ast\in\mathbb{R}$ such that
\[\sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta^\ast)
=\inf_{\beta\in\mathbb{R}}\sup_{Q\in\mathcal{M}_{\hat{g}}} J(Q,\beta).\]
For this $\beta^\ast$, the induction hypothesis \eqref{eq:options.inf.min} guarantees the existence of an optimal
strategy $(\vartheta^\ast,\hat{\alpha}^\ast)\in\Theta\times\mathbb{R}^{e-1}$ showing that
$(\vartheta^\ast,\alpha^\ast)\in\Theta\times\mathbb{R}^e$ is optimal for
\eqref{eq:options.infinf}, where $\alpha^\ast:=(\hat{\alpha}^\ast,\beta^\ast)$.
This completes the proof.
\end{proof}
\begin{proof}[\text{\bf Proof of Theorem \ref{thm:limit.superhedg}}]
Since $\Theta$ and $\mathbb{R}^e$ are vector-spaces, it follows from
Theorem \ref{thm:main.options} that
\begin{align*}
\pi_\gamma(X)
&=\inf_{(\vartheta,\alpha)\in\Theta\times\mathbb{R}^e}\sup_{P\in\mathcal{P}}
\frac{1}{\gamma}\log E_P\big[\exp\big(\gamma X + (\vartheta\cdot S)_0^T + \alpha g\big)\big]\\
&= \frac{1}{\gamma}\sup_{Q\in\mathcal{M}_g} \big( E_Q[\gamma X] -H(Q,\mathcal{P}) \big)
=\sup_{Q\in\mathcal{M}_g} \big( E_Q[X] -\frac{1}{\gamma}H(Q,\mathcal{P}) \big).
\end{align*}
This formula implies both that $\pi_\gamma$ is increasing in $\gamma$ and,
by interchanging the suprema over $\gamma$ and $Q$, that
$\sup_\gamma \pi_\gamma(X)=\sup_{Q\in\mathcal{M}_g} E_Q[X]$.
The latter term coincides by \eqref{eq:superhedge.options.entropy}
with $\pi(X)$, hence the proof is complete.
\end{proof}
\section{Introduction}
When describing cosmological solutions at a macroscopic level a usual choice for the matter model is a fluid. The pressure is then a function of the energy density, and the relation between these two quantities is often linear. However, a more refined description is obtained via kinetic theory. One of the most important models is the Einstein-Boltzmann system, and this system is very interesting from the mathematical and physical points of view; the Boltzmann equation is a bridge between micro- and macroscopic laws and is crucial for the understanding of many-body physics. In some sense it is also a bridge between classical and quantum physics. In this paper we are interested in the Einstein-Boltzmann system and study the late-time behaviour of it.
We assume the expansion of the Universe as a fact and consider a positive cosmological constant so that the expansion is accelerating. The main objective of the present paper is to improve the previous results \cite{LeeNungesser171,LeeNungesser172}. In \cite{LeeNungesser171} we showed that Bianchi I solutions to the Einstein-Boltzmann system exist globally in time, but imposed artificial restrictions on the scattering kernel. In \cite{LeeNungesser172} we showed that classical solutions to the Boltzmann equation exist globally in time, but assumed that an isotropic spacetime is given with spatially flat geometry and exponentially growing scale factor. In the present paper we continue this line of research considering the coupled Einstein-Boltzmann system with Bianchi I symmetry. Future global existence is shown, and the asymptotic behaviour of the distribution function and the relevant geometric quantities is obtained with specific decay rates. It is shown that the spacetime tends to de Sitter spacetime at late times. To the best of our knowledge it is the first result of this type concerning the coupled Einstein-Boltzmann system with a scattering kernel which is physically well motivated.
The paper is structured as follows. Before coming to the next section, we introduce some notation and in particular define the relevant norm. We finish the introduction by considering isotropic solutions to the coupled Einstein-Boltzmann system, a case which is easily handled based on \cite{LeeNungesser172}. This gives the reader an idea of what will be done in the following sections for the more complicated Bianchi I case, where the spacetime is still homogeneous but anisotropic. In Section 2 we present our main results. After that we collect the main equations in Section 3 and some basic estimates in Section 4. The key estimates developed in this paper are then established in Section 5. Finally, in Section 6 we explain how the different estimates are combined to prove the main theorem and present some ideas on how to continue this line of research.
\subsection{Notations}\label{sec_notations}
Let $A=(a_1,\cdots,a_m)$ be an $m$-tuple of integers between $1$ and $3$. We write $\partial^A=\partial^{a_1}\cdots\partial^{a_m}$, where $\partial^a=\partial /\partial p_a$ is the partial derivative with respect to $p_a$ for $a\in\{1,2,3\}$ with $|A|=m$ the total order of differentiation. Indices are lowered and raised by the metric $g_{\alpha\beta}$ and $g^{\alpha\beta}$, respectively, and the Einstein summation convention is assumed. Greek letters run from $0$ to $3$, while Latin letters from $1$ to $3$, and momentum variables without indices denote three dimensional vectors, for instance we write
\[
p=(p^1,p^2,p^3),\quad p_*=(p_1,p_2,p_3).
\]
We consider an orthonormal frame $e_\mu(=e_\mu^\alpha E_\alpha)$, i.e. $g_{\alpha\beta}e^\alpha_\mu e^\beta_\nu=\eta_{\mu\nu}$, where $e_0=\partial_t$ and $\eta_{\mu\nu}$ denotes the Minkowski metric. Momentum variables can be written as $p^\alpha E_\alpha=\p^\mu e_\mu$ where we use a hat to indicate that the momentum is written in an orthonormal frame. Note that
\[
p^\alpha=\p^\mu e_\mu^\alpha,\quad \p_\mu(=\eta_{\mu\nu}\p^\nu)=p_\alpha e^\alpha_\mu,
\]
where the Minkowski metric applies in an orthonormal frame. For partial derivatives in an orthonormal frame, we use hats in a similar way, and the derivatives with respect to $p_a$ and $\p_a$ are related to each other as $\partial^a=e^a_b\hat{\partial}^b$. For a multi-index $A=(a_1,\cdots,a_m)$ we have
\[
\partial^A=e^A_B\hat{\partial}^B,
\]
where $e^A_B=e^{a_1}_{b_1}\cdots e^{a_m}_{b_m}$ for $B=(b_1,\cdots,b_m)$.
We also consider the usual $l^2$-norm: for a three-dimensional vector $v$, we define
\[
|v|=\sqrt{\sum_{i=1}^3(v^i)^2},
\]
and note that $|\p|^2=\eta_{ab}\p^a\p^b$. With this notation we define the weight function:
\[
\langle p_*\rangle=\sqrt{1+|p_*|^2},
\]
and note that it is in general different from $p^0$, since on the mass shell $p^0=\sqrt{1+g^{ab}p_ap_b}$. With this weight function we define the norm of a function $f=f(t,p_*)$ as follows: for a non-negative integer $N$,
\begin{align*}
\|f(t)\|^2_{k,N}=\sum_{|A|\leq N}\|\partial^A f(t)\|_k^2,\quad \|f(t)\|_k^2=\int_{\bbr^3}\langle p_*\rangle^{2k}e^{p^0(t)}|f(t,p_*)|^2dp_*,
\end{align*}
where $k$ is a positive real number.
\subsection{Isotropic solutions in the coupled Einstein-Boltzmann system}
Before coming to the Bianchi I case, let us establish the results for the coupled Einstein-Boltzmann system with FLRW symmetry in the case of spatially flat geometry based on the previous result \cite{LeeNungesser172}. We consider the metric
\begin{align*}
^{(4)}g=-dt^2+g,\quad g=R^2 (dx^2 + dy^2+dz^2),
\end{align*}
where $R=R(t)>0$ is the scale factor. Thus the spatial metric is $g_{ab}=R^2 \delta_{ab}$, so that $k_{ab}=\dot{g}_{ab}/2=R \dot{R} \delta_{ab}$, where the dot denotes the derivative with respect to time. As a consequence the Hubble variable is $H=k_{ab} g^{ab}/3 = \dot{R}/R$. In contrast to the Bianchi I case, which we will consider in the following sections, there is no shear, i.e. $\sigma_{ab}=k_{ab}-H g_{ab}=0$. For the isotropic case we also assume that the distribution function is invariant under rotations of the momenta. The collision kernel will be described in the next section, but for the moment we only notice that it is invariant under rotations; it does not depend on the scattering angle, and the other variables are invariant under the group $SO(3)$, for example $p^0=(1+R^{-2}|p_*|^2)^{1/2}$ which depends only on the length of $p_*$. Also the quantity $s$ is invariant under the group $SO(3)$ since $s=(p^0+q^0)^2-R^{-2}|p_*-q_*|^2$. As a consequence the Boltzmann equation preserves the isotropy. Note that we have here an example of a collision kernel satisfying (4.4) of \cite{NTRW}. The governing equations in a FLRW spacetime with a cosmological constant $\Lambda$, which will be considered to be positive, are given as follows:
\begin{align*}
\frac{{\dot{R}}^{2}}{R^{2}}&=\frac{\rho+\Lambda}{3},\\
\frac{3\ddot{R}}{R}&=-\frac{\rho+3P}{2}+\Lambda,
\end{align*}
which are called the Friedmann equations. Here $\rho$ and $P$ are the energy density and the pressure, respectively, which are given by
\begin{align*}
\rho &= R^{-3} \int_{\bbr^3} f p^0dp_*,\\
P &= R^{-5} \int_{\bbr^3} f \frac{|p_*|^2}{3p^0}dp_*.
\end{align*}
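These expressions are obtained from the kinetic energy-momentum tensor $T_{\alpha\beta}=(\det g)^{-\frac12}\int_{\bbr^3}f\,p_\alpha p_\beta\,(p^0)^{-1}dp_*$, which will be introduced in Section 3, via $\rho=T_{00}$ and $P=\frac13 g^{ab}T_{ab}$: in the isotropic case $\det g=R^6$ and $g^{ab}p_ap_b=R^{-2}|p_*|^2$, so that
\[
\rho=R^{-3}\int_{\bbr^3}f\,\frac{(p^0)^2}{p^0}\,dp_*,\qquad
P=\frac{1}{3}R^{-3}\int_{\bbr^3}f\,\frac{R^{-2}|p_*|^2}{p^0}\,dp_*.
\]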
We are interested in solutions having an asymptotic behaviour concerning the distribution function as in \cite{LeeNungesser172}. We thus assume that
\begin{align*}
f(t,p_*) \leq C_f (1+\vert p_* \vert^2)^{-\frac{k}{2}} e^{-\frac12 p^0},
\end{align*}
for some large $k$, or in an orthonormal frame
\begin{align*}
\hat{f}(t,\p) \leq C_f (1+R^2 \vert \hat{p} \vert^2)^{-\frac{k}{2}} e^{-\frac12 p^0}.
\end{align*}
We assume that our Universe is initially expanding, i.e. $H(t_0)>0$, and non-empty, i.e. $f(t_0)\neq0$. The vacuum case corresponds to the well-known Kasner solutions. Then, from the first Friedmann equation we have $H > \gamma=(\Lambda/3)^{1/2}$ for all times. Concerning the energy density, we have
\begin{align*}
\rho&\leq C_f R^{-3} \int (1+ \vert p_* \vert^2)^{-\frac{k}{2}} e^{-\frac12\sqrt{1+ R^{-2} \vert p_* \vert^2}}\sqrt{1+ R^{-2} \vert p_* \vert^2}\, dp_* \\
&\leq 4\pi C_f R^{-3} \int_0^\infty r^2 (1+ r^2)^{-\frac{k}{2}} e^{-\frac12 \sqrt{1+ R^{-2} r^2}}\sqrt{1+ R^{-2} r^2} \, dr \leq C C_f R^{-3},
\end{align*}
where we used that $R$ is bounded from below, since $H$ is, and that the map $x\mapsto xe^{-\frac{x}{2}}$ is bounded on $[1,\infty)$. Similarly we obtain that
\begin{align*}
P \leq C C_f R^{-5}.
\end{align*}
Thus $\rho$ and $P$ are bounded. Applying standard arguments we obtain the global existence of $R$ given $f$ satisfying the asymptotic behaviour mentioned above. Moreover, we can easily obtain an improved estimate for $H$. Using the lower bound obtained in \cite{LeeNungesser172} for $R$, namely $R\geq e^{\gamma t}$, and the asymptotic behaviour of $f$, we have that the energy density decays exponentially:
\begin{align*}
\rho \leq C C_f R^{-3} \leq C C_f e^{-3\gamma t},
\end{align*}
and $P\leq CC_f e^{-5\gamma t}$ for the pressure. From the Friedmann equations we have that
\begin{align*}
\dot{H}= \gamma^2 -H^2-\frac{1}{6}(\rho+3P)\leq\gamma^2 -H^2,
\end{align*}
and we derive
\begin{align*}
\frac{d}{dt}(H-\gamma)\leq -(H-\gamma)(H+\gamma)\leq -2\gamma (H-\gamma),
\end{align*}
which shows that $H$ converges to $\gamma$ exponentially:
\begin{align*}
H-\gamma =O(e^{-2\gamma t}).
\end{align*}
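For completeness we sketch the two steps above: differentiating $H=\dot{R}/R$ and inserting the Friedmann equations gives
\[
\dot{H}=\frac{\ddot{R}}{R}-H^2=\frac{\Lambda}{3}-\frac{\rho+3P}{6}-H^2=\gamma^2-H^2-\frac{1}{6}(\rho+3P),
\]
and integrating the differential inequality for $H-\gamma$ yields the explicit rate
\[
H(t)-\gamma\leq\big(H(t_0)-\gamma\big)e^{-2\gamma(t-t_0)}.
\]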
Let us define $R= e^{\gamma t} \bar{R}$ and use the definition of $H=\dot{R}/R$ to obtain
\begin{align*}
\dot{\bar{R}}=(H-\gamma) \bar{R}.
\end{align*}
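Since this equation is linear in $\bar{R}$, it can also be integrated explicitly:
\[
\bar{R}(t)=\bar{R}(t_0)\exp\Big(\int_{t_0}^t\big(H(s)-\gamma\big)\,ds\Big).
\]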
Using the estimate of $H-\gamma$ we conclude that $\bar{R}$ is bounded, and as a consequence we obtain
\begin{align*}
\dot{\bar{R}}=O(e^{-2\gamma t}).
\end{align*}
This means that $\bar{R}$ will in fact converge to the expression given by
\begin{align*}
\bar{R}_\infty= \bar{R}(t_0)+ \int^\infty_{t_0} (H-\gamma) \bar{R}\, dt.
\end{align*}
Putting things together we obtain that $R$ converges exponentially to an exponentially growing function:
\begin{align*}
R= e^{\gamma t} (\bar{R}_\infty + O(e^{-2\gamma t})).
\end{align*}
We now combine this with the result of \cite{LeeNungesser172} to obtain the result for the coupled case. Suppose that initial data $R(t_0)$, $\dot{R}(t_0)$, and $f(t_0)$ are given, and define an iteration $\{R_n\}$ and $\{f_n\}$ as follows. Let $R_0= e^{\gamma t} \bar{R}(t_0)$, and note that $R_0$ satisfies the condition of Theorem 1 of \cite{LeeNungesser172}. As a consequence there exists a small positive $\varepsilon_0$ such that if $\Vert f(t_0) \Vert^2_{k,N} <\varepsilon_0$, where the norm is defined in Section \ref{sec_notations}, then there exists a unique non-negative classical solution $f_0$ to the Boltzmann equation in a given spacetime with scale factor $R_0$. Note that the solution satisfies $\sup_{0\leq t<\infty}\|f_0(t)\|^2_{k,N}\leq C\varepsilon_0$ for some constant $C>0$. Suppose now that $f_n$ is given such that $\sup_{0\leq t<\infty}\Vert f_n(t) \Vert^2_{k,N} \leq C\varepsilon_0$ with $\Vert f(t_0) \Vert^2_{k,N} <\varepsilon_0$. This means that we have the desired asymptotic behaviour for $f_n$, hence $R_{n+1}$ exists globally in time. As a result, the scale factor grows exponentially by Lemma 1 of \cite{LeeNungesser172}. Applying Theorem 1 of \cite{LeeNungesser172} again we obtain $f_{n+1}$, and this completes the iteration. We have uniform bounds for all the relevant quantities and thus can take the limit up to a subsequence to obtain classical solutions to the coupled equations with the asymptotic behaviour for $f$ and $R$ as described above. To summarize, we briefly considered here the isotropic case as an introduction and obtained certain asymptotic behaviours for the metric and the distribution function. For the Bianchi I case the procedure will be very similar to the isotropic case, but below neither the metric nor the distribution function will be isotropic.
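Schematically, the iteration alternates between the matter and the geometry equations,
\[
R_0\;\longrightarrow\; f_0\;\longrightarrow\; R_1\;\longrightarrow\; f_1\;\longrightarrow\;\cdots,
\]
where each $f_n$ solves the Boltzmann equation in the given spacetime with scale factor $R_n$, each $R_{n+1}$ solves the Friedmann equations with $\rho$ and $P$ computed from $f_n$, and the uniform bound $\sup_{0\leq t<\infty}\|f_n(t)\|^2_{k,N}\leq C\varepsilon_0$ is the quantity propagated along the iteration.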
\section{Main results}
We state the main theorem of the present paper.
\begin{thm}\label{Thm}
Consider the Einstein-Boltzmann system \eqref{evolution1}--\eqref{constraint2} with Bianchi I symmetry and a positive cosmological constant. Suppose that the assumption on the scattering kernel holds and the Hubble variable is initially given as $H(t_0)<(7/6)^{1/2}\gamma$ with $\gamma=(\Lambda/3)^{1/2}$. Let $g_{ab}(t_0)$, $k_{ab}(t_0)$, and $f(t_0)$ be initial data of the Einstein-Boltzmann system satisfying the constraints \eqref{constraint1}--\eqref{constraint2} such that $\|f(t_0)\|_{k+1/2,N}$ is bounded for $k>5$ and $N\geq 3$. Then, there exists a small $\varepsilon>0$ such that if $\|f(t_0)\|_{k+1/2,N}<\varepsilon$, then there exist unique classical solutions $g_{ab}$, $k_{ab}$, and $f$ to the Einstein-Boltzmann system corresponding to the initial data. The solutions exist globally in time, the spacetime is geodesically future complete, and the distribution function $f$ is nonnegative. Moreover, there exist constant matrices $\mathcal{G}_{ab}$ and $\mathcal{G}^{ab}$ such that
\begin{align*}
&H=\gamma+O(e^{-2\gamma t}),\\
&\sigma^{ab}\sigma_{ab}=O(e^{-2\gamma t}),\\
&g_{ab}=e^{2\gamma t}\Big(\mathcal{G}_{ab}+O(e^{-\gamma t})\Big),\\
&g^{ab}=e^{-2\gamma t}\Big(\mathcal{G}^{ab}+O(e^{-\gamma t})\Big),
\end{align*}
and the distribution function $f$ satisfies in an orthonormal frame
\[
\hat{f}(t,\hat{p})\leq C \varepsilon (1+e^{2\gamma t} |\hat{p}|^2)^{-\frac12 k} e^{-\frac12 p^0},
\]
where $C$ is a positive constant.
\end{thm}
The proof is given in the last section. From the control of the main quantities one can then obtain estimates of related quantities collected in the corollary below. The Kasner exponents, or shape parameters, all tend to $1/3$, the deceleration parameter tends to the expected value, and we have a dust-like behaviour at late times. The details of this can be found in Section \ref{sec_einstein} on basic estimates concerning the Einstein part.
\begin{cor}
Let $g_{ab}$, $k_{ab}$, and $f$ be the solutions obtained in the previous theorem. Then, we also obtain the following estimates:
\begin{align*}
&s_i= \frac13 + O(e^{-\gamma t}),\\
&d= -1 + O(e^{-2 \gamma t}),\\
&\rho=O(e^{-3\gamma t}),\\
&\hat{S}_{ij}=O(e^{-5\gamma t}), \\
&\frac{\hat{S}_{ij}}{\rho}= O(e^{-2\gamma t}).
\end{align*}
\end{cor}
\section{The Einstein-Boltzmann system with Bianchi I symmetry in the case of Israel particles}
In this paper we are interested in the spacetime with Bianchi type I symmetry. We follow the sign conventions of \cite{Hans} and also refer to this book for background on the Einstein equations with Bianchi symmetry. Concerning the relativistic kinetic theory we refer to \cite{CercignaniKremer}. Using a left-invariant frame $E_a$ with $\xi^a$ its duals, the metric of a Bianchi spacetime can be written as
\[
^{(4)}g=-dt\otimes dt+g,\quad g=g_{ab}\,\xi^a\otimes\xi^b.
\]
In the Bianchi I case the evolution equations of the metric $g_{ab}$ and the second fundamental form $k_{ab}$ are obtained via the Einstein equations and are as follows (cf. (25.17)--(25.18) of \cite{Hans}):
\begin{align}
\dot{g}_{ab}&=2k_{ab},\label{evolution1}\\
\dot{k}_{ab}&=2k^c_{a}k_{bc}-k\,k_{ab}+S_{ab}+\frac{1}{2}(\rho-S)g_{ab}+\Lambda g_{ab},\label{evolution2}
\end{align}
where a dot denotes the derivative with respect to time, $k=g^{ab}k_{ab}$, $k^c_a=g^{cd}k_{da}$, and $\det g$ is the determinant of the matrix $(g_{ab})$.
Moreover $\rho$ and $S$ come from the energy-momentum tensor $T_{\alpha\beta}$ which, since we use a kinetic picture, is defined by
\[
T_{\alpha\beta}=(\det g)^{-\frac12}\int_{\bbr^3}f(t,p_*)\frac{p_\alpha p_\beta}{p^0}dp_*,
\]
and $\rho=T_{00}$, $S_{ab}=T_{ab}$, and $S=g^{ab}S_{ab}$. The evolution equations \eqref{evolution1}--\eqref{evolution2} are coupled to the Boltzmann equation:
\begin{align}
\partial_tf&=(\det g)^{-\frac12}\int_{\bbr^3}\int_{\bbs^2}v_M\sigma(h,\theta)\Big(f(p_*')f(q_*')-f(p_*)f(q_*)\Big)d\omega dq_*\label{boltzmann} \\
&=:Q(f,f)=Q_+(f,f)-Q_-(f,f). \nonumber
\end{align}
The $Q(f,f)$ is called the collision operator, where $Q_{\pm}(f,f)$ are called the gain and the loss terms, respectively. The M{\o}ller velocity $v_M$ and the relative momentum $h$ are defined for given momenta $p^\alpha$ and $q^\alpha$ by
\[
v_M=\frac{h\sqrt{s}}{4p^0q^0},\quad h=\sqrt{(p_\alpha-q_\alpha)(p^\alpha-q^\alpha)},\quad s=-(p_\alpha+q_\alpha)(p^\alpha+q^\alpha),
\]
where $s$ is called the total energy and satisfies $s=4+h^2$. The post-collision momenta $p_\alpha'$ and $q_\alpha'$ are now given by
\begin{align*}
\left(
\begin{array}{c}
p'^0\\
p'_k
\end{array}
\right)=
\left(
\begin{array}{c}
\displaystyle
p^0+2\bigg(-q^0\frac{n_a e^a_b\omega^b}{\sqrt{s}}+q_ae^a_b\omega^b+\frac{n_ae^a_b\omega^bn_c q^c}{\sqrt{s}(n^0+\sqrt{s})}\bigg)\frac{n_de^d_i\omega^i}{\sqrt{s}}\\
\displaystyle
p_k+2\bigg(-q^0\frac{n_ae^a_b\omega^b}{\sqrt{s}}+q_ae^a_b\omega^b+\frac{n_ae^a_b\omega^bn_cq^c}{\sqrt{s}(n^0+\sqrt{s})}\bigg)
\bigg(g_{ka}e^a_b\omega^b+\frac{n_ae^a_b\omega^bn_k}{\sqrt{s}(n^0+\sqrt{s})}\bigg)
\end{array}
\right),
\end{align*}
and $q_\alpha'=p_\alpha+q_\alpha-p_\alpha'$, where $n^\alpha$ denotes $p^\alpha+q^\alpha$ for simplicity, and $\omega=(\omega^1,\omega^2,\omega^3)\in\bbs^2$ serves as a parameter. The $e^a_b$ are the components of an orthonormal frame, and we recall that these were introduced in Section \ref{sec_notations}. For background on the representation of the post-collision momenta we refer to the Appendix of \cite{LeeNungesser172}. The constraints are given by
\begin{align}
-k_{ab}k^{ab}+k^2&=2\rho+2\Lambda,\label{constraint1}\\
0&=-T_{0a}.\label{constraint2}
\end{align}
In the present paper we assume that the cosmological constant is positive, i.e. $\Lambda>0$. The relevant equations of the Einstein-Boltzmann system with Bianchi I symmetry and $\Lambda>0$ are thus the equations \eqref{evolution1}--\eqref{constraint2}, and we now study their global-in-time properties. The quantity $\sigma(h,\theta)$ in the collision operator is called the scattering kernel, where the scattering angle $\theta$ is defined by $\cos\theta=(p^\alpha-q^\alpha)(p'_\alpha-q'_\alpha)/h^2$. In this paper we consider Israel particles:
\begin{equation}
\sigma(h,\theta)=\frac{4}{h(4+h^2)}\sigma_0(\theta),\label{scat}
\end{equation}
where $\sigma_0$ is an arbitrary function of the scattering angle $\theta$. For simplicity we assume that
\[
\sigma_0(\theta)\equiv 1.
\]
Hence, the scattering kernel of our interest is written as $\sigma(h,\theta)=4(hs)^{-1}$.
\section{Basic estimates}
\subsection{Estimates for the Einstein part}\label{sec_einstein}
In this section we enumerate the results which are needed for the Einstein part. It is convenient to introduce the trace-free part of the second fundamental form:
\[
\sigma_{ab}= k_{ab}-\frac13 k g_{ab},
\]
and denote the Hubble variable by
\[
H=\frac13 k.
\]
Apart from estimates for the metric components it is sometimes useful to have estimates for the generalised Kasner exponents, sometimes also called shape parameters, which are defined as the quotients of the eigenvalues of the second fundamental form (with respect to the metric) by the trace of the second fundamental form. We will denote them by $s_i$. Another useful variable is the deceleration parameter $d$, which is defined as
\[
d=-1 -\frac{\dot{k}}{k^2}.
\]
The following proposition gives the results for the Einstein equations with a given distribution function $f$ satisfying a certain decay property.
\begin{prop}\label{prop_einstein}
Consider a Bianchi I spacetime, which is initially expanding, i.e. $H(t_0)>0$, and let $g_{ab}(t_0)$ and $k_{ab}(t_0)$ be initial data of the evolution equations \eqref{evolution1}--\eqref{evolution2} satisfying the constraints \eqref{constraint1}--\eqref{constraint2}. Suppose that a distribution function $f$ is given and satisfies for some positive $C_f$,
\begin{align}\label{asympfI}
\hat{f}(t,\p)\leq C_f (1+ e^{2\gamma t}|\p|^2)^{-\frac{1}{2}k}e^{-\frac12 p^0},
\end{align}
where $\gamma=(\Lambda/3)^{1/2}$ and $k>5$. Then, the Einstein equations admit global-in-time solutions $g_{ab}$ and $k_{ab}$ which satisfy the following estimates:
\begin{align*}
&H=\gamma+O(e^{-2\gamma t}),\\
&\sigma_{ab}\sigma^{ab}= O(e^{-2\gamma t}),\allowdisplaybreaks\\
&g_{ab}= e^{2\gamma t} \Big(\mathcal{G}_{ab}+ O(e^{-\gamma t})\Big),\\
&g^{ab}= e^{-2\gamma t} \Big(\mathcal{G}^{ab}+ O(e^{-\gamma t})\Big),\allowdisplaybreaks\\
&s_i= \frac13 + O(e^{-\gamma t}),\\
&d= -1 + O(e^{-2 \gamma t}),\allowdisplaybreaks\\
&\rho=O(e^{-3\gamma t}),\\
&\hat{S}_{ij}=O(e^{-5\gamma t}), \\
&\frac{\hat{S}_{ij}}{\rho}= O(e^{-2\gamma t}),
\end{align*}
where $\mathcal{G}_{ab}$ and $\mathcal{G}^{ab}$ are constant matrices.
\end{prop}
\begin{proof}
Given the distribution function $f$ satisfying \eqref{asympfI} we estimate $\rho$ as
\begin{align*}
\rho=\int \hat{f}(t,\p)p^0d\p \leq C_f \int (1+e^{2\gamma t} \vert\hat{p} \vert^2)^{-\frac12 k} e^{-\frac12 p^0}p^0 d\p \leq CC_f e^{-3\gamma t}.
\end{align*}
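For instance, the last inequality follows from the substitution $z=e^{\gamma t}\hat{p}$, for which $d\hat{p}=e^{-3\gamma t}dz$, together with the elementary bound $p^0e^{-\frac12 p^0}\leq C$:
\begin{align*}
\int_{\bbr^3} (1+e^{2\gamma t}\vert\hat{p}\vert^2)^{-\frac12 k}e^{-\frac12 p^0}p^0\,d\hat{p}\leq Ce^{-3\gamma t}\int_{\bbr^3}(1+|z|^2)^{-\frac12 k}\,dz\leq Ce^{-3\gamma t},
\end{align*}
where the last integral is finite since $k>5>3$.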
To estimate $\hat{S}_{ij}$ we use \eqref{asympfI} with the fact that $p^0\geq1$ for massive particles:
\begin{align*}
\hat{S}_{ij}&= \int \hat{f}(t,\p) \frac{\hat{p}_i \hat{p}_j}{p^0} d\p \leq C C_f\int (1+e^{2\gamma t} \vert\hat{p} \vert^2)^{-\frac12 k} e^{-\frac12 p^0} \vert \hat{p} \vert^2 d\p \\
&\leq C C_fe^{-5\gamma t} \int (1+|z|^2)^{-\frac12 k} e^{-\frac12 \sqrt{1+ e^{-2\gamma t}|z|^2}} \vert z \vert^2 dz \leq C C_f e^{-5\gamma t},
\end{align*}
where we used $k>5$ for the last inequality. Global-in-time existence of solutions $g_{ab}$ and $k_{ab}$ is easily obtained by standard arguments. The estimates of $H$, $\sigma_{ab}\sigma^{ab}$, $g_{ab}$, $g^{ab}$, $s_i$, and $d$ are also easily obtained by the same arguments as in \cite{Lee04}. In fact the results of \cite{Lee04} go through since the Vlasov equation is not used at all. So we omit the details and only refer to the proofs of Propositions 3.1 and 3.5--3.7 in \cite{Lee04}. To obtain the estimate of the quotient of $\hat{S}_{ij}$ and $\rho$, we note that the energy density is bounded from below by the zeroth component of the particle current density $N^\alpha$, which is defined as
\begin{align*}
N^\alpha=(\det g)^{-\frac12}\int_{\bbr^3}f(t,p_*)\frac{p^\alpha}{p^0}dp_*.
\end{align*}
This quantity is divergence-free and as a result
\begin{align*}
\dot{N}^0=-3HN^0.
\end{align*}
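Explicitly, this linear ODE integrates to
\begin{align*}
N^0(t)=N^0(t_0)\exp\bigg(-3\int_{t_0}^tH(s)\,ds\bigg).
\end{align*}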
Using the estimate for $H$ we obtain that
\begin{align*}
\rho > N^0 > C e^{-3\gamma t},
\end{align*}
and this proves the estimate of $\hat{S}_{ij}/\rho$.
\end{proof}
Here, we assumed that a distribution function is given and obtained global solutions to the Einstein equations. This result will be used in Section \ref{sec_boltzmann} to obtain solutions to the Boltzmann equation for given $g_{ab}$ and $k_{ab}$. We will define an iteration for the coupled equations. To obtain a solution from the iteration we need boundedness of the following quantity:
\[
F=\frac{\sigma_{ab}\sigma^{ab}}{4H^2},
\]
which is a scaled version of the shear.
\begin{lemma}\label{lem_F}
Let $g_{ab}$ and $k_{ab}$ be the solutions obtained in Proposition \ref{prop_einstein}. If initial data is given as $H(t_0)<(7/6)^{1/2}\gamma$, then $F(t)<1/4$ for all $t\geq t_0$.
\end{lemma}
\begin{proof}
Note that $H$ satisfies the following differential equation:
\[
\dot{H}=-3H^2-\frac{1}{6}S+\frac{1}{2}\rho+\Lambda,
\]
and the constraint equation \eqref{constraint1} is written as
\begin{align}\label{constraint3}
k^2=\frac32\sigma_{ab}\sigma^{ab}+3\rho+3\Lambda.
\end{align}
Since $k=3H$, the differential equation for $H$ is now written as
\[
\dot{H}=-\frac12 \sigma_{ab}\sigma^{ab} -\frac{1}{6}S-\frac12\rho\leq 0,
\]
and this shows that $H$ is decreasing. Together with the previous results we can see that $H$ is monotonically decreasing to the constant $\gamma$, in particular $H\geq \gamma$. We use again the constraint equation \eqref{constraint3} to obtain the following inequality:
\[
\sigma_{ab}\sigma^{ab}=6H^2-2\rho-6\gamma^2\leq 6H^2-6\gamma^2.
\]
Then, using $H\geq\gamma$ in the denominator and the monotonicity $H\leq H(t_0)$, the quantity $F$ is estimated as follows:
\[
F=\frac{\sigma_{ab}\sigma^{ab}}{4H^2}\leq\frac{3H^2-3\gamma^2}{2\gamma^2}\leq \frac32\bigg(\frac{H^2(t_0)}{\gamma^2}-1\bigg)<\frac32\cdot\frac16=\frac14,
\]
and this completes the proof.
\end{proof}
Finally let us note that with the estimate of $H$ the determinant of the metric is estimated as follows. Since
\begin{align*}
\frac{d (\log \det g)}{dt}= 6H,
\end{align*}
we obtain
\begin{align*}
\det g= O( e^{6\gamma t}).
\end{align*}
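In more detail, integrating $H=\gamma+O(e^{-2\gamma t})$ gives
\begin{align*}
\log\det g(t)=\log\det g(t_0)+6\int_{t_0}^tH(s)\,ds=6\gamma t+O(1),
\end{align*}
so that both $\det g\leq Ce^{6\gamma t}$ and $\det g\geq C^{-1}e^{6\gamma t}$ hold.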
This means that we can go from our frame to an orthonormal frame and back with transformation matrices which satisfy $|e^a_b|\leq C e^{\gamma t}$ and $|(e^{-1})^a_b|\leq Ce^{-\gamma t}$. For details on this we refer to \cite{LeeNungesser171}.
\subsection{Basic estimates for momentum variables}\label{sec_basic2}
In this part we collect basic lemmas for the Boltzmann equation.
\begin{lemma}\label{lem_basic}
The following estimates hold:
\[
s=4+h^2,\quad \frac{|\p-\q|}{\sqrt{p^0q^0}}\leq h\leq |\p-\q|,\quad
s\leq 4p^0q^0,\quad
|\p|\leq p^0.
\]
\end{lemma}
\begin{proof}
We refer to \cite{GuoStrain12,LeeNungesser171} for the proofs. Note that $\p^i$ is the $i$-th component of the momentum in an orthonormal frame.
\end{proof}
\begin{lemma}\label{lem_int}
For any integer $m$, we have
\[
\int_{\bbr^3}(p^0)^me^{-p^0}dp_*\leq C(\det g)^{\frac12},
\]
where the constant $C$ does not depend on $t$.
\end{lemma}
\begin{proof}
By a direct calculation we have
\begin{align*}
\int_{\bbr^3}(p^0)^me^{-p^0}dp_*=(\det g)^{\frac12}\int_{\bbr^3} (p^0)^me^{-p^0}d\p\leq C(\det g)^{\frac12},
\end{align*}
where we used the representation $p^0=\sqrt{1+|\p|^2}$ in an orthonormal frame.
\end{proof}
\begin{lemma}\label{lem_pp'q'}
Let a spatial metric $g$ satisfy the assumption {\sf (A)} of Section \ref{sec_boltzmann}. Then, the following estimate holds:
\[
\langle p_*\rangle\leq C\langle p_*'\rangle\langle q_*'\rangle,
\]
where the constant $C$ does not depend on the metric.
\end{lemma}
\begin{proof}
Since $g^{ab}p_ap_b=|\p|^2$, the assumption {\sf (A)} implies that
\[
\frac{1}{c_0}e^{2\gamma t}|\p|^2\leq |p_*|^2\leq c_0 e^{2\gamma t}|\p|^2.
\]
Then, we have in an orthonormal frame,
\[
\frac{1+|p_*|^2}{(1+|p_*'|^2)(1+|q_*'|^2)}\leq \frac{C(1+e^{2\gamma t}|\p|^2)}{(1+e^{2\gamma t}|\p'|^2)(1+e^{2\gamma t}|\q'|^2)}\leq \frac{C(1+e^{2\gamma t}|\p|^2)}{1+e^{2\gamma t}(|\p'|^2+|\q'|^2)},
\]
where the constant $C$ depends only on the constant $c_0$ given in {\sf (A)}. We now follow the proof of Lemma 4 of \cite{LeeNungesser172}, where the factor $e^{2\gamma t}$ is replaced by $R^2$, and obtain the desired result. We refer to \cite{LeeNungesser172} for more details.
\end{proof}
\begin{lemma}\label{lem_partial_1/p0}
For a multi-index $A$, there exist polynomials ${\mathcal P}$ and ${\mathcal P}_i$ such that
\begin{align*}
&\partial^A \bigg[\frac{1}{p^0}\bigg]=\frac{e^A_B}{(p^0)^{|A|+1}}{\mathcal P}\bigg(\frac{\p}{p^0}\bigg),\\
&\partial^A\bigg[\frac{1}{\sqrt{s}}\bigg]=\frac{e^A_C}{\sqrt{s}}\sum_{i=0}^{|A|}\bigg(\frac{q^0}{s}\bigg)^i \bigg(\frac{1}{p^0}\bigg)^{|A|-i}{\mathcal P}_i\bigg(\frac{\p}{p^0},\frac{\q}{q^0}\bigg),
\end{align*}
where the multi-indices $B$ and $C$ are summed with the polynomials.
\end{lemma}
\begin{proof}
Note that the partial derivatives with respect to $p_a$ and $\p_a$ are related to each other as $\partial^a=e^a_b \hat{\partial}^b$, and for high order derivatives we have
\begin{align}
\partial^A=e^A_B\hat{\partial}^B,\label{relation}
\end{align}
where $e^A_B=e^{a_1}_{b_1}\cdots e^{a_m}_{b_m}$ for $A=(a_1,\cdots,a_m)$ and $B=(b_1,\cdots,b_m)$. From the proof of Lemma 5 of \cite{LeeNungesser172} we have in an orthonormal frame
\begin{align*}
\hat{\partial}^A \bigg[\frac{1}{p^0}\bigg]=\frac{1}{(p^0)^{|A|+1}}{\mathcal P}\bigg(\frac{\p}{p^0}\bigg),
\end{align*}
and the first estimate of the lemma follows from the relation \eqref{relation} with the fact that $|A|=|B|$. The second estimate is also obtained by the same argument, and this completes the proof. For more details we refer to Lemmas 5 and 6 of \cite{LeeNungesser172}.
\end{proof}
\begin{lemma}\label{lem_partial_1/n0+s}
For a multi-index $A\neq 0$, there exist polynomials ${\mathcal P}_i$ such that
\[
\partial^A\bigg[\frac{1}{n^0+\sqrt{s}}\bigg]=e^A_B\sum_{i=1}^{|A|}\frac{(q^0)^{|A|}}{(n^0+\sqrt{s})^{i+1}} {\mathcal P}_i\bigg(\frac{\p}{p^0},\frac{\q}{q^0}, \frac{1}{p^0}, \frac{1}{q^0},\frac{1}{\sqrt{s}}\bigg),
\]
where the multi-index $B$ is summed with the polynomials.
\end{lemma}
\begin{proof}
This lemma is also proved by the same argument as in the previous lemma. We apply the relation \eqref{relation} to the proof of Lemma 7 of \cite{LeeNungesser172} and obtain the desired result.
\end{proof}
\begin{lemma}\label{lem_p'}
Consider post-collision momenta $p_*'$ and $q_*'$ for given $p_*$ and $q_*$. For a multi-index $A\neq 0$, we have the following estimate:
\[
|\partial^Ap'_*|+|\partial^Aq'_*|\leq C\max_{a,b}|(e^{-1})^a_b|(\max_{c,d}|e^c_d|)^{|A|}(q^0)^{|A|+4},
\]
where the constant $C$ does not depend on $p_*$.
\end{lemma}
\begin{proof}
Note that the post-collision momentum is given in an orthonormal frame by $\p'_j=e^k_j p_k'$, which is explicitly written as
\[
\p'_j=\p_j+2\bigg(-q^0\frac{\n_a\omega^a}{\sqrt{s}}+\q_a\omega^a+\frac{\n_a\omega^a\n_b\q^b}{\sqrt{s}(n^0+\sqrt{s})}\bigg)
\bigg(\eta_{aj}\omega^a+\frac{\n_a\omega^a\n_j}{\sqrt{s}(n^0+\sqrt{s})}\bigg).
\]
We use the estimate (27) of \cite{LeeNungesser172}, where the estimate of high order derivatives of $\p'$ is given by
\[
|\hat{\partial}^A\p'|\leq C(q^0)^{|A|+4}
\]
for $|A|\geq 1$. We now apply the relation \eqref{relation} to the above. Since $p'_k=(e^{-1})^j_k\p'_j$, where $e^{-1}$ is the inverse of the matrix $e$ and $\partial^A p'_k=(e^{-1})^j_ke^A_B \hat{\partial}^B\p'_j$, we have
\[
|\partial^Ap'_k|\leq C\max_{a,b}|(e^{-1})^a_b|(\max_{c,d}|e^c_d|)^{|A|}(q^0)^{|A|+4}
\]
for each $k\in\{1,2,3\}$ and $|A|\geq 1$. This completes the proof of the estimate for $p'_*$, and the estimate of $q'_*$ is given by the same arguments.
\end{proof}
\section{Estimates for the Boltzmann equation}\label{sec_boltzmann}
In this section we study the Boltzmann equation for a given metric. Let us assume that a metric $^{(4)}g=-dt^2+g$ is given and the spatial metric $g$ satisfies the properties of {\sf (A)} below. We show that the Boltzmann equation admits global-in-time solutions for small initial data. The following are the assumptions on the spatial metric $g$.\bigskip
\noindent{\sf (A)} {\bf Assumptions on the spatial metric.}
Let $\gamma=(\Lambda/3)^{1/2}$. There exists a constant $c_0>0$ such that
\[
\frac{1}{c_0}e^{-2\gamma t}|p_*|^2\leq g^{ab}p_ap_b\leq c_0 e^{-2\gamma t}|p_*|^2,
\]
for any $p_*$. The Hubble variable $H$ and the scaled shear $F$ satisfy
\[
H=\gamma+O(e^{-2\gamma t}),\quad 0<F<\frac14.
\]
Each component of the metric satisfies $|g_{ab}|\leq Ce^{2\gamma t}$ and $|g^{ab}|\leq Ce^{-2\gamma t}$. Let $e^a_b$ be an orthonormal frame with the inverse $e^{-1}$. We assume that they satisfy $|e^a_b|\leq C e^{\gamma t}$ and $|(e^{-1})^a_b|\leq Ce^{-\gamma t}$.
\bigskip
\begin{lemma}\label{lem_monotone}
For each $p_*$, the component $p^0$ is monotonically decreasing in $t$, i.e. $\partial_t p^0<0$, and can also be estimated as
\[
|\partial_t p^0|\leq C\langle p_*\rangle,
\]
where $C$ is independent of $t$.
\end{lemma}
\begin{proof}
On the mass shell we have $p^0=(1+g^{ab}p_ap_b)^{1/2}$ and obtain
\[
\partial_t p^0=-\frac{k^{ab}p_ap_b}{p^0}=-\frac{\sigma^{ab}p_ap_b+Hg^{ab}p_ap_b}{p^0}.
\]
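To see this, note that differentiating the identity $g^{ac}g_{cb}=\delta^a_b$ and using the evolution equation \eqref{evolution1} give $\partial_tg^{ab}=-g^{ac}\dot{g}_{cd}g^{db}=-2k^{ab}$, so that
\begin{align*}
\partial_tp^0=\frac{(\partial_tg^{ab})p_ap_b}{2p^0}=-\frac{k^{ab}p_ap_b}{p^0}.
\end{align*}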
The quantity $\sigma^{ab}p_ap_b$ can be estimated as
\[
|\sigma^{ab}p_ap_b|\leq (\sigma^{ab}\sigma_{ab})^{\frac12}(g^{cd}p_cp_d)=2HF^{\frac12}(g^{ab}p_ap_b),
\]
which shows that
\[
\partial_tp^0\leq \frac{H (-1+2F^{\frac12})(g^{ab}p_ap_b)}{p^0}.
\]
The right side is negative by the assumption {\sf(A)}, and this shows that $p^0$ is monotonically decreasing in $t$. With the above estimate of $\sigma^{ab}p_ap_b$ we also have
\[
|\partial_tp^0|\leq H(1+2F^{\frac12})p^0.
\]
Since $p^0\leq C\langle p_*\rangle$ by the assumption {\sf (A)}, we obtain the second estimate, and this completes the proof.
\end{proof}
\begin{lemma}\label{lem_f1}
Let $f$ be a solution of the Boltzmann equation. Then, it satisfies the following estimate:
\[
\|f(t)\|^2_k\leq \|f(t_0)\|^2_k+C\sup_{\tau\in[t_0,t]}\|f(\tau)\|^3_k,
\]
where $k$ is a positive real number.
\end{lemma}
\begin{proof}
The proof of this lemma is almost the same as that of Lemma 9 of \cite{LeeNungesser172}. Multiplying the Boltzmann equation by $f$ and integrating over $[t_0,t]$, we obtain
\[
f^2(t,p_*)=f^2(t_0,p_*)+2\int_{t_0}^tf(\tau,p_*)Q(f,f)(\tau,p_*)d\tau.
\]
Note that the quantity $p^0(t)$ is monotonically decreasing in $t$ for each $p_*$ by Lemma \ref{lem_monotone}. We use this monotone property to estimate the above as follows:
\begin{align*}
&\langle p_*\rangle^{2k} e^{p^0(t)}f^2(t,p_*)\leq\langle p_*\rangle^{2k} e^{p^0(t_0)}f^2(t_0,p_*)\\
&\quad +C\int_{t_0}^t(\det g)^{-\frac12}\langle p_*\rangle^ke^{\frac12p^0(\tau)}f(\tau,p_*)\\
&\qquad\times\iint \frac{e^{-\frac12q^0(\tau)}}{p^0q^0\sqrt{s}}\langle p_*'\rangle^ke^{\frac12p'^0(\tau)}f(\tau,p_*')\langle q_*'\rangle^ke^{\frac12q'^0(\tau)}f(\tau,q_*')d\omega dq_* d\tau,
\end{align*}
where we ignored the loss term and used Lemma \ref{lem_pp'q'} with the energy conservation at time $\tau$:
\[
p'^0(\tau)+q'^0(\tau)=p^0(\tau)+q^0(\tau).
\]
Integrating the above with respect to $p_*$, we obtain the following estimate:
\begin{align}
\|f(t)\|_k^2&\leq\|f(t_0)\|^2_k+C\int_{t_0}^t(\det g)^{-\frac12}\int\langle p_*\rangle^ke^{\frac12p^0(\tau)}f(\tau,p_*)\nonumber\\
&\qquad\times\iint \frac{e^{-\frac12q^0(\tau)}}{p^0q^0\sqrt{s}}\langle p_*'\rangle^ke^{\frac12p'^0(\tau)}f(\tau,p_*')\langle q_*'\rangle^ke^{\frac12q'^0(\tau)}f(\tau,q_*')d\omega dq_* dp_*d\tau\allowdisplaybreaks\nonumber\\
&\leq\|f(t_0)\|^2_k+C\int_{t_0}^t(\det g)^{-\frac12}\bigg(\iiint\langle p_*\rangle^{2k}e^{p^0(\tau)}f^2(p_*)e^{-q^0(\tau)}d\omega dq_* dp_*\bigg)^{\frac12}\nonumber\\
&\qquad\times\bigg(\iiint \frac{1}{p^0q^0}\langle p_*'\rangle^{2k}e^{p'^0(\tau)}f^2(p_*')\langle q_*'\rangle^{2k}e^{q'^0(\tau)}f^2(q_*')d\omega dq_* dp_*\bigg)^{\frac12}d\tau\allowdisplaybreaks\nonumber\\
&\leq\|f(t_0)\|^2_k+C\int_{t_0}^t(\det g)^{-\frac12}\|f(\tau)\|_k\bigg(\int e^{-q^0(\tau)}dq_* \bigg)^{\frac12}\nonumber\\
&\qquad\times\bigg(\iiint \frac{1}{p^0q^0}\langle p_*\rangle^{2k}e^{p^0(\tau)}f^2(p_*)\langle q_*\rangle^{2k}e^{q^0(\tau)}f^2(q_*)d\omega dq_* dp_*\bigg)^{\frac12}d\tau\allowdisplaybreaks\nonumber\\
&\leq\|f(t_0)\|^2_k+C\int_{t_0}^t(\det g)^{-\frac14}\|f(\tau)\|_k^3d\tau,\label{est_f}
\end{align}
where we used $(p^0q^0)^{-1}dp_*dq_*=(p'^0q'^0)^{-1}dp_*'dq_*'$ and Lemma \ref{lem_int}. Since the quantity $(\det g)^{-1/4}$ is integrable, we obtain the desired result.
\end{proof}
\begin{lemma}\label{lem_f2}
Let $f$ be a solution of the Boltzmann equation. Then, it satisfies the following estimate:
\[
\|\partial^Af(t)\|^2_k\leq \|\partial^A f(t_0)\|^2_{k}+C\sup_{\tau\in[t_0,t]}\|f(\tau)\|_{k,N}^3,
\]
where $1\leq |A|\leq N$ is a multi-index and $k$ is a positive real number.
\end{lemma}
\begin{proof}
The proof of this lemma is also similar to that of Lemma 10 of \cite{LeeNungesser172}, and we briefly sketch it. For a multi-index $A\neq 0$, we apply $\partial^A$ to the Boltzmann equation, multiply the resulting equation by $\partial^A f$, and integrate it over $[t_0,t]$ to obtain
\begin{align*}
&(\partial^A f)^2(t,p_*)=(\partial^A f)^2(t_0,p_*)\\
&\quad +2\int_{t_0}^t(\det g)^{-\frac12}\partial^A f(\tau,p_*)\sum\iint \partial^{A_0}\bigg[\frac{1}{p^0q^0\sqrt{s}}\bigg] \partial^{A_1}\Big[f(p_*')\Big]\partial^{A_2}\Big[f(q_*')\Big]d\omega dq_*d\tau\\
&\quad -2\int_{t_0}^t(\det g)^{-\frac12}\partial^A f(\tau,p_*)\sum\iint \partial^{A_0}\bigg[\frac{1}{p^0q^0\sqrt{s}}\bigg] (\partial^{A_1}f)(p_*)f(q_*)d\omega dq_*d\tau,
\end{align*}
where the summations are taken for all the possible $A_0$, $A_1$, and $A_2$ satisfying $A=A_0+A_1+A_2$ and $A=A_0+A_1$, respectively. Note that the multi-index notation in this paper is different from the one used in \cite{LeeNungesser172}, and here $A=A_0+A_1$ means that the set $A$ is equal to the disjoint union $A_0\sqcup A_1$, and $A=A_0+A_1+A_2$ is understood in a similar way. Multiplying the above by $\langle p_*\rangle^{2k} e^{p^0(t)}$ and using the monotone property of $p^0(t)$ and Lemma \ref{lem_pp'q'}, we obtain the following:
\begin{align}
&\langle p_*\rangle^{2k} e^{p^0(t)}(\partial^A f)^2(t,p_*)\leq \langle p_*\rangle^{2k} e^{p^0(t_0)}(\partial^A f)^2(t_0,p_*)\nonumber\\
&\quad +C\sum\int_{t_0}^t(\det g)^{-\frac12}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\nonumber\\
&\qquad\times\iint \bigg|\partial^{A_0}\bigg[\frac{1}{p^0q^0\sqrt{s}}\bigg]\bigg| e^{-\frac12 q^0(\tau)}\langle p_*'\rangle^{k} e^{\frac12 p'^0(\tau)}\Big|\partial^{A_1}\Big[f(p_*')\Big]\Big|\langle q_*'\rangle^{k} e^{\frac12 q'^0(\tau)}\Big|\partial^{A_2}\Big[f(q_*')\Big]\Big|d\omega dq_*d\tau\allowdisplaybreaks\nonumber\\
&\quad +C\sum\int_{t_0}^t(\det g)^{-\frac12}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\nonumber\\
&\qquad\times\iint \bigg|\partial^{A_0}\bigg[\frac{1}{p^0q^0\sqrt{s}}\bigg]\bigg|e^{-\frac12 q^0(\tau)}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}| (\partial^{A_1}f)(p_*)|\langle q_*\rangle^{k} e^{\frac12 q^0(\tau)}f(q_*)d\omega dq_*d\tau.\label{est_partial_f1}
\end{align}
The partial derivatives in the integrands are estimated by Lemmas \ref{lem_partial_1/p0}, \ref{lem_partial_1/n0+s}, and \ref{lem_p'}. We first notice that under the assumption {\sf (A)} Lemma \ref{lem_p'} implies
\[
|\partial^Bp'_*|+|\partial^Bq'_*|\leq C(q^0)^{|B|+4}
\]
for any multi-index $|B|\geq 1$. Since the components $e^a_b$ are bounded by the assumption {\sf (A)}, Lemma \ref{lem_partial_1/p0} implies that
\[
\bigg|\partial^{A_0}\bigg[\frac{1}{p^0q^0\sqrt{s}}\bigg]\bigg|\leq \frac{C(q^0)^{|A_0|}}{p^0q^0}.
\]
The quantities $\partial^{A_1}[f(p_*')]$ and $\partial^{A_2}[f(q_*')]$ are estimated as in \cite{LeeNungesser172}. Applying Fa\`a di Bruno's formula and Lemma \ref{lem_p'} with the assumption {\sf (A)}, we obtain
\begin{align*}
\Big|\partial^{A_1}\Big[f(p_*')\Big]\Big|\leq C(q^0)^{5|A_1|}\sum |(\partial^{B}f)(p_*')|,
\end{align*}
where the summation is finite and taken over $1\leq|B|\leq |A_1|$. We obtain a similar estimate for $\partial^{A_2}[f(q_*')]$, and the inequality \eqref{est_partial_f1} is estimated as follows:
\begin{align*}
&\langle p_*\rangle^{2k} e^{p^0(t)}(\partial^A f)^2(t,p_*)\leq \langle p_*\rangle^{2k} e^{p^0(t_0)}(\partial^A f)^2(t_0,p_*)\nonumber\\
&\quad +C\sum\int_{t_0}^t(\det g)^{-\frac12}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\nonumber\\
&\qquad\times\iint \frac{(q^0)^{5|A|}}{p^0q^0} e^{-\frac12 q^0(\tau)}\langle p_*'\rangle^{k} e^{\frac12 p'^0(\tau)}|(\partial^{B_1}f)(p_*')|\langle q_*'\rangle^{k} e^{\frac12 q'^0(\tau)}|(\partial^{B_2}f)(q_*')|d\omega dq_*d\tau\allowdisplaybreaks\nonumber\\
&\quad +C\sum\int_{t_0}^t(\det g)^{-\frac12}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\nonumber\\
&\qquad\times\iint \frac{(q^0)^{|A_0|}}{p^0q^0}e^{-\frac12 q^0(\tau)}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}| (\partial^{A_1}f)(p_*)|\langle q_*\rangle^{k} e^{\frac12 q^0(\tau)}f(q_*)d\omega dq_*d\tau,
\end{align*}
where the summation of the second term is taken over some $B_1$ and $B_2$ satisfying $|B_1|+|B_2|\leq |A|$. Integrating the above with respect to $p_*$, we obtain
\begin{align*}
&\|\partial^A f(t)\|^2_k\leq \|\partial^Af(t_0)\|^2_k
+C\sum\int_{t_0}^t(\det g)^{-\frac12}\int\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\\
&\qquad\times\iint \frac{(q^0)^{5|A|}}{p^0q^0} e^{-\frac12 q^0(\tau)}\langle p_*'\rangle^{k} e^{\frac12 p'^0(\tau)}|(\partial^{B_1}f)(p_*')|\langle q_*'\rangle^{k} e^{\frac12 q'^0(\tau)}|(\partial^{B_2}f)(q_*')|d\omega dq_*dp_*d\tau\allowdisplaybreaks\\
&\quad +C\sum\int_{t_0}^t(\det g)^{-\frac12}\int\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}|\partial^A f(\tau,p_*)|\nonumber\\
&\qquad\times\iint \frac{(q^0)^{|A_0|}}{p^0q^0}e^{-\frac12 q^0(\tau)}\langle p_*\rangle^{k} e^{\frac12 p^0(\tau)}| (\partial^{A_1}f)(p_*)|\langle q_*\rangle^{k} e^{\frac12 q^0(\tau)}f(q_*)d\omega dq_*dp_*d\tau\allowdisplaybreaks\\
&\leq \|\partial^Af(t_0)\|^2_k
+C\sum\int_{t_0}^t(\det g)^{-\frac12}\bigg(\iiint \langle p_*\rangle^{2k} e^{p^0(\tau)}|\partial^A f(p_*)|^2(q^0)^{10|A|}e^{-q^0(\tau)}d\omega dq_* dp_*\bigg)^{\frac12}\\
&\qquad\times\bigg(\iiint \frac{1}{p^0q^0} \langle p_*'\rangle^{2k} e^{p'^0(\tau)}|(\partial^{B_1}f)(p_*')|^2\langle q_*'\rangle^{2k} e^{q'^0(\tau)}|(\partial^{B_2}f)(q_*')|^2d\omega dq_*dp_*\bigg)^{\frac12}d\tau\allowdisplaybreaks\\
&\quad+C\sum\int_{t_0}^t(\det g)^{-\frac12}\bigg(\iiint \langle p_*\rangle^{2k} e^{p^0(\tau)}|\partial^A f(p_*)|^2(q^0)^{2|A_0|}e^{-q^0(\tau)}d\omega dq_* dp_*\bigg)^{\frac12}\\
&\qquad\times\bigg(\iiint \frac{1}{p^0q^0} \langle p_*\rangle^{2k} e^{p^0(\tau)}|(\partial^{A_1}f)(p_*)|^2\langle q_*\rangle^{2k} e^{q^0(\tau)}f^2(q_*)d\omega dq_*dp_*\bigg)^{\frac12}d\tau\allowdisplaybreaks\\
&\leq \|\partial^Af(t_0)\|^2_k
+C\sum\int_{t_0}^t(\det g)^{-\frac12}\|\partial^Af(\tau)\|_k\bigg(\int (q^0)^{10|A|}e^{-q^0(\tau)}dq_*\bigg)^{\frac12}\\
&\qquad\times\bigg(\iiint \frac{1}{p^0q^0} \langle p_*\rangle^{2k} e^{p^0(\tau)}|(\partial^{B_1}f)(p_*)|^2\langle q_*\rangle^{2k} e^{q^0(\tau)}|(\partial^{B_2}f)(q_*)|^2d\omega dq_*dp_*\bigg)^{\frac12}d\tau\allowdisplaybreaks\\
&\quad+C\sum\int_{t_0}^t(\det g)^{-\frac12}\|\partial^Af(\tau)\|_k\bigg(\int (q^0)^{2|A_0|}e^{-q^0(\tau)}dq_* \bigg)^{\frac12}\|\partial^{A_1}f(\tau)\|_k\|f(\tau)\|_kd\tau\allowdisplaybreaks\\
&\leq \|\partial^Af(t_0)\|^2_k
+C\sum\int_{t_0}^t(\det g)^{-\frac14}\|\partial^Af(\tau)\|_k\|\partial^{B_1}f(\tau)\|_k\|\partial^{B_2}f(\tau)\|_kd\tau\allowdisplaybreaks\\
&\quad+C\sum\int_{t_0}^t(\det g)^{-\frac14}\|\partial^Af(\tau)\|_k\|\partial^{A_1}f(\tau)\|_k\|f(\tau)\|_kd\tau,
\end{align*}
where we used $(p^0q^0)^{-1}dp_*dq_*=(p'^0q'^0)^{-1}dp_*'dq_*'$ and Lemma \ref{lem_int}. We obtain
\begin{align}\label{est_partial_f2}
\|\partial^A f(t)\|^2_k\leq \|\partial^Af(t_0)\|^2_k+C\int_{t_0}^t(\det g)^{-\frac14}\|f(\tau)\|^3_{k,N}d\tau,
\end{align}
and the integrability of $(\det g)^{-1/4}$ gives the desired result.
\end{proof}
\subsection{Global-in-time existence for the Boltzmann equation}\label{sec_boltzmann_existence}
For a given metric $g$ satisfying the assumption {\sf (A)}, the local-in-time existence of classical solutions to the Boltzmann equation is obtained by a standard iteration method. The estimates of Lemmas \ref{lem_f1} and \ref{lem_f2} show that small solutions are bounded globally in time such that
\[
\|f(t)\|^2_{k,N}\leq \|f(t_0)\|^2_{k,N}+C\sup_{\tau\in[t_0,t]}\|f(\tau)\|_{k,N}^3.
\]
Hence, we conclude that there exists a small $\varepsilon>0$ such that if initial data is given as $\|f(t_0)\|^2_{k,N}<\varepsilon$, then the corresponding solution exists globally in time and is bounded such that
\begin{align}\label{norm_bounded}
\sup_{t\in[t_0,\infty)}\|f(t)\|^2_{k,N}\leq C\varepsilon.
\end{align}
To ensure that $f$ is differentiable with respect to $t$ we consider again the estimates \eqref{est_f} and \eqref{est_partial_f2}. For a multi-index $A$ we have
\[
\partial_t\Big[\langle p_*\rangle^{2k}e^{p^0}(\partial^Af)^2\Big]=\langle p_*\rangle^{2k}(\partial_tp^0)e^{p^0}(\partial^Af)^2+2\langle p_*\rangle^{2k}e^{p^0}(\partial^Af)(\partial^AQ)(f,f).
\]
We integrate the above with respect to $p_*$ and use the estimate $|\partial_tp^0|\leq C\langle p_*\rangle$ of Lemma \ref{lem_monotone} to estimate the first quantity on the right side. The estimate of the second quantity is the same as in Lemma \ref{lem_f2}. Collecting all $|A|\leq N$, we obtain the following:
\[
\bigg|\frac{d}{dt}\|f(t)\|^2_{k,N}\bigg|\leq C\|f(t)\|^2_{k+\frac12,N}+C(\det g)^{-\frac14}\|f(t)\|^3_{k,N}.
\]
Since \eqref{norm_bounded} holds for any $k$, the quantity $\|f(t)\|^2_{k+1/2,N}$ is also bounded globally in time. The right hand side of the above inequality is thus bounded, which shows that the solution $f$ is continuous and also differentiable with respect to $t$ by the equation \eqref{boltzmann}. Uniqueness is easily proved by taking two solutions $f_1$ and $f_2$ with $f_1(t_0)=f_2(t_0)$ and applying Gr{\"o}nwall's inequality. We obtain the following result.
\begin{prop}\label{prop_boltzmann}
Suppose that a spatial metric $g$ satisfies the assumption {\sf (A)}. Then, there exists a small $\varepsilon>0$ such that if initial data is given as $\|f(t_0)\|_{k+1/2,N}<\varepsilon$ for $N\geq 3$, then there exists a unique classical solution of the Boltzmann equation \eqref{boltzmann} which exists globally in time and satisfies
\[
\sup_{t\in[t_0,\infty)}\|f(t)\|^2_{k,N}\leq C\varepsilon.
\]
\end{prop}
\section{Proof of the main theorem and outlook}
We can now prove global-in-time existence of classical solutions to the Einstein-Boltzmann
system \eqref{evolution1}--\eqref{constraint2} with a positive cosmological constant $\Lambda$. Suppose that initial data $g_{ab}(t_0)$, $k_{ab}(t_0)$, and $f(t_0)$ are given such that $H(t_0)<(7/6)^{1/2}\gamma$, and define an iteration for $\{g_n\}$, $\{k_n\}$, and $\{f_n\}$ as follows. Let $(g_0)_{ab}(t)=e^{2\gamma t} \bar{g}_{ab}(t_0)$ and $(k_0)_{ab}(t)=k_{ab}(t_0)$. Choose an orthonormal frame $(e_0)^a_b$ satisfying $(g_0)_{ab}=(e_0)^c_a(e_0)^d_b\eta_{cd}$, which is given by $(e_0)^a_b(t)=e^{\gamma t} \bar{e}^a_b(t_0)$, and let $(e_0^{-1})^a_b$ be the inverse of $(e_0)^a_b$. Then, $g_0$ satisfies the assumption {\sf (A)} of Section \ref{sec_boltzmann}. By Proposition \ref{prop_boltzmann}, there exists a small positive constant $\varepsilon$ such that if $\|f(t_0)\|_{k+1/2,N}<\varepsilon$, then there exists a unique classical solution $f_0$, which is the solution of the Boltzmann equation in a given spacetime with the metric $g_0$ and satisfies $\sup_{t\in[t_0,\infty)}\|f_0(t)\|^2_{k,N}\leq C\varepsilon$. Now, suppose that $f_n$ is given such that $\sup_{t\in[t_0,\infty)}\|f_n(t)\|^2_{k,N}\leq C\varepsilon$ with $\|f(t_0)\|_{k+1/2,N}<\varepsilon$. Then, we have
\[
\hat{f}_n(t,\hat{p})\leq C \varepsilon (1+ e^{2\gamma t} |\hat{p}|^2)^{-\frac12 k} e^{-\frac12 p^0},
\]
and applying Proposition \ref{prop_einstein} we obtain $g_{n+1}$ and $k_{n+1}$, the solutions of the ODEs that result when $g$ and $k$ in \eqref{evolution1}--\eqref{evolution2} are replaced by $g_{n+1}$ and $k_{n+1}$, respectively, and $\rho$ and $S_{ab}$ are constructed from $f_n$. It is clear that $g_{n+1}$ and $k_{n+1}$ satisfy the assumption {\sf (A)}, and applying Proposition \ref{prop_boltzmann} again we obtain $f_{n+1}$, and this completes the iteration. The estimates of Propositions \ref{prop_einstein} and \ref{prop_boltzmann} show that the constructed quantities are uniformly bounded with the desired asymptotic behaviour, and taking the limit, up to a subsequence, we find classical functions $g$, $k$, and $f$. We have seen from Lemmas \ref{lem_F} and \ref{lem_monotone} that $p^0$ decays monotonically. As a consequence $p^0$ is bounded from above, which gives us future geodesic completeness. For more details we refer to \cite{LeeNungesser171}.
We thus have obtained the global existence and asymptotic behaviour of solutions to the Einstein-Boltzmann system, which extend the results of \cite{LeeNungesser171, LeeNungesser172}. A natural generalisation would be to consider higher Bianchi types. The isotropic spacetime with spatially flat topology and Bianchi I spacetimes are in fact the simplest in the sense that the Vlasov part is particularly simple. Thus, it would be of interest to extend \cite{LeeNungesser171, LeeNungesser172} to an FLRW spacetime with negative curvature, with or without a cosmological constant. The scattering kernel considered here is physically well-motivated; however, for simplicity we assumed that it does not depend on the scattering angle. A generalisation would thus be to remove this restriction. Similarly, it is desirable to remove the smallness assumption and obtain a large data result as in \cite{ND}. Finally, based on the work of Tod \cite{Tod}, it is of interest to study this system with singular initial data, and these topics will be our future projects.
\section*{Acknowledgements}
H. Lee has been supported by the TJ Park Science Fellowship of POSCO TJ Park Foundation. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT \& Future Planning (NRF-2015R1C1A1A01055216). E.N. is currently funded by a Juan de la Cierva research fellowship from the Spanish government and this work has been partially supported by ICMAT Severo Ochoa project SEV-2015-0554 (MINECO).
|
1,314,259,994,802 | arxiv | \section{Introduction}
For image super-resolution (SR), receptive field of a convolutional network determines the amount of contextual information that can be exploited to infer missing high-frequency components. For example, if there exists a pattern with smoothed edges contained in a receptive field, it is plausible that the pattern is recognized and edges are appropriately sharpened. As SR is an ill-posed inverse problem, collecting and analyzing more neighbor pixels can possibly give more clues on what may be lost by downsampling.
Deep convolutional networks (DCN) succeeding in various computer vision tasks often use very large receptive fields ($224\times 224$ is common in ImageNet classification \cite{krizhevsky2012imagenet, simonyan2015very}). Among many approaches to widen the receptive field, increasing network depth is one possible way: a convolutional (conv.) layer with filter size larger than $1\times 1$ or a pooling (pool.) layer that reduces the dimension of the intermediate representation can be used. Both approaches have drawbacks: a conv. layer introduces more parameters and a pool. layer typically discards some pixel-wise information.
For image restoration problems such as super-resolution and denoising, image details are very important. Therefore, most deep-learning approaches for such problems do not use pooling. Increasing depth by adding a new weight layer basically introduces more parameters. Two problems can arise. First, overfitting is highly likely. More data are now required. Second, the model becomes too huge to be stored and retrieved.
To resolve these issues, we use a deeply-recursive convolutional network (DRCN). DRCN repeatedly applies the same convolutional layer as many times as desired. The number of parameters does not increase as more recursions are performed. Our network has a receptive field of 41 by 41, which is relatively large compared to SRCNN \cite{dong2014image} (13 by 13). While DRCN has good properties, we find that DRCN optimized with the widely-used stochastic gradient descent method does not easily converge. This is due to exploding/vanishing gradients \cite{bengio1994learning}. Learning long-range dependencies between pixels with a single weight layer is very difficult.
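The receptive-field arithmetic behind these numbers is easy to check (an illustrative calculation only: with stride 1 and no pooling, each $k\times k$ convolution widens the receptive field by $k-1$ pixels):

```python
def receptive_field(kernel_sizes):
    """Receptive field of stacked stride-1 convolutions without pooling.

    Each k x k convolution widens the receptive field by k - 1 pixels.
    """
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

print(receptive_field([9, 1, 5]))   # SRCNN's 9-1-5 filter pipeline
print(receptive_field([3] * 20))    # 20 recursively applied 3x3 convs
```

Twenty stacked $3\times 3$ convolutions give $1 + 20\cdot 2 = 41$, while SRCNN's 9-1-5 filter pipeline gives 13.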
We propose two approaches to ease the difficulty of training (Figure \ref{fig:recursive_supervision}(a)). First, all recursions are supervised. Feature maps after each recursion are used to reconstruct the target high-resolution image (HR). Reconstruction method (layers dedicated to reconstruction) is the same for all recursions. As each recursion leads to a different HR prediction, we combine all predictions resulting from different levels of recursions to deliver a more accurate final prediction. The second proposal is to use a skip-connection from input to the reconstruction layer. In SR, a low-resolution image (input) and a high-resolution image (output) share the same information to a large extent. Exact copy of input, however, is likely to be attenuated during many forward passes. We explicitly connect the input to the layers for output reconstruction. This is particularly effective when input and output are highly correlated.
\textbf{Contributions} In summary, we propose an image super-resolution method deeply recursive in nature. It utilizes a very large context compared to previous SR methods with only a single recursive layer. We improve the simple recursive network in two ways: recursive-supervision and skip-connection. Our method demonstrates state-of-the-art performance in common benchmarks.
\section{Related Work}
\subsection{Single-Image Super-Resolution}
We apply DRCN to single-image super-resolution (SR) \cite{Irani1991, freeman2000learning,glasner2009super}. Many SR methods have been proposed in the computer vision community. Early methods use very fast interpolations but yield poor results. Some of the more powerful methods utilize statistical image priors \cite{sun2008image,Kim2010} or internal patch recurrence \cite{glasner2009super, Huang-CVPR-2015}. Recently, sophisticated learning methods have been widely used to model a mapping from LR to HR patches. Many methods have paid attention to find better regression functions from LR to HR images. This is achieved with various techniques: neighbor embedding \cite{chang2004super,bevilacqua2012}, sparse coding \cite{yang2010image,zeyde2012single,Timofte2013,Timofte}, convolutional neural network (CNN) \cite{dong2014image} and random forest \cite{schulter2015fast}.
Among several recent learning-based successes, convolutional neural network (SRCNN) \cite{dong2014image} demonstrated the feasibility of an end-to-end approach to SR. One possibility to improve SRCNN is to simply stack more weight layers as many times as possible. However, this significantly increases the number of parameters and requires more data to prevent overfitting. In this work, we seek to design a convolutional network that models long-range pixel dependencies with limited capacity. Our network recursively widens the receptive field without increasing model capacity.
\subsection{Recursive Neural Network in Computer Vision}
Recursive neural networks, suitable for temporal and sequential data, have seen limited use on algorithms operating on a single static image. Socher et al. \cite{socher2012convolutional} used a convolutional network in a separate stage to first learn features on RGB-Depth data, prior to hierarchical merging. In these models, the input dimension is twice that of the output and recursive convolutions are applied only two times. Similar dimension reduction occurs in the recurrent convolutional neural networks used for semantic segmentation \cite{pinheiro2014recurrent}. As SR methods predict full-sized images, dimension reduction is not allowed.
In Eigen et al. \cite{Eigen2014}, recursive layers have the same input and output dimension, but recursive convolutions resulted in worse performances than a single convolution due to overfitting. To overcome overfitting, Liang and Hu \cite{Liang_2015_CVPR} uses a recurrent layer that takes feed-forward inputs into all unfolded layers. They show that performance increases up to three convolutions. Their network structure, designed for object recognition, is the same as the existing CNN architectures.
Our network is similar to the above in the sense that recursive or recurrent layers are used with convolutions. We further increase the recursion depth and demonstrate that very deep recursions can significantly boost the performance for super-resolution. We apply the same convolution up to 16 times (the previous maximum is three).
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{figs/f2}
\caption {Unfolding inference network. \textbf{Left}: A recursive layer \textbf{Right}: Unfolded structure. The same filter W is applied to feature maps recursively. Our model can utilize very large context without adding new weight parameters. }
\label{fig:inference_network}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{figs/f3}
\caption{(a): Our final (advanced) model with recursive-supervision and skip-connection. The reconstruction network is shared for recursive predictions. We use all predictions from the intermediate recursion to obtain the final output. (b): Applying deep-supervision \cite{lee2014deeply} to our basic model. Unlike in (a), the model in (b) uses different reconstruction networks for recursions and more parameters are used. (c): An example of expanded structure of (a) without parameter sharing (no recursion). The number of weight parameters is proportional to the depth squared. }
\label{fig:recursive_supervision}
\end{center}
\end{figure*}
\section{Proposed Method}
\subsection{Basic Model}
Our first model, outlined in Figure \ref{fig:overview}, consists of three sub-networks: embedding, inference and reconstruction networks. The embedding net is used to represent the given image as feature maps ready for inference. Next, the inference net solves the task. Once inference is done, final feature maps in the inference net are fed into the reconstruction net to generate the output image.
The \textbf{embedding net} takes the input image (grayscale or RGB) and represents it as a set of feature maps. The intermediate representation used to pass information to the inference net largely depends on how the inference net internally represents its feature maps in its hidden layers. Learning this representation is done end-to-end altogether with learning other sub-networks. The \textbf{inference net} is the main component that solves the task of super-resolution. Analyzing a large image region is done by a single recursive layer. Each recursion applies the same convolution followed by a rectified linear unit (Figure \ref{fig:inference_network}). With convolution filters larger than $1\times 1$, the receptive field is widened with every recursion. While feature maps from the final application of the recursive layer represent the high-resolution image, transforming them (multi-channel) back into the original image space (1 or 3-channel) is necessary. This is done by the \textbf{reconstruction net}.
We have a single hidden layer for each sub-net. Only the layer for the inference net is recursive. The other sub-nets are vastly similar to standard multilayer perceptrons (MLP) with a single hidden layer. For an MLP, full connection of $F$ neurons is equivalent to a convolution with $1\times 1\times F \times F$. In our sub-nets, we use $3\times 3\times F \times F$ filters. For the embedding net, we use $3\times 3$ filters because image gradients are more informative than raw intensities for super-resolution. For the inference net, $3\times 3$ convolutions imply that hidden states are passed to adjacent pixels only. The reconstruction net also takes direct neighbors into account.
\textbf{Mathematical Formulation} The network takes an interpolated input image (to the desired size) as input ${\bf x}$ and predicts the target image ${\bf y}$ as in SRCNN \cite{dong2014image}. Our goal is to learn a model $f$ that predicts values $\mathbf{\hat{y}}=f(\mathbf{x})$, where $\mathbf{\hat{y}}$ is its estimate of ground truth output $\mathbf{y}$. Let $f_1, f_2, f_3$ denote sub-net functions: embedding, inference and reconstruction, respectively. Our model is the composition of three functions: $f({\bf x}) = f_3(f_2 (f_1({\bf x}))).$
Embedding net $f_1({\bf x})$ takes the input vector ${\bf x}$ and computes the matrix output $H_0$, which is an input to the inference net $f_2$. Hidden layer values are denoted by $H_{-1}$. The formula for embedding net is as follows:
\begin{align}
H_{-1} &= max(0, W_{-1}*{\bf x} + b_{-1})\\
H_0 &= max(0, W_{0}*H_{-1} + b_0)\\
f_1({\bf x}) &= H_0,
\end{align}
where the operator $*$ denotes a convolution and $max(0,\cdot)$ corresponds to a ReLU. Weight and bias matrices are $W_{-1},W_0$ and $b_{-1},b_0$.
Inference net $f_2$ takes the input matrix $H_0$ and computes the matrix output $H_{D}$. Here, we use the same weight and bias matrices $W$ and $b$ for all operations. Let $g$ denote the function modeled by a single recursion of the recursive layer: $g(H)=max(0,W*H+b)$. The recurrence relation is
\begin{equation}
H_d = g(H_{d-1}) = max(0,W*H_{d-1}+b),
\end{equation}
for $d = 1, ..., D$.
Inference net $f_2$ is equivalent to the composition of the same elementary function $g$:
\begin{equation}
f_2(H) = (g \circ g \circ \cdots \circ g)(H) = g^{D}(H),
\end{equation}
where the operator $\circ$ denotes function composition and $g^{d}$ denotes the $d$-fold composition of $g$.
Reconstruction net $f_3$ takes the input hidden state $H_D$ and outputs the target image (high-resolution). Roughly speaking, reconstruction net is the inverse operation of embedding net. The formula is as follows:
\begin{align}
H_{D+1} &= max(0, W_{D+1}*H_D + b_{D+1})\\
\hat{{\bf y}} &= max(0, W_{D+2}*H_{D+1} + b_{D+2})\\
f_3(H) &= \hat{{\bf y}}.
\end{align}
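To make the data flow through $f_3(f_2(f_1({\bf x})))$ concrete, here is a toy NumPy sketch of the three sub-nets. It is an illustrative sketch only: it replaces the $3\times 3$ filters with $1\times 1$ convolutions (per-pixel matrix multiplies), omits the biases, and uses arbitrary random weights; the key point is that the inference net reuses \emph{one} weight matrix for all $D$ recursions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

F, D = 8, 16                                    # feature maps, recursions
rng = np.random.default_rng(0)
W_embed = [rng.standard_normal((F, 1)) * 0.1,   # f1: two weight layers
           rng.standard_normal((F, F)) * 0.1]
W_rec = rng.standard_normal((F, F)) * 0.1       # f2: ONE shared weight
W_recon = [rng.standard_normal((F, F)) * 0.1,   # f3: two weight layers
           rng.standard_normal((1, F)) * 0.1]

def forward(x):                                 # x: (1, n) flattened pixels
    h = relu(W_embed[1] @ relu(W_embed[0] @ x))     # embedding net f1
    for _ in range(D):                              # inference net f2 = g^D
        h = relu(W_rec @ h)                         # same W every recursion
    h = relu(W_recon[0] @ h)                        # reconstruction net f3
    return relu(W_recon[1] @ h)

y = forward(np.abs(rng.standard_normal((1, 25))))
```

Note that no new parameters appear as $D$ grows; only the loop count changes.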
\textbf{Model Properties} Now we have all components for our model. The recursive model has pros and cons. While the recursive model is simple and powerful, we find training a deeply-recursive network very difficult. This is in accordance with the limited success of previous methods using at most three recursions so far \cite{Liang_2015_CVPR}. Among many reasons, two severe problems are \textit{vanishing} and \textit{exploding gradients} \cite{bengio1994learning, pascanu2013difficulty}.
\textit{Exploding gradients} refer to the large increase in the norm
of the gradient during training. Such events are due to
the multiplicative nature of chained gradients. Long term components can grow exponentially for deep recursions. The
\textit{vanishing gradients} problem refers to the opposite behavior: long term components approach the zero vector exponentially fast. Due to this, learning the relation between distant pixels is very hard. Another known issue is that storing an exact copy of information through many recursions is not easy. In SR, the output is vastly similar to the input and the recursive layer needs to keep an exact copy of the input image for many recursions. These issues are also observed when we train our basic recursive model; we did not succeed in training a deeply-recursive network.
In addition to gradient problems, there exists an issue with finding the optimal number of recursions. If recursions are too deep for a given task, we need to reduce the number of recursions. Finding the optimal number requires training many networks with different recursion depths.
\subsection{Advanced Model}
\textbf{Recursive-Supervision} To resolve the gradient and optimal recursion issues, we propose an improved model. We supervise all recursions in order to alleviate the effect of vanishing/exploding gradients. As we have assumed that the same representation can be used again and again during convolutions in the inference net, the same reconstruction net is used to predict HR images for all recursions. Our reconstruction net now outputs $D$ predictions and all predictions are simultaneously supervised during training (Figure \ref{fig:recursive_supervision} (a)). We use all $D$ intermediate predictions to compute the final output. All predictions are averaged during testing. The optimal weights are automatically learned during training.
A similar but different concept of supervising intermediate layers for a convolutional network is used in Lee et al. \cite{lee2014deeply}. Their method simultaneously minimizes classification error while improving the directness and transparency of the hidden layer learning process. There are two significant differences between our recursive-supervision and the deep-supervision proposed in Lee et al. \cite{lee2014deeply}. They associate a unique classifier with each hidden layer. For each additional layer, a new classifier has to be introduced, as well as new parameters. If this approach were used, our modified network would resemble that of Figure \ref{fig:recursive_supervision}(b). We would then need $D$ different reconstruction networks. This is against our original purpose of using recursive networks, which is to avoid introducing new parameters while stacking more layers. In addition, using different reconstruction nets no longer effectively regularizes the network. The second difference is that Lee et al. \cite{lee2014deeply} discard all intermediate classifiers during testing. In our method, however, an ensemble of all intermediate predictions significantly boosts the performance. The final output from the ensemble is also supervised.
Our recursive-supervision naturally eases the difficulty of training recursive networks. Backpropagation goes through a small number of layers if supervising signal goes directly from loss layer to early recursion. Summing all gradients backpropagated from different prediction losses gives a smoothing effect. The adversarial effect of vanishing/exploding gradients along one backpropagation path is alleviated.
Moreover, the importance of picking the optimal number of recursions is reduced as our supervision enables utilizing predictions from all intermediate layers. If recursions are too deep for the given task, we expect the weight for late predictions to be low while early predictions receive high weights.
By looking at weights of predictions, we can figure out the marginal gain from additional recursions.
We present an expanded CNN structure of our model for illustration purposes in Figure \ref{fig:recursive_supervision}(c). If parameters are not allowed to be shared and CNN chains vary their depths, the number of free parameters grows fast (quadratically).
\textbf{Skip-Connection} Now we describe our second extension: skip-connection. For SR, input and output images are highly correlated. Carrying most if not all of input values until the end of the network is inevitable but very inefficient. Due to gradient problems, exactly learning a simple linear relation between input and output is very difficult if many recursions exist in between them.
We add a layer skip \cite{bishop2006pattern} from input to the reconstruction net. Adding layer skips is successfully used for a semantic segmentation network \cite{long2014fully} and we employ a similar idea. Now input image is directly fed into the reconstruction net whenever it is used during recursions. Our skip-connection has two advantages. First, network capacity to store the input signal during recursions is saved. Second, the exact copy of input signal can be used during target prediction.
Our skip-connection is simple yet very effective. In super-resolution, LR and HR images are vastly similar. In most regions, differences are zero and only small number of locations have non-zero values. For this reason, several super-resolution methods \cite{Timofte2013, Timofte, bevilacqua2012,bevilacqua2013super} predict image details only. Similarly, we find that this domain-specific knowledge significantly improves our learning procedure.
\textbf{Mathematical Formulation} Each intermediate prediction under recursive-supervision (Figure \ref{fig:recursive_supervision}(a)) is
\begin{equation}
\hat{{\bf y}}_{d} = f_3({\bf x}, g^{(d)}(f_1({\bf x}))),
\end{equation}
for $d=1,2,\dots,D$, where $f_3$ now takes two inputs, one from skip-connection. Reconstruction net with skip-connection can take various functional forms. For example, input can be concatenated to the feature maps $H_d$. As the input is an interpolated input image (roughly speaking, $\hat{\bf y} \approx {\bf x}$), we find $f_3({\bf x}, H_d) = {\bf x} + f_3(H_d)$ is enough for our purpose. More sophisticated functions for merging two inputs to $f_3$ will be explored in the future.
Now, the final output is the weighted average of all intermediate predictions:
\begin{equation}
\hat{{\bf y}} = \sum_{d=1}^{D} w_d \cdot \hat{{\bf y}}_d,
\end{equation}
where $w_d$ denotes the weights of predictions reconstructed from each intermediate hidden state during recursion. These weights are learned during training.
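A minimal sketch of the recursive predictions with skip-connection and their weighted ensemble. This is illustrative only: uniform weights stand in for the learned $w_d$, and random residual maps stand in for the reconstruction-net outputs $f_3(H_d)$.

```python
import numpy as np

# Toy sketch: every recursion depth d yields its own prediction
# y_hat_d = x + r_d, where the skip-connection adds the interpolated
# input x back and r_d stands in for the reconstruction-net output.
rng = np.random.default_rng(1)
D = 4
x = rng.random((8, 8))                          # interpolated LR input
residuals = [0.01 * rng.standard_normal((8, 8)) for _ in range(D)]
preds = [x + r for r in residuals]              # f3(x, H_d) = x + f3(H_d)

w = np.ones(D) / D                              # stand-in for learned w_d
final = sum(wd * p for wd, p in zip(w, preds))  # weighted ensemble output
```

With uniform weights the ensemble reduces to a plain average; during training the weights are free parameters optimized together with the rest of the network.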
\begin{table*}
\begin{center}
\setlength{\tabcolsep}{2pt}
\small
\begin{tabular}{ | c | c | c | c | c | c | c | c | }
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Scale} & Bicubic & A+ \cite{Timofte} & SRCNN \cite{dong2014image} & RFL \cite{schulter2015fast} & SelfEx \cite{Huang-CVPR-2015} & DRCN (Ours)\\
& & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM\\
\hline
\hline
\multirow{3}{*}{Set5} & $\times$2 & 33.66/0.9299 & 36.54/{\color{blue}0.9544} & {\color{blue}36.66}/0.9542 & 36.54/0.9537 & 36.49/0.9537 & {\color{red}37.63}/{\color{red}0.9588}\\
& $\times$3 & 30.39/0.8682 & 32.58/0.9088 & {\color{blue}32.75}/0.9090 & 32.43/0.9057 & 32.58/{\color{blue}0.9093} & {\color{red}33.82}/{\color{red}0.9226}\\
& $\times$4 & 28.42/0.8104 & 30.28/0.8603 & {\color{blue}30.48}/{\color{blue}0.8628} & 30.14/0.8548 & 30.31/0.8619 & {\color{red}31.53}/{\color{red}0.8854}\\
\hline
\hline
\multirow{3}{*}{Set14} & $\times$2 & 30.24/0.8688 & 32.28/0.9056 & {\color{blue}32.42}/{\color{blue}0.9063} & 32.26/0.9040 & 32.22/0.9034 & {\color{red}33.04}/{\color{red}0.9118}\\
& $\times$3 & 27.55/0.7742 & 29.13/0.8188 & {\color{blue}29.28}/{\color{blue}0.8209} & 29.05/0.8164 & 29.16/0.8196 & {\color{red}29.76}/{\color{red}0.8311}\\
& $\times$4 & 26.00/0.7027 & 27.32/0.7491 & {\color{blue}27.49}/0.7503 & 27.24/0.7451 & 27.40/{\color{blue}0.7518} & {\color{red}28.02}/{\color{red}0.7670}\\
\hline
\hline
\multirow{3}{*}{B100} & $\times$2 & 29.56/0.8431 & 31.21/0.8863 & {\color{blue}31.36}/{\color{blue}0.8879} & 31.16/0.8840 & 31.18/0.8855 & {\color{red}31.85}/{\color{red}0.8942}\\
& $\times$3 & 27.21/0.7385 & 28.29/0.7835 & {\color{blue}28.41}/{\color{blue}0.7863} & 28.22/0.7806 & 28.29/0.7840 & {\color{red}28.80}/{\color{red}0.7963}\\
& $\times$4 & 25.96/0.6675 & 26.82/0.7087 & {\color{blue}26.90}/0.7101 & 26.75/0.7054 & 26.84/{\color{blue}0.7106} & {\color{red}27.23}/{\color{red}0.7233}\\
\hline
\hline
\multirow{3}{*}{Urban100} & $\times$2 & 26.88/0.8403 & 29.20/0.8938 & 29.50/0.8946 & 29.11/0.8904 & {\color{blue}29.54}/{\color{blue}0.8967} & {\color{red}30.75}/{\color{red}0.9133}\\
& $\times$3 & 24.46/0.7349 & 26.03/0.7973 & 26.24/0.7989 & 25.86/0.7900 & {\color{blue}26.44}/{\color{blue}0.8088} & {\color{red}27.15}/{\color{red}0.8276}\\
& $\times$4 & 23.14/0.6577 & 24.32/0.7183 & 24.52/0.7221 & 24.19/0.7096 & {\color{blue}24.79}/{\color{blue}0.7374} & {\color{red}25.14}/{\color{red}0.7510}\\
\hline
\end{tabular}
\caption{Benchmark results. Average PSNR/SSIMs for scale factors $\times$2, $\times$3 and $\times$4 on datasets Set5, Set14, B100 and Urban100. {\color{red}Red color} indicates the best performance and {\color{blue}blue color} indicates the second best.}
\label{tbl:benchmark}
\end{center}
\end{table*}
\begin{figure*}
\begin{adjustwidth}{0.5cm}{0.5cm}
\begin{center}
\small
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ c c c c c c }
{\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_HR.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_A+.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_SRCNN.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_RFL.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_SelfEx.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{img082_for_figDRCN_RCN.png}}
\\
Ground Truth& A+ \cite{Timofte}& SRCNN \cite{dong2014image}& RFL \cite{schulter2015fast}& SelfEx \cite{Huang-CVPR-2015}& DRCN (Ours)\\
(PSNR, SSIM)& (29.83, 0.9102)& (29.97, 0.9092)& (29.61, 0.9026)& ({\color{blue}{30.73}}, {\color{blue}{0.9193}})& ({\color{red}{32.17}}, {\color{red}{0.9350}})\\
\end{tabular}
\caption{Super-resolution results of ``img082" (\textit{Urban100}) with scale factor $\times$4. Lines are straightened and sharpened in our result, whereas other methods give blurry lines. Our result is visually pleasing.}
\label{fig:img1}
\end{center}
\end{adjustwidth}
\end{figure*}
\begin{figure*}
\begin{adjustwidth}{0.5cm}{0.5cm}
\begin{center}
\small
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ c c c c c c }
{\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_HR.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_A+.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_SRCNN.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_RFL.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_SelfEx.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{134035_for_figDRCN_RCN.png}}
\\
Ground Truth& A+ \cite{Timofte}& SRCNN \cite{dong2014image}& RFL \cite{schulter2015fast}& SelfEx \cite{Huang-CVPR-2015}& DRCN (Ours)\\
(PSNR, SSIM)& (23.53, 0.6977)& ({\color{blue}{23.79}}, {\color{blue}{0.7087}})& (23.53, 0.6943)& (23.52, 0.7006)& ({\color{red}{24.36}}, {\color{red}{0.7399}})\\
\end{tabular}
\caption{Super-resolution results of ``134035" (\textit{B100}) with scale factor $\times$4. Our result shows a clear separation between branches while in other methods, branches are not well separated. }
\label{fig:img2}
\end{center}
\end{adjustwidth}
\end{figure*}
\begin{figure*}
\begin{adjustwidth}{0.5cm}{0.5cm}
\begin{center}
\small
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ c c c c c c }
{\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_HR.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_A+.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_SRCNN.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_RFL.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_SelfEx.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{ppt3_for_figDRCN_RCN.png}}
\\
Ground Truth& A+ \cite{Timofte}& SRCNN \cite{dong2014image}& RFL \cite{schulter2015fast}& SelfEx \cite{Huang-CVPR-2015}& DRCN (Ours)\\
(PSNR, SSIM)& (26.09, 0.9342)& (27.01, 0.9365)& (25.91, 0.9254)& ({\color{blue}{27.10}}, {\color{blue}{0.9483}})& ({\color{red}{27.66}}, {\color{red}{0.9608}})\\
\end{tabular}
\caption{Super-resolution results of ``ppt3" (\textit{Set14}) with scale factor $\times$3. Texts in DRCN are sharp while, in other methods, character edges are blurry.}
\label{fig:img3}
\end{center}
\end{adjustwidth}
\end{figure*}
\begin{figure*}
\begin{adjustwidth}{0.5cm}{0.5cm}
\begin{center}
\small
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ c c c c c c }
{\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_HR.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_A+.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_SRCNN.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_RFL.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_SelfEx.png}}
& {\graphicspath{{figs/figDRCN/}}\includegraphics[width=0.15\textwidth]{58060_for_figDRCN_RCN.png}}
\\
Ground Truth& A+ \cite{Timofte}& SRCNN \cite{dong2014image}& RFL \cite{schulter2015fast}& SelfEx \cite{Huang-CVPR-2015}& DRCN (Ours)\\
(PSNR, SSIM)& (24.24, 0.8176)& ({\color{blue}{24.48}}, {\color{blue}{0.8267}})& (24.24, 0.8137)& (24.16, 0.8145)& ({\color{red}{24.76}}, {\color{red}{0.8385}})\\
\end{tabular}
\caption{Super-resolution results of ``58060" (\textit{B100}) with scale factor $\times$2. A three-line stripe in ground truth is also observed in DRCN, whereas it is not clearly seen in results of other methods.}
\label{fig:img4}
\end{center}
\end{adjustwidth}
\end{figure*}
\subsection{Training}
\textbf{Objective} We now describe the training objective used to find optimal parameters of our model. Given a training dataset $\{{\bf x}^{(i)},{\bf y}^{(i)}\}{}_{i=1}^{N}$, our goal is to find the best model $f$ that accurately predicts values $\mathbf{\hat{y}}=f(\mathbf{x})$.
In the least-squares regression setting, typical in SR, the mean squared error $\frac{1}{2}||\mathbf{y}-f(\mathbf{x})||^{2}$
averaged over the training set is minimized. This favors high Peak Signal-to-Noise
Ratio (PSNR), a widely-used evaluation criterion.
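For reference, PSNR relates to the mean squared error as $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; a short sketch (the function name and interface are ours, not from any particular SR codebase):

```python
import numpy as np

def psnr(y_true, y_pred, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10*log10(peak^2 / MSE).

    Minimising the mean squared error maximises PSNR.
    """
    diff = y_true.astype(np.float64) - y_pred.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```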
With recursive-supervision, we have $D+1$ objectives to minimize: supervising $D$ outputs from recursions and the final output. For intermediate outputs, we have the loss function
\begin{equation}
l_1(\theta) = \sum_{d=1}^D \sum_{i=1}^N \frac{1}{2DN}||{\bf y}^{(i)} - \hat{\bf y}_d^{(i)} ||^{2},
\end{equation}
where $\theta$ denotes the parameter set and $\hat{\bf y}_d^{(i)}$ is the output from the $d$-th recursion. For the final output, we have
\begin{equation}
l_2(\theta) = \sum_{i=1}^N \frac{1}{2N}||{\bf y}^{(i)} - \sum_{d=1}^D w_d \cdot \hat{\bf y}_d^{(i)} ||^{2}.
\end{equation}
Now we give the final loss function $L(\theta)$. The training is regularized by weight decay ($L_2$ penalty multiplied by $\beta$).
\begin{equation}
L(\theta) =\alpha l_1(\theta) + (1 - \alpha) l_2(\theta) + \beta ||\theta||^2,
\end{equation}
where $\alpha$ denotes the importance of the companion objective on the intermediate outputs and $\beta$ denotes the multiplier of weight decay. Setting $\alpha$ high makes the training procedure stable as early recursions easily converge. As training progresses, $\alpha$ decays to boost the performance of the final output.
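The full objective can be sketched per training sample as follows (a simplified single-sample version; mini-batch averaging and the decay of $\alpha$ over the course of training are omitted):

```python
import numpy as np

def drcn_loss(y, preds, w, theta, alpha=1.0, beta=1e-4):
    """L(theta) = alpha*l1 + (1 - alpha)*l2 + beta*||theta||^2, one sample.

    preds : list of the D intermediate predictions y_hat_d
    w     : ensemble weights w_d; theta : list of weight arrays
    """
    D = len(preds)
    l1 = sum(0.5 * np.sum((y - p) ** 2) for p in preds) / D     # recursions
    final = sum(wd * p for wd, p in zip(w, preds))              # ensemble
    l2 = 0.5 * np.sum((y - final) ** 2)                         # final output
    reg = beta * sum(np.sum(t ** 2) for t in theta)             # weight decay
    return alpha * l1 + (1.0 - alpha) * l2 + reg
```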
Training is carried out by optimizing the regression objective using mini-batch gradient descent based on back-propagation (LeCun et al. \cite{lecun1998gradient}). We implement our model using the \textit{MatConvNet}\footnote{\url{ http://www.vlfeat.org/matconvnet/}} package \cite{arXiv:1412.4564}.
\section{Experimental Results}
In this section, we evaluate the performance of our method on several datasets. We first describe datasets used for training and testing our method. Next, our training setup is given. We give several experiments for understanding our model properties. The effect of increasing the number of recursions is investigated. Finally, we compare our method with several state-of-the-art methods.
\subsection{Datasets}
For training, we use 91 images proposed in Yang et al. \cite{yang2010image} for all experiments. For testing, we use four datasets. Datasets \textit{Set5} \cite{bevilacqua2012} and \textit{Set14} \cite{zeyde2012single} are often used for benchmark \cite{Timofte,Timofte2013,dong2014image}. Dataset \textit{B100} consists of natural images in the Berkeley Segmentation Dataset \cite{Martin2001}. Finally, dataset \textit{Urban100}, urban images recently provided by Huang et al. \cite{Huang-CVPR-2015}, is very interesting as it contains many challenging images failed by existing methods.
\begin{figure}
\begin{adjustwidth}{0cm}{-0.0cm}
\centering
{\graphicspath{{figs/graph1/}}\includegraphics[width=0.4\textwidth]{graphOne.pdf}}
\caption{Recursion versus Performance for the scale factor $\times$3 on the dataset \textit{Set5}. More recursions yielding larger receptive fields lead to better performances.}\end{adjustwidth}
\label{fig:more}
\end{figure}
\begin{figure}
\centering
{\graphicspath{{figs/graph1/}}\includegraphics[width=0.45\textwidth]{graphEnsemble}}
\caption{Ensemble effect. Prediction made from intermediate recursions are evaluated. There is no single recursion depth that works the best across all scale factors. Ensemble of intermediate predictions significantly improves performance. }
\label{fig:ensemble}
\end{figure}
\subsection{Training Setup}
We use 16 recursions unless stated otherwise. When unfolded, the longest chain from the input to the output passes 20 conv. layers (receptive field of 41 by 41). We set the momentum parameter to 0.9 and weight decay to 0.0001. We use 256 filters of the size $3 \times 3$ for all weight layers. Training images are split into 41 by 41 patches with stride 21 and 64 patches are used as a mini-batch for stochastic gradient descent.
For initializing weights in non-recursive layers, we use the method described in He et al. \cite{he2015delving}. For recursive convolutions, we set all weights to zero except self-connections (connection to the same neuron in the next layer) \cite{socher2012semantic, le2015simple}. Biases are set to zero.
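One common reading of this initialization, sketched below, zeroes a recursive conv filter of shape $(F, F, k, k)$ except for the center tap of each self-connection, so the recursion starts out as an identity map (an illustrative interpretation, not the paper's exact code):

```python
import numpy as np

def identity_recursive_init(F, k=3):
    """Zero weights except self-connections: W[f, f, centre, centre] = 1.

    Convolving feature maps with this filter returns them unchanged, so
    the recursion initially behaves like the identity map.
    """
    W = np.zeros((F, F, k, k))                # (out, in, height, width)
    c = k // 2
    for f in range(F):
        W[f, f, c, c] = 1.0
    return W

W = identity_recursive_init(4)
```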
Learning rate is initially set to 0.01 and then decreased by a factor of 10 if the validation error does not decrease for 5 epochs. If learning rate is less than $10^{-6}$, the procedure is terminated. Training roughly takes 6 days on a machine using one Titan X GPU.
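The stated schedule can be sketched as follows (an illustrative reimplementation; the actual training code may differ in bookkeeping details):

```python
def train_schedule(val_errors, lr=0.01, patience=5, floor=1e-6):
    """Divide lr by 10 when the validation error has not decreased for
    `patience` consecutive epochs; stop once lr falls below `floor`.

    Returns the learning rate in effect at each observed epoch.
    """
    best, stall, history = float("inf"), 0, []
    for err in val_errors:
        if err < best:
            best, stall = err, 0
        else:
            stall += 1
            if stall >= patience:
                lr, stall = lr / 10.0, 0
        if lr < floor:
            break                             # terminate training
        history.append(lr)
    return history

hist = train_schedule([1.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9])
```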
\subsection{Study of Deep Recursions}
We study the effect of increasing recursion depth. We trained four models with different numbers of recursions: 1, 6, 11, and 16. The four models use the same number of parameters except for the weights used for the ensemble. In Figure \ref{fig:more}, it is shown that as more recursions are performed, PSNR measures increase. Increasing recursion depth with a larger image context and more nonlinearities boosts performance. The effect of the ensemble is also investigated. We first evaluate intermediate predictions made from recursions (Figure \ref{fig:ensemble}). The ensemble output significantly improves the performances of individual predictions.
\subsection{Comparisons with State-of-the-Art Methods}
We provide quantitative and qualitative comparisons. For benchmarking, we use public code for A+ \cite{Timofte}, SRCNN \cite{dong2014image}, RFL \cite{schulter2015fast} and SelfEx \cite{Huang-CVPR-2015}. We deal with luminance components only, as is done in other methods, because human vision is much more sensitive to details in intensity than in color.
As some methods, such as A+ \cite{Timofte} and RFL \cite{schulter2015fast}, do not predict image boundaries, they require cropping pixels near the borders. For our method, this procedure is unnecessary as our network predicts the full-sized image. For fair comparison, however, we also crop pixels to the same amount. Reported PSNRs can differ slightly from the original papers because existing methods use slightly different evaluation frameworks. We use the public evaluation code from \cite{Huang-CVPR-2015}.
In Table \ref{tbl:benchmark}, we provide a summary of quantitative evaluation on several datasets.
Our method outperforms all existing methods on all datasets and scale factors (both PSNR and SSIM). Example images are given in Figures \ref{fig:img1}, \ref{fig:img2}, \ref{fig:img3} and \ref{fig:img4}. Our method produces relatively sharp edges with respect to the underlying patterns, whereas edges produced by other methods are blurred. Our method takes about one second to process a $288 \times 288$ image on a Titan X GPU.
\section{Conclusion}
In this work, we have presented a super-resolution method using a deeply-recursive convolutional network. Our network efficiently reuses weight parameters while exploiting a large image context. To ease the difficulty of training the model, we use recursive-supervision and skip-connection. We have demonstrated that our method outperforms existing methods by a large margin on benchmarked images. In the future, one can try more recursions in order to use image-level context. We believe our approach is readily applicable to other image restoration problems such as denoising and compression artifact removal.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\label{sec:intro}
\input{sections/s1_intro_v2}
\section{Related Work}
\label{sec:related}
\input{sections/s2_related}
\input{sections/f2_model}
\section{Approach}
\label{sec:approach}
\input{sections/s3_approach}
\section{Experiments}
\label{sec:experiments}
\input{sections/s4_experiments}
\section{Conclusion}
\label{sec:conclusion}
\input{sections/s5_conclusion}
\vspace{6pt} \noindent
\textbf{Acknowledgments}
We would like to thank Richard Higgins and Karan Desai for many helpful discussions and feedback on early drafts of this work.
{\small
\bibliographystyle{ieee_fullname}
\subsection{Point Cloud Registration}
\label{sec:exp_pcreg}
We first evaluate our approach on point cloud registration on ScanNet and report our results in Table~\ref{tab:pose_scannet}.
Given two point clouds, we estimate the transformation $\mathbf{T} \in \text{SE}(3)$ that would align the point clouds.
We emphasize that we discard the visual encoder at test time, and only use the geometric encoder on point cloud input.
\lsparagraph{Baselines.} While our approach is unsupervised, we are interested in comparing to both classical hand-crafted and supervised learning approaches.
We first compare our approach against different variants of ICP~\cite{rusinkiewicz2001efficient}.
ICP is an important comparison since it is both an inspiration of this work, as well as a classical algorithm for point cloud registration.
We also compare it with a RANSAC-based aligner using FPFH~\cite{rusu2009fast} or FCGF~\cite{choy2019fully} 3D feature descriptors.
FPFH~\cite{rusu2009fast} is a hand-crafted 3D feature descriptor that encodes a histogram of the geometric relationships between each point and its nearest neighbors.
FPFH is one of the best non-learned 3D feature descriptors and would be representative of the performance of hand-crafted 3D features.
FCGF~\cite{choy2019fully} is a recently proposed learned 3D feature descriptor that combines sparse 3D convolutional networks with contrastive losses defined on ground-truth correspondences to achieve state-of-the-art performance on several registration benchmarks.
Finally, we compare with Deep Global Registration~\cite{choy2020deep} and 3D Multiview Registration~\cite{gojcic2019perfect}: two supervised approaches that learn to estimate correspondences on top of FCGF features.
Those approaches use supervision for both feature learning and correspondence estimation, while our approach is unsupervised for both. It is worth noting that 3D Multi-view Registration~\cite{gojcic2020learning} proposes both a method for pairwise registration and synchronizing multiple views at the same time. We only compare against their pairwise registration module.
\lsparagraph{Evaluation Metrics. }
We evaluate the pairwise registration by calculating the rotation and translation error between the predicted and ground-truth transformation as follows:
\begin{equation}
E_{\text{rotation}} = \arccos(\frac{Tr(\mathbf{R}_{pr}\mathbf{R}_{gt}^\top) - 1}{2}),
\end{equation}
\begin{equation}
E_{\text{translation}} = ||\mathbf{t}_{pr} - \mathbf{t}_{gt}||_2 .
\end{equation}
We report the translation error in centimeters and the rotation errors in degrees. We also report the chamfer distance between the predicted and ground-truth alignments of the scene.
For each metric, we report the mean and median errors as well as the accuracy at different thresholds.
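The two pose error metrics above can be sketched directly from their definitions; a minimal version is given below, assuming rotations as $3 \times 3$ matrices and translations as 3-vectors (the clipping only guards the $\arccos$ against numerical round-off):

```python
import numpy as np

def rotation_error_deg(R_pr, R_gt):
    """Geodesic rotation error in degrees between two rotation matrices."""
    cos_angle = (np.trace(R_pr @ R_gt.T) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)  # guard against round-off
    return np.degrees(np.arccos(cos_angle))

def translation_error(t_pr, t_gt):
    """Euclidean translation error, in the units of the inputs."""
    return np.linalg.norm(t_pr - t_gt)
```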
\nsparagraph{Results. }
We first note that ICP approaches fail on this task. ICP assumes that the point clouds are prealigned and can be very effective at fine-tuning such alignment by minimizing a chamfer distance. However, our view pairs have a relatively large camera motion with the mean transformation between two frames being 11.4 degrees and 19.4 cm.
As a result, ICP struggles with the large transformations and partial overlap between the point cloud pairs.
Similarly, FPFH also fails on this task as its output descriptors are not distinctive enough, resulting in many false correspondences which greatly deteriorates the registration performance.
On the other hand, learned approaches show a clear advantage in this domain as they are able to learn features that are well-tuned for the task and data domain.
Our model is able to outperform FCGF despite FCGF being trained with ground-truth correspondences on an indoor scene dataset.
This is true regardless of whether our model is trained using RGB-D or depth pairs.
While we find that our model trained on 3D Match performs worse than FCGF, this is expected since 3DMatch is a much smaller dataset making it less suitable for a self-supervised approach.
Finally, our approach is competitive against approaches that use supervision for both feature learning and correspondence estimation~\cite{choy2020deep,gojcic2020learning}.
This comparison represents the difference between full supervision on a small dataset vs. self-supervision on a large dataset.
Our competitive performance demonstrates the promise of self-supervision in this space and our model's ability to learn for a very simple learning signal: consistency between video frames.
\input{sections/t3_analaysis}
\lsparagraph{What is the impact of the transformation estimator?}
While we observe that RANSAC improves the performance of FPFH and FCGF compared to the Weighted Procrustes, we see the opposite pattern with our approach.
This is due to the fact that our model is trained specifically on a registration loss on filtered correspondence. As a result, Lowe's ratio becomes a very effective method of filtering our correspondences while being less effective for other approaches.
\lsparagraph{How good are random features?}
We find that random visual features can serve as a strong baseline for point cloud registration on ScanNet, as shown in Fig~\ref{fig:intial_corr} and Table~\ref{tab:analysis}.
This is surprising since random visual features perform on par with FCGF, and it helps explain why our method is capable of achieving this performance without any supervision.
We also find that after training, our visual features achieve the highest registration performance.
These results suggest that visual features are better descriptors for registration, but it is unclear whether this is a fundamental advantage or whether the performance gap can be closed through better architectures or training schemes for geometric feature learning.
\subsection{Correspondence Estimation}
\label{sec:exp_corr}
We now examine the quality of the correspondences estimated by our method.
We evaluate our approach on the 3D Match geometric registration benchmark and follow the evaluation protocol proposed by Deng~\etal~\cite{deng2018ppfnet} of evaluating the correspondence recall.
Intuitively, feature-match recall measures the percentage of point cloud pairs that would be registered accurately using a RANSAC estimator by guaranteeing a minimum percentage of inliers.
\lsparagraph{Baselines. }
We compare our approach against three sets of baselines.
The first set are hand-crafted features based on the local geometry around each point~\cite{rusu2009fast,salti2014shot,tombari2010usc}.
The second set are supervised approaches that use known pose to sample ground-truth correspondences and apply a metric learning loss to learn features for geometric registration.
Finally, the third set are unsupervised approaches trained on reconstructed scenes.
While those approaches do not directly use ground-truth pose during training, their training data (reconstructed scenes) is generated by aligning 50 depth maps into a single point cloud. Hence, while those approaches do not use pose supervision explicitly, pose information is \textit{needed} to generate their data. We refer to those approaches as \textit{scene-supervised}.
\input{sections/t2_3dmatch_fmr}
\lsparagraph{Evaluation Metrics.}
Given a set of correspondences $\mathcal{C}$, $FM(\mathcal{C})$ evaluates whether the percentage of inliers exceeds $\tau_2$, where an inlier correspondence is defined as having a residual error less than $\tau_1$ given the ground-truth transformation $\mathbf{T}^{*}$. Feature-match recall is the percentage of point cloud pairs that have a successful feature matching.
\begin{equation}
FM(\mathcal{C}) =
\Big[\frac{1}{|\mathbf{\mathcal{C}|}}
\sum\limits_{(p, q) \in \mathbf{\mathcal{C}}}
\mathbf{\mathbbm{1}}
\big( ||\mathbf{x}_p{-}\mathbf{T}^{*}\mathbf{x}_q|| < \tau_1\big)\Big] > \tau_2
\end{equation}
Similar to \cite{choy2019fully,deng2018ppffoldnet,deng2018ppfnet}, we calculate feature-match recall over all view pairs using $\tau_{1}=10$ cm and $\tau_2 = 5\%$.
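The feature-match criterion for a single point cloud pair can be sketched as follows, with the thresholds following the values quoted above; `T_gt` is assumed to be a $4 \times 4$ homogeneous ground-truth transform:

```python
import numpy as np

def feature_match(corrs, T_gt, tau1=0.10, tau2=0.05):
    """Return True if the inlier ratio of a correspondence set exceeds tau2.

    corrs: list of (x_p, x_q) pairs of 3D points (x_q is mapped by T_gt).
    T_gt:  4x4 ground-truth homogeneous transformation.
    """
    inliers = 0
    for x_p, x_q in corrs:
        x_q_h = np.append(x_q, 1.0)                    # homogeneous coords
        residual = np.linalg.norm(x_p - (T_gt @ x_q_h)[:3])
        inliers += residual < tau1                     # count inliers
    return inliers / len(corrs) > tau2
```

Feature-match recall is then simply the fraction of test pairs for which this returns True.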
Prior approaches often generate feature sets without any specified means of filtering them. As a result, they define the correspondence set as the set of all nearest neighbors.
Unlike prior work, our approach outputs a small set of correspondences after ranking them using Lowe's ratio test.
\input{sections/f4_qualitative_results}
\lsparagraph{Results. }
We find that our approach achieves a high feature-match recall, outperforming traditional approaches and scene-supervised approaches, while being competitive with supervised approaches.
It is worth emphasizing that we achieve this performance while training on the raw RGB-D or depth scans without requiring any additional annotation or post-processing of the data.
We achieve the best performance by training on ScanNet with rotation augmentations. Rotation augmentation gives us a small boost, despite resulting in lower registration performance on ScanNet.
This can be explained by the differing data distributions of the two datasets: the 3D Match benchmark contains larger transformations than those observed between video frames, which makes data augmentation very useful.
We also observe the interesting pattern that the model trained on only geometric correspondence generalizes better to 3D Match despite doing worse on ScanNet.
One explanation for this discrepancy is that bootstrapping with visual correspondences biases the model towards representing features that are meaningful in both modalities.
Such representations might be more dataset specific, hindering across-dataset generalization.
This finding also opens up the possibility of scaling datasets that only include depth video; \eg, lidar datasets.
While our best configuration outperforms all the scene-supervised approaches,
we achieve performance that is competitive with the scene-supervised approaches when we evaluate all our features (\textit{no filtering}).
We observe that when we attempt to filter the correspondences for FPFH or FCGF, their performance deteriorates.
This is consistent with some of the reported results by~\cite{deng2018ppffoldnet} where using a larger number of features improved their performance.
Hence, it is unclear how correspondence filtering would affect their performance.
Due to the lack of an official implementation of PPF-FoldNet and the complexity of their approach, we were unable to run additional experiments to better understand the impact of the training data and correspondence filtering on the learning process. This affects both PPF-FoldNet and 3D Point Capsule Networks since the latter approach replaces the encoder in PPF-FoldNet.
\subsection{Point Cloud Registration}
\label{sec:approach_pcreg}
Given two point clouds, $\mathcal{P}_0$ and $\mathcal{P}_1$, point cloud registration is the task of finding the transformation $\mathbf{T} \in \text{SE}(3)$ that aligns them.
Registration approaches commonly consist of three stages: feature extraction, correspondence estimation, and geometric fitting.
In our approach, we perform two registration steps using image or point cloud features. Correspondence estimation and geometric fitting are the same for both steps.
Below we discuss each of the steps in detail.
\lsparagraph{Geometric Feature Extraction. }
Our first encoder takes point clouds as input. This encoder allows us to extract features based on the geometry of the input scene.
We first generate a point cloud for each view using the input depth and known camera intrinsic matrix.
We then encode each point cloud using a sparse 3D convolutional network~\cite{choy2019minkowski,Graham_2018_CVPR}.
We use this network due to its success as a back-end for supervised registration approaches~\cite{choy2019fully,choy2020deep,gojcic2020learning} and 3D representation learning~\cite{xie2020pointcontrast,zhang2021depthcontrast}.
This network applies sparse convolution to a voxelized point cloud, which allows it to extract features that capture local geometry while maintaining a quick run-time.
Similar to prior work~\cite{choy2019fully,xie2020pointcontrast,zhang2021depthcontrast}, we find that a voxel size of 2.5 cm works well for indoor scenes.
This step maps our input RGB-D image, $I_0, I_1 \in \mathbb{R}^{4 \times H \times W}$ to $\mathcal{P}_0, \mathcal{P}_1 \in \mathbb{R}^{N \times (3 + F)}$ where each point cloud has N points, and each point, $p$, is represented by a 3D coordinate, $\mathbf{x}_p$, and a $F$-dimensional geometric feature vector, $\mathbf{g}_p$.\footnote{Voxelization will result in point clouds of varying dimension. We use heterogeneous batching to handle this in our implementation, but assume that point clouds have the same size in our discussion for clarity.} In our experiments, we use a feature dimension of 32.
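The depth back-projection mentioned above (generating a point cloud from a depth map and known intrinsics) can be sketched with a standard pinhole model; this is a generic illustration, not the paper's implementation, and it simply drops pixels with zero (invalid) depth:

```python
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project a depth map (H, W) into 3D camera coordinates.

    K: 3x3 pinhole intrinsic matrix. Returns an (M, 3) array,
    skipping pixels with zero (invalid) depth.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]                       # pixel grid
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    rays = np.linalg.inv(K) @ pix                   # normalized viewing rays
    pts = (rays * depth.ravel()).T                  # scale each ray by depth
    return pts[depth.ravel() > 0]
```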
\lsparagraph{Visual Feature Extraction. }
Our second encoder takes images as input and generates an output feature map of the same size.
Maintaining the image's spatial resolution results in a feature vector extracted for each pixel.
We use a ResNet encoder with two residual blocks as our image encoder and map each pixel to a feature vector of size 32.
We use the projected 3D coordinates of the voxelized point cloud from the geometric encoder to index into the 2D feature map.
This allows us to generate a point cloud for each input RGB-D image whose points $p \in \mathcal{P}$ have both a visual feature vector, $\mathbf{v}_p$, and a geometric feature vector, $\mathbf{g}_p$.
We use this property for transferring correspondences between the different feature modalities in \S~\ref{sec:approach_v2g}.
We only use the visual encoder during training to bootstrap the geometric feature learning. At test time, we register point clouds without access to image data.
\lsparagraph{Correspondence estimation. }
We estimate the correspondences between the two input views for each feature modality to output two sets of correspondences: $\mathcal{C}_{vis}$ and $\mathcal{C}_{geo}$.
We first generate a list of correspondences by finding the nearest neighbor to each point in the feature space.
Since we have two point clouds of N points, we end up with a correspondence list of length 2N candidate correspondences.
The candidate correspondences will likely contain a lot of false positives due to poor matching, repetitive features, and occluded or non-overlapping portions of the image.
The common approach is to filter the correspondences based on some criteria of uniqueness or correctness.
Recent approaches propose learning networks that estimate a weight for each correspondence~\cite{choy2020deep,gojcic2020learning,ranftl2018deepfundamental}.
In this work, we leverage the method proposed by~\cite{elbanani2021unsupervisedrr} of using a weight based on Lowe's ratio~\cite{lowe2004distinctive}.
Given two point clouds, $\mathcal{P}_0$ and $\mathcal{P}_1$, we find the correspondences of point $p \in \mathcal{P}_0$ by finding the two nearest neighbors $q_p$ and $q_{p, nn_2}$ to $p$ in $\mathcal{P}_1$ in feature space.
We can calculate the Lowe's ratio weight as follows:
\begin{equation}
w_{p, q_p} = 1 - \frac{D(\mathbf{f}_p, \mathbf{f}_{q_p})}{D(\mathbf{f}_p, \mathbf{f}_{q_{p, nn_2}})}
\end{equation}
where $D$ is cosine distance, and $\mathbf{f}_p$ is either the visual or the geometric feature descriptor depending on which correspondence set is being calculated.
It is worth noting that this formulation is similar to the triplet loss often used in contrastive learning, where $q_p$ is the positive sample and $q_{p, nn_2}$ is the hardest negative sample.
We use the resulting weights to rank the correspondences and only include the top $k$ correspondences. We use $k=400$ in our experiments.
Each element of our correspondence set $\mathcal{C}$ consists of the two corresponding points and their weight $(p, q, w_{p, q})$.
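The ratio-based weighting and top-$k$ selection can be sketched as below; this brute-force version assumes L2-normalized feature rows (so cosine distance is one minus a dot product), whereas a practical implementation would use an optimized nearest-neighbour search:

```python
import numpy as np

def ratio_correspondences(F0, F1, k=400):
    """Match rows of F0 to rows of F1 and weight by Lowe's ratio.

    F0, F1: (N, F) feature matrices, assumed L2-normalized.
    Returns up to k triples (p, q, w) sorted by descending weight.
    """
    dist = 1.0 - F0 @ F1.T                        # (N, N) cosine distances
    order = np.argsort(dist, axis=1)
    nn1, nn2 = order[:, 0], order[:, 1]           # two nearest neighbours
    rows = np.arange(len(F0))
    d1, d2 = dist[rows, nn1], dist[rows, nn2]
    w = 1.0 - d1 / np.maximum(d2, 1e-12)          # Lowe's ratio weight
    top = np.argsort(-w)[:k]                      # keep top-k by weight
    return [(int(p), int(nn1[p]), float(w[p])) for p in top]
```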
\lsparagraph{Geometric Fitting. }
For each set of correspondences, we estimate the transformation, $\mathbf{T}^{*} \in \text{SE(3)}$ that would minimize the mean-squared error between the aligned correspondences:
\begin{equation}
{E}(\mathcal{C}, \mathbf{T}) = \frac{1}{|\mathcal{C}|} \sum_{(p, q_p, w) \in \mathcal{C}} \frac{w}{\sum_{\mathcal{C}} w}||\mathbf{x}_{q_p} - \mathbf{T}(\mathbf{x}_p)||
\end{equation}
Choy~\etal~\cite{choy2020deep} show this problem can be reformulated as a weighted Procrustes algorithm~\cite{gower1975generalized,kabsch1976solution}, allowing for weights to be integrated into the operation to improve the optimization process while maintaining differentiability with respect to the weights.
We adopt this formulation due to its relative simplicity and ease of incorporation within an end-to-end trainable system.
Despite having a filtered correspondence list, the correspondence set might still include some outliers that would result in an incorrect geometric fitting.
We adopt the randomized optimization used in~\cite{elbanani2021unsupervisedrr}, and similarly find that we get the best performance by only using it at test time.
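The weighted Procrustes solution referred to above can be sketched as a weighted Kabsch algorithm; this is a generic closed-form version (the sign correction keeps the result a proper rotation rather than a reflection), not the paper's differentiable implementation:

```python
import numpy as np

def weighted_procrustes(X, Y, w):
    """Find R, t minimizing sum_i w_i ||Y_i - (R X_i + t)||^2.

    X, Y: (N, 3) corresponding points; w: (N,) non-negative weights.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                                # L1-normalize the weights
    mu_x, mu_y = w @ X, w @ Y                      # weighted centroids
    Xc, Yc = X - mu_x, Y - mu_y
    H = (Xc * w[:, None]).T @ Yc                   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                             # proper rotation
    t = mu_y - R @ mu_x
    return R, t
```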
\input{sections/f3_corres}
\lsparagraph{Registration Loss. }
Our registration loss is defined with respect to our correspondence set and the estimated transformation as follows:
\begin{equation}
\mathcal{L}_{reg}(\mathcal{C}) = \min_{\mathbf{T} \in \text{SE}(3)}{E(\mathcal{C}, \mathbf{T})}
\end{equation}
There are a few interesting things about this loss. First, it is worth noting that the gradients are back-propagated to the feature encoder through the weights, $w$, \textit{and} the transformation, $\mathbf{T}$. Hence, the loss can be formulated without using the weights. We find that using the weight improved the performance of visual registration while deteriorating the performance of geometric registration. Therefore, in our model, we only apply the weighting to the visual registration branch while removing it from the geometric branch.
Second, the loss operates as a weighted sum over the residuals; specifically, the loss is minimized if the correspondence with the lowest residual error has the highest weight. Since the weights are L1 normalized, the relative weighting of the correspondences matters. Removing the normalization results in an obvious degeneracy, since the loss can then be minimized by driving the weights to 0, which can be achieved by mode collapse. Finally, the weighted loss closely resembles a triplet loss since we sample both positive (first nearest neighbor) and hardest negative (second nearest neighbor) samples. However, unlike the commonly used margin triplet loss, this formulation does not require defining a margin as it operates on the ratio of distances rather than their absolute scale.
\subsection{Visual $\to$ Geometric}
\label{sec:approach_v2g}
The approach outlined in \S~\ref{sec:approach_pcreg} works well with visual features; however, it is less effective with geometric features.
The reason for this becomes apparent once we consider the registration performance using features from randomly initialized encoders.
As shown in Figure~\ref{fig:intial_corr}, we observe that the features extracted using a randomly initialized visual encoder provide some distinctive output, while the outputs of a randomly initialized geometric encoder are far less distinctive.
Ideally, we would leverage the good visual correspondences to further bootstrap the learning of the geometric features.
We observe that geometric feature learning approaches typically define metric learning losses using sampled correspondences~\cite{bai2020d3feat,choy2019fully,gojcic2019perfect,Li2020e2e3ddescriptors,yew20183dfeatnet}.
We adapt this approach to the unsupervised setting by sampling feature pairs using visual correspondences.
This is fairly simple in our approach since each point has both a visual feature and a geometric feature, so transferring correspondences becomes as simple as indexing into another tensor.
Since the correspondences act as an indexing mechanism, the loss is only back-propagated to the geometric encoder.
Current 3D feature learning approaches rely on both positive and negative pairs to define triplet~\cite{choy2019fully,khoury2017cgf,Li2020e2e3ddescriptors,yew20183dfeatnet} or contrastive~\cite{bai2020d3feat,choy2019fully,xie2020pointcontrast} losses.
However, as noted in the literature, those losses can be difficult to apply due to their susceptibility to mode collapse and their sensitivity to hyperparameter choices and the negative sampling strategy~\cite{choy2019fully,xie2020pointcontrast,zhang2021depthcontrast}.
Those issues are amplified in our setting since we rely on estimated, not ground-truth, correspondences.
Instead of the typical contrastive setup, we adapt the recently proposed SimSiam~\cite{chen2020exploring} to the point cloud setting.
SimSiam allows us to train our model without requiring negative sampling or having any hyperparameters, while being less susceptible to mode collapse than contrastive losses~\cite{chen2020exploring}.
We adapt SimSiam by applying it to the geometric features of corresponding points, instead of features for different augmentations of the same image.
Given a correspondence $(p, q) \in \mathcal{C}_{vis}$, we first project the features using a two-layer MLP projection head and apply a stop-gradient operator on the features:
\begin{equation}
\mathbf{z}_p = \mathtt{stopgradient}(\mathtt{project}(\mathbf{g}_p)).
\end{equation}
We then compute the loss based on the cosine distance between each geometric feature and the projection of its correspondence:
\begin{equation}
\mathcal{L}_{V\to G}(\mathcal{C}_{vis}) = \frac{1}{|\mathcal{C}_{vis}|} \hspace{-0.5mm} \sum_{(p, q) \in \mathcal{C}_{vis}} \hspace{-4mm} D(\mathbf{g}_p, \mathbf{z}_q) + D(\mathbf{g}_q, \mathbf{z}_p)
\end{equation}
where $D$ is the cosine distance function and $\mathcal{C}_{vis}$ is the set of visual correspondences.
In our experiments, we observe that adding SimSiam improved the performance without requiring any additional fine-tuning or model changes.
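A minimal numpy sketch of this symmetric loss for one correspondence set follows. Note the hedges: `project` stands in for the two-layer MLP head, and the stop-gradient has no effect in plain numpy; in an autodiff framework it would block gradients through the projected branch, as indicated in the comment:

```python
import numpy as np

def cosine_distance(a, b):
    """D(a, b) = 1 - cos similarity of two feature vectors."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def v2g_loss(G0, G1, corrs, project):
    """Symmetric SimSiam-style loss over visual correspondences.

    G0, G1: (N, F) geometric features of the two views.
    corrs:  list of (p, q) index pairs from the visual branch.
    project: projection head (a two-layer MLP in the paper).
    """
    total = 0.0
    for p, q in corrs:
        z_p = project(G0[p])   # stop-gradient would apply here in autodiff
        z_q = project(G1[q])
        total += cosine_distance(G0[p], z_q) + cosine_distance(G1[q], z_p)
    return total / len(corrs)
```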
\input{sections/t1_registration}
\section{Introduction}
\label{Sec:Intro}
The control of turbulent flows has been an area
of particular interest in aerospace research due to major commercial benefits. For most applications, the control parameters are determined by using simulation-based optimization techniques. Typical examples include active flow control of high-lift devices for lift enhancement \citep{nielsen_2011}, flow relaminarization \citep{Bewley:2001} and noise control \citep{Wei:2006}.
Especially for large-scale problems, an efficient way of finding the optimal values of control parameters is by employing gradient-based numerical optimization techniques. These methods are ideally combined with continuous or discrete adjoint solvers to evaluate the gradient. The main advantage of the adjoint approaches is that they enable efficient evaluation of gradients, irrespective of the number of control parameters, at a fixed computational cost. Therefore, numerical optimization studies using high dimensional control vectors and high fidelity simulation for function evaluations become viable within limited computational resources.
Broadly, we can classify adjoint approaches
into two categories: continuous and discrete adjoint methods. In the continuous adjoint method \citep{Jameson:1998},
the optimality system is derived from the continuous optimization problem. The resulting adjoint partial differential equations (PDEs) are then discretised and solved using state-of-the-art numerical
methods. Although computationally efficient, continuous adjoint flow solvers require considerable implementation effort and a good understanding of the underlying flow solver. Furthermore, their maintenance becomes problematic as the underlying non-linear flow solvers are subject to continuous modifications, e.g., new boundary conditions, new physical models, etc. In short, extending the adjoint solver to incorporate new features can be a quite challenging task.
In the discrete adjoint method, on the other hand, the derivation of the optimality conditions starts directly with the discretized state PDEs that govern the fluid flow. Based on a given discretization, the discrete adjoint equation is derived. In general, compared to the continuous adjoint solvers, discrete adjoint solvers are more straightforward to implement. Therefore, they have found a wider acceptance for flow control applications of practical relevance in the past.
In general, a discrete adjoint method for optimal active flow control can be developed either by
using the so-called hand-discrete approach \citep{Nielsen} or by employing Algorithmic Differentiation (AD) techniques to the underlying non-linear flow solver. In the hand-discrete approach, the adjoint equations are derived by linearizing the discrete residuals by hand. Based on the derivation, a computer code is then implemented to solve the adjoint equations and to evaluate the gradient vector. In the AD based approach, on the other hand, the adjoint code is generated directly by applying AD techniques to the computational fluid dynamics (CFD) code that is implemented to solve the discretized flow equations.
Accurate computation of sensitivities requires exact differentiation of all terms that constitute the discrete residual. However, exact linearization of these terms is often quite complex, laborious and error prone. To simplify this tedious effort and ease the development of adjoint solvers, various Jacobian approximations have been proposed in the past \citep{Pet2010}. While using these approximations, the linearization of certain terms in the flux Jacobian is omitted; typical examples are turbulence models, flux limiters, convergence acceleration schemes and higher-order reconstruction terms. The Jacobian approximations ease the development effort significantly at the expense of some inaccuracy in the sensitivity evaluation. Usually, for steady-state problems, the inaccuracies incurred by the Jacobian approximations can be tolerated. In unsteady flows, however, the effect of these approximations on the accuracy of the sensitivities is much more significant, as the errors generated in the adjoint solution tend to accumulate rapidly while solving the adjoint equations backward in time \citep{anil2013}. On the other hand, by using AD techniques in reverse mode, very accurate adjoint solvers can be generated, since the exact differentiation of all residual terms can be performed by AD packages with much ease. Therefore, Jacobian approximations are no longer required in the AD-based discrete adjoint framework.
The AD-based adjoint solvers, apart from the advantages mentioned above, in general have a higher memory demand compared to their continuous and hand-discrete counterparts. The reason is that all operations performed within a time iteration must be saved while integrating the state vector forward in time. Especially for large eddy simulations (LES) or direct numerical simulations (DNS), the number of floating point operations can be very large; therefore, the memory demand required for the reversal of a time iteration might be overwhelmingly large. In addition, extra memory is also required for the reversal of the time loop. In the most extreme case, the complete forward trajectory of the state solution is kept in memory. This approach is known as the store-all approach \citep{HascoetGK09} in the AD community. For most applications, the store-all approach is not feasible. A simple way of overcoming the excessive memory demand of the store-all approach is to store parts or the entirety of the forward trajectory on the hard disk at the expense of increased run-time. This approach has been widely used in the past \citep{Mani08,Nada02, Rump07} for unsteady adjoint computations. Another memory-saving solution is the checkpointing strategy. In adjoint solvers based on checkpointing, the flow solutions are stored only at selected time iterations known as checkpoints. Various checkpointing strategies have been proposed based on the storage criteria for checkpoints \citep{Griewank2000,Stumm09}.
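As an illustration of the memory/run-time trade-off, a minimal sketch of uniform checkpointing for reversing a time loop is given below. This is a generic toy, not any cited scheme: `step` and `adjoint_step` are placeholders for the forward and adjoint time iterations, and only every `interval`-th state is stored, with intermediate states recomputed during the backward sweep:

```python
def reverse_time_loop(u0, n_steps, step, adjoint_step, interval):
    """Adjoint time reversal with uniformly spaced checkpoints.

    Instead of storing all n_steps states (store-all), only every
    `interval`-th state is kept; intermediate states are recomputed.
    """
    checkpoints = {0: u0}
    u = u0
    for i in range(n_steps):                 # forward sweep
        u = step(u)
        if (i + 1) % interval == 0:
            checkpoints[i + 1] = u
    adj = None
    for i in reversed(range(n_steps)):       # backward sweep
        base = (i // interval) * interval    # nearest stored checkpoint
        u = checkpoints[base]
        for _ in range(i - base):            # recompute forward to state i
            u = step(u)
        adj = adjoint_step(u, adj)           # consume state i
    return adj
```

Binomial strategies such as revolve \citep{Griewank2000} place the checkpoints non-uniformly to minimize recomputation for a given memory budget; the uniform spacing here is only the simplest variant.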
Although checkpointing strategies have recently been applied successfully to unsteady Reynolds-averaged Navier-Stokes (URANS) and LES computations \citep{Zhou_etal_2016a,NeOeGaKrTh2016b}, the computational cost associated with AD for LES/DNS computations at high Reynolds numbers and with large control horizons is still prohibitively high.
Another important issue, especially for turbulent flow control problems, is the stability of the adjoint solvers. In general, the sensitivities evaluated by the adjoint solver are highly sensitive to the initial and boundary conditions in flow regimes with high Reynolds numbers. This is not surprising, since the turbulence phenomenon, by its very nature, means that small variations in the control parameters lead to large variations in the objective function. In particular, if a wide time horizon for the flow control is chosen, the sensitivities of the specified objective function tend toward arithmetic overflow beyond a certain simulation time. The result is an unstable adjoint solver that evaluates sensitivity values that are not meaningful. This problem is addressed in \citep{wang13}. In contrast to URANS adjoint solvers, the usage of AD techniques does not help much in alleviating this problem. The reason is that the numerical scheme used to resolve the physical instabilities inherent in the flow is also differentiated exactly by the AD tools, and the resulting high sensitivity values due to the small scales of turbulence are simply transferred to the complete domain. Therefore, after a certain number of time iterations, the complete adjoint solution eventually gets corrupted. In other situations, poor numerical treatment may also lead to the same problem. Even if the physical instability is filtered out from the simulation (e.g., by using averaging or coarse grids), numerical noise introduced by an improper numerical scheme may lead to similar problems. A transient ODE example, which illustrates this problem in detail, can be found in \citep{Ozk16Arxiv}.
The failure mentioned above can be overcome by replacing the initial value problem with the well-conditioned ``least squares shadowing (LSS) problem'' \citep{WANG2014, SteffiOS}. In this way, one can obtain workable sensitivities from the adjoint solver to be used in flow control problems. The drawback of the LSS method is the increased run-time and memory demand, which may be a serious problem for large-scale simulations. To decrease the run time, the LSS problem can be solved parallel-in-time \citep{Guenther2017}. As an alternative to the LSS method, the receding horizon approach \citep{Marinc2012} can be taken. In this approach the long control interval is divided into smaller sub-intervals, each small enough that the sub-optimization problem stays controllable. In this way, an optimal control problem is treated as a group of sub-optimal problems, which can be solved with classical nonlinear optimization methods. The receding horizon algorithm does not suffer from the high computational cost of the LSS method, but this advantage comes at the expense of accuracy.
In the present work, we aim to make a comprehensive study of adjoint-based turbulent flow control and the associated stability issues. Thereby, we focus on the pure flow control problem of plane and round jets for noise reduction, without the regularization techniques mentioned previously. This paper is organized as follows: In Section 2, we briefly present the governing equations and the numerical method chosen for the present work; information about the test case configurations and implementation details of the discrete adjoint solver is also provided. The validation results for the discrete adjoint solver are presented in Section 3. In Section 4, we present the optimization results achieved for the noise reduction problem using different configurations. In Section 5, we shortly introduce the methodology used to generate the exact tangent-linear solver that is used to study the behavior of the control sensitivities for long time horizons. In Section 6, the results of a sensitivity study obtained with the tangent-linear method are presented. Finally, conclusions are drawn in Section 7.
\section{Governing Equations and Numerical method}
\label{sec:numerical_method}
In the present work, the $3$D compressible Navier-Stokes equations are solved on a Cartesian
grid to provide the primal solution
\begin{eqnarray}
\frac{\partial \rho}{\partial t} & = & -\frac{\partial m_i}{\partial x_i} \label{eqNS1}\\
\frac{\partial m_i}{\partial t} & = & -\frac{\partial p}{\partial x_i} -\frac{\partial}{\partial x_j} \rho u_j u_i + \frac{\partial}{\partial x_j} \tau_{j i} \label{eqNS234}\\
\frac{\partial p}{\partial t} & = & -\frac{\partial}{\partial x_i} p u_i + \frac{\partial}{\partial x_i} \lambda (\gamma - 1) \frac{\partial}{\partial x_i} T\nonumber\\
& & - (\gamma - 1 ) p \frac{\partial}{\partial x_i} u_i + (\gamma - 1 ) \tau_{i j} \frac{\partial}{\partial x_j} u_i, \label{eqNS5}
\end{eqnarray}
where $\rho$ is the density, $u_i$ is the $i$th component of the velocity vector $u$, $\gamma$ is the ratio of the specific heats, $T$ is the temperature and $\lambda$ is the heat conductivity. Furthermore, the mass flux in the $i$th direction is denoted by $m_i = \rho u_i$ and the viscous stress tensor $\tau_{ij}$ is given by
\begin{eqnarray}
\tau_{i j} = \mu s_{i j} & = & \mu\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \delta_{ij} \frac{2}{3} \frac{\partial u_k}{\partial x_k} \right),
\vspace{-.1cm}
\end{eqnarray}
where $\mu$ is the viscosity.
The above equations are discretized in space using an optimized explicit dispersion-relation-preserving summation-by-parts (DRP-SBP) finite-difference scheme of sixth order. For the time discretization, a two-step low-dispersion-dissipation fourth-order Runge-Kutta (RK) scheme as in \citep{Hu:1996} is used. Furthermore, characteristic boundary conditions (CBC) as proposed in \citep{Lodato2008} are used to simulate the open boundaries of the jets. For the isotropic turbulence simulations, periodic boundary conditions (BC) are used. Additionally, sponge regions together with grid stretching and spatial filtering are utilized to reduce reflections near the non-periodic boundaries \citep{Foysi:2010a}. The flow field is filtered in every second RK iteration using a $10$th-order accurate low-pass filter. The filtering improves the numerical stability and also serves as a sub-grid scale model, which is equivalent to the approximate deconvolution approach to LES \citep{Stolz:1999,Mathew:2003,Mathew:2006}.
\begin{table}[h!]
\centering
{\small
\begin{tabular}{|c|cccccccccc|}
\hline
Case & $L_x$ & $L_y$ & $L_z$ & $n_x$ & $n_y$ & $n_z$ & $\Delta_{x,min}$ & $\Delta_{y,min}$ & $\Delta_{z,min}$ & $\Delta t$ \\\hline
{DNS2D} & 30 & - & 34 & 512 & 1 & 640 & $0.04$ & - & $0.036$ & $0.017$ \\\hline
{ELES3D} & 37 & 9 & 28 & 416 & 64 & 320 & $0.071$ & $0.14$ & $0.065$ & $0.03 $ \\\hline
{LES3D} & 37 & 9 & 28 & 512 & 160 & 400 & $0.051$ & $0.056$ & $0.051$ & $0.021$ \\\hline
{DNS3D} & 37 & 9 & 28 & 800 & 288 & 600 & $0.029$ & $0.031$ & $0.028$ & $0.012$ \\\hline
\end{tabular}}
\caption{Parameters of the plane jet simulations.}
\label{tab:parameterplanejet}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c| c c c c c c c c c c |}
\hline
Case & $L_x\!$ & $L_y\!$ & $L_z\!$ & $n_x$ & $n_y$ & $n_z$ & $\Delta_{x,min}$ & $\Delta_{y,min}$
& $\Delta_{z,min}$ & $\Delta t$\\
\hline
jetG1 & 31 & 16 & 23 & 352 & 160 & 224 & 0.0675 & 0.0695 & 0.067 & 0.0286\\
jetG2 & 31 & 16 & 23 & 448 & 216 & 288 & 0.0505 & 0.049 & 0.049 & 0.0207\\
jetG3 & 31 & 16 & 23 & 640 & 288 & 384 & 0.033 & 0.033 & 0.032 & 0.0136\\
jetG4 & 31 & 16 & 23 & 1152& 512 & 640 & 0.016 & 0.015 & 0.015 & 0.0061\\
\hline
\end{tabular}
\caption{The parameters of the round jet simulations.}
\label{tab:parameterroundjet}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c| c c c c c c c c c c |}
\hline
Case & $L_x\!$ & $L_y\!$ & $L_z\!$ & $n_x$ & $n_y$ & $n_z$ & $\Delta_{x,min}$ & $\Delta_{y,min}$
& $\Delta_{z,min}$ & $\Delta t$\\
\hline
iso64 & 1 & 1 & 1 & 64 & 64 & 64 & 0.0156 & 0.0156 & 0.0156 & 0.00219\\
iso128 & 1 & 1 & 1 & 128 & 128 & 128 & 0.0078 & 0.0078 & 0.0078 & 0.00098\\
iso256 & 1 & 1 & 1 & 256 & 256 & 256 & 0.0039 & 0.0039 & 0.0039 & 0.00013\\
\hline
\end{tabular}
\caption{The parameters of the isotropic turbulence simulations (rounded).}
\label{tab:parameteriso}
\end{table}
An important problem in industrial applications concerns the sound emission of subsonic plane and round jets and its control to suppress the radiated sound. This canonical flow problem represents a configuration that is still simple enough to allow researchers to concentrate on the relevant physical mechanisms associated with shear flows and turbulence, without dealing with other complex effects like chemical reactions, multiple phases, complex geometries, etc. Since the primary focus of the present work is adjoint-based optimization, the jet simulations serve as a framework for the assessment of the different adjoint approaches. In addition, we also performed forced isotropic turbulence simulations, which are well suited for comparing quantities like Lyapunov exponents.
Tables \ref{tab:parameterplanejet} and \ref{tab:parameterroundjet} give an overview of the numerical parameters of the plane and round jet simulations performed in this work.
For the plane jet simulations, the domain lengths
$L_i$ are normalized by the jet diameter $D$. The Reynolds number based on the diameter is set to $Re =U_j \rho_j D/\mu_j=2000$ and the Mach number is set to $Ma=U_j/c_j=0.9$, using centerline values for all simulations. The number of grid points in the different directions is denoted by $n_i$, with associated minimum grid spacing $\Delta_{i,min}$.
$\Delta t$ indicates the time step used during the optimization computations, non-dimensionalized by $D/U_j$. The subscript $j$ denotes mean values at the jet inflow plane. Similarly, for the round jet simulations $L_i$ is the length of the computational domain in the $i$th direction. For the jets, the reference length $D_j$ and reference velocity $U_j$ are chosen to be the jet diameter and velocity at the inflow. In Table \ref{tab:parameteriso}, the numerical parameters used for the forced isotropic turbulence simulations are given. For the isotropic turbulence, the extent of the computational domain $L_i$ and the rms velocity over the whole computational domain serve as reference values for non-dimensionalization. The Mach number $Ma_{rms}$ is set to $0.2$ for the isotropic turbulence. In order to sustain the turbulence, the flow has to be forced. In the jet simulations the flow is forced at the inflow by explicitly setting the inflow part of the CBC. Precursor simulations, which are fed into the domain to provide realistically correlated inflow data, are described in \citep{Foysi:2010a,Marinc2012}.
For the case of isotropic turbulence, the forcing method proposed in \citep{Petersen} was implemented. In the present implementation, only the solenoidal part of the thirteen modes with the lowest wave numbers is perturbed. Note that this forcing term allows us to choose the dissipation rate $\varepsilon$ explicitly, thereby enabling an easy evaluation of the Kolmogorov length scale $\eta = (\nu^3/\varepsilon)^{1/4}$.
The maximal Lyapunov exponent (MLE) can be computed by first solving the linearized Navier-Stokes equations with some arbitrary initial perturbation. As soon as the solution exhibits exponential growth in the $L_2$ norm of the flow quantities, the MLE is obtained by a fit to this exponential, following the procedure described in \cite{Kuptsov}.\\
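As a minimal, self-contained sketch of this procedure (using the logistic map rather than the Navier-Stokes equations; all parameters here are illustrative), the MLE can be estimated by evolving a tangent-linear perturbation and averaging its logarithmic growth rate, which is equivalent to fitting the exponential growth of the perturbation norm. At $r=4$ the exact answer is $\ln 2$:

```python
import math

# logistic map x -> r x (1 - x); at r = 4 its maximal Lyapunov
# exponent is known analytically to be ln 2
r, x = 4.0, 0.2
n_transient, n_steps = 1000, 200000

for _ in range(n_transient):                 # discard the transient
    x = r * x * (1.0 - x)

log_growth = 0.0
for _ in range(n_steps):
    # tangent-linear step: a perturbation v is multiplied by f'(x);
    # renormalizing v each step amounts to accumulating log|f'(x)|
    fp = abs(r * (1.0 - 2.0 * x))
    log_growth += math.log(max(fp, 1e-300))  # guard against f'(x) == 0
    x = r * x * (1.0 - x)

mle = log_growth / n_steps                   # slope of the exponential fit
assert abs(mle - math.log(2.0)) < 0.05
```

The renormalization at every step plays the same role as restarting the fit before the linearized solution overflows.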
As already mentioned, the investigation of the adjoint approaches is centered on the problem of reducing sound emission of compressible turbulence jets. A space and time dependent heating/cooling source term in an area near the jet inflow acts as the control. The noise emitted by the jet is measured by a cost functional, which is defined as an integral over the square of the pressure fluctuations in the far field
\begin{equation}
\Im = \int_{\Omega} \int_T \left(p(\mathbf{x},t)-\overline{p}(\mathbf{x}) \right)^2 dt~d\Omega,
\label{eqCost}
\end{equation}
where $T$ is the control interval length in time, $\Omega$ is a small volume in the far-field of the jet and $\bar p$ denotes the temporal average of the pressure over the interval $T$. The control setup is illustrated in Fig. \ref{fig:control_setup}. Optimization studies using configurations similar to this can be found in \cite{Kim:2010,Wei:2006}.
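In discrete form, the cost functional \eqref{eqCost} reduces to a weighted sum of squared pressure fluctuations over the time samples and observation cells. A minimal sketch with made-up sample values (the step and cell sizes below are illustrative, not those of the actual simulations):

```python
import numpy as np

dt, dV = 0.5, 2.0                           # time step and cell volume (made up)
p = np.array([[1.0], [3.0], [1.0], [3.0]])  # pressure p[t, cell]: 4 samples, 1 cell

p_bar = p.mean(axis=0)                      # temporal mean per cell
J = np.sum((p - p_bar) ** 2) * dt * dV      # discrete cost functional

assert np.isclose(J, 4.0)                   # fluctuations are +-1: 4 * 1 * dt * dV
```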
\begin{figure}
\begin{center}
\includegraphics[scale=0.45,trim={0.5cm 0.3cm 0cm 0.3cm},clip]{pics/control_setupbw.png}
\caption{An illustration of the control setup.}
\label{fig:control_setup}
\end{center}
\end{figure}
As far as the gradient evaluation method is concerned, the adjoint method has been chosen for its computational efficiency. Based on the flow solver introduced previously, both discrete and continuous adjoint versions have been developed. The implementation details of the continuous adjoint can be found in \cite{Marinc2012} and are not repeated in this paper for the sake of brevity. The hand-derived discrete adjoint solver, on the other hand, involves some subtle details. Therefore, the key implementation issues regarding the development of the discrete adjoint solver are briefly introduced in the following.
Within a time iteration, $N$ Runge-Kutta sub-steps together with filtering, control and initial conditions can be written generally as
\begin{eqnarray}
\bold k_0 &=& \bold 0 \\
\bold u_0 &=& \bold u_{init}\\
\bold k_s &=& \alpha_{s-1}\bold k_{s-1} + \Delta t \left[
\bold R(\bold u_{s-1}) + \sum_{i=0}^M \gamma_{s-1,i}\mathbf\Phi_i\right]\\
\bold u_s &=& F_s[\bold u_{s-1} + \beta_{s-1}\bold k_s],\qquad s \in \{1,\ldots,N\},
\end{eqnarray}
where $\bold R$ is the right hand side (RHS) of the discrete Navier-Stokes equations,
$F_s$ is the discrete representation of a filter operator at step $s$ in case of LES,
$\mathbf\Phi_i$ are $M+1$ control vectors and $\gamma_{s,i}$ are scalars chosen such that a linear
interpolation between the control vectors is achieved.
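For illustration, the weights $\gamma_{s,i}$ can be constructed as follows for equally spaced control knots over the horizon $[0,T]$ (a sketch under the assumption of uniform knot spacing; the function and variable names are ours, not the solver's):

```python
import numpy as np

# M + 1 control knots equally spaced over [0, T]; stage s at time t_s gets
# weights gamma[s, i] so that sum_i gamma[s, i] * Phi_i linearly
# interpolates between the two enclosing knots
def interp_weights(t_s, T, M):
    gamma = np.zeros(M + 1)
    theta = t_s / T * M          # position in knot-index space
    i = min(int(theta), M - 1)   # index of the left knot
    w = theta - i
    gamma[i], gamma[i + 1] = 1.0 - w, w
    return gamma

T, M = 10.0, 4
g = interp_weights(3.75, T, M)   # halfway between knots 1 (t=2.5) and 2 (t=5.0)
assert np.isclose(g[1], 0.5) and np.isclose(g[2], 0.5)
assert np.isclose(g.sum(), 1.0)  # weights form a partition of unity
```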
Given some cost functional $\Im = \sum_{s=0}^N \Im_s(\bold u_s)$
the Lagrangian is defined as
\begin{eqnarray}
L &=&\sum^N_{s=0}\Im_{s}(\bold u_{s}) - \sum^N_{s=1}\boldsymbol{\xi}^T_s\left[
\bold k_s - \alpha_{s-1}\bold k_{s-1} - \Delta t \left\{
\bold R(\bold u_{s-1}) + \sum^M_{i=0}\gamma_{s-1,i}\mathbf\Phi_i\right\}\right]\nonumber\\
&&-\sum^N_{s=1}\boldsymbol{\omega}^T_s(\bold u_{s} - F_s[\bold u_{s-1} + \beta_{s-1}\bold k_s])
- \boldsymbol{\xi}^T_0 \bold k_0 - \boldsymbol{\omega}^T_0 [\bold u_{0} - \bold u_{init}],
\end{eqnarray}
where the vectors $\boldsymbol{\xi}_s$ and $\boldsymbol{\omega}_s$ are the Lagrangian multipliers.
Using the variational form of this equation, transposing and rearranging the terms with respect to the variations leads to the adjoint Runge-Kutta integration scheme
\begin{eqnarray}
\boldsymbol{\omega}_N &=& \left(\left.\PD{\Im_{N}}{\bold u}\right|_{\bold u_N}\right)^T\\
\boldsymbol{\xi}_N&=& \beta_{N-1} F^T_N\boldsymbol{\omega}_N\\
\boldsymbol{\omega}_s&=&F^T_{s+1}\boldsymbol{\omega}_{s+1} + \Delta t
\left(\left.\PD{\bold R}{\bold u}\right|_{\bold u_s}\right)^T\boldsymbol{\xi}_{s+1}+
\left(\left.\PD{\Im_{s}}{\bold u}\right|_{\bold u_s}\right)^T \; s \in \{0 ,\ldots,N-1\}\label{eq:om}\\
\boldsymbol{\xi}_s &=&\alpha_{s}\boldsymbol{\xi}_{s+1} + \beta_{s-1}F^T_s \boldsymbol{\omega}_s \quad s \in \{1,\ldots,N-1\}\\
\boldsymbol{\xi}_0 &=&\alpha_{0}\boldsymbol{\xi}_1.
\end{eqnarray}
Using $\boldsymbol{\xi}_s$, the gradient of the cost functional is given by
\begin{equation}
\left(\frac{d\Im}{d\mathbf\Phi_i}\right)^T =
\left(\frac{d L}{d\mathbf\Phi_i}\right)^T = \Delta t\sum^N_{s=1}\gamma_{s-1,i}\boldsymbol{\xi}_s
,\qquad i\in \{0,\ldots,M\}.
\end{equation}
Note, that in equation (\ref{eq:om}) the transpose of the linearized Navier-Stokes operator $ \left(\left.\PD{\bold R}{\bold u}\right|_{\bold u_s}\right)^T$ appears.
When implementing the discrete adjoint equations the hardest part is usually the implementation
of this operator, as the components of the linearized Navier-Stokes operator are for most cases not known explicitly and thus its transpose cannot be computed directly.
In order to compute the transpose of some linear operator $D$ (e.g. a matrix representing a
discretized derivative operator) we utilize its expression as
\begin{equation}D =\sum^N_{i=1}D_i,\label{eq:split}\end{equation}
where $D_i$ has the entries of $D$ in its $i$th row and is zero everywhere else.
Consequently, the operator $D_i^T$ is zero except for the $i$-th column.
The product $D_i\bold a$, with some arbitrary vector $\bold a$, is computed by multiplying grid
points adjacent to the $i$th grid point with the corresponding finite difference coefficients
and evaluating the sum of these values at the $i$th grid point. In a similar fashion,
the product $D^T_i\bold a$ can be computed by setting the corresponding adjacent grid points to the
product of the $i$th grid point with a finite difference coefficient.
The implementation of the transposed linear operator-product is similar to the one described in \cite{Pando}. With the operator splitting in Eq. (\ref{eq:split}), the transpose of the full Navier-Stokes operator can be computed for each grid point separately as $D^T = \sum^N_{i=1} D_i^T$.
Consequently, different parts of the computational domain, e.g. boundary and inner schemes,
can be treated on its own. Furthermore, only the entries of the $i$th row of $D$ are needed
for computing the contribution of the $i$th grid point to the transposed right hand side. Therefore,
the derivative coefficients of adjacent grid points do not need to be known, which eases the parallel
implementation. The procedure described above may be interpreted such that the matrix vector
product is computed row-wise, while the product with the transposed matrix is computed column-wise.
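The row-wise/column-wise distinction can be made concrete for a one-dimensional second-order central-difference stencil with periodic wrap-around (an illustrative sketch, not the solver's actual operators):

```python
import numpy as np

n = 16
a = np.random.default_rng(1).standard_normal(n)
c = {-1: -0.5, +1: 0.5}   # central-difference coefficients

# explicit matrix for reference
D = np.zeros((n, n))
for i in range(n):
    for off, coef in c.items():
        D[i, (i + off) % n] = coef

# row-wise product D @ a: gather the neighbours of point i
Da = np.zeros(n)
for i in range(n):
    for off, coef in c.items():
        Da[i] += coef * a[(i + off) % n]

# column-wise product D.T @ a: scatter the value at point i to its neighbours
DTa = np.zeros(n)
for i in range(n):
    for off, coef in c.items():
        DTa[(i + off) % n] += coef * a[i]

assert np.allclose(Da, D @ a)
assert np.allclose(DTa, D.T @ a)
```

Note that the scatter loop only uses the coefficients of row $i$, exactly as described above, which is what eases the parallel implementation.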
The Navier-Stokes equations consist of several sums and products of linear operators, therefore
the identity
\begin{equation}
(AB + CD)^T = B^T A^T + D^T C^T
\end{equation}
for arbitrary matrices $A, B, C$ and $D$ has to be used to make the above implementation
applicable to the cases considered in this work. Splitting all the derivative operators within the
Navier-Stokes equations according to the preceding equation and the Eq. (\ref{eq:split}) can become a tedious task. Therefore, a program is implemented that automatically generates the source code for computing the desired matrix vector products for any given PDE.
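The transposition identity is easily verified numerically, e.g.:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C, D = (rng.standard_normal((4, 4)) for _ in range(4))
# (AB + CD)^T = B^T A^T + D^T C^T
assert np.allclose((A @ B + C @ D).T, B.T @ A.T + D.T @ C.T)
```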
\section{Validation of the Discrete Adjoint}
The implementation of the hand-derived discrete adjoint can be error-prone; therefore, several tests have been performed to check the validity of the approach used in this work. To ensure that the linearization of the Navier-Stokes equations is correctly implemented, results from the sensitivity operator were compared with the sensitivities obtained through complex differentiation of real functions:
\begin{align}
s(x)&=Re(s(x+i h))+\mathcal{O}(h^2), \\
s^{(1)}(x)&=\frac{Im(s(x+i h))}{h}+\mathcal{O}(h^2),
\end{align}
where $s$ is an arbitrary function, $i$ is the imaginary unit and $Re(\ldots)$ and $Im(\ldots)$ are the real and imaginary parts. With this formulation, the sensitivities of an arbitrary function can be computed by simply replacing real variables with complex ones in the implementation of the function. Furthermore, the sensitivity is calculated without performing subtractions, which means that the step size $h$ can be chosen very small without introducing cancellation errors.
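A small sketch of the complex-step trick on an illustrative smooth function (not one from the solver) shows why $h$ can be taken absurdly small, in contrast to finite differences:

```python
import numpy as np

def s(x):
    # illustrative smooth function; exact derivative is cos(x) * exp(sin(x))
    return np.exp(np.sin(x))

x = 1.5
h = 1e-200                               # no subtraction, so h can be tiny
cs = np.imag(s(x + 1j * h)) / h          # complex-step derivative
fd = (s(x + 1e-8) - s(x - 1e-8)) / 2e-8  # central difference, cancellation-limited
exact = np.cos(x) * np.exp(np.sin(x))

assert abs(cs - exact) < 1e-12           # accurate to machine precision
```

The central difference `fd` is limited to roughly eight significant digits by subtractive cancellation, while the complex-step result matches the exact derivative to machine precision.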
Tests showed that the difference of the sensitivities obtained by using the direct
implementation and with that using complex differentiation was of the order of machine precision.
Note that the RK iteration itself is already linear, so additional checks of its linearization are unnecessary.
The comparison between the sensitivity and discrete adjoint equations was done using the identity
\begin{equation}
\mathbf{a}^T\left(\left.\frac{\partial \bold R}{\partial\mathbf{\Phi}}\right|_{\mathbf{\Phi_{s}}}\right)^T\mathbf{b}=\mathbf{b}^T\left(\left.\frac{\partial \bold R}{\partial\mathbf{\Phi}}\right|_{\mathbf{\Phi_{s}}}\right)\mathbf{a}
\end{equation}
for randomly chosen vectors $\mathbf{a}$ and $\mathbf{b}$, with ${\bold R}$ being the Navier-Stokes
operator. This test revealed that the transpose of the linearized Navier-Stokes operator is implemented correctly and is accurate up to machine precision.
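This dot-product test can be sketched for a toy nonlinear right-hand side, with the tangent product obtained matrix-free via the complex-step trick and the adjoint product from an explicitly assembled Jacobian (the function and all names below are illustrative, not the solver's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 5, 1e-200

def R(u):
    # illustrative nonlinear "right-hand side" (not the actual solver's)
    return u * np.roll(u, 1) + np.sin(u)

u = rng.standard_normal(n)
a, b = rng.standard_normal(n), rng.standard_normal(n)

# tangent product (dR/du) a, matrix-free via the complex step
Ja = np.imag(R(u + 1j * h * a)) / h

# adjoint product (dR/du)^T b from an explicitly assembled Jacobian
J = np.column_stack([np.imag(R(u + 1j * h * e)) / h for e in np.eye(n)])
JTb = J.T @ b

# dot-product test: both implementations must agree to machine precision
assert np.isclose(a @ JTb, b @ Ja)
```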
To validate the complete implementation of the full adjoint system, note that, given some arbitrary perturbation $\mathbf{\Phi}'$ of the control, the linear response of the cost functional with respect to that perturbation can be obtained either from the solution of the sensitivity equations (the linearized state equations) or from the first-order term of a Taylor series expansion via the discrete adjoint:
\begin{equation}
\underbrace{\frac{\partial L}{\partial\mathbf{\Phi}}}_{\text{gradient}}\mathbf{\Phi}'
=\frac{d\Im}{d\mathbf{\Phi}}\mathbf{\Phi}'=
\frac{d\Im}{d\mathbf{u}}\frac{d\mathbf{u}}{d\mathbf{\Phi}}\mathbf{\Phi}'+\PD{\Im}{\mathbf{\Phi}}\mathbf{\Phi'}=
\frac{d\Im}{d\mathbf{u}}\underbrace{\mathbf{u}'}_{\text{sensitivity}}+\PD{\Im}{\mathbf{\Phi}}\mathbf{\Phi'}
\label{eqSensGradIdentity}
\end{equation}
where $L$ is the Lagrangian of the system at hand and the superscript $(\cdot)'$ denotes the variation with respect to the control.
The identity in eq. \eqref{eqSensGradIdentity} involves integration over the full control horizon and was used to validate the implementation of the discrete adjoint further.
Several tests with small Gaussian-shaped perturbations with finite temporal support were performed for the cases listed in \cite{Marinc2012}.
A selection of these tests is shown in Table \ref{tab:validate}; it becomes apparent that both sides of eq. \eqref{eqSensGradIdentity} match almost perfectly.
Although the RHS is transposed accurately up to machine precision, the full discrete adjoint over
the complete time horizon is slightly less accurate. A possible reason is the accumulation of
round-off errors during the RK iterations and over the integration of long time horizons.
Furthermore, the discrete adjoint is utilized as a reference solution for the validation of various
other adjoint formulations.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c c c c|}
\hline
case& pos.& \#RK& $\PD{L}{\mathbf{\Phi}}\mathbf{\Phi}'$ & $\PD{\Im}{\boldsymbol{u}}
{\boldsymbol{u}'} + \PD{\Im}{\mathbf{\Phi}}{\mathbf{\Phi}}'$\\
\hline
DNS2D &100& 10000 & -2.19978130134220$\cdot10^{-4}$ & -2.19978130134229$\cdot10^{-4}$\\
& 900 & 10000 & -1.44380102098536$\cdot10^{-5}$ & -1.44380102098535$\cdot10^{-5}$\\
&1700 & 10000 & -1.17132271501490$\cdot10^{-5}$ & -1.17132271501492$\cdot10^{-5}$\\
\hline
ELES3D &100 & 2400 & -5.37171660047461$\cdot10^{-6}$ & -5.37171660047416$\cdot10^{-6}$\\
&200 & 2400 & -6.73097452538381$\cdot10^{-6}$ & -6.73097452538368$\cdot10^{-6}$\\
\hline
LES3D &100 & 6400 & -2.25323809359032$\cdot10^{-4}$ & -2.25323809359047$\cdot10^{-4}$\\
&200 & 6400 & -1.70187568261517$\cdot10^{-4}$ & -1.70187568261540$\cdot10^{-4}$\\
\hline
DNS3D &100 & 7000 & -3.11011806150672$\cdot10^{-4}$ & -3.11011806150674$\cdot10^{-4}$\\
&400 & 7000 & -2.85005052892766$\cdot10^{-5}$ & -2.85005052892798$\cdot10^{-5}$\\
\hline
\end{tabular}
\caption{Comparisons of both sides of eq. (\ref{eqSensGradIdentity}) for different perturbations
and for different jet simulations. The perturbation was initiated after $pos.$ Runge-Kutta
iterations from the beginning of the control interval, extending over $\#RK$ Runge-Kutta iterations.
The maximum normalized amplitude of the perturbations was $10^{-4}$.}
\label{tab:validate}
\end{table}
The gradient is used in a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization scheme together with a line search method based on the Wolfe conditions.
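As a sketch of such an optimization loop (using SciPy's L-BFGS-B implementation, whose internal line search enforces strong Wolfe conditions, on a stand-in quadratic cost; the actual solver uses its own L-BFGS implementation):

```python
import numpy as np
from scipy.optimize import minimize

# stand-in for the cost functional Im(Phi): a badly scaled quadratic
H = np.diag([1.0, 10.0, 100.0])
phi_opt = np.array([1.0, -2.0, 0.5])

def cost_and_grad(phi):
    r = phi - phi_opt
    return 0.5 * r @ H @ r, H @ r        # cost functional and its gradient

res = minimize(cost_and_grad, x0=np.zeros(3), jac=True, method="L-BFGS-B")
assert res.success
assert np.allclose(res.x, phi_opt, atol=1e-3)
```

In the flow control setting, `cost_and_grad` would run the primal simulation for the cost and the adjoint simulation for the gradient.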
\section{Optimization Results with Discrete and Continuous Adjoint Solvers}
Figure \ref{figJet2DoptLongIter} shows the reduction of the cost functional for the {DNS2D}\ case for both continuous and discrete optimization over the number of L-BFGS iterations. The same behavior over time can be observed in Figure \ref{figJet2DoptCostTime}, which also clearly indicates the time horizon affected by the control.
It is clearly seen that the discrete adjoint optimization is able to reduce the sound pressure level (SPL) more than the continuous approach. However, in both cases it was not possible to effectively influence the SPL below $t U_j/D=100$.
It should be noted that this interval could be controlled successfully when optimizing over such shorter intervals. This was already observed in \cite{Marinc2012} for the continuous approach.
In this regard, the discrete approach was considered more promising, since differences in the numerical treatment and the boundary conditions used when discretizing the continuous adjoint equations may lead to inconsistencies with the primal flow equations. As a result of these differences, the discretized adjoint equations are not the exact dual counterpart of the discretized primal equations.
The fact that long control intervals are difficult to control even with accurate gradient information suggests that dividing the optimization problem into a set of sub-optimal problems with shorter time horizons might be more efficient. Such strategies are realized in the receding-horizon approach, as proposed and applied in \cite{Collis:2000,Marinc2012}.
When applying the procedures to the $3$D simulations, a stronger reduction in the sound-pressure levels than in the $2$D simulations has been observed. Another observation is that the continuous optimization performed marginally better in terms of cost functional reduction in this case. To check whether this behavior was triggered by the choice of the initial condition, the optimization using the discrete adjoint has been repeated starting from the restart solution obtained from the continuous adjoint optimization after $30$ iterations.
The results of this optimization study, as well as the pure discrete and continuous adjoint optimizations, are shown in Figure \ref{figJetELESoptIter}. In both cases, the cost functional can be significantly reduced, and only a negligible difference between the two approaches has been observed. This behavior supports the conjecture that the differences observed in Figure \ref{figJet2DoptLongIter} are due to the high dimensionality of the control vector, and thus the
increased complexity of the cost functional response surface.
For short control horizons, as used in this case, the performance of the optimization is similar for the continuous and discrete approaches, as both approaches still calculate accurate gradients. In both cases, a reduction of approximately $3$\,dB in the overall SPL has been achieved.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/cost_func_cont_discrete_long_interval.png}
\caption{Comparison of the cost functional over L-BFGS-iterations for case {DNS2D}\ involving a long control interval. The discrete optimization clearly outperforms the continuous optimization.}
\label{figJet2DoptLongIter}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/cost_func_comp_cont_disc_over_time.png}
\caption{Comparison of the cost functional over time for the {DNS2D}\ case with a long control interval.}
\label{figJet2DoptCostTime}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/jet_ELES_opt_iter.png}
\caption{Cost functional over L-BFGS iterations for the ELES3D case with a short control interval.}
\label{figJetELESoptIter}
\end{center}
\end{figure}
To illustrate the problems emerging from the long control horizons in more detail, Figure \ref{figJet2DabsgradVLong} shows the ``energy'' of the gradient obtained with a control vector set to zero.
It can be observed that the norm of the gradient decreases strongly with time, which hints that the problem is very ill-conditioned and is thus a source of optimization inefficiency.
Furthermore, as the control is basically a linear superposition of the gradients obtained during the optimization iterations, the control has high amplitudes only at the beginning of the control interval and is negligible for the rest. Consequently, a significant part of the control interval is practically unused.
Another problem with the long control horizons becomes apparent by investigating Figure \ref{figJet2DCostSensVLong}, which shows the quantity $\int p'^2 d\Omega$ as a measure of the strength of the linear response of the cost functional due to a control perturbation. The gradient computed in Figure \ref{figJet2DabsgradVLong} was chosen as the control perturbation ($\mathbf g:= \mathbf\Phi'=\left(\frac{d\Im}{d\mathbf{\Phi}}\right)^T$) and $p'$ constitutes the linear response of the pressure to this perturbation.
It can be observed that the linear response of the cost functional increases exponentially with time and large fluctuations can occur, leading to very large values of the gradient sensitivities in some cases.
As the gradient used for optimization contains no reliable information about the non-linear regime, the optimization scheme will in most cases select a step size for which the nonlinear terms do not dominate. Thus, the linear response of the cost functional to a perturbation gives an estimate of the change of the cost functional over one optimization iteration. This implies that, due to the strong growth of the sensitivity amplitudes over time, effectively only a short interval at the end of the simulation is actually controlled.
It should be noted that the strong amplitude growth is not related to numerical instabilities but rather to instabilities that are inherent to the linearized Navier-Stokes equations, acting on the initial and boundary conditions.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/gradient_strength_over_time.png}
\caption{Left: The quantity $\int g^2 d \tilde{\Omega}$ as a measure for the gradient ``strength'' over time, where $g$ is the gradient of the cost functional and $\tilde{\Omega}$ is the whole computational domain. Right: The same quantity in log-scale.}
\label{figJet2DabsgradVLong}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/lin_response_measure_over_time.png}
\caption{Left: The quantity $\int p^{\prime2} d\Omega$ as a measure for the linear response of the cost functional to a control perturbation. It can be seen that only times
$t \geq 100$ are influenced noticeably by the control. Right: The same quantity in log-scale.}
\label{figJet2DCostSensVLong}
\end{center}
\end{figure}
Reducing the dimensionality of the non-linear system improves the situation, thus possibly giving the impression of successful optimization over given control horizons for selected time-dependent
problems. This is illustrated using $2$D and $3$D simulations, including lower resolution LES, by artificially decreasing the control space dimension. Cases {DNS2D}, {ELES3D}, {LES3D}\ and {DNS3D}\ were optimized using the L-BFGS scheme with a control interval length of $T=70$. The cost function reduction relative to the value without control is compared for these cases in Figure \ref{figJetsInCompRes}. For a better comparison the cost function values are normalized with their initial values in each test.
It becomes obvious that the efficiency of the optimization decreases with the increasing resolution of the $3$D simulations.
This is reasonable as the range of scales resolved in the flow solution increases with the resolution, in addition to the increase in the control space dimension. This is due to the fact that the volume of the controlled area was kept fixed in order to make the different cases physically comparable, thereby increasing the control space dimension with increasing resolution. These smaller scales likely have an adverse effect on the controllability of the system.
The Courant-Friedrichs-Lewy (CFL) criterion additionally ensures that the time step decreases with increasing resolution, and consequently
a higher number of RK iterations is required. Therefore, the dimension of the control vector also increases, since the cost functional involves an integration over the time horizon. In a second experiment, the control space dimension was reduced by using interpolation between selected time steps.
An interpolation point for the control was set every second RK iteration for the cases {DNS2D}, {ELES3D}\ and {LES3D}\ and every third RK iteration for the case {DNS3D}.
Thus, the number of interpolation points in time increases with the resolution, also leading to an increased control space dimension.
The number of control variables can be reduced by choosing an independent control parameter at only every $n^{th}$ grid point.
The missing control function values in between are obtained using interpolation.
In the following, this kind of control is referred to as \gapCtrl{$\alpha\beta\gamma$}{$\delta$}, where $\alpha$, $\beta$ and $\gamma$ give the number of grid points between the sampling points for the control in the three spatial directions and $\delta$ means that control values are prescribed at every $\delta$th RK iteration.
For example, \gapCtrl{111}{1} means that the control is active at every grid point in the controlled region and at every RK iteration.
The interpolation in the spatial directions is achieved using a Catmull-Rom spline \citep{Marschner:1994}.
Directly controlling only every $n$th grid point roughly means that only wave numbers below an $n$th of the Nyquist wave number can be controlled directly.
Consequently, the choice of the gap size between control supporting points is a trade-off between controllability and the reduction of the control space dimension.
It should be noted that the highest wave numbers are either not captured by the FD derivative operators or cut off by filtering.
Therefore, wave numbers near the Nyquist limit cannot be efficiently controlled, which reduces the frequency band effectively controllable by scheme \gapCtrl{111}{1} to below the Nyquist frequency.
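To illustrate the spatial interpolation step, the following is a minimal sketch of uniform Catmull-Rom interpolation between coarse control samples (illustrative only; the solver's actual boundary treatment and the exact sampling layout are not specified in the text, and only interior segments are handled here):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline value between p1 and p2 at parameter t in [0, 1]."""
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t
    )

def densify(coarse, n):
    """Fill n - 1 interpolated values between consecutive coarse control
    points (interior segments only; boundary handling is omitted here)."""
    fine = []
    for i in range(1, len(coarse) - 2):
        for k in range(n):
            fine.append(catmull_rom(coarse[i - 1], coarse[i],
                                    coarse[i + 1], coarse[i + 2], k / n))
    fine.append(coarse[-2])
    return fine
```

Since the Catmull-Rom spline interpolates its control points exactly, the coarse control values are reproduced at the sampling locations and the in-between values vary smoothly, which is the property required for the \gapCtrl{}{} schemes.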
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/opt_runs_different_cases.png}
\caption{Different optimization runs for the cases {DNS2D}, {ELES3D}, {LES3D}\ and {DNS3D}\ }
\label{figJetsInCompRes}
\end{center}
\end{figure}
Figures \ref{figJetsInCompGapELES3D} and \ref{figJetsInCompGapDNS3D} show the history of the cost functional over optimization iterations for different gaps for the cases {ELES3D}\ and {DNS3D}\ using a control interval length of $T=70~D/U_j$. One should keep in mind that for case {DNS3D}, due to the computational cost, only a moderate number of optimization iterations with a single initial condition were performed.
Thus, the specific behavior observed in Figure \ref{figJetsInCompGapDNS3D} should be interpreted with care.
However, it can be clearly observed that the efficiency of the optimization could be increased successfully by decreasing the control space dimension for both cases.
It can be observed in Figure \ref{figJetsInCompGapDNS3D} that cases \gapCtrl{333}{3}\ and \gapCtrl{555}{8}\ perform comparably well, whereas the reduction is lower for case \gapCtrl{555}{4}.
This indicates that there is no monotonic relation between the increase in optimization efficiency and the reduction of the control space.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/ELES3D_with_interpolation.png}
\caption{Optimization histories for the {ELES3D}\ case}
\label{figJetsInCompGapELES3D}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/DNS3D_with_interpolation.png}
\caption{Optimization histories for the {DNS3D}\ case}
\label{figJetsInCompGapDNS3D}
\end{center}
\end{figure}
For the case of a DNS, all physically relevant effects are resolved, and one would expect that the linear instabilities of the system are solely determined by its physical properties.
For an LES, however, smaller scales are excluded from the numerical simulation. These scales can affect the maximum Lyapunov exponent, which we thus expect to depend on the numerical parameters chosen for the LES.
To quantify the smallest physically relevant scales, Figure \ref{figRkolmogorovLRe} shows estimates of the Kolmogorov length for different Reynolds numbers, obtained from computations performed with grids {jetG3}\ and {jetG4}.
The estimates have been calculated via the relations given in \cite{Pope}:
\begin{align}
\eta &= \left(\frac{\nu^3}{\epsilon}\right)^{1/4} \label{eqEstimationKolmogorov}\\
\epsilon &=\nu\overline{\frac{\partial \widehat{u}_i}{\partial x_j}\frac{\partial \widehat{u}_i}{\partial x_j}+\frac{\partial \widehat{u}_i}{\partial x_j}\frac{\partial \widehat{u}_j}{\partial x_i}} \\
\nu &= \frac{\mu(T_{ref})}{\rho_{ref}},
\end{align}
where $\overline{\cdots}$ denotes the Reynolds average and $\widehat{u}=u-\overline{u}$.
It should be noted that Eq. \eqref{eqEstimationKolmogorov} is derived from the incompressible Navier-Stokes equations.
However, as only unheated jets are considered in this work, the density fluctuations are expected to be small enough for Eq. \eqref{eqEstimationKolmogorov} to give a useful approximation.
One can find reasonable agreement with the length estimates given in \cite{Bogey:2006}.
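Eq.~\eqref{eqEstimationKolmogorov} and the corresponding Kolmogorov time scale can be sketched directly (the numerical values below are illustrative placeholders, not values from the simulations):

```python
def kolmogorov_length(nu, eps):
    """Kolmogorov length eta = (nu^3 / eps)^(1/4): the smallest physically
    relevant length scale, given kinematic viscosity nu and dissipation eps."""
    return (nu ** 3 / eps) ** 0.25

def kolmogorov_time(nu, eps):
    """Kolmogorov time tau = (nu / eps)^(1/2): the smallest physically
    relevant time scale."""
    return (nu / eps) ** 0.5
```

For example, with the (hypothetical) values $\nu = 10^{-6}$ and $\epsilon = 10^{-2}$, one obtains $\eta = 10^{-4}$ and $\tau = 10^{-2}$ in the corresponding non-dimensional units.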
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/Kolmogorov_scale.png}
\caption{Kolmogorov length estimations for round jets with different Reynolds numbers.}
\label{figRkolmogorovLRe}
\end{center}
\end{figure}
To illustrate the impact of the MLE on the optimization, the case {jetG1}\ has been optimized on two intervals with different temporal lengths of $T=60$ and $T=143$.
Figure \ref{figR1optCalcCost} shows the reduction of the cost function over time after eight optimization iterations for these two interval lengths.
Again it is clear that only a final interval of length $\Delta T\approx 40$ is successfully
controlled.
The figure also shows an exponential function growing with the rate of the MLE.
It can be seen that the length of the effectively controlled interval corresponds roughly to the average linear response of the cost functional as estimated by this MLE.
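The link between the MLE and the controllable interval length can be made explicit with a small sketch: a control perturbation growing like $e^{\lambda t}$ is amplified by a factor $e^{\lambda \Delta t}$ over a horizon $\Delta t$, so an assumed tolerance on this amplification yields an effective horizon. The rate and tolerance below are hypothetical illustration values, not quantities reported in the text:

```python
import math

def controllable_horizon(mle, amplification):
    """Horizon over which a perturbation growing like exp(mle * t) is
    amplified by at most the given factor: t = ln(amplification) / mle."""
    return math.log(amplification) / mle
```

For instance, a (hypothetical) MLE of $0.1$ and a tolerated amplification of $e^{4}$ give a horizon of $40$ time units, of the same order as the effectively controlled interval observed here.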
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/jetG1costfunctionreduction.png}
\caption{Reduction of the cost function over time for control intervals with length $T = 60$ and $T = 143$ and the grid jetG1.}
\label{figR1optCalcCost}
\end{center}
\end{figure}
In the following, the maximum Lyapunov exponent is determined as a function of the Reynolds number.
Due to the easier access to theory, a forced isotropic turbulence case is considered first, in which the dissipation rate $\epsilon$ is fixed.
It has been reasoned that the MLE is proportional to the inverse of the smallest time scale present in the computation \citep{Crisanti:1993}. Using the relations $\tau=\sqrt{\frac{\nu}{\epsilon}}$ and $\nu=\frac{1}{Re}$ (non-dimensionalized), one
obtains the estimate $\tau\propto \sqrt{\frac{1}{Re}}$ and hence $\mathrm{MLE}\propto\sqrt{Re}$, where $\tau$ is the Kolmogorov time scale, the smallest physical time scale in the flows considered.
In Figure \ref{figIsoLyapunovRe}, the expected behavior can be roughly seen for smaller Reynolds numbers ($Re\lesssim 4000$), where one observes a scaling with $Re^{0.6}$. Note that the above derivation does not take compressibility effects into account, which may result in the observed difference in the scaling behavior.
For larger Reynolds numbers the MLE reaches a plateau as soon as resolution is insufficient to
resolve the required length and time scales, limiting the MLE.
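The observed scaling can be checked against the dimensional estimate by fitting a log-log slope between MLE samples at two Reynolds numbers; the estimate $\mathrm{MLE}\propto 1/\tau \propto \sqrt{Re}$ corresponds to a slope of $0.5$, versus the $0.6$ observed here. The sample values below are hypothetical:

```python
import math

def scaling_exponent(re1, mle1, re2, mle2):
    """Log-log slope of MLE versus Reynolds number between two samples;
    the dimensional estimate MLE ~ 1/tau ~ sqrt(Re) gives a slope of 0.5."""
    return math.log(mle2 / mle1) / math.log(re2 / re1)
```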
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/Lyapunov_exponent.png}
\caption{Lyapunov exponent with maximal real part versus Reynolds number for forced isotropic turbulence with different resolutions and Reynolds numbers.}
\label{figIsoLyapunovRe}
\end{center}
\end{figure}
Figure \ref{figR3LyapunovRe} shows the MLE, now determined for the round jet calculations
providing information for different resolutions and Reynolds numbers.
The figure shows the effect of increasing the resolution for a fixed Reynolds number of $Re=10000$.
As is to be expected the MLE increases with increasing resolution, supporting the behavior
seen in the optimization calculations above (Figure \ref{figJetsInCompRes}).
As already found for the forced isotropic turbulence, the increase of the MLE with increasing Reynolds number reaches a saturation for very large Reynolds numbers, where the smallest scales are not sufficiently resolved. Still, both cases jetG3 and jetG4 considered for this investigation indicate an almost linear scaling with the Reynolds number.
In contrast to the isotropic turbulence case, where intrinsic compressibility effects are dominant,
these jet cases have strongly varying fluid properties, changing the Kolmogorov length and time scales.
As a consequence, the increase of the MLE is larger than estimated above for the isotropic turbulence
simulations.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{pics/Lyapunov_round_jet.png}
\caption{Lyapunov exponent with maximal real part versus Reynolds number for the round jet with different
resolutions.}
\label{figR3LyapunovRe}
\end{center}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
LES-model & max. Lyapunov exponent \\\hline
expl. filter ($10^{th}$ order) & $9.85\cdot 10^{-2}$ \\
DMM1 from \cite{Martin:2000} & $3.75\cdot 10^{-2}$ \\
Smagorinsky (C=0.01) & $3.12\cdot 10^{-2}$ \\
dynamic Smagorinsky & $1.78\cdot 10^{-2}$ \\
Smagorinsky (C=0.02) & $1.45\cdot 10^{-2}$ \\\hline
\end{tabular}
\caption{Maximal Lyapunov exponent for case {jetG2}\ with different LES-models.}
\label{tabMaxLyapunovLES}
\end{table}
Table \ref{tabMaxLyapunovLES} lists the MLE for case {jetG2}\ with different LES-models.
The LES-models utilized are the direct filtering approach to the approximate deconvolution model (ADM, \cite{Stolz:1999,Mathew:2003,Foysi:2010a}), based on a $10^{th}$ order low-pass filter,
a variant of a DMM (Dynamic Mixed Model, see DMM1 in \cite{Martin:2000} for details)
and the standard and dynamic Smagorinsky model \cite{Martin:2000,Bogey:2006}.
The constant $C$ in the Smagorinsky model was either set to fixed values or estimated using the dynamic procedure. With the dynamic procedure, the constant reached a maximum of about $0.02$, located in the fully turbulent region of the jet, in accordance with results from \cite{Bogey:2006}.
It can be observed that the MLE varies considerably between the different LES models.
A general tendency is that the MLE decreases with increasing expected dissipation of the model. The results also suggest that a simulation performed with the dynamic Smagorinsky model can be controlled much more efficiently than a corresponding simulation using the filter variant of the approximate deconvolution model, which is less dissipative. The Smagorinsky models tend to overestimate dissipation according to \cite{Martin:2000,Bogey:2006}, and thus the increased optimization efficiency is accompanied by a loss in physical accuracy. The results are in line with the observed reduction of the cost functional in Figure \ref{figJetsInCompRes}, where a better control performance was seen for the simulations with lower resolution capabilities. They also agree with the control simulations artificially reducing the control space by using only a subset of the control points (Figures \ref{figJetsInCompGapELES3D} and \ref{figJetsInCompGapDNS3D}).
\section{Exact Tangent-Linear Solver by Algorithmic Differentiation}
In order to investigate in detail the very large sensitivity values, which have been observed in optimizations with large control horizons, the primal flow solver has been differentiated using Algorithmic Differentiation (AD) \citep{GriewankBook} techniques. In this way a tangent-linear solver has been generated, which fully retains the features of the primal flow solver. Moreover, the AD-based tangent-linear solver delivers exact sensitivities of the objective function at any convergence level achieved by the primal simulation. Although the tangent-linear solver requires a run-time that increases linearly with the number of control parameters, it is still a suitable tool to evaluate sensitivities for large control horizons and to assess the situation. For LES/DNS type problems with a large control horizon, the AD-based adjoint solver quickly runs out of memory and is therefore not feasible as a sensitivity evaluation method. The tangent-linear method, on the other hand, enables sensitivity evaluation of large-scale problems at a reasonable memory demand. In the following, we briefly introduce the forward mode of Algorithmic Differentiation, which is used to generate the tangent-linear code used in this study. We also show that the results obtained with this code are mathematically equivalent to those of the adjoint mode.
Algorithmic Differentiation, also known as Automatic Differentiation, is the name given to a set of techniques developed for the differentiation of functions that are implemented as computer programs. Given a computer code, the AD tool generates a differentiated code that computes the derivatives of the output variables with respect to the input variables. The basic idea behind AD is that the given code can be interpreted as a sequence of simple elementary operations involving additions, multiplications and the use of intrinsic functions like $\sin$, $\exp$, etc. Assuming that the computer code is composed of piece-wise differentiable functions, the derivative code is then obtained by applying the chain rule of differentiation sequentially to each of these elementary operations.
In general, the chain rule can be applied in two ways to a given set of elementary operations. The first way, which appears to be more natural, is the so-called forward or tangent-linear mode of AD. In the forward mode, the chain rule is applied to every operation in a sequence that starts from the input parameters and ends with the output parameters. Each operation in the data flow is thereby differentiated with respect to a specified direction vector. The resulting derivative expressions are then evaluated simultaneously with the operations of the original function. In contrast to the forward mode, the reverse or adjoint mode of AD applies the chain rule in the reverse of the order in which the operations are performed in the original computer program. Note that both the forward and reverse modes produce exactly the same result.
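The forward mode can be illustrated with a minimal dual-number sketch, in which every elementary operation propagates a derivative alongside its value. This is purely illustrative; the actual tangent-linear code in this work is generated by source transformation, not by operator overloading:

```python
import math

class Dual:
    """Minimal forward-mode AD value: v carries the primal value, d the
    derivative propagated by the chain rule through each elementary op."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

def sin(x):
    # chain rule for the intrinsic: d(sin u) = cos(u) * du
    return Dual(math.sin(x.v), math.cos(x.v) * x.d)

# d/dx [x*sin(x) + x] at x = 0 equals sin(x) + x*cos(x) + 1 = 1
x = Dual(0.0, 1.0)   # seed with the differentiation direction
y = x * sin(x) + x
```

Seeding `x.d = 1.0` corresponds to choosing the direction vector $\dot X$; one forward evaluation yields one directional derivative, which is exactly why the run-time of the tangent-linear solver grows with the number of control parameters.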
To simplify the derivation of the tangent-linear method, we consider an objective function over a time interval $[0,T]$ that is to be minimized or maximized
\begin{equation}
J = J(Y_0,Y_1,\ldots,Y_N,X),
\end{equation}
where $Y_i$ is the state vector at the time iteration $i$ and $X$ is the control vector. $N$ is the number of time iterations performed in the primal simulation to reach the final time $t=T$. Note that the above equation is the most general form, in which the control interval starts from $t=0$ and goes up to $t=T$.
Assuming that we have discrete state solutions $Y_0,Y_1,Y_2,\ldots, Y_N$ over the time interval $[0,T]$, the true dependency between the objective function $J$ and the design vector $X$ is given by the relation
\begin{equation}
J = J(Y_0,Y_1,\ldots,Y_N,X), \quad \mbox{such that} \quad Y_{k+1}=G(Y_k,X),\; k=0,1,\ldots, N-1,
\label{eq:J}
\end{equation}
where $G$ is a mapping of a state space into itself, i.e., a single time iteration of the flow solver including all the intermediate Runge-Kutta steps of the temporal scheme. In our setting, it includes all the operations within a time iteration, e.g, spatial discretization terms, boundary treatment, filtering etc.
If we differentiate the objective function $J$ with respect to the design parameter vector $X$, we get
\begin{equation}
\frac{dJ}{dX}= \frac{\partial J}{\partial X} + \frac{\partial J}{\partial Y_0}\frac{d Y_0}{d X} + \frac{\partial J}{ \partial Y_1}\frac{d Y_1}{d X} + \ldots +\frac{\partial J}{\partial Y_N}\frac{d Y_N}{d X}.
\label{eq:dJdX}
\end{equation}
The initial solution $Y_0$ does not depend on the control $X$, so the above equation simplifies to
\begin{equation}
\frac{dJ}{dX}= \frac{\partial J}{\partial X} + \frac{\partial J}{ \partial Y_1}\frac{d Y_1}{d X} + \ldots +\frac{\partial J}{\partial Y_N}\frac{d Y_N}{d X}.
\label{eq:dJ_dX}
\end{equation}
On the other hand, by differentiating the discrete mappings $Y_{k+1}=G(Y_k,X),\; k=0,1,\ldots,N-1$, we get
\begin{equation}
\frac{dY_1}{dX} = \frac{\partial G(Y_0,X)}{\partial X}, \quad \frac{dY_{i+1}}{dX} = \frac{\partial G(Y_{i},X)}{\partial Y_{i}} \frac{dY_{i}}{dX} + \frac{\partial G(Y_i,X)}{\partial X},\; i=1,\ldots,N-1.
\label{eq:dYdXchain}
\end{equation}
The directional derivative for a given arbitrary differentiation direction $\dot X$ is given by
\begin{equation}
\frac{dJ}{dX} \dot X = \frac{\partial J}{\partial X} \dot X + \sum_{i=1}^N \frac{\partial J}{ \partial Y_i} \frac{d Y_i}{dX} \dot X,
\label{eq:dJ_dX_sum1}
\end{equation}
If we denote the matrix-vector product $\frac{dY_i}{dX} \dot{X}$ by $\dot Y_i$, the above equation can be rewritten as
\begin{equation}
\frac{dJ}{dX} \dot X = \frac{\partial J}{\partial X} \dot X + \sum_{i=1}^N \frac{\partial J}{ \partial Y_i} \dot Y_i,
\label{eq:dJ_dX_sum2}
\end{equation}
where $\dot Y_i$ is given by the recursion
\begin{equation}
\dot Y_1 = \frac{\partial G(Y_0,X)}{\partial X} \dot{X}, \quad \dot Y_i = \frac{\partial G(Y_{i-1},X)}{\partial Y_{i-1}} \dot Y_{i-1} + \frac{\partial G(Y_{i-1},X)}{\partial X} \dot X,\; i=2,\ldots,N.
\label{eq:Yi_recursion}
\end{equation}
The tangent-linear code that performs the solution procedure given in Eqs. (\ref{eq:dJ_dX_sum2}) and (\ref{eq:Yi_recursion}) can be generated automatically by applying AD techniques on the source code of the primal solver in a black-box fashion. In this way, one obtains a tangent-linear solver that gives exact sensitivity results for a defined forward trajectory of $Y_0,Y_1,\ldots,Y_N$. In the present work, we have used the source transformation tool Tapenade for the differentiation \citep{TapenadeRef13}. The vector $\dot Y_i$ corresponds to the exact linearization of the solution procedure at the time iteration $i$ for the given differentiation direction $\dot X$. One obvious disadvantage of the tangent-linear method, which is outlined above, is the computational cost. Since the forward propagation of derivatives given in Eq. \eqref{eq:dJ_dX_sum2} can be achieved only for a single direction vector $\dot X$ at a time, the procedure must be repeated for all the entries of the gradient vector $dJ/dX$.
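The forward recursion of Eq.~\eqref{eq:Yi_recursion} can be sketched for a hypothetical scalar time-step map $G(y,x) = a\,y + x$ with objective $J = Y_N$ (the real solver's $G$ comprises a full RK time iteration; this toy map is only meant to show the propagation pattern):

```python
def tangent_linear(y0, x, n, a=0.9, xdot=1.0):
    """Forward propagation of the tangent recursion for the toy time-step
    map G(y, x) = a*y + x with J = Y_N; here dG/dy = a and dG/dx = 1."""
    y, ydot = y0, 0.0                  # Y_0 is independent of the control X
    for _ in range(n):
        ydot = a * ydot + 1.0 * xdot   # tangent of Y_i from tangent of Y_{i-1}
        y = a * y + x                  # primal step Y_i = G(Y_{i-1}, X)
    return y, ydot
```

For this linear map, the exact sensitivity is $dJ/dX = 1 + a + \ldots + a^{N-1}$, which the recursion reproduces; as in the text, one forward sweep yields the derivative for a single direction $\dot X$ only.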
To show that the sensitivities obtained from the tangent-linear solver are equivalent to the adjoint results, one can take the transpose of Eq. \eqref{eq:dJ_dX} and multiply it by a weight vector $\bar J$
\begin{equation}
\left( \frac{dJ}{dX} \right)^\top \!\!\bar J = \left( \frac{\partial J}{\partial X} \right)^\top \!\!\bar J
+ \left( \frac{d Y_1}{d X} \right)^\top \left( \frac{\partial J}{ \partial Y_1} \right)^\top \!\!\bar J
+ \ldots + \left( \frac{d Y_N}{d X} \right)^\top \left( \frac{\partial J}{ \partial Y_N} \right)^\top\!\! \bar J.
\label{eq:dJdX_transpose}
\end{equation}
Similarly transposing the relations for $dY_i/dX$ yields
\begin{small}
\begin{eqnarray}
\left( \frac{dY_1}{dX} \right)^\top&=& \left( \frac{\partial G(Y_0,X)}{\partial X} \right)^\top, \quad \left( \frac{dY_{i+1}}{dX} \right)^\top = \left( \frac{dY_{i}}{dX} \right)^\top \frac{\partial G(Y_{i},X)^\top}{\partial Y_{i}}+ \left( \frac{\partial G(Y_i,X)}{\partial X} \right)^\top,\nonumber\\
&&\hfill i=1,\ldots,N-1.
\label{eq:dYdXchain_T}
\end{eqnarray}
\end{small}
Combining both equations, we obtain the adjoint sensitivity equation
\begin{small}
\begin{eqnarray*}
\left( \frac{dJ}{dX} \right)^{\!\!\top}\!\!\! \bar J\!\!\! & = &\!\!\! \left( \frac{\partial J}{\partial X} \right)^\top \bar J + \frac{ \partial G(Y_0,X)^\top}{\partial X} \frac{\partial J^\top}{\partial Y_1} \bar J \nonumber\\
\!\!\!&+&\!\!\!\left( \frac{\partial G(Y_1,X)^\top}{\partial X} +\frac{\partial G(Y_0,X)^\top}{\partial X} \frac{\partial G(Y_1,X)^\top}{\partial Y_1} \right) \frac{\partial J^\top}{\partial Y_2} \bar J + \left( \frac{\partial G(Y_2,X)^\top}{\partial X} \right.\nonumber \\
\!\!\!& + &\!\!\! \left. \frac{\partial G(Y_1,X)^\top}{\partial X} \frac{\partial G(Y_2,X)^\top}{\partial Y_2} + \frac{\partial G(Y_0,X)^\top}{\partial X} \frac{\partial G(Y_1,X)^\top}{\partial Y_1} \frac{\partial G(Y_2,X)^\top}{\partial Y_2} \right) \!\!\frac{\partial J^\top}{\partial Y_3}\! \bar J \nonumber \\
\!\!\!& + &\!\!\! \ldots \nonumber \\
\!\!\!& + &\!\!\! \left( \frac{\partial G(Y_{N-1},X)^{\!\!\top}}{\partial X} + \frac{\partial G(Y_{N-2},X)^{\!\!\top}}{\partial X} \frac{\partial G(Y_{N-1},X)^{\!\!\top}}{\partial Y_{N-1}} + \ldots \right. \nonumber \\
\!\!\!&+ &\!\!\! \frac{\partial G(Y_1,X)^{\!\!\top}}{\partial X} \frac{\partial G(Y_2,X)^{\!\!\top}}{\partial Y_2} \!\ldots\! \frac{\partial G(Y_{N-2},X)^{\!\!\top}}{\partial Y_{N-2}} \frac{\partial G(Y_{N-1},X)^{\!\!\top}}{\partial Y_{N-1}} \nonumber\\
\!\!\!&+&\!\!\!\left. \frac{\partial G(Y_0,X)^{\!\!\top}}{\partial X}\frac{\partial G(Y_1,X)^{\!\!\top}}{\partial Y_1} \!\ldots\! \frac{\partial G(Y_{N-3},X)^{\!\!\top}}{\partial Y_{N-3}} \frac{\partial G(Y_{N-2},X)^{\!\!\top}}{\partial Y_{N-2}} \frac{\partial G(Y_{N-1},X)^{\!\!\top}}{\partial Y_{N-1}} \right)\!\! \frac{\partial J^{\!\!\top}}{\partial Y_N} \bar J.
\label{eq:dJ_dX_long_T}
\end{eqnarray*}
\end{small}
Similar to the tangent-linear solver, the adjoint code that performs the solution procedure given in the above equation can be generated automatically by applying AD techniques to the source code of the primal solver in a black-box fashion, with the only difference that this time the differentiation must be done in reverse mode. Since the objective function is a scalar, the weight $\bar J$ is also a scalar and can simply be chosen as $1$. In this way, the complete gradient vector $dJ/dX$ can be evaluated with only a single run of the adjoint code. The memory demand, on the other hand, increases linearly with the number of time iterations performed in the primal simulation, as the state vectors $Y$ must be available in the adjoint evaluation in reverse order, i.e., $Y_{N-1},Y_{N-2}, \ldots, Y_1$.
Note that from the above equations we can easily derive the relationship
\begin{equation}
\bar X^\top \dot X = \bar J \, \dot J,
\end{equation}
which relates the adjoint control sensitivities $\bar X = \left(dJ/dX\right)^\top \bar J$ to the directional derivative $\dot J = \left(dJ/dX\right) \dot X$ in the direction $\dot X$. In conclusion, by applying AD techniques it is guaranteed that the tangent-linear sensitivity results are equivalent to the adjoint results.
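For comparison with the forward mode, reverse accumulation can be sketched for a toy scalar time-step map $G(y,x) = a\,y + x$ with $J = Y_N$ (a hypothetical stand-in for a full solver iteration). Note how the forward states are stored before the backward sweep, mirroring the linearly growing memory demand discussed above:

```python
def adjoint_gradient(y0, x, n, a=0.9, jbar=1.0):
    """Reverse accumulation for the toy map G(y, x) = a*y + x with J = Y_N:
    run the primal forward storing the trajectory, then sweep backwards."""
    ys = [y0]
    for _ in range(n):                  # forward sweep (states must be stored)
        ys.append(a * ys[-1] + x)
    ybar, xbar = jbar, 0.0              # dJ/dY_N = 1 for J = Y_N
    for _ in range(n):                  # reverse sweep over the iterations
        xbar += 1.0 * ybar              # (dG/dX)^T contribution
        ybar = a * ybar                 # (dG/dY)^T propagation
    return xbar
```

A single backward sweep returns the full gradient entry (here $1 + a + \ldots + a^{N-1}$), matching the tangent-linear result while trading memory for run-time.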
\section{Tangent-Linear Sensitivities with Long Time Horizon}
For the validation of the AD-based tangent-linear solver, the round jet configuration with $Re=10000$ and $Ma=0.9$ has been used. As far as the computational grid is concerned, a structured grid with $448 \times 216 \times 288$ grid points is used (JetG2 test case). For the forcing term, to simplify the analysis, we take a scalar term that is added to the energy equation
\begin{equation}
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x_i} p u_i + \frac{\partial}{\partial x_i} \lambda (\gamma - 1) \frac{\partial T}{\partial x_i} - (\gamma - 1 ) p \frac{\partial}{\partial x_i} u_i + (\gamma - 1 ) \tau_{i j} \frac{\partial}{\partial x_j} u_i + \rho R s(x,t) g,
\end{equation}
where $g$ is the scalar forcing term and $s$ is the windowing function to ensure a smooth transition from uncontrolled to
controlled areas in the flow domain. The windowing function is given by
\begin{equation}
s(x,t) = s_{window}(x,2\Delta_x)s_{window}(z,2\Delta_z)s_{window}(t,5\Delta_t)
\end{equation}
and
\begin{equation}
s_{window}(k,\Delta) = \frac{1}{2} \left( \operatorname{erf}\!\left((k-k_{start}-2\Delta)/\Delta\right) - \operatorname{erf}\!\left((k-k_{end}+2\Delta)/\Delta\right) \right),
\end{equation}
where $\operatorname{erf}(x)$ is the error function, $\Delta$ is the grid spacing in the spatial directions and the time-step in the temporal direction, and $k_{start}$ and $k_{end}$ are the start and end positions/times of the controlled area. With these definitions, the control can be interpreted as a temperature forcing term added to the energy equation.
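The one-dimensional windowing factor can be sketched as follows (here $k_{start}$ and $k_{end}$ are passed explicitly, whereas the text treats them as implicit parameters of the controlled area):

```python
from math import erf

def s_window(k, k_start, k_end, delta):
    """Error-function window: ~0 outside [k_start, k_end], ~1 well inside,
    with smooth transitions over a few multiples of delta."""
    return 0.5 * (erf((k - k_start - 2.0 * delta) / delta)
                  - erf((k - k_end + 2.0 * delta) / delta))
```

The full window $s(x,t)$ is then the product of such factors in the spatial and temporal directions, guaranteeing a smooth onset and decay of the forcing.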
In Figure \ref{fig:ADvsFD}, the time histories of the tangent-linear sensitivities obtained by AD and by forward finite differences are shown. It can be observed that, in contrast to the AD sensitivities, the finite difference results remain bounded and do not tend to overflow. In the right figure, the same trajectories are shown up to $3000$ iterations. It can be observed that both curves coincide perfectly within this narrow time window, which is an indication that the tangent-linear sensitivities are correct. Interestingly, at around the $2500$th time iteration, the tangent-linear results start to deviate significantly from the finite difference results. In other words, the two trajectories bifurcate. At later iterations, the tangent-linear sensitivity values simply grow because of the ``butterfly effect''. The FD sensitivities, however, do not show this behavior and are therefore certainly unreliable after a certain number of time iterations.
\begin{figure}
\begin{minipage}{0.5\textwidth}
\includegraphics[scale=0.4]{pics/AD_vs_FDbw.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[scale=0.4]{pics/AD_vs_FD_zoomed_3000bw.png}
\end{minipage}
\caption{Comparison of AD and FD sensitivities (Left: complete time horizon, right: initial $3000$ iterations)}
\label{fig:ADvsFD}
\end{figure}
We can conclude that the sensitivity trajectory obtained from the LES simulation shows a behavior very similar to that of the Lorenz system \citep{Lorenz}. In both cases, the perturbed trajectory of the objective function stays very close to the original trajectory during the initial time steps. The gap between the two trajectories, however, tends to increase as more time iterations are taken in the forward-in-time integration. In both cases, the sensitivity results suffer from the so-called ``butterfly effect'' and tend to overflow, while the objective function values remain bounded. Unfortunately, this phenomenon renders gradient-based approaches for flow control problems almost useless, as the gradient information becomes unreliable if the control horizon is taken large. On the other hand, choosing a small time horizon may not be appropriate either, as far as the resolution of the physics is concerned. The use of AD techniques, unfortunately, does not alleviate this problem. On the contrary, it may even worsen the situation, since all kinds of instabilities present in the solution (either physical or numerical) are exactly differentiated by the underlying AD library. Differentiating noisy parts of the primal solution corrupts the sensitivity results, as the exact derivative values tend to infinity. More advanced regularization techniques like Least Squares Shadowing (LSS) \citep{WANG2014} are a promising way to overcome this problem; they come, however, at the expense of increased computational cost.
\section{Conclusions}
In this paper, results of several adjoint-based flow control studies performed on turbulent plane and round jet configurations are presented. For the flow simulations, an LES/DNS finite difference solver with a high-order spatial scheme has been used. Based on this flow solver, continuous and discrete adjoint versions have been developed to be able to evaluate gradient vectors efficiently. From the results, it has been observed that the cost functional can be significantly reduced using an adjoint-based L-BFGS optimization algorithm, provided that the control horizons are kept small enough. A significant difference between the continuous and discrete approaches has not been observed, as both approaches are able to evaluate accurate sensitivity gradients. For large control horizons, on the other hand, it has been observed that flow control is no longer possible, as the sensitivity gradients tend to grow in time. To be able to make a better assessment of the situation and exclude the effect of all possible consistency errors in the adjoints, the same flow solver has been differentiated using machine-accurate AD techniques in forward mode. In this way, an exact tangent-linear solver has been developed that corresponds to an exact linearization of the flow solver with all its underlying features. The results obtained with the tangent-linear solver gave a similar picture. In the initial time iterations, the tangent-linear results matched the finite difference approximations perfectly. After a certain time iteration, however, the two trajectories started to deviate from each other. Similar to the adjoint results, the tangent-linear sensitivities also grow rapidly with increasing simulation time. From the results, it can be concluded that beyond a certain size of the control horizon, adjoint-based flow control is no longer viable without regularization techniques.
\section*{Acknowledgments}
This research was funded by the German Research Foundation (DFG) under the project numbers, $FO \; 674/4-1$ and $GA \; 857/9-1$. The authors gratefully acknowledge the computing time granted by the Allianz f\"{u}r Hochleistungsrechnen Rheinland-Pfalz (AHRP), Germany.
\section*{References}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{C}{reating} dynamic general clothes or garments on animated characters has been a long-standing problem in computer graphics (CG).
In the CG industry, physics-based simulations (PBS) are used to achieve realistic and detailed folding patterns for garment animations.
However, synthesizing fine geometric details is time-consuming and requires expertise, since high-resolution meshes with tens of thousands of vertices or more are often required.
For example, 10 seconds per frame are required for the physics-based simulation of the detailed skirt animation shown in Fig.~\ref{fig:lrhrsim1}.
Not surprisingly, garment animation remains a bottleneck in many applications.
Recently, data-driven methods provide alternative solutions to fast and effective wrinkling behaviors for garments.
Depending on human body poses, some data-driven methods~\cite{wang10example,Feng2010transfer,deAguiar10Stable,santesteban2019learning, wang2019learning} are capable of generating tight cloth animations successfully.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\multicolumn{3}{c}{
\includegraphics[width=1.0\linewidth]{pictures/wireframe2_1.pdf}} \\
(a) coarse skirt & (b) tracked skirt & (c) fine skirt
\end{tabular}
\caption{\small One frame of a skirt in different representations: (a) coarse mesh (207 triangles), (b) tracked mesh (13,248 triangles) and (c) fine mesh (13,248 triangles). Both the coarse and fine meshes are obtained by simulating the skirt with a physics-based method \cite{Narain2012AAR}; the tracked mesh is obtained with physics-based simulation involving additional constraints to track the coarse mesh. The tracked mesh exhibits stiff folds, while the wrinkles in the fine simulated mesh are more realistic.}
\label{fig:lrhrsim1}
\end{figure}
Unfortunately, they are not suitable for loose garments, such as skirts, since the deformation of wrinkles cannot be defined by a static mapping from a character's pose.
Instead of relying on human poses, wrinkle augmentation on coarse simulations provides another alternative.
It utilizes fast coarse simulations to cover the high-level deformation and leverages learning-based methods to add realistic wrinkles.
Previous methods~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} commonly require dense correspondences between coarse and fine meshes, so that local details can be added without affecting global deformation.
\YL{Such methods also require coarse meshes to be sufficiently close to fine meshes, as they only add details to coarse meshes.}
To maintain the correspondences for training data and ensure closeness between coarse and fine meshes, weak-form constraints such as various test functions~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} are applied to make fine meshes track the coarse meshes,
\YL{but as a result, the obtained high-resolution meshes do not fully follow physical behavior, leading to animations that lack realism. An example is shown in Fig.~\ref{fig:lrhrsim1} where the tracked skirt (b) loses a large amount of wrinkles which should appear when simulating on fine meshes (c).}
Without requiring the constraints between coarse and fine meshes, we propose
\gl{the DeformTransformer network
to synthesize detailed thin shell animations from coarse ones, based on deformation transfer.}
This is inspired by the similarity observed between pairs of coarse and fine meshes generated by PBS. %
Although the positions of vertices from two meshes are not aligned, the overall deformation is similar, so it is possible to predict fine-scale deformation with coarse simulation results.
Most previous works~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} use explicit vertex coordinates to represent 3D meshes, which are sensitive to translations and rotations,
so they require good alignments between low- and high-resolution meshes.
In our work, we regard the cloth animations as non-rigid deformation and propose a novel representation for mesh sequences, called TS-ACAP (Temporal and Spatial As-Consistent-As-Possible) representation.
TS-ACAP is a local deformation representation, capable of representing and solving large-scale deformation problems, while maintaining the details of meshes.
Compared to the original ACAP representation~\cite{gao2019sparse}, TS-ACAP is fundamentally designed to ensure the temporal consistency of the extracted feature sequences, \YL{and meanwhile} it can maintain the original features of ACAP \YL{to cope with large-scale deformations}.
With \YL{TS-ACAP} representations for both coarse and fine meshes, we leverage a sequence transduction network to map the deformation from coarse to fine level to assure the temporal coherence of generated sequences.
Unlike existing works using recurrent neural networks (RNN)~\cite{santesteban2019learning}, we utilize the Transformer network~\cite{vaswani2017attention}, an architecture consisting of frame-level attention mechanisms for our mesh sequence transduction task.
It is based entirely on attention without recurrence modules, so it can be trained significantly faster than architectures based on recurrent layers.
With \YL{temporally consistent features and the Transformer network, our method achieves stable general cloth synthesis with fine details in an efficient manner.}
In summary, the main contributions of our work are as follows:
\begin{itemize}
\item \YL{We propose a novel framework for the synthesis of cloth dynamics, by learning temporally consistent deformation from low-resolution meshes to high-resolution meshes \gl{with realistic dynamics}, which is $10 \sim 35$ times faster than PBS \cite{Narain2012AAR}.}
\item \YL{To achieve this, we propose a \cl{temporally and spatially as-consistent-as-possible deformation representation (TS-ACAP)} to represent the cloth mesh sequences. It is able to deal with large-scale deformation, essential for mapping between coarse and fine meshes, while ensuring temporal coherence.}
\item \gl{Based on the TS-ACAP, we further design an effective neural network architecture (named DeformTransformer) by improving the Transformer network, which successfully enables high-quality synthesis of dynamic wrinkles with rich details on thin shells and maintains temporal consistency of the generated high-resolution mesh sequences.}
\end{itemize}
We qualitatively and quantitatively evaluate our method for various cloth types (T-shirts, pants, skirts, square and disk tablecloth) with different motion sequences.
In Sec.~\ref{sec:related_work}, we review the work most related to ours. We then give the detailed description of our method in Sec.~\ref{sec:approach}.
Implementation details are presented in Sec.~\ref{sec:implementation}. We present experimental results, including extensive
comparisons with state-of-the-art methods in Sec.~\ref{sec:results}, and finally, we draw conclusions and \YL{discuss future work} in Sec.~\ref{sec:conclusion}.
\section{Related work} \label{sec:related_work}
\subsection{Cloth Animation}
Physics-based techniques for realistic cloth simulation have been widely studied in computer graphics, \YL{using methods such as} implicit Euler integrator \cite{BW98,Harmon09asynchronous}, iterative optimization \cite{terzopoulos87elastically,bridson03wrinkles,Grinspun03shell}, collision detection and response \cite{provot97collision,volino95collision}, etc.
\YL{Although such techniques can generate realistic cloth dynamics, }they are time-consuming for detailed cloth synthesis, and the robustness and efficiency of simulation systems are also of concern.
\YL{To address these, alternative methods have been developed to generate} the dynamic details of cloth animation via adaptive techniques \cite{lee2010multi,muller2010wrinkle,Narain2012AAR}, data-driven approaches \cite{deAguiar10Stable, Guan12DRAPE, wang10example, kavan11physics,zurdo2013wrinkles} and deep learning-based methods \cite{chen2018synthesizing,gundogdu2018garnet,laehner2018deepwrinkles,zhang2020deep}, etc.
Adaptive techniques \cite{lee2010multi, muller2010wrinkle} usually simulate a coarse model by simplifying the smooth regions and \YL{applying interpolation} to reconstruct the wrinkles, \YL{taking normal or tangential degrees of freedom into consideration.}
Different from simulating a reduced model with postprocessing detail augmentation, Narain {\itshape et al.} \cite{Narain2012AAR} directly generate dynamic meshes in \YL{the} simulation phase through adaptive remeshing, at the expense of increasing \YL{computation time}.
Data-driven methods have drawn much attention since they offer faster cloth animations than physical models.
With \YL{a} constructed database of \YL{high-resolution} meshes, researchers have proposed many techniques depending on the motions of human bodies with linear conditional models~\cite{deAguiar10Stable, Guan12DRAPE} or secondary motion graphs \cite{Kim2013near, Kim2008drivenshape}.
However, these methods are limited to tight garments and not suitable for skirts or cloth with more freedom.
An alternative line \YL{of research} is to augment details on coarse simulations \YL{by exploiting knowledge from a} database of paired meshes, to generalize the performance to complicated testing scenes.
In this line, in addition to wrinkle synthesis methods \YL{based on} bone clusters \cite{Feng2010transfer} or human poses \cite{wang10example} for fitted clothes, there are some approaches \YL{that investigate how to} learn a mapping from a coarse garment shape to a detailed one for general \YL{cases} of free-flowing cloth simulation.
Kavan {\itshape et al.} \cite{kavan11physics} present linear upsampling operators to \YL{efficiently} augment \YL{medium-scale} details on coarse meshes.
Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} define wrinkles as local displacements and use \YL{an} example-based algorithm to enhance low-resolution simulations.
\YL{These approaches require the} high-resolution cloth \YL{to} track \YL{the} low-resolution cloth, \YL{and thus cannot} exhibit full high-resolution dynamics.
Recently deep learning-based methods have been successfully applied for 3D animations of human \YL{faces}~\cite{cao2016real, jiang20183d}, hair \cite{zhang2018modeling, yang2019dynamic} and garments \cite{liu2019neuroskinning, wang2019learning}.
As for garment synthesis, some approaches \cite{laehner2018deepwrinkles, santesteban2019learning, patel2020tailornet} are proposed to utilize a two-stream strategy consisting of global garment fit and local \YL{wrinkle} enhancement.
L{\" a}hner {\itshape et al.} \cite{laehner2018deepwrinkles} present DeepWrinkles, \YL{which recovers} the global deformation from \YL{a} 3D scan system and \YL{uses a} conditional \YL{generative adversarial network} to enhance a low-resolution normal map.
Zhang {\itshape et al.} \cite{zhang2020deep} further generalize the augmentation method with normal maps to complex garment types as well as various motion sequences.
\YL{These approaches add wrinkles on normal maps \YL{rather than geometry}, and thus their effectiveness is restricted to adding fine-scale visual details, not large-scale dynamics.}
Based on \YL{the} skinning representation, some algorithms \cite{gundogdu2018garnet,santesteban2019learning} use neural networks to generalize garment synthesis algorithms to multiple body shapes.
\YL{In addition, other works are} devoted to \YL{generalizing neural networks} to various cloth styles \cite{patel2020tailornet} or cloth materials \cite{wang2019learning}.
In addition to tight garments dressed on characters, some deep learning-based methods \cite{chen2018synthesizing, oh2018hierarchical} are
\YL{demonstrated to work for cloth animation with higher degrees} of freedom.
Chen {\itshape et al.} \cite{chen2018synthesizing} represent coarse and fine meshes via geometry images and use \YL{a} super-resolution network to learn the mapping.
Oh {\itshape et al.} \cite{oh2018hierarchical} propose a multi-resolution cloth representation with \YL{fully} connected networks to add details hierarchically.
Since the \YL{free-flowing cloth dynamics are harder for networks to learn} than tight garments, the results of these methods have not reached the realism of PBS. \YL{Our method, based on a novel deformation representation and network architecture, has superior capabilities of learning the mapping from coarse to fine meshes, generating realistic cloth dynamics, while being much faster than PBS methods.}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth, trim=20 250 20 50,clip]{pictures/mainpicture2.pdf}
\caption{\small The overall architecture of our detail synthesis network. At data preparation stage, we generate low- and high-resolution \gl{thin shell} animations via coarse and fine \gl{meshes} and various motion sequences.
Then we encode the coarse meshes and the detailed meshes to a deformation representation TS-ACAP, respectively.
\YL{Our algorithm then} learns to map the coarse features to fine features %
\YL{by designing a DeformTransformer network that consists of temporal-aware encoders and decoders, and finally reconstructs the detailed animations.}
}
\label{fig:pipeline}
\end{figure*}
\subsection{Representation for 3D Meshes}
Unlike 2D images with regular grid of pixels, \YL{3D meshes have irregular connectivity which makes learning more difficult. To address this, existing deep learning based methods turn 3D meshes to a wide range of representations to facilitate processing~\cite{xiao2020survey},} such as voxels, images \YL{(such as depth images and multi-view images)}, point clouds, meshes, etc.
\YL{The volumetric representation has a regular structure, but it} often suffers from \YL{the problem of extremely high space and time consumption.}
Thus Wang {\itshape et al.} \cite{wang2017cnn} propose an octree-based convolutional neural network and encode the voxels sparsely.
Image-based representations including \YL{depth images} \cite{eigen2014depth,gupta2014learning} and multi-view images \cite{Su2015mvcnn,li20193d} are proposed to encode 3D models in a 2D domain.
It is unavoidable that both volumetric and image-based representations lose some geometric details.
Alternatively, geometry images are used in \cite{sinha2016deep,Sinha2017surfnet,chen2018synthesizing} for mesh classification or generation\YL{, which are obtained through cutting a 3D mesh to a topological disk, parameterizing it to a rectangular domain and regularly sampling the 3D coordinates in the 2D domain~\cite{gu2002geometry}.}
\YL{However, this representation} may suffer from parameterization distortion and seam line problems.
Instead of converting 3D meshes into other formats, recently there are methods \cite{tan2017autoencoder, tan2017variational, hanocka2019meshcnn} applying neural networks directly to triangle meshes with various features.
Gao {\itshape et al.} \cite{gao2016efficient} propose a deformation-based representation, called the rotation-invariant mesh difference (RIMD) which is translation and rotation invariant.
Based on the RIMD feature, Tan {\itshape et al.} \cite{tan2017variational} propose a fully connected variational autoencoder network to analyze and generate meshes.
Wu {\itshape et al.} \cite{wu2018alive} use the RIMD to generate
a 3D caricature model from a 2D caricature image.
However, it is expensive to reconstruct vertex coordinates from the RIMD feature due to the requirement of solving a very complicated optimization.
Thus it is not suitable for fast mesh generation tasks.
A faster deformation representation based on an as-consistent-as-possible (ACAP) formulation \cite{gao2019sparse} is further used to reconstruct meshes \cite{tan2017autoencoder}; it is able to cope with large rotations and is efficient for reconstruction.
Jiang {\itshape et al.} \cite{jiang2019disentangled} use ACAP to disentangle the identity and expression of 3D \YL{faces}.
They further apply ACAP to learn and reconstruct 3D human body models using a coarse-to-fine pipeline \cite{jiang2020disentangled}.
\YL{However, the ACAP feature is represented based on individual 3D meshes. When applied to a dynamic mesh sequence, it does not guarantee temporal consistency.}
We propose a \cl{temporally and spatially as-consistent-as-possible (TS-ACAP)} representation, to ensure both spatial and temporal consistency of mesh deformation.
Compared to ACAP, our TS-ACAP can also accelerate the computation of features thanks to the sequential constraints.
\subsection{Sequence Generation with \YL{DNNs (Deep Neural Networks)}}
Temporal information is crucial for stable and \gl{vivid} sequence generation. Previously, recurrent neural networks (RNN) have been successfully applied in many sequence generation tasks \cite{mikolov2010recurrent, mikolov2011extensions}. However, it is difficult to train \YL{RNNs} to capture long-term dependencies since \YL{RNNs} suffer from the vanishing gradient problem \cite{bengio1994learning}. To deal with this problem, previous works proposed some variations of RNN, including long short-term memory (LSTM) \cite{hochreiter1997long} and gated recurrent unit (GRU) \cite{cho2014properties}. These variations of RNN rely on the gating mechanisms to control the flow of information, thus performing well in the tasks that require capturing long-term dependencies, such as speech recognition \cite{graves2013speech} and machine translation \cite{bahdanau2014neural, sutskever2014sequence}. Recently, based on attention mechanisms, the Transformer network \cite{vaswani2017attention} has been verified to outperform \YL{many typical sequential models} for long sequences. This structure is able to inject the global context information into each input. Based on Transformer, impressive results have been achieved in tasks with regard to audio, video and text, \textit{e.g. } speech synthesis \cite{li2019neural, okamoto2020transformer}, action recognition \cite{girdhar2019video} and machine translation \cite{vaswani2017attention}.
We utilize the Transformer network to learn the frame-level attention which improves the temporal stability of the generated animation sequences.
\section{Approach} \label{sec:approach}
With a simulated sequence of coarse meshes $\mathcal{C} = \{\mathcal{C}_1, \dots, \mathcal{C}_n\}$ as input, our goal is to produce a sequence of fine ones $\mathcal{D} = \{\mathcal{D}_1, \dots, \mathcal{D}_n\}$ which have similar non-rigid deformation to the PBS results. Given two simulation sets of paired coarse and fine garments, we extract the TS-ACAP representations respectively, \YL{and} then use our proposed DeformTransformer network to learn the \YL{mapping} \YL{from the low-resolution space to the high-resolution space}. \YL{As illustrated previously in Fig.~\ref{fig:lrhrsim1}, such a mapping involves deformations beyond adding fine details.}
Once the network is trained by the paired examples, a consistent and detailed animation $\mathcal{D}$ can be synthesized for each input sequence $\mathcal{C}$.
\subsection{Overview}
The overall architecture of our detail synthesis network is illustrated in Fig. \ref{fig:pipeline}.
To synthesize realistic \gl{cloth animations}, we propose a method to simulate coarse meshes first and learn a \YL{temporally-coherent} mapping to the fine meshes.
To realize our goal, we construct datasets including low- and high-resolution cloth animations, \textit{e.g. } coarse and fine garments dressed on a human body of various motion sequences.
To efficiently extract localized features with temporal consistency, we propose a new deformation representation, called TS-ACAP (temporal \YL{and spatial} as-consistent-as-possible), which is able to cope with both large rotations and unstable sequences. It also has significant advantages: it is efficient to compute for \YL{mesh} sequences and its derivatives have closed form solutions.
Since fine models typically have more than ten thousand vertices to simulate realistic wrinkles, it is hard for the network to directly map the coarse features to the high-dimensional fine ones.
Therefore, \YL{convolutional encoder networks are}
applied to encode \YL{coarse and fine meshes in the TS-ACAP representation} to \YL{their latent spaces}, respectively.
The TS-ACAP generates local rotation and scaling/shearing parts on vertices, so we perform convolution \YL{operations} on vertices %
\YL{to learn to extract useful features using shared local convolutional kernels.}
With encoded feature sequences, a sequence transduction network is proposed to learn the mapping from coarse to fine TS-ACAP sequences.
Unlike existing works using recurrent neural networks \YL{(RNNs)}~\cite{santesteban2019learning}, we use the Transformer \cite{vaswani2017attention}, a sequence-to-sequence network architecture, based on frame-level attention mechanisms for our detail synthesis task, \YL{which is more efficient to learn and leads to superior results.}
\subsection{Deformation Representation}
\YL{As discussed before, large-scale deformations are essential to represent \gl{thin shell dynamics such as }cloth animations, because folding and wrinkle patterns during animation can often be complicated. Moreover, cloth animations are in the form of sequences, hence temporal coherence is very important for realism. Using 3D coordinates directly cannot cope with large-scale deformations well; existing deformation representations are generally designed for static meshes, and directly applying them to cloth animation sequences on a frame-by-frame basis does not take temporal consistency into account.}
To cope with this problem, we propose a mesh deformation feature with spatial-temporal consistency, called TS-ACAP, to represent the coarse and fine deformed shapes, which exploits the localized information effectively and reconstructs \YL{meshes} accurately.
Take \YL{the coarse meshes} $\mathcal{C}$ for instance; \YL{the fine meshes $\mathcal{D}$ are processed in the same way.} \YL{Assume that a sequence} of coarse meshes contains $n$ models with the same topology, each denoted as $\mathcal{C}_{t}$ \YL{($1\leq t \leq n$)}.
\YL{A mesh with the same topology is chosen as the reference model, denoted as $\mathcal{C}_{0}$. For example, for garment animation, this can be the garment mesh worn by a character in the T pose.}
$\mathbf{p}_{t,i} \in \mathbb{R}^{3}$ is the $i^{\rm th}$ vertex on
the $t^{\rm th}$ mesh.
To represent the local shape deformation, the deformation gradient $\mathbf{T}_{t,i} \in \mathbb{R}^{3 \times 3}$ can be obtained by minimizing the following energy:
\begin{equation}
\mathop{\arg\min}_{\mathbf{T}_{t,i}} \ \ \mathop{\sum}_{j \in \mathcal{N}_i} c_{ij} \| (\mathbf{p}_{t,i} - \mathbf{p}_{t,j}) - \mathbf{T}_{t,i} (\mathbf{p}_{0,i} - \mathbf{p}_{0,j}) \|_2^2 \label{con:computeDG}
\end{equation}
where $\mathcal{N}_i$ is the one-ring neighbors of the $i^{\rm th}$ vertex, and $c_{ij}$ is the cotangent weight $c_{ij} = \cot \alpha_{ij} + \cot \beta_{ij} $ \cite{sorkine2007rigid,levi2014smooth}, where $\alpha_{ij}$
and $\beta_{ij}$ are angles opposite to the edge connecting the $i^{\rm th}$ and $j^{\rm th}$ vertices.
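The energy above has a closed-form minimizer per vertex: stacking the 1-ring edge vectors with their cotangent weights yields weighted normal equations. The following NumPy sketch (the function name and argument layout are our own, purely illustrative) computes one deformation gradient this way.

```python
import numpy as np

def deformation_gradient(p0_i, p0_nbrs, pt_i, pt_nbrs, weights):
    """Closed-form minimizer of the per-vertex energy: find T such that
    sum_j c_ij || (p_t,i - p_t,j) - T (p_0,i - p_0,j) ||^2 is minimal."""
    E0 = pt_nbrs * 0 + (p0_nbrs - p0_i)  # reference edge vectors, one per row
    E0 = p0_nbrs - p0_i                  # reference edge vectors
    Et = pt_nbrs - pt_i                  # deformed edge vectors
    W = np.diag(weights)                 # cotangent weights c_ij
    A = Et.T @ W @ E0                    # 3x3 cross-covariance
    B = E0.T @ W @ E0                    # 3x3, invertible for non-degenerate 1-rings
    return A @ np.linalg.inv(B)
```

For a rigid rotation of the 1-ring, the recovered $\mathbf{T}_{t,i}$ equals the rotation matrix, as expected.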
The main drawback of the deformation gradient representation is that it cannot handle large-scale rotations, which often \YL{happen} in cloth animation.
Using polar decomposition, the deformation gradient $\mathbf{T}_{t,i} $ can be decomposed into a rotation part and a scaling/shearing part $\mathbf{T}_{t,i} = \mathbf{R}_{t,i}\mathbf{S}_{t,i}$.
The scaling/shearing transformation $\mathbf{S}_{t,i}$ is uniquely defined, while the rotation $\mathbf{R}_{t,i}$ \YL{corresponds to infinitely many possible rotation angles (differing by multiples of $2\pi$, along with possible opposite orientation of the rotation axis)}. Typical formulations often constrain the rotation angle to be within $[0, \pi]$, which is unsuitable for smooth large-scale animations.
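The polar decomposition itself can be computed from an SVD of the deformation gradient. A minimal NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def polar_decompose(T):
    """Split a deformation gradient into a rotation R and a symmetric
    scaling/shearing part S with T = R @ S, via the SVD T = U diag(s) Vt."""
    U, sigma, Vt = np.linalg.svd(T)
    R = U @ Vt
    # Ensure R is a proper rotation (det = +1); if not, flip the smallest
    # singular direction, which moves the reflection into S.
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        sigma[-1] *= -1
        R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S
```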
In order to handle large-scale rotations, we first require the orientations of rotation axes and rotation angles of \YL{spatially} adjacent vertices \YL{on the same mesh} to be as consistent as possible.
Especially for our sequence data, we further add constraints for adjacent frames to ensure the temporal consistency of the orientations of rotation axes and rotation angles on each vertex.
We first consider consistent orientation for axes.
\begin{flalign}\label{eqn:axis}
\arg\max_{{o}_{t,i}} \sum_{(i,j) \in \mathcal{E} } {o}_{t,i}{o}_{t,j} \cdot s(\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}, \theta_{t,i}, \theta_{t,j}) \nonumber\\
+ \sum_{i \in \mathcal{V} } {o}_{t,i} \cdot s(\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t-1,i}, \theta_{t,i}, \theta_{t-1,i}) \nonumber\\
{\rm s.t.} \quad
{o}_{t,1} = 1, {o}_{t,i} = \pm 1 (i \neq 1) \quad
\end{flalign}
where $t$ is the \YL{index} of \YL{the} frame, $\mathcal{E}$ is the edge set, and $\mathcal{V}$ is the vertex set. \YL{Denote by $(\boldsymbol{\omega}_{t,i}, \theta_{t,i})$ one possible choice for the rotation axis and rotation angle that match $\mathbf{R}_{t,i}$. $o_{t,i} \in \{+1, -1\}$ specifies whether the rotation axis is flipped ($o_{t,i} = 1$ if the rotation axis is unchanged, and $-1$ if its opposite is used instead). }\YL{The first term promotes spatial consistency while the second term promotes temporal consistency.}
$s(\cdot)$ is a function measuring orientation consistency, which is defined as follows:
\begin{equation}
s(\cdot)=\left\{
\begin{aligned}
0 & , & |\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}|\leq\epsilon_1 \; {\rm or} \;
\theta_{t,i}<\epsilon_2 \; {\rm or} \; \theta_{t,j}<\epsilon_2 \\
1 & , & {\rm Otherwise~if}~\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}>\epsilon_1 \\
-1 & , & {\rm Otherwise~if}~ \boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}<-\epsilon_1 \\
\end{aligned}
\right.
\end{equation}
\YL{The first case here is to ignore cases where the rotation angle is near zero, as the rotation axis is not well defined in such cases.}
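As a sketch, $s(\cdot)$ translates directly into code; the threshold values below for $\epsilon_1$ and $\epsilon_2$ are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def s(axis_i, axis_j, theta_i, theta_j, eps1=0.1, eps2=1e-3):
    """Orientation-consistency score: 0 when either rotation angle is near
    zero or the axes are nearly perpendicular (the axis is undefined or
    uninformative there), otherwise +1 / -1 for aligned / opposite axes."""
    d = np.dot(axis_i, axis_j)
    if abs(d) <= eps1 or theta_i < eps2 or theta_j < eps2:
        return 0
    return 1 if d > eps1 else -1
```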
As for rotation angles, \YL{we optimize the following}
\begin{flalign}\label{eqn:angle}
\arg\min_{r_{t,i}} &\sum_{(i,j) \in \mathcal{E} } \| (r_{t,i} \cdot 2\pi+{o}_{t,i}\theta_{t,i}) - (r_{t,j}\cdot 2\pi+{o}_{t,j}\theta_{t,j}) \|_2^{2} &\nonumber\\
+ &\sum_{i \in \mathcal{V} } \| (r_{t,i} \cdot 2\pi+{o}_{t,i}\theta_{t,i}) - (r_{t-1,i}\cdot 2\pi+{o}_{t-1,i}\theta_{t-1,i}) \|_2^{2} \nonumber\\
{\rm s.t.}& \quad r_{t,i} \in \mathbb{Z},~~r_{t,1} = 0.
\end{flalign}
where $r_{t,i} \in \mathbb{Z}$ specifies how many $2\pi$ rotations should be added to the rotation angle.
\YL{The two terms here promote spatial and temporal consistencies of rotation angles, respectively.
These optimizations can be solved using integer programming, and we use the mixed integer solver CoMISo~\cite{comiso2009} which provides an efficient \gl{solver}. See~\cite{gao2019sparse} for more details.}
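The full spatio-temporal problem is an integer program solved with CoMISo. To illustrate only the temporal term of the angle optimization, a greedy per-vertex sketch (our own simplification, not the paper's solver) picks each $r_t$ in closed form so that the effective angle stays closest to the previous frame's:

```python
import numpy as np

def unwrap_temporal(o_thetas):
    """Greedy, temporal-only angle unwrapping for one vertex track: choose
    the integer r_t so that r_t * 2*pi + o_t * theta_t is closest to the
    previous frame's effective angle. r_0 = 0 by the constraint."""
    out = [o_thetas[0]]
    for a in o_thetas[1:]:
        r = round((out[-1] - a) / (2 * np.pi))
        out.append(r * 2 * np.pi + a)
    return out
```

For example, raw angles wrapping past $\pi$ are unwrapped into a smooth, temporally consistent sequence.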
A similar process is used to compute the TS-ACAP representation of the fine meshes.
\cl{Compared to the ACAP representation, our TS-ACAP representation considers temporal constraints to represent nonlinear deformation for optimization of axes and angles, which is more suitable for consecutive large-scale deformation \YL{sequences}.
We compare ACAP~\cite{gao2019sparse} and our TS-ACAP using a simple example of a simulated disk-shaped cloth animation sequence. Once we obtain deformation representations of the meshes in the sequence,
we interpolate two meshes, the initial state mesh and a randomly selected frame, using linear interpolation of \YL{shape representations}.
\YL{In Fig. \ref{fig:interpolation}, we demonstrate the interpolation results with ACAP representation, which shows that it cannot handle such challenging cases with complex large-scale deformations. In contrast, with our temporally and spatially as-consistent-as-possible optimization, our TS-ACAP representation is able to produce consistent interpolation results.}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{pictures/acap_tacap1_1.pdf}%
\caption{\small Comparison of shape interpolation results with different deformation representations, ACAP and TS-ACAP. %
(a) and (b) are the source (t = 0) and target (t = 1) models with large-scale deformation to be interpolated.
The first row shows the interpolation results by ACAP, and the second row shows the results with our TS-ACAP.
\gl{The interpolated models with ACAP feature are plausible in each frame while they are not consistent in the temporal domain.}
}
\label{fig:interpolation}
\end{figure}
}
\subsection{DeformTransformer Networks}
Unlike \cite{tan2017variational, wang2019learning} which use fully connected layers for mesh encoder, we perform convolutions \YL{on meshes to learn to extract useful features using compact shared convolutional kernels.}
As illustrated in Fig. \ref{fig:pointconv}, we use a convolution operator on vertices \cite{duvenaud2015convolutional, tan2017autoencoder} where the output at a vertex is obtained as a linear combination of input in its one-ring neighbors along with a bias.
\YL{The input to our network is the TS-ACAP representation: for the $i^{\rm th}$ vertex of the $t^{\rm th}$ mesh, we collect non-trivial coefficients from the rotation $\mathbf{R}_{t, i}$ and scaling/shearing $\mathbf{S}_{t,i}$, which form a 9-dimensional feature vector (see~\cite{gao2019sparse} for more details). Denote by $\mathbf{f}_i^{(k-1)}$ and $\mathbf{f}_i^{(k)}$ the features of the $i^{\rm th}$ vertex at the $(k-1)^{\rm th}$ and $k^{\rm th}$ layers, respectively. The convolution operator is defined as follows:
\begin{equation}
\mathbf{f}_i^{(k)} =
\mathbf{W}_{point}^{(k)} \cdot \mathbf{f}_{i}^{(k-1)} +
\mathbf{W}_{neighbor}^{(k)} \cdot \frac{1}{D_i} \mathop{\sum}_{j=1}^{D_i} \mathbf{f}_{n_{ij}}^{(k-1)}
+ \mathbf{b}^{(k)}
\end{equation}
where $\mathbf{W}_{point}^{(k)}$, $\mathbf{W}_{neighbor}^{(k)}$ and $\mathbf{b}^{(k)}$ are learnable parameters for the $k^{\rm th}$ convolutional layer, $D_i$ is the degree of the $i^{\rm th}$ vertex, and $n_{ij}$ $(1 \leq j \leq D_i)$ is the $j^{\rm th}$ neighbor of the $i^{\rm th}$ vertex.
}
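A one-layer NumPy sketch of this vertex convolution (a dense loop form for clarity; names are illustrative, and a real implementation would batch it):

```python
import numpy as np

def mesh_conv(F, neighbors, W_point, W_neighbor, b):
    """One vertex-convolution layer: the output at vertex i is
    W_point @ f_i plus W_neighbor applied to the mean feature over the
    1-ring neighbors of i, plus a bias."""
    out = np.empty((F.shape[0], W_point.shape[0]))
    for i, nbrs in enumerate(neighbors):
        mean_nbr = F[nbrs].mean(axis=0)          # average over the 1-ring
        out[i] = W_point @ F[i] + W_neighbor @ mean_nbr + b
    return out
```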
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\linewidth]{pictures/pointconv.pdf}
\caption{\small Illustration of the convolutional operator on meshes.
The result of convolution for each vertex is obtained by a linear combination from the input in the 1-ring neighbors of the vertex, along with a bias.
}
\label{fig:pointconv}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth, trim=0 50 0 150,clip]{pictures/transformer.pdf} %
\caption{\small The architecture of our DeformTransformer network.
The coarse and fine mesh sequences are embedded into feature vectors using the TS-ACAP representation which \YL{is} defined \YL{at} each vertex as a 9-dimensional vector.
Then two convolutional \YL{encoders} map coarse and fine features to \YL{their latent spaces}, respectively.
These latent vectors are fed into the DeformTransformer network, \cl{which consists of the encoder and decoder, each including a stack of $N=2$ identical blocks with 8-head attention,} to recover \YL{temporally-coherent} deformations.
Notice that in \YL{the} training phase the input high-resolution TS-ACAP \YL{features are those from the ground truth},
\YL{but during testing, these features are initialized to zeros, and once a new high-resolution frame is generated, its TS-ACAP feature is added.}
With predicted feature vectors, realistic and stable cloth animations are generated.
}
\label{fig:Transformer}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\linewidth, trim=18 33 18 3,clip]{pictures/tshirt06_08_poseswithhuman_collision/temp0270keyshot_unsolve.png}
\includegraphics[width=0.4\linewidth, trim=18 33 18 3,clip]{pictures/tshirt06_08_poseswithhuman_collision/temp0270keyshot_solve.png}
\caption{\small For tight clothing, data-driven cloth deformations may suffer from apparent collisions with the body (left). We apply a simple postprocessing step to push
\YL{the collided} T-shirt vertices outside the body (right).
}
\label{fig:collisionrefinement}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth, trim=50 150 100 150,clip]{pictures/dataset.pdf}
\caption{\small
We test our algorithm on 5 datasets including TSHIRT, PANTS, SKIRT, SHEET and DISK.
The former three are garments (T-shirts, skirts, and pants) dressed on a template body and simulated with various motion sequences.
The SHEET dataset is a square sheet interacting with various obstacles.
The DISK dataset is a round tablecloth draping on a cylinder in the wind of various velocities.
Each cloth shape has a coarse resolution (top) and a fine resolution (bottom).
}
\label{fig:dataset}
\end{figure*}
Let $\mathcal{F}_\mathcal{C} = \{\mathbf{f}_{\mathcal{C}_1}, \dots, \mathbf{f}_{\mathcal{C}_n}\}$ be the sequence of coarse mesh features, and $\mathcal{F}_\mathcal{D} = \{\mathbf{f}_{\mathcal{D}_1}, \dots, \mathbf{f}_{\mathcal{D}_n}\}$ be its counterpart, the sequence of detailed mesh features.
To synthesize $\mathcal{F}_\mathcal{D}$ from $\mathcal{F}_\mathcal{C}$, the DeformTransformer framework is proposed to solve this sequence-to-sequence problem.
The DeformTransformer network consists of several stacked encoder-decoder layers, \YL{denoted} as $Enc(\cdot)$ and $Dec(\cdot)$. To take the order of the sequence into consideration, triangle positional embeddings \cite{vaswani2017attention} are injected into frames of $\mathcal{F}_\mathcal{C}$ and $\mathcal{F}_\mathcal{D}$, respectively.
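The triangle (sinusoidal) positional embedding of \cite{vaswani2017attention} can be sketched as follows (assuming an even feature dimension; illustrative only):

```python
import numpy as np

def positional_embedding(n_frames, d_model):
    """Sinusoidal positional embeddings: PE[t, 2k] = sin(t / 10000^(2k/d)),
    PE[t, 2k+1] = cos(t / 10000^(2k/d)), added to the frame features."""
    t = np.arange(n_frames)[:, None]
    k = np.arange(0, d_model, 2)[None, :]
    angle = t / np.power(10000.0, k / d_model)
    pe = np.zeros((n_frames, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```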
The encoder takes coarse mesh features as input and encodes them into a \YL{temporally-dependent} hidden space.
It is composed of identical blocks, \YL{each} with two sub-modules: one is the multi-head self-attention mechanism, and the other is the frame-wise fully connected feed-forward network.
We also employ a residual connection around these two sub-modules, followed \YL{by} the layer normalization.
The multi-head attention is able to build the dependence between any frames, thus ensuring that each input can consider global context of the whole sequence. Meanwhile, compared with other sequence models, this mechanism splits \YL{the} attention into several subspaces so that it can model the frame \YL{relationships} in multiple aspects.
With the encoded latent vector $Enc(\mathcal{F}_\mathcal{C})$, the decoder network attempts to reconstruct a sequence of fine mesh features.
The decoder has two parts:
The first part takes fine mesh sequence $\mathcal{F}_\mathcal{D}$ as \YL{input} and
encodes it similarly to the encoder.
\YL{Unlike the encoder, detailed meshes are generated sequentially, and when predicting frame $t$, it should not attend to subsequent frames (with the position after frame $t$). To achieve this, we utilize a masking process
for the self-attention module.} The second part performs multi-head attention over the output of the encoder, thus capturing the long-term dependence between coarse mesh features $\mathcal{F}_\mathcal{C}$ and fine mesh features $\mathcal{F}_\mathcal{D}$.
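A minimal NumPy sketch of the masked (causal) self-attention in the decoder's first sub-module (single head, with queries, keys and values all set to the input frames for brevity; illustrative only):

```python
import numpy as np

def masked_self_attention(X):
    """Scaled dot-product self-attention over frames with a causal mask:
    frame t attends only to frames <= t, so generation cannot look at
    subsequent frames."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)   # future frames
    scores[mask] = -np.inf
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X, weights
```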
We train the Transformer network by minimizing the mean squared error between predicted detailed features and the ground-truth.
With the predicted TS-ACAP feature vectors, we reconstruct the vertex coordinates of \YL{the} target mesh\YL{, in the same way as reconstruction from ACAP features} (please refer to \cite{gao2019sparse} for details).
Our training data is generated by PBS \YL{and is collision-free}.
Since human body \YL{(or other obstacle)} information is unseen by our algorithm, it does not guarantee that the predicted cloth \YL{is free from any penetration}.
Especially for tight garments like T-shirts, it will be apparent if collision \YL{between the garment and human body} happens.
We use a fast refinement method \cite{wang2019learning} to push the cloth vertices colliding with the body outside \YL{while} preserving the local wrinkle details (see Fig.~\ref{fig:collisionrefinement}).
For each vertex detected inside the body, we find its closest point over the body surface with normal and position.
Then the cloth mesh is deformed to update the vertices by minimizing an energy which penalizes the Euclidean distance and Laplacian difference between the updated mesh and the initial one (please refer to \cite{wang2019learning} for details).
The collision solving process usually takes less than 3 iterations to converge to a collision-free state.
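The refinement step can be sketched as a small quadratic least-squares problem. The weights, the dense Laplacian, and the single-step solve below are our simplifying assumptions for illustration; the method of \cite{wang2019learning} iterates with re-detected collisions:

```python
import numpy as np

def refine_collisions(verts, laplacian, targets, w_proj=1.0, w_lap=1.0):
    """One refinement step: colliding vertices are pulled to projection
    targets just outside the body, while a Laplacian term preserves the
    local wrinkle detail of the initial mesh.

    verts     : (n, 3) initial cloth vertex positions
    laplacian : (n, n) mesh Laplacian matrix (float)
    targets   : dict {vertex index: (3,) projected position on the body}
    Returns the updated (n, 3) vertex positions.
    """
    # Quadratic energy: w_lap * ||L x - L x0||^2 + w_proj * sum_i ||x_i - t_i||^2,
    # minimized by solving the normal equations A x = b.
    A = w_lap * (laplacian.T @ laplacian)
    b = w_lap * (laplacian.T @ (laplacian @ verts))
    for i, t in targets.items():
        A[i, i] += w_proj
        b[i] += w_proj * np.asarray(t)
    return np.linalg.solve(A, b)
```

In practice this would be repeated (usually fewer than 3 iterations, as noted above) until no colliding vertex remains.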
\section{Implementation}\label{sec:implementation}
We describe the details of the dataset construction and the network architecture in this section.
\textbf{\YL{Datasets}.}
To test our method, we construct 5 datasets, called TSHIRT, PANTS, SKIRT, SHEET and DISK respectively.
The former three datasets are different types of garments, \textit{i.e.}, T-shirts, pants and skirts worn on human bodies.
Each type of garment \YL{is represented by both low-resolution and high-resolution meshes}, \YL{containing} 246 and 14,190 vertices for the T-shirts, 219 and 12,336 vertices for the skirts, 200 and 11,967 vertices for the pants.
Garments of the same type and resolution are simulated from a template mesh, so the meshes obtained through cloth animation share the same number of vertices and the same connectivity.
These garments are dressed on animated characters, which are obtained via driving a body \YL{in the SMPL (Skinned Multi-Person Linear) model} \cite{loper2015smpl} with publicly available motion capture data from CMU \cite{hodgins2015cmu}.
Since the motion data is captured, some sequences contain self-collisions or long repetitions.
After removing poor quality data, we select various motions, such as dancing, walking, running and jumping, comprising 20 sequences (9,031, 6,134 and 7,680 frames in total for TSHIRT, PANTS and SKIRT respectively).
Among these, 18 sequences are randomly selected for training and the remaining 2 sequences for testing.
The SHEET dataset consists of a pole or a sphere of three different sizes crashing into a piece of cloth.
The coarse mesh has 81 vertices and the fine mesh has 4,225 vertices.
There are 4,000 frames in the SHEET dataset, of which 3,200 are used for training and the remaining 800 for testing.
We construct the DISK dataset by draping a round tablecloth to a cylinder in the wind, with 148 and 7,729 vertices for coarse and fine meshes respectively.
We adjust the velocity of the wind to obtain various animation sequences, of which 1,600 frames are used for training and 400 frames for testing.
\begin{table*}[ht]
\renewcommand\arraystretch{1.5}
\caption{ Statistics and timing (sec/\YL{frame}) of the testing examples including five types of \YL{thin shell animations}.
}
\label{table:runtime}
\centering
\begin{tabular}{cccccccccc}
\toprule[1.2pt]
Benchmark & \#verts & \#verts & PBS & ours & speedup & \multicolumn{4}{c}{our components} \\ \cline{7-10}
& LR & HR & HR & & & coarse & TS-ACAP & synthesizing & refinement \\
& & & & & & sim. & extraction & (GPU) & \\ \hline \hline
TSHIRT & 246 & 14,190 & 8.72 & 0.867 & \textbf{10} & 0.73 & 0.11 & 0.012 & 0.015\\
PANTS & 200 & 11,967 & 10.92 &0.904 & \textbf{12} & 0.80 & 0.078 & 0.013 & 0.013\\
SKIRT & 127 & 6,812 & 6.84 & 0.207 & \textbf{33} & 0.081 & 0.10 & 0.014 & 0.012 \\
SHEET & 81 & 4,225 & 2.48 & 0.157 & \textbf{16} & 0.035 & 0.10 & 0.011 & 0.011 \\
DISK & 148 & 7,729 & 4.93 & 0.139 & \textbf{35} & 0.078 & 0.041 & 0.012 & 0.008 \\
\bottomrule[1.2pt]
\end{tabular}
\end{table*}
To prepare the above datasets, we generate both \YL{low-resolution (LR)} and \YL{high-resolution (HR)} cloth \YL{animations} by PBS.
The initial state of the HR mesh is obtained by applying the Loop subdivision scheme \cite{Thesis:Loop} to the coarse mesh and simulating for several seconds until it becomes stable.
Previous works \cite{kavan11physics, zurdo2013wrinkles, chen2018synthesizing} usually constrain the high-resolution meshes by various tracking mechanisms to ensure that the coarse cloth \YL{can be seen as} a low-resolution version of the fine cloth during the complete animation sequences.
However, fine-scale wrinkle dynamics cannot be captured in this way, as the wrinkles are defined quasistatically and limited to a constrained subspace.
Thus we instead perform PBS for the two mesh resolutions \emph{separately}, without any constraints between them.
We use a cloth simulation engine called ARCSim \cite{Narain2012AAR} to produce all animation sequences of low- and high-resolution meshes with the same parameter setting.
In our experiment, we choose the Gray Interlock from a library of measured cloth materials \cite{Wang2011DEM} as the material parameters for ARCSim simulation.
Specifically, for garments interacting with characters, to ensure a collision-free start, we manually put the coarse and fine garments on a template human body (in the T pose) and run the simulation to let the clothing relax; this defines the initial state for all subsequent simulations.
Before applying each motion sequence, we interpolate 15 frames between the T pose and the initial pose of the sequence, and smooth the resulting sequence using a convolution operation.
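This interpolation-and-smoothing step can be sketched as follows. The linear pose blend and the box-filter convolution are our illustrative assumptions; the paper only states that 15 frames are interpolated and a convolution is used:

```python
import numpy as np

def blend_in_motion(t_pose, motion, n_interp=15, kernel_size=5):
    """Linearly interpolate n_interp frames from the rest (T) pose to the
    first frame of a motion sequence, then smooth the concatenated
    sequence with a moving-average convolution along time.

    t_pose : (d,) pose parameter vector of the T pose
    motion : (f, d) pose parameters of the motion sequence
    Returns a (1 + n_interp + f, d) smoothed sequence.
    """
    alphas = np.linspace(0.0, 1.0, n_interp + 2)[1:-1]
    ramp = (1 - alphas)[:, None] * t_pose + alphas[:, None] * motion[0]
    seq = np.vstack([t_pose, ramp, motion])
    # Temporal smoothing: convolve each pose channel with a box kernel,
    # padding with edge values so the sequence length is preserved.
    kernel = np.ones(kernel_size) / kernel_size
    pad = kernel_size // 2
    padded = np.pad(seq, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, j], kernel, mode="valid") for j in range(seq.shape[1])],
        axis=1,
    )
```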
\begin{figure}[ht]
\centering
\subfloat{
\includegraphics[width=0.5\linewidth]{pictures/hyper_inputframes-eps-converted-to.pdf}
}
\subfloat{
\includegraphics[width=0.5\linewidth]{pictures/hyper_hiddensize-eps-converted-to.pdf}
}
\caption{\small Evaluation of hyperparameters in the Transformer network, using the SKIRT dataset.
(Left) average error of the reconstructed results as a function of the number of input frames.
(Right) average error of the synthesized results as a function of the dimension of the latent space.
}
\label{fig:hyperpara}
\end{figure}
\textbf{Network architecture.}
As shown in Fig.~\ref{fig:Transformer}, our transduction network consists of two components, namely convolutional \YL{encoders} to map coarse and fine mesh sequences into latent spaces for improved generalization capability, and the Transformer network for \YL{spatio-temporally} coherent deformation transduction.
The feature encoder module takes the 9-dimensional TS-ACAP features defined on vertices as input, followed by two convolutional layers with $\tanh$ as the activation function.
In the last convolutional layer we omit the activation function, similar to \cite{tan2017autoencoder}.
A fully connected layer is used to map the output of the convolutional layers into a 16-dimensional latent space.
We train one encoder for coarse \YL{meshes} and another for fine \YL{meshes} separately.
For the DeformTransformer network, its input includes the embedded latent vectors from both \YL{the} coarse and fine domains.
The DeformTransformer network consists of sequential encoders and decoders,
each \YL{including} a stack of 2 identical blocks with 8-head attention.
Different from the variable-length sequences used in natural language processing, we fix the number of input frames (to 3 in our experiments), since a motion sequence may include a thousand frames.
\YL{We perform experiments to evaluate the performance of our method with different settings.}
As shown in Fig.~\ref{fig:hyperpara} \YL{(left)}, using 3 input frames is found to perform well in our experiments.
We also evaluate the results generated with various dimensions of latent space shown in Fig. \ref{fig:hyperpara} \YL{(right)}.
When the dimension of latent space is larger than 16, the network can \YL{easily overfit}.
Thus we set the dimension of the latent space to 16, which is sufficient for all the examples in the paper.
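For concreteness, the encoder and the DeformTransformer described above can be sketched in PyTorch as follows. The kernel sizes, channel counts and feed-forward width are illustrative assumptions; the 16-dimensional latent space, 2 blocks, 8 attention heads and the causal decoder mask follow the text:

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Per-vertex 9-d TS-ACAP features -> 16-d latent vector (a sketch of
    the convolutional encoder; channel counts are our assumptions)."""
    def __init__(self, n_verts, latent_dim=16):
        super().__init__()
        self.conv1 = nn.Conv1d(9, 32, kernel_size=1)
        self.conv2 = nn.Conv1d(32, 64, kernel_size=1)
        self.fc = nn.Linear(64 * n_verts, latent_dim)

    def forward(self, x):             # x: (batch, 9, n_verts)
        h = torch.tanh(self.conv1(x))
        h = self.conv2(h)             # activation omitted in the last conv layer
        return self.fc(h.flatten(1))  # (batch, latent_dim)

class DeformTransformer(nn.Module):
    """Coarse latent sequence -> fine latent sequence, with 2 blocks of
    8-head attention and a look-ahead mask on the decoder side."""
    def __init__(self, latent_dim=16, n_heads=8, n_layers=2):
        super().__init__()
        self.net = nn.Transformer(d_model=latent_dim, nhead=n_heads,
                                  num_encoder_layers=n_layers,
                                  num_decoder_layers=n_layers,
                                  dim_feedforward=64, batch_first=True)

    def forward(self, coarse_seq, fine_seq):
        # coarse_seq, fine_seq: (batch, n_frames, latent_dim)
        mask = self.net.generate_square_subsequent_mask(fine_seq.shape[1])
        return self.net(coarse_seq, fine_seq, tgt_mask=mask)
```

One such encoder is trained for the coarse meshes and another for the fine meshes, and their latent sequences are fed to the DeformTransformer.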
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\caption{Quantitative comparison of reconstruction errors for unseen cloth animations in several datasets. We compare our results with Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles}, with LR meshes as a reference. Three metrics are used, namely RMSE (Root Mean Squared Error), Hausdorff distance and STED (Spatio-Temporal Edge Difference)~\cite{Vasa2011perception}. Since the LR meshes have a different number of vertices from the ground-truth HR meshes, we only compute their Hausdorff distance.}
\label{table:compare_zurdo_chen2}
\centering
\begin{tabular}{ccccc}
\toprule[1.2pt]
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Methods} & \multicolumn{3}{c}{Metrics} \\ \cline{3-5}
& & RMSE & Hausdorff & STED \\
& & $\times 10^\YL{-2}$ $\downarrow$ & $\times 10^\YL{-2}$ $\downarrow$ & $\downarrow$ \\
\hline \hline
\multirow{4}{*}{TSHIRT} & LR & - & 0.59 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 0.76 & 0.506 & 0.277 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 1.04 & 0.480 & 0.281 \\ \cline{2-5}
& Our & \textbf{0.546} & \textbf{0.416} & \textbf{0.0776} \\ \hline \hline
\multirow{4}{*}{PANTS} & LR & - & 0.761 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 1.82 & 1.09 & 0.176 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 1.89 & 0.983& 0.151 \\ \cline{2-5}
& Our & \textbf{0.663} & \textbf{0.414} & \textbf{0.0420} \\ \hline \hline
\multirow{4}{*}{SKIRT} & LR & - & 2.09 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 1.93 & 1.31 & 0.562 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 2.19 & 1.52 & 0.178 \\ \cline{2-5}
& Our & \textbf{0.685} & \textbf{0.681} & \textbf{0.0241} \\ \hline \hline
\multirow{4}{*}{SHEET}
& LR & - & 2.61 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 4.37 & 2.60 & 0.155 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 3.02 & 2.34 & 0.0672 \\ \cline{2-5}
& Our & \textbf{0.585} & \textbf{0.417} & \textbf{0.0262} \\ \hline \hline
\multirow{4}{*}{DISK} & LR & - & 3.12 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 7.03 & 2.27 & 0.244 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 11.40 & 2.23 & 0.502 \\ \cline{2-5}
& Our & \textbf{2.16} & \textbf{1.30} & \textbf{0.0557 } \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\section{Results}\label{sec:results}
\subsection{Runtime Performance}
We implement our method on a \YL{computer with a} 2.50GHz \YL{4-Core} Intel CPU for coarse simulation and TS-ACAP extraction,
and \YL{an} NVIDIA GeForce\textsuperscript{\textregistered}~GTX 1080Ti GPU for fine TS-ACAP generation by the network and mesh coordinate reconstruction.
Table~\ref{table:runtime} shows average per-frame execution time of our method for various cloth datasets.
The execution time contains four parts: coarse simulation, TS-ACAP extraction, high-resolution TS-ACAP synthesis, and collision refinement.
For reference, we also \YL{measure} the time of a CPU-based implementation of high-resolution PBS using ARCSim \cite{Narain2012AAR}.
Our algorithm is $10\sim35$ times faster than the \YL{PBS} HR simulation.
The low computational cost of our method makes it suitable for interactive applications.
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/0crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/1crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/2crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/3crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/4crop0090down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/0crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/1crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/2crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/3crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/4crop0300down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/0crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/1crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/2crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/3crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/4crop0110down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/0crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/1crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/2crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/3crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/4crop0260down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data \YL{on the TSHIRT} dataset.
(a) coarse simulation,
(b) results of \cite{chen2018synthesizing},
(c) results of \cite{zurdo2013wrinkles},
(d) our results,
(e) ground truth generated by PBS.
Our method produces detailed shapes of higher quality than those of Chen {\itshape et al.} and Zurdo {\itshape et al.}; see the folds and wrinkles in the close-ups. The results of Chen {\itshape et al.} suffer from seam line problems, and the results of Zurdo {\itshape et al.} exhibit clearly noticeable artifacts.}
\label{fig:comparetoothers_tshirt}
\end{figure}
\begin{figure}[!htb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0010down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0060down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0140down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0160down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the PANTS dataset.
(a) coarse simulation results;
(b) results of \cite{chen2018synthesizing}, which mainly smooth the coarse meshes and barely exhibit any wrinkles;
(c) results of \cite{zurdo2013wrinkles}, which show clear artifacts where the LR and HR meshes are not aligned well, \textit{e.g. } the trouser legs;
(d) our results, which are physically reliable;
(e) ground truth generated by PBS.
}
\label{fig:comparetoothers_pants}
\end{figure}
\begin{figure*}[htb]
\centering
\subfloat[Input]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0080_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0110_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0140_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0160_00_skirtlrkeyshot.png}
\end{minipage}}
\subfloat[Chen {\itshape et al.}]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0080keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0110keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0140keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0160keyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0080_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0110_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0140_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0160_00_skirtlr_result.png}
\end{minipage}
\subfloat[Zurdo {\itshape et al.}]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0080keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0110keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0140keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0160keyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0080_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0110_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0140_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0160_00_skirthr.png}
\end{minipage}
\subfloat[Ours]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0080_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0110_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0140_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0160_00_skirthrkeyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0080_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0110_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0140_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0160_00_skirthr.png}
\end{minipage}
\subfloat[GT]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0080_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0110_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0140_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0160_00_skirthrkeyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.08\linewidth}
\includegraphics[width=1.000000\linewidth, trim=0 0 0 0,clip]{pictures/bar.png}
\end{minipage}
\caption{Comparison of the reconstruction results for unseen data in the SKIRT dataset.
(a) the coarse simulation,
(b) the results of \cite{chen2018synthesizing},
(c) the results of \cite{zurdo2013wrinkles},
(d) our results,
(e) the ground truth generated by PBS.
The reconstruction accuracy is qualitatively shown as a difference map.
Reconstruction errors are color-coded, with warmer colors indicating larger errors. Our method leads to significantly lower reconstruction errors. }
\label{fig:comparetoothers_skirt}
\end{figure*}
\subsection{\YL{Fine Detail} Synthesis Results and Comparisons}
We now demonstrate our method on various detail enhancement
examples, both quantitatively and qualitatively, including added wrinkles and rich dynamics.
Using detailed meshes generated by PBS as ground truth, we compare our results with physics-based coarse simulations, our implementation of a deep learning-based method \cite{chen2018synthesizing} and a conventional machine learning-based method \cite{zurdo2013wrinkles}.
For quantitative comparison, we use three metrics: Root Mean Squared Error (RMSE), Hausdorff distance, and the spatio-temporal edge difference (STED) \cite{Vasa2011perception}, which is designed for motion sequences with a focus on the `perceptual' error of models.
The results are shown in Table~\ref{table:compare_zurdo_chen2}.
Note that for the datasets from top to bottom in the table, the Hausdorff distances between the LR meshes and the ground truth increase. This tendency is in accordance with the deformation range, from tighter T-shirts and pants to skirts and square/disk tablecloths with higher degrees of freedom.
Since the vertex position representation cannot handle rotations well, the larger the deformations, the more artifacts Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} introduce in the reconstructed models, leading to increased RMSE and Hausdorff distances.
The results indicate that our method achieves quantitatively better reconstruction than the compared methods on all 5 datasets under all three metrics.
Especially for the SKIRT, SHEET and DISK datasets, which contain loose cloth and hence larger and richer deformations, our method outperforms existing methods significantly, since tracking between coarse and fine meshes is not required in our algorithm.
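The first two metrics can be sketched as follows (STED is omitted here, as it is a perceptual metric defined in \cite{Vasa2011perception}). This is an illustrative NumPy implementation over vertex arrays, not the exact evaluation code:

```python
import numpy as np

def rmse(pred, gt):
    # Root mean squared vertex error between corresponding vertices of
    # the predicted and ground-truth meshes, both of shape (n, 3).
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two vertex sets; unlike RMSE,
    # it is defined even when the meshes have different vertex counts,
    # which is why only this metric is reported for the LR meshes.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```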
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0130down.png}\\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0180down.png} \\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0260down.png} \\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0320down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the SHEET dataset.
(a) the coarse simulation;
(b) the results of \cite{chen2018synthesizing}, with inaccurate and
rough wrinkles that differ from the GT;
(c) the results of \cite{zurdo2013wrinkles}, which show global shapes similar to the coarse meshes, with some wrinkles and unexpected sharp corners;
(d) our results, which show mid-scale wrinkles and global deformation similar to the GT;
(e) the ground truth generated by PBS.}
\label{fig:comparetoothers_crashball}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/0crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/1crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/2crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/3crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/4crop0050down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0090down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0160down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0360down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the DISK dataset.
(a) the coarse simulation;
(b) the results of \cite{chen2018synthesizing}, which cannot reconstruct credible shapes;
(c) the results of \cite{zurdo2013wrinkles}, which show apparent artifacts near the flying tails since no tracking constraints are applied;
(d) our results, which reproduce large-scale deformations; see how the tail of the disk flies like a fan in the wind;
(e) the ground truth generated by PBS.}
\label{fig:comparetoothers_disk}
\end{figure}
We further make qualitative comparisons on the 5 datasets.
Fig.~\ref{fig:comparetoothers_tshirt} shows detail synthesis results on the TSHIRT dataset.
The first and second rows are from sequence 06\_08, a woman dribbling a basketball sideways, and the last two rows are from sequence 08\_11, a walking woman.
In this dataset of tight t-shirts on human bodies, Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} and our method are able to reconstruct the garment model completely with mid-scale wrinkles.
However, Chen {\itshape et al.} \cite{chen2018synthesizing} suffer from seam line problems due to the use of the geometry image representation.
A geometry image is a parametric sampling of the shape, which is made a topological disk by cutting through some seams.
The boundary of the disk needs to be fused so that the reconstructed mesh has the original topology.
The super-resolved geometry images corresponding to high-resolution cloth animations are not entirely accurate, and as a result the fused boundaries no longer match exactly,
\textit{e.g. } the clear seam lines on the shoulder and the crooked boundaries on the left side of the waist in Fig.~\ref{fig:comparetoothers_tshirt} (b).
In contrast, our method produces better results than \cite{chen2018synthesizing} and \cite{zurdo2013wrinkles}, which exhibit unsmooth-surface artifacts.
Fig. \ref{fig:comparetoothers_pants} shows comparative results of the animations of pants on a fixed body shape while changing the body pose over time.
The results of \cite{chen2018synthesizing} \YL{mainly} smooth the coarse meshes and barely exhibit \YL{any} wrinkles.
Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} utilize tracking algorithms to ensure close alignment between coarse and fine meshes, so the fine meshes are constrained and do not exhibit the behavior of full physics-based simulation.
On the PANTS dataset, their results therefore show clear artifacts on examples where the LR and HR meshes are not well aligned, \textit{e.g. } the trouser legs.
Unlike the two compared methods, which reconstruct displacements or local coordinates, our method uses deformation-based features in both the encoding and decoding phases, which does not suffer from such restrictions and ensures physically reliable results.
For looser garments like \YL{skirts}, we show comparison results in Fig. \ref{fig:comparetoothers_skirt}, with color coding to highlight the differences between synthesized results and the ground truth.
Our method successfully reconstructs the swinging skirt \YL{caused by} the body motion (see the small wrinkles on the waist and the \YL{medium-level} folds on the skirt \YL{hem}).
Chen {\itshape et al.} are able to reconstruct the overall shape of the skirt; however, many small unsmooth triangles lead to noisy shapes, due to the 3D coordinate representation combined with untracked fine meshes bearing abundant wrinkles.
This leads to unstable animation; please see the accompanying video.
The results of \cite{zurdo2013wrinkles} exhibit problems with the global deformation; see the directions of the skirt hem and the large highlighted area in the color map.
Our learned detail synthesis model provides better visual quality for shape generation, and the generated results are closer to the ground truth.
Beyond garments dressed on human bodies, we additionally show some results on free-flying tablecloth.
The comparison of the testing results on the SHEET dataset is shown in Fig.~\ref{fig:comparetoothers_crashball}.
The results of \cite{chen2018synthesizing} show inaccurate and rough wrinkles that differ from the ground truth.
For the hanging sheets, the global shapes produced by \cite{zurdo2013wrinkles} look more like the coarse meshes, with some wrinkles and unexpected sharp corners, \textit{e.g. } on the left side in the last row of Fig.~\ref{fig:comparetoothers_crashball} (c),
while ours show mid-scale wrinkles and global deformation similar to those of the high-resolution meshes.
As for the DISK dataset, from the visual results in Fig.~\ref{fig:comparetoothers_disk}, we can see that Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} cannot handle large-scale rotations well and cannot reconstruct credible shapes in such cases.
The impact of tracking is especially significant for the algorithm of Zurdo {\itshape et al.} \cite{zurdo2013wrinkles}.
They can reconstruct the top and the part of the tablecloth near the cylinder, but the flying tails have apparent artifacts.
Our algorithm does not have such drawbacks.
Notice how our method successfully reproduces ground-truth deformations, including the overall drape (\textit{i.e. }, how the tail of the disk flies like a fan in the wind) and mid-scale wrinkles.
\begin{table}[!htb]
\renewcommand\arraystretch{1.5}
\caption{User study results on cloth \YL{detail} synthesis. We show the average ranking score of the three methods: Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles}, and ours. The
ranking ranges from 1 (the best) to 3 (the worst). The results are calculated
based on 320 trials. We see that our method achieves the best in terms of
wrinkles, temporal stability \YL{and overall quality}.}
\label{table:userstudy}
\centering
\begin{tabular}{cccc}
\toprule[1.2pt]
Method & Wrinkles & Temporal stability & Overall \\ \hline
Chen {\itshape et al.} & 2.184 & 2.1258 &2.1319\\ \hline
Zurdo {\itshape et al.} & 2.3742 & 2.5215 & 2.4877\\ \hline
Ours & \textbf{1.4417} & \textbf{1.3528} & \textbf{1.3804} \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
We further conduct a user study to evaluate the stability and realism of the synthesized dense mesh dynamics. 32 volunteers participated in this user study.
For every question, we give one sequence and 5 images of coarse meshes as references, \YL{and} then let the user rank the corresponding outputs from Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} and ours according to three different criteria (wrinkles, temporal stability and overall).
We shuffle the order of the algorithms each time we exhibit the question and show shapes from the three methods randomly \YL{to avoid bias}.
We show the results of the user study in Table \ref{table:userstudy}, where we observe that our generated \YL{shapes} perform the best on all three criteria.
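The average ranking scores reported in Table \ref{table:userstudy} are simply the mean rank per method over all trials. A minimal sketch of this computation (the trial data below are made up for illustration, not the actual study responses):

```python
import numpy as np

# Hypothetical responses: each row is one trial, columns are the three
# compared methods, and values are ranks from 1 (best) to 3 (worst).
rankings = np.array([
    [2, 3, 1],
    [2, 3, 1],
    [3, 2, 1],
    [1, 3, 2],
])

# Average rank per method over all trials (lower is better).
mean_rank = rankings.mean(axis=0)
print(dict(zip(["Chen et al.", "Zurdo et al.", "Ours"], mean_rank)))
```

In the actual study the same mean is taken over 320 trials for each of the three criteria.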
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\caption{Per-vertex error (RMSE) on synthesized shapes with different feature representations: 3D coordinates, ACAP and TS-ACAP.}
\label{table:feature_compare}
\centering
\begin{tabular}{cccccc}
\toprule[1.2pt]
Dataset & TSHIRT & PANTS & SKIRT & SHEET & DISK \\ \hline
3D coordinates & 0.0101 & 0.0193 & 0.00941 & 0.00860 & 0.185 \\ \hline
ACAP & 0.00614 & 0.00785 & 0.00693 & 0.00606 & 0.0351 \\ \hline
TS-ACAP & \textbf{0.00546} & \textbf{0.00663} & \textbf{0.00685} & \textbf{0.00585} & \textbf{0.0216}\\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}}
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0040.png} \\
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0075.png} \\
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0110.png} \\
\vspace{0.3cm} \small (a) Input & \vspace{0.3cm}\small (b) Coordinates & \vspace{0.3cm}\small (c) Ours & \vspace{0.3cm}\small (d) GT
\end{tabular}
\caption{The evaluation of the TS-ACAP feature in our detail synthesis method.
(a) input coarse \YL{shapes},
(b) the results using 3D coordinates, which clearly show a rough appearance, unnatural deformations and some artifacts, especially in the highlighted regions with details shown in the close-ups.
(c) our results, which look smooth and whose details are more similar to the GT.
(d) ground truth.
}
\label{fig:ablationstudy_coordiniates_skirt}
\end{figure}
\begin{figure}[htb]
\centering
\setlength{\tabcolsep}{0.05cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.02\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}}
\rotatebox{90}{\small ACAP} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0103.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0104.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0105.png} \\
\rotatebox{90}{\small TS-ACAP} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0103.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0104.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0105.png} \\
\vspace{0.3cm} & \vspace{0.3cm} \small $t = 103$ & \vspace{0.3cm} \small $t = 104$ & \vspace{0.3cm} \small $t = 105$
\end{tabular}
\caption{
Three consecutive frames from a testing sequence in the DISK dataset. First row: the results of ACAP. As shown in the second column, the enlarged wrinkles differ from those in the previous and next frames,
causing jumps in the animation.
Second row: the consistent results obtained with the TS-ACAP feature, demonstrating that our TS-ACAP representation ensures temporal coherence.
}
\label{fig:jump_acap}
\end{figure}
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\fontsize{7.5}{9}\selectfont
\caption{Comparison of RMSE between synthesized shapes and ground truth with different networks, \textit{i.e. } without temporal modules, with RNN, with LSTM and ours with the Transformer network.}
\label{table:transformer_compare}
\centering
\begin{tabular}{cccccc}
\toprule[1.2pt]
Dataset & TSHIRT & PANTS & SKIRT & SHEET & DISK \\ \hline
WO Transformer & 0.00909 & 0.01142 & 0.00831 & 0.00739 & 0.0427 \\ \hline
With RNN & 0.0435 & 0.0357 & 0.0558 & 0.0273 & 0.157 \\ \hline
With LSTM & 0.0351 & 0.0218 & 0.0451 & 0.0114 & 0.102 \\ \hline
With Transformer & \textbf{0.00546} & \textbf{0.00663} & \textbf{0.00685} & \textbf{0.00585} & \textbf{0.0216} \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\setlength{\tabcolsep}{0.0cm}
\renewcommand\arraystretch{-1.9}
\begin{tabular}{>{\centering\arraybackslash}m{0.08\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}}
\rotatebox{90}{\small (a) Input}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0200.png}
\\
\rotatebox{90}{\small (b) EncDec} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0200.png}
\\
\rotatebox{90}{\small (c) RNN} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0200.png}
\\
\rotatebox{90}{\small (d) LSTM}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0200.png}
\\
\rotatebox{90}{\small (e) Ours}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0200.png}
\\
\rotatebox{90}{\small (f) GT}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0200.png}
\end{tabular}
\caption{The evaluation of the Transformer network in our model for wrinkle synthesis.
From top to bottom we show (a) the input coarse mesh from physics-based simulation, (b) the results with an encoder-decoder without temporal modules, (c) the results with RNN \cite{chung2014empirical}, (d) the results with LSTM \cite{hochreiter1997long}, (e) ours, and (f) the ground truth generated by PBS.}
\label{fig:transformer_w_o_tshirt}
\end{figure}
\subsection{\YL{Evaluation of} Network Components}
We evaluate the effectiveness of our network components in two respects: the capability of the TS-ACAP feature and the capability of the Transformer network.
We evaluate our method qualitatively and quantitatively on different datasets.
\textbf{Feature Representation Evaluation}.
To verify the effectiveness of our TS-ACAP feature, we compare per-vertex position errors against other feature representations to evaluate the generated shapes on the different datasets quantitatively.
We compare our method using the TS-ACAP feature with variants of our transduction method using 3D vertex coordinates and ACAP, with network layers and parameters adjusted accordingly to optimize the performance of each variant.
The details of numerical comparison are shown in Table \ref{table:feature_compare}.
ACAP and TS-ACAP show quantitative improvements over 3D coordinates.
In Fig. \ref{fig:ablationstudy_coordiniates_skirt}, we show several examples of animated skirts comparing coordinates and TS-ACAP.
The results using coordinates show a rough appearance, unnatural deformations and some artifacts, especially in the highlighted regions with details shown in the close-ups. Our results with TS-ACAP are more similar to the ground truth than those with coordinates.
ACAP suffers from temporal inconsistency, so its results frequently shake or jump.
Although the use of the Transformer network can somewhat mitigate this issue, such artifacts can appear even with the Transformer.
Fig.~\ref{fig:jump_acap} shows three consecutive frames from a testing sequence in the DISK dataset.
The results with TS-ACAP show more consistent wrinkles than those with ACAP, thanks to the temporal constraints.
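The per-vertex error used in these comparisons is the root-mean-square of the Euclidean distances between corresponding vertices of the synthesized and ground-truth meshes. A minimal sketch of this metric (the vertex arrays here are placeholders, not data from our experiments):

```python
import numpy as np

def per_vertex_rmse(pred, gt):
    """RMSE over per-vertex Euclidean distances between two meshes
    with identical connectivity; pred and gt are (N, 3) vertex arrays."""
    d = np.linalg.norm(pred - gt, axis=1)   # per-vertex distance
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(0)
gt = rng.standard_normal((100, 3))          # placeholder ground-truth vertices
pred = gt + 0.01 * rng.standard_normal((100, 3))  # slightly perturbed prediction
print(per_vertex_rmse(pred, gt))
```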
\textbf{Transformer Network Evaluation}.
We also evaluate the impact of the Transformer network in our pipeline.
We compare our method to an encoder-decoder network without the temporal modules, and to our pipeline with a recurrent neural network (RNN) and with a long short-term memory (LSTM) module.
An example of T-shirts is given in Fig. \ref{fig:transformer_w_o_tshirt}, showing 5 frames in order.
The results without any temporal modules show artifacts on the sleeves and neckline, since these regions undergo strong forces.
The models using RNN and LSTM stabilize the sequence by eliminating dynamic and detailed deformation, but all their results keep the wrinkles on the chest from the initial state, lacking rich dynamics.
Moreover, they are not able to generate stable and realistic garment animations that look similar to the ground truth,
while our method with the Transformer network clearly improves the temporal stability, producing results close to the ground truth.
We also quantitatively evaluate the performance of the Transformer network \YL{in our method} via per-vertex error.
As shown in Table \ref{table:transformer_compare}, the RMSE of our model is smaller than that of the other models.
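The frame-level attention at the core of the Transformer can be sketched as scaled dot-product attention over a sequence of per-frame feature vectors. The following is a single-head illustration without learned projections or positional encoding, i.e. a schematic of the mechanism rather than our trained DeformTransformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (T, d) arrays of per-frame features.
    Each output frame is a softmax-weighted mix of all T frames."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (T, T) frame-to-frame affinity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

T, d = 6, 8                                       # 6 frames, 8-dim features
rng = np.random.default_rng(1)
x = rng.standard_normal((T, d))
out, w = scaled_dot_product_attention(x, x, x)    # self-attention over the sequence
print(out.shape, w.shape)
```

Because every output frame attends to all frames at once, such a model can be trained in parallel over the sequence, unlike RNN or LSTM layers that must recur frame by frame.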
\section{Conclusion and Future Work}\label{sec:conclusion}
In this paper, we introduce a novel algorithm for synthesizing robust and realistic cloth animations via deep learning.
To achieve this, we propose a geometric deformation representation named TS-ACAP, which embeds the details well and ensures temporal consistency.
Benefiting from this deformation-based feature, our algorithm has no explicit requirement of tracking between coarse and fine meshes.
We also use the Transformer network based on attention mechanisms to map the coarse TS-ACAP to fine TS-ACAP, maintaining the stability of our generation.
Quantitative and qualitative results reveal that our method can synthesize realistic-looking wrinkles in various datasets, such as draping tablecloth, tight or \YL{loose} garments dressed on human bodies, etc.
Since our algorithm synthesizes \YL{details} based on the coarse meshes, the time for coarse simulation is unavoidable.
Especially for tight garments like T-shirts and pants, the collision solving phase is time-consuming.
In the future, we intend to generate coarse sequences for tight cloth via skinning-based methods in order to reduce the computation for our pipeline.
Another limitation is that our current network cannot handle all kinds of garments with different topologies.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{C}{reating} dynamic general clothes or garments on animated characters has been a long-standing problem in computer graphics (CG).
In the CG industry, physics-based simulations (PBS) are used to achieve realistic and detailed folding patterns for garment animations.
However, it is time-consuming and requires expertise to synthesize fine geometric details since high-resolution meshes with tens of thousands or more vertices are often required.
For example, physics-based simulation takes 10 seconds per frame for the detailed skirt animation shown in Fig.~\ref{fig:lrhrsim1}.
Not surprisingly, garment animation remains a bottleneck in many applications.
Recently, data-driven methods provide alternative solutions to fast and effective wrinkling behaviors for garments.
Depending on human body poses, some data-driven methods~\cite{wang10example,Feng2010transfer,deAguiar10Stable,santesteban2019learning, wang2019learning} are capable of generating tight cloth animations successfully.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\multicolumn{3}{c}{
\includegraphics[width=1.0\linewidth]{pictures/wireframe2_1.pdf}} \\
(a) coarse skirt & (b) tracked skirt & (c) fine skirt
\end{tabular}
\caption{\small \cl{One frame of \YL{skirt in different representations.} (a) \YL{coarse mesh} (207 triangles), (b) \YL{tracked mesh} (13,248 triangles) and (c) \YL{fine mesh} (13,248 triangles). \YL{Both coarse and fine meshes are obtained by simulating the skirt using a physics-based method \cl{\cite{Narain2012AAR}}. The tracked mesh is obtained with physics-based simulation involving additional constraints to track the coarse mesh.} The tracked mesh exhibits stiff folds while the wrinkles in the fine simulated mesh are more realistic.}%
}
\label{fig:lrhrsim1}
\end{figure}
Unfortunately, they are not suitable for loose garments, such as skirts, since the deformation of wrinkles cannot be defined by a static mapping from a character’s pose.
Instead of human poses, wrinkle augmentation on coarse simulations provides another alternative.
It utilizes coarse simulations with fast speed to cover a high-level deformation and leverages learning-based methods to add realistic wrinkles.
Previous methods~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} commonly require dense correspondences between coarse and fine meshes, so that local details can be added without affecting global deformation.
\YL{Such methods also require coarse meshes to be sufficiently close to fine meshes, as they only add details to coarse meshes.}
To maintain the correspondences for training data and ensure closeness between coarse and fine meshes, weak-form constraints such as various test functions~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} are applied to make fine meshes track the coarse meshes,
\YL{but as a result, the obtained high-resolution meshes do not fully follow physical behavior, leading to animations that lack realism. An example is shown in Fig.~\ref{fig:lrhrsim1} where the tracked skirt (b) loses a large amount of wrinkles which should appear when simulating on fine meshes (c).}
Without requiring the constraints between coarse and fine meshes, we propose
\gl{the DeformTransformer network
to synthesize detailed thin shell animations from coarse ones, based on deformation transfer.}
This is inspired by the similarity observed between pairs of coarse and fine meshes generated by PBS. %
Although the positions of vertices from two meshes are not aligned, the overall deformation is similar, so it is possible to predict fine-scale deformation with coarse simulation results.
Most previous works~\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} use explicit vertex coordinates to represent 3D meshes, which are sensitive to translations and rotations,
so they require good alignments between low- and high-resolution meshes.
In our work, we regard the cloth animations as non-rigid deformation and propose a novel representation for mesh sequences, called TS-ACAP (Temporal and Spatial As-Consistent-As-Possible) representation.
TS-ACAP is a local deformation representation, capable of representing and solving large-scale deformation problems, while maintaining the details of meshes.
Compared to the original ACAP representation~\cite{gao2019sparse}, TS-ACAP is fundamentally designed to ensure the temporal consistency of the extracted feature sequences, while maintaining the ability of ACAP to cope with large-scale deformations.
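To give a rough feel for what deformation-based representations such as ACAP encode, the local deformation of a mesh element can be described by a deformation gradient, which polar decomposition splits into a rotation and a symmetric scale/shear part. The sketch below shows only this generic building block, not the actual ACAP/TS-ACAP construction (which additionally resolves rotation ambiguities across vertices and frames):

```python
import numpy as np

def polar_decompose(T):
    """Split a 3x3 deformation gradient T into a rotation R and a
    symmetric scale/shear matrix S with T = R @ S, via the SVD."""
    U, sigma, Vt = np.linalg.svd(T)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # keep R a proper rotation
        U[:, -1] *= -1
        sigma = sigma.copy()
        sigma[-1] *= -1
        R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S

# Example: a rotation about z combined with anisotropic scaling.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
T = Rz @ np.diag([1.2, 0.8, 1.0])
R, S = polar_decompose(T)
print(np.allclose(R @ S, T))
```

Storing rotations and scale/shear parts per element, rather than absolute vertex coordinates, is what makes such features insensitive to global translation and rotation.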
With \YL{TS-ACAP} representations for both coarse and fine meshes, we leverage a sequence transduction network to map the deformation from coarse to fine level to assure the temporal coherence of generated sequences.
Unlike existing works using recurrent neural networks (RNN)~\cite{santesteban2019learning}, we utilize the Transformer network~\cite{vaswani2017attention}, an architecture consisting of frame-level attention mechanisms for our mesh sequence transduction task.
It is based entirely on attention without recursion modules so can be trained significantly faster than architectures based on recurrent %
layers.
With \YL{temporally consistent features and the Transformer network, \YL{our method achieves} stable general cloth synthesis with fine details in an efficient manner.}
In summary, the main contributions of our work are as follows:
\begin{itemize}
\item We propose a novel framework for the synthesis of cloth dynamics, by learning a temporally consistent mapping from low-resolution meshes to high-resolution meshes with realistic dynamics, which is $10 \sim 35$ times faster than PBS \cite{Narain2012AAR}.
\item \YL{To achieve this, we propose a \cl{temporally and spatially as-consistent-as-possible deformation representation (TS-ACAP)} to represent the cloth mesh sequences. It is able to deal with large-scale deformation, essential for mapping between coarse and fine meshes, while ensuring temporal coherence.}
\item Based on TS-ACAP, we further design an effective neural network architecture (named DeformTransformer) by improving the Transformer network, which enables high-quality synthesis of dynamic wrinkles with rich details on thin shells and maintains the temporal consistency of the generated high-resolution mesh sequences.
\end{itemize}
We qualitatively and quantitatively evaluate our method for various cloth types (T-shirts, pants, skirts, square and disk tablecloth) with different motion sequences.
In Sec.~\ref{sec:related_work}, we review the work most related to ours. We then give the detailed description of our method in Sec.~\ref{sec:approach}.
Implementation details are presented in Sec.~\ref{sec:implementation}. We present experimental results, including extensive
comparisons with state-of-the-art methods in Sec.~\ref{sec:results}, and finally, we draw conclusions and \YL{discuss future work} in Sec.~\ref{sec:conclusion}.
\section{Related work} \label{sec:related_work}
\subsection{Cloth Animation}
Physics-based techniques for realistic cloth simulation have been widely studied in computer graphics, \YL{using methods such as} implicit Euler integrator \cite{BW98,Harmon09asynchronous}, iterative optimization \cite{terzopoulos87elastically,bridson03wrinkles,Grinspun03shell}, collision detection and response \cite{provot97collision,volino95collision}, etc.
Although such techniques can generate realistic cloth dynamics, they are time-consuming for detailed cloth synthesis, and the robustness and efficiency of simulation systems are also of concern.
\YL{To address these, alternative methods have been developed to generate} the dynamic details of cloth animation via adaptive techniques \cite{lee2010multi,muller2010wrinkle,Narain2012AAR}, data-driven approaches \cite{deAguiar10Stable, Guan12DRAPE, wang10example, kavan11physics,zurdo2013wrinkles} and deep learning-based methods \cite{chen2018synthesizing,gundogdu2018garnet,laehner2018deepwrinkles,zhang2020deep}, etc.
Adaptive techniques \cite{lee2010multi, muller2010wrinkle} usually simulate a coarse model by simplifying the smooth regions and \YL{applying interpolation} to reconstruct the wrinkles, \YL{taking normal or tangential degrees of freedom into consideration.}
Different from simulating a reduced model with postprocessing detail augmentation, Narain {\itshape et al.} \cite{Narain2012AAR} directly generate dynamic meshes in \YL{the} simulation phase through adaptive remeshing, at the expense of increasing \YL{computation time}.
Data-driven methods have drawn much attention since they offer faster cloth animations than physical models.
With \YL{a} constructed database of \YL{high-resolution} meshes, researchers have proposed many techniques depending on the motions of human bodies with linear conditional models\cite{deAguiar10Stable, Guan12DRAPE} or secondary motion graphs \cite{Kim2013near, Kim2008drivenshape}.
However, these methods are limited to tight garments and not suitable for skirts or cloth with more freedom.
An alternative line \YL{of research} is to augment details on coarse simulations \YL{by exploiting knowledge from a} database of paired meshes, to generalize the performance to complicated testing scenes.
In this line, in addition to wrinkle synthesis methods \YL{based on} bone clusters \cite{Feng2010transfer} or human poses \cite{wang10example} for fitted clothes, there are some approaches \YL{that investigate how to} learn a mapping from a coarse garment shape to a detailed one for general \YL{cases} of free-flowing cloth simulation.
Kavan {\itshape et al.} \cite{kavan11physics} present linear upsampling operators to \YL{efficiently} augment \YL{medium-scale} details on coarse meshes.
Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} define wrinkles as local displacements and use \YL{an} example-based algorithm to enhance low-resolution simulations.
In their approaches, the high-resolution cloth is required to track the low-resolution cloth, and thus cannot exhibit full high-resolution dynamics.
Recently deep learning-based methods have been successfully applied for 3D animations of human \YL{faces}~\cite{cao2016real, jiang20183d}, hair \cite{zhang2018modeling, yang2019dynamic} and garments \cite{liu2019neuroskinning, wang2019learning}.
As for garment synthesis, some approaches \cite{laehner2018deepwrinkles, santesteban2019learning, patel2020tailornet} are proposed to utilize a two-stream strategy consisting of global garment fit and local \YL{wrinkle} enhancement.
L{\" a}hner {\itshape et al.} \cite{laehner2018deepwrinkles} present DeepWrinkles, \YL{which recovers} the global deformation from \YL{a} 3D scan system and \YL{uses a} conditional \YL{generative adversarial network} to enhance a low-resolution normal map.
Zhang {\itshape et al.} \cite{zhang2020deep} further generalize the augmentation method with normal maps to complex garment types as well as various motion sequences.
\YL{These approaches add wrinkles on normal maps \YL{rather than geometry}, and thus their effectiveness is restricted to adding fine-scale visual details, not large-scale dynamics.}
Based on \YL{the} skinning representation, some algorithms \cite{gundogdu2018garnet,santesteban2019learning} use neural networks to generalize garment synthesis algorithms to multiple body shapes.
\YL{In addition, other works are} devoted to \YL{generalizing neural networks} to various cloth styles \cite{patel2020tailornet} or cloth materials \cite{wang2019learning}.
Beyond tight garments dressed on characters, some deep learning-based methods \cite{chen2018synthesizing, oh2018hierarchical} have been demonstrated to work for cloth animation with higher degrees of freedom.
Chen {\itshape et al.} \cite{chen2018synthesizing} represent coarse and fine meshes via geometry images and use a super-resolution network to learn the mapping.
Oh {\itshape et al.} \cite{oh2018hierarchical} propose a multi-resolution cloth representation with \YL{fully} connected networks to add details hierarchically.
Since free-flowing cloth dynamics are harder for networks to learn than tight garments, the results of these methods have not reached the realism of PBS. Our method, based on a novel deformation representation and network architecture, has superior capability of learning the mapping from coarse to fine meshes, generating realistic cloth dynamics while being much faster than PBS methods.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth, trim=20 250 20 50,clip]{pictures/mainpicture2.pdf}
\caption{\small The overall architecture of our detail synthesis network. At data preparation stage, we generate low- and high-resolution \gl{thin shell} animations via coarse and fine \gl{meshes} and various motion sequences.
Then we encode the coarse meshes and the detailed meshes to a deformation representation TS-ACAP, respectively.
\YL{Our algorithm then} learns to map the coarse features to fine features %
\YL{by designing a DeformTransformer network that consists of temporal-aware encoders and decoders, and finally reconstructs the detailed animations.}
}
\label{fig:pipeline}
\end{figure*}
\subsection{Representation for 3D Meshes}
Unlike 2D images with regular grid of pixels, \YL{3D meshes have irregular connectivity which makes learning more difficult. To address this, existing deep learning based methods turn 3D meshes to a wide range of representations to facilitate processing~\cite{xiao2020survey},} such as voxels, images \YL{(such as depth images and multi-view images)}, point clouds, meshes, etc.
\YL{The volumetric representation has a regular structure, but it} often suffers from \YL{extremely high memory and computation costs.}
Thus Wang {\itshape et al.} \cite{wang2017cnn} propose an octree-based convolutional neural network and encode the voxels sparsely.
Image-based representations including \YL{depth images} \cite{eigen2014depth,gupta2014learning} and multi-view images \cite{Su2015mvcnn,li20193d} are proposed to encode 3D models in a 2D domain.
It is unavoidable that both volumetric and image-based representations lose some geometric details.
Alternatively, geometry images are used in \cite{sinha2016deep,Sinha2017surfnet,chen2018synthesizing} for mesh classification or generation\YL{, which are obtained through cutting a 3D mesh to a topological disk, parameterizing it to a rectangular domain and regularly sampling the 3D coordinates in the 2D domain~\cite{gu2002geometry}.}
\YL{However, this representation} may suffer from parameterization distortion and seam line problems.
Instead of converting 3D meshes into other formats, there are recent methods \cite{tan2017autoencoder, tan2017variational, hanocka2019meshcnn} applying neural networks directly to triangle meshes with various features.
Gao {\itshape et al.} \cite{gao2016efficient} propose a deformation-based representation, called the rotation-invariant mesh difference (RIMD) which is translation and rotation invariant.
Based on the RIMD feature, Tan {\itshape et al.} \cite{tan2017variational} propose a fully connected variational autoencoder network to analyze and generate meshes.
Wu {\itshape et al.} \cite{wu2018alive} use the RIMD to generate
a 3D caricature model from a 2D caricature image.
However, it is expensive to reconstruct vertex coordinates from the RIMD feature due to the requirement of solving a very complicated optimization.
Thus it is not suitable for fast mesh generation tasks.
A faster deformation representation based on an as-consistent-as-possible (ACAP) formulation \cite{gao2019sparse} is further used to reconstruct meshes \cite{tan2017autoencoder}; it is able to cope with large rotations and is efficient for reconstruction.
Jiang {\itshape et al.} \cite{jiang2019disentangled} use ACAP to disentangle the identity and expression of 3D \YL{faces}.
They further apply ACAP to learn and reconstruct 3D human body models using a coarse-to-fine pipeline \cite{jiang2020disentangled}.
\YL{However, the ACAP feature is represented based on individual 3D meshes. When applied to a dynamic mesh sequence, it does not guarantee temporal consistency.}
We propose a \cl{temporally and spatially as-consistent-as-possible (TS-ACAP)} representation, to ensure both spatial and temporal consistency of mesh deformation.
Compared to ACAP, our TS-ACAP can also accelerate the computation of features thanks to the sequential constraints.
\subsection{Sequence Generation with \YL{DNNs (Deep Neural Networks)}}
Temporal information is crucial for stable and \gl{vivid} sequence generation. Previously, recurrent neural networks (RNN) have been successfully applied in many sequence generation tasks \cite{mikolov2010recurrent, mikolov2011extensions}. However, it is difficult to train \YL{RNNs} to capture long-term dependencies since \YL{RNNs} suffer from the vanishing gradient problem \cite{bengio1994learning}. To deal with this problem, previous works proposed some variations of RNN, including long short-term memory (LSTM) \cite{hochreiter1997long} and gated recurrent unit (GRU) \cite{cho2014properties}. These variations of RNN rely on the gating mechanisms to control the flow of information, thus performing well in the tasks that require capturing long-term dependencies, such as speech recognition \cite{graves2013speech} and machine translation \cite{bahdanau2014neural, sutskever2014sequence}. Recently, based on attention mechanisms, the Transformer network \cite{vaswani2017attention} has been verified to outperform \YL{many typical sequential models} for long sequences. This structure is able to inject the global context information into each input. Based on Transformer, impressive results have been achieved in tasks with regard to audio, video and text, \textit{e.g. } speech synthesis \cite{li2019neural, okamoto2020transformer}, action recognition \cite{girdhar2019video} and machine translation \cite{vaswani2017attention}.
We utilize the Transformer network to learn the frame-level attention which improves the temporal stability of the generated animation sequences.
\section{Approach} \label{sec:approach}
With a simulated sequence of coarse meshes $\mathcal{C} = \{\mathcal{C}_1, \dots, \mathcal{C}_n\}$ as input, our goal is to produce a sequence of fine ones $\mathcal{D} = \{\mathcal{D}_1, \dots, \mathcal{D}_n\}$ which have non-rigid deformation similar to the PBS results. Given two simulation sets of paired coarse and fine garments, we extract the TS-ACAP representations respectively, \YL{and} then use our proposed DeformTransformer network to learn the \YL{mapping} \YL{from the low-resolution space to the high-resolution space}. \YL{As illustrated previously in Fig.~\ref{fig:lrhrsim1}, such a mapping involves deformations beyond adding fine details.}
Once the network is trained by the paired examples, a consistent and detailed animation $\mathcal{D}$ can be synthesized for each input sequence $\mathcal{C}$.
\subsection{Overview}
The overall architecture of our detail synthesis network is illustrated in Fig. \ref{fig:pipeline}.
To synthesize realistic \gl{cloth animations}, we propose a method to simulate coarse meshes first and learn a \YL{temporally-coherent} mapping to the fine meshes.
To realize our goal, we construct datasets including low- and high-resolution cloth animations, \textit{e.g. } coarse and fine garments dressed on a human body of various motion sequences.
To efficiently extract localized features with temporal consistency, we propose a new deformation representation, called TS-ACAP (temporally \YL{and spatially} as-consistent-as-possible), which is able to cope with both large rotations and unstable sequences. It also has significant advantages: it is efficient to compute for \YL{mesh} sequences and its derivatives have closed-form solutions.
Since fine models typically have more than ten thousand vertices to simulate realistic wrinkles, it is hard for the network to directly map the coarse features to the high-dimensional fine ones.
Therefore, \YL{convolutional encoder networks are} applied to encode \YL{coarse and fine meshes in the TS-ACAP representation} into \YL{their latent spaces}, respectively.
The TS-ACAP representation consists of local rotation and scaling/shearing parts at each vertex, so we perform convolution \YL{operations} on vertices \YL{to learn to extract useful features using shared local convolutional kernels.}
With encoded feature sequences, a sequence transduction network is proposed to learn the mapping from coarse to fine TS-ACAP sequences.
Unlike existing works using recurrent neural networks \YL{(RNNs)}~\cite{santesteban2019learning}, we use the Transformer \cite{vaswani2017attention}, a sequence-to-sequence network architecture, based on frame-level attention mechanisms for our detail synthesis task, \YL{which is more efficient to learn and leads to superior results.}
\subsection{Deformation Representation}
\YL{As discussed before, large-scale deformations are essential to represent \gl{thin shell dynamics such as }cloth animations, because folding and wrinkle patterns during animation can often be complicated. Moreover, cloth animations come in the form of sequences, hence temporal coherence is very important for realism. Using 3D coordinates directly cannot cope with large-scale deformations well, and existing deformation representations are generally designed for static meshes; directly applying them to cloth animation sequences on a frame-by-frame basis does not take temporal consistency into account. }
To cope with this problem, we propose a mesh deformation feature with spatial-temporal consistency, called TS-ACAP, to represent the coarse and fine deformed shapes, which exploits the localized information effectively and reconstructs \YL{meshes} accurately.
Take \YL{the coarse meshes} $\mathcal{C}$ for instance; \YL{the fine meshes $\mathcal{D}$ are processed in the same way.} \YL{Assume that a sequence} of coarse meshes contains $n$ models with the same topology, each denoted as $\mathcal{C}_{t}$ \YL{($1\leq t \leq n$)}.
\YL{A mesh with the same topology is chosen as the reference model, denoted as $\mathcal{C}_{0}$. For example, for garment animation, this can be the garment mesh worn by a character in the T pose.}
Let $\mathbf{p}_{t,i} \in \mathbb{R}^{3}$ denote the position of the $i^{\rm th}$ vertex on the $t^{\rm th}$ mesh.
To represent the local shape deformation, the deformation gradient $\mathbf{T}_{t,i} \in \mathbb{R}^{3 \times 3}$ can be obtained by minimizing the following energy:
\begin{equation}
\mathop{\arg\min}_{\mathbf{T}_{t,i}} \ \ \mathop{\sum}_{j \in \mathcal{N}_i} c_{ij} \| (\mathbf{p}_{t,i} - \mathbf{p}_{t,j}) - \mathbf{T}_{t,i} (\mathbf{p}_{0,i} - \mathbf{p}_{0,j}) \|_2^2 \label{con:computeDG}
\end{equation}
where $\mathcal{N}_i$ is the one-ring neighbors of the $i^{\rm th}$ vertex, and $c_{ij}$ is the cotangent weight $c_{ij} = \cot \alpha_{ij} + \cot \beta_{ij} $ \cite{sorkine2007rigid,levi2014smooth}, where $\alpha_{ij}$
and $\beta_{ij}$ are angles opposite to the edge connecting the $i^{\rm th}$ and $j^{\rm th}$ vertices.
The main drawback of the deformation gradient representation is that it cannot handle large-scale rotations, which often \YL{happen} in cloth animation.
Using polar decomposition, the deformation gradient $\mathbf{T}_{t,i} $ can be decomposed into a rotation part and a scaling/shearing part $\mathbf{T}_{t,i} = \mathbf{R}_{t,i}\mathbf{S}_{t,i}$.
The scaling/shearing transformation $\mathbf{S}_{t,i}$ is uniquely defined, while the rotation $\mathbf{R}_{t,i}$ \YL{corresponds to infinitely many possible rotation angles (differing by multiples of $2\pi$, along with possible opposite orientation of the rotation axis)}. Typical formulations constrain the rotation angle to be within $[0, \pi]$, which is unsuitable for smooth large-scale animations.
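For concreteness, the minimization defining $\mathbf{T}_{t,i}$ has a closed-form least-squares solution, and its polar decomposition can be computed from an SVD. The following is a minimal NumPy sketch; the function names are ours, and it is illustrative only (e.g. uniform weights may be passed instead of cotangent weights):

```python
import numpy as np

def deformation_gradient(p0_i, p0_nbrs, pt_i, pt_nbrs, weights=None):
    """Least-squares deformation gradient T minimizing
    sum_j c_ij ||(p_t,i - p_t,j) - T (p_0,i - p_0,j)||^2."""
    e0 = p0_i - p0_nbrs          # rest-pose edge vectors, shape (k, 3)
    et = pt_i - pt_nbrs          # deformed edge vectors, shape (k, 3)
    if weights is None:
        weights = np.ones(len(e0))
    A = (weights[:, None] * et).T @ e0   # sum_j c_j e_t e_0^T
    B = (weights[:, None] * e0).T @ e0   # sum_j c_j e_0 e_0^T
    return A @ np.linalg.inv(B)

def polar_decompose(T):
    """T = R S with R a proper rotation and S symmetric."""
    U, sigma, Vt = np.linalg.svd(T)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])           # sign fix so that det(R) = +1
    R = U @ D @ Vt
    S = Vt.T @ D @ np.diag(sigma) @ Vt
    return R, S
```

The sign correction in `polar_decompose` ensures $\mathbf{R}$ remains a proper rotation ($\det \mathbf{R} = 1$) even if $\mathbf{T}$ contains a reflection.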
In order to handle large-scale rotations, we first require the orientations of rotation axes and rotation angles of \YL{spatially} adjacent vertices \YL{on the same mesh} to be as consistent as possible.
Especially for our sequence data, we further add constraints for adjacent frames to ensure the temporal consistency of the orientations of rotation axes and rotation angles on each vertex.
We first consider consistent orientations for the rotation axes:
\begin{flalign}\label{eqn:axis}
\arg\max_{{o}_{t,i}} \sum_{(i,j) \in \mathcal{E} } {o}_{t,i}{o}_{t,j} \cdot s(\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}, \theta_{t,i}, \theta_{t,j}) \nonumber\\
+ \sum_{i \in \mathcal{V} } {o}_{t,i} \cdot s(\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t-1,i}, \theta_{t,i}, \theta_{t-1,i}) \nonumber\\
{\rm s.t.} \quad
{o}_{t,1} = 1, {o}_{t,i} = \pm 1 (i \neq 1) \quad
\end{flalign}
where $t$ is the \YL{index} of \YL{the} frame, $\mathcal{E}$ is the edge set, and $\mathcal{V}$ is the vertex set. \YL{Denote by $(\boldsymbol{\omega}_{t,i}, \theta_{t,i})$ one possible choice for the rotation axis and rotation angle that match $\mathbf{R}_{t,i}$. $o_{t,i} \in \{+1, -1\}$ specifies whether the rotation axis is flipped ($o_{t,i} = 1$ if the rotation axis is unchanged, and $-1$ if its opposite is used instead). }\YL{The first term promotes spatial consistency while the second term promotes temporal consistency.}
$s(\cdot)$ is a function measuring orientation consistency, which is defined as follows:
\begin{equation}
s(\cdot)=\left\{
\begin{aligned}
0 & , & |\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}|\leq\epsilon_1 \; {\rm or} \;
\theta_{t,i}<\epsilon_2 \; {\rm or} \; \theta_{t,j}<\epsilon_2 \\
1 & , & {\rm Otherwise~if}~\boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}>\epsilon_1 \\
-1 & , & {\rm Otherwise~if}~ \boldsymbol{\omega}_{t,i} \cdot \boldsymbol{\omega}_{t,j}<-\epsilon_1 \\
\end{aligned}
\right.
\end{equation}
\YL{The first case here ignores pairs where either rotation angle is near zero (the rotation axis is then not well defined) or the two axes are nearly perpendicular, so their relative orientation is ambiguous.}
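The consistency function $s(\cdot)$ is straightforward to implement; a direct transcription, with illustrative threshold values for $\epsilon_1$ and $\epsilon_2$ (ours, not the paper's):

```python
import numpy as np

def orientation_score(w_a, w_b, theta_a, theta_b, eps1=1e-3, eps2=1e-3):
    """Orientation-consistency score s(.) between two axis-angle pairs.
    Returns 0 when either rotation is near zero (axis ill-defined) or the
    axes are near-perpendicular; otherwise +/-1 depending on whether the
    axes point in the same or in opposite directions."""
    dot = float(np.dot(w_a, w_b))
    if abs(dot) <= eps1 or theta_a < eps2 or theta_b < eps2:
        return 0
    return 1 if dot > eps1 else -1
```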
As for rotation angles, \YL{we optimize the following}
\begin{flalign}\label{eqn:angle}
\arg\min_{r_{t,i}} &\sum_{(i,j) \in \mathcal{E} } \| (r_{t,i} \cdot 2\pi+{o}_{t,i}\theta_{t,i}) - (r_{t,j}\cdot 2\pi+{o}_{t,j}\theta_{t,j}) \|_2^{2} &\nonumber\\
+ &\sum_{i \in \mathcal{V} } \| (r_{t,i} \cdot 2\pi+{o}_{t,i}\theta_{t,i}) - (r_{t-1,i}\cdot 2\pi+{o}_{t-1,i}\theta_{t-1,i}) \|_2^{2} \nonumber\\
{\rm s.t.}& \quad r_{t,i} \in \mathbb{Z},~~r_{t,1} = 0.
\end{flalign}
where $r_{t,i} \in \mathbb{Z}$ specifies how many $2\pi$ rotations should be added to the rotation angle.
\YL{The two terms here promote spatial and temporal consistencies of rotation angles, respectively.
These optimizations can be solved using integer programming; we use the mixed-integer solver CoMISo~\cite{comiso2009}, which is efficient in practice. See~\cite{gao2019sparse} for more details.}
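To give intuition for the angle optimization, the sketch below implements a greatly simplified, temporal-only greedy relaxation for a single vertex: each frame picks the integer $r_t$ that keeps the unwrapped angle closest to the previous frame, which is essentially phase unwrapping. The full method instead couples all vertices spatially and temporally and solves a mixed-integer problem:

```python
import numpy as np

def unwrap_temporal(angles):
    """Greedy temporal-only relaxation of the angle optimization: for each
    frame choose the integer r_t so that theta_t + 2*pi*r_t stays closest
    to the previous unwrapped angle. (The full method also couples
    spatially adjacent vertices via a mixed-integer program.)"""
    out = [float(angles[0])]
    for theta in angles[1:]:
        r = round(float(out[-1] - theta) / (2.0 * np.pi))
        out.append(float(theta) + 2.0 * np.pi * r)
    return np.array(out)
```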
A similar process is used to compute the TS-ACAP representation of the fine meshes.
\cl{Compared to the ACAP representation, our TS-ACAP representation adds temporal constraints to the optimization of rotation axes and angles, which makes it more suitable for consecutive large-scale deformation \YL{sequences}.
We compare ACAP~\cite{gao2019sparse} and our TS-ACAP using a simple example of a simulated disk-shaped cloth animation sequence. Once we obtain deformation representations of the meshes in the sequence,
we interpolate two meshes, the initial state mesh and a randomly selected frame, using linear interpolation of \YL{shape representations}.
\YL{In Fig. \ref{fig:interpolation}, we demonstrate the interpolation results with ACAP representation, which shows that it cannot handle such challenging cases with complex large-scale deformations. In contrast, with our temporally and spatially as-consistent-as-possible optimization, our TS-ACAP representation is able to produce consistent interpolation results.}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{pictures/acap_tacap1_1.pdf}%
\caption{\small Comparison of shape interpolation results with different deformation representations, ACAP and TS-ACAP. %
(a) and (b) are the source (t = 0) and target (t = 1) models with large-scale deformation to be interpolated.
The first row shows the interpolation results by ACAP, and the second row shows the results with our TS-ACAP.
\gl{The models interpolated with the ACAP feature are plausible in each frame, but they are not consistent in the temporal domain.}
}
\label{fig:interpolation}
\end{figure}
}
\subsection{DeformTransformer Networks}
Unlike \cite{tan2017variational, wang2019learning}, which use fully connected layers for the mesh encoder, we perform convolutions \YL{on meshes to learn to extract useful features using compact shared convolutional kernels.}
As illustrated in Fig. \ref{fig:pointconv}, we use a convolution operator on vertices \cite{duvenaud2015convolutional, tan2017autoencoder} where the output at a vertex is obtained as a linear combination of the inputs at the vertex and its one-ring neighbors, along with a bias.
\YL{The input to our network is the TS-ACAP representation: for the $i^{\rm th}$ vertex of the $t^{\rm th}$ mesh, we collect the non-trivial coefficients from the rotation $\mathbf{R}_{t, i}$ and scaling/shearing $\mathbf{S}_{t,i}$, which form a 9-dimensional feature vector (see~\cite{gao2019sparse} for more details). Denote by $\mathbf{f}_i^{(k-1)}$ and $\mathbf{f}_i^{(k)}$ the features of the $i^{\rm th}$ vertex at the $(k-1)^{\rm th}$ and $k^{\rm th}$ layers, respectively. The convolution operator is defined as follows:
\begin{equation}
\mathbf{f}_i^{(k)} =
\mathbf{W}_{point}^{(k)} \cdot \mathbf{f}_{i}^{(k-1)} +
\mathbf{W}_{neighbor}^{(k)} \cdot \frac{1}{D_i} \mathop{\sum}_{j=1}^{D_i} \mathbf{f}_{n_{ij}}^{(k-1)}
+ \mathbf{b}^{(k)}
\end{equation}
where $\mathbf{W}_{point}^{(k)}$, $\mathbf{W}_{neighbor}^{(k)}$ and $\mathbf{b}^{(k)}$ are learnable parameters for the $k^{\rm th}$ convolutional layer, $D_i$ is the degree of the $i^{\rm th}$ vertex, and $n_{ij}$ ($1 \leq j \leq D_i$) is the $j^{\rm th}$ neighbor of the $i^{\rm th}$ vertex.
}
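The per-vertex convolution above can be transcribed directly in NumPy (the shapes and names here are ours):

```python
import numpy as np

def vertex_conv(features, neighbors, W_point, W_neighbor, b):
    """Per-vertex convolution: each output feature is a linear map of the
    vertex's own feature plus a linear map of the average of its one-ring
    neighbors' features, plus a bias.

    features : (V, C_in) array, one feature row per vertex
    neighbors: list of V lists of neighbor indices
    W_point, W_neighbor : (C_out, C_in) weight matrices
    b : (C_out,) bias
    """
    out = np.empty((features.shape[0], W_point.shape[0]))
    for i, nbrs in enumerate(neighbors):
        avg = features[nbrs].mean(axis=0)   # (1/D_i) * sum over neighbors
        out[i] = W_point @ features[i] + W_neighbor @ avg + b
    return out
```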
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\linewidth]{pictures/pointconv.pdf}
\caption{\small Illustration of the convolutional operator on meshes.
The result of convolution for each vertex is obtained as a linear combination of the inputs at the vertex and its 1-ring neighbors, along with a bias.
}
\label{fig:pointconv}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth, trim=0 50 0 150,clip]{pictures/transformer.pdf} %
\caption{\small The architecture of our DeformTransformer network.
The coarse and fine mesh sequences are embedded into feature vectors using the TS-ACAP representation which \YL{is} defined \YL{at} each vertex as a 9-dimensional vector.
Then two convolutional \YL{encoders} map coarse and fine features to \YL{their latent spaces}, respectively.
These latent vectors are fed into the DeformTransformer network, \cl{which consists of the encoder and decoder, each including a stack of $N=2$ identical blocks with 8-head attention,} to recover \YL{temporally-coherent} deformations.
Notice that in \YL{the} training phase the input high-resolution TS-ACAP \YL{features are those from the ground truth},
\YL{but during testing, these features are initialized to zeros, and once a new high-resolution frame is generated, its TS-ACAP feature is added.}
With predicted feature vectors, realistic and stable cloth animations are generated.
}
\label{fig:Transformer}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\linewidth, trim=18 33 18 3,clip]{pictures/tshirt06_08_poseswithhuman_collision/temp0270keyshot_unsolve.png}
\includegraphics[width=0.4\linewidth, trim=18 33 18 3,clip]{pictures/tshirt06_08_poseswithhuman_collision/temp0270keyshot_solve.png}
\caption{\small For tight clothing, data-driven cloth deformations may suffer from apparent collisions with the body (left). We apply a simple postprocessing step to push
\YL{the collided} T-shirt vertices outside the body (right).
}
\label{fig:collisionrefinement}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth, trim=50 150 100 150,clip]{pictures/dataset.pdf}
\caption{\small
We test our algorithm on 5 datasets including TSHIRT, PANTS, SKIRT, SHEET and DISK.
The former three are garments (T-shirts, pants, and skirts) dressed on a template body and simulated with various motion sequences.
The SHEET dataset is a square sheet interacting with various obstacles.
The DISK dataset is a round tablecloth draping on a cylinder in the wind of various velocities.
Each cloth shape has a coarse resolution (top) and a fine resolution (bottom).
}
\label{fig:dataset}
\end{figure*}
Let $\mathcal{F}_\mathcal{C} = \{\mathbf{f}_{\mathcal{C}_1}, \dots, \mathbf{f}_{\mathcal{C}_n}\}$ be the sequence of coarse mesh features, and $\mathcal{F}_\mathcal{D} = \{\mathbf{f}_{\mathcal{D}_1}, \dots, \mathbf{f}_{\mathcal{D}_n}\}$ be its counterpart, the sequence of detailed mesh features.
To synthesize $\mathcal{F}_\mathcal{D}$ from $\mathcal{F}_\mathcal{C}$, the DeformTransformer framework is proposed to solve this sequence-to-sequence problem.
The DeformTransformer network consists of several stacked encoder-decoder layers, \YL{denoted} as $Enc(\cdot)$ and $Dec(\cdot)$. To take the order of the sequence into consideration, triangle positional embeddings \cite{vaswani2017attention} are injected into frames of $\mathcal{F}_\mathcal{C}$ and $\mathcal{F}_\mathcal{D}$, respectively.
The encoder takes the coarse mesh features as input and encodes them into a \YL{temporally-dependent} hidden space.
It is composed of identical blocks, \YL{each} with two sub-modules: one is the multi-head self-attention mechanism, and the other is the frame-wise fully connected feed-forward network.
We also employ a residual connection around each of these two sub-modules, followed \YL{by} layer normalization.
The multi-head attention is able to build the dependence between any frames, thus ensuring that each input can consider global context of the whole sequence. Meanwhile, compared with other sequence models, this mechanism splits \YL{the} attention into several subspaces so that it can model the frame \YL{relationships} in multiple aspects.
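The triangle positional embeddings injected into the frames are the fixed sinusoidal encodings of \cite{vaswani2017attention}; a minimal sketch, assuming an even feature dimension (the dimension values below are ours):

```python
import numpy as np

def positional_encoding(n_frames, d_model):
    """Fixed sinusoidal ('triangle') positional embeddings:
    PE[t, 2i] = sin(t / 10000^(2i/d)), PE[t, 2i+1] = cos(same angle).
    Assumes d_model is even."""
    pos = np.arange(n_frames)[:, None]             # (T, 1)
    i = np.arange(0, d_model, 2)[None, :]          # (1, d/2)
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((n_frames, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```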
With the encoded latent vectors $Enc(\mathcal{F}_\mathcal{C})$, the decoder network attempts to reconstruct a sequence of fine mesh features.
The decoder has two parts:
the first part takes the fine mesh sequence $\mathcal{F}_\mathcal{D}$ as \YL{input} and
encodes it similarly to the encoder.
\YL{Unlike the encoder, detailed meshes are generated sequentially, and when predicting frame $t$, it should not attend to subsequent frames (with the position after frame $t$). To achieve this, we utilize a masking process
for the self-attention module.} The second part performs multi-head attention over the output of the encoder, thus capturing the long-term dependence between coarse mesh features $\mathcal{F}_\mathcal{C}$ and fine mesh features $\mathcal{F}_\mathcal{D}$.
We train the Transformer network by minimizing the mean squared error between predicted detailed features and the ground-truth.
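The look-ahead masking used in the decoder's self-attention can be sketched as follows (single-head scaled dot-product attention in NumPy; the actual network uses 8-head attention over learned projections):

```python
import numpy as np

def causal_mask(n):
    """Look-ahead mask: frame t may only attend to frames <= t."""
    return np.tril(np.ones((n, n), dtype=bool))

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention; masked-out scores are set to -inf
    before the softmax, so future frames receive zero weight."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w
```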
With predicted TS-ACAP feature vector, we reconstruct the vertex coordinates of \YL{the} target mesh\YL{, in the same way as reconstruction from ACAP features} (please refer to \cite{gao2019sparse} for details).
Our training data is generated by PBS \YL{and is collision-free}.
Since human body \YL{(or other obstacle)} information is unseen in our algorithm, it does not guarantee \YL{that the predicted cloth is free from any penetration}.
Especially for tight garments like T-shirts, collisions \YL{between the garment and the human body} are readily apparent when they happen.
We use a fast refinement method \cite{wang2019learning} to push the cloth vertices colliding with the body outside \YL{while} preserving the local wrinkle details (see Fig.~\ref{fig:collisionrefinement}).
For each vertex detected inside the body, we find its closest point over the body surface with normal and position.
Then the cloth mesh is deformed to update the vertices by minimizing an energy which penalizes the Euclidean distance and Laplacian difference between the updated mesh and the initial one (please refer to \cite{wang2019learning} for details).
The collision solving process usually takes less than 3 iterations to converge to a collision-free state.
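As a toy illustration of the detection-and-projection step, the sketch below pushes penetrating vertices outside a spherical stand-in body (an assumption of ours); the actual refinement queries the closest point on the body mesh and additionally solves a Laplacian-regularized optimization to preserve wrinkles \cite{wang2019learning}:

```python
import numpy as np

def push_outside_sphere(verts, center, radius, eps=1e-3):
    """Project cloth vertices found inside a spherical 'body' to just
    outside its surface along the outward normal. A stand-in for the
    closest-point query against a general body mesh; assumes no vertex
    lies exactly at the center."""
    out = verts.copy()
    d = verts - center
    dist = np.linalg.norm(d, axis=1)
    inside = dist < radius
    n = d[inside] / dist[inside][:, None]    # outward surface normals
    out[inside] = center + (radius + eps) * n
    return out
```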
\section{Implementation}\label{sec:implementation}
We describe the details of the dataset construction and the network architecture in this section.
\textbf{\YL{Datasets}.}
To test our method, we construct 5 datasets, called TSHIRT, PANTS, SKIRT, SHEET and DISK respectively.
The former three datasets are different types of garments, \textit{i.e. }, T-shirts, skirts and pants worn on human bodies.
Each type of garment \YL{is represented by both low-resolution and high-resolution meshes}, \YL{containing} 246 and 14,190 vertices for the T-shirts, 219 and 12,336 vertices for the skirts, 200 and 11,967 vertices for the pants.
Garments of the same type and resolution are simulated from a template mesh, which means \YL{such meshes obtained through cloth animations have the same number of vertices and the same connectivity}.
These garments are dressed on animated characters, which are obtained via driving a body \YL{in the SMPL (Skinned Multi-Person Linear) model} \cite{loper2015smpl} with publicly available motion capture data from CMU \cite{hodgins2015cmu}.
Since the motion data is captured, there are some \YL{self-collisions} or long repeated sequences.
\YL{After removing poor quality data}, we select various motions, such as dancing, walking, running, jumping etc., including 20 sequences (\YL{9031, 6134, 7680 frames in total} for TSHIRT, PANTS and SKIRT respectively).
In these motions, 18 sequences are randomly selected for training and the remaining 2 sequences for testing.
The SHEET dataset consists of a pole or a sphere of three different sizes crashing into a piece of \YL{cloth}.
The coarse mesh has 81 vertices and the fine mesh has 4,225 vertices.
There are \YL{4,000} frames in the SHEET dataset, of which 3,200 frames are used for training and \YL{the remaining} 800 frames for testing.
We construct the DISK dataset by draping a round tablecloth to a cylinder in the wind, with 148 and 7,729 vertices for coarse and fine meshes respectively.
We adjust the velocity of the wind to get various animation sequences, of which 1,600 frames are used for training and 400 frames for testing.
\begin{table*}[ht]
\renewcommand\arraystretch{1.5}
\caption{ Statistics and timing (sec/\YL{frame}) of the testing examples including five types of \YL{thin shell animations}.
}
\label{table:runtime}
\centering
\begin{tabular}{cccccccccc}
\toprule[1.2pt]
Benchmark & \#verts & \#verts & PBS & ours & speedup & \multicolumn{4}{c}{our components} \\ \cline{7-10}
& LR & HR & HR & & & coarse & TS-ACAP & synthesizing & refinement \\
& & & & & & sim. & extraction & (GPU) & \\ \hline \hline
TSHIRT & 246 & 14,190 & 8.72 & 0.867 & \textbf{10} & 0.73 & 0.11 & 0.012 & 0.015\\
PANTS & 200 & 11,967 & 10.92 &0.904 & \textbf{12} & 0.80 & 0.078 & 0.013 & 0.013\\
SKIRT & 127 & 6,812 & 6.84 & 0.207 & \textbf{33} & 0.081 & 0.10 & 0.014 & 0.012 \\
SHEET & 81 & 4,225 & 2.48 & 0.157 & \textbf{16} & 0.035 & 0.10 & 0.011 & 0.011 \\
DISK & 148 & 7,729 & 4.93 & 0.139 & \textbf{35} & 0.078 & 0.041 & 0.012 & 0.008 \\
\bottomrule[1.2pt]
\end{tabular}
\end{table*}
To prepare the above datasets, we generate both \YL{low-resolution (LR)} and \YL{high-resolution (HR)} cloth \YL{animations} by PBS.
The initial state of the HR mesh is obtained by applying the Loop subdivision scheme \cite{Thesis:Loop} to the coarse mesh and simulating for several seconds until it is stable.
Previous works \cite{kavan11physics, zurdo2013wrinkles, chen2018synthesizing} usually constrain the high-resolution meshes by various tracking mechanisms to ensure that the coarse cloth \YL{can be seen as} a low-resolution version of the fine cloth during the complete animation sequences.
However, fine-scale wrinkle dynamics cannot be captured by this model, as wrinkles are defined quasistatically and limited to a \YL{constrained} subspace.
Thus we \YL{instead perform} PBS for the two resolution meshes \emph{separately}, without any constraints between them.
We use a cloth simulation engine called ARCSim \cite{Narain2012AAR} to produce all animation sequences of low- and high-resolution meshes with the same parameter setting.
In our experiment, we choose the Gray Interlock from a library of measured cloth materials \cite{Wang2011DEM} as the material parameters for ARCSim simulation.
Specifically, for garments interacting with characters, to ensure a collision-free initial state, we manually put the coarse and fine garments on a template human body (in the T pose) and run the simulation to let the \YL{clothing} relax. This defines the initial state for all subsequent simulations.
We interpolate 15 frames between the T pose and the initial pose of each motion sequence, before applying the motion sequence, which is smoothed using a convolution operation.
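A simplified sketch of this transition step, treating poses as plain vectors (real SMPL poses are per-joint axis-angle parameters, so this is only illustrative; names and kernel size are ours):

```python
import numpy as np

def blend_in(pose_a, pose_b, n_frames=15, kernel=5):
    """Linearly interpolate n_frames transition poses from pose_a to
    pose_b, then smooth each channel with a moving-average convolution."""
    t = np.linspace(0.0, 1.0, n_frames)[:, None]
    frames = (1.0 - t) * pose_a[None, :] + t * pose_b[None, :]
    k = np.ones(kernel) / kernel
    pad = kernel // 2
    padded = np.pad(frames, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, c], k, mode="valid")
         for c in range(frames.shape[1])], axis=1)
    return smoothed
```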
\begin{figure}[ht]
\centering
\subfloat{
\includegraphics[width=0.5\linewidth]{pictures/hyper_inputframes-eps-converted-to.pdf}
}
\subfloat{
\includegraphics[width=0.5\linewidth]{pictures/hyper_hiddensize-eps-converted-to.pdf}
}
\caption{\small Evaluation of hyperparameters in the Transformer network\YL{, using the SKIRT dataset. }
(Left) average error for the reconstructed results as a function of the number of input frames.
(Right) error for the synthesized results as a function of the dimension of the latent space.
}
\label{fig:hyperpara}
\end{figure}
\textbf{Network architecture.}
As shown in Fig.~\ref{fig:Transformer}, our transduction network consists of two components, namely convolutional \YL{encoders} to map coarse and fine mesh sequences into latent spaces for improved generalization capability, and the Transformer network for \YL{spatio-temporally} coherent deformation transduction.
The feature encoder module takes the 9-dimensional TS-ACAP features defined on vertices as input, followed by two convolutional layers with $\tanh$ as the activation function.
We omit the activation function in the last convolutional layer, similar to \cite{tan2017autoencoder}.
A fully connected layer is used to map the output of the convolutional layers into a 16-dimensional latent space.
We train one encoder for coarse \YL{meshes} and another for fine \YL{meshes} separately.
For the DeformTransformer network, its input includes the embedded latent vectors from both \YL{the} coarse and fine domains.
The DeformTransformer network consists of sequential encoders and decoders,
each \YL{including} a stack of 2 identical blocks with 8-head attention.
Different from variable length sequences used in natural language processing, we \YL{fix} the number of input frames \YL{(to 3 in our experiments)} since a motion sequence may include a thousand frames.
\YL{We perform experiments to evaluate the performance of our method with different settings.}
As shown in Fig.~\ref{fig:hyperpara} \YL{(left)}, using 3 input frames is found to perform well in our experiments.
We also evaluate the results generated with various dimensions of latent space shown in Fig. \ref{fig:hyperpara} \YL{(right)}.
When the dimension of latent space is larger than 16, the network can \YL{easily overfit}.
Thus we set the dimension of the latent space %
to 16, which is sufficient for all the examples in the paper.
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\caption{Quantitative comparison of reconstruction errors for unseen \YL{cloth animations} in several datasets. We compare our results with Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} with LR meshes as a reference. \YL{Three metrics, namely RMSE (Root Mean Squared Error), Hausdorff distance and STED (Spatio-Temporal Edge Difference)~\cite{Vasa2011perception} are used. Since LR meshes have different number of vertices from the ground truth HR mesh, we only calculate its Hausdorff distance.}}
\label{table:compare_zurdo_chen2}
\centering
\begin{tabular}{ccccc}
\toprule[1.2pt]
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Methods} & \multicolumn{3}{c}{Metrics} \\ \cline{3-5}
& & RMSE & Hausdorff & STED \\
& & $\times 10^\YL{-2}$ $\downarrow$ & $\times 10^\YL{-2}$ $\downarrow$ & $\downarrow$ \\
\hline \hline
\multirow{4}{*}{TSHIRT} & LR & - & 0.59 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 0.76 & 0.506 & 0.277 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 1.04 & 0.480 & 0.281 \\ \cline{2-5}
& Our & \textbf{0.546} & \textbf{0.416} & \textbf{0.0776} \\ \hline \hline
\multirow{4}{*}{PANTS} & LR & - & 0.761 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 1.82 & 1.09 & 0.176 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 1.89 & 0.983& 0.151 \\ \cline{2-5}
& Our & \textbf{0.663} & \textbf{0.414} & \textbf{0.0420} \\ \hline \hline
\multirow{4}{*}{SKIRT} & LR & - & 2.09 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 1.93 & 1.31 & 0.562 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 2.19 & 1.52 & 0.178 \\ \cline{2-5}
& Our & \textbf{0.685} & \textbf{0.681} & \textbf{0.0241} \\ \hline \hline
\multirow{4}{*}{SHEET}
& LR & - & 2.61 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 4.37 & 2.60 & 0.155 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 3.02 & 2.34 & 0.0672 \\ \cline{2-5}
& Our & \textbf{0.585} & \textbf{0.417} & \textbf{0.0262} \\ \hline \hline
\multirow{4}{*}{DISK} & LR & - & 3.12 & - \\ \cline{2-5}
& Chen {\itshape et al.} & 7.03 & 2.27 & 0.244 \\ \cline{2-5}
& Zurdo {\itshape et al.} & 11.40 & 2.23 & 0.502 \\ \cline{2-5}
& Our & \textbf{2.16} & \textbf{1.30} & \textbf{0.0557 } \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\section{Results}\label{sec:results}
\subsection{Runtime Performance}
We implement our method on a \YL{computer with a} 2.50GHz \YL{4-Core} Intel CPU for coarse simulation and TS-ACAP extraction,
and \YL{an} NVIDIA GeForce\textsuperscript{\textregistered}~GTX 1080Ti GPU for fine TS-ACAP generation by the network and mesh coordinate reconstruction.
Table~\ref{table:runtime} shows the average per-frame execution time of our method for various cloth datasets.
The execution time contains four parts: coarse simulation, TS-ACAP extraction, high-resolution TS-ACAP synthesis, and collision refinement.
For reference, we also \YL{measure} the time of a CPU-based implementation of high-resolution PBS using ARCSim \cite{Narain2012AAR}.
Our algorithm is $10\sim35$ times faster than the \YL{PBS} HR simulation.
The low computational cost of our method makes it suitable for interactive applications.
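The reported speedup is simply the ratio of the per-frame HR PBS cost to the summed cost of the four pipeline stages. A toy sketch with hypothetical timings (the measured numbers live in Table~\ref{table:runtime}, not here):

```python
# Hypothetical per-frame stage timings in milliseconds (illustrative only).
stage_times = {
    "coarse_simulation": 20.0,
    "tsacap_extraction": 5.0,
    "hr_tsacap_synthesis": 8.0,
    "collision_refinement": 7.0,
}
ours = sum(stage_times.values())          # total per-frame cost of our pipeline
pbs_hr = 800.0                            # hypothetical per-frame cost of HR PBS
print(f"speedup: {pbs_hr / ours:.1f}x")   # -> speedup: 20.0x
```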
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/0crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/1crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/2crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/3crop0090down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/4crop0090down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/0crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/1crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/2crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/3crop0300down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt06_08_poses/4crop0300down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/0crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/1crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/2crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/3crop0110down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/4crop0110down.png} \\
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/0crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/1crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/2crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/3crop0260down.png} &
\includegraphics[width=\linewidth, trim=17 0 37 0,clip]{pictures/tshirt08_11_poses/4crop0260down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data \YL{on the TSHIRT} dataset.
(a) coarse simulation,
(b) results of \cite{chen2018synthesizing},
(c) results of \cite{zurdo2013wrinkles},
(d) our results,
(e) ground truth generated by PBS.
Our method produces detailed shapes of higher quality than Chen {\itshape et al.} and Zurdo {\itshape et al.}; see the folds and wrinkles in the close-ups. The results of Chen {\itshape et al.} suffer from seam line problems, while the results of Zurdo {\itshape et al.} exhibit clearly noticeable artifacts.}
\label{fig:comparetoothers_tshirt}
\end{figure}
\begin{figure}[!htb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0010down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0010down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0060down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0060down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0140down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0140down.png} \\
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/0crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/1crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/2crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/3crop0160down.png} &
\includegraphics[width=\linewidth, trim=28 0 28 5,clip]{pictures/pants09_07_poses/4crop0160down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the PANTS dataset.
(a) coarse simulation results,
(b) results of \cite{chen2018synthesizing}, which mainly smooth the coarse meshes and barely exhibit any wrinkles.
(c) results of \cite{zurdo2013wrinkles}, which have clear artifacts on examples where the LR and HR meshes are not aligned well, \textit{e.g. } the trouser legs.
(d) our results, which remain physically reliable.
(e) ground truth generated by PBS.
}
\label{fig:comparetoothers_pants}
\end{figure}
\begin{figure*}[htb]
\centering
\subfloat[Input]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0080_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0110_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0140_00_skirtlrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/0/frm0160_00_skirtlrkeyshot.png}
\end{minipage}}
\subfloat[Chen {\itshape et al.}]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0080keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0110keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0140keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/1/temp0160keyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0080_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0110_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0140_00_skirtlr_result.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45 ,clip]{pictures/skirt09_06_posescolormap/1/09_06_posesfrm0160_00_skirtlr_result.png}
\end{minipage}
\subfloat[Zurdo {\itshape et al.}]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0080keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0110keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0140keyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/2/temp0160keyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0080_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0110_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0140_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/2/frm0160_00_skirthr.png}
\end{minipage}
\subfloat[Ours]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0080_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0110_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0140_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/3/frm0160_00_skirthrkeyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0080_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0110_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0140_00_skirthr.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_posescolormap/3/frm0160_00_skirthr.png}
\end{minipage}
\subfloat[GT]{
\begin{minipage}[b]{0.11\linewidth}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0080_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0110_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0140_00_skirthrkeyshot.png}
\includegraphics[width=1.000000\linewidth, trim=45 45 45 45,clip]{pictures/skirt09_06_poses/4/frm0160_00_skirthrkeyshot.png}
\end{minipage}}
\begin{minipage}[b]{0.08\linewidth}
\includegraphics[width=1.000000\linewidth, trim=0 0 0 0,clip]{pictures/bar.png}
\end{minipage}
\caption{Comparison of the reconstruction results for unseen data in the SKIRT dataset.
(a) the coarse simulation,
(b) the results of \cite{chen2018synthesizing},
(c) the results of \cite{zurdo2013wrinkles},
(d) our results,
(e) the ground truth generated by PBS.
The reconstruction accuracy is qualitatively shown as a difference map.
Reconstruction errors are color-coded and warmer colors indicate larger errors. Our method leads to significantly lower reconstruction errors. }
\label{fig:comparetoothers_skirt}
\end{figure*}
\subsection{\YL{Fine Detail} Synthesis Results and Comparisons}
We now demonstrate our method using various \YL{detail enhancement}
examples \YL{both} quantitatively and qualitatively, \YL{including added wrinkles and rich dynamics.}
Using detailed meshes generated by PBS as ground truth, we compare our results with physics-based coarse simulations, our implementation of a deep learning-based method \cite{chen2018synthesizing} and a conventional machine learning-based method \cite{zurdo2013wrinkles}.
For quantitative comparison, we use \YL{three} metrics: Root Mean Squared Error (RMSE), Hausdorff distance, and the spatio-temporal edge difference (STED) \cite{Vasa2011perception}, which is designed for motion sequences with a focus on the `perceptual' error of models.
The results are shown in Table~\ref{table:compare_zurdo_chen2}.
Note that for the datasets from top to bottom in the table, the Hausdorff distances between the LR meshes and the ground truth increase. This tendency is in accordance with the deformation range, from tighter T-shirts and pants to skirts and square/disk tablecloths with higher degrees of freedom.
Since the vertex-position representation cannot handle rotations well, the larger the deformations of the models, the more artifacts Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} introduce in the reconstructed models, leading to increased RMSE and Hausdorff distances.
The results indicate that our method achieves quantitatively better reconstruction than the compared methods on all 5 datasets under all three metrics.
Especially for the SKIRT, SHEET and DISK datasets, which contain loose cloth and hence larger and richer deformations, our method outperforms the existing methods significantly, since tracking between coarse and fine meshes is not required in our algorithm.
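As a concrete reference for how the first two metrics operate, here is a minimal sketch (our own illustration, not the paper's evaluation code) of per-vertex RMSE and a symmetric point-set Hausdorff distance; STED is omitted since it additionally requires temporal edge statistics, and a full evaluation would measure surface-to-surface rather than vertex-to-vertex distances. The toy vertex arrays are hypothetical.

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared per-vertex error between two meshes with the
    same vertex count (N x 3 arrays of positions)."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets; applicable
    even when the meshes have different vertex counts (e.g. LR vs. HR)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])                # toy "mesh"
gt = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(hausdorff(pred, gt))  # 0.5: the gt vertex (0.5, 0, 0) is 0.5 from pred
```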
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.01}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0130down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0130down.png}\\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0180down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0180down.png} \\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0260down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0260down.png} \\
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/0crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/1crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/2crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/3crop0320down.png} &
\includegraphics[width=\linewidth, trim=25 0 50 6,clip]{pictures/crashballpole0.3withhuman/4crop0320down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the SHEET dataset.
(a) the coarse simulation,
(b) the results of \cite{chen2018synthesizing}, with inaccurate and
rough wrinkles different from the GT.
(c) the results of \cite{zurdo2013wrinkles}, which show global shapes similar to the coarse meshes, with some wrinkles and unexpected sharp corners.
(d) our results, which show mid-scale wrinkles and global deformation similar to the GT.
(e) the ground truth generated by PBS.}
\label{fig:comparetoothers_crashball}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}>{\centering\arraybackslash}m{0.2\linewidth}}
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/0crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/1crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/2crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/3crop0050down.png} &
\includegraphics[width=\linewidth, trim=42 0 30 30,clip]{pictures/disk4.300withhuman/4crop0050down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0090down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0090down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0160down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0160down.png} \\
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/0crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/1crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/2crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/3crop0360down.png} &
\includegraphics[width=\linewidth, trim=54 0 18 30,clip]{pictures/disk4.300withhuman/4crop0360down.png} \\
\vspace{0.3cm} \footnotesize (a) Input & \vspace{0.3cm} \hspace{-0.3cm} \footnotesize (b) Chen {\itshape et al.} & \vspace{0.3cm} \hspace{-0.2cm} \footnotesize (c) Zurdo {\itshape et al.} & \vspace{0.3cm} \footnotesize (d) Ours & \vspace{0.3cm} \footnotesize (e) GT
\end{tabular}
\caption{Comparison of the reconstruction results for unseen data in the DISK dataset.
(a) the coarse simulation,
(b) the results of \cite{chen2018synthesizing}, which cannot reconstruct credible shapes.
(c) the results of \cite{zurdo2013wrinkles}, which show apparent artifacts near the flying tails since no tracking constraints are applied.
(d) our results, which reproduce large-scale deformations; note how the tail of the disk flies like a fan in the wind.
(e) the ground truth generated by PBS.}
\label{fig:comparetoothers_disk}
\end{figure}
\YL{We further make qualitative comparisons on the 5 datasets.}
Fig. \ref{fig:comparetoothers_tshirt} shows \YL{detail synthesis results} on the TSHIRT dataset.
The first and second rows are from sequence 06\_08, a woman dribbling a basketball sideways, and the last two rows are from sequence 08\_11, a walking woman.
In this dataset of tight t-shirts on human bodies, Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} and our method are able to reconstruct the garment model completely with mid-scale wrinkles.
However, Chen {\itshape et al.} \cite{chen2018synthesizing} suffer from seam line problems due to the use of the geometry image representation.
A geometry image is a parametric sampling of the shape, which is made into a topological disk by cutting through some seams.
The boundary of the disk then needs to be fused so that the reconstructed mesh recovers the original topology.
The super-resolved geometry images corresponding to high-resolution cloth animations are not entirely accurate, and as a result the fused boundaries no longer match exactly,
\textit{e.g. } the clear seam lines on the shoulder and the crooked boundaries on the left side of the waist in Fig.~\ref{fig:comparetoothers_tshirt} (b).
In contrast, our method produces better results than \cite{chen2018synthesizing} and \cite{zurdo2013wrinkles}, whose results show artifacts of unsmooth surfaces.
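To make the seam problem concrete, the boundary-fusion step can be sketched as follows. This is our own illustration, not Chen {\itshape et al.}'s code: \texttt{fuse\_seam} is a hypothetical helper that averages each pair of duplicated cut vertices, which only hides, rather than removes, the mismatch when the super-resolved geometry image is inaccurate.

```python
import numpy as np

def fuse_seam(vertices, seam_pairs):
    """Merge duplicated seam vertices so the two sides of the
    geometry-image cut meet after reconstruction.
    seam_pairs: (i, j) index pairs that map to one original vertex."""
    fused = vertices.copy()
    for i, j in seam_pairs:
        mid = 0.5 * (vertices[i] + vertices[j])  # snap both copies to midpoint
        fused[i] = fused[j] = mid
    return fused

# Two slightly mismatched copies of one seam vertex, plus an interior vertex.
verts = np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [1.0, 1.0, 1.0]])
fused = fuse_seam(verts, [(0, 1)])
print(fused[0])  # midpoint (0.01, 0, 0); both copies now coincide
```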
Fig. \ref{fig:comparetoothers_pants} shows comparative results of the animations of pants on a fixed body shape while changing the body pose over time.
The results of \cite{chen2018synthesizing} \YL{mainly} smooth the coarse meshes and barely exhibit \YL{any} wrinkles.
Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} utilize tracking algorithms to ensure the %
\YL{close alignment}
between coarse and fine meshes, and thus the fine meshes are constrained \YL{and do not exhibit the behavior of full physics-based simulation.}
Thus, on the PANTS dataset, the results of \cite{zurdo2013wrinkles} have clear artifacts on examples where the LR and HR meshes are not aligned well, \textit{e.g. } the trouser legs.
Different from the two compared methods, which reconstruct displacements or local coordinates,
our method uses deformation-based features in both the encoding and decoding phases, and thus does not suffer from such restrictions, ensuring physically reliable results.
For looser garments like \YL{skirts}, we show comparison results in Fig. \ref{fig:comparetoothers_skirt}, with color coding to highlight the differences between synthesized results and the ground truth.
Our method successfully reconstructs the swinging skirt \YL{caused by} the body motion (see the small wrinkles on the waist and the \YL{medium-level} folds on the skirt \YL{hem}).
Chen {\itshape et al.} are able to reconstruct the overall shape of the skirt; however, there are many small unsmooth triangles leading to noisy shapes,
due to the 3D coordinate representation combined with untracked fine meshes containing abundant wrinkles.
This leads to unstable animation; please see the accompanying video.
The results of \cite{zurdo2013wrinkles} have problems with the global deformation; see the directions of the skirt hem and the large highlighted area in the color map.
Our learned \YL{detail} synthesis model provides better visual quality for shape generation \YL{and the generated results look} closer to the ground truth.
In addition to garments dressed on human bodies, we also show some results on free-flying tablecloths.
The comparison of the testing results on the SHEET dataset is shown in Fig.~\ref{fig:comparetoothers_crashball}.
The results of \cite{chen2018synthesizing} show inaccurate and rough wrinkles different from the ground truth.
For hanging sheets in the results of \cite{zurdo2013wrinkles}, the global shapes are more like coarse \YL{meshes} with some wrinkles and unexpected sharp corners, \textit{e.g. } the left side in the last row of Fig. \ref{fig:comparetoothers_crashball} (c),
while ours show mid-scale wrinkles and global deformation similar to the high-resolution meshes.
As for the DISK dataset, from the visual results in Fig.~\ref{fig:comparetoothers_disk}, we can see that Chen {\itshape et al.} \cite{chen2018synthesizing} and Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} cannot handle large-scale rotations well and cannot reconstruct credible shapes in such cases.
\gl{For Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} in particular, the impact of tracking on their algorithm is significant.}
They can reconstruct the top and the part of the tablecloth near the cylinder, but the flying tails have apparent artifacts.
Our algorithm does not have such drawbacks.
Notice how our method successfully reproduces ground-truth deformations, including the overall drape (\textit{i.e. }, how the tail of the disk flies like a fan in the wind) and mid-scale wrinkles.
\begin{table}[!htb]
\renewcommand\arraystretch{1.5}
\caption{User study results on cloth \YL{detail} synthesis. We show the average ranking score of the three methods: Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles}, and ours. The
ranking ranges from 1 (the best) to 3 (the worst). The results are calculated
based on 320 trials. Our method performs the best in terms of
wrinkles, temporal stability, and overall quality.}
\label{table:userstudy}
\centering
\begin{tabular}{cccc}
\toprule[1.2pt]
Method & Wrinkles & Temporal stability & Overall \\ \hline
Chen {\itshape et al.} & 2.184 & 2.1258 &2.1319\\ \hline
Zurdo {\itshape et al.} & 2.3742 & 2.5215 & 2.4877\\ \hline
Ours & \textbf{1.4417} & \textbf{1.3528} & \textbf{1.3804} \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\gl{We further conduct a user study to evaluate the stability and realism of the synthesized dense mesh dynamics. 32 volunteers participated in this user study.}
For every question, we give one sequence and 5 images of coarse meshes as references, \YL{and} then let the user rank the corresponding outputs from Chen {\itshape et al.} \cite{chen2018synthesizing}, Zurdo {\itshape et al.} \cite{zurdo2013wrinkles} and ours according to three different criteria (wrinkles, temporal stability and overall).
We shuffle the order of the algorithms each time we exhibit the question and show shapes from the three methods randomly \YL{to avoid bias}.
We show the results of the user study in Table \ref{table:userstudy}, where we observe that our generated \YL{shapes} perform the best on all three criteria.
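The average ranking scores in Table~\ref{table:userstudy} are plain means of the per-trial ranks; a toy sketch with hypothetical ranks (the actual study collected 320 trials):

```python
from collections import defaultdict

# Hypothetical trials: each entry maps method -> rank (1 best, 3 worst).
trials = [
    {"chen": 2, "zurdo": 3, "ours": 1},
    {"chen": 3, "zurdo": 2, "ours": 1},
    {"chen": 2, "zurdo": 3, "ours": 1},
    {"chen": 1, "zurdo": 3, "ours": 2},
]

totals = defaultdict(float)
for trial in trials:
    for method, rank in trial.items():
        totals[method] += rank
avg = {m: totals[m] / len(trials) for m in totals}
print(avg)  # {'chen': 2.0, 'zurdo': 2.75, 'ours': 1.25}
```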
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\caption{Per-vertex error (RMSE) on synthesized shapes with different feature representations: 3D coordinates, ACAP and TS-ACAP.}
\label{table:feature_compare}
\centering
\begin{tabular}{cccccc}
\toprule[1.2pt]
Dataset & TSHIRT & PANTS & SKIRT & SHEET & DISK \\ \hline
3D coordinates & 0.0101 & 0.0193 & 0.00941 & 0.00860 & 0.185 \\ \hline
ACAP & 0.00614 & 0.00785 & 0.00693 & 0.00606 & 0.0351 \\ \hline
TS-ACAP & \textbf{0.00546} & \textbf{0.00663} & \textbf{0.00685} & \textbf{0.00585} & \textbf{0.0216}\\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\setlength{\fboxrule}{0.5pt}
\setlength{\fboxsep}{-0.01cm}
\setlength{\tabcolsep}{0.00cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}>{\centering\arraybackslash}m{0.25\linewidth}}
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0040.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0040.png} \\
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0075.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0075.png} \\
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/0/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/1/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/2/crop0110.png} &
\includegraphics[width=1.000000\linewidth, trim=63 0 0 0,clip]{pictures/skirt09_07_poseswithhuman/3/crop0110.png} \\
\vspace{0.3cm} \small (a) Input & \vspace{0.3cm}\small (b) Coordinates & \vspace{0.3cm}\small (c) Ours & \vspace{0.3cm}\small (d) GT
\end{tabular}
\caption{The evaluation of the TS-ACAP feature in our detail synthesis method.
(a) input coarse \YL{shapes},
(b) the results using 3D coordinates, which clearly show a rough appearance, unnatural deformations and some artifacts, especially in the highlighted regions with details shown in the close-ups.
(c) our results, which show smooth surfaces with details closer to the GT.
(d) ground truth.
}
\label{fig:ablationstudy_coordiniates_skirt}
\end{figure}
\begin{figure}[htb]
\centering
\setlength{\tabcolsep}{0.05cm}
\renewcommand\arraystretch{0.001}
\begin{tabular}{>{\centering\arraybackslash}m{0.02\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}>{\centering\arraybackslash}m{0.31\linewidth}}
\rotatebox{90}{\small ACAP} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0103.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0104.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/0/crop0105.png} \\
\rotatebox{90}{\small TS-ACAP} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0103.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0104.png} &
\includegraphics[width=\linewidth, trim=90 0 0 60,clip]{pictures/tacap_acap/1/crop0105.png} \\
\vspace{0.3cm} & \vspace{0.3cm} \small $t = 103$ & \vspace{0.3cm} \small $t = 104$ & \vspace{0.3cm} \small $t = 105$
\end{tabular}
\caption{
Three consecutive frames from a testing sequence in the DISK dataset. First row: the results of ACAP. As shown in the second column, the enlarged wrinkles differ from those in the previous and next frames,
which causes jumping in the animation.
Second row: the consistent results obtained with the TS-ACAP feature, demonstrating that our TS-ACAP representation ensures temporal coherence.
}
\label{fig:jump_acap}
\end{figure}
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\fontsize{7.5}{9}\selectfont
\caption{Comparison of RMSE between synthesized shapes and ground truth for different networks, \textit{i.e.,} without temporal modules, with RNN, with LSTM, and ours with the Transformer network.}
\label{table:transformer_compare}
\centering
\begin{tabular}{cccccc}
\toprule[1.2pt]
Dataset & TSHIRT & PANTS & SKIRT & SHEET & DISK \\ \hline
W/O Transformer & 0.00909 & 0.01142 & 0.00831 & 0.00739 & 0.0427 \\ \hline
With RNN & 0.0435 & 0.0357 & 0.0558 & 0.0273 & 0.157 \\ \hline
With LSTM & 0.0351 & 0.0218 & 0.0451 & 0.0114 & 0.102 \\ \hline
With Transformer & \textbf{0.00546} & \textbf{0.00663} & \textbf{0.00685} & \textbf{0.00585} & \textbf{0.0216} \\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\setlength{\tabcolsep}{0.0cm}
\renewcommand\arraystretch{-1.9}
\begin{tabular}{>{\centering\arraybackslash}m{0.08\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}>{\centering\arraybackslash}m{0.18\linewidth}}
\rotatebox{90}{\small (a) Input}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/0/0200.png}
\\
\rotatebox{90}{\small (b) EncDec} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/5/0200.png}
\\
\rotatebox{90}{\small (c) RNN} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0008.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0016.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0022.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0094.png} &
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/rnn/0200.png}
\\
\rotatebox{90}{\small (d) LSTM}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/lstm/0200.png}
\\
\rotatebox{90}{\small (e) Ours}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/3/0200.png}
\\
\rotatebox{90}{\small (f) GT}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0008.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0016.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0022.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0094.png}&
\includegraphics[width=\linewidth, trim=5 5 5 5,clip]{pictures/tshirt06_08_poses/4/0200.png}
\end{tabular}
\caption{The evaluation of the Transformer network in our model for wrinkle synthesis.
From top to bottom we show (a) the input coarse mesh from physical simulation, (b) the results with an encoder-decoder without temporal modules, (c) the results with RNN \cite{chung2014empirical}, (d) the results with LSTM \cite{hochreiter1997long}, (e) ours, and (f) the ground truth generated by PBS.}
\label{fig:transformer_w_o_tshirt}
\end{figure}
\subsection{Evaluation of Network Components}
We evaluate the effectiveness of our network components in two aspects: the capability of the TS-ACAP feature and that of the Transformer network.
We evaluate our method qualitatively and quantitatively on different datasets.
\textbf{Feature Representation Evaluation}.
To verify the effectiveness of our TS-ACAP feature, we quantitatively compare per-vertex position errors of the shapes generated with different features on several datasets.
We compare our method using the TS-ACAP feature with our transduction method using 3D vertex coordinates and with ACAP, with network layers and parameters adjusted accordingly to optimize the performance of each.
The numerical comparison is shown in Table \ref{table:feature_compare}.
ACAP and TS-ACAP show quantitative improvements over 3D coordinates.
In Fig. \ref{fig:ablationstudy_coordiniates_skirt}, we show several examples of animated skirts comparing 3D coordinates and TS-ACAP.
The results using coordinates show a rough appearance, unnatural deformation and some artifacts,
especially in the highlighted regions with details shown in the close-ups. Our results with TS-ACAP are more similar to the ground truth than the ones with coordinates.
ACAP suffers from temporal inconsistency, so the results frequently shake or jump.
Although the Transformer network can somewhat mitigate this issue, such artifacts can still appear.
Fig.~\ref{fig:jump_acap} shows three consecutive frames from a testing sequence in the DISK dataset.
Results with TS-ACAP show more consistent wrinkles than the ones with ACAP thanks to the temporal constraints.
\textbf{Transformer Network Evaluation}.
We also evaluate the impact of the Transformer network in our pipeline.
We compare our method to an encoder-decoder network without the temporal modules, and to our pipeline with a recurrent neural network (RNN) and with a long short-term memory (LSTM) module.
An example of T-shirts is given in Fig. \ref{fig:transformer_w_o_tshirt}, showing 5 frames in order.
The results without any temporal modules show artifacts on the sleeves and neckline, since these regions undergo large forces.
The models using RNN and LSTM stabilize the sequence by suppressing dynamic and detailed deformation, but all their results keep the wrinkles on the chest from the initial state and thus lack rich dynamics.
Besides, they are not able to generate stable and realistic garment animations that resemble the ground truth,
while our method with the Transformer network clearly improves the temporal stability, producing results close to the ground truth.
We also quantitatively evaluate the performance of the Transformer network in our method via per-vertex error.
As shown in Table \ref{table:transformer_compare}, the RMSE of our model is smaller than that of the other models.
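The RMSE values reported above are per-vertex errors between synthesized and ground-truth meshes. A minimal sketch of one common definition (averaging squared per-vertex Euclidean distances over all vertices and frames; the paper does not spell out the exact averaging order, so that choice is an assumption here):

```python
import numpy as np

def per_vertex_rmse(pred, gt):
    """Root mean square of per-vertex Euclidean distances.

    pred, gt: (num_frames, num_vertices, 3) arrays of vertex positions.
    """
    # Euclidean distance of each vertex in each frame
    dist = np.linalg.norm(pred - gt, axis=-1)   # (num_frames, num_vertices)
    return float(np.sqrt(np.mean(dist ** 2)))

# Toy example: prediction off by 0.01 in x for every vertex
gt = np.zeros((2, 4, 3))
pred = gt.copy()
pred[..., 0] += 0.01
print(round(per_vertex_rmse(pred, gt), 6))   # 0.01
```

Any monotone variant (e.g. averaging per-frame RMSEs) ranks the compared models the same way for errors of this kind.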
\section{Conclusion and Future Work}\label{sec:conclusion}
In this paper, we introduce a novel algorithm for synthesizing robust and realistic cloth animations via deep learning.
To achieve this, we propose a geometric deformation representation named TS-ACAP, which embeds fine details well and ensures temporal consistency.
Benefiting from this deformation-based feature, our algorithm requires no explicit tracking between the coarse and fine meshes.
We also use a Transformer network based on attention mechanisms to map coarse TS-ACAP features to fine ones, maintaining the stability of the generated sequences.
Quantitative and qualitative results reveal that our method can synthesize realistic-looking wrinkles in various datasets, such as draping tablecloths and tight or loose garments dressed on human bodies.
Since our algorithm synthesizes details based on the coarse meshes, the time for the coarse simulation is unavoidable.
Especially for tight garments like T-shirts and pants, the collision solving phase is time-consuming.
In the future, we intend to generate coarse sequences for tight cloth via skinning-based methods in order to reduce the computation for our pipeline.
Another limitation is that our current network cannot deal with garments of different topologies.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec-intro}
Molecular dynamics has made enormous advances in capabilities through better algorithms,
better interatomic potentials, and improvements in computational power.
However, the use of molecular dynamics directly to treat the deformation and failure of
materials at the mesoscale is still largely beyond reach.
At the mesoscale and above, a continuum model of mechanics is still required in practice.
The question then arises of how molecular dynamics can be used in deriving and calibrating
appropriate continuum models.
This paper addresses the question of how to use molecular dynamics to obtain a peridynamic
material model that is able to treat material nonlinearity and the nucleation and growth of fractures.
To accomplish this, a coarse graining method is described below that maps interatomic forces
into larger-scale degrees of freedom.
The coarse graining method starts with a definition of these degrees of freedom as the
mean atomic displacements weighted by a smoothing function.
It is shown that the coarse grained displacements obey a nonlocal evolution law, which
is the peridynamic equation of motion.
The coarse graining process provides peridynamic bond forces among the coarse grained nodes
that are then used to calibrate a material model.
The bond forces can include long-range interactions, if these are present in the atomic system.
They also reflect any initial distribution of defects.
In the present application of single-sheet graphene, a nonlinear ordinary state-based material
model is found to adequately represent the deformation and failure of the material.
As a molecular dynamics (MD) model of graphene is stretched, the interatomic forces become weaker, and the material
fails.
This process of failure is accelerated by higher temperatures in the MD model, which also affect
the elastic response.
All of these features are reflected in the coarse grained bond forces, so they are carried over
to the peridynamic continuum model after calibration.
The calibrated peridynamic model reproduces the nucleation of damage due to deformation in a
specimen that is initially undamaged.
In principle, the model can be applied within the process zone of a growing crack.
However, with the objective of scaling up the material model to much larger length scales,
it is necessary to include a separate bond breakage criterion that reflects the energy balance in
brittle crack growth without the need to model the process zone in detail.
To treat this,
the peridynamic material model is augmented by a separate bond breakage criterion that approximates
the Griffith criterion for growing cracks in a brittle material.
The literature on graphene is voluminous, and only the papers that are the most relevant to
the present work are summarized here.
Much of what is known about the mechanical properties of graphene is based on MD simulations.
Jiang, Wang, and Li used MD to predict the Young's modulus in graphene, including the effects of
temperature and sample size \cite{jiang09}.
A number of MD studies have treated the
effect of defects on the mechanical and thermal properties of graphene
\cite{ni10,jing12,ansari12,mortazavi13,he14}.
Sakhaee-Pour \cite{sakhaee09} and
Javvaji et al. investigated the effects of lattice orientation and sample size on the strength
of graphene \cite{javvaji16}.
Most of these papers, as well as the present paper, treat only the two-dimensional response
of graphene.
However, 3D MD simulations have also been applied to the wrinkling and crumpling of graphene
sheets, for example \cite{becton15}.
MD has also been used to study the mechanical properties of polycrystalline graphene, for
example \cite{yi13,chen15}.
A comprehensive review of the literature on the fracture of graphene, much of which uses MD,
can be found in \cite{zhang15}.
A review of the literature on experimental and theoretical graphene mechanics is available in \cite{cao18}.
Continuum modeling of single-layer graphene has included the use of finite elements with an elastic
material model, for example \cite{hemmasizadeh08,scarpa10}.
A summary of the literature on the equivalent linear elastic properties of graphene sheets is
given by Reddy et al. \cite{reddy06} and by Shi et al. \cite{shi14}.
Finite element analysis including aspects of fracture mechanics has been applied to graphene sheets
\cite{tsai10}.
A hyperelastic continuum material model that includes nonlinearity at large strains was developed
by Xu et al. using density functional theory \cite{xu12}.
An up-to-date review of the literature on finite element modeling of graphene is given
by Chandra et al. \cite{chandra20}.
Nonlocality has been studied in connection to the buckling of single-layer graphene
\cite{pradhan09,pradhan10,asemi14}
and is potentially important in the modeling of multilayer graphene,
partly due to the long-range interaction forces between layers.
Liu et al. \cite{liu18} developed an ordinary state-based peridynamic model for single-layer
graphene that is calibrated using strain energy densities obtained from MD.
Nonlinearity in the stress-strain response is incorporated by including a cubic dependence
of strain energy density on strain.
This method reproduces the stress-strain curves predicted by MD and, when a critical strain
bond breakage criterion is used, also captures the main features of dynamic fracture that are
seen in MD.
The method in \cite{liu18} does not address the dependence of bond force on bond length,
which is treated in the present work.
Other applications of peridynamics to graphene include the work of
Martowicz et al. \cite{martowicz15}, which uses a peridynamic model of graphene nanoribbons to
reproduce wave dispersion.
Diyaroglu et al. \cite{diyaroglu19} apply peridynamics to the wrinkling of graphene membranes,
including thermal expansion.
Liu et al. \cite{liu20} present a bond-based treatment of the
effects of lattice orientation on the strength of graphene sheets in different directions.
A bond-based material model has been applied to the perforation of multilayer graphene
by micrometer-scale projectiles \cite{silling21b}.
In Section~\ref{sec-homogpd} of the present paper, an upscaling method is presented that
provides coarse grained bond forces that are consistent with the momentum balance for the smoothed displacement
variable.
Section~\ref{sec-example} presents an example of coarse graining in a linear small-scale system that
involves long-range forces.
This section also describes the fitting of a peridynamic material model to the coarse grained forces.
Section~\ref{sec-graphene} extends the method to the nonlinear response of graphene, including
the nucleation of damage.
Section~\ref{sec-breakage} describes how a critical bond strain damage criterion can be
combined with the peridynamic model to reproduce the growth of cracks.
In Section~\ref{sec-scale} it is shown how changes in the horizon can be applied to the model with
appropriate scaling of the parameters.
Comparison of a simulation using the new material model for graphene with experimental data on the rupture of nanoscale
membranes is presented in Section~\ref{sec-exper}.
Concluding remarks and ideas for future work are given in Section~\ref{sec-disc}.
\section{Coarse graining of an atomic scale model}\label{sec-homogpd}
This section describes a method for obtaining a larger-scale discretized model from an MD model.
The discussion specializes a more general method described in \cite{silling21a} to the case of
discrete nodes.
The general approach is to first define the coarse grained displacements in terms
of a weighted average of the microscale displacements.
This definition leads to a linear momentum balance for the coarse grained displacements
that is a consequence of the momentum balance for the atoms.
The coarse grained momentum balance has the form of the discretized peridynamic equation of motion.
The bond forces in this peridynamic expression are derived from the atomic scale forces.
How to determine a material model for the coarse grained bond forces is considered
in Section~\ref{sec-graphene}.
Consider a molecular dynamics model of a crystal composed of $N_a$ atoms.
Over time, each atom $\alpha$ interacts with the same set of its neighbors ${\mathcal{H}}_\alpha$.
The mass and displacement of each atom are denoted by $M_\alpha$ and ${\bf U}_\alpha(t)$ respectively.
The atoms interact through some given interatomic potential.
The resulting force that
atom $\beta$ exerts on $\alpha$ is denoted by ${\bf F}_{\beta\alpha}(t)$.
These interatomic forces obey the following antisymmetry relation:
\begin{equation}
{\bf F}_{\alpha\beta}(t)=-{\bf F}_{\beta\alpha}(t)
\label{eqn-Fsym}
\end{equation}
for all $t$.
The forces are not necessarily parallel to the relative position vector between $\alpha$ and $\beta$.
Each atom is also subjected to a prescribed external force ${\bf B}_\alpha(t)$.
The atoms obey Newton's second law:
\begin{equation}
M_\alpha \ddot{\bf U}_\alpha(t)=\sum_{\beta\in{\mathcal{H}}_\alpha}{\bf F}_{\beta\alpha}(t) +{\bf B}_\alpha(t).
\label{eqn-newton}
\end{equation}
To coarse grain the molecular dynamics model, let ${\bf x}_i$, $i=1,2,\dots,N_c$ denote the reference positions
of the coarse grained degrees of freedom.
Let ${\bf u}_i(t)$ denote the displacements at each such position, to be defined below.
For each ${\bf x}_i$, define smoothing weights $\omega_i^\alpha$.
These weights are normalized such that for any atom $\alpha$,
\begin{equation}
\sum_{i=1}^{N_c} \omega_i^\alpha=1.
\label{eqn-omeganorm}
\end{equation}
Equation \eqref{eqn-omeganorm} implies that each atom is covered by at least one smoothing function.
All of the weights are limited to a support of radius $R$:
\begin{equation}
|{\bf x}_i-{\bf X}_\alpha|>R \quad{\Longrightarrow}\quad \omega_i^\alpha=0
\label{eqn-Rdef}
\end{equation}
for any $i$ and $\alpha$, where $R$ is independent of $i$ and $\alpha$.
Define the coarse grained masses and external loads by
\begin{equation}
m_i= \sum_{\alpha=1}^{N_a} \omega_i^\alpha M_\alpha, \qquad
{\bf b}_i(t)= \sum_{\alpha=1}^{N_a} \omega_i^\alpha {\bf B}_\alpha(t).
\label{eqn-mbdef}
\end{equation}
It is assumed for convenience that $m_i>0$ for all $i$, that is, for every $i$, there is some atom
$\alpha$ such that $\omega_i^\alpha>0$.
Define the coarse grained displacements by
\begin{equation}
{\bf u}_i(t)= \frac{1}{m_i}\sum_{\alpha=1}^{N_a} \omega_i^\alpha M_\alpha {\bf U}_\alpha(t).
\label{eqn-udef}
\end{equation}
Thus, the coarse grained displacements are weighted by mass as well as $\omega_i^\alpha$.
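Definitions \eqref{eqn-mbdef} and \eqref{eqn-udef} are mass-weighted sums that can be written compactly in matrix form. A minimal NumPy sketch (array names and shapes are our own choices, not part of the model):

```python
import numpy as np

def coarse_grain(omega, M, U):
    """Coarse grained masses and displacements, Eqs. (mbdef) and (udef).

    omega : (Nc, Na) smoothing weights, each column summing to 1 (Eq. omeganorm)
    M     : (Na,) atomic masses
    U     : (Na, dim) atomic displacements
    """
    m = omega @ M                        # m_i = sum_a w_i^a M_a
    u = (omega * M) @ U / m[:, None]     # u_i = (1/m_i) sum_a w_i^a M_a U_a
    return m, u

# Rigid translation: every atom displaced by (1, 0)
omega = np.array([[1.0, 0.5, 0.0],
                  [0.0, 0.5, 1.0]])     # 2 CG nodes, 3 atoms
M = np.array([1.0, 2.0, 1.0])
U = np.tile([1.0, 0.0], (3, 1))
m, u = coarse_grain(omega, M, U)
print(m)    # [2. 2.]
print(u)    # both CG displacements equal (1, 0)
```

The rigid-translation check illustrates why the mass weighting matters: with it, a uniform atomic displacement is reproduced exactly by every CG node.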
Next, the evolution equation for the coarse grained displacements will be derived.
Taking the second time derivative of \eqref{eqn-udef} yields
\begin{equation}
m_i\ddot{\bf u}_i(t)= \sum_{\alpha=1}^{N_a} \omega_i^\alpha M_\alpha \ddot{\bf U}_\alpha(t).
\label{eqn-usec}
\end{equation}
From \eqref{eqn-newton} and \eqref{eqn-usec},
\begin{equation}
m_i\ddot{\bf u}_i(t)= \sum_{\alpha=1}^{N_a} \omega_i^\alpha \left[\sum_{\beta=1}^{N_a}{\bf F}_{\beta\alpha}(t) +{\bf B}_\alpha(t)\right].
\label{eqn-useci}
\end{equation}
For any atom $\beta$, the normalization requirement \eqref{eqn-omeganorm} implies that
\begin{equation}
\sum_{j=1}^{N_c} \omega_j^\beta=1.
\label{eqn-betanorm}
\end{equation}
Combining \eqref{eqn-useci} and \eqref{eqn-betanorm}, and using the second equation in \eqref{eqn-mbdef},
\begin{equation}
m_i\ddot{\bf u}_i(t)= \sum_{\alpha=1}^{N_a} \omega_i^\alpha \left[\sum_{\beta=1}^{N_a}{\bf F}_{\beta\alpha}(t)\sum_{j=1}^{N_c} \omega_j^\beta\right] +{\bf b}_i(t).
\label{eqn-usecii}
\end{equation}
Rearranging \eqref{eqn-usecii} leads to
\begin{equation}
m_i\ddot{\bf u}_i(t)= \sum_{j=1}^{N_c} {\bf f}_{ji}(t) +{\bf b}_i(t)
\label{eqn-usecpd}
\end{equation}
where the {\emph{pairwise bond force}} is defined by
\begin{equation}
{\bf f}_{ji}(t)= \sum_{\alpha=1}^{N_a} \sum_{\beta=1}^{N_a} \omega_i^\alpha \omega_j^\beta {\bf F}_{\beta\alpha}(t).
\label{eqn-ffdef}
\end{equation}
Using \eqref{eqn-Fsym} and interchanging the summation variables $\alpha$ and $\beta$, it follows immediately from \eqref{eqn-ffdef} that
\begin{equation}
{\bf f}_{ij}(t)=-{\bf f}_{ji}(t)
\label{eqn-fsym}
\end{equation}
for all $i$, $j$, and $t$.
Suppose that the underlying interatomic potential has a cutoff distance $d$:
\begin{equation}
|{\bf X}_\beta-{\bf X}_\alpha|>d \quad{\Longrightarrow}\quad {\bf F}_{\beta\alpha}(t)={\bf {0}}
\label{eqn-ddef}
\end{equation}
for all $\alpha$, $\beta$, and $t$.
As suggested by Figure~\ref{fig-horizon},
\eqref{eqn-Rdef}, \eqref{eqn-ffdef}, and \eqref{eqn-ddef} imply that
\begin{equation}
|{\bf x}_j-{\bf x}_i|>\delta \quad{\Longrightarrow}\quad {\bf f}_{ji}(t)={\bf {0}}
\label{eqn-deltadefi}
\end{equation}
for all $i$, $j$, and $t$, where $\delta$ is the {\emph{horizon}} defined by
\begin{equation}
\delta=2R+d.
\label{eqn-deltadef}
\end{equation}
So, $\delta$ is the cutoff distance for coarse grained bond force interactions.
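The bond force definition \eqref{eqn-ffdef} is a double weighted sum over atom pairs. A minimal sketch, assuming the interatomic forces are available as a dense array (feasible only for small systems; a production code would exploit the cutoffs $R$ and $d$):

```python
import numpy as np

def cg_bond_forces(omega, F):
    """Pairwise CG bond forces, Eq. (ffdef):
        f_ji = sum_a sum_b  w_i^a  w_j^b  F_{ba}

    omega : (Nc, Na) smoothing weights
    F     : (Na, Na, dim) interatomic forces, F[b, a] = force of atom b on a,
            antisymmetric per Eq. (Fsym): F[a, b] = -F[b, a]
    returns f with f[j, i] = bond force f_ji  (shape (Nc, Nc, dim))
    """
    return np.einsum('ia,jb,bad->jid', omega, omega, F)

# Two atoms pulling on each other, one CG node per atom
omega = np.eye(2)
F = np.zeros((2, 2, 2))
F[1, 0] = [1.0, 0.0]        # atom 1 pulls atom 0 in +x
F[0, 1] = [-1.0, 0.0]       # Newton's third law, Eq. (Fsym)
f = cg_bond_forces(omega, F)
print(f[1, 0])              # [1. 0.] — equals F_{10} since the weights are identity
print(np.allclose(f[0, 1], -f[1, 0]))   # True: antisymmetry, Eq. (fsym)
```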
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-horizon.pdf}
\caption{The horizon is determined by the weight function radius and the cutoff distance for
the interatomic potential.}
\label{fig-horizon}
\end{figure}
The definition of ${\bf f}_{ji}$ given by \eqref{eqn-ffdef} does not, in itself, provide a viable material model
for the coarse grained model.
Such a material model would relate the pairwise bond forces to the coarse grained displacements, not
to the interatomic forces, which would be unknown in a coarse grained model.
However, \eqref{eqn-ffdef} does provide a means to calibrate a prescribed material model, as will be
demonstrated in the next section.
\section{Example}\label{sec-example}
Consider a square lattice of particles in 2D, with spacing $\ell=1$ and layer thickness $\tau=1$.
The mass of each particle is $M_\alpha=1$.
The particles interact according to the following hypothetical model:
\begin{equation}
{\bf F}_{\beta\alpha} = \left\lbrace \begin{array}{ll}
(B{\bf M}_{\beta\alpha}) e^{-|{\bf X}_\beta-{\bf X}_\alpha|/d} \quad & {\mathrm{if}}\; |{\bf X}_\beta-{\bf X}_\alpha|\le d, \\
0 & {\mathrm{otherwise}} \\
\end{array}\right.
\label{eqn-sqlatt}
\end{equation}
where
\begin{equation}
{\bf M}_{\beta\alpha} = \frac{ {\bf X}_\beta-{\bf X}_\alpha} {|{\bf X}_\beta-{\bf X}_\alpha|}
\label{eqn-Mba}
\end{equation}
and $B=0.90915$, $d=10$.
Thus, long-range interactions are present up to 10 interatomic distances.
The coarse grained nodes are on a square lattice with a spacing of $h=5$ (Figure~\ref{fig-lrsquare}).
The smoothing functions are defined with the help of the cone-shaped function $S$ given by
\begin{equation}
S({\bf z})=\max\left\{0, 1-\frac{1}{R}\sqrt{z_1^2+z_2^2} \right\}
\label{eqn-Sdef}
\end{equation}
where $z_1$ and $z_2$ are the components of the vector ${\bf z}$ in the plane
and where $R$ is the radius of the cone.
In this example, $R=2h$.
The weighting functions $\omega_i^\alpha$ are given by
\begin{equation}
\omega_i^\alpha=\frac{ S({\bf x}_i-{\bf X}_\alpha)} {\sum_j S({\bf x}_j-{\bf X}_\alpha)}
\label{eqn-Ommd}
\end{equation}
which is designed to satisfy the normalization \eqref{eqn-omeganorm}.
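The cone kernel \eqref{eqn-Sdef} and the normalization \eqref{eqn-Ommd} can be sketched as follows; the sketch assumes every atom lies within distance $R$ of at least one CG node, so the normalizing sum is nonzero:

```python
import numpy as np

def cone(z, R):
    """Cone-shaped kernel S of Eq. (Sdef), support radius R."""
    return np.maximum(0.0, 1.0 - np.linalg.norm(z, axis=-1) / R)

def weights(x_cg, X_atoms, R):
    """Normalized smoothing weights w_i^a, Eq. (Ommd).

    x_cg    : (Nc, 2) CG node reference positions
    X_atoms : (Na, 2) atom reference positions
    """
    S = cone(x_cg[:, None, :] - X_atoms[None, :, :], R)   # (Nc, Na)
    return S / S.sum(axis=0)        # columns sum to 1, Eq. (omeganorm)

x_cg = np.array([[0.0, 0.0], [5.0, 0.0]])
X = np.array([[0.0, 0.0], [2.5, 0.0], [5.0, 0.0]])
w = weights(x_cg, X, R=10.0)
print(w.sum(axis=0))    # [1. 1. 1.] — each atom's weights sum to 1
```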
The small-scale model is deformed in isotropic extension with a strain ${\epsilon}$:
\begin{equation}
{\bf U}_\alpha = {\epsilon}{\bf X}_\alpha
\label{eqn-Usqlat}
\end{equation}
where ${\epsilon}=0.00019$.
The coarse grained displacements ${\bf u}_i$ and pairwise bond forces ${\bf f}_{ji}$
are evaluated from \eqref{eqn-udef} and \eqref{eqn-ffdef}.
It is convenient to express these forces as being comprised of contributions ${\bf t}_{ji}$ and ${\bf t}_{ij}$
from the material models applied at $i$ and $j$ respectively:
\begin{equation}
{\bf f}_{ji}=({\bf t}_{ji}-{\bf t}_{ij})V^2, \qquad {\bf t}_{ji}=-{\bf t}_{ij}=\frac{{\bf f}_{ji}}{2V^2}
\label{eqn-ft}
\end{equation}
where $V$ is the volume of each coarse grained (CG) node:
\begin{equation}
V=\tau h^2.
\label{eqn-Vh}
\end{equation}
In this example, $V=1$.
The vector ${\bf t}_{ji}$ is called the {\emph{bond force density}} and has dimensions of force/volume$^2$.
Figure~\ref{fig-lrt} shows the CG bond force densities $|{\bf t}_{ji}|$
as a function of CG bond length $| {\boldsymbol{\xi}} _{ji}|$ (red dots),
where $ {\boldsymbol{\xi}} _{ji}={\bf x}_j-{\bf x}_i$.
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-lrsquare.pdf}
\caption{Coarse graining example.
Left: original small-scale grid.
Right: coarse grained nodes. Colors show the force in bonds connected to the center CG node.
}
\label{fig-lrsquare}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-lrt.pdf}
\caption{Coarse grained and fitted peridynamic model for the square lattice example.
Left: bond force as a function of bond length.
Right: dispersion curves.
}
\label{fig-lrt}
\end{figure}
In specifying a material model, the bond strain is defined by
\begin{equation}
s_{ji}=\frac{|{\bf y}_j-{\bf y}_i|}{|{\bf x}_j-{\bf x}_i|}-1
\label{eqn-sdef}
\end{equation}
where the deformed CG node positions are given by
\begin{equation}
{\bf y}_j={\bf x}_j+{\bf u}_j
\label{eqn-ydef}
\end{equation}
for any $j$.
Also define the deformed CG bond direction unit vector by
\begin{equation}
{\bf M}_{ji}=\frac{{\bf y}_j-{\bf y}_i}{|{\bf y}_j-{\bf y}_i|}
\label{eqn-Mdef}
\end{equation}
and the normalized bond length by
\begin{equation}
r_{ji}=\frac{| {\boldsymbol{\xi}} _{ji}|}{\delta}.
\label{eqn-rdef}
\end{equation}
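The per-bond quantities \eqref{eqn-sdef}--\eqref{eqn-rdef} can be computed directly for a single CG bond; a minimal sketch, using the horizon value from Table~\ref{table-sqlat} purely for illustration:

```python
import numpy as np

def bond_quantities(xi, xj, ui, uj, delta):
    """Bond strain s_ji (Eq. sdef), deformed unit direction M_ji (Eq. Mdef),
    and normalized bond length r_ji (Eq. rdef) for one CG bond."""
    yi, yj = xi + ui, xj + uj            # deformed positions, Eq. (ydef)
    dy = yj - yi
    s = np.linalg.norm(dy) / np.linalg.norm(xj - xi) - 1.0
    Mdir = dy / np.linalg.norm(dy)
    r = np.linalg.norm(xj - xi) / delta
    return s, Mdir, r

# Bond of reference length 5 stretched by 1% along x
s, Mdir, r = bond_quantities(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                             np.array([0.0, 0.0]), np.array([0.05, 0.0]),
                             delta=18.03)
print(s)      # 0.01 (up to round-off)
print(Mdir)   # [1. 0.]
```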
For purposes of demonstrating the calibration of a continuum model, suppose a bond-based model is assumed:
\begin{equation}
{\bf t}_{ji}={ \mathcal{T} }_{ji}{\bf M}_{ji}
\label{eqn-tsqlat}
\end{equation}
where ${ \mathcal{T} }_{ji}$ is a scalar.
The general pattern of the CG bond forces in Figure~\ref{fig-lrt} suggests the following form:
\begin{equation}
{ \mathcal{T} }_{ji}=A{\mathsf{R}}(r_{ji})s_{ji}
\label{eqn-trsqlat}
\end{equation}
where
\begin{equation}
{\mathsf{R}}(r) = r^{\mu_1}(1-r)^{\mu_2}
\label{eqn-Rr}
\end{equation}
and where $A$, $\mu_1$, and $\mu_2$ are constants.
Because the assumed form of the material model \eqref{eqn-trsqlat} is linear in $s_{ji}$ and contains
no dependence on other bonds, it is a bond-based, linear microelastic material model.
To evaluate the parameters, let $i$ be the {\emph{target node}} at the center of the CG grid.
Let $j$ be any node that interacts with $i$.
Taking the logarithm of both sides of \eqref{eqn-trsqlat} and rearranging leads to
\begin{equation}
\log { \mathcal{T} }_{ji} = \log A+\mu_1\log r_{ji}+\mu_2\log(1-r_{ji})+\log{\epsilon}
\label{eqn-logt}
\end{equation}
where, for isotropic extension, $s_{ji}={\epsilon}$.
Evaluating ${ \mathcal{T} }_{ji}$ from the CG data at the three bond lengths $| {\boldsymbol{\xi}} _{ji}|=1,2,3$,
\eqref{eqn-logt} forms a linear algebraic system with unknowns $\log A$, $\mu_1$, and $\mu_2$.
This system is easily solved for these quantities.
The parameters $A$, $\mu_1$, and $\mu_2$ are therefore now known.
These values are listed in Table~\ref{table-sqlat}.
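The $3\times 3$ linear solve implied by \eqref{eqn-logt} can be sketched as follows; the sample bond lengths used here are illustrative, not the ones used in the calibration, and the target data are generated from the fitted parameters themselves as a self-consistency check:

```python
import numpy as np

def fit_parameters(r, T, eps):
    """Solve Eq. (logt) for A, mu1, mu2 from three (r, T) samples.

    r   : (3,) normalized bond lengths r_ji
    T   : (3,) CG scalar bond force densities T_ji at bond strain eps
    """
    M = np.column_stack([np.ones(3), np.log(r), np.log(1.0 - r)])
    rhs = np.log(T) - np.log(eps)
    logA, mu1, mu2 = np.linalg.solve(M, rhs)
    return np.exp(logA), mu1, mu2

# Self-consistency check against the fitted values of Table (sqlat)
A0, m1, m2 = 0.3501, 1.902, 3.332
r = np.array([0.1, 0.2, 0.3])           # illustrative normalized lengths
eps = 0.00019
T = A0 * r**m1 * (1.0 - r)**m2 * eps    # Eqs. (trsqlat), (Rr)
A, mu1, mu2 = fit_parameters(r, T, eps)
print(round(A, 4), round(mu1, 3), round(mu2, 3))   # 0.3501 1.902 3.332
```

With more than three sampled bond lengths, the same matrix with extra rows gives an overdetermined system that can be solved by least squares instead.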
\begin{table}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
Parameter & Value \\ \hline \hline
$A$ & 0.3501 \\ \hline
$\mu_1$ & 1.902 \\ \hline
$\mu_2$ & 3.332 \\ \hline
$\delta$ & 18.03 \\ \hline
\end{tabular}
\caption{Parameters for the peridynamic material model fitted to MD data in the square lattice example.}
\label{table-sqlat}
\end{table}
Figure~\ref{fig-lrt} shows the dispersion curves for the original small-scale model
\eqref{eqn-sqlatt} and the fitted peridynamic model \eqref{eqn-tsqlat}, \eqref{eqn-trsqlat}.
For comparison, the dispersion curve from the local theory (linear elasticity) is also shown.
The peridynamic model provides better agreement with the original model than the local
theory for wavelengths above the CG node spacing.
At smaller wavelengths, the peridynamic model does not include the small-scale interactions that
influence dispersion.
The peridynamic grid has 4\% as many nodes as the original small-scale grid and allows a time step
size 5 times larger.
So, there is a substantially reduced cost in using the coarse grained peridynamic model.
In the continuous form of the peridynamic model, the equation of motion is given by
\begin{equation}
\rho({\bf x})\ddot{\bf u}({\bf x},t)=\int_{\mathcal{H}_{\bf x}} \big[{\bf t}({\bf q},{\bf x},t)-{\bf t}({\bf x},{\bf q},t)\big]\;{ {\text{d}} }{\bf q} +{\bf b}({\bf x},t).
\label{eqn-eomt}
\end{equation}
Using \eqref{eqn-trsqlat} and \eqref{eqn-Rr}, the material model in this example problem is then
\begin{equation}
{\bf t}({\bf q},{\bf x},t)={\bf M} A{\mathsf{R}}(r)s
\label{eqn-fttsq}
\end{equation}
where
\begin{equation}
{\bf M}=\frac{{\bf y}({\bf q},t)-{\bf y}({\bf x},t)} {|{\bf y}({\bf q},t)-{\bf y}({\bf x},t)|}, \quad
r=\frac{|{\bf q}-{\bf x}|}{\delta}, \quad
s=\frac{|{\bf y}({\bf q},t)-{\bf y}({\bf x},t)|} {|{\bf q}-{\bf x}|}-1.
\label{eqn-Msq}
\end{equation}
\section{Application to graphene}\label{sec-graphene}
To apply the method to graphene, an MD model of a
single-layer graphene sheet was constructed (Figure~\ref{fig-mdmesh}).
The MD mesh is a 10nm square containing 3634 atoms arranged in a hexagonal lattice.
The initial interatomic spacing is 0.146nm.
The atoms interact through a Tersoff potential \cite{tersoff88}.
The temperature is controlled by a thermostat using Langevin dynamics that randomly increases or reduces the
thermal energy of the atoms to keep the mean kinetic energy constant.
To reduce the effect of thermal oscillations on the coarse grained displacements,
the atomic displacements are smoothed over time according to the following expression:
\begin{equation}
{\bf U}_\alpha(0)=0, \qquad \dot{\bf U}_\alpha(t)={\varepsilon}(\tilde{\bf U}_\alpha(t)-{\bf U}_\alpha(t))
\label{eqn-Usmooth}
\end{equation}
where $\tilde{\bf U}_\alpha$ is the unsmoothed displacement of atom $\alpha$ (including thermal oscillations).
The parameter ${\varepsilon}$ is a constant taken to be ${\varepsilon}=0.005/\Delta t$, where $\Delta t$ is the
MD time step size.
The smoothed displacements ${\bf U}_\alpha$ are used in the coarse grained expressions such as \eqref{eqn-udef}.
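Under forward-Euler time stepping, the smoothing rule \eqref{eqn-Usmooth} with ${\varepsilon}=0.005/\Delta t$ reduces to an exponential moving average with weight 0.005 per MD step. The sketch below applies this filter to synthetic displacement data; the drift and noise amplitudes are hypothetical and not taken from the MD model.

```python
import numpy as np

# Forward-Euler discretization of eqn-Usmooth:
#   U(0) = 0,  dU/dt = eps*(U_tilde - U),  eps = 0.005/dt
# => U_{k+1} = U_k + 0.005*(U_tilde_k - U_k), an exponential moving average.
def smooth_displacements(u_tilde_history, weight=0.005):
    u = np.zeros_like(u_tilde_history[0])
    smoothed = []
    for u_tilde in u_tilde_history:
        u = u + weight * (u_tilde - u)
        smoothed.append(u.copy())
    return smoothed

# Synthetic data: a steady displacement plus thermal noise (hypothetical).
rng = np.random.default_rng(0)
steps = 5000
drift = 0.01 * np.ones(3)                          # nm
u_tilde = [drift + 0.1 * rng.standard_normal(3) for _ in range(steps)]
u_smooth = smooth_displacements(u_tilde)
```

The filter time constant is $1/{\varepsilon}=200$ MD steps, so thermal oscillations faster than this are strongly attenuated while the slow CG displacement passes through.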
The MD grid is initially allowed to reach a constant temperature in an unstressed state before
loading is applied.
After this initial period, constant velocity boundary conditions are applied at the edges of the grid.
When this transition occurs, a velocity gradient is added to the thermal velocities in the grid such
that the atomic velocities are consistent with the boundary conditions.
The thermostat continues to be applied during loading, since otherwise the temperature would change
due to thermoelasticity.
The edges of the MD mesh have prescribed velocity.
The calculation is stopped when the strain exceeds 30\%, at which point the maximum stress
has been reached and the stress is decreasing.
The loading rate is such that this global strain is attained in about 5000 time steps.
To calibrate the peridynamic material model described below, only two loading cases are needed.
These are (1) uniaxial strain, and (2) isotropic extension.
The coarse graining positions ${\bf x}_i$ are generated on a square lattice with spacing $h=0.5$nm.
The weighting functions are the cone-shaped functions given by
\eqref{eqn-Sdef} and \eqref{eqn-Ommd}.
The CG mesh contains 121 nodes.
Thus, each CG node represents nominally $3634/121\approx30$ atoms.
The CG displacements are computed according to \eqref{eqn-udef}, and the CG bond forces are
computed from \eqref{eqn-ffdef}, using the MD displacements and forces.
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-mdmesh.pdf}
\caption{Undeformed MD (left) and coarse grained (right) meshes.}
\label{fig-mdmesh}
\end{figure}
The CG bond force data show a softening trend as a function of strain, as shown in
Figure~\ref{fig-isotropic}.
Graphene sheets can be treated as nearly isotropic for purposes of deformation in the plane,
with a significant Poisson effect.
To show this, MD calculations of uniaxial strain at a temperature of 300K were performed with three
different orientations of the hexagonal lattice (Figure~\ref{fig-isotropic}).
The stress-strain curves show that even in the nonlinear regime, the orientation makes only about
a 12\% difference in the stress.
The process of failure in a typical MD simulation is shown in Figure~\ref{fig-mddeform}.
The graphene sheet at 300K is deformed under (globally) uniaxial strain.
When the grid is strained beyond the maximum in the stress-strain curve, the perfect hexagonal symmetry
is disrupted due to the onset of material instability, leading rapidly to material failure.
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-isotropic.pdf}
\caption{Stress-strain curves for graphene under uniaxial strain for three different lattice orientations.}
\label{fig-isotropic}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-mddeform.pdf}
\caption{MD simulation of uniaxial strain at a temperature of 300K.}
\label{fig-mddeform}
\end{figure}
To carry out the fitting of a peridynamic model to the CG data, a target CG node $i$
is chosen at the center of the CG mesh.
For node $i$, let ${\mathcal{H}}_i$ denote the {\emph{family}} of $i$, defined by
\begin{equation}
{\mathcal{H}}_i=\left\{ j\; \big\vert\; |{\bf x}_j-{\bf x}_i|\le\delta \right\}
\label{eqn-Hidef}
\end{equation}
where $\delta$ is the coarse grained horizon given by \eqref{eqn-deltadef}.
The two MD calculations (for uniaxial strain and isotropic extension), after coarse graining,
provide curves of bond force density ${\bf t}_{ji}$ as a function of the bond strain $s_{ji}$
defined by \eqref{eqn-sdef}.
Also recall the normalized bond length given by \eqref{eqn-rdef}.
Plotting the curves of $|{\bf t}_{ji}|$ as a function of $s_{ji}$ and of $r_{ji}$ for many bonds reveals the general
shapes shown in Figure~\ref{fig-bondfit}.
The softening response shown in the CG bond forces (dashed lines) suggests the following form:
\begin{equation}
{ \mathcal{T} }_{ji}=A{\mathsf{R}}(r_{ji}) {\mathsf{S}}(s_i^+)
\left[\left(1-\frac{\beta}{2}\right)s_{ji}+\beta \bar s_i\right],
\label{eqn-fasp}
\end{equation}
where the bond length term ${\mathsf{R}}$ has the same form as in the previous example \eqref{eqn-Rr},
and the strain softening term ${\mathsf{S}}$ is given by
\begin{equation}
{\mathsf{S}}(p) = \left\lbrace \begin{array}{ll}
n/s_0 & {\mathrm{if}}\; p\le0, \\
\\
\displaystyle{
\frac{1}{p} \left(1-\left\vert 1-\frac{p}{s_0}\right\vert^n\right)
} & {\mathrm{if}}\; 0<p<2s_0, \\
\\
0 & {\mathrm{if}}\; p\ge 2s_0 \\
\end{array} \right.
\label{eqn-Ss}
\end{equation}
for any $p$.
The parameters $A$, $\mu_1$, $\mu_2$, $n$, $s_0$, and $\beta$ are constants independent of the bond and of the deformation.
In \eqref{eqn-fasp}, the variables $\bar s_i$ and $s_i^+$ are the mean and maximum strains among all the bonds
in the family of $i$:
\begin{equation}
\bar s_i = \frac{ \sum_{j\in{\mathcal{H}}_i} s_{ji} }
{ \sum_{j\in{\mathcal{H}}_i} 1 }, \qquad
s_i^+ = \max_{j\in{\mathcal{H}}_i}\big\{ s_{ji}\big\}.
\label{eqn-spm}
\end{equation}
The mean bond strain is similar to a nonlocal dilatation.
In \eqref{eqn-fasp}, the term involving $\beta$ represents the bond strain adjusted by the mean strain.
This term captures the Poisson effect.
The function ${\mathsf{S}}$ is a softening term, which, under tension, drops off to 0 for large strain.
If ${\mathsf{S}}$ were constant, the model would be linearly elastic with variable Poisson ratio.
${\mathsf{S}}$ depends only on the maximum current bond strain in the family, $s_i^+$.
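As a concrete sketch, the material model \eqref{eqn-fasp}--\eqref{eqn-Ss} can be coded directly; the function passed as \texttt{R} stands in for the bond length dependence \eqref{eqn-Rr} (a placeholder constant here), and the parameter values are the fitted ones from Table~\ref{table-matfit}.

```python
def S(p, s0, n):
    # Strain-softening term, eqn-Ss; continuous at p = 0 and p = 2*s0.
    if p <= 0.0:
        return n / s0
    if p < 2.0 * s0:
        return (1.0 - abs(1.0 - p / s0) ** n) / p
    return 0.0

def bond_force(s_ji, r_ji, s_bar, s_plus, A, s0, n, beta, R):
    # Bond force density magnitude, eqn-fasp. R(r) is the bond length
    # dependence carried over from the previous example (eqn-Rr).
    return A * R(r_ji) * S(s_plus, s0, n) * ((1.0 - beta / 2.0) * s_ji + beta * s_bar)

s0, n = 0.2345, 2.338        # fitted values from Table table-matfit
```

For isotropic extension, where $s_{ji}=\bar s_i=s_i^+$, the bracket collapses to $(1+\beta/2)s$, consistent with the calibration below.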
The next step is to find the parameters in the expressions \eqref{eqn-fasp}--\eqref{eqn-Ss}.
In the following discussion, the stress tensor obtained from the CG bond force data \cite{sill:15} is
defined by
\begin{equation}
\boldsymbol{\sigma}_i = \sum_{j\in{\mathcal{H}}_i} {\bf t}_{ji}\otimes {\boldsymbol{\xi}} _{ji} V.
\label{eqn-sigmacg}
\end{equation}
The 11 component of the stress tensor in \eqref{eqn-sigmacg} will be denoted by $\sigma_i$ in the present discussion:
\begin{equation}
\sigma_i({\epsilon}) = \sum_{j\in{\mathcal{H}}_i} (t_1)_{ji}(\xi_1)_{ji} V.
\label{eqn-sigma1cg}
\end{equation}
The two coarse grained MD simulations used for calibrating the model parameters have the following strains:
\begin{itemize}
\item Uniaxial strain (UX) with strain ${\epsilon}$ in the $x_1$ direction:
\begin{equation}
s_i^+={\epsilon}, \qquad \bar s_i=\frac{{\epsilon}}{2}.
\label{eqn-ux}
\end{equation}
\item Isotropic extension (IE) with strain ${\epsilon}$:
\begin{equation}
s_i^+={\epsilon}, \qquad \bar s_i={\epsilon}.
\label{eqn-ie}
\end{equation}
\end{itemize}
The constant $\beta$ will be determined first.
In the IE and UX cases with global strain ${\epsilon}$, the bond strain in a bond with polar angle $\theta$ is given by
\begin{equation}
s^{ {\text{IE}} }={\epsilon}, \qquad s^{ {\text{UX}} }={\epsilon}\cos^2\theta.
\label{eqn-stheta}
\end{equation}
Then from
\eqref{eqn-fasp},
\eqref{eqn-sigma1cg}, \eqref{eqn-ux}, \eqref{eqn-ie}, and \eqref{eqn-stheta},
\begin{equation}
\frac{\sigma_i^{ {\text{IE}} }}{\sigma_i^{ {\text{UX}} }}= \frac
{\sum_{j\in{\mathcal{H}}_i} A{\mathsf{R}}(r_{ji}){\mathsf{S}}({\epsilon}) (1+\beta/2){\epsilon}\xi_{ji} \cos^2\theta V}
{\sum_{j\in{\mathcal{H}}_i} A{\mathsf{R}}(r_{ji}){\mathsf{S}}({\epsilon})[ (1-\beta/2)\cos^2\theta + \beta/2]{\epsilon}\xi_{ji} \cos^2\theta V}
\label{eqn-ftheta}
\end{equation}
where $\xi_{ji}=| {\boldsymbol{\xi}} _{ji}|$.
Approximating \eqref{eqn-ftheta} by replacing the sums with integrals and noting that
${\mathsf{R}}$ and ${\mathsf{S}}$ are independent of $\theta$ leads to
\begin{equation}
\frac{\sigma_i^{ {\text{IE}} }}{\sigma_i^{ {\text{UX}} }}= \frac
{ (1+\beta/2) \int_0^{2\pi}\cos^2\theta\,{ {\text{d}} }\theta }
{ (1-\beta/2)\int_0^{2\pi}\cos^4\theta\,{ {\text{d}} }\theta + (\beta/2) \int_0^{2\pi}\cos^2\theta\,{ {\text{d}} }\theta }.
\label{eqn-fthetaint}
\end{equation}
Since $\int_0^{2\pi}\cos^2\theta\,{ {\text{d}} }\theta=\pi$ and $\int_0^{2\pi}\cos^4\theta\,{ {\text{d}} }\theta=3\pi/4$, solving \eqref{eqn-fthetaint} for $\beta$ yields
\begin{equation}
\beta= \frac{8-6\gamma}{\gamma-4}, \qquad\gamma:= \frac{\sigma_i^{ {\text{IE}} }}{\sigma_i^{ {\text{UX}} }}.
\label{eqn-betacal}
\end{equation}
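The $\beta$ calibration can be checked numerically: evaluate the angular integrals in \eqref{eqn-fthetaint} by quadrature, form the stress ratio $\gamma$, and invert with \eqref{eqn-betacal}. A minimal sketch:

```python
import numpy as np

def beta_from_ratio(gamma):
    # eqn-betacal: beta in terms of the measured IE/UX stress ratio.
    return (8.0 - 6.0 * gamma) / (gamma - 4.0)

def ratio_from_beta(beta, m=100000):
    # Forward evaluation of eqn-fthetaint by midpoint quadrature;
    # c2 -> pi and c4 -> 3*pi/4, as used in the text.
    theta = (np.arange(m) + 0.5) * 2.0 * np.pi / m
    w = 2.0 * np.pi / m
    c2 = np.sum(np.cos(theta) ** 2) * w
    c4 = np.sum(np.cos(theta) ** 4) * w
    return (1.0 + beta / 2.0) * c2 / ((1.0 - beta / 2.0) * c4 + (beta / 2.0) * c2)
```

With the fitted value $\beta=-1.035$ from Table~\ref{table-matfit}, the round trip reproduces the input to quadrature accuracy.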
The constants $s_0$ and $n$ are determined next.
For UX, combining \eqref{eqn-fasp}, \eqref{eqn-Ss}, \eqref{eqn-sigma1cg}, and \eqref{eqn-ux} leads to
\begin{equation}
\sigma_i^{ {\text{UX}} }({\epsilon}) = \sum_{j\in{\mathcal{H}}_i} A{\mathsf{R}}(r_{ji}) \left(1-\left\vert 1-\frac{{\epsilon}}{s_0}\right\vert^n\right) (\xi_1)_{ji}V.
\label{eqn-sigmaux}
\end{equation}
The maximum of the function in \eqref{eqn-sigmaux} occurs at ${\epsilon}=s_0$, and its value is given by
\begin{equation}
\sigma_i^{ {\text{UX}} }(s_0) = \sum_{j\in{\mathcal{H}}_i} A{\mathsf{R}}(r_{ji}) (\xi_1)_{ji}V.
\label{eqn-sigmauxmax}
\end{equation}
The values of $s_0$ and $\sigma_i^{ {\text{UX}} }(s_0)$ are easily read off from the CG data.
Differentiating \eqref{eqn-sigmaux} yields
\begin{equation}
\frac{{ {\text{d}} }\sigma_i^{{ {\text{UX}} }}}{{ {\text{d}} }{\epsilon}}(0)=\frac{n}{s_0} \sum_{j\in{\mathcal{H}}_i} A{\mathsf{R}}(r_{ji}) (\xi_1)_{ji}V.
\label{eqn-dS}
\end{equation}
The slope of the curve at the origin ${ {\text{d}} }\sigma_i^{{ {\text{UX}} }}/{ {\text{d}} }{\epsilon}(0)$ is easily obtained from the CG
data by numerical differentiation.
Then from \eqref{eqn-sigmauxmax} and \eqref{eqn-dS}, the value of $n$ is found from
\begin{equation}
n = \frac{s_0}{\sigma_i^{{ {\text{UX}} }}(s_0)} \frac{{ {\text{d}} }\sigma_i^{{ {\text{UX}} }}}{{ {\text{d}} }{\epsilon}}(0).
\label{eqn-ns}
\end{equation}
The parameters $s_0$ and $n$ are now known.
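This two-step extraction can be sketched on a synthetic UX curve of the form \eqref{eqn-sigmaux}; the prefactor \texttt{C} lumps the strain-independent lattice sum and is hypothetical.

```python
import numpy as np

# Synthetic CG stress-strain curve with the functional form of eqn-sigmaux.
s0_true, n_true, C = 0.2345, 2.338, 100.0          # C is hypothetical
eps = np.linspace(0.0, 0.4, 4001)
sigma = C * (1.0 - np.abs(1.0 - eps / s0_true) ** n_true)

# s0 and sigma(s0) are read off as the location and value of the maximum
# (eqn-sigmauxmax):
i_max = int(np.argmax(sigma))
s0_fit, sigma_max = eps[i_max], sigma[i_max]

# Initial slope by numerical differentiation (eqn-dS):
slope0 = (sigma[1] - sigma[0]) / (eps[1] - eps[0])

# eqn-ns:
n_fit = s0_fit * slope0 / sigma_max
```

The fit recovers the input parameters to within the finite-difference and grid resolution, mimicking how $s_0$ and $n$ are read off real CG data.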
The values of $A$, $\mu_1$, and $\mu_2$ are determined from the IE simulation as in Section~\ref{sec-example}
using \eqref{eqn-logt}.
Now all the parameters are known, and the calibration process for the model is complete.
The parameters for the
material model evaluated for the CG node $i$ at the center of the square are given in
Table~\ref{table-matfit}.
A comparison between the fitted peridynamic material model and the coarse grained bond forces
is shown in Figure~\ref{fig-bondfit}.
To illustrate the effect of distributed defects,
the analysis was repeated for a graphene sheet with 10\% of the atoms removed.
The results are shown in Figure~\ref{fig-void}.
As expected, the sample with defects is less stiff and fails at a lower stress.
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
Parameter & Value & Units \\ \hline \hline
$A$ & 34.94 & nN/nm$^6$ \\ \hline
$s_0$ & 0.2345 & \\ \hline
$n$ & 2.338 & \\ \hline
$\mu_1$ & 1.335 & \\ \hline
$\mu_2$ & 2.922 & \\ \hline
$\beta$ & -1.035 & \\ \hline
$\delta$ & 2.121 & nm \\ \hline
$G_c$ & 17.5 & J/m$^2$ \\ \hline
$s_c$ & 0.145 & \\ \hline
$\tau$ & 0.335 & nm \\ \hline
\end{tabular}
\caption{Parameters for the peridynamic material model fitted to MD data at 300K with $h=0.5$nm.}
\label{table-matfit}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-bondfit.pdf}
\caption{Fitted peridynamic material model for coarse grained bond forces in a perfect graphene sheet at 300K.
Left: Dependence of bond force on bond length.
Right: Dependence of bond force on bond strain for a bond with length $h$ in the $x_1$-direction.
}
\label{fig-bondfit}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-void.pdf}
\caption{Graphene sheet with 10\% void.
Left: Initial MD grid.
Right: CG and fitted peridynamic stress-strain curves in uniaxial strain for 0\% and 10\% void.}
\label{fig-void}
\end{figure}
The continuous form of the model is then
\begin{equation}
{\bf t}({\bf q},{\bf x},t)= {\bf M} { \mathcal{T} }, \quad
{ \mathcal{T} }= A{\mathsf{R}}(r) {\mathsf{S}}(s^+) \left[\left(1-\frac{\beta}{2}\right)s+\beta \bar s\right]
\label{eqn-grcontin}
\end{equation}
where ${\bf M}$ and $r$ are given by \eqref{eqn-Msq} and
\begin{equation}
s^+({\bf x},t)=\max_{ {\boldsymbol{\xi}} \in{\mathcal{H}}} s( {\boldsymbol{\xi}} ,t), \quad
\bar s({\bf x},t)=\frac{\int_{\mathcal{H}} s( {\boldsymbol{\xi}} ,t)\;{ {\text{d}} } {\boldsymbol{\xi}} }{\int_{\mathcal{H}} { {\text{d}} } {\boldsymbol{\xi}} }, \quad
{\boldsymbol{\xi}} ={\bf q}-{\bf x}.
\label{eqn-rpcontin}
\end{equation}
\section{Bond breakage}\label{sec-breakage}
The process of coarse graining described above starts with an MD model that does not
contain initiated cracks, although it can contain distributed defects.
The distinction is that after initiation, the damage near the crack tip evolves in such
a way that the Griffith criterion applies.
This means that a growing crack consumes a definite amount of energy per unit area of
new crack surface.
This energy is a material property called the {\emph{critical energy release rate}},
denoted by $G_c$.
So, the nonlinear material model obtained by coarse graining is designed to simulate
nucleation of damage, but not the details of what happens in the process zone near
a crack that is already present.
To incorporate previously initiated cracks into the continuum model and allow for
rescaling, a value of $G_c$ can be determined easily from the MD model in a separate
simulation.
To do this, assume that all the energy that goes into growing a crack is converted to
surface energy \cite{zhang14}.
The MD interatomic potential energy is lowered when each atom is surrounded by its full complement
of nearest neighbors, which is 3 in the case of graphene.
It follows that when some neighbors are removed, as would happen on a crack surface, the
total energy increases.
So, $G_c$ can be determined by performing an MD simulation in which the sample is split into
two halves (Figure~\ref{fig-esurface}).
The total potential energy values before and after the split are $E_0$ and $E_1$ respectively.
The value of $G_c$ is then
\begin{equation}
G_c=\frac{E_1-E_0}{\tau L},
\label{eqn-GcE}
\end{equation}
where $L$ is the total length of the MD grid along the split and $\tau$ is the thickness (0.335nm
for graphene).
After carrying out the above calculation, the resulting value of $G_c$ is $17.5$J/m$^2$, which is similar
to experimentally measured values \cite{zhang14}.
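In SI units, \eqref{eqn-GcE} amounts to the short computation below; the energy difference $E_1-E_0$ is a hypothetical stand-in for MD output, chosen here so that the result reproduces the reported 17.5~J/m$^2$.

```python
# G_c from the surface-energy method, eqn-GcE.
tau = 0.335e-9            # sheet thickness, m (value for graphene)
L = 10e-9                 # length of the split through the 10 nm MD grid, m
E1_minus_E0 = 5.8625e-17  # J, hypothetical MD energy difference

G_c = E1_minus_E0 / (tau * L)   # J/m^2
```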
Bond breakage is added to the coarse grained continuum model \eqref{eqn-grcontin} using the standard
form of irreversible bond breakage:
\begin{equation}
{ \mathcal{T} }=A{\mathsf{R}} {\mathsf{S}}{\mathcal{B}}( {\boldsymbol{\xi}} ,t) \left[\left(1-\frac{\beta}{2}\right)s+\beta \bar s\right]
\label{eqn-continbreak}
\end{equation}
where ${\mathcal{B}}$ is a binary-valued function that switches from 1 to 0 when the bond $ {\boldsymbol{\xi}} $ breaks:
\begin{equation}
{\mathcal{B}}( {\boldsymbol{\xi}} ,t)=\left\lbrace
\begin{array}{ll}
1 & {\mathrm{if}}\;s( {\boldsymbol{\xi}} ,t')<s_* \;{\mathrm{for\;all}}\;0\le t'\le t, \\
0 & {\mathrm{otherwise.}}
\end{array}
\right.
\label{eqn-mubreak}
\end{equation}
where $s_*$ is the critical strain for bond breakage.
A scalar damage variable $\phi({\bf x},t)$ can be defined as the fraction of bonds connected to a point ${\bf x}$ that have broken:
\begin{equation}
\phi({\bf x},t) = 1-\frac{ \int_{\mathcal{H}} {\mathcal{B}}( {\boldsymbol{\xi}} ,t)\;{ {\text{d}} } {\boldsymbol{\xi}} } { \int_{\mathcal{H}} { {\text{d}} } {\boldsymbol{\xi}} }.
\label{eqn-phidef}
\end{equation}
Once $G_c$ is known from MD, a critical bond strain $s_c$ in the CG material model can be determined by
requiring that the work per unit area consumed in separating two halves of the CG grid matches
this $G_c$.
Suppose the CG grid is split into two halves ${\mathcal R}_-$ and ${\mathcal R}_+$.
Assuming uniaxial strain,
the total work done through the bonds that initially connected the two halves is given by
\begin{eqnarray}
\tau LG_c&=& \sum_{i\in{\mathcal R}_-} \sum_{j\in{\mathcal R}_+}V^2\int_0^{s_c} { \mathcal{T} }_{ji}{ {\text{d}} }(\xi_{ji}s) \nonumber\\
&=&AV^2\sum_{i\in{\mathcal R}_-} \sum_{j\in{\mathcal R}_+}\xi_{ji} {\mathsf{R}}(r_{ji}) \int_0^{s_c} {\mathsf{S}}(s)s\;{ {\text{d}} } s.
\label{eqn-Es}
\end{eqnarray}
Equation \eqref{eqn-Es} is solved numerically for $s_c$, using the value for $G_c$ that was determined
from MD using \eqref{eqn-GcE}.
Equation \eqref{eqn-Es} is simply the classical expression for the peridynamic energy release rate \cite{madenci:13}
specialized to the present material model.
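Solving \eqref{eqn-Es} for $s_c$ reduces to a one-dimensional root find, since the geometric prefactor $AV^2\sum\sum \xi_{ji}{\mathsf{R}}(r_{ji})$ is independent of $s_c$. The sketch below collapses that prefactor into a single target value (hypothetical) and bisects on the inner integral; the round trip recovers the tabulated $s_c=0.145$.

```python
import numpy as np

def S(p, s0, n):
    # Strain-softening term, eqn-Ss (tension branch).
    if p <= 0.0:
        return n / s0
    if p < 2.0 * s0:
        return (1.0 - abs(1.0 - p / s0) ** n) / p
    return 0.0

def work_integral(s_c, s0, n, m=2000):
    # int_0^{s_c} S(s) s ds by midpoint quadrature (inner integral of eqn-Es).
    s = (np.arange(m) + 0.5) * s_c / m
    return sum(S(si, s0, n) * si for si in s) * s_c / m

def solve_sc(target, s0, n):
    # Bisection: the work integral increases monotonically up to s_c = 2*s0.
    lo, hi = 0.0, 2.0 * s0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if work_integral(mid, s0, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s0, n = 0.2345, 2.338
target = work_integral(0.145, s0, n)   # hypothetical target (energy / prefactor)
```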
Since the Griffith fracture criterion only applies to cracks that already exist, rather than new cracks,
the value of $s_c$ obtained from \eqref{eqn-Es} is applied to the bonds connected to ${\bf x}$ only when damage is
already present within the family of ${\bf x}$.
Define the maximum damage within the family of ${\bf x}$ by $\bar\phi({\bf x},t)$:
\begin{equation}
\bar\phi({\bf x},t) = \max_{{\bf q}\in{\mathcal{H}_{\bf x}}} \phi({\bf q},t).
\label{eqn-barphi}
\end{equation}
The critical strain for bond breakage changes from the coarse grained value $s_0$ that reflects crack nucleation
to the Griffith value $s_c$:
\begin{equation}
s_*=\left\lbrace
\begin{array}{ll}
s_0 & {\mathrm{if}}\;\bar\phi < \phi_{trans} \\
s_c & {\mathrm{otherwise.}}
\end{array}
\right.
\label{eqn-sstar}
\end{equation}
where $\phi_{trans}$ is the transition value of damage, usually set to 0.3.
The use of different values of the critical strain for the nucleation and growth phases is discussed
further in \cite{silling21b} in the context of the microelastic nucleation and growth (MNG) material model.
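The bookkeeping implied by \eqref{eqn-mubreak}, \eqref{eqn-phidef}, \eqref{eqn-barphi}, and \eqref{eqn-sstar} can be sketched for a single family of bonds; the bond list and strains below are hypothetical, while $s_0$, $s_c$, and $\phi_{trans}$ take the values quoted in the text.

```python
import numpy as np

s0, s_c, phi_trans = 0.2345, 0.145, 0.3

def update_breakage(B, strains, s_star):
    # eqn-mubreak: B switches irreversibly from 1 to 0 once s exceeds s_star.
    return np.minimum(B, (strains < s_star).astype(float))

def damage(B):
    # eqn-phidef, discretized as the broken-bond fraction of one family.
    return 1.0 - float(B.mean())

def critical_strain(phi_bar):
    # eqn-sstar: nucleation value s0 before damage, Griffith value s_c after.
    return s0 if phi_bar < phi_trans else s_c

# Hypothetical family of 10 bonds; three bonds strained beyond s0.
B = np.ones(10)
strains = np.array([0.05] * 7 + [0.25] * 3)
B = update_breakage(B, strains, critical_strain(0.0))   # nucleation phase
```

After this step the family damage reaches $\phi_{trans}$, so subsequent updates use the lower Griffith strain $s_c$, mimicking the transition from nucleation to growth.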
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-esurface.pdf}
\caption{Potential energy of atoms on a free edge of a graphene sheet is higher than in the interior.
Colors represent potential energy in the Tersoff interatomic potential.}
\label{fig-esurface}
\end{figure}
\section{Changing the horizon}\label{sec-scale}
A peridynamic model obtained from coarse grained data can be rescaled to use any desired horizon $\delta'$.
Let $\delta$ denote the original horizon determined in the coarse graining process, and
let $\kappa=\delta'/\delta$.
It is required that the stress be unchanged by the rescaling:
\begin{equation}
\int_{{\mathcal{H}}'} {\bf t}'( {\boldsymbol{\xi}} ')\otimes {\boldsymbol{\xi}} '\;{ {\text{d}} } {\boldsymbol{\xi}} ' =\int_{\mathcal{H}} {\bf t}( {\boldsymbol{\xi}} )\otimes {\boldsymbol{\xi}} \;{ {\text{d}} } {\boldsymbol{\xi}}
\label{eqn-resct}
\end{equation}
where ${\bf t}'$ is the rescaled material model, to be determined.
Since the integrals in \eqref{eqn-resct} are area integrals
in 2D, \eqref{eqn-resct} is satisfied for all deformations if ${\bf t}'$ is set to
\begin{equation}
{\bf t}'( {\boldsymbol{\xi}} ')=\kappa^{-3} {\bf t}( {\boldsymbol{\xi}} '/\kappa)
\label{eqn-rescg}
\end{equation}
for all $ {\boldsymbol{\xi}} '$.
(In 3D the exponent in \eqref{eqn-rescg} would be $-4$.)
The critical strain derived from the Griffith criterion $s_c$ follows a different scaling relation.
In both 2D and 3D, this relation is given by
\begin{equation}
s_c'=\kappa^{-1/2} s_c
\label{eqn-ressc}
\end{equation}
which follows from the standard derivation of the critical strain \cite{madenci:13}.
In the application below in Section~\ref{sec-exper}, a value of $\kappa=5$ was used.
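Both scaling relations can be verified numerically. For a radially symmetric bond force in 2D, the stress reduces to a radial moment $\int t(\xi)\xi^2\,{ {\text{d}} }\xi$ (one power of $\xi$ from the dyad, one from the polar Jacobian), which \eqref{eqn-rescg} leaves invariant; the force profile below is hypothetical, while $\delta$ and $\kappa=5$ match the values used in this paper.

```python
import numpy as np

kappa = 5.0       # delta'/delta, as in Section sec-exper
delta = 2.121     # original horizon, nm (Table table-matfit)

def t(xi):
    # Hypothetical radially symmetric bond force density with cutoff at delta.
    return 0.01 * np.exp(-xi / delta) * (xi < delta)

def t_prime(xi_p):
    # Rescaled model, eqn-rescg (2D exponent -3).
    return kappa ** -3 * t(xi_p / kappa)

def radial_moment(f, upper, m=200000):
    # Midpoint quadrature of int_0^upper f(xi) xi^2 dxi.
    xi = (np.arange(m) + 0.5) * upper / m
    return float(np.sum(f(xi) * xi ** 2) * upper / m)

lhs = radial_moment(t_prime, kappa * delta)
rhs = radial_moment(t, delta)

# Griffith critical strain rescales differently, eqn-ressc:
s_c = 0.145
s_c_prime = kappa ** -0.5 * s_c
```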
The coarse grained material model, before rescaling, embeds length scales from the
original small scale or MD model, as demonstrated by the dispersion curves in
Figure~\ref{fig-lrt}.
However, these physical length scales are lost when rescaling according to \eqref{eqn-rescg}.
In fact, after rescaling, there may be no compelling reason to use the same bond length dependence
${\mathsf{R}}$ as was obtained by coarse graining.
This can be replaced by some other convenient form, say ${\mathsf{R}}'$, provided that
\begin{equation}
\int_0^{\delta'}{\mathsf{R}}'(\xi'/\delta'){\xi'}^2\;{ {\text{d}} }\xi'
=\int_0^{\delta}{\mathsf{R}}(\xi/\delta){\xi}^2\;{ {\text{d}} }\xi,
\label{eqn-resR}
\end{equation}
which ensures that the stress is unchanged.
\section{Comparison with experiment}\label{sec-exper}
Lee et al. \cite{lee08} performed experiments in which the elastic response and strength of nearly perfect graphene
sheets were measured.
The sheets were suspended over circular cavities with diameter 1000nm or 1500nm.
The sheets were then deflected by an atomic force microscope (AFM) probe with a nominally hemispherical tip.
The main quantity reported was the force on the probe as a function of its deflection.
The case with a specimen diameter of 1000nm and an AFM probe tip radius of 27.5nm was simulated with
the coarse grained material model discussed above for a perfect graphene monolayer.
This material model was implemented in the Emu peridynamic code \cite{sias:05}.
The grid spacing in the CG model was scaled up by a factor of 5, resulting in a grid spacing in Emu of 2.5nm
and a horizon of $\delta'=10.61$nm.
The AFM probe tip was modeled as a rigid sphere with constant velocity.
The load on the AFM predicted by the peridynamic simulation is compared with typical experimental data \cite{lee08} in
Figure~\ref{fig-afmload}.
The experimental data has a statistical variation between tests of about 20\%.
The oscillations in the simulated curve come from vibrations of the membrane in ``trampoline'' mode, since the
simulation is dynamic rather than quasi-static.
The simulation assumed infinite friction, that is, no sliding between the probe and the membrane.
The alternative assumption of zero friction reduces the predicted peak load in the simulation.
It is also uncertain whether the probe is actually hemispherical and smooth, as is assumed in the calculation.
The simulated shape of the membrane and strain distribution just prior to failure are shown in Figure~\ref{fig-membrane}.
After failure, the specimen is predicted to form petals, a feature that is also observed in the experiment.
The Emu calculation had 125,629 nodes and used a time step size of 100fs.
In contrast, a full MD calculation of this problem would require over 28,000,000 atoms and
have a time step of about 0.5fs.
So, the peridynamic model offers a substantial saving in computer resources compared with full MD.
A peridynamic code with an implicit solver would allow a much larger time step size to be used than in Emu,
which uses explicit differencing in time.
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-afmload.pdf}
\caption{Simulated load on an AFM probe deflecting a graphene sheet compared with typical experimental data \cite{lee08}.}
\label{fig-afmload}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{doc50-fig-membrane.pdf}
\caption{Peridynamic simulation of the perforation of a graphene sheet by an AFM probe.}
\label{fig-membrane}
\end{figure}
\section{Discussion}\label{sec-disc}
The main result of this paper is a demonstration that the coarse graining method described in Section~\ref{sec-homogpd}
can be used to calibrate an appropriate peridynamic continuum or discretized material model.
The distinguishing features of this method are that it derives nonlocal bond forces directly from MD, and
that these forces are compatible with the use of smoothed displacements according to a prescribed
weighting function.
A peridynamic material model for graphene obtained from these bond forces
provides good agreement with nanoscale test data while greatly reducing
the cost of the calculation in comparison with molecular dynamics, especially when used together with
rescaling the horizon.
It was further demonstrated here that the coarse grained model can be combined with standard peridynamic bond
breakage to treat both the nucleation and growth phases of fracture.
As illustrated in Section~\ref{sec-example}, the method can treat long-range forces.
However, graphene sheets do not involve long-range forces, since the Tersoff potential
causes each atom to interact only with its nearest neighbors, of which there are 3.
Long-range forces would arise from the application of surface charge to graphene.
Long-range forces would also be present in multilayer graphene, since adhesion between the layers
occurs through interactions similar to van der Waals forces \cite{kiti05}.
So, the capability of the coarse graining method to treat long-range forces
would be needed for these applications.
A possible extension of the method is to apply the calibration process in Section~\ref{sec-graphene} individually
at each CG node, rather than at just one target node $i$.
This would allow the incorporation of defects such as grain boundaries into the calibrated peridynamic model,
in which the material parameters would then become dependent on position.
This extension appears to be practical, because the process of fitting described here is direct, rather than
relying on an optimization technique.
The coarse graining method provides bond forces as the primary quantity that is used for fitting a material
model.
This limits the number of MD simulations that are needed (only uniaxial strain and isotropic extension are used here),
in contrast to the large suite of training data that might be required by alternative methods.
A different approach \cite{you20,you21,xu21}
is to apply machine learning to fit a peridynamic model to coarse grained displacements.
The machine learning approach avoids the use of coarse grained bond forces but requires many different loading
cases as training data.
Machine learning may offer the potential to learn the form of a peridynamic model from small-scale data
in addition to calibrating the parameters.
\section*{Acknowledgment}
This work was supported by the
U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory
and by LDRD programs at Sandia National Laboratories.
Sandia National Laboratories is a multimission laboratory managed and operated by
National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of
Honeywell International Inc. for the U.S. Department of Energy’s National
Nuclear Security Administration under contract DE-NA0003525.
This paper, SAND2021-11007 R, describes objective technical results and analysis.
Any subjective views or opinions that might be expressed in the paper do not necessarily
represent the views of the U.S. Department of Energy or the United States Government.
\bibliographystyle{abbrv}
\section{Introduction}
Recently, photonic technologies have become promising to address the bottleneck for high-speed communication~\cite{ShiTianGervais+2020+4629+4663} and high-performance computing~\cite{Psaltis1990,Zhou2021} instead of traditional all-electronic technologies. The next-generation photonic devices need to manipulate light in multiple independent channels at high speeds, and metasurfaces are inherently suited to provide spatial multiplexing capabilities~\cite{Zheludev2012,Shaltouteaat3100}. Still today, the vast majority of photonic systems are passive. Amongst the available mechanisms that are explored for active control of light, hybrid~\cite{Monat2020} electro-optic interconnects employ $\chi^{(2)}$ effects to transduce a signal from the electronic domain into the optical domain and combine the low-loss, low-dissipation hallmarks of photonics with the compactness and reconfigurability of electronic circuits. Importantly, they have proven superior to alternative techniques when it comes to speed: the control fields may oscillate at frequencies well beyond the microwaves into the terahertz~\cite{Benea-Chelmus:20}. As a result, an unprecedented variety of electro-optic modulators has been reported, driven by the progress in the molecular engineering~\cite{D0TC05700B}, growth~\cite{Eltes2016}, fabrication and stability~\cite{Lu2020} of materials such as organic non-linear molecules~\cite{Xu2020}, barium titanate~\cite{Abel2019a} and lithium niobate~\cite{Luke:20}. Today, most demonstrations target fiber applications and foster photonic integrated platforms~\cite{Zhu:21, Elshaari2020} such as silicon-on-insulator technologies, where a low optical loss and a high transduction efficiency have been achieved. These two figures of merit are key for a wide range of applications, and are especially crucial for quantum applications~\cite{Rueda2019a, Youssefi2021}. 
Waveguide-based electro-optic modulators have a size on the order of a few tens to a few hundreds of wavelengths~(such as the ring resonators of the kind shown in Fig.~\ref{fig:FigVision}~a), since this allows one to use whispering gallery modes with the lowest radiative losses and hence the narrowest linewidth.
\begin{figure}[htb]
\centering
\includegraphics[width=16cm]{Visionpicture_6.pdf}
\caption{\textcolor{black}{\textbf{Comparison of waveguide-based and free-space electro-optic modulators.} \textbf{a,} Waveguide-based electro-optic modulators rely on resonant structures embedded in on-chip waveguides, such as ring resonators of high azimuthal order or on-chip interferometers. Light propagating in the waveguide acquires a phase modulation at the RF frequency $f_{RF}$ due to the electro-optic effect; the interaction region may be hundreds of wavelengths long. \textbf{b,} In contrast, free-space electro-optic modulators change the properties of a beam that is incident from free space. Sub-wavelength Mie nanoresonators impart a phase modulation via the electro-optic effect to the incident light that propagates through the thin film; the interaction length is typically shorter than a single wavelength. \textbf{c,} Resonant electro-optic modulators work on the principle that their resonant frequency $\omega_{res}$ is tuned linearly by an applied bias by $\Delta \omega_{eo}(t)$, due to the phase shift induced by the electro-optic effect. A radio-frequency bias $V_{RF}(t)= V_{eo}\sin(2\pi f_{RF} t)$ displaces the resonance frequency around its zero bias value. Narrowband resonances that satisfy $\Delta \omega_{eo} > \delta \omega_{res}$ are preferred for full intensity modulation at low switching voltages. Dashed black arrows indicate the applied tuning field that introduces the electro-optic effect. Red arrows indicate the propagating optical field. EO = electro-optic, GND = ground.}}
\label{fig:FigVision}
\end{figure}
Instead, ultrathin electro-optic modulators from sub-wavelength resonators are exquisite candidates in applications that require direct control over free-space light in a compact and spatially multiplexed way, such as free-space optical communication links~\cite{Mphuthi:19}, coherent laser ranging, active optical components, high-speed spatial light modulators~\cite{Smolyaninov2019,Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021} and active control of free-space emitters~\cite{Traverso:21}. Flat optical components such as metasurfaces~\cite{Chen2020,Khorasaninejad1190} rely on sub-wavelength-sized nanostructures that change the properties of a beam that is incident from free space onto the metasurface. In contrast to waveguide-based modulators, free-space electro-optic modulators from $\chi^{(2)}$ materials have been researched less~(few examples are Refs.~\cite{Gao2021,Timpu2020,doi:10.1002/adom.202000623}) despite their commonalities illustrated in Fig.~\ref{fig:FigVision}~a and b. In any electro-optic modulator, the microwave field is applied via metallic electrodes~\cite{Wang2018, Haffner2018} or antenna structures~\cite{Salamin2015, Benea-Chelmus:20} and changes the refractive index $n_{mat}$ of the non-linear material at optical frequencies via the linear electro-optic effect, also known as the Pockels effect, by $\Delta n (t) = - \frac{1}{2}n_{mat}^3rE (t)$, with $r$ the electro-optic coefficient of the material and $E (t)= \frac{V_{RF}(t)}{d}$ the tuning field~(voltage $V_{RF}(t)$ applied across the distance $d$).
In resonant modulators that employ the $r_{33}$ component of the electro-optic tensor, this change in refractive index $\Delta n(t)$ modifies the resonant frequency as illustrated in Fig.~\ref{fig:FigVision}~c by $\Delta \omega_{eo}(t) = -\frac{\Delta n (t)}{n_{mat}}\omega_{res}\Gamma_c = g_{eo}V_{RF}(t)$, with $g_{eo} = \frac{1}{2}n_{mat}^2r_{33} \frac{1~V}{d}\omega_{res}\Gamma_c$ the electro-optic coupling rate at 1~V applied voltage~\cite{Zhang2019} and $\Gamma_c$ the overlap factor of the two interacting fields with the nonlinear medium. The shift is proportional to the applied voltage and its polarity.
The fundamental challenge to realize free-space electro-optic modulators as shown in Fig.~\ref{fig:FigVision}~b compared to waveguide-based electro-optic modulators stems predominantly from the quality factors that can be achieved in the two systems and thus the underlying available interaction times. Resonances with a high quality factor~(and thus a small full width half maximum $\delta \omega_{res} = 2\pi\times \delta f_{res}$) are favorable as they minimize the so-called switching voltage $V_{eo} = V_{switch}$ that is necessary to fully shift the resonance away from its unbiased value, which occurs when $\delta \omega_{res} \leq \Delta \omega_{eo}$. In conditions of high-Q, an optical beam experiences full modulation of its intensity or phase even for low $V_{eo}$. The frequency shift can be derived from the phase modulation $\Delta \phi_{eo} = \Delta \omega_{eo}t_{int}$ introduced by the Pockels effect, where $t_{int} = \frac{2\pi}{\gamma_{rad}}=\frac{2\pi}{\delta \omega_{res}} $ is the interaction time of the optical beam with the control field within the nonlinear material and $\gamma_{rad}$ the radiative loss rate of the optical field out of the interaction region, into the far-field. The natural wavelength-scale dimensions of free-space modulators typically limit the efficiency in two ways. First, the spatial extent of the interaction region is only a few hundred nanometers long, commensurate with the typical thickness of flat optics. This results in an electro-optic transduction that is often inefficient. Second, wavelength-sized resonators have long suffered from low quality factors as a result of their small azimuthal modal order. 
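The link between quality factor, linewidth and interaction time can be made concrete with a short estimate. This is illustrative only: it uses the highest Q-factor reported below and a nominal telecom wavelength.

```python
import math

# Interaction time from the linewidth: t_int = 2*pi / delta_omega_res,
# with delta_omega_res = omega_res / Q.
c = 299792458.0        # speed of light, m/s
lam = 1550e-9          # nominal resonance wavelength, m
Q = 550                # highest quality factor reported in this work

omega_res = 2 * math.pi * c / lam    # resonance angular frequency, rad/s
delta_omega = omega_res / Q          # FWHM linewidth, rad/s
t_int = 2 * math.pi / delta_omega    # interaction time, s (a few ps)
print(f"t_int = {t_int * 1e12:.2f} ps")
```

An interaction time of a few picoseconds in a film only hundreds of nanometers thick is precisely the resonant enhancement that the quasi-BIC modes provide.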
However, recent breakthroughs in engineering of high-Q plasmonic resonators~\cite{Bin-Alam2021} or Mie resonators~\cite{Koshelev2021} from silicon nanoantennas~\cite{Lawrence2020} or bound states and quasi-bound states in the continuum~(quasi-BICs)~\cite{PhysRevLett.121.193903} have showcased compelling free-space candidates that now routinely reach quality factors on the order of a few hundred to a few thousand, albeit in electronically passive structures that do not necessitate integration with microwave metallic waveguides that can introduce significant losses and lower the performance.
In this work, we harness quasi-BIC resonances for hybrid silicon-organic electro-optic modulators that feature a small footprint and low dimensions (as illustrated in Fig.~\ref{fig:FigVision}~b) and that preserve a Q-factor up to 550 even when homogeneously integrated with high-performance electro-optic molecules and interdigitated driving electrodes. A highly efficient electro-optic transduction is made possible by state-of-the-art $\chi^{(2)}$ organic molecules JRD1 in polymethylmethacrylate~(PMMA)~\cite{doi:10.1063/1.4884829} that are low-loss and are spatially located within the high-field areas of the optical nearfield. By judicious three-dimensional engineering, we incorporate metallic coplanar waveguides that provide GHz-speed driving fields. In short, the physics behind quasi-BICs relies on confined modes that can exist inside a continuum. Our geometry~(discussed in detail in~\cite{PhysRevLett.121.193903}) explores symmetry breaking as a means to influence the linewidth of the optical modes. Finally, we benchmark the performance of quasi-BIC modes for free-space transduction against guided mode resonances~(GMR) that can arise in similar nanostructures. GMRs appear due to the scattering of incident light by the silicon pillars into grating orders that correspond to the propagation vector of guided modes.
\section{Results}
An array of elliptical silicon resonators is patterned on a quartz substrate and gold interdigitated electrodes are deposited around the resonators and then covered by the high-quality active organic layer. The embedded array has a sub-wavelength thickness and operates in a transmission geometry where an optical field is normally incident from free-space. Fabricated devices are shown in Fig.~\ref{fig:FigDevice} and the fabrication protocol is discussed in the methods and sketched in Supplementary Fig.~S1. The scanning electron micrographs~(SEM) of Fig.~\ref{fig:FigDevice}~a, c and d show the array of silicon resonators prior to and after the deposition of the metallic electrodes. The layer of organic electro-optic molecules consists of JRD1:PMMA of $50\%$wt, has a lower refractive index than silicon ($\tilde n = n+ik$, with $n = 1.67$ and $k = 5\times 10^{-5}$ at $\lambda_{res}=1550~nm$) and is not shown here. Wavelength-, voltage- and concentration-dependent properties of the active layer are extensively reported in reference~\cite{Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021} and its associated supplementary information. By choice of the exact geometrical parameters of one unit cell of the array illustrated in the insets of Fig.~\ref{fig:FigDevice}~a~(TV = top view) and d~(SV = side view), the silicon nanostructures can be engineered to exhibit quasi-BIC and guided mode resonances in the C- and L- telecom bands, as shown in Fig.~\ref{fig:FigDevice}~b. While their linewidth is clearly very distinct, both types of optical modes are localized mostly in the layer of organic molecules and outside the high-index silicon material, as demonstrated by their field profiles in Fig.~\ref{fig:FigDevice}~e and f, where the arrows denote the orientation of the electric field in the plane A of the resonators. 
Moreover, we chose to pattern the silicon resonators of height $h_{Si} = 200~$nm on top of an elliptical pedestal from silicon dioxide of height $h_{SiO_2}=$200~or~300~nm, as visible from both the SEM figures and the cross-section of one unit cell shown in Fig.~\ref{fig:FigDevice}~b. This step is essential to minimize the overlap and thereby the losses of the optical field with the metallic electrodes. Simulated cross-sections of the optical field are displayed in Supplementary Fig.~S2 to demonstrate its localization in the near-field of the silicon resonators.
We note a particular feature of the two modes considered here. While both resonances are excited with x-polarized light, the optical nearfield of the resonators is mainly z-polarized for the quasi-BIC mode and x-polarized for the GMR mode. This fact explains our choice of electrode orientation that is different for the two resonances: for the quasi-BIC structure, the electrodes are parallel to the x-axis, while for the GMR structure, they are parallel to the z-axis. This orientation maximizes the alignment of the optical nearfield parallel to the applied RF-field, oriented perpendicularly to the electrodes, and allows us to exploit the $r_{33}$ coefficient of the electro-optic tensor of the JRD1:PMMA layer. We note that in the case of the organic layer used here, the orientation of the electro-optic tensor with respect to the geometrical coordinate system of the sample is established post-fabrication, by electric field poling, a procedure during which the molecules orient along a DC electric field~\cite{Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021, Heni2017a,Xu2020} that is applied via the gold electrodes. By definition, in the organic layer utilized here, $r_{33}$ corresponds to the direction of the poling field. We provide in Supplementary Fig.~S2 electrostatic simulations of the poling fields for the two geometries. The orientation of the poling fields and their relative strength is indicative of the orientation and level of alignment of the organic electro-optic molecules. 
Since, in our case, the entire array is poled at once by the interdigitated electrodes, the electro-optic coefficient $r_{33}$ alternates in sign from one electrode period to the next as illustrated by the green and red areas in Fig.~\ref{fig:FigDevice}~c~-~d, yielding an overall in-plane periodically poled JRD1:PMMA film with a typical $r_{33} = 100$~pm/V, as was previously demonstrated and characterized in detail in reference~\cite{Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021}. As a result, this particular structure allows us to maximize the overlap factor $\Gamma_c$ for both modes.
\begin{figure}[htp]
\centering
\includegraphics[width=16cm]{OverviewHybridSiliconOrganicSLM.pdf}
\caption{\textcolor{black}{\textbf{Hybrid silicon-organic free-space electro-optic modulators based on Mie resonances for C- and L-bands.} \textbf{a,} A single electro-optic modulator is made from a rectangular array of silicon nanoresonators patterned onto a quartz substrate on top of a silicon dioxide pedestal, here shown prior to the deposition of the metallic electrodes and the organic electro-optic layer~(green) which is applied post-fabrication by spin-coating and covers the nanoresonators. Inset shows the top view~(TV) of one single unit cell. Scale bar: 500~nm. This geometry can sustain two distinct types of resonances, quasi-bound states in the continuum~(quasi-BICs) and guided mode resonances~(GMR), shown in \textbf{b,}, with corresponding geometries as in \textbf{c,} and \textbf{d,}. Inset shows the side-view~(SV) of one unit cell. The two types of resonances are excited by an incident beam that is x-polarized and have distinct distributions of the near-fields of the resonators, shown in \textbf{e,} and \textbf{f,} (cross-section A of SV). While the quasi-BIC mode is circulating in the nearfield, and has a dominant component along the z-axis~(hence perpendicular to the excitation polarization), the guided mode resonance is predominantly x-polarized~(as the excitation). Given this vectorial orientation of the optical fields in the near-field of the resonators, metallic electrodes are deposited in between each row of ellipses and are oriented along the x-axis for the quasi-BICs and along the z-axis for the GMR, as shown in \textbf{c,} and \textbf{d,}~(scale bars upper picture = $5~\mu$m and lower pictures = $1~\mu$m). The interdigitated electrodes serve for the activation of the JRD1:PMMA layer by electric field poling and for the application of DC and RF tuning fields. Black arrows indicate poling direction.}}
\label{fig:FigDevice}
\end{figure}
In the following, we first present experimental tuning properties of the hybrid silicon-organic free-space modulators when a DC voltage $V_{eo}$ is applied to the interdigitated electrodes uniformly across the entire array ($f_{RF} = 0$). In Fig.~\ref{fig:FigResults}~a~-~f we report the experimental results for operation on a quasi-BIC mode engineered in the C- or L- telecom band with geometrical dimensions characterized by a scaling parameter $\alpha$, as provided in Methods. A particular feature of the quasi-BIC mode is that its Q-factor is highly dependent on the asymmetry angle $\theta$: in the absence of material losses, the quality factor increases towards infinity in the limit of $\theta = 0$. We report in Supplementary Fig.~S3 the simulated dependence of the resonance on $\theta$ for the fabricated structures. Noting that in the presence of losses, at high Q-factors the resonance depth also decreases~(eventually leading to less intensity modulation), we choose $\theta = 15^{\circ}$ and $25^{\circ}$. We confirm by experiment the expected increase of quality factor when reducing the angle from $\theta = 25^{\circ}$~($Q = 212$ and $Q = 320$ for $\alpha = 0.7$ and $\alpha = 0.725$, respectively) to $\theta = 15^{\circ}$~($Q = 357$ and $Q = 557$ for $\alpha = 0.7$ and $\alpha = 0.725$, respectively), see Fig.~\ref{fig:FigResults}~g. The measurements were performed on structures similar to Fig.~\ref{fig:FigDevice}~c prior to electric field poling of the devices. Furthermore, the measured red-shift of the resonance with decreasing $\theta$ is well reproduced by our simulations. In Fig.~\ref{fig:FigResults}~c we report the DC tuning characteristics of the modulator based on quasi-BIC modes as a function of applied voltage $V_{eo}$ of the structure with $\alpha = 0.7$ and $\theta = 25^{\circ}$, after the non-linearity of the electro-optic molecules is established by electric field poling. 
First, we observe at $V_{eo} = 0~$V a shift of the resonant wavelength by 12~nm in the poled sample~($\lambda_{res} = 1540~$nm) compared to the unpoled sample~($\lambda_{res} = 1528~$nm). The experimental Q-factor of the poled sample is $Q=276$. Second, we find that under an applied voltage change from $V_{eo}=100~$V to $V_{eo}=-100~$V, the resonance shifts linearly with applied voltage, as expected (according to $\frac{\Delta \lambda_{res} }{\lambda_{res}}= -\frac{1}{2}n_{mat}^2r_{33}E\Gamma_c$, with $E=\frac{V_{eo}}{L}$) up to a maximum of $\Delta \lambda_{max}=11$~nm, which suffices to satisfy $\Delta \omega_{eo, 100~V} - \Delta \omega_{eo, -100~V} \sim 2\times \delta \omega_{res} \geq \delta \omega_{res} $~(see inset). We introduce the switching voltage $V_{switch}$ as a figure of merit that quantifies the voltage that is necessary to switch the transmission between its maximal and its minimal value~(which corresponds conceptually to the widely used $V_{\pi}$ in modulators similar to the integrated circuit shown in Fig.~\ref{fig:FigVision}~a). We find a $V_{switch} = 100~V$ to be sufficient to tune the absolute intensity transmitted through the sample at a chosen operation wavelength $\lambda_{OP}$ between its minimum at $T_{min}= 30\%$ and its maximum at $T_{max} = 90\%$~(shown also in Fig.~\ref{fig:FigResults}~h). This corresponds to a maximal modulation depth $\eta_{max} = \frac{\Delta T}{T_0} = 100\%$, where $\Delta T=T_{max}-T_{min}$ is the total modulation change and $T_0 = \frac{T_{max}+T_{min}}{2}$. In a second example shown in Fig.~\ref{fig:FigResults}~f, we choose to operate on the narrower resonance present when $\theta = 15^{\circ}$ and $\alpha = 0.725$. In this case, we report a maximal tuning of the resonance by $\Delta \lambda_{max}=10$~nm, which corresponds to a $\Delta \omega_{eo, 100~V} - \Delta \omega_{eo, -100~V} \sim 3.46\times \delta \omega_{res} \geq \delta \omega_{res} $. 
Also in this case, at $V_{eo} = 0~$V we observe a shift of the resonant wavelength by 11.6~nm of the poled sample~($\lambda_{res} = 1594~$nm) compared to the unpoled sample~($\lambda_{res} = 1582.4~$nm). The Q-factor of the poled sample is $Q=550$. Importantly, in this case, because of the higher Q-factor, a voltage change of only $V_{eo}=60~$V or $V_{eo}=-60~$V suffices to fully tune through the entire resonance~(see inset). Consequently, we find that a voltage $V_{switch} = 60~V$ is sufficient to switch the absolute intensity transmitted through the sample between its minimum at $60\%$ and its maximum at $100\%$. In this case, $\eta_{max} = 50\%$, see Fig.~\ref{fig:FigResults}~h.
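The quoted figures for the two quasi-BIC devices can be cross-checked numerically. This is a consistency sketch only; it approximates the FWHM linewidth as $\lambda_{res}/Q$.

```python
# Cross-check of the two quasi-BIC devices: linewidth from Q,
# tuning-to-linewidth ratio, and modulation depth eta = Delta T / T0.
def check(lam_res_nm, Q, dlam_max_nm, T_min, T_max):
    delta_lam = lam_res_nm / Q                      # FWHM linewidth, nm
    ratio = dlam_max_nm / delta_lam                 # tuning range / linewidth
    eta = (T_max - T_min) / ((T_max + T_min) / 2)   # modulation depth
    return delta_lam, ratio, eta

# Device 1: Q = 276 at 1540 nm, 11 nm tuning, transmission between 30% and 90%.
_, ratio1, eta1 = check(1540.0, 276, 11.0, 0.30, 0.90)
# Device 2: Q = 550 at 1594 nm, 10 nm tuning, transmission between 60% and 100%.
_, ratio2, eta2 = check(1594.0, 550, 10.0, 0.60, 1.00)
print(ratio1, eta1, ratio2, eta2)
```

The computed ratios reproduce the quoted $\sim 2\times$ and $\sim 3.46\times$ values, and the modulation depths the quoted $100\%$ and $50\%$.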
\begin{figure}[htp]
\centering
\includegraphics[width=14cm]{Results_4.pdf}
\caption{\textcolor{black}{\textbf{DC tuning properties of Mie-based modulators.} \textbf{a,-b,} and \textbf{d,-e,} Experimental transmission results are compared to simulated transmission curves for various geometries of electro-optic modulators based on quasi-BIC structures. We find, as expected, by experiment and simulation, that the geometrical scaling factor $\alpha$ shifts the resonances within the telecom band. In addition, the asymmetry angle $\theta$ influences the linewidth of the resonances. \textbf{c,} and \textbf{f,} are DC tuning maps of the electro-optic modulators for $(\alpha , \theta) = (0.7, 25^\circ)$ and $(\alpha , \theta) = (0.725, 15^\circ)$, respectively. Insets show three distinct curves at 0~V, and $V_{switch} = \pm100~$V and $\pm60~$V, respectively. \textbf{g,} Experimentally extracted quality factors for two distinct heights of the silicon dioxide pedestal are compared and we find that an increase in height from 200~nm to 300~nm leads to an increase of the quality factor. Dashed circles represent quasi-BIC structures with $\theta = 15^\circ$ and full contour circles represent $\theta = 25^\circ$. The circles for $h_{SiO2} = 300~$nm denote the measurements as labeled to the right of the circles. The circles for $h_{SiO2} = 200~$nm represent measurements of equivalent structures with $h_{SiO2} = 200~$nm. \textbf{h,} Detailed voltage-dependent transmission curves are reported for two exemplary operating wavelengths for the case of the two device geometries discussed in \textbf{a,} and \textbf{d,}. Full switching between ON-OFF-ON states of transmission is achieved for both geometries. \textbf{i,}-\textbf{j,} In contrast, GMRs in the same structure have much broader linewidths, demonstrated by experiment and simulation. \textbf{k,} Their resonance wavelength can be tuned over $\Delta \lambda_{res} = 20~$nm. Q-factors and asymmetry angle $\theta$ are indicated for all colormaps.}}
\label{fig:FigResults}
\end{figure}
To contrast these two examples, we now analyze in Fig.~\ref{fig:FigResults}~i~-~k the DC tuning behavior when we operate the elliptical resonators on the GMR modes introduced in Fig.~\ref{fig:FigDevice}~d~and~f with geometrical dimensions as provided in the Methods. We find from both experiments and simulations a much broader resonance with an experimental $Q = 37$, where a voltage change of $V_{eo}=100~$V to $V_{eo}=-100~$V tunes the resonant wavelength over a maximal range of $\Delta \lambda_{max} = 20~$nm. This value is approximately twice as large as what we find for the quasi-BIC modes, and can be attributed to a more efficient interaction enabled by the $r_{33}$ electro-optic coefficient due to higher alignment of the nearfield of the nanoresonators with the tuning field (see side-by-side mode profiles and poling/tuning field simulations in Fig.~S2). However, the broad linewidth of the resonance would require a $V_{switch}$ larger than 100~V, thereby demonstrating that GMR can be utilized in scenarios where a large tuning of broad resonances is preferred over a large intensity modulation, as may be the case of modulating broadband emission. Notably, the achieved tuning is approximately twice as large as previous reports that investigated GMR inside hybrid organic-metallic structures from JRD1:PMMA which however did not make use of sub-wavelength resonators~\cite{Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021}.
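A rough estimate makes explicit why the GMR would need $V_{switch} > 100$~V. This sketch assumes the measured linear tuning extrapolates beyond $\pm 100$~V and approximates the linewidth as $\lambda_{res}/Q$ at a nominal 1550~nm.

```python
# GMR switching estimate: with Q = 37 the linewidth exceeds the
# 20 nm tuning obtained over the full 200 V swing.
lam_res = 1550.0                  # nm, nominal GMR wavelength (assumed)
Q = 37                            # measured quality factor
delta_lam = lam_res / Q           # FWHM linewidth, nm (~ 42 nm)
slope = 20.0 / 200.0              # nm per volt: 20 nm over the 200 V swing
V_switch_est = delta_lam / slope  # swing needed to shift by one linewidth
print(delta_lam, V_switch_est)    # ~ 42 nm, ~ 420 V
```

Shifting the GMR by one full linewidth would thus require a voltage swing several times larger than the $\pm 100$~V applied here.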
Finally, we analyze the GHz-speed properties of the Mie modulators in Fig.~\ref{fig:RFmeasGHz}. A photograph of several fabricated devices is provided in Fig.~\ref{fig:RFmeasGHz}~a and displays two sets of devices: Mie modulators fitted with interdigitated tuning electrodes that are connected to GHz-speed coplanar waveguides (CPW) and test devices which consist only of the CPW~(no metasurface and no interdigitated electrodes). We first characterize these two structures electrically using a vector network analyzer~(VNA) that outputs the scattering matrix, including the amount of transmitted RF power, characterized by $S_{21, dB}$, using the setup shown in Fig.~\ref{fig:RFmeasGHz}~c. We find a -6~dB cut-off of the Mie modulators at $f_{-6~dB} = 4.2~$GHz, after which a roll-over of -20~dB/decade is observed, which agrees well with the RC-time constant of the interdigitated electrode array of the Mie modulators~(see Methods). After the roll-over, the voltage across the modulators drops towards zero. This is in contrast to the test CPW which does not feature such decay. RF cable losses are deducted from the S21 response. Then, we characterize the GHz-speed electro-optic tuning properties of the Mie modulators around their resonance with optical transmission characteristics as shown in Fig.~\ref{fig:RFmeasGHz}~d. We apply a drive field $V_{RF} = V_{eo}\times \sin(2\pi f_{RF}t)$. We use a double modulation scheme in combination with a local oscillator and lock-in detection to characterize the sample up to 5~GHz, above the lock-in bandwidth. Details of the experimental setup are given in the Methods and photographs of the lab setting in Supplementary Fig.~S4. 
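The observed $-20$~dB/decade roll-over is what a first-order RC low-pass response produces. The sketch below fixes the corner frequency from the measured $-6$~dB point rather than from the actual R and C values of the device, which are not reproduced here.

```python
import math

# First-order RC low-pass response: |H(f)| = 1 / sqrt(1 + (f/f_c)^2).
def H_dB(f, f_c):
    return 20 * math.log10(1 / math.sqrt(1 + (f / f_c) ** 2))

# Fix f_c from the measured -6 dB point: |H| = 10**(-6/20)
# implies f / f_c = sqrt(10**0.6 - 1).
f_6dB = 4.2e9
f_c = f_6dB / math.sqrt(10 ** 0.6 - 1)

# Well above cut-off the response falls by ~20 dB per decade.
slope = H_dB(100 * f_c, f_c) - H_dB(10 * f_c, f_c)
print(f_c / 1e9, slope)
```

The asymptotic slope of one decade above versus two decades above cut-off indeed approaches $-20$~dB/decade, as measured for the Mie modulators.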
In Fig.~\ref{fig:RFmeasGHz}~e, we first report the peak electro-optic modulation $\eta_{peak, dB}$ as a function of frequency $f_{RF}$~(we note that here the peak modulation amplitude has been normalized to its value at 100~MHz $\eta_{peak}(f_{RF} = 100~MHz)$ and computed using $\eta_{peak, dB} = 10\log_{10}\frac{\eta_{peak}(f_{RF})}{\eta_{peak}(f_{RF} = 100~MHz)}$). We find that the sample electro-optic bandwidth is $f_{EO,-3~dB} =3$~GHz, and that at $f_{RF}=5$~GHz, the modulation amplitude is approximately 7.75~dB lower than the maximum. The discrepancy between the electronic bandwidth and the electro-optic bandwidth can be ascribed to attenuation in the cable from the photodiode to the lock-in amplifier, passing several stages of mixers, which was not accounted for in this experiment. In the inset of Fig.~\ref{fig:RFmeasGHz}~e, we show wavelength-resolved EO modulation for three distinct modulation frequencies~($f_{RF}=~$1.4~GHz, 2.5~GHz and 4.3~GHz at an RF power of 27~dBm at the source). For each wavelength, we normalized the absolute electro-optic modulation to the transmission through the unbiased sample. We find, as expected, that the modulation strength peaks on one side of the asymmetric resonance, more specifically at the wavelength $\lambda$ with the highest slope in the transmission and that it changes sign at the resonance wavelength $\lambda_{res}$. Moreover, we note that a modulation can be measured beyond the 3-dB cut-off, e.g. at $f_{RF}=4.3$~GHz. In Fig.~\ref{fig:RFmeasGHz}~f we investigate the dependence of the modulation amplitude on the drive power at frequencies 1.5~GHz and 5~GHz and observe an approximately linear behavior as expected.
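The normalization used for the frequency-resolved modulation data can be written as a one-line helper. With this $10\log_{10}$ convention, the $-3$~dB point corresponds to the peak modulation falling to half of its 100~MHz reference value.

```python
import math

# Normalized electro-optic modulation:
# eta_dB(f) = 10 * log10( eta_peak(f) / eta_peak(100 MHz) ).
def eta_dB(eta_f, eta_ref):
    return 10 * math.log10(eta_f / eta_ref)

# A drop to 50% of the low-frequency modulation gives ~ -3 dB.
print(eta_dB(0.5, 1.0))   # ~ -3.01 dB
```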
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{Figure5_RFmeasurements_4.pdf}
\caption{\textcolor{black}{\textbf{GHz-speed properties of the Mie modulators.} \textbf{a,} Picture of a fabricated chip shows Mie modulators that are integrated with GHz coplanar waveguides~(CPW). Also visible are test CPW. \textbf{b,} Electronic scattering parameters $S_{21}$ of Mie modulators are compared to test CPW. The $S_{21}$ are measured using a vector network analyzer~(VNA) connected to the sample by high-frequency cables and high-speed microwave GSG~(ground-source-ground) probes (one ground floating) and exhibits a cut-off of $f_{-6~dB} = 4.2~$GHz owing to the intrinsic RC bandwidth. RF cable losses are deducted from the $S_{21}$ response. Beyond 4.2~GHz, only the Mie modulators exhibit a decay of -20~dB/decade (much less steep roll-over for the test CPW). \textbf{c,} Optoelectronic experimental setup. The electronic characteristics are measured in a transmission configuration using the VNA, and the wavelength-resolved electro-optical~(EO) modulation is measured using a lock-in amplifier. A double modulation scheme combined with a local oscillator~(LO) is used, where the laser emission is modulated at the source and the Mie modulators~(details in the methods). \textbf{d,} Resonance of sample ($h_{SiO2} = 200~$nm, $\theta = 25^{\circ}$). \textbf{e,} Peak electro-optic modulation amplitude for frequencies $f_{RF}$ up to 5 GHz. We find a 3-dB electro-optic bandwidth of $f_{EO,-3~dB} =3$~GHz. Insets: Wavelength-resolved modulation strength for several values of $f_{RF}$, the peak values have been utilized to plot the data in \textbf{e,}. \textbf{f,} Peak electro-optic modulation amplitude for different modulation voltages~(reported as power in dBm), at 1.5 and 5~GHz, the latter well beyond the electro-optic bandwidth.}}
\label{fig:RFmeasGHz}
\end{figure}
\section{Discussion and outlook}
Our work using hybrid silicon-organic high-Q metasurfaces and microwave-compatible actuating electrodes is the first step towards a new class of free-space electro-optic modulators that leverage the unique design flexibility of sub-wavelength resonators covered by an active electro-optic layer to achieve efficient GHz-speed tuning. Already at DC voltages, the use of quasi-BIC modes allows us to achieve a tuning over 31$\%$ of the C-telecom band, corresponding to 11~nm, while GMRs achieve a tuning up to 20~nm, a factor of 2 higher than previous reports that do not use sub-wavelength resonators~\cite{Benea-ChelmusI.-C.MeretskaM.ElderL.D.TamagnoneM.DaltonR.L.Capasso2021}. Furthermore, the demonstrated switching speeds can enable time-dependent and high-speed on-demand control of light, for example for vortex beam generation~\cite{Huang2020,Wang2020} or for time-resolved microscopy and sensing~\cite{Tittl1105}. By reducing the in-plane footprint of the devices from the current approximate area of $330\times330~\mu$m$^2$ to potentially $100\times100~\mu$m$^2$, the intrinsic RC time constant of the device would be reduced by a factor of 10, potentially allowing operation up to 30~GHz, a key advance for the use of such devices as optical links in wireless optical communications. We note here that, recently, bound states in the continuum have been demonstrated even for individual resonators~\cite{Koshelev2020} rather than resonator arrays, thereby suggesting that a further increase in bandwidth beyond tens of GHz might become possible. With the added ability to change the parameters~(angle $\theta$, size) of each single pair of elliptical resonators, the wavefront can be locally affected to achieve an overall functionality of the entire metasurface~\cite{Gigli:21}, for example spatial multiplexing. Importantly, the geometry we propose can be generalized to a much larger variety of Mie resonances, e.g. 
to achieve phase-only modulation~\cite{Staude2013} or polarisation modulation. Finally, the achieved relative bandwidth tuning of $0.7\%$ in combination with the high Q-factor and high-speed characteristics may allow the investigation of emergent optical phenomena in the area of time-varying~\cite{Shaltout2015} and spatio-temporal~\cite{Wang:20, Shaltouteaat3100} metasurfaces with electro-optic materials~\cite{Wang:20}, and provide an alternative path to magnet-free isolators beyond optomechanical~\cite{Ruesink2016} or piezoelectric~\cite{Tian2021} actuation. Lastly, the device architecture demonstrated here can be used for other nonlinear effects that require highly confined fields in combination with high-performance nonlinear materials, such as second harmonic generation~\cite{Koshelev2020, Anthur2020}, which has so far not been demonstrated in hybrid silicon-organic systems of this kind.
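The footprint-scaling argument made above for the RC time constant can be checked with a back-of-the-envelope calculation. It assumes that the capacitance, and hence the RC constant, scales with the electrode area.

```python
# RC time constant ~ capacitance ~ electrode area, so shrinking the
# footprint from 330x330 um^2 to 100x100 um^2 gains roughly a factor 10.
area_ratio = (330.0 / 100.0) ** 2    # ~ 10.9
f_eo = 3.0e9                          # measured 3 dB electro-optic bandwidth, Hz
f_scaled = f_eo * area_ratio          # projected bandwidth, above 30 GHz
print(area_ratio, f_scaled / 1e9)
```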
\textbf{Acknowledgments}
The authors acknowledge insightful discussions with Christopher Bonzon and Michele Tamagnone. I.-C. Benea-Chelmus acknowledges support through the Swiss National Science Foundation for the postdoctoral fellowship P2EZP2.181935 and an independent research grant from the Hans Eggenberger foundation. M. L. Meretska is supported by NWO Rubicon Grant 019.173EN.010, by the Dutch Funding Agency NWO and by the Air Force Office of Scientific Research under award number FA9550-19-1-0352. D.L. Elder and L. R. Dalton acknowledge support from the Air Force Office of Scientific Research (FA9550-19-1-0069). This work was performed in part at the Harvard University Center for Nanoscale Systems (CNS); a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF award no. ECCS-2025158. Additionally, financial support from the Office of Naval Research (ONR) MURI program, under grant no. N00014-20-1-2450 is acknowledged. The electro-optic molecules were synthesized in the Chemistry Department at the University of Washington.
\textbf{Author contribution} I.-C.B.-C. conceived, designed and implemented the experiments. S.M. assisted with the measurements. I.-C.B.-C. and M.M. fabricated the samples. I.-C.B.-C. did the theoretical derivations and the simulations. D.E. and L.D developed the electro-optic molecules and assisted with the poling of the devices. I.-C.B.-C. and A.S. built the microwave-optical characterization setup. D.K. and A.S. helped with the high-frequency measurements. I.-C.B.-C., M.M., D.K., A.S. and F.C. analysed the data. I.-C.B.-C. wrote the manuscript with feedback from all authors.
\textbf{Competing interests} A provisional patent application with U.S. Serial No.: 63/148,595 has been filed on the subject of this work by the President and Fellows of Harvard College.
\textbf{Corresponding author} Correspondence to Ileana-Cristina Benea-Chelmus (cristinabenea@g.harvard.edu) and Federico Capasso (capasso@seas.harvard.edu).
\textbf{Data availability} The main dataset contained within this paper will be made available in the Zenodo database prior to publication.
\textbf{Code availability} The code used to plot the data within this paper will be made available in the Zenodo database prior to publication.
\section{Introduction}
Topological materials are, nowadays, a rich and well developed research field in condensed-matter physics. The study of two-dimensional (2D) topological systems started in the early 1980s, with the experimental discovery of the integer quantum Hall effect in GaAs \cite{vonKlitzing}.
Thereafter, the deep relation between this novel phase and the topological invariant induced by a non-trivial Berry phase was theoretically unveiled \cite{KTNN}. An essential feature of these quantum states is that time-reversal symmetry is broken in the bulk. However, the recent discovery of 2D topological insulators (TIs) \cite{Kane-Mele,Hasan,Bernevig3,Molenkamp1} has opened the way to the exploration and classification of a vast number of novel materials, also in higher dimensions. In 3D, analogues of the 2D TIs were first theoretically formulated \cite{Qi} and then experimentally discovered \cite{Molenkamp,Zhang}.
These systems support surface gapless modes, protected by the non-trivial topological invariant of the gapped bulk.
Although the free-fermion topological phases have been completely classified for all dimensions in terms of their symmetries \cite{Altland, Ryu}, much less is known about the complete classification and characterization of interacting systems, where a variety of quantum phenomena and quasi-particles emerge in the low-energy regime.
This is the case of anyons in fractional quantum Hall states \cite{Laughlin,Jain,Fradkin} and fractional topological insulators \cite{Goerbig,Murthy-Shankar}, which carry fractional electric charge and spin, Cooper pairs (bound states of spin-up and spin-down electrons) in topological superconductors \cite{Read}, and excitons, i.e. particle-hole bound states in bilayer systems \cite{McDonald, Khveshchenko, Joglekar, Budich, Levitov}.
At the microscopic level, Hubbard-like Hamiltonians have been employed in the study of exciton condensation in monolayer \cite{Gamayun} and bilayer graphene \cite{Franz1}, bilayer quantum Hall systems \cite{McDonald,MSmith0, McDonald2} and in 3D thin-film TIs in the class AII \cite{Franz2,Xu,Sokolik}. In the latter case, the electron-hole pairs residing on the surface states can condense to form a topological exciton condensate. This kind of condensation can be seen as an electronic superfluid with dissipationless electronic transport and could enable ultra-low-power and energy-efficient devices, as already proposed in Ref.~\cite{Zaanen}. At a theoretical level, mean-field theory studies show the presence of an excitonic gap induced by the short-range part of the Coulomb interaction between the surface states \cite{Franz2}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.35\textwidth]{thinfilms.pdf}
\caption{The surfaces of a 3D TI separated by a distance $d$.}
\label{System}
\end{figure}
In this paper, we propose a precise and self-consistent derivation of the gauge theory describing the short-range interaction in thin films of TIs. In these materials, the free-surface states are defined in terms of massless Dirac fermions and the corresponding interactions are encoded in quantum electrodynamics (QED). Our theoretical model is based on the fact that the massless Dirac fermions are confined on the 2D surfaces, while the virtual photons that mediate their quantum electromagnetic interactions are free to propagate in the 3D surrounding space. This approach has already been successfully employed in the study of several quantum systems, such as graphene \cite{MSmith1,MSmith3}, transition-metal dichalcogenides \cite{MSmith2}, and the edge modes of 2D TIs \cite{Menezes}. The local part of our effective-field theory is given by a generalized 2+1-D Thirring model, which has important applications in both condensed-matter and particle physics \cite{Herbut, Palumbo1, Palumbo2, Gomes}, and represents one of the main results of this paper. Importantly, our approach fixes uniquely the value of its coupling constant, which turns out to be proportional to the electric charge and the width of our thin-film TI.
Moreover, while on the one hand our work reproduces the effective local Hubbard-like model proposed in Ref.~\cite{Franz2}, on the other hand it does not require any mean-field approximation for the identification of the exciton mass gap. By solving the Schwinger-Dyson equation \cite{Schwinger} for the 2+1-D effective field theory in the strong-coupling regime, we show that the mass generation in the exciton condensation is induced dynamically. The dynamical mass generation is due to the breaking of the chiral symmetry \cite{Itoh, Kondo, Gies, Alves}, and represents a non-perturbative phenomenon beyond standard mean-field theory.
\section{The model}
We start our analysis with the description of two gapless surface states in 3D thin-film TIs in class AII. They support an odd number of topologically protected helical massless Dirac fermions, which are described by a 2+1-D Dirac theory. We then consider the interactions in and between the two surfaces by including a quantum dynamical U(1) gauge field coupled to the Dirac fermions. This is encoded in the standard QED by introducing a minimal coupling between the gauge potential $A_{\mu}$ and the fermionic current $J_{\mu}$. Importantly, while the massless fermions are confined on the surfaces of the material, the virtual photons that carry the electromagnetic interaction are free to propagate in the 3D space. This is the crucial assumption that will allow us to derive an effective 2+1-D projected theory. For simplicity, we consider a single Dirac fermion per surface, such that our system is described by the following QED-like action
\begin{eqnarray}
S=i\hbar\int d^{3}r\,\PC{\bar{\psi}_{{\rm t}}\bar{\sigma}^{\mu}\partial_{\mu}\psi_{{\rm t}}+\bar{\psi}_{{\rm b}}\sigma^{\mu}\partial_{\mu}\psi_{{\rm b}}}\nonumber \\
-\int d^{4}r\, \left(\frac{\varepsilon_{0}c}{4}\,F_{\alpha\beta}F^{\alpha\beta} + eJ_{3+1}^{\alpha}A_{\alpha}\right), \label{QED4}
\end{eqnarray}
where $\psi_{{\rm t}}$ and $\psi_{{\rm b}}$ denote fermionic fields with $\bar{\psi}_{i}=\psi^{\dagger}_{i}\sigma^{0}$, which are constrained to propagate on the \textit{top} (t) and \textit{bottom} (b) surfaces of the TI, respectively. Here, $\sigma^{\mu}$ are $2\times 2$ Pauli matrices with $\mu=0,1,2$, and we adopt $\bar{\sigma}^{\mu}=-\sigma^{\mu}$, meaning that the two fermions have opposite helicity. The differential elements are given by $d^{3}r=v\,dx\, dy\, dt$ and $d^{4}r=c\, dx\, dy\, dz\, dt$, with $v$ and $c$ the Fermi velocity and the speed of light, respectively. The coupling constant $e$ between the matter current and the gauge field is the electric charge carried by each fermion. $\varepsilon_{0}$ is the vacuum dielectric constant, $F_{\alpha\beta}=\partial_{\alpha}A_{\beta}-\partial_{\beta}A_{\alpha}$ is the field-strength tensor, $J^{\alpha}_{3+1}=j^{\alpha}_{{\rm t}} +j^{\alpha}_{{\rm b}} =\bar{\psi}_{{\rm t}}\sigma^{\alpha}\psi_{{\rm t}}+\bar{\psi}_{{\rm b}}\sigma^{\alpha}\psi_{{\rm b}}$, and $\alpha,\beta=0,1,2,3$.
We will focus on the interaction between the two fermionic species $\psi_{{\rm t},{\rm b}}$, which in our context represent quasi-particles and quasi-holes confined on two different surfaces. As illustrated in Fig.~\ref{System}, the surfaces of the 3D TI are separated by a distance $d$, which is the width of the thin-film, and we describe the surface Dirac fermions by imposing the following constraints on the matter current
\begin{eqnarray}
j^{\alpha}_{{\rm t},{\rm b}}(t,x,y,z) =
\begin{cases}
j^{\mu}_{{\rm t}}(t,x,y)\delta\PC{z-d/2}, \\
j^{\mu}_{{\rm b}}(t,x,y)\delta\PC{z+d/2}.
\end{cases}\label{c1}
\end{eqnarray}
Because the fermions interact with a dynamical quantum electromagnetic field, we can integrate out the gauge field to obtain the effective non-local interaction term
\begin{eqnarray}
S^{{\rm eff}}_{{\rm int}} = -\frac{e^{2}}{2}\int d^{4}r d^{4}r'J^{\alpha}_{3+1}(r)\frac{1}{(-\Box)}J_{\alpha}^{3+1}(r'). \label{1}
\end{eqnarray}
By imposing the constraints given in Eq.~(\ref{c1}) we are effectively describing the system as a single surface living in the middle of the thin-film. Hence, Eq.~(\ref{1}) becomes
\begin{eqnarray}
S^{{\rm eff}}_{{\rm int}} &=& -\frac{e^{2}}{2}\int d^{3}r d^{3}r' j^{\mu}_{\kappa}(r)V_{\kappa\rho}(r-r')j_{\mu}^{\rho}(r'), \label{2}
\end{eqnarray}
where $V_{\kappa\rho}(r-r')=\PR{1/(-\Box)}_{\xi_{\kappa\rho}}$, $\kappa,\rho={\rm t},{\rm b}$ and $\xi_{\kappa\rho}$ represents the different values at which the Green's function has to be evaluated.
Although the system from now on may be treated as an effectively two-dimensional surface, the information about the thin-film width $d$ is carried within the projection. As is known in the literature \cite{MacDonald3,Sokolik,Xu}, the exciton condensation in thin films may only occur when the inter-surface distance $d$ is smaller than an in-plane distance $a$, i.e. $d/a<1$. We introduce this minimal in-plane distance $a$ in our model by shifting the coordinates of the quasiparticles as follows: $r \rightarrow r - a/2$ and $r' \rightarrow r' + a/2$. In this way, Eq.~(\ref{2}) becomes
\begin{eqnarray}
S^{{\rm eff}}_{{\rm int}} = -\frac{e^{2}}{2} \int d^{3}r d^{3}r' j^{\mu}_{\kappa}(r-a/2)V_{\kappa\rho}(r-r'-a)j_{\mu}^{\rho}(r'+a/2), \nonumber\\ \label{shift}
\end{eqnarray}
and now the effective interaction carries the information about the length $a$.
The explicit values of $\xi_{\kappa\rho}$ are
\begin{eqnarray}&&\xi_{{\rm tt}}: z=z'=d/2,\quad \xi_{{\rm tb}}: z=d/2 \ {\rm and}\ z'=-d/2,\nonumber \\
&&\xi_{{\rm bt}}: z'=d/2 \ {\rm and}\ z=-d/2 ,\quad \xi_{{\rm bb}}: z=z'=-d/2,\nonumber\end{eqnarray}
where, after the projection, the top and bottom components represent two different flavors in the effective middle plane.
For both $\xi_{{\rm tt}}$ and $\xi_{{\rm bb}}$, we obtain results similar to those found in Ref.~\cite{Marino}, namely
\begin{eqnarray}
\PR{\frac{1}{(-\Box)}}_{\xi_{ii}} &=& \frac{1}{2} \int \frac{d^{3}k }{(2\pi)^{3}} \frac{e^{i k\cdot(r-r'-a)}}{\sqrt{k^{2}}}\nonumber\\
&=& \frac{1}{4\pi^{2}(|r-r'-a|^{2}+a^{2})},
\end{eqnarray}
where $a$ sets a minimum distance between the quasiparticles, implying a cutoff on the momenta $k_{{\rm max}} = 1/a$.
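The closed form above can be checked numerically (this is our own sanity check, not part of the original derivation): after the angular integration, the projected propagator reduces to a radial Fourier-sine integral, and implementing the short-distance scale $a$ as an exponential damping $e^{-ak}$ of the momentum integral (one way to realize the cutoff $k_{\rm max}\sim 1/a$) reproduces $1/[4\pi^{2}(x^{2}+a^{2})]$:

```python
import numpy as np
from scipy.integrate import quad

def projected_green(x, a):
    """(1/2) (2pi)^-3 \\int d^3k e^{ik.x}/|k| with e^{-a|k|} damping.

    The angular integration leaves the radial Fourier-sine integral
    (4 pi^2 x)^-1 \\int_0^inf dk sin(kx) e^{-ak}.
    """
    val, _ = quad(lambda k: np.exp(-a * k), 0, np.inf, weight="sin", wvar=x)
    return val / (4 * np.pi**2 * x)

x, a = 1.3, 0.05
closed_form = 1.0 / (4 * np.pi**2 * (x**2 + a**2))
print(projected_green(x, a), closed_form)  # the two values agree
```

The `weight="sin"` option of SciPy's `quad` handles the oscillatory integrand over the semi-infinite interval.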
The terms $\xi_{{\rm tb}}$ and $\xi_{{\rm bt}}$ yield
\begin{eqnarray}
\PR{\frac{1}{(-\Box)}}_{\xi_{ij}} =\frac{1}{2} \int \frac{d^{3}k }{(2\pi)^{3}} \frac{e^{-d\sqrt{k^{2}}}e^{i k\cdot(r-r'-a)}}{\sqrt{k^{2}}}. \label{3}
\end{eqnarray}
Now, by considering that $d|k|<1$ \cite{MacDonald3,Sokolik,Gorbar-Miransky}, we expand the exponential $\exp(-d|k|) \approx 1 - d|k|$ and perform the integration over $k$ to find
\begin{eqnarray}
\PR{\frac{1}{(-\Box)}}_{\xi_{ij}} \approx \frac{1}{4\pi^{2}(|r-r'-a|^{2}+a^{2})}- \frac{d}{2}\delta^{3}(r-r'-a).\nonumber\\ \label{4}
\end{eqnarray}
Here, we used the approximation
\begin{eqnarray}
\int \frac{d^{3}k }{(2\pi)^{3}} e^{i k\cdot(r-r'-a)} \approx \delta^{3}(r-r'-a). \label{5}
\end{eqnarray}
We can finally summarize the results for the effective interaction $V_{\kappa \rho}$ after the projection,
\begin{eqnarray}
&&V_{{\rm tt}}=V_{{\rm bb}}= \frac{1}{4\pi^{2}|r-r'-a|^{2}}, \nonumber \\
&&V_{{\rm tb}}=V_{{\rm bt}}\approx \frac{1}{4\pi^{2}|r-r'-a|^{2}}-\frac{d}{2}\delta(r-r'-a),\nonumber
\end{eqnarray}
where we neglected terms proportional to $a^{2}\approx 0$. By plugging back the interactions above into Eq.~(\ref{shift}), we may write down $S^{{\rm eff}}_{{\rm int}}$ as a long and a short-range contribution (see Appendix A for details).
\section{Single-surface description}
The aim of this section is to describe a two-surface system in terms of a single effective surface with two species of fermions. Our 2+1-D effective action after the projection is given by
\begin{eqnarray}
S^{{\rm eff}} = i\hbar \int d^{3}r \PC{\bar{\psi}_{{\rm t}}\sigma^{\mu}\partial_{\mu}\psi_{{\rm t}}-\bar{\psi}_{{\rm b}}\sigma^{\mu}\partial_{\mu}\psi_{{\rm b}}} \nonumber\\- \frac{e^{2}}{2\varepsilon_{0}c}\int d^{3}r'\int d^{3}r\ j^{\mu}_{\kappa}\ V_{\kappa \rho}\ j_{\mu}^{\rho}.\label{S3E1}
\end{eqnarray}
where $\kappa,\rho={\rm t},{\rm b}$ represent the different surfaces. Now, we can rewrite the action (\ref{S3E1}) in terms of a single spinor $\Psi = (\psi_{{\rm t}}, \psi_{{\rm b}})^{\top} $. For the kinetic part, we obtain
\begin{eqnarray}
\bar{\psi}_{{\rm t}}\sigma^{\mu}\partial_{\mu} \psi_{{\rm t}} - \bar{\psi}_{{\rm b}}\sigma^{\mu}\partial_{\mu} \psi_{{\rm b}} = \bar{\Psi}\gamma^{\mu}\partial_{\mu}\Psi,\label{S3E2}
\end{eqnarray}
where the $4\times4$ $\gamma$-matrices are defined as \cite{Gomes}
\begin{eqnarray}
\gamma^{\mu}=\left( \begin{array}{cc}
\sigma^{\mu} & 0 \\
0 & -\sigma^{\mu} \end{array} \right) \nonumber,
\end{eqnarray}
with
\begin{eqnarray}
\gamma^{0}=\left( \begin{array}{cc}
\sigma^{0} & 0 \\
0 & -\sigma^{0} \end{array} \right), \quad
\gamma^{\tau}=i\left( \begin{array}{cc}
\sigma^{\tau} & 0 \\
0 & -\sigma^{\tau} \end{array} \right). \nonumber
\end{eqnarray}
Here, $\tau=1,2$, $\gamma^{\mu} \equiv \sigma^{0}\otimes\sigma^{\mu}$, and $\otimes$ represents the tensor product.
The fermionic currents can be written in terms of the new spinors
\begin{eqnarray}
j^{\mu}_{{\rm t}}&=& \frac{1}{2}\bar{\Psi}(\mathds{1}+\sigma^{0})\otimes\sigma^{\mu}\Psi, \label{S3E4}\\
j^{\mu}_{{\rm b}}&=& \frac{1}{2}\bar{\Psi}(\mathds{1}-\sigma^{0})\otimes\bar{\sigma}^{\mu}\Psi, \label{S3E5}
\end{eqnarray}
where $\mathds{1}\otimes\sigma^{\mu}= -i\gamma^{\mu}\gamma^{3}\gamma^{5}$, with
\begin{eqnarray}
\gamma^{3}=i\left( \begin{array}{cc}
0 & \mathds{1} \\
-\mathds{1} & 0 \end{array} \right), \quad
\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=\left( \begin{array}{cc}
0 & \mathds{1} \\
\mathds{1} & 0 \end{array} \right). \nonumber
\end{eqnarray}
Once we have expressed all contributions to the effective action (\ref{S3E1}) in terms of four-component spinors $\bar{\Psi}$ and $\Psi$, we can write down the following single-surface action
\begin{eqnarray}
&&S^{{\rm eff}}[\bar{\Psi},\Psi] = \frac{e^{2}}{2\varepsilon_{0}c}\int d^{3}r'\int d^{3}r \mathcal{J}^{\mu}_{35} \frac{1}{4\pi^{2}|r-r'|^{2}} \mathcal{J}_{\mu}^{35}\nonumber\\
&&+ \hbar\int d^{3}r\PR{ i \bar{\Psi}\gamma^{\mu}\partial_{\mu}\Psi + \frac{e^{2}d}{8\hbar\varepsilon_{0}c} \PC{\mathcal{J}^{\mu}\mathcal{J}_{\mu}+ \mathcal{J}^{\mu}_{35}\mathcal{J}_{\mu}^{35}}},
\label{S3E6}
\end{eqnarray}
where $\mathcal{J}^{\mu} \equiv \bar{\Psi}\gamma^{\mu}\Psi$ and $\mathcal{J}^{\mu}_{35} \equiv \bar{\Psi}\gamma^{\mu}\gamma^{3}\gamma^{5}\Psi$.
\section{Dynamical gap generation}
In the previous section, we derived an effective single-surface interacting model (see Eq.~(\ref{S3E6})), which involves both a short- and a long-range interaction. The former corresponds to a generalized Thirring model \cite{Gies, Palumbo2}, while the latter is similar to the non-local field theory studied in Refs.~\cite{MSmith1, MSmith2,Alves}. These kinds of interactions have already been studied separately in the context of dynamical mass generation in Refs.~\cite{Itoh, Kondo,Gies,Alves}. This mechanism is relevant in interacting quantum field theories and is related to the dynamical breaking of a classical symmetry due to quantum effects. In fact, all three interaction terms in our effective action (\ref{S3E6}) are invariant under chiral symmetry, which is dynamically broken at the quantum level.
In the first part of this section, we will focus on the short-range interactions $\mathcal{J}^{\mu}\mathcal{J}_{\mu}+\mathcal{J}^{\mu}_{35}\mathcal{J}_{\mu}^{35}$. By following the approach developed in Ref.~\cite{Itoh}, we will show that in the strong-coupling regime both Thirring-like terms yield the same mass generation, and their combined action leads to a larger critical number of fermion flavors $N_{c}$, as compared to a single Thirring term. Finally, we will add the long-range interaction and show that the excitonic gap is then enhanced, in agreement with the results found in Refs.~\cite{Gamayun, Alves2} for the case of Gross-Neveu theory.
\subsection{Short-range interactions}
Firstly, let us focus on the dynamical mass generated due to the Thirring-like interactions of Eq.~(\ref{S3E6}). In the large-$N$ approximation, we can write down the effective Lagrangian as
\begin{eqnarray}
&&\mathcal{L}^{{\rm eff}}[\bar{\Psi},\Psi] = i \hbar\bar{\Psi}_{a}\gamma^{\mu}\partial_{\mu}\Psi_{a}+\nonumber\\
&& \frac{g}{2N} \PC{\bar{\Psi}_{a}\gamma^{\mu}\gamma^{3}\gamma^{5}\Psi_{a} \bar{\Psi}_{\bar{a}}\gamma_{\mu}\gamma^{3}\gamma^{5}\Psi_{\bar{a}} + \bar{\Psi}_{a}\gamma^{\mu}\Psi_{a} \bar{\Psi}_{\bar{a}}\gamma_{\mu}\Psi_{\bar{a}}},\nonumber
\end{eqnarray}
where $g=e^{2}d N/4\varepsilon_{0}c$. Here the indices $a,\bar{a}$ denote a sum over $N$ fermion flavors.
Through a Hubbard-Stratonovich transformation, we introduce two auxiliary vector fields $W_{n}^{\mu}$ ($n=1,2$) and two scalar fields $\phi_{n}$ so as to preserve gauge symmetry. Thus, we obtain
\begin{eqnarray}
\mathcal{L}^{{\rm eff}} [\bar{\Psi},\Psi,W^{1},W^{2},\phi^{1},\phi^{2}]= i \hbar \bar{\Psi}_{a}\gamma^{\mu}\mathcal{D}_{\mu}\Psi_{a} \nonumber\\
- \sum_{n=1,2}\frac{1}{2g}\PC{W_{n}^{\mu}-\sqrt{N}\partial^{\mu}\phi_{n}}^{2}, \label{S4E4}
\end{eqnarray}
where $\mathcal{D}_{\mu}= \partial_{\mu}-(i/\sqrt{N})\gamma^{3}\gamma^{5}W^{1}_{\mu}-(i/\sqrt{N})W^{2}_{\mu}$.
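The mechanics of the Hubbard-Stratonovich step can be illustrated with a zero-dimensional toy integral (our illustration; the variables $w$, $j$, $g$ are generic stand-ins, not the fields above): a Gaussian integral over an auxiliary variable linearly coupled to a ``current'' $j$ reproduces, up to a constant, the quartic-type weight $e^{gj^{2}/2}$, which is the identity used to trade the current-current interactions for the fields $W^{\mu}_{n}$ and $\phi_{n}$. SymPy confirms it:

```python
import sympy as sp

w = sp.Symbol('w', real=True)   # auxiliary Hubbard-Stratonovich variable
j = sp.Symbol('j', real=True)   # stands in for a fermion bilinear ("current")
g = sp.Symbol('g', positive=True)

# \int dw exp(-w^2/2g + w j) = sqrt(2 pi g) exp(g j^2 / 2)
gauss = sp.integrate(sp.exp(-w**2 / (2 * g) + w * j), (w, -sp.oo, sp.oo))
assert sp.simplify(gauss - sp.sqrt(2 * sp.pi * g) * sp.exp(g * j**2 / 2)) == 0
print(sp.simplify(gauss))
```

Completing the square in $w$ is all that is happening; in the field theory the same manipulation is done per spacetime point and per index $n$.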
By following a similar procedure as adopted in Ref.~\cite{Itoh}, we introduce a non-local gauge-fixing term of the form
\begin{eqnarray}
-\frac{1}{2}\PR{\partial_{\mu}W^{\mu}+\sqrt{N}\frac{\zeta(\partial^{2})}{g}\phi} \frac{1}{\zeta(\partial^{2})} \PR{\partial_{\nu}W^{\nu}+\sqrt{N}\frac{\zeta(\partial^{2})}{g}\phi} \nonumber
\end{eqnarray}
for each gauge field $W^{\mu}_{n}$ in the Lagrangian (\ref{S4E4}). As a result, we obtain
\begin{eqnarray}
&&\mathcal{L}^{{\rm eff}} [\bar{\Psi},\Psi,W^{1},W^{2}] + \mathcal{L}^{{\rm eff}} [\phi^{1},\phi^{2}] =\nonumber\\
&& i \hbar \bar{\Psi}_{a}\gamma^{\mu}\mathcal{D}_{\mu}\Psi_{a} - \frac{1}{2g}W^{n}_{\mu}W_{n}^{\mu} -\frac{1}{2}\partial_{\mu}W_{n}^{\mu}\frac{1}{\zeta(\partial^{2})}\partial_{\nu}W_{n}^{\nu}\nonumber \\
&& - \frac{1}{2g}\PR{\zeta(\partial^{2})\phi_{n}}\phi_{n}-\frac{1}{2}\partial_{\mu}\phi_{n}\partial^{\mu}\phi_{n}, \quad\label{S4E6}
\end{eqnarray}
where the gauge-fixing term decoupled the $\phi$-boson fields, which have also been rescaled as $\sqrt{N/g}\phi_{n} \rightarrow \phi_{n}$. The double index $n$ indicates a summation over the fields. Notice in Eq.~(\ref{S4E6}) that only the strong-coupling regime $g\rightarrow \infty$ preserves gauge symmetry, leading to a massless gauge boson. We shall return to this point later in the Schwinger-Dyson analysis.
Once we have obtained the gauge theory in Eq.~(\ref{S4E6}), we proceed by defining the Feynman rules needed for calculating the mass generation. The full fermion propagator reads
\begin{eqnarray}
S(p)=\frac{i}{A(-p^{2})\gamma^{\mu}p_{\mu}-B(-p^{2})},\label{S4E8}
\end{eqnarray}
where $A$ represents a correction to the fermion-field wave function, and $B$ is the order parameter of the chiral symmetry, which preserves parity in 2+1 dimensions. The Schwinger-Dyson equation for the fermion two-point function is given by
\begin{eqnarray}
S^{-1}(p)= S_{0}^{-1}(p) - i\Sigma (p),\label{S4E9}
\end{eqnarray}
where $S_{0}=i/\gamma^{\mu}p_{\mu}$ is the free-fermion propagator. The self-energy $\Sigma$ contains the contribution from both types of local interaction, and it is determined by
\begin{eqnarray}
-i\Sigma =-\frac{1}{N} \int \frac{d^{3}k}{(2\pi)^{3}} \gamma^{\mu}\gamma^{3}\gamma^{5} S(k)\Gamma^{\nu}\gamma^{3}\gamma^{5} G^{1}_{\mu\nu}(p-k) \nonumber\\
-\frac{1}{N} \int \frac{d^{3}k}{(2\pi)^{3}} \gamma^{\mu} S(k)\Gamma^{\nu} G^{2}_{\mu\nu}(p-k).\qquad \label{S4E10}
\end{eqnarray}
$\Gamma^{\nu}$ and $G^{n}_{\mu\nu}$ are the full-vertex function and the full gauge-boson propagators, respectively. Here, we will adopt the bare-vertex approximation, i.e. $\Gamma^{\nu}=\gamma^{\nu}$. The explicit expression for the full gauge-boson propagator reads
\begin{eqnarray}
G^{n}_{\mu\nu} (k)= iG^{n}_{0}(-k^{2})\PC{g_{\mu\nu}-\eta(-k^{2})\frac{k_{\mu}k_{\nu}}{k^{2}}},\label{S4E14}
\end{eqnarray}
where $G^{1}_{0}=1/(g^{-1}-\Pi)$, $G^{2}_{0}=1/(g^{-1}+\Pi)$, and $\eta$ is a non-trivial function of the momentum related to the non-local gauge approximation \cite{Itoh}. The function $\Pi(-k^{2})$ emerges from the one-loop polarization tensor, inducing dynamics in the gauge fields $W^{n}_{\mu}$ through interaction effects.
In the strong-coupling regime ($g \rightarrow \infty$), both contributions in Eq.~(\ref{S4E10}) reduce to a single term. By replacing the respective $\Gamma^{\nu}$ and $G^{n}_{\mu\nu}$ functions into Eq.~(\ref{S4E10}) and using that $[\gamma^{\mu},\gamma^{3}\gamma^{5}]=0$, we obtain
\begin{eqnarray}
&&[A(p^{2})-1]\gamma^{\mu}p_{\mu}-B(p^{2}) =\nonumber\\
&& \frac{2}{N}\int \frac{d^{3}k}{(2\pi)^{3}} \frac{\gamma^{\mu}(A\gamma^{\alpha}k_{\alpha}+B)\gamma^{\nu}}{(A^{2}k^{2}+B^{2})\Pi(q^{2})} \PC{g_{\mu\nu}-\eta\frac{q_{\mu}q_{\nu}}{q^{2}}},\
\label{S4E15}
\end{eqnarray}
where $q=p-k$. We also performed a transformation to the Euclidean space ($k_{0} \rightarrow ik_{0}^{E}$).
By taking the trace over $\gamma$-matrices in Eq.~(\ref{S4E15}), we obtain two coupled equations: one related to the renormalization of the fermion wavefunction and another related to the generation of the fermionic mass. Within the non-local gauge-fixing picture, the fermion wavefunction is not renormalized. This means that $A(p^2)=1$, and it leads to both
\begin{eqnarray}
0 = \frac{2}{Np^{2}}\int \frac{d^{3}k}{(2\pi)^{3}} \frac{1}{(k^{2}+B^{2})\Pi} &&\left[(\eta - 1)p\cdot k \right. \nonumber\\
&&-\left. 2\eta\frac{(k\cdot q)(p\cdot q)}{q^{2}} \right], \quad
\label{S4E18}
\end{eqnarray}
and
\begin{eqnarray}
B = \frac{2}{N}\int \frac{d^{3}k}{(2\pi)^{3}} \frac{B \PC{3-\eta}}{(k^{2}+B^{2})\Pi}, \label{S4E19}
\end{eqnarray}
where Eq.~(\ref{S4E18}) is used to determine $\eta(q^{2})$. After some calculations, one finds that in the massless gauge boson limit $g\rightarrow \infty$, $\eta=1/3$ is a constant (see Appendix B for details). Within the Schwinger-Dyson equations, this limit is only defined for a nonzero polarization-tensor contribution, i.e. $\Pi(q^{2}) \neq 0$, as seen in Eq.~(\ref{S4E19}). Hence, the \textit{quenched} approximation $\Pi(q^{2})=0$ sometimes used in the literature \cite{Alves} to simplify calculations can only be used here in the case of a massive gauge boson.
We proceed with the computation by considering the massless gauge boson limit with $\eta=1/3$, which yields
\begin{eqnarray}
B = \frac{128}{3N}\int \frac{d^{3}k}{(2\pi)^{3}} \frac{B }{(k^{2}+B^{2})\sqrt{(p-k)^{2}}}, \label{S4E26}
\end{eqnarray}
where we used $\Pi(q^{2})= \sqrt{q^{2}}/8$.
The integrals over $k$ in Eq.~(\ref{S4E26}) are performed in spherical coordinates. We first integrate over the solid angle, and then split the remaining integral over positive values of $k$ into two regions,
\begin{eqnarray}
B &=& \frac{64}{3\pi^{2}N} \left \{ \int_{0}^{p} dk \frac{k^{2}B(k^{2})}{k^{2}+B^{2}(k^{2})} \frac{1}{|p|} \right. \nonumber \\
&+& \left. \int_{p}^{\Lambda} dk \frac{k^{2}B(k^{2})}{k^{2}+B^{2}(k^{2})} \frac{1}{|k|} \right\},
\label{S4E27}
\end{eqnarray}
where the virtual momentum $k$ is, respectively, smaller or larger than the external momentum $p$. Here, $\Lambda$ is a cutoff and $p=|p|$. Now, we transform the integral equation~(\ref{S4E27}) into a differential equation, and by considering $p^{2}+B^{2}(p^{2}) \approx p^{2}$, we obtain
\begin{eqnarray}
p^{2}\frac{d^{2}B}{dp^{2}}+2p\frac{dB}{dp}+ \frac{64}{3\pi^{2}N} B=0.\label{S4E28}
\end{eqnarray}
The solution of Eq.~(\ref{S4E28}) reads
\begin{eqnarray}
B(p) =\sqrt{\frac{m}{p}}\PR{C_{1}\cos\PC{\lambda \ln \frac{p}{m}} + iC_{2}\sin\PC{\lambda \ln \frac{p}{m}}},\ \quad
\label{S4E29}
\end{eqnarray}
where we have introduced the infrared parameter $m$ such that the ratio $p/m$ is dimensionless and the solution obeys the normalization condition $B(m)=m$. $C_{1}$ and $C_{2}$ are coefficients to be determined according to the ultraviolet (UV) and infrared (IR) boundary conditions. The parameter $\lambda$ indicates the behavior of the solutions of Eq.~(\ref{S4E28}), and it is given by
\begin{eqnarray}
\lambda = \frac{1}{2}\sqrt{\frac{256}{3\pi^{2}N}-1}.\label{S4E30}
\end{eqnarray}
We see in Eq.~(\ref{S4E30}) that there is a critical value $N_{c}=256/3\pi^{2} \approx 8.6$ determining the point at which the solution changes from oscillatory to exponential. This critical number is twice the one in QED$_{2+1}$ with a non-local gauge fixing. For values of $N>256/3\pi^{2}$, the solutions in Eq.~(\ref{S4E29}) are real exponentials, with a contribution that increases in the UV limit. Hence, the only possible solution in this regime is $B(p)=0$ (trivial solution; no mass generation) \cite{Appelquist}. For $N<256/3\pi^{2}$, we obtain the oscillatory solutions (\ref{S4E29}). This implies that $B(p)\neq 0$, and consequently, the chiral symmetry has been broken by the dynamical generation of a fermion mass.
The IR and UV boundary conditions are, respectively,
\begin{eqnarray}
\PR{\frac{dB(p)}{dp}}_{p=m}=0,\ {\rm and} \
\PR{p\frac{dB(p)}{dp}+B(p)}_{p=\Lambda}=0.\ \quad \label{S4E32}
\end{eqnarray}
The IR condition yields a relation between the coefficients $C_{1}$ and $C_{2}$, $C_{1}=2i\lambda C_{2}$. By using this result in the UV condition, we obtain an expression for $m$
\begin{eqnarray}
m = \Lambda \exp\PR{-\frac{1}{\lambda}\arctan\PC{\frac{4\lambda}{4\lambda^{2}-1}}}. \label{S4E33}
\end{eqnarray}
The solution (\ref{S4E29}) can be rewritten as
\begin{eqnarray}
B(p) = m \mathcal{F}\PC{\frac{p}{m},\lambda}, \label{S4E34}
\end{eqnarray}
with
$$\mathcal{F}\PC{\frac{p}{m},\lambda} = \sqrt{\frac{m}{p}} \PR{\cos\PC{\lambda \ln \frac{p}{m}} + \frac{1}{2\lambda}\sin\PC{\lambda \ln \frac{p}{m}}}.$$
So far, we have shown that the Thirring-like interactions derived within the dimensional-reduction method break the chiral symmetry and generate a mass in the fermionic sector, with a critical number $N_c$ that is twice that of the standard Thirring model derived in Ref.~\cite{Itoh}. This makes sense in the strong-coupling regime because the contributions of both Thirring-like interactions sum up, yielding the multiplicative factor 2 in Eq.~(\ref{S4E15}).
\subsection{Long-range interaction}
Finally, we investigate the effect of the long-range interaction in the strong-coupling regime. First, we rewrite the long-range interaction of Eq.~(\ref{S3E6}) in terms of a gauge theory, namely
\begin{eqnarray}
H^{\mu\nu}\frac{1}{\sqrt{\Box}}H_{\mu\nu}+ \bar{g} h_{\mu}\mathcal{J}^{\mu}_{35},\label{S4E35}
\end{eqnarray}
where $H_{\mu\nu}=\partial_{\mu}h_{\nu}-\partial_{\nu}h_{\mu}$ and $\bar{g}$ is the coupling constant. This non-local gauge theory is similar to the one studied in Ref.~\cite{Alves}, where the authors also showed the breaking of chiral symmetry.
By adding the contribution of the long-range interaction to $\Sigma(p)$ and following a standard procedure, we obtain a differential equation similar to Eq.~(\ref{S4E28}), but with a different coefficient multiplying the function $B(p)$. In other words, we obtain a different parameter $\lambda$, namely
\begin{eqnarray}
\lambda' = \frac{1}{2}\sqrt{\frac{4}{N}\PC{\frac{64}{3\pi^{2}}+\frac{8}{\pi^{2}}}-1},\label{S4E36}
\end{eqnarray}
where $32/N\pi^{2}$ is the long-range contribution. The new parameter $\lambda'$ leads to a critical number $N_{c} = 352/3\pi^{2} \approx 11.9$. Thus, the difference between the effects caused by the short- and the long-range interaction is mainly associated with the critical number of fermions (or critical coupling) below which the symmetry is dynamically broken.
Our results show that the short-range interaction yields the major contribution to the dynamical mass generation when compared to the long-range one. However, both interaction effects add up so as to increase the value of the critical fermion flavor number $N_{c}$ for the occurrence of exciton condensation. This dynamical mechanism is driven mainly by the presence of \textit{electronic interactions} between the surfaces of 3D TI thin-films, and is robust only when the surfaces are strongly interacting. The resulting gap is \textit{time-reversal invariant} and represents a signature of excitonic bound states.
\subsection{Application: Bi$_{2}$Se$_{3}$ thin-film}
Here, we apply our theoretical results about the dynamical gap generation to Bi$_{2}$Se$_{3}$ thin films. This material is one of the most investigated three-dimensional topological insulators \cite{Zhang,Lu}, together with Bi$_{2}$Te$_{3}$ \cite{Chen}. Experimentally, the size of the gap depends on the material, on the thickness of the film, and on the substrate where the material is grown. In particular, the width of the sample drives the transition from a trivial insulator to a quantum spin Hall insulator, up to the limit in which the material presents the characteristics of a true three-dimensional topological insulator. This transition has been theoretically and experimentally investigated in Ref.~\cite{Zhang}.
In our manuscript, to describe these thin films, we adopted the regime where the distance between the surfaces $d$ -- the width of the 3D TI -- is smaller than the in-plane average separation $a$ between electrons and holes. In general, one would not expect interactions between the surfaces of a 3D TI because of the high values of the bulk dielectric constant. However, the bulk dielectric constant depends on the thickness of the material and decreases for thinner samples \cite{D-Wu,Starkov}. In this limit, the effect of electronic interactions becomes relevant. As we have shown, in the strong coupling regime there is a gap generation in each of the surfaces.
Within these assumptions, by using Eq.~(\ref{S4E33}) we are able to estimate the excitonic gap generated at zero temperature. This estimate depends on the material and on the dielectric constant of the substrate via the cutoff $\Lambda$, which in the case of Bi$_2$Se$_3$, for a single Dirac mode ($N=1$), is $0.1$ eV \cite{Sokolik}. By considering these parameters, we theoretically estimate $\lambda\simeq 1.65$ and determine the maximum value for the gap, $m \approx 0.07$ eV, arising from the electronic interactions. Interestingly, this value agrees with the gap measured through ARPES for a thin-film thickness of 4 nm in Bi$_2$Se$_3$ \cite{Zhang}.
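The quoted numbers follow directly from Eqs.~(\ref{S4E33}) and (\ref{S4E36}). A short script (ours) reproduces them, under the assumption that the gap formula (\ref{S4E33}) is used with $\lambda\to\lambda'$ and the quoted cutoff $\Lambda = 0.1$ eV:

```python
import math

N = 1                                   # single Dirac mode
# lambda' from Eq. (S4E36): short-range plus long-range contributions
lam = 0.5 * math.sqrt((4 / N) * (64 / (3 * math.pi**2) + 8 / math.pi**2) - 1)

Lam = 0.1                               # cutoff in eV for Bi2Se3 [Sokolik]
# gap from Eq. (S4E33), evaluated with lambda -> lambda'
m = Lam * math.exp(-math.atan(4 * lam / (4 * lam**2 - 1)) / lam)

print(round(lam, 2), round(m, 3))  # → 1.65 0.07
```

The result $m\approx 0.070$ eV sits just below the cutoff scale, consistent with the ARPES comparison above.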
\section{Conclusions}
It has been theoretically proposed that the excitonic bound states at zero magnetic field may have important technological applications, such as dispersionless switching devices \cite{Application1}, the design of topologically protected qubits \cite{Application2}, or heat exchangers \cite{Zaanen}.
It is also well known that TI-based electronic devices are attractive as platforms for spintronic applications.
In this work, we provide further theoretical support for exciton condensation in thin-film 3D TIs by investigating the influence of electromagnetic interactions in these systems.
We started by considering that the photons propagate through the 3D surrounding space where the material is immersed, while the mobile electrons propagate on the two 2D surfaces of the 3D TI. Upon projecting the photon dynamics onto these two 2D surfaces, we found the effective intra- and inter-surface interactions in the system. The problem was then mapped onto a single-surface one, in which the top and bottom layers appear as flavors of a single fermionic spinor. Within the single-surface picture, we showed that the fermions interact via two short-range and one long-range effective interaction terms. By using a Hubbard-Stratonovich transformation, we introduced the corresponding effective gauge theory and analyzed the dynamical gap generation through the Schwinger-Dyson equation. This gap term is time-reversal invariant and is associated with the chiral symmetry breaking.
Our results indicate that the combined effect of the short- and long-range interactions that emerge from projecting QED enhances the value of the critical fermion flavor number $N_{c}$ in comparison to models that include only short- or long-range interactions. They also confirm the existence and robustness of excitonic bound states in thin-film TIs in the non-perturbative regime. Notice that these results are achieved in the strong-coupling regime, which is usually difficult to access with analytic techniques due to the failure of the standard perturbation-theory approach.
The method used here can be extended to multi-layer systems, which involve a larger number of fermion species. This will allow one to analyze the chiral-symmetry breaking and dynamical mass generation in experimentally available samples of multi-layered Dirac materials. At present, the multi-layer samples are of higher quality than the corresponding single-layer ones, and it is therefore essential that theoretical investigations tackle those more complex, multi-flavor systems. Furthermore, the same method can be used to study lower-dimensional excitonic bound states, which have been recently proposed in two parallel nanowires \cite{Nanowire}. This problem will be analyzed in future work.
\acknowledgments
This work was supported by CNPq (Brazil) through the Brazilian government project Science Without Borders. We are grateful to S. Kooi, S. Vandoren, and E. C. Marino for fruitful discussions. \\
\section{INTRODUCTION} In \cite{HTh,H},
a technical result was proved establishing a bijective
correspondence between certain open projections in a
$C^*$-algebra containing an operator algebra $A$,
and certain one-sided ideals of $A$. Here we give several
remarkable consequences of this result. These include
a generalization of the theory of hereditary subalgebras of
a $C^*$-algebra,
and the solution of a ten year old
problem concerning the Morita equivalence of operator algebras.
In particular, the latter yields the conceptually
cleanest generalization of
the notion of Hilbert $C^*$-modules to
nonselfadjoint algebras.
We show that an `ideal' of a general operator space $X$ is
the intersection of $X$ with an `ideal' in any containing
$C^*$-algebra or $C^*$-module. Finally, we discuss
the noncommutative variant of the classical theory of
`peak sets'. If $A$ is a function algebra on a compact
space $X$, then a {\em $p$-set} may be characterized
as a closed subset $E$ of $X$ such that for any open set $U$
containing $E$ there is a function in Ball$(A)$ which is $1$ on $E$,
and $< \epsilon$ in modulus outside of $U$. We prove
a noncommutative version of this result.
An {\em operator algebra} is a closed algebra of operators on
a Hilbert space; or equivalently a closed subalgebra of a $C^*$-algebra.
We refer the reader to \cite{BLM} for the basic theory of
operator algebras which we shall need. We say that
an operator algebra $A$ is {\em unital}
if it has an identity of norm $1$,
and {\em approximately unital} if it
has a contractive approximate identity (cai).
A {\em unital-subalgebra} of a $C^*$-algebra $B$
is a closed subalgebra containing $1_B$.
In this paper we will often work
with closed right ideals $J$ of an operator algebra $A$
possessing a contractive left approximate identity (or left cai)
for $J$. For brevity we will call these
{\em r-ideals}.
The matching class of left ideals
with right cai will be called {\em $\ell$-ideals}, but these will
not need to be mentioned much for reasons of symmetry.
In fact r-ideals are exactly the {\em right $M$-ideals}
of $A$ if $A$ is approximately unital \cite{BEZ}. For
$C^*$-algebras r-ideals are precisely the right
ideals, and there is an obvious bijective correspondence
between r-ideals and $\ell$-ideals, namely $J \mapsto J^*$.
For nonselfadjoint operator algebras it is not at all clear
that there is a bijective correspondence between r-ideals and $\ell$-ideals.
In fact there is, but this seems at present
to be a deep result, as we shall see.
It is easy to see that there is a bijective correspondence between r-ideals
$J$ and certain projections $p$ in the second dual $A^{**}$
(we recall that $A^{**}$ is also an operator algebra
\cite[Section 2.5]{BLM}). This
bijection takes $J$ to its {\em left support
projection}, namely the weak* limit of a left cai
for $J$; and conversely takes $p$ to the right ideal
$p A^{**} \cap A$. The main theorem from \cite{H},
which for brevity we will refer to as {\em Hay's theorem}, states
that if $A$ is a unital-subalgebra of a $C^*$-algebra $B$
then the projections $p$ here may be characterized as the
projections in $A^{\perp \perp}$ which are open in $B^{**}$
in the sense of e.g.\ \cite{Ake,Ped}.
Although this result sounds innocuous,
its proof is presently quite technical and lengthy, and
uses the noncommutative Urysohn lemma
\cite{Ake2} and various nonselfadjoint analogues of it.
One advantage of this condition is that it has a left/right
symmetry, and thus it leads naturally into a theory of
hereditary subalgebras ({\em HSA's} for short)
of general operator algebras. For commutative
$C^*$-algebras of course HSA's are precisely the
closed two-sided ideals. For
noncommutative $C^*$-algebras the hereditary subalgebras are
the intersections
of a right ideal with its canonically associated left ideal
\cite{Ef,Ped}.
They are also the selfadjoint `inner ideals'. (In this paper, we say that a
subspace $J$ of an algebra $A$ is an
inner ideal if $J A J \subset J$.
Inner ideals in this sense are sometimes called `hereditary subalgebras' in
the literature, but we will reserve the latter term for
something more specific.) The fact that
HSA's of $C^*$-algebras are the selfadjoint inner ideals
follows quickly
from Proposition \ref{her15} below and its proof, or it
can be deduced from \cite{ER1}.
HSA's play some of the role that two-sided ideals play
in the commutative theory. Also, their usefulness stems in large part
because many important properties of the algebra pass to hereditary
subalgebras
(for example, primeness or primitivity).
We now summarize the content of our paper. In Section 2, we use
Hay's theorem to generalize some of the
$C^*$-algebraic theory of HSA's. Also
in Section 2 we use our results to give a solution\footnote{
An earlier attempt to solve this problem was made in \cite{K}.}
to a problem raised in
\cite{B}. In that paper an operator algebra $A$ was said to have
{\em property (${\mathcal L}$)} if it has a
left cai $(e_t)$ such that $e_s e_t \to e_s$ with $t$ for each $s$.
It was asked if every operator algebra with a
left cai has property (${\mathcal L}$).
As an application of this, in Section 3 we settle
a problem going back to the early days of the theory of strong
Morita equivalence
of nonselfadjoint
operator algebras. This gives a very clean generalization of
the notion of Hilbert $C^*$-module to such algebras.
In Section 4, we generalize to
nonselfadjoint algebras the connections between HSA's,
weak* closed faces of the state space, and lower semicontinuity.
We remark that facial structure in the algebra itself
has been looked at in the nonselfadjoint
literature, for example in
\cite{Kat} and references therein.
In Section 5 we show that every right $M$-ideal in any
operator space $X$ is an
intersection of $X$ with a canonical
right submodule of any $C^*$-module (or `TRO') containing $X$.
Similar results hold for two-sided, or `quasi-', $M$-ideals.
This generalizes to arbitrary
operator spaces the theme from e.g.\ Theorem \ref{cher1} below, and
from \cite{H}, that r-ideals (resp.\
HSA's) are very tightly related to
matching right ideals (resp.\
HSA's) in a containing $C^*$-algebra.
In the final Section 6 we discuss connections
with the peak and $p$-projections
introduced in \cite{HTh,H}. The motivation
for looking at these objects is to attempt to generalize
the tools of peak sets and `peak interpolation' from
the classical theory of function algebras (due to
Bishop, Glicksberg, Gamelin, and others). In particular,
we reduce the main open question posed
in \cite{H},
namely whether the $p$-projections coincide with
the support projections of r-ideals,
to a simple sounding question about
approximate identities: If $A$ is an approximately unital operator algebra
then does $A$ have an approximate identity
of the form $(1- x_t)$ with $x_t \in {\rm Ball}(A^1)$?
Here $1$ is the identity of the unitization
$A^1$ of $A$.
We imagine that the answer to this is in the negative.
We also show that $p$-projections
are exactly the closed projections
satisfying
the `nonselfadjoint Urysohn lemma' or `peaking'
property discussed at the beginning
of this introduction. Thus even if the question above
turns out to have a negative answer, these projections should play an important
role in future `nonselfadjoint interpolation theory'.
Hereditary subalgebras of not necessarily selfadjoint unital
operator algebras have previously been considered in the
papers \cite{MZ,ZZ} on inner ideals. We thank Lunchuan Zhang for
sending us a copy of these papers.
Another work that has a point of contact
with our paper is the unpublished note \cite{Kun}.
Here {\em quasi-$M$-ideals},
an interesting variant of the one-sided $M$-ideals of Blecher, Effros,
and Zarikian \cite{BEZ},
were defined.
Kaneda showed that the product $R L$
of an r-ideal and an $\ell$-ideal in an approximately unital operator
algebra $A$ is an inner ideal (inner ideals are called `quasi-ideals' there),
and is a quasi-$M$-ideal. It is also noted there that
in a $C^*$-algebra $A$, the following three are the same:
quasi-$M$-ideals in
$A$, products $R L$ of an r-ideal and an $\ell$-ideal, and
inner ideals (see also \cite{ER1},
particularly Corollary 2.6 there).
Hereditary subalgebras
in the sense of our paper were not considered in \cite{Kun}.
We thank Kaneda for permission to
describe his work here and in Section 5.
Some notation: In this paper, all projections are orthogonal
projections. If $X$ and $Y$ are sets (in an operator algebra say)
then we write
$X Y$ for the {\em norm closure} of the span of terms
of the form $x y$, for $x \in X, y \in Y$.
The second dual $A^{**}$ of an operator algebra $A$
is again an operator algebra, and the first dual
$A^*$ is a bimodule over $A^{**}$ via the
actions described, for example, on the bottom of
p.\ 78 of \cite{BLM}.
A projection $p$ in the second dual of a $C^*$-algebra $B$
is called {\em open} if it is the sup of an increasing net of
positive elements of $B$.
Such projections $p$ are in a bijective correspondence with
the right ideals $J$ of $B$, or with
the HSA's (see \cite{Ped}). It is well known, and easy
to see, that $p$ is open iff there is a net $(x_t)$ in $B$
with $x_t \to p$ weak*, and $p x_t = x_t$.
We recall that
TRO's are essentially the same thing as Hilbert
$C^*$-modules, and may be viewed as
closed subspaces $Z$ of $C^*$-algebras with the
property that $Z Z^* Z \subset Z$. See e.g.\ \cite[Section 8.3]{BLM}.
Every operator space $X$ has a `noncommutative
Shilov boundary' or `ternary envelope'
$(Z,j)$ consisting of a TRO $Z$ and a
complete isometry $j : X \to Z$ whose
range `generates' $Z$. This ternary envelope has a
universal property which may be found in \cite{Ham,BLM}:
For any complete isometry $i : X \to Y$ into a
TRO $Y$, whose
range `generates' $Y$, there exists a (necessarily unique and
surjective) `ternary morphism' $\theta : Y \to Z$ such that
$\theta \circ i = j$.
If $A$ is
an approximately unital operator algebra then
the noncommutative Shilov boundary is written as $C^*_e(A)$ (see e.g.\
\cite[Section 4.3]{BLM}), and was first introduced
by Arveson \cite{SOC}.
\section{HEREDITARY SUBALGEBRAS}
Throughout this section $A$ is an operator algebra (possibly not
approximately unital). Then $A^{**}$ is an operator algebra.
We shall say that
a projection $p$ in $A^{**}$ is {\em
open in $A^{**}$} if $p \in (p A^{**} p \cap A)^{\perp \perp}$.
In this case we also say that $p^{\perp}$ is
{\em closed in $A^{**}$}, or is an {\em approximate
$p$-projection} (this notation was used in \cite{H}
since these projections have properties analogous
to the {\em $p$-sets} in the theory of uniform algebras;
see e.g.\ \cite[Theorem 5.12]{H}).
Clearly these notions
are independent of
any particular $C^*$-algebra containing $A$.
If $A$ is a $C^*$-algebra then
these concepts coincide with the usual notion
of open and closed projections (see e.g.\ \cite{Ake,Ped}).
\medskip
{\bf Example.}
Any projection
$p$ in the multiplier algebra $M(A) \subset A^{**}$
is open in $A^{**}$,
if $A$ is approximately unital.
Indeed $p A^{**} p \cap A = p A p$, and
if $(e_t)$ is a cai for $A$, then $p e_t p \to p$ weak*.
\medskip
If $p$ is open in $A^{**}$ then clearly $D = p A^{**} p \cap A$
is a closed subalgebra of $A$, and it has
a cai by \cite[Proposition 2.5.8]{BLM}. We call such a subalgebra
a {\em hereditary subalgebra} of $A$ (or for brevity, a {\em HSA}).
Perhaps more properly (in view of the next result) we should call these
`approximately unital
HSA's', but for convenience we use the shorter term. We say that
$p$ is the {\em support projection} of the HSA $D$; and it follows
by routine arguments that
$p$ is the weak* limit of any cai from $D$.
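A simple commutative illustration (our example, using the definitions above): let $A = c_0$, so that $A^{**} = \ell^\infty$. Every projection in $\ell^\infty$ has the form $p = \chi_S$ for a subset $S \subseteq \Ndb$, and
\[
p A^{**} p \cap A \, = \, \{ x \in c_0 : x_k = 0 \text{ for } k \notin S \} \, = \, c_0(S) ,
\]
with $c_0(S)^{\perp \perp} = \chi_S \, \ell^\infty \ni p$. Thus every projection in $A^{**}$ is open, and the HSA's of $c_0$ are exactly the closed ideals $c_0(S)$, in accordance with the fact mentioned in the introduction that for commutative $C^*$-algebras the HSA's are the closed two-sided ideals.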
\begin{proposition}[\cite{MZ,ZZ}] \label{her15}
A subspace of an operator algebra $A$ is
a HSA iff it is an approximately unital inner ideal.
\end{proposition}
\begin{proof} We have already said that HSA's
are approximately unital, and clearly they
are inner ideals.
If $J$ is an approximately unital inner ideal
then by \cite[Proposition 2.5.8]{BLM} we have
that $J^{\perp \perp}$ is an algebra with
identity $e$ say. Clearly
$J^{\perp \perp} \subset e A^{**} e$.
Conversely, by a routine weak* density argument
$J^{\perp \perp}$ is an inner ideal,
and so $J^{\perp \perp} = e A^{**} e$.
Thus $J = e A^{**} e \cap A$, and
$e$ is open. \end{proof}
We can often assume that the containing algebra
$A$ above is unital, simply
by adjoining a unit to $A$ (see \cite[Section 2.1]{BLM}).
Indeed it follows from the last
proposition that a subalgebra
$D$ of $A$ will be hereditary in the unitization $A^1$
iff it is hereditary in $A$.
The following is a second (of many) characterization of
HSA's. We leave the proof to the reader.
\begin{corollary} \label{her8} Let $A$ be an operator algebra
and suppose that $(e_t)$ is a net in ${\rm Ball}(A)$ such that
$e_t e_s \to e_s$ and $e_s e_t \to e_s$ with $t$.
Then $\{ x \in A : x e_t \to x , e_t x \to x \}$ is
a HSA of $A$. Conversely,
every HSA of $A$ arises in this
way.
\end{corollary}
Note that this implies that
any approximately unital subalgebra $D$ of $A$
is contained in a HSA.
We next refine Hay's theorem from \cite{H}.
\begin{theorem} \label{TGBTG}
Suppose that $A$ is an operator algebra (possibly not
approximately unital),
and that $p$ is a projection in $A^{**}$.
The following are equivalent:
\begin{itemize}
\item [(i)] $p$ is open in $A^{**}$.
\item [(ii)] $p$ is open as a projection in $B^{**}$,
if $B$ is a $C^*$-algebra containing $A$ as a subalgebra.
\item [(iii)] $p$ is the left support projection of an r-ideal
of $A$ (or, equivalently, $p$ is contained
in $(p A^{**} \cap A)^{\perp \perp}$).
\item [(iv)] $p$ is the right support projection of an $\ell$-ideal of $A$.
\item [(v)] $p$ is the support projection of a hereditary subalgebra
of $A$.
\end{itemize}
\end{theorem}
\begin{proof} That (v) is equivalent to (i) is just the
definition of being open in $A^{**}$.
Also (i) implies (ii) by
facts about open projections mentioned in the introduction.
Supposing (ii), consider $A^1$ as a unital-subalgebra of $B^1$.
Then $p$ is open as a projection in $(B^1)^{**}$.
Since $p \in A^{\perp \perp}$ it follows from Hay's theorem
that $J = p (A^1)^{**} \cap A^1$ is an r-ideal
of $A^1$ with left support projection $p$. If $x \in J$ then
$x = p x \in (A^{\perp \perp} A^1) \cap A^1 \subset A^{\perp \perp}
\cap A^1 = A$.
Thus $J = p A^{**} \cap A$, and we have proved (iii).
Thus to complete the proof
it suffices to show that (iii) implies (i) (the equivalence with (iv)
following by symmetry).
(iii) $\Rightarrow$ (i) \
First assume that $A$ is unital, in which case
(iii) is equivalent to (ii) by Hay's theorem.
We work in $A^*$. As stated in the introduction,
$A^*$ is a right $A^{**}$-module via the
action $(\psi \eta)(a) = \langle \eta a , \psi \rangle$
for $\psi \in A^*, \eta \in A^{**}, a \in A$.
Similarly it is a left $A^{**}$ module. Let $q = p^\perp$,
a closed projection in $B^{**}$ for any $C^*$-algebra $B$
generated by $A$. We first claim that $A^* q =
J^\perp$, where $J$ is the right ideal of $A$ corresponding to $p$,
and so $A^* q$ is weak* closed.
To see that $A^* q =
J^\perp$, note that clearly $A^* q\subset J^\perp$, since $J = p A^{**} \cap A$.
Thus if $\psi \in J^\perp$, then $\psi q \in J^\perp$,
and so $\psi p \in J^\perp$ since $\psi = \psi p + \psi q$.
However, if $\psi p \in J^\perp = (p A^{**})_\perp$,
then $\psi p \in (A^{**})_\perp = \{ 0 \}$. Thus
$\psi = \psi q \in A^* q$.
Similarly, using the equivalence with (ii) here,
we have that $q A^*$ is weak* closed. Now
$q A^* + A^* q$ is the kernel of the projection
$\psi \to p \psi p$ on $A^*$, and hence it is
norm closed. By \cite[Lemma I.1.14]{HWW}, $q A^* + A^* q$ is
weak* closed. Claim: $(q A^* + A^* q)^\perp
= p A^{**} p$. Assuming this claim, note that
$(q A^* + A^* q)_\perp \subset p A^{**} p \cap A$;
and $p A^{**} p \cap A \subset (q A^* + A^* q)^\perp$,
so that $p A^{**} p \cap A = (q A^* + A^* q)_\perp$.
Thus $(p A^{**} p \cap A)^{\perp \perp} = p A^{**} p$, and the
proof is complete.
In order to prove the claim, first note that it is clear that
$p A^{**} p \subset (q A^* + A^* q)^\perp$. On the
other hand, if $\eta \in (q A^* + A^* q)^\perp$ then
write $\eta = p \eta p + p \eta q + q \eta p + q \eta q$.
Thus $p \eta q + q \eta p + q \eta q \in (q A^* + A^* q)^\perp$.
In particular, applying this element to a functional $q \psi \in q A^*$
gives $$0 = \langle p \eta q + q \eta q , q \psi \rangle =
\langle p \eta q + q \eta q , \psi \rangle , \qquad \psi \in A^* .$$
Thus $p \eta q + q \eta q = 0$, and left multiplying by $p$ shows that
$p \eta q = q \eta q = 0$.
Similarly $q \eta p = 0$. Thus $\eta \in p A^{**} p$.
Now assume that $A$ is nonunital. If $J$ is the
r-ideal, then $J$ is an r-ideal
in $A^1$.
Thus by the earlier part, $p \in (p (A^1)^{**} p \cap A^1)^{\perp \perp}$.
If $(e_t)$ is a cai for $p (A^1)^{**} p \cap A^1$,
then $e_t \to p$ weak*.
Since $p (A^1)^{**} \cap A^1 = J$ we have $p (A^1)^{**} p \cap A^1
\subset J \subset A$. Thus $e_t \in p A^{**} p \cap A$,
and so $p \in (p A^{**} p \cap A)^{\perp \perp}$.
Note too that the above shows that $p (A^1)^{**} p \cap A^1 =
p A^{**} p \cap A$. \end{proof}
{\bf Remarks.} 1) It is clear from the above
that a sup of open projections in $A^{**}$ is open in $A^{**}$.
From this remark, it is easy to give an alternative proof of
a result from \cite{BZ} which states that the closure of the
span of a family of r-ideals is again an
r-ideal.
2) If $A$ is approximately unital then one can add to the
characterization of open projections in the theorem,
the condition that $A^* p^\perp$ is weak* closed in $A^*$.
The second paragraph of the proof above shows one
direction of this. Conversely, if $A^* p^\perp$ is weak* closed,
then $A^* p^\perp = J^\perp$ for a subspace $J$ of
$A$ such that $J^{\perp \perp} = (A^* p^\perp)^\perp = p A^{**}$.
Thus $p$ is the support projection of the r-ideal
$A \cap p A^{**} = J$.
3) A modification of part of the proof of the theorem shows
that if $A$ is approximately unital and if
$p, r$ are open projections in $A^{**}$
then
$(p A^{**} r \cap A)^{\perp \perp}
= p A^{**} r$. Note that $p A^{**} r \cap A$ is an inner ideal of
$A$. Such subspaces are precisely the intersection of an
r-ideal and an $\ell$-ideal.
\begin{corollary} \label{propL} Every operator algebra with a
left cai has property (${\mathcal L}$).
\end{corollary}
\begin{proof}
Let $C$ be an operator algebra with a
left cai, and let $A$ be its unitization.
Then $C$ is an r-ideal in $A$, and the left support
projection $p$ of $C$ in $A^{**}$ is a weak* limit
of the left cai. Also, $C = p A^{**} \cap A$.
By Theorem \ref{TGBTG}, we have
$p \in (p A^{**} p \cap A)^{\perp \perp}$,
and
$p A^{**} p \cap A$ is a closed subalgebra
of $C$ containing a cai $(x_t)$ with
$x_t \to p$ weak*. If $J = \{ a \in A
: x_t a \to a \}$ then $J$ is a right ideal of $A$
with support projection $p$, so that $J = C$.
Hence $C$ has property (${\mathcal L}$).
\end{proof}
Some implications of this result are mentioned in
\cite{B}, however our main application appears in the
next section.
In the following, we use some notation introduced in \cite{B}.
Namely, if $J$ is an operator algebra with a
left cai $(e_t)$ such that $e_s e_t \to e_s$ with $t$,
then we set ${\mathcal L}(J) = \{ a \in J : a e_t \to a \}$.
This latter space does not depend on the particular $(e_t)$,
as is shown in \cite{B}.
\begin{corollary} \label{her3}
A subalgebra of an operator algebra $A$
is hereditary if and only if
it equals ${\mathcal L}(J)$ for an r-ideal $J$ of
$A$.
Moreover the correspondence $J \mapsto {\mathcal L}(J)$
is a bijection from the set of r-ideals of $A$ onto
the set of HSA's of $A$. The inverse of
this bijection is the map
$D \mapsto D A$. Similar results hold
for the $\ell$-ideals of $A$.
\end{corollary}
\begin{proof}
If $D$ is a HSA of $A$ then
by Corollary \ref{her8} we have
$D = \{ x \in A : x e_t \to x , e_t x \to x \}$,
where $(e_t)$ is a cai for $D$. Set $J = \{ x \in A : e_t x \to x \}$,
an r-ideal with $D = {\mathcal L}(J)$.
Conversely, if $J$ is an r-ideal then
by Corollary \ref{propL},
we can choose a left cai $(e_t)$ of $J$ with the
property that $e_s e_t \to e_s$ with $t$.
Then $D = \{ x \in A : x e_t \to x , e_t x \to x \}$
is an HSA by Corollary \ref{her8}, and
$D = {\mathcal L}(J)$. Note that ${\mathcal L}(J) A
\subset J$, and conversely if $x \in J$ then
$x = \lim_t \, e_t x \in {\mathcal L}(J) A$.
Thus $J = {\mathcal L}(J) A$. This shows that
$J \mapsto {\mathcal L}(J)$ is one-to-one. The last
paragraph shows that it is onto.
\end{proof}
\begin{corollary} \label{her7} If $D$ is a
hereditary subalgebra of an operator
algebra $A$, and if $J = D A$ and $K = A D$,
then $J K = J \cap K = D$.
\end{corollary}
\begin{proof} Clearly $J K \subset J \cap K$.
Conversely, if $x \in J \cap K$ and $(e_t)$ is a cai for $D$
then $x = \lim_t x e_t \in J K$. So $J K = J \cap K$
(see also e.g.\ \cite[Proposition 6.2]{FSIO} and \cite[Lemma 1.4.1]{Kth}).
Clearly $J K \subset D$ since $D$ is an inner
ideal. Conversely, $D = D^4 \subset J K$ (here $D = D^2$
by Cohen's factorization theorem, since $D$ has a cai, and
$D^4 = D^2 D^2 \subset (DA)(AD) = JK$).
\end{proof}
\begin{theorem} \label{cher1}
If $A$ is a closed subalgebra of a $C^*$-algebra $B$ then there
is a bijective correspondence between r-ideals of $A$ and
right ideals of $B$ with left support in $A^{\perp \perp}$.
Similarly, there
is a bijective correspondence between HSA's of $A$ and
HSA's of $B$ with support in $A^{\perp \perp}$.
The correspondence takes an r-ideal (resp.\ HSA) $J$ of
$A$ to $JB$ (resp.\ $J B J^*$). The inverse
bijection is simply intersecting with $A$.
\end{theorem}
\begin{proof} We leave the proof of this to the reader, using
the ideas above
(and, in particular, Hay's theorem). At some
point an appeal to \cite[Lemma 2.1.6]{BLM} might be necessary.
\end{proof}
In the $C^*$-algebra case the correspondence between r-ideals
and $\ell$-ideals has a simple formula: $J \mapsto J^*$.
For nonselfadjoint algebras $A$, one formula setting
up the same correspondence is $J \mapsto A {\mathcal L}(J)$.
It is easy to see from the last theorem that,
for subalgebras $A$ of a $C^*$-algebra $B$, this
correspondence becomes $J \mapsto BJ^* \cap A$.
Here $J$ is an r-ideal; and notice that $BJ^* \cap A$ also equals
$B D^* \cap A$, where $D$ is the associated HSA
of $A$
(we remark that by \cite[Lemma 2.1.6]{BLM} it is easy to see
that $B D^* = B D$). This allows
us to give another description of ${\mathcal L}(J)$ as
$J \cap BJ^*$.
\begin{theorem} Suppose $D$ is a hereditary subalgebra of an
approximately unital operator algebra $A$. Then every
$f \in D^*$ has a unique
Hahn-Banach extension to a functional in $A^{\ast}$ (of the same norm).
\end{theorem}
\begin{proof}
Let $g$ and $h$ be two such extensions. Since $D=pA^{\ast\ast}p \cap A$
for an open projection $p$, it
is easy to see that $pgp = php$. Since
$\|g\|=\|pgp\|=\|php\|=\|h\|$, we need only show that $g=pgp$ and similarly $h=php$.
Consider $A^{\ast\ast}$ as a unital-subalgebra of a W*-algebra $B$.
Since the canonical projection from $B$ onto $pBp +(1-p)B(1-p)$ is contractive,
and since $\Vert p b p + (1-p) b (1-p) \Vert = \max \{
\Vert p b p \Vert , \Vert (1-p) b (1-p) \Vert \}$
for $b \in B$, it is easy to
argue that
\[
\|g\| \geq \|pgp +(1-p)g(1-p)\| = \|pgp\|+\|(1-p)g(1-p)\|\geq \|g\| .
\]
Hence, $(1-p)g(1-p)=0$. Since $g = pgp + pg(1-p) + (1-p)gp + (1-p)g(1-p)$,
it suffices to show that $pg(1-p)+(1-p)gp=0$. To this end,
we follow the proof in
Proposition 1 of \cite{FR}, which proves the analogous result for JB*-triples.
For the reader's convenience,
we will reproduce this pretty argument in
our setting, adding a few more details.
Of course $B$ is a JB*-triple. We will use the notation
$pBp=B_{2}(p)$, $pB(1-p)+(1-p)Bp =B_{1}(p)$, and $(1-p)B(1-p) =B_{0}(p)$.
For this proof only, we will write $x^{2n+1}$ for $x(x^{\ast}x)^{n}$
(this unusual notation is used in the JB*-triple literature).
In Lemma 1.4 of
\cite{FR}, it is proved that, for $x \in B_{2}(p) \cup B_{0}(p)$,
$y \in B_{1}(p)$, and $t > 0$,
\begin{eqnarray}
\; \; (x+ty)^{3^{n}}
=x^{3^{n}}+t2^{n}D(x^{3^{n-1}},x^{3^{n-1}}) \cdots D(x^{3},x^{3})D(x,x)y+
O(t^{2}), \label{J}
\end{eqnarray}
\noindent where, in our setting, $D(w,w)$ is the operator
$D(w,w)z=(ww^{\ast}z+z w^{\ast} w)/2$ on $B$.
Here, $O(t^{2})$ denotes
a polynomial in $x$, $y$, and $t$,
with all terms at least quadratic in $t$. The number of terms
in this polynomial depends only on $n$, as do
the coefficients of its monomials in $x$, $y$, and $t$.
Choose $y \in B_{1}(p) \cap
A^{\ast\ast}$. We may assume that $\|g\|=1$, $g(y)\geq 0$, and $\|y\|
\leq 1$. Given $\epsilon > 0$, we choose $x
\in D$ with $\|x\|=1$ and $g(x) \geq 1-\epsilon$ (recall that $g = f$ on $D$). Then, for $t > 0$, we
have
\[
\|x+ty\| \geq g(x+ty)=g(x)+t g(y) \geq 1-\epsilon +tg(y).
\]
Thus, by (\ref{J}) above, and the fact that $\|x\| \leq 1$,
\begin{eqnarray*}
(1- \epsilon +tg(y))^{3^{n}} \leq \|x+ty\|^{3^{n}} &=& \|(x+ty)^{3^{n}}\|\\
&\leq & \|x^{3^{n}}\|+t2^{n}\|y\|+ \Vert O(t^{2}) \Vert \\
&\leq & 1 + t2^{n}\|y\|+ p(t) ,
\end{eqnarray*}
where $p(t)$ is a polynomial in $t$ with all terms at least degree 2,
and coefficients which depend only on $n$ and $\Vert y \Vert$.
Letting $\epsilon \rightarrow 0$ we have
$(1 +tg(y))^{3^{n}} \leq 1 + t2^{n}\|y\|+ p(t)$. Since
$(1+s)^{3^{n}} \geq 1 + 3^{n} s$ for $s \geq 0$ (Bernoulli's inequality,
applied with $s = tg(y)$), it follows that
\[
1 + 3^{n}tg(y) \leq
1+t2^{n}\|y\|+ r(t),
\]
where $r(t)$ is a polynomial with the same properties as $p$,
and in particular has all terms at least degree 2.
Dividing by $3^{n} t$, we obtain
\[
g(y) \leq
\left(\frac{2}{3}\right)^{n}\|y\|+\frac{r(t)}{t 3^{n}} .
\]
Letting $t \rightarrow 0$ and then
$n \rightarrow \infty$, we see that $g(y)=0$. Hence
$pg(1-p)+(1-p)gp = 0$ as desired.
\end{proof}
One might hope to improve the previous theorem
to address extensions of completely bounded maps from $D$ into $B(H)$.
Unfortunately, simple examples, such as the one-dimensional
HSA $D$ in $\ell^\infty_2$ supported in the
first entry, with $f : D \to M_2$ taking $(1,0)$ to $E_{11}$,
show that one needs to impose strong restrictions on the
extensions. This two-dimensional example contradicts
several theorems on unique completely
contractive extensions in the literature.
We found the following positive result after reading \cite{ZZ}.
Although some part of it
is somewhat tautological, it may be the best that one could hope for.
To explain the notation in (iii), if $A$ is an
approximately unital operator algebra and $B$ is a unital weak* closed
operator algebra,
then we say that a bounded map $T : A \to B$ is {\em weakly nondegenerate} if
the canonical weak* continuous extension $\tilde{T} : A^{**} \to B$ is unital.
By 1.4.8 in \cite{BLM} for example, this is equivalent to:
$T(e_t) \to 1_B$ weak* for some contractive approximate
identity $(e_t)$ of $A$; and is also equivalent to the same statement
with `some' replaced by `every'.
\begin{proposition} \label{unx} Let $D$ be an
approximately unital subalgebra
of an approximately unital operator algebra $A$.
The following are equivalent:
\begin{itemize}
\item [(i)] $D$ is a hereditary subalgebra of $A$.
\item [(ii)] Every completely contractive unital
map from $D^{**}$ into a unital operator algebra $B$, has
a unique completely contractive unital extension from
$A^{**}$ into $B$.
\item [(iii)] Every completely contractive weakly nondegenerate
map from $D$ into a unital weak* closed operator algebra $B$
has a unique completely contractive weakly nondegenerate extension from $A$
into $B$.
\end{itemize}
\end{proposition}
\begin{proof} We are identifying $D^{**}$ with $D^{\perp \perp}
\subset A^{**}$. Let $e$ be the identity of $D^{**}$.
(ii) $\Rightarrow$ (i) \ If (ii)
holds,
then the identity map on $D^{**}$ extends to a
unital complete contraction $S : A^{**} \to D^{**} \subset e A^{**} e$.
The map $x \mapsto e x e$ on $A^{**}$
is also a completely contractive unital extension
of the inclusion map $D^{**} \to e A^{**} e$.
It follows from the hypothesis that these maps
coincide, and so $e A^{**} e = D^{**}$, which implies
that $D$ is a HSA.
(i) $\Rightarrow$ (ii) \ If $D$ is a HSA, then extensions
of the desired kind exist by virtue of the canonical
projection from $A^{**}$ onto $D^{\perp \perp}$. For the
uniqueness, suppose that $\Phi$ is
such an extension of a completely contractive unital
map $T : D^{\perp \perp} \to B$.
Since $e$ is
an orthogonal projection in $A^{**}$, it follows from
the last remark in 2.6.16 in \cite{BLM} that
$$T(e x e) = \Phi(e x e) = \Phi(e) \Phi(x) \Phi(e) = \Phi(x) , \qquad x \in A^{**}. $$
Hence (ii) holds.
Inspecting the proof above shows that (i) is
equivalent to the variant of (ii) where $B$ is
weak* closed and all maps are
also weak* continuous. Then the equivalence with (iii)
is easy to see using the facts immediately above the
proposition statement, and also the bijective
correspondence between complete contractions
$A \to B$ and weak* continuous complete contractions $A^{**} \to B$
(see 1.4.8 in \cite{BLM}). \end{proof}
\section{APPLICATION: A GENERALIZATION OF $C^*$-MODULES}
In the early 1990's, the first author together with Muhly and Paulsen
generalized Rieffel's strong
Morita equivalence to nonselfadjoint operator algebras \cite{BMP}.
This study was extended to include a
generalization of Hilbert $C^*$-modules to nonselfadjoint algebras,
which were called {\em rigged modules} in \cite{Bghm},
and {\em (P)-modules} in \cite{BMP}. See \cite[Section 11]{Bnat} for
a survey. There are very many
equivalent definitions of these objects in these papers.
The main purpose
of this section is to settle a problem going back to the
early days of this theory. This results in the conceptually
clearest definition of rigged modules; and also tidies
up one of the characterizations of strong Morita equivalence.
The key tool we will use is
the Corollary \ref{propL} to our main theorem from Section 2.
Throughout this section, $A$ is an approximately unital operator algebra.
For a positive integer $n$ we write $C_n(A)$ for
the $n \times 1$ matrices with entries in $A$, which
may be thought of as the first column of the
operator algebra $M_n(A)$.
In our earlier work mentioned above
$C_n(A)$ plays the role of the prototypical
right $A$-rigged module, out of which all others
may be built via `asymptotic factorizations' similar
to the kind considered next.
\begin{definition} \label{rig}
An operator space $Y$ which is
also a right $A$-module is {\em $A$-Hilbertian}
if there exists a net of positive integers $n_\alpha$,
and completely contractive $A$-module maps
$\varphi_\alpha : Y \to C_{n_\alpha}(A)$ and
$\psi_\alpha : C_{n_\alpha}(A) \to Y$,
such that $\psi_\alpha \,
\varphi_\alpha \to I_Y$ strongly on $Y$.
\end{definition}
The name `$A$-Hilbertian' is due to Paulsen around 1992, who
suggested that these modules should play an important
role in the Morita theory.
A few years later
the question became
whether they coincide with the
rigged modules/(P)-modules from \cite{Bghm,BMP}.
This question appears explicitly in \cite[Section 11]{Bnat} for example,
and was discussed also several times
in \cite[Chapter 4]{BMP} in terms of the
necessity of adding further conditions
to what we called the `approximate identity property'.
Assuming for simplicity that $A$ is unital, one of the many equivalent
definitions of rigged modules is that they are the
modules satisfying Definition \ref{rig},
but that in addition $\varphi_\beta \psi_\alpha \,
\varphi_\alpha \to \varphi_\beta$ in norm for
each fixed $\beta$ in the directed set. We were
not able to get the theory going without this extra
condition. Thus
the open question referred to above may be restated
as follows: can one always replace
the given nets in Definition \ref{rig} with
ones which satisfy this additional condition?
The first author proved this if $A$ is a $C^*$-algebra in \cite{Bna}; indeed
$A$-Hilbertian modules coincide with $C^*$-modules if $A$ is
a $C^*$-algebra. A simpler proof of this
result due to Kirchberg is included in \cite[Theorem 4.24]{BMP}.
Although the `asymptotic factorization' in the definition above
is clean, it can sometimes
be clumsy to work with, as is somewhat illustrated
by the proof of the next result.
\begin{proposition} \label{prol} Let $Y$ be an operator space
and right $A$-module, such that
there exists a net of positive integers $n_\alpha$,
$A$-Hilbertian modules $Y_\alpha$, and completely contractive $A$-module maps
$\varphi_\alpha : Y \to Y_\alpha$ and
$\psi_\alpha : Y_\alpha \to Y$, such that $\psi_\alpha \,
\varphi_\alpha \to I_Y$ strongly. Then $Y$ is
$A$-Hilbertian.
\end{proposition}
\begin{proof}
We use a net reindexing argument based on \cite[Lemma 2.1]{Bghm}.
Suppose that
$\sigma^\alpha_\beta : Y_\alpha \to Z^\alpha_{\beta}$
and $\tau^\alpha_\beta : Z^\alpha_{\beta} \to Y_\alpha$,
are the `asymptotic factorization' nets corresponding to $Y_\alpha$.
We define a new directed set
$\Gamma$ consisting of 4-tuples
$\gamma = (\alpha, \beta, V, \epsilon)$,
where $V$ is a finite subset of $Y$, $\epsilon > 0$,
and such that $$\Vert \psi_\alpha \tau^\alpha_\beta \sigma^\alpha_\beta
\varphi_\alpha(y) - \psi_\alpha \varphi_\alpha(y) \Vert < \epsilon ,
\qquad y \in V .$$
This is a directed set with ordering
$(\alpha, \beta, V, \epsilon) \leq (\alpha',\beta',V',\epsilon')$
iff $\alpha \leq \alpha', V \subset V'$ and $\epsilon' \leq \epsilon$.
(We recall that directed sets for nets make no essential
use of the `antisymmetry' condition for the ordering, and we follow many
authors in not requiring this.)
Define $\varphi^\gamma = \sigma^\alpha_\beta \, \varphi_\alpha$
and $\psi^\gamma = \psi_\alpha \tau^\alpha_\beta$, if
$\gamma = (\alpha, \beta, V, \epsilon)$. Given $y \in Y$
and $\epsilon > 0$, choose $\alpha_0$ such that
$\Vert \psi_\alpha \, \varphi_\alpha (y) - y \Vert < \epsilon$ whenever
$\alpha \geq \alpha_0$. Choose $\beta_0$ such that
$\gamma_0 = (\alpha_0, \beta_0, \{ y \} , \epsilon) \in \Gamma$.
If $\gamma \geq \gamma_0$ in $\Gamma$ then
$$\Vert \psi^\gamma \, \varphi^\gamma(y) - y \Vert
\leq \Vert \psi_\alpha \tau^\alpha_\beta \sigma^\alpha_\beta
\varphi_\alpha(y) - \psi_\alpha \varphi_\alpha(y) \Vert
+ \Vert \psi_\alpha \varphi_\alpha(y) - y \Vert < \epsilon' + \epsilon
\leq 2 \epsilon .$$
Thus $\psi^\gamma \,
\varphi^\gamma(y) \to y$, and so $Y$ is
$A$-Hilbertian.
\end{proof}
{\bf Remark.} If desired, the appearance of the integers $n_\alpha$ in Definition
\ref{rig} may be avoided by the following trick.
Let $C_\infty(A)$ be the space of columns $[x_k]_{k \in \Ndb}$, with $x_k \in A$,
such that $\sum_k \, x^*_k x_k$ converges in $A$.
It is easy to see that $C_\infty(A)$ is $A$-Hilbertian, and for any
$m \in \Ndb$ there is an obvious factorization of the identity map
on $C_m(A)$ through $C_\infty(A)$.
It follows from this, and from the last Proposition, that Definition
\ref{rig} will be unchanged if all occurrences
of $n_\alpha$ there are replaced by $\infty$.
\begin{theorem} \label{main} An operator space $Y$ which is
also a right $A$-module is a
rigged $A$-module if and only if it is $A$-Hilbertian.
This is also equivalent to $Y$ having
the `approximate identity property' of
{\rm \cite[Definition 4.6]{BMP}}. \end{theorem}
\begin{proof} Suppose that $Y$ is an operator space and a
right $A$-module which is $A$-Hilbertian.
It is easy to see that $Y$ is
an
operator $A$-module, since the $C_{n_{\alpha}}(A)$ are,
and since $$\Vert [y_{ij}] \Vert
= \sup_\alpha \Vert [ \varphi_\alpha(y_{ij}) ]
\Vert = \lim_\alpha \Vert [ \varphi_\alpha(y_{ij}) ] \Vert
, \qquad [y_{ij}] \in M_n(Y) .$$
If
$(e_t)$ is a cai for $A$ then the triangle inequality easily
yields that for any $\alpha$,
\begin{eqnarray*}
\Vert y - y e_t \Vert & = & \Vert y - \psi_\alpha \varphi_\alpha(y)
+ \psi_\alpha(\varphi_\alpha(y) - \varphi_\alpha(y) e_t)
+ (\psi_\alpha \varphi_\alpha(y) - y) e_t \Vert \\
& \leq & 2
\Vert y - \psi_\alpha \varphi_\alpha(y) \Vert
+ \Vert \varphi_\alpha(y) - \varphi_\alpha(y) e_t \Vert ,
\end{eqnarray*}
from which the nondegeneracy is easily seen.
Next, we reduce to the unital case.
Let $B = A^1$, the unitization of $A$. Note that
$A$ is $B$-Hilbertian: for the maps
$A \to B$ and
$B \to A$ one may take, respectively, the
inclusion, and left multiplications by elements of the cai
$(e_t)$.
Tensoring these maps with the identity map on
$C_{n_{\alpha}}$, we see that
$C_{n_{\alpha}}(A)$ is $B$-Hilbertian.
By Proposition \ref{prol},
$Y$ is $B$-Hilbertian.
By \cite[Proposition 2.5]{Bghm} it is easy to see that
$Y$ satisfies (the right module variant of)
\cite[Definition 4.6 (ii)]{BMP}.
By the results following that definition
we have that
$C = Y \otimes_{hB} CB_B(Y,B)$ is
a closed right ideal in $CB_B(Y)$ which has a
left cai. By
\cite[Theorem 2.7]{Bghm} or \cite[Theorem 4.9]{BMP}
we know that $CB_B(Y)$ is a unital operator
algebra.
By Corollary \ref{propL},
$C$ possesses a left cai $(v_\beta)$ such that
$v_{\gamma} v_\beta \to v_{\gamma}$ in $\beta$, for
each $\gamma$. Let $D = \{ a \in C : a v_\beta \to a
\}$, which is an operator algebra with cai.
Since the
uncompleted algebraic tensor product $J$ of $Y$ with
$CB_B(Y,B)$ is
a dense ideal in $C$, and since
$J v_{\gamma} \subset D$ for each $\gamma$, it is
easy to see by the triangle inequality
that $J \cap D$ is a
dense ideal in $D$. Thus we can rechoose a cai $(u_\nu)$
for $D$ from this ideal, if necessary by using
\cite[Lemma 2.1]{Bghm}. This
cai will be a left cai for $C$ (e.g.\ see proof
of
Corollary \ref{propL}). This implies that
$Y$ satisfies (a), and hence also (b), of
\cite[Definition 4.12]{BMP}. That is,
$Y$ is a (P)-module, or equivalently a
rigged module, over $B$. It is known that
this implies that $Y$ is $A$-rigged. One way to see this
is to observe that by an application
of Cohen's
factorization theorem as in \cite[Lemma 8.5.2]{BLM}, we have
$B_B(Y,B) = B_A(Y,A)$.
It follows that $Y$ satisfies \cite[Definition 4.12]{BMP}
as an $A$-module, and hence
$Y$ is an $A$-rigged module.
That every rigged module
is $A$-Hilbertian follows from \cite[Definition 3.1]{Bghm}.
The equivalence with the `approximate identity property'
is essentially contained in the above argument.
\end{proof}
This theorem
affects only a small portion
of \cite{Bghm}: that paper may now be improved
by replacing Definition 3.1 there by the modules
in Definition \ref{rig} above, and by tidying up
some of the surrounding exposition.
One may also now give alternative constructions of, for example,
the interior tensor product of rigged modules, by following
the idea in \cite[Theorem 8.2.11]{BLM}.
Similarly, one may now tidy up
one of the characterizations of strong Morita equivalence.
By the above, what was called the `approximate identity
property' in \cite[Chapter 4]{BMP} implies that the module
is a (P)-module, and so in Theorems 4.21 and 4.23 in
\cite{BMP} one may
replace conditions from Definition 4.12 with those in 4.6.
That is, we have the following improved
characterization of the strong Morita equivalence of \cite{BMP}.
(The reader needing further details is referred to that source.)
\begin{theorem} \label{morth} If $Y$ is a right $A$-Hilbertian
module with the `dual approximate identity property' of \cite[Definition 4.18]{BMP},
then $Y$ implements a strong Morita equivalence between $A$
and the algebra $\Kdb_A(Y)$ of so-called `compact' operators on $Y$.
Conversely, every strong Morita equivalence
arises in such a way. \end{theorem}
\medskip
{\bf Remark.} The `dual approximate identity property'
mentioned in the theorem may also be phrased in terms of
`asymptotic factorization'
of $I_A$ through spaces of the form $C_m(Y)$---this is
mentioned in \cite[p.\ 416]{Bghm} with a mistake that is
discussed in \cite[Remark 4.20]{BMP}.
\medskip
We refer the reader to \cite{Bghm} for the theory of rigged modules.
It is easy to see using Corollary \ref{her7}
or Theorem \ref{main},
that any hereditary subalgebra $D$ of an approximately unital
operator algebra $A$ gives rise to a rigged module.
Indeed, if $J = D A$, then
$J$ is a right rigged $A$-module, the
canonical dual rigged module $\tilde{J}$ is just the
matching
$\ell$-ideal $A D$, and the
operator algebra $\Kdb_A(J)$ of `compact operators' on $J$ is
completely isometrically isomorphic to $D$.
From the theory of rigged modules \cite{Bghm} we know for example that
any completely contractive representation of $A$ induces
a completely contractive representation of $D$,
and vice versa. More
generally, any left operator $A$-module will give rise
to a left operator $D$-module by left tensoring with $J$,
and vice versa by left tensoring with $\tilde{J}$. Since
$J \otimes_{hA} \tilde{J} = D$ it follows that there is
an `injective' (but not in general `surjective') functor from
$D$-modules to $A$-modules.
If $A$ and $B$ are approximately unital
operator algebras which are strongly Morita
equivalent in the sense of \cite{BMP},
then $A$ and $B$ will clearly be hereditary subalgebras of the
`linking operator algebra' associated with the
Morita equivalence \cite{BMP}.
Unfortunately, unlike the $C^*$-algebra case,
not every HSA $D$ of an operator algebra $A$
need be strongly Morita equivalent
to $A D A$. One would also need a condition similar to
that of \cite[Definition 5.10]{BMP}. Assuming the presence
of such an extra condition, it follows that the representation
theory for the algebra $A$ is `the same' as the representation
theory of $D$; as is always the case if one has a
Morita equivalence.
\medskip
{\bf Example.}
If $a \in {\rm Ball}(A)$, for an
operator algebra $A$, let $D$ be the
closure of $(1-a) A (1-a)$. Then it follows from
the later Lemma \ref{wilp} that $D$ is a hereditary subalgebra
of $A$. The associated r-ideal is $J$, the
closure of $(1-a) A$. The dual rigged module
$\tilde{J}$ is equal to the
closure of $A (1-a)$, and $\Kdb_A(J) \cong D.$
It is easy to check that even for examples of this kind,
the $C^*$-algebra $C^*(D)$ generated by $D$
need not be a hereditary $C^*$-subalgebra
of $C^*(A)$ or $C^*_e(A)$. For example, take $A$
to be the subalgebra of $M_2(B(H))$ consisting of all
matrices whose 1-1 and 2-2 entries
are scalar multiples of $I_H$, and whose 2-1 entry is $0$.
Let $a = 0_H \oplus I_H$.
In this case $D = (1-a) A (1-a)$ is one dimensional, and it
is not a HSA of $C^*(A)$. Also $D$ is
not strongly Morita equivalent to $A D A$.
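To sketch the computations behind this example (assuming here that ${\rm dim}(H) > 1$): a generic element of $A$ is $x = \left[ \begin{array}{cc} \lambda I_H & T \\ 0 & \mu I_H \end{array} \right]$ for scalars $\lambda , \mu$ and $T \in B(H)$, and $1 - a = I_H \oplus 0_H$. Thus
$$(1-a) \, x \, (1-a) = \lambda \, (1-a) ,$$
so that $D$ is the span of the projection $1-a$. On the other hand, since $\left[ \begin{array}{cc} 0 & T \\ 0 & 0 \end{array} \right] \left[ \begin{array}{cc} 0 & S \\ 0 & 0 \end{array} \right]^* = \left[ \begin{array}{cc} T S^* & 0 \\ 0 & 0 \end{array} \right]$, the set $(1-a) \, C^*(A) \, (1-a)$ contains a copy of all of $B(H)$, which is not contained in the one dimensional algebra $D$; so $D$ is not hereditary in $C^*(A)$.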
\section{CLOSED FACES AND LOWER SEMICONTINUITY}
Suppose that $A$ is an approximately unital operator algebra.
The state space
$S(A)$ is the set of functionals $\varphi \in A^*$ such that
$\Vert \varphi \Vert = \lim_t \varphi(e_t) = 1$,
if $(e_t)$ is a cai for $A$. These are
all restrictions to $A$ of states on any $C^*$-algebra generated
by $A$.
If $p$ is
a projection in $A^{**}$, then any $\varphi \in S(A)$ may be
thought of as a state on $A^{**}$, and hence
$p(\varphi) \geq 0$. Thus $p$ gives a
nonnegative scalar function on $S(A)$, or on
the quasistate space (that is, $\{ \alpha \varphi :
0 \leq \alpha \leq 1 , \varphi \in S(A) \}$).
We shall see that this function is
lower semicontinuous if and only if $p$ is open in $A^{**}$.
In the following generalization of a well known result from
the $C^*$-algebra theory \cite{Ped}, we assume for simplicity
that $A$ is unital.
If $A$ is only approximately unital then a similar result holds
with a similar proof, but one must use the quasistate space, which
is weak* compact, in place of $S(A)$.
\begin{theorem} \label{her11} Suppose that
$A$ is a unital-subalgebra of a $C^*$-algebra
$B$.
If $p$ is a projection in $A^{\perp \perp} \cong A^{**}$,
then the following are equivalent:
\begin{itemize}
\item [(i)] $p$ is open as a projection in $B^{**}$ (or,
equivalently, in $A^{**}$).
\item [(ii)]
The set $F_p = \{ \varphi \in S(A) : \varphi(p) = 0 \}$
is a weak* closed face in $S(A)$.
\item [(iii)] $p$ is lower semicontinuous on $S(A)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii) \
For any projection $p \in A^{\perp \perp}$, the set
$F_p$ is a face in $S(A)$. For if $\psi_i \in S(A)$, $t \in [0,1]$,
and $t \psi_1 + (1-t) \psi_2 \in F_p$ then $t \psi_1(p) +
(1-t) \psi_2(p) = 0$ which forces $\psi_1(p) = \psi_2(p) = 0$,
and $\psi_i \in F_p$.
If $p$ is open then $G_p = \{ \varphi \in S(B): \varphi(p) = 0 \}$
is a weak* compact face in $S(B)$ by \cite[3.11.9]{Ped}.
The restriction map $r : \varphi \in S(B) \mapsto
\varphi_{|A} \in S(A)$ is weak* continuous,
and maps $G_p$ into $F_p$. On the other hand, if $\varphi \in F_p$
and $\hat{\varphi}$ is a Hahn-Banach extension of $\varphi$ to $B$
then one can show that $\langle p , \varphi \rangle
= \langle p , \hat{\varphi} \rangle$, and so the map $r$ above
maps $G_p$ onto $F_p$. Hence $F_p$ is weak* closed.

(ii) $\Rightarrow$ (i) \ We use the notation of the
last paragraph. If $F_p$ is weak* closed, then
the inverse image of $F_p$ under $r$ is weak* closed.
But this inverse image is $G_p$, since
if $\varphi \in S(B)$ then $\langle p , \varphi \rangle
= \langle p , r(\varphi) \rangle$ by a fact in the
last paragraph. Thus by \cite[3.11.9]{Ped} we have (i).

(i) $\Rightarrow$ (iii) \ If $p$ is open,
then $p$ is a lower semicontinuous function on $S(B)$.
Thus $\{ \varphi \in S(B) : \langle p , \varphi
\rangle \leq t \}$ is weak* compact for any $t \geq 0$.
Hence its image under the map $r$ above is
weak* closed in $S(A)$. However, as in the above, this
image is $\{ \varphi \in S(A) : \langle p , \varphi \rangle \leq t \}$.
Thus $p$ is lower semicontinuous on $S(A)$.

(iii) $\Rightarrow$ (i) \
If $p$ gives a lower semicontinuous function on $S(A)$,
then the composition of this function with $r :
S(B) \to S(A)$ is lower semicontinuous on $S(B)$.
By facts in \cite[p.\ 77]{Ped}, we have that $p$ is open.
\end{proof}
{\bf Remark.}
Not all weak* closed faces of $S(A)$ are
of the form in (ii) above. For example, let $A$ be the
algebra of $2 \times 2$ upper triangular matrices with
constant diagonal entries. In this case $S(A)$ may be
parametrized by complex numbers $z$ in a closed disk of
a certain radius centered at the origin. Indeed states
are determined precisely by the assignment $e_{12} \mapsto z$.
The faces of $S(A)$ thus include the faces corresponding to
singleton sets of points on the
boundary circle; and none of these faces equal $F_p$
for a projection $p \in A = A^{\perp \perp}$.
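For the reader's convenience, here is a sketch of this parametrization, from which one sees that the radius in question is $\frac{1}{2}$. Here $C^*(A) = M_2$, so states of $A$ are restrictions of states of $M_2$, that is, of functionals $x \mapsto {\rm Tr}(\rho x)$ for a positive matrix $\rho$ of trace $1$. Such a functional sends $e_{12} \mapsto {\rm Tr}(\rho e_{12}) = \rho_{21} = z$, with
$$|z| \, \leq \, \sqrt{\rho_{11} \rho_{22}} \, \leq \, \frac{\rho_{11} + \rho_{22}}{2} \, = \, \frac{1}{2} ,$$
and every $z$ with $|z| \leq \frac{1}{2}$ is attained, for example by $\rho = \frac{1}{2} \left[ \begin{array}{cc} 1 & \bar{w} \\ w & 1 \end{array} \right]$ with $w = 2 z$.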
\medskip
In view of the classical situation,
it is natural to ask about the relation between
minimal closed projections in $B^{**}$ which
lie in $A^{\perp \perp}$ and
the noncommutative Shilov boundary mentioned in
the introduction. By the universal property
of the latter object,
if $B$ is generated as a
$C^*$-algebra by its subalgebra $A$, then there is
a canonical
$*$-epimorphism $\theta$ from $B$ onto the
noncommutative Shilov boundary of $A$, which in
this case is a $C^*$-algebra.
The kernel of $\theta$ is called
(Arveson's) {\em Shilov boundary ideal}
for $A$. See e.g.\ \cite{SOC} and the third remark in
\cite[4.3.2]{BLM}.
\begin{proposition} \label{min} If $B$ is generated as a
$C^*$-algebra by a closed unital-subalgebra
$A$, let $p$ be the open central
projection in $B^{**}$ corresponding to the Shilov ideal
for $A$. Then $p^\perp$ dominates all minimal
projections in $B^{**}$ which lie in $A^{\perp \perp}$.
\end{proposition}
\begin{proof} Suppose that $q$ is a minimal
projection in $B^{**}$ which lies in $A^{\perp \perp}$.
Then either $q p = 0$ or $q p = q$.
Suppose that $q p = q$.
If $\theta$ is as above,
then since $\theta$ annihilates
the Shilov ideal we have $$\theta^{**}(q) =
\theta^{**}(q p) = \theta^{**}(q) \theta^{**}(p) =
0 .$$
On the other hand, $\theta$ is a complete isometry from
the copy of $A$ in $B$ to the copy of $A$ in $\theta(B)$,
and so $\theta^{**}$ restricts to a complete isometry on
$A^{\perp \perp}$.
Thus $q p = 0$, so that $q = q p^\perp$ and $q \leq p^\perp$.
\end{proof}
{\bf Example.}
The sup of closed projections in $A^{**}$ which are also minimal
projections in $B^{**}$ need not
give the `noncommutative Shilov boundary'.
Indeed if $A$ is the $2 \times 2$ upper triangular
matrices with constant main diagonal entries, then
there are
no nonzero minimal projections in $M_2$ which lie in $A$.
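Indeed, a generic element of $A$ is $\lambda 1 + \mu e_{12}$, and
$$(\lambda 1 + \mu e_{12})^* = \bar{\lambda} 1 + \bar{\mu} e_{21} , \qquad
(\lambda 1 + \mu e_{12})^2 = \lambda^2 1 + 2 \lambda \mu e_{12} ,$$
so a projection of $M_2$ lying in $A$ forces $\mu = 0$ and $\lambda \in \{ 0 , 1 \}$. Thus the only projections of $M_2$ lying in $A$ are $0$ and $1$, and $1$ is not minimal in $M_2$.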
\section{HEREDITARY $M$-IDEALS}
A {\em left $M$-projection} of an operator space $X$ is a projection
in the $C^*$-algebra of (left) adjointable
maps on $X$; and the latter may be viewed as the restrictions
of adjointable right module maps on a $C^*$-module
containing $X$
(see e.g.\ Theorem 4.5.15 and Section 8.4 in
\cite{BLM}). This $C^*$-module can be taken to be
the ternary envelope of $X$. The range of a left $M$-projection
is a {\em right $M$-summand} of $X$.
A {\em right $M$-ideal}
of an operator space $X$ is a subspace $J$ such that
$J^{\perp \perp}$ is a right $M$-summand of $X^{**}$.
The following result from \cite{BEZ} has been
sharpened in the summand case:
\begin{proposition} \label{lma}
If $A$ is an approximately unital operator algebra,
then the left $M$-projections on $A$ are precisely
`left multiplications' by projections in the multiplier algebra $M(A)$.
Such projections are all open in $A^{**}$.
The right $M$-summands of $A$ are thus the spaces $p A$
for a projection $p \in M(A)$.
The right $M$-ideals of $A$ coincide with the r-ideals of $A$.
\end{proposition}
\begin{proof}
We claim that if $p$ is a projection
(or more generally, any hermitian) in the
left multiplier algebra $LM(A)$,
then $p \in M(A)$. Suppose that $B$ is a $C^*$-algebra generated by $A$,
and view $LM(A) \subset A^{\perp \perp} \subset B^{**}$. If $a \in A$
and if $(e_t)$ is a cai for $A$, then
by \cite[Lemma 2.1.6]{BLM} we have $p a^* = \lim_t p e_t a^* \in B$.
Thus $p$ is a selfadjoint element of $LM(B)$,
and so $p \in M(B)$. Thus $A p \subset
B \cap A^{\perp \perp} = A$, and so $p \in M(A)$. Hence $p$ is
open as remarked early in Section 2.
The remaining assertions follow from \cite[Proposition 6.4]{BEZ}.
\end{proof}
The $M$-ideals of a unital operator algebra
are the approximately unital two-sided ideals
\cite{EfR2}.
In this case these coincide with
the {\em complete $M$-ideals} of \cite{EfR}, which are
shown in \cite{BEZ} to be just the right $M$-ideals
which are also left $M$-ideals. See e.g.\
\cite[Section 7]{BZ} for more information on these.
The HSA's of $C^*$-algebras are just the selfadjoint
inner ideals, as remarked in the
introduction; or equivalently, as we shall see
below, they are the
selfadjoint `quasi-$M$-ideals'.
With the above facts in mind, it is
tempting to try to extend some of our results for ideals and
hereditary algebras to general $M$-ideals, be they
one-sided, two-sided, or `quasi'. A first step along these
lines is motivated by the fact, which we have explored
in Theorem \ref{cher1} and in \cite{H}, that
r-ideals in an operator algebra
$A$ are closely tied to a matching right ideal
in a $C^*$-algebra $B$ containing $A$. We will show
that a general (one-sided, two-sided, or `quasi')
$M$-ideal in an arbitrary operator space $X$ is
the intersection of $X$ with
the same variety of $M$-ideal in any $C^*$-algebra
or TRO containing $X$. This generalizes a well known
fact about $M$-ideals in subspaces of $C(K)$ spaces
(see \cite[Proposition I.1.18]{HWW}).
For an operator space $X$,
Kaneda proposed in \cite{Kun} a {\em quasi-$M$-ideal} of $X$ to be a
subspace $J \subset X$ such that $J^{\perp \perp} = p X^{**} q$
for respectively left and right $M$-projections $p$
and $q$ of $X^{**}$. Right (resp.\ two-sided, `quasi')
$M$-ideals of a TRO or $C^*$-module are exactly the
right submodules (resp.\ subbimodules, inner ideals).
See e.g.\ \cite[p.\ 339]{BLM} and \cite{EMR,ER2}.
Here, by an inner ideal of a TRO $Z$ we mean a subspace
$J $ with $J Z^* J \subset J$. The assertion here
that they coincide with the quasi $M$-ideals of $Z$
follows immediately from
Edwards and R\"uttimann's characterization of
weak* closed TRO's. Indeed
if $J$ is an inner ideal of $Z$, then so is
$J^{\perp \perp}$; hence \cite{ER2} gives that $J^{\perp \perp}$
is of the desired form $p Z^{**} q$. The other direction
follows by reversing this argument (it also may be seen as a
trivial case of Theorem \ref{csa} below).
In fact Kaneda has considered the quasi-$M$-ideals of an
approximately unital operator algebra $A$ in
the unpublished work \cite{Kun}. What we will need from it is
the following argument: If $J \subset A$ is a
quasi-$M$-ideal, then by Proposition \ref{lma} it is clear
that there exist projections $p, q \in A^{**}$
such that $J^{\perp \perp} = p A^{**} q$. Thus
$J = p A^{**} q \cap A$.
\begin{proposition} \label{qu1} The hereditary subalgebras of
an approximately unital operator algebra $A$ are precisely the
approximately unital quasi-$M$-ideals.
\end{proposition}
\begin{proof}
If $J \subset A$ is a
quasi-$M$-ideal, then as we stated above,
there exist projections $p, q \in A^{**}$
such that $J^{\perp \perp} = p A^{**} q$, and
$J = p A^{**} q \cap A$. If this is
approximately unital then by \cite[Proposition 2.5.8]{BLM}
$p A^{**} q$ contains a projection $e$ which is
the identity of $p A^{**} q$. Since $e = p e q$ we have
$e \leq p$ and $e \leq q$. So $p A^{**} q = e p A^{**} q e
= e A^{**} e$. Thus $J = e A^{**} e \cap A$, which is
a HSA. Conversely, if $D$ is a HSA with support projection $p$, then
$D^{\perp \perp} = p A^{**} p$, and so
$D$ is a quasi-$M$-ideal.
\end{proof}
If ${\mathcal S}$ is a subset of a TRO we write
$\langle {\mathcal S} \rangle$ for the subTRO
generated by ${\mathcal S}$. We write $\widehat{ \phantom{.} } \,$
for the canonical map from a space into its second dual.
\begin{lemma} \label{stros} If $X$ is an operator space, and
if $({\mathcal T}(X^{**}),j)$ is a ternary envelope of
$X^{**}$, then $\langle j(\hat{X}) \rangle$ is a ternary envelope of $X$.
\end{lemma}
\begin{proof}
This follows from a diagram chase. Suppose that $i : X \to W$ is
a complete isometry into a TRO, such that $\langle i(X) \rangle = W$.
Then $i^{**} : X^{**} \to W^{**}$ is
a complete isometry. By the universal property of the
ternary envelope, there is a ternary morphism
$\theta : \langle i^{**}(X^{**}) \rangle \to {\mathcal T}(X^{**})$
such that $\theta \circ i^{**} = j$. Now $W$ may also be regarded
as the subTRO of $W^{**}$ generated by $i(X)$,
and the restriction $\pi$ of
$\theta$ to $W = \langle i(X) \rangle$ is a
ternary morphism into ${\mathcal T}(X^{**})$ which has the
property that $\pi(i(x)) = j(\hat{x})$. Thus
$\langle j(\hat{X}) \rangle$ has the universal property of the
ternary envelope.
\end{proof}
\begin{theorem} \label{csa} Suppose that $X$ is a subspace of a TRO $Z$
and that $J$ is a right $M$-ideal (resp.\ quasi $M$-ideal,
complete $M$-ideal) of $X$.
In the `complete $M$-ideal' case we also assume that $\langle X \rangle
= Z$.
Then $J$ is the intersection of $X$
with the right $M$-ideal (resp.\ quasi $M$-ideal,
complete $M$-ideal) $J Z^* Z$ (resp.\ $J Z^* J$,
$Z J^* Z$) of $Z$. \end{theorem}
\begin{proof}
There are three steps. We will also use the fact
that in a TRO $Z$, for any $z \in Z$ we have that $z$ lies in the
closure of $z \langle z \rangle^* z$.
This follows by considering the polar decomposition
$z = u |z|$, which implies that
$z z^* z = u |z|^3$, for example. Then use
the functional calculus for $|z|$, and the
fact that one may approximate the monomial
$t$ by polynomials in $t$ with only odd powers and degree $\geq 3$.
Similarly, $z$ lies in the
closure of $\langle z \rangle z^* \langle z \rangle$.
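The first of these standard facts may be displayed as follows (using M\"untz's theorem, say, for the density assertion). Since $(z^* z)^{n-1} z^* = \bigl( z \, (z^* z)^{n-1} \bigr)^* \in \langle z \rangle^*$, we have
$$z \, (z^* z)^n \, = \, u \, |z|^{2n+1} \, \in \, z \, \langle z \rangle^* \, z ,
\qquad n \geq 1 .$$
By M\"untz's theorem (applicable since $\sum_k \, (2k+1)^{-1} = \infty$), the linear span of the odd powers $t^3 , t^5 , t^7, \ldots$ is dense in $\{ f \in C([0, \Vert z \Vert]) : f(0) = 0 \}$; taking $f(t) = t$ and applying the functional calculus for $|z|$ gives $z = u |z| \in \overline{z \, \langle z \rangle^* \, z}$.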

First, suppose that $Z$
is the ternary envelope ${\mathcal T}(X)$ of $X$.
Suppose that $J$ is a right $M$-ideal (resp.\
quasi-$M$-ideal, complete $M$-ideal)
in $X$.
If $({\mathcal T}(X^{**}),j)$ is a ternary envelope of
$X^{**}$, then $j(J^{\perp \perp}) = p j(X^{**})$ for
a left adjointable projection $p$ (resp.\ $j(J^{\perp \perp}) = p j(X^{**}) q$
for left/right adjointable projections $p, q$) on ${\mathcal T}(X^{**})$.
In the complete $M$-ideal case we have $p w = w q$ for all
$w \in {\mathcal T}(X^{**})$; this follows from e.g.\
\cite[Theorem 7.4 (vi)]{BZ} and its proof.
We view ${\mathcal T}(X) \subset {\mathcal T}(X^{**})$ as above.
Let $\tilde{J}$ be the set of
$z \in {\mathcal T}(X)$ such that $p z = z$ (resp.\ $z = p z q$).
Then $\tilde{J} \cap j(\hat{X}) = j(\hat{J})$,
since $J = J^{\perp \perp} \cap X$.
Next, define $\bar{J} = j(\hat{J}) {\mathcal T}(X)^* {\mathcal T}(X)$
(resp.\ $= j(\hat{J}) {\mathcal T}(X)^* j(\hat{J})$,
$= {\mathcal T}(X) j(\hat{J})^* {\mathcal T}(X))$. This
is a right $M$-ideal (resp.\ inner ideal,
$M$-ideal) in ${\mathcal T}(X)$,
and it is clear,
using the fact in the first paragraph of the proof, that
$$j(\hat{J}) \subset \bar{J} \cap j(\hat{X}) \subset \tilde{J}
\cap j(\hat{X}) = j(\hat{J}) .$$
Thus $J = \bar{J} \cap X$.
In the rest of the proof we consider
only the quasi $M$-ideal case; the others are similar.

Second, suppose that $X$ generates $Z$ as a TRO.
Let $j : X \to {\mathcal T}(X)$ be the Shilov embedding.
If $x \in (J Z^* J) \cap X$ then applying the
universal property of ${\mathcal T}(X)$
there exists a ternary morphism $\theta : Z \to {\mathcal T}(X)$
with
$$j(x) = \theta(x) \in j(X) \cap \theta(J) \theta(Z)^* \theta(J) \subset
j(X) \cap j(J) {\mathcal T}(X)^* j(J) =
j(X) \cap \bar{J} = j(J) ,$$ by the
last paragraph. Hence $x \in J$.

Third, suppose that $X \subset Z$, and that the subTRO
$W$ generated by $X$ in $Z$ is not $Z$. We claim that
$X \cap (J Z^* J) = X \cap (J W^* J)$.
To see this, we set $J' = J W^* J$. This
is an inner ideal in $W$. Moreover, $J \subset J'$ by the
fact at the start of the proof.
We claim that for any inner ideal $K$ in $W$,
we have $(K Z^* K) \cap W = K$. Indeed if $e$ and $f$ are
the support projections for $K$, then
$$(K Z^* K) \cap W \subset (e Z f) \cap W \subset (e W f) \cap W = K .$$
This implies
$$X \cap (J Z^* J) \subset X \cap (J' Z^* J')
\subset X \cap J' = X \cap (J W^* J) = J,$$
as required.
\end{proof}
\section{REMARKS ON PEAK AND $p$-PROJECTIONS}
Let $A$ be a unital-subalgebra of a $C^*$-algebra $B$.
We recall from \cite{H} that a {\em peak projection} $q$ for $A$ is
a closed projection in $B^{**}$, such that there exists
an $a \in {\rm Ball}(A)$ with $q a = q$ and
satisfying any one of a long list
of equivalent conditions; for example $\Vert a r \Vert < 1$ for
every closed projection $r \leq q^\perp$.
We say that $a$ peaks at $q$. A {\em $p$-projection}
is an infimum of peak projections;
and this is equivalent to it being a
weak* limit of a decreasing net of peak projections
by \cite[Proposition 5.6]{H}. Every
$p$-projection is an {\em approximate $p$-projection},
where the latter term means
a closed projection in $A^{**}$. The most glaring open problem
concerning these projections is whether the converse holds,
that is, whether every approximate $p$-projection is a $p$-projection,
as is the case in the classical setting of function algebras
\cite{Gam}. Motivated partly by this question,
in this section we offer several
results concerning these projections.
Our next result implies that
this question is equivalent to the following simple-sounding question:
\smallskip
{\em Question:} Does every approximately unital operator algebra $A$
have an approximate identity
of form $(1- x_t)$ with $x_t \in {\rm Ball}(A^1)$?
Here $1$ is the identity of the unitization
$A^1$ of $A$.
\medskip
Equivalently, does every operator algebra $A$
with a left cai have a left cai of the form $(1- x_t)$ for $x_t \in {\rm Ball}(A^1)$?
\medskip
By a routine argument,
these are also equivalent to: If $A$ is an approximately unital operator algebra
and $a_1, \cdots , a_n \in A$ and
$\epsilon > 0$, does there exist $x \in {\rm Ball}(A^1)$ with
$1-x \in A$ and $\Vert
x a_k \Vert < \epsilon$ for all $k = 1, \cdots, n$?
\medskip
Note that if these were true, and if $A$ does not have an identity,
then necessarily $\Vert x_t \Vert = 1$. For if $\Vert x_t \Vert < 1$
then $1- x_t$ is invertible in $A^1$, so that
$1 \in A$.
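In more detail: if $\Vert x_t \Vert < 1$ then the Neumann series gives $(1 - x_t)^{-1} = 1 + b$, where $b = \sum_{n \geq 1} \, x_t^n \in A^1$, so that
$$1 \, = \, (1 - x_t)(1 + b) \, = \, (1 - x_t) + (1 - x_t) \, b \, \in \, A ,$$
using that $1 - x_t \in A$ and that $A$ is an ideal in $A^1$.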
\begin{theorem} \label{one} If $J$ is a closed subspace of
a unital operator algebra $A$, then
the following are equivalent:
\begin{itemize}
\item [(i)] $J$ is a right ideal with a left approximate identity
(resp.\ a HSA with approximate identity) of
the form $(1- x_t)$ for $x_t \in {\rm Ball}(A)$.
\item [(ii)] $J$ is an r-ideal
(resp.\ HSA) for whose support projection $p$ we
have
that $p^\perp$ is a $p$-projection for $A$.
\end{itemize}
\end{theorem}
\begin{proof}
Suppose that $J = \{ a \in A : q^\perp a = a \}$ for a $p$-projection $q$
for $A$ in $B^{**}$. We may suppose that $q$ is a decreasing weak* limit
of a net of peak projections $(q_t)$ for $A$.
If $a \in A$ peaks at $q_t$, then
by a result in \cite{H} we have that $a^n \to q_t$ weak*.
Next let ${\mathcal C} = \{ 1 - x : x \in {\rm Ball}(A) \} \cap J$,
a convex subset of $J$ containing the elements $1-a^n$
above. Thus $q_t^\perp \in \overline{{\mathcal C}}^{w*}$
and therefore $q^\perp \in \overline{{\mathcal C}}^{w*}$.
Let $e_t \in {\mathcal C}$ with $e_t \to q^\perp$ w*.
Then $e_t x \to q^\perp x = x$ weak* for all $x \in J$.
Thus $e_t x \to x$ weakly. Next, for fixed
$x_1, \cdots , x_m \in J$ consider the convex set
$F = \{ (x_1 - u x_1 , x_2 - u x_2 , \cdots , x_m - u x_m ) :
u \in {\mathcal C} \}$. (In the HSA case
one also has to include coordinates $x_k - x_k u$ here.)
Since $(0,0,\cdots ,0)$ is in the
weak closure of $F$ it is in the norm closure.
Given $\epsilon > 0$, there exists $u \in {\mathcal C}$ such that
$\Vert x_k - u x_k \Vert < \epsilon $ for all $k = 1, \cdots , m$.
From this it is clear (see the end of the proof of \cite[Proposition 2.5.8]{BLM})
that there is a left approximate identity
for $J$ in ${\mathcal C}$, which shows (i).

Suppose that $J$ is a right ideal with a left approximate
identity $(e_t)$ of the stated form $e_t = 1 - x_t$.
If $(x_{t_\mu})$ is any w*-convergent subnet of
$(x_t)$, with limit $r$, then $\Vert r \Vert \leq 1$.
Also $1 - x_{t_\mu} \to 1 - r$. On the other hand,
$(1 - x_{t_\mu})x \to x$ for any $x \in J$, so that
$(1-r) x = x$. Hence $(1-r) \eta = \eta$ for any $\eta \in J^{\perp \perp}$,
so that $1-r$ is the (unique) left identity $p$ for $J^{\perp \perp}$.
Hence $1-r$ is idempotent, so that $r$ is idempotent. Hence
$r$ is an orthogonal projection, and therefore so also is
$p = 1-r$. Also, $e_t \to p$ w*, by a fact in topology about
nets with unique accumulation points.
We have $J = p A^{**} \cap A =
\{ a \in A : p a = a \}$. Since $p$ has norm $1$,
$J$ has a left cai. Since $p e_t = e_t$,
$p$ is an open projection in $B^{**}$, so that $q = 1-p$ is closed.
If $a = e_t$ then we have that $l(a)^\perp (1-a) = l(a)^\perp$,
where $l(a)$ is the left support projection for $a$.
Thus by a result in \cite{H} there is a peak projection $q_a$ with peak
$a_0 = 1 - \frac{a}{2} \in A$
such that $l(a)^\perp \leq q_a$. Since $a_0^n \to q_a$ weak*,
and since $(1-p) a_0^n = 1-p$, we have $(1-p) q_a = 1-p$. That is,
$q \leq q_a$. Let $J_a = \{ x \in A : q_a^\perp x = x \}$.
By the last paragraph, $J_a$ is an r-ideal,
and since $q \leq q_a$ we have that $J_a \subset J$.
The closed span of all the $J_a$ for $a = e_t$
equals $J$, since $e_t \in J_{e_t}$ and
any $x \in J$ is a limit of $e_t x \in J_{e_t}$.
By the proof of
\cite[Theorem 4.8.6]{BLM} we deduce that the supremum of the $q_a^\perp$
equals $q^\perp$. Thus $q$ is a $p$-projection.
The HSA case follows easily from this and Corollary \ref{her7}.
\end{proof}
\begin{corollary} \label{two}
Let $A$ be a unital-subalgebra of a $C^*$-algebra $B$.
A projection $q \in B^{**}$ is a $p$-projection for
$A$ in $B^{**}$, if and only if there exists a net $(x_t)$ in ${\rm Ball}(A)$ with
$q x_t = q$, and $x_t \to q$ weak*.
\end{corollary}
\begin{proof} Supposing that $q$ is a $p$-projection,
we have by the last result
that $J = \{ a \in A : q^\perp a = a \}$ has a left approximate
identity $(1 - x_t)$ with $x_t \in {\rm Ball}(A)$,
and by the proof of that result $q^\perp$ is the
support projection, so that $1 - x_t \to q^\perp$ weak*.
Conversely, supposing the existence of such a
net, let $J = \{ a \in A : q^\perp a = a \}$. This is a
right ideal. Moreover $J^{\perp \perp} \subset q^\perp A^{**}$.
If $a \in A$ then $q^\perp a = \lim_t \, (1 - x_t) a
\in J^{\perp \perp}$. By a similar argument,
$q^\perp \eta \in J^{\perp \perp}$ for any $\eta \in A^{**}$.
Thus $J^{\perp \perp} = q^\perp A^{**}$, and so
$q^\perp$ is the support projection for $J$, and $J$ has a
left cai. By a slight variation of
the argument at the end of the first
paragraph of the proof of the last result,
$J$ satisfies (i) of that result, and hence
by that result $q$ is a $p$-projection.
\end{proof}
The following known result (see e.g.\ \cite{MAII,DP}) is quite
interesting in light of the
question just above Theorem \ref{one}.
\begin{proposition} \label{nine} If $J$ is an
nonunital operator algebra with a cai (resp.\ left cai),
then $J$ has an (resp.\ a left) approximate identity
of the form $(1-x_t)$, where $x_t \in J^1$ and
$\lim_t \, \Vert x_t \Vert = 1$ and
$\lim_t \, \Vert 1-x_t \Vert = 1$.
Here $J^1$ is the unitization of $J$.
\end{proposition}
\begin{proof} We just sketch the proof in the
left cai case, following the proof of \cite[Theorem 3.1]{MAII}.
Let $A = J^1$. Thus $J$ is an
r-ideal in the unital operator algebra $A$.
Suppose that the support projection is
$p = q^\perp \in A^{**}$, and that $(u_t)$ is the left cai in
$J$. If $B$ is a $C^*$-algebra generated by $A$, then there is
an increasing net in ${\rm Ball}(B)$ with
weak* limit $p$.
We can assume that the increasing net is indexed by the same directed set.
Call it $(e_t)$. Since $e_t - u_t \to 0$ weakly, new nets of convex combinations
$(\widetilde{e_s})$ and $(\widetilde{u_s})$ will satisfy
$\Vert \widetilde{e_s} - \widetilde{u_s} \Vert \to 0$.
We can assume that $(\widetilde{u_s})$ is a left cai for $J$.
We have
$$\Vert 1 - \widetilde{u_s} \Vert \leq \Vert 1 - \widetilde{e_s} \Vert
+ \Vert \widetilde{e_s} - \widetilde{u_s} \Vert \leq 1 +
\Vert \widetilde{e_s} - \widetilde{u_s} \Vert \to 1 .$$
The result follows easily from this. \end{proof}
We are also able to give another
characterization of $p$-projections,
which is of `nonselfadjoint Urysohn lemma' or `peaking'
flavor, and therefore should be useful in
future applications of
`nonselfadjoint peak interpolation'. This result should be compared
with \cite[Theorem 5.12]{H}.
\begin{theorem} \label{peakch} Let $A$ be a
unital-subalgebra of a $C^*$-algebra $B$, and let
$q \in B^{**}$ be a closed projection. Then
$q$ is a $p$-projection for $A$ iff for any open
projection $u \geq q$, and any $\epsilon > 0$,
there exists an $a \in {\rm Ball}(A)$ with
$a q = q$ and $\Vert a (1-u) \Vert < \epsilon$
and $\Vert (1-u) a \Vert < \epsilon$.
\end{theorem}
\begin{proof}
($\Leftarrow$) \ This follows by an easier variant of
the proof of \cite[Theorem 4.1]{H}.
Suppose that for each open $u \geq q$, and
positive integer $n$, there exists an $a_n \in {\rm Ball}(A)$ with
$a_n q = q$ and $\Vert a_n (1-u) \Vert < 1/n$.
By taking a weak* limit we find $a \in A^{\perp\perp}$
with $a q = q$ and $a (1-u) = 0$.
We continue as in \cite[Theorem 4.1]{H}.
Later in the proof where $q_n$ is defined, we appeal
to Lemma 3.5 in place of Lemma 3.6, so that $q_n$ is a
peak projection. Now the proof is quickly finished:
Let $Q = \bigwedge_n q_n$, a $p$-projection. As in
the other proof we have that $q \leq Q \leq r \leq u$, and
that this forces $q = Q$. Thus $q$ is a $p$-projection.
($\Rightarrow$) \ Suppose that $q$ is a $p$-projection,
and $u \geq q$ with $u$ open. By `compactness' of $q$
(see the remark just above \cite[Proposition 2.2]{H}),
there is a peak projection $q_1$ with
$q \leq q_1 \leq u$. Note that if $a q_1 = q_1$ then
$a q = a q_1 q = q_1 q = q$. Thus we may assume that
$q$ is a peak projection. By the noncommutative Urysohn lemma
\cite{Ake2}, there is an $x \in B$ with
$q \leq x \leq u$. Suppose that $a \in {\rm Ball}(A)$
peaks at $q$, and $a^n \to q$ weak* (see e.g.\ \cite[Lemma 3.4]{H}
or the results below).
Then $a^n (1-x) \to q (1-x) = 0$ weak*, and hence weakly
in $B$. Similarly, $(1-x) a^n \to 0$ weakly.
By a routine convexity argument in $B \oplus B$, given $\epsilon > 0$
there is a convex combination $b$ of the $a^n$
such that $\Vert b (1-x) \Vert < \epsilon$ and
$\Vert (1-x) b \Vert < \epsilon$.
Therefore $\Vert b (1-u) \Vert =
\Vert b (1-x) (1-u) \Vert < \epsilon$.
Similarly $\Vert (1-u) b \Vert < \epsilon$. \end{proof}
We would guess that being a $p$-projection is also equivalent to
the special case of the following
definition in which $a = 1$ and $x \leq 1$.
If $A$ is a unital-subalgebra of a $C^*$-algebra $B$
and if $q \in B^{**}$ is closed then we say that $q$ is
a {\em strict $p$-projection} if given $a \in A$ and a
strictly positive $x \in B$ with $a^*qa \le x$, then
there exists $b
\in A$ such that $qb=qa$ and $b^*b \le x$. In \cite[Proposition 3.2]{H}
it is shown that if $q$ is a projection
in $A^{\perp\perp}$ then
the conditions in the last line hold except that
$b^*b \le x + \epsilon$.
So being a strict $p$-projection is the case $\epsilon = 0$ of
that interpolation result.
\begin{corollary} \label{strict} Let $A$ be a
unital-subalgebra of a $C^*$-algebra $B$ and let
$q \in B^{**}$ be a strict $p$-projection for $A$.
Then $q$ is a $p$-projection.
\end{corollary}
\begin{proof}
Using the noncommutative Urysohn lemma
as in the first
few lines of the proof of \cite[Theorem 4.1]{H},
it is easy to see that $q$ satisfies the condition in
Theorem \ref{peakch}.
\end{proof}
The above is related to
the question of whether every r-ideal $J$ in
a (unital say) operator algebra is `proximinal'
(that is, whether every $x \in A$ has a closest point in $J$).
\begin{proposition} \label{prox} If
$q$ is a strict $p$-projection for
a unital operator algebra $A$,
then the corresponding r-ideal $J =
q^\perp A^{**} \cap A$ is proximinal in $A$.
\end{proposition}
\begin{proof}
Let $a \in A$. By Proposition 3.1 in \cite{H}, $\|a + J\| =
\|qa\|$. Also, $a^*qa \le \|qa\|^2$, so by hypothesis there exists $b
\in A$ such that $qb = qa$ and $b^*b \le \|qa\|^2$. Thus $\|b\|^2 =
\|b^*b\| \le \|q a\|^2$.
Then $\|a + J\| = \|qa\|
\geq \|b\| = \|a + (b-a)\|$. However,
$b-a \in J$ since $q(b-a) = 0$. So $J$ is proximinal.
\end{proof}
Some of the results below stated for right ideals also have
HSA variants which we leave to the reader.
\begin{proposition} \label{four} A $p$-projection $q$ for
a unital operator algebra $A$ is a
peak projection iff the associated right ideal is of the
form $\overline{(1-a) A}$ for some $a \in {\rm Ball}(A)$.
In this case, $q$ is the peak for $(a+1)/2$. \end{proposition}
\begin{proof}
Let $J = \{ a \in A : q^\perp a = a \}$, for a $p$-projection $q$.
($\Rightarrow$) \ If $q$ peaks at $a$ then $q^\perp (1-a) = (1-a)$, so that
$(1-a)A \subset J$. If $\varphi \in ((1-a)A)^\perp$ then
$\varphi((1-a^{n+1}) A) = \varphi((1-a)(1 + a + \cdots + a^n) A) = 0$.
In the limit we see that $\varphi \in (q^\perp A)_\perp$, so that
$\varphi \in J^\perp$. Hence $J = \overline{(1-a) A}$.
($\Leftarrow$) \ Suppose that $J = \overline{(1-a) A}$ for some $a \in {\rm Ball}(A)$.
Then $q a = q$, and by a result in \cite{H} there exists
a peak projection $r \geq q$ with peak $b = (a+1)/2$.
Since $1 - b = (1-a)/2$, it is clear that $J = \overline{(1-b) A}$.
If $(e_t)$ is the left cai for $J$ then $r^\perp e_t = e_t$.
In the limit, $r^\perp q^\perp = q^\perp$, so that $r \leq q$.
Thus $r = q$. \end{proof}
This class of `singly generated' right ideals has played an important
role in some work of G. A. Willis (see e.g.\ \cite{Wil}).
\begin{lemma} \label{wilp} If $A$ is an operator algebra,
and if $a \in {\rm Ball}(A)$
then $\overline{(1-a) A}$ is an r-ideal of $A$
with a sequential left approximate identity
of the form $(1- x_n)$ for $x_n \in {\rm Ball}(A)$.
Similarly, $\overline{(1-a) A (1-a)}$ is a HSA of $A$.
\end{lemma}
\begin{proof} Let $J = \overline{(1-a) A}$,
and let $e_n = 1 - \frac{1}{n} \sum_{k=1}^n a^k$,
which is easily seen to lie in $(1-a) A$.
Moreover,
$$e_n (1-a) = 1 - \frac{1}{n} \sum_{k=1}^n a^k
- a + \frac{1}{n} \sum_{k=2}^{n+1} a^k =
1 - a - \frac{1}{n}(a - a^{n+1}) \to 1-a .$$
Note that $J$ is an r-ideal
by Theorem \ref{one}. We leave the rest to the reader. \end{proof}
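The algebraic identity in the last display is elementary but easy to confirm numerically. The sketch below is our own illustration, not part of the proof: it checks $e_n(1-a) = 1 - a - \frac{1}{n}(a - a^{n+1})$ and the convergence $e_n(1-a) \to 1-a$ for a scalar contraction $a$, the simplest instance of an operator algebra.

```python
# Check the identity e_n(1-a) = 1 - a - (a - a^{n+1})/n for the
# elements e_n = 1 - (1/n) * sum_{k=1}^n a^k, with a a scalar
# contraction (|a| <= 1) in the one-dimensional operator algebra C.
import cmath

def e_n(a, n):
    return 1 - sum(a**k for k in range(1, n + 1)) / n

a = 0.6 * cmath.exp(1j * 0.8)          # a contraction, |a| < 1
for n in (1, 5, 50):
    lhs = e_n(a, n) * (1 - a)
    rhs = 1 - a - (a - a**(n + 1)) / n
    assert abs(lhs - rhs) < 1e-12      # the algebraic identity
# convergence e_n(1-a) -> 1-a as n -> infinity
assert abs(e_n(a, 2000) * (1 - a) - (1 - a)) < 1e-2
```

For $|a|<1$ the error term $(a-a^{n+1})/n$ is $O(1/n)$; the lemma of course needs only $\|a\|\le1$, where the same identity still gives the $O(1/n)$ decay.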
\begin{corollary} \label{seven} If
$a$ is a contraction in a unital $C^*$-algebra $B$ then
\begin{itemize}
\item [(i)] The Cesaro averages of $a^n$ converge weak* to a
peak projection $q$ with $q a = q$.
\item [(ii)] If $a^n \to q$ weak* then $q$ is a peak projection.
Conversely, if $q$ is a peak projection then there
exists an $a \in {\rm Ball}(B)$ with $a^n \to q$ weak*.
\end{itemize}
Also $q$ is the peak for $(a+1)/2$.
\end{corollary}
\begin{proof} (i) \ By Theorem \ref{one} and Lemma \ref{wilp}
(and its proof),
$J = \overline{(1-a) B} = \{ b \in B : q^\perp b = b \}$
for a $p$-projection $q$ which is a weak* limit
of $e_n = 1 - \frac{1}{n} \sum_{k=1}^n a^k$.
Thus $\frac{1}{n} \sum_{k=1}^n a^k \to q$ weak*, and
clearly $q a = q$. By Proposition \ref{four} and its proof, $q$ is a peak projection
with $(a+1)/2$ as a peak.
(ii) \ If $a^n \to q$ weak* then it is easy to check that
$\frac{1}{n} \sum_{k=1}^n a^k \to q$ weak*. Thus one direction
of (ii) follows from (i), and the other direction
is in \cite{H}. \end{proof}
{\bf Remarks.}
1) \ In fact it is not hard to show that
the Cesaro averages in (i) above converge
strongly, if $B$ is in its universal representation.
\medskip
2) \ We make some remarks on support projections.
We recall from \cite{H} that
if $q$ is a projection in
$B^{**}$ and if $q$ peaks at a contraction $b \in B$
then $q^\perp$ is the right support projection $r(1-b)$.
Conversely, if $b \in {\rm Ball}(B)$ then
the complement of the right support projection $r(1-b)$
is a peak projection which peaks at $(1+b)/2$.
Thus the peak projections are precisely the
complements of the
right support projections $r(1-b)$ for contractions $b \in B$.
It follows that $q$ is a $p$-projection for a
unital-subspace $A$ of a $C^*$-algebra
$B$ iff $q = \wedge_{x \in {\mathcal S}} \,
r(1-x)^\perp$ for a nonempty subset ${\mathcal S} \subset
{\rm Ball}(A)$.
Also, if $J$ is a right ideal of
a unital operator algebra $A$, and if $J$ has a left approximate identity
of the form $(1 - x_t)$ with $x_t \in {\rm Ball}(A)$,
then it is easy to see that the support projection of $J$ is
$\vee_t \, l(1 - x_t)$.
\section{Introduction}\label{s.introduction}
Consider a continuous time Markov chain on the finite set~$S$,
$|S|\ge2$, where the rate of going from $x$ to $y$ is $q(x,y)$.
We let
$\qmax:=\max\{\sum_{y\neq x} q(x,y)\dvtx x\in S\}$ be the maximum rate at which
we leave a state.
Next, $(S,q)$ yields a continuous time Markov process on
$\{0,1\}^S$ called \emph{the noisy voter model with voting mechanism $(S,q)$}
(often abbreviated \emph{the noisy voter model}) where, independently,
(1) for each two sites $x$ and $y$,
the state at site $x$ changes to the value of
the state at site $y$ at rate $q(x,y)$, and
(2) each site rerandomizes its state at rate 1.
By \textit{rerandomizes}, we mean that the state at that site switches to
1 or 0, each
with probability $1/2$, independently of everything else.
The noisy voter model was introduced by Granovsky and Madras \cite{GM}.
Denoting an element of $\{0,1\}^S$ by
$\eta=\{\eta(x)\}_{x\in S}$, one can describe this
dynamic in the following way: independently at each $x\in S$,
\begin{eqnarray}
\label{eq:nvmrates}
&&0 \to 1 \mbox{ at rate }
\frac{1}2 + \sum_{y\ne
x}q(x,y)\eta(y) \qquad\mbox{if
}\eta(x)=0,
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&& 1 \to 0 \mbox{ at rate }\frac{1}2 + \sum_{y\ne
x}q(x,y)
\bigl(1-\eta(y)\bigr) \qquad\mbox{if }\eta(x)=1.
\end{eqnarray}
Observe that whether or not $(S,q)$ is irreducible, the
corresponding noisy voter model is clearly
irreducible and hence has a unique stationary distribution.
If there were no rerandomization, this would simply be
the ordinary voter model associated to $q$, which has, in the case
where $q$ is irreducible,
two absorbing states,
all 0's and all 1's. On the other hand, if there were no voter mechanism
[essentially meaning that $q(x,y)=0$ for all $x$ and $y$], then the model
would simply be continuous time random walk on the hypercube.
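To experiment with these dynamics, one can simulate the flip rates in \eqref{eq:nvmrates} by a standard Gillespie (next-event) scheme. The sketch below is our own illustration, not part of the paper: the helper names are hypothetical, and nearest-neighbor random walk on the $n$-cycle is used as the voting mechanism $(S,q)$.

```python
import random

def noisy_voter_step(eta, q, rng):
    """One event of the noisy voter model: site x flips at rate
    1/2 + sum_{y != x} q[x][y] * 1{eta(y) != eta(x)}  (both cases of
    the flip rates combine into this single expression).
    Mutates eta in place and returns the exponential holding time."""
    rates = [0.5 + sum(qxy for y, qxy in q[x].items() if eta[y] != eta[x])
             for x in range(len(eta))]
    total = sum(rates)
    x = rng.choices(range(len(eta)), weights=rates)[0]
    eta[x] = 1 - eta[x]
    return rng.expovariate(total)

def simulate(n, t_max, seed=0):
    """Noisy voter model on the n-cycle (rate-1/2 voting to each
    neighbor), started from the all-1's configuration."""
    rng = random.Random(seed)
    q = [{(x - 1) % n: 0.5, (x + 1) % n: 0.5} for x in range(n)]
    eta, t = [1] * n, 0.0
    while t < t_max:
        t += noisy_voter_step(eta, q, rng)
    return eta

eta = simulate(20, 5.0)
```

Running `simulate` well past time $\frac{1}{2}\log n$ gives (approximate) samples from $\mu_\infty$.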
Throughout this paper, given $q$, we let $\{\eta_t\}_{t\ge0}$ denote
the corresponding noisy voter model, $\mu_\infty$ denote its stationary
distribution and
$\mu^{\eta}_{t}$ denote the law of $\eta_t$ when $\eta_0\equiv\eta$.
(The dependence of these on $q$ is implicit.) If we have a sequence of
such systems, we let
$\{\eta^n_t\}_{t\ge0}$, $\mu^n_\infty$ and $\mu^{n,\eta}_{t}$ denote
these objects
for the $n$th system.
Recall that the total variation distance between two probability measures
$m_1$ and $m_2$ on a finite set $\Omega$ is defined to be
\[
\| m_1- m_2\|_{\mathtt{TV}}:= \frac{1}{2} \sum
_{s \in\Omega
}\bigl|m_1(s)-m_2(s)\bigr|.
\]
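For concreteness, this formula translates directly into code; `tv_distance` below is a hypothetical helper name for our own illustration, taking distributions as dictionaries over a finite set.

```python
def tv_distance(m1, m2):
    """Total variation distance (1/2) * sum_s |m1(s) - m2(s)| between
    two probability measures on a finite set, given as dicts."""
    support = set(m1) | set(m2)
    return 0.5 * sum(abs(m1.get(s, 0.0) - m2.get(s, 0.0)) for s in support)

# e.g. two distributions on {0, 1}
assert abs(tv_distance({0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}) - 0.3) < 1e-12
```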
Next, given a noisy voter model, for $\vep>0$, we let
\[
t_{\mathtt{mix}}(\vep):= \inf\Bigl\{t\ge0\dvtx \max_{\eta\in\{0,1\}^S} \bigl\|
\mu ^{\eta}_{t}-\mu_\infty\bigr\|_{\mathtt{TV}}\le\vep\Bigr
\}
\]
denote the $\vep$-mixing time.
The main theorem of the paper is the following.
\begin{thmm}\label{thmm:main1}
Assume that we have a sequence $(S^n,q^n)$ of continuous time Markov
chains with
$\lim_{n\to\infty}|S^n|=\infty$ and $\sup_n \qnmax< \infty$. Assume
further that
there is
$C$ such that for each $n$, there is a stationary distribution for
$(S^n,q^n)$ where the ratio of the largest and smallest point masses is
at most $C$.
(This holds, e.g., in any transitive situation.) Then, for each $\vep$,
\begin{equation}
\label{eq:MainMixing} t_{\mathtt{mix}}(\vep) = \tfrac{1}2 \log\bigl|S^n\bigr|
\bigl(1+o(1)\bigr).
\end{equation}
Moreover, we have that
\begin{equation}
\label{eq:main1} \lim_{\alpha\to\infty}\liminf_{n\to\infty} \bigl\|
\mu^{n,\mathbf1}_{({1}/2)\log|\Sn|-\alpha} - \mun_\infty\bigr\| _{\mathtt
{TV}} = 1,
\end{equation}
where \textbf{1} denotes the configuration of all 1's and
\begin{equation}
\label{eq:main2} \lim_{\alpha\to\infty}\limsup_{n\to\infty} \max
_{\eta\in\{0,1\}^{S^n}}\bigl\| \mu^{n,\eta}_{({1}/2)\log|\Sn|+\alpha} -
\mun_\infty\bigr\|_{\mathtt{TV}} = 0.
\end{equation}
\end{thmm}
\begin{remark*}
We will see that~\eqref{eq:main2} holds in fact whenever $\lim_{n\to
\infty}|S^n|=\infty$,
and therefore the upper bound~\eqref{eq:MainMixing} also holds under
this assumption.
\end{remark*}
Theorem~\ref{thmm:main1} tells us that under the given conditions,
the mixing time is of order $\frac{1}2\log|\Sn|$ and that there is a
cutoff with a window of size of order 1. (We define mixing
times and cutoff in Section~\ref{s.background} below.) These
assumptions are
necessary. Clearly if there is no bound on
$(\qnmax)$, then the mixing time can easily be made to be of
order 1. More interestingly, even if $(\qnmax)$ is bounded,
\eqref{eq:main1} is not necessarily true without some
condition on the set of
stationary distributions. An example of this is continuous time
random walk on the \emph{$n$-star}, which is the graph
that has one vertex with $n$ edges emanating from it.
(By \textit{continuous time random walk on a graph}, we mean that
the walker waits an exponential time and then chooses a neighbor at random.)
This will be explained in Section~\ref{s.wheel}. We also mention that
it is easy to see that the condition involving the set of
stationary distributions is not necessary
in order for~(\ref{eq:main1}) and \eqref{eq:main2} to hold since one
could take
$(\qnmax)$ going to 0 sufficiently quickly so that the voter mechanism never
comes into play.
We mention that it was proved by Ramadas \cite{R} that when
rerandomization occurs at any rate $\delta$, the mixing time
for the noisy voter model on any graph with $n$ vertices is
$O_{\delta}(\log n)$.
Theorem~\ref{thmm:main1} has an interesting consequence for
the stochastic Ising model on cycles. The Ising model on any graph $G=(V,E)$
with parameter (inverse temperature)
$\beta\ge0$ is the probability measure on
$\{-1,1\}^{V}$ which gives, up to a normalization factor, probability
$e^{\beta\sum_{\{x,y\}\in E}\sigma(x)\sigma(y)}$ to configuration
$\sigma$.
The stochastic Ising model on $G$ with
\textit{heat-bath} dynamics is the continuous time Markov chain on
$\{-1,1\}^{V}$ where each site at rate 1 erases its present state and
chooses to be in state
$-1$ or 1, according to the conditional distribution for the Ising model,
given the other states at
that time. For the case $(\bbZ/n\bbZ)^d$, Lubetzky and Sly (see \cite{LS1})
proved that for $d=1$ and all $\beta$, $d=2$ and all $\beta$ below the
critical value
and $d=3$ and all $\beta$ sufficiently small,
one has cutoff at some constant times $\log n$ with a window of order
$\log\log n$. In \cite{LS2}, Lubetzky and Sly improved and extended
these results
in a number of directions; in particular, they proved that the result
holds for
all $\beta$ below the critical value in all dimensions and that
the window above can be taken to be of order 1. While the arguments in
this second paper
are somewhat easier, they are still quite involved, including that for $d=1$.
Interestingly, for the cycle $\bbZ/n\bbZ$, the stochastic Ising model
and the noisy voter model
(where one performs random walk on $\bbZ/n\bbZ$) turn out to be the
same model,
and hence the special case of Theorem~\ref{thmm:main1} for random walk
on the cycle is already known.
In this special case, the stochastic Ising model corresponds to the
dynamics where independently at each $x\in S^n$,
the rate at which $\sigma(x)$ flips to $-\sigma(x)$ is
\begin{equation}
\label{eq:Isingrates} \bigl[ 1+ \exp \bigl(2\beta\sigma(x)\bigl[\sigma(x-1) + \sigma(x+1)
\bigr] \bigr) \bigr]^{-1}.
\end{equation}
An easy calculation, which we will leave to the reader, shows that if
we consider the noisy
voter model on the cycle with $q(x,x+1)=q(x,x-1)=(e^{4\beta}-1)/4$ and
multiply time by
$\theta:= \frac{2}{1+e^{4\beta}}$, we obtain the above stochastic
Ising model.
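The calculation left to the reader is easy to verify numerically: if $k\in\{0,1,2\}$ is the number of neighbors disagreeing with $x$, then $\sigma(x)[\sigma(x-1)+\sigma(x+1)]=2-2k$, and the time-scaled noisy voter flip rate $\theta(\frac{1}2+kq(x,x+1))$ should equal the rate in \eqref{eq:Isingrates}. A short check (our own illustration, with an arbitrary choice of $\beta$):

```python
import math

beta = 0.7
q = (math.exp(4 * beta) - 1) / 4        # voter rate to each neighbor
theta = 2 / (1 + math.exp(4 * beta))    # time change

for k in (0, 1, 2):                     # number of disagreeing neighbors
    voter_rate = theta * (0.5 + k * q)
    s = 2 - 2 * k                       # sigma(x)[sigma(x-1)+sigma(x+1)]
    ising_rate = 1 / (1 + math.exp(2 * beta * s))
    assert abs(voter_rate - ising_rate) < 1e-12
```

In particular $k=1$ gives rate exactly $1/2$ in both models, independently of $\beta$.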
While the work of Lubetzky and Sly implies Theorem~\ref{thmm:main1} for
the cycle (and also yields some
further results), the proof given here turns out to be easier.
Mossel and Schoenebeck \cite{MS} consider a similar type of voting
model where there is no
noise and study, among other things, the time it takes to become
absorbed. Here, properly
related to our model, they show an upper bound of order $n^3$ which
would be the correct order
for the cycle. We see, from the last part of Theorem~\ref{thmm:main1}, a
drastic change
when even small noise is introduced into the system since
now it takes only order $n\log n$ to reach equilibrium. On a related note,
Mossel and Tamuz \cite{MT} provide a fascinating survey of various
``opinion exchange dynamics.''
Earlier, we mentioned the $n$-star as providing a
counterexample to
\eqref{eq:main1} when there is no
condition imposed on the stationary distributions.
The noisy voter model on the $n$-star has an additional fascinating feature.
\begin{thmm}\label{thmm:wheel} Consider the noisy voter model
corresponding to
continuous time random walk with parameter 1 on the $n$-star with $n$
even:
\begin{longlist}[(ii)]
\item[(i)] Let ${\eta}_0$ denote any configuration which is 1 on exactly
half of the leaves.
If $n\ge3$ and $t=\frac{1}4(\log n -C)> 0$, then
\begin{equation}
\label{eq:wheel1}\bigl \| \mu^{\eta_0}_{t} -\mu_\infty
\bigr\|_{\mathtt{TV}} \ge\frac
{e^{C}}{48+e^{C}}.
\end{equation}
\item[(ii)] The time it takes for the distribution starting from all 1's
to be within distance $1/4$ in total variation norm from the stationary
distribution is $O(1)$.
\end{longlist}
\end{thmm}
This is quite surprising since one typically expects that for monotone systems,
the mixing time for the system should be governed by the time it takes
the two extremal
states to become close in total variation norm.
We end this \hyperref[s.introduction]{Introduction} with a brief description of the results
obtainable for
a natural version of a discrete time noisy voter model.
The input for such a model is a discrete time Markov chain on a finite
set $S$ and a parameter
$\gamma\in[0,1]$. Given these, the model is defined by first
choosing an $x$ in $S$ uniformly at random, and then
with probability $1-\gamma$, one selects $y$ with probability $P(x,y)$,
at which point
the state of $x$ changes to the state of $y$, while with probability
$\gamma$, the state
at vertex $x$ is rerandomized to be 0 or 1, each with probability
$1/2$. Discrete time analogues of~\eqref{eq:main1} [and~\eqref
{eq:main1FINITE} later on]
can easily be obtained with the exact same methods we use below.
The mixing times, however, will now occur at time $\frac{|S|\log
|S|}{2\gamma}$ since we
are only updating 1 vertex at a time and rerandomizing with probability
$\gamma$.
Similarly, a~discrete time analogue of~\eqref{eq:wheel1}
can be obtained when, for example, $\gamma=1/2$; here the
relevant time will be $n\log n/2$. The connection with the Ising model
holds exactly when
moving to discrete time, but then one must consider the discrete time
version of the Ising model.
The paper by Chen and Saloff-Coste (see \cite{CS}) contains various
results which allow one
to transfer between a discrete time model and its continuous time
version (where updates are
done at the times of a Poisson process). In particular, Proposition~3.2(2) in this paper
allows us to obtain a discrete time analogue of~\eqref{eq:main2} (with
time scaled again by
$n/\gamma$) \emph{from} the continuous time version of this result.
Finally a discrete time analogue of Theorem~\ref{thmm:wheel}(ii) with
the $O(1)$ term being
replaced by an $O(n)$ term can be obtained; this is done by
modifying the proof of Theorem~20.3(ii) in \cite{LPW} to obtain a
discrete time version of
Lemma~\ref{lemma.reducedwheel} from the continuous time version of
this lemma.
The rest of the paper is organized as follows.
In Section~\ref{s.background}, we briefly recall some standard
definitions concerning
mixing times and cutoff as well as introduce some notation.
In Section~\ref{s.thmproof} we prove a stronger version of
Theorem~\ref
{thmm:main1}, namely
Theorem~\ref{thmm:main1again}.
The coalescing Markov chain descriptions of both the voter model
and the noisy voter model are important tools in its analysis. However,
in this paper,
we only need these tools for the proof of the last statement of
Theorem~\ref{thmm:main1} or
equivalently for Theorem~\ref{thmm:main1again}(ii)
(as well as in the first remark in Section~\ref{s.wheel}), and
therefore these
descriptions are discussed only at those points in the paper. Finally,
Theorem~\ref{thmm:wheel} is proved in Section~\ref{s.wheel}.
\section{Background}\label{s.background}
In this section, we recall some standard definitions.
Consider a continuous time irreducible Markov chain on a finite set
$\Omega$ with
transition matrices $\{P^t(x,y)\}_{t\ge0}$ and stationary distribution
$\pi$.
Letting $P^t(x,\cdot)$ denote the distribution at time $t$ starting
from $x$, we let
\begin{equation}\qquad
\label{eq:LPWdef} d(t) (x) :=\bigl \|P^t(x,\cdot)-\pi\bigr\|_{\mathtt{TV}},\qquad \bar
d(t) (x,y) := \bigl\|P^t(x,\cdot)-P^t(y,\cdot)
\bigr\|_{\mathtt{TV}}
\end{equation}
and
\[
d(t) := \max_{x\in\Omega} \,d(t) (x),\qquad \bar d(t) := \max
_{x,y\in
\Omega} \bar d(t) (x,y).
\]
Next for $\vep>0$, we let $t_{\mathtt{mix}}(\vep) := \inf\{t\ge0\dvtx d(t)\le\vep\}$ denote
the $\vep$-mixing time, and then by convention we take
$t_{\mathtt{mix}}:=t_{\mathtt{mix}}(1/4)$ and call this the \emph{mixing time}.
The following notions are very natural but are perhaps not standard.
For $\vep>0$, we also let
$t_{\mathtt{mix}}(\vep)(x) := \inf\{t\ge0\dvtx d(t)(x)\le\vep\}$
and $t_{\mathtt{mix}}(x):=t_{\mathtt{mix}}(1/4)(x)$.
Following Levin, Peres and Wilmer \cite{LPW}, we say that a sequence of
Markov chains exhibits \emph{cutoff}
if for all $\vep>0$, we have
\[
\lim_{n\to\infty} \frac{t^n_{\mathtt{mix}}(\vep)}{t^n_{\mathtt
{mix}}(1-\vep)}=1.
\]
We say that a sequence of Markov chains exhibits \emph{cutoff} with a
window of size $w_n$ if
$w_n=o(t^n_{\mathtt{mix}})$ and in addition
\[
\lim_{\alpha\to\infty}\liminf_{n\to\infty} \,d_n
\bigl(t^n_{\mathtt{mix}}-\alpha w_n\bigr)=1 \quad\mbox{and}\quad
\lim_{\alpha\to\infty}\limsup_{n\to\infty} \,d_n
\bigl(t^n_{\mathtt{mix}}+\alpha w_n\bigr)=0.
\]
For continuous time random walk with rate 1 on the hypercube of
dimension~$n$, it is known
(see \cite{DGM}) that $t^n_{\mathtt{mix}}\sim\frac{1}{4}\log n$ and
that there is cutoff
with a window of order 1. Theorem~\ref{thmm:main1} states that for the
noisy voter model,
under the given assumptions, we have
that $t^n_{\mathtt{mix}}\sim\frac{1}{2}\log n$ and that there is cutoff
with a window of order 1. (The difference of $\frac{1}{4}$ and $\frac{1}{2}$
here is simply due to the fact that continuous time random walk with
rate 1 on the
hypercube of dimension $n$ has each coordinate changing its state at
rate 1 rather than
rerandomizing at rate 1.) We point out that in most cases where cutoff
is proved, the chain
is reversible, while Theorem~\ref{thmm:main1} provides for us a large
class of
nonreversible chains.
\section{Proof of Theorem~\texorpdfstring{\protect\ref{thmm:main1}}{1}}\label{s.thmproof}
We state here a stronger and more detailed version of Theorem~\ref{thmm:main1}.
First, given any probability measure on a set, we let
\[
\pimax:=\max_{x\in S}\pi(x),\qquad \pimin:= \min_{x\in S}
\pi(x) \quad\mbox{and}\quad\rho(\pi) := \frac{\pimax}{\pimin}.
\]
Given $S$ and $q$ as above, we let $\calD(q)$ denote the collection of
stationary distributions and
let
\[
\rho(q) := \min_{\pi\in\calD(q)}\rho(\pi).
\]
\begin{thmm}\label{thmm:main1again}
\textup{(i)} Fix $S$ and $q$. Let \textbf{1} denote the configuration of all 1's and
$\alpha\ge1$, and assume that
$t:=\frac{1}2\log|S|-\alpha\ge1$. Then
\begin{equation}
\label{eq:main1FINITE} \bigl\| \mu^{\mathbf1}_t - \mu_\infty
\bigr\|_{\mathtt{TV}} \ge \frac{0.7e^{2\alpha}}{16(1+\qmax)^2\rho^2(q)+0.7e^{2\alpha}}.
\end{equation}
\textup{(ii)} Fix $S$ and $q$. Letting superscript $H$ denote random walk (sped down
by a factor of 2) on $\{0,1\}^S$ (i.e., $q\equiv0$), we have that for
all $t$
\begin{equation}
\label{eq:main2FINITE} \max_{\eta_1,\eta_2\in\{0,1\}^{S}}\bigl\|\mu^{\eta_1}_t-
\mu^{\eta
_2}_t\bigr\| _{\mathtt{TV}} \le \max_{\eta_1,\eta_2\in\{0,1\}^{S}}
\bigl\|\mu^{\eta_1,H}_t-\mu^{\eta
_2,H}_t\bigr\|
_{\mathtt{TV}}.
\end{equation}
\end{thmm}
Note that~\eqref{eq:main1FINITE} implies~\eqref{eq:main1} under the
assumptions given in
Theorem~\ref{thmm:main1}. Next, since
$\max_{\eta_1,\eta_2\in\{0,1\}^{S}}\|\mu^{\eta_1,H}_{({1}/2)\log
|S|+\alpha}-\mu^{\eta_2,H}_{({1}/2)\log|S|+\alpha}\|_{\mathtt{TV}}$
is (see \cite{DGM}) at most
$\frac{4}{\sqrt{\pi}} \int_0^{{e^{-\alpha}}/{\sqrt{8}}}
e^{-t^2} \,dt
+o(1)$ as $|S|\to\infty$,
we have that~\eqref{eq:main2FINITE} implies~\eqref{eq:main2} under the
assumption that
$\lim_{n\to\infty}|S^n|=\infty$.
\subsection{Proof of Theorem~\texorpdfstring{\protect\ref{thmm:main1again}}{3}\textup{(i)}}
\mbox{}
\begin{pf*}{Proof of Theorem~\ref{thmm:main1again}\normalfont{(i)}}
We will apply
Wilson's method for obtaining lower
bounds on mixing times; see \cite{W} or Section~13.2 in \cite{LPW}.
Choose $\pi\in\calD(q)$ which minimizes $\rho(\pi)$, and let
$\Phi(\eta):=2\sum_{x\in S} \eta(x)\pi(x)-1$. We claim that we
have that
\begin{equation}
\label{eq:eigenvalue} \E_\eta\bigl[\Phi(\eta_t)
\bigr]=e^{-t}\Phi(\eta).
\end{equation}
To see this, let $\eta^x$ denote the configuration
$\eta$ except that the coordinate at $x$ is changed to
$1-\eta(x)$, and note that $\Phi(\eta^x)-\Phi(\eta) =
2\pi(x)(1-2\eta(x))$. Then by~\eqref{eq:nvmrates},
\begin{eqnarray*}
&&\frac{d}{dt} \E_{\eta} \bigl(\Phi(\eta_t)
\bigr)\Big|_{t=0}
\\
&&\qquad = \sum_{x\in S} \biggl(\frac{1}2 + \sum
_{y\ne x}q(x,y)1\bigl\{\eta(y)\ne \eta(x)\bigr\} \biggr)
2\pi(x) \bigl(1-2\eta(x)\bigr)
\\
&&\qquad = -\Phi(\eta) + 2\sum_{x, y\ne x}\pi(x)q(x,y)1\bigl\{
\eta(y)\ne \eta(x)\bigr\} \bigl(1-2\eta(x)\bigr).
\end{eqnarray*}
A calculation using the stationarity of $\pi$
shows that the last sum is zero. This proves $\frac{d}{dt} \E_{\eta}
(\Phi(\eta_t))|_{t=0}=-\Phi(\eta)$, and hence
\eqref{eq:eigenvalue} holds.
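Since \eqref{eq:eigenvalue} just says that $\Phi$ is an eigenfunction of the generator with eigenvalue $-1$, it can be confirmed numerically on a tiny example. The sketch below is our own illustration (the helper names `Phi` and `generator_Phi` are hypothetical): it checks $L\Phi=-\Phi$ on all four configurations for the two-site chain with $q(0,1)=a$, $q(1,0)=b$ and stationary distribution $\pi=(b,a)/(a+b)$.

```python
from itertools import product

# Two-site voting chain: q(0,1) = a, q(1,0) = b, stationary pi.
a, b = 0.7, 1.3
pi = (b / (a + b), a / (a + b))
q = {(0, 1): a, (1, 0): b}

def Phi(eta):
    return 2 * sum(eta[x] * pi[x] for x in range(2)) - 1

def generator_Phi(eta):
    """(L Phi)(eta) = sum_x rate_x(eta) * (Phi(eta^x) - Phi(eta)),
    where rate_x = 1/2 + sum_{y != x} q(x,y) 1{eta(y) != eta(x)}
    is the noisy voter flip rate at x, and eta^x flips coordinate x."""
    val = 0.0
    for x in range(2):
        rate = 0.5 + sum(q.get((x, y), 0.0)
                         for y in range(2) if y != x and eta[y] != eta[x])
        flipped = tuple(1 - eta[z] if z == x else eta[z] for z in range(2))
        val += rate * (Phi(flipped) - Phi(eta))
    return val

for eta in product((0, 1), repeat=2):
    assert abs(generator_Phi(eta) - (-Phi(eta))) < 1e-12
```

The same loop with any $(S,q)$ and stationary $\pi$ checks the cancellation of the voting term asserted in the proof.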
Next we claim that for any $t$,
\begin{equation}\qquad
\label{eq:Rbound} \E_\eta \bigl(\bigl|\Phi(\eta_{t})-\Phi(
\eta)\bigr|^2 \bigr)\le (2\pimax)^2\bigl[|S|(1+\qmax) t+
\bigl(|S|(1+\qmax)t\bigr)^2\bigr].
\end{equation}
This is because a jump of $\eta_t$ changes $\Phi$ by at most
$2\pimax$, while by \eqref{eq:nvmrates} the number of
jumps during the interval $[0,t]$ is stochastically dominated by a
Poisson random variable with mean $|S|(1+\qmax)t$.
Now consider the \emph{discrete} time Markov chain obtained by
sampling $\eta_t$ at times which are integer multiples of
$1/|S|$. Then $\Phi$ is an eigenfunction for this discrete time chain
with eigenvalue
$\lambda:=e^{-1/|S|}\in(\frac{1}2,1)$ (if $|S|\ge2$). We can now apply
equation (13.9) from Section~13.2 of \cite{LPW} to this discrete time
Markov chain with
$t$ being $|S|(\frac{1}2\log|S|-\alpha)$,
$x$ being the configuration ${\mathbf1}$ (whose corresponding $\Phi$ value
is 1)
and $R$ being $8\pimax^2(1+\qmax)^2$; see~\eqref{eq:Rbound}. Using
$\pimax\le\rho(q)/|S|$
and multiplying the numerator and denominator of the obtained fraction
from (13.9) in~\cite{LPW}
by $|S|^2$ yields~(\ref{eq:main1FINITE});
recall our continuous time system at time $\frac{1}2\log|S|-\alpha$ is
the discrete time system at time
$|S|(\frac{1}2\log|S|-\alpha)$.
\end{pf*}
\subsection{Proof of Theorem~\texorpdfstring{\protect\ref{thmm:main1again}}{3}\normalfont{(ii)}}
For Theorem~\ref{thmm:main1again}(ii), we need to recall for the
reader the
graphical representation for the noisy voter model in terms of
coalescing Markov chains. In preparation for this part of the proof, we
will also give a result of Evans et al.
\cite{EKPS} concerning channels for noisy trees.
\begin{figure}
\includegraphics{1108f01.eps}
\caption{The graphical representation and its associated
trees: arrows represent voting moves and asterisks
represent rerandomization times. In this realization,
there are three trees.}
\label{fig:harris-rep}
\end{figure}
We construct our $(S,q)$ noisy voter model using a so-called
graphical representation. Figure~\ref{fig:harris-rep} illustrates the different
elements that arise in the graphical representation. The meaning of the trees,
depicted by the dotted, solid and dashed lines will be discussed when
we get to
the proof of Theorem~\ref{thmm:main1again}(ii).
We start with the random voting times and random choices,
$T^x=\{T^x_n,n\ge1\}$ and
$W^x=\{W^x_n,n\ge1\}$, $x\in S$. The $T^x$ are independent
Poisson processes, $T^x$ has rate $q(x):=\sum_{y\ne
x}q(x,y)$ and the $W^x_n$ are independent
$S$-valued random variables, independent
of the Poisson processes, with $\P(W^x_n=y)=q(x,y)/q(x)$
for $x\ne y$. The
rerandomization times and places are given by
$R^x=\{R^x_n,n\ge1\}$ and $Z^x=\{Z^x_n,n\ge
1\}$, $x\in S$. The $R^x$ are independent rate $1$ Poisson
processes, and
the $Z^x_n$ are i.i.d. Bernoulli random variables,
$\P(Z^x_n=1)=\P(Z^x_n=0)=1/2$.
Given $\eta_0\in\{0,1\}^S$, we define $\eta_t,t>0$ as
follows: (i) At the times $t=T^x_n$, we
draw an arrow $(x,T^x_n)\to(W^x_n,T^x_n)$ and
set $\eta_t(x) = \eta_{t-}(W^x_n)$. (ii) At
the times $t=R^x_n$, we put a $*$ at $(x,t)$ and set
$\eta_t(x)=Z^x_n$. A little thought shows that $\{\eta_t\}_{t\ge0}$ has
the dynamics specified by \eqref{eq:nvmrates}.
We construct the usual voter model dual process of coalescing
Markov chains. For
$x\in S$ and $t>0$ we construct $B^{x,t}_s,0\le
s\le t$ as follows: Set $B^{x,t}_0 =x$, and then let
$B^{x,t}_s$ trace out a path going backward in time to time
0, following the arrows for jumps. More precisely,
if $T^x\cap(0,t)=\varnothing$, put $B^{x,t}_s=x$ for $0\le s\le t$. Otherwise,
let $k=\max\{n\ge1\dvtx T^x_n<t\}$ and $u=T^x_k$, and set
\[
B^{x,t}_s = x \qquad\mbox{for }0<s<t-u \quad\mbox{and}\quad
B^{x,t}_{t-u} = W^x_k.
\]
We continue this process starting at $(B^{x,t}_{t-u},t-u)$,
thus defining $B^{x,t}_s$ for all $0\le
s\le t$. Observe that for each $x\in S$, $B^{x,t}_s$ is a
$q$-Markov chain starting at $x$. Also, these chains are
independent until they
meet, at which time they coalesce and move together thereafter.
For $t>0$, introduce $\Pi_t=\{(y,R^y_k), y\in
S,k\ge1\dvtx R^y_k\le t\}$, which contains all information up to time $t$
concerning the rerandomization \emph{times}.
For each $x\in S$, we want to look at the
time it takes the chain $B^{x,t}$ to first encounter a
rerandomization event, and also the rerandomization choice.
We do this as follows:\vspace*{1pt} If $(B^{x,t}_{s},t-s)\notin\Pi_t$
for all $0\le s\le t$, put $e(x,t)=\infty$. Otherwise,\vspace*{-1pt} let
$y,k$ satisfy $B^{x,t}_{t-R^y_k}=y$ and
$(B^{x,t}_{s},t-s)\notin\Pi_t$ for $s<t-R^y_k$, and put
$e(x,t)=t-R^y_k$ and $Z(x,t)=Z^y_k$. Given any $\eta\in
\{0,1\}^S$, the noisy voter model $\eta^\eta_t$ with
initial state $\eta^\eta_0=\eta$ can be represented as
\begin{equation}
\label{eq:duality} \eta^\eta_t(x) = Z(x,t)1\bigl\{e(x,t) \le t
\bigr\} + \eta\bigl(B^{x,t}_t\bigr)1\bigl\{ e(x,t)>t\bigr\},
\end{equation}
and this representation will be assumed in the rest of the
proof.
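The representation \eqref{eq:duality} can also be tested by simulation: build the voting and rerandomization events, run the model forward in time, trace each dual chain $B^{x,t}$ backward through the same events, and check that the two constructions produce the same configuration. The sketch below is our own illustration (all helper names are hypothetical), using a small chain with uniform rates.

```python
import random

def build_events(n, q, t, rng):
    """Graphical representation on [0,t]: time-sorted events, either
    ('vote', x, y) (x adopts y's state) or ('rand', x, z) (x set to z)."""
    events = []
    for x in range(n):
        qx = sum(q[x].values())
        s = 0.0
        while qx > 0:                    # voting times, Poisson rate q(x)
            s += rng.expovariate(qx)
            if s > t:
                break
            ys, ws = zip(*q[x].items())
            events.append((s, 'vote', x, rng.choices(ys, weights=ws)[0]))
        s = 0.0
        while True:                      # rerandomization times, rate 1
            s += rng.expovariate(1.0)
            if s > t:
                break
            events.append((s, 'rand', x, rng.randint(0, 1)))
    return sorted(events)

def forward(eta0, events):
    eta = list(eta0)
    for _, kind, x, v in events:
        eta[x] = eta[v] if kind == 'vote' else v
    return eta

def dual(x, eta0, events):
    """Trace B^{x,t} backward; return eta_t(x) as in the duality formula."""
    cur = x
    for _, kind, y, v in reversed(events):
        if y == cur:
            if kind == 'rand':
                return v                 # e(x,t) <= t: rerandomization met
            cur = v                      # follow a voting arrow backward
    return eta0[cur]                     # e(x,t) > t: inherit initial state

rng = random.Random(42)
n = 5
q = [{y: 0.3 for y in range(n) if y != x} for x in range(n)]
eta0 = [1, 0, 1, 1, 0]
events = build_events(n, q, 3.0, rng)
assert forward(eta0, events) == [dual(x, eta0, events) for x in range(n)]
```

The final assertion is exactly \eqref{eq:duality}: both sides are deterministic functions of the same event list, so they agree for every realization.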
In our proof of Theorem~\ref{thmm:main1again}(ii) we will use the
above graphical construction to construct certain
\emph{noisy trees} and their associated \emph{stringy trees}.
A noisy tree $T$ is a tree with flip probabilities in
$(0,\frac{1}2]$ labeling the edges. Its associated \emph{stringy tree}
$\wh{T}$
is the tree which has the same set of root--leaf paths as $T$,
but in which these paths act independently. More precisely,
for every root--leaf path in $T$, there exists an identical (in terms
of length and flip probabilities on the edges) root--leaf path in
$\wh{T}$, and in addition, all the root--leaf paths in $\wh{T}$ are
edge-disjoint. See Figure~\ref{fig:stringy-2} for an example.
\begin{figure}[b]
\includegraphics{1108f02.eps}
\caption{A tree $T$ and the corresponding stringy tree
$\wT$.}\label{fig:stringy-2}
\end{figure}
Starting with $\sigma_\rho\in\{-1,+1\}$ uniform at the root
$\rho$ of $T$, we proceed upward along the tree, assigning
a value to each vertex by independently reversing the value
of the state of the parent vertex with the probability
assigned to the connecting edge (and retaining the value
otherwise). Theorem~6.1 in \cite{EKPS} relates the
conditional joint distribution (given $\sigma_\rho$) of the
resulting variables $\sigma_w$, where $w$ is a leaf of $T$,
to the corresponding conditional joint distribution (given
$\sigma_{\hat{\rho}}$) for the associated stringy tree
$\wh{T}$ using \emph{channels}.
If $X$ is a random variable taking values in $\Omega_X$, and $Y$
is a random variable taking values in $\Omega_Y$,
a \emph{channel} from $X$ to $Y$ is a mapping $f\dvtx \Omega_X\times
[0,1]\rightarrow\Omega_Y$
such that if $Z$ is a uniform random variable on $[0,1]$ independent of
$X$, then
$f(X,Z)$ has the same distribution as $Y$. See Section~15.6 in \cite{CT}.
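As an illustration of this definition, the flip mechanism along a single edge with flip probability $\theta$ can itself be realized as a channel. The sketch below (an illustrative helper of our own, not part of \cite{EKPS}) builds such a channel and verifies that the uniform seed produces the prescribed flip probability.

```python
def flip_channel(theta):
    # Channel realizing an edge with flip probability theta: the +/-1 input x
    # is reversed exactly when the uniform seed z lands in [0, theta).
    def f(x, z):
        return -x if z < theta else x
    return f

def flip_measure(f, x, grid=10**5):
    # Lebesgue measure of {z in [0,1): f(x, z) == -x}, approximated by
    # evaluating f on midpoints of a uniform grid of [0,1).
    return sum(f(x, (i + 0.5) / grid) == -x for i in range(grid)) / grid
```

Composing such channels along a root--leaf path reproduces the tree-indexed flip dynamics described above.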
\begin{thmm}[(Theorem~6.1 in \cite{EKPS})] \label{thmm:channel}
Given a finite noisy tree $T$ with leaves $W$ and root $\rho$,
let $\wh{T}$, with leaves
$\wh{W}$ and root $\hat{\rho}$, be the stringy tree associated with~$T$. There is a channel which, for $\xi\in\{\pm1\}$, transforms
the conditional distribution $\sigma_{\wh{W}} |
(\sigma_{\hat{\rho}}=\xi)$
into the conditional distribution $\sigma_W | (\sigma_\rho=
\xi)$.
Equivalently, we say that $\wh{T}$ dominates $T$.
\end{thmm}
\begin{pf*}{Sketch of proof}
Our sketch of proof is motivated by and very similar to the
proof sketch given in \cite{P}.
We only establish a key special case of the theorem:
namely, that the tree $\Upsilon$ shown in
Figure~\ref{fig:uvtrees-3}
is dominated by the corresponding stringy tree $\wh{\Upsilon}$.
The general case is derived from it by
applying an inductive argument; see \cite{EKPS} for details.
\begin{figure}
\includegraphics{1108f03.eps}
\caption{$\Upsilon$ is dominated by $\widehat{\Upsilon}$.}
\label{fig:uvtrees-3}
\end{figure}
Let $\theta,\theta_1,\theta_2\in(0,\frac{1}2]$ be the edge flip
probabilities in Figure~\ref{fig:uvtrees-3}, and assume neither $\theta_1$
nor $\theta_2$ equals $\frac{1}2$ (otherwise the identity channel will
work), and w.l.o.g. assume also that $\theta_1\le\theta_2$.
Let $\sigma_\rho=\wh{\sigma}_\rho$, and let $z$ be a
$\pm1$-valued random variable, independent of
the edge flip variables, with mean
$(1-2\theta_2)/(1-2\theta_1)\in(0,1]$.
Given $0 \leq\alpha\leq1$, to be specified below,
we define the channel as follows:
\begin{equation}
\sigma^*_1 =\wh{\sigma}_1 \quad\mbox{and}\quad
\sigma^*_2 =\cases{
\wh{\sigma}_2,
&\quad $\mbox{with probability } \alpha,$
\vspace*{2pt}\cr
\wh{\sigma}_1z,& \quad$\mbox{with probability } 1-\alpha.$}
\end{equation}
It suffices to prove, for the appropriate choice of $\alpha$, that
$(\sigma_\rho,\sigma_1,\sigma_2)$
and $(\wh{\sigma}_\rho,\sigma^*_1,\sigma^*_2)$ have the same
distribution, and for this it is enough to show that the
means of all corresponding products are equal.
(This is a special case of the fact that the
characters on any finite Abelian group $G$ form a basis
for the vector space of complex functions on $G$.)
By symmetry it is only the pair correlations which require work.
Let $\gamma=1-2\theta$ and
$\gamma_i=1-2\theta_i$, $i=1,2$. Clearly
$\E(\wh{\sigma}_\rho\sigma^*_1)=\E(\sigma_\rho\sigma_1)$,
$\E(\wh{\sigma}_\rho\wh{\sigma}_1) =\gamma\gamma_1$ and
$\E(\wh{\sigma}_\rho\wh{\sigma}_2) =\gamma\gamma_2$, whence
$\E(\wh{\sigma}_\rho\sigma^*_2)
=\gamma\gamma_2=\E(\sigma_\rho\sigma_2)$ for any choice of
$\alpha$. Finally, from
$\E(\wh{\sigma}_1\wh{\sigma}_2)=\gamma^2\gamma_1\gamma_2$,
it follows that
\[
\E\bigl(\sigma^*_1\sigma^*_2\bigr) = \alpha
\gamma^2\gamma_1\gamma_2 + (1-\alpha)
\frac{\gamma_2}{\gamma_1} =\gamma_1\gamma_2 \biggl[ \alpha
\gamma^2 + (1-\alpha)\frac{1}{\gamma_1^2} \biggr].
\]
Since $\gamma^2< 1$ and $1/\gamma_1^2> 1$,
we can choose $\alpha\in[0,1]$ so that
$\E(\sigma^*_1\sigma^*_2)=\gamma_1\gamma_2=\E(\sigma_1\sigma_2)$;
explicitly,
\begin{equation}
\label{eq:alph} \alpha= \bigl(1-\gamma_1^2\bigr)/
\bigl(1-\gamma^2\gamma_1^2\bigr).
\end{equation}
This proves that $\wh{\Upsilon}$ dominates $\Upsilon$.
\end{pf*}
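The algebra in the sketch above is easy to check numerically. The snippet below (an illustrative verification with function names of our own choosing) evaluates $\alpha$ from \eqref{eq:alph} and confirms that the resulting channel matches the pair correlation $\E(\sigma_1\sigma_2)=\gamma_1\gamma_2$ for any admissible parameters.

```python
def alpha(gamma, g1):
    # Mixing weight from eq. (alph): alpha = (1 - gamma_1^2) / (1 - gamma^2 gamma_1^2),
    # written here with g1 standing for gamma_1.
    return (1 - g1 ** 2) / (1 - gamma ** 2 * g1 ** 2)

def star_pair_correlation(gamma, g1, g2):
    # E(sigma*_1 sigma*_2): with probability alpha the product is
    # hat-sigma_1 hat-sigma_2 (mean gamma^2 g1 g2); otherwise it is
    # hat-sigma_1^2 z = z (mean g2 / g1).
    a = alpha(gamma, g1)
    return a * gamma ** 2 * g1 * g2 + (1 - a) * g2 / g1
```

Since the identity $\alpha\gamma^2 + (1-\alpha)/\gamma_1^2 = 1$ holds exactly for this choice of $\alpha$, the check succeeds to machine precision.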
\begin{pf*}{Proof of Theorem~\ref{thmm:main1again}\normalfont{(ii)}}
Fix $t>0$ throughout.
Now for $\eta\in\{0,1\}^{S}$, consider the construction
of $\eta^{\eta}_t$ given in \eqref{eq:duality}.
Letting $\calZ(t)=\{B^{x,t}_u,x\in S, u\in[0,t]\}$, we may write
\[
\mu^{\eta}_t=\int \mu^{\eta}_{t}\bigl(
\cdot |\calZ(t)\bigr) \,d\P\bigl(\calZ(t)\bigr).
\]
Therefore, to prove \eqref{eq:main2FINITE}, it suffices to
prove the stronger fact that
for any $\eta_1,\eta_2\in\{0,1\}^{S}$ and any realization $\calZ$,
\begin{equation}
\label{eq:main2againagain}\bigl \| \mu^{\eta_1}_t( \cdot |\calZ)-
\mu^{\eta_2}_t( \cdot |\calZ )\bigr\|_{\mathtt{TV}} \le\max
_{\eta_1,\eta_2\in\{0,1\}^{S}}\bigl\|\mu^{\eta_1,H}_t-\mu
^{\eta
_2,H}_t\bigr\|_{\mathtt{TV}}.
\end{equation}
To proceed, we will first give for any $\eta\in\{0,1\}^{S}$ and
realization $\calZ$, a useful alternative description of
$\mu^{\eta}_t( \cdot |\calZ)$. Clearly $\calZ$ yields
a finite number of disjoint trees $T_1,T_2,\ldots,T_m$ which
describe the coalescing picture. (In the realization of
Figure~\ref{fig:harris-rep}, there are three trees indicated
by the dotted, solid and dashed lines.) Each tree has its
root sitting at $S\times\{0\}$ and its leaves sitting at $S
\times\{t\}$ in the space--time diagram. Let $x_j$ be the
root of $T_j$ and $L_j$ be the set of leaves; the $L_j$'s
are disjoint, and their union is (identified with) $S$. We
also let $\calV_j$ be the set of space--time points which consists
of the root $(x_j,0)$ along with the leaves $(\ell,t)$
and branch points of $T_j$, and view $\calV_j$ as a
tree. [If at time $s$, a chain moves from $w$ to $z$
coalescing with another walker, then we consider the branch
point to be $(z,s)$ rather than $(w,s)$.] None of this
depends on the configuration $\eta$. Note that the
branching is always at most 2 and that the tree can move
from one vertical line to another; see the solid tree in
Figure~\ref{fig:harris-rep}.
Let $Y^{\eta,j}$ be the process $\{\eta^{\eta}_s\}_{s\le t}$
conditioned on $\calZ$ restricted to $\calV_j$.
(This process also depends of course on $t$ and $\calZ$, but its dependence
on $\eta$ is what we wish to emphasize.)
Next, conditioned on $\calZ$, $Y^{\eta,1},Y^{\eta,2},\ldots,Y^{\eta,m}$
are clearly independent since $Y^{\eta,j}$ depends only on
$\Pi_t\cap T_j$ and the corresponding $Z^x_n$'s.
[This implies of course that
$\eta^{\eta}_{t}(L_1),\eta^{\eta}_{t}(L_2),\ldots,\eta^{\eta}_{t}(L_m)$
are conditionally independent given $\calZ$.]
We also let $Y^{\eta}$ be the process $\{\eta^{\eta}_s\}_{s\le t}$
restricted to $\bigcup_j \calV_j$. Crucially, $Y^{\eta,j}$ has the following
alternative simpler description as a tree-indexed Markov chain,
which is easy to verify and left to the reader.
At the root $(x_j,0)$ of $\calV_j$, the value of $Y^{\eta,j}$ is
$\eta(x_j)$.
Inductively, the value of $Y^{\eta,j}$ at a particular node is taken
to be the same as the value of its parent node
(which is lower down on the time axis) with probability
$\frac{1+e^{-s}}{2}$ where $s$ is the time difference
between these two nodes, and the opposite value otherwise.
These random choices are taken independently.
The dependence of $Y^{\eta,j}$ on $\eta$ is
only through the initial state $\eta(x_j)$; otherwise, the transition
mechanism is the same.
Consider now the process $\tilde{Y}^\eta$ indexed by $S$ and defined
by the
following two properties: the random variables $\tilde Y^\eta(x),x\in S$
are independent, and for each $j$, for all $x\in L_j$,
$\tilde{Y}^\eta(x)=\eta(x_j)$ with probability $\frac{1+e^{-t}}{2}$
and the opposite value
otherwise. It is easy to see that the distribution
of $\tilde{Y}^\eta$ is simply the distribution for
continuous time random walk on the hypercube at time $t$
started from the configuration whose state at $x$ is $\eta(x_j)$
for $x\in L_j$, $j=1,\ldots,m$.
Theorem~\ref{thmm:channel} now implies
that for each $j$, there is a channel (depending on $T_j$)
\emph{not depending on $\eta(x_j)$} which transforms the random variables
$\tilde{Y}^\eta(L_j)$ to the random variables $Y^{\eta}(L_j)=Y^{\eta
,j}(L_j)$,
meaning that given the tree $T_j$, there is a function
\[
f_{j}\dvtx \{0,1\}^{L_j}\times[0,1]\rightarrow\{0,1
\}^{L_j}
\]
so that if $U$ is a uniform random variable on $[0,1]$,
independent of everything else, we have that for each value of $\eta(x_j)$,
\[
f_{j}\bigl(\tilde{Y}^\eta(L_j), U\bigr) \quad\mbox{and}\quad {Y}^\eta(L_j)
\]
are equal in distribution.
Since $\tilde{Y}^\eta(L_j)$ are independent as we vary $j$ and similarly
for $Y^\eta(L_j)$, it follows that we have a function (depending on
$\calZ$)
\[
f\dvtx \{0,1\}^{S}\times[0,1]\rightarrow\{0,1\}^{S}
\]
so that if $U$ is a uniform random variable on $[0,1]$,
independent of everything else, we have that for any $\eta$,
\[
f\bigl(\tilde{Y}^\eta(S), U\bigr) \quad\mbox{and}\quad {Y}^\eta(S)
\]
are equal in distribution.
This then easily yields that for any $\eta_1$ and $\eta_2$,
\[
\bigl\| {Y}^{\eta_1}(S) - {Y}^{\eta_2}(S)\bigr\|_{\mathtt{TV}} \le \bigl\|
\tilde{Y}^{\eta_1}(S) - \tilde{Y}^{\eta_2}(S)\bigr\|_{\mathtt{TV}}.
\]
Finally, it is clear from construction that
\[
\bigl\| \tilde{Y}^{\eta_1}(S) - \tilde{Y}^{\eta_2}(S)\bigr\|_{\mathtt{TV}}
\le \max_{\eta_1,\eta_2\in\{0,1\}^{S}}\bigl\|\mu^{\eta_1,H}_t-
\mu^{\eta
_2,H}_t\bigr\| _{\mathtt{TV}},
\]
completing the proof.
\end{pf*}
\section{The $n$-star and the proof of Theorem~\texorpdfstring{\protect\ref{thmm:wheel}}{2}}\label
{s.wheel}
In this section, we consider the noisy voter model
$\{\eta^n_t\}$ on the $n$-star. We first explain why this
gives us an example showing that conclusion
\eqref{eq:main1} of Theorem~\ref{thmm:main1} is not true in
general without the assumption of a uniform bound on the
$\rho_n$'s even if $(\qnmax)$ is bounded. Consider first
continuous time random walk on the $n$-star with rate 1,
meaning that the walker waits an exponential amount of time
with parameter 1 and then moves to a uniform neighbor. If we
run a corresponding system of coalescing Markov chains
starting from each point, it is not hard to see that any
given pair coalesces in time $O(1)$, and that
the expected time until all chains coalesce is at most
$O(\log n)$. If we now multiply all the rates by a certain
large constant $c$, we will have that the expected time
until all chains coalesce is at most $\log n/32$. Then by
Markov's inequality, the probability that the chains have
not coalesced by time $\log n/4$ is at most $1/8$. Since
each site is rerandomized at rate 1, it is easy to see from
this fact and the graphical construction in
Section~\ref{s.thmproof} that the mixing
time is at most $\log n/3$.
The rest of the section is devoted to the proof of Theorem~\ref{thmm:wheel}.
\begin{pf*}{Proof of Theorem~\ref{thmm:wheel}}
We begin with (i). This is similar to the proof of Theorem~\ref
{thmm:main1again}(i),
except one considers a different eigenfunction. Partition the leaves
into disjoint sets $A$ and $B$ each with $n/2$ elements. Let
\[
\Phi(\eta):=\sum_{x\in A}\eta(x)-\sum
_{x\in B}\eta(x).
\]
It is elementary to check that
\begin{equation}
\E_\eta\bigl[\Phi(\eta_t)\bigr]=e^{-2t}\Phi(
\eta).
\end{equation}
[Note that here the eigenvalue at time $t$ is $e^{-2t}$,
while in \eqref{eq:eigenvalue} it is $e^{-t}$.]
As in the proof of Theorem~\ref{thmm:main1again}(i), we
consider the discrete time Markov chain obtained by
sampling our process at times which are integer multiples of
$1/n$. Then $\Phi$ is an eigenfunction for this discrete time chain
with eigenvalue
$\lambda:=e^{-2/n}\in(\frac{1}2,1)$ (if $n\ge3$). We can now apply
equation (13.9) from Section~13.2 of \cite{LPW} to this discrete time
Markov chain with
$t$ being $\frac{n}{4}(\log n-C)$,
$x$ being the configuration $\eta_0$ (whose corresponding $\Phi$ value
is $n/2$)
and $R$ being $6$. After simplification
[and recalling that our continuous time system at time $\frac{1}4(\log
n-C)$ is the discrete time system at time
$\frac{n}{4}(\log n-C)$] we get~\eqref{eq:wheel1}.
For (ii), note first that, in the terminology introduced in
Section~\ref{s.background},
we want to show that $t^n_{\mathtt{mix}}({\mathbf1})=O(1)$.
We first note that by symmetry, if we only look at the state of the
center of the star and
the number of leaves which are in state 1, then this is also a Markov chain.
(It is a \emph{projection} of the original chain in the sense of
Section~2.3.1 in
\cite{LPW}.) Let $^{\rm R}\eta^n_t$ denote this ``reduced'' Markov chain
whose state space is $\{0,1\}\times\{0,1,\ldots, n\}$.
The key step in proving that $t^n_{\mathtt{mix}}({\mathbf1})=O(1)$
is to show that this reduced chain has mixing time $O(1)$, which is interesting
in itself; this is stated in Lemma~\ref{lemma.reducedwheel} below.
Assuming this lemma, one proceeds as follows.
Keeping symmetry in mind, we can generate a realization of the
configuration at time $t$
starting from all 1's by considering the reduced system at time $t$
starting from $(1,n)$,
and if the reduced system is in state $(a,k)$, we then construct a
configuration for
the full system by letting the center be in state $a$ and choosing a uniform
random subset of size $k$ from the $n$ leaves to be in state 1 and the
rest to be in state 0.
We can generate a realization from the stationary distribution for the
full system
in an analogous way by choosing $(a,k)$ from the stationary
distribution of the reduced system
and then letting the center be in state $a$ and choosing a uniform
random subset of size $k$ from the $n$ leaves to be in state 1 and the
rest to be in state 0.
Therefore, by an obvious coupling, we have that the total
variation distance between the full system at time $t$ started from
${\mathbf1}$ and the stationary distribution for the full system
is exactly the total variation distance
between the reduced system at time $t$ started from $(1,n)$
and the stationary distribution for the reduced system.
Now the proposition follows from Lemma~\ref{lemma.reducedwheel}.
\end{pf*}
\begin{lemma}\label{lemma.reducedwheel}
The mixing time for $\{^{\rm R}\eta^n_t\}$ is $O(1)$.
\end{lemma}
\begin{pf}
Observe that the infinitesimal rates for this reduced chain are as follows:
\begin{eqnarray*}
(0,k)&\rightarrow&(1,k) \mbox{ at rate } \frac{1}{2}+\frac{k}{n},
\\
(0,k)&\rightarrow&(0,k+1) \mbox{ at rate } \frac{n-k}{2},
\\
(0,k)&\rightarrow&(0,k-1) \mbox{ at rate } \frac{3k}{2},
\\
(1,k)&\rightarrow&(0,k) \mbox{ at rate } \frac{1}{2}+\frac{n-k}{n},
\\
(1,k)&\rightarrow&(1,k+1) \mbox{ at rate } \frac{3(n-k)}{2},
\\
(1,k)&\rightarrow&(1,k-1) \mbox{ at rate } \frac{k}{2}.
\end{eqnarray*}
We denote this reduced system by $(X_t,Y_t)$ where $n$ is suppressed in
the notation.
The key fact that we will show is
that there exists $c_1>0$ so that for all $n$, for all (initial) states
$(a,\ell)$ and
for all (final) states $(b,k)$ with $k\in[0.4n,0.6n]$,
\[
\P_{(a,\ell)}\bigl((X_{10},Y_{10})=(b,k)\bigr)\ge
c_1/n.
\]
By equation (4.13) in \cite{LPW}, this implies that there
exists $c_2>0$ so that for all $n$,
for any two initial states, the total variation distance of
the corresponding processes at time 10 is at most $1-c_2$. This
easily leads to the claim of the lemma.
Since it is very easy for the center to change states, it is easy to
see that it suffices
to prove the above key fact when $a=1$ and $b=0$.
Let $U$ be the event that the center during $[0,10]$ never attempts an
update by looking
at one of its neighbors.
Letting $A_t:=U\cap\{X_s=1\ \forall s\in[0,t]\}$, one checks that
the conditional distribution of $Y_t$ given $A_t$
is the sum of two independent binomial distributions with
respective parameters $(\ell, \frac{3}4+\frac{1}4 e^{-2t})$ and
$(n-\ell, \frac{3}4-\frac{3}4 e^{-2t})$. In particular,
\[
g(t):=\E\biggl[\frac{Y_t}{n}\Big|A_t\biggr]=\frac{3}4+
\biggl(\frac{\ell}{n}-\frac{3}4\biggr) e^{-2t}.
\]
One also easily checks that for all $n$ and $\ell$,
\begin{equation}
\label{eq:gLipshitz} \bigl|g(t)-g(s)\bigr|\le2|t-s|.
\end{equation}
The representation of $Y_t$ as a sum of two binomials when conditioned
on $A_t$ yields
$\operatorname{Var}(\frac{Y_t}{n}|A_t)\le1/n$, and hence by Chebyshev's inequality we
have that
for all $n$, $\ell$, $t$ and $\sigma$,
\begin{equation}
\label{eq:gconcentration} \P_{(a,\ell)}\biggl(\biggl|\frac{Y_t}{n}-g(t)\biggr|\ge
\frac{\sigma}{\sqrt
{n}}\Big|A_t\biggr)\le 1/\sigma^2.
\end{equation}
Now, letting $B_t:=U\cap\{X_s=0\ \forall s\in[t,10]\}$, one checks that
the conditional distribution of $Y_{10}$ given $B_t\cap\{Y_t=nu\}$
is the sum of two independent binomial distributions with
respective parameters $(nu, \frac{1}4+\frac{3}4 e^{-2(10-t)})$ and
$(n(1-u), \frac{1}4-\frac{1}4 e^{-2(10-t)})$. In particular,
\[
h(u,t):=\E\biggl[\frac{Y_{10}}{n}\Big|B_t\cap\{Y_t=nu\}
\biggr]=\frac{1}4+\biggl(u-\frac{1}4\biggr) e^{-2(10-t)}.
\]
One also easily checks that for all $u,v$ and $t,s\in[0,10]$,
\begin{equation}
\label{eq:hLipshitz}\bigl |h(u,t)-h(v,s)\bigr|\le2\bigl(|u-v|+|t-s|\bigr).
\end{equation}
By an easy variant of the local central limit theorem, there exists
$c_3>0$ so that for all $n$,
$u$, $t\in[0,9.9]$ and the integers $v\in[nh(u,t)-10\sqrt
{n},nh(u,t)+10\sqrt{n}]$, one has that
\begin{equation}
\label{eq:LCT} \P\bigl[Y_{10}=v|B_t\cap
\{Y_t=nu\}\bigr]\ge\frac{c_3}{\sqrt{n}}.
\end{equation}
Next, one easily checks that $h(g(0),0)\le0.4$ and $h(g(9.9),9.9)\ge
0.6$, and hence by
our assumptions on $k$, there exists $t^{\star}\in[0,9.9]$ such that
$h(g(t^{\star}),t^{\star})=\frac{k}{n}$. [It is easily checked that
$h(g(t),t)$ is increasing
in $t$ but this is not needed to conclude the existence of $t^{\star}$.]
We now let $G$ be the intersection of the events $U$ and that during $[0,10]$,
the center flips its state exactly once and that this occurs during
$[t^{\star}-\frac{1}{n^{1/2}},t^{\star}+\frac{1}{n^{1/2}}]$. Clearly
there exists
$c_4>0$ so that for all $n$ and $t^{\star}$, we have that $\P(G)\ge
\frac{c_4}{\sqrt{n}}$.
On the event $G$, we let $T$ denote this unique flipping time of the center.
Now, by \eqref{eq:gLipshitz}, $|g(T)-g(t^{\star})|\le2/\sqrt{n}$
and hence
\[
\biggl\{\biggl|\frac{Y_T}{n}-g\bigl(t^{\star}\bigr)\biggr|\ge4/\sqrt{n}\biggr\}
\subseteq \biggl\{\biggl|\frac{Y_T}{n}-g(T)\biggr|\ge2/\sqrt{n}\biggr\}.
\]
Applying \eqref{eq:gconcentration}, this yields
\[
\P_{(a,\ell)}\biggl(\biggl|\frac{Y_T}{n}-g\bigl(t^{\star}\bigr)\biggr|\ge
\frac{4}{\sqrt
{n}}\Big|G,T\biggr)\le1/4.
\]
We therefore have
\[
\P_{(a,\ell)}(G\cap H)\ge\frac{c_4}{2\sqrt{n}},
\]
where $H:=\{|\frac{Y_T}{n}-g(t^{\star})|\le\frac{4}{\sqrt{n}}\}$.
Given this lower bound, to prove the key claim now, it would suffice
to show that for all parameters,
\begin{equation}
\label{eq:final} \P_{(a,\ell)}(Y_{10}=k|G\cap H)\ge
\frac{c_3}{\sqrt{n}},
\end{equation}
where $c_3$ comes from \eqref{eq:LCT}.
By \eqref{eq:hLipshitz}, $|T-t^{\star}| \le\frac{1}{\sqrt{n}}$ and
$|\frac{Y_T}{n}-g(t^{\star})|\le\frac{4}{\sqrt{n}}$ imply that
\[
\biggl|h\biggl(\frac{Y_T}{n},T\biggr)-h\bigl(g\bigl(t^{\star}
\bigr),t^{\star}\bigr)\biggr| \le\frac{10}{\sqrt{n}},
\]
and hence by the definition of $t^{\star}$, we have
$|h(\frac{Y_T}{n},T)-\frac{k}{n}| \le\frac{10}{\sqrt{n}}$.
Finally \eqref{eq:final} now follows from \eqref{eq:LCT} by
conditioning on
the exact values of $T$ and $Y_T$, completing the proof.
\end{pf}
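The rate table of the reduced chain used in the proof above can be encoded directly; the sketch below (with a function name of our own choosing) returns the outgoing rates from a state $(a,k)$, which is convenient for a Gillespie-type simulation of $(X_t,Y_t)$.

```python
def reduced_chain_rates(a, k, n):
    # Outgoing transition rates of the reduced n-star chain (X_t, Y_t) from
    # state (a, k): a is the center's state, k the number of leaves in state 1,
    # following the rate table in the proof of the lemma.
    if a == 0:
        rates = {(1, k): 0.5 + k / n,          # center flips to 1
                 (0, k + 1): (n - k) / 2,      # a 0-leaf flips to 1
                 (0, k - 1): 3 * k / 2}        # a 1-leaf flips to 0
    else:
        rates = {(0, k): 0.5 + (n - k) / n,    # center flips to 0
                 (1, k + 1): 3 * (n - k) / 2,  # a 0-leaf flips to 1
                 (1, k - 1): k / 2}            # a 1-leaf flips to 0
    # Drop zero-rate moves and moves leaving the state space {0, ..., n}.
    return {s: r for s, r in rates.items() if 0 <= s[1] <= n and r > 0}
```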
\begin{remark*} In view of the proof of Theorem~\ref{thmm:wheel}(ii),
it also follows that for the reduced system, $t^n_{\mathtt{mix}}(\vep
)({\mathbf1})=O(\log(1/\vep))$.
\end{remark*}
\section*{Acknowledgments}
This work was initiated when the third author was
visiting Microsoft Research in Redmond, WA and was continued when
the first author was visiting Chalmers University of Technology.
\section{Introduction}
\label{intro}
Row projection methods are a class of iterative linear system solvers
\cite{bramley1989row,bramley1992row,kamath1989projection} that are used for solving linear systems of equations of the form
\begin{equation}
\label{eq:systemAx}
Ax=f,
\end{equation}
where $A$ is an $n \times n$ sparse nonsymmetric nonsingular matrix, $x$ and $f$ are column vectors of size $n$.
In these methods, the solution is computed through successive projections onto rows of $A$.
There are two major variations, known as Kaczmarz \cite{kaczmarz1937angenaherte} and Cimmino \cite{cimmino1938calcolo}.
Kaczmarz obtains the solution through a product of orthogonal projections whereas Cimmino reaches the solution through a sum of orthogonal projections.
Cimmino is known to be more amenable to parallelism than Kaczmarz \cite{bramley1992row}.
However, Kaczmarz can be still parallelized via block Kaczmarz~\cite{kamath1989projection}, CARP~\cite{gordon2005} or multi-coloring~\cite{galgon2015parallel}.
The required number of iterations for the Cimmino algorithm, however, can be quite large.
One alternative is the block Cimmino method~\cite{arioli1992block}, which is a block row projection method.
Iterative block Cimmino has been studied extensively in \cite{arioli1992block,arioli1995block,bramley1992row,censor1988parallel,drummond2015partitioning,
elfving1980block,elfving2009properties}.
A pseudo-direct version of block Cimmino based on the augmented block row projections is proposed in~\cite{duff2015augmented}.
However, as in any other direct solver~\cite{MUMPS,bolukbasi2016multithreaded,li2005overview,schenk2004solving}, this approach also suffers from extensive memory requirements due to fill-in.
In the block Cimmino scheme, the linear system~\cref{eq:systemAx} is partitioned into $K$ block rows,
where $K \leq n $, as follows:
\begin{equation}
\label{eq:blockCimmino}
\begin{pmatrix}
A_1 \\
A_2 \\
\vdots \\
A_K \\
\end{pmatrix}
x =
\begin{pmatrix}
f_1 \\
f_2 \\
\vdots \\
f_K \\
\end{pmatrix}.
\end{equation}
\noindent In~\cref{eq:blockCimmino}, the coefficient matrix and right-hand side vector are partitioned conformably. Here $A_i$ is a submatrix of size $m_i \times n$ and $f_i$ is a subvector of size $m_i$, for $i=1,2,...,K$, where
\begin{equation}
n = \sum\limits_{i=1}^{K}m_i.
\end{equation}
The block Cimmino scheme is given in~\Cref{algo1}, where $A_i^+$ is the Moore-Penrose pseudoinverse of $A_i$ and it is defined as \begin{equation}
A_i^+ = A_i^T(A_iA_i^T)^{-1}.
\end{equation}
In~\Cref{algo1}, $A_i^+$ is used for the sake of the clarity of the notation and it is never computed explicitly.
In fact, at line 4 of~\Cref{algo1}, the minimum norm solution of an underdetermined linear least squares problem is obtained via the augmented system approach as discussed later.
In~\Cref{algo1}, the $\delta_i$ vectors, which are of size $n$, can be computed independently in parallel without any communication; hence the block Cimmino algorithm is quite suitable for parallel computing platforms.
At line 6, the solution is updated by the sum of projections which is scaled by the relaxation parameter ($\omega$).
In the parallel Cimmino algorithm, communication is required only for summing up the $\delta_i$s.
\begin{algorithm}
\caption{Block Cimmino method}
\label{algo1}
\begin{algorithmic}[1]
\STATE{Choose $x^{(0)}$}
\WHILE{$t=0,1,2,\ldots,$ until convergence}
\FOR{$i = 1,\ldots,K$}
\STATE{$\delta_i = A_i^+(f_i - A_ix^{(t)})$}
\ENDFOR
\STATE{$x^{(t+1)} = x^{(t)} + \omega \sum\limits_{i=1}^{K} \delta_i $ }
\ENDWHILE
\end{algorithmic}
\end{algorithm}
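To make the iteration concrete, a dense NumPy sketch of \Cref{algo1} is given below. This is an illustration only: the function name, the fixed iteration count, and the explicit pseudoinverses are our own choices; the actual solver applies $A_i^+$ via an augmented system (discussed later) with a sparse direct solver.

```python
import numpy as np

def block_cimmino(A, f, row_blocks, omega=1.0, iters=500, x0=None):
    # Basic block Cimmino iteration (Algorithm 1):
    #   x <- x + omega * sum_i A_i^+ (f_i - A_i x),
    # where row_blocks is a list of row-index lists partitioning the rows of A.
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    pinvs = [np.linalg.pinv(A[rows, :]) for rows in row_blocks]
    for _ in range(iters):
        delta = np.zeros(n)
        for rows, Ai_pinv in zip(row_blocks, pinvs):
            # The K projections below are independent and could run in parallel.
            delta += Ai_pinv @ (f[rows] - A[rows, :] @ x)
        x = x + omega * delta
    return x
```

With $\omega=1$ and two one-row blocks of a nonsingular $2\times 2$ matrix, the iteration converges geometrically at rate $|\cos\theta|$, where $\theta$ is the angle between the two rows.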
An iteration of the block Cimmino method can be reformulated as follows:
\begin{equation}
\label{eq:iterscheme}
\begin{tabular}{ll}
$x^{(t+1)} $ & $ = x^{(t)} + \omega \sum\limits_{i=1}^{K} A_i^+ \left( f_i - A_ix^{(t)} \right) $ \\
& $ = \left( I - \omega \sum\limits_{i=1}^{K} A_i^+A_i \right)x^{(t)} + \omega \sum\limits_{i=1}^{K} A_i^+f_i $ \\
& $ = (I - \omega H) x^{(t)} + \omega \sum\limits_{i=1}^{K} A_i^+f_i $ \\
& $ = Qx^{(t)} + \omega \sum\limits_{i=1}^{K} A_i^+f_i $,
\end{tabular}
\end{equation}
where $Q$ is the iteration matrix for block Cimmino algorithm.
$\omega H = I- Q$ is the sum of $\mathcal{P_R}(A_i^T)$s (projections onto $A_i^T$) and it is defined by
\begin{equation}
\label{eq:H}
\begin{split}
\omega H & = \omega \sum\limits_{i=1}^{K} \mathcal{P_R}(A_i^T) = \omega \sum\limits_{i=1}^{K} A_i^+A_i.
\end{split}
\end{equation}
The projections in block Cimmino iterations can be calculated using several approaches, such as normal equations~\cite{golub2012matrix}, seminormal equations~\cite{golub1969,golub2012matrix}, QR factorization~\cite{golub2012matrix} and augmented system \cite{arioli1989augmented,golub2012matrix}.
The normal and seminormal equation approaches are not considered since they have the potential of introducing numerical difficulties that can be disastrous \cite{demmel1993improved,golub1969} in some cases when the problem is ill-conditioned.
Although the QR factorization is numerically more stable, it is computationally expensive.
Therefore we use the augmented system approach, which requires the solution of sparse linear systems that can be handled effectively by a sparse direct solver~\cite{arioli1992block}.
Note that if submatrix $A_i$ is in a column overlapped block diagonal form, one could also use the algorithm in~\cite{torun2017parallel}.
However, this approach is not considered since we assume no structure for the coefficient matrix.
In the augmented system approach, we obtain the solution of~\cref{augmented} by solving the linear system~\cref{augmentedMatrix},
\begin{equation}
A_i \delta_i = r_i \hspace{1cm} (r_i=f_i - A_i x^{(t)})
\label{augmented}
\end{equation}
\begin{equation}
\begin{pmatrix}
I & A_i^T \\
A_i & 0 \\
\end{pmatrix}
\begin{pmatrix}
\delta_i \\
\varsigma_i \\
\end{pmatrix}
=
\begin{pmatrix}
0 \\
r_i \\
\end{pmatrix}.
\label{augmentedMatrix}
\end{equation}
Hence, the solution of the augmented system gives $\delta_i$.
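A dense sketch of this computation is given below (illustrative only: the paper's solver would factorize the sparse augmented matrix once with a direct method and reuse the factors across iterations). The first block of the solution equals $A_i^+ r_i$ without ever forming the pseudoinverse.

```python
import numpy as np

def project_via_augmented_system(Ai, ri):
    # Solve [[I, Ai^T], [Ai, 0]] [delta; sigma] = [0; ri].
    # Eliminating sigma shows delta = Ai^T (Ai Ai^T)^{-1} ri = Ai^+ ri.
    m, n = Ai.shape
    K = np.block([[np.eye(n), Ai.T],
                  [Ai, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), ri])
    return np.linalg.solve(K, rhs)[:n]
```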
\subsection{The Conjugate Gradient acceleration of block Cimmino method}
\label{subsectioncgaccel}
Convergence rate of the block Cimmino algorithm is known to be slow~\cite{bramley1992row}.
In~\cite{bramley1992row}, the Conjugate Gradient (CG) method is proposed to accelerate the row projection method.
It is also reported that the CG accelerated Cimmino method competes favorably with classical preconditioned Generalized Minimum Residual (GMRES) and Conjugate Gradient on Normal Equations (CGNE) for the solution of nonsymmetric linear systems that arise in elliptic partial differential equations.
On the other hand, it should be noted that the main motivation for the block Cimmino algorithm is its amenability to parallelism \cite{drummond2015partitioning}.
The iteration scheme of the block Cimmino \cref{eq:iterscheme} gives
\begin{equation}
x^{(t+1)} = (I-\omega H)x^{(t)} + \omega \sum\limits_{i=1}^{K} A_i^+f_i ,
\end{equation}
where the $H$ matrix is symmetric and positive definite according to~\cref{eq:H} if $A$ is square and full rank.
Hence, one can solve the following system using CG,
\begin{equation}
\label{eq:cg}
\omega Hx = \omega\xi,
\end{equation}
where $\xi = \sum_{i=1}^{K} A_i^+f_i$ and $x$ is the solution vector of the system~\cref{eq:systemAx}.
Note that since $\omega$ appears on both sides of~\cref{eq:cg}, it does not affect the convergence of CG.
\Cref{algo2} is the pseudocode for the CG accelerated block Cimmino method~\cite{zenadi2013methodes}, which is in fact the classical CG applied to the system given in~\cref{eq:cg}.
In the second line of the algorithm, the initial residual is computed in the same way as the first iteration of \cref{algo1}.
The matrix--vector multiplication in CG is expressed as the solution of $K$ independent underdetermined systems at line~5; these can be solved in parallel, and the results are summed to obtain $\psi^{(t)}$ via an all-reduce operation.
\begin{algorithm}
\caption{Conjugate Gradient acceleration of block Cimmino method}
\label{algo2}
\begin{algorithmic}[1]
\STATE{Choose $x^{(0)}$}
\STATE{$r^{(0)}= \xi - \sum_{i=1}^{K} A_i^+A_i \; x^{(0)}$}
\STATE{$p^{(0)} = r^{(0)}$}
\WHILE{$t=0,1,2,\ldots,$ until convergence} \vspace{0.1cm}
\STATE{$\psi^{(t)} = \sum_{i=1}^{K} A_i^+A_i \; p^{(t)}$} \vspace{0.1cm}
\STATE{$ \alpha^{(t)} = ({{{r}}^{(t)}}^T {{r}}^{(t)}) / ({p^{(t)}}^T \psi^{(t)}) $} \vspace{0.05cm}
\STATE{$ x^{(t+1)} = x^{(t)} + \alpha^{(t)} p^{(t)} $} \vspace{0.05cm}
\STATE{$ r^{(t+1)} = r^{(t)} - \alpha^{(t)} \psi^{(t)}$} \vspace{0.05cm}
\STATE{$ \beta^{(t)} = ({r^{(t+1)}}^T r^{(t+1)}) / ({{{r}}^{(t)}}^T {{r}}^{(t)}) $} \vspace{0.05cm}
\STATE{$ p^{(t+1)} = r^{(t+1)} + \beta^{(t)} p^{(t)} $}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
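A dense NumPy sketch of \Cref{algo2} follows (again illustrative: pseudoinverses are formed explicitly only for clarity, and the stopping tolerance is our own choice). It applies classical CG to $Hx=\xi$ with $H=\sum_i A_i^+A_i$ and $\xi=\sum_i A_i^+f_i$.

```python
import numpy as np

def cg_block_cimmino(A, f, row_blocks, tol=1e-10, maxit=200):
    # CG acceleration of block Cimmino (Algorithm 2): classical CG on H x = xi.
    n = A.shape[1]
    pinvs = [np.linalg.pinv(A[rows, :]) for rows in row_blocks]

    def apply_H(v):
        # Sum of projections of v onto the row spaces of the blocks.
        return sum(P @ (A[rows, :] @ v) for rows, P in zip(row_blocks, pinvs))

    xi = sum(P @ f[rows] for rows, P in zip(row_blocks, pinvs))
    x = np.zeros(n)
    r = xi - apply_H(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        psi = apply_H(p)                 # K independent projections, summed
        alpha = rs / (p @ psi)
        x = x + alpha * p
        r = r - alpha * psi
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Since $H$ is symmetric positive definite for a square full-rank $A$, CG converges for any block-row partitioning; the partitioning only affects how fast.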
\subsection{The effect of partitioning}
The convergence rate of the CG accelerated block Cimmino algorithm is essentially that of CG applied to~\cref{eq:cg}. A well-known upper bound on the convergence rate can be given in terms of the extreme eigenvalues ($\lambda_{min}$ and $\lambda_{max}$) of the coefficient matrix. Let
\begin{equation}
\kappa = \frac{\lambda_{max}}{\lambda_{min}}
\end{equation}
be the 2-norm condition number of $H$. Then, as in \cite{golub2012matrix}, an upper bound on the convergence rate of the CG accelerated block Cimmino can be given by
\begin{equation}
\frac{||x^{(t)} -x^{*}||_{H}}{||x^{(0)} -x^{*}||_{H}} \leq 2 \left( \frac{\sqrt{\kappa} -1 } {\sqrt{\kappa} + 1} \right)^{t}
\end{equation}
where $x^{*}$ is the exact solution and $\|y\|_{H}= \sqrt{y^{T}Hy}$. Furthermore, it was shown that the convergence rate of CG not only depends on the extreme eigenvalues but also on the separation between those extreme eigenvalues and the interior eigenvalues~\cite{vanderSluis1986}, as well as the clustering of the interior eigenvalues~\cite{jennings1977influence}.
In summary, the convergence rate of the CG accelerated block Cimmino depends on the extreme eigenvalues and the number of clusters as well as the quality of the clustering.
Therefore, the partitioning of the coefficient matrix $A$ into block rows is crucial for improving the convergence rate of the CG accelerated block Cimmino algorithm, since it can improve the eigenvalue distribution of $H$.
Note that the eigenvalues of $H$ are affected only by the block-row partitioning of the coefficient matrix $A$ and are independent of any column ordering~\cite{drummond2015partitioning}.
Let the QR factorization of $A_i^T$ be defined as
\begin{equation}
Q_iR_i = A_i^T,
\end{equation}
where the matrices $Q_i$ and $R_i$ have dimensions of $n \times m_i$ and $m_i \times m_i$, respectively.
Then, the $H$ matrix can be written as follows~\cite{arioli1992block}:
\begin{equation}
\label{eq:Iterat}
{\renewcommand{\arraystretch}{1.7}
\begin{tabular}{ll}
$ H $ & $ = \sum\limits_{i=1}^{K} A_i^T(A_iA_i^T)^{-1}A_i$ \\
$ $ & $ = \sum\limits_{i=1}^{K} Q_iQ_i^T$ \\
$ $ & $ = (Q_1, \ldots ,Q_K)(Q_1, \ldots ,Q_K)^T $. \\
\end{tabular}
}
\end{equation}
Since the eigenvalue spectrum of matrix $(Q_1, \ldots ,Q_K)(Q_1, \ldots ,Q_K)^T$ is the same as the eigenvalue spectrum of matrix $(Q_1, \ldots ,Q_K)^T(Q_1, \ldots ,Q_K)$ \cite{golub1965calculating}, $H$ is similar to
\begin{equation}
\begin{pmatrix}
I_{m_1\times m_1} & {Q_1}^T Q_2 & \ldots & {Q_1}^T Q_K \\
{Q_2}^T Q_1 & I_{m_2\times m_2} & \ldots & {Q_2}^T Q_K \\
\vdots & \vdots & \ddots & \vdots \\
{Q_K}^T Q_1 & {Q_K}^T Q_2 & \ldots & I_{m_K \times m_K} \\
\end{pmatrix},
\label{HQR}
\end{equation}
where the singular values of matrix ${Q_i}^TQ_j$ represent the principal angles between the subspaces spanned by the rows of $A_i$ and $A_j$ \cite{bjorck1973numerical}.
Hence, the smaller the off-diagonals of the matrix~\cref{HQR}, the more eigenvalues of $H$ will be clustered around one by the Gershgorin theorem \cite{gershgorin1931uber}.
Therefore, the convergence rate of the block Cimmino method highly depends on the orthogonality among $A_i$ blocks.
If $A_i$ blocks are more orthogonal to each other, row inner products between blocks would be small and hence the eigenvalues will be clustered around one.
Consequently, CG is expected to converge in fewer iterations.
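This effect is easy to observe numerically. In the dense sketch below (illustrative only), mutually orthogonal row blocks give $H=I$, so every eigenvalue equals one, while partitioning a matrix with correlated rows into single-row blocks spreads the two eigenvalues symmetrically around one as $1\pm|\cos\theta|$.

```python
import numpy as np

def cimmino_H(A, row_blocks):
    # H = sum_i A_i^T (A_i A_i^T)^{-1} A_i: the sum of orthogonal projectors
    # onto the row spaces of the blocks A_i.
    n = A.shape[1]
    H = np.zeros((n, n))
    for rows in row_blocks:
        Ai = A[rows, :]
        H += Ai.T @ np.linalg.solve(Ai @ Ai.T, Ai)
    return H
```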
In the literature, Cuthill--McKee (CM)~\cite{cuthill1969reducing} based partitioning strategies~\cite{drummond2015partitioning,ruiz1992solution,zenadi2013methodes} are utilized to define block rows using CM level set information on the normal equations of the original matrix.
These strategies benefit from the level sets of CM for creating the desired number of block rows.
In CM, nodes in a level set have the same distance from the starting node and these nodes have neighbors only in the previous and the next level sets.
Therefore permuted matrix based on the ordering of the level sets constitutes a block tridiagonal structure.
These strategies may suffer from not reaching the desired the number of block rows due to smaller number of level sets.
They also suffer from obtaining quite unbalanced partitions due to a relatively larger sizes of some level sets.
Numerical values are considered only when a dropping-based filtering strategy is used.
Although filtering small elements of the normal equations before applying CM allows more freedom in partitioning by increasing the number of level sets, it does not preserve the properties of strict two-block partitioning~\cite{arioli1995block,ruiz1992solution,zenadi2013methodes}.
In addition, it is difficult to determine the best filtering threshold value a priori and find a common threshold which is ``good'' for all matrices.
In recent studies~\cite{drummond2015partitioning,zenadi2013methodes}, a hypergraph partitioning method is used to find a ``good" block-row partitioning for the CG accelerated block Cimmino algorithm.
It is reported that it performs better than CM based methods (with or without filtering small elements).
In hypergraph partitioning, the partitioning objective of minimizing the cutsize corresponds to minimizing the number of linking columns among row blocks, where a linking column refers to a column that contains nonzeros in more than one block row.
This in turn loosely relates to increasing the structural orthogonality~\cite{li2006miqr} among row blocks.
Here, two rows are considered to be structurally more orthogonal if they have fewer nonzeros in the same columns. This measure depends only on nonzero counts and ignores the numerical values of the nonzeros.
In this work, we propose a novel graph theoretical block-row partitioning method for the CG accelerated block Cimmino algorithm.
For this purpose, we introduce a row inner-product graph model of a given matrix $A$, and the problem of finding a ``good" block-row partitioning is then formulated as a graph partitioning problem on this model. The proposed method takes the numerical orthogonality between block rows of $A$ into account: the partitioning objective of minimizing the cutsize directly corresponds to minimizing the sum of inter-block inner products between block rows, thus leading to an improvement in the eigenvalue spectrum of $H$.
The validity of the proposed method is confirmed against two baseline methods by conducting experiments on a large set of matrices that arise in a variety of real life applications.
One of the two baseline methods is the state-of-the-art hypergraph partitioning method introduced in~\cite{drummond2015partitioning}.
We conduct experiments to study the effect of the partitioning on the eigenvalue spectrum of $H$.
We also conduct experiments to compare the performance of three methods in terms of the number of CG iterations and parallel CG time to solution.
The results of these experiments show that the proposed partitioning method is significantly better than both baseline methods in terms of all of these performance metrics.
Finally, we compare the preprocessing overheads of the methods and show that the proposed method incurs much less overhead than the hypergraph partitioning method, thus allowing a better amortization.
The rest of the paper is organized as follows. \Cref{section2} presents the proposed partitioning method and its implementation.
In \cref{section3}, we present and discuss the experimental results.
Finally, \cref{section4} concludes the paper and gives directions for future research.
\section{The proposed partitioning method}
\label{section2}
In this section, we first describe the row inner-product graph model and then show that finding a ``good" block-row partitioning can be formulated as a graph partitioning problem on this graph model.
Finally, we give the implementation details for the construction and partitioning of the graph model.
We refer the reader to \cref{appendix} for a short background on graph partitioning.
\subsection{Row inner-product graph model}
In the row inner-product graph $\mathcal{G}_{\rm{RIP}}(A) = ( \mathcal{V}, \mathcal{E})$ of matrix $A$, vertices represent the rows of matrix $A$ and edges represent nonzero inner products between rows.
That is, $\mathcal{V}$ contains vertex $v_i $ for each row ${\rm{r}}_i $ of matrix $A$.
$\mathcal{E}$ contains an edge $(v_i,v_j)$ only if the inner product of rows ${\rm{r}}_i$ and ${\rm{r}}_j$ is nonzero.
That is,
\begin{equation}
\begin{aligned}
\mathcal{E}=\{(v_i,v_j) \; | \; {\rm{r}}_i{\rm{r}}_j^T \; \neq \; 0 \}.\\
\end{aligned}
\end{equation}
\noindent
Each vertex $v_i$ can be associated with a unit weight or a weight that is equal to the number of nonzeros in row ${\rm{r}}_i$, that is
\begin{equation}
\label{eq16}
w(v_i)=1 \; \text{ or } \; w(v_i)=nnz({\rm{r}}_i),
\end{equation}
respectively.
Each edge $(v_i,v_j)$ is associated with a cost equal to the absolute value of the respective inner product.
That is,
\begin{equation}
\label{eq14}
cost(v_i,v_j) \; = \; \@ifstar{\oldabs}{\oldabs*}{{\rm{r}}_i{\rm{r}}_j^T} \; \text{ for all } \; (v_i,v_j) \in \mathcal{E}.
\end{equation}
If we prescale the rows of coefficient matrix $A$ such that each row has a unit \mbox{2-norm}, then the cost of edge $(v_i,v_j)$ will correspond to the cosine of the angle between the pair of rows ${\rm{r}}_i$ and ${\rm{r}}_j$.
Therefore we prescale the matrix and the right-hand side vector in order to improve the effectiveness of the proposed graph model.
We note that the convergence of block Cimmino algorithm is independent of row scaling~\cite{elfving1980block,ruiz1992solution,zenadi2013methodes}.
This graph is topologically equivalent to the standard graph representation of the symmetric matrix resulting from the sparse matrix-matrix multiplication operation $C=AA^T$.
That is, the sparsity pattern of $C$ corresponds to the adjacency matrix representation of $\mathcal{G}_{\rm{RIP}}$.
Each nonzero $c_{ij}$ of the resulting matrix $C$ incurs an edge $(v_i,v_j)$.
Since each nonzero entry $c_{ij}$ of $C$ is computed as the inner product of row ${\rm{r}}_i$ and row ${\rm{r}}_j$, the absolute value of the nonzero $c_{ij}$ determines the cost of the respective edge $(v_i,v_j)$.
That is, since $c_{ij} = {\rm{r}}_i{\rm{r}}_j^T $, we have $cost(v_i,v_j) = \@ifstar{\oldabs}{\oldabs*}{c_{ij}}$.
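This construction can be sketched in Python with SciPy sparse matrices (an illustrative sketch, not the actual implementation; the function name is ours, and $A$ is assumed to have no zero rows):

```python
import numpy as np
import scipy.sparse as sp

def row_inner_product_graph(A):
    """Build the edge list of G_RIP(A): one edge per nonzero off-diagonal
    entry of C = A A^T, with cost |c_ij|.  Rows are prescaled to unit
    2-norm, so each cost equals |cos(angle)| between the row pair."""
    A = sp.csr_matrix(A, dtype=float)
    # Row 2-norms, computed from the elementwise square of A.
    norms = np.sqrt(np.asarray(A.multiply(A).sum(axis=1)).ravel())
    A = sp.diags(1.0 / norms) @ A          # unit 2-norm rows
    C = (A @ A.T).tocoo()
    edges = {}
    for i, j, c in zip(C.row, C.col, C.data):
        if i < j and c != 0:               # keep each undirected edge once
            edges[(int(i), int(j))] = abs(c)
    return edges
```

Because of the prescaling, every returned cost lies in $(0, 1]$, as assumed later when integer edge weights are derived for the partitioner.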
\Cref{graph11} shows a $12 \times 12$ sample sparse matrix that contains $38$ nonzeros.
Note that for the sake of clarity of presentation, rows of the sample matrix $A$ are not prescaled.
\Cref{graph12} depicts the proposed row inner-product graph $\mathcal{G}_{\rm{RIP}}$ for this sample matrix.
As seen in~\cref{graph12}, $\mathcal{G}_{\rm{RIP}}$ contains $12$ vertices each of which corresponds to a row and $35$ edges each of which corresponds to a nonzero row inner product.
For example, the inner product of rows ${\rm{r}}_2$ and ${\rm{r}}_4$ is nonzero where $ {\rm{r}}_2{\rm{r}}_4^T = (9\! \times \!12) + (6\! \times \!18) = 216$ so that $\mathcal{G}_{\rm{RIP}}$ contains the edge $(v_2,v_4)$ with $cost(v_2,v_4)=216$.
In~\cref{graph12}, edges with cost larger than $100$ are shown with thick lines in order to make such high inner-product values more visible.
\begin{figure}[hbtp]
\begin{center}
\subfloat [Sample matrix $A$] {
\includegraphics[scale=.45]{images/mtx-eps-converted-to.pdf}
\label{graph11}
} \hspace{1em}
\subfloat [Row inner-product graph $\mathcal{G}_{\rm{RIP}}$ of A]{
\includegraphics[scale=.23]{images/graph4_2-eps-converted-to.pdf}
\label{graph12}
} \\
\end{center}
\caption{Row inner-product graph model.}
\label{fig:sample}
\vspace{-0.3cm}
\end{figure}
\Cref{aat} shows the resulting matrix $C$ of the sparse matrix-matrix multiplication $C= AA^T$.
Only the off-diagonal nonzero entries together with their values are shown since the values of the diagonal entries do not affect $\mathcal{G}_{\rm{RIP}}$.
The cells that contain nonzeros larger than $100$ are shown with black background in order to make such high values more visible.
Comparison of~\cref{graph12,aat} shows that the topology of the standard graph model $\mathcal{G}(C)$ of matrix $C=AA^T$ is equivalent to the topology of $\mathcal{G}_{\rm{RIP}}$.
As also seen in~\cref{aat}, the values of the nonzero entries of matrix $C$ are equal to the costs of respective edges of $\mathcal{G}_{\rm{RIP}}$.
For example nonzero $c_{24}$ with a value $216$ incurs an edge $(v_2,v_4)$ with $cost(v_2,v_4)=216$.
\subsection{Block-row partitioning via partitioning $\mathcal{G}_{\rm{RIP}}$}
A $K$-way partition $\Pi = \{ \mathcal{V}_1,\mathcal{V}_2,\ldots,\mathcal{V}_K \}$ of $\mathcal{G}_{\rm{RIP}}$ can be decoded as a partial permutation on the rows of $A$ to induce a permuted matrix $A^\Pi$, where
\begin{equation}
A^\Pi = PA =
\begin{bmatrix}
A_1^\Pi \\
\vdots \\
A^\Pi_k \\
\vdots \\
A^\Pi_K
\end{bmatrix} = \begin{bmatrix}
\mathcal{R}_1 \\
\vdots \\
\mathcal{R}_k \\
\vdots \\
\mathcal{R}_K
\end{bmatrix}.
\end{equation}
\noindent
Here, $P$ denotes the row permutation matrix which is defined by the $K$-way partition $\Pi$ as follows:
the rows associated with the vertices in $\mathcal{V}_{k+1}$ are ordered after the rows associated with the vertices in $\mathcal{V}_{k}$ for $k = 1,2,\ldots,K-1$.
That is, the block-row $\mathcal{R}_k$ contains the set of rows corresponding to the set of vertices in part $\mathcal{V}_k$ of partition $\Pi$, where ordering of the rows within block row $\mathcal{R}_k$ is arbitrary for each $k=1,2,\ldots,K$.
Note that we use the notation $\mathcal{R}_k$ to denote both $k^{th}$ block row $A^\Pi_k$ and the set of rows in $A^\Pi_k$.
Since the column permutation does not affect the convergence of block Cimmino algorithm~\cite{drummond2015partitioning}, the original column ordering of $A$ is maintained.
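Decoding a partition into the permuted matrix $A^\Pi$ amounts to grouping row indices by part while keeping the column order fixed; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def permute_rows_by_partition(A, parts):
    """Order rows so that all rows of part 0 come first, then part 1, etc.
    `parts[i]` is the part id of row i; columns are left untouched."""
    A = np.asarray(A)
    parts = np.asarray(parts)
    perm = np.concatenate([np.flatnonzero(parts == k)
                           for k in range(parts.max() + 1)])
    return A[perm, :], perm
```

The ordering of rows within each block row is arbitrary, as noted above; this sketch simply preserves the original relative order inside each part.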
Consider a partition $\Pi$ of $\mathcal{G}_{\rm{RIP}}$.
The weight $W_k$ of a part $\mathcal{V}_k$ is equal to either the number of rows or the number of nonzeros in block row $\mathcal{R}_k$, depending on the vertex weighting scheme used according to~\cref{eq16}.
That is,
\begin{equation}
W_k = |\mathcal{R}_k| \; \text{ or } \; W_k = \sum\limits_{r_i \in \mathcal{R}_k} nnz({\rm{r}}_i).
\end{equation}
\noindent
Therefore in partitioning $\mathcal{G}_{\rm{RIP}}$, the partitioning constraint of maintaining balance among part weights according to~\cref{partbal} corresponds to maintaining balance on either the number of rows or the number of nonzeros among the block rows.
Consider a partition $\Pi$ of $\mathcal{G}_{\rm{RIP}}$.
A cut edge $(v_i,v_j)$ between parts $\mathcal{V}_k$ and $\mathcal{V}_m$ represents a nonzero inter-block inner product ${\rm{r}}_i{\rm{r}}_j^T$ between block rows $\mathcal{R}_k$ and $\mathcal{R}_m$.
Therefore the cutsize of $\Pi$ (given in~\cref{partcut}) is equal to
\begin{equation}
\begin{aligned}
{\mathrm{cutsize}}(\Pi) \triangleq \sum\limits_{ (v_i,v_j) \in \mathcal{E}_{cut}} cost(v_i,v_j) & = \sum\limits_{1 \leq k < m \leq K } \sum\limits_{\substack{v_i \in \mathcal{V}_k \\ v_j \in \mathcal{V}_m}} cost(v_i,v_j) \\
& = \sum\limits_{1 \leq k < m \leq K } \sum\limits_{\substack{{\rm{r}}_i \in \mathcal{R}_k \\ {\rm{r}}_j \in \mathcal{R}_m}} \@ifstar{\oldabs}{\oldabs*}{{\rm{r}}_i{\rm{r}}_j^T},
\end{aligned}
\end{equation}
\noindent
which corresponds to the total sum of the inter-block inner products ($\rm{ interIP}(\Pi)$).
So, in partitioning $\mathcal{G}_{\rm{RIP}}$, the partitioning objective of minimizing the cutsize corresponds to minimizing the sum of inter-block inner products between block rows.
Therefore this partitioning objective corresponds to making the block rows numerically more orthogonal to each other.
This way, we expect this method to yield a faster convergence in the CG accelerated block Cimmino algorithm.
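Given the edge list of $\mathcal{G}_{\rm{RIP}}$ (as a dictionary mapping row pairs to costs) and a part assignment per row, the cutsize, i.e., $\mathrm{interIP}(\Pi)$, can be evaluated directly (illustrative sketch):

```python
def inter_block_inner_products(edges, parts):
    """Cutsize of a partition of G_RIP: the sum of |r_i r_j^T| over edges
    whose endpoints lie in different parts, which equals interIP(Pi)."""
    return sum(cost for (i, j), cost in edges.items()
               if parts[i] != parts[j])
```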
We introduce~\cref{graphpart1,graphpart2} in order to clarify the proposed graph partitioning method for block-row partitioning.
\Cref{unipart} shows a straightforward 3-way block-row partition $ \{ \mathcal{R}_1^{s},\mathcal{R}_2^{s},\mathcal{R}_3^{s} \}$ of the sample matrix $A$ given in~\cref{graph11}, where the first four, the second four and the third four consecutive rows in the original order constitute the block rows $\mathcal{R}_1^{s},\mathcal{R}_2^{s}$ and $\mathcal{R}_3^{s}$, respectively.
\Cref{uni} shows the $3$-way vertex partition $\Pi^s(\mathcal{V}) = \{\mathcal{V}_1^s,\mathcal{V}_2^s,\mathcal{V}_3^s\}$ of $\mathcal{G}_{\rm{RIP}}$ that corresponds to this straightforward $3$-way block-row partition.
\Cref{met} shows a ``good" $3$-way vertex partition $\Pi^g(\mathcal{V}) = \{\mathcal{V}_1^g,\mathcal{V}_2^g,\mathcal{V}_3^g\}$ of $\mathcal{G}_{\rm{RIP}}$ obtained by using the graph partitioning tool METIS~\cite{metis}.
\Cref{metpart} shows the permuted $A^\Pi$ matrix and block-row partition $ \{\mathcal{R}_1^{g},\mathcal{R}_2^{g},\mathcal{R}_3^{g}\}$ induced by the 3-way vertex partition $\Pi^g(\mathcal{V})$.
As seen in~\cref{graphpart1,graphpart2}, both straightforward and ``good" block-row partitions achieve perfect balance on the row counts of blocks by having exactly four rows per block.
The quality difference between straightforward and ``good" block-row partitions can be easily seen by comparing the $3$-way partitions of $\mathcal{G}_{\rm{RIP}}$ in \cref{uni,met}, respectively.
As seen in~\cref{uni}, eight out of nine thick edges remain on the cut of $\Pi^s(\mathcal{V})$, whereas all of the nine thick edges remain internal in $\Pi^g(\mathcal{V})$ as seen in~\cref{met}.
\begin{figure}[hbtp]
\vspace{-0.3cm}
\begin{center}
\subfloat [] {
\includegraphics[scale=.39]{images/part_uni3-eps-converted-to.pdf}
\label{unipart}
}
\subfloat [] {
\includegraphics[scale=.2]{images/graph4uni_part-eps-converted-to.pdf}
\label{uni}
}
\end{center}
\caption{ (a) Straightforward 3-way row partition of $A$ and (b) 3-way partition $\Pi^s(\mathcal{V})$ of $\mathcal{G}_{\mathrm{{RIP}}}(A)$ induced by~\cref{unipart}.}
\label{graphpart1}
\end{figure}
\begin{figure}[hbtp]
\begin{center}
\subfloat []{
\includegraphics[scale=.2]{images/graph4metis_part-eps-converted-to.pdf}
\label{met}
}
\subfloat []{
\includegraphics[scale=.39]{images/part_metis3-eps-converted-to.pdf}
\label{metpart}
}
\end{center}
\caption{(a)``Good" 3-way partition $\Pi^g(\mathcal{V})$ of $\mathcal{G}_{\rm{RIP}}(A)$ and (b) 3-way row partition of $A$ induced by~\cref{met}.}
\label{graphpart2}
\vspace{-0.3cm}
\end{figure}
\Cref{sampleAAT} shows the $3 \times 3$ block-checkerboard partitioning of the resulting matrix $C=AA^T$ induced by straightforward and ``good" block-row partitioning of the sample matrix $A$ in~\cref{unipart,metpart}, respectively.
Note that both rows and columns of the $C$ matrix are partitioned conformably with the row partitions of the $A$ matrix.
The comparison of~\cref{aat,aatmetis} shows that large nonzeros (dark cells) are scattered across the off-diagonal blocks of matrix $C$ for the straightforward partitioning, whereas large nonzeros (dark cells) are clustered to the diagonal blocks of $C$ for the ``good" partitioning.
\begin{figure}[hbpt]
\vspace{-0.3cm}
\begin{center}
\subfloat []{
\includegraphics[scale=.37]{images/sA4_AAT_black_nodiag-eps-converted-to.pdf}
\label{aat}
}
\subfloat [] {
\includegraphics[scale=.37]{images/sA4_AAT_metis_black_nodiag-eps-converted-to.pdf}
\label{aatmetis}
}
\end{center}
\caption{$3 \times 3$ block-checkerboard partition of matrix $C=AA^T$ induced by 3-way (a) straightforward (\cref{unipart}) and (b) ``good" (\cref{metpart}) block-row partitions of matrix $A$.}
\label{sampleAAT}
\vspace{-0.1cm}
\end{figure}
\begin{figure}[hbpt]
\vspace{-0.3cm}
\begin{center}
\begin{minipage}{.5\textwidth}
\centering
\begin{minipage}{.25\textwidth}
IP$(\Pi^s)$ =
\end{minipage}
\begin{minipage}{.5\textwidth}
\subfloat []{
\includegraphics[scale=.55]{images/sumProduni_nodiag-eps-converted-to.pdf}%
\label{sumprod}
}%
\end{minipage}%
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\begin{minipage}{.25\textwidth}
IP$(\Pi^g)$ =
\end{minipage}
\begin{minipage}{.5\textwidth}
\subfloat [] {
\includegraphics[scale=.55]{images/sumProdmetis_nodiag-eps-converted-to.pdf}%
\label{sumprodmetis}
} %
\end{minipage}%
\end{minipage}
\end{center}
\caption{Inter-block row-inner-product matrix $\mathrm{IP}(\Pi)$ for (a) straightforward and (b) ``good" block-row partitions.}
\label{sumprods}
\end{figure}
We introduce~\cref{sumprods} to compare the quality of the straightforward and ``good" block-row partitions in terms of inter-block row-inner-product sums.
In the figure, each off-diagonal entry ip$_{km}$ of the $3 \times 3$ IP matrix shows the sum of the inter-block row inner-products between the respective block rows $\mathcal{R}_k$ and $\mathcal{R}_m$.
That is,
\begin{equation}
{\mathrm{ip}}_{km} \triangleq \sum\limits_{\substack{ {\mathrm{r}}_i \in \mathcal{R}_k \\ {\mathrm{r}}_j \in \mathcal{R}_m } } \@ifstar{\oldabs}{\oldabs*}{ {\rm{r}}_i{\mathrm{r}}_j^T } \; \text{ for } k \neq m.
\end{equation}
As seen in~\cref{sumprods}, ip$^s_{12} = 1,\!222$ for the straightforward partition, whereas ip$^g_{12} = 70$ for the ``good" partition.
Note that ip$_{km}$ is also equal to the sum of the absolute values of the nonzeros of the off-diagonal block $C_{km}$ at the $k^{th}$ row block and $m^{th}$ column block of the $C$ matrix, i.e.,
\begin{equation}
{\rm{ip}}_{km} = \sum\limits_{\substack{{\rm{r}}_i \in R_k \\ {\rm{r}}_j \in R_m }} |c_{ij}| .
\end{equation}
Therefore the total sum of inter-block inner products is
\begin{equation}
\begin{aligned}
\mathrm{interIP}(\Pi^s) &= {\rm{ip}}^s_{12} + {\rm{ip}}^s_{13} + {\rm{ip}}^s_{23} \\
& = 1,\!222+ 624 + 323 = 2,\!169 \\
\end{aligned}
\end{equation}
for the straightforward partition, whereas for the ``good" partition it is
\begin{equation}
\mathrm{interIP}(\Pi^g) = 70 + 94 + 90 = 254.
\end{equation}
\Cref{eig,metiseig} show the eigenvalue spectrum of $H$ for the straightforward and ``good" partitionings, respectively.
As seen in the figures, for the straightforward partitioning the eigenvalues reside in the interval $[3.0\!\times\!10^{-2},2.53]$, whereas for the ``good" partitioning they reside in the interval $[5.1\!\times\!10^{-1},1.55]$.
As seen in~\cref{metiseig}, after $\mathcal{G}_{\mathrm{RIP}}$ partitioning the eigenvalues are much better clustered around $1$ and the smallest eigenvalue is much larger than that of the straightforward partitioning method.
\begin{figure}[tbhp]
\vspace{-0.4cm}
\begin{center}
\subfloat [Straightforward partitioning (\cref{unipart})] {
\includegraphics[scale=.55]{images/sA4_UP-eps-converted-to.pdf}
\label{eig}
} \hspace{0.2cm}
\subfloat [``Good" partitioning (\cref{metpart})]{
\includegraphics[scale=.55]{images/sA4_GP-eps-converted-to.pdf}
\label{metiseig}
}
\end{center}
\caption{Eigenvalue spectrum of $H$ for the block-row partitionings of the sample matrix given in~\cref{graph11}.}
\label{sampleeig}
\vspace{-0.4cm}
\end{figure}
\subsection{Implementation}
\label{subsec:impl}
Implementation of the proposed partitioning method consists of two stages: constructing $\mathcal{G}_{\rm{RIP}}$ and partitioning $\mathcal{G}_{\rm{RIP}}$.\\
\noindent
\textbf{Constructing $\mathcal{G}_{\rm{RIP}}$:}
For constructing $\mathcal{G}_{\rm{RIP}}$, we use the basic sparse matrix-matrix multiplication (SpGEMM)~\cite{gustavson1978two} kernel, for which efficient implementations exist.
The edges of $\mathcal{G}_{\rm{RIP}}$ are obtained from the nonzeros of the $C=AA^T$ matrix, whereas their costs are obtained from the absolute values of those nonzeros.
Note that when matrix $A$ has dense column(s), the corresponding matrix $C =AA^T$ will be quite dense.
In other words, when a column has $nz$ nonzeros, the corresponding $C$ matrix will have at least $nz^2$ nonzeros.
For example,~\cref{eye1} shows a $25 \times 25$ sparse matrix $A$ which has a dense column with $23$ nonzero entries.
As seen in~\cref{eye2}, matrix $AA^T$ is dense as it has $531$ nonzero entries.
Clearly, a large number of nonzeros in $C$ (i.e., a large number of edges in $\mathcal{G}_{\rm{RIP}}$) increases the memory requirement and computational cost of SpGEMM as well as the time required for partitioning $\mathcal{G}_{\rm{RIP}}$.
In order to alleviate the aforementioned problem, we propose the following methodology for sparsifying $C$.
We identify a column $A(:,i)$ (in MATLAB~\cite{MATLAB:2015} notation) of an $n\!\times\!n$ matrix $A$ as a dense column if it contains more than $\sqrt{n}$ nonzeros \mbox{($nnz(A(:,i)) > \sqrt{n}$)}.
Given $A$, we extract a sparse matrix $\tilde{A}$ by keeping the largest (in absolute value) $\sqrt{n}$ nonzeros of each dense column of $A$.
That is, the smallest $nnz(A(:,i)) - \sqrt{n}$ entries of a dense $A$-matrix column $A(:,i)$ are ignored when constructing column $\tilde{A}(:,i)$ of $\tilde{A}$.
Hence, the SpGEMM operation is performed on $\tilde{A}$ to obtain sparsified resulting matrix $\tilde{C}=\tilde{A}\tilde{A}^T$.
This will lead to a sparsified $\mathcal{\tilde{G}}_{\rm{RIP}}$ graph.
For example,~\cref{eye3} shows the sparsity pattern of the sparse matrix $\tilde{A}$ extracted from $A$ by keeping the $5$ largest nonzeros in the dense column of $A$.
As seen in~\cref{eye4}, matrix $\tilde{C}=\tilde{A}\tilde{A}^T$ is very sparse compared to the matrix $C$ in~\cref{eye2}.
Note that both $\tilde{A}$ and $\tilde{C}$ are used only for constructing $\mathcal{\tilde{G}}_{\rm{RIP}}$ of $A$. After the partitioning stage, both matrices are discarded.
In the rest of the paper, $\mathcal{\tilde{G}}_{\rm{RIP}}$ will be referred to as $\mathcal{G}_{\rm{RIP}}$ for the sake of the simplicity of presentation.
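The sparsification step can be sketched as follows (a dense NumPy sketch for clarity; an actual implementation would operate on the sparse column structure directly, and the function name is ours):

```python
import numpy as np

def sparsify_dense_columns(A):
    """Keep only the sqrt(n) largest-magnitude nonzeros in each column that
    has more than sqrt(n) nonzeros; other columns are left untouched."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    limit = int(np.sqrt(n))
    At = A.copy()
    for j in range(A.shape[1]):
        col = np.abs(A[:, j])
        nz = np.flatnonzero(col)
        if len(nz) > limit:
            # Indices of the `limit` largest-magnitude entries in column j.
            keep = nz[np.argsort(col[nz])[-limit:]]
            mask = np.ones(n, dtype=bool)
            mask[keep] = False
            At[mask, j] = 0.0        # drop all other entries of the column
    return At
```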
\begin{figure}[htbp]
\vspace{-0.4cm}
\begin{center}
\subfloat [$A$] {
\includegraphics[scale=.23]{images/ey1-eps-converted-to.pdf}
\label{eye1}
} \hspace{-0.9cm}
\subfloat [$C=AA^T$]{
\includegraphics[scale=.23]{images/ey2-eps-converted-to.pdf}
\label{eye2}
} \hspace{-0.1cm}
\subfloat [$\tilde{A}$] {
\includegraphics[scale=.23]{images/ey3-eps-converted-to.pdf}
\label{eye3}
} \hspace{-0.9cm}
\subfloat [$\tilde{C}=\tilde{A}\tilde{A}^T$]{
\includegraphics[scale=.23]{images/ey4-eps-converted-to.pdf}
\label{eye4}
}
\end{center}
\caption{Nonzero patterns of $A$, $AA^T$, $\tilde{A}$ and $\tilde{A}\tilde{A}^T$.}
\label{eye}
\vspace{-0.25cm}
\end{figure}
\noindent \textbf{{Partitioning $\mathcal{G}_{\rm{RIP}}$}:}
We use multilevel graph partitioning tool METIS~\cite{metis} for partitioning $\mathcal{G}_{\rm{RIP}}$.
In order to compute integer edge weights required by METIS, we multiply the floating-point edge cost values with $\alpha$ and round them up to the nearest integer value; $wgt(v_i,v_j) = \ceil*{ \alpha \times {cost(v_i,v_j) }},$ where $\alpha$ is a sufficiently large integer.
Here, $cost(v_i,v_j)$ is the edge cost computed according to~\cref{eq14} and $wgt(v_i,v_j)$ is the weight of the respective edge provided to METIS.
Since the rows of matrix $A$ are prescaled to have 2-norm equal to one in the preprocessing phase, each edge cost $cost(v_i,v_j)$ should be in the range $(0,1]$ and the resulting edge weight $wgt(v_i,v_j)$ will be an integer in the range $[1,\alpha]$.
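The conversion of floating-point edge costs to the integer weights expected by METIS can be sketched as follows (the function name and the default value of $\alpha$ are ours):

```python
import math

def metis_edge_weights(edges, alpha=10**6):
    """Round floating-point edge costs in (0, 1] up to integers in
    [1, alpha], as required by METIS's integer edge-weight interface."""
    return {e: math.ceil(alpha * c) for e, c in edges.items()}
```

Rounding up (rather than to nearest) guarantees that every edge keeps a strictly positive weight, so no edge of $\mathcal{G}_{\rm{RIP}}$ is silently dropped.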
\section{Experimental Results}
\label{section3}
\subsection{Experimental Framework}
In the experiments, we used the CG accelerated block Cimmino implementation available in ABCD Solver v1.0~\cite{abcd}.
In ABCD Solver, we used MUMPS 5.1.2~\cite{MUMPS} sparse direct solver to factorize the systems in~\cref{augmentedMatrix} once and solve the system iteratively.
We note that the proposed scheme is designed for the classical block Cimmino algorithm: it improves the numerical orthogonality between blocks and does not aim to improve the structural orthogonality.
Hence, it is not applicable to the augmented block Cimmino algorithm, also available in ABCD Solver, where the number of augmented columns depends only on the structural orthogonality.
We adopted the same matrix scaling procedure~\cite{amestoy2008parallel} as in ABCD Solver.
This is a parallel iterative procedure which scales the columns and rows of $A$ so that the absolute value of largest entry in each column and row is one.
We first perform row and column scaling in order to avoid problems due to poor scaling of the input matrix.
Then, we also perform row scaling on $A$ to have 2-norm equal to exactly one, so that the actual values in $AA^T$ would then correspond to cosines of the angles between pairs of rows in $A$~\cite{drummond2015partitioning,zenadi2013methodes}.
We note that $H$ is numerically independent of row scaling. However, the column scaling affects $H$ and can be considered as a preconditioner~\cite{ruiz1992solution,zenadi2013methodes}.
ABCD Solver includes a stabilized block-CG accelerated block Cimmino algorithm~\cite{arioli1995block} especially for solving systems with multiple right-hand side vectors.
Since the classical CG is guaranteed to converge~\cite{o1980block} for systems whose coefficient matrix is symmetric positive definite, and since its convergence theory is well established, in this work we utilize the classical CG accelerated block Cimmino in~\cref{algo2}, rather than the block-CG acceleration, for solving sparse linear systems with a single right-hand side vector.
In parallel CG accelerated block Cimmino algorithm, the work distribution among processors is performed in exactly the same way as in ABCD Solver.
That is, if the number of row blocks is larger than the number of processors, row blocks are distributed among processors so that each processor has an equal workload in terms of the number of rows.
If the number of row-blocks is smaller than the number of processors, master-slave computational approach~\cite{arioli1995parallel,drummond2015partitioning,duff2015augmented} is adopted.
Each master processor owns a distinct row block and is responsible for the inner-product and matrix-vector computations.
Each slave processor is a supplementary processor that helps a specific master processor in the factorization and solution steps of MUMPS.
After the analysis phase of MUMPS, slave processors are mapped to master processors according to the FLOP estimates obtained in the analysis phase.
In all experiments with the CG accelerated block Cimmino, we use the normwise backward error~\cite{arioli1992stopping} at iteration $t$
\begin{equation}
\label{stopping}
\gamma^{(t)} = \dfrac{\Vert Ax^{(t)} -f \Vert_\infty}{ \Vert A \Vert_\infty \Vert x^{(t)} \Vert_1 + \Vert f \Vert_\infty} < 10^{-10}
\end{equation}
as the stopping criterion and we use $10,\!000$ as the maximum number of iterations.
The right-hand side vectors of the systems are obtained by multiplying the coefficient matrices with a vector whose elements are all one.
In all instances, the CG iterations are started from the zero vector.
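The normwise backward error used as the stopping criterion can be sketched as follows (illustrative; the function name is ours):

```python
import numpy as np

def backward_error(A, x, f):
    """Normwise backward error gamma^(t) of the approximate solution x
    for A x = f: ||A x - f||_inf / (||A||_inf ||x||_1 + ||f||_inf)."""
    r = A @ x - f
    num = np.linalg.norm(r, np.inf)
    den = (np.linalg.norm(A, np.inf) * np.linalg.norm(x, 1)
           + np.linalg.norm(f, np.inf))
    return num / den
```

Iterations stop once this quantity drops below $10^{-10}$.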
In the experiments, we have compared the performance of the proposed partitioning method (GP) against two baseline partitioning methods already available in ABCD Solver.
The first baseline method, referred to as the uniform partitioning (UP) method in ABCD Solver, partitions the rows of the coefficient matrix into a given number of block rows with almost equal numbers of rows, without any permutation on the rows of $A$.
Note that the UP method is the same as the straightforward partitioning method mentioned in \cref{section2}.
The second baseline method is the hypergraph partitioning (HP) method~\cite{drummond2015partitioning}.
This method uses the column-net model~\cite{1999hypergraph} of sparse matrix $A$ in which rows and columns are respectively represented by vertices and hyperedges both with unit weights~\cite{drummond2015partitioning}.
Each hyperedge connects the set of vertices corresponding to the rows that have a nonzero on the respective column.
In the HP method, a $K$-way partition of the vertices of the column-net model is used to find $K$ block rows.
The partitioning constraint of maintaining balance on part weights corresponds to finding block rows with equal number of rows.
CM-based partitioning strategies~\cite{drummond2015partitioning,ruiz1992solution,zenadi2013methodes} are also considered as another baseline approach.
However, our experiments showed that CM-based strategies fail to produce the desired number of balanced partitions and to achieve convergence for most of the test instances.
Due to the significantly larger number of failures of CM-based strategies compared to UP and HP, UP and HP are selected as the baseline algorithms.
The HP method in ABCD Solver uses the multilevel hypergraph partitioning tool PaToH~\cite{1999patoh} for partitioning the column-net model of matrix $A$.
Here, we use the same parameters for PaToH specified in ABCD Solver.
That is, the final imbalance (i.e., $\epsilon$ in \cref{partbal}) and initial imbalance (imbalance ratio of the coarsest hypergraph) parameters in PaToH are set to $50\%$ and $100\%$, respectively.
The other parameters are left as default of PaToH as in ABCD Solver.
Parallel block Cimmino solution times on some real problems are experimented by using PaToH with different imbalance ratios in \cite{drummond2015partitioning}.
It is reported that although partitioning with weak balancing greatly reduces the number of interconnections, which in turn decreases the number of iterations, it increases the parallel solution time because of a highly unbalanced computational workload among processors.
Therefore, finding ``good" partition imbalance ratios can be important for the parallel performance of block Cimmino.
Due to space limitations, the impact of different imbalance ratios on the parallel performance is left as future work.
In the proposed GP method, as mentioned in \cref{subsec:impl}, the multilevel graph partitioning tool METIS is used to partition the row inner-product graph model $\mathcal{G}_{\rm{RIP}}$ of $A$.
The imbalance parameter of METIS is set to 10\% and k-way option is used.
For the sake of a fair comparison between HP and GP methods, unit vertex weights are used in $\mathcal{G}_{\rm{RIP}}$.
The other parameters are left as default of METIS.
Since both PaToH and METIS use randomized algorithms, we report the geometric mean of the results of $5$ runs with different seeds for each instance.
Here and hereafter, we use GP, HP and UP to refer to the respective block-row partitioning method as well as ABCD Solver that utilizes the regular block Cimmino algorithm for solving the systems partitioned by the respective method.
The extensive numerical experiments were conducted on a shared memory system.
The shared memory system is a four socket 64-core computer that contains four AMD Opteron 6376 processors, where each processor has 16 cores running at 2.3GHz and a total of 64GB of DDR3 memory.
Due to memory bandwidth limitations of the platform, experiments are performed with 32 cores.
ABCD Solver~\cite{abcd} is implemented in C/C++ programming language with MPI-OpenMP based hybrid parallelism.
Furthermore, an additional parallelism level can be incorporated with multithreaded BLAS/LAPACK libraries.
However, in the experiments, we used pure MPI-based parallelism which gives the best performance for our computing system.
\subsection{Effect of block-row partitioning on eigenvalue spectrum of $H$}
The convergence rate of the CG accelerated block Cimmino algorithm is related to the eigenvalue spectrum of $H$.
By the nature of the block Cimmino algorithm, most of the eigenvalues of $H$ are clustered around 1, but there can be some eigenvalues at extremes of the spectrum.
In this subsection, we conduct experiments to study the effect of the partitioning on the eigenvalue spectrum of $H$ by comparing the proposed GP method against UP and HP.
In these experiments, in order to be able to compute the eigenvalue spectrum requiring reasonable amount of time and memory, we use four small nonsymmetric sparse matrices: ${\mathtt{sherman3}}$, ${\mathtt{GT01R}}$, ${\mathtt{gemat11}}$ and ${\mathtt{LeGresley\_4908}}$
from SuiteSparse Matrix Collection~\cite{davis2011university}.
The first and the second matrices arise in Computational Fluid Dynamics problems, whereas the third and fourth matrices arise in Power Network problems.
We partition the matrices into $8$ block rows for all of the three partitioning methods UP, HP and GP.
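To make the quantity under study concrete, the following sketch (our own illustration, independent of ABCD Solver) forms $H$ explicitly for a toy dense system: $H$ is the sum of the orthogonal projectors $A_i^+A_i$ onto the row spaces of the block rows, so for a square nonsingular $A$ split into $p$ blocks its eigenvalues lie in $(0,p]$ and all equal 1 exactly when the blocks are mutually orthogonal.

```python
import numpy as np

def cimmino_iteration_matrix(A, row_blocks):
    # H = sum_i A_i^+ A_i, where A_i^+ A_i is the orthogonal projector
    # onto the row space of block A_i (A_i^+ is the pseudoinverse).
    n = A.shape[1]
    H = np.zeros((n, n))
    for rows in row_blocks:
        Ai = A[rows, :]
        H += np.linalg.pinv(Ai) @ Ai
    return H

# Toy 4x4 system split into 2 block rows.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = cimmino_iteration_matrix(A, [range(0, 2), range(2, 4)])
eigs = np.sort(np.linalg.eigvalsh(H))

# With 2 blocks, all eigenvalues lie in (0, 2]; the closer the two row
# spaces are to mutually orthogonal, the tighter the cluster around 1.
print(eigs)
```

The partitioning methods compared below differ precisely in how well their choice of `row_blocks` produces this clustering around 1.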
\begin{figure}[htbp]
{
\tiny
\begin{center}
\subfloat [$\mathtt{sherman3}:$ $n=5,\!005$, $nnz=20,\!033$] {
\includegraphics[scale=.56,trim={1.7cm 0 1.1cm 0.22cm}, clip]{images/sherman3_eig-eps-converted-to.pdf}
\label{sherman_1}
} \hspace{0.7cm}
\subfloat [$\mathtt{GT01R}:$ $n=7,\!980$, $nnz=430,\!909$ ] {
\includegraphics[scale=.56,trim={1.7cm 0 1.1cm 0.22cm}, clip]{images/GT01R_8_eigs-eps-converted-to.pdf}
\label{GT01R_1}
} \\ [+2ex]
\subfloat [$\mathtt{gemat11}:$ $n=4,\!929$, $nnz=33,\!108$] {
\includegraphics[scale=.56,trim={1.7cm 0 1.1cm 0.22cm}, clip]{images/gemat11_8_eigs-eps-converted-to.pdf}
\label{LeGresley_1}
} \hspace{0.7cm}
\subfloat [$\mathtt{LeGresley\_4908}:$ $n=4\!,\!908$, $nnz=30\!,\!482$] {
\includegraphics[scale=.56,trim={1.7cm 0 1.1cm 0.22cm}, clip]{images/legresley-eps-converted-to.pdf}
\label{gemat11_1}
}
\end{center}
}
\caption{Eigenvalue spectrum of $H$ (with the smallest and largest eigenvalues) and the number of CG iterations (iters) required for convergence.}
\label{eigens}
\vspace{-0.4cm}
\end{figure}
\Cref{eigens} shows the eigenvalue spectrum of $H$ obtained by UP, HP and GP methods for each test instance.
The figure also reports the number of CG iterations (iters) required for convergence as well as the smallest and largest eigenvalues of $H$.
As seen in the figure, both HP and GP methods achieve significantly better clustering of the eigenvalues around 1 compared to UP.
This experimental finding is reflected in the remarkable decrease in the number of CG iterations attained by both HP and GP over UP.
In the comparison of HP and GP, GP achieves better eigenvalue clustering and hence better convergence rate than HP for all instances.
For the ${\mathtt{sherman3}}$, ${\mathtt{GT01R}}$, ${\mathtt{gemat11}}$ and ${\mathtt{LeGresley\_4908}}$ instances,
the better clustering quality attained by GP over HP leads to significant improvements in the convergence rate of $66\%$, $39\%$, $61\%$ and $15\%$, respectively.
\subsection{Dataset for Performance Analysis}
For the following experiments, we selected all nonsingular nonsymmetric square matrices whose dimensions are between $50,\!000$ and $5,\!000,\!000$ rows and columns from the SuiteSparse Matrix Collection~\cite{davis2011university}.
The number of matrices satisfying this criterion turns out to be $112$.
Only the {\tt HV15R} and {\tt cage14} matrices are excluded due to memory limitations.
We have observed that at least one of the three partitioning methods converges in less than $10,\!000$ CG iterations for $76$ out of these $110$ instances.
\Cref{matrix_prop} shows the properties of those $76$ matrices that are used in the experiments.
We note that the main advantages of the block Cimmino algorithm are its amenability to parallelism and its lower storage requirement compared to direct methods. For the smallest problems in the dataset, it would not be competitive against direct solvers or classical preconditioned iterative solvers.
In \cref{matrix_prop}, the matrices are displayed in increasing sorted order according to their sizes.
The matrices are partitioned into a number of partitions where each row block has approximately $20,\!000$ rows if $n\!>\!100,\!000$ or $10,\!000$ rows if $n\!<\!100,\!000$.
Thus in our dataset, the smallest matrix is partitioned into 6 row blocks whereas the largest matrix is partitioned into 235 row blocks.
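These block counts can be reproduced with simple arithmetic, assuming ceiling rounding (our assumption; the text does not state the exact rounding rule):

```python
import math

def num_row_blocks(n):
    # Target block size from the text: ~10,000 rows when n < 100,000,
    # ~20,000 rows otherwise.
    target = 10_000 if n < 100_000 else 20_000
    return math.ceil(n / target)

print(num_row_blocks(51_032))     # rajat26, smallest matrix -> 6
print(num_row_blocks(4_690_002))  # rajat31, largest matrix  -> 235
```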
\begin{table}[tbhp]
\centering
\caption{Matrix properties ($n$: number of rows/columns, $nnz$: number of nonzeros)}
\label{matrix_prop}
\small
\resizebox{\textwidth}{!}
{
\begin{tabular}{lrr|lrr}
\hline \vspace{-0.3cm}\\
\multicolumn{1}{l}{ Matrix name} & \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$nnz$} & \multicolumn{1}{l}{ Matrix name} & \multicolumn{1}{c}{$n$} &\multicolumn{1}{c}{$nnz$} \\
\hline
\vspace{-0.3cm}\\
rajat26 & 51,032 & 247,528 & dc3 & 116,835 & 766,396 \\
ecl32 & 51,993 & 380,415 & trans4 & 116,835 & 749,800 \\
2D\_54019\_highK & 54,019 & 486,129 & trans5 & 116,835 & 749,800 \\
bayer01 & 57,735 & 275,094 & matrix-new\_3 & 125,329 & 893,984 \\
TSOPF\_RS\_b39\_c30 & 60,098 & 1,079,986 & cage12 & 130,228 & 2,032,536 \\
venkat01 & 62,424 & 1,717,792 & FEM\_3D\_thermal2 & 147,900 & 3,489,300 \\
venkat25 & 62,424 & 1,717,763 & para-4 & 153,226 & 2,930,882 \\
venkat50 & 62,424 & 1,717,777 & para-10 & 155,924 & 2,094,873 \\
laminar\_duct3D & 67,173 & 3,788,857 & para-5 & 155,924 & 2,094,873 \\
lhr71c & 70,304 & 1,528,092 & para-6 & 155,924 & 2,094,873 \\
shyy161 & 76,480 & 329,762 & para-7 & 155,924 & 2,094,873 \\
circuit\_4 & 80,209 & 307,604 & para-8 & 155,924 & 2,094,873 \\
epb3 & 84,617 & 463,625 & para-9 & 155,924 & 2,094,873 \\
poisson3Db & 85,623 & 2,374,949 & crashbasis & 160,000 & 1,750,416 \\
rajat20 & 86,916 & 604,299 & majorbasis & 160,000 & 1,750,416 \\
rajat25 & 87,190 & 606,489 & ohne2 & 181,343 & 6,869,939 \\
rajat28 & 87,190 & 606,489 & hvdc2 & 189,860 & 1,339,638 \\
LeGresley\_87936 & 87,936 & 593,276 & shar\_te2-b3 & 200,200 & 800,800 \\
rajat16 & 94,294 & 476,766 & stomach & 213,360 & 3,021,648 \\
ASIC\_100ks & 99,190 & 578,890 & torso3 & 259,156 & 4,429,042 \\
ASIC\_100k & 99,340 & 940,621 & ASIC\_320ks & 321,671 & 1,316,085 \\
matrix\_9 & 103,430 & 1,205,518 & ASIC\_320k & 321,821 & 1,931,828 \\
hcircuit & 105,676 & 513,072 & ML\_Laplace & 377,002 & 27,582,698 \\
lung2 & 109,460 & 492,564 & RM07R & 381,689 & 37,464,962 \\
rajat23 & 110,355 & 555,441 & language & 399,130 & 1,216,334 \\
Baumann & 112,211 & 748,331 & CoupCons3D & 416,800 & 17,277,420 \\
barrier2-1 & 113,076 & 2,129,496 & largebasis & 440,020 & 5,240,084 \\
barrier2-2 & 113,076 & 2,129,496 & cage13 & 445,315 & 7,479,343 \\
barrier2-3 & 113,076 & 2,129,496 & rajat30 & 643,994 & 6,175,244 \\
barrier2-4 & 113,076 & 2,129,496 & ASIC\_680k & 682,862 & 2,638,997 \\
barrier2-10 & 115,625 & 2,158,759 & atmosmodd & 1,270,432 & 8,814,880 \\
barrier2-11 & 115,625 & 2,158,759 & atmosmodj & 1,270,432 & 8,814,880 \\
barrier2-12 & 115,625 & 2,158,759 & Hamrle3 & 1,447,360 & 5,514,242 \\
barrier2-9 & 115,625 & 2,158,759 & atmosmodl & 1,489,752 & 10,319,760 \\
torso2 & 115,967 & 1,033,473 & atmosmodm & 1,489,752 & 10,319,760 \\
torso1 & 116,158 & 8,516,500 & memchip & 2,707,524 & 13,343,948 \\
dc1 & 116,835 & 766,396 & circuit5M\_dc & 3,523,317 & 14,865,409 \\
dc2 & 116,835 & 766,396 & rajat31 & 4,690,002 & 20,316,253 \\
\hline
\end{tabular}
}
\vspace{-0.3cm}
\end{table}
\subsection{Convergence and parallel performance}
In this subsection, we study the performance of the proposed GP method against UP and HP in terms of the number of CG iterations and parallel CG time to solution.
\Cref{itertable} shows the number of CG iterations and parallel CG time for each matrix.
In the table, ``$\mathtt{F}$" denotes that an algorithm fails to reach the desired backward error in $10,\!000$ iterations for the respective matrix instance.
As seen in~\cref{itertable}, UP and HP fail to converge in $26$ and $18$ test instances, respectively, whereas GP does not fail in any test instance.
In~\cref{itertable}, the best result for each test instance is shown in bold.
As seen in the table, out of 76 instances, the proposed GP method achieves the fastest convergence in 58 instances, whereas HP and UP achieve the fastest convergence in only 8 and 11 instances, respectively.
As also seen in~\cref{itertable}, GP achieves the fastest iterative solution time in 56 instances, whereas HP and UP achieve the fastest solution time in 11 and 9 instances, respectively.
\begin{table}[tbhp]
\centering
\caption{Number of CG iterations and parallel CG times in seconds }
\label{itertable}
\centering
\scriptsize\renewcommand{\arraystretch}{.9}
{
\begin{tabular}{lrrrrrrr}
\hline \vspace{-0.15cm}\\
\multirow{3}{*}{Matrix name} & \multicolumn{3}{l}{\# of CG iterations} & & \multicolumn{3}{l}{Parallel CG time to soln.} \\
\cmidrule(l){2-4} \cmidrule(l){6-8}
& UP & HP & GP & & UP & HP & GP \\ \hline
\vspace{-0.15cm}\\
rajat26 & F & 3303 & \textbf{245} & & F & 62.4 & \textbf{8.4} \\
ecl32 & 5307 & 1253 & \textbf{314} & & 246.0 & 36.8 & \textbf{10.1} \\
2D\_54019\_highK & 42 & \textbf{4} & 9 & & 0.9 & \textbf{0.1} & 0.2 \\
bayer01 & 2408 & 382 & \textbf{131} & & 63.8 & 8.7 & \textbf{3.4} \\
TSOPF\_RS\_b39\_c30 & 676 & \textbf{262} & 473 & & 24.9 & \textbf{4.6} & 8.0 \\
venkat01 & 54 & 37 & \textbf{34} & & 1.8 & \textbf{0.9} & 0.9 \\
venkat25 & 915 &625 & \textbf{599} & & 30.8 & \textbf{14.3} & 14.4 \\
venkat50 & 1609 & 975 & \textbf{970} & & 56.2 & \textbf{23.5} & 23.7 \\
laminar\_duct3D & 466 & 630 & \textbf{394} & & 29.6 & 39.0 & \textbf{23.0} \\
lhr71c & F & 6164 & \textbf{4166} & & F & 193.3 & \textbf{136.5} \\
shyy161 & \textbf{13} & 20 & 15 & & \textbf{0.4} & 0.6 & 0.5 \\
circuit\_4 & F & 256 & \textbf{183} & & F & 15.5 & \textbf{12.3} \\
epb3 & 2583 & 3089 & \textbf{2318} & & 87.7 & 100.9 & \textbf{77.9} \\
poisson3Db & 4797 & 983 & \textbf{715} & & 715.0 & 51.9 & \textbf{41.0} \\
rajat20 & 629 & 641 & \textbf{322} & & 81.5 & 40.9 & \textbf{34.6} \\
rajat25 & 1172 & 937 & \textbf{448} & & 121.0 & 62.6 & \textbf{48.9} \\
rajat28 & 556 & 369 & \textbf{207} & & 86.3 & 22.3 & \textbf{21.2} \\
LeGresley\_87936 & F & 7625 & \textbf{3102} & & F & 266.5 & \textbf{122.2} \\
rajat16 & 6834 & 1022 & \textbf{180} & & 835.0 & 68.5 & \textbf{20.0} \\
ASIC\_100ks & F & \textbf{23} & 45 & & F & \textbf{1.0} & 2.1 \\
ASIC\_100k & 76 & 324 & \textbf{49} & & 22.7 & 93.5 & \textbf{13.4} \\
matrix\_9 & \textbf{3944} & F & 9276 & & \textbf{272.0} & F & 704.0 \\
hcircuit & 2061 & 2477 & \textbf{460} & & 591.0 & 136.5 & \textbf{32.7} \\
lung2 & \textbf{12} & \textbf{12} & 13 & & 0.9 & \textbf{0.8} & 0.9 \\
rajat23 & F & 7770 & \textbf{501} & & F & 465.6 & \textbf{33.4} \\
Baumann & \textbf{732} & 1551 & 1340 & & \textbf{47.4} & 86.5 & 89.8 \\
barrier2-1 & F & F & \textbf{1219} & & F & F & \textbf{145.1} \\
barrier2-2 & F & F & \textbf{1024} & & F & F & \textbf{99.8} \\
barrier2-3 & F & F & \textbf{1013} & & F & F & \textbf{106.1} \\
barrier2-4 & F & F & \textbf{1360} & & F & F & \textbf{116.0} \\
barrier2-10 & F & F & \textbf{1206} & & F & F & \textbf{135.8} \\
barrier2-11 & F & F & \textbf{1169} & & F & F & \textbf{125.7} \\
barrier2-12 & F & F & \textbf{1139} & & F & F & \textbf{116.1} \\
barrier2-9 & F & F & \textbf{1306} & & F & F & \textbf{136.2} \\
torso2 & 16 & \textbf{14} & 15 & & 0.7 & \textbf{0.6} & 0.7 \\
torso1 & F & \textbf{4376} & 9200 & & F & \textbf{322.1} & 833.7 \\
dc1 & 629 & 2059 & \textbf{83} & & 255.0 & 745.4 & \textbf{32.9} \\
dc2 & 478 & 1313 & \textbf{68} & & 170.3 & 504.0 & \textbf{22.4} \\
dc3 & 2172 & 3329 & \textbf{90} & & 793.0 & 1220.1 & \textbf{31.9} \\
trans4 & 292 & 1416 & \textbf{23} & & 105.9 & 452.1 & \textbf{5.8} \\
trans5 & 1006 & 4533 & \textbf{33} & & 368.7 & 1693.1 & \textbf{8.5} \\
matrix-new\_3 & F & 9739 & \textbf{6707} & & F & 709.7 & \textbf{519.4} \\
cage12 & \textbf{9} & 12 & 10 & & \textbf{1.7} & 3.1 & 2.2 \\
FEM\_3D\_thermal2 & 67 & 54 & \textbf{45} & & 4.9 & 4.3 & \textbf{3.8} \\
para-4 & 7675 & F & \textbf{3546} & & 1600.0 & F & \textbf{423.4} \\
para-10 & F & F & \textbf{5565} & & F & F & \textbf{588.6} \\
para-5 & F & F & \textbf{5054} & & F & F & \textbf{610.5} \\
para-6 & F & F & \textbf{5019} & & F & F & \textbf{574.1} \\
para-7 & F & F & \textbf{4576} & & F & F & \textbf{508.6} \\
para-8 & F & F & \textbf{4973} & & F & F & \textbf{535.7} \\
para-9 & F & F & \textbf{5667} & & F & F & \textbf{682.0} \\
crashbasis & 68 & \textbf{17} & 21 & & 10.0 & \textbf{1.1} & 1.4 \\
majorbasis & 48 & \textbf{16} & 18 & & 6.6 & \textbf{1.0} & 1.2 \\
ohne2 & \textbf{2623} & F & 3881 & & F & 2103.1 & \textbf{820.8} \\
hvdc2 & F & 5622 & \textbf{3042} & & F & 446.7 & \textbf{262.1} \\
shar\_te2-b3 & \textbf{23} & 27 & 26 & & 9.3 & 8.9 & \textbf{8.8} \\
stomach & \textbf{8} & 11 & 9 & & \textbf{1.1} & 1.4 & 1.3 \\
torso3 & 22 & 30 & \textbf{11} & & 5.2 & 7.0 & \textbf{3.1} \\
ASIC\_320ks & F & 20 & \textbf{2} & & F & 4.5 & \textbf{0.6} \\
ASIC\_320k & 114 & 37 & \textbf{11} & & 56.9 & 40.4 & \textbf{8.0} \\
ML\_Laplace & 8615 & 9136 & \textbf{8438} & & \textbf{3890.3} & 4220.8 & 4215.4 \\
RM07R & F & F & \textbf{3944} & & F & F & \textbf{10200.0} \\
language & 974 & 661 & \textbf{453} & & 335.0 & 196.4 & \textbf{159.0} \\
CoupCons3D & 277 & 132 & \textbf{107} & & 100.0 & 57.8 & \textbf{48.3} \\
largebasis & 1155 & 701 & \textbf{348} & & 549.1 & 174.8 & \textbf{88.3} \\
cage13 & \textbf{10} & 13 & 12 & & \textbf{6.7} & 14.7 & 10.6 \\
rajat30 & 157 & 200 & \textbf{61} & & 229.0 & 274.5 & \textbf{86.3} \\
ASIC\_680k & 10 & 19 & \textbf{2} & & 11.3 & 38.2 & \textbf{3.5} \\
atmosmodd & \textbf{744} & 2055 & 1183 & & \textbf{869.2} & 2115.2 & 1279.6 \\
atmosmodj & \textbf{787} & 2235 & 1253 & & \textbf{994.0} & 2323.2 & 1293.1 \\
Hamrle3 & F & 2394 & \textbf{2010} & & F & 2375.8 & \textbf{2012.9} \\
atmosmodl & 1206 & 805 & \textbf{379} & & 1800.6 & 1005.3 & \textbf{484.0} \\
atmosmodm & 1164 & 697 & \textbf{202} & & 1730.0 & 871.4 & \textbf{257.6} \\
memchip & 3278 & 1002 & \textbf{379} & & 7450.3 & 1589.9 & \textbf{862.3} \\
circuit5M\_dc & 173 & 58 & \textbf{10} & & 512.0 & 143.6 & \textbf{23.4} \\
rajat31 & 2840 & 2767 & \textbf{1500} & & 9590.1 & 9456.2 & \textbf{5166.6} \\ \hline \vspace{-0.2cm}\\
\textbf{Number of bests}& \textbf{11} & \textbf{8} & \textbf{58} & & \textbf{9}& \textbf{11}& \textbf{56} \\
\hline
\end{tabular}
}
\end{table}
\Cref{perf0} shows the performance profiles~\cite{dolan2002benchmarking} over the $76$ matrix instances.
A performance profile compares multiple methods with respect to the best performing method for each test instance
and reports the fraction of the test instances in which a method's performance is within a given factor of that of the best method.
For example, in~\cref{perf1} a point (abscissa=$2$, ordinate=$0.40$) on the performance curve of a method refers to the fact that the performance of the respective method is no worse than that of the best method by a factor of 2 in approximately $40\%$ of the instances.
If a method is closer to the top-left corner, then it achieves a better performance.
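The construction of such a profile can be sketched in a few lines; the iteration counts below are hypothetical and only illustrate the computation (with np.inf marking a failed run):

```python
import numpy as np

def performance_profile(times, taus):
    # times: (n_instances, n_methods) costs; np.inf marks a failure.
    # For each factor tau, return per method the fraction of instances
    # whose cost is within a factor tau of the per-instance best.
    best = times.min(axis=1, keepdims=True)
    ratios = times / best
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical iteration counts: 4 instances x 3 methods (UP, HP, GP).
t = np.array([[100.0, 40.0, 20.0],
              [np.inf, 80.0, 60.0],
              [30.0, 90.0, 35.0],
              [np.inf, np.inf, 50.0]])
prof = performance_profile(t, taus=[1, 2, 4])
print(prof[1])  # fraction of instances within a factor of 2 of the best
```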
\begin{figure}[tbhp]
\vspace{-0.3cm}
\begin{center}
\subfloat [] {
\includegraphics[scale=.47]{images/iter-eps-converted-to.pdf}
\label{perf1}
}
\subfloat [] {
\includegraphics[scale=.47]{images/cg_time-eps-converted-to.pdf}
\label{perf2}
}
\end{center}
\caption{ Performance profiles for (a) convergence rate and (b) parallel CG time.}
\label{perf0}
\vspace{-0.4cm}
\end{figure}
In~\cref{perf1,perf2}, we show the performance profiles in terms of the number of CG iterations and parallel CG times, respectively.
As seen in~\cref{perf1}, the number of CG iterations required by GP for convergence does not exceed that of the best method by a factor of 2 in approximately 95\% of the instances, whereas HP and UP
achieve the same relative performance compared to the best method in approximately 42\% and 30\% of the instances, respectively.
As seen in~\cref{perf2}, the CG time using GP is not slower than that of the best method by a factor of 2 in approximately 95\% of the instances,
whereas HP and UP achieve the same relative performance compared to the best method in approximately 45\% and 23\% of the instances, respectively.
\Cref{perf1,perf2} show that the CG time is directly proportional to the number of CG iterations as expected.
It is clear that GP mainly aims at improving the convergence rate, whereas HP mainly aims at reducing total communication volume.
We made additional measurements in order to quantify this trade-off between the two methods.
Due to the lack of space, here we only summarize the average results using 32 processors.
Although GP incurs 44\% more total communication volume than HP per iteration, this results in only a 6\% increase in the per-iteration execution time.
Here, the per-iteration execution time for a given instance shows the overall parallelization efficiency attained by the respective partitioning method regardless of its convergence performance.
This experimental finding can be attributed to factors other than the total communication volume that affect the communication overhead, as well as to the fact that the per-iteration execution time is dominated by the computational cost of the local solution of the linear systems via a direct solver in block Cimmino.
On average, although GP incurs only 6\% more per-iteration time than HP, GP requires 59\% fewer iterations for convergence than HP.
This explains the significant overall performance improvement achieved by GP against HP.
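A back-of-the-envelope calculation with the quoted averages makes the net effect of this trade-off explicit:

```python
per_iter_ratio = 1.06   # GP spends ~6% more time per iteration than HP
iters_ratio = 0.41      # GP needs ~59% fewer iterations than HP
total_ratio = per_iter_ratio * iters_ratio
print(round(total_ratio, 2))  # -> 0.43: GP's CG time is roughly 43% of HP's
```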
\subsection{Preprocessing Overhead and Amortization}
In this subsection, we analyze the relative preprocessing overhead of the three methods UP, HP and GP in order to find out whether the intelligent partitioning methods HP and GP are amortized.
For all of the three block-row partitioning methods, the preprocessing overhead includes matrix scaling, block-row partitioning, creating the submatrices corresponding to these block rows and distributing these submatrices among processors.
Recall that the partitioning for UP is straightforward and hence incurs only a negligible additional cost in the preprocessing overhead.
On the other hand, the intelligent partitioning algorithms utilized in HP and GP add a considerable cost to the overall preprocessing overhead.
Furthermore, for GP, the construction of the row inner-product graph also incurs a significant cost.
In~\cref{pretable}, we display total preprocessing time and total execution time for all three methods for each one of the 76 test instances.
In the table, total execution time is the sum of the total preprocessing and the total solution time.
Here the total solution time includes parallel factorization and parallel CG solution times.
Note that the factorization in block Cimmino algorithm in ABCD Solver is performed by parallel sparse direct solver MUMPS~\cite{MUMPS}.
The factorization, which needs to be performed only once, involves symbolic and numerical factorizations of the coefficient matrices of the augmented systems that arise in the block Cimmino algorithm.
Note that this factorization process is embarrassingly parallel since the factorization of the coefficient matrices of the augmented systems are done independently.
Recall that MUMPS is also used during the iterative solution stage, where at each iteration a linear system is solved using the factors of the augmented systems that were computed during the factorization stage.
\begin{table}[htbhp]
\centering
\caption{Preprocessing time and total execution time (including preprocessing) in seconds }
\label{pretable}
\centering
\scriptsize\renewcommand{\arraystretch}{.9}
{
\begin{tabular}{lrrrrrrr}
\hline \vspace{-0.15cm}\\
\multirow{3}{*}{Matrix name} & \multicolumn{3}{l}{Preprocessing time} & & \multicolumn{3}{l}{Total execution time} \\
\cmidrule(l){2-4} \cmidrule(l){6-8}
& UP & HP & GP & & UP & HP & GP \\ \hline
\vspace{-0.15cm}\\
rajat26 & 0.12 & 0.53 & 0.52 & & F & 63.7 & \textbf{9.3} \\
ecl32 & 0.20 & 0.98 & 0.54 & & 246.7 & 39.3 & \textbf{11.8} \\
2D\_54019\_highK & 0.22 & 0.69 & 0.50 & & 1.5 & 1.4 & \textbf{1.3} \\
bayer01 & 0.14 & 0.41 & 0.39 & & 64.4 & 9.7 & \textbf{4.0} \\
TSOPF\_RS\_b39\_c30 & 0.47 & 0.93 & 0.87 & & 26.1 & \textbf{6.2} & 9.4 \\
venkat01 & 0.75 & 2.02 & 1.44 & & 3.0 & 3.4 & \textbf{2.8} \\
venkat25 & 0.67 & 2.05 & 1.55 & & 32.0 & 16.9 & \textbf{16.7} \\
venkat50 & 0.76 & 2.07 & 1.46 & & 57.3 & 26.0 & \textbf{25.8} \\
laminar\_duct3D & 1.30 & 5.89 & 4.63 & & 34.3 & 53.8 & \textbf{33.8} \\
lhr71c & 0.60 & 1.82 & 1.68 & & F & 197.0 & \textbf{140.3} \\
shyy161 & 0.16 & 0.50 & 0.32 & & \textbf{1.2} & 1.7 & 1.5 \\
circuit\_4 & 0.16 & 0.79 & 1.13 & & F & 16.6 & \textbf{13.1} \\
epb3 & 0.21 & 0.70 & 0.44 & & 88.3 & 102.1 & \textbf{78.7} \\
poisson3Db & 1.10 & 6.02 & 5.08 & & 723.0 & 60.2 & \textbf{48.6} \\
rajat20 & 0.31 & 2.05 & 2.05 & & 83.0 & 44.1 & \textbf{37.8} \\
rajat25 & 0.28 & 2.10 & 2.01 & & 122.1 & 66.5 & \textbf{49.9} \\
rajat28 & 0.30 & 2.02 & 2.06 & & 87.9 & 25.8 & \textbf{24.4} \\
LeGresley\_87936 & 0.35 & 0.88 & 0.64 & & F & 294.7 & \textbf{123.5} \\
rajat16 & 0.24 & 1.97 & 1.82 & & 836.5 & 71.8 & \textbf{22.9} \\
ASIC\_100ks & 0.37 & 1.50 & 1.32 & & F & \textbf{3.1} & 4.1 \\
ASIC\_100k & 0.51 & 2.24 & 1.62 & & 27.7 & 99.5 & \textbf{18.2} \\
matrix\_9 & 0.57 & 2.19 & 1.81 & & \textbf{275.0} & F & 753.9 \\
hcircuit & 0.27 & 0.73 & 0.81 & & 593.4 & 137.8 & \textbf{34.2} \\
lung2 & 0.24 & 0.52 & 0.46 & & \textbf{1.6} & 1.8 & 1.9 \\
rajat23 & 0.31 & 1.12 & 1.77 & & F & 467.9 & \textbf{37.9} \\
Baumann & 0.39 & 1.39 & 0.81 & & \textbf{49.4} & 89.5 & 92.5 \\
barrier2-1 & 1.10 & 4.40 & 3.58 & & F & F & \textbf{150.2} \\
barrier2-2 & 1.10 & 4.40 & 3.56 & & F & F & \textbf{109.0} \\
barrier2-3 & 1.10 & 4.42 & 3.45 & & F & F & \textbf{121.0} \\
barrier2-4 & 1.10 & 4.35 & 3.58 & & F & F & \textbf{125.8} \\
barrier2-10 & 1.20 & 4.42 & 3.61 & & F & F & \textbf{147.2} \\
barrier2-11 & 1.20 & 4.40 & 3.66 & & F & F & \textbf{136.3} \\
barrier2-12 & 1.10 & 4.47 & 3.61 & & F & F & \textbf{128.0} \\
barrier2-9 & 1.10 & 4.32 & 3.54 & & F & F & \textbf{145.7} \\
torso2 & 0.46 & 1.17 & 0.86 & & \textbf{1.8} & 2.6 & 2.4 \\
torso1 & 3.10 & 35.19 & 7.59 & & F & \textbf{421.5} & 983.1 \\
dc1 & 0.37 & 1.82 & 1.49 & & 287.7 & 752.9 & \textbf{51.3} \\
dc2 & 0.35 & 1.77 & 1.44 & & 204.0 & 514.1 & \textbf{30.3} \\
dc3 & 0.37 & 1.80 & 1.51 & & 828.1 & 1233.9 & \textbf{48.5} \\
trans4 & 0.39 & 2.01 & 1.54 & & 129.5 & 462.0 & \textbf{9.9} \\
trans5 & 0.40 & 2.05 & 1.42 & & 377.3 & 1705.9 & \textbf{12.4} \\
matrix-new\_3 & 0.42 & 1.90 & 1.37 & & F & 715.0 & \textbf{524.1} \\
cage12 & 0.97 & 5.47 & 3.99 & & \textbf{23.9} & 64.2 & 33.6 \\
FEM\_3D\_thermal2 & 1.40 & 4.90 & 3.75 & & \textbf{8.3} & 11.6 & 10.3 \\
para-4 & 1.40 & 6.62 & 4.99 & & 1605.4 & F & \textbf{478.3} \\
para-10 & 1.00 & 4.45 & 3.07 & & F & F & \textbf{673.6} \\
para-5 & 0.91 & 4.82 & 3.20 & & F & F & \textbf{619.7} \\
para-6 & 1.10 & 4.57 & 3.13 & & F & F & \textbf{610.2} \\
para-7 & 1.00 & 4.62 & 3.20 & & F & F & \textbf{552.0} \\
para-8 & 1.10 & 4.57 & 3.07 & & F & F & \textbf{608.8} \\
para-9 & 1.00 & 4.72 & 3.23 & & F & F & \textbf{691.2} \\
crashbasis & 0.84 & 2.50 & 2.01 & & 13.2 & 5.2 & \textbf{5.1} \\
majorbasis & 0.72 & 2.40 & 2.11 & & 9.6 & 5.1 & \textbf{5.0} \\
ohne2 & 2.40 & 12.24 & 9.27 & & F & 2128.6 & \textbf{843.9} \\
hvdc2 & 0.79 & 1.82 & 1.34 & & F & 449.4 & \textbf{264.2} \\
shar\_te2-b3 & 0.16 & 2.97 & 3.10 & & \textbf{24.2} & 35.2 & 35.8 \\
stomach & 1.20 & 5.12 & 3.21 & & \textbf{5.1} & 9.5 & 7.8 \\
torso3 & 1.90 & 9.27 & 4.90 & & \textbf{14.1} & 25.5 & 17.4 \\
ASIC\_320ks & 1.10 & 3.02 & 3.02 & & F & 8.7 & \textbf{5.2} \\
ASIC\_320k & 1.30 & 12.45 & 5.34 & & 61.9 & 64.9 & \textbf{20.4} \\
ML\_Laplace & 9.30 & 58.70 & 29.05 & & \textbf{3913.4} & 4269.0 & 4288.5 \\
RM07R & 14.00 & 110.00 & 69.14 & & F & F & \textbf{10727.5} \\
language & 1.20 & 4.50 & 4.32 & & 338.5 & 204.8 & \textbf{160.3} \\
CoupCons3D & 6.40 & 32.72 & 17.66 & & 109.5 & 103.0 & \textbf{84.5} \\
largebasis & 2.20 & 6.37 & 4.61 & & 553.1 & 182.9 & \textbf{98.8} \\
cage13 & 4.20 & 30.25 & 17.10 & & \textbf{78.6} & 301.1 & 217.3 \\
rajat30 & 2.90 & 19.71 & 14.99 & & 326.9 & 350.0 & \textbf{127.5} \\
ASIC\_680k & 2.60 & 16.21 & 7.79 & & \textbf{19.3} & 228.2 & 23.8 \\
atmosmodd & 5.60 & 36.38 & 11.97 & & \textbf{881.9} & 2165.8 & 1464.6 \\
atmosmodj & 6.80 & 36.17 & 12.29 & & \textbf{1006.7} & 2372.9 & 1535.1 \\
Hamrle3 & 4.70 & 20.00 & 9.92 & & F & \textbf{2406.8} & 2469.9 \\
atmosmodl & 7.20 & 45.18 & 14.71 & & 1815.8 & 1067.0 & \textbf{585.5} \\
atmosmodm & 8.00 & 45.68 & 14.07 & & 1747.3 & 934.9 & \textbf{333.2} \\
memchip & 10.00 & 43.12 & 20.15 & & 7476.3 & 1640.0 & \textbf{893.4} \\
circuit5M\_dc & 16.00 & 58.06 & 25.95 & & 540.3 & 209.7 & \textbf{64.5} \\
rajat31 & 25.00 & 74.36 & 37.53 & & 9634.0 & 9539.1 & \textbf{5220.2} \\
\hline \vspace{-0.2cm}\\
& \multicolumn{3}{c}{Geometric means} & & \multicolumn{3}{c}{ Number of bests} \\
& \textbf{0.9} & \textbf{4.2} & \textbf{ 2.9 } & & \textbf{15} & \textbf{4} & \textbf{57} \\
\hline
\end{tabular}
}
\end{table}
As seen in~\cref{pretable}, in terms of the preprocessing time, UP is the clear winner in all of the $76$ instances as expected, whereas GP incurs much less preprocessing time than HP in all except 7 instances.
As also seen in~\cref{pretable}, comparing all three methods, GP achieves the smallest total execution time in 57 instances, whereas UP and HP respectively achieve the smallest total
execution time in only 15 and 4 instances.
In the relative comparison of GP and UP, GP incurs only $209\%$ more preprocessing time than UP, on average, which leads GP to achieve less total execution time than UP in 61 out of 76 instances.
In other words, compared to UP, the sequential implementation of GP amortizes its preprocessing cost for those 61 instances by reducing the number of CG iterations sufficiently.
That is, GP amortizes its preprocessing cost for the solution of those 61 instances even if we solve only one linear system with a single right-hand side vector.
Note that in many applications in which sparse linear systems arise, the solution of consecutive linear systems are required where the coefficient matrix remains the same but the right-hand side vectors change.
In such applications, the amortization performance of the proposed GP method will further improve.
For example, the amortization performance of GP will improve from 61 to 64 instances for solving only two consecutive linear systems.
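The amortization argument reduces to a simple cost model; the sketch below uses hypothetical timings (not measured values) only to illustrate the break-even computation:

```python
def amortized_after(prep_a, solve_a, prep_b, solve_b):
    # Smallest number of right-hand-side solves k for which method A's
    # total time (preprocessing + k solves) beats method B's, or None
    # if A's per-solve time is not smaller (so it never catches up).
    if solve_a >= solve_b:
        return None
    k = 1
    while prep_a + k * solve_a >= prep_b + k * solve_b:
        k += 1
    return k

# Hypothetical GP-like method (higher preprocessing, faster solve)
# vs a UP-like method (cheap preprocessing, slower solve).
print(amortized_after(prep_a=3.0, solve_a=10.0, prep_b=0.3, solve_b=40.0))   # -> 1
print(amortized_after(prep_a=30.0, solve_a=10.0, prep_b=0.3, solve_b=12.0))  # -> 15
```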
In~\cref{s_part2,total}, we show the performance profiles in terms of the preprocessing time and the total execution time, respectively.
As seen in~\cref{s_part2}, the preprocessing overhead incurred by GP remains no worse than that of the best method UP by a factor of 4 in approximately $77\%$ of instances.
As seen in~\cref{total}, the total execution time attained by GP does not exceed that of the best method by a factor of 2 in $96\%$ of the instances.
On the other hand, HP and UP achieve the same relative performance compared to the best method in only approximately $44\%$ and $33\%$ of the instances, respectively.
\begin{figure}[htbhp]
\begin{center}
\vspace{-0.4cm}
\subfloat []
{
\includegraphics[scale=.47]{images/pre-eps-converted-to.pdf}
\label{s_part2}
}
\subfloat [] {
\includegraphics[scale=.47]{images/tot-eps-converted-to.pdf}
\label{total}
}
\end{center}
\caption{Performance profiles for (a) preprocessing time and (b) total execution time for GP, HP and UP methods.}
\label{perf}
\vspace{-0.4cm}
\end{figure}
\section{Conclusion and future work}
\label{section4}
In this paper, we proposed a novel partitioning method in order to improve the CG accelerated block Cimmino algorithm.
The proposed partitioning method takes the numerical orthogonality between block rows of the coefficient matrix into account.
The experiments on a large set of real world systems show that the proposed method improves the convergence rate of the CG accelerated block Cimmino compared to the state-of-the-art hypergraph partitioning method.
Moreover, it requires not only less preprocessing time and fewer CG iterations, but also much less total execution time than the hypergraph partitioning method.
As future work, we consider two issues:
further reducing the number of iterations through preconditioning and reducing the preprocessing overhead through parallelization.
Even though the $H$ matrix is not available explicitly, it could still be possible to obtain a preconditioner that further reduces the required number of CG iterations. One viable option could be a sparse approximate inverse type preconditioner, which does not require the coefficient matrix to be available explicitly. This approach could be especially viable when consecutive linear systems need to be solved with the same coefficient matrix.
The proposed method involves two computational stages, namely constructing the row inner-product graph via an SpGEMM operation and partitioning this graph.
For parallelizing the first stage, a parallel SpGEMM operation~\cite{akbudak2014simultaneous,kadirSpGEMM,azad2016exploiting} could be used to construct local subgraphs on each processor.
For parallelizing the second stage, a parallel graph partitioning tool ParMETIS~\cite{karypis1998parallel} could be used.
In each processor, the local subgraphs generated in parallel in the first stage could be used as input for ParMETIS.
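For reference, the first stage reduces to one sparse matrix product: the row inner-product graph has an edge between rows $i$ and $j$ exactly when entry $(i,j)$ of $AA^T$ is nonzero. A minimal dense sketch of this construction (a real implementation would use a sparse SpGEMM kernel):

```python
import numpy as np

# Toy matrix: rows 0 and 1 overlap in column 1; row 2 is structurally
# orthogonal to both.
A = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])

# B = A A^T: a nonzero off-diagonal B[i, j] means rows i and j are not
# orthogonal, i.e. edge {i, j} in the row inner-product graph (edge
# weights could, e.g., reflect the magnitude of the inner product).
B = A @ A.T
n = len(B)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if B[i, j] != 0]
print(edges)  # -> [(0, 1)]
```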
\section{Introduction}
The novel coronavirus, severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2), which emerged in late 2019, belongs to the coronaviridae
family. SARS-CoV-2 has an unparalleled capacity for human-to-human
transmission and caused the COVID-19 pandemic. Having witnessed two
recent outbreaks caused by coronaviridae, namely SARS-CoV in 2002 and
MERS-CoV in 2012, researchers took an immediate interest in studying
the zoonotic origin, transmissibility, mutation and variants of
SARS-CoV-2~\cite{kuzmin2020machine,Laporte2020TheSA}.
SARS-CoV-2 has a positive-strand RNA genome of about 30Kb and encodes
two categories of proteins: structural and non-structural (see
Figure~\ref{fig_spike_seq_example}). The spike protein is one of the
principal structural proteins of the virus, with $1160$ to $1542$
amino acids. The spike protein's primary function is to serve as a
mechanism for the virus to enter the human cell by binding to the
ACE2 receptor.
Detailed study of the structure of the spike glycoprotein unveils the
molecular mechanism behind host cell recognition, attachment, and
entry. Notably, the spike glycoprotein of SARS-CoV-2 has two
subunits, $S_1$ and $S_2$, belonging to the N and C terminals,
respectively~\cite{galloway2021emergence,kuzmin2020machine}. The
receptor binding domain (RBD) ($\approx$ $220$ amino acids) of the
$S_1$ subunit helps the virus attach to the host cell by binding the
ACE2 receptor protein, and the $S_2$ subunit helps it enter the
cell. SARS-CoV-2 continues to mutate over time, resulting in changes
in its amino acid sequences. Changes in the spike protein's amino
acids, specifically in the RBD, can make the virus more transmissible
and better at evading the human immune system. In the language of
phylogenetics and virology, the virus is creating new variants and
strains by accruing new amino acid changes in the spike protein and
its genome
\cite{galloway2021emergence,naveca2021phylogenetic,yadav2021neutralization,zhang2021emergence}.
The state-of-the-art mRNA vaccines train the host immune system to
create specific antibodies that can bind to the spike protein,
preventing the virus from entering the host cell.
Therefore, amino acid changes in the spike protein generate new
variants which could potentially be more contagious and more
resistant to vaccines~\cite{Krishnan2021PredictingVaccineHesitancy}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4,page=1]{Figures/spike_sequence_example_figure.pdf}
\caption{The SARS-CoV-2 genome is around 29--30kb and encodes
  structural and non-structural proteins. ORF1ab encodes the
  non-structural proteins, while the four structural proteins (spike,
  envelope, membrane, and nucleocapsid) are encoded by their
  respective genes. The spike protein has 1160 to 1542 amino acids.}
\label{fig_spike_seq_example}
\end{figure}
In-depth studies of alterations in the spike protein to classify and
predict amino acid changes in SARS-CoV-2 are crucial in understanding
the immune evasion and host-to-host transmission properties of
SARS-CoV-2 and its variants. Knowledge of mutations and variants
helps identify the transmission patterns of each variant, which in
turn helps devise appropriate public health interventions to prevent
rapid spread \cite{Ahmad2016AusDM,ahmad2017spectral,Tariq2017Scalable,AHMAD2020Combinatorial}.
This will also help in vaccine design and efficacy. A
massive amount of genomic sequences of SARS-CoV-2 are available from
different parts of the globe, with various clinical, epidemiological,
and pathological information from the GISAID open source
database\footnote{\url{https://www.gisaid.org/}}. In this study, we
design sophisticated machine learning models that leverage this
open-source genomic data and metadata to understand, classify, and
predict amino acid changes in SARS-CoV-2, most notably in its
spike
protein~\cite{Krishnan2021PredictingVaccineHesitancy,Laporte2020TheSA,Lokman2020ExploringTG}.
When sequences have the same length and they are aligned, i.e., a
one-to-one correspondence between positions or indices of the
sequences is established, machine learning methods devised for vectors
in metric spaces can be employed for sequence analysis. This approach
treats sequences as vectors, considering each character (e.g., amino
acid or nucleotide) as the value of the vector at a coordinate, e.g.,
using one-hot encoding~\cite{kuzmin2020machine}. In this case,
however, the order of positions loses its significance. Since the
order of indices in a sequence is a defining factor, ignoring it may
degrade the performance of the underlying machine learning models.
In representation based data analysis, on the other hand, each data
object is first mapped to a vector in a fixed-dimensional vector
space, taking into account important structure in the data (such as
order). Vector space machine learning algorithms are then used on the
vector representations of sequences. This approach has been highly
successful in the analysis of data from various domains such as
graphs~\cite{hassan2020estimating,Hassan2021Computing}, nodes in
graphs~\cite{ali2021predicting}, electricity
consumption~\cite{ali2019short,Ali2020ShortTerm} and images ~\cite{Bo_ImageKernel}.
This approach has also yielded significant success in sequence
analysis, since the feature representation takes into account the
sequential nature of the data, such as
texts~\cite{Shakeel2020LanguageIndependent,Shakeel2020Multi,Shakeel2019MultiBilingual}, electroencephalography and
electromyography sequences~\cite{atzori2014electromyography,ullah2020effect}, networks~\cite{Ali2019Detecting1}, and biological
sequences~\cite{leslie2002mismatch,farhan2017efficient,Kuksa_SequenceKernel,ali2021effective}. For
biological sequences (DNA and protein), a feature vector based on
counts of all length $k$ substrings (called $k$-mers) occurring
exactly or inexactly up to $m$ mismatches (mimicking biological
mutations) is proposed in~\cite{leslie2002mismatch}. The kernel value
between two sequences --- the dot product between the corresponding
feature vectors --- serves as a pairwise proximity measure and is the
basis of kernel based machine learning. We provide the technical
definition of feature maps and the computational aspects of kernel
computation in Section~\ref{sec_proposed_approach}.
In this paper, our contributions are the following:
\begin{enumerate}
\item We propose a method based on $k$-mers (for feature vector
  generation) and kernel approximation (to compute pairwise similarity
  between spike sequences) that classifies the variants with very high
  accuracy.
\item We show that spike sequences alone can be used to efficiently
classify different COVID-19 variants.
\item We show that the proposed method outperforms the baseline in
terms of accuracy, precision, recall, F1-measure, and ROC-AUC.
\item We show that with only $1\%$ of the data for training, we can
achieve high prediction accuracy.
\end{enumerate}
The rest of the paper is organised as follows:
Section~\ref{related_work} contains the previous work related to our
problem. Section~\ref{sec_proposed_approach} contains the proposed
approach of this paper in detail along with the description of
baseline model. Section~\ref{sec_data_set_detail} contains the
information related to different datasets.
We show our results in
Section~\ref{sec_results_and_discussion}. Finally, we conclude the
paper in Section~\ref{sec_conclusion}.
\section{Related Work}\label{related_work}
Phylogeny based inference of disease transmission~\cite{Dhar2020TNet}
and sequence homology (shared ancestry) detection between a pair of
proteins are important tasks in bioinformatics and biotechnology.
Sequence classification is a widely studied problem in both of these
domains~\cite{Krishnan2021PredictingVaccineHesitancy}. Most sequence
classification methods for viruses require the alignment of the input
sequence to fixed length predefined vectors, which enables the machine
learning algorithm to compare homologous feature
vectors~\cite{Dwivedi2012ClassificationOH}.
Pairwise local and global alignment similarity scores between
sequences were used traditionally for sequence
analysis~\cite{Chowdhury2017MultipleSequences}.
Alignment-based methods are computationally expensive, especially for
long sequences, while heuristic methods require a number of ad-hoc
settings such as the penalty of gaps and substitutions, and alignment
methods may not perform well on highly divergent regions of the
genome. To address these limitations, various alignment-free
classification methods have been proposed
\cite{farhan2017efficient,Chang2014ANA}.
The use of $k$-mer (substrings of length $k$) frequencies for
phylogenetic applications started
with~\cite{Blaisdell1986AMeasureOfSimilarity}, who reported success in
constructing accurate phylogenetic trees from several coding and
non-coding nDNA sequences. Typically, $k$-mer frequency vectors are
paired together with a distance function to measure the quantitative
similarity score between any pair of
sequences~\cite{Zielezinski2017AlignmentfreeSC}.
However, the basic bottleneck for these techniques is the quadratic
(in the lengths of the sequences) runtime of kernel evaluation.
Farhan et al.~\cite{farhan2017efficient} proposed an efficient
algorithm for kernel computation (see Theorem~\ref{thm_kernel}). They
provide an approach to compute the approximate pairwise proximity
matrix that can be used for various machine learning tasks like
classification through kernel PCA~\cite{hoffmann2007kernel}.
With the global outbreak of COVID-19 in 2020, different mutations of
its variants were discovered by the genomics community. Massive
testing and large-scale sequencing produced a huge amount of data,
creating ample opportunity for the bioinformatics community.
Researchers started exploring the evolution of
SARS-CoV-2~\cite{Ewen2021TargetedSelf} to vaccine
landscapes~\cite{su2021learning} and the long-term effects of COVID-19
on patients~\cite{tankisi2020critical}. In~\cite{Laporte2020TheSA}, the
authors indicate how the coronavirus spike protein is fine-tuned
towards the temperature and protease conditions of the airways, to
enhance virus transmission and pathology.
After the global spread of the coronavirus, researchers started
exploring ways to identify new variants and measuring vaccine
effectiveness. In~\cite{Leila2021Genotype}, the authors study genome
structures of SARS-CoV-2 and its previous versions,
while~\cite{Lokman2020ExploringTG} explores the genomics and proteomic
variants of the SARS-CoV-2 spike glycoprotein.
It is also important to study the conditional dependencies between the attributes (amino acids) and the class label, if any such dependency exists. Uncovering these (hidden) relationships can, in turn, aid the analysis of the protein sequences.
\section{Proposed Approach}\label{sec_proposed_approach}
Given a set of spike sequences, our goal is to find the similarity
score between each pair of sequences (kernel matrix).
The resultant kernel matrix is given as input to the kernel PCA method
for dimensionality reduction. The reduced dimensional principal
components-based feature vector representation is given as input to
the classical machine learning models. We discuss each step in detail
below.
\subsection{$k$-mers Computation}
For mapping protein sequences to fixed-length vectors, it is important
to preserve the order of the amino acids within a sequence. To
preserve the order, we use substrings (called mers) of length $k$.
For each spike sequence, the total number of $k$-mers is:
\begin{equation}
  \text{Total number of $k$-mers} = N - k + 1,
\end{equation}
where $N$ is the total number of amino acids in the spike sequence
($1274$), and $k$ is a user-defined parameter for the size of each
mer. An example of $k$-mers (with $k = 4$) is given in
Figure~\ref{fig_k_mer_demo}.
\begin{figure}[!ht]
\centering
\includegraphics[scale = 0.4] {Figures/k_mers_demo.png}
\caption{Example of $k$-mers (with $k = 4$) in the spike sequence
  ``MFVFVFVLPLV''.}
\label{fig_k_mer_demo}
\end{figure}
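As a minimal illustration (not the implementation used in our experiments), the $k$-mer extraction shown in Figure~\ref{fig_k_mer_demo} can be sketched as follows:

```python
def get_kmers(sequence, k):
    """Return the list of overlapping k-mers of a sequence.

    A sequence of length N yields exactly N - k + 1 k-mers.
    """
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# Example from Figure 2: k = 4 on the spike fragment "MFVFVFVLPLV"
kmers = get_kmers("MFVFVFVLPLV", 4)
print(len(kmers))   # 11 - 4 + 1 = 8
print(kmers[0])     # MFVF
```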
However, since we do not know the total number of unique $k$-mers in
all spike sequences, we need to consider all possible $k$-mers to
design a general-purpose feature vector representation for spike
sequences in a given dataset. In this way, given an alphabet $\Sigma$
(a finite set of symbols), we know that a spike sequence $X \in
\Sigma^*$ ($X$ contains a list of ``ordered'' amino acids from
$\Sigma$). Similarly, we can extract substrings (mers) from $X$ of
length $k$, which we call $k$-mers.
Given $X$, $k$, and $\Sigma$, we have to design a frequency feature
vector $\Phi_k (X)$ of length $\vert \Sigma \vert^k$, which will
contain the exact number of occurrences of each possible $k$-mer in
$X$. The distance between two sequences $X$ and $Y$ is then simply
the Hamming distance $d_H$ (the number of mismatched values).
After computing the feature vector, a kernel function is defined that
measures the pairwise similarity between pairs of feature vectors
(usually using the dot product). The challenge here is the huge
dimensionality $\vert \Sigma \vert^k$ of the feature vector, which
can make kernel computation very expensive. Therefore, in the
so-called {\em kernel trick}, kernel values are evaluated directly
instead of explicitly comparing coordinates.
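To make the feature map concrete, the following sketch (assuming the $21$-letter amino acid alphabet used later in the paper) stores only the non-zero coordinates of $\Phi_k(X)$, avoiding the explicit $\vert \Sigma \vert^k$-dimensional vector:

```python
from collections import Counter

SIGMA = "ACDEFGHIKLMNPQRSTVWXY"           # 21 amino-acid symbols
INDEX = {aa: i for i, aa in enumerate(SIGMA)}

def kmer_index(kmer):
    """Map a k-mer to its coordinate in the |Sigma|^k-dimensional
    feature vector (base-|Sigma| positional encoding)."""
    idx = 0
    for ch in kmer:
        idx = idx * len(SIGMA) + INDEX[ch]
    return idx

def phi(sequence, k):
    """Sparse frequency vector Phi_k(X): coordinate -> count.
    Only the (at most N - k + 1) non-zero entries are stored."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return {kmer_index(m): c for m, c in counts.items()}
```

For instance, `phi("AAAC", 2)` yields the sparse vector `{0: 2, 1: 1}`, since the $2$-mer ``AA'' (coordinate $0$) occurs twice and ``AC'' (coordinate $1$) occurs once.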
The algorithm proposed in \cite{farhan2017efficient} takes the feature
vectors (containing the count of each $k$-mer) as input and returns a
real-valued similarity score between each pair of vectors. Given two
feature vectors $A$ and $B$, the kernel value for these vectors is
simply the dot product of $A$ and $B$. For example, given a $k$-mer,
if its frequency in $A$ is $2$ and in $B$ is $3$, its
contribution towards the kernel value of $A$ and $B$ is simply $2
\cdot 3$. The process of kernel value computation is repeated for
each pair of sequences and hence we get a (symmetric) matrix (kernel
matrix) containing a similarity score between each pair of sequences.
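As an illustrative sketch of the exact (non-approximate) kernel computation on sparse count vectors, i.e., plain dictionaries mapping $k$-mer coordinates to frequencies, matching the $2 \cdot 3$ example above:

```python
def kernel_value(phi_a, phi_b):
    """Dot product of two sparse count vectors: sum, over shared
    k-mers, of the product of their frequencies."""
    if len(phi_a) > len(phi_b):          # iterate over the smaller dict
        phi_a, phi_b = phi_b, phi_a
    return sum(c * phi_b.get(i, 0) for i, c in phi_a.items())

def kernel_matrix(vectors):
    """Symmetric matrix of pairwise similarity scores."""
    n = len(vectors)
    K = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            K[i][j] = K[j][i] = kernel_value(vectors[i], vectors[j])
    return K

# A k-mer with frequency 2 in A and 3 in B contributes 2 * 3 = 6:
print(kernel_value({7: 2}, {7: 3}))   # 6
```

Note that this naive computation is what the approximation algorithm of \cite{farhan2017efficient} is designed to speed up.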
\begin{theorem}\label{thm_kernel}
The runtime of the kernel computation is bounded above by $O(k^2\, n
  \log n)$ \cite{farhan2017efficient}, where $k$ is the length of the
  $k$-mers and $n$ is the length of the sequences.
\end{theorem}
Note that $k$ is a user-defined parameter --- in our experiments we
use $k = 9$.
\subsection{Kernel PCA}
Due to a high-dimensional kernel matrix, we use Kernel PCA (K-PCA)
\cite{hoffmann2007kernel} to select a subset of principal components.
These extracted principal components corresponding to each spike
sequence act as the feature vector representations for the spike
sequences (we selected 50 principal components for our experiments).
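As a sketch of the key preprocessing step behind kernel PCA (the eigendecomposition itself is typically delegated to a library routine, e.g., scikit-learn's \texttt{KernelPCA} with a precomputed kernel), the kernel matrix is first centered in feature space:

```python
def center_kernel(K):
    """Center a symmetric kernel matrix in feature space:
    K_c = K - 1K - K1 + 1K1, where 1 is the n x n matrix with all
    entries 1/n. The principal components are then obtained from the
    top eigenvectors of K_c (here K is symmetric, so row means equal
    column means)."""
    n = len(K)
    row = [sum(r) / n for r in K]        # row (= column) means
    total = sum(row) / n                 # grand mean
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]
```

After centering, each row and column of $K_c$ sums to zero, which is the property the eigendecomposition step relies on.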
\subsection{Machine Learning Classifiers}
Various machine learning algorithms are utilized for the
classification task. The K-PCA output, consisting of $50$ principal
components, is fed to different classifiers for prediction. We use Support Vector
Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP),
K-Nearest Neighbour (KNN) (with $K = 5$), Random Forest (RF), Logistic
Regression (LR), and Decision Tree (DT) classifiers. For the training
of classifiers, default parameters are used from the literature. All
experiments are done on Core i5 system with Windows 10 OS and 32 GB
RAM. Our algorithm is implemented in Python. Our code and
pre-processed datasets are available
online\footnote{\url{https://github.com/sarwanpasha/covid_variant_classification}}.
For evaluation of the results, we use Weka
software\footnote{\url{https://www.cs.waikato.ac.nz/ml/weka/}}. The
evaluation metrics that we are using to measure the goodness of the
classifiers are average accuracy, precision, recall, F1 (weighted), F1
(macro), and receiver operator curve (ROC) area under the curve (AUC).
\subsection{Baseline Model}
We consider the approach of~\cite{kuzmin2020machine} as a baseline
model. The authors of~\cite{kuzmin2020machine} convert spike sequences
into one-hot encoding vectors that are used to classify the viral
hosts. We have the $21$ amino acid symbols
``\textit{ACDEFGHIKLMNPQRSTVWXY}'' (the unique characters forming
$\Sigma$). The length of each spike sequence is $1273$ (with $*$ at
the $1274^{th}$ position).
After converting a sequence into a one-hot encoding vector, we get a
$26{,}733$-dimensional vector ($21 \times 1273 = 26{,}733$). Principal
Component Analysis (PCA) is then applied to these vectors to reduce
the dimensionality for the underlying classifiers.
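A minimal sketch of this one-hot encoding (assuming the $21$-symbol alphabet above); a sequence of length $1273$ yields the stated $26{,}733$-dimensional vector:

```python
SIGMA = "ACDEFGHIKLMNPQRSTVWXY"           # 21 amino-acid symbols
INDEX = {aa: i for i, aa in enumerate(SIGMA)}

def one_hot(sequence):
    """Flatten a sequence into a binary vector of length
    21 * len(sequence): one 21-dimensional indicator block per
    position. Exactly one entry per block is set to 1."""
    vec = [0] * (len(SIGMA) * len(sequence))
    for pos, aa in enumerate(sequence):
        vec[pos * len(SIGMA) + INDEX[aa]] = 1
    return vec
```

Unlike the $k$-mer representation, concatenating per-position indicator blocks ties every feature to an absolute alignment position, which is why this baseline requires aligned sequences.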
For reference, we use the name ``One-Hot'' for this baseline approach
in the rest of the paper. For PCA, we select $100$ principal
components (see Figure~\ref{fir_pca_components}).
\begin{figure}[!ht]
\centering
\includegraphics{results/pca.tikz}
\caption{Explained variance of principal components for GISAID 1
dataset.}
\label{fir_pca_components}
\end{figure}
\section{Dataset Description and Preprocessing}
\label{sec_data_set_detail}
We sampled three subsets of spike sequences from the largest known
database of COVID-19 sequences in humans,
GISAID\footnote{\url{https://www.gisaid.org/}}. We refer to these
$3$ subsets as GISAID 1, GISAID 2, and GISAID 3, having $7000$,
$7000$, and $3029$ (aligned) spike sequences, respectively, each of
length 1274 and drawn from $5$ variants. For the GISAID 1 and GISAID 2 datasets, we preserve the proportion of each variant as given in the original GISAID database. For the GISAID 3 dataset, we use a different proportion of variants to analyse the behavior of our algorithm.
See
Table~\ref{tbl_variant_information} for more information.
\begin{table}[H]
\centering
\begin{tabular}{lllc|ccc}
\hline
\begin{mybox}
Pango\\Lineage\end{mybox} & Region & Labels & \begin{mybox}
Num mutations\\S-gene/Genome \end{mybox} & \multicolumn{3}{c}{Num sequences in} \\
 & & & & GISAID 1 & GISAID 2 & GISAID 3 \\
\hline \hline
B.1.1.7 & UK~\cite{galloway2021emergence} & Alpha & 8/17 & 5979 & 5979 & 2055 \\
B.1.351 & South Africa~\cite{galloway2021emergence} & Beta & 9/21 & 124 & 124 & 133 \\
B.1.617.2 & India~\cite{yadav2021neutralization} & Delta & 8/17 & 596 & 596 & 44 \\
P.1 & Brazil~\cite{naveca2021phylogenetic} & Gamma & 10/21 & 202 & 202 & 625 \\
B.1.427 & California~\cite{zhang2021emergence} & Epsilon & 3/5 & 99 & 99 & 182 \\
\hline
\end{tabular}
\caption{Variants information and distribution in the three
  datasets. The ``Num mutations S-gene/Genome'' column gives the
  number of mutations on the S gene~/~the entire genome.}
\label{tbl_variant_information}
\end{table}
\vskip-.3in
\noindent To visualize the local structure of spike sequences, we use
t-distributed stochastic neighbor embedding (t-SNE)
\cite{van2008visualizing} that maps input sequences to 2d real
vectors. The t-SNE helps to visualize (hidden) clusters in the data.
The visualization results are shown in Figure~\ref{fig_tsn_embedding},
revealing that the variants are not well separated, which makes
variant classification a challenging task. It is clear from
Figure~\ref{fig_tsn_embedding} that the dominant Alpha variant is not
in a single cluster and smaller variants are scattered around
(e.g. the least frequent variant, B.1.427, appears in most clusters).
\begin{figure}[!ht]
\centering
\includegraphics[scale = 0.55,page = 1] {Figures/t_sne/tsne_fig.pdf}
\caption{t-SNE embeddings of spike sequences.}
\label{fig_tsn_embedding}
\end{figure}
\section{Experimental Evaluation}
\label{sec_results_and_discussion}
In this section, we first report the performance of different
classifiers using multiple performance metrics. Then we analyze the
importance of the positions of each amino acid in the spike sequence
using information gain. Results for the GISAID 1, 2 and 3 datasets
are given in
Tables~\ref{tbl_avg_classification_results_second_dataset}--\ref{tbl_avg_classification_results_third_dataset}.
We present results for each classifier separately for the baseline
method and compare it with our proposed method. We can observe that
for most of the classifiers, our proposed method is better than the
baseline. For example, in the case of the SVM classifier, the one-hot
method attains a $0.962$ F1-Macro score on the GISAID 1 dataset while
our proposed model attains $0.973$, a notable improvement considering
that all values are already on the higher side. Similar behavior is
observed for the other classifiers. For all of these results, we use
$1\%$ of the data for training and $99\%$ for testing. Since we obtain
such high accuracies, we can conclude that with a minimal amount of
available data, we can train a classifier that distinguishes the
different variants very effectively. We also observe that the SVM
classifier consistently performs best across all
datasets. Note that the results in
Tables~\ref{tbl_avg_classification_results_second_dataset}--\ref{tbl_avg_classification_results_third_dataset}
are averaged over all variants.
\begin{table}[!ht]
\centering
\begin{tabular}{cp{0.8cm}cccccc}
\hline
Approach & ML Algo. & Acc. & Prec. & Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\
\hline \hline
\multirow{7}{*}{One-Hot \cite{kuzmin2020machine}}
& SVM & 0.990 & 0.990 & 0.990 & 0.990 & 0.962 & 0.973 \\
& NB & 0.957 & 0.964 & 0.951 & 0.952 & 0.803 & 0.881 \\
& MLP & 0.972 & 0.971 & 0.975 & 0.974 & 0.881 & 0.923 \\
& KNN & 0.978 & 0.964 & 0.977 & 0.965 & 0.881 & 0.900 \\
& RF & 0.964 & 0.962 & 0.961 & 0.963 & 0.867 & 0.878 \\
& LR & 0.985 & 0.981 & 0.983 & 0.984 & 0.935 & 0.950 \\
& DT & 0.941 & 0.945 & 0.947 & 0.944 & 0.793 & 0.886\\
\hline
\multirow{7}{*}{Kernel Approx.}
& SVM & \textbf{0.994} & \textbf{0.994} & \textbf{0.995} & \textbf{0.995} & \textbf{0.973} & \textbf{0.988} \\
& NB & 0.987 & 0.985 & 0.985 & 0.986 & 0.901 & 0.912 \\
& MLP & 0.975 & 0.977 & 0.976 & 0.978 & 0.921 & 0.935 \\
& KNN & 0.979 & 0.967 & 0.979 & 0.967 & 0.887 & 0.904 \\
& RF & 0.981 & 0.987 & 0.988 & 0.980 & 0.944 & 0.945 \\
& LR & 0.992 & 0.990 & 0.993 & 0.992 & 0.991 & 0.990 \\
& DT & 0.985 & 0.981 & 0.985 & 0.987 & 0.898 & 0.944\\
\hline
\end{tabular}
\caption{Variants Classification Results (1\% training set and 99\%
testing set) for GISAID 1 Dataset. Best values are shown in bold}
\label{tbl_avg_classification_results_second_dataset}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{cp{0.8cm}cccccc}
\hline
Approach & ML Algo. & Acc. & Prec. & Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\
\hline \hline
\multirow{7}{*}{One-Hot \cite{kuzmin2020machine}}
& SVM & 0.994 & 0.994 & 0.993 & 0.992 & 0.975 & 0.983 \\
& NB & 0.912 & 0.936 & 0.912 & 0.920 & 0.794 & 0.913 \\
& MLP & 0.970 & 0.970 & 0.970 & 0.969 & 0.880 & 0.921 \\
& KNN & 0.960 & 0.960 & 0.960 & 0.958 & 0.841 & 0.863 \\
& RF & 0.966 & 0.967 & 0.966 & 0.964 & 0.888 & 0.885 \\
& LR & 0.993 & 0.993 & 0.993 & 0.993 & 0.968 & 0.973 \\
& DT & 0.956 & 0.957 & 0.956 & 0.956 & 0.848 & 0.913 \\
\hline
\multirow{7}{*}{Kernel Approx.}
& SVM & \textbf{0.998} & \textbf{0.997} & \textbf{0.997} & \textbf{0.998} & \textbf{0.998} & \textbf{0.997} \\
& NB & 0.985 & 0.988 & 0.985 & 0.984 & 0.946 & 0.967 \\
& MLP & 0.973 & 0.971 & 0.972 & 0.970 & 0.889 & 0.925 \\
& KNN & 0.965 & 0.962 & 0.963 & 0.967 & 0.845 & 0.867 \\
& RF & 0.990 & 0.992 & 0.991 & 0.996 & 0.978 & 0.977 \\
& LR & 0.997 & 0.994 & 0.996 & 0.997 & 0.991 & 0.993 \\
& DT & 0.991 & 0.990 & 0.994 & 0.996 & 0.952 & 0.963 \\
\hline
\end{tabular}
\caption{Variants Classification Results (1\% training set and 99\%
testing set) for GISAID 2 Dataset. Best values are shown in bold}
\label{tbl_avg_classification_results_first_dataset}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{cp{0.8cm}cccccc}
\hline
Approach & ML Algo. & Acc. & Prec. & Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\
\hline \hline
\multirow{7}{*}{One-Hot \cite{kuzmin2020machine}} & SVM & 0.988 & 0.986 & 0.987 & 0.982 & 0.924 & 0.961 \\
& NB & 0.764 & 0.782 & 0.761 & 0.754 & 0.583 & 0.747 \\
& MLP & 0.947 & 0.941 & 0.944 & 0.942 & 0.813 & 0.898 \\
& KNN & 0.920 & 0.901 & 0.924 & 0.901 & 0.632 & 0.773 \\
& RF & 0.928 & 0.935 & 0.922 & 0.913 & 0.741 & 0.804 \\
& LR & 0.982 & 0.981 & 0.983 & 0.984 & 0.862 & 0.921 \\
& DT & 0.891 & 0.891 & 0.890 & 0.895 & 0.679 & 0.807\\
\hline
\multirow{7}{*}{Kernel Approx.}
& SVM & \textbf{0.991} & \textbf{0.993} & \textbf{0.995} & \textbf{0.991} & \textbf{0.989} & \textbf{0.997} \\
& NB & 0.864 & 0.922 & 0.861 & 0.884 & 0.783 & 0.887 \\
& MLP & 0.926 & 0.922 & 0.921 & 0.923 & 0.805 & 0.909 \\
& KNN & 0.947 & 0.921 & 0.942 & 0.934 & 0.701 & 0.826 \\
& RF & 0.975 & 0.971 & 0.971 & 0.972 & 0.904 & 0.918 \\
& LR & 0.991 & 0.990 & 0.994 & 0.990 & 0.983 & 0.992 \\
& DT & 0.960 & 0.969 & 0.964 & 0.967 & 0.812 & 0.891\\
\hline
\end{tabular}
\caption{Variants Classification Results (1\% training set and 99\% testing set) for GISAID 3 Dataset. Best values are shown in bold}
\label{tbl_avg_classification_results_third_dataset}
\end{table}
We also show the variant-wise performance of our best classifier
(SVM). Table~\ref{tbl_svm_heatmap} contains the resulting confusion
matrices using the kernel based and One-Hot approaches for GISAID
1. Clearly, the kernel-based approach performs better than the One-Hot
approach for most of the variants.
\begin{table}[h!]
\footnotesize
\subfloat{
\begin{tabular}{@{}l|cccccccc@{}}
\toprule
Variant & {\bf Alpha} & {\bf Beta} & {\bf Delta} & {\bf Gamma} & {\bf Epsi.} \\ \midrule
{\bf Alpha} & 5373 & 3 & 7 & 0 & 5 \\
{\bf Beta} & 6 & 110 & 0 & 0 & 0 \\% {\bf B.1.351}
{\bf Delta} & 6 & 0 & 523 & 0 & 0 \\
{\bf Gamma} & 0 & 0 & 0 & 176 & 0 \\
{\bf Epsilon} & 2 & 0 & 0 & 0 & 89 \\
\bottomrule
\end{tabular}
}
\subfloat{
\footnotesize
\begin{tabular}{@{}|cccccccc@{}}
\toprule
{\bf Alpha} & {\bf Beta} & {\bf Delta} & {\bf Gamma} & {\bf Epsi.} \\ \midrule
5371 & 9 & 5 & 0 & 3 \\
13 & 103 & 0 & 0 & 0 \\
8 & 0 & 521 & 0 & 0 \\
0 & 0 & 0 & 176 & 0 \\
7 & 0 & 3 & 0 & 81 \\
\bottomrule
\end{tabular}
}
\caption{Confusion matrices for the SVM classifier using the kernel
  approximation approach (left) and the One-Hot approach (right) for
  the GISAID 1 dataset.}
\label{tbl_svm_heatmap}
\end{table}
\vskip-.2in
\subsection{Importance of Amino Acid Positions}
To evaluate the importance of positions in spike sequences, we find
the subset of positions contributing the most towards predicting a
variant. We use correlation-based feature selection (CFS), which
evaluates a subset of positions by considering the individual
predictive ability of each feature along with the degree of redundancy
between them. Using the step-wise forward greedy search (SFGS), we
select a subset of features, which are highly correlated with the
class (variants) while having low inter-dependency. SFGS may start
with no/all amino acids or from an arbitrary point in the space and it
stops when the addition/deletion of any remaining amino acids results
in a decrease in evaluation. SFGS can also produce a ranked list of
amino acids by traversing the space from one side to the other and
recording the order of amino acids that are selected. The subset of
features selected for each dataset is given in
Table~\ref{tbl_important_subset_of_attributes}.
\begin{table}[!ht]
\centering
\begin{tabular}{ccc}
\hline
Dataset & Total Amino Acids & Selected Amino Acids Positions \\
\hline \hline
GISAID 1 & 10 & 19, 152, 417, 452, 570, 681, 950, 982, 1118, 1176\\
GISAID 2 & 10 & 19, 152, 417, 452, 570, 681, 950, 982, 1118, 1176 \\
GISAID 3 & 10 & 13, 258, 417, 452, 570, 681, 701, 1027, 1118, 1176 \\
\hline
\end{tabular}
\caption{Subset of positions that contributed the most in variant prediction}
\label{tbl_important_subset_of_attributes}
\end{table}
To evaluate individual positions, we measure the Information Gain (IG) of each position with respect to the variant, defined as
$IG(Class, position) = H(Class) - H(Class \mid position)$, where $H =
\sum_{i \in Class} -p_i \log p_i$ is the entropy and $p_i$ is the
probability of class $i$. Figure~\ref{fig_IG_dataset_1_2_3}
depicts how informative a position is to determine variants (higher
value is better). We observe that, as in
Table~\ref{tbl_important_subset_of_attributes}, positions such as 452,
570, and 681 are more informative across all datasets. The US CDC has
also reported mutations at these positions distinguishing one variant
from another, which validates our feature selection algorithm. For
instance, L452R is present in the B.1.427 (Epsilon) and B.1.617
(Kappa, Delta) lineages and sub-lineages.
The combination of the K417N, E484K, and N501Y substitutions is present in B.1.351 (Beta).
Similarly, K417T, E484K,
and N501Y substitutions are present in
P.1(Gamma)\footnote{\url{https://www.cdc.gov/coronavirus/2019-ncov/variants/variant-info.html}}
(they can be seen having higher IG in
Figure~\ref{fig_IG_dataset_1_2_3}).
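A sketch of this information gain computation (using natural logarithms, as in the entropy definition above; the variable names are illustrative):

```python
from collections import Counter
from math import log

def entropy(labels):
    """H = -sum_i p_i log p_i over the empirical class frequencies."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def information_gain(labels, column):
    """IG(Class, position) = H(Class) - H(Class | position).
    `column` holds the amino acid observed at one position for each
    sequence; `labels` holds the corresponding variant labels."""
    n = len(labels)
    groups = {}
    for aa, y in zip(column, labels):
        groups.setdefault(aa, []).append(y)
    h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - h_cond
```

A position whose amino acid perfectly determines the variant attains the maximal IG of $H(Class)$, while a position that is constant across all sequences has IG $0$.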
\begin{figure}[t]
\centering
\centering
\includegraphics{Tikz_Figures/information_gain_gisaid_1.tikz}
\caption{Information gain of each amino acid position with respect
to variants. $x$-axis corresponds to amino acid positions in the
spike sequences.}
\label{fig_IG_dataset_1_2_3}
\end{figure}
\section{Conclusion and Future Directions}
\label{sec_conclusion}
We propose an approach to efficiently classify COVID-19 variants using
spike sequences. Results show that the $k$-mer based sequence
representation outperforms the typical one-hot encoding approach since
it preserves the actual order of the amino acids. We showed the
importance of specific amino acid positions and demonstrated that they
agree with the CDC definitions of variants. In the future, we will work towards
detecting new (unknown) variants based on the whole genome sequences.
Another exciting future work is considering other attributes like
countries, cities, and dates to design richer feature vector
representations for spike sequences.
\bibliographystyle{splncs04}
\section{Introduction}
The primary motivation for this work is the
development of an information-theoretic approach to discrete limit laws,
specifically those corresponding to compound Poisson limits.
Recall that the classical central limit theorem
can be phrased as follows: If $X_1,X_2,\ldots$
are independent and identically distributed,
continuous random variables with zero mean and unit variance,
then the entropy of their normalized partial sums
$S_n=\frac{1}{\sqrt{n}}\sum_{i=1}^nX_i$ increases
with $n$ to
the entropy of the standard normal distribution,
which is maximal among all random variables with
zero mean and unit variance. More precisely, if
$f_n$ denotes the density of $S_n$ and $\phi$
the standard normal density, then, as $n\to\infty$,
\begin{equation}
h(f_n)\uparrow h(\phi)
=\sup\{h(f)\;:\;\mbox{densities $f$ with mean 0 and
variance 1}\},
\label{eq:clt}
\end{equation}
where $h(f)=-\int f\log f$ denotes the differential
entropy, and `log' denotes the natural logarithm.
Precise conditions under which
(\ref{eq:clt}) holds are given by Barron \cite{barron} and Artstein et al. \cite{artstein};
also see \cite{johnson14,tulino,madiman}
and the references therein.
Part of the appeal of this formalization of the
central limit theorem comes from its analogy
to the second law of thermodynamics: The
``state'' (meaning the distribution)
of the random variables $S_n$ evolves
monotonically, until the {\em maximum entropy}
state, the standard normal distribution, is
reached. Moreover, the introduction of
information-theoretic ideas and techniques
in connection with the entropy has motivated
numerous related results (and their proofs),
generalizing and strengthening the central
limit theorem in different directions; see
the references above for details.
Recently, some discrete limit laws have been
examined in a similar vein, but, as the discrete entropy
$H(P)=-\sum_x P(x)\log P(x)$ for probability mass
functions $P$ on a countable set naturally
replaces the differential entropy $h(f)$, many of the relevant analytical
tools become unavailable.
For Poisson convergence theorems,
of which the binomial-to-Poisson is the prototypical
example, an analogous program has been carried out in
\cite{shepp,harremoes,johnson11,johnson21}.
As with the central limit theorem, there are two aspects
to this theory -- the Poisson distribution is first
identified as that which has maximal entropy within
a natural class of probability measures,
and then convergence of appropriate sums of random variables
to the Poisson is established in the sense of relative entropy
(or better still, approximation bounds are obtained
that quantify the rate of this convergence).
One of the main goals of this work is to establish
a starting point for developing an
information-theoretic framework for the
much more general class of {\em compound Poisson}
limit theorems.\footnote{
Recall that the compound Poisson distributions
are the only infinitely divisible distributions
on ${\mathbb Z}_+$, and also
they are (discrete)
stable laws \cite{steutel2}.
In the way of motivation,
we may also recall the remark of Gnedenko
and Korolev \cite[pp. 211-215]{gnedenko2}
that ``there should be mathematical \ldots
probabilistic models of the universal principle
of non-decrease of uncertainty,''
and their proposal that we should
``find conditions under which certain limit
laws appearing in limit theorems of probability
theory possess extremal entropy properties. Immediate
candidates to be subjected to such analysis are,
of course, stable laws.''}
To that end, our first main result,
given in Section~2, provides a maximum
entropy characterization of compound Poisson laws,
generalizing Johnson's characterization
\cite{johnson21} of the Poisson distribution.
It states that if one looks at the class
of all ultra-log-concave distributions
on ${\mathbb Z}_+$ with a fixed mean,
and then compounds each distribution
in this class using a given
probability measure on ${\mathbb N}$,
then the compound Poisson has maximal
entropy in this class, provided it is log-concave.
Having established conditions under which
a compound Poisson measure has maximum entropy,
in a companion work \cite{johnson22} we consider
the problem of establishing compound Poisson
limit theorems as well as finite-$n$ approximation
bounds, in relative entropy and total variation.
The tools developed in the present work,
and in particular the definition and analysis
of a new score function in Section~3,
play a crucial role in these compound Poisson
approximation results.
In a different direction, in Section~6 we demonstrate how the present results provide
new links between classical probabilistic methods and the combinatorial notions
of log-concavity and ultra-log-concavity. Log-concave sequences are well-studied
objects in combinatorics, see, e.g., the surveys by Brenti \cite{Bre89:book} and Stanley \cite{Sta89}.
Additional motivation in recent years has come from the search
for a theory of negative dependence. Specifically, as argued by Pemantle \cite{pemantle},
a theory of negative dependence has long been desired in probability and statistical physics,
in analogy with the theory of positive dependence exemplified
by the Fortuin-Kasteleyn-Ginibre (FKG) inequality \cite{FKG71}
(earlier versions were developed by Harris \cite{Har60} and,
in combinatorics, by Kleitman \cite{Kle66}).
But the development of such a theory is believed to be difficult and delicate.
For instance, Wagner \cite{wagner08} recently formalized a folklore conjecture
in probability theory (called by him the ``Big Conjecture'')
which asserts that, if a probability measure on the Boolean hypercube
satisfies certain negative correlation conditions, then the sequence $\{p_k\}$
of probabilities of picking a set of size $k$ is ultra-log-concave.
This is closely related to Mason's conjecture for independent sets in matroids, which
asserts that the sequence $\{I_k\}$, where $I_k$ is the number of independent
sets of size $k$ in a matroid on a finite ground set, is ultra-log-concave.
Soon after, the ``Big Conjecture'' was falsified by Borcea, Br\"and\'en and Liggett \cite{BBL09}
and by Kahn and Neiman \cite{KN10}, who independently produced counterexamples.
In the other direction, very recently (while we were revising this paper),
Lenz \cite{Len11} proved a weak version of Mason's conjecture.
In Section~6 we describe some simple consequences
of our results in the context of matroid theory,
and we also discuss an application to bounding the
entropy of the size of a random independent
set in a claw-free graph.
Before stating our main results in detail,
we briefly mention how this line of work connects with
the growing body of work exploring applications of maximum entropy
characterizations to discrete mathematics.
The simplest maximum entropy result states that,
among all probability distributions on a finite
set $S$, the uniform has maximal entropy,
$\log |S|$. While mathematically trivial,
this result, combined with appropriate structural
assumptions and various entropy inequalities,
has been employed as a powerful tool
and has seen varied applications in combinatorics.
Examples include Radhakrishnan's
entropy-based proof \cite{radha97} of Bregman's
theorem on the permanent of a 0-1 matrix,
Kahn's proof \cite{Kah01a} of the result
of Kleitman and Markowski \cite{KM75}
on Dedekind's problem concerning the number of antichains
in the Boolean lattice,
the study by Brightwell and Tetali \cite{BT03} of the number of
linear extensions of the Boolean lattice (partially confirming a
conjecture of Sha and Kleitman \cite{SK87}),
and the resolution of several conjectures of Imre Ruzsa
in additive combinatorics by
Madiman, Marcus and Tetali \cite{MMT08}.
However, so far, a limitation of this line of work has been
that it can only handle
problems on finite sets.
As modern combinatorics explores
more and more properties of countable structures --
such as infinite graphs or posets --
it is natural to develop, in parallel,
countable analogues of useful tools
such as maximum entropy characterizations.
It is particularly natural to develop these in connection with
the Poisson and compound Poisson laws, which
arise naturally in probabilistic combinatorics; see, e.g.,
Penrose's work \cite{Pen03:book} on geometric random graphs.
Section~2 contains
our two main results: The
maximum entropy characterization of log-concave
compound Poisson distributions,
and an analogous result for compound binomials.
Sections~3 and~4, respectively, provide their proofs.
Section~5 discusses conditions for log-concavity,
and gives some additional results.
Section~6 discusses applications
to classical combinatorics,
including graph theory and matroid theory.
Section~7 contains some concluding remarks,
a brief description of potential generalization and
extensions, and a discussion of recent, subsequent
work by Y.\ Yu \cite{Yu09:cp}, which was motivated by
preliminary versions of some of the present results.
\section{Maximum Entropy Results}
First we review the maximum entropy property of the Poisson distribution.
\begin{definition}
\label{def:bp}
For any parameter vector $\vc{p} = (p_1,p_2, \ldots, p_n)$
with each $p_i\in[0,1]$,
the sum of independent Bernoulli random variables $B_i\sim\bern{p_i}$,
$$S_n=\sum_{i=1}^n B_i,$$
is called a {\em Bernoulli sum}, and its
probability mass function is denoted by
$\bp{p}(x):=\Pr\{S_n=x\}$,
for $x=0,1,\ldots$. Further, for each $\lambda>0$, we define
the following sets of parameter vectors:
$$
{\mathcal{ P}}_n(\lambda) \; = \; \big\{ \vc{p}\in[0,1]^n
\;:\; p_1+p_2+\cdots+p_n =\lambda
\big\}
\;\;\;\;
\mbox{and}
\;\;\;\;
{\mathcal{ P}}_{\infty}(\lambda) = \bigcup_{n\geq 1} {\mathcal{ P}}_n(\lambda).
$$
\end{definition}
Shepp and Olkin \cite{shepp} showed
that, for fixed $n\geq1$,
the Bernoulli sum $\bp{p}$ which has maximal
entropy among all Bernoulli sums with
mean $\lambda$,
is Bin$(n,\lambda/n)$,
the binomial with parameters $n$ and $\lambda/n$,
\begin{equation}
H(\mbox{Bin}(n,\lambda/n))
=
\max\Big\{ H(\bp{p})\;:\; {\vc{p}\in{\mathcal{ P}}_n(\lambda)}\Big\},
\label{eq:maxEntB}
\end{equation}
where $H(P)=-\sum_x P(x)\log P(x)$ denotes the discrete
entropy function. Noting that the binomial
$\mbox{Bin}(n,\lambda/n)$ converges to the Poisson
distribution $\mbox{Po}(\lambda)$ as $n\to\infty$,
and that the classes of Bernoulli sums in (\ref{eq:maxEntB})
are nested,
$\{\bp{p}:\vc{p}\in{\mathcal{ P}}_n(\lambda)\}\subset
\{\bp{p}:\vc{p}\in{\mathcal{ P}}_{n+1}(\lambda)\},$
Harremo\"es \cite{harremoes}
noticed that a simple limiting
argument gives the following
maximum entropy property
for the Poisson distribution:
\begin{equation}
H(\mbox{Po}(\lambda))
=
\sup\Big\{
H(\bp{p})\;:\;
\vc{p}\in{\mathcal{ P}}_\infty(\lambda)\Big\}.
\label{eq:maxEntP}
\end{equation}
A key property in generalizing and understanding
this maximum entropy property further
is that of
ultra-log-concavity;
cf.\ \cite{pemantle}. The distribution $P$ of a random variable
$X$ is {\em ultra-log-concave} if $P(x)/\Pi_{\lambda}(x)$ is
log-concave, that is, if,
\begin{equation} \label{eq:ulcdef}
x P(x)^2 \geq (x+1) P(x+1) P(x-1),\;\;\;\; \mbox{for all $x \geq 1$.}
\end{equation}
Note that the Poisson distribution and all Bernoulli sums
are ultra-log-concave. A non-trivial property of the class of
ultra-log-concave distributions, conjectured by Pemantle \cite{pemantle}
and proved by Liggett \cite{Lig97} (cf. Gurvits \cite{Gur09} and Kahn and Neiman \cite{KN11}),
is that it is closed under convolution.
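Ultra-log-concavity is straightforward to check numerically. The short sketch below is our illustration (not part of the paper): it verifies (\ref{eq:ulcdef}) for Poisson and binomial mass functions, and shows that a geometric distribution, while log-concave, fails it.

```python
from math import comb, exp, factorial

def is_ulc(pmf, tol=1e-12):
    """Check x*p(x)^2 >= (x+1)*p(x+1)*p(x-1) for all x >= 1."""
    return all(x * pmf[x] ** 2 >= (x + 1) * pmf[x + 1] * pmf[x - 1] - tol
               for x in range(1, len(pmf) - 1))

lam, n, q = 2.0, 10, 0.3
poisson = [exp(-lam) * lam ** x / factorial(x) for x in range(40)]
binomial = [comb(n, x) * q ** x * (1 - q) ** (n - x) for x in range(n + 1)]
geometric = [0.5 ** (x + 1) for x in range(30)]  # log-concave, but not ULC

ulc_po, ulc_bin, ulc_geo = map(is_ulc, (poisson, binomial, geometric))
```

For the Poisson, (\ref{eq:ulcdef}) holds with equality at every $x$, which is why a small floating-point tolerance is used.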
Johnson \cite{johnson21} recently
proved the following maximum entropy
property for the Poisson distribution,
generalizing (\ref{eq:maxEntP}):
\begin{equation}
H(\mbox{Po}(\lambda))
=
\max\Big\{
H(P)\;:\;
\mbox{ultra-log-concave $P$ with mean $\lambda$}
\Big\}.
\label{eq:maxEntPJ}
\end{equation}
As discussed in the Introduction, we wish to generalize
the maximum entropy
properties (\ref{eq:maxEntB})
and (\ref{eq:maxEntP}) to the case of
{\em compound Poisson} distributions
on ${\mathbb Z}_+$.
We begin with some definitions:
\begin{definition} Let $P$ be an arbitrary distribution
on ${\mathbb {Z}}_+=\{0,1,\ldots\}$, and $Q$ a distribution on
${\mathbb {N}} = \{1, 2, \ldots \}$.
The {\em $Q$-compound distribution $C_Q P$} is the
distribution of the random sum,
\begin{equation} \label{eq:randsum}
\sum_{j=1}^{Y} X_j,
\end{equation}
where $Y$ has distribution $P$ and the random variables
$\{X_j\}$ are independent and identically distributed
(i.i.d.) with common distribution $Q$ and
independent of $Y$.
The distribution $Q$ is called a
{\em compounding distribution},
and the map $P\mapsto C_Q P$ is
the {\em $Q$-compounding operation}.
The $Q$-compound distribution $C_QP$
can be explicitly written as the mixture,
\begin{equation}
\label{eq:compdis}
C_Q P(x) = \sum_{y=0}^{\infty} P(y) \qst{y}(x),
\;\;\;\;x\geq 0,
\end{equation}
where $Q^{*j}(x)$ is the $j$th convolution power of $Q$ and
$Q^{*0}$ is the point mass at $x=0$.
\end{definition}
Above and throughout the paper,
the empty sum $\sum_{j=1}^0(\cdots)$ is taken to be zero;
all random variables considered are supported
on ${\mathbb {Z}}_+=\{0,1,\ldots\}$; and all compounding
distributions $Q$ are supported on ${\mathbb {N}}=\{1,2,\ldots\}$.
\begin{example} Let $Q$ be an arbitrary distribution on ${\mathbb {N}}$.
\begin{enumerate}
\item
For any $0 \leq p \leq 1$, the {\em compound Bernoulli
distribution $\cbern{p}{Q}$} is the distribution
of the product $BX$, where $B\sim\mbox{Bern}(p)$
and $X\sim Q$ are independent.
It has probability mass function
$C_Q P$, where $P$ is the $\bern{p}$ mass function,
so that, $C_Q P(0)=1-p$ and $C_Q P(x)=pQ(x)$ for $x\geq 1$.
\item
A {\em compound Bernoulli sum} is a sum of independent
compound Bernoulli random variables, all with respect
to the same compounding distribution $Q$: Let
$X_1,X_2,\ldots,X_n$ be i.i.d.\ with common
distribution $Q$ and $B_1,B_2,\ldots,B_n$
be independent Bern($p_i$). We call,
$$ \sum_{i=1}^n B_iX_i \;\mbox{$ \;\stackrel{\cal D}{=}\; $}\; \sum_{j=1}^{\sum_{i=1}^n B_i} X_j,$$
a {\em compound Bernoulli sum}; in view of~{\em (\ref{eq:randsum})},
its distribution is $\cp{p}$, where
$\vc{p} = (p_1,p_2, \ldots, p_n)$.
\item
In the special case of a compound Bernoulli sum with
all its parameters $p_i=p$ for a fixed $p\in[0,1]$,
we say that it has a {\em compound binomial distribution},
denoted by $\mbox{\em CBin}(n,p,Q)$.
\item
Let $\Pi_\lambda(x)=e^{-\lambda}\lambda^x/x!$, $x\geq 0$,
denote the {\em Po}$(\lambda)$ mass function. Then,
for any $\lambda>0$,
the {\em compound Poisson distribution $\mbox{CPo}(\lambda,Q)$}
is the distribution with mass function $C_Q \Pi_\lambda$:
\begin{equation} \label{eq:cppmf}
C_Q \Pi_{\lambda}(x) =
\sum_{j=0}^{\infty} \Pi_\lambda(j)
Q^{*j}(x) =
\sum_{j=0}^{\infty} \frac{ e^{-\lambda} \lambda^j}{j!}
Q^{*j}(x),
\;\;\;\;x\geq 0.
\end{equation}
\end{enumerate}
\end{example}
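The mixture formula (\ref{eq:compdis}) can be evaluated directly with truncated convolutions. The following sketch is ours, purely illustrative; it also confirms the compound Bernoulli mass function from item 1 of the Example:

```python
def convolve(a, b, size):
    """Truncated convolution of two pmfs given as lists indexed from 0."""
    c = [0.0] * size
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < size:
                c[i + j] += ai * bj
    return c

def compound(P, Q, size=30):
    """C_Q P(x) = sum_y P(y) Q^{*y}(x), truncated to {0, ..., size-1}."""
    Qy = [1.0] + [0.0] * (size - 1)   # Q^{*0}: point mass at 0
    out = [0.0] * size
    for Py in P:
        out = [o + Py * q for o, q in zip(out, Qy)]
        Qy = convolve(Qy, Q, size)
    return out

Q = [0.0, 0.5, 0.25, 0.25]            # a compounding distribution on {1,2,3}
p = 0.3
cbern = compound([1 - p, p], Q)       # compound Bernoulli CBern(p, Q)
```

As expected, the result has mass $1-p$ at $0$ and mass $pQ(x)$ at each $x\geq 1$.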
In view of the Shepp-Olkin maximum entropy property (\ref{eq:maxEntB})
for the binomial distribution, a first natural conjecture
might be that the compound binomial has maximum entropy
among all compound Bernoulli sums $\cp{p}$
with a fixed mean; that is,
\begin{equation}
H(\mbox{CBin}(n,\lambda/n,Q))
=
\max\Big\{ H(C_Q\bp{p})\;:\; {\vc{p}\in{\mathcal{ P}}_n(\lambda)}\Big\}.
\label{eq:maxEntBC}
\end{equation}
But, perhaps somewhat surprisingly, as Zhiyi Chi \cite{chi}
has noted, (\ref{eq:maxEntBC}) fails in general. For example,
taking $Q$ to be the uniform distribution on $\{1,2\}$,
$\vc{p}=(0.00125, 0.00875)$
and $\lambda =p_1+p_2=0.01$,
direct computation shows that,
\begin{equation}
H(\mbox{CBin}(2,\lambda/2,Q))
<0.090798
<0.090804
< H(C_Q\bp{p}).
\label{eq:chi}
\end{equation}
As the Shepp-Olkin result (\ref{eq:maxEntB})
was only seen as an intermediate step in proving
the maximum entropy property of the Poisson
distribution (\ref{eq:maxEntP}), we may still
hope that the corresponding result remains
true for compound Poisson measures,
namely that,
\begin{equation}
H(\mbox{CPo}(\lambda,Q))
=
\sup\Big\{
H(C_Q\bp{p})\;:\;
\vc{p}\in{\mathcal{ P}}_\infty(\lambda)\Big\}.
\label{eq:maxEntPC}
\end{equation}
Again, (\ref{eq:maxEntPC}) fails in general.
For example, taking the same
$Q,\lambda$ and $\vc{p}$ as above,
yields,
$$
H(\mbox{CPo}(\lambda,Q))
<0.090765
<0.090804
< H(C_Q\bp{p}).$$
The main purpose of the present work
is to show that, despite these negative
results, it is possible to provide
natural, broad sufficient conditions,
under which the compound binomial and
compound Poisson distributions can be
shown to have maximal entropy in an
appropriate class of measures.
Our first result (a more general version of which
is proved in Section~3) states that,
as long as $Q$ and the compound Poisson measure
$\mbox{CPo}(\lambda,Q)$ are log-concave,
the maximum entropy statement analogous to
(\ref{eq:maxEntPJ}) remains valid
in the compound Poisson case:
\begin{theorem} \label{thm:mainpoi}
If the distribution $Q$ on ${\mathbb {N}}$ and the compound Poisson distribution
$\mbox{\em CPo}(\lambda,Q)$ are both log-concave,
then,
$$
H(\mbox{\em CPo}(\lambda,Q))
=\max\Big\{
H(C_Q P) \;:\; \mbox{ultra-log-concave $P$ with mean $\lambda$}
\Big\}.$$
\end{theorem}
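As a quick numerical sanity check of Theorem~\ref{thm:mainpoi} (ours, illustrative only, not a proof), take $Q$ geometric with parameter $1/2$, which is log-concave, $\lambda=3$, for which $\mbox{CPo}(\lambda,Q)$ is log-concave by the criterion discussed in Section~5 (here $2Q(2)/Q(1)^2=2\leq\lambda$), and the ultra-log-concave $P=\mbox{Bin}(4,3/4)$ with mean $3$. The compound Poisson entropy indeed dominates:

```python
from math import comb, exp, log

def cpo_pmf(lam, Q, size):
    """CPo(lam, Q) mass function via the standard compound Poisson
    (Panjer) recursion: p(0)=e^{-lam}, p(x)=(lam/x)*sum_j j*Q(j)*p(x-j)."""
    p = [exp(-lam)]
    for x in range(1, size):
        s = sum(j * Q[j] * p[x - j]
                for j in range(1, min(x, len(Q) - 1) + 1))
        p.append(lam * s / x)
    return p

def compound(P, Q, size):
    """C_Q P via the mixture over convolution powers of Q."""
    Qy = [1.0] + [0.0] * (size - 1)
    out = [0.0] * size
    for Py in P:
        out = [o + Py * q for o, q in zip(out, Qy)]
        new = [0.0] * size
        for i, a in enumerate(Qy):
            for j, b in enumerate(Q):
                if i + j < size:
                    new[i + j] += a * b
        Qy = new
    return out

def entropy(P):
    return -sum(p * log(p) for p in P if p > 0)

size = 200
Q = [0.0] + [0.5 ** k for k in range(1, 61)]   # geometric(1/2), truncated
lam = 3.0
P = [comb(4, y) * 0.75 ** y * 0.25 ** (4 - y)  # Bin(4, 3/4): ULC, mean 3
     for y in range(5)]

H_cpo = entropy(cpo_pmf(lam, Q, size))         # H(CPo(3, Q))
H_cqp = entropy(compound(P, Q, size))          # H(C_Q P)
```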
The notion of log-concavity is central in the development
of the ideas in this work. Recall that
the distribution $P$ of a random variable $X$ on ${\mathbb {Z}}_+$
is {\em log-concave} if its support is a (possibly infinite)
interval of successive integers in ${\mathbb {Z}}_+$, and,
\begin{equation} \label{eq:lcdef}
P(x)^2 \geq P(x+1) P(x-1),\;\;\;\; \mbox{for all $x \geq 1$.}
\end{equation}
We also recall that most of the
commonly used distributions appearing
in applications (e.g.,
the Poisson, binomial, geometric, negative binomial, hypergeometric,
logarithmic series, and P\'olya-Eggenberger distributions)
are log-concave.
Note that ultra-log-concavity of $P$,
defined as in equation (\ref{eq:ulcdef}),
is more restrictive than log-concavity,
and it is equivalent to the requirement that
the ratio $P/\Pi_{\lambda}$ is a log-concave
sequence for some (hence all) $\lambda>0$.
Our second result
states that (\ref{eq:maxEntBC})
{\em does} hold, under certain
conditions on $Q$ and CBin($n,\lambda/n,Q$):
\begin{theorem} \label{thm:mainber}
If the distribution $Q$ on ${\mathbb {N}}$
and the compound binomial distribution
$\mbox{\em CBin}(n,\lambda/n,Q)$
are both log-concave,
then,
$$H(\mbox{\em CBin}(n,\lambda/n,Q))
=\max\Big\{ H(C_Q\bp{p})\;:\; {\vc{p}\in{\mathcal{ P}}_n(\lambda)}\Big\},$$
as long as the tail of $Q$ satisfies
either one of the following properties:
$(a)$~$Q$ has finite support; or
$(b)$~$Q$ has tails heavy enough so that,
for some $\rho\in(0,1)$, $\beta>0$ and $N_0\geq 1$,
we have, $Q(x)\geq \rho^{x^\beta}$,
for all $x\geq N_0$.
\end{theorem}
The proof of Theorem~\ref{thm:mainber} is given in
Section~\ref{sec:compbin}.
As can be seen there,
conditions $(a)$ and $(b)$ are introduced
purely for technical reasons, and can probably
be significantly relaxed; see Section~7 for
a further discussion.
It remains an open question to give {\em necessary
and sufficient} conditions on $\lambda$ and $Q$ for the compound
Poisson and compound binomial distributions to have maximal
entropy within an appropriately defined class. As a first step,
one may ask for natural conditions that imply that a compound binomial
or compound Poisson distribution is log-concave. We discuss several
such conditions in Section~5.
In particular, the discussion in Section~5 implies the following
explicit maximum entropy statements.
\begin{example}
\begin{enumerate}
\item
Let $Q$ be an arbitrary log-concave distribution
on ${\mathbb N}$. Then Lemma~\ref{lem:lc}
combined with Theorem~\ref{thm:mainber} implies that
the maximum entropy property of the
compound binomial distribution in
equation~{\em (\ref{eq:maxEntBC})} holds,
for all $\lambda$ large enough. That is,
the compound binomial
{\em CBin($n,\lambda/n,Q$)} has maximal entropy
among all compound Bernoulli sums $C_Q\bp{p}$
with $p_1+p_2+\cdots+p_n=\lambda$, as long
as $\lambda \geq \frac{nQ(2)}{Q(1)^2+Q(2)}$.
\item
Let $Q$ be an arbitrary log-concave distribution
on ${\mathbb N}$. Then Theorem~\ref{thm:lcconj}
combined with Theorem~\ref{thm:mainpoi} implies that
the maximum entropy property of the compound Poisson
distribution $\mbox{CPo}(\lambda,Q)$ holds whenever
$\lambda\geq\frac{2Q(2)}{Q(1)^2}$.
\end{enumerate}
\end{example}
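The threshold in item 2 can be explored numerically. In the sketch below (ours, purely illustrative; the mass function is computed with the standard compound Poisson recursion), $Q$ is geometric with parameter $1/2$, for which $2Q(2)/Q(1)^2=2$:

```python
from math import exp

def cpo_pmf(lam, Q, size=80):
    """CPo(lam, Q) probabilities:
       p(0) = e^{-lam},  p(x) = (lam/x) * sum_j j*Q(j)*p(x-j)."""
    p = [exp(-lam)]
    for x in range(1, size):
        s = sum(j * Q[j] * p[x - j]
                for j in range(1, min(x, len(Q) - 1) + 1))
        p.append(lam * s / x)
    return p

def is_log_concave(p, tol=1e-15):
    """Check (eq:lcdef) on the numerical support."""
    supp = [x for x, px in enumerate(p) if px > tol]
    if supp != list(range(supp[0], supp[-1] + 1)):
        return False                    # support must be an interval
    return all(p[x] ** 2 >= p[x + 1] * p[x - 1] * (1 - 1e-9)
               for x in range(1, len(p) - 1) if p[x] > tol)

Q = [0.0] + [0.5 ** k for k in range(1, 61)]   # geometric(1/2) on N
lc_above = is_log_concave(cpo_pmf(3.0, Q))     # lambda = 3 >= 2: log-concave
lc_below = is_log_concave(cpo_pmf(0.1, Q))     # lambda = 0.1 < 2: fails
```

Below the threshold the failure is already visible at $x=1$, exactly as in the small-$\lambda$ counterexamples of Section~2.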
As mentioned in the introduction, the above results
can be used to gain a better understanding
of ultra-log-concave sequences in combinatorics.
Specifically, as discussed in more detail in
Section~6, they can be used to estimate how
``spread out'' these sequences are, in terms of
their entropy.
\section{Maximum Entropy Property of the Compound Poisson Distribution}
\label{sec:comppoi}
Here we show that, if $Q$ and the
compound Poisson distribution
$\mbox{CPo}(\lambda,Q)=C_Q\Pi_\lambda$
are both log-concave, then
$\mbox{CPo}(\lambda,Q)$
has maximum entropy among all
distributions of the form $C_Q P$, when $P$ has mean
$\lambda$ and is ultra-log-concave.
Our approach is an extension of the
`semigroup' arguments of \cite{johnson21}.
We begin by recording some basic properties
of log-concave and ultra-log-concave distributions:
\begin{itemize}
\item[$(i)$]
If $P$ is ultra-log-concave, then
from the definitions it is immediate
that $P$ is log-concave.
\item[$(ii)$]
If $Q$ is log-concave, then it has finite moments
of all orders; see \cite[Theorem~7]{keilson}.
\item[$(iii)$]
If $X$ is a random variable
with ultra-log-concave distribution $P$, then (by~$(i)$
and~$(ii)$) it has finite moments of all orders.
Moreover, considering the covariance between the decreasing
function $P(x+1) (x+1)/P(x)$ and the increasing function
$x(x-1) \cdots (x-n+2)$ shows that the falling
factorial moments
of $P$ satisfy,
$$E[(X)_n]:=E[X(X-1) \cdots (X-n+1)] \leq (E(X))^n;$$
see \cite{johnson21} and \cite{johnsonc2}
for details.
\item[$(iv)$]
The Poisson distribution and all Bernoulli
sums are ultra-log-concave.
\end{itemize}
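Property $(iii)$ is easy to verify numerically for Bernoulli sums, which are ultra-log-concave by $(iv)$. The following check is ours, for illustration only:

```python
def bernoulli_sum_pmf(ps):
    """pmf of a sum of independent Bern(p_i) variables."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for x, px in enumerate(pmf):
            new[x] += px * (1 - p)       # B_i = 0
            new[x + 1] += px * p         # B_i = 1
        pmf = new
    return pmf

def falling_factorial_moment(pmf, n):
    """E[(X)_n] = E[X(X-1)...(X-n+1)]."""
    total = 0.0
    for x, px in enumerate(pmf):
        f = 1.0
        for k in range(n):
            f *= x - k
        total += px * f
    return total

ps = [0.2, 0.5, 0.7, 0.35]
pmf = bernoulli_sum_pmf(ps)
lam = sum(ps)                                   # E(X) = 1.75
bounds_hold = all(
    falling_factorial_moment(pmf, n) <= lam ** n + 1e-12
    for n in range(1, 5))
```

Note that the bound holds with equality for $n=1$, since $E[(X)_1]=E(X)=\lambda$.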
Recall the following definition
from \cite{johnson21}:
\begin{definition}
\label{def:stmap}
Given $\alpha\in[0,1]$ and a random variable $X\sim P$
on ${\mathbb {Z}}_+$ with mean $\lambda\geq 0$,
let $U_\alpha P$ denote the
distribution of the random variable,
$$\sum_{i=1}^X B_i+Z_{\lambda(1-\alpha)},$$
where the $B_i$ are i.i.d.\ $\bern{\alpha}$,
$Z_{\lambda(1-\alpha)}$ has distribution $\mbox{\em Po}(\lambda(1-\alpha))$,
and all random variables are independent
of each other and of $X$.
\end{definition}
Note that, if $X\sim P$ has mean $\lambda$,
then $U_\alpha P$ has the same mean. Also,
recall the following useful relation that
was established in
Proposition~3.6 of \cite{johnson21}: For all $y\geq 0$,
\begin{equation} \label{eq:newheat}
\frac{\partial }{\partial \alpha} U_{\alpha}P(y) = \frac{1}{\alpha} \Big(
\lambda
\big(U_{\alpha}P(y) - U_{\alpha}P(y-1)\big)
- \big((y+1) U_{\alpha}P(y+1) - y U_{\alpha}P(y)\big) \Big).
\end{equation}
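For finitely supported $P$ the transformation of Definition~\ref{def:stmap} can be computed exactly. The sketch below is our illustration (not part of the paper); it confirms that $U_\alpha P$ preserves the mean and interpolates between $\mbox{Po}(\lambda)$ at $\alpha=0$ and $P$ itself at $\alpha=1$:

```python
from math import comb, exp, factorial

def U_alpha(P, alpha, size=60):
    """U_alpha P: thin X ~ P by alpha, then add an independent
    Po(lam*(1-alpha)) variable, lam being the mean of P."""
    lam = sum(x * px for x, px in enumerate(P))
    mu = lam * (1 - alpha)
    out = [0.0] * size
    for x, px in enumerate(P):
        for k in range(x + 1):          # binomial thinning of x
            b = comb(x, k) * alpha ** k * (1 - alpha) ** (x - k)
            for z in range(size - k):   # independent Poisson addition
                out[k + z] += px * b * exp(-mu) * mu ** z / factorial(z)
    return out

P = [0.1, 0.3, 0.4, 0.15, 0.05]         # an arbitrary pmf on {0,...,4}
lam = sum(x * px for x, px in enumerate(P))      # = 1.75

U_half = U_alpha(P, 0.5)
mean_half = sum(x * px for x, px in enumerate(U_half))
U_one = U_alpha(P, 1.0)                 # should recover P
U_zero = U_alpha(P, 0.0)                # should be Po(lam)
poisson = [exp(-lam) * lam ** x / factorial(x) for x in range(60)]
```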
Next we define another transformation of
probability distributions $P$ on ${\mathbb Z}_+$:
\begin{definition} \label{def:clustermap}
Given $\alpha\in[0,1]$, a distribution $P$ on ${\mathbb {Z}}_+$
and a compounding distribution $Q$ on ${\mathbb {N}}$,
let $U^Q_{\alpha}P$ denote the
distribution $C_Q U_\alpha P$:
$$U^Q_{\alpha} P(x):=C_QU_\alpha P(x)
= \sum_{y=0}^{\infty} U_\alpha P(y) Q^{*y}(x),
\;\;\;\;x\geq 0.$$
\end{definition}
Work of Chafa\"{\i} \cite{chafai} suggests that the semigroup
of Definition \ref{def:stmap}
may be viewed as the action of the $M/M/\infty$ queue. Similarly
the semigroup of Definition \ref{def:clustermap} corresponds to the marginal
distributions of a continuous-time hidden Markov process, where the underlying
Markov process is the $M/M/\infty$ queue and the output at each time is
obtained by a compounding operation.
\begin{definition} \label{def:sizebias}
For a distribution $P$ on ${\mathbb {Z}}_+$ with mean $\nu$,
its {\em size-biased} distribution $P^{\#}$ on ${\mathbb {Z}}_+$ is defined by
$$
P^{\#}(y) = \frac{(y+1)P(y+1)}{\nu} .
$$
\end{definition}
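As a quick illustration of size-biasing (ours, not part of the paper): the size-biased version of $\mbox{Bin}(n,q)$ is $\mbox{Bin}(n-1,q)$, since $(y+1)\binom{n}{y+1}=n\binom{n-1}{y}$, so in this example size-biasing visibly preserves ultra-log-concavity, a fact used in the proof of Lemma~\ref{lem:moments} below.

```python
from math import comb

def size_biased(P):
    """P#(y) = (y+1) P(y+1) / mean(P)."""
    nu = sum(x * px for x, px in enumerate(P))
    return [(y + 1) * P[y + 1] / nu for y in range(len(P) - 1)]

n, q = 6, 0.4
P = [comb(n, x) * q ** x * (1 - q) ** (n - x) for x in range(n + 1)]
Psharp = size_biased(P)                       # should equal Bin(n-1, q)
Pref = [comb(n - 1, x) * q ** x * (1 - q) ** (n - 1 - x) for x in range(n)]
```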
An important observation that will be at the
heart of the proof of Theorem~\ref{thm:mainpoi}
below is that, for $\alpha=0$, $U_0^QP$
is simply the compound Poisson
measure $\mbox{CPo}(\lambda,Q)$, while for $\alpha=1$,
$U_1^QP=C_QP$. The following lemma
gives a rough bound on the third
moment of $U_\alpha^QP$:
\begin{lemma} \label{lem:moments}
Suppose $P$ is an ultra-log-concave
distribution with mean $\lambda>0$
on ${\mathbb Z}_+$, and
let $Q$ be a log-concave compounding
distribution on ${\mathbb N}$.
For each $\alpha\in[0,1]$,
let $W_\alpha,V_\alpha$ be random variables
with distributions $U_\alpha^QP=C_Q U_\alpha P$
and $C_Q (U_\alpha P)^{\#}$, respectively.
Then the third moments
$E(W_\alpha^3)$
and $E(V_\alpha^3)$ are both bounded above by,
$$\lambda q_3 +3\lambda^2q_1q_2+\lambda^3q_1^3,$$
where $q_1,q_2,q_3$ denote the first, second and third
moments of $Q$, respectively.
\end{lemma}
\begin{proof}
Recall that, as stated in properties $(ii)$ and~$(iii)$
in the beginning of Section~\ref{sec:comppoi},
$Q$ has finite moments of all orders, and
that the $n$th falling factorial moment
of any ultra-log-concave random variable $Y$
with distribution $R$ on ${\mathbb Z}_+$ is
bounded above by $(E(Y))^n$. Now for an arbitrary
ultra-log-concave distribution $R$, define
random variables $Y\sim R$ and $Z\sim C_Q R$.
If $r_1,r_2,r_3$ denote the first three moments
of $Y\sim R$, then,
\begin{eqnarray}
E(Z^3)
&=&
q_3r_1 + 3q_1q_2 E[(Y)_2] + q_1^3 E[(Y)_3]
\nonumber\\
&\leq&
q_3r_1 + 3q_1q_2r_1^2 + q_1^3r_1^3.
\label{eq:3rdmomid}
\end{eqnarray}
Since the map $U_\alpha$ preserves ultra-log-concavity
\cite{johnson21}, if $P$ is ultra-log-concave then
so is $R = U_{\alpha} P$, so that (\ref{eq:3rdmomid})
gives the required bound for the third moment of
$W_\alpha$, upon noting that the mean of
the distribution $U_\alpha P$ is equal to $\lambda$.
Similarly, size-biasing preserves ultra-log-concavity;
that is, if $R$ is ultra-log-concave, then so is $R^{\#}$, since
$R^{\#}(x+1)(x+1)/R^{\#}(x) = (R(x+2) (x+2)(x+1))/(R(x+1) (x+1))
= R(x+2) (x+2)/R(x+1)$ is also decreasing.
Hence, $R'=(U_\alpha P)^{\#}$ is ultra-log-concave,
and (\ref{eq:3rdmomid}) applies in this case as well.
In particular, noting that the mean of
$Y'\sim R'= (U_\alpha P)^{\#}=R^{\#}$
can be bounded in terms of the mean of $Y\sim R$ as,
$$E(Y')=\sum_x x\frac{(x+1)U_\alpha P(x+1)}{\lambda}
=\frac{E[(Y)_2]}{E(Y)}\leq\frac{\lambda^2}{\lambda}=\lambda,$$
the bound (\ref{eq:3rdmomid}) yields
the required bound for the third
moment of $V_\alpha$.
\end{proof}
In \cite{johnson21}, the characterization
of the Poisson as a maximum entropy
distribution was proved through
the decrease of its score function. In
an analogous way,
we define the score function
of a $Q$-compound random variable as follows,
cf.\ \cite{johnson22},
\begin{definition} \label{def:score}
Given a distribution $P$ on ${\mathbb {Z}}_+$ with mean $\lambda$,
the corresponding $Q$-compound distribution
$C_Q P$ has score function defined by,
$$
\sco{C_Q P}(x)=\frac{C_Q(P^{\#})(x)}{C_QP(x)}-1,\;\; x\geq 0 .
$$
\end{definition}
More explicitly, one can write
\begin{equation} \label{eq:score}
\sco{C_Q P}(x) = \frac{ \sum_{y=0}^{\infty} (y+1) P(y+1) \qst{y}(x)}{\lambda
\sum_{y=0}^{\infty} P(y) \qst{y}(x) } - 1 .
\end{equation}
Notice that the mean
of $\sco{C_Q P}$ with respect to $C_Q P$ is zero,
and that if $P$ is the $\mbox{Po}(\lambda)$ distribution,
then $\sco{C_Q P}(x) \equiv 0$. Further,
when $Q$ is the point mass at 1
this score function
reduces to the ``scaled score function'' introduced in \cite{johnson11}.
But, unlike the scaled score function and
an alternative score function given in
\cite{johnson22}, this score function is not only a function
of the compound distribution $C_Q P$, but also explicitly depends
on $P$. A projection identity and other properties of
$\sco{C_Q P}$ are proved in \cite{johnson22}.
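The two observations just made (that the score has zero mean under $C_Q P$, and vanishes identically when $P$ is Poisson) can be confirmed numerically. The sketch below is ours, for illustration only:

```python
from math import comb, exp, factorial

def compound(P, Q, size):
    """C_Q P(x) = sum_y P(y) Q^{*y}(x), truncated."""
    Qy = [1.0] + [0.0] * (size - 1)
    out = [0.0] * size
    for Py in P:
        out = [o + Py * q for o, q in zip(out, Qy)]
        new = [0.0] * size
        for i, a in enumerate(Qy):
            for j, b in enumerate(Q):
                if i + j < size:
                    new[i + j] += a * b
        Qy = new
    return out

def score(P, Q, size):
    """r(x) = C_Q(P#)(x) / C_Q P(x) - 1 on the support of C_Q P."""
    lam = sum(x * px for x, px in enumerate(P))
    Psharp = [(y + 1) * P[y + 1] / lam for y in range(len(P) - 1)]
    num, den = compound(Psharp, Q, size), compound(P, Q, size)
    return [n / d - 1 if d > 0 else 0.0 for n, d in zip(num, den)], den

Q = [0.0, 0.6, 0.4]                     # a log-concave Q on {1, 2}
po = [exp(-2.0) * 2.0 ** x / factorial(x) for x in range(40)]
r_po, _ = score(po, Q, 20)              # P = Po(2): score vanishes

P = [comb(3, x) * 0.5 ** 3 for x in range(4)]    # Bin(3, 1/2)
r_p, cqp = score(P, Q, 10)
mean_score = sum(c * r for c, r in zip(cqp, r_p))
```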
Next we show that, if $Q$ is log-concave and $P$
is ultra-log-concave, then the score function
$\sco{C_Q P}(x)$ is decreasing in $x$.
\begin{lemma} \label{lem:decsc}
If $P$ is ultra-log-concave and the compounding
distribution $Q$ is log-concave,
then the score function $\sco{C_Q P}(x)$ of
$C_Q P$ is decreasing in $x$.
\end{lemma}
\begin{proof} First we recall Theorem~2.1 of Keilson
and Sumita \cite{keilson2},
which implies that,
if $Q$ is log-concave, then for any $m \geq n$, and for any $x$:
\begin{equation} \label{eq:tech}
\qst{m}(x+1) \qst{n}(x) - \qst{m}(x) \qst{n}(x+1) \geq 0. \end{equation}
[This can be proved by
considering $\qst{m}$ as the convolution of $\qst{n}$ and $\qst{(m-n)}$,
and writing
\begin{eqnarray*}
\lefteqn{ \qst{m}(x+1) \qst{n}(x) - \qst{m}(x) \qst{n}(x+1) } \\
& = & \sum_l \qst{(m-n)}(l) \bigg( \qst{n}(x+1-l) \qst{n}(x) -
\qst{n}(x-l) \qst{n}(x+1) \bigg).
\end{eqnarray*}
Since $Q$ is log-concave,
so is $\qst{n}$,
cf.\ \cite{karlin3},
so the ratio $\qst{n}(x+1)/\qst{n}(x)$ is decreasing in $x$, and
(\ref{eq:tech}) follows.]
By definition, $\sco{C_Q P}(x) \geq \sco{C_Q P}(x+1)$ if and only if,
\begin{eqnarray}
0 & \leq & \left( \sum_y (y+1) P(y+1) \qst{y}(x) \right) \left(
\sum_z P(z) \qst{z}(x+1) \right) \nonumber \\
& & - \left( \sum_y (y+1) P(y+1) \qst{y}(x+1) \right)
\left( \sum_z P(z) \qst{z}(x) \right) \nonumber \\
& = & \sum_{y,z} (y+1) P(y+1) P(z) \left[ \qst{y}(x) \qst{z}(x+1)
- \qst{y}(x+1) \qst{z}(x) \right]. \label{eq:doublesum}
\end{eqnarray}
Noting that for $y=z$ the term in square brackets in the
double sum becomes zero, and swapping the values of $y$ and
$z$ in the range $y>z$,
the double sum in
(\ref{eq:doublesum}) becomes,
$$ \sum_{y < z} [(y+1) P(y+1) P(z) - (z+1) P(z+1) P(y)]
\left[ \qst{y}(x) \qst{z}(x+1)
- \qst{y}(x+1) \qst{z}(x) \right].$$
By the ultra-log-concavity of $P$, the first square
bracket is nonnegative for $y \leq z$,
and by equation~(\ref{eq:tech}) the second square bracket is
also nonnegative for $y \leq z$.
\end{proof}
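Lemma~\ref{lem:decsc} can be spot-checked numerically. In the sketch below (ours, illustrative only), $P$ is an ultra-log-concave binomial and $Q$ a log-concave distribution on $\{1,2,3\}$:

```python
from math import comb

def compound(P, Q, size):
    """C_Q P(x) = sum_y P(y) Q^{*y}(x), truncated."""
    Qy = [1.0] + [0.0] * (size - 1)
    out = [0.0] * size
    for Py in P:
        out = [o + Py * q for o, q in zip(out, Qy)]
        new = [0.0] * size
        for i, a in enumerate(Qy):
            for j, b in enumerate(Q):
                if i + j < size:
                    new[i + j] += a * b
        Qy = new
    return out

def score(P, Q, size):
    """r(x) = C_Q(P#)(x) / C_Q P(x) - 1 on the support of C_Q P."""
    lam = sum(x * px for x, px in enumerate(P))
    Psharp = [(y + 1) * P[y + 1] / lam for y in range(len(P) - 1)]
    num, den = compound(Psharp, Q, size), compound(P, Q, size)
    return [n / d - 1 if d > 0 else 0.0 for n, d in zip(num, den)], den

Q = [0.0, 0.5, 0.35, 0.15]      # log-concave: 0.35^2 >= 0.5 * 0.15
P = [comb(4, x) * 0.3 ** x * 0.7 ** (4 - x) for x in range(5)]  # ULC
r, cqp = score(P, Q, 16)
support = [x for x, d in enumerate(cqp) if d > 0]   # {0, ..., 12}
decreasing = all(r[x] >= r[x + 1] - 1e-12
                 for x in support if x + 1 in support)
```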
We remark that, under the same assumptions, and using a very similar
argument, an analogous result holds for some alternative
score functions recently introduced in \cite{johnson22}
and in related work.
Combining Lemmas~\ref{lem:decsc} and~\ref{lem:moments}
with equation~(\ref{eq:newheat})
we deduce the following result,
which is the main technical step
in the proof of Theorem~\ref{thm:mainpoi} below.
\begin{proposition} \label{prop:deriv}
Let $P$ be an ultra-log-concave distribution on ${\mathbb Z}_+$
with mean $\lambda>0$, and assume that
$Q$ and $\mbox{\em CPo}(\lambda,Q)$ are
both log-concave. Let $W_\alpha$ be a
random variable with distribution $U_\alpha^QP$,
and define, for all $\alpha\in[0,1],$
the function,
$$E(\alpha):=E[-\log C_Q \Pi_{\lambda}(W_{\alpha})].$$
Then $E(\alpha)$ is continuous for all $\alpha\in[0,1]$,
it is differentiable for $\alpha\in(0,1)$, and,
moreover, $E'(\alpha)\leq 0$ for $\alpha\in(0,1)$.
In particular, $E(0)\geq E(1)$.
\end{proposition}
\begin{proof}
Recall that,
$$
U^Q_{\alpha}P(x) =
C_Q U_\alpha P(x)
=
\sum_{y=0}^{\infty} U_{\alpha}P(y) \qst{y}(x)
=\sum_{y=0}^{x} U_{\alpha}P(y) \qst{y}(x),
$$
where the last sum is restricted to the
range $0\leq y\leq x$, because
$Q$ is supported on ${\mathbb {N}}$.
Therefore, since $U_\alpha P(x)$ is continuous
in $\alpha$ \cite{johnson21},
so is $U_\alpha^Q P(x)$,
and to show that $E(\alpha)$ is continuous
it suffices to show that the series,
\begin{eqnarray}
E(\alpha)
:=E[-\log C_Q \Pi_{\lambda}(W_{\alpha})]
=-\sum_{x=0}^\infty U_\alpha^QP(x)\log C_Q \Pi_\lambda(x),
\label{eq:series}
\end{eqnarray}
converges uniformly. To that end,
first observe that log-concavity of $C_Q\Pi_\lambda$
implies that $Q(1)$ is nonzero. [Otherwise,
if $i>1$ is the smallest integer such that $Q(i)\neq 0$,
then $C_Q\Pi_\lambda(1)=0$, while
$C_Q\Pi_\lambda(0)$ and
$C_Q\Pi_\lambda(i)$ are both strictly positive,
so the support of $C_Q\Pi_\lambda$ is not an interval,
contradicting the log-concavity of
$C_Q\Pi_\lambda$.]
Since $Q(1)$ is nonzero, we can bound the
compound Poisson probabilities as,
$$1 \geq C_Q \Pi_{\lambda}(x) = \sum_{y} [e^{-\lambda} \lambda^y/y!]\qst{y}(x)
\geq e^{-\lambda} [\lambda^x/x!] Q(1)^x,
\;\;\;\;\mbox{for all}\;x\geq 1,$$
so that the summands in (\ref{eq:series})
can be bounded,
\begin{equation} \label{eq:boundlog}
0 \leq - \log C_Q \Pi_{\lambda}(x) \leq \lambda + \log x! - x \log( \lambda Q(1))
\leq Cx^2,
\;\;\;\;x\geq 1,\end{equation}
for a constant $C>0$ that depends only on $\lambda$ and $Q(1)$.
Therefore, for any $N\geq 1$, the tail of the series (\ref{eq:series})
can be bounded,
$$
0\leq -\sum_{x=N}^\infty U_\alpha^QP(x)\log C_Q \Pi_\lambda(x)
\leq C E[W^2_\alpha{\mathbb I}_{\{W_\alpha\geq N\}}]
\leq \frac{C}{N}E[W_\alpha^3],$$
and, in view of Lemma~\ref{lem:moments},
it converges uniformly.
Therefore, $E(\alpha)$ is continuous in $\alpha$,
and, in particular, convergent for all $\alpha\in[0,1]$.
To prove that it is differentiable at each $\alpha\in(0,1)$
we need to establish that: (i)~the summands in (\ref{eq:series})
are continuously differentiable in $\alpha$ for each $x$;
and (ii)~the series
of derivatives converges uniformly.
Since, as noted above, $U_\alpha^Q P(x)$ is defined
by a finite sum, we can differentiate with respect
to $\alpha$ under the sum, to obtain,
\begin{eqnarray}
\frac{\partial}{\partial \alpha}
U^Q_{\alpha} P(x)
=
\frac{\partial}{\partial \alpha}
C_Q U_\alpha P(x)
= \sum_{y=0}^{x} \frac{\partial}{\partial \alpha}
U_{\alpha}P(y) \qst{y}(x).
\label{eq:finite}
\end{eqnarray}
And since $U_\alpha P$ is continuously
differentiable in $\alpha\in(0,1)$
for each $x$ (cf.\ \cite[Proposition~3.6]{johnson21}
or equation (\ref{eq:newheat}) above),
so are the summands in (\ref{eq:series}),
establishing~(i); in fact, they are
infinitely differentiable, which can be seen
by repeated applications of (\ref{eq:newheat}).
To show that the
series of derivatives converges uniformly,
let $\alpha$ be restricted in an arbitrary
open interval $(\epsilon,1)$ for some $\epsilon>0$.
The relation (\ref{eq:newheat})
combined with (\ref{eq:finite}) yields,
for any $x$,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial \alpha} U_\alpha^Q P (x)} \nonumber \\
& = & \frac{1}{\alpha} \sum_{y=0}^{x}
\biggl( \lambda
\bigl(U_{\alpha}P(y) - U_{\alpha}P(y-1)\bigr)
- \bigl((y+1) U_{\alpha}P(y+1) - y U_{\alpha}P(y)\bigr) \biggr)
\qst{y}(x) \nonumber \\
& = & -\frac{1}{\alpha} \sum_{y=0}^{x}
\left( (y+1) U_{\alpha}P(y+1) - \lambda U_{\alpha}P(y) \right)
(\qst{y}(x) - \qst{y+1}(x)) \nonumber \\
& = & - \frac{1}{\alpha} \sum_{y=0}^{x}
\left( (y+1) U_{\alpha}P(y+1) - \lambda U_{\alpha}P(y) \right)
\qst{y}(x) \nonumber \\
& & + \sum_{v=0}^{x} Q(v) \frac{1}{\alpha} \sum_{y=0}^{x}
\left( (y+1) U_{\alpha}P(y+1) - \lambda U_{\alpha}P(y) \right)
\qst{y}(x-v) \nonumber \\
& = & - \frac{\lambda}{\alpha} U_\alpha^Q P(x) \left(
\frac{ \sum_{y=0}^{x} (y+1) U_{\alpha}P(y+1) \qst{y}(x)}{\lambda
U_\alpha^Q P(x) } - 1 \right) \nonumber \\
& & + \frac{\lambda}{\alpha}
\sum_{v=0}^{x} Q(v) U_\alpha^Q P(x-v) \left(
\frac{ \sum_{y=0}^{x} (y+1) U_{\alpha}P(y+1) \qst{y}(x-v)}{\lambda
U_\alpha^Q P(x-v) } - 1 \right) \nonumber \\
& = & - \frac{\lambda}{\alpha} \left( U_\alpha^Q P(x) \sco{U_\alpha^Q P}(x)
- \sum_{v=0}^{x} Q(v) U_\alpha^Q P(x-v) \sco{U_\alpha^Q P}(x-v) \right) .
\label{eq:derivative} \end{eqnarray}
Also,
for any $x$, by definition,
$$|U_\alpha^Q P(x) \sco{U_\alpha^Q P}(x)|
\leq
C_Q(U_\alpha P)^{\#}(x) +U_\alpha^QP(x),
$$
where, for any distribution $P$, we write
$P^{\#}(y) = P(y+1)(y+1)/\lambda$ for its size-biased version.
Hence for any $N\geq 1$, equations
(\ref{eq:derivative}) and (\ref{eq:boundlog}) yield the bound,
\begin{eqnarray*}
\lefteqn{
\left| \sum_{x=N}^{\infty} \frac{\partial}{\partial \alpha}
U_\alpha^Q P(x) \log C_Q \Pi_{\lambda}(x) \right| } \\
& \leq &
\sum_{x=N}^{\infty}
\frac{C\lambda x^2}{\alpha}
\Big\{
C_Q(U_\alpha P)^{\#}(x) +U_\alpha^QP(x)
+ \sum_{v=0}^{x} Q(v)
[
C_Q(U_\alpha P)^{\#}(x-v) +U_\alpha^QP(x-v)
] \Big\}\\
& = &
\frac{2C}{\alpha}
E\Big[
\Big(
V_\alpha^2+W_\alpha^2+X^2
+XV_\alpha+XW_\alpha
\Big)
{\mathbb I}_{\{V_\alpha\geq N,\;W_\alpha\geq N,\;X\geq N\}}
\Big]\\
&\leq&
\frac{C'}{\alpha}
\Big\{
E[V_\alpha^2 {\mathbb I}_{\{V_\alpha\geq N\}}]
+E[W_\alpha^2 {\mathbb I}_{\{W_\alpha\geq N\}}]
+E[X^2 {\mathbb I}_{\{X\geq N\}}]
\Big\}\\
&\leq&
\frac{C'}{N\alpha}
\Big\{
E[V_\alpha^3]
+E[W_\alpha^3]
+E[X^3]
\Big\},
\end{eqnarray*}
where $C,C'>0$ are appropriate finite constants,
and the random variables
$V_\alpha\sim C_Q(U_\alpha P)^{\#}$,
$W_\alpha\sim U^Q_\alpha P$ and $X\sim Q$ are independent.
Lemma~\ref{lem:moments} implies that this bound
converges to zero uniformly in $\alpha\in(\epsilon,1)$, as $N\to\infty$.
Since $\epsilon>0$ was arbitrary,
this establishes that
$E(\alpha)$ is differentiable for all $\alpha\in(0,1)$
and, in fact, that we can differentiate the
series (\ref{eq:series})
term-by-term, to obtain,
\begin{eqnarray}
\lefteqn{
E'(\alpha)
\;=\;
- \sum_{x=0}^{\infty} \frac{\partial}{\partial \alpha}
U_\alpha^Q P(x) \log C_Q \Pi_{\lambda}(x)
}
\label{eq:step1} \\
& = &
\frac{\lambda}{\alpha} \sum_{x=0}^{\infty}
\left( U_\alpha^Q P(x) \sco{U_\alpha^Q P}(x)
- \sum_{v=0}^{x} Q(v) U_\alpha^Q P(x-v) \sco{U_\alpha^Q P}(x-v) \right)
\log C_Q \Pi_{\lambda}(x) \nonumber \\
& = &
\frac{\lambda}{\alpha} \sum_{x=0}^{\infty}
U_\alpha^Q P(x) \sco{U_\alpha^Q P}(x)
\left( \log C_Q \Pi_{\lambda}(x) - \sum_{v=0}^{\infty} Q(v)
\log C_Q \Pi_{\lambda}(x+v)
\right),
\nonumber
\end{eqnarray}
where the second equality follows from using
(\ref{eq:derivative}) above, and the rearrangement
leading to the third equality follows by interchanging
the order of (second) double summation and replacing $x$
by $x+v$.
Now we note that, exactly as in \cite{johnson21},
the last series above is the covariance between
the (zero-mean) function $\sco{U_\alpha^Q P}(x)$
and the function $\left( \log C_Q \Pi_{\lambda}(x)
- \sum_v Q(v) \log C_Q \Pi_{\lambda}(x+v) \right)$,
under the measure $U_\alpha^Q P$.
Since $P$ is ultra-log-concave, so is $U_\alpha P$
\cite{johnson21}, hence the score function
$\sco{U_\alpha^Q P}(x)$ is decreasing in $x$,
by Lemma~\ref{lem:decsc}. Also, the
log-concavity of $C_Q \Pi_{\lambda}$ implies that the
second function is increasing, and
Chebyshev's rearrangement lemma
implies that the covariance is
less than or equal to zero, proving
that $E'(\alpha)\leq 0$, as claimed.
Finally, the fact that $E(0)\geq E(1)$
is an immediate consequence of the
continuity of $E(\alpha)$ on $[0,1]$
and the fact that
$E'(\alpha)\leq 0$ for all $\alpha\in(0,1)$.
\end{proof}
Notice that, for the above proof to work, it is not necessary that
$C_Q \Pi_{\lambda}$ be log-concave; the weaker property
that $\left( \log C_Q \Pi_{\lambda}(x)
- \sum_v Q(v) \log C_Q \Pi_{\lambda}(x+v) \right)$ be increasing is enough.
We can now state and prove a slightly more general form of Theorem~\ref{thm:mainpoi}.
Recall that the relative entropy between distributions $P$ and $Q$ on ${\mathbb {Z}}_+$, denoted by $D(P\|Q)$,
is defined by
$$D(P\|Q):=\sum_{x\geq 0}P(x)\log\frac{P(x)}{Q(x)}.$$
\begin{theorem} \label{thm:mainpoi-D}
Let $P$ be an ultra-log-concave distribution on ${\mathbb {Z}}_+$ with mean $\lambda$.
If the distribution $Q$ on ${\mathbb {N}}$ and the compound Poisson distribution
$C_Q \Pi_\lambda$ are both log-concave, then
$$
D(C_Q P\|C_Q \Pi_\lambda) \leq
H(C_Q \Pi_\lambda)- H(C_Q P) .
$$
\end{theorem}
\begin{proof}
As in Proposition~\ref{prop:deriv},
let $W_\alpha\sim U^Q_\alpha P=C_Q U_\alpha P$.
Noting that $W_0\sim C_Q \Pi_\lambda$ and $W_1\sim C_Q P$,
we have
\begin{eqnarray*}
H(C_Q P)+D(C_Q P\|C_Q \Pi_\lambda)
&=&
-E[\log C_Q \Pi_\lambda(W_1)]\\
&\leq&
-E[\log C_Q \Pi_\lambda(W_0)]\\
&=&
H(C_Q \Pi_\lambda),
\end{eqnarray*}
where the inequality is exactly
the statement that $E(1)\leq E(0)$,
proved in Proposition~\ref{prop:deriv}.
\end{proof}
Since $0\leq D(C_Q P\|C_Q \Pi_\lambda)$,
Theorem~\ref{thm:mainpoi} immediately follows.
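As a purely numerical illustration of Theorem~\ref{thm:mainpoi-D} (not part of the argument above), the inequality can be checked on a small example, taking $P=\mbox{Bin}(5,0.3)$, which is ultra-log-concave with mean $\lambda=1.5$, and $Q$ supported on $\{1,2\}$; the parameter choices and truncation level are arbitrary, and the helper names are ours:

```python
import math

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def compound(P, Q, size):
    # C_Q P(x) = sum_y P(y) Q^{*y}(x), with Q^{*0} the point mass at 0
    out = [0.0] * size
    conv = [1.0]
    for py in P:
        for x, q in enumerate(conv[:size]):
            out[x] += py * q
        conv = convolve(conv, Q)
    return out

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

Q = [0.0, 0.8, 0.2]        # Q(1)=0.8, Q(2)=0.2: log-concave on N
n, q = 5, 0.3              # P = Bin(5, 0.3): ultra-log-concave, mean 1.5
P = [math.comb(n, k) * q**k * (1 - q)**(n - k) for k in range(n + 1)]
lam = n * q                # lam = 1.5 >= 2 Q(2)/Q(1)^2 = 0.625

K = 60                     # truncation level for the Poisson pmf
pois = [math.exp(-lam)]
for k in range(1, K):
    pois.append(pois[-1] * lam / k)

CQP = compound(P, Q, 2 * K)      # C_Q P
CQPi = compound(pois, Q, 2 * K)  # C_Q Pi_lam (up to truncation)

D = sum(p_ * math.log(p_ / r) for p_, r in zip(CQP, CQPi) if p_ > 0)
gap = entropy(CQPi) - entropy(CQP) - D   # Theorem: gap >= 0
```

Here `gap` comes out nonnegative, in accordance with the theorem, since $\lambda Q(1)^2\geq 2Q(2)$ guarantees the log-concavity of $C_Q\Pi_\lambda$ by Theorem~\ref{thm:q2pt}.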
\section{Maximum Entropy Property of the Compound Binomial Distribution}
\label{sec:compbin}
Here we prove the maximum entropy result for compound
binomial random variables, Theorem~\ref{thm:mainber}.
The proof, to some extent, parallels some
of the arguments in \cite{harremoes}\cite{mateev}\cite{shepp},
which rely on differentiating the compound-sum probabilities
$\bp{p}(x)$
for a given parameter vector $\vc{p}=(p_1,p_2,\ldots,p_n)$
(recall Definition~\ref{def:bp} in the Introduction),
with respect to an individual $p_i$.
Using the representation,
\begin{equation} \label{eq:master}
\cp{p}(y) =
\sum_{x=0}^n \bp{p}(x) Q^{*x}(y),
\;\;\;\;y\geq 0,
\end{equation}
differentiating $\cp{p}(x)$
reduces to differentiating
$\bp{p}(x)$,
and leads to an expression
equivalent to that derived
earlier
in (\ref{eq:derivative})
for the derivative of $C_Q U_\alpha P$
with respect to $\alpha$.
\begin{lemma} \label{lem:partials}
Given a parameter vector $\vc{p}=(p_1,p_2,\ldots,p_n)$,
with $n\geq 2$ and
each $0 \leq p_i \leq 1$,
let,
$$ \vc{p_t} = \left( \frac{p_1 + p_2}{2} + t, \frac{p_1 + p_2}{2} - t, p_3, \ldots, p_n \right),$$
for $t \in [-(p_1+p_2)/2, (p_1 + p_2)/2]$. Then,
\begin{equation} \label{eq:maindiff}
\frac{\partial}{\partial t} \cp{p_t}(x) = (- 2t)
\sum_{y=0}^n \bp{\wt{p}}(y)
\left( Q^{*(y+2)}(x) - 2 Q^{*(y+1)}(x) + Q^{*y}(x) \right),
\end{equation}
where $\vc{\wt{p}} = (p_3, \ldots, p_n)$.
\end{lemma}
\begin{proof} Note that the sum of the entries of $\vc{p}_t$ is
constant as $t$ varies, and that $\vc{p_t} = \vc{p}$
for $t = (p_1 - p_2)/2$, while
$\vc{p_t} = ( (p_1 + p_2)/2, (p_1 + p_2)/2, p_3, \ldots, p_n)$
for $t=0$. Writing
$k = p_1 + p_2$, $\bp{p_t}$ can be expressed,
\begin{eqnarray*}
\bp{p_t}(y) & = &
\left( \frac{k^2}{4} - t^2 \right)
\bp{\wt{p}}(y-2)
+ \left( k \left( 1 - \frac{k}{2} \right)
+2 t^2 \right) \bp{\wt{p}}(y-1) \\
& & + \left( \left( 1 - \frac{k}{2} \right)^2 - t^2 \right)
\bp{\wt{p}}(y),
\end{eqnarray*}
and its derivative with respect to $t$ is,
$$ \frac{ \partial}{\partial t} \bp{p_t}(y)
= - 2t \left( \bp{\wt{p}}(y-2) - 2 \bp{\wt{p}}(y-1) + \bp{\wt{p}}(y) \right).$$
The expression (\ref{eq:master}) for
$\cp{p_t}$ shows that it is
a finite linear combination of compound-sum
probabilities $\bp{p_t}(x)$,
so we can differentiate inside the sum to obtain,
\begin{eqnarray*}
\frac{ \partial}{\partial t} \cp{p_t}(x)
& = & \sum_{y=0}^n \frac{ \partial}{\partial t} \bp{p_t}(y)
Q^{*y}(x) \\
& = & - 2t \sum_{y=0}^n \left( \bp{\wt{p}}(y-2) - 2 \bp{\wt{p}}(y-1) + \bp{\wt{p}}(y) \right) Q^{*y}(x) \\
& = & -2 t \sum_{y=0}^{n-2} \bp{\wt{p}}(y) \left(
Q^{*(y+2)}(x) - 2 Q^{*(y+1)}(x) + Q^{*y}(x) \right),
\end{eqnarray*}
since $\bp{\wt{p}}(y) = 0$ for $y \leq -1$ and $y \geq n-1$. \end{proof}
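The identity \eqref{eq:maindiff} can also be checked against a central finite difference (an illustrative sketch with arbitrary parameter choices, not part of the proof; the helper names are ours):

```python
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def bern_sum(p_vec):
    # pmf of a sum of independent Bernoulli(p_i) variables
    pmf = [1.0]
    for p in p_vec:
        pmf = convolve(pmf, [1 - p, p])
    return pmf

def compound(P, Q, size):
    # C_Q P(x) = sum_y P(y) Q^{*y}(x)
    out = [0.0] * size
    conv = [1.0]
    for py in P:
        for x, qv in enumerate(conv[:size]):
            out[x] += py * qv
        conv = convolve(conv, Q)
    return out

Q = [0.0, 0.7, 0.3]          # arbitrary compounding distribution on {1, 2}
rest = [0.5, 0.4]            # (p_3, ..., p_n)
k = 0.7 + 0.3                # k = p_1 + p_2
size = 16

def cp_t(t):
    return compound(bern_sum([k / 2 + t, k / 2 - t] + rest), Q, size)

# central finite difference of the compound pmf in t
t, h = 0.1, 1e-6
num = [(a - b) / (2 * h) for a, b in zip(cp_t(t + h), cp_t(t - h))]

# right-hand side of the lemma
tilde = bern_sum(rest)
powers, conv = [[1.0]], [1.0]
for _ in range(len(rest) + 2):
    conv = convolve(conv, Q)
    powers.append(conv)

def qst(y, x):               # Q^{*y}(x), zero out of range
    return powers[y][x] if x < len(powers[y]) else 0.0

rhs = [(-2 * t) * sum(tilde[y] * (qst(y + 2, x) - 2 * qst(y + 1, x) + qst(y, x))
                      for y in range(len(tilde)))
       for x in range(size)]

err = max(abs(a - b) for a, b in zip(num, rhs))
```

The discrepancy `err` is at the level of the finite-difference error, confirming the formula pointwise for this choice of parameters.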
Next we state and prove the equivalent
of Proposition~\ref{prop:deriv} above. Note
that
the distribution of a compound Bernoulli sum
is invariant under permutations of the
Bernoulli parameters $p_i$. Therefore,
the assumption $p_1\geq p_2$ is made below
without
loss of generality.
\begin{proposition} \label{prop:deriv2}
Suppose that the distribution $Q$ on ${\mathbb {N}}$
and the compound binomial distribution
$\mbox{\em CBin}(n,\lambda/n,Q)$
are both log-concave; let
$\vc{p}=(p_1,p_2,\ldots,p_n)$ be a
given parameter vector with $n\geq 2$,
$p_1 +p_2+ \ldots + p_n = \lambda>0$,
and $p_1\geq p_2$;
let $W_t$ be a
random variable with distribution $\cp{p_t}$;
and define, for all $t\in[0,(p_1-p_2)/2],$
the function,
$$E(t):=E[-\log \cp{\vc{\overline{p}}}(W_t)],$$
where $\vc{\overline{p}}$ denotes the parameter
vector with all entries equal to $\lambda/n$.
If $Q$ satisfies either of the conditions:
$(a)$~$Q$ has finite support; or
$(b)$~$Q$ has tails heavy enough so that,
for some $\rho\in(0,1)$, $\beta>0$ and $N_0\geq 1$,
we have, $Q(x)\geq \rho^{x^\beta}$,
for all $x\geq N_0$, then
$E(t)$ is continuous for all
$t\in[0,(p_1-p_2)/2]$,
it is differentiable for
$t\in(0,(p_1-p_2)/2)$,
and, moreover, $E'(t)\leq 0$ for
$t\in(0,(p_1-p_2)/2)$.
In particular, $E(0)\geq E((p_1-p_2)/2)$.
\end{proposition}
\begin{proof}
The compound distribution $C_Q\bp{p_t}$ is
defined by the finite sum,
$$
C_Q\bp{p_t}(x)=\sum_{y=0}^n\bp{p_t}(y)Q^{*y}(x),$$
and is, therefore, continuous in $t$. First,
assume that $Q$ has finite support.
Then so does $C_Q\bp{p}$ for any parameter
vector $\vc{p}$, and the continuity and
differentiability of $E(t)$ are trivial.
In particular, the series defining $E(t)$
is a finite sum, so we can differentiate
term-by-term, to obtain,
\begin{eqnarray}
E'(t)
& = & - \sum_{x=0}^{\infty} \frac{\partial}{\partial t} \cp{p_t}(x)
\log \cp{\vc{\overline{p}}}(x) \nonumber \\
& = & 2t \sum_{x=0}^{\infty}
\sum_{y=0}^{n-2} \bp{\wt{p}}(y)
\left( Q^{*(y+2)}(x) - 2 Q^{*(y+1)}(x) + Q^{*y}(x) \right)
\log \cp{\vc{\overline{p}}}(x)
\label{eq:binstep2} \\
& = & 2t \sum_{y=0}^{n-2} \sum_{z=0}^{\infty} \bp{\wt{p}}(y) Q^{*y}(z) \sum_{v,w} Q(v) Q(w)
\bigg[ \log \cp{\vc{\overline{p}}}(z+v+w) - \log \cp{\vc{\overline{p}}}(z+v) \nonumber \\
& & \hspace*{6.5cm}
- \log \cp{\vc{\overline{p}}}(z+w) + \log \cp{\vc{\overline{p}}}(z)
\bigg], \label{eq:binstep3}
\end{eqnarray}
where (\ref{eq:binstep2}) follows by Lemma~\ref{lem:partials}.
By assumption, the distribution $\cp{\vc{\overline{p}}}=\mbox{CBin}(n,\lambda/n,Q)$
is log-concave,
which implies that,
for all $z,v,w$ such that $z+v+w$ is in the
support of $\mbox{CBin}(n,\lambda/n,Q)$,
\begin{equation*}
\frac{ \cp{\vc{\overline{p}}}(z)}{\cp{\vc{\overline{p}}}(z+v)}
\leq \frac{ \cp{\vc{\overline{p}}}(z+w)}{\cp{\vc{\overline{p}}}(z+v+w)}.
\end{equation*}
Hence the term in square brackets in equation (\ref{eq:binstep3})
is nonpositive and, since $t\geq 0$, the result follows.
Now, suppose condition $(b)$ holds on the tails of $Q$.
First we note that the moments of $W_t$ are all uniformly
bounded in $t$: Indeed, for any $\gamma>0$,
\begin{equation}
E[W_t^\gamma]=\sum_{x=0}^\infty
C_Q\bp{p_t}(x)
x^\gamma
=
\sum_{x=0}^\infty
\sum_{y=0}^n\bp{p_t}(y) Q^{*y}(x)
x^\gamma
\leq
\sum_{y=0}^n
\sum_{x=0}^\infty
Q^{*y}(x)
x^\gamma
\leq C_nq_\gamma,
\label{eq:moment}
\end{equation}
where $C_n$ is a constant depending
only on $n$, and $q_\gamma$ is the
$\gamma$th moment of $Q$, which
is of course finite; recall property~$(ii)$
in the beginning of Section~\ref{sec:comppoi}.
For the continuity of $E(t)$, it suffices to show that
the series,
\begin{eqnarray}
E(t):=E[-\log \cp{\vc{\overline{p}}}(W_t)]=
-\sum_{x=0}^\infty C_Q\bp{p_t}(x)\log C_Q\bp{\vc{\overline{p}}}(x),
\label{eq:Eseries}
\end{eqnarray}
converges uniformly. The tail assumption on $Q$ implies
that, for all $x\geq N_0$,
$$1 \geq \cp{\vc{\overline{p}}}(x) = \sum_{y=0}^n \bp{\vc{\overline{p}}}(y) \qst{y}(x)
\geq \lambda(1-\lambda/n)^{n-1} Q(x)
\geq \lambda(1-\lambda/n)^{n-1} \rho^{x^\beta},$$
so that,
\begin{equation}
0\leq -\log \cp{\vc{\overline{p}}}(x)\leq Cx^\beta,
\label{eq:logQ}
\end{equation}
for an appropriate constant $C>0$.
Then, for $N\geq N_0$, the tail of the series
(\ref{eq:Eseries}) can be bounded,
$$0\leq -\sum_{x=N}^\infty C_Q\bp{p_t}(x)\log C_Q\bp{\vc{\overline{p}}}(x)
\leq C E[ W^\beta_t{\mathbb I}_{\{W_t\geq N\}}]
\leq \frac{C}{N}E[W_t^{\beta+1}]
\leq \frac{C}{N}C_nq_{\beta+1},
$$
where the last inequality follows from (\ref{eq:moment}).
This obviously converges to zero, uniformly
in $t$, therefore $E(t)$ is continuous.
For the differentiability of $E(t)$,
note that the summands in (\ref{eq:Eseries}) are
continuously differentiable (by Lemma~\ref{lem:partials}),
and that the series of derivatives converges uniformly
in $t$; to see that, for $N\geq N_0$ we apply
Lemma~\ref{lem:partials} together with the bound
(\ref{eq:logQ}) to get,
\begin{eqnarray*}
\lefteqn{
\left| \sum_{x=N}^{\infty} \frac{\partial}{\partial t}
\cp{p_t}(x) \log \cp{\vc{\overline{p}}}(x) \right|
} \\
& \leq &
2 t \sum_{x=N}^{\infty}
\sum_{y=0}^n \bp{\wt{p}}(y)
\left( Q^{*(y+2)}(x) + 2 Q^{*(y+1)}(x) + Q^{*y}(x) \right)
Cx^\beta\\
& \leq &
2 C t
\sum_{y=0}^n
\sum_{x=N}^{\infty}
\left( Q^{*(y+2)}(x) + 2 Q^{*(y+1)}(x) + Q^{*y}(x) \right)
x^\beta,
\end{eqnarray*}
which is again easily seen to converge to zero
uniformly in $t$ as $N\to\infty$, since
$Q$ has finite moments of all orders.
This establishes the differentiability of $E(t)$
and justifies the term-by-term differentiation
of the series (\ref{eq:Eseries}); the rest of
the proof that $E'(t)\leq 0$ is the same as in case~$(a)$.
\end{proof}
Note that, as with Proposition~\ref{prop:deriv},
the above proof only requires that the compound
binomial distribution $\mbox{CBin}(n,\lambda/n,Q)=\cp{\vc{\overline{p}}}$
satisfies a property weaker than log-concavity, namely
that the function,
$\log \cp{\vc{\overline{p}}}(x) - \sum_v Q(v) \log \cp{\vc{\overline{p}}}(x+v),$
be increasing in $x$.
\begin{proof}{\bf (of Theorem~\ref{thm:mainber})}
Assume, without loss of generality,
that $n\geq 2$. If $p_1 > p_2$, then
Proposition~\ref{prop:deriv2}
says that, $E((p_1-p_2)/2)\leq E(0)$, that is,
$$ -\sum_{x=0}^{\infty}
\cp{p}(x) \log \cp{\vc{\overline{p}}}(x)
\leq - \sum_{x=0}^{\infty}
\cp{p^*}(x) \log \cp{\vc{\overline{p}}}(x),$$
where $\vc{p^*} = ((p_1 + p_2)/2, (p_1 + p_2)/2, p_3, \ldots, p_n)$
and $\vc{\overline{p}} = (\lambda/n,\ldots,\lambda/n)$.
Since the expression in the above right-hand-side
is invariant under permutations of the elements of
the parameter vectors,
iterating this averaging argument shows
that it is maximized by $\vc{p} = \vc{\overline{p}}$. Therefore,
using, as before, the nonnegativity
of the relative entropy,
\begin{eqnarray*}
H(\cp{p})
&\leq&
H(\cp{p}) + D(\cp{p} \| \cp{\vc{\overline{p}}})\\
& = &
-\sum_{x=0}^{\infty}
\cp{p}(x) \log \cp{\vc{\overline{p}}}(x) \\
&\leq&
-\sum_{x=0}^{\infty}
\cp{\vc{\overline{p}}}(x) \log \cp{\vc{\overline{p}}}(x)\\
& = &
H( \cp{\vc{\overline{p}}} )
\;=\; H(\mbox{CBin}(n,\lambda/n,Q)),
\end{eqnarray*}
as claimed.
\end{proof}
Clearly one can also state a slightly more general version of
Theorem~\ref{thm:mainber} analogous to Theorem~\ref{thm:mainpoi-D}.
\section{Conditions for Log-Concavity} \label{sec:lccond}
Theorems~\ref{thm:mainpoi} and~\ref{thm:mainber} state that
log-concavity is a sufficient condition for
compound binomial and compound Poisson distributions
to have maximal entropy within a natural class.
In this section, we discuss when
log-concavity holds.
Recall that Steutel and van Harn \cite[Theorem~2.3]{steutel2} showed that,
if $\{i Q(i)\}$ is a decreasing sequence, then
CPo$(\lambda,Q)$ is a unimodal distribution,
which is a necessary condition for log-concavity.
Interestingly, the same condition
provides a dichotomy of results
in compound Poisson approximation
bounds as developed by Barbour, Chen and Loh \cite{barbour-chen-loh}:
If $\{i Q(i)\}$ is decreasing,
then the bounds are
of the same form and order as in
the Poisson case, otherwise
the bounds are much larger.
In a slightly different direction, Cai and Willmot \cite[Theorem~3.2]{cai}
showed that if $\{Q(i)\}$ is decreasing
then the cumulative distribution function of the compound Poisson
distribution CPo$(\lambda,Q)$, evaluated at the integers,
is log-concave.
Finally, Keilson and Sumita \cite[Theorem~4.9]{keilson2} proved
that, if $Q$ is log-concave, then
the ratio,
$$ \frac{ C_Q \Pi_{\lambda}(n)}{C_Q \Pi_{\lambda}(n+1)},
$$
is decreasing in $\lambda$ for any fixed $n$.
In the present context, we first show
that a compound Bernoulli sum is log-concave
if the compounding distribution $Q$ is log-concave
and the Bernoulli parameters are sufficiently large.
\begin{lemma} \label{lem:lc}
Suppose $Q$ is a log-concave distribution on ${\mathbb {N}}$,
and all the elements $p_i$
of the parameter vector $\vc{p}=(p_1,p_2,\ldots,p_n)$
satisfy $p_i \geq \frac{1}{1 + Q(1)^2/Q(2)}$.
Then the compound Bernoulli sum distribution
$\cp{p}$ is log-concave.
\end{lemma}
\begin{proof}
Observe that, given that $Q$ is log-concave,
the compound Bernoulli distribution
$\cbern{p}{Q}$ is log-concave if and only if,
\begin{equation}\label{cbern-lc}
p \geq \frac{1}{1 + Q(1)^2/Q(2)}.
\end{equation}
Indeed, let $Y$ have distribution
$\cbern{p}{Q}$. Since $Q$ is log-concave
itself, the log-concavity of
$\cbern{p}{Q}$ is equivalent to
the inequality,
$ \Pr(Y=1)^2 \geq \Pr(Y=2) \Pr(Y=0)$,
which states that,
$ (p Q(1))^2 \geq (1-p) p Q(2)$,
and this is exactly the assumption \eqref{cbern-lc}.
The assertion of the lemma now follows
since the sum of independent log-concave
random variables is log-concave; see, e.g., \cite{karlin3}.
\end{proof}
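As an illustration of the threshold \eqref{cbern-lc} (not part of the lemma's proof), take $Q$ to be a truncated and renormalized geometric distribution, which is log-concave, and compare the log-concavity of $\cbern{p}{Q}$ just above and just below the threshold; the parameter choices here are arbitrary:

```python
def is_log_concave(p, tol=1e-12):
    # check p[k]^2 >= p[k-1] p[k+1] along the (contiguous) support
    return all(p[k]**2 >= p[k - 1] * p[k + 1] - tol
               for k in range(1, len(p) - 1))

K = 20
Q = [0.0] + [0.5**x for x in range(1, K)]
s = sum(Q)
Q = [q / s for q in Q]                    # log-concave; Q(1)^2/Q(2) ~ 1

threshold = 1.0 / (1.0 + Q[1]**2 / Q[2])  # ~ 0.5 for this Q

def cbern(p):
    # pmf of CBern(p, Q): mass 1-p at 0, p Q(k) at k >= 1
    return [1.0 - p] + [p * q for q in Q[1:]]

above = is_log_concave(cbern(threshold + 0.05))   # log-concave
below = is_log_concave(cbern(threshold - 0.05))   # fails at k = 1
```

The only constraint that can fail is at $k=1$, exactly as in the proof: the geometric tail satisfies the log-concavity inequalities with equality.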
Next we examine conditions under which a compound
Poisson measure is log-concave, starting with a simple
necessary condition.
\begin{lemma}\label{lem:nec}
A necessary condition for {\em CPo}$(\lambda,Q)$
to be log-concave is that,
\begin{equation} \label{eq:nec}
\lambda \geq \frac{2 Q(2)}{Q(1)^2} .
\end{equation}
\end{lemma}
\begin{proof}
For any distribution $P$, considering
the difference,
$C_Q P(1)^2 - C_Q P(0) C_Q P (2)$,
shows that a necessary condition for $C_Q P$
to be log-concave is that,
\begin{equation} \label{eq:nec2}
(P(1)^2 - P(0) P(2))/(P(0) P(1)) \geq Q(2)/Q(1)^2. \end{equation}
Now take $P$ to be the Po$(\lambda)$
distribution.
\end{proof}
Similarly, for $P=\bp{p}$, a necessary condition
for the compound Bernoulli sum
$C_Q\bp{p}$ to be log-concave is that,
$$ \sum_i \frac{ p_i}{1-p_i} + \left(\sum_i \frac{p_i^2}{(1-p_i)^2} \right)
\left(\sum_i \frac{p_i}{1-p_i} \right)^{-1} \geq \frac{2 Q(2)}{Q(1)^2},$$
which, since the left-hand-side
is greater than $\sum_i p_i/(1-p_i) \geq \sum_i p_i$,
will hold as long as
$\sum_i p_i \geq 2 Q(2)/Q(1)^2$.
Note that, unlike for the Poisson distribution,
it is not the case that every compound Poisson
distribution CPo$(\lambda,Q)$
is log-concave.
Next we show that for some particular choices of $Q$
and general compound distributions $C_Q P$,
the above necessary condition is sufficient
for log-concavity.
\begin{theorem} \label{thm:qgeom}
Let $Q$ be a geometric distribution
on ${\mathbb {N}}$. Then $C_Q P$ is log-concave
for any distribution $P$ which is log-concave
and satisfies the condition~{\em (\ref{eq:nec2})}.
\end{theorem}
\begin{proof} If $Q$ is geometric with mean $1/\alpha$, then,
$Q^{*y}(x) = \alpha^y (1-\alpha)^{x-y} \binom{x-1}{y-1}$,
which implies that,
$$ C_Q P(x) = \sum_{y=0}^x P(y) \alpha^y (1-\alpha)^{x-y} \binom{x-1}{y-1}.$$
Condition~(\ref{eq:nec2}) ensures
that $C_Q P(1)^2 - C_Q P(0) C_Q P (2) \geq 0$, so,
taking $z = y-1$, we need only prove that the sequence,
$$ C(x) := C_Q P(x+1)/(1-\alpha)^{x+1} = \sum_{z=0}^x P(z+1) \left(
\frac{\alpha}{1-\alpha} \right)^{z+1} \binom{x}{z}$$
is log-concave.
However, this follows immediately from
\cite[Theorem~7.3]{karlin3}, which proves
that if $\{a_i\}$ is a log-concave sequence,
then so is $\{b_i\}$, defined by
$ b_i = \sum_{j=0}^i \binom{i}{j} a_j.$
\end{proof}
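The negative-binomial formula for $Q^{*y}$ used at the start of the proof can be verified numerically (an illustration; the value of $\alpha$ and the truncation level are arbitrary):

```python
import math

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

alpha, K = 0.4, 80
# geometric distribution on N with mean 1/alpha: Q(x) = alpha (1-alpha)^(x-1)
Q = [0.0] + [alpha * (1 - alpha)**(x - 1) for x in range(1, K)]

err = 0.0
conv = [1.0]
for y in range(1, 5):
    conv = convolve(conv, Q)          # conv = Q^{*y} (truncated)
    for x in range(y, 30):
        formula = alpha**y * (1 - alpha)**(x - y) * math.comb(x - 1, y - 1)
        err = max(err, abs(conv[x] - formula))
```

The agreement is to machine precision: $Q^{*y}(x)$ counts the $\binom{x-1}{y-1}$ compositions of $x$ into $y$ positive parts, each with probability $\alpha^y(1-\alpha)^{x-y}$.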
\begin{theorem} \label{thm:q2pt}
Let $Q$ be a distribution supported on the set $\{ 1, 2 \}$.
Then the distribution $C_Q P$ is log-concave
for any ultra-log-concave distribution $P$
with support on $\{0,1,\ldots,N\}$
(where $N$ may be infinite),
which satisfies
\begin{equation}\label{12cond}
(x+1) P(x+1)/P(x) \geq
2 Q(2)/Q(1)^2
\end{equation}
for all $x=0,1,\ldots,N-1$.
In particular, if $Q$ is supported on $\{ 1, 2 \}$,
the compound Poisson distribution {\em CPo}$(\lambda,Q)$
is log-concave for all $\lambda \geq \frac{2 Q(2)}{Q(1)^2}$.
\end{theorem}
Note that the condition \eqref{12cond} is equivalent to requiring
that $ NP(N)/P(N-1) \geq 2Q(2)/Q(1)^2$ if $N$ is finite,
or that $\lim_{x \rightarrow \infty} (x+1) P(x+1)/P(x)
\geq 2 Q(2)/Q(1)^2$ if $N$ is infinite.
The proof of Theorem~\ref{thm:q2pt} is based
in part on some of the ideas in
Johnson and Goldschmidt
\cite{johnson17}, and also
in Wang and Yeh \cite{wang3},
where transformations that
preserve log-concavity are studied.
Since the proof is slightly involved and
the compound Poisson part of the theorem is superseded
by Theorem~\ref{thm:lcconj} below, we give it in the appendix.
Lemma~\ref{lem:nec} and Theorems~\ref{thm:qgeom} and \ref{thm:q2pt},
supplemented by some calculations of the quantities
$C_Q \Pi_{\lambda}(x)^2 - C_Q \Pi_{\lambda}(x-1) C_Q \Pi_{\lambda}(x+1)$
for small $x$, suggest that
compound Poisson measure {\em CPo}$(\lambda,Q)$ should
be log-concave, as long as $Q$ is log-concave
and $\lambda Q(1)^2\geq 2Q(2)$.
Indeed, the following
slightly more general result holds; see Section~7 for
some remarks on its history.
As per Definition~\ref{def:sizebias}, we use
$Q^{\#}$ to denote the size-biased version
of $Q$. Observe that log-concavity of $Q^{\#}$
is a weaker requirement than log-concavity of $Q$.
\begin{theorem}\label{thm:lcconj}
If $Q^{\#}$ is log-concave and $\lambda Q(1)^2\geq 2Q(2)$ with $Q(1)>0$, then
the compound Poisson measure {\em CPo}$(\lambda,Q)$ is log-concave.
\end{theorem}
\begin{proof}
It is well-known that compound Poisson probability mass functions obey a
recursion formula:
\begin{equation}\label{panjer}
k C_Q \Pi_{\lambda}(k) = \lambda \sum_{j=1}^{k} j Q(j) C_Q \Pi_{\lambda} (k-j) \;\;\;\;
\mbox{ for all $k\in {\mathbb {N}}$.}
\end{equation}
(This formula, which is easy to prove for instance
using characteristic functions, has been
repeatedly rediscovered; the earliest reference
we could find was to a 1958 note of Katti and Gurland
mentioned by N. de Pril \cite{deP85}, but later references
are Katti \cite{Kat67},
Adelson \cite{Ade66} and Panjer \cite{Pan81};
in actuarial circles, the above is known
as the Panjer recursion formula.)
For notational convenience, we write $\mu_Q$ for the mean of $Q$,
$r_{j}=\lambda (j+1) Q(j+1)=\lambda \mu_Q Q^{\#}(j)$,
and $p_{j}=C_Q \Pi_{\lambda}(j)$ for $j\in\mathbb{Z}_{+}$. Then \eqref{panjer} reads,
$$
(k+1)p_{k+1}= \sum_{j=0}^{k} r_j p_{k-j}
$$
for all $k\in\mathbb{Z}_{+}$.
Theorem~\ref{thm:lcconj} is just a restatement using \eqref{panjer}
of \cite[Theorem 1]{Han88}. For completeness, we sketch the proof
of Hansen \cite{Han88},
which proceeds by induction.
Note that one only needs to prove the following
statement: If $Q^{\#}$ is strictly log-concave and $\lambda Q(1)^2> 2Q(2)$,
then
the compound Poisson measure CPo$(\lambda,Q)$ is strictly log-concave.
The general case follows by taking limits.
By assumption, $\lambda Q(1)^2> 2Q(2)$,
which can be rewritten as $r_0^2> r_1$,
and hence,
$$
p_1^2-p_0 p_2 = \frac{p_0^2}{2} (r_0^2 - r_1) > 0 .
$$
This initializes the induction procedure by showing that the subsequence
$(p_0,p_1,p_2)$ is strictly log-concave.
Hansen \cite{Han88} developed the following identity,
which can be verified using
the recursion \eqref{panjer}: Setting $p_{-1}=0$,
\begin{equation}\label{eq:hansen}\begin{split}
m(m+2) [p_{m+1}^2-p_{m} p_{m+2}]
&= p_{m+1} (r_0 p_{m}-p_{m+1}) \\
&\quad+ \sum_{l=0}^m \sum_{k=0}^l
(p_{m-l}p_{m-k-1}-p_{m-k}p_{m-l-1}) (r_{k+1}r_{l}-r_{l+1}r_{k}) .
\end{split}\end{equation}
Observe that each term in the double sum is positive as a
consequence of the induction hypothesis (namely, that the
subsequence $(p_0,p_1,\ldots,p_{m+1})$ is strictly log-concave),
and the strict log-concavity of $r$.
To see that the first term is also positive, note that the induction hypothesis
implies that $p_{k+1}/p_k$ is decreasing for $k\leq m+1$;
hence,
$$
r_0=\frac{p_1}{p_0} >\frac{p_{m+1}}{p_m} .
$$
Thus it is shown that $p_{m+1}^2 > p_{m} p_{m+2}$, which proves the theorem.
\end{proof}
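For a numerical sanity check (an illustration, with an arbitrary log-concave choice of $Q$), the recursion \eqref{panjer} can be used directly to compute compound Poisson probabilities and to test log-concavity on both sides of the condition $\lambda Q(1)^2\geq 2Q(2)$:

```python
import math

def cpo_pmf(lam, Q, size):
    # Panjer recursion: k p_k = lam * sum_{j=1}^k j Q(j) p_{k-j}, p_0 = e^{-lam}
    p = [math.exp(-lam)] + [0.0] * (size - 1)
    for k in range(1, size):
        p[k] = lam * sum(j * Q.get(j, 0.0) * p[k - j]
                         for j in range(1, k + 1)) / k
    return p

Q = {1: 0.6, 2: 0.3, 3: 0.1}   # log-concave: Q(2)^2 = 0.09 >= Q(1) Q(3) = 0.06
lam_ok = 2.0                   # satisfies lam Q(1)^2 = 0.72 >= 2 Q(2) = 0.6
lam_bad = 0.5                  # violates the necessary condition of Lemma 5.2

p = cpo_pmf(lam_ok, Q, 60)
lc_ok = all(p[k]**2 >= p[k - 1] * p[k + 1] - 1e-18
            for k in range(1, 40))

q = cpo_pmf(lam_bad, Q, 10)
lc_bad = q[1]**2 >= q[0] * q[2]   # fails: r_0^2 < r_1 when lam Q(1)^2 < 2 Q(2)
```

With $\lambda=2$ the computed pmf is log-concave, while with $\lambda=0.5$ log-concavity already fails at $k=1$, in agreement with Theorem~\ref{thm:lcconj} and Lemma~\ref{lem:nec}.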
We note that Hansen's remarkable identity \eqref{eq:hansen}
is reminiscent of (although more complicated than) an identity
that can be used to prove the well-known fact that the convolution
of two log-concave sequences is log-concave. Indeed,
as shown for instance in Liggett \cite{Lig97}, if $c=a\star b$, then,
$$
c_k^2-c_{k-1}c_{k+1}
= \sum_{i<j} (a_i a_{j-1}-a_{i-1} a_{j}) (b_{k-i} b_{k-j+1}-b_{k-i+1} b_{k-j}).
$$
Observe that \eqref{panjer} can be interpreted as saying that the
size-biased version of $C_Q \Pi_{\lambda}$ is the convolution
of the sequence $r$ with the sequence $p$.
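Liggett's identity is straightforward to verify numerically for short nonnegative sequences, with out-of-range entries treated as zero (an illustrative check; the sequences are arbitrary):

```python
def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def at(s, i):
    # sequence entry with the convention s_i = 0 outside the index range
    return s[i] if 0 <= i < len(s) else 0.0

a = [0.2, 0.5, 0.3]
b = [0.1, 0.4, 0.3, 0.2]
c = conv(a, b)

max_err = 0.0
for k in range(1, len(c) - 1):
    lhs = c[k]**2 - c[k - 1] * c[k + 1]
    rhs = sum((at(a, i) * at(a, j - 1) - at(a, i - 1) * at(a, j))
              * (at(b, k - i) * at(b, k - j + 1) - at(b, k - i + 1) * at(b, k - j))
              for i in range(len(a) + 1) for j in range(i + 1, len(a) + 2))
    max_err = max(max_err, abs(lhs - rhs))
```

When $a$ and $b$ are log-concave, every summand on the right is nonnegative, which recovers the preservation of log-concavity under convolution.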
\section{Applications to Combinatorics}
\label{sec:applns}
There are numerous examples of ultra-log-concave sequences
in discrete mathematics, and also many examples of interesting
sequences where ultra-log-concavity is conjectured.
The above maximum entropy results for ultra-log-concave
probability distributions on $\mathbb{Z}_{+}$ yield bounds on the
``spread'' of such ultra-log-concave sequences,
as measured by entropy.
Two particular examples are considered below.
\subsection{Counting independent sets in a claw-free graph}
Recall that for a graph $G=(V,E)$, an independent set is a subset of the
vertex set $V$ such that no two elements of the subset are connected by an
edge in $E$. The collection of independent sets of
$G$ is denoted $\mathcal{I}(G)$.
Consider a graph $G$ on a randomly weighted
ground set, i.e., associate with each $i\in V$ the random weight
$X_i$ drawn from a probability distribution $Q$ on ${\mathbb {N}}$, and suppose
the weights $\{X_i:i\in V\}$ are independent. Then
for any independent set $I\in \mathcal{I}(G)$, its weight is given by
the sum of the weights of its elements,
$$
w(I)=\sum_{i\in I} X_i \mbox{$ \;\stackrel{\mathcal{D}}{=}\; $} \sum_{i=1}^{|I|} X_i',
$$
where $\mbox{$ \;\stackrel{\mathcal{D}}{=}\; $}$ denotes equality in distribution,
and $X_i'$ are i.i.d. random variables drawn from $Q$.
For the weight of a random independent set $\mathbb{I}$ (picked uniformly
at random from $\mathcal{I}(G)$), one similarly has,
$$
w(\mathbb{I})=\sum_{i\in \mathbb{I}} X_i \mbox{$ \;\stackrel{\mathcal{D}}{=}\; $} \sum_{i=1}^{|\mathbb{I}|} X_i',
$$
and the latter, by definition, has distribution $C_Q P$, where
$P$ is the probability distribution on $\mathbb{Z}_{+}$ induced by $|\mathbb{I}|$.
The following result of Hamidoune \cite{Ham90}
(see also Chudnovsky and Seymour \cite{CS07} for a generalization
and different proof) connects this discussion with ultra-log-concavity.
Recall that a claw-free graph is a graph that does not
contain the complete bipartite graph $K_{1,3}$ as an induced subgraph.
\begin{theorem}[Hamidoune \cite{Ham90}]\label{thm:hamidoune}
For a claw-free finite graph $G$, the sequence $\{I_k\}$,
where $I_k$ is the number of
independent sets of size $k$ in $G$, is ultra-log-concave.
\end{theorem}
Clearly, Theorem~\ref{thm:hamidoune} may be restated as follows:
For a random independent set $\mathbb{I}$,
$$
P(k):=\text{Pr}\{|\mathbb{I}|=k\} \propto I_k,
$$
is an ultra-log-concave probability distribution. This yields the
following corollary.
\begin{corollary}\label{cor:graph}
Suppose the graph $G$ on the ground set $V$ is claw-free.
Let $\mathbb{I}$ be a random independent set,
and let the average cardinality of $\mathbb{I}$
be $\lambda$. Suppose the elements of the ground set are
given i.i.d. weights drawn from a probability distribution $Q$ on ${\mathbb {N}}$,
where $Q$ is log-concave with $Q(1)>0$ and $\lambda Q(1)^2\geq 2Q(2)$.
If $W$ is the random weight assigned to $\mathbb{I}$, then,
$$
H(W) \leq H(C_Q \Pi_{\lambda}).
$$
\end{corollary}
If $Q$ is the unit mass at 1, then $W=|\mathbb{I}|$,
and Corollary~\ref{cor:graph}
gives a bound on the entropy of the cardinality of a
random independent set in a claw-free graph. That is,
\begin{equation*}
H(|\mathbb{I}|) \leq H(\Pi_{\lambda}) ,
\end{equation*}
where $\lambda=E|\mathbb{I}|$. Observe that this bound
is independent of $n$ and depends {\em only} on the average
size of a random independent set, which suggests that
it could be of utility in studying sequences associated with
graphs on large ground sets.
And, although the entropy of a Poisson (or compound Poisson)
measure cannot easily be expressed in closed form, there are
various simple bounds \cite[Theorem 8.6.5]{CT06:book} such as,
\begin{equation}\label{poi-ent-bd}
H(\Pi_{\lambda}) \leq \frac{1}{2} \log \bigg[2\pi e \bigg(\lambda+\frac{1}{12}\bigg)\bigg],
\end{equation}
as well as good approximations for large $\lambda$; see, e.g.,
\cite{Kne98,JS99,Fla99}.
One way to use this bound is via the
following crude relaxation: Bound the average
size $\lambda$ of a random independent set by the independence
number $\alpha(G)$ of $G$, which is defined as
the size of a largest independent set of $G$. Then,
\begin{equation}
H(|\mathbb{I}|) \leq \frac{1}{2} \log \bigg[2\pi e \bigg(\alpha(G)+\frac{1}{12}\bigg)\bigg],
\end{equation}
which can clearly be much tighter than the trivial bound,
$H(|\mathbb{I}|) \leq \log (\alpha(G)+1)$,
obtained from the uniform distribution on $\{0,1,\ldots,\alpha(G)\}$, when $\alpha(G)\geq 16$.
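As a quick numerical check of the bound \eqref{poi-ent-bd} (an illustration only; the Poisson pmf is truncated), one can compare the bound with the directly computed entropy for a few values of $\lambda$:

```python
import math

def poisson_entropy(lam, K=200):
    # H(Pi_lam), with the pmf truncated at K terms
    p, ent = math.exp(-lam), 0.0
    for k in range(K):
        if p > 0.0:
            ent -= p * math.log(p)
        p *= lam / (k + 1)   # Pi_lam(k+1) = Pi_lam(k) * lam/(k+1)
    return ent

checks = []
for lam in [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]:
    bound = 0.5 * math.log(2 * math.pi * math.e * (lam + 1.0 / 12))
    checks.append(poisson_entropy(lam) <= bound)
```

The bound holds in each case, and for large $\lambda$ it is quite sharp, reflecting the Gaussian approximation of the Poisson distribution.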
\subsection{Mason's conjecture}
Recall that a matroid $M$ on a
finite ground set $[n]$ is a collection of subsets of $[n]$,
called ``independent sets''\footnote{Note that although
graphs have associated cycle matroids, there is no connection
between independent
sets of matroids and independent sets of graphs; indeed, the latter are often
called ``stable sets'' in the matroid literature to distinguish the two.},
satisfying the following:
(i) The empty set is independent.
(ii) Every subset of an independent set is independent.
(iii) If $A$ and $B$ are two independent sets and $A$
has more elements than $B$,
then there exists an element in $A$ which is not in $B$
and when added to $B$ still
gives an independent set.
Consider a matroid $M$ on a randomly weighted
ground set, i.e., associate with each $i\in[n]$ the random weight
$X_i$ drawn from a probability distribution $Q$ on ${\mathbb {N}}$, and suppose
the weights $\{X_i:i\in[n]\}$ are independent. As before,
for any independent set $I\in M$, its weight is given by
the sum of the weights of its elements,
and the weight of a random independent set $\mathbb{I}$ (picked uniformly
at random from $M$), is,
$$
w(\mathbb{I})=\sum_{i\in \mathbb{I}} X_i \mbox{$ \;\stackrel{\mathcal{D}}{=}\; $} \sum_{i=1}^{|\mathbb{I}|} X_i',
$$
where the $X_i'$ are i.i.d.\ random variables drawn from $Q$.
Then $w(\mathbb{I})$
has distribution $C_Q P$, where
$P$ is the probability distribution on $\mathbb{Z}_{+}$ induced by $|\mathbb{I}|$.
\begin{conjecture}[Mason \cite{Mas72}]\label{conj:mason}
The sequence $\{I_k\}$, where $I_k$ is the number
of independent sets of size $k$ in
a matroid on a finite ground set, is ultra-log-concave.
\end{conjecture}
Strictly speaking, Mason's original conjecture asserts ultra-log-concavity of some finite order
(not defined in this paper) whereas this paper is only concerned with ultra-log-concavity of
order infinity; however the slightly weaker form of the conjecture
stated here is still difficult and open. The only special cases in which Conjecture~\ref{conj:mason}
is known to be true are for matroids whose rank (i.e., cardinality of the largest independent
set) is 6 or smaller (as proved by Zhao \cite{Zha85}), and for matroids on a ground set of size 11 or
smaller (as proved by Kahn and Neiman \cite{KN11}). Very recently, Lenz \cite{Len11}
proved that the sequence $\{I_k\}$ is strictly log-concave, which is clearly a weak form of
Conjecture~\ref{conj:mason}.
Conjecture~\ref{conj:mason} equivalently says
that, for a random independent set $\mathbb{I}$,
the distribution, $P(k)=
\text{Pr}\{|\mathbb{I}|~=~k\} \propto I_k,
$
is ultra-log-concave. This yields the
following corollary.
\begin{corollary}\label{cor:matroid}
Suppose the matroid $M$ on the ground set $[n]$ satisfies Mason's conjecture.
Let $\mathbb{I}$ be a random independent set of $M$,
and let the average cardinality of $\mathbb{I}$
be $\lambda$. Suppose the elements of the ground set are
given i.i.d. weights drawn from a probability distribution $Q$ on ${\mathbb {N}}$,
where $Q$ is log-concave and satisfies $Q(1)>0$ and $\lambda Q(1)^2\geq 2Q(2)$.
If $W$ is the random weight assigned to $\mathbb{I}$, then,
$$
H(W) \leq H(C_Q \Pi_{\lambda}).
$$
\end{corollary}
Of course, if $Q$ is the unit mass at 1,
Corollary~\ref{cor:matroid} gives (modulo Mason's conjecture) a bound
on the entropy of the cardinality of a
random independent set in a matroid. That is,
\begin{equation*}
H(|\mathbb{I}|) \leq H(\Pi_{\lambda}) ,
\end{equation*}
where $\lambda=E|\mathbb{I}|$.
As in the case of graphs, this bound is independent of $n$ and
can be estimated in terms of the average size of a random independent set
(and hence, more loosely, in terms of the matroid rank)
using the Poisson entropy bound \eqref{poi-ent-bd}.
\section{Extensions and Conclusions}
\label{sec:disc}
The main results in this paper describe the solution
of a discrete entropy maximization problem, under both
shape constraints involving log-concavity and constraints
on the mean. Different entropy problems involving
log-concavity of continuous densities have also been
studied by Cover and Zhang \cite{cover2}
and by Bobkov and Madiman \cite{BM11:it},
using different methods and motivated by
different questions than those in this work.
The primary motivation for this work was the
development of an information-theoretic approach to discrete limit laws,
and specifically those corresponding to compound Poisson limits.
Above we have shown that, under appropriate conditions,
compound Poisson distributions have maximum entropy
within a natural class. This is analogous
to the maximum entropy property of the Gaussian and
Poisson measures, and their corresponding roles in
Gaussian and Poisson approximation, respectively.
Moreover, the techniques introduced here --
especially the introduction and analysis
of a new score function that naturally connects
with the compound Poisson family -- turn out
to play a central role in the development of an
information-theoretic picture of compound Poisson
limit theorems and approximation bounds
\cite{johnson22}.
After a preliminary version of this paper was made
publicly available \cite{jkm-arxiv},
Y.\ Yu \cite{Yu09:cp} provided different proofs
of our Theorems~\ref{thm:mainpoi} and \ref{thm:mainber},
under less restrictive conditions, and using
a completely different mathematical approach.
Also, in the first version of \cite{jkm-arxiv},
motivated in part by the results of Lemma~\ref{lem:nec}
and Theorems~\ref{thm:qgeom} and \ref{thm:q2pt},
we conjectured that the compound Poisson measure
CPo$(\lambda,Q)$ is log-concave,
if $Q$ is log-concave and $\lambda Q(1)^2\geq 2Q(2)$.
Y.\ Yu \cite{Yu09:cp} subsequently established
the truth of the conjecture by pointing out
that it could be proved by an application
of the results of Hansen in \cite{Han88}.
Theorem~\ref{thm:lcconj} in Section~5 is
a slightly more general version of that
earlier conjecture. Note that in order
to prove the conjecture it is not necessary
to reduce the problem to the strictly log-concave
case (as done in the proof
of Theorem~\ref{thm:lcconj}),
because the log-concavity of $Q$ implies
a bit more than log-concavity for $Q^{\#}$.
Indeed, the following variant of
Theorem~\ref{thm:lcconj} is easily proved:
If $Q$ is log-concave with $Q(1)>0$ and $\lambda Q(1)^2> 2Q(2)$,
then the compound Poisson measure
CPo$(\lambda,Q)$ is strictly log-concave.
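This statement is also easy to probe numerically. The Python sketch below builds the compound Poisson probability mass function by truncated convolution and verifies strict log-concavity; the compounding law $Q$, the value of $\lambda$, and the truncation cutoffs are illustrative choices satisfying $\lambda Q(1)^2>2Q(2)$, not quantities taken from the text.

```python
import math

# Illustrative log-concave compounding law on {1,2}, and a lambda with
# lam * Q(1)^2 = 0.98 > 2 * Q(2) = 0.6, as the variant requires.
Q = {1: 0.7, 2: 0.3}
lam = 2.0
K = 40   # truncate the support at K (the lost tail mass is negligible)
N = 80   # truncate the Poisson sum over the number of summands

# P(k) = sum_n e^{-lam} lam^n / n! * Q^{*n}(k), with Q^{*0} = delta_0.
P = [0.0] * (K + 1)
conv = [0.0] * (K + 1)
conv[0] = 1.0
for n in range(N + 1):
    w = math.exp(-lam) * lam**n / math.factorial(n)
    for k in range(K + 1):
        P[k] += w * conv[k]
    nxt = [0.0] * (K + 1)          # conv <- conv * Q
    for k in range(K + 1):
        if conv[k] > 0.0:
            for j, qj in Q.items():
                if k + j <= K:
                    nxt[k + j] += conv[k] * qj
    conv = nxt

# Strict log-concavity P(k)^2 > P(k-1) P(k+1), away from the numerical tail.
for k in range(1, K):
    if P[k + 1] > 1e-15:
        assert P[k] ** 2 > P[k - 1] * P[k + 1]
print("strictly log-concave on the bulk of the support")
```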
In closing we mention another possible
direction in which the present results
may be extended.
Suppose that the compounding distribution $Q$
in the setup described in Section~2 is supported
on $\mathbb R$ and has a density with respect to
Lebesgue measure. The definition of compound distributions
$C_Q P$ (including compound Poissons)
continues to make sense for probability distributions
$P$ on the nonnegative integers, but these now clearly
are of mixed type, with a continuous component
and an atom at 0. Furthermore,
limit laws for sums converging to such mixed-type compound Poisson
distributions hold exactly as in the discrete case.
It is natural and interesting to ask for such
`continuous' analogs of the present maximum entropy
results, particularly as neither their form nor method
of proof are obvious in this case.
\section*{Acknowledgement}
We wish to thank Zhiyi Chi for sharing his unpublished compound binomial
counter-example mentioned in equation~(\ref{eq:chi}),
and David G. Wagner and Prasad Tetali for useful comments.
Some of the ideas leading to the combinatorial
connections described in Section~6 were sparked by the participation
of the third-named author in the Workshop on Combinatorial and Probabilistic
Inequalities at the Isaac Newton Institute for Mathematical Sciences in
Cambridge, UK, in June 2008,
and in the Jubilee Conference for Discrete Mathematics
at the Banasthali Vidyapith in Rajasthan, India, in January 2009;
he expresses his gratitude to the
organizers of both these events for their hospitality.
The study of mechanical properties of physical membranes, which are two-dimensional surfaces embedded in three-dimensional space, has a prominent experimental platform in graphene,\cite{NF04} a single layer of carbon atoms arranged in a hexagonal crystalline order.\cite{CG09,VKG10} Graphene is a stiff membrane, which presents long wavelength modulations of the out-of-plane displacements, commonly referred to as ripples.\cite{MR07,BG08,FLK07} The impact of corrugation on electronic transport, as well as the mechanical properties of graphene, are subjects of intense investigation.\cite{KC08,MO08,G09b,EK10,SGG11,MO10,CG10}
Of special interest is to understand the effects of an external strain applied to the membrane.\cite{GP88} This is so because most of the graphene samples are subject to some finite amount of strain, due either to the pinning to the substrate (for samples on SiO$_2$, for example) or to the electrostatic force due to the gate on suspended samples. In particular, the existence of tension affects the dispersion of flexural phonons.\cite{MO10,CG10} In fact, whereas in the harmonic approximation and in the absence of strain, the dispersion relation of flexural phonons is quadratic, $\omega_{fl}(q)\sim q^2$, strain introduces a characteristic wave-vector $q_s$ where the dispersion changes from linear (for $q<q_s$) to quadratic (for $q>q_s$). However, anharmonic coupling between bending and stretching modes is important and leads to a further renormalization of the mode dispersion, especially at long wavelengths.\cite{NP87,AL88,DR92,KM09,LF09,BH10}
In this paper, we study the effect of tension on the flexural phonons of a 2D membrane. For this aim, we include a strain field in the free energy, which is studied first in the harmonic approximation and then including anharmonic effects, using the self-consistent screening approximation\cite{DR92,G09c,ZRFK10} (SCSA). The results for a stiff membrane such as graphene are compared to those for a softer membrane, for which the bending rigidity has been strongly reduced. The validity of the continuum elastic theory is checked by comparing the SCSA results to atomistic Monte Carlo (MC) simulations. Our numerical results show that, for stiff membranes such as graphene, small amounts of tension can be used to suppress the anharmonic effects. The case of compressional strain is much more complex due to its highly non-linear behavior,\cite{SS02,CM03,MG99,BD11} and it cannot be accounted for in the SCSA. However, we can still use atomistic MC simulations for this case and, in fact, we find a highly non-trivial behavior for the correlation function of a compressed graphene membrane, with no crossover to a power-law behavior as in the tensioned case.
The paper is organized as follows. In Sec. \ref{Sec:Harmonic} we discuss, in the harmonic approximation, the effect of an external strain in the system, and compare the results to the unstrained case. In Sec. \ref{Sec:SCSA} we consider the anharmonic coupling between bending and stretching modes in the SCSA. In Sec. \ref{Sec:MC} the results of the continuum elastic theory are compared to atomistic MC simulations for the height-height correlation function. The MC simulation for a compressed membrane is also included. The main conclusions of our work are summarized in Sec. \ref{Sec:Conclusions}.
\section{Harmonic approximation}\label{Sec:Harmonic}
In the absence of any external strain, the flat phase of a 2D membrane at sufficiently long scales is well described by a free energy that is a sum of a bending and a stretching part\cite{NPW04}
\begin{equation}\label{Eq:F_u=0}
{\cal F}[{\bf u},h]=\frac{1}{2}\int d^2{\bf r} \left[\kappa \left(\nabla^2h\right)^2+2\mu u_{\alpha\beta}^2+\lambda u_{\alpha\alpha}^2\right]
\end{equation}
where $\kappa$ is the bending rigidity, $\lambda$ and $\mu$ are the first Lam\'e constant and the shear modulus, respectively,\footnote{In most part of this paper, we use the typical parameters for graphene at room temperature: $\kappa\approx1.1$eV, $\lambda\approx2.4$eV\AA$^{-2}$ and $\mu\approx9.95$eV\AA$^{-2}$.} and $u_{\alpha\beta}$ is the {\it internal} strain tensor
\begin{equation}\label{Eq:StrainTensor}
u_{\alpha\beta}\approx \frac{1}{2}(\partial_{\alpha}u_{\beta}+\partial_{\beta}u_{\alpha}+\partial_{\alpha}h\partial_{\beta}h).
\end{equation}
In the harmonic approximation, the bending and stretching modes are decoupled. From Eq. (\ref{Eq:F_u=0}), one can calculate the correlation function for the out-of-plane displacements $h({\bf r})$ which, in Fourier space, reads
\begin{equation}\label{Eq:G0_u=0}
\langle |h({\bf q}) |^2 \rangle_{u=0}=\frac{k_BT}{\kappa q^4},
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ is the temperature, and the suffix $u=0$ in the average denotes the absence of any external strain. The effect of an external strain applied to the membrane is modeled by the inclusion of a new term in the theory, $\tau_{\alpha\beta}$, that couples to the internal strain tensor. Therefore, we use the following expansion for the free energy
\begin{equation}\label{Eq:F}
{\cal F}[{\bf u},h,\tau_{\alpha\beta}]=\frac{1}{2}\int d^2{\bf r} \left[\kappa \left(\nabla^2h\right)^2+2\mu u_{\alpha\beta}^2+\lambda u_{\alpha\alpha}^2+\tau_{\alpha\beta}u_{\alpha\beta}\right]
\end{equation}
where $\tau_{\alpha\beta}=\lambda\delta_{\alpha\beta}u^{ext}_{\alpha\beta}+2\mu u^{ext}_{\alpha\beta}$ is expressed in terms of the {\it external} strain tensor $u^{ext}_{\alpha\beta}$. In the harmonic approximation, the Fourier component of the height-height correlation function reads simply
\begin{equation}\label{Eq:G0}
G_0({\bf q})\equiv \langle |h({\bf q}) |^2 \rangle_{u}=\frac{k_BT}{q^2\left (\kappa q^2+\lambda u^{ext}_{\alpha\alpha}+2\mu u^{ext}_{\alpha\beta}\frac{q_{\alpha}q_{\beta}}{|{\bf q}|^2}\right)}.
\end{equation}
For an isotropic expansion and in the long wavelength limit, we can approximate
\begin{equation}
u^{ext}_{\alpha\beta}=u\delta_{\alpha\beta}
\end{equation}
where $u=\delta S/2S$ accounts for the uniform dilation of the membrane, $S$ being the membrane surface and $\delta S$ the change in area due to the application of strain. This reduces $G_0({\bf q})$ to
\begin{equation}\label{Eq:G0uniform}
G_0({\bf q})=\frac{k_BT}{q^2[\kappa q^2+2(\lambda+\mu)u]}
\end{equation}
where $\tau=2(\lambda+\mu)u$ is the stress of the system. Notice that for the unstrained case ($u=0$), as given by Eq. (\ref{Eq:G0_u=0}), the mean square amplitude of the out-of-plane displacement diverges, in the harmonic approximation, as\cite{NPW04} $\langle h^2\rangle_{u=0} \propto L^2$, where $\langle h^2\rangle=\sum_{{\bf q}}\langle | h({\bf q})|^2\rangle$ and $L$ is the sample size. Furthermore, the normal-normal correlation $\langle {\bf n}({\bf r})\cdot{\bf n}(0)\rangle$ diverges logarithmically as $r\rightarrow \infty$. However, for a membrane under uniform dilation, we can use Eq. (\ref{Eq:G0uniform}) and obtain
\begin{equation}
\langle h^2\rangle= \frac{k_BT}{8\pi(\lambda+\mu)u}\log\left(1+\frac{2(\lambda+\mu)u}{\kappa q_{min}^2}\right)\propto \frac{\log (L^2u)}{u}
\end{equation}
where $q_{min}=2\pi/L$ is an infrared cutoff of the order of the inverse sample size $L$. Furthermore, from the normal-normal correlation function $\langle |{\bf n}({\bf q})|^2\rangle=k_BT/[\kappa(q^2+q_u^2)]$, where $q_u=\sqrt{2(\lambda+\mu)u/\kappa}$ we obtain that
\begin{equation}
\langle {\bf n}({\bf r})\cdot{\bf n}(0)\rangle = \frac{k_BT}{\kappa}\int \frac{d^2{\bf q}}{(2\pi)^2} \frac{e^{i{\bf q}\cdot{\bf r}}}{q^2+q_u^2}=\frac{ k_BT}{2\pi\kappa}K_0(q_ur),
\end{equation}
where $K_0(x)$ is a modified Bessel function of the second kind. Taking into account that, for $x\gg 1$, $K_0(x)\approx \sqrt{\pi/2x}e^{-x}$, we obtain that, for $r\rightarrow \infty$
\begin{equation}
\langle {\bf n}({\bf r})\cdot{\bf n}(0)\rangle \approx \frac{ k_BT}{\kappa}\frac{e^{-q_ur}}{\sqrt{8\pi q_ur}}.
\end{equation}
Therefore, the application of external strain, as expected, guarantees the long range 2D order of the membrane. Furthermore, $q_u^{-1}$ defines a length-scale that separates the strain dominated from the unstrained regions of the correlation functions.
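To make these scales concrete, $q_u$ and the strained $\langle h^2\rangle$ can be evaluated directly from the expressions above. A minimal Python sketch follows, using the graphene parameters quoted earlier; the sample size $L$ is an illustrative choice.

```python
import math

# Graphene parameters at room temperature (from the text):
# energies in eV, lengths in Angstrom.
kappa = 1.1        # bending rigidity, eV
lam   = 2.4        # first Lame constant, eV/A^2
mu    = 9.95       # shear modulus, eV/A^2
kBT   = 0.02585    # k_B T at T = 300 K, eV

def q_u(u):
    """Strain-induced crossover wave-vector q_u = sqrt(2(lam+mu)u/kappa)."""
    return math.sqrt(2.0 * (lam + mu) * u / kappa)

def h2(u, L):
    """<h^2> for a membrane of linear size L under uniform dilation u."""
    qmin = 2.0 * math.pi / L
    tau = 2.0 * (lam + mu) * u
    return kBT / (4.0 * math.pi * tau) * math.log(1.0 + tau / (kappa * qmin**2))

# A 1% dilation pushes the crossover to q_u ~ 0.47 1/A ...
print(q_u(1e-2))
# ... and <h^2> decreases monotonically with strain, as expected.
print(h2(1e-4, 1000.0), h2(1e-2, 1000.0))
```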
\section{Anharmonic effects: continuum elastic theory in the SCSA}\label{Sec:SCSA}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig/Fig-u.pdf}
\caption{(Color online) Correlation function in the harmonic approximation, $G_0({\bf q})$ (blue dashed line) and renormalized correlation function in the SCSA $G({\bf q})$ (full red line) for several values of strain $u$ from 0 to $10^{-2}$. Dotted-dashed vertical lines indicate $q_c\approx 0.24$ \AA$^{-1}$, according to the Ginzburg criterion, Eq. (\ref{Eq:GC}). Dotted vertical lines indicate the position of $q_s^h$ and dashed vertical lines mark the position of $q_s$ (see text).}
\label{Fig:G-u}
\end{figure}
In the previous section we have seen that the application of an external strain stabilizes, even in the harmonic approximation, the flat phase of a 2D membrane. But even in the absence of strain, it is known that the flat phase is stable. This is due to the anharmonic coupling between bending and stretching modes.\cite{NPW04} Therefore, anharmonic effects lead to a further renormalization of the characteristic lengths and elastic constants discussed in the previous section. In the following, we study the effect of anharmonicity on the correlation function of a strained 2D membrane. First, one notices that the in-plane phonons in the free energy Eq. (\ref{Eq:F}) can be integrated out exactly, which allows us to write an effective action in terms only of the $h$ fields\cite{RN91}
\begin{eqnarray}\label{Eq:Freal}
{\cal F}_{eff}[h,\tau_{\alpha\beta}]&=&\int d^2{\bf r} \left[\frac{1}{2}\kappa\left( \nabla^2h \right)^2 + \tau_{\alpha\beta}\partial_{\alpha}h\partial_{\beta}h\right ] \nonumber\\
&+&\frac{1}{8}Y\int d^2{\bf r} (P_{\alpha\beta}^T\partial_{\alpha}h\partial_{\beta}h+P_{\alpha\beta}^Tu^{ext}_{\alpha\beta})^2
\end{eqnarray}
where $Y=4\mu(\mu+\lambda)/(2\mu+\lambda)$ is the 2D Young modulus and $P_{\alpha\beta}^T=\delta_{\alpha\beta}-\partial_{\alpha}\partial_{\beta}/\nabla^2$ is the transverse projection operator. Eq. (\ref{Eq:Freal}) can be expressed in terms of the Fourier components of the height field, $h({\bf q})$. To the lowest order in $u^{ext}$ we obtain
\begin{widetext}
\begin{eqnarray}\label{Eq:FFourier}
{\cal F}_{eff}[h,\tau_{\alpha\beta}]&=&\frac{1}{2}\int \frac{d^2{\bf k}}{(2\pi)^2} k^2\left(\kappa k^2+\lambda u_{\alpha\alpha}^{ext}+2\mu \frac{k_{\alpha}k_{\beta}}{k^2}u_{\alpha\beta}^{ext}\right) |h({\bf k})|^2\nonumber\\
&+&\frac{1}{8}Y\int \frac{d^2{\bf k}_1}{(2\pi)^2}\int \frac{d^2{\bf k}_2}{(2\pi)^2}\int \frac{d^2{\bf k}_3}{(2\pi)^2} P_{\alpha\beta}^TP_{\gamma\delta}^T({\bf q}) k_{1\alpha}k_{2\beta}k_{3\gamma}k_{4\delta}[h({\bf k}_1)h({\bf k}_2)][h({\bf k}_3)h({\bf k}_4)]\nonumber\\
&+&\frac{1}{4}Y\int \frac{d^2{\bf k}_1}{(2\pi)^2}\int \frac{d^2{\bf k}_2}{(2\pi)^2} P_{\alpha\beta}^TP_{\gamma\delta}^T({\bf q})k_{1\alpha}k_{2\beta}u^{ext}_{\gamma\delta}({\bf q})h({\bf k}_1)h({\bf k}_2)
\end{eqnarray}
\end{widetext}
where ${\bf q}={\bf k}_1+{\bf k}_2$ and in the second term ${\bf k}_1+{\bf k}_2+{\bf k}_3+{\bf k}_4=0$. The first line of Eq. (\ref{Eq:FFourier}) is nothing but the bending part of the free energy in the harmonic approximation, from which we have defined the non-interacting correlation function $G_0({\bf q})$, as in Eq. (\ref{Eq:G0}) of the previous section. The second and third lines build the interaction term of the theory. The second line of Eq. (\ref{Eq:FFourier}) accounts for the four-point vertex, whereas the last term of this equation leads to a two-point vertex that renormalizes the propagator. This problem is similar to that of a polymerized membrane with long-range disorder, and can be treated in the SCSA.\cite{DR93}
For stiff membranes such as graphene, the anharmonic effects are quickly suppressed under the application of strain. Therefore, we assume that the renormalization of the propagator due to the vertices associated with the last term of Eq. (\ref{Eq:FFourier}) is weak, and that we can neglect this class of diagrams in our calculations. We will see in Sec. \ref{Sec:MC} that the correlation functions calculated with this assumption in the SCSA agree well with those obtained from atomistic MC simulations, justifying the simplification. Then, the renormalized correlation function can be calculated from a closed self-consistent set of two coupled integral equations for the self-energy\cite{DR92,G09c}
\begin{eqnarray}\label{Eq:SCSA}
\Sigma({\bf k})&=&2k_{\alpha}k_{\beta}k_{\gamma}k_{\delta}\int \frac{d^2{\bf q}}{(2\pi)^2} {\tilde R}_{\alpha\beta,\gamma\delta}({\bf q})G({\bf k}-{\bf q})\label{Eq:Sigma}\\
{\tilde R}_{\alpha\beta,\gamma\delta}({\bf q})&=&R_{\alpha\beta,\gamma\delta}({\bf q})-R_{\alpha\beta,\mu\nu}({\bf q})\Pi_{\mu\nu,\mu'\nu'}({\bf q}){\tilde R}_{\mu'\nu',\gamma\delta}({\bf q})
\label{Eq:Vertex}
\end{eqnarray}
where $G^{-1}({\bf q})=G_0^{-1}({\bf q})+\Sigma({\bf q})$ is the inverse of the {\it dressed} propagator, $\Pi_{\alpha\beta,\gamma\delta}({\bf q})$ are the vacuum polarization functions,
\begin{equation}
\Pi_{\alpha\beta,\gamma\delta}({\bf q})=\int \frac{d^2 {\bf k}}{(2\pi)^2} k_{\alpha}k_{\beta}(k_{\gamma}-q_{\gamma})(k_{\delta}-q_{\delta}) G({\bf k})G({\bf q}-{\bf k}),
\end{equation}
$R_{\alpha\beta,\gamma\delta}({\bf q})=(Y/2-\mu)P_{\alpha\beta}^TP_{\gamma\delta}^T+(\mu/2)(P_{\alpha\gamma}^TP_{\beta\delta}^T+P_{\alpha\delta}^TP_{\beta\gamma}^T)$ is the unrenormalized four-point interaction vertex and ${\tilde R}_{\alpha\beta,\gamma\delta}({\bf q})$ is the screened interaction.
The set of equations (\ref{Eq:Sigma})-(\ref{Eq:Vertex}) can be solved, at any wave-vector, following the method introduced in Ref. \onlinecite{ZRFK10}. Fig. \ref{Fig:G-u} shows the momentum dependence of $G_0({\bf q})$ and $G({\bf q})$ for different values of strain, $u=0,...,10^{-2}$. First, one notices in Fig. \ref{Fig:G-u}(a) that, without strain, the harmonic approximation is valid only in the short wavelength region, where $G({\bf q})\approx G_0({\bf q})$.\cite{ZRFK10} The Ginzburg criterion, which considers only the first-order correction to the correlation function, allows one to estimate the characteristic wave-vector $q_c$ above which the harmonic behavior applies. For 2D membranes, $q_c$ is approximately given by\cite{NPW04}
\begin{equation}\label{Eq:GC}
q_c=\sqrt{\frac{3k_BTY}{8\pi\kappa^2}}.
\end{equation}
For the parameters of graphene, $q_c\approx 0.24~\rm\AA^{-1}$ at room temperature, a value which is shown by the vertical dot-dashed lines in Fig. \ref{Fig:G-u}. This is the characteristic wave-vector at which the renormalized correlation function $G({\bf q})$ (full red line) separates from the harmonic approximation $G_0({\bf q})$ (dashed blue line), pointing out that anharmonic effects are very important at long length scales. If we consider the effect of external strain on the membrane, we can still distinguish between the harmonic and the anharmonic regimes, as shown in Fig. \ref{Fig:G-u}(b)-(f).\footnote{The inclusion of the last term of Eq. (\ref{Eq:FFourier}) in the calculation would lead to an external strain dependence of $q_c$, an effect that is neglected in the simple approximation used to obtain Eq. (\ref{Eq:GC}).} However, there also exists a characteristic scale at which the behavior of flexural phonons is dominated by strain effects. This scale manifests itself as a change in the slope of the correlation functions: $G_0({\bf q})$ changes from $\sim q^{-4}$ to $q^{-2}$ and $G({\bf q})$ changes from $q^{-4+\eta}$ also to $q^{-2}$, where $\eta$ is a characteristic exponent. In fact, we can observe in Fig. \ref{Fig:G-u}(b)-(f) that the region of intermediate momenta where $G_0(q)$ and $G(q)$ differ, and where anharmonic effects are therefore important, shrinks as $u$ grows. From these results we see that very small amounts of strain are enough to suppress the anharmonic effects.
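The quoted value of $q_c$ follows directly from Eq. (\ref{Eq:GC}); a minimal numerical check with the graphene parameters used in this work can be sketched as:

```python
import math

# Graphene parameters at room temperature (energies in eV, lengths in Angstrom)
kappa, lam, mu = 1.1, 2.4, 9.95
kBT = 0.02585                                   # k_B T at T = 300 K

Y = 4.0 * mu * (mu + lam) / (2.0 * mu + lam)    # 2D Young modulus
q_c = math.sqrt(3.0 * kBT * Y / (8.0 * math.pi * kappa**2))
print(Y, q_c)   # Y ~ 22 eV/A^2 and q_c ~ 0.24 1/A, as quoted in the text
```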
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{fig/Fig-A-T.pdf}
\caption{The temperature dependence of the parameter $A$ of Eq. (\ref{Eq:BR}), obtained from the SCSA correlation functions at different temperatures (red dots). The dashed line is a fit to Eq. (\ref{Eq:A}). For these plots, we have used the wave-vector $q=10^{-2}$~\AA$^{-1}$.}
\label{Fig:A-T}
\end{figure}
These results can be used to study the effect of strain on flexural (out-of-plane) phonons. The dispersion relation for flexural phonons of a 2D membrane under isotropic tension can be written as
\begin{equation}
\omega_{fl}({\bf q})=\sqrt{\frac{\kappa({\bf q})}{\rho}q^4+u\frac{2(\lambda+\mu)}{\rho}q^2}
\end{equation}
where $\rho$ is the density and $\kappa({\bf q})$ is the bending rigidity. In the harmonic approximation, $\kappa({\bf q})\equiv\kappa$ and the dispersion changes from linear to quadratic at a wave-vector equal to
\begin{equation}
q_s^h=\sqrt{\frac{2u(\mu+\lambda)}{\kappa}}.
\end{equation}
This characteristic wave-vector is denoted by the vertical dotted lines in Fig. \ref{Fig:G-u}(b)-(f).\footnote{Notice that $q_s^h$ coincides with the wave-vector $q_u$ discussed in the previous section.} However, anharmonic effects are important at long scales. To obtain analytical results, we use the effective Dyson equation for the correlation function\cite{FLK07,ZRFK10}
\begin{equation}\label{Eq:Ga}
G_a^{-1}({\bf q})=G_0^{-1}({\bf q})+\Sigma_a({\bf q}),
\end{equation}
where $G_a({\bf q})$ is an approximated correlation function dressed by the self-energy $\Sigma_a({\bf q})$, which is approximated by
\begin{equation}\label{Eq:Sigmaa}
\Sigma_a({\bf q})=Aq^4\left(\frac{q_0}{q} \right)^{\eta},
\end{equation}
where $A$ is some numerical factor, $\eta\approx 0.82$,\cite{DR92} and $q_0=2\pi\sqrt{Y/\kappa}$. From the approximation Eq. (\ref{Eq:Ga}) one can obtain the renormalized bending rigidity
\begin{equation}\label{Eq:BR}
\kappa_R({\bf q})=\kappa+k_BTA\left(\frac{q_0}{q}\right)^{\eta}.
\end{equation}
It is important to mention that the coefficient $A$ is temperature dependent. Notice that anharmonic effects are present in $G_a({\bf q})$ below a characteristic wave-vector $q^*$, which is the solution of $\Sigma_a(q^*)\approx G_0^{-1}(q^*)$. Assuming that $q_c$ is the only crossover wave-vector from harmonic to anharmonic behavior, and that $q^*\simeq q_c$, then one can easily obtain that the temperature dependence of the parameter $A$ in Eq. (\ref{Eq:Sigmaa}) follows the power-law $A\propto (k_BT/\kappa)^{\frac{\eta}{2}-1}$.\cite{K10} By fitting the SCSA correlation function for different temperatures to Eq. (\ref{Eq:Ga}), we find the dependence of the parameter $A$ on temperature, and the results are shown in Fig. \ref{Fig:A-T} (red dots). This allows us to define an approximate expression for the dimensionless parameter $A$, which is (using the elastic constants valid for graphene)
\begin{equation}\label{Eq:A}
A\approx 4.6 T[K]^{\frac{\eta}{2}-1}
\end{equation}
where $T[K]$ is the temperature expressed in Kelvin. The results are shown in Fig. \ref{Fig:A-T} by the dashed line, which fits the numerically obtained values of $A$ rather well. This confirms that the assumption $q^*\simeq q_c$ is indeed valid within the SCSA. The main message is that the bending rigidity Eq. (\ref{Eq:BR}) grows with temperature as
\begin{equation}\label{Eq:kappaT}
\kappa_R\propto T^{\eta/2}.
\end{equation}
This power-law behavior is similar to the temperature dependence found by Monte Carlo simulations in the harmonic regime.\cite{FLK07,ZLKF10} We emphasize, however, that here we assume that the parameters $\kappa$, $\mu$ and $\lambda$ of the Hamiltonian (\ref{Eq:F_u=0}) are independent of temperature. While $\lambda$ and $\mu$ are only weakly dependent on $T$, the temperature dependence of the bending rigidity $\kappa$ found in MC simulations is rather strong,\cite{ZKF09} and this is not accounted for by Eq. (\ref{Eq:kappaT}). The origin of this $T$-dependence probably lies beyond the continuum-medium approximation, and it is outside the scope of this work.
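The scaling of Eq. (\ref{Eq:kappaT}) follows algebraically from $A\propto T^{\eta/2-1}$, since $\kappa_R-\kappa=k_BTA(q_0/q)^{\eta}\propto T^{\eta/2}$. A short numerical check of this exponent (the fixed wave-vector is an illustrative choice):

```python
import math

kappa, lam, mu = 1.1, 2.4, 9.95   # graphene parameters (eV, Angstrom)
kB = 8.617e-5                     # Boltzmann constant in eV/K
eta = 0.82
Y = 4.0 * mu * (mu + lam) / (2.0 * mu + lam)
q0 = 2.0 * math.pi * math.sqrt(Y / kappa)

def delta_kappa(T, q=1e-2):
    """Anharmonic correction kappa_R - kappa at wave-vector q."""
    A = 4.6 * T ** (eta / 2.0 - 1.0)
    return kB * T * A * (q0 / q) ** eta

# Doubling T multiplies kappa_R - kappa by exactly 2^(eta/2) ~ 1.33.
ratio = delta_kappa(600.0) / delta_kappa(300.0)
print(ratio, 2.0 ** (eta / 2.0))
```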
Then, from Eq. (\ref{Eq:BR}) one observes that the slope of the dispersion relation $\omega_{fl}({\bf q})$ changes from $\sim q$ to $\sim q^{2-\eta/2}$ at the wave-vector solution of
\begin{equation}
\left[\kappa+k_BTA\left(\frac{q_0}{q_s} \right)^{\eta}\right]q_s^2=2u(\lambda+\mu).
\end{equation}
The values of $q_s$ for the strain values studied here are shown by the vertical dashed lines in Fig. \ref{Fig:G-u}(b)-(f). Notice that the characteristic wave-vector obtained from the approximation to the bending rigidity, Eq. (\ref{Eq:BR}), agrees well with the exact result of the SCSA equations.
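Since the left-hand side of the equation above is monotonically increasing in $q_s$, the crossover wave-vector can be found by simple bisection. A Python sketch follows (the strain value and the bracketing interval are illustrative choices; parameters are those of graphene at room temperature):

```python
import math

# Graphene parameters at T = 300 K (energies in eV, lengths in Angstrom)
kappa, lam, mu = 1.1, 2.4, 9.95
kBT, T = 0.02585, 300.0
eta = 0.82
Y = 4.0 * mu * (mu + lam) / (2.0 * mu + lam)
q0 = 2.0 * math.pi * math.sqrt(Y / kappa)
A = 4.6 * T ** (eta / 2.0 - 1.0)                 # fitted prefactor

def f(q, u):
    """kappa_R(q) q^2 - 2 u (lam + mu); its zero locates q_s."""
    kappa_R = kappa + kBT * A * (q0 / q) ** eta
    return kappa_R * q**2 - 2.0 * u * (lam + mu)

def q_s(u, lo=1e-6, hi=10.0):
    """Bisection on the monotonically increasing function f(., u)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid, u) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

u = 1e-3
qsh = math.sqrt(2.0 * u * (lam + mu) / kappa)    # harmonic crossover q_s^h
print(q_s(u), qsh)   # renormalization pushes q_s below q_s^h
```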
From the previous expressions, and imposing $q_s=q_c$, it is possible to find the critical value of the strain that is sufficient to suppress the anharmonic effects completely, at any wave-vector, and this is
\begin{equation}\label{Eq:uc}
u_c=\frac{3k_BT}{4\pi}\frac{\mu}{\kappa(2\mu+\lambda)}.
\end{equation}
For the parameters of graphene at room temperature, this corresponds to $u_c\approx 0.0025$. In fact, notice that $q_s^h$ and $q_s$ already coincide for $u=10^{-2}>u_c$ and that both are to the right of $q_c$ ($q_s,q_s^h>q_c$) [Fig. \ref{Fig:G-u}(f)], pointing out that anharmonic effects are already absent for this value ($\sim 1\%$) of external strain.
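Plugging the graphene parameters into Eq. (\ref{Eq:uc}) reproduces this number directly; a one-line numerical check (parameters as elsewhere in the text):

```python
import math

# Graphene parameters at room temperature (energies in eV, lengths in Angstrom)
kappa, lam, mu = 1.1, 2.4, 9.95
kBT = 0.02585

u_c = 3.0 * kBT * mu / (4.0 * math.pi * kappa * (2.0 * mu + lam))
print(u_c)   # ~0.0025: a tensile strain of ~0.25% suffices for graphene
```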
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig/Fig-graph-soft-a.pdf}
\includegraphics[width=0.47\textwidth]{fig/Fig-graph-soft-b.pdf}
\caption{(Color online) $G_0({\bf q})$ for graphene (dashed blue line) and for a softer membrane (dashed gray line), and $G({\bf q})$ for graphene (full red line) and for a softer membrane (full black line). In the two cases, $u=10^{-8}$. In (a) we have used, for the soft membrane, 1/100 times the bending rigidity $\kappa$ valid for graphene at this temperature, whereas $\mu$ and $\lambda$ are the same as in graphene. In this case, $q_c\approx 0.24~\rm\AA^{-1}$ for graphene (vertical red dotted-dashed line) and $q_c\approx 24~\rm\AA^{-1}$ for the soft membrane (vertical black dotted-dashed line). In (b) we compare the correlation functions for graphene to those of a softer membrane for which all the elastic constants are reduced to $1\%$ of their values in graphene. In this case, $q_c\approx 2.4~\rm\AA^{-1}$ for the soft membrane, as indicated by the position of the vertical black dotted-dashed line. The vertical dotted and dashed lines represent the positions of $q_s^h$ and $q_s$, respectively, as in Fig. \ref{Fig:G-u}.}
\label{Fig:graph-soft}
\end{figure}
Finally, we compare the results for graphene to those of a softer membrane. In Fig. \ref{Fig:graph-soft}(a) we show $G_0({\bf q})$ and $G({\bf q})$ for graphene, and for a membrane with the same $\mu$ and $\lambda$ as graphene, but with a bending rigidity $\kappa$ which is $1\%$ of the corresponding value for graphene. In this case, we see that $q_c^{soft}>q_c^{graph}$, as indicated by the position of the vertical dotted-dashed lines (red for graphene and black for the soft membrane). This means that anharmonic effects manifest themselves at larger wave-vectors for a soft membrane. Furthermore, the change in slope of the harmonic $G_0({\bf q})$ occurs at higher wave-vectors for the soft membrane (vertical black dotted line) as compared to graphene (vertical red dotted line). However, the out-of-plane component of the dispersion for flexural phonons dominates in a wider region of momenta for the soft membrane as compared to graphene, $q_s^{soft}<q_s^{graph}$, as can be seen from the relative position of $q_s$ for graphene (dashed red line) with respect to that of a soft membrane (dashed black line). Notice that in the latter case, the strain necessary to suppress all the anharmonic effects is $u_c\approx 0.25$, also two orders of magnitude larger than for graphene. In Fig. \ref{Fig:graph-soft}(b) we compare the correlation functions of graphene to those of a softer membrane where not only the bending rigidity $\kappa$, but also the Lam\'e constants $\lambda$ and $\mu$ have been reduced to $1\%$ of their values in graphene. The situation is similar to that described for Fig. \ref{Fig:graph-soft}(a), with the difference that the wave-vectors at which anharmonic and strain effects appear are reduced, as can be seen from the respective shifts to the left of the black dotted-dashed and dashed lines in Fig. \ref{Fig:graph-soft}(b) with respect to (a). Furthermore, for the parameters of Fig. 
\ref{Fig:graph-soft}(b), the change in slope of the harmonic $G_0({\bf q})$ is the same in the two cases (as shown by the vertical dotted line). Finally, we notice that increasing the temperature acts like an effective softening of the membrane, due to the reduction of the ratio $\kappa/k_BT$.
\section{Comparison to atomistic Monte Carlo simulations}\label{Sec:MC}
In this section we compare the results obtained by using the continuum elastic theory methods as described in Sec. \ref{Sec:Harmonic} and \ref{Sec:SCSA}, with the results of the Monte Carlo simulations of graphene. The correlation function $G({\bf q})$ for graphene has been calculated as described in Ref. \onlinecite{LF09} for unstrained graphene by means of MC simulations based on an accurate interatomic potential for carbon.\cite{LF05} The simulations are done for a sample of 37888 atoms in a roughly square sample of $314.82 \times 315.24$ \AA$^2$, in the $NPT$ isothermal-isobaric ensemble. The simulations for the strained case were done for smaller samples of 8640 atoms ($147.57\times153.36$~\AA$^2$), which limits the range of accessible wave-vectors with respect to the unstrained case. In Fig. \ref{Fig:G-MC} we compare the correlation functions obtained from numerical simulations (full lines) to the SCSA results (dashed lines) for different values of the strain. To highlight the change of slope of $G({\bf q})$ due to strain, in Fig. \ref{Fig:G-MC} we plot $q^2G({\bf q})$, which becomes flat when $G({\bf q})\propto q^{-2}$ as discussed in Sec. \ref{Sec:SCSA}. First, one notices that the $G({\bf q})$ calculated by atomistic simulations deviates from those calculated in the continuum limit for wave-vectors close to the Bragg peak at $q = \frac{4\pi}{3a} = 2.94$~\AA$^{-1}$ with $a = 1.42$~\AA~ being the carbon-carbon distance in graphene. We also mention that, for the strained cases, the error bars of the Monte Carlo simulations are negligible.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig/Fig-G-MC.pdf}
\caption{(Color online) Comparison of the normal-normal correlation function $\langle|{\bf n}({\bf q})|^2\rangle =q^2G({\bf q})$ obtained from continuum elastic theory, as described in Sec. \ref{Sec:Harmonic} and \ref{Sec:SCSA} (dashed lines), to atomistic MC simulations (solid lines), for different values of external strain.}
\label{Fig:G-MC}
\end{figure}
Starting from the unstrained case ($u=0$), we see that the MC (full black line) and the SCSA (dashed black line) results agree reasonably well and both deviate from the correlation function in the harmonic approximation (dot-dashed black line) at small wave-vectors, pointing out the importance of anharmonic effects at long scales in unstrained samples.\cite{ZRFK10} For the strained cases, the SCSA and MC results are also comparable, which justifies the use of the SCSA when dealing with samples under tension. However, we must emphasize that for $0.4\%$ strain, there is almost no difference between $G_0({\bf q})$ [as obtained by Eq. (\ref{Eq:G0})] and $G({\bf q})$ in the SCSA [full solution of Eqs. (\ref{Eq:Sigma})-(\ref{Eq:Vertex})], as discussed in Sec. \ref{Sec:SCSA}, and they are exactly the same for the highest value of strain shown here, $1.5\%$. A more rigorous check of the validity of the SCSA would require MC simulations for samples under even weaker strain, which requires several times larger samples to achieve the same accuracy and makes such simulations much more time-consuming. Nevertheless, the present results already confirm that rather weak strain is enough to suppress the anharmonicities in stiff membranes such as graphene, as can be seen in Fig. \ref{Fig:G-MC} from the almost flat line-shape of $q^2G({\bf q})$ as we move from the Bragg peak towards small wave-vectors of the spectrum, for tensions $\gtrsim 0.4\%$.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig/Fig-G-MC-Compress.pdf}
\caption{(Color online) Comparison of the normal-normal correlation function $\langle|{\bf n}({\bf q})|^2\rangle =q^2G({\bf q})$ obtained from atomistic MC simulations for the case of tension (solid lines), and compression (dashed lines), for different values of external strain.}
\label{Fig:G-MC-Compress}
\end{figure}
The study of a compressed membrane is more delicate because its equilibrium state no longer corresponds to the flat phase.\cite{SS02,CM03,MG99,BD11} Therefore, the standard elastic theory that we have used in Sec. \ref{Sec:SCSA} does not apply to this case. However, one can at least study the system by means of Monte Carlo simulations. In Fig. \ref{Fig:G-MC-Compress} we compare the MC results for the correlation function of a tensioned membrane to that of a compressed membrane. There we see that $G({\bf q})$ for a compressed membrane does not show the characteristic gradual crossover to another power law as in the tensioned case. Instead, there exists a wave-vector at which the correlation function deviates abruptly from the harmonic behavior, showing signatures of a possible first-order buckling phase transition.
Finally, all our results are graphically summarized in Fig. \ref{Fig:sample}, where we show a snapshot of the Monte Carlo sample for a tensioned graphene membrane [Fig. \ref{Fig:sample}(a)], for an unstrained membrane, Fig. \ref{Fig:sample}(b), and for a compressed graphene sheet, Fig. \ref{Fig:sample}(c). In the first case, the equilibrium state is an almost perfectly flat membrane, for which the anharmonic effects have been suppressed by the application of tension. The anharmonic coupling between bending and stretching modes is instead important for an unstrained membrane, such as the one in Fig. \ref{Fig:sample}(b), which leads to a corrugated low energy phase due to the existence of thermal ripples in the system,\cite{FLK07} and which is well described by a continuum elastic theory such as the SCSA. However, this theory is not applicable to a compressed membrane such as the one shown in Fig. \ref{Fig:sample}(c), for which the sheet buckles into shapes that remove in-plane compression, in order to reduce its elastic energy.\cite{SS02}
\begin{figure*}[t]
\centering
\includegraphics[width=0.77\textwidth]{fig/Fig-sampleb.pdf}
\caption{(Color online) Typical Monte Carlo configurations of a graphene sample of 8640 atoms for: a) 1.5\% tension, b) unstrained, and c) 1.5\% compression.}
\label{Fig:sample}
\end{figure*}
\section{Conclusions}\label{Sec:Conclusions}
In summary, we have studied the effect of external strain on the correlation function of flexural modes in the SCSA. In the presence of strain, three different regimes can be distinguished in the dispersion relation of flexural phonons: $\omega_{fl}({\bf q})\sim q$ in the long wavelength limit, $\omega_{fl}({\bf q})\sim q^{2-\eta/2}$ in the intermediate range of wave-vectors of the spectrum (where $\eta\approx 0.82$ is a characteristic exponent\cite{DR92}), and finally $\omega_{fl}({\bf q})\sim q^2$ at shorter wavelengths. The results show that, for a soft membrane, rather high values of strain are needed to suppress anharmonic effects, whereas for a stiff membrane such as graphene, anharmonic effects are completely suppressed by less than 1\% tensile strain. The correlation functions obtained with the SCSA compare well with those calculated from atomistic MC simulations. Taking into account that the scattering of electrons by flexural phonons has been shown to be the main limitation for the charge mobility in suspended graphene,\cite{CG10} our results indicate that the application of a small tension to the graphene layer would reduce the out-of-plane vibrations associated with the flexural modes, increasing the mobility of the suspended samples.
\begin{acknowledgments}
This work is part of the research program of the 'Stichting voor Fundamenteel Onderzoek der Materie (FOM)', which is financially supported by the 'Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)'. We thank the EU-India FP-7 collaboration under MONAMI, and the Netherlands National Computing Facilities foundation (NCF).
\end{acknowledgments}
\section{Hessenberg varieties and Poincar\'e duals}
\label{sec:Hessenberg}
In this section we define Hessenberg varieties
and compute the equivariant Poincar\'e duals (in $G/B$) of Hessenberg varieties corresponding to regular elements.
We also define the Peterson variety \ensuremath{\mathbf{P}},
and recall from \cite{gms:peterson} some results on the equivariant (co)homology of \ensuremath{\mathbf{P}}.
\subsection{Hessenberg Varieties}
Let $\ensuremath{\mathfrak g}:= Lie(G)$, $\ensuremath{\mathfrak b}=Lie(B)$, and $\ensuremath{\mathfrak h}:=Lie(T)$.
A subspace $H\subset\ensuremath{\mathfrak g}$ is called a \emph{Hessenberg space} if it is $B$-stable and if $\ensuremath{\mathfrak b}\subset H$.
Let
\begin{equation*}
H_0=\ensuremath{\mathfrak b}\oplus\bigoplus\limits_{\alpha\in\Delta}\ensuremath{\mathfrak g}_{-\alpha}.
\end{equation*}
We say that a Hessenberg space $H$ is \emph{indecomposable} if $H_0\subset H$.
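For instance, let $G=SL_3$, with $B$ the subgroup of upper triangular matrices and $\Delta=\{\alpha_1,\alpha_2\}$. In this case
\begin{equation*}
H_0=\ensuremath{\mathfrak b}\oplus\ensuremath{\mathfrak g}_{-\alpha_1}\oplus\ensuremath{\mathfrak g}_{-\alpha_2}
=\left\{\begin{pmatrix}*&*&*\\ *&*&*\\ 0&*&*\end{pmatrix}\in\mathfrak{sl}_3\right\},
\end{equation*}
i.e., the trace-zero matrices $(x_{ij})$ with $x_{ij}=0$ whenever $i>j+1$, and the only indecomposable Hessenberg spaces are $H_0$ and \ensuremath{\mathfrak g}\ itself.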
Recall that the vector bundle $G\times^B\ensuremath{\mathfrak g}\to G/B$ is trivialized by the map
\begin{align*}
\mu_\ensuremath{\mathfrak g}:G\times^B\ensuremath{\mathfrak g}\to\ensuremath{\mathfrak g},&&(g,x)\mapsto Ad(g)x,
\end{align*}
i.e., we have an isomorphism $G\times^B\ensuremath{\mathfrak g}\to G/B\times\ensuremath{\mathfrak g}$ given by
$(g,x)\mapsto (gB,Ad(g)x)$.
Let $H$ be a Hessenberg space,
and let $\mu_H$ denote the restriction of $\mu_{\mathfrak g}$ to the sub-bundle $G\times^BH\subset G\times^B\ensuremath{\mathfrak g}$.
\begin{center}
\begin{tikzcd}
G\times^BH\arrow[dr,"\mu_H"]\arrow[rr,hook]&& G/B\times\ensuremath{\mathfrak g}\arrow[ld,swap,"\mu_\ensuremath{\mathfrak g}"]\\
&\ensuremath{\mathfrak g}&
\end{tikzcd}
\end{center}
For $x\in\ensuremath{\mathfrak g}$, the fibre $\mu_H^{-1}(x)$ (viewed as a subscheme of $G/B$) is called the Hessenberg scheme $\mathbf H(x,H)$.
If $H$ is indecomposable, $\mathbf H(x,H)$ is reduced and irreducible for all $x$,
see \cite[Thm 1.2]{abe.fujita.zeng}.
In this case, we call $\mathbf H(x,H)$ a \emph{Hessenberg variety}.
\emph{
For the rest of this article,
we assume without mention that the Hessenberg space $H$ is indecomposable.
}
For each simple root $\alpha \in \Delta$,
choose a root vector $e_\alpha \in \mathfrak{g}_\alpha$.
Set $e= \sum_{\alpha \in \Delta} e_\alpha$.
The element $e$ is a regular nilpotent element in \ensuremath{\mathfrak b},
see e.g. \cite{mcgovern.collingwood:nilpotent.orbits,kostant:principal}.
Recall from \cref{htFn} the sub-torus $S\subset T$ lifting the element $h=2\rho^\vee\in\ensuremath{\mathfrak h}$.
Following \cite[Ch~6, Prop 29]{bourbaki:Lie46}, we have $\langle\rho^\vee,\alpha\rangle=1$ for every simple root $\alpha\in\Delta$,
and hence
\begin{align*}
[h,e]=[h,\sum_{\alpha\in\Delta} e_\alpha]=\sum_{\alpha\in\Delta} \langle h,\alpha\rangle e_\alpha=2e.
\end{align*}
We see that the vector space $\mathbb C e$ is $h$-stable, and hence also $S$-stable.
Consequently, the Hessenberg variety $\mathbf H(e,H)$ is $S$-stable.
Since $h\in\ensuremath{\mathfrak h}$, the adjoint action of $T$ on $\mathfrak s=\mathbb C h$ is trivial.
In particular, $\mathfrak s$ is $T$-stable, and hence so is $\mathbf H(h,H)$.
\begin{thm}
\label{HessClass}
For any indecomposable Hessenberg space $H$, we have
\begin{align*}
[\mathbf H(h,H)]_T&=\prod\left(c_1^T(\mathcal L_\alpha)\right)\cap[\ensuremath{G/B}]_T,\\
[\mathbf H(e,H)]_S&=\prod\left(c_1^S(\mathcal L_\alpha)-t\right)\cap[\ensuremath{G/B}]_S,
\end{align*}
where the product is over the set $\set{\alpha\in\phi^+}{\ensuremath{\mathfrak g}_{-\alpha}\not\subset H}$.
\end{thm}
\begin{proof}
Consider the vector bundle
$
\mathcal V=G\times^B(\ensuremath{\mathfrak g}/H)\to G/B,
$
which admits a filtration with quotient bundles
$
\set{\mathcal L_\alpha}{\alpha\in\phi^+,\ensuremath{\mathfrak g}_{-\alpha}\not\subset H}.
$
For $x$ a regular element of \ensuremath{\mathfrak g},
let $s_x:G/B\to\mathcal V$ be the section of $\mathcal V$ given by $s_x(gB)=(g, Ad(g^{-1})x+H)$.
Following \cite[Prop 3.6]{abe.fujita.zeng},
we have $\mathbf H(x,H)=Z(s_x)$, the zero scheme of $s_x$.
Observe that $\mathcal V$ is a $T$-equivariant vector bundle,
and $s_h$ is a $T$-invariant section.
Therefore by \cref{lem:eqSec},
the fundamental class of $\mathbf H(h,H)=Z(s_h)$ is given by the first equality.
On the other hand, the section $s_e$ lies in the $t$-eigenspace of the $S$-action on $H^0(G/B,\mathcal V)$,
hence the second equality holds for the fundamental class of $\mathbf H(e,H)=Z(s_e)$ by \cref{cor:twistedSection}.
\end{proof}
\subsection{The Peterson variety}
\label{subsec:petersonVariety}
The Peterson variety is defined by
\begin{align}
\label{defn:Pet}
\ensuremath{\mathbf{P}}:= \mathbf H(e,H_0)\subset\ensuremath{G/B}.
\end{align}
It is a subvariety of \ensuremath{G/B}\ of dimension $rk(G)=|\Delta|$, singular in general.
Following \cref{HessClass}, the $S$-equivariant fundamental class of the Peterson variety in $H_*^S(G/B)$ is given by
\begin{align}
\label{PetClass}
[\ensuremath{\mathbf{P}}]_S=\prod\limits_{\alpha\in\phi^+\backslash\Delta}\left(c_1^S\left(\mathcal L_\alpha\right)-t\right)\cap[\ensuremath{G/B}]_S.
\end{align}
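For instance, in type $A_2$ the only non-simple positive root is $\alpha_1+\alpha_2$, so \cref{PetClass} reads
\begin{equation*}
[\ensuremath{\mathbf{P}}]_S=\left(c_1^S\left(\mathcal L_{\alpha_1+\alpha_2}\right)-t\right)\cap[\ensuremath{G/B}]_S,
\end{equation*}
exhibiting \ensuremath{\mathbf{P}}\ as a surface in the three-dimensional flag variety $SL_3/B$, in agreement with $\dim\ensuremath{\mathbf{P}}=|\Delta|=2$.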
Let $\ensuremath{\mathbf{P}}_I$ denote the Peterson variety corresponding to the Dynkin diagram $I\subset\Delta$.
The following was proved in classical types by Tymoczko \cite[Thm~4.3]{tymoczko:paving}
and generalized to all Lie types by Precup \cite{precup},
see also \cite[Appendix A]{gms:peterson}.
\begin{prop}\label{prop:vectorSpace}
There exists a natural embedding $\ensuremath{\mathbf{P}}_I\subset\ensuremath{\mathbf{P}}$,
and a corresponding $S$-stable affine paving $\ensuremath{\mathbf{P}}=\bigsqcup\limits_{I\subset\Delta}\ensuremath{\mathbf{P}}_I^\circ$,
where $\ensuremath{\mathbf{P}}_I^\circ=\ensuremath{\mathbf{P}}_I\backslash\bigcup\limits_{J\subsetneq I}\ensuremath{\mathbf{P}}_J$.
\end{prop}
The subvarieties $\ensuremath{\mathbf{P}}^\circ_I$ (called \emph{Peterson cells}) are $S$-stable affine spaces.
The Peterson cell $\ensuremath{\mathbf{P}}^\circ_I$ contains a unique $S$-fixed point $w_I$,
the longest element in the Weyl subgroup $W_I$.
Following \cref{lemma:generate},
the fundamental classes $\set{[\ensuremath{\mathbf{P}}_I]_S}{I\subset\Delta}$ form a basis of $H_*^S(\ensuremath{\mathbf{P}})$.
\begin{prop}
{\rm(\cite[Thm 4.3]{gms:peterson})}
\label{prop:duality}
Consider the inclusion $i:\ensuremath{\mathbf{P}}\hookrightarrow\ensuremath{G/B}$.
For each $I\subset\Delta$, fix a Coxeter element $v_I$ for $I$.
There exist positive integers $m(v_I)$ such that
\begin{align*}
\left\langle i^*\sigma_{v_I}^S,[\ensuremath{\mathbf{P}}_J]_S\right\rangle=m(v_I)\delta_{IJ}.
\end{align*}
In particular,
$\set{i^*\sigma_{v_I}^S}{I\subset \Delta}$ is a basis for $H^*_S(\ensuremath{\mathbf{P}})$.
Furthermore, the numbers $m(v_I)$ do not depend on the superset $\Delta$ containing $I$.
\end{prop}
\Cref{stab1,stability} deal with the stability of Schubert classes and their pullbacks to the Peterson variety.
\begin{prop}
{\rm(\cite[Thm 6.6]{gms:peterson})}
\label{stab1}
Consider the inclusions $\iota_J:\ensuremath{\mathbf{P}}_J\hookrightarrow\ensuremath{\mathbf{P}}$
and $i_J:\ensuremath{\mathbf{P}}_J\hookrightarrow G_J/B_J$.
For $w\in W$, let $p_w=i^*\sigma_w^S$.
For $w\in W_J$, let $p^J_w=i_J^*\sigma_w^S$.
Then $\iota_J^*p_w=p_w^J$.
\end{prop}
\begin{thm
\label{stability}
Consider the inclusions $i:\ensuremath{\mathbf{P}}\to\ensuremath{G/B}$ and $i_I:\ensuremath{\mathbf{P}}_I\to G_I/B_I$.
Let $p_w=i^*\sigma^S_w$ for $w\in W$, and $p_w^I=i_I^*\sigma^S_w$ for $w\in W_I$.
There exists an injective ring map $h: H^*_S(\ensuremath{\mathbf{P}}_I)\to H^*_S(\ensuremath{\mathbf{P}})$,
satisfying $h(p^I_w)=p_w$ for all $w\in W_I$.
\end{thm}
\begin{proof}
Following \cref{prop:vectorSpace},
the set of $S$-fixed points in $\ensuremath{\mathbf{P}}$ and $\ensuremath{\mathbf{P}}_I$ are precisely
$\set{w_J}{J\subset\Delta}$ and $\set{w_J}{J\subset I}$ respectively.
Let $\ell:\set{w_J}{J\subset\Delta}\hookrightarrow\ensuremath{\mathbf{P}}$
and $\ell_I:\set{w_J}{J\subset I}\hookrightarrow\ensuremath{\mathbf{P}}_I$
denote the inclusions of the fixed point sets.
Following \cite[Cor 1.3.2, Thm 1.6.2, Thm 6.3]{GKM},
the maps $\ell_I^*:H^*_S(\ensuremath{\mathbf{P}}_I)\to \bigoplus\limits_{J\subset I}H^*_S(\{w_J\})$
and $\ell^*:H^*_S(\ensuremath{\mathbf{P}})\to \bigoplus\limits_{J\subset\Delta}H^*_S(\{w_J\})$
are injective.
Recall that we have an injective ring map
$H^*_S(G_I/B_I)\hookrightarrow H^*_S(G/B)$ satisfying $\sigma_w^S\mapsto\sigma_w^S$ for all $w\in W_I$.
Following \cref{prop:duality}, the maps $i^*_I:H^*_S(G_I/B_I)\to H^*_S(\ensuremath{\mathbf{P}}_I)$
and $i^*:H^*_S(G/B)\to H^*_S(\ensuremath{\mathbf{P}})$ are surjective.
Consequently, we have a commutative diagram,
\begin{center}
\begin{tikzcd}
H^*_S(G_I/B_I)\arrow[r,"f",hook]\arrow[d,"i_I^*",twoheadrightarrow] & H^*_S(G/B)\arrow[d,"i^*",twoheadrightarrow]\\
H^*_S(\ensuremath{\mathbf{P}}_I)\arrow[r,hook,dashed,"h",]\arrow[d,"\ell_I^*",hook] & H^*_S(\ensuremath{\mathbf{P}})\arrow[d,"\ell^*",hook]\\
\bigoplus\limits_{J\subset I}H^*_S(\{w_J\})\arrow[r,"g",hook] & \bigoplus\limits_{J\subset\Delta}H^*_S(\{w_J\})
\end{tikzcd}
\end{center}
Observe that since $\ell_I^*$, $\ell^*$, and $g$ are injective,
we have
\begin{align*}
\ker(i_I^*)=\ker(g\ell_I^*i^*_I)=\ker(\ell^*i^*f)=\ker(i^*f).
\end{align*}
Consequently, the composite map $i^*f$ factors injectively through $i^*_I$,
i.e., we have $i^*f=hi_I^*$ for some injective map
$h: H^*_S(\ensuremath{\mathbf{P}}_I)\to H^*_S(\ensuremath{\mathbf{P}})$.
Furthermore,
we have $h(p_w^I)=h(i_I^*(\sigma_w^S))=i^*(f(\sigma_w^S))=i^*(\sigma_w^S)=p_w$
for any $w\in W_I$.
\end{proof}
For $w\in W_I$, \cref{stab1,stability} allow us to abuse notation and denote by $p_w$ both the class $i^*\sigma_w^S$ in $H_S^*(\ensuremath{\mathbf{P}})$
and its pullback $p_w^I$ in $H^*_S(\ensuremath{\mathbf{P}}_I)$.
\section{Dual Peterson Classes and Chevalley and Monk formulae}
\label{sec:chevalley}
In this section,
we present a Chevalley formula for the cap product of a divisor class with a fundamental class $[\ensuremath{\mathbf{P}}_I]_S$,
and a Monk rule with respect to the basis $\set{\Omega_I}{I\subset\Delta}$.
The Monk rule is dual to the Chevalley formula;
the precise relationship between the two follows from \cref{prop:duality,thm:mult}.
We also recover the presentation of $H^*_S(\ensuremath{\mathbf{P}})$ as a quotient of $H^*_S(\ensuremath{G/B})$,
first obtained by Harada, Horiguchi, and Masuda \cite{harada.horiguchi.masuda:Peterson}.
Recall the Schubert classes $\sigma^S_\alpha$ from \cref{sec:Schubert},
and the line bundles $\mathcal L_\lambda\to\ensuremath{G/B}$ from \cref{sec:lineBundles}.
For $\alpha,\beta\in\Delta$, let $a_{\alpha\beta}=\left\langle\beta^\vee,\alpha\right\rangle$ be the $\alpha\beta^{th}$ entry of the Cartan matrix of $\Delta$.
For $\alpha\in\Delta$,
we set $p_\alpha=i^*\sigma_\alpha^S$ and $q_\alpha=\sum_{\beta\in\Delta}a_{\alpha\beta} p_\beta$,
where $i$ denotes the embedding $i:\ensuremath{\mathbf{P}}\hookrightarrow\ensuremath{G/B}$.
\subsection{Dual Peterson Classes}
In \cref{qChevalley}, we compute the equivariant cohomology class in $H^*_S(\ensuremath{\mathbf{P}})$ which is dual to the Peterson subvariety $\ensuremath{\mathbf{P}}_I$.
This is a key step in our proof of the Chevalley formula.
\begin{lemma}
\label{qalpha}
For $\alpha\in\Delta$, we have $c_1^S(i^*\mathcal L_\alpha)=q_\alpha-t$.
\end{lemma}
\begin{proof}
Let $\varpi_\alpha$ be the fundamental weight dual to the coroot $\alpha^\vee$,
let $V_{\varpi_\alpha}$ be the corresponding irreducible $G$-representation,
and let $\pr:V_{\varpi_\alpha}\to\mathbb C_{-\varpi_\alpha}$ denote
the $B$-equivariant projection onto the lowest weight space in $V_{\varpi_\alpha}$.
Let $\mathbf 1\in\mathbb C_{-\varpi_\alpha}$ be a lowest weight vector in $V_{\varpi_\alpha}$,
and consider the section $s$
of the line bundle $\mathcal L_{\varpi_\alpha}\to G/B$
given by $s(gB)= (g,\pr(g^{-1}\mathbf 1))$.
Observe that the torus $T$ acts on $s$ via the character $-\varpi_\alpha$,
\begin{align*}
(z\cdot s)(gB) & =zs(z^{-1}gB) = z(z^{-1}g,\pr(g^{-1}z\mathbf 1))\\
& =z(z^{-1}g,\varpi_\alpha(z^{-1})\pr(g^{-1}\mathbf 1))\\
& =\varpi_\alpha(z^{-1}) (g,\pr(g^{-1}\mathbf 1)) =\varpi_\alpha(z^{-1})s(gB).
\end{align*}
Further, the zero scheme $Z(s)$ of $s$ is supported precisely on the Schubert divisor $X^{s_\alpha}$,
thus $[Z(s)]_T=m[X^{s_\alpha}]_T$ for some positive integer $m$.
It follows from \cref{cor:twistedSection} that
\begin{align}
\label{work1}
m\sigma^T_\alpha=c_1^T(\mathcal L_{\varpi_\alpha})+\varpi_\alpha.
\end{align}
We evaluate \cref{work1} under the localization
$\ell_{s_\alpha}^*:H^*_T(G/B)\to H^*(\{s_\alpha\})$.
Recall that $\mathcal L_{\varpi_\alpha}=G\times^B\mathbb C_{-\varpi_\alpha}$,
and hence $\ell_{s_\alpha}^*(c_1(\mathcal L_{\varpi_\alpha}))=s_\alpha(-\varpi_\alpha)=-\varpi_\alpha+\alpha$.
Using the localization formula of Andersen, Jantzen, and Soergel \cite{AJS}, and Billey \cite{billey:kostant},
we have $\ell_{s_\alpha}^*(\sigma^T_\alpha)=\alpha$.
It follows that $m=1$, i.e.,
\begin{align*}
c_1^T(\mathcal L_{\varpi_\alpha})=\sigma^T_\alpha-\varpi_\alpha.
\end{align*}
Recall that $\alpha=\sum a_{\alpha\beta}\varpi_\beta$,
where $a_{\alpha\beta}=\left\langle\beta^\vee,\alpha\right\rangle$
is the $\alpha\beta^{th}$ entry of the Cartan matrix.
We have
\begin{align*}
c_1^T(\mathcal L_\alpha)
=\sum a_{\alpha\beta} c_1^T(\mathcal L_{\varpi_\beta})
=\sum a_{\alpha\beta}\sigma^T_\beta-\sum a_{\alpha\beta}\varpi_\beta
=\sum a_{\alpha\beta}\sigma^T_\beta-\alpha.
\end{align*}
Consequently, we have
$
c_1^S(\mathcal L_\alpha)
=\sum a_{\alpha\beta}\sigma^S_\beta-t.
$
Applying the $S$-equivariant pullback $i^*$, we obtain the claimed equality,
$c_1^S(i^*\mathcal L_\alpha)=q_\alpha-t$.
\end{proof}
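As a quick illustration of \cref{qalpha}, in type $A_1$ the Cartan matrix is the $1\times 1$ matrix $(2)$, so that
\begin{equation*}
q_\alpha=2p_\alpha,\qquad c_1^S(i^*\mathcal L_\alpha)=2p_\alpha-t.
\end{equation*}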
\begin{prop}
\label{eulerClassGP}
For $I\subset\Delta$,
we have the following equality in $H_*^S(G/B)$:
\begin{align*}
\prod\limits_{\alpha\in\phi^+\backslash\phi_I^+}\left(c_1^S(\mathcal L_\alpha)-t\right)\cap[G/B]_S= \frac{|W|}{|W_I|}[X_{w_I}]_S.
\end{align*}
\end{prop}
\begin{proof}
Let $P\subset G$ be the parabolic subgroup corresponding to $I$,
and let $\mathfrak p=Lie(P)$.
Recall that the tangent bundle $T(G/P)$ has the following description:
$T(G/P)=G\times^P(\ensuremath{\mathfrak g}/\mathfrak p)$.
Let $\pr:\ensuremath{\mathfrak g}\to\ensuremath{\mathfrak g}/\mathfrak p$ denote the projection,
and consider the section $s:G/P\to T(G/P)$ given by
\begin{align*}
s(gP)=(g,\pr(Ad(g^{-1})e)).
\end{align*}
The zero scheme of $s$ is supported at the single point $1.P$,
and $S$ acts on $s$ via the character $t$.
It follows from \cref{cor:twistedSection} that there exists an integer $N$ such that
\begin{align}
\label{eq:EulerChar}
N[1.P]_S = e^S(T(G/P)\otimes\underline\mathbb C_{-t})\cap [G/P]_S.
\end{align}
Let $\chi(\_)$ denote the Euler characteristic.
Mapping \cref{eq:EulerChar} to ordinary cohomology,
we obtain
$$
N = e(T(G/P))\cap [G/P] = \chi(G/P),
$$
where the second equality follows from the Gauss-Bonnet theorem.
Moreover, since the Euler characteristic equals the number of distinct $W_I$-cosets in $W$, it follows that $N = \frac{|W|}{|W_I|}$.
Pulling back \cref{eq:EulerChar} along the ($S$-equivariant) flat map $\pi:G/B\to G/P$,
we have
\begin{align*}
\frac{|W|}{|W_I|}[X_{w_I}]_S=e^S(\pi^*T(G/P)\otimes\underline\mathbb C_{-t})\cap [G/B]_S.
\end{align*}
Finally, we observe that $\pi^*T(G/P)=G\times^B\mathfrak g/\mathfrak p$
has a filtration with quotients
$\set{\mathcal L_\alpha}{\alpha\in\phi^+\setminus \phi_I^+}$, so that
$$
e^S(\pi^*T(G/P)\otimes \underline{\mathbb C}_{-t}) = \prod_{\alpha\in\phi^+\backslash\phi_I^+ } (c^S_1(\mathcal L_\alpha)-t).
$$ The claimed equality now follows after capping with $[G/B]_S$.
\end{proof}
Using \cref{eulerClassGP}, we can compute cohomology classes in $H^*_S(\ensuremath{\mathbf{P}})$ that are dual to the Peterson subvarieties.
\begin{thm}
\label{qChevalley}
For any $I\subset\Delta$, we have
\begin{align*}
\prod\limits_{\alpha\in\Delta\backslash I}(q_\alpha-2t)\cap[\ensuremath{\mathbf{P}}]_S=\frac{|W|}{|W_I|}[\ensuremath{\mathbf{P}}_I]_S.
\end{align*}
\end{thm}
\begin{proof}
It is sufficient to prove the result in the case where $I=\Delta\backslash\{\alpha\}$ for some $\alpha\in\Delta$.
Let $G_I$ be the standard Levi subgroup corresponding to $I\subset\Delta$, and let $B_I=G_I\cap B$.
We identify $G_I/B_I$ with the Schubert variety $X_{w_I}$.
For convenience, we write $r_\alpha=c_1^S(\mathcal L_\alpha)-t$.
We have (in $H^S_*(G/B)$)
\begin{align*}
r_\alpha\cap[\ensuremath{\mathbf{P}}]_S
&=r_\alpha\prod\limits_{\beta\in\phi^+\backslash\Delta} r_\beta\cap [G/B]_S
&&\text{by \cref{PetClass}}\\
&= \prod\limits_{\beta\in\phi_I^+\backslash I}r_\beta\prod\limits_{\beta\in\phi^+\backslash\phi_I^+} r_\beta\cap [G/B]_S\\
&=\frac{|W|}{|W_I|} \prod\limits_{\beta\in\phi_I^+\backslash I}r_\beta\cap[G_I/B_I]_S
&&\text{by \cref{eulerClassGP}}\\
&=\frac{|W|}{|W_I|}[\ensuremath{\mathbf{P}}_I]_S
&&\text{by \cref{PetClass}}.
\end{align*}
Following \cref{qalpha}, we have $i^*r_\alpha=q_\alpha-2t$.
The result follows from the projection formula (\cref{projectionFormula}) applied to the inclusion $i:\ensuremath{\mathbf{P}}\to G/B$.
\end{proof}
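Taking $I=\emptyset$ in \cref{qChevalley} yields
\begin{equation*}
\prod\limits_{\alpha\in\Delta}(q_\alpha-2t)\cap[\ensuremath{\mathbf{P}}]_S=|W|\,[\ensuremath{\mathbf{P}}_\emptyset]_S,
\end{equation*}
where $\ensuremath{\mathbf{P}}_\emptyset$ is the point corresponding to the unique $S$-fixed point $w_\emptyset=e$.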
We also recover the presentation of $H^*_S(\ensuremath{\mathbf{P}})$ as a quotient of $H^*_S(\ensuremath{G/B})$,
first obtained by Harada, Horiguchi, and Masuda \cite{harada.horiguchi.masuda:Peterson}.
\begin{cor
\label{hhm}
Recall that $q_\alpha=\sum_{\beta\in\Delta}\left\langle\beta^\vee,\alpha\right\rangle p_\beta$.
The equivariant cohomology ring of the Peterson variety admits the presentation
\begin{align*}
H^*_S(\ensuremath{\mathbf{P}})=\frac{\mathbb Q[t][p_\alpha]_{\alpha\in\Delta}}{\left\langle p_\alpha(q_\alpha-2t)\right\rangle_{\alpha\in\Delta}}.
\end{align*}
\end{cor}
\begin{proof}
Following \cref{commTri}, the map $H^*(G/B)\to H^*(\ensuremath{\mathbf{P}})$ is surjective,
and hence, so is the map $H^*_S(G/B)\to H^*_S(\ensuremath{\mathbf{P}})$.
Recall that $H^*_T(G/B)$ is generated (as a ring) by the divisor classes.
Consequently, the $\set{p_\alpha}{\alpha\in\Delta}$ are ring generators for $H^*_S(\ensuremath{\mathbf{P}})$.
Let $I=\Delta\backslash\{\alpha\}$.
Following \cite[Thm 6.5]{gms:peterson} and \cref{thm:chevalley}, we have
\begin{align*}
p_\alpha(q_\alpha-2t)\cap[\ensuremath{\mathbf{P}}]_S=\frac{|W|}{|W_I|}\,p_\alpha\cap [\ensuremath{\mathbf{P}}_I]_S=0.
\end{align*}
By the universal coefficients theorem \cite[Ch 17, \S 3]{MR1702278}, the map
$\omega\mapsto\omega\cap[\ensuremath{\mathbf{P}}]$ is an isomorphism $H^*_S(\ensuremath{\mathbf{P}})\xrightarrow\sim H_*^S(\ensuremath{\mathbf{P}})$.
It follows that $p_\alpha(q_\alpha-2t)=0$,
and consequently, we obtain a surjective map
\begin{align}
\label{surjRelations}
\frac{\mathbb Q[t][p_\alpha]_{\alpha\in\Delta}}{\left\langle p_\alpha(q_\alpha-2t)\right\rangle_{\alpha\in\Delta}}\longrightarrow H^*_S(\ensuremath{\mathbf{P}}).
\end{align}
It remains to show that this map is an isomorphism, i.e., there are no other relations.
Following \cite[Thm 3]{klyachko85} and \cref{commTri},
we have
\begin{align*}
H^*(\ensuremath{\mathbf{P}})=\dfrac{\mathbb Q[p_\alpha]_{\alpha\in\Delta}}{\left\langle p_\alpha q_\alpha\right\rangle_{\alpha\in\Delta}}.
\end{align*}
We deduce that the kernel of the map in \cref{surjRelations} is $t$-divisible.
However, $t$ acts freely on $H^*_S(\ensuremath{\mathbf{P}})$,
and hence \cref{surjRelations} is an isomorphism.
\end{proof}
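In the simplest case $\Delta=\{\alpha\}$ of type $A_1$, we have $H_0=\ensuremath{\mathfrak g}$, so $\ensuremath{\mathbf{P}}=\ensuremath{G/B}\cong\mathbb P^1$. Since $q_\alpha=2p_\alpha$, the presentation above reduces to
\begin{equation*}
H^*_S(\ensuremath{\mathbf{P}})=\frac{\mathbb Q[t][p_\alpha]}{\left\langle 2p_\alpha(p_\alpha-t)\right\rangle},
\end{equation*}
recovering the familiar relation $p_\alpha^2=t\,p_\alpha$ in $H^*_S(\mathbb P^1)$.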
\subsection{The Chevalley and Monk formulae}
\begin{thm}[Equivariant Chevalley formula]
\label{thm:chevalley}
For $\alpha\in\Delta$, $J\subset\Delta$, we have
\begin{align*}
p_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S=
\begin{cases}
0 &\text{if }\alpha\not\in J,\\
\left\langle2\rho_J^\vee,\varpi^J_\alpha\right\rangle t\,[\ensuremath{\mathbf{P}}_J]_S+
\sum\limits_{\substack{\beta\in J\\ K=J\backslash\{\beta\}}}
\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle
\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S
&\text{if }\alpha\in J.
\end{cases}
\end{align*}
\end{thm}
\begin{proof}
Recall the inclusion map $\iota_J:\ensuremath{\mathbf{P}}_J\hookrightarrow\ensuremath{\mathbf{P}}$.
Following \cite[Thm. 6.6(b)]{gms:peterson}, we have
\begin{equation*}\iota_J^*p_\alpha=
\begin{cases}
p_\alpha & \text{if }\alpha\in J,\\
0 & \text{otherwise}.
\end{cases}
\end{equation*}
In particular, we have $p_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S=0$ for $\alpha\not\in J$.
Next, for $\alpha\in J$, we have
$
\varpi^J_\alpha = \sum_{\beta\in J} \left\langle \varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle \beta,
$
and hence
\begin{align*}
&&\iota_J^*p_\alpha =\sum_{\beta\in J} \left\langle \varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle \iota_J^* q_\beta
&&\text{in }H^*_S(\ensuremath{\mathbf{P}}_J).
\end{align*}
Following \cref{qChevalley}, we have
\begin{align*}
&&q_\beta\cap[\ensuremath{\mathbf{P}}_J]_S
=\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S+2t[\ensuremath{\mathbf{P}}_J]_S,
&&K=J\backslash\{\beta\}.
\end{align*}
Further, for $\beta\in J$, we have $\iota_J^*q_\beta=\sum_{\alpha\in J}a_{\beta\alpha}\iota_J^*p_\alpha=\sum_{\alpha\in J}a_{\beta\alpha}p_\alpha=q_\beta$.
Hence, by the projection formula (\cref{projectionFormula}),
we have
\begin{align*}
p_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S&=\sum_{\beta\in J}\langle \varpi^{J\vee}_\beta,\varpi^J_\alpha\rangle\, q_\beta \cap [\ensuremath{\mathbf{P}}_J]_S \\
&=
\sum_{\substack{\beta\in J\\ K=J\backslash\{\beta\}}}
\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle
\left(
2t [\ensuremath{\mathbf{P}}_J]_S+
\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S\right)
\\
&=
\left\langle2\rho_J^\vee,\varpi^J_\alpha\right\rangle t
[\ensuremath{\mathbf{P}}_J]_S+
\sum_{\substack{\beta\in J\\ K=J\backslash\{\beta\}}}
\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle
\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S.
\end{align*}
\end{proof}
\begin{example}
\label{ex1}
Let $\Delta=B_2$.
\begin{center}
\begin{tikzpicture}
\draw[fill=blue] (0,0) circle (.1cm);
\draw[fill=blue] (1,0) circle (.1cm);
\draw (.1,.03)--+(0.8,0);
\draw (.1,-.03)--+(0.8,0);
\draw (0.6,0)--+(-0.12,0.12);
\draw (0.6,0)--+(-0.12,-0.12);
\draw (0,-.2) node {$\scriptscriptstyle 1$};
\draw (1,-.2) node {$\scriptscriptstyle 2$};
\end{tikzpicture}
\end{center}
We compute
\begin{align*}
p_1\cap[\ensuremath{\mathbf{P}}]_S=
\left\langle2\rho^\vee,\varpi_1\right\rangle t\,[\ensuremath{\mathbf{P}}]_S+
\left\langle\varpi^{\vee}_1 ,\varpi_1\right\rangle
\frac{|W|}{|W_{\{2\}}|}[\ensuremath{\mathbf{P}}_{\{2\}}]_S+
\left\langle\varpi^{\vee}_2,\varpi_1\right\rangle
\frac{|W|}{|W_{\{1\}}|}[\ensuremath{\mathbf{P}}_{\{1\}}]_S.
\end{align*}
Recall the realization of the root system of $B_2$ in $\mathbb R^2$,
given by $\alpha_1=\epsilon_1-\epsilon_2$, $\alpha_2=\epsilon_2$.
We have $\varpi_1=\epsilon_1=\alpha_1+\alpha_2$.
Observe that $\left\langle \varpi_j^\vee,\varpi_i\right\rangle$ equals the coefficient of $\alpha_j$ in the expansion of $\varpi_i$ as a sum of simple roots.
Hence, we have
\begin{align*}
\left\langle\varpi_1^\vee,\varpi_1\right\rangle=1,&&
\left\langle\varpi_2^\vee,\varpi_1\right\rangle=1.
\end{align*}
Furthermore, $2\rho^\vee=2\varpi_1^\vee+2\varpi_2^\vee$,
and hence $\left\langle 2\rho^\vee,\varpi_1\right\rangle=4$.
Finally, we have $|W|=8$ and $|W_{\{1\}}|=|W_{\{2\}}|=2$.
Therefore,
\begin{align*}
p_1\cap[\ensuremath{\mathbf{P}}]_S=4t[\ensuremath{\mathbf{P}}]_S+4[\ensuremath{\mathbf{P}}_{\{1\}}]_S+4[\ensuremath{\mathbf{P}}_{\{2\}}]_S.
\end{align*}
\end{example}
Recall that $\Omega_I=\prod_{\alpha\in I}p_\alpha$,
so that, in particular, $\Omega_\alpha=p_\alpha$.
Following \cref{prop:duality,Giambelli}, the set $\set{\Omega_I}{I\subset\Delta}$ is a basis of $H^*_S(\ensuremath{\mathbf{P}})$ over $\mathbb Q[t]$.
Using \cref{prop:duality,thm:mult},
we can deduce a Monk rule for this basis from the Chevalley formula.
\begin{thm}[Equivariant Monk Rule]
\label{thm:monk}
Let $f_J$ denote the \emph{connection index} of the Dynkin diagram $J$,
i.e., the determinant of the Cartan matrix of $J$.
For $\alpha\in\Delta$, we have
\begin{align*}
\Omega_\alpha \Omega_I=
\begin{cases}
\Omega_{I\cup\{\alpha\}}
&\text{if }\alpha\not\in I,\\
2\left\langle\rho^\vee_I,\varpi^I_\alpha\right\rangle t \Omega_I+
\sum\limits_{\substack{\gamma\in\Delta\backslash I\\J=I\cup\{\gamma\}}}
\dfrac{f_J}{f_I}\left\langle\varpi_\gamma^{J\vee},\varpi^J_\alpha\right\rangle \Omega_J
&\text{if }\alpha\in I.
\end{cases}
\end{align*}
\end{thm}
\begin{proof}
Consider the coefficients $c_{\alpha I}^J\in\mathbb Q[t]$ in the product $\Omega_\alpha \Omega_I=\sum c_{\alpha I}^J \Omega_J$.
Following \cref{prop:duality} we have
\begin{equation*}
c_{\alpha I}^J=
\frac{\left\langle \Omega_\alpha \Omega_I,[\ensuremath{\mathbf{P}}_J]_S\right\rangle}
{\left\langle \Omega_J,[\ensuremath{\mathbf{P}}_J]_S\right\rangle}
=\frac{\left\langle \Omega_I,\Omega_\alpha\cap [\ensuremath{\mathbf{P}}_J]_S\right\rangle}
{\left\langle \Omega_J,[\ensuremath{\mathbf{P}}_J]_S\right\rangle}
=\frac{f_J}{|W_J|}\left\langle \Omega_I,\Omega_\alpha\cap [\ensuremath{\mathbf{P}}_J]_S\right\rangle,
\end{equation*}
where the final equality is from \cref{lem:mult}.
Consider first $\alpha\not\in J$.
Following \cref{thm:chevalley}, we have $\Omega_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S=0$,
and hence $c_{\alpha I}^J=0$.
Consider now $\alpha\in J$.
Recall from \cref{prop:duality} that $\left\langle\Omega_I,[\ensuremath{\mathbf{P}}_K]_S\right\rangle=0$ unless $I=K$.
Further, by \cref{thm:chevalley}, the only $[\ensuremath{\mathbf{P}}_I]_S$ appearing in the expansion of $\Omega_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S$
correspond to $I=J$ or $I=J\backslash\{\gamma\}$ for some $\gamma\in J$.
Thus $c_{\alpha I}^J=0$ unless $I=J$ or $I=J\backslash\{\gamma\}$ for some $\gamma\in J$.
For $J=I$, we have,
\begin{align*}
c_{\alpha I}^I
&=\frac{ f_I}{|W_I|}\left\langle\Omega_I, \Omega_\alpha\cap [\ensuremath{\mathbf{P}}_I]_S\right\rangle\\
&=\frac{f_I}{|W_I|}\left\langle \Omega_I, \left\langle2\rho_I^\vee,\varpi^I_\alpha\right\rangle t\,[\ensuremath{\mathbf{P}}_I]_S
+ \sum\limits_{\substack{\beta\in I\\ K=I\backslash\{\beta\}}} \left\langle\varpi^{I\vee}_\beta,\varpi^I_\alpha\right\rangle\frac{|W_I|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S \right\rangle \\
&=\frac{f_I}{|W_I|}\left\langle2\rho_I^\vee,\varpi^I_\alpha\right\rangle t\,\left\langle \Omega_I,[\ensuremath{\mathbf{P}}_I]_S\right\rangle\\
&= \left\langle2\rho_I^\vee,\varpi^I_\alpha\right\rangle t.
\end{align*}
In the case where $J=I\sqcup\{\gamma\}$ for some $\gamma\in\Delta$,
we have
\begin{align*}
c_{\alpha I}^J&=
\frac{ f_J}{|W_J|}\left\langle \Omega_I,\Omega_\alpha\cap [\ensuremath{\mathbf{P}}_J]\right\rangle\\
&=\frac{f_J}{|W_J|}\left\langle \Omega_I, \left\langle2\rho_J^\vee,\varpi^J_\alpha\right\rangle t\,[\ensuremath{\mathbf{P}}_J]_S
+ \sum\limits_{\substack{\beta\in J\\ K=J\backslash\{\beta\}}} \left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S\right\rangle
\\
&=\frac{f_J}{|W_J|}\frac{|W_J|}{|W_I|}
\left\langle\varpi^{J\vee}_\gamma,\varpi^J_\alpha\right\rangle
\left\langle \Omega_I,
[\ensuremath{\mathbf{P}}_I]_S
\right\rangle \\
&= \left\langle\varpi^{J\vee}_\gamma,\varpi^J_\alpha\right\rangle
\frac{f_J}{f_I}.
\end{align*}
\end{proof}
\begin{example}
Consider $\Delta=B_3$, and let $I=\{1,2\}\subset\Delta$.
\begin{center}
\begin{tikzpicture}
\def-4.3{-1.45}
\def0.6{0.15}
\def.7{.7}
\def6{3}
\foreach \x in {1,...,6}
{
\draw (.7*\x-.7+-4.3 , 0.6 ) circle (.1cm);
\draw (.7*\x-1*.7+-4.3 , 0.6-.5*.7 ) node {$\scriptscriptstyle{\pgfmathparse{int(\x)}\pgfmathresult}$};
}
\foreach \x in {3,...,6} \draw[xshift=3] (.7*\x-3*.7+-4.3 , 0.6) --+(.7-.2,0);
\draw[xshift=3,yshift=1] (.7*6-2*.7+-4.3 , 0.6) --+(.7-.2,0);
\draw[xshift=3,yshift=-1] (.7*6-2*.7+-4.3 , 0.6) --+(.7-.2,0);
\draw (.7*6-1.4*.7+-4.3 , 0.6) --+(-.2*.7,.2*.7);
\draw (.7*6-1.4*.7+-4.3 , 0.6) --+(-.2*.7,-.2*.7);
\draw[fill=blue!80] (-4.3 , 0.6 ) circle (.1cm);
\draw[fill=blue!80] (-4.3+.7 , 0.6 ) circle (.1cm);
\end{tikzpicture}
\end{center}
We compute the product $\Omega_{2}\Omega_I$.
By \cref{thm:monk},
\begin{align*}
\Omega_{2}\Omega_I&=
2\left\langle\rho^\vee_I,\varpi_{2}^I\right\rangle t \Omega_I+
\sum\limits_{\substack{\gamma\in\Delta\backslash I\\J=I\cup\{\gamma\}}}
\dfrac{f_J}{f_I}\left\langle\varpi_\gamma^{J\vee},\varpi^J_{2}\right\rangle \Omega_J\\
&=
2\left\langle\rho^\vee_I,\varpi_{2}^I\right\rangle t \Omega_I+
\dfrac{f_{\Delta}}{f_I} \left\langle\varpi_{3}^{\vee},\varpi^{}_{2}\right\rangle \Omega_\Delta.
\end{align*}
Since the subdiagram $I$ is isomorphic to $A_2$,
the term $\left\langle\rho_I^\vee,\varpi_2^I\right\rangle$ is calculated in $A_2$.
We have $\rho_I^\vee=\frac12(\alpha_1^\vee+\alpha_2^\vee+(\alpha_1^\vee+\alpha_2^\vee))=\alpha_1^\vee+\alpha_2^\vee$,
and hence $\langle\rho_I^\vee,\varpi_2^I\rangle=1$.
The term $\left\langle\varpi_3^\vee,\varpi_2\right\rangle$ is the coefficient of $\alpha_3$ in the expansion of $\varpi_2$ as a sum of simple roots.
Recall the usual realization of the $B_3$ root system inside a $3$-dimensional vector space $V=\left\langle \epsilon_1,\epsilon_2,\epsilon_3\right\rangle$,
given by $\alpha_1=\epsilon_1-\epsilon_2$, $\alpha_2=\epsilon_2-\epsilon_3$, and $\alpha_3=\epsilon_3$.
The fundamental weight $\varpi_2$ is given by
\begin{align*}
\varpi_2=\epsilon_1+\epsilon_2=\alpha_1+2\alpha_2+2\alpha_3,
\end{align*}
and hence $\left\langle\varpi_3^\vee,\varpi_2\right\rangle=2$.
The connection indices are
\begin{align*}
f_I=\det\begin{pmatrix}2&-1\\-1&2\end{pmatrix}=3,&&
f_\Delta=\det\begin{pmatrix}2&-1&0\\-1&2&-2\\0&-1&2\\\end{pmatrix}=2.
\end{align*}
Hence, we have $\Omega_2\Omega_I=2t\Omega_I+\frac43\Omega_\Delta$.
\end{example}
\begin{rem}
\label{rem:monk}
Fix a Coxeter element $v_I$ for each $I\subset\Delta$,
and set $p_{v_I}=i^*\sigma_{v_I}^S$.
It is common in the literature to work with the basis $\set{p_{v_I}}{I\subset\Delta}$.
We see from \cref{Giambelli} that $\set{p_{v_I}}{I\subset\Delta}$ and $\set{\Omega_I}{I\subset\Delta}$
are related by a diagonal change of basis matrix.
This allows us to translate \cref{thm:monk} into a Monk rule for the basis $\set{p_{v_I}}{I\subset\Delta}$,
\begin{align*}
p_\alpha p_{v_I}=
\begin{cases}
\dfrac{(|I|+1)R(v_I)}{R(v_{I\cup\{\alpha\}})}p_{v_{I\cup\{\alpha\}}}
&\text{if }\alpha\not\in I,\\
2
\left\langle\rho^\vee_I,\varpi_\alpha\right\rangle t p_{v_I}+
\sum\limits_{\substack{\gamma\in\Delta\backslash I\\J=I\cup\{\gamma\}}}
\dfrac{|J|f_J R(v_I)}{f_I R(v_J)}
\left\langle\varpi_\gamma^{J\vee},\varpi^J_\alpha\right\rangle p_{v_J}
&\text{if }\alpha\in I.
\end{cases}
\end{align*}
\end{rem}
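\begin{example}
In the $B_3$ example above, take $v_I=s_1s_2$ and $v_\Delta=s_1s_2s_3$; each admits a unique reduced word, so $R(v_I)=R(v_\Delta)=1$. The Monk rule in the basis $\set{p_{v_I}}{I\subset\Delta}$ then reads
\begin{align*}
p_2\,p_{v_I}=2t\,p_{v_I}+\dfrac{3f_\Delta}{f_I}\left\langle\varpi_3^{\vee},\varpi_2\right\rangle p_{v_\Delta}=2t\,p_{v_I}+4\,p_{v_\Delta},
\end{align*}
which is consistent, via \cref{Giambelli}, with the identities $p_{v_I}=\frac12\Omega_I$, $p_{v_\Delta}=\frac16\Omega_\Delta$, and $\Omega_2\Omega_I=2t\Omega_I+\frac43\Omega_\Delta$.
\end{example}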
\section{Equivariant (Co)Homology}
\label{sec:cohomology}
Let $X$ be a complex algebraic variety equipped with a left action of a torus $T$.
We recall aspects of the $T$-equivariant homology and cohomology of $X$.
We will use the Borel model of equivariant cohomology,
and equivariant Borel-Moore homology,
following the setup in Graham's paper \cite{graham:positivity}.
We refer to \cite[Ch~19]{fulton:IT}, \cite[Appendix~B]{fulton:young}, and \cite[\S 2.6]{chriss.ginzburg}
for more details about cohomology and Borel-Moore homology.
\emph{We will study (co)homology with rational coefficients}.
Recall that the cohomology ring $H^*_T(pt)$ of a point is naturally identified with
$\operatorname{Sym}(\ensuremath{\mathfrak h}^*)$,
the symmetric algebra of the dual of the Lie algebra of $T$.
The morphism $X \to \{pt\}$ from $X$ to a point
gives the equivariant cohomology $H^*_T(X)$ the structure
of a graded algebra over $H^*_T(pt)$ via the pullback map $H_T^*(pt)\to H_T^*(X)$.
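For instance, if $T\cong(\mathbb C^*)^n$, then the coordinate characters $t_1,\dots,t_n$ form a basis of $\ensuremath{\mathfrak h}^*$, and
\begin{align*}
H^*_T(pt)\cong\operatorname{Sym}(\ensuremath{\mathfrak h}^*)\cong\mathbb Q[t_1,\dots,t_n],
\end{align*}
a polynomial ring in which each $t_i$ has cohomological degree $2$.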
In addition, the cap product
\begin{align*}
\cap: H^k_T(X)\times H_\ell^T(X)\to H_{\ell-k}^T(X)
\end{align*}
endows the
equivariant homology $H_*^T(X)$ with a graded module structure over $H^*_T(X)$.
Equivalently, there is a compatibility of cap and cup products given by
\begin{align*}
&&(a \cup b) \cap c = a \cap (b \cap c)
&&a,b \in H^*_T(X),\ c \in H_*^T(X).
\end{align*}
For any map $S\to T$ of tori,
we have a natural map of algebras $H^*_T(X) \to H^*_S(X)$,
compatible with the algebra map $H^*_T(pt)\to H^*_S(pt)$ induced by $Lie(T)^*\to Lie(S)^*$.
In particular, taking $S$ to be the trivial subgroup in $T$,
we obtain the restriction to ordinary cohomology,
$
H^*_T(X)\to H^*(X)
$.
\subsection{The integration pairing}
\label{sec:pairing}
Each irreducible, $T$-stable, closed subvariety $Z \subset X$ of complex dimension
$k$ has a fundamental class $[Z]_T \in H_{2k}^T(X)$. If $X$ is smooth and irreducible,
then there exists a unique class $\eta_Z \in H^{2 (\dim X - k)}_T(X)$,
called the Poincar{\'e} dual of $Z$,
such that \begin{equation*}\eta_Z \cap [X]_T = [Z]_T.\end{equation*}
Given a $T$-equivariant proper map $f:X\to Y$,
there is a push-forward $f_*:H_i^T(X) \to H_i^T(Y)$,
determined by the fact that if $Z \subset X$ is irreducible and $T$-stable,
then
\begin{align*}
f_*([Z]_T)=
\begin{cases}
d_Z[f(Z)]_T&\text{if }\dim f(Z)=\dim Z,\\
0&\text{if }\dim f(Z)<\dim Z,
\end{cases}
\end{align*}
where $d_Z$ is the generic degree of the restriction $f:Z \to f(Z)$.
The push-forward and pull-back are related by the projection formula
\begin{equation}
\label{projectionFormula}
f_* (f^*(\eta) \cap c) = \eta \cap f_*(c),
\end{equation}
for $\eta\in H^*_T(Y)$ and $c\in H^T_*(X)$.
Recall that we have an isomorphism
\begin{align}
\label{pointPair}
&&H^j_T(pt)\xrightarrow\sim H_{-j}^T(pt),&&
a\mapsto a\cap[pt]_T.
\end{align}
In particular, $H^*_T(pt)$ lives in non-negative degrees,
and $H_*^T(pt)$ lives in non-positive degrees.
Suppose now that $X$ is complete, so that $f: X \to pt$ is proper.
For a homology class $c \in H_{-j}^T(X)$, we denote by
$\int_X c$ the class $f_*(c) \in H_{-j}^T(pt)$,
viewed as an element of $H_T^{j}(pt)$ via \cref{pointPair}.
Then we may define a pairing,
\begin{equation}
\label{intPairing}
\langle\ ,\,\rangle : H^i_T(X) \times H_{-j}^T(X) \to H^{i+j}_T(pt); \quad
\langle \eta, c \rangle := \int_X \eta \cap c.
\end{equation}
The pairing in \cref{intPairing} is compatible with the pairing in ordinary (co)homology.
We have forgetful maps $H^*_T(X)\to H^*(X)$ and $H_*^T(X)\to H_*(X)$,
and a commutative diagram,
\begin{center}
\begin{tikzcd}
H^i_T(X) \mathop\times H_{-j}^T(X)\arrow[r,"\langle\ \text{,}\ \rangle"]\arrow[d]&H^{i+j}_T(pt)\arrow[d]\\
H^i(X) \mathop\times H_{-j}(X)\arrow[r,"\langle\ \text{,}\ \rangle"]& H^{i+j}(pt).
\end{tikzcd}
\end{center}
\subsection{Spaces with affine paving}
Following \cite[Ex~1.9.1]{fulton:IT} (see also \cite{graham:positivity}) we say that
a $T$-variety $X$ admits a {\em $T$-stable affine paving}
if it admits a filtration $X:=X_n \supset X_{n-1} \supset \ldots$ by closed $T$-stable subvarieties
such that each $X_i \setminus X_{i-1}$ is a finite disjoint union
of $T$-invariant varieties $U_{ij}$ isomorphic to affine spaces $\mathbb A^i$.
\begin{lemma}[cf. \cite{graham:positivity}]
\label{lemma:generate} Assume $X$ admits a $T$-stable affine paving, with cells $U_{ij}$.
\begin{enumerate}[label=(\alph*)]
\item
The equivariant homology $H_*^T(X)$ is a free $H^*_T(pt)$-module with
basis $\{[\overline{U_{ij}}]_T\}$.
\item
If $X$ is complete, the pairing from \cref{intPairing} is perfect,
and so we may identify
$H^*_T(X) = \operatorname{Hom}_{H^*_T(pt)}(H_*^T(X), H^*_T(pt))$.
\end{enumerate}
\end{lemma}
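\begin{example}
Take $X=\mathbb P^1$, with $T=\mathbb C^*$ acting by scaling the first homogeneous coordinate. The filtration $\mathbb P^1\supset\{0\}$ is a $T$-stable affine paving, with cells $\mathbb P^1\setminus\{0\}\cong\mathbb A^1$ and $\{0\}\cong\mathbb A^0$. Hence $H_*^T(\mathbb P^1)$ is a free $H^*_T(pt)$-module with basis $\{[\mathbb P^1]_T,[\{0\}]_T\}$.
\end{example}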
\subsection{Chern classes and Euler classes}
\label{sec:eulerClass}
We will denote by $c_i^T(\ )$ the $i^{\text{th}}$ $T$-equivariant Chern class,
and by $e^T(\ )$ the $T$-equivariant Euler class of a vector bundle.
We say that a section $s$ of a vector bundle $\mathcal V\to X$ is regular if the codimension of the zero set of $s$ equals the rank of $\mathcal V$.
Recall that the torus $T$ acts on the space $H^0(X,\mathcal V)$ of sections via the formula $(z\cdot s)(x) = zs(z^{-1}x)$ for all $x\in X$ and $z\in T$.
The sections which are invariant under this action are precisely those that intertwine the action,
i.e., satisfy $s(zx)=zs(x)$ for all $x\in X$ and $z\in T$.
\begin{lemma}{\rm (cf. \cite[Lemma 2.2]{graham2020cominuscule})}
\label{lem:eqSec}
If $\mathcal L$ is a $T$-equivariant line bundle on a $T$-scheme $X$,
and $s$ is a $T$-invariant regular section of $\mathcal L$ with zero-scheme $Y$,
then
\begin{equation*}
[Y]_T=c_1^T(\mathcal L)\cap[X]_T.
\end{equation*}
\end{lemma}
\begin{cor}
\label{cor:EulerClass}
If $\mathcal V$ is a $T$-equivariant vector bundle on a $T$-scheme $X$,
and $s$ is a $T$-invariant regular section of $\mathcal V$ with zero-scheme $Y$,
then $[Y]_T=e^T(\mathcal V)\cap[X]_T$.
\end{cor}
For $\lambda$ a character of $T$, let
$\underline{\mathbb C}_{\lambda} = X\times\mathbb C\to X$
denote the (geometrically trivial) equivariant line bundle,
with $T$-action given by $z(x,v)=(zx,\lambda(z)v)$ for all $z\in T$.
By a standard abuse of notation, we write $\lambda$ for the $T$-equivariant first Chern class of $\underline{\mathbb C}_{\lambda}$.
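For instance, if $\lambda$ and $\mu$ are characters of $T$, then
\begin{align*}
e^T\left(\underline{\mathbb C}_{\lambda}\oplus\underline{\mathbb C}_{\mu}\right)=\lambda\mu\in H^4_T(X),
\end{align*}
where $\lambda$ and $\mu$ are viewed in $H^2_T(X)$ via the pullback $H^*_T(pt)\to H^*_T(X)$.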
\begin{cor}
\label{cor:twistedSection}
Let $\mathcal V\to X$ be a $T$-equivariant vector bundle.
For a character $\lambda$ of $T$,
let $s$ be a regular section of $\mathcal V$ that lies in the $\lambda$-weight space, i.e.,
$(z\cdot s)=\lambda(z) s $ for all $z\in T$.
The zero scheme $Z(s)$ of $s$ is $T$-invariant, and we have
\begin{equation*}
[Z(s)]_T=e^{T}\left(\mathcal V\otimes\underline{\mathbb C}_{-\lambda}\right)\cap[X]_T.
\end{equation*}
If $\mathcal V$ admits a filtration with $T$-equivariant line bundle quotients $\{\mathcal L_i\}$,
we have
\begin{equation*}
[Z(s)]_T=\prod\left(c_{1}^{T}(\mathcal L_i)-\lambda\right)\cap[X]_T.
\end{equation*}
\end{cor}
\begin{proof}
If $\lambda$ is non-trivial, the section $s$ is not invariant.
Observe however that the section $r=s\otimes 1$
of the vector bundle $\mathcal V\otimes\underline{\mathbb C}_{-\lambda}$ is $T$-invariant, since
\begin{align*}
r(z x)&=s(z x)\otimes 1=\left(zz^{-1}s(zx)\right)\otimes 1\\
&= \left(z\left( \left(z^{-1}\cdot s\right)(x) \right)\right) \otimes 1 \\
&= \left(z\left( \lambda(z^{-1})s(x) \right)\right)\otimes 1\\
& = \left(\lambda(z^{-1})zs(x)\right)\otimes 1 \\
&=zs(x)\otimes \lambda(z^{-1})=z(s(x)\otimes 1)=zr(x).
\end{align*}
Further, $r$ has the same zero scheme as $s$, i.e., $Z(r)=Z(s)$.
Hence the first equality follows from \cref{cor:EulerClass}.
Suppose $\mathcal V$ admits a filtration by line bundles $\{\mathcal L_i\}$.
Then $\mathcal V\otimes\underline{\mathbb C}_{-\lambda}$
admits a filtration by line bundles $\{\mathcal L_i\otimes\underline{\mathbb C}_{-\lambda}\}$.
Applying the splitting principle, we have
\begin{align*}
e^T\left(\mathcal V\otimes\underline{\mathbb C}_{-\lambda}\right)
=\prod\left(c_1^{T}\left(\mathcal L_i\otimes\underline{\mathbb C}_{-\lambda}\right)\right)
=\prod\left(c_1^{T}(\mathcal L_i)-\lambda\right),
\end{align*}
from which the second equality follows.
\end{proof}
\subsection{Tables of Structure Constants}
\label{sec:examples}
Observe that $\left\langle\varpi_\gamma^{J\vee},\varpi_\alpha^J\right\rangle$ is precisely the coefficient of $\gamma$ in the expression of the fundamental weight $\varpi_\alpha^J$ as a sum of the simple roots in $J$.
These coefficients, and the connection indices of the Dynkin diagrams, are listed in \cite[Tables 2, 3]{onishchik.vinberg},
see also \cite[Ch.6, \S4]{bourbaki:Lie46}.
Using this, we can compute the structure constants in the Chevalley and Monk formulae.
For the reader's convenience, we tabulate the equivariant structure constants for the Monk and Chevalley formulae in types A--D.
The ordinary structure constants in the Monk rule for all Dynkin diagrams have also been recently computed and tabulated by Horiguchi in \cite[Table 2]{horiguchi2021mixed}.\footnote{Our structure constants $c_{iK}^J$ correspond to the numbers $m_{i,K}^J$ in \cite{horiguchi2021mixed}.}
We will denote by $c_{iJ}^K$ and $d_{iK}^J$ the structure constants given by
\begin{align*}
\Omega_{\alpha_i}\Omega_I=\sum c_{iI}^J\Omega_J,&&
p_{\alpha_i}\cap[\ensuremath{\mathbf{P}}_J]_S=\sum d_{iJ}^K[\ensuremath{\mathbf{P}}_K]_S
\end{align*}
respectively.
Following \cref{thm:chevalley,thm:monk}, for $i\in I$, we have
\begin{align*}
&&c_{iI}^I & =d_{iI}^I =\left\langle2\rho_I^\vee,\varpi^I_i\right\rangle t, & \\
&&c_{iI}^J & = \dfrac{f_J}{f_I}\left\langle\varpi_j^{J\vee},\varpi^J_i\right\rangle & J=I\sqcup\{j\},
\end{align*}
and for $i\in J$, we have
\begin{align*}
&&d_{iJ}^K=\left\langle\varpi^{J\vee}_j,\varpi^J_i\right\rangle \frac{|W_J|}{|W_K|},&& K=J\backslash\{j\}.
\end{align*}
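For instance, take $\Delta=A_2$ and $I=\{1\}$, so that $I\cong A_1$, with $\rho_I^\vee=\frac12\alpha_1^\vee$ and $\varpi_1^I=\frac12\alpha_1$; this gives $c_{1I}^I=\left\langle2\rho_I^\vee,\varpi_1^I\right\rangle t=t$. For $J=\Delta$ we have $f_I=2$, $f_J=3$, and $\left\langle\varpi_2^{\vee},\varpi_1\right\rangle=\frac13$, since $\varpi_1=\frac13(2\alpha_1+\alpha_2)$, and hence
\begin{align*}
\Omega_{\alpha_1}\Omega_I=t\,\Omega_I+\frac32\cdot\frac13\,\Omega_\Delta=t\,\Omega_I+\frac12\,\Omega_\Delta.
\end{align*}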
For the classical type Dynkin diagrams, these values are listed in \cref{MonkTable}.
The ordinary structure constants for the Chevalley formula are listed in \cref{TableChevalley}.
The structure constants not involving $t$, in the Monk rule, are listed (for types A--D) in \cref{ordCM}.
\input{tableC.tex}
\input{table2.tex}
\begin{example}
\label{ex2}
Consider $\Delta=B_3$, and $I=\{2,3\}$.
We compute the product $p_2\cap[\ensuremath{\mathbf{P}}_I]_S$ using \cref{MonkTable,TableChevalley}.
\begin{center}
\begin{tikzpicture}
\def\dx{-1.45}
\def\dy{0.15}
\def\s{.7}
\def\n{3}
\foreach \x in {2,...,\n}
{
\draw[fill=blue] (\s*\x-\s+\dx , \dy ) circle (.1cm);
\draw (\s*\x-1*\s+\dx , \dy-.5*\s ) node {$\scriptscriptstyle{\pgfmathparse{int(\x)}\pgfmathresult}$};
}
\draw (\dx , \dy-.5*\s ) node {$\scriptscriptstyle{\pgfmathparse{int(1)}\pgfmathresult}$};
\foreach \x in {3,...,\n} \draw[xshift=3] (\s*\x-3*\s+\dx , \dy) --+(\s-.2,0);
\draw[xshift=3,yshift=1] (\s*\n-2*\s+\dx , \dy) --+(\s-.2,0);
\draw[xshift=3,yshift=-1] (\s*\n-2*\s+\dx , \dy) --+(\s-.2,0);
\draw (\s*\n-1.4*\s+\dx , \dy) --+(-.2*\s,.2*\s);
\draw (\s*\n-1.4*\s+\dx , \dy) --+(-.2*\s,-.2*\s);
\draw (\dx , \dy ) circle (.1cm);
\end{tikzpicture}
\end{center}
We have
$
p_2\cap[\ensuremath{\mathbf{P}}_I]_S=d_{2I}^I[\ensuremath{\mathbf{P}}_I]
+d_{2I}^{\{2\}}[\ensuremath{\mathbf{P}}_{\{2\}}]_S+d_{2I}^{\{3\}}[\ensuremath{\mathbf{P}}_{\{3\}}]_S.
$
The subdiagram $I$ is isomorphic to $B_2$,
so the coefficient $d_{2I}^I=4t$ corresponds to $i=1$, $n=2$ in the second row of \cref{MonkTable}.
The coefficient $d_{2I}^{\{2\}}=4$ corresponds to $i=1$, $j=2$, and $n=2$ in the second row of \cref{TableChevalley}.
The coefficient $d_{2I}^{\{3\}}=4$ corresponds to $i=1$, $j=1$, and $n=2$ in the second row of \cref{TableChevalley}.
Hence, we have
\begin{align*}
p_2\cap[\ensuremath{\mathbf{P}}_I]_S=4t[\ensuremath{\mathbf{P}}_I]_S+4[\ensuremath{\mathbf{P}}_{\{2\}}]_S+4[\ensuremath{\mathbf{P}}_{\{3\}}]_S.
\end{align*}
\end{example}
Recall from \cref{thm:chevalley} that the coefficients in the Chevalley formula do not depend on the superset $\Delta$ containing $J$.
This phenomenon can be observed by comparing \cref{ex1} with \cref{ex2}.
\begin{example}
Consider $\Delta=D_6$, and let $I=\{3,4,5\}\subset\Delta$.
\begin{center}
\begin{tikzpicture}
\def\dx{-4.3}
\def\dy{0.6}
\def\s{.7}
\def\n{6}
\draw (\s*4-3*\s+\dx , \dy ) circle (.1cm);
\draw[xshift=3] (\s*4-4*\s+\dx , \dy) --+(\s-.2,0);
\draw (\s*4-3*\s+\dx , \dy-.5*\s ) node {$\scriptscriptstyle 2$};
\foreach \x in {5,...,\n}
{
\draw[fill=blue!80] (\s*\x-3*\s+\dx , \dy ) circle (.1cm);
\draw[xshift=3] (\s*\x-4*\s+\dx , \dy) --+(\s-.2,0);
\draw (\s*\x-3*\s+\dx , \dy-.5*\s ) node {$\scriptscriptstyle{\pgfmathparse{int(\x-2)}\pgfmathresult}$};
}
\draw (\dx , \dy ) circle (.1cm);
\draw (\dx , \dy-.5*\s ) node {$\scriptscriptstyle1$};
\draw (\dx+\n*\s-2*\s , \dy+\s ) node {$\scriptscriptstyle{\pgfmathparse{int(\n-1)}\pgfmathresult}$};
\draw (\dx+\n*\s-2*\s , \dy-\s ) node {$\scriptscriptstyle{\pgfmathparse{int(\n)}\pgfmathresult}$};
\draw[yshift=2,xshift=2] (\dx+\n*\s-3*\s , \dy) --+(\s*.53,\s*.53);
\draw[yshift=-2,xshift=2] (\dx+\n*\s-3*\s , \dy) --+(\s*.53,-\s*.53);
\draw (\dx+\n*\s-2.27*\s , \dy-.73*\s) circle (.1cm);
\draw[fill=blue!80] (\dx+\n*\s-2.27*\s , \dy+.73*\s) circle (.1cm);
\end{tikzpicture}
\end{center}
We compute the product $\Omega_{3}\Omega_I$ using \cref{MonkTable,ordCM}.
Observe first that for $\gamma\in\Delta\backslash I$, $J=I\cup\{\gamma\}$,
we have $\left\langle\varpi_\gamma^{J\vee},\varpi_\alpha^J\right\rangle=0$ if $\gamma$ is not connected to $I$.
We deduce that $c_{\alpha_3I}^{I\cup\{1\}}=0$, and that
\begin{align*}
\Omega_{3}\Omega_I&=c_{\alpha_3I}^I\Omega_I+c_{\alpha_3I}^{I\cup\{2\}}\Omega_{I\cup\{2\}}+c_{\alpha_3I}^{I\cup\{6\}}\Omega_{I\cup\{6\}}.
\end{align*}
The coefficient $c_{\alpha_3I}^I=3t$ corresponds to $i=1$, $n=3$ in the first row of \cref{MonkTable},
the coefficient $c_{\alpha_3I}^{I\cup\{6\}}=\frac12$ corresponds to $i=2$, $n=4$ in the sixth row of \cref{ordCM},
and the coefficient $c_{\alpha_3I}^{I\cup\{2\}}=\frac34$ corresponds to $i=3$ and $n=4$ in the first row of \cref{ordCM}.
Hence, we have
\begin{align*}
\Omega_{3}\Omega_I=3t\Omega_I+\frac34\Omega_{I\cup\{2\}}+\frac12\Omega_{I\cup\{6\}}.
\end{align*}
\end{example}
\section{The Giambelli formula and intersection multiplicities}
\label{sec:giambelli}
In this section,
we describe the relationship between $H^*(\ensuremath{G/B})$, $H^*(\ensuremath{\mathbf{P}})$, and $H^*(\ensuremath{\mathbf H(h,H_0)})$.
We use this description to compute the multiplicities $m(v_I)$ of \cref{prop:duality},
and to develop an equivariant Giambelli formula for \ensuremath{\mathbf{P}},
i.e., a formula expressing the pullback of Schubert classes as a polynomial in the divisor classes.
\subsection{Cohomology of regular Hessenberg varieties}
\label{sec:bc}
Let $H$ be any (indecomposable) Hessenberg space.
Klyachko \cite{klyachko85,klyachko95} and Tymoczko \cite{tymoczko2007permutation} have constructed an action of $W$ on $H^*(\mathbf H(h,H))$,
called the Tymoczko dot-action.
Recently, Brosnan and Chow \cite{brosnan-chow}, and B\u alibanu and Crooks \cite{balibanu2020perverse}
have identified the Tymoczko dot-action as a monodromy action.
We briefly explain some of their results here, following the exposition in \cite{balibanu2020perverse}.
Recall the map $\mu_H:G\times^BH\to\ensuremath{\mathfrak g}$ given by $(g,x)\mapsto Ad(g)x$.
Let $\ensuremath{\mathfrak g}^r$ denote the set of regular elements in \ensuremath{\mathfrak g}, and let $H^r=H\cap \ensuremath{\mathfrak g}^r$.
Following \cite[Sec 4]{balibanu2020perverse},
for any $x\in\ensuremath{\mathfrak g}$,
there exists a Euclidean open neighbourhood $D_x$ of $x$,
such that the (non-equivariant) inclusion $\mu_H^{-1}(x)\hookrightarrow\mu_H^{-1}(D_x)$ induces an isomorphism
\begin{equation}
\label{DxIso}
H^*(\mu_H^{-1}(D_x))\overset\sim\longrightarrow H^*(\mu_H^{-1}(x)).
\end{equation}
Let $Z=\mu_H^{-1}(D_x)\cap(G\times^BH^r)$,
let $s$ be a regular semisimple element contained in $D_x$,
and consider the commutative diagram
\begin{equation}
\label{bigDiag}
\begin{tikzcd}
G\times^BH\arrow[d]\arrow[r,hookleftarrow]&\mu_H^{-1}(D_x)\arrow[r,hookleftarrow]&Z\arrow[r,hookleftarrow]&\mathbf H(s,H)\\
G/B\arrow[r,hookleftarrow]&\mathbf H(x,H)\arrow[u,hook].
\end{tikzcd}
\end{equation}
Composing the induced pullback map $H^*(\mu_H^{-1}(D_x))\to H^*(\mathbf H(s,H))$
with the isomorphism \eqref{DxIso},
we obtain the so-called local invariant cycle map
\begin{align*}
\lambda_x:H^*(\mathbf H(x,H))\to H^*(\mathbf H(s,H))^W.
\end{align*}
Applying the local invariant cycle theorem of Beilinson, Bernstein, and Deligne \cite{MR751966},
see also \cite{MR3752459},
we have the following result.
\begin{prop}
\label{bbd}
The map $\lambda_x:H^*(\mathbf H(x,H))\to H^*(\mathbf H(s,H))^W$ is surjective.
\end{prop}
\begin{cor}
Consider the inclusion $f:\mathbf H(s,H)\to G/B$.
The image of the pullback $f^*:H^*(\ensuremath{G/B})\to H^*(\mathbf H(s,H))$ is precisely $H^*(\mathbf H(s,H))^W$.
\end{cor}
\begin{proof}
We apply \cref{bbd} to $x=0$, in which case $\mathbf H(x,H)=G/B$,
and
$\lambda_0$ is precisely the pullback for the inclusion $\mathbf H(s,H)\hookrightarrow G/B$.
\end{proof}
\begin{prop}
We have a commutative diagram,
\begin{equation}
\label{commTri}
\begin{tikzcd}
&H^*(\ensuremath{G/B})\arrow[dl,swap,"i^*",twoheadrightarrow]\arrow[d,twoheadrightarrow,dashed]\arrow[dr,"j^*"]&\\
H^*(\mathbf H(e,H))\arrow[r,"\sim"]&H^*(\mathbf H(h,H))^W\arrow[hook,r]&H^*(\mathbf H(h,H)).
\end{tikzcd}
\end{equation}
\end{prop}
\begin{proof}
For $x=e$, the map $\lambda_e:H^*(\mathbf H(e,H))\to H^*(\mathbf H(s,H))^W$ is an isomorphism,
cf. \cite[Prop. 4.7]{balibanu2020perverse}.
Following the construction of $\lambda_e$, we have the following commutative diagram,
\begin{center}
\begin{tikzcd}
&H^*(Z)\arrow[dl]\arrow[dr]&\\
H^*(\mathbf H(e,H))\arrow[rr,"\lambda_e"]&&H^*(\mathbf H(s,H))^W.
\end{tikzcd}
\end{center}
Following \cref{bigDiag}, the pullbacks $i^*:H^*(G/B)\to H^*(\mathbf H(e,H))$ and $j^*:H^*(G/B)\to H^*(\mathbf H(h,H))$ factor through the pullback $H^*(G/B)\to H^*(Z)$,
and hence we have a commutative diagram,
\begin{equation}
\label{work2}
\begin{tikzcd}
&H^*(\ensuremath{G/B})\arrow[dl,swap,"i^*",twoheadrightarrow]\arrow[d,twoheadrightarrow,dashed]\arrow[dr,"j^*"]&\\
H^*(\mathbf H(e,H))\arrow[r,"\sim"]&H^*(\mathbf H(s,H))^W\arrow[hook,r]&H^*(\mathbf H(s,H)).
\end{tikzcd}
\end{equation}
Finally, since $s$ and $h$ are both regular and semisimple,
there exists some $g\in G$ such that $Ad(g)s=h$.
The translation map $G/B\to G/B$ given by $g'B\mapsto gg'B$ induces the identity map on $H^*(G/B)$,
and sends $\mathbf H(s,H)$ to $\mathbf H(h,H)$.
Consequently, we have a commutative diagram,
\begin{equation}
\label{work3}
\begin{tikzcd}
&H^*(G/B)\arrow[dl]\arrow[dr]&\\
H^*(\mathbf H(s,H))\arrow[rr,"\sim"]&&H^*(\mathbf H(h,H)).
\end{tikzcd}
\end{equation}
\Cref{commTri} follows from \cref{work2,work3}.
\end{proof}
In the case $H=H_0$, we have $\ensuremath{\mathbf{P}}=\mathbf H(e,H_0)$, $\ensuremath{\mathbf H(h,H_0)}=\mathbf H(h,H_0)$,
hence \cref{commTri} yields an identification of $H^*(\ensuremath{\mathbf{P}})$ as the $W$-invariants of $H^*(\ensuremath{\mathbf H(h,H_0)})$.
Further, $i^*$ and $j^*$ have the same kernel,
hence \emph{any relation amongst the classes $j^*\sigma_w$ also holds amongst the classes $i^*\sigma_w$}.
We are now ready to prove the Giambelli and multiplicity formulae.
\begin{lemma}[Ordinary Giambelli Formula]
\label{klyachkoFormula}
Let $v_I$ be a Coxeter element for $I\subset\Delta$,
and let $R(v_I)$ be the number of reduced words for $v_I$.
We have
\begin{equation}
i^*\sigma_{v_I}=\frac{R(v_I)}{|I|!}\prod_{\alpha\in I}i^*\sigma_\alpha.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal R(v_I)$ denote the set of reduced words for $v_I$.
Following \cite{klyachko85,klyachko95} (see also \cite[Thm 8.1]{nadeau.tewari}),
we have
\begin{align*}
j^*\sigma_{v_I}=\frac{1}{\ell(v_I)!}\sum_{\underline v\in\mathcal R(v_I)}\prod_{s_\alpha\in\underline v}j^*\sigma_\alpha,
\end{align*}
where $\ell(\_)$ denotes the length function on $W$.
Since $v_I$ is a Coxeter element for $I$,
we have $\ell(v_I)=|I|$.
Further,
every reduced word $\underline v\in\mathcal R(v_I)$ contains each simple reflection $\set{s_\alpha}{\alpha\in I}$ exactly once.
Hence we have,
\begin{align*}\frac{1}{\ell(v_I)!}\sum_{\underline v\in\mathcal R(v_I)}\prod_{s_\alpha\in\underline v}j^*\sigma_\alpha
=\frac{R(v_I)}{|I|!}\prod_{\alpha\in I}j^*\sigma_\alpha.
\end{align*}
The claim now follows from \cref{commTri}.
\end{proof}
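\begin{example}
Let $\Delta=A_3$, $I=\Delta$, and $v_I=s_1s_3s_2$. Since $s_1$ and $s_3$ commute, $v_I$ has exactly two reduced words, $s_1s_3s_2$ and $s_3s_1s_2$, so $R(v_I)=2$, and \cref{klyachkoFormula} reads
\begin{align*}
i^*\sigma_{v_I}=\frac{2}{3!}\prod_{\alpha\in\Delta}i^*\sigma_\alpha=\frac13\,i^*\sigma_{\alpha_1}\,i^*\sigma_{\alpha_2}\,i^*\sigma_{\alpha_3}.
\end{align*}
\end{example}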
Recall that we denote by $p_w$ the pullback class $i^*\sigma_w^S$,
and by $p_\alpha$ the pullback class $i^*\sigma_{\alpha}^S$.
For convenience, we write the equivariant class
\begin{align*}
\Omega_I:=\prod_{\alpha\in I}p_\alpha = \prod_{\alpha\in I}i^*(\sigma^S_{\alpha})
\end{align*}
for any $I\subset\Delta$, where $i^*: H_S^*(G/B)\rightarrow H^*_S(\ensuremath{\mathbf{P}})$ is the pullback map in equivariant cohomology induced by the inclusion $i:\ensuremath{\mathbf{P}}\to G/B$.
\begin{thm}[Equivariant Giambelli formula]
\label{Giambelli}
Let $v_I$ be a Coxeter element for $I\subset\Delta$,
and let $R(v_I)$ be the number of reduced words for $v_I$.
We have
\begin{align*}
p_{v_I}=\frac{R(v_I)}{|I|!} \Omega_I.
\end{align*}
\end{thm}
\begin{proof}
Following \cref{stability}, we may assume $I=\Delta$.
We write $v=v_\Delta$.
Observe that the restriction to ordinary cohomology $H^*_S(\ensuremath{\mathbf{P}})\to H^*(\ensuremath{\mathbf{P}})$
is given by
\begin{align*}
\Omega_J\mapsto \prod_{\alpha\in J}i^*\sigma_\alpha.
\end{align*}
Following \cref{klyachkoFormula,prop:duality},
the set $\set{\prod_{\alpha\in J}i^*\sigma_\alpha}{J\subset\Delta}$ is linearly independent,
and hence so is the set $\set{\Omega_J}{J\subset\Delta}$.
Further, $H^*_S(\ensuremath{\mathbf{P}})$ is a free module over $\mathbb Q[t]$,
hence we may view $H^*_S(\ensuremath{\mathbf{P}})$ as a subset inside the $\mathbb Q(t)$ vector space $H^*_S(\ensuremath{\mathbf{P}})\otimes\mathbb Q(t)$.
We have
\begin{align*}
\dim H^*_S(\ensuremath{\mathbf{P}})\otimes{\mathbb Q(t)}=\dim H_*^S(\ensuremath{\mathbf{P}})\otimes{\mathbb Q(t)}=\#\set{\Omega_J}{J\subset\Delta},
\end{align*}
and hence $\set{\Omega_J}{\ J\subset \Delta}$ forms a basis of $H^*_S(\ensuremath{\mathbf{P}})\otimes{\mathbb Q(t)}$ over $\mathbb Q(t)$.
Consequently, there exist $c_{v}^J\in\mathbb Q[t]$ such that
\begin{align}
\label{pvExpansion}
p_v=i^*\sigma_{v }^{S}=\sum_{J\subset\Delta} c_{v }^J\Omega_J,
\end{align}
where the $c_v^J$ are $t$-monomials of degree $|\Delta|-|J|$.
Applying the specialization $H^*_S(\ensuremath{\mathbf{P}})\to H^*(\ensuremath{\mathbf{P}})$,
we deduce from \cref{klyachkoFormula} that
\begin{align*}
c_{v }^\Delta=\frac{R(v )}{|\Delta|!}.
\end{align*}
It remains to show that $c_{v}^J=0$ for all $J\subsetneq\Delta$.
Fix $J\subsetneq\Delta$, and
consider the inclusion $\iota_J:\ensuremath{\mathbf{P}}_J\hookrightarrow\ensuremath{\mathbf{P}}$
and the corresponding pull-back $\iota_J^*:H^*_S(\ensuremath{\mathbf{P}})\to H^*_S(\ensuremath{\mathbf{P}}_J)$.
Following \cite[Thm. 6.6(b)]{gms:peterson}, $v=v_\Delta$ implies $\iota_J^*p_v=0$ for $J\neq\Delta$,
and
\begin{align*}
\iota_J^*p_\alpha &=\begin{cases} p_\alpha&\text{for }\alpha\in J,\\ 0 &\text{otherwise.}\end{cases}
\end{align*}
Applying $\iota_J^*$ to \cref{pvExpansion},
we obtain a system of equations indexed by $J\subsetneq\Delta$,
\begin{align*}
&&&&\sum_{K\subset J} c_{v }^K\Omega_K=0 &&\text{in }H^*_S(\ensuremath{\mathbf{P}}_J).
\end{align*}
Since the $\Omega_K$ are linearly independent in $H^*_S(\ensuremath{\mathbf{P}}_J)$,
we deduce that $c_{v }^J=0$.
\end{proof}
\begin{lemma}
\label{lem:mult}
We have
$
\left\langle\Omega_\Delta,[\ensuremath{\mathbf{P}}]_S\right\rangle=\frac{|W|}{f_\Delta},
$
where $f_\Delta$ is the connection index of $\Delta$.
\end{lemma}
\begin{proof}
Observe that since $\deg\left([\ensuremath{\mathbf{P}}]_S\right)=\deg\left(\Omega_\Delta\right)=|\Delta|$,
we have
$
\left\langle\Omega_\Delta,[\ensuremath{\mathbf{P}}]_S\right\rangle=
\left\langle\prod_{\alpha\in\Delta} i^*\sigma_{\alpha},[\ensuremath{\mathbf{P}}]\right\rangle,
$
where the latter expression is the pairing in ordinary (co)homology.
Applying the forgetful maps $H_*^T(G/B)\to H_*(G/B)$ and $H_*^S(G/B)\to H_*(G/B)$ to the equations in \cref{HessClass},
we deduce that $[\ensuremath{\mathbf{P}}]=[\ensuremath{\mathbf H(h,H_0)}]$ in $H_*(\ensuremath{G/B})$.
It follows that
\begin{equation*}
\left\langle\Omega_\Delta,[\ensuremath{\mathbf{P}}]_S\right\rangle=
\left\langle\prod_{\alpha\in\Delta} i^*\sigma_\alpha,[\ensuremath{\mathbf{P}}]\right\rangle=
\left\langle\prod_{\alpha\in\Delta} i^*\sigma_\alpha,[\ensuremath{\mathbf H(h,H_0)}]\right\rangle=\frac{|W|}{f_\Delta},
\end{equation*}
where the latter equality is from \cite[Thm 3]{klyachko85}.
\end{proof}
\begin{thm}[Multiplicity Formula]
\label{thm:mult}
For $v_I$ a Coxeter element of $I$,
we have
\begin{equation*}
\left\langle p_{v_I},[\ensuremath{\mathbf{P}}_I]_S\right\rangle=\frac{R(v_I)|W_I|}{|I|!\,f_I}.
\end{equation*}
\end{thm}
\begin{proof}
Since $m(v_I)$ does not depend on the diagram $\Delta$ containing $I$,
we may assume $I=\Delta$.
Using \cref{Giambelli,lem:mult}, we obtain
\begin{equation*}
\left\langle p_{v_I},[\ensuremath{\mathbf{P}}_I]_S\right\rangle
=\frac{R(v_I)}{|I|!}\left\langle\Omega_I,[\ensuremath{\mathbf{P}}]_S\right\rangle=\frac{R(v_I)|W_I|}{|I|!\,f_I}.
\end{equation*}
\end{proof}
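\begin{example}
If $I\cong A_2$ and $v_I$ is any Coxeter element for $I$, then $R(v_I)=1$, $|W_I|=6$, and $f_I=3$, so
\begin{align*}
\left\langle p_{v_I},[\ensuremath{\mathbf{P}}_I]_S\right\rangle=\frac{1\cdot 6}{2!\cdot 3}=1.
\end{align*}
\end{example}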
\section{Introduction}
Let $G$ be a complex semisimple Lie group
corresponding to a Dynkin diagram $\Delta$.
Let $B$ and $B^-$ be a pair of opposite Borel subgroups in $G$,
and let $T:= B \cap B^-$ be the corresponding maximal torus in $G$.
We identify $\Delta$ with the set of simple roots of $(G,B,T)$.
The quotient $G/B$ is the associated flag manifold, with a left $T$ action.
The $T$-equivariant homology $H_*^T(G/B)$ of $G/B$ has a basis consisting of the fundamental classes of {\em Schubert varieties}, which are $B$-orbit closures in $G/B$.
Since $G/B$ is smooth, there exists a dual basis of the $T$-equivariant cohomology $H_T^*(G/B)$
given by Schubert classes $\{\sigma_w: w\in W\}$, where $W$ is the Weyl group.
This basis enjoys {\em Graham positivity}, i.e., the structure constants $c_{uv}^w\in H_T^*(pt)$ defined by
\begin{align*}
\sigma_u\sigma_v = \sum_{w\in W} c_{uv}^w \sigma_w
\end{align*}
are polynomials with non-negative coefficients in the simple roots $\alpha\in\Delta$, for all $u,v,w\in W$.
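For example, for $G=SL_2(\mathbb C)$ with simple root $\alpha$, we have $G/B\cong\mathbb P^1$ and $W=\{e,s\}$, and the only nontrivial product of Schubert classes is
\begin{align*}
\sigma_s\sigma_s=\alpha\,\sigma_s,
\end{align*}
so the single nontrivial structure constant is $c_{ss}^s=\alpha$, which is visibly Graham positive.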
This paper is concerned with a class of subvarieties of $G/B$ called {\em Hessenberg varieties}, with a particular focus on the {\em Peterson variety}, a regular nilpotent Hessenberg variety.
Hessenberg varieties were first introduced by De Mari \cite{MR2636295} as part of the study of certain matrix decomposition algorithms.
They were generalized outside type A by De Mari, Procesi, and Shayman \cite{MR1043857},
who further identified the permutahedral variety, a toric variety studied by Klyachko \cite{klyachko85,klyachko95}, as a particular Hessenberg variety.
The Peterson variety is a flat degeneration of the permutahedral variety.
Peterson \cite{peterson:notes} and Kostant \cite{kostant:flag} showed that the coordinate ring of a particular open affine subvariety of the Peterson variety is isomorphic to the quantum cohomology ring of the flag variety; see also \cite{rietsch:totally}.
The Peterson variety admits a natural $\mathbb C^*$ action.
In \cite{gms:peterson}, Goldin, Mihalcea, and Singh show that $\mathbb C^*$-equivariant Peterson Schubert calculus also satisfies Graham positivity; see \cref{eq:pstruct} below.
The equivariant cohomology of Peterson varieties in all Lie types is described in \cite{harada.horiguchi.masuda:Peterson}, using generators and relations.
Drellich \cite{drellich:monk} found a Giambelli formula for certain Coxeter elements (see \cref{intro:Giambelli}) using a type-by-type analysis.
Horiguchi \cite{horiguchi2021mixed} obtained a Monk rule for ordinary cohomology using similar methods.
In type A, an equivariant Monk rule was developed by Harada and Tymoczko \cite{harada.tymoczko:Monk}. Goldin and Gorbutt \cite{goldin.gorbutt:PetSchubCalc} subsequently found positive combinatorial formulae for all equivariant structure constants for the Peterson variety in type A.
In this paper, we prove an equivariant Giambelli formula for all Coxeter elements, in any Lie type, using type-independent methods.
We also prove an equivariant Monk rule (\cref{intro:thm:monk}) and a dual equivariant Chevalley formula (\cref{intro:thm:chevalley}), both of which use a pairing of equivariant cohomology and homology for the Peterson variety (see \cref{intro:thm:mult}).
We denote by $\phi$, $\phi^+$, and $W$ the set of roots, set of positive roots, and Weyl group, respectively.
Let $\ensuremath{\mathfrak g}=Lie(G)$, $\ensuremath{\mathfrak b}=Lie(B)$, $\ensuremath{\mathfrak h}=Lie(T)$,
and let $\ensuremath{\mathfrak g}_\alpha\subset\ensuremath{\mathfrak g}$ denote the root space corresponding to $\alpha\in\phi$.
Consider the subspace
\begin{align*}
H_0=\ensuremath{\mathfrak b}\oplus\bigoplus_{\alpha\in\Delta}\ensuremath{\mathfrak g}_{-\alpha}.
\end{align*}
A $B$-stable subspace $H$ of \ensuremath{\mathfrak g}\ containing $H_0$ is called an \emph{indecomposable Hessenberg space}.
For $x\in\ensuremath{\mathfrak g}$,
we have a corresponding \emph{Hessenberg variety},
\begin{align*}
\mathbf H(x,H)=\set{gB\in G/B}{Ad(g^{-1})x\in H}.
\end{align*}
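For instance, in type $A_{n-1}$, with $G=SL_n(\mathbb C)$ and $B$ the subgroup of upper-triangular matrices, an indecomposable Hessenberg space corresponds to a nondecreasing function $h:\{1,\dots,n\}\to\{1,\dots,n\}$ with $h(i)\geq i+1$ for $i<n$, and the Hessenberg variety becomes a variety of flags,
\begin{align*}
\mathbf H(x,H)=\set{V_\bullet\in G/B}{x V_i\subseteq V_{h(i)}\text{ for all }i},
\end{align*}
where $V_\bullet=(V_1\subset\cdots\subset V_{n-1}\subset\mathbb C^n)$ denotes a complete flag.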
The Hessenberg variety $\mathbf H(x,H)$ admits an action by the projective centralizer of $x$,
given by $
\widetilde C_G(x)=\set{g\in G}{Ad(g)x=\lambda x\text{ for some }\lambda\in\mathbb C}.
$
Fix non-zero elements $e_\alpha\in\ensuremath{\mathfrak g}_\alpha$ for $\alpha\in\Delta$,
and let
\begin{align*}
h=\sum_{\alpha\in\phi^+}\alpha^\vee\in\ensuremath{\mathfrak h},&&
e=\sum_{\alpha\in\Delta} e_\alpha.
\end{align*}
The Hessenberg variety $\ensuremath{\mathbf{P}}:=\mathbf H(e,H_0)$ is called the \emph{Peterson variety},
and the Hessenberg variety $\ensuremath{\mathbf H(h,H_0)}:=\mathbf H(h,H_0)$
is called the \emph{permutahedral variety}.
Recall that $h$ is a regular semisimple element, with $\widetilde C_G(h)=T$,
and hence $T$ acts on $\mathbf H(h,H)$.
Let $\mathfrak s=\mathbb C h$ be the one-dimensional Lie algebra spanned by $h$,
and let $S\subset T$ be the one-dimensional torus corresponding to $\mathfrak s$.
Since $[h,e]=2e$, we have $S\subset \widetilde C_G(e)$,
and hence an $S$-action on $\ensuremath{\mathbf{P}}=\mathbf H(e,H_0)$.
Let $t$ be the image of a simple root $\alpha$ under the natural restriction $\ensuremath{\mathfrak h}^*\to\mathfrak s^*$.
The element $t$ is independent of the choice of $\alpha$,
and further, $H^*_S(pt;\mathbb Q)=\mathbb Q[t]$.
Under the mild assumption $H_0\subset H$,
Abe, Fujita, and Zeng \cite[Cor. 3.9]{abe.fujita.zeng} have identified the Poincar\'e dual of the fundamental class $[\mathbf H(x,H)]$ in ordinary cohomology
as the Euler class of the vector bundle $G\times^B(\ensuremath{\mathfrak g}/H)$.
Our first result (\cref{HessClass}) is an extension of their result to the equivariant setup.
For $\lambda$ a character of $T$,
we denote by $\mathcal L_\lambda\to G/B$ the line bundle $\mathcal L_\lambda=G\times^B\mathbb C_{-\lambda}$.
Let $c_1^T$ (resp. $c_1^S$) denote the $T$ (resp. $S$)-equivariant first Chern class.
\begin{thm}
\label{intro:HessClass}
Suppose $H_0\subset H$.
We have the following equalities in the equivariant homology of the flag manifold $G/B$:
\begin{align*}
[\mathbf H(e,H)]_S&=\prod\limits\left(c_1^S(\mathcal L_\alpha)-t\right)\cap[\ensuremath{G/B}]_S,\\
[\mathbf H(h,H)]_T&=\prod\left(c_1^T(\mathcal L_\alpha)\right)\cap[\ensuremath{G/B}]_T,
\end{align*}
where the product is over the set $\set{\alpha\in\phi}{\ensuremath{\mathfrak g}_{-\alpha}\not\subset H}$.
\end{thm}
Recall the Schubert classes $\sigma^S_w\in H^{\ell(w)}_S(G/B)$,
Poincar\'e dual to the Schubert varieties $X^w=\overline{B^-wB/B}$.
Let $i^*:H_S^*(G/B)\to H_S^*(\ensuremath{\mathbf{P}})$ be the pullback induced by the inclusion $i:\ensuremath{\mathbf{P}}\to G/B$,
and let $p_v=i^*\sigma^S_v$.
For convenience, we write $\sigma_\alpha=\sigma_{s_\alpha}$ and $p_\alpha=i^*\sigma^S_\alpha$ for $\alpha\in\Delta$.
An element $v_I\in W$ is called a \emph{Coxeter element} for $I\subset\Delta$
if each simple reflection $s_\alpha$, $\alpha\in I$, appears exactly once in a reduced expression of $v_I$.
Fix a Coxeter element $v_I$ for each $I\subset\Delta$.
In \cite{gms:peterson} we prove that $\set{p_{v_I}}{I\subset\Delta}$ is a basis for $H^*_S(\ensuremath{\mathbf{P}})$ in all Lie types, and that the structure constants $c_{IJ}^K\in H_S^*(pt)$ defined by the equation
\begin{equation}\label{eq:pstruct}
p_{v_I}p_{v_J} = \sum_{K\subset \Delta} c_{IJ}^K p_{v_K}
\end{equation}
are polynomials in $t$ with non-negative coefficients.
Our next result (\cref{Giambelli})
is an equivariant Giambelli formula,
expressing the pullback of a \emph{Coxeter Schubert class} as a polynomial in the divisor classes $p_\alpha$.
\begin{thm}[Giambelli formula]
\label{intro:Giambelli}
Let $v_I$ be a Coxeter element for $I$,
let $R(v_I)$ be the number of reduced words for $v_I$,
and let $\Omega_I=\prod_{\alpha\in I}p_\alpha\in H^*_S(\ensuremath{\mathbf{P}})$.
We have
\begin{align}
\label{intro:eq:Giambelli}
p_{v_I}=\frac{R(v_I)}{|I|!} \Omega_I.
\end{align}
\end{thm}
\Cref{intro:eq:Giambelli} was first obtained by Drellich \cite{drellich:monk}
for a particular choice of Coxeter element $v_I$ for each $I$,
using a type-by-type analysis
and the localization formulae of Andersen, Jantzen, and Soergel \cite{AJS},
and of Billey \cite{billey:kostant}.
Our proof has the benefit of being type independent, and of working for all Coxeter elements.
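The combinatorial input $R(v_I)$ in \cref{intro:eq:Giambelli} is easy to compute by brute force in small rank. The following Python sketch (our illustration, not part of the paper's arguments) counts reduced words of a Coxeter element in type $A$, where simple reflections act as adjacent transpositions; since each generator occurs exactly once, any reordering with the same product has the same length and is automatically reduced.

```python
from functools import reduce
from itertools import permutations

def compose(a, b):
    # Permutation product (a*b)(i) = a(b(i)); permutations as tuples of images.
    return tuple(a[b[i]] for i in range(len(a)))

def simple_reflection(i, n):
    # Adjacent transposition s_{i+1} in S_n, swapping slots i and i+1.
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def num_reduced_words(word, n):
    # `word` lists each generator index exactly once (a Coxeter element), so
    # every reordering with the same product is again a reduced word.
    gens = [simple_reflection(i, n) for i in range(n - 1)]
    target = reduce(compose, (gens[i] for i in word))
    return sum(1 for w in permutations(word)
               if reduce(compose, (gens[i] for i in w)) == target)

# In S_4: v = s1 s2 s3 has a unique reduced word, while for v = s1 s3 s2
# the commuting pair s1, s3 may be swapped, so R(v) = 2.
print(num_reduced_words([0, 1, 2], 4))  # 1
print(num_reduced_words([0, 2, 1], 4))  # 2
```

For $v_I=s_1s_3s_2$ in $S_4$ this gives $R(v_I)=2$, so the Giambelli coefficient is $R(v_I)/|I|!=2/3!=1/3$.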
The Peterson variety admits a cell-stratification,
$
\ensuremath{\mathbf{P}}=\bigsqcup_{I\subset\Delta}\ensuremath{\mathbf{P}}_I^\circ,
$
see \cite{tymoczko:paving,precup,balibanu:peterson}.
Consequently, we have a natural basis for the equivariant homology $H_*^S(\ensuremath{\mathbf{P}})$,
given by the fundamental classes $\set{[\ensuremath{\mathbf{P}}_I]_S}{I\subset\Delta}$.
Let
\begin{align*}
\left\langle\ ,\,\right\rangle:H^*_S(\ensuremath{\mathbf{P}})\times H_*^S(\ensuremath{\mathbf{P}})\to H^*_S(pt)
\end{align*}
be the pairing given by equivariant integration,
i.e., $\left\langle\omega,[Z]\right\rangle=\int_{[Z]}\omega$.
Following \cite[Thm 1.1]{gms:peterson},
if $v_I$ is a Coxeter element for some $I\subset\Delta$,
then
\begin{align}
\label{intro:dual}
\left\langle p_{v_I},[\ensuremath{\mathbf{P}}_J]_S\right\rangle=m(v_I)\delta_{IJ}
\end{align}
for some positive integer $m(v_I)$.
In particular, $\set{p_{v_I}}{I\subset\Delta}$ is a basis of $H_S^*(\ensuremath{\mathbf{P}})$,
dual (up to scaling) to the fundamental class basis $\set{[\ensuremath{\mathbf{P}}_I]}{I\subset\Delta}$.
The multiplicities $m(v_I)$, which depend on the choice of $v_I$ but not on the ambient $\Delta$,
were calculated for certain Coxeter elements $v_I$ in \cite[Thm 1.3]{gms:peterson}.
We prove in \cref{thm:mult} a general formula for $m(v_I)$, for any Coxeter element $v_I$,
conjectured in \cite{gms:peterson}.
Let $C_I$ be the Cartan matrix of $I$.
Recall that $f_I=\det(C_I)$ is called the \emph{connection index} of $I$, see \cite{bourbaki:Lie46}.
\begin{thm}[Multiplicity Formula]
\label{intro:thm:mult}
Let $v_I$ be a Coxeter element of $I$,
and let $R(v_I)$ denote the number of reduced expressions for $v_I$.
We have
\begin{equation*}
m(v_I)=\left\langle p_{v_I},[\ensuremath{\mathbf{P}}_I]_S\right\rangle=\frac{R(v_I)|W_I|}{|I|!\,f_I}.
\end{equation*}
\end{thm}
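As a quick numerical sanity check (ours, not part of the paper's arguments), take $I$ of type $A_3$, so that $W_I=S_4$ with $|W_I|=24$ and $f_I=\det(C_{A_3})=4$. The Coxeter element $s_1s_2s_3$ has a unique reduced word, while $s_1s_3s_2$ has two, since $s_1$ and $s_3$ commute.

```python
from fractions import Fraction
from math import factorial

def multiplicity(R, weyl_order, rank, f):
    # m(v_I) = R(v_I) |W_I| / (|I|! f_I), kept exact with Fraction.
    return Fraction(R * weyl_order, factorial(rank) * f)

# Type A_3: |W_I| = 24, |I| = 3, f_I = 4 (determinant of the A_3 Cartan matrix).
print(multiplicity(1, 24, 3, 4))  # 1  (v_I = s1 s2 s3, R = 1)
print(multiplicity(2, 24, 3, 4))  # 2  (v_I = s1 s3 s2, R = 2)
```

In both cases the result is a positive integer, as required by \cref{intro:dual}.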
Let us say a few words about the proofs of \cref{intro:Giambelli,intro:thm:mult}.
The Hessenberg varieties $\mathbf H(x,H)$,
as $x$ varies over the set of regular elements in $\mathfrak g$,
form a flat family of subvarieties in $G/B$,
see \cite[Prop. 6.1]{abe.fujita.zeng}.
Further, following the work of Brosnan and Chow \cite{brosnan-chow},
and B\u alibanu and Crooks \cite{balibanu2020perverse},
we have a commutative diagram (see \cref{sec:bc}),
\begin{equation}
\label{intro:commTri}
\begin{tikzcd}
&H^*\ensuremath{G/B}\arrow[dl,swap,"i^*",twoheadrightarrow]\arrow[d,twoheadrightarrow,dashed]\arrow[dr,"j^*"]&\\
H^*(\ensuremath{\mathbf{P}})\arrow[r,"\sim"]&H^*(\ensuremath{\mathbf H(h,H_0)})^W\arrow[hook,r]&H^*(\ensuremath{\mathbf H(h,H_0)}).
\end{tikzcd}
\end{equation}
Here $j^*:H^*(G/B)\to H^*(\ensuremath{\mathbf H(h,H_0)})$ is the pullback induced by the inclusion $j:\ensuremath{\mathbf H(h,H_0)}\to G/B$.
This allows us to relate computations in the ordinary cohomology of the Peterson variety
to corresponding computations on the ordinary cohomology of the permutahedral variety.
In \cite{klyachko85} (see also \cite{nadeau.tewari}),
Klyachko presented a Giambelli formula expressing the pullback class
$j^*\sigma_v$ of any Schubert class as a polynomial in the divisor classes $j^*\sigma_{s_\alpha}$,
\begin{align}
\label{intro:eq:klyachko}
j^*\sigma_v=\frac{1}{\ell(v)!}\sum_{\underline v\in\mathcal R(v)}\prod_{s_\alpha\in\underline v}j^*\sigma_{s_\alpha}.
\end{align}
Here $\mathcal R(v)$ is the set of reduced expressions for $v$,
and the product is over all occurrences of $s_\alpha$ in the reduced expression $\underline v$.
Following \cref{intro:commTri}, the same relation holds amongst the $i^*\sigma_v$ and $i^*\sigma_\alpha$ (in ordinary cohomology).
\Cref{intro:Giambelli} is an equivariant version of \cref{intro:eq:klyachko}.
We then use \cref{intro:Giambelli} and the duality (\cref{intro:dual})
to reduce the calculation of $m(v_I)$ in \cref{intro:thm:mult} to the non-equivariant integral
$\int_\ensuremath{\mathbf H(h,H_0)}\prod_{\alpha\in\Delta}\sigma_{s_\alpha}$,
which can be found in \cite{klyachko85}.
In our final results, we develop a \emph{Chevalley formula} (\cref{thm:chevalley}),
and a dual \emph{Monk rule} (\cref{thm:monk}).
A Monk rule for the ordinary cohomology of \ensuremath{\mathbf{P}}\ was recently obtained by Horiguchi \cite{horiguchi2021mixed}.
The equivalence of the Chevalley formula and the Monk rule is a consequence of \cref{intro:dual,intro:thm:mult}.
Recall that for any Dynkin subdiagram $J\subset\Delta$,
we have unique elements $\varpi_\alpha^J$ in the weight lattice of $J$,
called the fundamental weights,
satisfying $\left\langle\beta^\vee,\varpi_\alpha^J\right\rangle=\delta_{\alpha\beta}$ for all $\beta\in J$.
Similarly, we have fundamental coweights $\varpi_\alpha^{J\vee}$ in the coweight lattice of $J$,
dual to the roots $\alpha\in J$.
We write $\varpi_\alpha$ (resp. $\varpi_\alpha^\vee$) for the fundamental weights (resp. coweights) for $\Delta$.
In general, we have
$\varpi_\alpha^I\neq\varpi_\alpha$ and
$\varpi^{I\vee}_\alpha\neq\varpi^\vee_\alpha$, see \cref{sec:stab}.
\begin{thm}[Equivariant Chevalley formula]
\label{intro:thm:chevalley}
For $\alpha\in\Delta$, $J\subset\Delta$, we have
\begin{align*}
p_\alpha\cap[\ensuremath{\mathbf{P}}_J]_S=
\begin{cases}
0 &\text{if }\alpha\not\in J,\\
\left\langle2\rho_J^\vee,\varpi_\alpha\right\rangle t\,[\ensuremath{\mathbf{P}}_J]_S+
\sum\limits_{\substack{\beta\in J\\ K=J\backslash\{\beta\}}}
\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle
\frac{|W_J|}{|W_K|}[\ensuremath{\mathbf{P}}_K]_S
&\text{if }\alpha\in J.
\end{cases}
\end{align*}
Here $\rho_J^\vee=\frac 12\sum_{\alpha\in\phi_J^+}\alpha^\vee$ is one-half the sum of the positive coroots supported on $J$,
and $W_J$ and $W_K$ are the Weyl subgroups of the Dynkin diagrams $J$ and $K$ respectively.
\end{thm}
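The pairings $\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle$ appearing above can be computed from the inverse Cartan matrix: with the convention $a_{\alpha\beta}=\left\langle\beta^\vee,\alpha\right\rangle$ of \cref{sec:stab}, one checks that $\left\langle\varpi^{J\vee}_\beta,\varpi^J_\alpha\right\rangle=(C_J^{-1})_{\alpha\beta}$. The following Python sketch (ours, using exact rationals) carries this out for $J$ of type $A_2$.

```python
from fractions import Fraction

def inverse(m):
    # Gauss-Jordan elimination over exact rationals for a small integer matrix.
    n = len(m)
    a = [[Fraction(m[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        a[col] = [x / a[col][col] for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                a[r] = [x - a[r][col] * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

# J of type A_2: the pairing <varpi_beta^{J,vee}, varpi_alpha^J> is the
# (alpha, beta) entry of the inverse Cartan matrix C_J^{-1}.
C_A2 = [[2, -1], [-1, 2]]
print([[str(x) for x in row] for row in inverse(C_A2)])
# -> [['2/3', '1/3'], ['1/3', '2/3']]

# For the subdiagram I = {alpha} of type A_1 the same diagonal pairing is 1/2.
print(str(inverse([[2]])[0][0]))  # -> 1/2
```

Comparing the diagonal entry $2/3$ for $J$ of type $A_2$ with the value $1/2$ for the subdiagram $I=\{\alpha\}$ of type $A_1$ also illustrates the instability of the fundamental (co)weights discussed in \cref{sec:stab}.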
It is common in the literature
(see, for example, \cite{drellich:monk,insko.tymoczko:intersection.theory,goldin.gorbutt:PetSchubCalc})
to fix a Coxeter element $v_I$ for each $I\subset\Delta$,
and to work with the basis $\set{p_{v_I}}{I\subset\Delta}$ of $H^*_S(\ensuremath{\mathbf{P}})$.
Following \cref{intro:Giambelli}, $\set{\Omega_I:=\prod_{\alpha\in I}p_\alpha}{I\subset\Delta}$ is also a basis of $H_S^*(\ensuremath{\mathbf{P}})$.
We develop a Monk rule for the basis $\{\Omega_I\}$, resulting in a formula that does not depend on a choice of Coxeter element for each $I$. For the reader's convenience, we present the Monk rule for the basis $\set{p_{v_I}}{I\subset\Delta}$ in \cref{rem:monk}.
\begin{thm}[Equivariant Monk Rule]
\label{intro:thm:monk}
For $\alpha\in\Delta$, we have
\begin{align*}
\Omega_\alpha \Omega_I=
\begin{cases}
\Omega_{I\cup\{\alpha\}}
&\text{if }\alpha\not\in I,\\
2\left\langle\rho^\vee_I,\varpi_\alpha\right\rangle t \Omega_I+
\sum\limits_{\substack{\gamma\in\Delta\backslash I\\J=I\cup\{\gamma\}}}
\dfrac{f_J}{f_I}\left\langle\varpi_\gamma^{J\vee},\varpi^J_\alpha\right\rangle \Omega_J
&\text{if }\alpha\in I.
\end{cases}
\end{align*}
\end{thm}
Consider $\alpha\in\Delta$, and let $I=\Delta\backslash\{\alpha\}$.
A key step in the proof of
\cref{intro:thm:chevalley} is the following formula (see \cref{qChevalley}):
\begin{align}
\label{intro:qChevalley}
(c_1^S(i^*\mathcal L_\alpha)-t)\cap[\ensuremath{\mathbf{P}}]_S=\frac{|W|}{|W_I|}[\ensuremath{\mathbf{P}}_I]_S.
\end{align}
As a further consequence of \cref{intro:qChevalley}, we
obtain a new proof of the description of $H^*_S(\ensuremath{\mathbf{P}})$ by generators and relations
developed by Harada, Horiguchi, and Masuda \cite{harada.horiguchi.masuda:Peterson}; see \cref{hhm}.
Let us now outline the organization of the paper.
In \cref{sec:cohomology}, we recall some results on the equivariant cohomology of spaces with affine paving,
as developed by Edidin, Graham, and Kreiman in \cite{edidin.graham,graham:positivity,graham2020cominuscule}.
In \cref{sec:preliminaries},
we recall some results on root systems, flag manifolds, and Schubert varieties.
In \cref{sec:Hessenberg}, we describe Hessenberg varieties,
and compute the Poincar\'e dual of the fundamental class of a regular Hessenberg variety as a polynomial in the Chern classes of line bundles
(\cref{intro:HessClass}).
We also recall from \cite{tymoczko:paving,precup,gms:peterson} some results on Peterson varieties.
In \cref{sec:giambelli}, we describe the relationship between the ordinary cohomology rings
$H^*(\mathbf H(e,H))$, $H^*(\mathbf H(h,H))$, and $H^*(G/B)$,
following ideas developed by Brosnan and Chow \cite{brosnan-chow},
and further refined by B{\u{a}}libanu and Crooks \cite{balibanu2020perverse}.
We then use these ideas to prove the Giambelli formula (\cref{intro:Giambelli})
and the multiplicity formula (\cref{intro:thm:mult}).
Finally, in \cref{sec:chevalley},
we prove the Chevalley formula (\cref{intro:thm:chevalley}),
its dual Monk rule (\cref{intro:thm:monk}),
and recover the Harada-Horiguchi-Masuda presentation of $H^*_S(\ensuremath{\mathbf{P}})$.
We also tabulate the structure constants appearing in the Chevalley and Monk formulae,
and present some examples applying these formulae.
\emph{Acknowledgements:}
We would like to thank Ana B\u alibanu for explaining the results and consequences of \cite{balibanu2020perverse} to us.
These explanations were foundational in the development of \cref{sec:bc}.
We would also like to thank Leonardo Mihalcea for some very illuminating discussions.
Computer calculations in service of this paper were coded in SageMath \cite{sagemath}.
Parts of this work were conducted while RS was at Virginia Tech,
and parts while at ICERM.
RS gratefully acknowledges the support of these institutions.
\section{Flag Manifolds}
\label{sec:preliminaries}
Fix a complex semisimple Lie group $G$,
opposite Borel subgroups $B, B^- \subset G$,
and let $T= B \cap B^-$ be the common maximal torus.
We will further assume that $G$ is simply connected;
this ensures that all line bundles on the flag manifold $G/B$ are $T$-equivariant.
Denote by $\Delta$ the set of simple roots associated to $(G,B,T)$,
by $\phi$ the set of roots and by $\phi^+ \subset \phi$ the subset of positive roots,
by $s_\alpha$ the simple reflections for $\alpha\in\Delta$,
and by $W$ the Weyl group of $G$.
Recall also the connection index $f$ of $\Delta$,
which equals the determinant of the Cartan matrix of $\Delta$.
For $I\subset \Delta$, we denote by $\phi_I$, $\phi^+_I$, $W_I$, and $f_I$
the set of roots, positive roots, Weyl group, and the connection index of $I$ respectively.
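For the reader experimenting with these notions, the connection indices of a few small types can be obtained directly as determinants of the corresponding Cartan matrices (a Python sketch of ours; the matrices follow one common convention, and the determinant is unchanged by transposition).

```python
def det(m):
    # Laplace expansion along the first row; adequate for small Cartan matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Connection indices f = det(Cartan matrix) for a few small types.
A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # f = 4
B3 = [[2, -1, 0], [-1, 2, -1], [0, -2, 2]]   # f = 2
G2 = [[2, -1], [-3, 2]]                      # f = 1
print(det(A3), det(B3), det(G2))  # 4 2 1
```

These values match the tables of connection indices in \cite{bourbaki:Lie46}.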
\subsection{Flag manifolds and Schubert varieties}
\label{sec:Schubert}
The flag manifold $\ensuremath{G/B}$ is a projective algebraic manifold
with a transitive action of $G$ given by left multiplication.
It has a stratification into finitely many $B$-orbits
(resp. $B^-$-orbits) called the {\em Schubert cells}
$X_w^\circ:= BwB/B$
(resp. $X^{w,\circ}:= B^- wB/B $),
i.e.,
\begin{equation}
\label{schubStrat}
\ensuremath{G/B} = \bigsqcup_{w \in W} X_w^\circ = \bigsqcup_{w \in W} X^{w,\circ} \/.
\end{equation}
The closures $X_w:=\overline{X_w^\circ}$ and $X^w:=\overline{X^{w,\circ}}$ are called {\em Schubert varieties}.
The \emph{Bruhat order} is a partial order on $W$ characterized by inclusions of Schubert varieties,
i.e., $X_v \subset X_w$ if and only if $v \le w$,
and $X^w\subset X^v$ if and only if $v\leq w$.
Following \cref{lemma:generate}, the fundamental classes
$\set{[X_v]_T}{v\leq w}$ (resp. $\set{[X^v]_T}{w\leq v}$) form a basis of $H_*^T(X_w)$ (resp. $H_*^T(X^w)$).
The cohomology classes $\sigma_v^T\in H_T^*(\ensuremath{G/B})$ Poincar\'e dual to the $[X^v]_T$,
i.e. characterized by the equation $\sigma_v^T\cap[\ensuremath{G/B}]_T=[X^v]_T$,
are called {\em Schubert classes}.
Following \cref{lemma:generate},
the Schubert classes $\set{\sigma_v^T}{v\in W}$ form a basis of $H_T^*(G/B)$ as a module over $H_T^*(pt)$.
\subsection{Line bundles on the flag manifold}
\label{sec:lineBundles}
Recall that since $G$ is simply connected,
the character group $\X T$ of $T$ equals the weight lattice of $\Delta$.
For $\lambda\in\X T$,
let $\mathbb C_\lambda$ be the one-dimensional $B$-representation on which $T$ acts via the character $\lambda$.
We will denote by $\mathcal L_\lambda$ the $T$-equivariant line bundle
\begin{align*}
\mathcal L_\lambda:=G\times^B\mathbb C_{-\lambda}\to G/B,&&(g,v)\mapsto gB,
\end{align*}
with $T$-action given by $t\cdot (g,v)=(tg,v)$.
\subsection{Stability of Dynkin diagrams}
\label{sec:stab}
Let $a_{\alpha\beta}=\left\langle\beta^\vee,\alpha\right\rangle$ denote the $(\alpha,\beta)$ entry of the Cartan matrix.
For $I\subset\Delta$, the Cartan matrix of $I$ is the submatrix of the Cartan matrix of $\Delta$ spanned by the rows and columns indexed by the roots in $I$.
In particular, the pairing $\langle\ ,\,\rangle$ on $\phi_I^\vee\times\phi_I$ is the restriction of the pairing $\phi^\vee\times\phi$
to $\phi_I\subset\phi$ and $\phi^\vee_I\subset\phi^\vee$.
We describe this by saying that the roots and coroots are stable for the inclusion of Dynkin diagrams.
Consider the elements $\varpi^I_\alpha\in\bigoplus\limits_{\alpha\in I}\mathbb Q\alpha$
and $\varpi^{I\vee}_\alpha\in \bigoplus\limits_{\alpha\in I}\mathbb Q\alpha^\vee$
given by the equations
\begin{align*}
&&\langle\varpi^{I\vee}_\alpha,\beta\rangle=\langle\beta^\vee,\varpi^I_\alpha\rangle =\delta_{\alpha\beta}&&\forall\beta\in I.
\end{align*}
Then $\varpi_\alpha:=\varpi_\alpha^\Delta$ is the fundamental weight dual to $\alpha^\vee$,
and $\varpi_\alpha^\vee:=\varpi_\alpha^{\Delta\vee}$ is the fundamental coweight dual to the root $\alpha$.
In general,
\begin{align*}
&&\varpi_\alpha^I\neq\varpi_\alpha&&
\text{and}&&
\varpi^{I\vee}_\alpha\neq\varpi^\vee_\alpha.&&
\end{align*}
We express this fact by saying that the fundamental weights and coweights are \emph{not stable} for the inclusion of Dynkin diagrams.
\subsection{The height function}
\label{htFn}
Let
\begin{align*}
\rho_I=\frac 12\sum_{\alpha\in\phi_I^+}\alpha=\sum_{\alpha\in I}\varpi_\alpha^I, && \rho_I^\vee=\frac 12\sum_{\alpha\in\phi_I^+}\alpha^\vee=\sum_{\alpha\in I}\varpi_\alpha^{I\vee}.
\end{align*}
We set $\rho=\rho_\Delta$, and $\rho^\vee=\rho^\vee_\Delta$.
Following \cite[Ch~6, Prop 29]{bourbaki:Lie46},
we have $\langle \rho^\vee,\alpha\rangle=1$ for $\alpha\in\Delta$.
For $\lambda=\sum_{\alpha\in\Delta}a_\alpha\alpha$,
we define the height of $\lambda$ to be
\begin{align*}
ht(\lambda)=\sum a_\alpha=\langle\rho^\vee,\lambda\rangle.
\end{align*}
Let $h=2\rho^\vee$, and let $\mathfrak s\subset\ensuremath{\mathfrak g}$ be the Lie subalgebra spanned by $h$.
Observe that $h$ is in the coroot lattice,
and hence there exists a one-dimensional sub-torus $S\subset T$ with $Lie(S)=\mathfrak s$.
For any $\alpha,\beta\in\Delta$, we have $\left\langle h,\alpha\right\rangle=\left\langle h,\beta\right\rangle$,
and hence $\alpha|\mathfrak s=\beta|\mathfrak s$.
Let $t=\alpha|\mathfrak s$ for some $\alpha\in\Delta$.
The restriction map $\ensuremath{\mathfrak h}^*\to\mathfrak s^*$ (dual to the inclusion $\mathfrak s\hookrightarrow\ensuremath{\mathfrak h}$)
satisfies $\alpha\mapsto t$ for all $\alpha\in\Delta$,
and hence is given by
$\lambda\mapsto ht(\lambda)t=\left\langle\rho^\vee,\lambda\right\rangle t$.
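As a small consistency check (ours) in type $A_3$: summing the positive (co)roots and pairing against the simple roots via the Cartan matrix recovers $ht$ and the identity $\left\langle h,\alpha\right\rangle=2$, i.e. $\left\langle\rho^\vee,\alpha\right\rangle=1$.

```python
# Positive roots of A_3, written in the basis of simple roots.
pos_roots = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1)]

def ht(coeffs):
    # Height: sum of the simple-root coefficients.
    return sum(coeffs)

# h = 2 rho^vee is the sum of the positive coroots; in type A the coroots
# mirror the roots, so its coordinate vector is the coefficient-wise sum.
h = [sum(r[i] for r in pos_roots) for i in range(3)]

# Pairing <alpha_j^vee, alpha_i> = a_{ij}, the Cartan matrix entry.
A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
pair_h = [sum(h[j] * A3[i][j] for j in range(3)) for i in range(3)]

print(h)         # [3, 4, 3]
print(pair_h)    # [2, 2, 2]  -> <h, alpha> = 2, i.e. <rho^vee, alpha> = 1
print([ht(r) for r in pos_roots])  # [1, 1, 1, 2, 2, 3]
```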
\section{Introduction}\label{Sec:intro}
The lattice symmetry gives rise to a common and important feature of all crystalline magnetically-ordered materials: the magnetocrystalline anisotropy (MCA). The MCA determines such parameters of a magnetic medium as the magnetization direction, magnetic resonance frequencies, coercive fields, etc. Since the MCA relates directly to the lattice, applying stress to a magnetic medium allows one to modify the static and dynamical magnetic properties of the latter. This effect, known as inverse magnetostriction or the Villari effect, was discovered at the end of the 19th century and is widely used in both fundamental research and applications. Inverse magnetostriction plays a tremendous role in scaling magnetic devices down, e.g., in tailoring the MCA parameters of nanometer ferromagnetic films by a properly chosen lattice mismatch with the substrate and in providing a sensing mechanism in microelectromechanical systems (MEMS). Recently, the importance of inverse magnetostriction was also recognized in the emerging field of ultrafast magnetism, focused on manipulating the magnetic state of matter on the (sub-)picosecond time scale \cite{Kirilyuk-RMP2010}.
Magnetization control via inverse magnetostriction at ultrashort timescales is based on the techniques for generating picosecond strain pulses in solids developed in picosecond ultrasonics \cite{Thomsen-PRL1984}. When an opaque medium is subjected to a pico- or femtosecond laser pulse, the light absorption and the following rapid increase of the lattice temperature induce thermal stress in a surface region. This results in the generation of a picosecond strain pulse with a spatial size down to 10\,nm and a broad acoustic spectrum (up to 100\,GHz), which propagates from the excited surface as a coherent acoustic wavepacket. It has been demonstrated experimentally that injection of such a strain pulse into a thin ferromagnetic film can modify the MCA and trigger the precessional motion of magnetization \cite{Scherbakov-PRL2010}. This experiment has initiated intense experimental and theoretical research activities \cite{Scherbakov-PRL2010,Bombeck-PRB2012,Kim-PRL2012,Jager-APL2013,Yahagi-PRB2014,Afanasiev-PRL2014,Jager-PRB2015,Janusonis-APL2015,Janusonis-SciRep2016} in what is now referred to as ultrafast magnetoacoustics.
The high interest in ultrafast magnetoacoustics is driven by a number of features which are specific to the interaction between coherent acoustic excitations and magnetization, and which do not occur when other ultrafast stimuli are employed. The wide range of generated acoustic frequencies overlaps with the range of magnetic resonances in magnetically-ordered media. Furthermore, thin films and nanostructures possess specific magnetic and acoustic modes, and matching their frequencies and wavevectors may drastically increase their coupling efficiency \cite{Bombeck-PRB2012}. Finally, there is a well-developed theoretical and computational apparatus for high-precision modeling of the spatial-temporal evolution of the strain pulse and the respective modulation of the MCA \cite{Linnik-PRB2011,Tas-PRB1994,Wright-IEEE1995}. These advantages, however, may be exploited only if the strain-induced effects are not obscured by other processes triggered by direct ultrafast laser excitation.
Generally, there are two main approaches to singling out the strain-induced impact on magnetization. The first one is spatial separation, when the response of the magnetization to the strain pulses is monitored at the sample surface opposite to the one excited by the laser pulse. It has been used in a number of experiments with various ferromagnetic materials \cite{Scherbakov-PRL2010,Bombeck-PRB2012,Kim-PRL2012,Jager-APL2013}. The irrefutable advantage of such an approach is that the laser-induced heating of the magnetic medium is eliminated due to the spatial separation of the laser-impact area and the magnetic specimen. An alternative approach employs spectral selection instead, when the initially generated strain with a broad spectrum is converted into a monochromatic acoustic excitation. In this case the efficiency of its interaction with the ferromagnetic material is controlled by an external magnetic field, which shifts the magnetic resonance frequency. This approach was realized in experiments with a ferromagnetic layer embedded into an acoustic Fabry-Perot resonator \cite{Jager-PRB2015} and by means of lateral patterning of the ferromagnetic film or of the optical excitation, resulting in excitation of surface acoustic waves \cite{Yahagi-PRB2014,Janusonis-APL2015,Janusonis-SciRep2016}.
Very recently we have demonstrated that the strain-induced impact on the MCA can be reliably traced even in a ferromagnetic film excited directly by a femtosecond laser pulse, despite the complexity of the laser-induced electronic, lattice and spin dynamics emerging in this case \cite{Kats-PRB2016}. Here we present an overview of our recent experimental and theoretical studies of the ultrafast strain-induced effects in ferromagnetic galfenol films, where the dynamical strain serves as a versatile tool to control the MCA. Magnetization precession serves in these experiments as the macroscopic manifestation of ultrafast changes of the MCA. We demonstrate the modulation of the MCA and the corresponding response of the magnetization in two different experimental approaches, when the \textit{strain pulses} are injected into the film from the substrate, and when strain with a \textit{step-like temporal profile} is optically generated directly in the ferromagnetic film. In the case of direct optical excitation we also compare the strain-induced change of the MCA to the conventional change of anisotropy via the optically-induced heating emerging in this case, demonstrate that these two contributions can be unambiguously distinguished, and suggest the regimes at which either of them dominates. In these studies we have utilized the specific MCA of a low-symmetry magnetostrictive galfenol film grown on a (311)-GaAs substrate, which enables the generation of dynamical strain of mixed, compressive and shear, character, as compared to the pure compressive strain in high-symmetry structures.
The paper is organized as follows. In Sec.\,\ref{Sec:exp} we describe the sample under study and the three experimental geometries which enable us to investigate ultrafast changes of the magnetic anisotropy. In Sec.\,\ref{Sec:theoryMag} we describe phenomenologically the magnetocrystalline anisotropy of the (311) galfenol film and consider how it can be altered on an ultrafast timescale. The following Sec.\,\ref{Sec:theoryAc} is devoted to the generation of dynamical strain in metallic films of low symmetry. In Secs.\,\ref{Sec:expAc} and \ref{Sec:extOpt} we present experimental results and an analysis of the magnetization precession triggered by a purely acoustical pump and by direct optical excitation, and demonstrate that even in the latter case the optically-generated strain may be the dominant impact, allowing ultrafast manipulation of the MCA.
\section{Experimental}\label{Sec:exp}
\subsection{Sample}
A film of the galfenol alloy Fe$_{0.81}$Ga$_{0.19}$ (thickness $d_\mathrm{FeGa}$=100\,nm) was grown on a (311)-oriented GaAs substrate ($d_\mathrm{GaAs}$=100\,$\mu$m) (Fig.\,\ref{Fig:Exp}(a)). As was shown in our previous works \cite{Scherbakov-PRL2010,Jager-APL2013}, a magnetic film of this composition and a thickness of 100\,nm facilitates a strong response of the magnetization to picosecond strain pulses. The film was deposited by DC magnetron sputtering at a power of 22\,W in an Ar pressure of 1.6\,mTorr. The GaAs substrate was first prepared by etching in dilute hydrochloric acid before baking at 773\,K in vacuum. The substrate was cooled down to 298\,K prior to deposition. Detailed x-ray diffraction studies \cite{Bowe-thesis} revealed that the film is polycrystalline, and that the misorientation of the crystallographic axes of the crystallites, whose average size was a few nanometers, did not exceed a few degrees. Therefore, the studied film can be treated as a single crystalline one. The equilibrium value of the saturation magnetization is $M_s$=1.59\,T \cite{Restorff-JAP2012}. SQUID measurements confirmed that the easy magnetization axis is oriented in the film plane along the [0$\bar{1}$1] crystallographic direction ($y$-axis). In our experiments an external DC magnetic field \textbf{B} was applied in the sample plane along the magnetization hard axis, which lies along the [$\bar{2}$33] crystallographic direction ($x$-axis). In this geometry the magnetization \textbf{M} aligns with the applied field if the strength of the latter exceeds $B$=150\,mT. At lower field strengths the magnetization points along an intermediate direction between the $x$- and $y$-axes.
\subsection{Experimental techniques}
Three experimental geometries were used in order to explore the impact of dynamical strain on the MCA of the galfenol film. First, experiments were performed with the dynamical strain being the only stimulus acting on the galfenol film (Fig.\,\ref{Fig:Exp}(b)). A 100-nm thick Al film was deposited on the back side of the GaAs substrate and was utilized as an optoacoustic transducer to inject picosecond strain pulses into the substrate \cite{Thomsen-PRB1986}. The 100-fs optical pump pulses with a central wavelength of 800\,nm, generated by a Ti:sapphire regenerative amplifier, were incident on the Al film, inducing a rapid increase of its temperature. As a result, as discussed in detail in Sec.\,\ref{Sec:theoryAc}, picosecond strain pulses were injected into the GaAs substrate. These pulses propagated through the substrate, reached the (Fe,Ga) film, modified its MCA and triggered the magnetization precession.
\begin{figure}
\includegraphics[width=8.6cm]{Fig1v7.eps}
\caption{(Color online) (a) Schematic presentation of the galfenol film grown on the (311) GaAs substrate. $x'$-, $y'$- and $z'$-axes are directed along the crystallographic [100], [010] and [001] axes, respectively. DC magnetic field \textbf{B} is applied along the [$\bar{2}$33] crystallographic direction, which is the hard magnetization axis. (b-d) Experimental geometries. (b) The optical pump pulses excite 100\,nm thick Al film on the back of the GaAs substrate, thus generating strain pulses injected into the substrate. They act as the acoustical pump triggering the magnetization precession in the galfenol film. The precession is detected by monitoring the rotation of polarization plane of the probe pulses reflected from the galfenol film. (c) The optical pump pulses excite the galfenol film directly. The propagating strain pulses are detected by monitoring polarization rotation for the probe pulses, which penetrate into the GaAs substrate. (d) The optical pump pulses excite the galfenol film directly, increasing the lattice temperature and generating the dynamical strain in the film. Excited magnetization precession is detected by monitoring the rotation of polarization of the probe pulses reflected from the galfenol film. Experiment (b) was performed at $T=$20\,K, experiments (c,d) were performed at room temperature.}
\label{Fig:Exp}
\end{figure}
The probe pulses, split from the same beam, were incident on the (Fe,Ga) film at an angle close to 0, and the time-resolved polar magneto-optical Kerr effect (TRMOKE) was measured. In this experimental geometry, the TRMOKE rotation angle $\beta_K$ is directly proportional to the out-of-plane deviation of the magnetization $\Delta M_z$ induced by the pump:
\begin{equation}
\Delta \beta_K(t)=\left[\sqrt{\varepsilon_0}(\varepsilon_0-1)\right]^{-1}\chi_{xyz}\Delta M_z(t),\label{Eq:TRMOKE}
\end{equation}
where $\varepsilon_0$ is the diagonal component of the dielectric permittivity tensor of (Fe,Ga) at the probe wavelength, and $\chi_{xyz}$ is the magneto-optical susceptibility at the same wavelength, which enters the off-diagonal dielectric permittivity component as $i\varepsilon_{xy}=i\chi_{xyz} M_z$ \cite{Zvezdin-book}. By normalizing the TRMOKE rotation by the static one at saturation ($\beta_K^s\sim M_s$), one gets a measure of the deviation of the magnetization out of the sample plane, $\Delta M_z(t)/M_s=\Delta\beta_K(t)/\beta_K^s$. These experiments were performed at $T$=20\,K. The choice of low temperature in this experiment was dictated by the fact that it prevents attenuation of the higher-frequency components of the strain pulses in the GaAs substrate \cite{Chen-PhMag1994}, thus allowing excitation of precession at high frequencies in relatively high applied magnetic fields.
The second and third types of experiments (Fig.\,\ref{Fig:Exp}(c,d)) were conducted in the geometry where the (Fe,Ga) film was directly excited by the optical pump pulses. In this geometry there were two contributions to the change of the MCA: (i) direct modification of the MCA due to heating \cite{Carpene-PRB2010} and (ii) inverse magnetostrictive effects (see Sec.\,\ref{Sec:theoryMag} for details). In these experiments we used 170-fs pump and probe pulses at a 1030-nm wavelength generated by a Yb:KGd(WO$_4$)$_2$ regenerative amplifier. These experiments were performed at room temperature.
In the second geometry the probe pulses were incident onto the back side of the GaAs substrate (Fig.\,\ref{Fig:Exp}(c)). Since the probe photon energy is well below the GaAs absorption edge, the probe penetrated the substrate and reached the magnetic film. Thus, here we were able to probe the optically excited dynamics of the magnetization of the (Fe,Ga) film. Additionally, this experimental geometry enables one to detect strain pulses injected into the substrate from the film and propagating with the velocity $s_j$, where $j$ denotes the particular strain pulse polarization. Upon propagation through GaAs these pulses modified its dielectric permittivity via the photoelastic effect. The intensity and the polarization of the probe pulses were therefore modulated in an oscillating manner \cite{Thomsen-PRB1986}, with the frequency
\begin{equation}
\nu_j=2s_j\sqrt{\varepsilon_0}\lambda_\mathrm{pr}^{-1},\label{Eq:freqAc}
\end{equation}
where $\lambda_\mathrm{pr}$ is the probe wavelength, $\varepsilon_0$ is the dielectric permittivity of GaAs, and the angle of incidence for the probe pulses is taken to be 0. These oscillations are often referred to as Brillouin oscillations. The main purpose of this experiment was to confirm generation of dynamical strain upon excitation of the (Fe,Ga) film by optical pump pulses.
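As a quick numerical check of Eq.\,(\ref{Eq:freqAc}), the sketch below evaluates the expected Brillouin frequencies for the two pulse polarizations. The GaAs refractive index $\sqrt{\varepsilon_0}\approx3.48$ at the 1030-nm probe wavelength is an assumed literature-level value; the sound velocities are those quoted later for (311)-GaAs.

```python
# Brillouin frequency nu_j = 2 * s_j * sqrt(eps0) / lambda_pr (normal incidence)
n_GaAs = 3.48               # assumed refractive index of GaAs at 1030 nm
lam_pr = 1030e-9            # probe wavelength, m

def brillouin_freq(s, n=n_GaAs, lam=lam_pr):
    """Frequency (Hz) of Brillouin oscillations for sound velocity s (m/s)."""
    return 2.0 * s * n / lam

s_QLA = 5.1e3               # quasi-longitudinal velocity in (311)-GaAs, m/s
s_QTA = 2.9e3               # quasi-transverse velocity, m/s

print(f"nu_QLA = {brillouin_freq(s_QLA)/1e9:.1f} GHz")   # ~34.5 GHz
print(f"nu_QTA = {brillouin_freq(s_QTA)/1e9:.1f} GHz")   # ~19.6 GHz
```

These values agree with the $\approx$35 and 20\,GHz lines observed in the back-side probing experiment described below.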
The third type of experiment was performed in the conventional optical pump-probe geometry, in which both optical pump and probe pulses were incident directly on the galfenol film (Fig.\,\ref{Fig:Exp}(d)). This is the main experiment of our study, which demonstrates how the various contributions to the optically induced MCA change can be distinguished and separated.
\section{Thermal and strain-induced control of the magnetic anisotropy in (311) galfenol film}\label{Sec:theoryMag}
The magnetic part of the normalized free energy density of the single crystalline galfenol film $F_M=F/M_s$ grown on the (311)-GaAs substrate (Fig.\,\ref{Fig:Exp}(a)) can be expressed as
\begin{eqnarray}
F_M(\mathbf{m})&=&-\mathbf{m}\cdot\mathbf{B}+B_dm^2_z\label{Eq:energy}\\
&+&K_1\left(m^2_{x'}m^2_{y'}+m^2_{z'}m^2_{y'}+m^2_{x'}m^2_{z'}\right)-K_um_y^2\nonumber\\
&+&b_1(\epsilon_{x'x'}m^2_{x'}+\epsilon_{y'y'}m^2_{y'}+\epsilon_{z'z'}m^2_{z'})\nonumber\\
&+&b_2(\epsilon_{x'y'}m_{x'}m_{y'}+\epsilon_{x'z'}m_{x'}m_{z'}+\epsilon_{y'z'}m_{y'}m_{z'}),\nonumber
\end{eqnarray}
where $\mathbf{m}=\mathbf{M}/M_s$. Here, for the sake of convenience, the Zeeman, shape, and uniaxial anisotropy terms are written in the coordinate frame associated with the film, i.e.\ the $z$-axis is directed along the sample normal. The cubic anisotropy and magneto-elastic terms are written in the frame given by the crystallographic axes $x'y'z'$ (Fig.\,\ref{Fig:Exp}(a)). The strain components $\epsilon_{ij}$ are considered to be zero at equilibrium. The corresponding equilibrium orientation of the magnetization is given by the direction of the effective magnetic field expressed as
\begin{equation}
\mathbf{B}_\mathrm{eff}=-\frac{\partial F_M(\mathbf{m})}{\partial\mathbf{m}}.\label{Eq:Beff}
\end{equation}
Rapid change of any of the terms in Eq.\,(\ref{Eq:energy}) under an external stimulus may result in reorientation of the effective field (\ref{Eq:Beff}) and may thus trigger magnetization precession, which can be described by the Landau-Lifshitz equation \cite{LL,Gurevich-book}:
\begin{equation}
\frac{d\mathbf{m}}{d t}=-\gamma\cdot\mathbf{m}\times\mathbf{B}_\mathrm{eff}(t),\label{Eq:LL}
\end{equation}
where $\gamma$ is the gyromagnetic ratio. This precession plays a two-fold role. On the one hand, magnetization precession triggered by an ultrafast stimulus is in itself an important result attracting a lot of attention nowadays. On the other hand, magnetization precession is a macroscopic phenomenon that can be easily observed in conventional pump-probe experiments and, at the same time, gives insight into the complex microscopic processes triggered by various ultrafast stimuli.
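A minimal numerical sketch of Eq.\,(\ref{Eq:LL}): fixed-step RK4 integration of the unit magnetization vector in a static effective field. The Gilbert damping term and its value $\alpha=0.01$ are assumptions added so that the precession decays, as in the measured traces; the field values and the free-electron gyromagnetic ratio are illustrative.

```python
import numpy as np

GAMMA = 1.76e11  # gyromagnetic ratio, rad s^-1 T^-1 (free-electron value, assumed)

def llg_rhs(m, B_eff, alpha=0.01):
    """Landau-Lifshitz(-Gilbert) right-hand side; alpha is a hypothetical damping."""
    prec = -GAMMA * np.cross(m, B_eff)
    damp = -GAMMA * alpha * np.cross(m, np.cross(m, B_eff))
    return prec + damp

def integrate(m0, B_eff, t_end=1e-9, n=2000):
    """Fixed-step RK4 integration of the unit magnetization vector."""
    dt = t_end / n
    m = np.array(m0, dtype=float)
    traj = [m.copy()]
    for _ in range(n):
        k1 = llg_rhs(m, B_eff)
        k2 = llg_rhs(m + 0.5 * dt * k1, B_eff)
        k3 = llg_rhs(m + 0.5 * dt * k2, B_eff)
        k4 = llg_rhs(m + dt * k3, B_eff)
        m = m + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        m /= np.linalg.norm(m)        # keep |m| = 1
        traj.append(m.copy())
    return np.array(traj)

# Field with a small out-of-plane component: m, initially in plane, precesses
# at ~ GAMMA * |B_eff| / (2 pi) and slowly relaxes towards the field direction.
traj = integrate([1.0, 0.0, 0.0], B_eff=[0.0, 0.5, 0.1])
```

The out-of-plane component `traj[:, 2]` oscillates at the precession frequency, which is the quantity tracked by TRMOKE in the experiments below.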
We exclude from further discussion the ultrafast laser-induced demagnetization \cite{Beaurepaire-PRL1996}, which may trigger the magnetization precession \cite{Koopmans-PRL2002} due to the decrease of the demagnetizing field $\mu_0M_s/2$. This contribution to the change of the effective field orientation is proportional to the equilibrium $z$-component of $\mathbf{M}$, which is zero in the experimental geometry discussed here, with the magnetic field applied in the film plane. Thus, we focus solely on the effects related to the change of the MCA and consider two mechanisms.
The first mechanism enabling an ultrafast change of the MCA relies on \textit{heat-induced} changes of the parameters $K_1$ and $K_u$ in Eq.\,(\ref{Eq:energy}). This phenomenon is inherent to various magnetic metals \cite{Carpene-PRB2010}, semiconductors \cite{Hashimoto-PRL2008}, and dielectrics \cite{deJong-PRB2011}. In metallic films absorption of a laser pulse results in a subpicosecond increase of the electronic temperature $T_e$. Subsequent thermalization between electrons and lattice takes place on a time scale of several picoseconds and yields an increase of the lattice temperature $T_l$. The magnetocrystalline anisotropy of a metallic film, and of galfenol in particular, is temperature dependent \cite{Clark-JAP2005}. Therefore, laser-induced lattice heating results in a decrease of the MCA parameters. Importantly, this mechanism is expected to be efficient only if the magnetization is not aligned along the magnetic field \cite{Ma-JAP2015,Shelukhin-arxiv2015}. This can be realized by applying a magnetic field of moderate strength along the hard magnetization axis. Otherwise, a decrease of $K_{1,u}$ would not tilt $\mathbf{B}_\mathrm{eff}$ already aligned along $\mathbf{B}$.
The second mechanism relies on inverse magnetostriction. As follows from Eq.\,(\ref{Eq:energy}), dynamical strain $\hat\epsilon$ induced in a magnetic film can effectively change the MCA. Such dynamical strain can be created in a film either upon injection from the substrate \cite{Scherbakov-PRL2010} or due to the thermal stress induced by a rapid increase of the lattice temperature by the optical pulse. It is important to emphasize that, in contrast to the heat-induced change of the magnetocrystalline anisotropy constants, the \textit{strain-induced} mechanism can be efficient even if $\mathbf{B}_\mathrm{eff}$ is aligned along $\mathbf{B}$, provided the symmetry of the film and the polarization of the dynamical strain are properly chosen.
\section{Optical generation of the dynamical compressive and shear strain in a metallic film on a low-symmetry substrate}\label{Sec:theoryAc}
\subsection{Optical generation of the dynamical strain}
The increase of the electronic temperature $T_e$ and the lattice temperature $T_l$ of a metallic film excited by a femtosecond laser pulse is described by the coupled differential equations of the two-temperature model:
\begin{eqnarray}
C_e\frac{\partial T_e}{\partial t}&=&\kappa\frac{\partial^2T_e}{\partial z^2}-G(T_e-T_l)+P(z,t);\nonumber\\
C_l\frac{\partial T_l}{\partial t}&=&-G(T_l-T_e),\label{Eq:TwoTemp}
\end{eqnarray}
where $P(z,t)=I(t)(1-R)\alpha\exp{(-\alpha z)}$ is the absorbed optical pump pulse power density, with $I(t)$ describing the Gaussian temporal profile, $\alpha$ is the absorption coefficient, and $R$ is the Fresnel reflection coefficient. $C_e=A_eT_e$ and $C_l$ are the electronic and lattice specific heat capacities, respectively; $\kappa$ is the thermal conductivity and $G$ is the electron-phonon coupling constant, considered to be temperature independent. Heat conduction into the substrate is usually much weaker than that within the film and is therefore neglected. The boundary conditions are $\partial T_e/\partial z=0$ at $z$=0, and $T_e$=$T_l$=$T$ at $z=\infty$, where $T$ is the initial temperature.
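A zero-dimensional sketch of Eqs.\,(\ref{Eq:TwoTemp}), neglecting the diffusion term and the depth profile, illustrates the fast electron heating followed by electron-lattice equilibration. The material parameters are the Fe values used later in the text; the absorbed energy density and the Gaussian deposition profile are assumptions, chosen so that the final lattice heating is of the order of 100\,K.

```python
import numpy as np

# Fe parameters used in the calculations section of the text
A_e = 672.0        # electronic heat-capacity coefficient, J m^-3 K^-2 (C_e = A_e*T_e)
C_l = 3.8e6        # lattice heat capacity, J m^-3 K^-1
G   = 8.0e17       # electron-phonon coupling, W m^-3 K^-1
T0  = 293.0        # initial temperature, K

W     = 4.6e8      # absorbed energy density, J m^-3 (assumed, ~100 K lattice heating)
tau_p = 170e-15    # pump pulse duration parameter, s (assumed Gaussian 1/e half-width)

def pump(t):
    """Gaussian absorbed power density, integrating to W."""
    return W / (tau_p * np.sqrt(np.pi)) * np.exp(-(t / tau_p) ** 2)

dt = 1e-15
Te, Tl = T0, T0
for i in range(20000):                 # integrate 20 ps with explicit Euler
    t = i * dt - 0.5e-12               # pulse centered at t = 0
    dTe = (-G * (Te - Tl) + pump(t)) / (A_e * Te)
    dTl = G * (Te - Tl) / C_l
    Te += dt * dTe
    Tl += dt * dTl

print(f"after 20 ps: T_e = {Te:.0f} K, T_l = {Tl:.0f} K")
```

Because $C_e\ll C_l$, the electrons transiently overshoot to a high temperature and then equilibrate with the lattice within a few picoseconds, leaving $T_l$ raised by roughly $W/C_l$.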
Lattice temperature increase sets up the thermal stress, which in turn leads to generation of dynamical strain \cite{Thomsen-PRB1986,Tas-PRB1994,Wright-IEEE1995}. Details of this process are determined by the properties of the metallic film and of the interface between the metallic film and the substrate. As a generalization, we consider the strain mode with the polarization vector $\mathbf{e}_j$ and the amplitude $u_{0,j}$. Following the procedure described in \cite{Wright-IEEE1995} for a high-symmetry film we express the displacement amplitude in the frequency domain as
\begin{eqnarray}
\delta T_e(z,\omega)&=&\frac{\alpha(1-R)}{\kappa}\frac{I(\omega)}{\alpha^2-p_T^2}\left[e^{-\alpha z}+\frac{\alpha}{p_T}e^{-p_Tz}\right];\label{Eq:tempE}\\
\delta T_l(z,\omega)&=&\frac{\delta T_e(z,\omega)}{1-i\omega C_lG^{-1}};\label{Eq:tempL}\\
u_{0,j}(z,\omega)&=&\sigma_j\left(\frac{e^{-\alpha z}}{\alpha^2+k_j^2}-\frac{e^{-p_Tz}}{p_T^2+k_j^2}\right.\nonumber\\
&+&\frac{e^{k_jz}}{2ik_j}\left[\frac{1}{\alpha+ik_j}-\frac{1}{p_T+ik_j}\right]\nonumber\\
&+&\left.\frac{e^{-k_jz}}{2ik_j}\left[\frac{1}{\alpha-ik_j}+\frac{1}{p_T-ik_j}\right]\right)\nonumber\\
&+&A_je^{ik_jz}+B_je^{-ik_jz},\label{Eq:strain}
\end{eqnarray}
where we introduced the parameters
\begin{eqnarray}
\sigma_j&=&\frac{e_{j,z}}{\rho s_j^2}\frac{\beta C_l}{1-i\omega C_lG^{-1}}\frac{\alpha^2(1-R)I(\omega)}{\kappa(\alpha^2-p_T^2)},\nonumber\\
p_T&=&\sqrt{\frac{-i\omega C_e}{\kappa}\left(1+\frac{C_lC_e^{-1}}{1-i\omega C_lG^{-1}}\right)},\\
k_j&=&\omega s_j^{-1},\,\mathrm{Re}(p_T)>0.\nonumber
\end{eqnarray}
Here $\beta$ is the Gr\"{u}neisen parameter and $\rho$ is the galfenol density. The constants $A_j$ and $B_j$ are determined from the boundary conditions at the free surface $z=0$ and at the (311)-(Fe,Ga)/GaAs interface.
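For orientation, $p_T$ can be evaluated numerically with the Fe parameters used later in the text; the 30-GHz evaluation frequency is an assumed representative value within the strain-pulse spectrum. The resulting thermal penetration depth $1/\mathrm{Re}(p_T)$ is of the order of 10\,nm, comparable to the optical absorption depth in a metal.

```python
import numpy as np

# Fe parameters used later in the text
kappa = 80.4                       # thermal conductivity, W m^-1 K^-1
A_e, C_l, G = 672.0, 3.8e6, 8.0e17
C_e = A_e * 293.0                  # electronic heat capacity at the initial temperature

def p_T(omega):
    """Complex thermal wavevector p_T(omega); branch with Re(p_T) > 0."""
    z = (-1j * omega * C_e / kappa) * (1.0 + (C_l / C_e) / (1.0 - 1j * omega * C_l / G))
    root = np.sqrt(z)
    return root if root.real > 0 else -root

omega = 2 * np.pi * 30e9           # assumed representative frequency, rad/s
depth_nm = 1e9 / p_T(omega).real
print(f"thermal penetration depth 1/Re(p_T) ~ {depth_nm:.0f} nm")
```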
From Eq.\,(\ref{Eq:strain}) it can be seen that the thermal stress induces two contributions to the strain in the metallic film. The first one is maximal at the film surface and decays exponentially along $z$, as shown schematically in Figs.\,\ref{Fig:Exp}(b-d). In fact, it closely follows the spatial evolution of the lattice temperature $T_l$ in Eq.\,(\ref{Eq:tempL}). In the time domain, this contribution emerges on a picosecond time scale following the lattice temperature increase and decays slowly towards equilibrium due to heat transfer to the substrate. Therefore, on the typical time scale of an experiment on ultrafast change of the MCA, i.e.\ $\sim$1\,ns, this contribution can be considered as a \textit{step-like strain emergence}. The second contribution describes the picosecond \textit{strain pulse} propagating away from the film surface along $z$ \cite{Scherbakov-OptExp2013}.
\subsection{Injection of compressive and shear dynamical strain pulses into (311)-galfenol film}
First we consider the scenario illustrated in Fig.\,\ref{Fig:Exp}(b). The Al film, serving as the optoacoustic transducer, is polycrystalline and, thus, acoustically isotropic. Therefore, longitudinal (LA) strain is generated by the optically induced thermal stress. Its polarization vector is $\mathbf{e}_\mathrm{LA}=(0,0,1)$ and its amplitude is $u_{0,\mathrm{LA}}$. The corresponding strain component is $\epsilon^\mathrm{LA}_{zz}=e_{\mathrm{LA},z}\partial u_{0,\mathrm{LA}}/\partial z$. This strain is purely compressive/tensile. Due to mode conversion at the interface shear strain may also be generated, but the efficiency of this process is low \cite{Hurley-UltraS2000}. After transmission of the strain pulse through the interface between the elastically isotropic Al film and the anisotropic low-symmetry single-crystalline (311)-GaAs substrate, two strain pulses emerge, quasi-longitudinal (QLA) and quasi-transverse (QTA), with the polarization vectors $\mathbf{e}_\mathrm{QLA}$=(0.165,\,0,\,0.986) and $\mathbf{e}_\mathrm{QTA}$=(0.986,\,0,\,-0.165), propagating further into the substrate \cite{Scherbakov-OptExp2013}. Importantly, both QLA and QTA strain pulses have significant shear components. Expressions for the corresponding amplitudes are found in \cite{Scherbakov-OptExp2013} by taking into account interference between LA and TA modes within the film as well as multiple reflections and mode conversion at the interface. The QLA and QTA pulses thus injected into the GaAs substrate propagate with their respective sound velocities.
Upon reaching the magnetic (Fe,Ga) film these strain pulses can trigger the magnetization precession \cite{Scherbakov-PRL2010,Jager-APL2013} by modifying the magneto-elastic terms in Eq.\,(\ref{Eq:energy}). Since the QTA and QLA pulse velocities in the 100-$\mu$m-thick (311)-GaAs substrate are $s_\mathrm{QTA}=$2.9\,km$\cdot$s$^{-1}$ and $s_\mathrm{QLA}=$5.1\,km$\cdot$s$^{-1}$ \cite{Popovic-PRB1993,Scherbakov-OptExp2013}, they reach the (Fe,Ga) film after 35 and 20\,ns, respectively, and thus their impact on the magnetic film can be separated in time. Strictly speaking, the polarization vectors of the QL(T)A modes in (Fe,Ga) and in GaAs differ, and the transformation of the strain pulses upon crossing the GaAs/(Fe,Ga) interface should be taken into account. However, since the mismatch is rather small and both QL(T)A strain pulses remain polarized in the $xz$ plane, we neglect it in the analysis. Therefore, in the experimental geometry shown in Fig.\,\ref{Fig:Exp}(b), the propagating strain pulses (\ref{Eq:strain}) are employed to control the magnetization.
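The quoted arrival times follow directly from the substrate thickness and the sound velocities:

```python
d_sub = 100e-6                  # (311)-GaAs substrate thickness, m
s_QLA, s_QTA = 5.1e3, 2.9e3     # sound velocities in (311)-GaAs, m/s

t_QLA = d_sub / s_QLA           # ~19.6 ns
t_QTA = d_sub / s_QTA           # ~34.5 ns
print(f"QLA arrives after {t_QLA * 1e9:.1f} ns, QTA after {t_QTA * 1e9:.1f} ns")
```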
\subsection{Generation of compressive and shear dynamical strain pulses in (311)-galfenol film}
In contrast to the polycrystalline Al film, in the single-crystalline (Fe,Ga) film on the (311)-GaAs substrate the elastic anisotropy plays an essential role already at the stage of strain generation \cite{Matsuda-PRL2004}. Two strain components, $\epsilon_{xz}$ and $\epsilon_{zz}$, arise due to coupling of the thermal stress to the QLA and QTA acoustic waves. Their polarizations are $\mathbf{e}_\mathrm{QLA}$=(0.286,\,0,\,0.958) and $\mathbf{e}_\mathrm{QTA}$=(0.958,\,0,\,-0.286) \cite{Kats-PRB2016} in the film coordinate frame $xyz$ (Fig.\,\ref{Fig:Exp}(a)). The corresponding strain components can be found as
\begin{eqnarray}
\epsilon^\mathrm{QL(T)A}_{xz}&=&0.5e_{\mathrm{QL(T)A},x}\frac{\partial u_{0,\mathrm{QL(T)A}}}{\partial z};\nonumber\\
\epsilon^\mathrm{QL(T)A}_{zz}&=&e_{\mathrm{QL(T)A},z}\frac{\partial u_{0,\mathrm{QL(T)A}}}{\partial z},\label{Eq:strainAmpl}
\end{eqnarray}
i.e.\ the generated strain is of mixed, compressive and shear, character. Both the step-like emergence of the strain and the propagating strain pulses can modify the MCA. A possible contribution of the step-like strain emergence to the change of the MCA was pointed out in \cite{Zhao-APL2005}; however, no detailed analysis confirming the feasibility of this process was performed. Importantly, since the step-like strain emergence closely follows the temporal and spatial evolution of the lattice temperature, distinguishing the effects of strain and heating on the magnetic anisotropy can be ambiguous. We note that in the case of an optically excited (Fe,Ga) film the QLA and QTA strain pulses will also be injected into GaAs, and can be detected employing the scheme shown in Fig.\,\ref{Fig:Exp}(c).
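A short sketch of Eq.\,(\ref{Eq:strainAmpl}), illustrating that each eigenmode carries both shear and compressive strain; the displacement-gradient value is purely illustrative.

```python
import numpy as np

# Polarization vectors of the two eigenmodes in the film frame xyz (from the text)
e_QLA = np.array([0.286, 0.0, 0.958])
e_QTA = np.array([0.958, 0.0, -0.286])

def strain_components(e, du_dz):
    """Shear (eps_xz) and compressive (eps_zz) strain of a mode with
    polarization e and displacement gradient du_dz."""
    return 0.5 * e[0] * du_dz, e[2] * du_dz

# Purely illustrative displacement gradient of 1e-3
for name, e in (("QLA", e_QLA), ("QTA", e_QTA)):
    eps_xz, eps_zz = strain_components(e, 1e-3)
    print(f"{name}: eps_xz = {eps_xz:+.2e}, eps_zz = {eps_zz:+.2e}")
```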
\section{Magnetization dynamics in the (311) galfenol film induced by picosecond strain pulses}\label{Sec:expAc}
First we examine excitation of the magnetization precession by dynamical strain only, which is realized in the experimental geometry shown in Fig.\,\ref{Fig:Exp}(b). In Fig.\,\ref{Fig:AcExPrecession}(a) we present changes of the probe polarization rotation measured as a function of the pump-probe time delay $t$ after the QLA or QTA strain pulse arrives at the galfenol film. The time moment $t$=0 for each shown trace corresponds to the time required for either the QLA or the QTA pulse to travel through the 100-$\mu$m-thick GaAs substrate, and was verified by monitoring the reflectivity change \cite{Jager-APL2013}. As one can see, both the QLA and QTA pulses excite oscillations of the probe polarization. Two lines separated by a few GHz are clearly seen in the fast Fourier transform (FFT) spectra of the time traces (Fig.\,\ref{Fig:AcExPrecession}(b)). The frequencies of both lines change with the applied field (Fig.\,\ref{Fig:AcExPrecession}(c)), thus confirming that the observed oscillations of the probe polarization originate from the magnetization precession triggered by the QLA and QTA strain pulses. The character of the field dependence of $\nu$ (Fig.\,\ref{Fig:AcExPrecession}(c)) corresponds to the one expected for the geometry in which the external magnetic field is applied along the magnetization hard axis. The presence of two field-dependent frequencies in the FFT spectra can be attributed to the excitation of two spin-wave modes, which is one of the signatures of the magnetization precession excited by picosecond acoustic pulses. As discussed in detail in \cite{Bombeck-PRB2012}, excitation of several spin waves is enabled by the broad spectrum of the strain pulses and is governed by the boundary conditions in the thin film.
\begin{figure}
\includegraphics[width=8.6cm]{Fig2v2-resub.eps}
\caption{(Color online) (a) Probe polarization rotation vs. time delay $t$ measured in the geometry shown in Fig.\,\ref{Fig:Exp}(b) at various values of the magnetic field. $t$=0 is the moment of arrival of the QLA or QTA pulse at the galfenol film, corresponding to 20 and 35\,ns after the excitation with the optical pump pulse, respectively. (b) FFT spectra of the time delay dependence measured at $B$=500\,mT. The two lines seen in each spectrum correspond to two spin-wave modes (see text for details). (c) Frequency of the probe polarization oscillations caused by the magnetization precession excited by QLA (closed symbols) and QTA (open symbols) pulses. The optical pump fluence was $P$=40\,mJ$\cdot$cm$^{-2}$. Note that in (a) the curves are shifted along the vertical axis for the sake of clarity.}
\label{Fig:AcExPrecession}
\end{figure}
Both the QLA and QTA pulses contain the components $\epsilon_{xz}$ and $\epsilon_{zz}$ (\ref{Eq:strainAmpl}). The QLA (QTA) strain pulse enters the magnetic film and propagates through it with the sound velocity $s_\mathrm{QLA}$=6.0\,km$\cdot$s$^{-1}$ ($s_\mathrm{QTA}$=2.8\,km$\cdot$s$^{-1}$). Upon propagation it contributes to the change of the magneto-elastic term in the free energy in Eq.\,(\ref{Eq:energy}), modifying the MCA of the film and causing the effective magnetic field $\mathbf{B}_\mathrm{eff}$ to deviate from its equilibrium. As a result, the magnetization starts to move away from its equilibrium orientation following a complex trajectory \cite{Scherbakov-PRL2010}. The QL(T)A strain pulse leaves the film after $2d_\mathrm{FeGa}s^{-1}_\mathrm{QL(T)A}$, i.e.\ 33 and 70\,ps, respectively, and $\mathbf{B}_\mathrm{eff}$ returns to its equilibrium value, while the magnetization relaxes towards $\mathbf{B}_\mathrm{eff}$ precessionally on the much longer nanosecond time scale.
As seen from Fig.\,\ref{Fig:AcExPrecession}(a), the amplitude of the magnetization precession excited by QLA phonons is higher than that of the precession excited by QTA phonons. This is in agreement with experimental and theoretical results on the propagation of QLA and QTA phonons through the (311)-GaAs substrate \cite{Scherbakov-OptExp2013}, which showed that the amplitude of the displacement associated with the QTA pulses is smaller by a factor of $\sim$5 than that of the QLA pulses, while the magnetoelastic coefficients for shear and compressive strain are the same in galfenol.
Thus, the experiment on excitation of the (311)-(Fe,Ga) film by picosecond strain pulses clearly demonstrates that dynamical strain effectively excites the magnetization precession in the film in fields up to 1.2\,T, i.e.\ when the equilibrium magnetization is already aligned along the applied magnetic field. We note that here we reported the magnetization excitation in the particular geometry where the magnetic field is applied along the magnetization hard axis. Previously some of the authors also demonstrated analogous excitation in (Fe,Ga) with the field applied in the (311) plane at 45$^\circ$ to the [$\bar{2}$33] direction, as well as with the field applied along the [311] axis \cite{Jager-APL2013}. It has also been shown that all the features of the excitation observed at low temperature remain valid at room temperature as well. Thus, the results reported here, obtained at $T$=20\,K, can be reliably extrapolated to room temperature, at which the direct optical excitation of the precession in the galfenol film was studied.
\section{Magnetization dynamics in (311) galfenol film induced by direct optical excitation}\label{Sec:extOpt}
While in the experiments described in Sec.\,\ref{Sec:expAc} picosecond strain pulses are the only stimulus driving the magnetization precession, the processes triggered by direct optical excitation of a metallic magnetic film are more diverse and may contribute to both strain-related and other driving forces (see Sec.\,\ref{Sec:theoryMag}). First, in order to confirm the generation of dynamical strain in the optically excited galfenol film, we detected the propagating QLA and QTA strain pulses by measuring the polarization rotation of the probe pulses incident onto the back side of the (311)-(Fe,Ga)/GaAs sample (Fig.\,\ref{Fig:Exp}(c)). Fig.\,\ref{Fig:DirExAcoustics}(a) shows the time traces obtained at various magnetic fields. Several oscillating components are clearly present, as can be seen from the Fourier spectra in Fig.\,\ref{Fig:DirExAcoustics}(b). The field dependences of these frequencies are shown in Fig.\,\ref{Fig:DirExAcoustics}(c). The lines at $\nu_\mathrm{QTA}$=20\,GHz and $\nu_\mathrm{QLA}$=35\,GHz are field independent and are attributed to the Brillouin oscillations (\ref{Eq:freqAc}) caused by the QTA and QLA strain pulses, respectively, propagating away from the galfenol film towards the back side of the GaAs substrate with the velocities $s_\mathrm{QTA}<s_\mathrm{QLA}$.
\begin{figure}
\includegraphics[width=8.6cm]{Fig3v2-resub-proof.eps}
\caption{(Color online) (a) Probe polarization rotation vs. time delay $t$ measured in the geometry shown in Fig.\,\ref{Fig:Exp}(c) at various magnetic fields. (b) FFT spectrum of the time delay dependence measured at $B$=500\,mT. $\nu_\mathrm{QLA}$, $\nu_\mathrm{QTA}$, and $M_z$ denote the lines corresponding to the Brillouin oscillations related to the QLA and QTA strain pulses, and to the magnetization precession, respectively. (c) Brillouin frequencies in the probe polarization oscillations related to the QTA (closed circles) and QLA (open circles) pulses, and the frequencies related to the optically excited magnetization precession (triangles). The optical pump fluence was $P$=10\,mJ$\cdot$cm$^{-2}$.}
\label{Fig:DirExAcoustics}
\end{figure}
A line in the FFT spectra marked in Fig.\,\ref{Fig:DirExAcoustics}(b) as $M_z$ and possessing a field-dependent frequency $\nu$ corresponds to the optically triggered precession of the magnetization in the galfenol film. This experiment, therefore, confirms the concomitant generation of dynamical strain and excitation of the magnetization precession in the optically excited galfenol film. The mechanism behind the precession excitation is, however, more intricate than in the case of injection of strain pulses into the film.
\begin{figure}
\includegraphics[width=8.6cm]{Fig4v7.eps}
\caption{(Color online) (a) Probe polarization rotation vs. time delay $t$ measured in the geometry shown in Fig.\,\ref{Fig:Exp}(d) at various values of the magnetic field. (b) The same traces measured with higher temporal resolution. (c) FFT spectra of the time delay dependence. The optical pump fluence was $P$=10\,mJ$\cdot$cm$^{-2}$.}
\label{Fig:DirExPrecession}
\end{figure}
In order to get insight into the problem of direct optical excitation of the (311)-(Fe,Ga) film, we performed an experiment in the geometry shown in Fig.\,\ref{Fig:Exp}(d), with both pump and probe pulses incident directly on the galfenol film. Figs.\,\ref{Fig:DirExPrecession}(a,b) show the temporal evolution of the TRMOKE signal following excitation of the sample by femtosecond laser pulses. The FFT spectra (Fig.\,\ref{Fig:DirExPrecession}(c)) contain one line with a field-dependent frequency (Fig.\,\ref{Fig:DirExFieldDep}(a)). Thus, we observe excitation of the magnetization precession. The oscillatory component of the observed signal can be approximated by the function
\begin{equation}
\frac{\Delta M_z(t)}{M_s}=\frac{\Delta M_z^\mathrm{max}}{M_s}e^{-t/\tau}\sin(2\pi\nu t+\psi_0).\label{Eq:sine}
\end{equation}
As can be seen from Figs.\,\ref{Fig:DirExFieldDep}(a,b), the frequency $\nu$ is minimal and the amplitude $\Delta M_z^\mathrm{max}/M_s$ is maximal at $B$=150\,mT, i.e.\ when the magnetization becomes parallel to the external field. This is the conventional behavior when the magnetic field is applied along the magnetization hard axis. We also note that the frequency vs.\ applied field dependences for the strain-induced and direct optical excitation resemble each other (see Figs.\,\ref{Fig:AcExPrecession}(c) and \ref{Fig:DirExFieldDep}(a)), with some deviation observed at low fields, which could be due to the fact that the measurements were performed at $T$=20 and 293\,K, respectively.
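In practice, $\nu$, $\tau$, $\Delta M_z^\mathrm{max}/M_s$, and $\psi_0$ are extracted by fitting Eq.\,(\ref{Eq:sine}) to the measured traces. A sketch on synthetic data (all parameter values are illustrative; the frequency is seeded from the FFT peak, mirroring the Fourier analysis above):

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, tau, nu, psi0):
    """Eq. (sine): A * exp(-t/tau) * sin(2*pi*nu*t + psi0)."""
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * nu * t + psi0)

# Synthetic trace: illustrative amplitude 1e-3, 10 GHz, 0.5 ns decay, cosine-like phase
rng = np.random.default_rng(0)
t = np.linspace(0, 2e-9, 400)
signal = damped_sine(t, 1e-3, 0.5e-9, 10e9, np.pi / 2)
signal += 5e-5 * rng.standard_normal(t.size)

# Seed the frequency from the FFT peak (skipping the zero-frequency bin)
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
nu0 = freqs[spec[1:].argmax() + 1]

popt, _ = curve_fit(damped_sine, t, signal, p0=(5e-4, 1e-9, nu0, 1.0))
A, tau, nu, psi0 = popt
print(f"nu = {nu / 1e9:.2f} GHz, tau = {tau * 1e9:.2f} ns, psi0 = {psi0:.2f} rad")
```

Seeding the nonlinear fit with the FFT frequency avoids the local minima that appear when the initial frequency guess is off by more than a Fourier bin.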
We have studied the evolution of the magnetization right after the direct optical excitation in more detail (Fig.\,\ref{Fig:DirExPrecession}(b)) and, in particular, determined the initial phases $\psi_0$ of the precession (\ref{Eq:sine}). The initial evolution of the magnetization suggests that $\mathbf{B}_\mathrm{eff}$ undergoes a step-like jump from its equilibrium orientation upon the optical excitation and remains in the new orientation for a time much longer than the precession decay. This is opposite to the case of an injected strain pulse \cite{Scherbakov-PRL2010}, where the magnetization takes a complex path before the harmonic oscillations start, reflecting the fact that $\mathbf{B}_\mathrm{eff}$ follows the strain while it propagates through the film and returns to its equilibrium orientation once the strain pulse has left the film.
The most striking result is that the initial phase $\psi_0$ of the oscillations possesses a nonmonotonic field dependence. In particular, $M_z(t)$ demonstrates a pure \textit{sine}-like behavior at $B$=150\,mT and a pure \textit{cosine}-like behavior at $B$=500\,mT. The detailed field dependence of the precession initial phase is shown in Fig.\,\ref{Fig:DirExFieldDep}(c). Keeping in mind that at $t=0$, at any strength of the in-plane magnetic field, the magnetization is oriented in the film plane, one concludes that the \textit{sine}-like ($\psi_0=0$) temporal evolution of $M_z$ at the applied field $B$=150\,mT corresponds to the magnetization precessing around a transient effective field $\mathbf{B}_\mathrm{eff}(t)$ which lies in the sample plane. By contrast, the \textit{cosine}-like ($\psi_0=\pi/2$) behavior of $M_z$ corresponds to precession around a $\mathbf{B}_\mathrm{eff}(t)$ having a finite out-of-plane component.
\begin{figure}
\includegraphics[width=8cm]{Fig5v3-resub.eps}
\caption{(Color online) Field dependences of (a) the frequency $\nu$, (b) the amplitude $M_{z0}/M_s$, and (c) the initial phase $\psi_0$ of the magnetization precession excited optically and detected in the geometry shown in Fig.\,\ref{Fig:Exp}(d). Lines show the results of the calculations (see text): solid lines show the results obtained when both heat- and strain-induced contributions to the anisotropy change are taken into account; dashed lines show the results obtained when only the heat-induced (red) or strain-induced (blue) change of the MCA is considered. Open symbols in (a) show the frequency of the magnetization precession excited acoustically by the QTA strain pulse (see also Fig.\,\ref{Fig:AcExPrecession}(b)). (d) Calculated equilibrium in-plane components of the effective field $\mathbf{B}_\mathrm{eff}$. (e) Field dependences of the $\textbf{B}_\mathrm{eff}$ tilt angles $\Delta\theta$ (dashed lines) and $\Delta\phi$ (solid lines) under the optical excitation resulting in a temperature increase of 120\,K from the equilibrium one (room temperature). The red and blue lines show the results when only the heat-related or strain-related mechanism is taken into account, respectively. The short-dashed black line shows the net tilt of $\textbf{B}_\mathrm{eff}$ induced by both mechanisms. The upper and lower insets show the two cases of purely in-plane and out-of-plane tilts of $\textbf{B}_\mathrm{eff}$, respectively.
}
\label{Fig:DirExFieldDep}
\end{figure}
The observed change of the initial phase of the magnetization precession suggests that there are, in fact, two competing mechanisms of precession excitation, whose relative and absolute efficiencies change as the applied magnetic field increases. In the experiment with optical excitation (Fig.\,\ref{Fig:Exp}(d)), the two mechanisms considered in Sec.\,\ref{Sec:theoryMag}, heat- and strain-induced, are expected to affect the magnetic anisotropy of the galfenol film. The heat-induced mechanism, based on the rapid increase of the temperature and the decrease of the MCA constants, is expected to trigger the precession at relatively low fields. The strain-induced mechanism, resulting from the thermally induced stress, in turn, can be efficient at high fields as well. The latter is demonstrated in our experiments with purely acoustic excitation of the studied film, where the precession is observed in applied fields up to at least 1.2\,T (Fig.\,\ref{Fig:AcExPrecession}(a,c)).
In order to test this model we calculated the changes of the MCA parameters $K_{1,u}$ and of the magneto-elastic part of the free energy (\ref{Eq:energy}) of the optically excited (Fe,Ga) film, using the routine described briefly in Secs.\,\ref{Sec:theoryMag} and \ref{Sec:theoryAc} and in more detail in \cite{Kats-PRB2016}. Since some required parameters are unknown for galfenol, for the calculations we used those of Fe: $A_e$=672\,J$\cdot$m$^{-3}$K$^{-2}$ \cite{Tari}, $C_l$=3.8$\cdot10^{6}$\,J$\cdot$m$^{-3}$K$^{-1}$, $\kappa$=80.4\,W$\cdot$m$^{-1}$K$^{-1}$ \cite{Lide}. The electron-phonon coupling constant $G$=8$\cdot$10$^{17}$\,W$\cdot$m$^{-3}$K$^{-1}$ was obtained from \cite{Carpene-PRB2010}, where the electron-phonon relaxation time, equal to $\sim C_eG^{-1}$, was found to be 250\,fs. The values of $R$ and $\alpha$ were determined experimentally. The density $\rho$ of Fe$_{0.81}$Ga$_{0.19}$ is estimated to be 7.95$\cdot$10$^3$\,kg$\cdot$m$^{-3}$. The equilibrium magnetic anisotropy parameters $K_1$=30\,mT and $K_u$=45\,mT and the magneto-elastic coefficients $b_1$=-6\,T and $b_2$=2\,T were found using literature data \cite{Restorff-JAP2012,Parkes-SciRep2013,Atulasimha-SMS2007} as well as from the fit of the field dependence of the precession frequency (Fig.\,\ref{Fig:DirExFieldDep}(a)). Fig.\,\ref{Fig:DirExFieldDep}(d) shows the calculated equilibrium in-plane orientation of the effective field $\textbf{B}_\mathrm{eff}$, confirming that it aligns with the external field when the latter exceeds $B$=150\,mT, in agreement with the SQUID data (not shown).
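The field alignment of the equilibrium magnetization can be illustrated by direct numerical minimization of the free energy (\ref{Eq:energy}). The sketch below keeps the Zeeman, shape, uniaxial, and cubic terms with the anisotropy parameters quoted above; the shape-anisotropy field $B_d=0.9$\,T and the particular choice of in-plane crystallographic axes for the (311) film are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Film axes expressed in the cubic crystal frame for a (311) film (assumed choice,
# consistent with the [-233] in-plane direction mentioned in the text):
x_f = np.array([-2.0, 3.0, 3.0]) / np.sqrt(22.0)   # x || [-233]
y_f = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)    # y || [0-11]
z_f = np.array([3.0, 1.0, 1.0]) / np.sqrt(11.0)    # z || [311]
R = np.column_stack([x_f, y_f, z_f])               # film -> crystal coordinates

K1, Ku, B_d = 0.030, 0.045, 0.9                    # T; B_d = mu0*Ms/2 is assumed

def F_M(angles, B):
    """Normalized free energy: Zeeman + shape + cubic + uniaxial terms of Eq. (energy)."""
    th, ph = angles
    m = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
    mp = R @ m                                     # components along x'y'z'
    cubic = K1 * (mp[0]**2 * mp[1]**2 + mp[1]**2 * mp[2]**2 + mp[0]**2 * mp[2]**2)
    return -B @ m + B_d * m[2]**2 + cubic - Ku * m[1]**2

def equilibrium(B):
    """Polar and azimuthal angles (theta, phi) of the equilibrium magnetization."""
    return minimize(F_M, x0=(np.pi / 2, 0.1), args=(np.asarray(B),)).x

# At 500 mT along the in-plane x axis, m is essentially field-aligned;
# at 50 mT the anisotropy terms pull it noticeably away from the field.
th_hi, ph_hi = equilibrium([0.5, 0.0, 0.0])
th_lo, ph_lo = equilibrium([0.05, 0.0, 0.0])
print(f"500 mT: phi = {np.degrees(ph_hi):.1f} deg;  50 mT: phi = {np.degrees(ph_lo):.1f} deg")
```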
The calculations of the laser-induced magnetization dynamics were performed for the optical excitation density $P$=10\,mJ$\cdot$cm$^{-2}$. For this laser fluence the lattice temperature increase calculated with Eq.\,(\ref{Eq:tempL}) was found to be $\Delta T_l$=120\,K. The corresponding change of the MCA parameters is $\Delta K_1$=-4.75\,mT and $\Delta K_u$=-2.2\,mT, and the persistent components of the compressive and shear dynamical strain are $\epsilon_{zz}=\Delta\epsilon_{zz}$=1.2$\cdot10^{-3}$ and $\epsilon_{xz}=\Delta\epsilon_{xz}$=-4$\cdot10^{-4}$. From these values the optically triggered out-of-plane $\Delta\theta$ and in-plane $\Delta\phi$ deviations of the effective field $\textbf{B}_\mathrm{eff}$ (Fig.\,\ref{Fig:Exp}(a)) were found, as shown in Fig.\,\ref{Fig:DirExFieldDep}(e). As expected, the heat-induced change of the MCA affects the orientation of the effective field predominantly in the range of applied fields below and close to $B$=150\,mT. By contrast, the strain-induced deviation of the effective field remains significant even when the applied field is as high as 500\,mT. At lower fields this contribution competes with the heat-induced one. The calculations also confirm that the propagating strain pulses, also generated by the optical excitation, contribute much more weakly to the MCA change.
Importantly, at high fields the optically generated strain results in an out-of-plane deviation $\Delta\theta$ of $\textbf{B}_\mathrm{eff}$. At intermediate and low fields the combined effect of the heat- and strain-induced anisotropy changes results in both $\Delta\theta$ and $\Delta\phi$ being non-zero. At $B$=150\,mT, i.e.\ when the equilibrium magnetization is aligned along the external field, the heat-induced change of the magnetocrystalline constants dominates and $\textbf{B}_\mathrm{eff}$ deviates mostly in plane. These two limiting situations are illustrated in the insets of Fig.\,\ref{Fig:DirExFieldDep}(e).
Finally, the amplitude and the initial phase of the magnetization precession triggered via the heat- and strain-induced changes of the magnetic anisotropy of the galfenol film were calculated (Fig.\,\ref{Fig:DirExFieldDep}(b,c)). The good agreement between the experimental and calculated data confirms that the MCA in the optically excited (311)-(Fe,Ga) film is indeed modified via two distinct mechanisms and thus triggers the precession. At relatively low fields the heat-induced change of the anisotropy parameters is efficient, as was also shown in a number of previous works \cite{Carpene-PRB2010,Shelukhin-arxiv2015}. In addition, optically generated strain modifies the MCA via inverse magnetostriction. Furthermore, as the applied field increases, the heat-induced contribution to the magnetic anisotropy change decreases more rapidly than the strain-induced one. As a result, at relatively high fields the magnetization precession is excited mostly by the optically generated persistent strain.
It is instructive to note that the physical origin of the MCA change - inverse magnetostriction - is the same whether the step-like strain is induced optically in the magnetic film or the strain pulse is injected into the film. However, the magnetization precession trajectories may be very distinct. In the case of direct optical excitation the temporal profile of $\mathbf{B}_\mathrm{eff}$ modified by the abruptly emerging strain can be seen as a step-like jump. This sets the amplitude and initial phase of the magnetization precession, which are uniquely linked to the angle between the equilibrium and modified directions of $\mathbf{B}_\mathrm{eff}$ \cite{Kats-PRB2016}. The situation becomes much more intricate when a strain pulse drives the MCA change. As discussed in \cite{Scherbakov-PRL2010}, the strain pulse alters the direction of $\mathbf{B}_\mathrm{eff}$ upon propagation through the film, triggering the magnetization precession, which then proceeds around the equilibrium $\mathbf{B}_\mathrm{eff}$ once the strain pulse has left the film. Thus, the amplitude and the initial phase of the excited precession depend on the particular spatial and temporal profile of the strain pulse.
\section{Conclusions}
In conclusion, we have demonstrated two alternative approaches allowing one to modify the magnetocrystalline anisotropy of a metallic magnetic film on an ultrafast time scale by dynamical strain, with inverse magnetostriction being the underlying mechanism. Using a 100-nm film of the ferromagnetic metallic alloy (Fe,Ga) grown on a low-symmetry (311)-GaAs substrate, we were able to trigger the magnetization precession by dynamical strain of mixed, compressive and shear, character.
Picosecond quasi-longitudinal and quasi-transversal strain pulses can be injected into the galfenol film from the substrate, which leads to efficient excitation of the magnetization precession. In this case, owing to the distinct propagation velocities of QLA and QTA pulses in the substrate, their impact on the MCA can be easily distinguished and analyzed. Importantly, dynamical strain remains an efficient stimulus for triggering the magnetization precession in applied magnetic fields up to 1.2\,T.
Alternatively, one can directly excite the galfenol film by a femtosecond laser pulse. In this case there are two competing mechanisms mediating the ultrafast change of the MCA. The rapid increase of the lattice temperature results in a decrease of the MCA parameters. The lattice temperature increase also sets up a thermal stress which results in optically generated strain. We demonstrate that the heat-induced decrease of the MCA parameters and the change of the MCA mediated by inverse magnetostriction compete and both can trigger the magnetization precession. Despite the fact that the temporal and spatial profiles of the lattice temperature increase and of the optically-generated strain closely resemble each other, their impact on the MCA can be distinguished. This is possible owing to the distinct response of the magnetization to the heat-induced \textit{decrease} of the anisotropy parameters and to the strain-induced \textit{change and reorientation} of the effective field describing the MCA. In the former case the magnetization precession is triggered only if the external magnetic field applied along the magnetization hard axis is of moderate strength. In the latter case this constraint is lifted and the precession excitation was observed at applied fields of at least 0.5\,T. The experiments with strain pulses injected into the film suggest that strain-induced precession excitation remains efficient at higher applied fields as well.
We would like to emphasise that, in order to realize strain-induced control of magnetic anisotropy in an optically excited metallic film, the low symmetry and elastic anisotropy of the latter are of primary importance. As we have shown in our recent study \cite{Kats-PRB2016}, in a galfenol film of high symmetry grown on a (001) GaAs substrate the magnetic anisotropy change is dominated by the lattice heating and the related decrease of the MCA parameters. Thus, low-symmetry films and structures, where the dynamical strain can efficiently modify the MCA at relatively high magnetic fields, may be promising objects when one aims at the excitation of high-frequency magnetization precession. This finding further highlights the importance of low-symmetry ferromagnetic structures for ultrafast magneto-acoustic studies \cite{Chudnovsky-PRAppl2016}.
Finally, we note that direct detection of the optically-generated quasi-persistent strain in a metallic film is a challenging task. This strain component is intrinsically accompanied by the laser-pulse-induced lattice heating. Therefore, detection of this strain component by optical means is naturally obscured by the change of the optical properties of the medium \cite{Thomsen-PRB1986}. The observation, realized in our experiments in the high-magnetic-field limit, of magnetization precession around the new MCA direction, set mostly by the emerging quasi-persistent strain, can be considered an indirect way to probe this constituent of the ultrafast lattice dynamics.
\section{Acknowledgements}
This work was supported by the Russian Scientific Foundation [grant number 16-12-10485] through funding the experimental studies at the Ioffe Institute; and by the Engineering and Physical Sciences Research Council [grant number EP/H003487/1] through funding the growth and characterization of Galfenol films. The experimental work at TU Dortmund was supported by the Deutsche Forschungsgemeinschaft in the frame of Collaborative Research Center TRR 160 (project B6). The Volkswagen Foundation [Grant number 90418] supported the theoretical work at the Lashkaryov Institute. A.V.A. acknowledges the support from Alexander von Humboldt Foundation.\\
\section{Introduction}
In recent years a new approach to the problem of putting supersymmetric theories on the lattice has been developed based on discretization of a topologically twisted
version of the continuum theory \cite{Sugino:2004qd,Catterall:2004np,Catterall:2005fd,physrep,D'Adda:2005zk,Damgaard:2008pa}.~\footnote{The same lattice theories can be obtained using orbifold methods and indeed supersymmetric lattice actions for Yang-Mills theories were first constructed using
this technique \cite{Cohen:2003xe,Cohen:2003qw,Kaplan:2005ta,Damgaard} and the connection between twisting and orbifold methods forged in \cite{Unsal:2006qp}.}
Initially the focus was on lattice actions that target pure super Yang-Mills theories in the continuum limit, in particular ${\cal N}=4$ super Yang-Mills
\cite{latsusy-1,latsusy-2,Catterall:2012yq,Catterall:2011pd,Catterall:2013roa}. For alternative approaches to
numerical studies of ${\cal N}=4$ Yang-Mills see refs.~\cite{Hanada:2013rga,Hanada:2010kt,Honda:2011qk,Ishiki:2009sg}.
However in \cite{Matsuura,SuginoQuiver} these formulations
were extended to the case of theories incorporating fermions transforming in the fundamental representation of the gauge group
and hence targeting super QCD. The starting point for these later lattice constructions is a
continuum quiver
theory containing fields that transform as bifundamentals under a product gauge group $U(N_c)\times U(N_f)$. After discretization these bifundamental fields connect two
separate lattices and, in the limit that the $U(N_f)$ gauge coupling is sent to zero, yield a super QCD theory
with a global $U(N_f)$ flavor symmetry. This construction is
described in detail in section 3.
The lattice action we have employed in this work includes an additional Fayet-Iliopoulos term which, while invariant under the exact lattice supersymmetry, generates a potential for the scalar fields. It is straightforward
to show that this yields a non-zero vacuum expectation value for the auxiliary field (D-term supersymmetry breaking) if $N_f<N_c$. In section 4
we show the results from numerical simulations of this theory which support this
conclusion; we measure a non-zero vacuum energy and show that a light state - the Goldstino - appears
in the spectrum of the theory if $N_f<N_c$. In contrast we show that the vacuum energy is zero and this state is absent from
the spectrum
when $N_f>N_c$, which is consistent with the prediction that the theory does not spontaneously break
supersymmetry in that case.
\section{The starting point: twisted ${\cal Q}=8$ SYM in three dimensions}
We start from the continuum eight supercharge (${\cal Q}=8$) theory in three dimensions which is written
in terms of twisted fields which are completely antisymmetric tensors in spacetime under the twisted SO(3) group. The original two Dirac fermions
reappear in the twisted theory as the components of a K\"{a}hler-Dirac field
$\Psi=\left( \eta, \psi_{a}, \chi_{ab}, \theta_{abc} \right)$ where the indices $a,b,c=1 \ldots 3 $.
The bosonic sector of the twisted theory comprises a complexified gauge field ${\cal A}_a=A_a+iB_a$ containing the
original gauge field $A_a$ and an additional vector field $B_a$. This additional field
contains the three scalars expected of the eight supercharge theory which, being vectors under the R symmetry,
transform as a vector field after twisting.
The corresponding action is $S=S_{\rm exact}+S_{\rm closed}$, where
\begin{eqnarray}
S_{\rm exact} &=& \frac{1}{g^2} \; {\cal Q} \Lambda = \frac{1}{g^2} \; {\cal Q} \int d^3x {\rm Tr} \left[ \chi_{ab}(x){\cal F}_{ab}(x) + \eta(x)\left[{\overline{\cal D}}_{a},{\cal D}_{a}\right] + \frac{1}{2}\eta(x)d(x) \right],
\label{quiverActionQexact}\\
S_{\rm closed}&=& - \; \frac{1}{g^2} \int d^3x {\rm Tr} \left[\theta_{abc}(x) {\overline{\cal D}}_{[c}\chi_{ab]}(x) \right].
\label{quiverActionQclosed}
\end{eqnarray}
Here all fields are in the adjoint representation of a $U(N)$ gauge group $X=\sum_{a=1}^{N^2} X_a T_a$ and we adopt an antihermitian basis for the
generators $T_a$. ${\cal D}_{a}$ and ${\overline{\cal D}}_{a}$ are the continuum covariant derivatives defined in terms of the complexified gauge fields as ${\cal D}_{a} = \partial_{a} + {\cal A}_{a}$ and ${\overline{\cal D}}_{a} = \partial_{a} + {\overline{\cal A}}_{a}$.
The action of the scalar supersymmetry on the fields is given by
\begin{eqnarray}
{\cal Q} {\cal A}_a &=& \psi_a\nonumber\\
{\cal Q} {\overline{\cal A}}_a &=& 0\nonumber\\
{\cal Q} \psi_a &=& 0 \nonumber \\
{\cal Q} \chi_{ab} &=& -{\overline{\cal F}}_{ab}\nonumber\\
{\cal Q} \eta &=& d \nonumber\\
{\cal Q} \theta_{abc} &=& 0
\end{eqnarray}
Notice that we have included an auxiliary field $d(x)$ that allows the algebra to be off-shell nilpotent ${\cal Q}^2=0$.
This feature
then guarantees that $S_{\rm exact}$ is supersymmetric.
The equation of
motion for this auxiliary field is then
\begin{equation}
d(x)=\left[{\overline{\cal D}}_{a},{\cal D}_{a}\right]
\end{equation}
The ${\cal Q}$-invariance of $S_{\rm closed}$ follows from the Bianchi identity\footnote{Note that it is also possible to write the 3d action completely in terms of a ${\cal Q}$-exact form without a ${\cal Q}$-closed term by employing
an additional auxiliary field $B_{abc}$.} \\
\begin{equation}
\epsilon_{abc}{\overline{\cal D}}_{c}{\overline{\cal F}}_{ab} = 0.
\label{bianchi}
\end{equation}
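As a quick sanity check, the identity can be verified symbolically in the abelian limit, where the commutator term in ${\overline{\cal F}}_{ab}$ drops out and the covariant derivative reduces to an ordinary one. A minimal sympy sketch (the function names \texttt{A0,A1,A2} are placeholders for the components of ${\overline{\cal A}}_a$):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
# arbitrary (abelian) gauge potential components Abar_a(x1, x2, x3)
A = [sp.Function(f'A{a}')(*x) for a in range(3)]

def F(a, b):
    # abelian field strength Fbar_ab = d_a Abar_b - d_b Abar_a
    return sp.diff(A[b], x[a]) - sp.diff(A[a], x[b])

# eps_abc d_c Fbar_ab, summed over all index values
bianchi = sum(sp.LeviCivita(a, b, c) * sp.diff(F(a, b), x[c])
              for a in range(3) for b in range(3) for c in range(3))

assert sp.simplify(bianchi) == 0  # identity holds for arbitrary A_a
```

The cancellation is just the antisymmetry of $\epsilon_{abc}$ against the symmetry of mixed partial derivatives; the full non-abelian statement additionally uses the covariant derivative, which this sketch does not capture.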
To discretize this theory we place all fields on the links of a lattice. This 3d lattice consists of the usual hypercubic links plus additional face and body diagonal links. In detail
these assignments are
\begin{center}
\begin{tabular}{c|c}\hline
continuum field & lattice link\\
${\cal A}_a(x)$ & $x\to x+\hat{a}$\\
${\overline{\cal A}}_a(x)$ & $x+\hat{a}\to x$\\
$\psi_a(x)$ & $x\to x+\hat{a}$\\
$\chi_{ab}$ & $x+\hat{a}+\hat{b} \to x$\\
$\eta(x)$ & $x\to x$\\
$d(x)$ & $x\to x$\\
$\theta_{abc}$ & $x\to x+\hat{a}+\hat{b}+\hat{c}$ \\
\hline
\end{tabular}
\end{center}
The lattice gauge field will be denoted ${\cal U}_\mu(x)$ in the following discussion.
For the scalar fields $d(x)$, $\eta(x)$ the link degenerates to a single site. Notice that the orientation of a given fermion
link field is determined by the even/odd character of its corresponding continuum antisymmetric form.
The link character of a field determines its transformation properties under lattice gauge transformations, e.g.\ ${\cal U}_a(x)\to G(x){\cal U}_a(x)G^\dagger(x+\hat{a})$.
To complete the construction of the lattice action it is necessary to replace continuum covariant derivatives by appropriate gauged lattice difference operators. The necessary
prescription was described in \cite{physrep,Matsuura,twist2orb}. It is essentially determined by the simultaneous requirements that
the lattice difference agree with the continuum derivative as the lattice spacing is sent to zero and that it yields expressions that transform
as the appropriate link field under lattice gauge transformations. The lattice difference operators acting on a field $f^{(\pm)}$, where $(\pm)$ corresponds to the orientation of the field\footnote{Note that $\psi_{a}(x)$ and $\theta_{abc}(x)$ originate from lattice site $x$ and are thus positively oriented. $\chi_{ab}(x)$, however, terminates at lattice site $x$ and is therefore assigned a negative orientation.}, are given by:
\begin{eqnarray}
{\cal D}^{(+)}_{a}f^{(+)}_{b_{1},b_{2},...,b_{n}}(x) &=& {\cal U}_{a}(x)f^{(+)}_{b_{1},b_{2},...,b_{n}}(x+\hat{a}) - f^{(+)}_{b_{1},b_{2},...,b_{n}}(x){\cal U}_{a}(x+\hat{b})
\label{diffop-1} \\
{\cal D}^{(+)}_{a}f^{(-)}_{b_{1},b_{2},...,b_{n}}(x) &=& {\cal U}_{a}(x+\hat{b})f^{(-)}_{b_{1},b_{2},...,b_{n}}(x+\hat{a}) - f^{(-)}_{b_{1},b_{2},...,b_{n}}(x){\cal U}_{a}(x) \\
\nonumber \\
{\overline{\cal D}}^{(+)}_{a}f^{(+)}_{b_{1},b_{2},...,b_{n}}(x) &=& f^{(+)}_{b_{1},b_{2},...,b_{n}}(x+\hat{a}){\overline{\cal U}}_{a}(x+\hat{b}) - {\overline{\cal U}}_{a}(x)f^{(+)}_{b_{1},b_{2},...,b_{n}}(x) \\
{\overline{\cal D}}^{(+)}_{a}f^{(-)}_{b_{1},b_{2},...,b_{n}}(x) &=& f^{(-)}_{b_{1},b_{2},...,b_{n}}(x+\hat{a}){\overline{\cal U}}_{a}(x) - {\cal U}_{a}(x+\hat{b})f^{(-)}_{b_{1},b_{2},...,b_{n}}(x)
\label{diffop-2} \\
\nonumber \\
{\cal D}^{(-)}_{a}f^{(\pm)}_{b_{1},b_{2},...,b_{n}}(x) &=& {\cal D}^{(+)}_{a}f^{(\pm)}_{b_{1},b_{2},...,b_{n}}(x-\hat{a}) \\
{\overline{\cal D}}^{(-)}_{a}f^{(\pm)}_{b_{1},b_{2},...,b_{n}}(x) &=& {\overline{\cal D}}^{(+)}_{a}f^{(\pm)}_{b_{1},b_{2},...,b_{n}}(x-\hat{a}),
\label{diffOps}
\end{eqnarray} where $\hat{b}=\sum_{i=1}^{n}\hat{b}_{i}$ in equations (\ref{diffop-1}) to (\ref{diffop-2}).
For example the continuum derivative $D_a\psi_b$ becomes
\begin{equation}
{\cal D}^{(+)}_a\psi_b(x)={\cal U}_a(x)\psi_b(x+\hat{a})-\psi_b(x){\cal U}_a(x+\hat{b})\end{equation}
This prescription yields a set of link paths which, when contracted with the link field $\chi_{ab}(x)$, yields a closed loop whose trace is gauge invariant:
\begin{equation}
{\rm Tr}\;\left[\chi_{ab}(x)\left({\cal U}_a(x)\psi_b(x+\hat{a})-\psi_b(x){\cal U}_a(x+\hat{b})\right)\right]\label{ex}\end{equation}
It has the
correct naive continuum limit provided that (in some suitable gauge) we can expand ${\cal U}_a(x)=I_{N}+{\cal A}_a(x)$.
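The gauge invariance of the trace in eqn.~\ref{ex} can be checked numerically. The following numpy sketch (illustrative only, not part of the simulation code) assigns random complex matrices to the fields and independent random unitaries to the gauge rotations at the four sites involved; since the invariance is purely algebraic, arbitrary matrices suffice even though the actual link fields are algebra-valued:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
dag = lambda M: M.conj().T

def rand_u(n):
    # random unitary from the QR decomposition of a complex Gaussian matrix
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def rand_m(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# fields entering Tr[chi_ab(x) (U_a(x) psi_b(x+a) - psi_b(x) U_a(x+b))]
chi, Ua_x, Ua_xb = rand_m(N), rand_m(N), rand_m(N)   # chi_ab: x+a+b -> x
psi_xa, psi_x = rand_m(N), rand_m(N)                 # psi_b(x+a), psi_b(x)

def term(chi, Ua_x, psi_xa, psi_x, Ua_xb):
    return np.trace(chi @ (Ua_x @ psi_xa - psi_x @ Ua_xb))

before = term(chi, Ua_x, psi_xa, psi_x, Ua_xb)

# independent gauge rotations at the four sites x, x+a, x+b, x+a+b;
# each link field f(y -> z) transforms as G(y) f G^dagger(z)
G = {s: rand_u(N) for s in ('x', 'xa', 'xb', 'xab')}
after = term(G['xab'] @ chi @ dag(G['x']),
             G['x'] @ Ua_x @ dag(G['xa']),
             G['xa'] @ psi_xa @ dag(G['xab']),
             G['x'] @ psi_x @ dag(G['xb']),
             G['xb'] @ Ua_xb @ dag(G['xab']))

assert np.isclose(before, after)   # the trace is gauge invariant
```

Both terms close into loops based at $x+\hat{a}+\hat{b}$, so the rotations cancel pairwise under the trace.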
The field strength on the lattice, ${\cal F}_{ab}(x)$, is defined using the forward difference operator as:
\begin{equation}
{\cal F}_{ab}(x) = {\cal D}^{(+)}_{a}{\cal U}_{b}(x).
\label{Fab}
\end{equation}
In lattice QCD the
unit matrix arising in this expansion is automatic since the link fields take their values in the group. However the constraints
of exact lattice supersymmetry require that the lattice gauge fields take their values, like the fermions,
in the algebra. In this case the unit matrix can
then be interpreted as arising from giving a vev to the trace mode of the original scalar fields $B_a$. This feature is {\it required} by lattice supersymmetry
but is only possible because we are
working with a complexified $U(N)$ theory - another indication of the tight connection between twisting and exact
supersymmetry. It also implies that the path integral defining the quantum theory will
use a flat measure rather than the usual Haar measure employed in conventional lattice gauge theory. Such a prescription
would usually break lattice gauge invariance but again complexification comes to the rescue since the Jacobian resulting from
a gauge transformation of the $D{\cal U}$ measure cancels against an equivalent one coming from $D{\overline{\cal U}}$. \\
\\ We now show how to use this three dimensional lattice model to construct a two dimensional quiver theory while maintaining the exact lattice
supersymmetry.
\section{Two dimensional quivers from three dimensional lattice Yang-Mills}
Consider a lattice whose extent in the 3-direction comprises just two 2d slices. Furthermore we shall assume free boundary conditions in the 3-direction
so that these two slices are connected by just a single set of links in the 3-direction - those running from
$x_3=0$ to $x_3=1$. Ignoring for the moment any fields that live on these latter links it is clear that the gauge group
can be chosen independently on these two slices. We choose a group $U(N_c)$ for the slice at $x_3=0$ and $U(N_f)$ at $x_3=1$ and will
henceforth refer to them as the $N_c$ and $N_f$ lattices. Denoting directions on the 2d slices by Greek indices
$\mu,\nu = 1,2 $ the fields living entirely on these lattices are given by
\begin{eqnarray}
N_{c} \; \; \; &:& \; \; \; \Psi(x) = \left( \eta, \psi_{\mu}, \chi_{\mu\nu} \right), \; \; \; {\cal U}_{\mu} = I_{N_{c}} + {\cal A}_{\mu},\qquad d
\label{Nc-ferm} \\
N_{f} \; \; \; &:& \; \; \; \hat{\Psi} (\overline{x})= \left( \hat{\eta}, \hat{\psi}_{\mu}, \hat{\chi}_{\mu\nu} \right), \; \; \; \hat{{\cal U}}_{\mu} = I_{N_{f}} + \hat{{\cal A}}_{\mu}, \qquad \hat{d}
\label{Nf-ferm}
\end{eqnarray}
In these expressions $x$ ($\overline{x}$) denotes the coordinates on the $N_{c}$ ($N_{f}$) lattice and $I_{N_{c}(N_{f})}$ denotes the $N_{c}(N_{f}) \times N_{c}(N_{f})$ unit matrix.
Now consider fields that live on the links between the $N_c$ and $N_f$ lattice. These must necessarily transform as bi-fundamentals under $U(N_c)\times U(N_f)$.
We have,
\begin{eqnarray}
N_{c} \; \times \; N_{f} \; \; \; &:& \; \; \;\Psi_{\text{bi-fund}}(x,\overline{x}) = \left( \psi_{3}, \chi_{\mu3}, \theta_{\mu\nu3} \right) = \left( \lambda, \lambda_{\mu}, \lambda_{\mu\nu}\right),\qquad \phi
\label{bifund}
\end{eqnarray} The second equality in the above equation is
a mere change of variables and corresponds to labeling fields according to their two dimensional character.
The complete field content of this model is summarized in the table below: \\
\begin{center}
\begin{tabular}{ c | c | c}
$N_{c}$-lattice & Bi-fundamental fields & $N_{f}$-lattice \\
$x$ & $(x,\overline{x})$ , $(\overline{x},x)$ & $\overline{x}$ \\ \hline
& & \\
${\cal A}_{\mu}(x)$ & $\phi(x,\overline{x})$ & $\hat{{\cal A}}_{\mu}(\overline{x})$\\
$\eta(x)$ & $ \lambda(x,\overline{x})$ & $\hat{\eta}(\overline{x})$\\
$\psi_{\mu}(x)$ & $\lambda_{\mu}(\overline{x}+\mu,x)$ & $\hat{\psi}_{\mu}(\overline{x})$\\
$\chi_{\mu\nu}(x)$ & $\lambda_{\mu\nu}(x,\overline{x}+\mu+\nu)$ & $\hat{\chi}_{\mu\nu}(\overline{x})$\\
& & \\ \hline
\end{tabular}
\end{center}
Defining $G(x)$ as a group element belonging to $U(N_{c})$ and $H(\overline{x})$ as one belonging to $U(N_{f})$, the lattice
gauge transformations for the bi-fundamental fields are as follows:
\begin{eqnarray}
\phi(x) &\rightarrow& G(x)\phi(x)H^{\dagger}(\overline{x})\nonumber\\
\lambda(x) &\rightarrow& G(x)\lambda(x)H^\dagger(\overline{x})\nonumber\\
\lambda_{\mu}(x) &\rightarrow& H(\overline{x}+\mu)\lambda_\mu(x)G^{\dagger}(x)\nonumber\\
\lambda_{\mu \nu}(x) &\rightarrow &G(x)\lambda_{\mu \nu}(x)H^{\dagger}(\overline{x}+\mu+\nu)
\label{gaugetrans}
\end{eqnarray}
It is crucial to note that this generalization of the original lattice super Yang-Mills theory to a quiver model is completely consistent
with both the quiver gauge symmetries and the exact supersymmetry.
For example the 3d term given in eqn.~\ref{ex} yields a bi-fundamental term of the form
\begin{equation}
{\rm Tr}\,\left[\lambda_{\mu}(x)\left({\cal U}_\mu(x)\lambda(x+\mu)-\lambda(x)\hat{{\cal U}}_\mu(\overline{x})\right)\right]
\end{equation} which is invariant under the the generalized gauge transformations given in eqn.~\ref{gaugetrans}.
Thus, the above construction yields a consistent lattice quiver gauge theory containing both adjoint and bi-fundamental fields
transforming under a product $U(N_c)\times U(N_f)$ gauge group. Consider now setting the $U(N_f)$ gauge coupling to zero. This sets $\hat{{\cal U}}_\mu=I_{N_{f}}$ up to gauge transformations and it
is then consistent to set all other fields on the $N_f$ lattice to zero. The original $U(N_f)$ gauge symmetry now becomes a global $U(N_f)$ flavor symmetry
which acts on a set of complex scalar fields $\phi$ transforming in the fundamental representation of the gauge group and their fermionic
superpartners $(\lambda,\lambda_\mu,\lambda_{\mu\nu})$. The situation is depicted in figure~\ref{3dPic}.
\begin{figure}[b]
\begin{center}
\includegraphics[height=80mm]{QuiverStruc2.pdf}
\caption{3d quiver model}
\label{3dPic}
\end{center}
\end{figure}
At this point we have the freedom to add to the action one further supersymmetric and gauge invariant term - namely $r\sum_x {\rm Tr\;} d(x)=r {\cal Q} \sum_x {\rm Tr\;} \eta$. This is a Fayet-Iliopoulos
term. Its presence changes the equation of motion for the auxiliary field
\begin{equation}
d(x)={\overline{\cal D}}^{(-)}_\mu{\cal U}_\mu(x)+\phi(x){\overline{\phi}}(x)-rI_{N_c}
\label{dFI} \end{equation}
with $I_{N_c}$ a $N_c\times N_c$ unit matrix. The SUSY transformations for the remaining adjoint and fundamental fields are:
\begin{center}
\begin{tabular}{ c | c}
Adjoint Fields & Fundamental fields \\ \hline \\
$ {\cal Q} {\cal A}_{\mu} = \psi_{\mu}$ & $ {\cal Q} \phi = \lambda $ \\
$ {\cal Q} {\overline{\cal A}}_{\mu} = 0$ & $ {\cal Q} {\overline{\phi}} = 0 $ \\
$ {\cal Q} \psi_{\mu} = 0 $ & ${\cal Q} \lambda = 0$ \\
$ {\cal Q} \chi_{\mu\nu} = -{\overline{\cal F}}_{\mu\nu}$ & ${\cal Q} \lambda_{\mu} = -{\overline{\cal D}}_{\mu} {\overline{\phi}}$ \\
$ {\cal Q} \eta = d$ & ${\cal Q} \lambda_{\mu\nu} = 0 $ \\
& \\
\end{tabular}
\end{center} After integration over $d$ the Fayet-Iliopoulos term
yields a scalar potential term which will play a crucial role in determining whether the system can undergo spontaneous supersymmetry
breaking.
The final action may be written as
\begin{eqnarray}
S_{\rm adj} &=& \kappa \sum_{x} {\rm Tr} \left[ - {\overline{\cal F}}_{\mu\nu}(x) {\cal F}_{\mu \nu}(x) - \frac{1}{2} ({\overline{\cal D}}^{(-)}_{\mu}{\cal U}_{\mu})^{2} - \eta(x) {\overline{\cal D}}^{(-)}_{\mu}\psi^{\mu}(x) - \chi_{\mu \nu}(x){\cal D}^{(+)}_{[\mu}\psi_{\nu]}(x) \right], \nonumber \\
\label{action-adj} \\
S_{\rm fund} &=& \kappa \; \sum_{x} {\rm Tr} \left[- \overline{{\cal D}^{(+)}_{\mu}\phi(x)} {\cal D}^{(+)}_{\mu}\phi(x) - \frac{1}{2} \left[ \left( \phi(x){\overline{\phi}}(x) - rI_{N_c} \right)^{2} \right] + \left[ {\overline{\cal D}}_{\mu}^{(-)}{\cal U}_{\mu}(x)\right] \left( \phi(x){\overline{\phi}}(x) - rI_{N_c} \right) \right] \nonumber \\
&-& \left[ \eta(x) \lambda(x) {\overline{\phi}}(x) + \left \{ \lambda_{\mu}(x){\cal D}^{(+)}_{\mu}\lambda(x) - \lambda_{\mu}(x)\psi_{\mu}(x) \phi(x+\mu) \right \} \right. \nonumber
\label{actionFundFerm-start} \\
&-& \left. \left \{ \lambda_{\mu \nu}(x)\;{\overline{\cal D}}_{\mu}^{(+)}\lambda_{\nu}(x) - \lambda_{\mu \nu}(x){\overline{\phi}}(x+\mu+\nu)\chi_{\mu \nu}(x) \right \} \right],\nonumber \\
\label{actionFundFerm-end}
\end{eqnarray}
In practice we have also included the following soft SUSY breaking mass term, $S_{\rm soft}$, in the adjoint action, $S_{\rm adj}$ in equation (\ref{action-adj}):
\begin{equation}
S_{\rm soft} = \mu^{2}\left[ \frac{1}{N_c} {\rm Tr\;} \left( {\overline{\cal U}}_{\mu}{\cal U}_{\mu} \right) - 1 \right]^{2}.
\label{actionSoft}
\end{equation}
Such a term is necessary to create a potential for the trace mode of the twisted scalar fields as we have discussed earlier. In principle we should extrapolate $\mu^2\to 0$ at the end of the calculation and so we have obtained all our results for
a range of $\mu^2$. In practice we observe that these soft breaking effects are rather small. \\
\\ Finally, the lattice coupling $\kappa$ appearing above is given by:
\begin{equation}
\kappa = \frac{N_{c}LT}{2\lambda A}.
\end{equation} Here, $\lambda = g^{2}N_{c}$ is the dimensionful 't~Hooft coupling, $L$ and $T$ are the numbers of points in each direction of
the 2d lattice and $A$ is a continuum area - the importance of
interactions in the theory being controlled by the dimensionless combination $\lambda A$. When we later discuss
our numerical results we refer to
this dimensionless combination as simply $\lambda$.
\section{Vacuum Structure and SUSY Breaking Scenarios}
Let us return to the equation of motion for the auxiliary field $d(x)$. If we sum the trace of this expression over all lattice sites and take
its expectation value we find
\begin{equation}
\langle \sum_x {\rm Tr\;}\,d(x) \rangle= \langle \sum_x {\rm Tr\;}\,\left(\phi(x){\overline{\phi}}(x)- rI_{N_c}\right) \rangle
\label{dvev}
\end{equation}
Since the lefthand side of this
expression is the expectation value of the ${\cal Q}$-variation of some operator
the question of whether supersymmetry breaks spontaneously or not is determined by whether the righthand side is
non-zero. Indeed after we integrate over the auxiliary field $d$
we find a scalar potential of the form
\begin{equation}
S_{\rm Dterm} = \sum_{x}\sum_{f=1}^{N_{f}} \frac{\kappa}{2} {\rm Tr\;} \left( \phi^f(x){\overline{\phi}}^f(x) - rI_{N_c} \right)^{2},
\label{Dterm}
\end{equation}
Consider the
case where $N_f<N_c$.
Using $SU(N_c)$ transformations one can diagonalize the $N_c\times N_c$ matrix $\phi{\overline{\phi}}$. In general it will have
$N_f$ non-zero real, positive eigenvalues and $N_c-N_f$ zero eigenvalues. This immediately implies that there
is no configuration of the fields $\phi$ where the potential is zero. Indeed the minimum of the potential will
have energy $r^2(N_c-N_f)$ and corresponds to a situation where $N_f$ scalars develop vacuum expectation values breaking the gauge group to $U(N_c-N_f)$. The situation when $N_f\ge N_c$ is qualitatively different;
now the rank of $\phi{\overline{\phi}}$ is at least $N_c$ and a zero energy vacuum configuration is possible. In such a situation
$N_c$ scalars pick up vacuum expectation values and the gauge symmetry is completely broken. \\
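This rank argument is easy to confirm numerically. A short numpy sketch (with $r=1$ and the $\kappa/2$ prefactor dropped) checks that ${\rm Tr}(\phi{\overline{\phi}}-rI_{N_c})^2$ is bounded below by $r^2(N_c-N_f)$ for random $\phi$ when $N_f<N_c$, that the bound is saturated when $N_f$ scalars acquire vevs $\sqrt{r}$, and that a zero-energy vacuum exists when $N_f\ge N_c$:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 1.0

def d_term(phi, r):
    # Tr (phi phibar - r I)^2 with phibar = phi^dagger
    Nc = phi.shape[0]
    M = phi @ phi.conj().T - r * np.eye(Nc)
    return np.trace(M @ M).real

# N_f < N_c: rank(phi phibar) <= N_f, so at least N_c - N_f eigenvalues
# of phi phibar vanish and the potential is bounded below by r^2 (N_c - N_f)
Nc, Nf = 3, 2
for _ in range(200):
    phi = rng.normal(size=(Nc, Nf)) + 1j * rng.normal(size=(Nc, Nf))
    assert d_term(phi, r) >= r**2 * (Nc - Nf) - 1e-9

# the bound is saturated when N_f scalars pick up vevs sqrt(r)
phi_min = np.sqrt(r) * np.eye(Nc, Nf)
assert np.isclose(d_term(phi_min, r), r**2 * (Nc - Nf))

# N_f >= N_c: phi = sqrt(r) [I | 0] gives a zero-energy vacuum
Nc, Nf = 2, 3
phi0 = np.sqrt(r) * np.eye(Nc, Nf)
assert np.isclose(d_term(phi0, r), 0.0)
```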
\\ For the case when $N_f<N_c$ where ${\cal Q}$-supersymmetry is expected to break we would
expect the spectrum of the theory to contain a massless fermion - the \emph{goldstino} \cite{Goldstino-theorem}. To
see how this works in the twisted theory consider the vacuum energy
\begin{equation}
\langle 0| H |0\rangle \ne 0,
\label{susy-breaking-1}
\end{equation} which is equivalent to $\langle\left \{ {\cal Q}, {\cal O} \right \}\rangle\ne 0$ for some operator ${\cal O}$.
In the two dimensional twisted theory the relevant part of
the supersymmetry algebra is $\left \{ {\cal Q}, {\cal Q}_{\mu} \right \} = P_{\mu}$ \cite{latsusy2d} so that eqn.~\ref{susy-breaking-1} is
equivalent to
\begin{equation}
\langle 0| \left \{ {\cal Q}, {\cal Q}_{0} \right \} |0\rangle \ne 0,
\label{susy-breaking-3}
\end{equation}
Note that the equation above involves both the scalar ${\cal Q}$ and the 1-form supercharge ${\cal Q}_{\mu}$. Corresponding
to these supercharges are a set of supercurrents, $J$ and $J_\mu$, whose
form can be derived in the usual manner by varying the continuum twisted action under infinitesimal spacetime
dependent susy transformations. This yields gauge invariant supercurrents on the lattice of the following form
\begin{eqnarray}
J(x) &=& \sum_\mu \left[\psi_{\mu}(x){\overline{\cal U}}_{\mu}(x) \right] d(x) + ... ,
\label{J(x)} \\
J_0(x) &=& \eta(x)d(x) + ... ,
\label{Jprime(y)}
\end{eqnarray}
and using the equations of motion, the auxiliary field $d(x)$ can be replaced by
\begin{equation}
d(x) = \sum_{\mu=1,2} \left[{\overline{\cal D}}_{\mu}, {\cal D}_{\mu} \right] + \left[ \phi(x){\overline{\phi}}(x) -rI_{N_c} \right]
\end{equation} We therefore expect a possible Goldstino signal to manifest itself in the contribution of
a light state to the two-point function:
\begin{equation}
C(t)=\langle 0| {\cal O}(x) {\cal O}^{\prime}(y) |0\rangle,
\end{equation} where $t$ corresponds to $(x^{0}-y^{0})$ and a suitable set of lattice interpolating operators are given by:
\begin{equation}
{\cal O}(x) = {\rm Tr} \,\left[ \sum_\mu \psi_{\mu}(x){\overline{\cal U}}_{\mu}(x) \left( \phi(x){\overline{\phi}}(x) - {\rm rI_{N_{c}}} \right) \right].
\label{goldstino-2}
\end{equation} and
\begin{equation}
{\cal O}^{\prime}(y) ={\rm Tr} \,\left[ \eta(y) \left(\phi(y){\overline{\phi}}(y) - {\rm rI_{N_{c}}} \right)\right].
\label{goldstino-3}
\end{equation}
\section{Numerical Results}
We employ an RHMC algorithm to simulate our system, having first replaced all the twisted fermions in our model by corresponding pseudofermions - see for
example \cite{OOcode,david}. The simulations are performed by imposing anti-periodic (thermal) boundary conditions on the fermions along one of the two space-time directions. This is done to avoid the fermion zero modes resulting from the scalar component of the twisted fermion, $\eta$. As discussed in \cite{2dsignprob,Catterall:2014vga} this
has the added benefit of ameliorating the sign problem for these lattice theories. This breaks supersymmetry explicitly
by a term that vanishes as the lattice volume is
increased. \\
\\ In this section, we contrast results from simulations
with $N_f=2$, $N_c=3$ corresponding to the predicted
susy breaking scenario with results from simulations with $N_f=3$, $N_c=2$ - the susy preserving case. We ran our simulations for three different values of the 't~Hooft coupling, $\lambda=0.5, 1.0$ and 1.5, and observed the same qualitative behavior for each value of $\lambda$. The results presented in this section correspond to $\lambda=1.0$. The FI parameter, $r$, is a free parameter and is set to 1.0 for the rest of the discussion. \\
\\ As a first check, we compared the expectation value of the bosonic action with the theoretical value obtained using
a supersymmetric Ward identity
\begin{equation}
\langle \kappa S_{\rm boson} \rangle = \left(\frac{3}{2}N_{c}^{2} + N_{c}N_{f}\right)V .
\label{bActionQuiver}
\end{equation} In appendix A we show how to compute this value. Figure~\ref{baction} shows a plot of the bosonic action for various values of the soft SUSY breaking coupling $\mu$. In principle we should take the limit $\mu\to 0$, although it should be clear from
the plot that the $\mu$ dependence is in fact rather weak. We have normalised the data to its value obtained by assuming supersymmetry
is unbroken.
The red points at the bottom of the figure denote the SUSY preserving case and
it can be observed that they agree with the theoretical prediction. This is to
be contrasted with the case when $N_f<N_c$ denoted by the blue points which shows a large
deviation from eqn.~\ref{bActionQuiver} and is the first sign that supersymmetry is spontaneously broken in this
case. \\
\begin{figure}
\begin{center}
\includegraphics[height=80mm]{Bact2.pdf}
\caption{Normalized bosonic action vs soft breaking coupling $\mu$ for $\lambda = 1.0$ on a $16\times 6$ lattice}
\label{baction}
\end{center}
\end{figure}
\\ The spatial Polyakov lines shown in figure~\ref{polyS} also show a distinct difference
between the $N_f<N_c$ and $N_f>N_c$ cases. The red lines where $|P|\approx 1$ correspond to the SUSY preserving case
and are consistent with a deconfined or fully Higgsed phase. Indeed the Polyakov line is
a topological operator and in a susy preserving phase should be coupling-constant independent, consistent
with what is seen. The blue line in the lower half of the plot corresponds to smaller
values which is qualitatively consistent with
the predicted partial Higgsing of the gauge field in the phase where supersymmetry is spontaneously broken. \\
\begin{figure}
\begin{center}
\includegraphics[height=80mm]{polyS.pdf}
\caption{Spatial Polyakov line vs $\mu$ for $\lambda=1.0$ on a $16\times 6$ lattice}
\label{polyS}
\end{center}
\end{figure}
\\ One of the clearest signals of supersymmetry breaking can be obtained if one considers the equation of motion for the
auxiliary field, eqn.~\ref{Dterm}. We expect the susy preserving case to obey
\begin{equation}
\frac{1}{N_{c}} {\rm Tr\;} \left[ \phi(x){\overline{\phi}}(x) \right] = 1.
\label{Tr-ppd}
\end{equation}
The red points in figure~\ref{Tr-ppd-fig}, corresponding to $N_{f} > N_{c}$, are consistent with this
over a wide range of $\mu$. We attribute the small residual deviation
as $\mu\to 0$ to our use of
antiperiodic boundary conditions, which inject explicit ${\cal Q}$ susy breaking into the system.
The simulations with $N_f<N_c$ (blue points) however show a clear signal for spontaneous supersymmetry breaking with the value of
this quantity deviating dramatically from its supersymmetric value even as $\mu\to 0$. \\
\begin{figure}
\begin{center}
\includegraphics[height=80mm]{Tr-ppd.pdf}
\caption{$\frac{1}{N_c}{\rm Tr}\,\phi{\overline{\phi}}$ vs $\mu$ for a 't Hooft coupling of $\lambda = 1.0$ on a $16\times 6$ lattice}
\label{Tr-ppd-fig}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=80mm]{Abs-corr.pdf}
\caption{Correlation function C(t) for $\lambda = 1.0$ and $\mu = 0.3$ on various asymmetric lattices}
\label{effb}
\end{center}
\end{figure}
\\ Finally we turn to our results for a would-be Goldstino. We search for this by
computing the following two-point correlation function
\begin{equation}
C(t) = \sum_{x,y} \langle O^{\prime}(y,t)\, O(x,0) \rangle
\label{C(t)}
\end{equation} where $O^{\prime}(y,t)$ and $O(x,0)$ are fermionic operators given by:
\begin{eqnarray}
O(x,0) &=& \psi_{\mu}(x,0){\cal U}_{\mu}(x,0)\left[\phi(x,0){\overline{\phi}}(x,0) - rI_{N_{c}} \right]
\label{O} \\
O^{\prime}(y,t) &=& \eta(y,t) \left[\phi(y,t){\overline{\phi}}(y,t) - rI_{N_{c}} \right].
\label{Oprime}
\end{eqnarray} Since it is computationally very cumbersome to evaluate the above correlation function for every lattice site $x$ at the source, we instead evaluate the correlator for every lattice site $y$ for a few randomly chosen source points $x$.
In figure \ref{effb} we show the
logarithm of this correlator as a function of temporal distance for a range of spatial lattice sizes, $L=6, 8, 12$ and $14$. The anti-periodic boundary condition is applied along the temporal direction, corresponding to $T=16$, for both $N_f>N_c$ and $N_f<N_c$. The approximate linearity of
these curves is consistent with the correlator being
dominated by a single state in both cases. However when $N_f>N_c$ the
amplitude of this correlator is strongly suppressed relative to the case where
$N_f<N_c$. Furthermore, the effective mass extracted from fits to this latter correlator (figure~\ref{Meff}) falls as
the spatial lattice size $L$ increases, consistent with a vanishing mass in the large volume limit. The lines in figure~\ref{Meff}
show fits to $1/L$ - the smallest mass consistent with the boundary conditions - the dashed green line is
a fit constrained to go through the origin while the dotted red line allows the intercept to float. This is just what
we would expect of a would-be
Goldstino arising from spontaneous breaking of the exact ${\cal Q}$-symmetry.
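The effective masses quoted above come from the standard correlator-ratio technique. As a hedged illustration (our own sketch, not the fitting code used for figure~\ref{Meff}): for a correlator dominated by a single state, $C(t)=Ae^{-mt}$, the effective mass $m_{\rm eff}(t)=\log[C(t)/C(t+1)]$ is flat and equal to $m$:

```python
import numpy as np

def effective_mass(C):
    # m_eff(t) = log( C(t) / C(t+1) ) for a positive, decaying correlator
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# Synthetic single-state correlator: C(t) = A exp(-m t) with m = 0.5
t = np.arange(8)
C = 2.0 * np.exp(-0.5 * t)
print(effective_mass(C))  # every entry equals 0.5 up to rounding
```

On real data one looks for a plateau of $m_{\rm eff}(t)$ before fitting; the anti-periodic temporal boundary also modifies the large-$t$ behavior, which this toy example ignores.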
\begin{figure}
\begin{center}
\includegraphics[height=80mm]{MvsLinv.pdf}
\caption{Goldstino mass $M_{\rm eff}$ derived from fits vs inverse transverse lattice size $L^{-1}$}
\label{Meff}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, we have reported on a numerical study of super QCD in two
dimensions. The model in question possesses ${\cal N}=(2,2)$ supersymmetry in the continuum limit
while our lattice formulation preserves a single exact supercharge at non-zero lattice spacing. It is expected that
the single supersymmetry will be sufficient to ensure that full supersymmetry is regained without fine
tuning in the continuum limit. This
constitutes the first lattice study of a supersymmetric theory containing fields which transform
in both the fundamental and adjoint representations of the gauge group. Our lattice action
also contains a ${\cal Q}$-exact Fayet-Iliopoulos
term which yields a potential for the scalar fields. The lattice theory possesses several
exact symmetries; $U(N_c)$ gauge invariance,
${\cal Q}$-supersymmetry and a global $U(N_f)$ flavor symmetry. \\
\\ It is expected that the system will
spontaneously break supersymmetry if $N_f<N_c$. The arguments that lead to this
conclusion depend on the inclusion of the Fayet-Iliopoulos
term. Such a term is rather natural in our lattice model since the formulation requires
$U(N_c)$ gauge symmetry. Notice, though, that the free energy of the lattice model does not naively
depend on the coupling $r$ as long as it is positive
since the Fayet-Iliopoulos term is ${\cal Q}$-exact.\footnote{In contrast for $r<0$ we
would expect supersymmetry breaking for any value of $N_f/N_c$. Thus one expects a phase
transition in the $N_f\ge N_c$ theory at $r=0$. }
Our numerical work is fully consistent with this picture; we have examined several supersymmetric Ward identities which
clearly distinguish between the $N_f<N_c$ and $N_f>N_c$ situations, and we have observed a would-be Goldstino state
in the former case. \\
\\ There are many directions for future work; inclusion of anti-fundamental fields is straightforward
since it merely corresponds to including the bifundamental fields truncated from
the $N_f$-lattice. Observations of phase transitions in such models as the parameters
are varied can then potentially probe sigma models based on Calabi-Yau hypersurfaces \cite{Witten:1993yc}. It is possible
that the $SU(N)$ theories could be studied by deforming the moduli space of the lattice
theory using ideas similar to those presented in \cite{new}.
This would allow direct contact to be made to the continuum calculations of
Hori and Tong~\cite{HoriTong}. Finally the lattice constructions discussed in this paper
generalize \cite{Joseph:2013jya} to three-dimensional quiver theories, leaving open the
possibility of studying 3D super QCD using lattice simulations.
\section{Introduction}
\label{sec:Introduction}
The immersed boundary (IB) method is a mathematical framework for
studying fluid-structure interaction that was originally developed by
Peskin to simulate the flow of blood through a heart
valve~\cite{Peskin1972}. The IB method has been used
in a wide variety of biofluids applications including blood flow through
heart valves~\cite{Griffith2009,Peskin1972},
aerodynamics of the vocal cords~\cite{Duncan2006}, sperm
motility~\cite{Dillon2007}, insect flight~\cite{Miller2000},
and jellyfish feeding dynamics~\cite{Hamlet2011}. The method is also
increasingly being applied in non-biological
applications~\cite{MittalIaccarino2005}.
The immersed boundary equations capture the dynamics of both fluid and
immersed elastic structure using a mixture of Eulerian and Lagrangian
variables: the fluid is represented using Eulerian coordinates that are
fixed in space, and the immersed boundary is described by a set of
moving Lagrangian coordinates. An essential component of the model is
the Dirac delta function that mediates interactions between fluid and IB
quantities in two ways. First of all, the immersed boundary exerts an
elastic force (possibly singular) on the fluid through an external forcing
term in the Navier-Stokes equations that is calculated using the current
IB configuration. Secondly, the immersed boundary is constrained to
move at the same velocity as the surrounding fluid, which is just the
no-slip condition. The greatest advantage of this approach is that when
the governing equations are discretized, no boundary-fitted coordinates
are required to handle the solid structure and the influence of the
immersed boundary on the fluid is captured solely through an external
body force.
When devising a numerical method for solving the IB equations, a common
approach is to use a fractional-step scheme in which the fluid is
decoupled from the immersed boundary, thereby reducing the overall
complexity of the method. Typically, these fractional-step schemes
employ some permutation of the following steps:
\begin{itemize}
\item \emph{Velocity interpolation:} the fluid
velocity is interpolated onto the immersed boundary.
\item \emph{IB evolution:} the immersed boundary is evolved in time
using the interpolated velocity field.
\item \emph{Force spreading:} the force exerted by the
  immersed boundary is calculated and spread onto the nearby fluid
  grid points, with the resulting force appearing as an external
  forcing term in the Navier-Stokes equations.
\item \emph{Fluid solve:} the fluid variables are evolved in time
  using the external force calculated in the force spreading step.
\end{itemize}
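Schematically, one pass of such a fractional-step scheme can be sketched as follows. This is a generic outline of our own with each stage supplied as a placeholder callable, not any particular method from the literature:

```python
def ib_time_step(u, X, dt, interp, spread, force_density, fluid_solve):
    # One generic fractional-step IB update; each stage is a callable.
    U = interp(u, X)               # velocity interpolation onto the IB
    X_new = X + dt * U             # IB evolution (forward Euler here)
    F = force_density(X_new)       # elastic force from the IB configuration
    f = spread(F, X_new)           # force spreading to the fluid grid
    u_new = fluid_solve(u, f, dt)  # fluid solve with external force f
    return u_new, X_new

# Toy scalar check: a spring-like IB force F = -X acting on a "fluid" u
u1, X1 = ib_time_step(
    u=0.0, X=1.0, dt=0.1,
    interp=lambda u, X: u,
    spread=lambda F, X: F,
    force_density=lambda X: -X,
    fluid_solve=lambda u, f, dt: u + dt * f,
)
print(u1, X1)  # -> -0.1 1.0
```

Concrete schemes differ in the time integrators used for the IB evolution and fluid solve, but all share this overall data flow between Eulerian and Lagrangian variables.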
Algorithms that fall into this category include Peskin's original
method~\cite{Peskin1972} as well as algorithms developed by Lai and
Peskin~\cite{Lai2000}, Griffith and Peskin~\cite{Griffith2005}, and many
others.
A popular recent implementation of fractional-step type is the IBAMR
code~\cite{ibamr} that supports distributed-memory parallelism and
adaptive mesh refinement. This project grew out of Griffith's doctoral
thesis~\cite{GriffithThesis2005} and was outlined in the
papers~\cite{Griffith2007,Griffith2005}. In the original IBAMR
algorithm, the incompressible Navier-Stokes equations are solved using a
second-order accurate projection scheme in which the viscous term is
handled with an L-stable
discretization~\cite{McCorquodale2001,Twizell1996} while an explicit
second-order Godunov scheme~\cite{Colella1990,Minion1996} is applied to
the nonlinear advection terms. The IB evolution equation is then
integrated in time using a strong stability-preserving Runge-Kutta
method~\cite{Gottlieb2001}. Since IBAMR's release, drastic
improvements have been made that increase both the accuracy and
generality of the software~\cite{Griffith2012,Griffith2009}.
Fractional-step schemes often suffer from a severe time step restriction
due to numerical stiffness that arises from an explicit treatment of the
immersed boundary in the most commonly used splitting
approaches~\cite{Stockie1999}. Because of this limitation, many
researchers have proposed new algorithms that couple the fluid and
immersed boundary together in an implicit fashion, for
example~\cite{Ceniceros2009,Hou2008,Le2009,Mori2008,Newren2007}. These
methods alleviate the severe time step restriction, but do so at the
expense of solving large nonlinear systems of algebraic equations in
each time step. Although these implicit schemes have been shown in some
cases to be competitive with their explicit
counterparts~\cite{Newren2008}, there is not yet sufficient evidence to
prefer one approach over the other, especially when considering parallel
implementations.
Projection methods are a common class of fractional-step schemes for solving the incompressible
Navier-Stokes equations, and are divided
into two steps. First, the discretized momentum equations are integrated
in time to obtain an intermediate velocity field that in general is not
divergence-free. In the second step, the intermediate velocity is
projected onto the space of divergence-free fields using the Hodge
decomposition. The projection step typically requires the solution of
large linear systems in each time step that are computationally costly
\changed{and form a significant bottleneck in CFD codes}. This cost is increased even more
when a small time step is required for explicit implementations. Note
that even though some researchers make use of unsplit discretizations of
the Navier-Stokes equations~\cite{Griffith2012,Newren2008}, there is
significant benefit to be had by using a split-step
projection method as a preconditioner~\cite{Griffith2009-2}. Therefore,
any improvements made to a multi-step fluid solver can reasonably be
incorporated into unsplit schemes as well.
In this paper, we develop a fractional-step IB method that has the
computational complexity of a completely explicit method and
exhibits excellent parallel scaling
on distributed-memory architectures. This is achieved
by abandoning the projection method paradigm and instead adopting the
pseudo-compressible fluid solver developed by Guermond and
Minev~\cite{Guermond2010,Guermond2011}. Pseudo-compressibility methods
relax the incompressibility constraint by perturbing it in an
appropriate manner, such as in Temam's penalty
method~\cite{Temam1968}, the artificial compressibility
method~\cite{Chorin1967}, and Chorin's projection
method~\cite{Chorin1968,Rannacher1992}. Guermond and Minev's algorithm
differentiates itself by employing a directional-splitting strategy,
thereby permitting the linear systems of size $N^d\times N^d$ typically
arising in projection methods (where $d=2$ or 3 is the problem
dimension) to be replaced with a set of one-dimensional tridiagonal
systems of size $N \times N$. These tridiagonal systems can be solved
efficiently on distributed-memory computing architectures by combining
Thomas's algorithm with a Schur-complement technique. This allows the
proposed IB algorithm to \changed{efficiently utilize parallel resources~\cite{Ganzha2011}}.
The main limitation of the IB algorithm is that it is
restricted to simple geometries and boundary conditions
due to the directional-splitting strategy adopted by Guermond and
Minev. However, since IB practitioners often use a rectangular fluid
domain with periodic boundary conditions, this is not a serious
limitation. Instead, the IB method provides a natural setting to
leverage the strengths of Guermond and Minev's algorithm allowing
complex geometries to be incorporated into the domain through an
immersed boundary. This is a simple alternative to the fictitious domain
procedure proposed by Angot et al.~\cite{Angot2012}.
In section~\ref{sec:Equations}, we begin by stating the governing
equations for the immersed boundary method. We continue by describing
our proposed numerical scheme in section~\ref{sec:Algorithm} where we
incorporate the higher-order rotational form of Guermond and Minev's
algorithm that discretizes an $\order{\Delta t^2}$ perturbation of the
Navier-Stokes equations to yield a formally $\order{\Delta t^{3/2}}$
accurate method. As a result, the proposed method has convergence
properties similar to a fully second-order projection method, while
maintaining the computational complexity of a completely explicit
method. \changed{In section~\ref{sec:Implementation}, we discuss
implementation details and highlight the novel aspects of our
algorithm.} Finally, in section~\ref{sec:Results}
\changed{and~\ref{sec:Performance}}, we \changed{demonstrate} the
accuracy, efficiency and parallel \changed{performance} of our method by
means of several test problems in 2D and 3D.
\section{Immersed Boundary Equations}
\label{sec:Equations}
In this paper, we consider a $d$-dimensional Newtonian, incompressible
fluid that fills a periodic box $\Omega = [0,H]^d$ having side
length $H$ and dimension $d=2$ or 3. The fluid is specified
using Eulerian coordinates, $\bs{x}=(x,y)$ in 2D or $(x,y,z)$ in 3D.
Immersed within the fluid is a neutrally-buoyant, elastic structure
$\Gamma \subset \Omega$ that we assume is either a single
one-dimensional elastic fiber, or else is constructed from a collection
of such fibers. In other words, $\Gamma$ can be a curve, surface or
region. The immersed boundary can therefore be described using a
fiber-based Lagrangian parameterization, in which the position along any
fiber is described by a single parameter $s$. If there are multiple
fibers making up $\Gamma$ (for example, for a ``thick'' elastic region
in 2D, or a surface in 3D) then a second parameter $r$ is introduced to
identify individual fibers. The Lagrangian parameters are assumed to be
dimensionless and lie in the interval $s,r\in[0,1]$.
In the following derivation, we state the governing equations for a
single elastic fiber in dimension $d=2$, and the extension to the
three-dimensional case or for multiple fibers is straightforward. The
fluid velocity $\bs{u}(\bs{x},t)=(u(\bs{x},t),v(\bs{x},t))$ and pressure
$p(\bs{x},t)$ at location $\bs{x}$ and time $t$ are governed by the
incompressible Navier-Stokes equations
\begin{gather}
\label{eq:NSE}
\rho \brac{\pd{\bs{u}}{t} + \bs{u}\cdot\nabla\bs{u}} + \nabla p = \mu
\nabla^2 \bs{u} + \bs{f},
\\
\label{eq:incompressible}
\nabla \cdot \bs{u} = 0,
\end{gather}
where $\rho$ is the fluid density and $\mu$ is the dynamic viscosity
(both constants). The term $\bs{f}$ appearing on the right hand side of
\eqref{eq:NSE} is an elastic force arising from the immersed boundary
that is given by
\begin{gather}
\label{eq:force}
\bs{f}(\bs{x},t) = \int\limits_\Gamma \bs{F}(s,t) \, \delta(\bs{x} -
\bs{X}(s,t)) \,ds,
\end{gather}
where $\bs{x}=\bs{X}(s,t)=(X(s,t), Y(s,t))$ represents the IB
configuration and $\bs{F}(s,t)$ is the elastic force density. The delta
function $\delta(\bs{x}) = \delta(x)\delta(y)$ is a Cartesian product of
1D Dirac delta functions, and acts to ``spread'' the IB
force from $\Gamma$ onto adjacent fluid particles. In general, the
force density $\bs{F}$ is a functional of the current IB configuration
\begin{gather}
\label{eq:forceDensity}
\bs{F}(s,t) = \bs{\mathcal{F}} \left[\bs{X}(s,t)\right].
\end{gather}
For example, the force density
\begin{gather}
\label{eq:forceDensityDefinition}
\bs{\mathcal{F}}[\bs{X}(s,t)] = \sigma \pd{ }{s}\brac{\pd{\bs{X}}{s}
\brac{ 1 - \frac{L}{|\pd{\bs{X}}{s}|} }}
\end{gather}
corresponds to a single elastic fiber having ``spring constant''
$\sigma$ and an equilibrium state in which the elastic strain $|\partial
\bs{X} / \partial s| \equiv L$.
The final equation needed to close the system is an evolution equation
for the immersed boundary, which comes from the simple requirement that
$\Gamma$ must move at the local fluid velocity:
\begin{gather}
\label{eq:membrane}
\pd{\bs{X}(s,t)}{t} = \bs{u}(\bs{X}(s,t),t) = \int\limits_\Omega
\bs{u}(\bs{x},t) \, \delta(\bs{x}-\bs{X}(s,t)) \, d\bs{x}.
\end{gather}
This last equation is simply the no-slip condition; the rightmost
equality recasts it as a delta function convolution, a form that is more
convenient for numerical computations because of its resemblance to the
IB forcing term~\eqref{eq:force}. Periodic boundary conditions are imposed on both
the fluid and the immersed structure and appropriate initial values are
prescribed for the fluid velocity $\bs{u}(\bs{x},0)$ and IB position
$\bs{X}(s,0)$. \changed{Note that our assumption of periodicity in the
fluid and immersed boundary is a choice made for purposes of
convenience only, and is not a necessary restriction on either the
mathematical model or the algorithms developed in
section~\ref{sec:Algorithm}.} Further details on the mathematical
formulation of the immersed boundary problem and its extension to three
dimensions can be found in \cite{Peskin2002}.
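Before discretizing, it is worth seeing how the two delta function convolutions \eqref{eq:force} and \eqref{eq:membrane} are realized in practice. The 1D sketch below is our own illustration, with a simple hat function standing in for a regularized delta; a useful sanity check is that spreading conserves total force, $\sum_i f_i\,h = \sum_k F_k\,h_s$, whenever the kernel sums to one over the grid:

```python
import numpy as np

def hat(r):
    # Simple stand-in kernel for a regularized 1D Dirac delta
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def spread(F, X, x, h, hs):
    # f(x_i) = sum_k F_k * delta_h(x_i - X_k) * h_s
    return (F[None, :] * hat((x[:, None] - X[None, :]) / h) / h).sum(axis=1) * hs

def interpolate(u, X, x, h):
    # U_k = sum_i u(x_i) * delta_h(x_i - X_k) * h
    return (u[:, None] * hat((x[:, None] - X[None, :]) / h) / h).sum(axis=0) * h

h, hs = 0.1, 0.25
x = np.arange(40) * h              # fluid grid points
X = np.array([1.23, 2.0, 2.71])    # interior IB points
F = np.array([1.0, -0.5, 2.0])     # Lagrangian force densities
f = spread(F, X, x, h, hs)
print(np.isclose(f.sum() * h, F.sum() * hs))  # -> True
```

Interpolation with the same kernel reproduces a constant velocity field exactly, which is the discrete analogue of the kernel integrating to one.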
\section{Algorithm}
\label{sec:Algorithm}
We now provide a detailed description of our algorithm for
solving the immersed boundary problem. The novelty in our approach
derives first and foremost from the use of a pseudo-compressibility method for
solving the incompressible Navier-Stokes equations, which is new in the
IB context and is described in this section. The second novel aspect of our
algorithm is in the \changed{implementation}, which is detailed in
section~\ref{sec:Implementation}.
\subsection{Pseudo-Compressibility Methods}
Pseudo-compressibility methods~\cite{Rannacher1992,Shen1997} belong to a
general class of numerical schemes for approximating the incompressible
Navier-Stokes equations by appropriately relaxing the incompressibility
constraint. An $\order{\epsilon}$ perturbation of the governing
equations is introduced in the following manner
\begin{gather}
\label{eq:PerturbNSE}
\rho \brac{\pd{\bs{u}_\epsilon}{t} +
\bs{u}_\epsilon\cdot\nabla\bs{u}_\epsilon} + \nabla p_\epsilon = \mu
\nabla^2 \bs{u}_\epsilon + \bs{f},
\\
\label{eq:PerturbIncompressible}
\frac{\epsilon}{\rho} \ctsop{A} p_\epsilon + \nabla \cdot \bs{u}_\epsilon = 0,
\end{gather}
where various choices of the generic operator $\ctsop{A}$ lead to a
number of familiar numerical schemes. For example, choosing
$\ctsop{A}=\ctsop{1}$ (the identity) corresponds to the penalty method
of Temam~\cite{Temam1968}, $\ctsop{A}=\partial_t$ yields the artificial
compressibility method~\cite{Chorin1967}, $\ctsop{A}=-\nabla^2$ is
equivalent to Chorin's projection scheme~\cite{Chorin1968,Rannacher1992}
(as long as the perturbation parameter is set equal to the time step,
$\epsilon=\Delta t$), and $\ctsop{A}=-\nabla^2 \partial_t$ yields Shen's
method~\cite{Shen1996} (when $\epsilon=\beta\rho(\Delta t)^2$ for some positive
constant $\beta$).
Recently, Guermond and Minev~\cite{Guermond2010,Guermond2011} proposed a
new pseudo-compressibility method \changed{with excellent parallel
scaling properties}. The first-order version of their method can be cast
in the form of an $\order{\epsilon}$-perturbation such as in
equations \eqref{eq:PerturbNSE}--\eqref{eq:PerturbIncompressible} with
$\epsilon = \Delta t$ and
\begin{align*}
\ctsop{A} =
\begin{cases}
(1-\partial_{xx})(1-\partial_{yy}) & \text{in 2D},\\
(1-\partial_{xx})(1-\partial_{yy})(1-\partial_{zz}) & \text{in 3D}.
\end{cases}
\end{align*}
They also proposed an $\order{\epsilon^2}$ (second-order in time) variant
that corresponds to the three-stage scheme
\begin{gather}
\label{eq:PerturbNSE2}
\rho \brac{\pd{\bs{u}_\epsilon}{t} +
\bs{u}_\epsilon\cdot\nabla\bs{u}_\epsilon} + \nabla p_\epsilon = \mu
\nabla^2 \bs{u}_\epsilon + \bs{f},
\\
\label{eq:PerturbIncompressible2}
\frac{\epsilon}{\rho} \ctsop{A} \psi_\epsilon + \nabla \cdot \bs{u}_\epsilon =
0, \\
\label{eq:PerturbCorrection2}
\epsilon \pd{p_\epsilon}{t} = \psi_\epsilon - \chi \mu \nabla \cdot
\bs{u}_\epsilon ,
\end{gather}
where $\psi_\epsilon$ is an intermediate variable and $\chi\in
(0,1]$ is an adjustable parameter.
For both variants of the method, corresponding to either
\eqref{eq:PerturbNSE2}--\eqref{eq:PerturbIncompressible2}
or~\eqref{eq:PerturbNSE2}--\eqref{eq:PerturbCorrection2}, the momentum
equation is discretized in time using a Crank-Nicolson step and the
viscous term is directionally-split using the technique proposed by
Douglas~\cite{Douglas1962}. The perturbed incompressibility constraint
is solved using a straightforward discretization of the direction-split
factors in the operator $\ctsop{A}$ that reduces to a set of
one-dimensional tridiagonal systems. These simple linear systems can be
solved very efficiently on a distributed-memory machine by combining
Thomas's algorithm with a Schur-complement technique. This is achieved
by expressing each tridiagonal system using block matrices and
manipulating the original system into a set of block-structured systems
and a Schur complement system. Solving the block-structured systems in
parallel allows the tridiagonal solves to be distributed effectively across subdomains.
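As a hedged illustration of the serial building block (the Schur-complement parallelization of \cite{Guermond2011} is not reproduced here), the Thomas algorithm solves one such tridiagonal system in $\order{N}$ operations:

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    # b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the directionally-split scheme, each grid line in each coordinate direction contributes one such system; distributing those independent solves is what makes the approach attractive on distributed-memory machines.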
It is important to note that Guermond and Minev's fluid solver cannot be
recast as a pressure projection algorithm; nevertheless, it has been
demonstrated both analytically~\cite{Guermond2012} and
computationally~\cite{Guermond2011-2} to have comparable convergence
properties to related projection methods. More precisely, the
higher-order algorithm we apply here yields a formally $\order{\Delta
t^{3/2}}$ accurate method for 2D flows, although in practice higher
convergence rates are observed in both 2D and 3D computations.
The main disadvantage of the algorithm is that it is limited to simple
(rectangular) geometries because of the use of directional-splitting.
However, this is not a real disadvantage in the immersed boundary
context because complex solid boundaries can be introduced by using
immersed boundary points (attached to fixed ``tether points'') that are
embedded within a regular computational domain. In this way, the IB
method provides a simple and efficient alternative to the fictitious
domain approach~\cite{Angot2012} and related methods that could be used
to incorporate complex geometries into Guermond and Minev's fluid solver.
\subsection{Discretization of Fluid and IB Domains}
When discretizing the governing equations, we require two separate
computational grids, one each for the Eulerian and Lagrangian
variables. For simplicity, we state our discrete scheme for a
two-dimensional fluid ($d=2$) and a fiber consisting of a single
one-dimensional closed curve. The immersed structure is discretized
using $N_s$ uniformly-spaced points $s_k=kh_s$ in the interval $[0,1]$,
with mesh spacing $h_s=1/N_s$ and $k=0, 1, \ldots, N_s-1$. As a
short-hand, we denote discrete approximations of the IB position at time
$t_n=n\Delta t$ by
\begin{gather*}
\bs{X}_{k}^n \approx \brac{X(k h_s, t_n),\; Y(k h_s, t_n)} ,
\end{gather*}
where $n=0,1,2,\ldots$. Similarly, the fluid domain
$\Omega=[0,H]^2$ is divided into an $N \times N$, uniform,
rectangular mesh in which each cell has side length $h=H/N$.
We employ a \emph{marker-and-cell} (MAC)
discretization~\cite{Harlow1965} as illustrated in
Figure~\ref{fig:Grid}, in which the pressure
\begin{gather*}
p_{i,j}^n \approx p( \bs{x}_{i,j}, t_n)
\end{gather*}
is approximated at the cell center points
\begin{gather*}
\bs{x}_{i,j} = \brac{(i+{1}/{2}) h,~(j+{1}/{2}) h},
\end{gather*}
for $i,j = 0,1,\ldots,N-1$. The velocities on the other
hand are approximated at the edges of cells
\begin{gather*}
\bs{u}^{\text{\MAC},n}_{i,j} = \brac{ u^{\text{\MAC},n}_{i,j},~
v^{\text{\MAC},n}_{i,j} }, \\
\intertext{where}
u^{\text{\MAC},n}_{i,j} \approx u( i h, (j+{1}/{2}) h , t_n)
\qquad \text{and} \qquad
v^{\text{\MAC},n}_{i,j} \approx v( (i+{1}/{2}) h, j h , t_n).
\end{gather*}
The $x$-component of the fluid velocity is defined on the east and west
cell edges, while the $y$-component is located on the north and south
edges.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/MacGrid.png}
\caption{Location of fluid velocity and pressure variables on the
staggered marker-and-cell (MAC) grid.}
\label{fig:Grid}
\end{center}
\end{figure}
\subsection{Spatial Finite Difference Operators}
Next, we introduce the discrete difference operators that are used for
approximating spatial derivatives. The second derivatives of a scalar
Eulerian variable are replaced using the second-order centered
difference stencils
\begin{align*}
\diffop{D}_{xx} p_{i,j} &= \frac{p_{i+1,j} - 2 p_{i,j} + p_{i-1,j}}{h^2}\\
\mbox{and}\qquad
\diffop{D}_{yy} p_{i,j} &= \frac{p_{i,j+1} -2 p_{i,j} + p_{i,j-1}}{h^2}.
\end{align*}
The same operators may be applied to the vector velocity, so that for example
\begin{gather*}
\diffop{D}_{xx} \bs{u}^{\text{\MAC}}_{i,j} =
\left[
\begin{array}{c}
\diffop{D}_{xx} u^{\text{\MAC}}_{i,j}\\[0.2cm]
\diffop{D}_{xx} v^{\text{\MAC}}_{i,j}
\end{array}
\right].
\end{gather*}
Since the fluid pressure and velocity variables are defined at different
locations (i.e., cell centers and edges respectively), we also require
difference operators whose input and output are at different locations,
and for this purpose we indicate explicitly the locations of the input
and output using a superscript of the form
${}^\text{\emph{Input}$\to$\emph{Output}}$. For example, an operator
with the superscript ${}^\text{C$\to$\text{\MAC}}$ takes a cell-centered
input variable (denoted ``C'') and returns an output value located on a
cell edge (denoted ``\MAC''). Using this notation, we may then define the
discrete gradient operator $\diffop{G}^{\text{C}\to\text{\MAC}}$ as
\begin{gather*}
\diffop{G}^{\text{C} \to \text{\MAC}} p_{i,j} = \brac{
\frac{p_{i,j}-p_{i-1,j}}{h},~ \frac{p_{i,j}-p_{i,j-1}}{h}},
\end{gather*}
which acts on the cell-centered pressure variable and returns a
vector-valued quantity on the edges of a cell. Likewise, the discrete
divergence of the edge-valued velocity is given by
\begin{gather*}
\diffop{D}^{\text{\MAC} \to \text{C}} \cdot
\bs{u}_{i,j}^{\text{\MAC}} = \frac{u_{i+1,j} - u_{i,j}}{h} +
\frac{v_{i,j+1} - v_{i,j}}{h},
\end{gather*}
which returns a cell-centered value.
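On a periodic MAC grid these staggered operators satisfy a useful identity: the divergence of the gradient reproduces the standard 5-point Laplacian, $\diffop{D}^{\text{\MAC}\to\text{C}}\cdot\,\diffop{G}^{\text{C}\to\text{\MAC}} = \diffop{D}_{xx}+\diffop{D}_{yy}$. A short sketch of our own, with periodic wrap-around via \texttt{np.roll} and axis 0 playing the role of the index $i$:

```python
import numpy as np

def grad(p, h):
    # G^{C->MAC}: x-part on west edges, y-part on south edges (periodic)
    return (p - np.roll(p, 1, axis=0)) / h, (p - np.roll(p, 1, axis=1)) / h

def div(u, v, h):
    # D^{MAC->C}: cell-centered divergence of edge velocities (periodic)
    return (np.roll(u, -1, axis=0) - u) / h + (np.roll(v, -1, axis=1) - v) / h

def laplacian(p, h):
    # Standard 5-point stencil D_xx + D_yy (periodic)
    return (np.roll(p, -1, 0) - 2 * p + np.roll(p, 1, 0)) / h**2 \
         + (np.roll(p, -1, 1) - 2 * p + np.roll(p, 1, 1)) / h**2

rng = np.random.default_rng(0)
p = rng.standard_normal((16, 16))
h = 1.0 / 16
gx, gy = grad(p, h)
print(np.allclose(div(gx, gy, h), laplacian(p, h)))  # -> True
```

This compatibility between the discrete gradient and divergence is what makes the staggered MAC layout natural for enforcing (or perturbing) the incompressibility constraint.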
Difference formulas are also required for Lagrangian variables such as
$\bs{X}_{k}$, for which we use the first-order one-sided difference
approximations:
\begin{align*}
\diffop{D}_{s}^+ \bs{X}_{k} &= \frac{\bs{X}_{k+1} - \bs{X}_{k}}{h_s}
\\
\mbox{and}~~~ \diffop{D}_{s}^- \bs{X}_{k} &= \frac{\bs{X}_{k} -
\bs{X}_{k-1}}{h_s}.
\end{align*}
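These one-sided operators are exactly what the elastic force density \eqref{eq:forceDensityDefinition} is built from. As a hedged sketch of our own (for a closed fiber, with periodic wrap-around in the index $k$), the discrete force vanishes when every segment satisfies $|\diffop{D}_s^+\bs{X}| = L$:

```python
import numpy as np

def fiber_force_density(X, hs, sigma, L):
    # F_k = sigma * D_s^-( D_s^+ X_k * (1 - L / |D_s^+ X_k|) ), periodic in k
    T = (np.roll(X, -1, axis=0) - X) / hs            # D_s^+ X
    mag = np.linalg.norm(T, axis=1, keepdims=True)   # |D_s^+ X|
    G = T * (1.0 - L / mag)                          # tension term
    return sigma * (G - np.roll(G, 1, axis=0)) / hs  # D_s^- applied to G

# A circle sampled at Ns points; choosing L equal to the discrete segment
# stretch |D_s^+ X| puts the fiber in its equilibrium state.
Ns, hs = 32, 1.0 / 32
s = 2.0 * np.pi * np.arange(Ns) / Ns
X = np.column_stack([np.cos(s), np.sin(s)])
L = 2.0 * np.sin(np.pi / Ns) / hs
F = fiber_force_density(X, hs, sigma=1.0, L=L)
print(np.abs(F).max())  # ~ 0: the fiber is at its rest length
```

Stretching the same fiber (e.g.\ doubling its radius) produces a nonzero restoring force directed back toward the equilibrium configuration.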
Finally, when discretizing the integrals in \eqref{eq:force}
and \eqref{eq:membrane}, we require a discrete approximation to the Dirac
delta function. Here, we make use of the following approximation
\begin{gather*}
\delta_h(\bs{x}) = \frac{1}{h^2} \phi \brac{ \frac{x}{h} } \phi
\brac{ \frac{y}{h} }
\end{gather*}
where
\begin{gather}
\label{eq:discretedelta}
\phi(r) =
\begin{cases}
\frac{1}{8}(3-2|r| + \sqrt{1+4|r|-4r^2}) &
\text{if $0 \leq |r| < 1$}, \\
\frac{1}{8}(5-2|r| - \sqrt{-7+12|r|-4r^2}) &
\text{if $1 \leq |r| < 2$}, \\
0 & \text{if $2 \leq |r|$}.
\end{cases}
\end{gather}
\changed{Peskin~\cite{Peskin2002} derives this and other regularized
delta function kernels by imposing various desirable smoothness and
interpolation properties. We have chosen the form of $\delta_h$ in
equation \eqref{eq:discretedelta} because numerical simulations have
shown that it offers a good balance between accuracy and
cost~\cite{BringleyPeskin2008,Griffith2005,Stockie1997}, not to
mention that it is currently the approximate delta function that is
most commonly applied in other IB simulations.}
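A direct transcription of \eqref{eq:discretedelta} is given below (our own sketch), together with a check of one of the discrete conditions Peskin imposes: the translates of $\phi$ at unit spacing sum to one for any sub-grid shift.

```python
import numpy as np

def phi(r):
    # Four-point kernel of the regularized delta function
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r < 1.0
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1]**2)) / 8.0
    m2 = (r >= 1.0) & (r < 2.0)
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2]**2)) / 8.0
    return out

def delta_h(x, y, h):
    # Tensor-product regularized delta on a grid of spacing h
    return phi(x / h) * phi(y / h) / h**2

j = np.arange(-3, 4)                 # integer grid offsets
for shift in (0.0, 0.25, 0.5, 0.9):  # arbitrary sub-grid shifts
    print(round(phi(j - shift).sum(), 12))  # -> 1.0 each time
```

The kernel has support on only four grid points per direction, so the spreading and interpolation sums below involve a fixed, small number of terms per IB point.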
\subsection{IB-GM Algorithm}
\label{sec:algorithm}
We are now prepared to describe our algorithm for the IB problem based
on the fluid solver of Guermond and Minev~\cite{Guermond2011}, which we
abbreviate ``IB-GM''. The fluid is evolved in time in two main stages,
both of which reduce to solving one-dimensional tridiagonal linear
systems. In the first stage, the diffusion terms in the momentum
equations are integrated in time using the directional-splitting
technique proposed by Douglas~\cite{Douglas1962}. The nonlinear
advection term on the other hand is dealt with explicitly using the
second-order Adams-Bashforth extrapolation
\begin{gather}
\label{eq:nonlinear}
N^{n+1/2} = \frac{3}{2}
\diffop{N}(\bs{u}^{\text{\MAC},n}_{i,j}) \changed{-} \frac{1}{2}
\diffop{N}(\bs{u}^{\text{\MAC},n-1}_{i,j}),
\end{gather}
where $\diffop{N}(\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}})$ is an approximation of the advection term
$\bs{u}\cdot\nabla\bs{u}$. In this paper, we write the advection
term in skew-symmetric form
\begin{gather}
\diffop{N}(\bs{u}) \approx \frac{1}{2} \bs{u}\cdot\nabla\bs{u} + \frac{1}{2}
\nabla \cdot (\bs{u}\bs{u}),
\end{gather}
and then discretize the resulting expression using the second-order
centered difference scheme studied by Morinishi et
al.~\cite{Morinishi1998}.
In the second stage, the correction term $\psi$ is calculated using
Guermond and Minev's splitting operator~\cite{Guermond2011}, and the
actual pressure variable is updated using the higher-order variant of
their algorithm corresponding
to~\eqref{eq:PerturbNSE2}--\eqref{eq:PerturbCorrection2}. \changed{For
all simulations, we use the same parameter values $\chi=0.6$ and
$\epsilon = \Delta t$.}
For the remaining force spreading and velocity interpolation steps, we
apply standard techniques. The integrals appearing in equations
\eqref{eq:force} and \eqref{eq:membrane} are approximated to second
order using the trapezoidal quadrature rule and the fiber evolution
equation \eqref{eq:membrane} is integrated using the second-order
Adams-Bashforth extrapolation.
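For reference, the second-order Adams-Bashforth update used for the fiber positions (cf.\ the IB evolution step below) is simply the following; this is a generic sketch of the formula, not the production code:

```python
def ab2_step(X, U_n, U_nm1, dt):
    # Second-order Adams-Bashforth: X^{n+1} = X^n + dt*(3/2 U^n - 1/2 U^{n-1})
    return X + dt * (1.5 * U_n - 0.5 * U_nm1)

# With a constant velocity the update reduces to forward Euler
print(ab2_step(0.0, 2.0, 2.0, 0.1))  # -> 0.2
```

The scheme needs velocities from two time levels, which is why the algorithm below assumes state variables are known at both the $(n-1)$th and $n$th steps; the very first step must be bootstrapped by some single-step method.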
Assuming that the state variables are known at the $(n-1)$th and $n$th
time steps, the IB-GM algorithm proceeds as follows.
\begin{description}
\item[Step 1.] Evolve the IB position to time $t_{n+1/2}=(n+1/2)\Delta t$:
\begin{enumerate}
\renewcommand{\theenumi}{1\alph{enumi}}
\item Interpolate the fluid velocity onto immersed boundary
points:
\begin{gather*}
\assign{\bs{U}^n_{k}} = \sum_{i,j} \bs{u}^{\text{\MAC},n}_{i,j}
\delta_h(\bs{x}^{\text{\MAC}}_{i,j} - \bs{X}^n_{k}) \, h^2.
\end{gather*}
\item Evolve \label{step:1b} the IB position to time $t_{n+1}$ using
an Adams-Bashforth discretization of \eqref{eq:membrane}:
\begin{gather*}
\frac{\assign{\bs{X}^{n+1}_{k}} - \bs{X}^{n}_{k}}{\Delta t} =
\frac{3}{2} \bs{U}^n_{k} - \frac{1}{2} \bs{U}^{n-1}_{k}.
\end{gather*}
\item Approximate the IB position at time $t_{n+1/2}$ using the
arithmetic average:
\begin{gather*}
\assign{\bs{X}^{n+1/2}_{k}} = \frac{1}{2} \brac{\bs{X}^{n+1}_{k} +
\bs{X}^{n}_{k}}.
\end{gather*}
\end{enumerate}
\item[Step 2.] Calculate the fluid forcing term:
\begin{enumerate}
\renewcommand{\theenumi}{2\alph{enumi}}
\item Approximate the IB force density at time $t_{n+1/2}$ using
\eqref{eq:forceDensityDefinition}:
\begin{gather*}
\assign{\bs{F}^{n+1/2}_{k}} = \sigma \diffop{D}_{s}^- \brac{
\diffop{D}_{s}^+ \bs{X}_{k}^{n+1/2} \brac{\diffop{1} -
\frac{L}{\left|\diffop{D}_{s}^+
\bs{X}_{k}^{n+1/2}\right|}}}.
\end{gather*}
\item Spread the IB force density onto fluid grid points:
\begin{gather*}
\assign{\bs{f}^{\text{\MAC},n+1/2}_{i,j}} = \sum_{k}
\bs{F}^{n+1/2}_{k} \, \delta_h(\bs{x}^{\text{\MAC}}_{i,j} -
\bs{X}^{n+1/2}_k) \, h_s .
\end{gather*}
\end{enumerate}
\item[Step 3.] Solve the incompressible Navier--Stokes equations:
\begin{enumerate}
\renewcommand{\theenumi}{3\alph{enumi}}
\item Predict \label{step:3a} the fluid pressure at time $t_{n+1/2}$:
\begin{gather*}
\assign{p^{\ast,n+1/2}_{i,j}} = p^{n-1/2}_{i,j} +
\psi^{n-1/2}_{i,j}.
\end{gather*}
\item Compute \label{step:3b} the first intermediate velocity field
$\bs{u}^{\text{\MAC},\ast}$ by integrating the momentum equations
explicitly:
\begin{multline*}
\rho \brac{ \frac{ \assign{\bs{u}^{\text{\MAC},\ast}_{i,j}} -
\bs{u}^{\text{\MAC},n}_{i,j} }{\Delta t} + {N}^{n+1/2} } = \\
\mu \brac{ \diffop{D}_{xx} + \diffop{D}_{yy}}
\bs{u}^{\text{\MAC},n}_{i,j} - \diffop{G}^{\text{C}\to
\text{\MAC}} p^{\ast,n+1/2}_{i,j} +
\bs{f}^{\text{\MAC},n+1/2}_{i,j}.
\end{multline*}
\item Determine the second intermediate velocity
$\bs{u}^{\text{\MAC},\mystar\mystar}$ by solving the tridiagonal systems
corresponding to the $x$-derivative in the directional-split
Laplacian:
\begin{gather*}
\rho \brac{ \frac{ \assign{\bs{u}^{\text{\MAC},\mystar\mystar}_{i,j}} -
\bs{u}^{\text{\MAC},\ast}_{i,j} }{\Delta t} } = \frac{\mu}{2}
\diffop{D}_{xx} \brac{ \assign{\bs{u}^{\text{\MAC},\mystar\mystar}_{i,j}} -
\bs{u}^{\text{\MAC},n}_{i,j}}.
\end{gather*}
\item Obtain the final velocity approximation at time $t_{n+1}$ by
solving the following tridiagonal systems corresponding to the
$y$-derivative piece of the directional-split Laplacian for
$\assign{\bs{u}^{\text{\MAC},n+1}_{i,j}}$:
\begin{gather*}
\rho \brac{ \frac{ \assign{\bs{u}^{\text{\MAC},n+1}_{i,j}} -
\bs{u}^{\text{\MAC},\mystar\mystar}_{i,j} }{\Delta t} } = \frac{\mu}{2}
\diffop{D}_{yy} \brac{ \assign{\bs{u}^{\text{\MAC},n+1}_{i,j}} -
\bs{u}^{\text{\MAC},n}_{i,j}}.
\end{gather*}
\item Determine the pressure correction term $\psi_{i,j}^{n+1/2}$ by
solving
\begin{gather*}
\brac{\diffop{1}-\diffop{D}_{xx}}
\brac{\diffop{1}-\diffop{D}_{yy}} \assign{\psi_{i,j}^{n+1/2}} =
- \frac{\rho}{\Delta t} \diffop{D}^{\text{\MAC}\to \text{C}}
\cdot \bs{u}^{\text{\MAC},n+1}_{i,j}.
\end{gather*}
\item Calculate the pressure at time $t_{n+1/2}$ using
\begin{gather*}
\assign{p^{n+1/2}_{i,j}} = p^{n-1/2}_{i,j} + \psi^{n+1/2}_{i,j} -
\chi \mu \diffop{D}^{\text{\MAC}\to \text{C}} \cdot \left(
\frac{1}{2}(\bs{u}^{\text{\MAC},n+1}_{i,j} +
\bs{u}^{\text{\MAC},n}_{i,j})\right).
\end{gather*}
\end{enumerate}
\end{description}
Note that in the first time step of the algorithm (with $n=0$), we do
not yet have an approximation of the solution at the previous time
step, and therefore we make the following replacements:
\begin{itemize}
\item In Step \ref{step:1b}, approximate the fiber evolution equation
using a first-order forward Euler approximation $\bs{X}^{1}_{k} =
\bs{X}^{0}_{k} + \Delta t \bs{U}^0_{k}$.
\item In Step \ref{step:3a}, set $p^{\ast,1/2}_{i,j} = 0$.
\item In Step \ref{step:3b}, the nonlinear term from equation
\eqref{eq:nonlinear} is replaced with $N^{1/2} =
\diffop{N}(\bs{u}^{\text{\MAC},0}_{i,j})$.
\end{itemize}
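The velocity interpolation in Step 1a is a weighted sum in which the delta kernel's zeroth-moment property, $\sum_i \delta_h(x_i - X)\,h = 1$, guarantees that a constant velocity field is interpolated exactly. The sketch below uses Peskin's standard four-point cosine kernel as a stand-in for the kernel of equation \eqref{eq:discretedelta}, which is defined elsewhere in the paper; the function names are our own:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in delta kernel: Peskin's 4-point cosine function, with
// delta_h(x) = phi(x/h)/h and phi(r) = (1/4)(1 + cos(pi*r/2)) for |r| <= 2.
double phi(double r) {
    const double pi = std::acos(-1.0);
    return (std::fabs(r) <= 2.0) ? 0.25 * (1.0 + std::cos(pi * r / 2.0)) : 0.0;
}

// Interpolate a scalar grid field u onto an IB point (X, Y), as in Step 1a:
// U = sum_{i,j} u_{i,j} delta_h(x_i - X) delta_h(y_j - Y) h^2.
double interpolate(const std::vector<std::vector<double>>& u,
                   double X, double Y, double h) {
    double U = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i)
        for (std::size_t j = 0; j < u[i].size(); ++j) {
            double rx = (i * h - X) / h, ry = (j * h - Y) / h;
            U += u[i][j] * phi(rx) * phi(ry);  // the 1/h factors cancel h^2
        }
    return U;
}
```

Because $\sum_i \phi(r_i) = 1$ for any offset of the IB point relative to the grid, interpolating a constant field returns that constant, which is a useful sanity check on any implementation.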
\section{Parallel Implementation}
\label{sec:Implementation}
Here we outline the details of the algorithm that relate specifically to
the parallel implementation. Since a primary feature of our algorithm
is its parallel scaling, we describe the implementation in enough
detail to make clear where those scaling properties come from.
\subsection{Partitioning of the Eulerian and Lagrangian Grids}
\label{sec:Partitioning}
Suppose that the algorithm in section~\ref{sec:algorithm} is implemented
on a distributed-memory computing machine with $P=P_x \cdot P_y$
processing nodes. The parallelization is performed by subdividing the
rectangular domain $\Omega$ into equally-sized rectangular partitions
$\{\Omega_{\ell,m}\}$, with $\ell=1,2,\ldots, P_x$ and $m=1,2,\ldots,
P_y$, where $P_x$ and $P_y$ refer to the number of subdivisions in the
$x$- and $y$-directions, respectively. Each node is allocated a single
domain partition $\Omega_{\ell,m}$, along with the values of the Eulerian
and Lagrangian variables contained within it. For example, the
$(\ell,m)$ node would contain in its memory the fluid variables
$\bs{u}_{i,j}^{\text{\MAC}}$ and $p_{i,j}$ for all $\bs{x}_{i,j} \in
\Omega_{\ell,m}$, along with all immersed boundary data $\bs{X}_k$ and
$\bs{F}_k$ such that $\bs{X}_k \in \Omega_{\ell,m}$. This partitioning
is illustrated for a simple $3\times 3$ subdivision in
Figure~\ref{fig:EulerianPartitioning}\subref{fig:DomainDecomp}.
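Since the partitions are equally sized, the node that owns a given Eulerian cell or IB point can be computed directly from its coordinates. A minimal sketch for a unit-square domain (the function name is our own):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>

// Return the 1-based (l, m) index of the subdomain that owns the point
// (x, y) in the unit square under a Px-by-Py uniform partition.
std::pair<int, int> owner(double x, double y, int Px, int Py) {
    int l = std::min(static_cast<int>(x * Px), Px - 1) + 1;  // clamp x == 1
    int m = std::min(static_cast<int>(y * Py), Py - 1) + 1;  // clamp y == 1
    return {l, m};
}
```

For the $3\times 3$ decomposition of Figure~\ref{fig:EulerianPartitioning}, the domain centre $(0.5, 0.5)$ maps to $\Omega_{2,2}$.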
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.44\textwidth]{Figures/GridPartition.pdf}
\label{fig:DomainDecomp}}
\qquad
\subfigure[]{\includegraphics[width=0.44\textwidth]{Figures/GhostRegion.pdf}
\label{fig:EulerianGhostRegion}}
\caption{\subref{fig:DomainDecomp} Parallel domain decomposition
with $P_x=P_y=3$. \subref{fig:EulerianGhostRegion} Communication
required to update ghost cell regions for the subdomain
$\Omega_{2,2}$.}
\label{fig:EulerianPartitioning}
\end{center}
\end{figure}
In this way, the computational work required in each time step is
effectively divided between processing nodes by requiring that each node
update only those state variables located within its assigned subdomain.
\changed{This approach is similar to that taken by
Uhlmann~\cite{Uhlmann2004} and Griffith~\cite{Griffith2010}, who
partitioned the Lagrangian data so that IB points belong to the
same processor as the surrounding fluid. An alternate approach would
be to independently partition the Eulerian and Lagrangian data as done
by Givelberg~\cite{Givelberg2006}.}
\changed{ The primary novelty in our algorithm derives from the way
that it introduces parallelism naturally into the discretization.
This differentiates our method from prior approaches (in
Griffith~\cite{Griffith2005} or Givelberg~\cite{Givelberg2006}) that
rely heavily on black-box parallel solvers such as
Hypre~\cite{Hypre}. In particular, we use a fractional-step scheme
that permits the immersed boundary and fluid to be treated
independently. The IB component of the algorithm is discretized in
the same manner as Griffith~\cite{Griffith2012} who used an
Adams-Bashforth discretization to reduce the number of floating-point
operations. Since this is an explicit discretization, the discrete
immersed boundary can be viewed as a simple particle system -- a
collection of IB material points linked by force-generating
connections -- which is a well-established class of problems in the
parallel computing community~\cite{Asanovic2006,Asanovic2009}.
Therefore, the major differences in parallel implementation come from
the discrete delta function whose support allows particles to interact
over multiple subdomains in the velocity interpolation and force
spreading steps.
For the fluid portion of the algorithm, we use the GM fluid solver as
described in~\cite{Guermond2011} with the following minor
modifications:
\begin{itemize}
\item the advection term is discretized in skew-symmetric form;
\item periodic boundary conditions are imposed on the fluid domain;
\item the directional-splitting order is rotated in each time step
to reduce the possibility of a directional bias; and
\item minor alterations are required to the parallel tridiagonal solver.
\end{itemize}
By using the directional split strategy of Guermond and Minev, the
discretized fluid equations decouple into a sequence of one-dimensional
problems, which is where parallelism is introduced directly into the
discretization. The most significant departure from other common IB
schemes is that the GM solver is a pseudo-compressibility method that
only approximately satisfies the incompressibility constraint. It is
yet to be seen how such a fluid solver will handle the near-singular
body forces that occur naturally in IB problems. Therefore, a
comprehensive numerical study is required to test the accuracy,
convergence, and volume conservation of the method.
Next, we describe our approach for implementing data partitioning and
communication, which makes use of infrastructure provided by
MPI~\cite{OpenMPI} and PETSc~\cite{PETSc-2012}. } Since the fluid and
immersed boundary are discretized on two separate grids, the data
partitioning between nodes must be handled differently in each case.
The partitioning of Eulerian variables is much simpler because the
spatial locations are fixed in time and remain associated with the same
node for the entire computation. In contrast, Lagrangian variables are
free to move throughout the fluid domain and so a given IB point may
move between two adjacent subdomains during the course of a single time
step. As a result, the data structure and communication patterns for
the Lagrangian variables are more complex.
Consider the communication required for the update of fluid variables in
each time step, for which the algorithm in section~\ref{sec:algorithm}
requires the explicit computation of several discrete difference
operators. For points located inside a subdomain $\Omega_{\ell,m}$,
the discrete operators are easily computed; however for points on the
edge of a domain partition, a difference operator may require data that
is not contained in the current node's local memory. For example, when
calculating the discrete Laplacian (using the 5-point stencil), data at
points adjacent to the given state variable are required. As a result,
when an adjacent variable does not reside in $\Omega_{\ell,m}$,
communication is required with a neighbouring node to obtain the
required value. This communication is aggregated using
\emph{ghost cells} that lie inside a strip surrounding
each $\Omega_{\ell,m}$ as illustrated in Figure
\ref{fig:EulerianPartitioning}\subref{fig:EulerianGhostRegion}. The
width of the ghost region is set equal to the support of the discrete
delta function used in the velocity interpolation and force spreading
steps; that is, two grid points in the case of the
delta-approximation~\eqref{eq:discretedelta}. When a difference
operator is applied to a state variable stored in the $(\ell,m)$ node,
the neighbouring nodes communicate the data contained in the ghost cells
adjacent to $\Omega_{\ell,m}$. After the ghost region is filled, the
discrete difference operators may then be calculated for all points in
$\Omega_{\ell,m}$. When combined with the parallel linear solver
discussed later in section~\ref{sec:linearsolver}, this parallel
communication technique permits the fluid variables to be evolved in
time.
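As a concrete example of the stencil computations that drive the ghost exchange, the 5-point discrete Laplacian at a point on a subdomain edge needs one ghost value in each direction that leaves the subdomain. The stencil is exact for quadratics, which provides a simple correctness check:

```cpp
#include <cassert>
#include <cmath>

// 5-point discrete Laplacian at a grid point, given the centre value and its
// four neighbours; on a subdomain edge some neighbours come from ghost cells.
double laplacian5(double c, double w, double e, double s, double n, double h) {
    return (w + e + s + n - 4.0 * c) / (h * h);
}
```

Applied to $f(x,y) = x^2 + y^2$, the stencil returns the continuous value $\nabla^2 f = 4$ exactly, independent of $h$.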
As the IB points move through the fluid, the number of IB points
residing in any particular subdomain can vary from one time step to the
next. Therefore, the memory required to store the local IB data
structure changes with time, as does the communication load. These
complications are dealt with by splitting the data structure defining
the immersed boundary into two separate components corresponding to IB
points and force connections. The IB point (\mycode{IB}) data structure
contains the position and velocity of all IB points resident in a given
subdomain, whereas the force connection (\mycode{FC}) data structure
keeps track of all force-generating connections between these
points. The force density calculations require position data from
neighbouring IB points, and so the \mycode{IB} data structure requires
a globally unique index
(which we call the ``primary key'') that is referenced by the
\mycode{FC} data structure (the ``foreign key''). This relationship is
illustrated in Figure~\ref{fig:IBDataStructure}, where the force
connections shown are consistent with the elastic force
function~\eqref{eq:forceDensityDefinition}. If the \mycode{IB} data
structure is represented as an associative array using \mycode{PointID}
as the key (and referenced as $\mycode{IB[PointID]}$) and
$\mycode{FC[i]}$ represents a specific element of the force connection
array, then the force density calculation may be written as
\begin{multline*}
\mycode{FC[i].Fdens} = \frac{\mycode{FC[i].sigma}}{h_s^2} \, \big(\,
\mycode{IB[\,FC[i].LPointID\,].X} \;+\; \mycode{IB[\,FC[i].RPointID\,].X} \\
-\; \mycode{2 * IB[\,FC[i].PointID\,].X} \, \big) ,
\end{multline*}
where we have assumed here that the force parameter $L=0$.
\changed{The \mycode{IB} and \mycode{FC} data structures
are stored in a hash table and vector (respectively) using the standard
STL containers in C++.}
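A minimal C++ sketch of this layout and of the force density loop, assuming as above that $L = 0$ (the struct fields mirror the pseudocode; the container choices follow the text, the remaining names are our own, and a single scalar coordinate stands in for the position vector):

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>
#include <vector>

struct IBPoint { double X; };        // stored under the globally unique PointID
struct ForceConn {                   // references IB points via foreign keys
    int PointID, LPointID, RPointID;
    double sigma;
    double Fdens;
};

// Force density for every connection, as in the expression above with L = 0.
void computeForceDensity(std::vector<ForceConn>& FC,
                         const std::unordered_map<int, IBPoint>& IB,
                         double hs) {
    for (auto& fc : FC)
        fc.Fdens = fc.sigma / (hs * hs)
                 * (IB.at(fc.LPointID).X + IB.at(fc.RPointID).X
                    - 2.0 * IB.at(fc.PointID).X);
}
```

Three equally spaced collinear points yield zero force density, while displacing the middle point produces a restoring contribution, as expected of a second difference.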
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.9\textwidth]{Figures/IBRelation.pdf}
\label{fig:IBRelation}}
\qquad
\subfigure[]{\includegraphics[width=0.6\textwidth]{Figures/ForceConnectionRef.pdf}
\label{fig:ForceConnectionRef}}
\caption{\subref{fig:IBRelation} Relationship between the data
structures for the IB points (\mycode{IB}) and elastic force
connections (\mycode{FC}). \subref{fig:ForceConnectionRef}
References from a chosen force connection to the corresponding IB
points.}
\label{fig:IBDataStructure}
\end{center}
\end{figure}
We are now prepared to summarize the complete parallel procedure that is
used to evolve the fluid and immersed boundary. Keep in mind that at
the beginning of each time step, a processing node contains only those
IB points and force connections that reside in the corresponding
subdomain. The individual solution steps are:
\begin{itemize}
\item \emph{Velocity interpolation:} Interpolate the fluid velocity
onto the IB points and store the result in \mycode{IB[\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}}].U}.
This step requires fluid velocity data from the ghost region.
\item \emph{Immersed boundary evolution:} Evolve the IB points in time
by updating $\mycode{IB[\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}}].X} = \bs{X}^{n+1}_{\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}}}$. Note
that the IB point position at the half time step
($\mycode{IB[\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}}].Xh} = \bs{X}^{n+1/2}_{\mbox{\raisebox{1.5pt}{\scriptsize $\bullet$}}}$) must also be
stored for the force spreading step.
\item \emph{Immersed boundary communication:} Send the data from IB
points lying within the ghost region to the neighbouring processing
nodes. Figure~\ref{fig:IBSend} illustrates how the IB points residing
in the ghost region corresponding to $\Omega_{\ell,m}$ are copied from
$\Omega_{\ell+1,m}$ (for both the full time step $n+1$ and the
half-step $n+1/2$). In this example, three IB points (corresponding
to $\mycode{PointID}=k, k+1, k+2$) and two force connections (with
$\mycode{FC[i].PointID}=k,k+1$) are communicated to
$\Omega_{\ell,m}$. The additional IB point is required to calculate
the force density for $\mycode{FC[i].PointID}=k+1$. Because the IB
point $k-1$ already resides in $\Omega_{\ell,m}$, the force density
can be computed for $\mycode{FC[i].PointID}=k$ without any additional
communication.
\item \emph{Force spreading:} Calculate the force density for all IB
points in $\Omega_{\ell,m}$ and the surrounding ghost region at the
time step $n+1/2$. Then spread the force density onto the Eulerian
grid points residing in $\Omega_{\ell,m}$.
\item \emph{Immersed boundary cleanup:} Remove all IB points and
corresponding force connections that do not reside in
$\Omega_{\ell,m}$ at time step $n+1$.
\item \emph{Evolve fluid:} Evolve the fluid variables in time using the
parallel techniques discussed above. This requires communication with
the neighbouring processing nodes to update the ghost cell region, and
further communication is needed while solving the linear systems.
\end{itemize}
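The cleanup step amounts to a single pass over the local hash table, erasing every IB point whose updated position has left the subdomain; a sketch (the names are our own):

```cpp
#include <cassert>
#include <unordered_map>

struct Pt { double x, y; };

// "Immersed boundary cleanup": after the position update, drop the IB points
// that no longer reside in this node's subdomain [x0, x1) x [y0, y1).
void cleanup(std::unordered_map<int, Pt>& IB,
             double x0, double x1, double y0, double y1) {
    for (auto it = IB.begin(); it != IB.end(); )
        if (it->second.x < x0 || it->second.x >= x1 ||
            it->second.y < y0 || it->second.y >= y1)
            it = IB.erase(it);  // erase returns the next valid iterator
        else
            ++it;
}
```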
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{Figures/IBSend.pdf}
\caption{IB points inside the ghost region surrounding
$\Omega_{\ell,m}$ are communicated from $\Omega_{\ell+1,m}$.}
\label{fig:IBSend}
\end{center}
\end{figure}
Using the approach outlined above, each processing node only needs to
communicate with its neighbouring nodes, with the only exception being
the linear solver, which we address in the next section. Since
communication \changed{often hinders} the performance of a
parallel algorithm, this nearest-neighbour communication pattern is
what allows our method to scale so well. For example, if the problem
size and number of processing
nodes are doubled, then we would ideally want the execution time per
time step to remain unchanged.
\subsection{Linear Solver}
\label{sec:linearsolver}
A key remaining component of the algorithm outlined in
section~\ref{sec:algorithm} is the solution of the tridiagonal linear
systems arising in the fluid solver. When solving the
momentum equations the following linear systems arise:
\begin{gather}
\brac{\diffop{1} - \frac{\mu\Delta t}{2\rho} \diffop{D}_{xx}}
\bs{u}^{\text{\MAC},\mystar\mystar}_{i,j} =
\bs{u}^{\text{\MAC},\ast}_{i,j} - \frac{\mu\Delta t}{2\rho} \diffop{D}_{xx}
\bs{u}^{\text{\MAC},n}_{i,j},
\label{eq:umacstep1}
\\
\brac{\diffop{1} - \frac{\mu\Delta t}{2\rho} \diffop{D}_{yy}}
\bs{u}^{\text{\MAC},n+1}_{i,j} =
\bs{u}^{\text{\MAC},\mystar\mystar}_{i,j} - \frac{\mu\Delta t}{2\rho} \diffop{D}_{yy}
\bs{u}^{\text{\MAC},n}_{i,j},
\label{eq:umacstep2}
\end{gather}
while the pressure update step requires solving
\begin{gather*}
\brac{\diffop{1} - \diffop{D}_{xx}} \brac{\diffop{1} -
\diffop{D}_{yy}} \psi_{i,j}^{n+1/2} = - \frac{\rho}{\Delta t}
\diffop{D}^{\text{\MAC}\to \text{C}} \cdot
\bs{u}^{\text{\MAC},n+1}_{i,j} .
\end{gather*}
This last equation can be split into two steps
\begin{align}
\label{eq:PenaltyStepPart1}
\brac{\diffop{1} - \diffop{D}_{xx}} \psi_{i,j}^{\ast,n+1/2} &=
- \frac{\rho}{\Delta t} \diffop{D}^{\text{\MAC}\to \text{C}} \cdot
\bs{u}^{\text{\MAC},n+1}_{i,j},
\\
\label{eq:PenaltyStepPart2}
\mbox{and} \qquad \brac{\diffop{1} - \diffop{D}_{yy}}
\psi_{i,j}^{n+1/2} &= \psi_{i,j}^{\ast,n+1/2},
\end{align}
where $\psi_{i,j}^{\ast}$ is an intermediate variable. Note that
each linear system in \eqref{eq:umacstep1}--\eqref{eq:PenaltyStepPart2}
involves a difference operator that acts in one spatial dimension only
and decouples into a set of one-dimensional periodic (or cyclic)
tridiagonal systems. For example, equations \eqref{eq:umacstep1} and
\eqref{eq:PenaltyStepPart1} consist of $N$ tridiagonal systems of size
$N\times N$ having the general form
\begin{gather}
\label{eq:TriDiagX}
\diffop{A}^{(j)} \Psi_{i,j} = b_{i,j} ,
\end{gather}
for each $j=0,1,\ldots, N-1$.
Because the processing node $(\ell,m)$ contains only fluid data
residing in subdomain $\Omega_{\ell,m}$, these tridiagonal linear
systems divide naturally between nodes. Each node solves those
linear systems for which it has the corresponding data $b_{i,j} \in
\Omega_{\ell,m}$ as illustrated in Figure~\ref{fig:TriDiagSystems}.
For example, when solving \eqref{eq:TriDiagX} along the $x$-direction,
each processing node participates in solving $N/P_y$ linear systems,
with the work for each system spread over $P_x$ nodes. Similarly, when
solving the corresponding systems along the $y$-direction, each
processing node participates in $N/P_x$ systems spread over $P_y$
nodes.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.44\textwidth]{Figures/LinearSystemXDirection.pdf}
\label{fig:LinearSystemXDirection}}
\qquad
\subfigure[]{\includegraphics[width=0.44\textwidth]{Figures/LinearSystemYDirection.pdf}
\label{fig:LinearSystemYDirection}}
\caption{\subref{fig:LinearSystemXDirection} Coupling direction for
linear systems \eqref{eq:umacstep1} and
\eqref{eq:PenaltyStepPart2}. \subref{fig:LinearSystemYDirection}
Coupling direction for linear systems \eqref{eq:umacstep2} and
\eqref{eq:PenaltyStepPart1}. Each processing node participates in
solving $N/P_{x,y}$ tridiagonal systems and requires communication
in the direction of the arrows.}
\label{fig:TriDiagSystems}
\end{center}
\end{figure}
Each periodic tridiagonal system is solved directly using a
Schur-complement technique~\cite[sec.~14.2.1]{Saad1996}. This is
achieved by rewriting the linear equations as a block-structured system
where the interfaces between blocks correspond to those for the
subdomains. To illustrate, let us consider an example with $P=2$
processors only, for which the periodic tridiagonal
system
\newcommand{\arrayrulecolor{lightgray}\hline}{\arrayrulecolor{lightgray}\hline}
\newcommand{\color{lightgray}\vrule}{\color{lightgray}\vrule}
\begin{gather*}
\left[ \begin{array}{cccccccccc}
a_1 & b_1 & & & & & & & & c_1 \\
c_2 & a_2 & b_2 & & & & & & & \\
& & \ddots & & & & & & & \\
& & & \ddots & & & & & & \\
& & & c_{M-1} & a_{M-1} & b_{M-1} & & & & \\\arrayrulecolor{lightgray}\hline
& & & & c_M & a_M & b_M & & & \\
& & & & & c_{M+1} & a_{M+1} & b_{M+1} & & \\
& & & & & & & \ddots & & \\
& & & & & & & & \ddots & \\
b_N & & & & & & & & c_N & a_N
\end{array} \right]
\left[ \begin{array}{c}
y_1 \\
x_2 \\
\vdots \\
\vdots \\
x_{M-1} \\\arrayrulecolor{lightgray}\hline
y_2 \\
x_{M+1} \\
\vdots \\
\vdots \\
x_N
\end{array} \right]
=
\left[ \begin{array}{c}
g_1 \\
f_2 \\
\vdots \\
\vdots \\
f_{M-1} \\\arrayrulecolor{lightgray}\hline
g_2 \\
f_{M+1} \\
\vdots \\
\vdots \\
f_N
\end{array} \right]
\end{gather*}
arises from a single row of unknowns in
Figure~\ref{fig:TriDiagSystems}\subref{fig:LinearSystemXDirection} (or a
single column in
Figure~\ref{fig:TriDiagSystems}\subref{fig:LinearSystemYDirection}). In
this example, the indices $M-1$ and $M$ refer to the subdomain boundary
points (denoted with a vertical line in the matrix above) so that the
data ($y_1$, $x_2$, \dots, $x_{M-1}$, $g_1$, $f_2$, \dots, $f_{M-1}$)
reside on processor~1 and ($y_2$, $x_{M+1}$, \dots, $x_{N}$, $g_2$,
$f_{M+1}$, \dots, $f_{N}$) on processor~2. To isolate the coupling
between subdomains, the rows in the matrix are reordered to shift the
unknowns at periodic subdomain boundaries ($y_1$ and $y_2$) to the last
two rows, and then the columns are reordered to keep the diagonal
entries on the main diagonal. This yields the equivalent linear system
\begin{gather*}
\left[ \begin{array}{cccc!{\color{lightgray}\vrule}cccc!{\color{lightgray}\vrule}cc}
a_2 & b_2 & & & & & & & c_2 & \\
& \ddots & & & & & & & & \\
& & \ddots & & & & & & & \\
& & c_{M-1} & a_{M-1} & & & & & & b_{M-1} \\\arrayrulecolor{lightgray}\hline
& & & & a_{M+1} & b_{M+1} & & & & c_{M+1} \\
& & & & & \ddots & & & & \\
& & & & & & \ddots & & & \\
& & & & & & c_N & a_N & b_N & \\\arrayrulecolor{lightgray}\hline
b_1 & & & & & & & c_1 & a_1 & \\
& & & c_M & b_M & & & & & a_M
\end{array} \right]
\left[ \begin{array}{c}
x_2 \\
\vdots \\
\vdots \\
x_{M-1}\\\arrayrulecolor{lightgray}\hline
x_{M+1}\\
\vdots \\
\vdots \\
x_N \\\arrayrulecolor{lightgray}\hline
y_1 \\
y_2
\end{array} \right]
=
\left[ \begin{array}{c}
f_2 \\
\vdots \\
\vdots \\
f_{M-1}\\\arrayrulecolor{lightgray}\hline
f_{M+1}\\
\vdots \\
\vdots \\
f_N \\\arrayrulecolor{lightgray}\hline
g_1 \\
g_2
\end{array} \right],
\end{gather*}
which has the block structure
\begin{gather*}
\left[ \begin{array}{ccc}
\bs{B}_1 & & \bs{E}_1 \\
& \bs{B}_2 & \bs{E}_2 \\
\bs{F}_1 & \bs{F}_2 & \bs{C}
\end{array} \right]
\left[ \begin{array}{c}
\bs{x}_1 \\
\bs{x}_2 \\
\bs{y}
\end{array} \right]
=
\left[ \begin{array}{c}
\bs{f}_1 \\
\bs{f}_2 \\
\bs{g}
\end{array} \right].
\end{gather*}
In the more general situation with $P$ subdomains, the block structure
becomes
\begin{gather*}
\left[ \begin{array}{ccccc}
\bs{B}_1 & & & & \bs{E}_1 \\
& \bs{B}_2 & & & \bs{E}_2 \\
& & \ddots & & \vdots \\
& & & \bs{B}_P & \bs{E}_P \\
\bs{F}_1 & \bs{F}_2 & \cdots & \bs{F}_P & \bs{C}
\end{array} \right]
\left[ \begin{array}{c}
\bs{x}_1 \\
\bs{x}_2 \\
\vdots \\
\bs{x}_P \\
\bs{y}
\end{array} \right]
=
\left[ \begin{array}{c}
\bs{f}_1 \\
\bs{f}_2 \\
\vdots \\
\bs{f}_P \\
\bs{g}
\end{array} \right],
\end{gather*}
or more compactly
\begin{gather}
\label{eq:BlockSchurSystem}
\left[ \begin{array}{ccc}
\bs{B} & \bs{E} \\
\bs{F} & \bs{C} \\
\end{array} \right]
\left[ \begin{array}{c}
\bs{x} \\
\bs{y}
\end{array} \right]
=
\left[ \begin{array}{c}
\bs{f} \\
\bs{g}
\end{array} \right],
\end{gather}
where $\bs{C} \in \mathbb{R}^{P \times P}$, $\bs{B} \in \mathbb{R}^{(N-P) \times (N-P)}$,
$\bs{E} \in \mathbb{R}^{(N-P) \times P}$, and $\bs{F} \in \mathbb{R}^{P \times (N-P)}$.
Here, $\bs{x}$ and $\bs{f}$ denote the data located in the interior of a
subdomain while $\bs{y}$ and $\bs{g}$ denote the data residing on the
interface between subdomains.
Next, we use the LU factorization to
rewrite the block matrix from \eqref{eq:BlockSchurSystem} as
\begin{gather*}
\left[ \begin{array}{ccc}
\bs{B} & \bs{E} \\
\bs{F} & \bs{C} \\
\end{array} \right]
=
\left[ \begin{array}{ccc}
\bs{I} & \bs{0} \\
\bs{F}\bs{B}^{-1} & \bs{I}
\end{array} \right]
\left[ \begin{array}{ccc}
\bs{B} & \bs{E} \\
\bs{0} & \bs{S}
\end{array} \right],
\end{gather*}
where $\bs{S} = \bs{C} - \bs{F} \bs{B}^{-1} \bs{E}$ is the Schur
complement. Using this factorized form, we can decompose the block
system into the following three smaller problems:
\begin{align}
\label{eq:LocalTriSystem}
\bs{B} \bs{f}^\ast &= \bs{f}, \\
\label{eq:SchurSystem}
\bs{S} \bs{y} &= \bs{g} - \bs{F} \bs{f}^\ast, \\
\label{eq:SchurCorrection}
\bs{B} \bs{x} &= \bs{B} \bs{f}^\ast - \bs{E} \bs{y}.
\end{align}
Based on this decomposition, we can now summarize the solution procedure
as follows:
\begin{itemize}
\item \emph{Local tridiagonal solver:} Each processor solves a
local non-periodic tridiagonal system
\begin{gather*}
\bs{B}_p \bs{f}^\ast_p = \bs{f}_p,
\end{gather*}
which can be solved efficiently using Thomas's algorithm. The
matrices $\bs{B}_p$ are the non-periodic tridiagonal blocks in the
block diagonal matrix $\bs{B}$.
\item \emph{Gather data to master node:} Each processor sends three
scalar values to the master node corresponding to the first and last
entries of the vector $\bs{f}^\ast_p$, as well as the scalar
$g_p$. Because $\bs{F}_p$ is sparse, only a few values are required to
construct the right hand side of the Schur complement system.
\item \emph{Solve Schur complement system:} On the master node, solve
the reduced $P \times P$ Schur complement
system~\eqref{eq:SchurSystem}. Based on the sparsity patterns of
$\bs{F}$ and $\bs{E}$, the Schur complement matrix $\bs{S}$ is
periodic and tridiagonal and therefore can be inverted efficiently
using Thomas's algorithm.
\item \emph{Scatter data from master node:} The master node scatters two
scalar values from $\bs{y}$ to each processor. Because of the
sparsity of $\bs{B}^{-1}\bs{E}$, only a few values of $\bs{y}$ are
required in the next step. Therefore, the $p$th processor only
requires the entries of $\bs{y}$ numbered $p$ and $\text{mod}(p+1,P)$.
\item \emph{Correct local solution:} Each processor corrects its local
solution
\begin{gather*}
\bs{x}_p = \bs{f}^\ast_p - \bs{B}^{-1}_p \bs{E}_p \bs{y},
\end{gather*}
using the local values $\bs{f}^\ast_p$ computed in the first step.
\end{itemize}
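The local solves in the first step use Thomas's algorithm, which solves a non-periodic tridiagonal system in $O(N)$ operations by one forward elimination sweep and one back substitution. A serial sketch:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Thomas's algorithm for a non-periodic tridiagonal system, as in the local
// solves B_p f*_p = f_p.  The vectors a, b, c hold the sub-, main and
// super-diagonals (a[0] and c[n-1] are unused); d holds the right-hand side
// on entry and is overwritten with the solution.
void thomas(std::vector<double> a, std::vector<double> b,
            std::vector<double> c, std::vector<double>& d) {
    const int n = static_cast<int>(b.size());
    for (int i = 1; i < n; ++i) {              // forward elimination
        double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    d[n - 1] /= b[n - 1];                      // back substitution
    for (int i = n - 2; i >= 0; --i)
        d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
}
```

The same routine can serve the reduced Schur complement system once its periodic corner entries are folded in (one standard option is a Sherman--Morrison correction).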
The tridiagonal systems above can be parallelized very efficiently. As
indicated earlier, this procedure requires only two collective
communications -- scatter and gather -- and because global communication
only occurs along one spatial direction the communication overhead
increases only marginally with the number of processors. A further cost
saving derives from the fact that the tridiagonal systems do not change
from one time step to the next, and so the matrices $\bs{S}$ and
$\bs{B}^{-1}\bs{E}$ can be precomputed.
The only potential bottleneck in this procedure is in solving the
reduced Schur complement system \eqref{eq:SchurSystem}.
Since the reduced system is solved only on
the master node, clock cycles on the remaining idle nodes are wasted
at this time. Furthermore, this wasted time increases as the number of
processors increases, since the Schur complement system grows with $P$.
Fortunately, the IB algorithm never solves just a single tridiagonal
system. For example, when solving \eqref{eq:TriDiagX} in the $x$-direction,
$P=P_x$ processing nodes work together to solve $N/P_y$ tridiagonal systems.
Solving these $N/P_y$ systems together therefore requires solving
$N/P_y$ different Schur complement systems, a workload that can be
spread evenly across the $P$ processors, thereby keeping all of them
occupied.
\section{Numerical Results}
\label{sec:Results}
\changed{ To test the accuracy of our algorithm, we consider two model
problems. The first is an idealized one-dimensional elliptical
membrane with zero thickness that is immersed in a 2D fluid and
undergoes a damped periodic motion. Here, the immersed boundary exerts
a singular force on the fluid and results in a pressure jump across
the membrane that reduces the method's order of accuracy. The second
model problem is a generalization of the first in which we consider a
thick immersed boundary, made up of multiple fiber layers in which the
elastic stiffness is reduced smoothly to a value of zero at the edges.
By providing the immersed boundary in this example with a physical
thickness, the external force is no longer singular, which then leads
to higher-order convergence rates. }
\subsection{Thin Ellipse}
\label{sec:ThinEllipseProblem}
For our first 2D model problem, the initial configuration is an
elliptical membrane with semi-axes $r_1$ and $r_2$, parameterized by
\begin{gather*}
\bs{X}(s,0) = \left( \frac{1}{2} + r_1 \cos(2 \pi s) ,~
\frac{1}{2} + r_2 \sin(2 \pi s) \right),
\end{gather*}
with $s\in[0,1]$. The ellipse is placed in a unit square containing
fluid that is initially stationary with $\bs{u}(\bs{x},0)=0$. We see
from the solution snapshots in Figure~\ref{fig:ThinEllipseSim} that the
elastic membrane undergoes a damped periodic motion, oscillating back
and forth between elliptical shapes having a semi-major axis aligned
with the $x$- and $y$-directions. The amplitude of the oscillations
decreases over time, and the membrane tends ultimately toward a circular
equilibrium state with radius approximately equal to $\sqrt{r_1 r_2}$
(which has the same area as the initial ellipse).
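Since the enclosed area is conserved, the equilibrium radius follows from $\pi r^2 = \pi r_1 r_2$; with the parameter values $r_1 = \frac{5}{28}$ and $r_2 = \frac{7}{20}$ used later in this section, $\sqrt{r_1 r_2} = 0.25$ exactly. The area itself can be checked on the discrete parameterization with the shoelace formula (the function name is our own):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sample the initial membrane X(s,0) at Ns points and estimate the enclosed
// area with the shoelace formula; for the ellipse this converges to
// pi * r1 * r2 as Ns grows.
double membrane_area(double r1, double r2, int Ns) {
    const double pi = std::acos(-1.0);
    std::vector<double> x(Ns), y(Ns);
    for (int k = 0; k < Ns; ++k) {
        double s = static_cast<double>(k) / Ns;
        x[k] = 0.5 + r1 * std::cos(2.0 * pi * s);
        y[k] = 0.5 + r2 * std::sin(2.0 * pi * s);
    }
    double area = 0.0;
    for (int k = 0; k < Ns; ++k) {
        int kp = (k + 1) % Ns;                 // close the polygon
        area += x[k] * y[kp] - x[kp] * y[k];
    }
    return 0.5 * std::fabs(area);
}
```

With $N_s = 1000$ sample points, the polygon area matches $\pi r_1 r_2$ to better than $10^{-4}$.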
\begin{figure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThinEllipseSim1.pdf}
\label{fig:ThinEllipseSim1}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThinEllipseSim2.pdf}
\label{fig:ThinEllipseSim2}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThinEllipseSim3.pdf}
\label{fig:ThinEllipseSim3}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThinEllipseSim4.pdf}
\label{fig:ThinEllipseSim4}}
\caption{Snapshots of a thin oscillating ellipse using the GM-IB
method, with parameters $\sigma=1$, $N=256$ and $\Delta t=0.04/512$.}
\label{fig:ThinEllipseSim}
\end{center}
\end{figure}
For this problem, we computed results using two immersed boundary
algorithms corresponding to different fluid solvers. The first
algorithm, denoted GM-IB, is the same one described in
section~\ref{sec:algorithm} that uses Guermond and Minev's fluid
solver. The second algorithm, denoted BCM-IB, is identical to the first
except that the fluid solver is replaced with the second-order projection
method described by Brown, Cortez, and Minion~\cite{Brown2001}. We take
values of the parameters from Griffith~\cite{Griffith2012}, who used
$\mu=0.01$, $\rho=1$, $r_1=\frac{5}{28}$, $r_2=\frac{7}{20}$ and
$N_s=\frac{19}{4}N$. We then compare our numerical results for
different choices of the membrane elastic stiffness ($\sigma$) and
spatial discretization ($N$). Unless stated otherwise, the time step is
chosen so that the simulation is stable on the finest spatial grid (with
$N=512$). This is a conservative choice for the time step that attempts
to avoid any unreasonable accumulation of errors in time, but it also
provides limited information regarding the time step restrictions for
the two methods. \changed{However, we observe in practice that there is
little difference between the time step restrictions for the GM-IB and
BCM-IB algorithms, although GM-IB does have a slightly stricter time
step restriction than BCM-IB.}
Because the fluid contained within the immersed boundary cannot escape,
the area of the oscillating ellipse should remain constant in time.
However, many other IB computations for this thin ellipse problem
exhibit poor volume conservation which manifests itself as an apparent
``leakage'' of fluid out of the immersed boundary. The source of this
volume conservation error is the numerical error in the discrete
divergence-free condition for the velocity field interpolated at the
immersed boundary points, which can be non-zero even when the fluid
solver guarantees that the velocity is discretely divergence-free on the
Eulerian fluid grid~\cite{NewrenPhdthesis2007,PeskinPrintz1993}.
Griffith~\cite{Griffith2012} observed that volume conservation can be
improved by using a pressure-increment fluid solver instead of a
pressure-free solver, and furthermore that fluid solvers based on a
staggered grid tended to perform better than those using a collocated
grid. We have employed both of these ideas in our proposed method and
so we expect to see significant improvement in volume conservation
relative to other IB methods.
We begin by plotting the maximum and mean radii of the ellipse versus
time in
Figure~\ref{fig:ThinEllipseRadiusPressure}\subref{fig:ThinEllipseRadius},
from which it is clear that the immersed boundary converges to a
circular steady state having radius $\sqrt{r_1 r_2}=\frac{1}{4}$. The
BCM-IB results are indistinguishable from those using GM-IB, and so only
the latter are depicted in this figure. The low rate of volume loss
observed in both algorithms is consistent with the numerical experiments
of Griffith~\cite{Griffith2012}. Owing to the relatively high Reynolds
number for this flow ($\mbox{\itshape Re} \approx 150$), there is a noticeable
error in the oscillation frequency for coarser discretizations, although
we note that this error is much smaller for lower $\mbox{\itshape Re}$ flows. We
suspect that this frequency error could be reduced significantly by
employing higher-order approximations in the nonlinear advection term
and the IB evolution equation~\eqref{eq:membrane}, such as has been done
by Griffith~\cite{Griffith2007}. Finally, we note that
Figure~\ref{fig:ThinEllipseRadiusPressure}\subref{fig:ThinEllipsePressure}
shows that the GM-IB algorithm captures the discontinuity in pressure
without any visible oscillations.
\begin{figure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{Figures/ThinEllipseRadius.pdf}
\label{fig:ThinEllipseRadius}}
\qquad
\subfigure[]{\includegraphics[width=0.45\textwidth]{Figures/ThinEllipsePressure.pdf}
\label{fig:ThinEllipsePressure}}
\caption{Results for the thin ellipse problem using the GM-IB method
with $\sigma=1$ and $\Delta t=0.04/512$. \subref{fig:ThinEllipseRadius}
Maximum and mean radii. \subref{fig:ThinEllipsePressure} Pressure
slices across the $x$--axis with $y=0.5$ and $t=0.2$.}
\label{fig:ThinEllipseRadiusPressure}
\end{center}
\end{figure}
We next estimate the error and convergence rate for both algorithms.
Because the thin ellipse problem is characterized by a singular IB
force, there is a discontinuity in velocity derivatives and
pressure, and so our numerical scheme is limited to first-order accuracy.
We note that improvements in the convergence rate could be achieved by
explicitly incorporating these discontinuities into the difference
scheme, for example as is done in the immersed interface method
\cite{Lee2003,Leveque1997}.
When reporting the error in a discrete variable $q_N$ that is
approximated on a grid at refinement level $N$, we use the notation
\begin{gather}
\myerror{q}{N} = \| q_N - q_{\text{exact}} \|_2.
\label{eq:EstError}
\end{gather}
Because the exact solution for the thin ellipse problem is not known, we
estimate $q_\text{exact}$ by using the approximate solution on the
finest mesh corresponding to $N_f=512$ \changed{with the BCM-IB algorithm}, and then take
$q_{\text{exact}}=\mathcal{I}^{N_f\to N} q_{N_f}$, where
$\mathcal{I}^{N_f \to N}$ is an operator that interpolates the finest
mesh solution $q_{N_f}$ onto the current coarse mesh with $N$ points.
We use the discrete $\ell^2$ norm to estimate errors, which is
calculated for an Eulerian quantity such as the pressure using
\begin{gather}
\|p _{i,j}\|_2 = \brac{ h^2 \sum_{i,j} |p_{i,j}|^2 }^{1/2},
\label{eq:ScalarDiscreteL2}
\end{gather}
and similarly for a Lagrangian quantity such as the IB position using
\begin{gather}
\| \bs{X}_k\|_2 = \brac{ h_s \sum_k |\bs{X}_k|^2 }^{1/2},
\label{eq:VectorDiscreteL2}
\end{gather}
where $|\cdot|$ denotes the absolute value in the first formula and
the Euclidean norm in the second. The convergence rate can then be
estimated using solutions $q_N$, $q_{2N}$ and $q_{4N}$ on successively
finer grids as
\begin{gather}
\myrate{q}{N} = \log_2 \brac{ \frac{\| q_N -
\mathcal{I}^{2N\to N} q_{2N} \|_2}{\| q_{2N} -
\mathcal{I}^{4N\to 2N} q_{4N} \|_2} }.
\label{eq:EstConvergence}
\end{gather}
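The error norm and rate formulas above can be sketched in a few lines of Python; the difference values in the example are synthetic, chosen only to illustrate how a first-order method yields a rate estimate of $1$:

```python
import math

def l2_norm_eulerian(field, h):
    """Discrete L2 norm with h^2 weight for a 2D Eulerian field (list of rows)."""
    return math.sqrt(h * h * sum(v * v for row in field for v in row))

def convergence_rate(diff_coarse, diff_fine):
    """Rate estimate: log2 of the ratio of successive grid differences."""
    return math.log2(diff_coarse / diff_fine)

# For a first-order method, successive grid differences halve on refinement:
print(convergence_rate(2.0e-3, 1.0e-3))   # 1.0
```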
A summary of convergence rates and errors is given in
Tables~\ref{Table:ThinEllipse_ConvergenceRate}
and~\ref{Table:ThinEllipse_Error} for both the GM-IB and BCM-IB
algorithms, taking different values of the elastic stiffness parameter
$\sigma$. The error in all cases is measured at a time three-quarters
through the ellipse's first oscillation, when the membrane is roughly
circular in shape. Table~\ref{Table:ThinEllipse_ConvergenceRate}
clearly shows that the two algorithms exhibit similar convergence rates
for all state variables. First-order convergence is seen in both the
fluid velocity and membrane position, while the pressure shows the
expected reduction in accuracy to $\order{h^{1/2}}$ owing to the
pressure discontinuity. The errors in
Table~\ref{Table:ThinEllipse_Error} show that GM-IB and BCM-IB are
virtually indistinguishable from each other except for the error in the
divergence-free condition, $\myerror{\nabla \cdot \bs{u}}{N}$, where the
BCM-IB algorithm appears to enforce the incompressibility constraint
better than GM-IB. Because Guermond and Minev's fluid solver does not
project the velocity field onto the space of divergence-free velocity
fields (even approximately), it is not surprising that BCM-IB performs
better in this regard. We remark that the magnitude of the fluid
variables increases with the stiffness $\sigma$, so that the error
increases as well (since ${\cal E}$ is defined as an absolute error
measure); however, the relative error and convergence rates remain
comparable as $\sigma$ varies over several orders of magnitude.
\begin{table}[htbp]\centering\small
\caption{Estimated $\ell^2$ convergence rates for the thin
ellipse problem with three different parameter sets:
($\sigma=0.1$, $t=1.06$, $\Delta t=0.08/512$), ($\sigma=1$, $t=0.31$,
$\Delta t=0.04/512$), ($\sigma=10$, $t=0.0975$, $\Delta t=0.01/512$).}
\begin{tabular}{ll cccccccccc}\toprule
& & \multicolumn{2}{c}{$\myrate{\bs{u}}{N}$} & \multicolumn{2}{c}{$\myrate{p}{N}$} & \multicolumn{2}{c}{$\myrate{\bs{X}}{N}$} \\
\cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8}
$\sigma$ & $N$ & GM & BCM & GM & BCM & GM & BCM \\
\midrule
\multirow{2}{*}{$0.1$} & $64$ & $1.02$ & $1.02$ & $0.55$ & $0.55$ & $1.46$ & $1.46$ \\
& $128$ & $1.05$ & $1.06$ & $0.53$ & $0.53$ & $1.28$ & $1.29$ \\
\midrule
\multirow{2}{*}{$1$} & $64$ & $1.48$ & $1.51$ & $0.72$ & $0.73$ & $1.34$ & $1.35$ \\
& $128$ & $0.96$ & $1.03$ & $0.57$ & $0.58$ & $1.31$ & $1.37$ \\
\midrule
\multirow{2}{*}{$10$} & $64$ & $1.27$ & $1.33$ & $0.88$ & $0.84$ & $1.35$ & $1.33$ \\
& $128$ & $0.89$ & $1.03$ & $0.68$ & $0.82$ & $1.32$ & $1.71$ \\
\bottomrule
\end{tabular}
\label{Table:ThinEllipse_ConvergenceRate}
\end{table}
\begin{table}[htbp]\centering\small
\caption{Estimated $\ell^2$ errors for the thin ellipse problem with three
different parameter sets: ($\sigma=0.1$, $t=1.06$,
$\Delta t=0.08/512$), ($\sigma=1$, $t=0.31$, $\Delta t=0.04/512$),
($\sigma=10$, $t=0.0975$, $\Delta t=0.01/512$).}
\begin{tabular}{ll cccccccccc}\toprule
& & \multicolumn{2}{c}{$\myerror{\bs{u}}{N}$} & \multicolumn{2}{c}{$\myerror{p}{N}$} & \multicolumn{2}{c}{$\myerror{\bs{X}}{N}$} & \multicolumn{2}{c}{$\myerror{\nabla \cdot \bs{u}}{N}$} \\
\cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule(r){9-10}
$\sigma$ & $N$ & GM & BCM & GM & BCM & GM & BCM & GM & BCM \\
\midrule
\multirow{4}{*}{$0.1$} & $64$ & $7.41\text{e}{-3}$ & $7.44\text{e}{-3}$ & $4.13\text{e}{-2}$ & $4.13\text{e}{-2}$ & $2.81\text{e}{-4}$ & $2.82\text{e}{-4}$ & $4.77\text{e}{-3}$ & $3.67\text{e}{-16}$ \\
& $128$ & $3.10\text{e}{-3}$ & $3.16\text{e}{-3}$ & $2.36\text{e}{-2}$ & $2.36\text{e}{-2}$ & $6.69\text{e}{-5}$ & $6.75\text{e}{-5}$ & $1.05\text{e}{-2}$ & $7.22\text{e}{-16}$ \\
& $256$ & $9.64\text{e}{-4}$ & $1.03\text{e}{-3}$ & $1.03\text{e}{-2}$ & $1.04\text{e}{-2}$ & $1.36\text{e}{-5}$ & $1.39\text{e}{-5}$ & $1.86\text{e}{-2}$ & $1.41\text{e}{-15}$ \\
& $512$ & $1.91\text{e}{-4}$ & -- & $8.84\text{e}{-4}$ & -- & $1.80\text{e}{-6}$ & -- & $2.91\text{e}{-2}$ & $2.83\text{e}{-15}$ \\
\midrule
\multirow{4}{*}{$1$} & $64$ & $5.61\text{e}{-2}$ & $5.69\text{e}{-2}$ & $4.56\text{e}{-1}$ & $4.57\text{e}{-1}$ & $4.32\text{e}{-4}$ & $4.36\text{e}{-4}$ & $1.04\text{e}{-1}$ & $1.86\text{e}{-15}$ \\
& $128$ & $1.88\text{e}{-2}$ & $1.98\text{e}{-2}$ & $2.56\text{e}{-1}$ & $2.57\text{e}{-1}$ & $1.06\text{e}{-4}$ & $1.09\text{e}{-4}$ & $2.50\text{e}{-1}$ & $3.60\text{e}{-15}$ \\
& $256$ & $6.32\text{e}{-3}$ & $6.65\text{e}{-3}$ & $1.13\text{e}{-1}$ & $1.11\text{e}{-1}$ & $2.08\text{e}{-5}$ & $2.15\text{e}{-5}$ & $4.55\text{e}{-1}$ & $7.04\text{e}{-15}$ \\
& $512$ & $5.46\text{e}{-3}$ & -- & $4.77\text{e}{-2}$ & -- & $9.21\text{e}{-6}$ & -- & $7.23\text{e}{-1}$ & $1.40\text{e}{-14}$ \\
\midrule
\multirow{4}{*}{$10$} & $64$ & $3.37\text{e}{-1}$ & $3.38\text{e}{-1}$ & $5.88\text{e}{+0}$ & $5.89\text{e}{+0}$ & $6.33\text{e}{-4}$ & $6.38\text{e}{-4}$ & $6.12\text{e}{-1}$ & $7.01\text{e}{-15}$ \\
& $128$ & $1.61\text{e}{-1}$ & $1.62\text{e}{-1}$ & $3.12\text{e}{+0}$ & $3.04\text{e}{+0}$ & $1.56\text{e}{-4}$ & $1.57\text{e}{-4}$ & $2.12\text{e}{+0}$ & $1.40\text{e}{-14}$ \\
& $256$ & $7.06\text{e}{-2}$ & $5.48\text{e}{-2}$ & $1.42\text{e}{+0}$ & $1.20\text{e}{+0}$ & $3.08\text{e}{-5}$ & $2.60\text{e}{-5}$ & $3.94\text{e}{+0}$ & $2.72\text{e}{-14}$ \\
& $512$ & $6.24\text{e}{-2}$ & -- & $1.12\text{e}{+0}$ & -- & $2.01\text{e}{-5}$ & -- & $6.34\text{e}{+0}$ & $5.39\text{e}{-14}$ \\
\bottomrule
\end{tabular}
\label{Table:ThinEllipse_Error}
\end{table}
\changed{ Lastly, we examine the issue of volume conservation by
considering the volume (area) of the membrane for the GM-IB and BCM-IB
methods as the solution approaches steady-state. Ideally, the membrane
area should remain constant in time with a value of $\pi r_1 r_2$
because the fluid contained inside the immersed boundary cannot
escape. For a membrane with an elastic stiffness of $\sigma=1$, the
volume conservation is illustrated in
Table~\ref{Table:ThinEllipse_AreaError}. For both numerical schemes,
the loss of enclosed volume is less than one percent by the time the
solution attains a quasi-steady state near $t=4$. The same is true of
the corresponding simulations using $\sigma=10$ and $\sigma=0.1$,
where the quasi-steady state is reached at $t=2$ and $t=4$
respectively.
When comparing methods, we observe
that BCM-IB conserves volume better than GM-IB. However,
as observed in Table~\ref{Table:ThinEllipse_Error} (see $\myerror{\bs{X}}{N}$),
the difference in volume conservation has negligible impact on the solution's
accuracy.
It is only when approaching the stability boundaries (in
terms of the allowable time step) that the difference in volume
conservation becomes noticeable.
Finally, when the time step is reduced, the volume
conservation in the GM-IB algorithm improves noticeably, which is not surprising since the
GM fluid solver introduces an $\order{\Delta t}$ perturbation to the
incompressibility constraint~\eqref{eq:PerturbIncompressible}. Furthermore,
from Table~\ref{Table:ThinEllipse_AreaErrorRate}, we see that the leakage rate
of the membrane is not affected by the time step and is nearly
identical to that of BCM-IB. }
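The leakage rates in Table~\ref{Table:ThinEllipse_AreaErrorRate} are slopes of linear least-squares fits of the relative area error over $t\in[2,4]$. A minimal sketch of such a fit follows; the area samples are synthetic, constructed to leak at a known rate:

```python
def lsq_slope(ts, ys):
    """Slope of the linear least-squares fit y ~ a + b * t."""
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    den = sum((t - tm) ** 2 for t in ts)
    return num / den

# Synthetic relative area error leaking linearly at 1.0e-3 per unit time
ts = [2.0 + 0.01 * k for k in range(201)]    # samples over t in [2, 4]
err = [1.0e-3 * (t - 2.0) for t in ts]
print(lsq_slope(ts, err))                    # recovers the leakage rate
```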
\begin{table}[htbp]\centering\small
\caption{ The loss of enclosed volume (relative error) of the immersed boundary at time $t=4$ when $\sigma=1$. }
\begin{tabular}{l ccccc c}\toprule
&\multicolumn{4}{c}{GM-IB} & BCM-IB \\
\cmidrule(r){2-5} \cmidrule(r){6-6}
N & $\Delta t=\frac{0.04}{512}$ & $\Delta t=\frac{0.02}{512}$ & $\Delta t=\frac{0.01}{512}$ & $\Delta t=\frac{0.005}{512}$ & $\Delta t=\frac{0.04}{512}$ \\
\midrule
$64$ & $2.21\text{e}{-3}$ & $1.63\text{e}{-3}$ & $1.45\text{e}{-3}$ & $1.38\text{e}{-3}$ & $1.35\text{e}{-3}$ \\
$128$ & $2.74\text{e}{-3}$ & $1.61\text{e}{-3}$ & $1.21\text{e}{-3}$ & $1.07\text{e}{-3}$ & $6.42\text{e}{-4}$ \\
$256$ & $3.31\text{e}{-3}$ & $1.55\text{e}{-3}$ & $9.36\text{e}{-4}$ & $7.12\text{e}{-4}$ & $3.32\text{e}{-4}$ \\
$512$ & $4.25\text{e}{-3}$ & $1.65\text{e}{-3}$ & $7.89\text{e}{-4}$ & $4.81\text{e}{-4}$ & $1.57\text{e}{-4}$ \\
\bottomrule
\end{tabular}
\label{Table:ThinEllipse_AreaError}
\end{table}
\begin{table}[htbp]\centering\small
\caption{ The temporal leakage rate of the membrane (relative error) over the
time interval $t\in[2,4]$ when $\sigma=1$, obtained using a linear least-squares fit. }
\begin{tabular}{l ccccc c}\toprule
&\multicolumn{4}{c}{GM-IB} & BCM-IB \\
\cmidrule(r){2-5} \cmidrule(r){6-6}
N & $\Delta t=\frac{0.04}{512}$ & $\Delta t=\frac{0.02}{512}$ & $\Delta t=\frac{0.01}{512}$ & $\Delta t=\frac{0.005}{512}$ & $\Delta t=\frac{0.04}{512}$ \\
\midrule
$64$ & $1.03\text{e}{-3}$ & $1.04\text{e}{-3}$ & $1.05\text{e}{-3}$ & $1.05\text{e}{-3}$ & $1.05\text{e}{-3}$ \\
$128$ & $6.35\text{e}{-4}$ & $6.39\text{e}{-4}$ & $6.41\text{e}{-4}$ & $6.41\text{e}{-4}$ & $6.41\text{e}{-4}$ \\
$256$ & $3.29\text{e}{-4}$ & $3.31\text{e}{-4}$ & $3.32\text{e}{-4}$ & $3.32\text{e}{-4}$ & $3.32\text{e}{-4}$ \\
$512$ & $1.57\text{e}{-4}$ & $1.57\text{e}{-4}$ & $1.57\text{e}{-4}$ & $1.57\text{e}{-4}$ & $1.57\text{e}{-4}$ \\
\bottomrule
\end{tabular}
\label{Table:ThinEllipse_AreaErrorRate}
\end{table}
\subsection{Thick Elliptical Shell}
\label{sec:ThickEllipseProblem}
Our second test problem involves the thick elastic shell pictured in
Figure~\ref{fig:ThickEllipseSim}, which was studied previously by
Griffith and Peskin~\cite{Griffith2005}. This is a natural
generalization of the thin ellipse problem, wherein the shell is treated
using a nested sequence of elliptical immersed fibers. The purpose of
this example is not only to illustrate the application of our algorithm
to more general solid elastic structures, but also to demonstrate the
genuine second-order accuracy of our numerical method for problems that
are sufficiently smooth.
To this end, we consider an elliptical elastic shell of thickness
$\gamma$, described by two independent Lagrangian parameters $s,r\in [0,1]$, and
specify the initial configuration by
\begin{gather*}
\bs{X}(s,r,0) = \left( \frac{1}{2} + (r_1 + \gamma (r-1/2))
\cos(2 \pi s) ,~ \frac{1}{2} + (r_2 + \gamma (r-1/2)) \sin(2 \pi
s) \right).
\end{gather*}
The shell is composed of circumferential fibers having an elastic
stiffness that varies in the radial direction according to
\begin{gather*}
\sigma(r) = 1 - \cos(2 \pi r ) .
\end{gather*}
Because the elastic stiffness drops to zero at the inner and outer edges
of the shell, the corresponding Eulerian force $\bs{f}$ is a continuous
function of $\bs{x}$; this should be contrasted with the ``thin
ellipse'' example in which the fluid force is singular, since it
consists of a 1D delta distribution in the tangential direction along
the membrane. As a result, we expect in this example to observe higher
order convergence because the solution does not contain the
discontinuities in pressure and velocity derivatives that were present
in the thin ellipse problem. Unless otherwise indicated, we take the
parameter values $\rho=1$, $r_1=0.2$, $r_2=0.25$, $\gamma=0.0625$,
$N_s=(75/16)N$, $N_r=(3/8)N$ and $\Delta t=0.08/512$ that are consistent with
the computations in~\cite{Griffith2005}.
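The configuration and stiffness profile above can be sketched directly (pure Python, for illustration only; the sample points are arbitrary):

```python
import math

r1, r2, gamma = 0.2, 0.25, 0.0625    # parameters used in this section

def X0(s, r):
    """Initial shell configuration X(s, r, 0) for s, r in [0, 1]."""
    return (0.5 + (r1 + gamma * (r - 0.5)) * math.cos(2 * math.pi * s),
            0.5 + (r2 + gamma * (r - 0.5)) * math.sin(2 * math.pi * s))

def sigma(r):
    """Radial stiffness profile; vanishes at the inner (r=0) and outer
    (r=1) edges, so the spread Eulerian force is continuous."""
    return 1.0 - math.cos(2.0 * math.pi * r)

print(X0(0.0, 0.5))            # mid-shell point on the positive x-axis
print(sigma(0.0), sigma(1.0))  # stiffness vanishes at both edges
```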
The dynamics of the thick ellipse problem illustrated in
Figure~\ref{fig:ThickEllipseSim} are qualitatively similar to those in
the previous section, in that the elastic shell undergoes a damped
oscillation. In Table~\ref{Table:ThickEllipse_ConvergenceRate}, we
present the $\ell^2$ convergence rates in the solution for different
values of fluid viscosity $\mu$. We also include the corresponding
results computed by Griffith and Peskin~\cite{Griffith2005} and observe
that the GM-IB, BCM-IB, and Griffith-Peskin algorithms all exhibit
remarkably similar convergence rates. The $\ell^2$ errors for the GM-IB
and BCM-IB methods are almost identical to Griffith and Peskin's, and so
we have not reported them for this example.
It is only when the viscosity is taken very small ($\mu=0.0005$) that
the Griffith-Peskin algorithm begins to demonstrate superior results.
Because this improvement corresponds to a higher Reynolds number, we
attribute it to differences in the treatment of the nonlinear advection
term and the IB evolution equation. Indeed, Griffith and Peskin
approximate the nonlinear advection term using a high-order Godunov
method \cite{Colella1990,Minion1996} and integrate the IB equation
\eqref{eq:membrane} using a strong stability preserving Runge-Kutta
method \cite{Gottlieb2001}. \changed{We have made no attempt to
incorporate these modifications into our algorithm because the time
integration they used for their Godunov method requires solution of a
Poisson problem, whereas a Runge-Kutta time integration would require
an additional velocity interpolation step that reduces computational
efficiency.} Recall that one of our primary aims is precisely to
avoid the pressure Poisson solves required in so many other IB methods.
With this in mind, we have restricted our attention in this paper to
lower Reynolds number flows corresponding roughly to
$\mbox{\itshape Re}\lessapprox 1000$.
\begin{figure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThickEllipseSim1.pdf}
\label{fig:ThickEllipseSim1}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThickEllipseSim2.pdf}
\label{fig:ThickEllipseSim2}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThickEllipseSim3.pdf}
\label{fig:ThickEllipseSim3}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/ThickEllipseSim4.pdf}
\label{fig:ThickEllipseSim4}}
\caption{Snapshots of a thick oscillating ellipse using the GM-IB
method, with parameters $\mu=0.005$, $N=256$ and $\Delta t=0.08/512$.}
\label{fig:ThickEllipseSim}
\end{center}
\end{figure}
\begin{table}[htbp]\centering\small
\caption{Estimated $\ell^2$ convergence rates $\myrate{q}{128}$ for
the thick ellipse problem at time $t=0.4$. For comparison,
Griffith's results~\cite{Griffith2005} are reported in the final
row. Since Griffith reports the component-wise convergence rate of
the velocity field, we approximate $\myrate{\bs{u}}{128} \approx
\max(\myrate{u}{128},~\myrate{v}{128})$.}
\begin{tabular}{l cccc cccc cccc}\toprule
& \multicolumn{3}{c}{$\mu=0.05$} & \multicolumn{3}{c}{$\mu=0.01$} & \multicolumn{3}{c}{$\mu=0.005$} \\
\cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10}
&$\bs{u}$ & $p$ & $\bs{X}$ &$\bs{u}$ & $p$ & $\bs{X}$ &$\bs{u}$ & $p$ & $\bs{X}$\\
GM-IB & $2.10$ & $1.88$ & $1.69$ & $2.12$ & $1.88$ & $1.76$ & $2.11$ & $1.88$ & $1.99$ \\
BCM-IB & $2.11$ & $1.88$ & $1.69$ & $2.09$ & $1.87$ & $1.74$ & $2.09$ & $1.87$ & $1.99$ \\
Griffith \cite{Griffith2005}& $2.16^*$ & $1.89$ & $1.98$ & -- & -- & -- & $2.20^*$ & $1.86$ & $1.74$ \\
\bottomrule
\end{tabular}
\label{Table:ThickEllipse_ConvergenceRate}
\end{table}
Lastly, we investigate the accuracy with which our discrete solution
satisfies the discrete divergence-free condition for a variety of time
steps and spatial discretizations. Our aim in this instance is to
determine how well the fluid solver of Guermond and Minev approximates
the incompressibility constraint, which is related to the volume
conservation issue discussed in the thin ellipse example.
Table~\ref{Table:ThinEllipse_DivergenceError} lists values of the error
in the discrete divergence of velocity, $\myerror{\nabla \cdot
\bs{u}}{N}$, measured at time $t=0.4$ and estimated using
equation~\eqref{eq:EstError}. Observe that $\myerror{\nabla \cdot
\bs{u}}{N}$ increases slightly as the spatial discretization is
refined, but decreases when a smaller time step is used. This last
result is to be expected because Guermond and Minev use a $\order{\Delta t}$
perturbation of the incompressibility
constraint~\eqref{eq:PerturbIncompressible}.
\begin{table}[htbp]\centering\small
\caption{Error in the divergence-free condition $\myerror{\nabla \cdot
\bs{u}}{N}$ for the thick ellipse problem using the GM-IB method
and $\mu=0.01$.}
\begin{tabular}{l cccccc}\toprule
& $\Delta t=\frac{0.08}{512}$ & $\Delta t=\frac{0.04}{512}$ & $\Delta t=\frac{0.02}{512}$ & $\Delta t=\frac{0.01}{512}$ & $\Delta t=\frac{0.005}{512}$ & $\Delta t=\frac{0.0025}{512}$ \\
\midrule
$N=64$ & $6.34\text{e}{-3}$ & $2.37\text{e}{-3}$ & $8.93\text{e}{-4}$ & $3.20\text{e}{-4}$ & $1.08\text{e}{-4}$ & $3.42\text{e}{-5}$ \\
$N=128$ & $8.94\text{e}{-3}$ & $3.68\text{e}{-3}$ & $1.49\text{e}{-3}$ & $5.75\text{e}{-4}$ & $2.11\text{e}{-4}$ & $7.36\text{e}{-5}$ \\
$N=256$ & $1.00\text{e}{-2}$ & $4.19\text{e}{-3}$ & $1.73\text{e}{-3}$ & $6.79\text{e}{-4}$ & $2.55\text{e}{-4}$ & $9.06\text{e}{-5}$ \\
$N=512$ & $1.04\text{e}{-2}$ & $4.35\text{e}{-3}$ & $1.80\text{e}{-3}$ & $7.11\text{e}{-4}$ & $2.68\text{e}{-4}$ & $9.59\text{e}{-5}$ \\
\bottomrule
\end{tabular}
\label{Table:ThinEllipse_DivergenceError}
\end{table}
\section{\changed{Parallel Performance Results}}
\label{sec:Performance}
We now focus on comparing the parallel performance of our algorithm
(GM-IB) with an analogous projection-based scheme (BCM-IB). We begin by
comparing the performance difference between solving the pure Poisson
problem that plays a central role in projection schemes, versus Guermond
and Minev's directional-split counterpart. This captures the major
differences between the fluid solvers used in the corresponding IB
algorithms. We then perform weak and strong scaling
tests for the full immersed boundary problem.
\subsection{Comparison with Poisson Solvers}
\label{sec:PoissonComparison}
In this section, we compare the performance of several Poisson solvers
with our tridiagonal solver described in section~\ref{sec:linearsolver}.
Since any standard projection scheme requires solving a Poisson problem,
this performance study encapsulates the major differences between
Guermond and Minev's fluid solver and other projection-based approaches
(for example, that of Brown-Cortez-Minion). To illustrate the
comparison, we consider the problem
\begin{gather}
\left\{
\begin{array}{c l}
\ctsop{A} \psi = f(\bs{x}) & \mbox{~in } \Omega = [0,1]^d,\\
\psi \mbox{~is periodic } &\mbox{~on } \partial\Omega,
\end{array}\right.
\label{eqn:PoissonGMProblem}
\end{gather}
where
\begin{gather*}
f(\bs{x}) = \left\{
\begin{array}{c l}
\sin( 2\pi x ) \cos( 2\pi y ) & \mbox{~when } d=2,\\
\sin( 2\pi x ) \cos( 2\pi y ) \cos( 2\pi z ) & \mbox{~when } d=3,\\
\end{array}\right.
\end{gather*}
and $\ctsop{A}$ is either the Laplacian operator ($\ctsop{A} =
\nabla^2$) or the directional-split operator $\ctsop{A} =
(1-\partial_{xx})(1-\partial_{yy})$ (when $d=2$).
When solving the Poisson problem, we compare with two other solvers: one
based on FFTs and the other on multigrid. For both of these solvers we
discretize the problem using a second-order finite difference scheme.
In the FFT-based solver, the difference scheme is rewritten in terms of
the Fourier coefficients and solved using the real-to-complex and
complex-to-real transformations found in FFTW~\cite{FFTW}. For the
multigrid solver, we use a highly scalable multigrid preconditioner
(PFMG) implemented in Hypre~\cite{Hypre} that is used with a conjugate
gradient solver. When performing the comparison for the
directional-split problem, the discrete system decouples into a set of
one-dimensional tridiagonal systems that we solve using the techniques
described in section~\ref{sec:linearsolver}. The major differences here
occur in terms of the domain partitioning, which for the FFT-based
solver involves a slab decomposition, whereas the multigrid and
directional-split solvers use square-like subdomains.
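Each factor of the directional-split operator reduces to independent one-dimensional periodic tridiagonal solves. As a serial illustration only (the parallel Schur-complement treatment is what section~\ref{sec:linearsolver} actually describes), the sketch below solves one such factor, $(1-\partial_{xx})\psi=f$, on a periodic grid with the Thomas algorithm plus a Sherman--Morrison corner correction; the grid size and eigenfunction check are illustrative assumptions:

```python
import math

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

def solve_periodic(rhs, h):
    """Solve (I - D_xx) psi = rhs on a periodic 1D grid using the
    Sherman-Morrison correction of the Thomas algorithm."""
    n = len(rhs)
    off = -1.0 / (h * h)             # sub/super diagonal and corner entries
    diag = 1.0 + 2.0 / (h * h)
    a, b, c = [off] * n, [diag] * n, [off] * n
    gamma = -diag
    bb = b[:]
    bb[0] -= gamma
    bb[-1] -= off * off / gamma      # remove alpha*beta/gamma, alpha = beta = off
    y = thomas(a, bb, c, rhs)
    u = [0.0] * n
    u[0], u[-1] = gamma, off
    z = thomas(a, bb, c, u)
    fact = (y[0] + off * y[-1] / gamma) / (1.0 + z[0] + off * z[-1] / gamma)
    return [yi - fact * zi for yi, zi in zip(y, z)]

# Discrete eigenfunction check: psi(x) = sin(2 pi x) on a 16-point grid
n, h = 16, 1.0 / 16
lam = 1.0 + (2.0 - 2.0 * math.cos(2.0 * math.pi * h)) / (h * h)
psi = [math.sin(2.0 * math.pi * k * h) for k in range(n)]
sol = solve_periodic([lam * p for p in psi], h)
print(max(abs(s - p) for s, p in zip(sol, psi)))   # close to machine precision
```

Because $\sin(2\pi x)$ is an exact eigenfunction of the discrete operator, the solver should recover it to rounding error, which makes this a convenient correctness check.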
Throughout our performance study, times are collected using MPI and the
best result of multiple runs is reported. All simulations are performed
using the Bugaboo cluster managed by WestGrid~\cite{Bugaboo}, a member
of the high-performance computing consortium Compute Canada. This
cluster consists of 12-core blades, each containing two Intel Xeon X5650
6-core processors (2.66 GHz) that are connected by Infiniband using a
288-port QLogic switch.
First, we evaluate the strong scaling property of each solver by
running a sequence of simulations in which the problem size is held
fixed as the number of processors increases. For the three-dimensional
computations, the problems are solved on $N = 128$ and $N = 256$
grids, which are common resolutions used in 3D IB calculations. For the
two-dimensional computations, the problems are solved on grids that are
larger than usual ($N = 2048$ and $4096$). The strong
scaling results are given in
Tables~\ref{Table:StrongScaling:PoissonProblem2d}
and~\ref{Table:StrongScaling:PoissonProblem3d}, which
report the execution time $T_P$ and parallel efficiency
\begin{gather*}
E_P = \frac{T_1}{P T_P},
\end{gather*}
for $P$ processors. The parallel efficiency
quantifies how well the processors are utilized throughout a
computation, where a value of $E_P=1$ corresponds to the ideal case
and smaller values indicate a reduced parallel efficiency. Note that
some reduction in efficiency is expected, since the serial computation involves
no Schur complement systems and instead solves the
tridiagonal systems directly.
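For reference, the efficiency computation is straightforward to reproduce from the tabulated timings; for example, for the 2D directional-split solver at $N=2048$ (a sketch using values from Table~\ref{Table:StrongScaling:PoissonProblem2d}):

```python
def parallel_efficiency(t1, tp, p):
    """Parallel efficiency E_P = T_1 / (P * T_P)."""
    return t1 / (p * tp)

# Directional-split solver, 2D, N = 2048: T_1 = 2.77e-1 s, T_128 = 2.45e-3 s
e128 = parallel_efficiency(2.77e-1, 2.45e-3, 128)
print(round(e128, 2))   # 0.88, matching the tabulated value
```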
In all parallel computations, we observe that the directional-split
solver is strongly scalable ($E_P > 0.8$) and outperforms both Poisson
solvers by a significant margin. Indeed, when comparing the
directional-split solver to multigrid, there is an order of magnitude
difference in execution time. For all multigrid computations, the
conjugate gradient solver required $6$ iterations, which makes the
directional-split solver a factor of $2$ to $5$ faster than a
single multigrid iteration. When comparing the directional-split solver
to the FFT-based solver, the difference in execution times is much
smaller, particularly when using fewer processors. However, as the
processor count increases, we still see a two-fold or greater
performance improvement when using the directional-split solver.
Besides the performance improvements, the Guermond and Minev fluid
solver has a few additional advantages over FFT-based fluid
solvers. First of all, FFT-based solvers are restricted to periodic
boundaries while the directional-split solver has no such
restriction. For example, the Guermond-Minev fluid solver can
inexpensively compute driven-cavity
and (periodic) channel flows without the use of
immersed boundaries~\cite{Guermond2011-2}. Secondly, the slab decomposition used by many FFT
libraries (such as FFTW) can lead to serious load-balancing issues in
the immersed boundary context as indicated by Yau~\cite{Yau2002}. Note
that this could be mitigated somewhat in 3D simulations by moving to a
pencil decomposition that consequently allows for more processors to be
used in the computation~\cite{Pippig2013}.
\begin{table}[htbp]\centering
\caption{ Execution time $T_P$ and parallel efficiency $E_P$ for
the 2D problem~\eqref{eqn:PoissonGMProblem} on an $N^2$ grid
with $P$ processors.}
\begin{tabular}{ll cccccc}\toprule
& & \multicolumn{2}{c}{Multigrid} & \multicolumn{2}{c}{FFT} & \multicolumn{2}{c}{Directional-Split} \\
\cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8}
& $P$ & $T_P$ & $E_P$ & $T_P$ & $E_P$ & $T_P$ & $E_P$ \\
\midrule
\multirow{6}{*}{$N=2048$}& $1$ & $4.26\text{e}{+0}$ & $--$ & $3.10\text{e}{-1}$ & $--$ & $2.77\text{e}{-1}$ & $--$ \\
& $8$ & $6.66\text{e}{-1}$ & $0.80$ & $7.93\text{e}{-2}$ & $0.49$ & $3.77\text{e}{-2}$ & $0.92$ \\
& $16$ & $3.07\text{e}{-1}$ & $0.87$ & $3.57\text{e}{-2}$ & $0.54$ & $1.82\text{e}{-2}$ & $0.95$ \\
& $32$ & $1.72\text{e}{-1}$ & $0.77$ & $2.50\text{e}{-2}$ & $0.39$ & $8.78\text{e}{-3}$ & $0.99$ \\
& $64$ & $8.30\text{e}{-2}$ & $0.80$ & $1.43\text{e}{-2}$ & $0.34$ & $4.40\text{e}{-3}$ & $0.99$ \\
& $128$ & $4.33\text{e}{-2}$ & $0.77$ & $1.20\text{e}{-2}$ & $0.20$ & $2.45\text{e}{-3}$ & $0.88$ \\
\midrule
\multirow{7}{*}{$N=4096$}& $1$ & $1.38\text{e}{+1}$ & $--$ & $1.33\text{e}{+0}$ & $--$ & $1.09\text{e}{+0}$ & $--$ \\
& $8$ & $3.18\text{e}{+0}$ & $0.54$ & $3.58\text{e}{-1}$ & $0.47$ & $1.54\text{e}{-1}$ & $0.88$ \\
& $16$ & $1.83\text{e}{+0}$ & $0.47$ & $2.15\text{e}{-1}$ & $0.39$ & $7.86\text{e}{-2}$ & $0.87$ \\
& $32$ & $8.92\text{e}{-1}$ & $0.48$ & $1.17\text{e}{-1}$ & $0.36$ & $4.06\text{e}{-2}$ & $0.84$ \\
& $64$ & $4.84\text{e}{-1}$ & $0.45$ & $6.36\text{e}{-2}$ & $0.33$ & $2.03\text{e}{-2}$ & $0.84$ \\
& $128$ & $2.23\text{e}{-1}$ & $0.48$ & $3.84\text{e}{-2}$ & $0.27$ & $9.62\text{e}{-3}$ & $0.84$ \\
& $256$ & $1.05\text{e}{-1}$ & $0.51$ & $2.57\text{e}{-2}$ & $0.20$ & $5.22\text{e}{-3}$ & $0.81$ \\
\bottomrule
\end{tabular}
\label{Table:StrongScaling:PoissonProblem2d}
\end{table}
\begin{table}[htbp]\centering
\caption{ Execution time $T_P$ and parallel efficiency $E_P$ for
the 3D problem~\eqref{eqn:PoissonGMProblem} on an $N^3$ grid
with $P$ processors.}
\begin{tabular}{ll cccccc}\toprule
& & \multicolumn{2}{c}{Multigrid} & \multicolumn{2}{c}{FFT} & \multicolumn{2}{c}{Directional-Split} \\
\cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8}
& $P$ & $T_P$ & $E_P$ & $T_P$ & $E_P$ & $T_P$ & $E_P$ \\
\midrule
\multirow{6}{*}{$N=128$} & $1$ & $2.77\text{e}{+0}$ & $--$ & $1.60\text{e}{-1}$ & $--$ & $2.05\text{e}{-1}$ & $--$ \\
& $8$ & $4.68\text{e}{-1}$ & $0.74$ & $3.16\text{e}{-2}$ & $0.63$ & $2.65\text{e}{-2}$ & $0.97$ \\
& $16$ & $2.38\text{e}{-1}$ & $0.73$ & $1.88\text{e}{-2}$ & $0.53$ & $1.35\text{e}{-2}$ & $0.95$ \\
& $32$ & $1.40\text{e}{-1}$ & $0.62$ & $1.21\text{e}{-2}$ & $0.42$ & $6.75\text{e}{-3}$ & $0.95$ \\
& $64$ & $7.48\text{e}{-2}$ & $0.58$ & $6.67\text{e}{-3}$ & $0.38$ & $3.50\text{e}{-3}$ & $0.92$ \\
& $128$ & $5.67\text{e}{-2}$ & $0.38$ & $4.57\text{e}{-3}$ & $0.27$ & $1.88\text{e}{-3}$ & $0.85$ \\
\midrule
\multirow{7}{*}{$N=256$} & $1$ & $2.16\text{e}{+1}$ & $--$ & $1.43\text{e}{+0}$ & $--$ & $1.71\text{e}{+0}$ & $--$ \\
& $8$ & $3.58\text{e}{+0}$ & $0.75$ & $2.68\text{e}{-1}$ & $0.67$ & $2.26\text{e}{-1}$ & $0.95$ \\
& $16$ & $1.93\text{e}{+0}$ & $0.70$ & $1.53\text{e}{-1}$ & $0.58$ & $1.12\text{e}{-1}$ & $0.95$ \\
& $32$ & $1.15\text{e}{+0}$ & $0.59$ & $8.63\text{e}{-2}$ & $0.52$ & $5.51\text{e}{-2}$ & $0.97$ \\
& $64$ & $6.77\text{e}{-1}$ & $0.50$ & $5.48\text{e}{-2}$ & $0.41$ & $2.80\text{e}{-2}$ & $0.95$ \\
& $128$ & $3.97\text{e}{-1}$ & $0.42$ & $3.43\text{e}{-2}$ & $0.32$ & $1.52\text{e}{-2}$ & $0.88$ \\
& $256$ & $2.26\text{e}{-1}$ & $0.37$ & $2.35\text{e}{-2}$ & $0.24$ & $7.46\text{e}{-3}$ & $0.90$ \\
\bottomrule
\end{tabular}
\label{Table:StrongScaling:PoissonProblem3d}
\end{table}
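For reference, the tabulated efficiencies are consistent with the standard strong-scaling definition $E_P = T_1/(P\,T_P)$, where $T_1$ is the serial execution time. A minimal sketch (our reconstruction; the function name is illustrative):

```python
def strong_scaling_efficiency(T1, TP, P):
    """Standard strong-scaling efficiency: E_P = T_1 / (P * T_P)."""
    return T1 / (P * TP)

# Example, N = 128 multigrid entries from the 3D table (times in seconds):
# E_8 = 2.77 / (8 * 0.468) ~ 0.74, matching the tabulated value.
```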
Next we report the weak scaling results for the multigrid solver and the
directional-split solver as shown in
Figure~\ref{fig:WeakScaling:PoissonProblem}. In each set of
computations, the local problem size $n^d$ (where $N=n\cdot P_x$) is
held fixed as the number of processors is increased. In the ideal case,
the execution time should stay constant, since the workload
per processor does not change as the number of nodes increases. For 2D
problems ($d=2$), the local grid resolution on each subdomain is either
$n = 128$ or $256$, while for 3D problems ($d=3$) we use either $n=32$ or
$64$.
As expected~\cite{Baker2012,Guermond2011}, both solvers are weakly
scalable since the execution time stays essentially constant as the
problem size and number of processors increase. For the $n=128$
simulations, the execution time jumps suddenly at $P=25$, which
is due to increased communication costs occurring between blades
inside the same chassis. The blades in a chassis are connected through the same
switch, so multiple cores share the same connection; because the workload per
processor is so small, a noticeable jump appears as a result
of resource contention.
When comparing the execution time between solvers, the directional-split
solver is around $1.5$ to $5$ times faster than a single multigrid
iteration. Since the multigrid solver requires $5$ to $6$ iterations of
conjugate gradient, this results in an order of magnitude difference in
the total execution time. Of course, this difference would be reduced by
using a better initial guess in the multigrid solver which would in turn
require fewer iterations.
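The order-of-magnitude estimate follows directly from combining the quoted ranges; as a quick arithmetic check (the values below are taken verbatim from the text):

```python
# Per-iteration speedup of the directional-split solver over one
# multigrid iteration, as quoted in the text.
per_iteration = (1.5, 5.0)
# Multigrid/CG iteration counts quoted for a full solve.
iterations = (5, 6)

# Total speedup range: 7.5x at the low end, 30x at the high end,
# i.e., roughly an order of magnitude.
total_speedup = (per_iteration[0] * iterations[0],
                 per_iteration[1] * iterations[1])
```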
\begin{figure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.46\textwidth]{Figures/PoissonWeakScaling2d.pdf}
\label{fig:WeakScaling:PoissonProblem:2d}}
\qquad
\subfigure[]{\includegraphics[width=0.46\textwidth]{Figures/PoissonWeakScaling3d.pdf}
\label{fig:WeakScaling:PoissonProblem:3d}}
\caption{ Weak scaling of the multigrid and directional-split
solvers when approximating problem~\eqref{eqn:PoissonGMProblem} in
2D and 3D. For the 2D computations, MG-128 and MG-256 denote the
execution time of a single multigrid iteration using local
$n=128$ and $256$ grids. Likewise, DS-128 and DS-256 denote the
execution times for completely solving the directional-split
problem. For the 3D computations, we use local $n=32$ and $n=64$
grids, with the solver specified using the same 2D naming
convention.}
\label{fig:WeakScaling:PoissonProblem}
\end{center}
\end{figure}
As can be seen from this comparison, the directional-split solver
outperformed the other Poisson solvers in all non-serial computations. The
precise difference depends on the problem size $N$, the number of
processors $P$, and the hardware configuration of the
cluster. Furthermore, when solving the Poisson problem, we used the two
highly optimized libraries Hypre~\cite{Hypre} and
FFTW~\cite{FFTW}. Therefore, we would expect to see even greater
performance differences if we were to optimize the
directional-split solver to the same degree as these other solvers.
\subsection{Multiple Thin Ellipses in 2D}
\label{sec:TilingThinEllipsesProblem}
The next example is designed to explore in more detail the parallel
performance of GM-IB and BCM-IB (with Hypre) by computing a variation of
the thin ellipse problem from section~\ref{sec:ThinEllipseProblem}.
Because our 2D computations are performed on a doubly-periodic fluid
domain, the thin ellipse geometry is actually equivalent to an infinite
array of identical elliptical membranes. This periodicity in the
solution provides a simple mechanism for increasing the computational
complexity of a simulation by explicitly adding multiple periodic copies
while technically solving a problem with a solution that is identical to
that for a single membrane. Each copy of the original domain (see
section~\ref{sec:ThinEllipseProblem}) may then be handled by a different
processing node, which allows us to explore the parallel performance in
an idealized geometry.
Suppose that we would like to perform a parallel simulation using
$P=P_x\cdot P_y$ processing nodes. On such a cluster, we can simulate a
rectangular $P_x\times P_y$ array of identical ellipses, situated on the
fluid domain $\Omega=[0, P_x] \times [0, P_y]$. We subdivide the domain
into equal partitions so that each processor handles the unit-square
subdomain $\Omega_{\ell,m}=[\ell-1, \ell] \times [m-1, m]$, for
$\ell=1,2,\ldots, P_x$ and $m=1,2,\ldots, P_y$. If we denote by
$(x_{\ell,m}, y_{\ell,m})$ the centroid of $\Omega_{\ell,m}$, then each
such subdomain contains a single ellipse having the initial
configuration
\begin{gather*}
\bs{X}_{\ell,m}(s,0) = \left( x_{\ell,m} + r_1 \cos(2 \pi s), \;
y_{\ell,m} + r_2 \sin(2 \pi s) \right),
\end{gather*}
where $s\in[0,1]$ is the same Lagrangian parameter as before. In order
to make the flow slightly more interesting, and to test the ability of
our parallel algorithm to handle immersed boundaries that move between
processing nodes, we impose a constant background fluid velocity field
$\bs{u}(\bs{x},0) = \frac{1}{2}\brac{1, \sqrt{3}}$ instead of the zero initial
velocity used in section~\ref{sec:ThinEllipseProblem}. Snapshots of the
solution for a $2\times 2$ array of ellipses are given in
Figure~\ref{fig:MultipleThinEllipseSim}.
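The initial configuration above is straightforward to generate; the sketch below (hypothetical helper names; the default semi-axes $r_1=\frac{5}{28}$, $r_2=\frac{7}{20}$ are the values used in the simulations described later in this section) builds the discretized $P_x\times P_y$ array of ellipses:

```python
import math

def ellipse(l, m, r1, r2, n):
    """Discretized initial boundary X_{l,m}(s,0) for the ellipse centered
    in the unit-square subdomain Omega_{l,m} = [l-1,l] x [m-1,m]."""
    xc, yc = l - 0.5, m - 0.5  # centroid of Omega_{l,m}
    return [(xc + r1 * math.cos(2 * math.pi * k / n),
             yc + r2 * math.sin(2 * math.pi * k / n)) for k in range(n)]

def ellipse_array(Px, Py, r1=5/28, r2=7/20, n=64):
    """One ellipse per processor subdomain on Omega = [0,Px] x [0,Py]."""
    return {(l, m): ellipse(l, m, r1, r2, n)
            for l in range(1, Px + 1) for m in range(1, Py + 1)}
```

Each subdomain's ellipse is identical up to translation, which is what makes the setup a clean weak-scaling test.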
\begin{figure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/MultipleThinEllipseSim1.pdf}
\label{fig:MultipleThinEllipseSim1}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/MultipleThinEllipseSim2.pdf}
\label{fig:MultipleThinEllipseSim2}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/MultipleThinEllipseSim3.pdf}
\label{fig:MultipleThinEllipseSim3}}
\qquad
\subfigure[]{\includegraphics[width=0.40\textwidth]{Figures/MultipleThinEllipseSim4.pdf}
\label{fig:MultipleThinEllipseSim4}}
\caption{Simulation of a $2\times 2$ array of thin ellipses.}
\label{fig:MultipleThinEllipseSim}
\end{center}
\end{figure}
To investigate the parallel performance of the GM-IB and BCM-IB
algorithms, we simulate different-sized arrays of thin ellipses
corresponding to values of $P_x$ and $P_y$ in the range $[1,16]$. For
each simulation, we use parameters $\mu=0.01$, $\rho=1$, $\sigma=1$,
$r_1=\frac{5}{28}$, $r_2=\frac{7}{20}$, $h_s=\frac{4}{19}h$ and
$\Delta t=0.01 h$, and we compute up to time $t=1.00$ using two values of the
fluid mesh width $h=\frac{1}{128}$ and $\frac{1}{256}$. In the case of
perfect parallel scaling, the execution time should remain constant
between simulations because of our problem construction, in which
doubling the problem size also doubles the number of nodes
$P$. The problem therefore represents a weak scalability test for our
algorithm, in which the workload per processing node remains constant as
the number of nodes increases.
The execution times for various array sizes ($P_x$, $P_y$) are
summarized in Figure~\ref{fig:WeakScaling:IBProblem} for both IB
solvers. The execution time remains roughly constant in both cases,
which indicates that the GM-IB and BCM-IB implementations are
essentially weakly scalable. Notice that there is a slight degradation
in performance as $P$ increases; for example, on the $h=\frac{1}{128}$
grid the GM-IB execution time increases by roughly $20\%$ between $P=64$
to $P=254$, which is minimal considering the large variation in problem
size.
When comparing solvers, GM-IB outperforms BCM-IB by more than a factor
of 5 in execution time. This difference in performance is largely due to
the efficiency of the linear solvers. Here, the BCM-IB solver uses
conjugate gradient with a multigrid preconditioner (PFMG) implemented
within Hypre~\cite{Hypre}, where the initial guess is set to the
solution from the previous time step. For this particular problem, the
multigrid solver typically requires $2$ iterations to solve the momentum
equations and $5$ iterations for the projection step. Naturally, if
either iteration count could be reduced, the performance of the BCM-IB
solver would improve significantly. However, since an iteration of multigrid
is substantially slower than the directional-split solver
(see section~\ref{sec:PoissonComparison}), GM-IB
would continue to outperform BCM-IB.
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/IBWeakScaling2d.pdf}
\caption{ Execution time (in seconds) for the multiple thin ellipse
problem using the BCM-IB and GM-IB algorithms ($P$ ellipses, $P$
processors, local grids with $n=128$ and $256$).}
\label{fig:WeakScaling:IBProblem}
\end{center}
\end{figure}
\subsection{Cylindrical Shell in 3D}
\label{sec:CylinderSimulation}
For our final test case, we consider a three-dimensional example in
which the immersed boundary is a cylindrical elastic shell. The
cylinder initially has an elliptical cross-section with semi-axes
$r_1$ and $r_2$ that is parameterized by
\begin{gather*}
\bs{X}(s,r,0) = \left( r ,~ \frac{1}{2} + r_1 \cos(2 \pi s) ,~
\frac{1}{2} + r_2 \sin(2 \pi s) \right),
\end{gather*}
using the two Lagrangian parameters $s,r\in[0,1]$. The force density
is
\begin{gather*}
\bs{\mathcal{F}}[\bs{X}(s,r,t)] =
\sigma_s \pdd{\bs{X}}{s} +
\sigma_r \pd{}{r}\brac{\pd{\bs{X}}{r} \brac{ 1 -
\frac{L}{\left|\pd{\bs{X}}{r}\right|} }},
\end{gather*}
which corresponds to an elastic shell made up of an interwoven mesh of
one-dimensional elastic fibers. The $s$ parameterization identifies
individual fibers running around the elliptical cross-section of the
cylinder, each having zero resting length and elastic stiffness
$\sigma_s$. On the other hand, the $r$ parameterization describes
fibers running axially along the length of the cylinder, each having a
non-zero resting-length $L$ and stiffness $\sigma_r$. Since the domain
is periodic in all directions, the ends of the cylinder are connected to
their periodic copies so that there are no ``cuts'' along the
fibers. This problem is essentially equivalent to the two-dimensional
thin ellipse problem considered in section~\ref{sec:ThinEllipseProblem},
with the only difference being that the 2D problem does not have any
fibers running along the non-existent third dimension. The 2D thin
ellipse and 3D cylinder problems are only strictly equivalent when
$\sigma_r=0$. However, we take $\sigma_r=\sigma_s=1$ and $L=1$ in order
to maintain the integrity of the elastic shell and to avoid any drifting
of elliptical cross-sections in the $x$-direction. The elastic shell is
discretized using equally-spaced values of the Lagrangian parameters $s$
and $r$. In the simulations that follow, we use the parameter values
$\mu=0.01$, $\rho=1$, $r_1=\frac{5}{28}$, $r_2=\frac{7}{20}$,
$N_s=\frac{19}{4}N$, $N_r=3 N$ and $\Delta t=0.04/N$ where $N=128$ and
$N=256$.
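As a concrete illustration of the second (axial-fiber) term in the force density above, the sketch below discretizes a single open fiber as a chain of springs with resting length $L$. This is a simplified, non-periodic reconstruction with hypothetical helper names; in the actual computation the fiber closes on its periodic image and is coupled to the fluid.

```python
import math

def axial_fiber_force(X, sigma_r, L, dr):
    """Interior force densities for one axial fiber, discretizing
    sigma_r * d/dr[ (dX/dr) * (1 - L/|dX/dr|) ] with divided differences.
    X is a list of 3D points sampling X(r_j) at parameter spacing dr."""
    def tension(a, b):
        d = [(b[i] - a[i]) / dr for i in range(3)]  # dX/dr on the segment
        mag = math.sqrt(sum(c * c for c in d))
        return [sigma_r * (1.0 - L / mag) * c for c in d]
    T = [tension(X[j], X[j + 1]) for j in range(len(X) - 1)]
    # Force density at interior points: difference of adjacent segment tensions.
    return [[(T[j][i] - T[j - 1][i]) / dr for i in range(3)]
            for j in range(1, len(X) - 1)]
```

A uniformly stretched straight fiber carries constant tension, so the interior force density vanishes, while nonuniform stretching produces restoring forces.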
The solution dynamics are illustrated by the snapshots pictured in
Figure~\ref{fig:CylinderSim}, and we observe that the 3D elastic shell
oscillates at roughly the same frequency as the 2D ellipse shown in
Figure~\ref{fig:ThinEllipseSim}. Although the geometry of this problem
may seem like a special case because of the alignment of the axial
fibers with the $x$-coordinate direction, this feature has no
noticeable impact on the parallel performance measurements. Indeed,
fiber alignment does not affect the communication cost because
all IB points and force connections residing in the ghost region are
communicated regardless of whether or not they actually cross subdomain
boundaries.
In Table~\ref{Table:CylinderScaling}, we present measurements of
execution time and efficiency that illustrate the parallel scaling over
the first $100$ time steps with the number of processors $P$ varying
between 1 and 128. In all runs, the domain is partitioned evenly between
the $P$ processing nodes using rectangular boxes. When the IB points are
evenly distributed between all domain partitions, we observe good parallel
efficiency. On average, we obtain a speedup factor of $1.85$ when doubling the
number of processors for this particular problem.
For larger runs, the parallel efficiency does deteriorate as the local
subdomain shrinks in size. For example, when subdividing an $N=128$
grid to a $(P_x,P_y,P_z) = (32,2,2)$ array of processors, the local grid
size is $4 \times 64 \times 64$. Therefore, every processor has to
communicate all data within its subdomain to neighbouring processors,
since the entire domain overlaps with ghost regions. For this reason,
the GM-IB algorithm performs remarkably well, given the circumstances.
Lastly, in Table~\ref{Table:CylinderScaling}, we show the execution time
for situations where the immersed boundary is not evenly distributed
between processors. Since no load balancing strategy is incorporated in
our implementation, the parallel efficiency drops as the computational
work becomes more unevenly divided. Here, the total execution time
becomes increasingly dominated by the IB portion of the calculation as
the workload becomes more unbalanced. The parallel efficiency in this
situation could be improved by partitioning the fluid domain in a
dynamic manner, such as is done by IBAMR using SAMRAI
in~\cite{Griffith2010}.
\begin{sidewaysfigure}[!tbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.38\textwidth]{Figures/CylinderSim1.jpg}
\label{fig:CylinderSim1}}
\qquad
\subfigure[]{\includegraphics[width=0.38\textwidth]{Figures/CylinderSim2.jpg}
\label{fig:CylinderSim2}}
\qquad
\subfigure[]{\includegraphics[width=0.38\textwidth]{Figures/CylinderSim3.jpg}
\label{fig:CylinderSim3}}
\qquad
\subfigure[]{\includegraphics[width=0.38\textwidth]{Figures/CylinderSim4.jpg}
\label{fig:CylinderSim4}}
\caption{Snapshots of an oscillating 3D cylindrical shell ($N=128$)
that is initially stretched outward along the $x$--direction.}
\label{fig:CylinderSim}
\end{center}
\end{sidewaysfigure}
\begin{table}[htbp]\centering\small
\caption{Execution time (in seconds) and efficiency for the
3D cylindrical shell problem for a fixed problem size
while varying the number of processing nodes $P$.}
\begin{tabular}{cc cc cc}\toprule
& & \multicolumn{2}{c}{$N=128$} & \multicolumn{2}{c}{$N=256$} \\
\cmidrule(r){3-4} \cmidrule(r){5-6}
$P$ & ($P_x,P_y,P_z$)& Wall Time & Efficiency & Wall Time & Efficiency \\
\midrule
$1$ & ($1,1,1$) & $4.01\text{e}{+2}$ & $1.00$ & $2.60\text{e}{+3}$ & $1.00$ \\
$2$ & ($2,1,1$) & $2.11\text{e}{+2}$ & $0.93$ & $1.43\text{e}{+3}$ & $0.91$ \\
$4$ & ($4,1,1$) & $1.18\text{e}{+2}$ & $0.81$ & $7.06\text{e}{+2}$ & $0.92$ \\
$8$ & ($2,2,2$) & $6.07\text{e}{+1}$ & $0.82$ & $3.52\text{e}{+2}$ & $0.92$ \\
$16$ & ($4,2,2$) & $3.18\text{e}{+1}$ & $0.82$ & $1.83\text{e}{+2}$ & $0.89$ \\
$32$ & ($8,2,2$) & $1.65\text{e}{+1}$ & $0.79$ & $9.64\text{e}{+1}$ & $0.84$ \\
$64$ & ($16,2,2$) & $9.42\text{e}{+0}$ & $0.70$ & $5.39\text{e}{+1}$ & $0.75$ \\
$128$ & ($32,2,2$) & $7.76\text{e}{+0}$ & $0.55$ & $3.02\text{e}{+1}$ & $0.67$ \\
\midrule
$32$ & ($4,4,2$) & $2.53\text{e}{+1}$ & $0.52$ & $1.30\text{e}{+2}$ & $0.62$ \\
$64$ & ($4,4,4$) & $1.42\text{e}{+1}$ & $0.46$ & $7.13\text{e}{+1}$ & $0.57$ \\
$128$ & ($8,4,4$) & $7.76\text{e}{+0}$ & $0.42$ & $3.72\text{e}{+1}$ & $0.55$ \\
\bottomrule
\end{tabular}
\label{Table:CylinderScaling}
\end{table}
\section{Conclusions}
\label{sec:Conclusions}
We have developed a new algorithm for the immersed boundary
problem on distributed-memory parallel computers that is based on the
pseudo-compressibility method of Guermond and Minev for solving the
incompressible Navier-Stokes equations. The fundamental advantage of
this fluid solver is the direction-splitting strategy applied to the
incompressibility constraint, which reduces to solving a series of
tridiagonal linear systems with an extremely efficient parallel
implementation.
Numerical computations demonstrate the ability of our method to simulate
a wide range of immersed boundary problems that includes not only 2D
flows containing isolated fibers and thick membranes constructed of
multiple nested fibers, but also 3D flows containing immersed elastic
surfaces. The strong and weak scalability of our algorithm is
demonstrated in tests with up to \changed{256} distributed processors,
where excellent speedups are observed. \changed{Furthermore, comparisons
against FFT-based (FFTW~\cite{FFTW}) and multigrid
(Hypre~\cite{Hypre}) solvers show substantial performance
improvements when using the Guermond and Minev solver.} We observe
that, since our implementation does not apply any load balancing
strategy, some degradation in the parallel efficiency occurs in the
immersed boundary portion of the computation when the elastic membrane
is not equally divided between processors.
We believe that our computational approach is a promising one for
solving fluid-structure interaction problems in which the solid elastic
component takes up a large portion of the fluid domain, such as occurs
with dense particle suspensions~\cite{TornbergShelley2004} or very
complex elastic structures that are distributed throughout the fluid.
These are problems where local adaptive mesh refinement is less likely
to offer any advantage because of the need to use a nearly-uniform fine
mesh over the entire domain in order to resolve the immersed boundary.
It is for this class of problems that we expect our approach to offer
significant advantages over methods such as that of Griffith et
al.~\cite{Griffith2007}.
We plan in future work to implement modifications to our algorithm that
will improve the parallel scaling, focusing particularly on improving
memory access patterns for the Lagrangian portion of the calculation
related to force spreading and velocity interpolation. We will also investigate
code optimizations that aim to reduce cache misses and exploit on-chip
parallelism. \changed{Lastly, work has started on applying this algorithm to
study spherical membrane dynamics, particle sedimentation, and
fiber suspensions.}
\section{Introduction}
\label{intro}
Recently, preparations for terrestrial laboratory experiments with heavy-ion
collisions have been started, in which it is planned to access the
high-density/low-temperature region of the QCD phase diagram and explore
physics at the phase boundary between hadronic and quark matter, e.g.,
within the CBM experiment at FAIR Darmstadt.
Predictions for critical parameters in this domain of the temperature-density
plane are uncertain since they cannot be checked against lattice-QCD
simulations, which have become rather precise at zero baryon density.
Chiral quark models have been developed and calibrated with these results.
They can be extended into the finite-density domain and suggest a rich
structure of color superconducting phases.
These hypothetical phase structures may imply consequences for the structure
and evolution of compact stars, where the constraints from mass and radius
measurements as well as from the cooling phenomenology have recently reached an
unprecedented level of precision, which allows one to develop decisive tests of
models for high-density QCD matter.
Among compact stars (we will address them also with the general term
{\it neutron stars} (NSs)) one can distinguish three main classes according
to their composition:
hadron stars, quark stars (bare surfaces or with thin crusts),
and hybrid stars (HyS). The latter are the subject of the present study.
Observations of the surface thermal emission of NSs
are one of the most
promising ways to derive detailed information about processes in the interiors
of compact objects (see \cite{Page:2005fq,Page:2004fy,Yakovlev:1999sk}
for recent reviews).
In \cite{Popov:2004ey} (Paper I hereafter) we
proposed to use a population synthesis of close-by cooling NSs as
an additional test for theoretical cooling curves.
This tool, based on calculation of the $\mathrm{Log\, N}$-$\mathrm{Log\,S}$
distribution, was shown to be an effective supplement to the standard
$\mathrm{T}$~-~$\mathrm{t}$ (Temperature vs. age) test.
In Paper I we used cooling curves for hadron stars calculated
in \cite{Blaschke:2004vq}.
Here we study cooling curves of HyS
calculated in \cite{Grigorian:2004jq} (Paper II hereafter).
In addition to the $\mathrm{T}$~-~$\mathrm{t}$ and
$\mathrm{Log\, N}$-$\mathrm{Log\,S}$ tests, we also use the brightness
constraint (BC) test suggested in \cite{Grigorian:2005fd}.
We apply altogether three tests -- $\mathrm{T}$-$\mathrm{t}$,
$\mathrm{Log\, N}$-$\mathrm{Log\,S}$, and BC -- to five sets of cooling
curves of HyS. In the next section we describe the calculation of these curves.
In Section III we discuss the population synthesis scenario.
After that we present our results which imply the conjecture of a new mass
spectrum constraint from Vela-like objects.
In Section 5 we discuss the results and present our conclusions in Section~6.
\section{Cooling curves for hybrid stars}
\label{cool}
\subsection{Hybrid stars}
The description of compact star cooling with color superconducting
quark matter interior is based on the approach introduced in Paper II
which will be briefly reviewed here, see also \cite{Blaschke:2005dc} for
a recent summary.
A nonlocal, chiral quark model is employed which supports
compact star configurations with a rather large quark core
due to the relatively low critical densities for the
deconfinement phase transition from hadronic matter to color superconducting
quark matter.
In the interior of the compact star in late cooling
stages, when the temperature is well below the opacity temperature
$T_{\rm opac}\sim 1$ MeV for neutrino untrapping, four phases of
quark matter are possible: normal quark matter (NQ),
two-flavor superconducting matter (2SC), a mixed phase of both
(NQ-2SC) and the color-flavor-locking phase (CFL). The
state-of-the-art calculations for a three-flavor quark matter
phase diagram within a chiral (NJL) quark model of quark matter
and selfconsistently determined quark masses are described in
Refs. \cite{Ruster:2005jc,Blaschke:2005uj,Abuki:2004zk}.
The detailed structure of the phase diagram in these models still
depends on the strength parameter $G_D$ of the diquark coupling
(and on the formfactor of the momentum space regularization, see
\cite{Aguilera:2004ag}). For all values of $G_D$ no stable hybrid
stars with a CFL phase have been found yet, see
\cite{Buballa:2003qv}, and Refs. therein.
We will restrict ourselves here to the discussion of the 2SC and NQ phases.
The 2SC phase occurs at lower baryon densities than the CFL phase
\cite{Steiner:2002gx,Neumann:2002jm}.
For applications to compact stars the omission
of the strange quark flavor is justified by the fact that chemical
potentials in central parts of the stars barely reach the
threshold value at which the mass gap for strange quarks breaks
down and they may appear in the system \cite{Gocke:2001ri}.
It has been shown in \cite{Blaschke:2003yn} that a nonlocal chiral quark
model with the Gaussian formfactor ansatz leads to an early onset of the
deconfinement transition so that hybrid stars with
large quark matter cores \cite{Grigorian:2003vi} can be discussed.
In describing the hadronic part of the hybrid star, as in
\cite{Blaschke:2004vq}, we adopt the Argonne $V18+\delta v+UIX^*$ model
for the EoS \cite{Akmal:1998cf}, which is based on
recent data for the nucleon-nucleon interaction with the
inclusion of a parameterized three-body force and relativistic
boost corrections.
Actually we continue to adopt an analytic parameterization of this
model by Heiselberg and Hjorth-Jensen \cite{Heiselberg:1999fe}, where the fit
is done for $n<4~n_0$ with $n_0=0.16$ fm$^{-3}$ being the nuclear saturation
density. This EoS fits the symmetry energy to the
original Argonne $V18+\delta v +UIX^*$ model in the mentioned
density interval and smoothly incorporates causality constraints at high
densities.
The threshold density for the DU process is $n_c^{\rm DU}\simeq~5.19~n_0$,
i.e., it occurs in stars with masses exceeding
$M_c^{\rm DU}\simeq 1.839~M_{\odot}$.
\subsection{Cooling}
For the calculation of the cooling of the hadronic part of the
hybrid star we use the same model as in \cite{Blaschke:2004vq}.
The main processes are the medium modified Urca (MMU) and the pair breaking
and formation (PBF) processes for our adopted EoS of hadronic
matter.
For a recent, more detailed discussion of these processes and the role
of the $3P_2$ gap, see \cite{Grigorian:2005fn}.
The possibilities of pion condensation and of other so-called
exotic processes are suppressed, since in the model \cite{Blaschke:2004vq}
these processes may occur only for neutron star masses exceeding
$M_c^{\rm quark}= 1.214~M_{\odot}$. The DU process is irrelevant
in this model up to very large neutron star masses
$M>1.839~M_{\odot}$. The $1S_0$ neutron and proton gaps are taken
the same as those shown by thick lines in Fig. 5 of Ref.
\cite{Blaschke:2004vq}. We pay particular attention to the fact that the
$3P_2$ neutron gap is additionally suppressed by the factor $0.1$
compared to that shown in Fig. 5 of \cite{Blaschke:2004vq}. This
suppression is motivated by the result of the recent work in
\cite{Schwenk:2003bc} and is required to fit the cooling data.
For the calculation of the cooling of the quark core in the hybrid
star we use the model introduced in \cite{Blaschke:2004vq}.
We incorporate the most
efficient processes: the quark direct Urca (QDU) processes on
unpaired quarks, the quark modified Urca (QMU), the quark
bremsstrahlung (QB), the electron bremsstrahlung (EB), and the
massive gluon-photon decay (see \cite{Blaschke:1999qx}).
Following \cite{Jaikumar:2001hq}
we include the emissivity of the quark pair formation and breaking
(QPFB) processes. The specific heat incorporates the quark
contribution, the electron contribution, and the massless and
massive gluon-photon contributions. The heat conductivity contains
quark, electron and gluon terms.
The 2SC phase has one unpaired color of quarks (say blue) for which
the very effective quark DU process works and leads to a too fast
cooling of the hybrid star in disagreement with the data
\cite{Grigorian:2004jq}. We have suggested to assume a weak pairing channel
which could lead to a small residual pairing of the hitherto
unpaired blue quarks. We call the resulting gap $\Delta_X$ and
show that for a density dependent ansatz
\begin{equation}
\Delta_{\mathrm{X}}= \Delta_0 \, \exp{\left[-\alpha\, \left(
\frac{\mu - \mu_c}{\mu_c}\right)\right]}
\label{gap}
\end{equation}
with $\mu$ being the quark chemical potential and $\mu_c=330$ MeV.
Here we use different values of $\alpha$ and $\Delta_0$, which are
given in Table 1 and characterize the model.
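Eq.~(\ref{gap}) is straightforward to evaluate; the sketch below (hypothetical function name, with $\mu$ in MeV) illustrates its behavior:

```python
import math

MU_C = 330.0  # critical quark chemical potential, in MeV

def delta_X(mu, delta0, alpha, mu_c=MU_C):
    """Density-dependent X-gap ansatz:
    Delta_X = Delta_0 * exp(-alpha * (mu - mu_c) / mu_c)."""
    return delta0 * math.exp(-alpha * (mu - mu_c) / mu_c)
```

At the deconfinement onset, $\Delta_X(\mu_c)=\Delta_0$; for $\alpha>0$ the gap decreases with increasing density, so the residual pairing of the blue quarks weakens toward the star's center.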
The physical origin of the X-gap remains to be identified. It
could occur, e.g., due to quantum fluctuations of color-neutral
quark sextet complexes \cite{Barrois:1977xd}. Such calculations have not
yet been performed with the relativistic chiral quark models.
For sufficiently small $G_D$, the 2SC pairing may be inhibited
altogether. In this case, due to the absence of this competing spin-0
phase with large gaps, one may invoke a spin-1 pairing channel in
order to avoid the DU problem. In particular the
color-spin-locking (CSL) phase \cite{Schafer:2000tw} may be in accordance
with cooling phenomenology as all quark species are paired and the
smallest gap channel may have a behavior similar to Eq.
(\ref{gap}), see \cite{Aguilera:2005tg}. A consistent cooling
calculation for this phase, however, requires the evaluation of
neutrino emissivities (see, e.g. \cite{Schmitt:2005wg} and references therein)
and transport coefficients, which is still to be performed.
Gapless superconducting phases can occur when the diquark coupling
parameter is small so that the pairing gap is of the order of the
asymmetry in the chemical potentials of the quark species to be
paired. Interesting implications for the cooling of gapless CFL
quark matter have been conjectured due to the particular behavior
of the specific heat and neutrino emissivities \cite{Alford:2004zr}.
For reasonable values of $G_D$, however, these phases
occur only at temperatures too high to be relevant for late
cooling, if a stable hybrid configuration with these phases could
be achieved at all \cite{Blaschke:2005uj}.
The weak pairing channels are characterized by gaps typically in
the interval $10~$keV~$\div~1~$MeV; see the discussion of
different attractive interaction channels in Ref.~\cite{Alford:2002rz}.
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{M1.ps}
\caption[]{Hybrid star cooling curves for Model I.
Different lines correspond to compact star mass values indicated in the legend
(in units of $M_\odot$), data points with error bars are taken from Ref.
\cite{Page:2004fy}. For the explanation of shaded areas, see text.}
\label{fig:bc1}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{M2.ps}
\caption[]{Same as Fig. \ref{fig:bc1} for Model II.} \label{fig:bc2}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{M3.ps}
\caption[]{Same as Fig. \ref{fig:bc1} for Model III. } \label{fig:bc3}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{M4.ps}
\caption[]{Same as Fig. \ref{fig:bc1} for Model IV. } \label{fig:bc4}
\end{figure}
In the figures we present $\mathrm{T}$-$\mathrm{t}$ plots for
four models used in this paper. On each plot data points for
known cooling NSs are added (see details in \cite{Grigorian:2005fd}).
The hatched trapezoidal region represents the brightness constraint
(BC). For each model, nine cooling curves are shown for configurations
with mass values corresponding to the binning of the population synthesis
calculations explained in the next section.
Clearly, all models satisfy the BC. As for the
$\mathrm{T}$-$\mathrm{t}$ test, the situation is different.
Model II does not pass the test because even the highest-mass
configuration (which corresponds to the coolest HyS) cannot explain the
lowest data points. Accordingly, Table~1 marks this model as failing
the test.
In this work we want to introduce a more detailed measure for the
ability of a cooling model to describe observational data in the
temperature-age diagram.
We assign five grey values to regions of compact star masses in the
$T-t$ diagram which encode the likelihood that stars in that
mass interval can be found in the solar neighborhood, according to the
population synthesis scenario, see Fig. \ref{fig:mass}.
The darkest grey value, for example, corresponds to the mass interval
$1.35 \div 1.45$ M$_\odot$, for which the population synthesis predicts
the largest number of objects.
According to this refined mass spectrum criterion a cooling model is
optimal when the darkness of the grey value is in accordance with the
number of observed objects in that region of the temperature-age diagram.
This criterion is ideally fulfilled for Model IV, where with only one exception
all objects are found in the two bands with darker grey values
whereas for Models I - III about half of the objects are situated in
light grey or even white regions.
\section{Population synthesis scenario}
\label{pop}
Population synthesis is a frequently used technique in astrophysics
described, e.g., in the review \cite{Popov:2004gw} where further
references can be found.
The idea is to construct an evolutionary scenario for an artificial
population of certain astronomical objects. The comparison with observations
gives the opportunity to test our understanding of evolutionary laws and
initial conditions for these sources.
The scenario that we use in this paper is nearly identical to the one
used in Paper I. We just briefly recall the main elements and
then describe the only small difference in the mass spectrum.
The main ingredients of the population synthesis model we use are:
the initial
distribution of NSs and their birth rate; the velocity distribution of NSs;
the mass spectrum of NSs; cooling curves and interstellar absorption.
In this series of papers in which we use the population synthesis model as a
test of the theory of thermal evolution of NSs,
we assume that the set of cooling curves is the most
undetermined ingredient. So, we make an attempt to test it.
The cooling curves used in this paper are described in Sec. 2.
We assume that NSs are born in the Galactic disc and in the Gould Belt.
The disc region is calculated up to 3 kpc from the Sun, and is assumed to be
of zero thickness. The birth rate in the disc part of the distribution
is taken as 250 NS per Myr.
The Gould Belt is modeled as a flat disc-like structure with a hole
in the center (see the Belt description in \cite{p97}).
The inclination of the Belt relative to the galactic plane is
$18^{\circ}$. The NS birth rate in the Belt is 20 per Myr.
The velocity distribution of NSs is not well known.
In our calculations we use the one proposed by \cite{Arzoumanian:2001dv}.
This is a bimodal distribution with two
maxima at $\sim 127$ and $\sim707$~km~s$^{-1}$.
Recent results question this bimodality \cite{Hobbs:2005yx}.
However, since the time scales in our calculations typically
are not very long, the exact form of the distribution is not very important.
For the calculation of the column density towards a given NS we use the same
approximation as we used before.
It depends only on the distance to the galactic center and the height above
the galactic plane (see Fig.~1 in \cite{Popov:1999qn}).
The detailed structure of the interstellar medium (ISM)
is not taken into account, except for the Local Bubble, which is
modeled as a 140-pc sphere centered on the Sun.
The mass spectrum is a crucial ingredient of the scenario.
The main idea behind its derivation
is the same as given in \cite{Popov:2003iq}.
First we take all massive stars which can produce
a NS (spectral classes B2-O8)
from the HIPPARCOS catalogue with parallaxes $<0.002$~arcsec.
Then for each spectral class we assign a mass interval.
In the next step using calculations by \cite{woosley} we obtain
the baryon masses of compact remnants.
Then we have to calculate the gravitational mass.
In this paper, unlike our previous studies where we just used the formula
from \cite{Timmes:1995kp}
$M_{\mathrm{bar}}-M_{\mathrm{grav}}=\alpha \, M_{\mathrm{grav}}^2$ with
$\alpha=0.075$, we use $M_{\mathrm{grav}}$ accurately calculated for the
chosen configuration.
Since the approximation from \cite{Timmes:1995kp} is
very good, the difference with the mass spectrum we used in Paper I is
tiny, and appears only in the three most massive bins of our spectrum.
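The quadratic relation above can be inverted in closed form for $M_{\mathrm{grav}}$. A minimal numerical sketch (the input baryon mass below is illustrative, not taken from the paper's mass tables):

```python
import math

ALPHA = 0.075  # 1/M_sun, coefficient from Timmes et al.

def grav_mass(m_bar, alpha=ALPHA):
    """Invert M_bar - M_grav = alpha * M_grav**2 for the physical root."""
    # alpha*M_g**2 + M_g - M_bar = 0  ->  take the positive root
    return (math.sqrt(1.0 + 4.0 * alpha * m_bar) - 1.0) / (2.0 * alpha)

m_g = grav_mass(1.5)  # e.g. a 1.5 M_sun baryon mass
# plugging the result back reproduces the baryon mass
assert abs((1.5 - m_g) - ALPHA * m_g ** 2) < 1e-12
```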
In this paper we slightly rebinned
the mass spectrum in order to have a better
coverage of the cooling behaviour for the chosen configurations.
As before we use eight mass bins defined by their borders:
1.05; 1.13; 1.22; 1.28; 1.35; 1.45; 1.55; 1.65; 1.75~$M_{\odot}$,
see Fig.~\ref{fig:mass}.
The critical mass for the formation of a quark core is close to
1.22~$M_{\odot}$.
Therefore, bins are chosen such that the first two represent purely
hadronic stars.
The bins are of different widths.
The outermost bins have a width of 0.1~$M_{\odot}$.
We do not expect HyS with masses $M<1.05$~M$_{\odot}$.
The upper boundary of the eighth bin lies close to the
maximum mass allowed by the chosen configuration.
Following the suggestion by \cite{Timmes:1995kp} we make runs for two
modifications of the mass spectrum.
Besides using the full range of masses, we
perform, in addition, calculations for a truncated spectrum.
In this case the contributions of the first two bins are added to the third bin.
This situation reflects the possibility that stars with
$M\stackrel{<}{\sim} 11 \, M_{\odot}$
can produce NSs of similar masses close to ${\sim} 1.27\, M_{\odot}$.
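The truncation amounts to a simple reshuffling of bin weights, which can be sketched as follows (the weights below are hypothetical, chosen only to illustrate the bookkeeping, not the actual spectrum):

```python
def truncate_spectrum(weights):
    """Move the contributions of the first two mass bins into the third."""
    w = list(weights)
    w[2] += w[0] + w[1]
    w[0] = w[1] = 0.0
    return w

# hypothetical weights for the eight bins (not the paper's numbers)
full = [0.10, 0.12, 0.20, 0.25, 0.18, 0.08, 0.05, 0.02]
trunc = truncate_spectrum(full)
assert abs(sum(trunc) - sum(full)) < 1e-12  # normalization preserved
assert trunc[0] == trunc[1] == 0.0
```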
As in Paper I we neglect effects of a NS atmosphere, and use pure blackbody
spectra. As we do not address particular sources this seems to be a valid
approximation.
\begin{figure}[t]
\includegraphics[width=0.46\textwidth,angle=0]{hybrid_mass.ps}
\caption[]{The adopted mass spectrum, binned over eight intervals of
different widths. The non-truncated spectrum is shown (see text).
}
\label{fig:mass}
\end{figure}
The population synthesis code calculates spatial trajectories of NSs
with a time step of $10^4$~yrs. At each point of a trajectory, the surface
temperature of the NS is taken from the set of cooling curves.
Calculations for an individual track are stopped at the age
when the hottest NS (for all five models here this is a star with $M=1.1\,
M_{\odot}$, unless the truncated mass spectrum is used)
reaches the temperature $10^5$~K.
Such a low temperature is below the detection limit of ROSAT even for
a very short distance from the observer.
With the known distance from the Sun and the ISM
distribution we calculate the column density. Finally, count rates are
calculated using the ROSAT response matrix. Results are summarized along
each individual trajectory. We calculate 5,000 tracks for each model. Each
track is applied to all eight cooling curves.
With a typical cooling timescale of about 1~Myr, we obtain $\sim 4 \cdot 10^6$
``sources''.
The results are then normalized to the chosen NS formation rate
(290 NSs in the whole region of the problem).
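The counting logic of the synthesis can be sketched in toy form. Everything below (cooling law, flux proxy, distance sampling) is an illustrative stand-in for the actual code, which uses tabulated cooling curves, the ISM model and the ROSAT response matrix:

```python
import random

random.seed(1)
STEP_YR = 1e4  # time step of the trajectory integration

def surface_temperature(age_yr):
    # toy power-law cooling curve, standing in for the tabulated curves
    return 1e6 * (age_yr / 1e5) ** -0.5

def count_rate(temp, dist_pc):
    # crude blackbody-flux proxy; absorption and the detector response
    # are ignored in this sketch
    return (temp / 1e6) ** 4 / dist_pc ** 2

sources = []
for _track in range(200):                # 5,000 tracks in the paper
    dist = random.uniform(50.0, 3000.0)  # pc, frozen along this toy track
    age = STEP_YR
    while surface_temperature(age) > 1e5:  # stop once the star is too cold
        sources.append(count_rate(surface_temperature(age), dist))
        age += STEP_YR

def n_brighter(flux):
    """Cumulative Log N - Log S: number of 'sources' brighter than flux."""
    return sum(1 for s in sources if s > flux)

assert n_brighter(1e-8) >= n_brighter(1e-4)  # cumulative counts fall with flux
```

In the real calculation the accumulated counts are further normalized to the adopted NS formation rate.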
\section{Numerical results}
\label{res}
In this section we present results of our calculations for four models,
characterized by different parameter sets of the two-parameter ansatz for the X-gap,
Eq. (\ref{gap}), see Table 1.
$\mathrm{Log\, N}$-$\mathrm{Log\, S}$ curves are given for two values
of the Gould Belt radius ($R_{\mathrm{belt}}=300$ and 500~pc),
and for two variants of the mass spectrum (full and truncated).
The modeled $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ curves are confronted with
data for close-by, young cooling NSs observed by ROSAT.
This data set includes the {\it Magnificent Seven}
(seven dim radio-quiet NSs),
radio pulsars, Geminga and a Geminga-like source (see the list and details
in \cite{Popov:2003hq}).
The error bars correspond to Poissonian errors (i.e. the square
root of the number of sources).
An important upper limit is added \cite{Rutledge:2003kg} which
represents an estimate of unidentified cooling NSs in the ROSAT Bright
Source Catalogue (BSC).
Model I is the best model from Paper II.
An important feature of this model is that cooling curves cover data points
in the $\mathrm{T}$-$\mathrm{t}$ plot very uniformly.
The parameters of Model II were specifically chosen to
demonstrate that the $\mathrm{Log\,
N}$-$\mathrm{Log\, S}$ test can be successful for a set that fails to pass
the $\mathrm{T}$-$\mathrm{t}$ test. Even for the highest possible mass it
is impossible to explain cold stars, but since the $\mathrm{Log\,
N}$-$\mathrm{Log\, S}$ test is not sensitive to what happens with
massive NSs, this failure does not influence the results of the population
synthesis.
Model III is an attempt at a compromise to fulfill at least marginally all
three tests. It has a smaller gap than Model I, and unlike Model II it
has a non-zero value of $\alpha$. It can explain all data points in the
$\mathrm{T}$-$\mathrm{t}$ plot within the available mass range, resorting,
however, to very unlikely objects with masses above 1.5 M$_\odot$.
Model IV assumes a steeper density dependence of the X-gap than all
previous ones. It is thus possible to spread the set of cooling curves over
the existing cooling data already for mass variations within the range of
most probable mass values $1.25\pm 0.25$ M$_\odot$ in Fig. \ref{fig:mass}.
Using this model it is possible to describe even the Vela pulsar, a
young, nearby and rather cool object, within this mass range.
\section{Discussion}
\label{disc}
In the present paper we set out to find a set of parameters
for which all three tests ($\mathrm{T}$-$\mathrm{t}$,
$\mathrm{Log\, N}$-$\mathrm{Log\, S}$, and BC) can be successfully passed.
Are these tests sufficient to constrain a model or is it necessary to assume
additional constraints?
For the $\mathrm{T}$-$\mathrm{t}$ test it is necessary to cover the observed
points by curves from a relatively wide range of masses which are
consistent with known data on the initial NS mass distribution (for example, data
on masses of not-accreted, i.e. mainly secondary,
companions of double NS systems).
For Model III (see Fig.~\ref{fig:bc3}),
for example, this is not the case as a number of data
points seems to correspond to a narrow mass range slightly below the critical
mass $1.22 \, M_{\odot}$.
On the one hand, all data points in Fig.~\ref{fig:bc3}
can be covered by cooling curves from the standard
mass range ($\sim 1$~-~$1.5\, M_{\odot}$). On the other hand, the intermediate
region (${\mathrm{log}}\,T\sim 6$~-~$6.2$,
${\mathrm{log}}\,t\sim 3$~-~$4$) corresponds to a narrow mass range:
$\sim1.22$~-~1.26~$M_{\odot}$.
This is not a dramatic disadvantage, especially if the hypothesis discussed
in \cite{woosley,Timmes:1995kp} is correct, that stars with masses below
$\sim 11\, M_{\odot}$
form remnants of nearly the same mass, close to the range above.
Still, this property of the cooling curves should be mentioned.
Model IV (Fig.~\ref{fig:bc4}) gives a more appropriate
description from the point of view of the mass distribution.
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{lnls_1.ps}
\caption[]{
$\mathrm{Log\, N}$-$\mathrm{Log\, S}$ distribution for Model I.
Four variants are shown:
$\mathrm{R_{belt}}=500$~pc and truncated mass spectrum (full line),
$\mathrm{R_{belt}}=500$~pc and non-truncated mass spectrum (dotted line),
$\mathrm{R_{belt}}=300$~pc and truncated mass spectrum (dash-dotted line),
and finally $\mathrm{R_{belt}}=300$~pc (dashed line)
for non-truncated mass distribution.}
\label{fig:m1}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{lnls_2.ps}
\caption[]{
$\mathrm{Log\, N}$-$\mathrm{Log\, S}$ distribution for Model II.
Line styles as in Fig. \ref{fig:m1}.}
\label{fig:m2}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{lnls_3.ps}
\caption[]{$\mathrm{Log\, N}$-$\mathrm{Log\, S}$ distribution for Model III.
Line styles as in Fig. \ref{fig:m1}.}
\label{fig:m3}
\end{figure}
\begin{figure}
\includegraphics[width=0.46\textwidth,angle=-90]{lnls_4.ps}
\caption[]{$\mathrm{Log\, N}$-$\mathrm{Log\, S}$ distribution for Model IV.
Line styles as in Fig. \ref{fig:m1}.}
\label{fig:m4}
\end{figure}
For the case of the $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ test we would like to
note that the value $\mathrm{R_{belt}}$=300~pc is more reliable.
So, Model I, for
which at bright fluxes we see an overprediction of sources for this value of
the Gould Belt radius, can be considered as only marginally passing the
test. Other models do better, especially Models III and IV.
For them the $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ curves for $\mathrm{R_{belt}}=300$~pc match
the data points well, leaving room for a few new possible identifications of
close-by cooling NSs (an active search is going on by different groups
in France \cite{mo2005a,mo2005b}, in Germany \cite{bp2005}, in
Italy \cite{Chieregato:2005es} and in the USA \cite{ag2005}).
Since in our $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ calculations we use a
particular mass spectrum in which there are nearly no objects with
$M\stackrel{>}{\sim} 1.4$~-~1.5~$M_{\odot}$, we have a strong constraint
on the properties of a set of cooling curves which can satisfy all three tests.
The position of the critical curve which divides hadronic stars from HyS is
fixed by the chosen configuration. If curves for masses up to 1.4~$M_{\odot}$
lie too close to the critical one, then we overpredict the number of sources
in the $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ plot. If, on the contrary, we move the
curves for 1.3~-~1.4~$M_{\odot}$ down (this is achieved by increasing the
parameter $\alpha$ and thus making the density dependence of the X-gap steeper),
then a narrow range of masses becomes responsible for a wide
region in the $\mathrm{T}$~-~$\mathrm{t}$ diagram.
A solution could be to change the exponential dependence in Eq.~(\ref{gap}) to a
power law. We plan to study this possibility in the future.
The combined usage of all three tests can put additional constraints on the
mass spectrum of compact objects.
For example, if we look at Fig.~\ref{fig:bc3} it is clear
that small masses ($M < 1.2$~M$_{\odot}$) are necessary to explain hot objects
with ages $\sim10^3$~-~$10^4$~yrs. If in such a case only a model with
a truncated mass spectrum is able to explain the $\mathrm{Log\,
N}$-$\mathrm{Log\, S}$ distribution of close-by NSs, then the model is in
trouble. On the other hand, if an explanation of the $\mathrm{Log\,
N}$-$\mathrm{Log\, S}$ requires stars from low-mass bins, but cooling curves
for these stars are in contradiction with the BC, then, again,
the model has to be rejected.
In this series of our studies of the local population of cooling
compact objects we use a mass spectrum which should fit the solar
neighbourhood, enriched with stars from the mass range 8~-~15~$M_{\odot}$.
The mass spectrum of all galactic newborn NSs can be different, but not
dramatically so. The number of low-mass stars
($\sim1$~-~$1.3 \, M_{\odot}$) can be
slightly decreased in favour of more massive stars.
However, compact objects with $M\stackrel{<}{\sim} 1.5 \, M_{\odot}$
should in any case significantly outnumber more massive objects.
This claim has some observational support.
Unfortunately, a mass determination with high precision is available only
for NSs in binary systems.
Compact objects in X-ray binaries could have accreted a significant amount of
matter. Also, mass determinations for them are much less precise than for
radio pulsar systems. So, we concentrate on the latter.
For some of the radio pulsars observed in binaries, accretion also played an
important r\^ole. Without any doubt, the masses of millisecond pulsars do not
represent their initial values. However, there is a small number of NSs with
well-determined masses, for which it is highly probable that these masses
did not change significantly since these NSs were born
(data on NS masses can be found, for example, in
\cite{Manchester:2004bp,cordes05,Lorimer:2005bw} and references therein).
These are secondary (younger) components of double NS systems.
According to standard evolutionary scenarios these compact
objects never accreted a significant amount of mass (since when they formed,
the primary component was already a NS). Their masses lie in the narrow range
1.18~-~1.39~$M_{\odot}$. Primary components of double NS binaries could
accrete during their lifetime. However, this amount of accreted matter
cannot be large as these are all high-mass binaries.
Masses of these NSs are all below 1.45~$M_{\odot}$. This is also an
important argument in favour of the statement that initial masses of most
NSs are below $\sim 1.4$~-~$1.5 \, M_{\odot}$.
The recently discovered highly relativistic
binary pulsar J1906+0746 \cite{Lorimer:2005un} is the ninth example of such
a system. The total mass is determined to be
$2.61\pm 0.02 \, M_{\odot}$.
The pulsar itself is a young, not millisecond, object.
It should not have increased its mass due to accretion.
So, if we assume that it is at least not heavier than the second -
non-pulsar - component (we neglect here the possibility that the
companion is a massive white dwarf, although this remains a possibility),
then we obtain that its mass is $\stackrel{<}{\sim} 1.3 \, M_{\odot}$.
Nine examples (without any counter-examples) constitute very good evidence in favour
of the mass spectrum used in our calculations. Of course, some effects of
binary evolution can be important (for example, the
mass of the core of a massive star can be influenced if the star loses part
of its mass due to mass transfer to the
primary companion or due to common envelope formation), and so for isolated
stars (or stars in very wide binaries)
the situation can be slightly different.
However, with these observational
estimates of initial masses of NSs we feel
more confident using the spectrum with a small number of NSs with
$M \stackrel{>}{\sim} 1.4$~-~$1.5 \, M_{\odot}$.
Brighter sources are easier to discover. So, among known cooling NSs the
fraction of NSs with masses
$1 \, M_{\odot}\stackrel{<}{\sim} M \stackrel{<}{\sim} 1.5 \, M_{\odot}$
should be even higher than in the original mass spectrum.
So, we have the impression that it is
necessary to try to explain even cold sources (maybe with the exception of the 1-2
coldest) with $M\stackrel{<}{\sim} 1.4$~-~$1.5 \, M_{\odot}$.
Especially, the Magnificent Seven and other young close-by
compact objects should be
explained as most typical representatives of the whole NS
population \cite{footnote}.
We want to underline that, even though they are selected by their observability in
soft X-rays, these sources form one of the most uniform samples of young
isolated NSs.
In this sense, the situation as in Fig.~\ref{fig:bc1} where a significant
number of sources are explained by cooling curves corresponding to
$1.5 \, M_{\odot} \stackrel{<}{\sim} M \stackrel{<}{\sim} 1.7\, M_{\odot}$
should be considered as a disadvantage of the model.
Particularly, Vela, Geminga and RX J1856-3754 should not be
explained as massive NSs. These are all young close-by sources, and the
probability that so nearby we observe young NSs which come from the few
percent of the most massive objects of this class is low.
All the above gives us the opportunity to formulate the conjecture of a
{\it mass spectrum constraint}: data points should be explained mostly by
NSs with {\it typical} masses.
For all known data this {\it typical} means
$1.1 \, M_{\odot} \stackrel{<}{\sim} M \stackrel{<}{\sim} 1.5 \, M_{\odot}$.
The $\mathrm{Log\, N}$-$\mathrm{Log\, S}$ test is the only one that takes
into account the mass spectrum {\it explicitly}. This is an additional
argument in favour of using this test together with others.
Taking everything together, we conclude that Model IV is the best among the studied
examples from the point of view of all three tests plus the mass constraint.
\section{Conclusion}
We made a preliminary study of cooling curves for HyS based on the approach
in Paper II which suggests that if quark matter occurs in a compact star,
it has to be in the 2SC+X phase, where the hypothetical X-gap still lacks a
microscopic explanation.
All three tests of the cooling behavior
($\mathrm{Log\, N}$-$\mathrm{Log\, S}$,
$\mathrm{T}$-$\mathrm{t}$, BC) are applied.
Four models defined by the two-parameter ansatz for the X-gap were calculated.
Model II with a density-independent X-gap could be directly excluded since it
was not able to explain some of the cooling data, including Vela.
Two of the models (I and III) successfully passed two tests and marginally the
third. None of these models could explain the temperature-age value
for Vela within a typical mass range, i.e. for a Vela mass below 1.5 M$_\odot$.
However, with a steeper density dependence of the X-gap than suggested in
Paper II, we were able to fulfill all four constraints, as
exemplified by Model IV.
To conclude, HyS with a 2SC+X quark matter core appear to be good candidates
to explain the cooling behaviour of compact objects, although a consistent
theoretical explanation of the
hypothetical X-gap and its steep density dependence has still to be developed.
\begin{acknowledgements}
We thank M.E. Prokhorov, R. Turolla, D.N. Voskresensky and F. Weber
for their discussions and contributions to this research field.
S.P. thanks for the hospitality extended to him during a visit at
the University of Rostock, and for support from the DAAD partnership program
with the Moscow State University. His work was supported in part by RFBR
grants No. 04-02-16720, 06-02-16025 and by the ``Dynasty'' foundation.
The work of H.G. was supported by DFG grant No. 436 ARM 17/4/05 and he
acknowledges hospitality and support by the Bogoliubov Laboratory for
Theoretical Physics at JINR Dubna where this work has been completed.
\end{acknowledgements}
\section{Introduction}
This contribution to the proceedings of DISCRETE 2014 follows closely the layout of the seminar I presented at the conference. I include here an expanded discussion of situations with $\Delta(27)$ singlets, including cases with explicit geometrical CP violation (first identified recently, in \cite{Branco:2015hea}). Some aspects discussed here will also appear in a subsequent publication.
\subsection{The invariant approach}
I refer to the Invariant Approach (IA) to CP \cite{Bernabeu:1986fc} as an approach that starts by splitting the Lagrangian into $\mathcal{L}_{CP}$, a part that automatically conserves CP (e.g. kinetic terms, gauge interactions) and the remaining part $\mathcal{L}_{rem.}$:
\begin{equation}
\mathcal{L}=\mathcal{L}_{CP}+\mathcal{L}_{rem.} \,.
\end{equation}
The next steps are to
\begin{itemize}
\item Impose the most general CP transformations (that leave $\mathcal{L}_{CP}$ invariant).
\item Apply them and see if it restricts $\mathcal{L}_{rem.}$.
\end{itemize}
Only if the most general CP transformations restrict the shape of $\mathcal{L}_{rem.}$ can CP be violated.
An example of this type of restrictions is if the most general CP transformations force some coefficient to be real.
The IA is powerful because:
\begin{itemize}
\item It obtains results directly from the Lagrangian.
\item It is independent of the basis.
\item It identifies quantities relevant for physical processes.
\end{itemize}
\subsection{The invariant approach for Standard Model leptons}
As a brief review of the IA, I apply it to a study of CP for Standard Model (SM) leptons. The mass Lagrangian is
\begin{equation}
\label{low}
-\mathcal{L}_m=m_l \overline{e}_L e_R + \tfrac{1}{2} m_\nu \overline{\nu}_{L} \nu^{c}_L + h.c.\,,
\end{equation}
where $L= (e_L, \nu_L)$ stands for the left-handed charged lepton and neutrino fields in a weak basis; $e_R$ is the right-handed counterpart.
Due to the $SU(2)_L$ interactions (inside $\mathcal{L}_{CP}$), the most general CP transformations are:
\begin{eqnarray}
(CP) L (CP)^\dagger &=& i U \gamma^0 \mathcal{C} \bar{L}^T, \label{LCP1} \\
(CP) e_R (CP)^\dagger &=& i V \gamma^0 \mathcal{C} \bar{e}_R^T \,.
\label{LCP2}
\end{eqnarray}
I adopt a less precise notation that is more convenient to work with:
\begin{eqnarray}
L &\to& U L^* \,,\\
e_R &\to& V e_R^* \,.
\end{eqnarray}
I use this notation throughout, particularly as for simplicity I will mostly consider scalar fields in future sections, where the shorter notation is precise.
In order for $\mathcal{L}_m$ to be CP invariant, under eq(\ref{LCP1}), eq(\ref{LCP2}) the terms shown in eq(\ref{low}) go into the respective $h.c.$ and vice-versa:
\begin{equation}
U^\dagger m_{\nu} U^* = m_{\nu}^*, \ \ \ \
U^\dagger m_{l} V = m_{l}^* \,.
\label{mlCP}
\end{equation}
Defining $H_\nu \equiv m_\nu m_\nu^\dagger$ and $H_l \equiv m_l m_l^\dagger$, I have:
\begin{equation}
U^\dagger H_{\nu} U = H_{\nu}^*, \ \ \ \
U^\dagger H_{l} U = H_{l}^* \,.
\label{HlCP}
\end{equation}
At this stage I follow \cite{Bernabeu:1986fc} and build CP-odd invariants (CPI) by constructing combinations where $U$ and $V$ do not appear. First, I note that from eq(\ref{HlCP}), I can obtain
$\Tr ( H_\nu H_l ) = \Tr ( H_\nu H_l )^*$, which does not depend on $U$, $V$.
As the matrices are Hermitian, $\Tr ( H_\nu H_l )^* = \Tr ( H_\nu^T H_l^T ) = \Tr ( H_l H_\nu )^T = \Tr ( H_l H_\nu)$, so for any CP transformations $U$, $V$, CP conservation requires $\Tr (( H_\nu H_l ) - ( H_l H_\nu ))=0$. Given that this is the trace of a commutator, this particular CPI vanishes automatically, meaning it is not very useful.
A more useful alternative is the necessary condition for CP conservation:
\begin{equation}
I_1 \equiv \Tr \left[H_\nu , H_l \right]^3 = 0\,,
\label{hhcube}
\end{equation}
valid for any number of fermion generations. For 3 generations (the SM case), it can be shown that eq(\ref{hhcube}) is a sufficient condition to have no Dirac-type CP violation in the lepton sector.
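The stated properties of this invariant are easy to probe numerically. The sketch below uses random matrices (not physical mass matrices, and ignoring the Majorana symmetry of $m_\nu$, which plays no role since only $H=mm^\dagger$ enters) to check that $I_1$ is purely imaginary, vanishes for real couplings, and is weak-basis invariant:

```python
import numpy as np

rng = np.random.default_rng(0)

def I1(m_nu, m_l):
    """Tr [H_nu, H_l]^3 with H = m m^dagger."""
    H_nu = m_nu @ m_nu.conj().T
    H_l = m_l @ m_l.conj().T
    C = H_nu @ H_l - H_l @ H_nu
    return np.trace(C @ C @ C)

# generic complex (CP-violating) mass matrices
m_nu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
m_l = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# the commutator of Hermitian matrices is anti-Hermitian, so I1 is
# purely imaginary; CP conservation is the statement I1 = 0
assert abs(I1(m_nu, m_l).real) < 1e-6
assert abs(I1(m_nu, m_l).imag) > 1e-6  # generically non-zero

# real mass matrices conserve CP: the invariant vanishes
assert abs(I1(m_nu.real + 0j, m_l.real + 0j)) < 1e-6

# weak-basis rotation acts as H -> W^dag H W, leaving I1 unchanged
W, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
assert np.isclose(I1(W.conj().T @ m_nu, W.conj().T @ m_l), I1(m_nu, m_l))
```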
\section{The invariant approach and flavour symmetries}
One of the main points of my talk at the conference and of \cite{Branco:2015hea} is that the IA is very useful for analysing flavour symmetry models.
In order to illustrate this, I present some examples.
\subsection{Toy model with 4 couplings \label{sec:L4}}
I start by considering a version of the toy model presented in section 3.1.1 of \cite{Chen:2014tpa}. As the aim here is to show the IA in action, I replace all fermions with scalars to avoid unnecessary complications. The Lagrangian (with fermions) as presented by \cite{Chen:2014tpa} is shown in figure \ref{L_toy}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=15 cm]{L_toy.png}
\end{center}
\caption{Toy model Lagrangian from \cite{Chen:2014tpa}. \label{L_toy}}
\end{figure}
I refer to a similar Lagrangian (with scalars) as $\mathcal{L}_4$ (due to its 4 couplings):
\begin{equation}
\mathcal{L}_{4}=S \bar\Psi F \Sigma + X \bar\Psi G \Sigma + Y \bar\Psi H_\Psi \Psi + Y \bar\Sigma H_\Sigma \Sigma + h.c. \,,
\label{L4}
\end{equation}
where scalar fields $S$, $X$, $Y$ have just one generation, whereas scalar fields $\Psi$ and $\Sigma$ have $n$ generations, meaning $F$, $G$, $H_\Psi$, $H_\Sigma$ are $n \times n$ coupling matrices. In the original toy model, $\Psi$ and $\Sigma$ are fermions (with $n=3$ generations) and I use the notation $\bar{\Psi}=\Psi^\dagger$, $\bar{\Sigma}=\Sigma^\dagger$ in eq(\ref{L4}) for easier comparison with the box in figure \ref{L_toy}.\footnote{I reconsider $\mathcal{L}_{4}$ in section \ref{L4again} making it invariant under a flavour symmetry, as done in \cite{Chen:2014tpa}.}
$\mathcal{L}_{4}$ is the $\mathcal{L}_{rem.}$ of this toy model and the most general $CP$ transformations (consistent with the respective $\mathcal{L_{CP}}$) are independent unitary transformations for each of the fields - phases for $S$, $X$, $Y$ and $n \times n$ unitary matrices $Q$ and $R$ for $\Psi$ and $\Sigma$:
\begin{eqnarray}
S &\to& e^{i s} S^* \,, \\
X &\to& e^{i x} X^* \,, \\
Y &\to& e^{i y} Y^* \,, \\
\Psi &\to& Q \Psi^* \,, \\
\Sigma &\to& R \Sigma^* \,.
\end{eqnarray}
Imposing CP conservation requires $\mathcal{L}_{4}$ to be invariant under these, which implies that the terms displayed in eq(\ref{L4}) go into their $h.c.$ and vice-versa.
Starting with $Y \bar\Sigma H_\Sigma \Sigma$, I have
\begin{equation}
\mathcal{L}_{4} \supset Y \bar\Sigma H_\Sigma \Sigma + Y^* \bar\Sigma^* H_\Sigma^* \Sigma^*
\end{equation}
and the relevant CP transformations
\begin{eqnarray}
Y &\to& e^{i y} Y^* \,, \\
\Sigma &\to& R \Sigma^* \,,
\end{eqnarray}
act on $Y \bar\Sigma H_\Sigma \Sigma$:
\begin{equation}
Y \bar\Sigma H_\Sigma \Sigma \to e^{i y} Y^* \bar\Sigma^* R^\dagger H_\Sigma R \Sigma^* \,.
\end{equation}
Comparing with the $h.c.$ I conclude that if $\mathcal{L}_{4}$ remains invariant under CP,
$e^{i y} R^\dagger H_\Sigma R = H_\Sigma^* $.
I repeat the procedure for the other 3 couplings and obtain the 4 relations
\begin{eqnarray}
e^{i s} Q^\dagger F R &=& F^* \,, \label{F}\\
e^{i x} Q^\dagger G R &=& G^* \,, \label{G}\\
e^{i y} Q^\dagger H_\Psi Q &=& H_\Psi^* \,, \label{HPsi} \\
e^{i y} R^\dagger H_\Sigma R &=& H_\Sigma^* \,. \label{HSigma}
\end{eqnarray}
These 4 relations are necessary and sufficient for $\mathcal{L}_{4}$ to conserve CP.
At this stage I build CPIs by combining the 4 relations to obtain equations where the general CP transformations cancel out, meaning they are independent of $s$, $x$, $y$, $Q$ and $R$. A relevant example is obtained by multiplying in order 1. the dagger of eq(\ref{F}), 2. eq(\ref{HPsi}), 3. eq(\ref{F}) and 4. the dagger of eq(\ref{HSigma}):
\begin{equation}
R^\dagger F^\dagger Q Q^\dagger H_\Psi Q Q^\dagger F R R^\dagger H_\Sigma^\dagger R \, e^{i (-s+y+s-y)} = (F^\dagger H_\Psi F H_\Sigma^\dagger)^* \,,
\end{equation}
removing the phases and leaving unitary matrices only outside the product of couplings:
\begin{equation}
R^\dagger F^\dagger H_\Psi F H_\Sigma^\dagger R = (F^\dagger H_\Psi F H_\Sigma^\dagger)^* \,.
\end{equation}
Doing the same with eq(\ref{G}) in steps 1. and 3. I would obtain similarly:
\begin{equation}
R^\dagger G^\dagger H_\Psi G H_\Sigma^\dagger R = (G^\dagger H_\Psi G H_\Sigma^\dagger)^* \,.
\end{equation}
The remaining dependence on unitary matrix $R$ can be eliminated by taking the trace:
\begin{eqnarray}
\Tr \left[ F^\dagger H_\Psi F H_\Sigma^\dagger \right] = \Tr \left[ F^\dagger H_\Psi F H_\Sigma^\dagger \right]^* &\to& \mathrm{Im}\Tr \left[ F^\dagger H_\Psi F H_\Sigma^\dagger \right] = 0 \,, \label{ImF} \\
\Tr \left[ G^\dagger H_\Psi G H_\Sigma^\dagger \right] = \Tr \left[ G^\dagger H_\Psi G H_\Sigma^\dagger \right]^* &\to& \mathrm{Im} \Tr \left[ G^\dagger H_\Psi G H_\Sigma^\dagger \right] = 0 \,. \label{ImG}
\end{eqnarray}
These conditions illustrate the power of the IA.
Simply from studying the CP properties of $\mathcal{L}_{4}$ I have found a set of necessary conditions for CP conservation (they are not necessarily sufficient).
The conditions are basis independent, valid for \emph{any} choice of CP transformation and for any number of generations $n$.
They also directly constrain quantities that are relevant for physical processes.
In \cite{Chen:2014tpa}, the authors compute the CP asymmetry in the decay $Y \to \bar\Psi \Psi$, as shown in the text and equation displayed in figure \ref{Toy_decay}. By computing only a single CP asymmetry one might conclude that CP conservation can be obtained from a cancellation of the two quantities. Instead, by applying the IA to $\mathcal{L}_{4}$ I conclude that there are at least 2 independent necessary conditions for CP conservation, eq(\ref{ImF}), eq(\ref{ImG}).
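The key step in this construction, namely that the trace combination built from the CP-transformed couplings equals the complex conjugate of the original for any choice of $s$, $y$, $Q$, $R$, can be checked numerically with random matrices (a pure consistency check of the algebra, not a physical model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

def rand_unitary(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def rand_complex(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

F, H_psi, H_sigma = rand_complex(n), rand_complex(n), rand_complex(n)

def cpi(f, h_psi, h_sigma):
    # the combination Tr[F^dag H_Psi F H_Sigma^dag]
    return np.trace(f.conj().T @ h_psi @ f @ h_sigma.conj().T)

# an arbitrary candidate CP transformation: phases s, y and unitaries Q, R
s, y = 0.3, 1.1
Q, R = rand_unitary(n), rand_unitary(n)

# CP images of the couplings, read off from the invariance relations;
# CP invariance would demand F = F_cp, H_psi = H_psi_cp, H_sigma = H_sigma_cp
F_cp = (np.exp(1j * s) * Q.conj().T @ F @ R).conj()
H_psi_cp = (np.exp(1j * y) * Q.conj().T @ H_psi @ Q).conj()
H_sigma_cp = (np.exp(1j * y) * R.conj().T @ H_sigma @ R).conj()

# the invariant maps into its complex conjugate, whatever s, y, Q, R are,
# so CP invariance forces its imaginary part to vanish
assert np.isclose(cpi(F_cp, H_psi_cp, H_sigma_cp),
                  np.conj(cpi(F, H_psi, H_sigma)))
```

The phases cancel exactly as in the derivation above ($-s+y+s-y=0$), and the unitary matrices drop out of the trace.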
\begin{figure}[h]
\begin{center}
\includegraphics[width=15 cm]{decay.png}
\end{center}
\caption{Decay of $Y \to \bar\Psi \Psi$ as computed in \cite{Chen:2014tpa}. \label{Toy_decay}}
\end{figure}
\subsection{$\Delta(27)$ and adding CP}
\subsubsection{$\Delta(27)$}
I discuss now the group theory of $\Delta(27)$ that is required for the remaining sections.
I define $\omega \equiv e^{i 2 \pi/3}$, the cyclic generator $c$ and diagonal generator $d$ of the group ($\omega^3=1$, $c^3=d^3=1$). There is an additional generator but it is not directly relevant for the discussion here. The group has irreducible representations that are either 1 or 3 dimensional - referred to as singlets and triplets. The action of generators on singlets is simply multiplication by a phase: $c 1_{ij}=\omega^i 1_{ij}$ and $d 1_{ij}=\omega^j 1_{ij}$, where $i, j = 0, 1, 2$ - there are 9 distinct singlets. In a convenient basis the action of the generators on a $3_{01}$ triplet $A=(a_1,a_2,a_3)_{01}$ or a $3_{02}$ triplet $\bar{B}=(\bar{b}_1,\bar{b}_2,\bar{b}_3)_{02}$ is:
\begin{equation}
c_{3_{0j}}=
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{pmatrix}
\,, \quad
c_{3_{01}}
\begin{pmatrix}
a_1 \\
a_2 \\
a_3
\end{pmatrix}
=
\begin{pmatrix}
a_2 \\
a_3 \\
a_1
\end{pmatrix} \,,
\end{equation}
\begin{equation}
d_{3_{01}}=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \omega & 0 \\
0 & 0 & \omega^2
\end{pmatrix} \,, \quad
d_{3_{02}}=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \omega^2 & 0 \\
0 & 0 & \omega
\end{pmatrix} \,.
\end{equation}
My nomenclature follows from the action of generators on triplets. The generator $d$ distinguishes the triplets $3_{01}$ and $3_{02}$ according to their subscripts, which are the powers of $\omega$ on the first two diagonal entries of the respective matrix. Hereafter I often refer to $3_{01}$ as the triplet representation and to $3_{02}$ as the anti-triplet representation. The cyclic generator acts equally on triplet and anti-triplet by cyclic permutation of the components.
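The stated generator algebra is easy to verify directly. The sketch below also exposes the relation $c\, d\, c^{-1}=\omega\, d$ obeyed by these particular matrices (not written out in the text, but the source of the non-abelian structure):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # omega

c = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)  # c on either (anti-)triplet
d01 = np.diag([1, w, w ** 2])             # d on the triplet 3_01
d02 = np.diag([1, w ** 2, w])             # d on the anti-triplet 3_02

eye = np.eye(3)
assert np.allclose(np.linalg.matrix_power(c, 3), eye)    # c^3 = 1
assert np.allclose(np.linalg.matrix_power(d01, 3), eye)  # d^3 = 1
assert np.allclose(np.linalg.matrix_power(d02, 3), eye)

# c acts by cyclic permutation: (a1, a2, a3) -> (a2, a3, a1)
a = np.array([1.0, 2.0, 3.0])
assert np.allclose(c @ a, [2.0, 3.0, 1.0])

# the generators commute only up to an omega phase: c d c^{-1} = omega d
assert np.allclose(c @ d01 @ c.conj().T, w * d01)
```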
The product of singlet with singlet leads to another singlet transforming as the sum of indices (modulo 3): $1_{ij} \times 1_{kl}$ transforms as $1_{(i+k) (j+l)}$. The product of triplet and anti-triplet gives a sum of all nine singlets. In the following it will be necessary to know how the singlets $1_{i0}$ and $1_{0j}$ are built from the product of triplet and anti-triplet.
$1_{00}$ is the trivial singlet transforming trivially under all generators and is formed from the $SU(3)$ contraction:
\begin{equation}
( A \bar{B} )_{00} \equiv (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3)_{00} \,.
\end{equation}
The $1_{i0}$ singlets are built as
\begin{eqnarray}
( A \bar{B} )_{10} &\equiv& (a_1 \bar{b}_1 + \omega^2 a_2 \bar{b}_2 + \omega a_3 \bar{b}_3)_{10} \,, \\
( A \bar{B} )_{20} &\equiv& (a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} \,,
\end{eqnarray}
as acting with $c$ on $A$ and $\bar{B}$ leads to multiplication by $\omega$, $\omega^2$ respectively:
\begin{eqnarray}
( A \bar{B} )_{10} &\to& (a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3 + \omega a_1 \bar{b}_1)_{10} \,, \\
( A \bar{B} )_{20} &\to & (a_2 \bar{b}_2 + \omega a_3 \bar{b}_3 + \omega^2 a_1 \bar{b}_1)_{20} \,.
\end{eqnarray}
In turn, the $1_{0j}$ are built as
\begin{eqnarray}
( A \bar{B} )_{01} &\equiv& (a_2 \bar{b}_1 + a_3 \bar{b}_2 + a_1 \bar{b}_3)_{01} \,, \\
( A \bar{B} )_{02} &\equiv& (a_1 \bar{b}_2 + a_2 \bar{b}_3 + a_3 \bar{b}_1)_{02} \,,
\end{eqnarray}
as acting with $d$ on $A$ and $\bar{B}$ leads to multiplication by $\omega$, $\omega^2$ respectively:
\begin{eqnarray}
( A \bar{B} )_{01} &\to& (\omega a_2 \bar{b}_1 + \omega^2 a_3 \omega^2 \bar{b}_2 + a_1 \omega \bar{b}_3)_{01} \,, \\
( A \bar{B} )_{02} &\to& (a_1 \omega^2 \bar{b}_2 + \omega a_2 \omega \bar{b}_3 + \omega^2 a_3 \bar{b}_1)_{02} \,.
\end{eqnarray}
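The transformation phases of the four contractions above can also be checked numerically. The sketch below (an illustration using \texttt{numpy}, with random sample components) verifies that under $c$ the $(A \bar{B})_{10}$, $(A \bar{B})_{20}$ contractions pick up $\omega$, $\omega^2$, and that under $d$ the $(A \bar{B})_{01}$, $(A \bar{B})_{02}$ contractions pick up $\omega$, $\omega^2$.

```python
# Check the phases picked up by the singlet contractions under c and d.
import numpy as np

rng = np.random.default_rng(0)
w = np.exp(2j * np.pi / 3)
a = rng.normal(size=3) + 1j * rng.normal(size=3)   # triplet components
b = rng.normal(size=3) + 1j * rng.normal(size=3)   # anti-triplet components

def s10(a, b): return a[0]*b[0] + w**2 * a[1]*b[1] + w * a[2]*b[2]
def s20(a, b): return a[0]*b[0] + w * a[1]*b[1] + w**2 * a[2]*b[2]
def s01(a, b): return a[1]*b[0] + a[2]*b[1] + a[0]*b[2]
def s02(a, b): return a[0]*b[1] + a[1]*b[2] + a[2]*b[0]

ca, cb = np.roll(a, -1), np.roll(b, -1)             # c: a_i -> a_{i+1}
da = np.diag([1, w, w**2]) @ a                      # d on the triplet
db = np.diag([1, w**2, w]) @ b                      # d on the anti-triplet

assert np.isclose(s10(ca, cb), w * s10(a, b))       # (A B)_{10} -> w (A B)_{10}
assert np.isclose(s20(ca, cb), w**2 * s20(a, b))
assert np.isclose(s01(da, db), w * s01(a, b))       # (A B)_{01} -> w (A B)_{01}
assert np.isclose(s02(da, db), w**2 * s02(a, b))
```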
\subsubsection{Adding CP}
I consider now a specific $\Delta(27)$ invariant Lagrangian and study its CP properties.
The field content is triplet $A$, anti-triplet $\bar{B}$, and singlets $C$, $D$ (transforming respectively as $3_{01}$, $3_{02}$, $1_{10}$, $1_{01}$). The $\Delta(27)$ invariant Lagrangian for this field content contains one 3-field invariant between triplet, anti-triplet and each singlet:
\begin{equation}
\mathcal{L}_{CD}=y_c (A \bar B)_{20} C_{10} + y_d (A \bar B)_{02} D_{01} + h.c. \,.
\end{equation}
An additional $Z_N$ or $U(1)$ symmetry can be added to guarantee the absence of additional terms coupling $C$, $D$ to $A A^*$ or $\bar{B}^* \bar{B}$.
Focusing on the CP properties of $\mathcal{L}_{CD}$, I start by adding a specific CP transformation. A simple option is the trivial CP transformation $CP_1$, defined by the action on $A$, $\bar{B}$, $C$ and $D$:
\begin{eqnarray}
CP_1 A &=& A^* = (a_1^*,a_2^*,a_3^*)_{02} \,, \\
CP_1 \bar{B} &=& \bar{B}^*= (\bar{b}_1^*,\bar{b}_2^*,\bar{b}_3^*)_{01} \,, \\
CP_1 C_{10} &=& C_{20}^* \,,\\
CP_1 D_{01} &=& D_{02}^* \,,
\end{eqnarray}
where $A^*$, $\bar{B}^*$, $C^*$, $D^*$ transform respectively as $3_{02}$, $3_{01}$, $1_{20}$, $1_{02}$ (reflected by the subscripts).
If I impose invariance under $CP_1$ on $\mathcal{L}_{CD}$,
the $y_c$ term, which transforms to:
\begin{equation}
\to y_c (a_1^* \bar{b}_1^* + \omega a_2^* \bar{b}_2^* + \omega^2 a_3^* \bar{b}_3^*)_{20} C_{20}^* \,,
\label{CP1c}
\end{equation}
should become the $h.c.$, which features $y_c^*$:
\begin{equation}
y_c^* (a_1^* \bar{b}_1^* + \omega^2 a_2^* \bar{b}_2^* + \omega a_3^* \bar{b}_3^*)_{10} C_{20}^* \,.
\end{equation}
In addition to the conjugated coefficient, the expressions inside the parentheses are different, as indicated by their subscripts.
In turn, under $CP_1$ the $y_d$ term transforms into:
\begin{equation}
\to y_d (a_1^* \bar{b}_2^* + a_2^* \bar{b}_3^* + a_3^* \bar{b}_1^*)_{01} D_{02}^* \,,
\end{equation}
and comparing to its $h.c.$ with $y_d^*$
\begin{equation}
y_d^* (a_1^* \bar{b}_2^* + a_2^* \bar{b}_3^* + a_3^* \bar{b}_1^*)_{01} D_{02}^* \,,
\end{equation}
shows that apart from swapping $y_d$ to $y_d^*$ the expressions are the same.
A closer look at eq(\ref{CP1c}) shows that the transformed quantity is no longer invariant under $\Delta(27)$ (the subscripts do not add up to make a trivial singlet). One might state that, for this field content, $\Delta(27)$ is inconsistent with $CP_1$.
A more precise statement is that for $\mathcal{L}_{CD}$ to be invariant under both $\Delta(27)$ and $CP_1$ requires $y_c=0$ (and $y_d$ to be real); alternatively, insisting that $y_c \neq 0$ explicitly violates either $\Delta(27)$ or $CP_1$.
That $y_c$ is forced to vanish by adding a specific CP symmetry may appear drastic, but this is a rather usual consequence of adding symmetries to a Lagrangian. For example, one could also force $y_c=0$ in $\mathcal{L}_{CD}$ simply by having only the field $C$ transform non-trivially under an additional $Z_2$ symmetry.
One important point is that, although imposing a specific CP transformation can force coefficients to vanish, this need not mean that CP violation occurs if those coefficients do not vanish. Indeed, $\mathcal{L}_{CD}$ with arbitrary $y_c$ and $y_d$ is CP conserving. I prefer to see this using the IA, and rewrite:
\begin{equation}
\mathcal{L}_{CD} = A_i Y_{10}^{ij} \bar{B}_j C + A_i Y_{01}^{ij} \bar{B}_j D + h.c. \,,
\end{equation}
with
\begin{equation}
Y_{10}=y_{c}
\begin{pmatrix}
1 & 0 & 0\\
0 & \omega & 0\\
0 & 0 & \omega^2
\end{pmatrix}
; \quad
Y_{01}=y_{d}
\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{pmatrix} \,.
\label{Y01_Y10}
\end{equation}
Then I take the most general transformations
\begin{equation}
A \to U^* A^* ; \quad
\bar{B} \to V \bar{B}^* ;\quad
C \to e^{i p_{10}} C^*; \quad
D \to e^{i p_{01}} D^* \,,
\label{CP2s}
\end{equation}
and obtain the conditions for CP conservation
\begin{eqnarray}
U^\dagger Y_{01} V e^{i p_{01}} &=& Y_{01}^* \,, \label{Y2s1}\\
U^\dagger Y_{10} V e^{i p_{10}} &=& Y_{10}^* \,.
\label{Y2s2}
\end{eqnarray}
By building CPIs I conclude they are of the form
\begin{eqnarray}
\mathrm{Im}& \Tr& [ (Y_{01}^{\dagger} Y_{01})^{n_1} (Y_{10}^{\dagger} Y_{10})^{n_2} (Y_{01}^{\dagger} Y_{01})^{n_3} (...)] \,,
\label{IA2sG} \\
\mathrm{Im}& \Tr& [ (Y_{01} Y_{01}^{\dagger})^{n_1} (Y_{10} Y_{10}^{\dagger})^{n_2} (Y_{01} Y_{01}^{\dagger})^{n_3} (...)] \,,
\label{IA2sH}
\end{eqnarray}
where $n_i$ are positive integers.
These CPIs automatically vanish due to $\Delta(27)$, as $(Y_{01}^{\dagger} Y_{01})$, $(Y_{10}^{\dagger} Y_{10})$, $(Y_{01} Y_{01}^{\dagger})$ and $(Y_{10} Y_{10}^{\dagger})$ are proportional to the identity matrix\footnote{For the same reason, CPIs like $\mathrm{Im} \Tr [ (Y_{01}^{\dagger} Y_{01})^{n_1} Y_{10}^\dagger Y_{01} (Y_{10}^{\dagger} Y_{10})^{n_2} Y_{01}^\dagger Y_{10}]$ give the same result as those in eq(\ref{IA2sG}).} with either $|y_{c}|^2$ or $|y_{d}|^2$.
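This vanishing can be confirmed directly with the explicit matrices of eq(\ref{Y01_Y10}). The following sketch (a \texttt{numpy} illustration, with arbitrarily chosen complex sample couplings) checks that the four products are proportional to the identity and that a representative CPI of the above form has vanishing imaginary part.

```python
# Check that Y^dagger Y products are proportional to the identity,
# so the CPIs built from them automatically vanish.
import numpy as np

w = np.exp(2j * np.pi / 3)
yc, yd = 0.7 * np.exp(0.3j), 1.3 * np.exp(-1.1j)   # arbitrary sample couplings
Y10 = yc * np.diag([1, w, w**2])
Y01 = yd * np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)

assert np.allclose(Y10.conj().T @ Y10, abs(yc)**2 * np.eye(3))
assert np.allclose(Y01.conj().T @ Y01, abs(yd)**2 * np.eye(3))
assert np.allclose(Y10 @ Y10.conj().T, abs(yc)**2 * np.eye(3))
assert np.allclose(Y01 @ Y01.conj().T, abs(yd)**2 * np.eye(3))

# A representative CPI of the quoted form has vanishing imaginary part:
M = np.linalg.matrix_power(Y01.conj().T @ Y01, 2) @ (Y10.conj().T @ Y10)
assert abs(np.trace(M).imag) < 1e-12
```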
The conclusion is that CP is conserved for any $y_c$, $y_d$.
Therefore, there must be at least one CP symmetry that leaves $\mathcal{L}_{CD}$ invariant regardless of arbitrary couplings (even though $CP_1$ does not).
One explicit example is:
\begin{equation}
U=
\begin{pmatrix}
1 & 0 & 0\\
0 & \omega^2 & 0\\
0 & 0 & 1
\end{pmatrix}
; \quad
V=
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \omega^2
\end{pmatrix} ; \quad p_{10}= -2 \mathrm{Arg}(y_c) ; \quad p_{01}=-2 \mathrm{Arg}(y_d) \,.
\label{UVCD}
\end{equation}
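That this choice of $U$, $V$, $p_{10}$, $p_{01}$ satisfies the CP conservation conditions eq(\ref{Y2s1}), eq(\ref{Y2s2}) for arbitrary couplings can be checked numerically; the sketch below (a \texttt{numpy} illustration, with arbitrary sample values for $y_c$, $y_d$) does exactly that.

```python
# Verify that the explicit CP transformation above satisfies
# U^dagger Y V e^{i p} = Y^* for both coupling matrices.
import numpy as np

w = np.exp(2j * np.pi / 3)
yc, yd = 0.7 * np.exp(0.3j), 1.3 * np.exp(-1.1j)   # arbitrary sample couplings
Y10 = yc * np.diag([1, w, w**2])
Y01 = yd * np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)

U = np.diag([1, w**2, 1])
V = np.diag([1, 1, w**2])
p10 = -2 * np.angle(yc)
p01 = -2 * np.angle(yd)

assert np.allclose(U.conj().T @ Y01 @ V * np.exp(1j * p01), Y01.conj())
assert np.allclose(U.conj().T @ Y10 @ V * np.exp(1j * p10), Y10.conj())
```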
This example is within the possibilities listed in \cite{Nishi:2013jqa} for CP transformations consistent with $\Delta(27)$ triplets.
While eq(\ref{UVCD}) applies to singlets $1_{01}$ and $1_{10}$, the reasoning based on the IA can be easily applied to any choice of two $\Delta(27)$ singlets.
\subsection{Additional singlets}
In the context of $\Delta(27)$ models with \emph{spontaneous geometrical CP violation}, meaning CP that is spontaneously broken with \emph{calculable phases} \cite{Branco:1983tn}, adding $\Delta(27)$ singlets coupling to triplet and anti-triplet was originally explored in \cite{deMedeirosVarzielas:2011zw}.
The goal was to obtain additional Yukawa couplings for SM fermions and the most promising choices considered the SM quark doublets as different singlets of $\Delta(27)$.\footnote{In \cite{deMedeirosVarzielas:2011zw} it was pointed out that one triplet of $\Delta(54)$ has the same scalar potential as one triplet of $\Delta(27)$, and options with irreducible representations of $\Delta(54)$ were also considered therein.}
Geometrical CP violation is a very interesting topic that continued to be explored after \cite{deMedeirosVarzielas:2011zw}.
Non-renormalisable terms in the scalar potential were considered in \cite{Varzielas:2012nn}, with focus on their effects on the calculable phases.
The first viable model with SM quark doublets transforming as non-trivial $\Delta(27)$ singlets was realised in \cite{Bhattacharyya:2012pi}, featuring geometrical CP violation.
Subsequently it was shown in \cite{Varzielas:2013sla} how an additional symmetry can prevent the additional singlets from endangering the calculable phases and simultaneously explain the quark mass hierarchies. In \cite{Varzielas:2013eta} an extension of this type of model with the complete SM fermion sector was realised (see also \cite{Ma:2013xqa} for a different proposal for the leptons).
Geometrical CP violation was further explored in the context of multi-Higgs models with symmetries other than $\Delta(27)$ in \cite{Varzielas:2012pd, Ivanov:2013nla} (see also \cite{Varzielas:2013zbp}). The $\Delta(27)$ scalar potential for one triplet (invariant under $\Delta(54)$ \cite{deMedeirosVarzielas:2011zw}) was also analysed extensively with different approaches \cite{Holthausen:2012dk, Ivanov:2014doa, Fallbacher:2015rea}.
In \cite{deMedeirosVarzielas:2011zw, Varzielas:2012nn, Bhattacharyya:2012pi, Varzielas:2013sla, Varzielas:2013eta} CP is broken spontaneously, therefore the Lagrangian should conserve CP. As pointed out in \cite{Bhattacharyya:2012pi}, one must take care in adding extra singlets that couple to the triplet and anti-triplet as that may be incompatible with CP conservation.
One way to approach the constraints arising from adding singlets is by studying the outer automorphisms of the group, as discussed in \cite{Holthausen:2012dk} and also \cite{Chen:2014tpa}.
Alternatively, the reason why coupling more singlets to triplets may lead to CP violation becomes very clear in the IA: additional couplings enable more CPIs to be built. Eventually, adding an extra coupled singlet leads to a CPI that does not automatically vanish due to $\Delta(27)$, meaning one of 3 possibilities: $\Delta(27)$ is explicitly violated; CP is explicitly violated; or specific relations on the couplings are imposed. The last possibility is relevant in the context of \cite{deMedeirosVarzielas:2011zw, Varzielas:2012nn, Bhattacharyya:2012pi, Varzielas:2013sla, Varzielas:2013eta}, where one wants the symmetries to be broken spontaneously.
In analogy to $CP_1$ forcing $y_c=0$, one option that allows preserving both symmetries would be to have the couplings involving the triplets and some of the singlets vanish, and this is also understood clearly through the IA: the additional CPIs that do not vanish automatically can vanish due to the couplings.
Indeed, if one considers a set of singlets including two or three $1_{0j}$ singlets as in \cite{Bhattacharyya:2012pi}, the possible CP transformations on the triplets are already so constrained that one cannot couple the triplets to even a single additional singlet $1_{ij}$ with $i \neq 0$ and preserve CP. Partly, this is why only $1_{0j}$ singlets were considered therein.
Strictly, the statement in \cite{Bhattacharyya:2012pi} that singlets $1_{ij}$ with $i \neq 0$ generate no coupling due to CP conservation is not mathematically rigorous. This statement is valid e.g. when $CP_1$ is imposed but not in general, as was pointed out recently in \cite{Fallbacher:2015rea}.
Nonetheless, for 3-field couplings between $\Delta(27)$ triplets and singlets (as in $\mathcal{L}_{CD}$), the physics of CP conserving situations - with at most 3 independent couplings - is contained in the choices considered in \cite{Bhattacharyya:2012pi}, which effectively corresponds to working in a basis where CP conservation is reflected in the Clebsch-Gordan (CG) coefficients being real.
An analysis of this issue also clarifies why the presence of any 2 singlets coupling in the manner of $\mathcal{L}_{CD}$ leads automatically to CP conservation.
\subsubsection{Changing basis}
Starting with just 1 singlet $S_{ij}$ and term $y_{ij} (A \bar{B})_{(-i)(-j)} S_{ij}$, I can always change the basis of $A$ such that the $(A \bar{B})_{(-i)(-j)}$ contraction looks like the corresponding contraction $(A \bar{B})_{(0)(-j)}$ in the original basis, which has real CG coefficients.
An explicit example is $y_{c} (A \bar{B})_{20} C_{10}$,
where the change of basis $(a_1, a_2, a_3) \to (a_1, \omega^2 a_2, \omega a_3)$ does precisely this:
\begin{equation}
(a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} \to (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3) \,.
\label{20basis}
\end{equation}
As far as the $y_{ij} (A \bar{B})_{(-i)(-j)} S_{ij}$ coupling is concerned it is equivalent to take a singlet in the set $1_{0j}$.
Note that other couplings can distinguish the singlets, e.g. if $j=0$, terms $1_{00}$ and $1_{00}^2$ are $\Delta(27)$ invariants whereas the same does not apply for $1_{0j}$. But restricting ourselves to Lagrangian terms of the form of those in $\mathcal{L}_{CD}$ implies there are other symmetries that forbid such terms (such as the SM gauge group, in \cite{Bhattacharyya:2012pi}).
With 2 singlets, changing only the basis for $A$ may simply move the complex CG coefficients from one contraction into the other. $\mathcal{L}_{CD}$ is an explicit example of this as $(a_1, a_2, a_3) \to (a_1, \omega^2 a_2, \omega a_3)$ takes
\begin{eqnarray}
(a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} &\to& (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3) \,, \\
(a_1 \bar{b}_2 + a_2 \bar{b}_3 + a_3 \bar{b}_1)_{02} &\to& (a_1 \bar{b}_2 + \omega^2 a_2 \bar{b}_3 + \omega a_3 \bar{b}_1) \,.
\end{eqnarray}
But if one uses the change of basis:
\begin{eqnarray}
(a_1, a_2, a_3) &\to& (a_1, \omega^2 a_2, a_3) \,, \label{dbasis1} \\
(\bar{b}_1, \bar{b}_2, \bar{b}_3) &\to& (\bar{b}_1, \bar{b}_2, \omega \bar{b}_3) \label{dbasis2} \,,
\end{eqnarray}
then both singlets couple to triplets in $\mathcal{L}_{CD}$ with real CG coefficients:
\begin{eqnarray}
(a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} &\to& (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3) \,, \\
(a_1 \bar{b}_2 + a_2 \bar{b}_3 + a_3 \bar{b}_1)_{02} &\to& (a_1 \bar{b}_2 + a_2 \bar{b}_3 + a_3 \bar{b}_1) \,.
\end{eqnarray}
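This simultaneous removal of the complex CG coefficients is straightforward to verify numerically; the sketch below (a \texttt{numpy} illustration with random sample components) substitutes the rephasings of eq(\ref{dbasis1}), eq(\ref{dbasis2}) into both contractions.

```python
# Verify that the diagonal rephasings make both CG structures real.
import numpy as np

rng = np.random.default_rng(1)
w = np.exp(2j * np.pi / 3)
a = rng.normal(size=3) + 1j * rng.normal(size=3)
b = rng.normal(size=3) + 1j * rng.normal(size=3)

s20 = lambda a, b: a[0]*b[0] + w * a[1]*b[1] + w**2 * a[2]*b[2]
s02 = lambda a, b: a[0]*b[1] + a[1]*b[2] + a[2]*b[0]

ta = np.array([a[0], w**2 * a[1], a[2]])   # (a1, a2, a3) -> (a1, w^2 a2, a3)
tb = np.array([b[0], b[1], w * b[2]])      # (b1, b2, b3) -> (b1, b2, w b3)

# Both contractions reduce to real-CG expressions in the new components:
assert np.isclose(s20(ta, tb), a[0]*b[0] + a[1]*b[1] + a[2]*b[2])
assert np.isclose(s02(ta, tb), a[0]*b[1] + a[1]*b[2] + a[2]*b[0])
```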
This change of basis takes $U$ and $V$ in eq(\ref{UVCD}) to the identity matrices of $CP_1$.
In a situation with 3 singlets coupling in the manner of $\mathcal{L}_{CD}$, the possibility of explicit CP violation depends on whether the freedom to change the basis of $A$ and $\bar{B}$ is enough to eliminate complex CG coefficients or not.
Most choices of singlets can explicitly violate CP, but for 12 sets (out of 84 combinations) this is not possible. For these 12 sets there is at least one non-trivial element of $\Delta(27)$ which does not distinguish the 3 singlets. The special 12 sets can be identified in the notation I use here by summing the two generator indices over the 3 singlets - if both sums add up to 0 (modulo 3), an appropriate change of basis makes the CG coefficients real.
I demonstrate with the set $1_{00}$, $1_{10}$ and $1_{20}$:
\begin{eqnarray}
( A \bar{B} )_{00} &\equiv& (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3)_{00}\,, \\
( A \bar{B} )_{10} &\equiv& (a_1 \bar{b}_1 + \omega^2 a_2 \bar{b}_2 + \omega a_3 \bar{b}_3)_{10} \,, \\
( A \bar{B} )_{20} &\equiv& (a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} \,.
\end{eqnarray}
The required basis change is not readily seen from the expressions, but noting the 3 singlets in the set are distinguished only by generator $c$, the basis change to eigenstates of $c_{3_{0j}}$:
\begin{eqnarray}
(a_1, a_2, a_3) &\to& (a_1+a_2+a_3, a_1+ \omega a_2+ \omega^2 a_3, a_1+\omega^2 a_2+ \omega a_3)/\sqrt{3}\,, \label{cbasis1} \\
(\bar{b}_1, \bar{b}_2, \bar{b}_3) &\to& (\bar{b}_1+\bar{b}_2+\bar{b}_3, \bar{b}_1+\omega^2 \bar{b}_2+\omega \bar{b}_3,\bar{b}_1+\omega \bar{b}_2+\omega^2 \bar{b}_3)/\sqrt{3} \,, \label{cbasis2}
\end{eqnarray}
takes the expressions to those of $(A \bar{B})_{0j}$, with real CG coefficients:
\begin{eqnarray}
(a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3)_{00} &\to& (a_1 \bar{b}_1 + a_2 \bar{b}_2 + a_3 \bar{b}_3) \,, \\
(a_1 \bar{b}_1 + \omega^2 a_2 \bar{b}_2 + \omega a_3 \bar{b}_3)_{10} &\to& (a_2 \bar{b}_1 + a_3 \bar{b}_2 + a_1 \bar{b}_3) \,, \\
(a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} &\to& (a_1 \bar{b}_2 + a_2 \bar{b}_3 + a_3 \bar{b}_1) \,.
\end{eqnarray}
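Although the basis change to $c$-eigenstates is less obvious than a rephasing, it can be checked by direct substitution. The sketch below (an illustrative \texttt{numpy} check with random sample components) substitutes eq(\ref{cbasis1}), eq(\ref{cbasis2}) into the three $1_{i0}$ contractions and recovers the three real-CG $(A \bar{B})_{0j}$ forms.

```python
# Verify that the c-eigenstate basis change makes the 1_{i0} CGs real.
import numpy as np

rng = np.random.default_rng(2)
w = np.exp(2j * np.pi / 3)
a = rng.normal(size=3) + 1j * rng.normal(size=3)
b = rng.normal(size=3) + 1j * rng.normal(size=3)

F = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]]) / np.sqrt(3)    # cbasis1
G = np.array([[1, 1, 1], [1, w**2, w], [1, w, w**2]]) / np.sqrt(3)    # cbasis2
ta, tb = F @ a, G @ b   # substitute a_i -> (F a)_i, b_i -> (G b)_i

s00 = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
s10 = lambda a, b: a[0]*b[0] + w**2 * a[1]*b[1] + w * a[2]*b[2]
s20 = lambda a, b: a[0]*b[0] + w * a[1]*b[1] + w**2 * a[2]*b[2]

assert np.isclose(s00(ta, tb), a[0]*b[0] + a[1]*b[1] + a[2]*b[2])
assert np.isclose(s10(ta, tb), a[1]*b[0] + a[2]*b[1] + a[0]*b[2])
assert np.isclose(s20(ta, tb), a[0]*b[1] + a[1]*b[2] + a[2]*b[0])
```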
The generalisation of the change of basis for sets of 3 singlets sharing a non-zero index is relatively straightforward for sets $1_{ij}$ sharing a fixed $i \neq 0$ (distinct only under generator $d$), where one has a diagonal change of basis (in analogy with eq(\ref{dbasis1}), eq(\ref{dbasis2})).
An explicit example is for singlets with a fixed $i=1$, with triplets contracting as:
\begin{eqnarray}
( A \bar{B} )_{20} &\equiv& (a_1 \bar{b}_1 + \omega a_2 \bar{b}_2 + \omega^2 a_3 \bar{b}_3)_{20} \,, \\
( A \bar{B} )_{21} &\equiv& (a_2 \bar{b}_1 + \omega a_3 \bar{b}_2 + \omega^2 a_1 \bar{b}_3)_{21} \,, \\
( A \bar{B} )_{22} &\equiv& (a_3 \bar{b}_1 + \omega a_1 \bar{b}_2 + \omega^2 a_2 \bar{b}_3)_{22} \,,
\end{eqnarray}
where a change to a basis with real CG is $(\bar{b}_1, \bar{b}_2, \bar{b}_3) \to (\bar{b}_1, \omega^2 \bar{b}_2, \omega \bar{b}_3)$.
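This anti-triplet rephasing can again be checked by substitution; the sketch below (an illustrative \texttt{numpy} check with random sample components) verifies that all three contractions of the fixed-$i=1$ set become real-CG expressions.

```python
# Verify the rephasing (b1, b2, b3) -> (b1, w^2 b2, w b3) for the i=1 set.
import numpy as np

rng = np.random.default_rng(3)
w = np.exp(2j * np.pi / 3)
a = rng.normal(size=3) + 1j * rng.normal(size=3)
b = rng.normal(size=3) + 1j * rng.normal(size=3)

s20 = lambda a, b: a[0]*b[0] + w * a[1]*b[1] + w**2 * a[2]*b[2]
s21 = lambda a, b: a[1]*b[0] + w * a[2]*b[1] + w**2 * a[0]*b[2]
s22 = lambda a, b: a[2]*b[0] + w * a[0]*b[1] + w**2 * a[1]*b[2]

tb = np.array([b[0], w**2 * b[1], w * b[2]])   # the quoted rephasing

assert np.isclose(s20(a, tb), a[0]*b[0] + a[1]*b[1] + a[2]*b[2])
assert np.isclose(s21(a, tb), a[1]*b[0] + a[2]*b[1] + a[0]*b[2])
assert np.isclose(s22(a, tb), a[2]*b[0] + a[0]*b[1] + a[1]*b[2])
```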
For sets $1_{ij}$ sharing a fixed $j \neq 0$ (distinct only under generator $c$), the generalisation of the change of basis involves a mix of eq(\ref{cbasis1}), eq(\ref{cbasis2}) and the diagonal type similar to eq(\ref{dbasis1}), eq(\ref{dbasis2}), or equivalently, reordering the eigenstates of $c_{3_{0j}}$ in eq(\ref{cbasis1}), eq(\ref{cbasis2}).
For sets not sharing an index, but for which both index sums add up to 0 (modulo 3), the change of basis is possible but requires an additional redefinition of one of the 3 singlets (in addition to the triplet representations).
Fortunately, the IA produces results that are basis independent, so for a given Lagrangian one can avoid checking whether basis changes that lead to real CG exist or not.
Either using basis changes or the IA, the conclusion for Lagrangians with singlets coupling to triplet and anti-triplet in the manner of $\mathcal{L}_{CD}$ is the same.
There are 12 sets of 3 singlets that conserve CP, starting from $1_{00}$, $1_{01}$, $1_{02}$ and ending with $1_{20}$, $1_{21}$, $1_{22}$. The sets can be identified whenever the sum of both indices over the 3 singlets adds up to 0 (modulo 3), meaning that there is one non-trivial element of $\Delta(27)$ that does not distinguish the 3 singlets and it is then possible to choose that element to be the generator $c$ in another basis. As far as the 3-field couplings are concerned these 12 sets are equivalent through a change of basis to the choice with $i=0$, which is why this was the only set considered in \cite{Bhattacharyya:2012pi}.
For the other 72 choices of 3 singlets, or for 4 or more singlets, the complex CG coefficients can only be moved around by the change of basis, but not eliminated. In such situations, the coupling of the additional singlets to triplets is not allowed due to CP invariance of the Lagrangian, cf. \cite{Bhattacharyya:2012pi}.
A similar conclusion, based on an analysis of the automorphisms of $\Delta(27)$, was presented later in \cite{Chen:2014tpa}: that adding more than two non-trivial singlets (to a setting with just triplet representations) no longer allows a consistent CP transformation to be defined.\footnote{Strictly, how the singlets couple to triplet representations is very relevant, as discussed above cf. cases like the 8 sets of 3 non-trivial singlets that conserve CP automatically. Furthermore, this type of statement assumes all singlets have non-vanishing couplings to the triplets. This is not a spurious assumption as CP can be conserved even in settings with triplets and more than three non-trivial singlets, where triplet-decoupled singlets are still relevant due to coupling to other singlets. This can be a natural outcome if the vanishing couplings are enforced by a symmetry, which can be a specific CP symmetry as illustrated by $CP_1$ leading to $y_c=0$ for $\mathcal{L}_{CD}$.}
\subsubsection{$\Delta(27)$ and $\mathcal{L}_4$ \label{L4again}}
In \cite{Chen:2014tpa} a $\Delta(27)$ toy model with the trivial singlet and two non-trivial singlets was considered. This is actually the model illustrated here in figure \ref{L_toy}, which was the starting point for the scalar field Lagrangian $\mathcal{L}_4$ I used to exemplify the IA in section \ref{sec:L4}.
The authors of \cite{Chen:2014tpa} employed a $U(1)$ symmetry to restrict the allowed couplings and used the structures imposed by $\Delta(27)$ on the coupling matrices $F$, $G$, $H_\Psi$ and $H_\Sigma$ to compute the bottom row in figure \ref{Toy_decay}.
The field $S$ is associated with coupling matrix $F$ proportional to the identity, so the $1_{0}$ singlet in their notation corresponds to the trivial singlet $1_{00}$ here. The fields $X$, $Y$ are associated to $G$ proportional to $Y_{01}$ and to $H_\Psi$, $H_\Sigma$ proportional to $Y_{20}=Y_{10}^\dagger$ ($Y_{01}$ and $Y_{10}$ are shown in eq(\ref{Y01_Y10})). Based on these couplings I identify the singlets $1_{1}$ and $1_{3}$ in their notation with the singlets $1_{01}$ and $1_{20}$ here.
With these coupling matrices and the couplings $f$, $g$, $h_\Psi$ and $h_\Sigma$ defined in their Lagrangian as shown in figure \ref{L_toy}, calculating the CPIs in eq(\ref{ImF}), eq(\ref{ImG}) leads to:
\begin{eqnarray}
\mathrm{Im}\Tr \left[ F^\dagger H_\Psi F H_\Sigma^\dagger \right] &=& \mathrm{Im} \left( |f|^2 h_\Psi h_\Sigma^* \right) \,, \label{ImFcalc} \\
\mathrm{Im} \Tr \left[ G^\dagger H_\Psi G H_\Sigma^\dagger \right] &=& \mathrm{Im} \left( \omega |g|^2 h_\Psi h_\Sigma^* \right) \,, \label{ImGcalc}
\end{eqnarray}
cf. figure \ref{Toy_decay}. Both CPIs depend on the phase $\mathrm{Arg}(h_{\Psi} h_{\Sigma}^*)$ (the relative phase between two arbitrary Lagrangian parameters). It is clear that no value of this phase can make both CPIs vanish so the conclusion is again that $\Delta(27)$ is explicitly violated, or CP is explicitly violated, or at least one coupling vanishes.
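The incompatibility of the two CPIs can be seen in a normalization-independent way: taking $F$, $G$, $H_\Psi$, $H_\Sigma$ proportional to the identity, $Y_{01}$ and $Y_{10}^\dagger$ structures as described above, the relative factor between the two traces is exactly $\omega$, so no choice of $\mathrm{Arg}(h_\Psi h_\Sigma^*)$ makes both imaginary parts vanish. The sketch below (an illustrative \texttt{numpy} check with arbitrary sample couplings; the overall normalization of the coupling matrices is not fixed here, only the ratio is tested) verifies that relative factor.

```python
# Check the relative factor omega between the two toy-model CPIs.
import numpy as np

w = np.exp(2j * np.pi / 3)
f, g = 0.5 * np.exp(0.7j), 1.1 * np.exp(-0.2j)      # arbitrary sample couplings
hP, hS = 0.9 * np.exp(1.3j), 0.4 * np.exp(0.6j)

F = f * np.eye(3)                                    # identity structure
G = g * np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)
D = np.diag([1, w, w**2])
HP = hP * D.conj().T                                 # proportional to Y_{10}^dagger
HS = hS * D.conj().T

tF = np.trace(F.conj().T @ HP @ F @ HS.conj().T)
tG = np.trace(G.conj().T @ HP @ G @ HS.conj().T)

# Up to the moduli of f, g, the two traces differ by exactly omega:
assert np.isclose(tG / abs(g)**2, w * tF / abs(f)**2)
```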
If I impose $f=0$ (or $g=0$) due to CP conservation, then $S$ (or $X$) decouples from the triplets $\Psi$, $\Sigma$ and CP can be conserved for specific values of the phase $\mathrm{Arg}(h_{\Psi} h_{\Sigma}^*)$. Note that the respective CP transformations for $f=0$ will differ from those for $g=0$. The triplet-decoupled singlet in either CP conserving case is still coupled to the other singlet scalars through quartic couplings that are unconstrained by the $U(1)$ symmetry,
$S S^\dagger X X^\dagger$, $S S^\dagger Y Y^\dagger$, $X X^\dagger Y Y^\dagger$.
\subsection{Explicit geometrical CP violation}
I propose now a toy model similar to $\mathcal{L}_{CD}$ but where the field content is reduced to contain only triplet $A$ and singlets $C_{10}$, $D_{01}$, and there are no $U(1)$ or $Z_N$ symmetries forbidding singlets from coupling to $(A A^*)$. Then the Lagrangian has terms:
\begin{equation}
\mathcal{L}_{A} = y_{c} (A A^*)_{20} C_{10} + y_{d} (A A^*)_{02} D_{01} + h.c. \,.
\end{equation}
In contrast with $\mathcal{L}_{CD}$ which has the same singlets, there is no $\bar{B}$.
The situation is to some extent similar to adding to $\mathcal{L}_{CD}$ the trivial singlet, in the sense that with couplings to a triplet in the manner of $\mathcal{L}_{A}$, the only pairs of singlets that automatically conserve CP are the 8 pairs including $1_{00}$, and the four pairs with 2 non-trivial singlets $1_{01}$, $1_{02}$; $1_{10}$, $1_{20}$; $1_{11}$, $1_{22}$; $1_{12}$, $1_{21}$ (where the sum over the 2 singlets of both indices adds up to 0 modulo 3).
Instead of dwelling further on basis changes
I use the IA to study the CP properties of $\mathcal{L}_{A}$. The most general transformations are the same as in eq(\ref{CP2s}) (ignoring $\bar{B}$), and
the CP invariance conditions coming from $\mathcal{L}_{A}$ are similar to eq(\ref{Y2s1}), eq(\ref{Y2s2}):
\begin{eqnarray}
U^\dagger Y_{01} U e^{i p_{01}}= Y_{01}^* \,, \\
U^\dagger Y_{10} U e^{i p_{10}}= Y_{10}^* \,.
\label{Y2sA}
\end{eqnarray}
Rather than trying to find unitary matrices that may not exist, it is often better to skip directly to building CPIs that do not depend on them. In this case a relevant CPI is:
\begin{equation}
I_{2} \equiv \mathrm{Im} \Tr (Y_{01} Y_{10}^\dagger Y_{01}^\dagger Y_{10}) \,,
\label{I2s}
\end{equation}
which has to vanish for CP conservation.
Using $Y_{01}$, $Y_{10}$ from eq(\ref{Y01_Y10}) I find:
\begin{equation}
I_{2} = \mathrm{Im} (3 \omega^2 |y_{c}|^2 |y_{d}|^2) \,.
\label{I2sc}
\end{equation}
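This result is easy to reproduce numerically; the sketch below (an illustrative \texttt{numpy} check with arbitrary sample phases for $y_c$, $y_d$) confirms both the value of $I_2$ and that it is non-zero for any non-vanishing couplings.

```python
# Verify I_2 = Im(3 w^2 |y_c|^2 |y_d|^2) for arbitrary coupling phases.
import numpy as np

w = np.exp(2j * np.pi / 3)
yc, yd = 0.7 * np.exp(0.3j), 1.3 * np.exp(-1.1j)   # arbitrary sample couplings
Y10 = yc * np.diag([1, w, w**2])
Y01 = yd * np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)

I2 = np.trace(Y01 @ Y10.conj().T @ Y01.conj().T @ Y10).imag
assert np.isclose(I2, (3 * w**2 * abs(yc)**2 * abs(yd)**2).imag)
assert abs(I2) > 0   # non-zero for any non-vanishing couplings: explicit CPV
```

Changing the phases of the sample couplings leaves $I_2$ unchanged, reflecting that the CP violating phase comes from the group structure alone.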
This means that CP can be \emph{explicitly} violated, in a minimal model with only 2 $\Delta(27)$ singlets.
But furthermore the IA also shows that in this model, CP is violated by
a \emph{calculable phase} that is \emph{entirely determined by the symmetry of the Lagrangian} (and not by arbitrary parameters of the Lagrangian). The phases of the arbitrary $\mathcal{L}_{A}$ parameters $y_{c}$ and $y_{d}$ do not contribute, as shown very clearly in eq(\ref{I2sc}). This situation is directly comparable to the original definition of \emph{calculable phases} in \cite{Branco:1983tn}, where special cases with spontaneous CP violation were referred to as \emph{geometrical}. In analogy with the original definition, it is reasonable to refer to cases like this as \emph{explicit} geometrical CP violation.
Explicit geometrical CP violation was first identified in \cite{Branco:2015hea}. The model presented therein was not a scalar toy model but a physical multi-Higgs doublet model with scalars $h_{00}$, $h_{01}$ and $h_{10}$. It contains fermions $L$ (SM lepton doublets) and $\nu^c$ (SM singlet neutrinos) transforming under $\Delta(27)$ as triplet and anti-triplet. The neutrino Lagrangian is
\begin{equation}
\mathcal{L}_{3} = y_{00} (L \nu^c)_{00} h_{00} + y_{01} (L \nu^c)_{02} h_{01} + y_{10} (L \nu^c)_{20} h_{10} + h.c. \,.
\label{L3}
\end{equation}
CP is explicitly violated due to the presence of the 3 coupled singlets and the relevant CPI is naturally sensitive to all 3 couplings:
\begin{equation}
I_{3} \equiv \mathrm{Im} \Tr (Y_{00} Y_{01}^\dagger Y_{10} Y_{00}^\dagger Y_{01} Y_{10}^\dagger) \,,
\label{I3s}
\end{equation}
where $\Delta(27)$ imposes $Y_{00}$ proportional to the identity and I rename in eq(\ref{Y01_Y10}) $y_{c}=y_{10}$ and $y_{d}=y_{01}$ to match the notation from \cite{Branco:2015hea} used in eq(\ref{L3}).
Then:
\begin{equation}
I_{3}=\mathrm{Im} (3 \omega^2 |y_{00}|^2 |y_{01}|^2 |y_{10}|^2) \,,
\end{equation}
showing CP is \emph{explicitly violated by a phase only originating from the group structure, and not from arbitrary couplings} - the arbitrary phases of $y_{00}$, $y_{01}$ and $y_{10}$ do not affect $I_3$.
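The same numerical check applies to $I_3$; the sketch below (an illustrative \texttt{numpy} check, with arbitrary sample values for $y_{00}$, $y_{01}$, $y_{10}$) confirms that the result depends only on the moduli of the couplings and the group-theoretical phase $\omega^2$.

```python
# Verify I_3 = Im(3 w^2 |y00|^2 |y01|^2 |y10|^2) for arbitrary phases.
import numpy as np

w = np.exp(2j * np.pi / 3)
y00, y01, y10 = 0.5 * np.exp(0.2j), 0.7 * np.exp(-0.9j), 1.1 * np.exp(1.4j)
Y00 = y00 * np.eye(3)
Y01 = y01 * np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)
Y10 = y10 * np.diag([1, w, w**2])

I3 = np.trace(Y00 @ Y01.conj().T @ Y10 @ Y00.conj().T @ Y01 @ Y10.conj().T).imag
target = (3 * w**2 * abs(y00)**2 * abs(y01)**2 * abs(y10)**2).imag
assert np.isclose(I3, target)
```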
\section{Conclusions}
The main conclusion to be drawn is that the invariant approach is a powerful method to study the CP properties of specific Lagrangians, particularly in the presence of flavour symmetries. The CP-odd invariants built from a Lagrangian do not require detailed knowledge of group theory, and their vanishing encodes the relations between the couplings that are required for the Lagrangian to conserve CP. One can then insert into the relevant CP-odd invariants the couplings that respect the flavour symmetry, and obtain a basis-independent answer as to whether CP is violated by those couplings.
I have illustrated the use of the invariant approach with several examples, mostly based on the $\Delta(27)$ symmetry. For a given Lagrangian I commented on the consequences of adding a specific CP symmetry.
I also clarified the possible outcomes when adding more $\Delta(27)$ singlets to different models, noting that it is relevant to distinguish how the singlets couple to triplets. For
3 coupled singlets, a model with distinct triplet and anti-triplet (or two distinct triplets) can explicitly violate CP for 72 choices of 3 singlets; the other 12 choices
lead to automatic CP conservation, which occurs when the 3 singlets
are not distinguished by at least one non-trivial element of $\Delta(27)$ (this case includes 8 choices where the 3 singlets are non-trivial). In contrast, in a model with just one triplet the possibility of explicit CP violation exists already with 2 coupled non-trivial singlets. Finally, I used a simple toy model with a triplet and 2 coupled non-trivial singlets as an example of explicit geometrical CP violation, followed by the more realistic example with 3 singlets proposed in \cite{Branco:2015hea}.
\ack
This project is supported by the European Union's
Seventh Framework Programme for research, technological development and
demonstration under grant agreement no PIEF-GA-2012-327195 SIFT.
I thank the organisers of DISCRETE 2014 for hosting a very interesting conference, and also G. C. Branco, S. F. King for helpful discussions.
\section*{References}
\section{Introduction} \label{sec:intro}
An important problem in the topology of manifolds is deciding
whether there is an $n$-dimensional closed topological manifold in
the homotopy type of a given $n$-dimensional finite Poincar\'e
complex $X$.
Recall that the ``classical surgery theory'' alias
``Browder-Novikov-Sullivan-Wall-Kirby-Siebenmann theory'' provides
a method to decide this question in the form of a two-stage
obstruction theory, when $n \geq 5$. A result of Spivak provides us
with the Spivak normal fibration (SNF) $\nu_X \co X \ra \BSG$, which
is a spherical fibration, stably unique in some sense. If $X$ is
homotopy equivalent to a closed manifold then $\nu_X$ reduces to a
stable topological block bundle, say $\bar \nu_X \co X \ra \BSTOP$.
The existence of such a reduction is the first obstruction. In terms
of classifying spaces, the composition
\begin{equation} \label{eqn:first-obstruction}
H \circ \nu_X \co X \ra \BSG \ra \textup{B}(\GTOP)
\end{equation}
has to be homotopic to a constant map. Any reduction $\bar \nu_X$
determines a degree one normal map $(f,b) \co M \ra X$ from some
$n$-dimensional closed topological manifold $M$ to $X$ with a
surgery obstruction, which we call the quadratic signature of
$(f,b)$ and denote
\begin{equation} \label{eqn:second-obstruction}
\qsign_{\ZZ[\pi_1 (X)]} (f,b) \in L_n (\ZZ \pi_1 (X)).
\end{equation}
The complex $X$ is homotopy equivalent to a closed manifold if and
only if there exists a reduction for which $\qsign_{\ZZ[\pi_1 (X)]}
(f,b) = 0$.
The ``algebraic theory of surgery'' of Ranicki replaces the above
theory with a single obstruction, namely the total surgery
obstruction
\begin{equation} \label{eqn:tso}
s (X) \in \SS_n (X)
\end{equation}
where $\SS_n (X)$ is the $n$-dimensional structure group of $X$ in
the sense of the algebraic theory of surgery, which is a certain
abelian group associated to $X$. It is the aim of this paper to
discuss the definitions of $\SS_n (X)$ and $s(X)$ and explain how
they replace the classical theory.
The advantage of the algebraic theory is two-fold. On the one hand
it is a single obstruction theory which by itself can be more
convenient. On the other hand it turns out that the group $\SS_n
(X)$ has an $L$-theoretic definition, in fact it is isomorphic to a
homotopy group of the homotopy fiber of a certain assembly map in
$L$-theory. Hence the alternative approach allows us to solve our problem by entirely $L$-theoretic methods, for example by showing that the assembly map induces an isomorphism on homotopy groups and so $\SS_n (X) = 0$. This possibility is in contrast with the classical surgery theory, where the first obstruction (\ref{eqn:first-obstruction}) is not $L$-theoretic in nature.
However, in practice, often slightly different assembly maps turn out to be more accessible for studying, as is the case for example in the recent papers \cite{Bartels-Lueck(2009)}, \cite{Bartels-Lueck-Weinberger(2009)}. Then the theory needs to be modified to accommodate in addition the integer valued Quinn resolution obstruction. In the concluding section \ref{sec:conclusion} we offer more comments on this generalization and applications as well as examples.
The ingredients in the theory surrounding the total surgery obstruction are:
\begin{itemize}
\item The algebraic theory of surgery of Ranicki from \cite{Ranicki-I-(1980)}, \cite{Ranicki-II-(1980)}. This comprises various sorts of $L$-groups of chain complexes over various
additive categories with chain duality and with various notions of
Poincar\'e duality. The sorts of $L$-groups are ``symmetric'',
``quadratic'' and ``normal''. The last notion is also due to Weiss in \cite{Weiss-I(1985)}, \cite{Weiss-II(1985)}.
\item The classical surgery theory in the topological category from \cite{Browder(1971)}, \cite{Wall(1999)}. The
algebraic theory is not independent of the classical theory, in the
sense that the proof that the algebraic theory answers our problem
uses the classical theory.
\item Topological transversality in all dimensions and codimensions
as provided by Kirby-Siebenmann \cite{Kirby-Siebenmann(1977)} and
Freedman-Quinn \cite{Freedman-Quinn(1990)}.
\item The surgery obstruction isomorphism, \cite[Essay V, Theorem C.1]{Kirby-Siebenmann(1977)}:
\[
\qsign_\ZZ \co \pi_n (\G/\TOP) \xra{\cong} L_n (\ZZ) \; \textup{for} \; n\geq 1.
\]
\item Geometric normal spaces and geometric normal transversality,
both of which were invented by Quinn. However, the whole theory as
announced in \cite{Quinn(1972)} is not needed. It is replaced by
the algebraic normal $L$-groups from the first item.
\end{itemize}
\subsection{The basics of the algebraic theory of surgery}
\label{subsec:basics-of-alg-sur} Mishchenko and Ranicki defined for a ring $R$ with involution the
symmetric and quadratic $L$-groups $L^n (R)$ and $L_n (R)$
respectively, as cobordism groups of chain complexes over $R$ with
symmetric and quadratic Poincar\'e structure respectively. The quadratic $L$-groups are isomorphic to the surgery obstruction groups of Wall \cite{Wall(1999)}.
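As an informal illustration, recall the classical computation for $R = \ZZ$: the quadratic $L$-groups are $4$-periodic and given by
\[
L_n (\ZZ) \cong \begin{cases} \ZZ & n \equiv 0 \; \textup{mod} \; 4 \\ 0 & n \equiv 1 \; \textup{mod} \; 4 \\ \ZZ/2 & n \equiv 2 \; \textup{mod} \; 4 \\ 0 & n \equiv 3 \; \textup{mod} \; 4 \end{cases}
\]
detected by one eighth of the signature in dimensions $\equiv 0$ and by the Arf invariant in dimensions $\equiv 2$; these are the surgery obstruction groups for simply connected surgery problems.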
Let $W$ be the standard $\ZZ[\ZZ_2]$-resolution of $\ZZ$. An
$n$-dimensional \emph{symmetric structure} on a chain complex $C$ is
an $n$-dimensional cycle
\[
\varphi \in \Ws{C} := \Hom_{\ZZ[\ZZ_2]} (W,C \otimes_R C) \cong \Hom_{\ZZ[\ZZ_2]} (W,\Hom_R (C^{-\ast},C)).
\]
It can be written out in components $\varphi = (\varphi_i \co
C^{n-\ast} \ra C_{\ast+i})_{i \in \NN}$. If $\varphi_0 \co
C^{n-\ast} \ra C$ is a chain homotopy equivalence, then the
structure is called {\it Poincar\'e}. Given an $n$-dimensional cycle $x
\in C(X)$, there is a symmetric structure $\varphi (x)$ on $C(\tilde
X)$ over $\ZZ[\pi_1 (X)]$ with $\varphi (x)_0 = -\cap x \co
C(\tilde X)^{n-\ast} \ra C(\tilde X)$ given by an equivariant
version of the familiar Alexander-Whitney diagonal approximation
construction. If $X$ is a Poincar\'e complex with the fundamental
class $[X]$, then we obtain the \emph{symmetric signature} of $X$,
\begin{equation}
\ssign_{\ZZ[\pi_1 (X)]} (X) = [(C(\tilde X),\varphi ([X]))] \in L^n (\ZZ[\pi_1 (X)]).
\end{equation}
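For concreteness, recall the shape of the standard resolution: $W$ has $W_i = \ZZ[\ZZ_2]$ for $i \geq 0$, with differentials alternating between $1-T$ and $1+T$,
\[
\cdots \ra \ZZ[\ZZ_2] \xra{1-T} \ZZ[\ZZ_2] \xra{1+T} \ZZ[\ZZ_2] \xra{1-T} \ZZ[\ZZ_2],
\]
where $T$ denotes the generator of $\ZZ_2$. Unraveling the cycle condition on $\varphi$ in low degrees shows that $\varphi_0 \co C^{n-\ast} \ra C$ is a chain map, that $\varphi_1$ is a chain homotopy between $\varphi_0$ and its dual $T \varphi_0$, and that the higher $\varphi_i$ are higher coherence homotopies. A symmetric structure is thus a chain level duality map which is symmetric only up to a coherent system of homotopies.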
An $n$-dimensional \emph{quadratic structure} on a chain complex $C$
is an $n$-dimensional cycle
\[
\psi \in \Wq{C} := W \otimes_{\ZZ[\ZZ_2]} (C \otimes_R C) \cong W \otimes_{\ZZ[\ZZ_2]} (\Hom_R (C^{-\ast},C)).
\]
There is a symmetrization map $1+T \co \Wq{C} \ra \Ws{C}$ which allows us to see quadratic structures as refinements of symmetric structures. A quadratic structure is called {\it Poincar\'e} if its symmetrization is Poincar\'e. Such a quadratic structure is more subtle to obtain from a geometric situation. As explained in Construction \ref{constrn:quadconstrn}, given an $n$-dimensional cycle $x \in C(X)$ and a stable map $F \co \Sigma^p X_+ \ra \Sigma^p M_+$ there is a quadratic structure $\psi (x)$ over $\ZZ [\pi_1 (X)]$ on $C (\tilde M)$. A degree one normal map $(f,b) \co M \ra X$ between $n$-dimensional Poincar\'e complexes induces a map of Thom spaces $\Th (b) \co \Th (\nu_M) \ra \Th (\nu_X)$ which in turn, using $S$-duality,\footnote{see section \ref{sec:normal-cplxs} for more details if needed} produces a stable map $F \co \Sigma^p X_+ \ra \Sigma^p M_+$ for some $p$. The quadratic construction $\psi$ produces from the fundamental class $[X]$ a quadratic structure on $C(\tilde M)$. Considering the Umkehr map $f^{!} \co C(\tilde X) \ra \Sigma^{-p} C (\Sigma^p \tilde{X}_+) \ra \Sigma^{-p} C(\Sigma^p \tilde{M}_+) \ra C(\tilde{M})$ and the inclusion into the algebraic mapping cone $e \co C(\tilde M) \ra \sC (f^!)$ we obtain an $n$-dimensional quadratic Poincar\'e complex called the \emph{quadratic signature} of $(f,b)$
\begin{equation} \label{eqn:quad-sign-deg-one-normal-map}
\qsign_{\ZZ[\pi_1 (X)]} (f,b) = [(\sC(f^{!}),e_{\%} \psi ([X]))] \in L_n (\ZZ[\pi_1 (X)]).
\end{equation}
If $(f,b)$ is a degree one normal map with $M$ an $n$-dimensional
manifold, then the quadratic signature
(\ref{eqn:quad-sign-deg-one-normal-map}) coincides with the
classical surgery obstruction.
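In the simplest case, when $X$ is simply connected, this obstruction is given by classical invariants: for $n = 4k$ one has, under the identification $L_{4k} (\ZZ) \cong \ZZ$,
\[
\qsign_{\ZZ} (f,b) = \tfrac{1}{8} \left( \textup{sign} (M) - \textup{sign} (X) \right),
\]
while for $n = 4k+2$ the obstruction in $L_{4k+2} (\ZZ) \cong \ZZ/2$ is given by the Arf invariant of the quadratic form on the surgery kernel.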
\subsection{The structure group $\SS_n (X)$} A generalization of the
theory in \ref{subsec:basics-of-alg-sur} is obtained by replacing
the ring $R$ with an algebraic bordism category $\Lambda$. Such a
category contains an underlying additive category with chain duality
$\AA$. The category $\Lambda$ specifies a subcategory of the
category of structured chain complexes\footnote{meaning chain
complexes with a symmetric or quadratic structure} in $\AA$ and a
type of Poincar\'e duality. We obtain cobordism groups of such chain
complexes $L^n (\Lambda)$ and $L_n (\Lambda)$ and also spectra
$\bL^\bullet (\Lambda)$ and $\bL_\bullet (\Lambda)$ whose homotopy
groups are these $L$-groups.
The notion of an additive category with chain duality allows us to
consider structured chain complexes over a simplicial complex $X$,
with $\pi = \pi_1 (X)$. Informally one can think of such a
structured chain complex over $X$ as a compatible collection of
structured chain complexes over $\ZZ$ indexed by simplices of $X$.
``Forgetting'' the indexing ``assembles'' such a complex over $X$ to
a complex over $\ZZ$ and an equivariant version of this process
yields a complex over $\ZZ [\pi]$. The algebraic bordism categories
allow us to consider various types of Poincar\'e duality for structured
complexes over $X$. There is the \emph{local Poincar\'e duality}
where it is required that the structure over each simplex is
Poincar\'e, with the category of all such complexes denoted
$\Lambda(\ZZ)_\ast (X)$. It turns out that\footnote{here $\bL_\bullet (\ZZ)$ is short for $\bL_\bullet (\Lambda (\ZZ)_\ast (\textup{pt.}))$}
\begin{equation}
L_n (\Lambda(\ZZ)_\ast (X)) \cong H_n (X ; \bL_\bullet (\ZZ)) \qquad L^n (\Lambda(\ZZ)_\ast (X)) \cong H_n (X ; \bL^\bullet (\ZZ)).
\end{equation}
Then there is the \emph{global Poincar\'e duality} where only the
assembled structure is required to be Poincar\'e, with the category
of all such complexes denoted $\Lambda(\ZZ[\pi])$ for the purposes
of this introduction\footnote{The notation here is justified by
Proposition \ref{prop:algebraic-pi-pi-theorem} which says that the
$L$-theory of this category is indeed isomorphic to the $L$-theory
of the group ring.}. The assembly gives a functor $A \co
\Lambda(\ZZ)_\ast (X) \ra \Lambda(\ZZ [\pi])$ which induces on the
$L$-groups the assembly maps
\begin{equation} \label{eqn:assembly-map}
A \co H_n (X;\bL^\bullet (\ZZ)) \ra L^n (\ZZ[\pi]) \qquad A \co H_n (X;\bL_\bullet (\ZZ)) \ra L_n (\ZZ[\pi])
\end{equation}
In a familiar situation such chain complexes arise as follows. A
triangulated $n$-dimensional manifold $X$ has a dual cell
decomposition, where for each simplex $\sigma \in X$ the dual cell
$(D(\sigma),\del D(\sigma))$ is an $(n-|\sigma|)$-dimensional
submanifold with boundary. The collection of chain complexes
$C(D(\sigma),\del D(\sigma))$ together with corresponding symmetric
structures provides the symmetric signature of $X$ over $X$, which
is a locally Poincar\'e symmetric complex over $X$
\begin{equation} \label{eqn:sym-sign-over-X}
\ssign_X (X) \in H_n (X;\bL^\bullet (\ZZ)) \quad A (\ssign_X (X)) = \ssign_{\ZZ [\pi]} (X) \in L^n (\ZZ [\pi]).
\end{equation}
A degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional
manifold $M$ to a triangulated $n$-dimensional manifold $X$ can be
made transverse to the dual cells of $X$. Denoting $M(\sigma) = f^{-1} (D(\sigma))$ this yields a collection
of degree one normal maps of manifolds with boundary
$(f(\sigma),b(\sigma)) \co (M(\sigma),\del M(\sigma)) \ra
(D(\sigma),\del D(\sigma))$. The collection of chain complexes $\sC
(f(\sigma)^{!})$ with the corresponding quadratic structures
provides the quadratic signature of $(f,b)$ over $X$ which is a
locally Poincar\'e quadratic complex over $X$
\begin{equation}
\qsign_X (f,b) \in H_n (X;\bL_\bullet (\ZZ)) \quad A (\qsign_X (f,b)) = \qsign_{\ZZ [\pi]} (f,b) \in L_n (\ZZ [\pi]).
\end{equation}
The $L$-groups relevant for our geometric problem are modifications
of the above concepts obtained by using certain connective
versions.\footnote{This is a technical point addressed in section
\ref{sec:conclusion}}
The cobordism group of $n$-dimensional quadratic $1$-connective
complexes that are locally Poincar\'e turns out to be isomorphic to
the homology group $H_n (X;\bL_\bullet \langle 1 \rangle)$,
where the symbol $\bL_\bullet \langle 1 \rangle$ denotes the
$1$-connective quadratic $L$-theory spectrum. The cobordism group of
$n$-dimensional quadratic $1$-connective complexes that are globally
Poincar\'e turns out to be isomorphic to $L_n (\ZZ[\pi])$. The
assembly functor induces an assembly map analogous to
(\ref{eqn:assembly-map}).
The structure group $\SS_n (X)$ is the cobordism group of
$(n-1)$-dimensional quadratic chain complexes over $X$ that are
locally Poincar\'e, locally $1$-connective and globally
contractible. All these groups fit into the algebraic surgery exact
sequence:
\begin{equation} \label{eqn:ses}
\cdots \ra H_n (X;\bL_\bullet \langle 1 \rangle) \xra{A} L_n
(\ZZ[\pi_1 (X)]) \xra{\del} \SS_n (X) \xra{I} H_{n-1} (X;\bL_\bullet
\langle 1 \rangle) \ra \cdots
\end{equation}
The map $I$ is induced by the inclusion of categories and the map
$\del$ will be described below.
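Although not needed for the definition of the total surgery obstruction, it is worth recalling the geometric meaning of the structure group: for a closed $n$-dimensional topological manifold $X$ with $n \geq 5$, Ranicki identifies the algebraic surgery exact sequence (\ref{eqn:ses}) with the classical Sullivan--Wall surgery exact sequence, and the topological manifold structure set of $X$ with the group $\SS_{n+1} (X)$ \cite{Ranicki(1992)}.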
\subsection{The total surgery obstruction $s(X) \in \SS_n (X)$} We
need to explain how to associate to $X$ an $(n-1)$-dimensional
quadratic chain complex over $X$ that is locally Poincar\'e, locally
$1$-connective and globally contractible.
Being Poincar\'e is by definition a global condition. What local
structure does a Poincar\'e complex have? The answer is the
structure of a normal complex.
An $n$-dimensional normal complex $(Y,\nu,\rho)$ consists of a space
$Y$, a $k$-dimensional spherical fibration $\nu \co Y \ra \BSG (k)$
and a map $\rho \co S^{n+k} \ra \Th (\nu)$. There is also a notion
of a normal pair, a normal cobordism, and normal cobordism groups. A
Poincar\'e complex $X$ embedded into a large Euclidean space has a
regular neighborhood. Its boundary produces a model for the Spivak
normal fibration (SNF) $\nu_X$, and collapsing the boundary gives a
model for the Thom space $\Th (\nu_X)$ with the collapse map
$\rho_X$. In a general normal
complex the underlying space $Y$ does not have to be Poincar\'e.
Nevertheless it has a preferred homology class $h(\rho) \cap u(\nu)
= [Y] \in C_n (Y)$, where $u (\nu) \in C^k (\Th (\nu))$ is some
choice of the Thom class and $h$ denotes the Hurewicz homomorphism.
The class $[Y]$ produces a preferred equivalence class of symmetric
structures on $C(Y)$.
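The motivating example of a normal complex is a closed $n$-dimensional manifold $M$: embed $M \subset S^{n+k}$ for $k$ large, let $\nu$ be the spherical fibration underlying the normal bundle of the embedding, and let $\rho \co S^{n+k} \ra \Th (\nu)$ be the Pontryagin--Thom collapse map. More generally, as just described, every geometric Poincar\'e complex $X$ yields a normal complex $(X,\nu_X,\rho_X)$ via its Spivak normal fibration; the difference is that for a general normal complex the preferred class $[Y]$ need not induce Poincar\'e duality.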
There exists a notion of an $n$-dimensional normal algebraic complex
$(C,\theta)$ over any additive category with chain duality $\AA$. At
this stage we only say that the normal structure $\theta$ contains a
symmetric structure, and should be seen as a certain refinement of
that symmetric structure.\footnote{The details are presented in
section \ref{sec:normal-cplxs}} Again one can consider normal
complexes in an algebraic bordism category $\Lambda$, specifying an
interesting subcategory and the type of Poincar\'e duality on the
underlying symmetric structure. The cobordism groups are denoted
$NL^n (\Lambda)$ and there are also associated spectra $\bNL^\bullet
(\Lambda)$. For a ring $R$ we have the cobordism groups $NL^n (R)$
of $n$-dimensional normal complexes over $R$ with no Poincar\'e
duality in this case! A geometric normal complex $(Y,\nu,\rho)$
gives rise to a normal algebraic complex, called the normal
signature
\begin{equation} \label{eqn:norm-sign}
\nsign_{\ZZ [\pi_1 (Y)]} (Y) \in NL^n (\ZZ[\pi_1 (Y)]),
\end{equation}
whose symmetric substructure is the one associated to its
fundamental class $[Y]$.
So how is a Poincar\'e complex $X$ locally normal? For a simplex
$\sigma \in X$ consider the dual cell $(D(\sigma),\del D(\sigma))$,
which is a pair of spaces, not necessarily Poincar\'e. The SNF
$\nu_X$ can be restricted to $(D(\sigma),\del D(\sigma))$, remaining
a spherical fibration, say $(\nu_X (\sigma),\nu_X (\del \sigma))$. A
certain trick\footnote{Presented in section
\ref{sec:normal-signatures-over-X}} is needed to obtain a map
\[
\rho(\sigma) \co (D^{n+k-|\sigma|},S^{n-1+k-|\sigma|}) \ra (\Th
(\nu_X (\sigma)),\Th (\nu_X (\del \sigma)))
\]
providing us with a normal complex ``with boundary''. The collection
of these gives rise to a compatible collection of normal algebraic
complexes over $\ZZ$ and we obtain an $n$-dimensional normal
algebraic complex over $X$ whose symmetric substructure is globally
Poincar\'e. As such it can be viewed as a normal complex in two
distinct algebraic bordism categories.
There is the category $\widehat \Lambda (\ZZ) \langle 1 / 2 \rangle
(X)$, which is a $1/2$-connective version of all normal complexes
over $X$ with no Poincar\'e duality. We obtain the $1/2$-connective
normal signature of $X$ over $X$\footnote{The connectivity condition
turns out to be fulfilled, see section \ref{sec:conn-versions} for
explanation if needed}
\begin{equation} \label{eqn:norm-sign-over-X}
\nsign_X (X) \in NL^n (\widehat \Lambda (\ZZ) \langle 1/2 \rangle (X)) \cong H_n (X; \bNL^\bullet \langle 1/2 \rangle).
\end{equation}
Similarly as before the assembly of (\ref{eqn:norm-sign-over-X})
becomes (\ref{eqn:norm-sign}).
Then there is the category $\Lambda (\ZZ) \langle 1 / 2 \rangle
(X)$, which is a $1/2$-connective version of all normal complexes
over $X$ with global Poincar\'e duality. The cobordism group $NL^n
(\Lambda (\ZZ) \langle 1 / 2 \rangle (X))$ is called the
$1/2$-connective visible symmetric group and denoted $VL^n (X)$. We
obtain the visible signature of $X$ over $X$
\begin{equation} \label{eqn:visible-signature}
\vsign_X (X) \in VL^n (X).
\end{equation}
The forgetful functor $\Lambda (\ZZ) \langle 1 / 2 \rangle (X) \ra
\widehat \Lambda(\ZZ) \langle 1 / 2 \rangle (X)$ induces a map on
$NL$-groups which sends (\ref{eqn:visible-signature}) to
(\ref{eqn:norm-sign-over-X}).
But we are after an $(n-1)$-dimensional quadratic complex. To obtain
it we need in addition the concept of a boundary of a structured
chain complex. Consider an $n$-dimensional symmetric complex
$(C,\varphi)$ over any additive category with chain duality $\AA$.
Its boundary $(\del C ,\del \varphi)$ is an $(n-1)$-dimensional
symmetric complex in $\AA$ whose underlying chain complex is defined
as $\del C = \Sigma^{-1} \sC (\varphi_0)$. The $(n-1)$-dimensional
symmetric structure $\del \varphi$ is inherited from $\varphi$. It
becomes Poincar\'e in $\AA$, meaning $\sC ((\del \varphi)_0)$ is
contractible in $\AA$, by a formal argument.\footnote{The choice of
terminology is explained below Definition \ref{defn:symbdy}} The
boundary $(\del C,\del \varphi)$ can be viewed as measuring how the
complex $(C,\varphi)$ itself is Poincar\'e in $\AA$. It is shown in
Proposition \ref{lem:normal-gives-quadratic-boundary} that an
$n$-dimensional symmetric complex $(C,\varphi)$ which is a part of a
normal complex $(C,\theta)$ comes with a quadratic refinement $\del
\psi$ of the symmetric structure $\del \varphi$ on the
boundary.\footnote{The normal structure on $C$ provides a second
stable symmetric structure on $\del C$ in addition to $\del \varphi$
and the two structures stably coincide. Such a situation yields a
quadratic structure.}
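A guiding geometric example for the boundary construction: if $(C,\varphi)$ is a symmetric complex arising from an $n$-dimensional geometric Poincar\'e pair $(X,\del X)$, then its algebraic boundary $(\del C,\del \varphi)$ is homotopy equivalent to the $(n-1)$-dimensional symmetric Poincar\'e complex of $\del X$. The algebraic boundary thus mimics passing to the geometric boundary, and it is contractible precisely when the duality map $\varphi_0$ is an equivalence.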
From this description it follows that the boundary produces the
following two maps:
\begin{equation}
\del \co L_n (\ZZ[\pi]) \ra \SS_n (X) \quad \textup{and} \quad \del
\co VL^n (X) \ra \SS_n (X).
\end{equation}
The total surgery obstruction $s(X)$ is defined as the
$(n-1)$-dimensional quadratic complex over $X$, obtained as the
boundary of the visible signature
\begin{equation}
s (X) = \del \; \vsign_X (X) \in \SS_n (X).
\end{equation}
It is locally Poincar\'e, because by the above discussion any
boundary of a complex over $X$ is locally Poincar\'e. It is also
globally contractible since $X$ is Poincar\'e and hence the boundary
of the assembled structure is contractible. The connectivity
assumption is also fulfilled.
\
\begin{mainthm} \textup{\cite[Theorem 17.4]{Ranicki(1992)}} \label{main-thm}
Let $X$ be a finite Poincar\'e complex of formal dimension $n \geq
5$. Then $X$ is homotopy equivalent to a closed $n$-dimensional
topological manifold if and only if
\[
0 = s(X) \in \SS_n (X).
\]
\end{mainthm}
The proof is based on:
\begin{maintechthm} \quad \label{thm:main-technical-thm}
Let $X$ be a finite Poincar\'e complex of formal dimension $n \geq
5$ and denote by $t(X) = I(s(X)) \in H_{n-1} (X;\bL_\bullet \langle
1 \rangle)$. Then we have
\begin{enumerate}
\item[(I)] $t(X) = 0$ if and only if there exists a topological block bundle reduction of the SNF $\nu_X \co X \ra \BSG$.
\item[(II)] If $t(X) = 0$ then we have
\begin{align*}
\partial^{-1} s(X) = \{ & - \qsign_{\ZZ[\pi_1 (X)]} (f,b) \in L_n (\ZZ[\pi_1 (X)])
\; | \\
& (f,b) : M \ra X \; \textrm{degree one normal map}, \; M \;
\textrm{manifold} \}.
\end{align*}
\end{enumerate}
\end{maintechthm}
\begin{proof}[Proof of \mainth \;assuming \maintechth]
\
If $X$ is homotopy equivalent to a manifold then $t(X) = 0$ and by
(II) the set $\partial^{-1} s(X)$ contains $0$, hence $s(X) = 0$.
If $s(X)=0$ then $t(X)=0$ and hence by (I) the SNF of $X$ has a
topological block bundle reduction. Also $\partial^{-1} s(X)$ must
contain $0$ and hence by (II) there exists a degree one normal map
with target $X$ and with surgery obstruction $0$. Since $n \geq 5$,
classical surgery theory now allows us to perform surgery on this map
to obtain a homotopy equivalence, so $X$ is homotopy equivalent to a
closed manifold.
\end{proof}
\begin{rem}
The condition (I) might be puzzling for the following reason. As
recalled earlier, the classical surgery gives an obstruction to the
reduction of the SNF in the group $[X,\textup{B}(\G/\TOP)] = H^1
(X;\G/\TOP)$. It is important to note that here the
$\Omega^\infty$-space structure used on $\G/\TOP$ corresponds to the
Whitney sum and hence not the one that is compatible with the
well-known homotopy equivalence $\G/\TOP \simeq \bL \langle 1
\rangle_0$. On the other hand $t(X) \in H_{n-1} (X; \bL_\bullet
\langle 1 \rangle)$. We note that the claim of (I) is NOT that the
two groups are isomorphic, it merely says that one obstruction is
zero if and only if the other is zero.
\end{rem}
\subsection{Informal discussion of the proof of \maintechth}
\
Part (I): The crucial result is the relation between the quadratic,
symmetric, and normal $L$-groups of a ring $R$ via the long exact
sequence
\begin{equation} \label{eqn:quad-sym-norm-L-groups}
\xymatrix{
\ldots \ar[r] &
L_n (R) \ar[r]^{1+T} &
L^n (R) \ar[r]^J &
NL^n (R) \ar[r]^{\partial} &
L_{n-1} (R) \ar[r] &
\ldots
}
\end{equation}
Here the maps $1+T$ and $\del$ were already discussed. The map $J$
exists because a symmetric Poincar\'e structure on a chain complex
yields a preferred normal structure, reflecting the observation that
a Poincar\'e complex has the SNF and hence gives a geometric normal
complex. Using suitable connective versions there is a related
homotopy fibration sequence of spectra (here the implicit ring is
$\ZZ$)
\begin{equation} \label{fib-seq:quad-sym-norm-long}
\bL_\bullet \langle 1 \rangle \ra \bL^\bullet \langle 0 \rangle \ra
\bNL^\bullet \langle 1/2 \rangle \ra \Sigma \bL_\bullet \langle 1
\rangle.
\end{equation}
The exactness of (\ref{eqn:quad-sym-norm-L-groups}) and the fibration
property of (\ref{fib-seq:quad-sym-norm-long}) are by no means
obvious. They are results of \cite{Weiss-I(1985),Weiss-II(1985)}, for
which we offer some explanation in section \ref{sec:normal-cplxs}.
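As an illustration (a classical computation, not strictly needed in the sequel), take $R = \ZZ$ in (\ref{eqn:quad-sym-norm-L-groups}). Recall that $L_n (\ZZ) \cong \ZZ, 0, \ZZ/2, 0$ and $L^n (\ZZ) \cong \ZZ, \ZZ/2, 0, 0$ for $n \equiv 0,1,2,3 \; \textup{mod} \; 4$, detected by the signature, the de Rham invariant, and the Arf invariant, and that under these identifications the symmetrization map $1+T \co L_0 (\ZZ) \ra L^0 (\ZZ)$ is multiplication by $8$. Since the quadratic $L$-groups are $4$-periodic, $L_{-1} (\ZZ) \cong L_3 (\ZZ) = 0$, and exactness gives
\[
NL^0 (\ZZ) \cong \ZZ / 8 \ZZ,
\]
so the normal $L$-groups record, among other things, the divisibility by $8$ of the signature of a symmetric Poincar\'e complex.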
The sequence (\ref{fib-seq:quad-sym-norm-long}) induces a long exact
sequence in homology
\begin{equation} \label{les:hlgy-of-L-thy-spectra}
\cdots \ra H_n (X ; \bL^\bullet \langle 0 \rangle) \xra{J} H_n (X;
\bNL^\bullet \langle 1/2 \rangle) \xra{\del} H_{n-1} (X ;
\bL_\bullet \langle 1 \rangle) \ra \cdots
\end{equation}
Another tool is the $S$-duality from stable homotopy theory, which
gives
\begin{equation} \label{eqn:S-duality}
H_n (X ; \bE) \cong H^k (\Th (\nu_X) ; \bE) \quad \textup{with} \;
\bE = \bL_\bullet \langle 1 \rangle, \bL^\bullet \langle 0 \rangle,
\; \textup{or} \; \bNL^\bullet \langle 1/2 \rangle.
\end{equation}
This transforms the exact sequence (\ref{les:hlgy-of-L-thy-spectra}) into an exact sequence in the cohomology of the Thom space $\Th (\nu_X)$.
The proof of the theorem is organized within the following
commutative braid:
\vspace{1cm}
\[
\xymatrix@C=0.5em{ H_n (X;\bL^\bullet \langle 0 \rangle) \ar[dr]
\ar@/^2.5pc/[rr] & & H_n (X;\bNL^\bullet \langle 1/2 \rangle)
\ar[dr] \ar@/^2.5pc/[rr] & &
L_{n-1}\ (\ZZ[\pi]) \\
& VL^n (X) \ar[ur] \ar[dr] & & H_{n-1} (X;\bL_\bullet \langle 1 \rangle) \ar[ur] \ar[dr] \\
L_n (\ZZ[\pi]) \ar[ur] \ar@/_2.5pc/[rr] & & \SS_{n} (X) \ar[ur]
\ar@/_2.5pc/[rr] & & H_{n-1}\ (X;\bL^\bullet \langle 0 \rangle) }
\]
\vspace{1cm}
We observe that
\begin{equation} \label{eqn:reduction-obstruction}
t (X) = \del \; \nsign_X (X) \in H_{n-1} (X;\bL_\bullet \langle 1 \rangle)
\end{equation}
with the normal signature over $X$ from
(\ref{eqn:norm-sign-over-X}).
Assuming the above, the proof proceeds as follows. If $\nu_X$ has a
reduction, then it has an associated degree one normal map $(f,b)
\co M \ra X$ which can be made transverse to the dual cells of $X$.
For each $\sigma$ the preimage $(M(\sigma),\del M(\sigma))$ of the
dual cell $(D(\sigma),\del D(\sigma))$ is an
$(n-|\sigma|)$-dimensional submanifold with boundary and
generalizing (\ref{eqn:sym-sign-over-X}) we obtain
\begin{equation} \label{eqn:sym-sign-of-mfd-over-X}
\ssign_X (M) \in L^n (\Lambda(\ZZ) \langle 0 \rangle_\ast (X)) \cong H_n (X ; \bL^\bullet \langle 0 \rangle)
\end{equation}
The mapping cylinder of the degree one normal map $(f,b)$ becomes a
normal cobordism between $M$ and $X$ and, as such, it produces a
normal algebraic cobordism between $J (\ssign_X (M))$ and $\nsign_X
(X)$. In other words the symmetric signature $\ssign_X (M)$ is a lift of the normal signature $\nsign_X (X)$ from (\ref{eqn:norm-sign-over-X}) and it follows from the exact sequence
(\ref{les:hlgy-of-L-thy-spectra}) that $t(X)$ vanishes.
The crucial concept used in the proof of the other direction is that
of an orientation of a spherical fibration with respect to a ring
spectrum, such as $\bL^\bullet \langle 0 \rangle$ and $\bNL^\bullet
\langle 1/2 \rangle$. For such a ring spectrum an $\bE$-orientation
of the SNF $\nu_X$ is an element in $H^k (\Th (\nu_X) ; \bE)$ with a
certain property. By the $S$-duality (\ref{eqn:S-duality}) it
corresponds to a homology class in $H_n (X; \bE)$. It turns out that
the SNF $\nu_X$ has a certain canonical $\bNL^\bullet \langle 1/2
\rangle$-orientation which corresponds to the normal signature
(\ref{eqn:norm-sign-over-X}) in this way. Similarly if there is a
reduction of $\nu_X$ with a degree one normal map $(f,b) \co M \ra
X$, then it gives an $\bL^\bullet \langle 0 \rangle$-orientation of
$\nu_X$ which corresponds to the symmetric signature
(\ref{eqn:sym-sign-of-mfd-over-X}) of $M$ over $X$.
Theorem \ref{thm:lifts-vs-orientations} says that a spherical
fibration has a topological block bundle reduction if and only if
its canonical $\bNL^\bullet \langle 1/2 \rangle$-orientation has an
$\bL^\bullet \langle 0 \rangle$-lift. The proof proceeds by analyzing
classifying spaces for spherical fibrations with orientations; a
certain diagram (Proposition
\ref{prop:canonical-L-orientations-on-class-spaces}) is shown to be
a homotopy pullback. Here one uses the fact that the surgery
obstruction map $\pi_n (\G/\TOP) \ra L_n (\ZZ)$ is an isomorphism
for $n > 1$.
Part (II): To show the inclusion of the right hand side one needs to study the quadratic signatures over $X$
of degree one normal maps $(f,b) \co M \ra X$ with $M$ an $n$-dimensional closed manifold and $X$ an $n$-dimensional Poincar\'e complex. That means studying the local structure of such maps, which boils down to studying quadratic signatures of degree one normal maps $(g,c) \co N \ra Y$ where $Y$ is only a normal complex. In this case one obtains a non-Poincar\'e quadratic complex whose boundary can be related to the quadratic boundary of the normal complex $Y$, as shown in Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx}. Passing to complexes over $X$ one obtains a quadratic complex over $X$, still denoted $\qsign_X (f,b)$ although it is no longer locally Poincar\'e, whose boundary is described in Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X}, establishing the required inclusion.
To study the other inclusion a choice is made of a degree one normal map $(f_0,b_0) \co M_0 \ra X$. Recall that all degree one normal maps with the target $X$ are organized in the cobordism set of the normal invariants $\sN (X)$. One considers the surgery obstruction map relative to $(f_0,b_0)$
\begin{equation} \label{eqn:surgery-obstruction-map}
\qsign_{\ZZ [\pi_1 (X)]} (-,-) - \qsign_{\ZZ [\pi_1 (X)]} (f_0,b_0) \co \sN (X) \ra L_n (\ZZ [\pi_1 (X)]).
\end{equation}
The signature $\qsign_X$ over $X$ relative to $(f_0,b_0)$ produces a map from the normal invariants $\sN (X)$ to the homology group $H_n (X ; \bL_\bullet \langle 1 \rangle)$. The main technical result is now Proposition \ref{prop:identification} which states that this map provides us with an identification of (\ref{eqn:surgery-obstruction-map}) with the assembly map (\ref{eqn:assembly-map}) for $X$. In particular it says that $\qsign_X$ relative to $(f_0,b_0)$ produces a bijection. Via the standard identification $\sN (X) \cong [X;\G/\TOP]$ and the bijection $[X,\G/\TOP] \cong H^0 (X;\bL_\bullet \langle 1 \rangle)$ (using the Kirby-Siebenmann isomorphism again) this boils down to identifying $\qsign_X$ with the Poincar\'e duality with respect to the spectrum $\bL_\bullet \langle 1 \rangle$. Here, similarly as in part (I), a relationship between the signatures and orientations with respect to the $L$-theory spectra plays a prominent role (Proposition \ref{prop:S-duals-of-orientations-are-signatures-relative-case-non-mfd} and Lemma \ref{lem:refined-orientations-vs-cup-product}).
\subsection*{The purpose of the paper}
As the title suggests, this article revisits the existing theory, which was developed over decades by Andrew Ranicki, with contributions also due to Michael Weiss. On the one hand it is meant as a guide to the theory. We decided to write such a guide while we were learning the theory ourselves; results from various sources needed to be combined, and we felt that it might be a good idea to have them in one place. The sources are \cite{Ranicki(1979),Ranicki(1981),Levitt-Ranicki(1987),Ranicki(1992)}, and also \cite{Weiss-I(1985),Weiss-II(1985)}. On the other hand, we found certain statements which were correct but stated without proof, and for which we were able to supply proofs. These are:
\begin{itemize}
\item The fact that the quadratic boundary of a certain (normal, Poincar\'e) geometric pair associated to a degree one normal map from a manifold to a Poincar\'e space agrees with the surgery obstruction of that map is proved in our Example \ref{expl:normal-symm-poincare-pair-gives-quadratic}. The claim was stated in \cite[page 622]{Ranicki(1981)} without proof. The proposition preceding the claim suggests the main idea of the proof, but we felt that writing it down was needed.
\item The construction of the normal signature $\nsign_X (X)$ in section 11 for an $n$-dimensional geometric Poincar\'e complex $X$. This was claimed to exist in \cite[Example 9.12]{Ranicki(1992)} (see also \cite[Errata for page 103]{Errata-Ranicki(1992)}), for $X$ any $n$-dimensional geometric normal complex. We provide details of this construction when $X$ is Poincar\'e, which is enough for our purposes.
\item In the proof of Theorem \ref{thm:lifts-vs-orientations} a certain map has to be identified with the surgery obstruction map. The identification was claimed in \cite[page 291]{Ranicki(1979)} without details. Theorem \ref{thm:lifts-vs-orientations} is also essentially equivalent to \cite[Proposition 16.1]{Ranicki(1992)}, which has a sketch proof and is referenced back to \cite{Ranicki(1979)} for further details.
\item The relation between the quadratic complex associated to a degree one normal map from a manifold to a normal complex and the quadratic boundary of the normal complex itself as described in Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx}. We also provide the proof of Proposition \ref{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair} which is a relative version of Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx} and it is also an ingredient in the proof of Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} which gives information about the quadratic signature over $X$ of a degree one normal map from a manifold to a Poincar\'e complex $X$. Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx} was stated as \cite[Proposition 7.3.4]{Ranicki(1981)}, but only contained a sketch proof. Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} is used in the proof of \cite[Theorem 17.4]{Ranicki(1992)}.
\end{itemize}
Over time we have also heard from several other mathematicians in the area that such clarifications were needed. We believe that with this paper we provide an answer to these questions and that the proof of the main theorem as presented here is complete. We also hope that our all-in-one-package paper makes the presentation of the whole theory surrounding the total surgery obstruction more accessible. We would be grateful for comments from interested readers should there still be unclear parts.
It should be noted, however, that we do not bring new technology to the proof, nor do we state any new theorems. The proofs we supply, as listed above, are in the spirit of the two main sources \cite{Ranicki(1979)} and \cite{Ranicki(1992)}.
\subsection*{Structure}
The reader will recognize that our table of contents closely follows part I and the first two sections of part II of the book \cite{Ranicki(1992)}. We find most of the book a very good and readable source, so in the background parts of this paper we confine ourselves to a survey-like treatment. In the parts where we felt the need for clarification, in particular the proof of the main theorem, we provide the details.
The reader of this article should be familiar with classical surgery theory and with at least the basics of algebraic surgery theory. Sections \ref{sec:algebraic-cplxs} to \ref{sec:surgery-sequences} contain a summary of the results from part I of \cite{Ranicki(1992)} which are needed to explain the theory around the main problem, sometimes equipped with informal comments. The reader familiar with these results can skip those sections and start reading section \ref{sec:normal-signatures-over-X}, where the proof of the main theorem really begins. A reader familiar with everything except normal complexes may in addition consult section \ref{sec:normal-cplxs}.
\subsection*{Literature}
Besides the above mentioned sources some background can be found in \cite{Ranicki-I-(1980)}, \cite{Ranicki-II-(1980)}, \cite{Ranicki-foundations-(2001)}, \cite{Ranicki-structure-set-(2001)}.
\subsection*{Note.} Parts of this work will be used in the PhD thesis of Philipp K\"uhl.
\subsection*{Acknowledgments.}
We would like to thank Andrew Ranicki for stimulating lectures on algebraic surgery in Fall 2008, for answering numerous questions, and for generous support. We would also like to thank Ian Hambleton for inspiring conversations and Frank Connolly, Diarmuid Crowley, Qayum Khan, Wolfgang L\"uck and Martin Olbermann for comments and suggestions on the first draft of this paper.
\section{Surgery sequences and the structure groups $\SS_n (X)$} \label{sec:surgery-sequences}
Now we have assembled all the tools needed to define, for a finite simplicial complex $X$, the group $\SS_n (X)$, which is the home of the total surgery obstruction if $X$ is an $n$-dimensional Poincar\'e complex. It is important and useful not only to define the group $\SS_n (X)$ itself, but also to relate it to other groups which we might understand better. So the group $\SS_n (X)$ is placed into a commutative braid, which is obtained from the braid in section \ref{sec:alg-bord-cat} by plugging in suitable algebraic bordism categories. We recall these now.
In fact the categories used are, as indicated below, various connective versions of the following. The underlying additive category with chain duality is
\begin{itemize}
\item $\AA=\ZZ_{*}(X)$ is the additive category of finitely generated free $\ZZ$-modules over $X$.
\end{itemize}
Now we specify the subcategories of $\BB (\AA)$ needed to construct the braid.
\begin{itemize}
\item $\BB = \BB(\ZZ_{*}(X))$ are the bounded chain complexes in $\AA$,
\item $\CC = \{C\in\BB\;|\;A(C)\simeq *\}$ are the \emph{globally contractible} chain complexes in $\BB(\AA)$,
\item $\DD = \{C\in\BB\;|\;C(\sigma) \simeq * \; \forall \sigma\in X\}$ are the \emph{locally contractible} chain complexes in $\BB(\AA)$.
\end{itemize}
The precise connective versions used are indicated in the braid diagram below, which is taken from \cite[Proposition 15.18]{Ranicki(1992)}. For lack of space we have omitted the underlying category $\AA$ from the notation, as it is the same everywhere. We also note that obviously $\DD \langle 0 \rangle = \DD \langle 1 \rangle$:
\vspace{1cm}
{\footnotesize
\[
\xymatrix@C=0.05em{ NL^n (\BB \langle 0 \rangle ,\DD \langle 0
\rangle) \ar^{(2)}[dr] \ar@/^2.5pc/_{(4)}[rr] & & NL^n (\BB \langle 0
\rangle,\BB \langle 1 \rangle) \ar[dr] \ar@/^2.5pc/[rr] & &
L_{n-1}\ (\BB \langle 1 \rangle,\CC \langle 1 \rangle) \\
& NL^n (\BB \langle 0 \rangle,\CC \langle 1\rangle) \ar[ur] \ar[dr]
& & L_{n-1} (\BB \langle 1 \rangle,\DD \langle 1 \rangle) \ar[ur] \ar[dr] \\
L_n (\BB \langle 1 \rangle,\CC \langle 1 \rangle) \ar_{(3)}[ur]
\ar@/_2.5pc/^{(1)}[rr] & & L_{n-1} (\CC \langle 1 \rangle,\DD \langle 1
\rangle) \ar[ur] \ar@/_2.5pc/[rr] & & NL^{n-1}\ (\BB \langle 0
\rangle,\DD \langle 0 \rangle) }
\]
}
\vspace{1cm}
Notice that the exact sequence labeled (1) is induced by the assembly functor $A \co \Lambda (\ZZ) \langle 1 \rangle_\ast (X) \ra \Lambda (\ZZ) \langle 1 \rangle (X)$ from section \ref{sec:assembly}. The other sequences are induced by analogous functors; the precise statements are left to the reader. It is more interesting at this stage that we have already identified various groups in the braid. We recapitulate using Propositions \ref{prop:L-theory-over-K-are-homology-theories-conn-versions} and \ref{prop:NL-spectrum-of-lower-star-K}:
\begin{align*}
L_n (\BB \langle 1 \rangle,\CC \langle 1 \rangle) & = L_n (\ZZ[\pi_1 (X)]) \\
L_n (\BB \langle 1 \rangle,\DD \langle 1 \rangle) & = H_n (X;\bL_\bullet \langle 1 \rangle) \\
NL^n (\BB \langle 0 \rangle,\DD \langle 0 \rangle) & = H_n (X;\bL^\bullet \langle 0 \rangle) \\
NL^n (\BB \langle 0 \rangle,\BB \langle 1 \rangle) & = H_n (X;\bNL^\bullet \langle 1/2 \rangle)
\end{align*}
The sequence containing only homology theories can be thought of as induced by the cofibration sequence from Proposition \ref{prop:fib-seq-of-quad-sym-norm-connective-version}. In addition we have new terms as follows. We keep the notation from the beginning of this section.
\begin{defn} \cite[chapter 17]{Ranicki(1992)}
Let $X$ be a finite simplicial complex. Define the $n$-dimensional \emph{structure group} of $X$ to be
\[
\SS_n (X) = L_{n-1} (\AA,\CC \langle 1 \rangle,\DD \langle 1 \rangle).
\]
\end{defn}
So an element in $\SS_n (X)$ is represented by an $(n-1)$-dimensional $1$-connective QAC in $\ZZ_\ast (X)$ which is globally contractible and locally Poincar\'e. We will see in the following section how to obtain such a chain complex from a geometric situation.
\begin{defn} \cite[chapter 17]{Ranicki(1992)}
Let $X$ be a finite simplicial complex. Define the $n$-dimensional \emph{visible symmetric $L$-group} of $X$ to be
\[
VL^n(X) = NL^n (\AA,\BB \langle 0 \rangle, \CC \langle 1 \rangle).
\]
\end{defn}
The visible symmetric $L$-groups were defined by Weiss in \cite{Weiss(1992)} to clarify certain relations between the symmetric and quadratic $L$-groups. We will not need this aspect; what is important is that an element in $VL^n (X)$ is represented by an $n$-dimensional $0$-connective NAC in $\ZZ_\ast (X)$ whose underlying symmetric structure is locally $0$-connective and globally Poincar\'e. We will see in the following section a geometric situation which yields such a chain complex.
The sequence labeled (1) in the braid is known as the \emph{algebraic surgery exact sequence}:
\begin{equation}
\cdots \ra H_n (X;\bL_\bullet \langle 1 \rangle) \xra{A} L_n (\ZZ[\pi_1
(X)]) \xra{\del} \SS_n (X) \ra H_{n-1} (X;\bL_\bullet \langle 1 \rangle)
\ra \cdots
\end{equation}
Summarizing the above identification we obtain the commutative braid which will be our playground in the rest of the paper:
\vspace{1cm}
\[
\xymatrix@C=0.5em{ H_n (X;\bL^\bullet \langle 0 \rangle) \ar[dr]
\ar@/^2.5pc/[rr] & & H_n (X;\bNL^\bullet \langle 1/2 \rangle)
\ar[dr] \ar@/^2.5pc/[rr] & &
L_{n-1}\ (\ZZ[\pi]) \\
& VL^n (X) \ar[ur] \ar[dr]^{\del} & & H_{n-1} (X;\bL_\bullet \langle 1 \rangle) \ar[ur] \ar[dr] \\
L_n (\ZZ[\pi]) \ar[ur] \ar@/_2.5pc/[rr]^{\del} & & \SS_{n} (X) \ar[ur]
\ar@/_2.5pc/[rr] & & H_{n-1}\ (X;\bL^\bullet \langle 0 \rangle) }
\]
\vspace{1cm}
\section{Definition of $s(X)$} \label{sec:defn-of-s(X)}
\begin{defn} \cite[17.1]{Ranicki(1992)} Define
\begin{equation}
s(X) : = \partial \big( \vsign_X (X) \big) \in \SS_n (X).
\end{equation}
\end{defn}
Let us have a closer look at the $(n-1)$-dimensional QAC $(C,\psi)$ in the category $\Lambda (\ZZ) \langle 1 \rangle_\ast (X)$ representing $s(X)$. By definition of $\vsign_X (X)$ and of the map $\del \co VL^n (X) \ra \SS_n (X)$, the complex $C(\sigma)$ is the mapping cone of the duality map
\[
\varphi [X(\sigma)] \co C^{n-|\sigma|} (D(\sigma)) \ra C(D(\sigma),\del
D(\sigma)).
\]
The quadratic structure $\psi (\sigma)$ is more subtle to describe. It corresponds to the normal structure on $X(\sigma)$ via Lemma \ref{lem:normal-gives-quadratic-boundary}.
We clearly see that if $X$ is a manifold then the mapping cones of the maps $\varphi [X(\sigma)]$ above are contractible and the total surgery obstruction equals $0$. If $X$ is homotopy equivalent to a manifold then the mapping cylinder of the homotopy equivalence provides, via Construction \ref{con:normal-symmetric-signature-over-X-for-a-deg-one-normal-map}, a cobordism from $(C,\psi)$ to $0$.
\section{Normal signatures over $X$} \label{sec:normal-signatures-over-X}
As indicated in the introduction, in order to define the total surgery obstruction of $X$ we need to discuss the normal and the visible signature of $X$. In this section we define the visible signature $\vsign_X (X) \in VL^n (X)$ as a refinement of the normal signature $\nsign_X (X) \in H_n (X;\bNL^\bullet \langle 1/2 \rangle)$. In fact, it should be expected that any $n$-dimensional GNC $(X,\nu,\rho)$ has an associated normal signature over $X$ which assembles to the normal signature over $\ZZ[\pi_1 X]$ defined in section \ref{sec:normal-cplxs}. However, we are not able to prove such a general statement. We need to assume that the normal complex comes from a Poincar\'e complex with its SNF.\footnote{In \cite[Errata for page 103]{Errata-Ranicki(1992)} a construction of the normal signature over $X$ for a normal complex $X$ is actually given. However, since in the proofs of the subsequent sections we directly use the specific properties of the construction presented in this section in the case when $X$ is Poincar\'e, we only discuss this special case.} Before we start we still need some technical preliminaries.
\begin{construction}
Recall some more ideas from \cite{Whitehead(1962)} surrounding the concept of supplement described in section
\ref{sec:gen-hlgy-thies}. Let $K \subseteq L$ be a simplicial subcomplex. The supplement is a subcomplex $\overline{K} \subseteq
L'$. As explained in \cite{Whitehead(1962)} there is an embedding
$|L| \subset |K'| \ast |\overline K|$ into the join of the two
realizations. A point in $|K'| \ast |\overline K|$ can be described
as $t \cdot x + (1-t) \cdot y$ for $x \in |K'|$, $y \in |\overline
K|$, and $t \in [0,1]$. The space $|L|$ can be decomposed as the
union of two subspaces
\begin{align*}
N = N(K') = & \{ t \cdot x + (1-t) \cdot y \; | \; t \geq 1/2 \} \cap |L| \\
\overline N = N(\overline K) = & \{ t \cdot x + (1-t) \cdot y \; | \; t \leq 1/2 \} \cap |L|.
\end{align*}
These come with obvious deformation retractions $r \co N \ra |K|$
and $\overline r \co \overline N \ra |\overline K|$.
Next denote $N(\sigma) = N \cap (|D(\sigma,K)| \ast |\overline K|)$
for $\sigma \in K$. Then we have the dissection $N = \cup_{\sigma
\in K} N(\sigma)$, and the retraction $r$ respects the dissections
of $N$ and $|K|$:
\[
r|_{N(\sigma)} = r(\sigma) \co N(\sigma) \ra |D(\sigma,K)|.
\]
\[
\jointriangle
\]
\end{construction}
\begin{construction} \label{con:simpl-model-for-the-thom-space-of-snf}
Consider now the case when $X$ is a finite simplicial Poincar\'e
complex of dimension $n$ which we embed into $\del \Delta^{m+1}$, that is, $K = X$ and $L = \del \Delta^{m+1}$ in the above notation. For $m$ large enough the homotopy fiber of the projection map $\del r \co \del N = N \cap \overline N \ra X$ is homotopy equivalent to
$S^{m-n-1}$ and the associated spherical fibration is the SNF
$\nu_X$. In more detail, there is a $(D^{m-n},S^{m-n-1})$-fibration
$p \co (D(\nu_X),S(\nu_X)) \ra X$ and a homotopy equivalence of
pairs $i \co (N,\del N) \ra (D(\nu_X),S(\nu_X))$ such that the
following diagram commutes
\[
\xymatrix{
(N,\del N) \ar[rr]^{i} \ar[dr]_{r} & & (D(\nu_X),S(\nu_X)) \ar[dl]^{p} \\
& X & }
\]
The map $p$ is now an honest fibration. Recall from Definition \ref{defn:sigma-m} the complex $\Sigma^m$ and that we have an embedding $\overline X \subset \Sigma^m$. It follows that
\[
|\Sigma^m / \overline X| \simeq N/ \del N \simeq \Th(\nu_X).
\]
\end{construction}
\begin{construction} \label{con:geom-normal-signature-over-X}
Now we would like to present an analogue of Construction \ref{con:sym-construction-over-cplxs-lower-star-cplx} for normal complexes. What we are aiming for is an assignment
\[
\gnsign_X (X) \co \sigma \; \mapsto \; (X(\sigma),\nu(\sigma),\rho(\sigma)) \in \big(\bOmega^N_{n-m}\big)^{(m-|\sigma|)} \; \textup{for each} \; \sigma \in X.
\]
The first two entries are defined as follows:
\[
X(\sigma) = |D(\sigma,X)|, \quad \nu(\sigma) = \nu_X \circ \textup{incl} \co X(\sigma) \subset X \ra \BSG (m-n)
\]
To define $\rho (\sigma)$ consider the following commutative diagram
\[
\xymatrix{
N(\sigma) \ar[r] \ar[ddr] \ar@{-->}[dr]^{\rho(N(\sigma))} & N \ar[dr]^{i} & \\
& D(\nu(\sigma)) \ar[r] \ar[d] & D(\nu_X) \ar[d] \\
& |D(\sigma,X)| \ar[r] & |X|
}
\]
In general the homotopy fibers of the projections $r(\sigma) \co
\del N(\sigma) \ra |D(\sigma,X)|$ are not $S^{m-n-1}$. On the other
hand the pullback of $\nu_X$ along the inclusion $D(\sigma,X)
\subset X$ yields an $S^{m-n-1}$-fibration $\nu(\sigma)$. The
associated disc fibration is a pullback as indicated by the diagram.
Since the two compositions $N(\sigma) \ra |X|$ coincide, the
universal property of the pullback yields the dashed map.
Recall the $(m-|\sigma|)$-dimensional simplex $\sigma^\ast \in
\Sigma^m$. Observe that we have
\[
\Delta^{m-|\sigma|} \cong |\sigma^\ast| = N(\sigma) \cup \overline N(\sigma)
\]
where $\overline N (\sigma) = \overline N \cap (|D(\sigma,X)| \ast |\overline X|)$.
Define the map
\[
\rho (\sigma) = \rho(N(\sigma)) \cup \rho (\overline{N} (\sigma))
\co \Delta^{m-|\sigma|} \cong N(\sigma) \cup \overline N(\sigma) \ra
D(\nu(\sigma)) \cup \{ \ast \} \cong \thom{\nu(\sigma)}
\]
where the map $\rho (\overline{N} (\sigma))$ is the collapse map.
\end{construction}
\begin{defn} \label{defn:geom-normal-signature-over-X}
Let $X$ be an $n$-dimensional finite Poincar\'e simplicial complex with the associated $n$-dimensional GNC $(X,\nu_X,\rho_X)$. Then the assignment from Construction \ref{con:geom-normal-signature-over-X} defines an element
\[
\gnsign_X (X) \in H_n (X;\bOmega^N_\bullet)
\]
and is called the \emph{geometric normal signature of $X$ over $X$}.
\end{defn}
\begin{rem}
The assembly of Remark \ref{rem:hlgical-assembly} satisfies
\[
A(\gnsign_X (X)) = (X,\nu_X,\rho_X) \in \Omega^N_n.
\]
\end{rem}
\begin{defn} \label{lem:normal-signature-over-X}
Let $X$ be an $n$-dimensional finite Poincar\'e simplicial complex with the associated $n$-dimensional GNC $(X,\nu_X,\rho_X)$. The composition of the geometric normal signature over $X$ from Definition \ref{defn:geom-normal-signature-over-X} with the normal signature map on the level of spectra from Proposition \ref{prop:connective-signatures-on-spectra-level} produces a well-defined element
\[
\nsign_X (X) = \nsign \circ \gnsign_X (X) \in H_n (X;\bNL^\bullet \langle 1/2 \rangle)
\]
called the \emph{normal signature of $X$ over $X$}.
\end{defn}
\begin{rem}
We have
\[
A(\nsign_X (X)) = \nsign_{\Zpi} (X) \in NL^n (\Zpi).
\]
Recall that an $n$-dimensional NAC has an underlying symmetric complex. In the case of $\nsign_X (X)$ this is the complex obtained in Construction \ref{con:sym-construction-over-cplxs-lower-star-cplx}. It is not locally Poincar\'e, that is, it does not define a symmetric complex in $\Lambda (\ZZ)_\ast (X)$. Its assembly is $\ssign_{\Zpi} (X) \in L^n (\Zpi)$.
\end{rem}
\begin{defn} \label{prop:visible-signature-over-X}
Let $X$ be an $n$-dimensional finite Poincar\'e simplicial complex with the associated $n$-dimensional GNC $(X,\nu_X,\rho_X)$. The normal complex over $X$ which defines the normal signature over $X$ is globally Poincar\'e and hence produces an $n$-dimensional NAC in the algebraic bordism category $\Lambda (\ZZ)(X)$ from Definition \ref{defn:Lambda-K-category}, and as such a well-defined element
\[
\vsign_X (X) \in VL^n (X)
\]
called the \emph{visible symmetric signature of $X$ over $X$}.
\end{defn}
Now we present another related construction. It will not be needed for the definition of the total surgery obstruction, but it will be used in the proof of the main theorem. (See Theorem \ref{thm:lifts-vs-orientations}.)
Let $(f,b) \co M \ra X$ be a degree one normal map of $n$-dimensional topological manifolds such that $X$ is triangulated. As discussed in Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} the pair $(W,M \sqcup X)$, where $W$ is the mapping cylinder of $f$, possesses the structure of an $(n+1)$-dimensional geometric (normal, topological manifold) pair: the spherical fibration denoted by $\nu (b)$ is obtained as the mapping cylinder of $b \co \nu_M \ra \nu_X$ and the required map as the composition $\rho (b) \co D^{n+k+1} \ra S^{n+k} \times [0,1] \ra \Th (\nu(b))$. Another way of looking at the pair $(W,M \sqcup X)$ is to say that it is an $n$-simplex in the space $\Sigma^{-1} \bOmega^{N,\STOP}_0$. Hence via the relative normal construction we can associate to $(f,b)$ an $(n+1)$-dimensional (normal,symmetric Poincar\'e) algebraic pair
\[
\nssign (f,b) = (\nsign (W),\ssign (M) - \ssign (X)) \in \pi_n (\bF)
\]
where $\bF := \textup{Fiber} \; (\bL^\bullet \langle 0 \rangle \ra \bNL^\bullet \langle 1/2 \rangle)$.
We would like to associate to $(f,b)$, respectively $(W,M \sqcup X)$, an $(n+1)$-dimensional (normal,symmetric Poincar\'e) algebraic pair over $\ZZ_\ast (X)$. This is not exactly a relative version of the previous definitions since the pair is not Poincar\'e. Nevertheless, in this special case we are able to obtain what we want.
\begin{con} \label{con:normal-symmetric-signature-over-X-for-a-deg-one-normal-map}
Let $(f,b) \co M \ra X$ be a degree one normal map of $n$-dimensional topological manifolds such that $X$ is triangulated. We can assume that $f$ is transverse to the dual cell decomposition of $X$. Consider the dissection
\[
X = \bigcup_{\sigma \in X} X(\sigma) \quad (f,b) = \bigcup_{\sigma \in X} (f(\sigma),b(\sigma)) \co M (\sigma) \ra X (\sigma)
\]
where each $(f(\sigma),b(\sigma))$ is a degree one normal map of $(n-|\sigma|)$-dimensional manifold $(m-|\sigma|)$-ads. We obtain an assignment which to each $\sigma \in X$ associates an $(n+1-|\sigma|)$-dimensional pair of normal $(m-|\sigma|)$-ads
\[
\sigma \mapsto ((W(\sigma),\nu(b(\sigma)),\rho(b(\sigma))),M(\sigma) \sqcup X(\sigma)).
\]
These fit together to produce an $\bOmega^N_\bullet$-cobordism of $\bOmega^{\STOP}_\bullet$-cycles in the sense of Definition \ref{defn:E-cycles}, or equivalently a $\Sigma^{-1} \bOmega^{N,\STOP}_\bullet$-cycle, providing us with an element
\[
\sign_X^{\G/\TOP} (f,b) \in H_n (X ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet).
\]
Composing with the normal signature map $\nsign \co \bOmega^N_\bullet \ra \bNL^\bullet \langle 1/2 \rangle$ then produces a $\bNL^\bullet \langle 1/2 \rangle$-cobordism, which can be seen as an $(n+1)$-dimensional (normal,symmetric Poincar\'e) pair over $\ZZ_\ast (X)$
\[
\nssign_X (f,b) = \nssign (\sign_X^{\G/\TOP} (f,b)) \in H_n (X ; \bF).
\]
By applying the homological assembly of Remark \ref{rem:hlgical-assembly} we obtain the $(n+1)$-dimensional (normal,symmetric Poincar\'e) pair
\[
\nssign (f,b) \in \pi_n (\bF).
\]
\end{con}
\begin{rem}
Recall from Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} the correspondence
\[
(\nsign (W),\ssign (M) - \ssign (X)) \longleftrightarrow \qsign (f,b)
\]
where $\qsign (f,b) \in L_n (\ZZ)$ is the quadratic signature (= surgery obstruction) of the degree one normal map $(f,b)$. Using the relative version of Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} we obtain in this situation an identification of $\nssign_X (f,b)$ with the quadratic signature of Construction \ref{con:quad-construction-over-cplxs-lower-star}
\[
\nssign_X (f,b) = \qsign_X (f,b) \in H_n (X ; \bL_\bullet \langle 1 \rangle).
\]
\end{rem}
\section{Proof of the Main Technical Theorem (I)} \label{sec:proof-part-1}
Recall the statement. For an $n$-dimensional finite Poincar\'e complex $X$ with $n \geq 5$ let $t(X)$ be the image of $s(X)$ under the map $\SS_n (X) \ra H_{n-1} (X;\bL_\bullet \langle 1 \rangle)$. Then $t(X) = 0$ if and only if there exists a topological block bundle reduction of the SNF $\nu_X$. The main idea of the proof is to translate the statement about the reduction of $\nu_X$ into a statement about orientations with respect to $L$-theory spectra. The principal references for this section are \cite[pages 280-292]{Ranicki(1979)} and \cite[section 16]{Ranicki(1992)}.
\subsection{Topological surgery theory} \label{subsec:top-surgery}
\
Before we start we offer some comments about topological surgery and about the bundle theories used. Topological surgery is a modification of surgery in the smooth and PL categories, due to Browder-Novikov-Sullivan-Wall as presented in \cite{Browder(1971)} and \cite{Wall(1999)}, by the work of Kirby and Siebenmann as presented in \cite{Kirby-Siebenmann(1977)}. This book also discusses various bundle theories and transversality theorems for topological manifolds. From our point of view the notion of a ``stable normal bundle'' for topological manifolds is of prominent importance. As explained in Essay III, \S 1, the notion of a stable microbundle is appropriate and there exists a corresponding transversality theorem, whose dimension and codimension restrictions are removed by \cite[Chapter 9]{Freedman-Quinn(1990)}. It is also explained that when enough triangulations are in sight, one can use block bundles, and the stable microbundle transversality can be replaced by block transversality. This is thanks to the fact that for the classifying spaces we have $\BSTOP \simeq \BSbTOP$. Since for our problem we can suppose that the Poincar\'e complex $X$ is in fact a simplicial complex, we can ask about the reduction of the SNF to a stable topological block bundle. When we talk about degree one normal maps $(f,b) \co M \ra X$ we mean the stable microbundle normal data, since we need to work in full generality.
\subsection{Orientations} \label{subsec:orientations}
\
Let $\bE$ be a ring spectrum. An \emph{$\bE$-orientation} of a $\ZZ$-oriented spherical fibration $\nu \co X \ra \BSG(k)$ is an element $u_{\bE} (\nu) \in H^k (\Th (\nu) ; \bE)$, that is, a homotopy class of maps $u_{\bE} (\nu) \co \bT(\nu) \ra \bE$, where $\bT(\nu)$ denotes the Thom spectrum of $\nu$, such that for each $x \in X$ the restriction $u_{\bE} (\nu)_x \co \bT(\nu_x) \ra \bE$
to the fiber $\nu_x$ of $\nu$ over $x$ represents a generator of $\bE^\ast (\bT(\nu_x)) \cong \bE^\ast (S^k)$ which under the Hurewicz homomorphism $\bE^\ast (\bT(\nu_x)) \ra H^\ast (\bT(\nu_x);\ZZ)$ maps to the chosen $\ZZ$-orientation.
\subsection{Canonical orientations} \label{subsec:canonical-orientations}
\
Denote by $\bMSG$ the Thom spectrum of the universal stable
$\ZZ$-oriented spherical fibration over the classifying space
$\BSG$. Its $k$-th space is the Thom space $\bMSG (k) =
\thom{\gamma_{\SG} (k)}$ of the canonical $k$-dimensional spherical
fibration $\gamma_{\SG} (k)$ over $\BSG (k)$. Similarly denote by
$\bMSTOP$ the Thom spectrum of the universal stable $\ZZ$-oriented
topological block bundle over the classifying space $\BSTOP \simeq
\BSbTOP$. Its $k$-th space is the Thom space $\bMSTOP (k) =
\thom{\gamma_{\SbTOP} (k)}$ of the canonical $k$-dimensional block
bundle $\gamma_{\SbTOP} (k)$ over $\BSbTOP (k)$. There is a map $J \co \bMSTOP \ra \bMSG$ defined by viewing the canonical block bundle $\gamma_{\SbTOP} (k)$ as a spherical fibration.
Both $\bMSG$ and $\bMSTOP$ are ring spectra. The multiplication on $\bMSTOP$ is given by the Cartesian product of block bundles. The multiplication on $\bMSG$ is given by the following sequence of operations: take the associated disk fibrations, form the product disk fibration and take the associated spherical fibration. Upon precomposition with the diagonal map the multiplication on $\bMSTOP$ becomes the Whitney sum and the multiplication on $\bMSG$ becomes the fiberwise join. The map $J \co \bMSTOP \ra \bMSG$ is a map of ring spectra.
\begin{prop} \textup{\cite[pages 280-283]{Ranicki(1979)}} \label{canonical-geom-orientations} \
\begin{enumerate}
\item Any $k$-dimensional $\ZZ$-oriented spherical fibration $\alpha \co X \ra \BSG (k)$ has a canonical orientation $u_{\bMSG} (\alpha) \in H^k (\Th(\alpha);\bMSG)$.
\item Any $k$-dimensional $\ZZ$-oriented topological block bundle $\beta \co X \ra \BSbTOP (k)$ has a canonical orientation $u_{\bMSTOP} (\beta) \in H^k (\Th(\beta);\bMSTOP)$.
\end{enumerate}
Moreover $J (u_{\bMSTOP} (\beta)) = u_{\bMSG} (J (\beta))$.
\end{prop}
This follows since any spherical fibration (or topological block
bundle) is a pullback of the universal one via the classifying map.
\subsection{Transversality} \label{subsec:transversality}
\
By transversality one often describes statements which assert that a
map from a manifold to some space with a closed subspace can be
deformed by a small homotopy to a map such that the inverse image of
the closed subspace is a submanifold. Such a notion of transversality
can then be used to prove various versions of the Pontrjagin-Thom
isomorphism. For example the topological transversality of
Kirby-Siebenmann \cite[Essay III]{Kirby-Siebenmann(1977)} and Freedman-Quinn \cite[chapter 9]{Freedman-Quinn(1990)} implies that the classifying map induces a homotopy equivalence
\begin{equation} \label{htpy-eq-top-transversality}
c \co \bOmega_\bullet^{\STOP} \simeq \bMSTOP.
\end{equation}
On the other hand the normal transversality used here has a different
meaning: no statement invoking preimages is
required.\footnote{Although there are some such statements
\cite{Hausmann-Vogel(1993)}, we will not need them.} It just means
that there is the homotopy equivalence
(\ref{htpy-eq-normal-transversality}) below inducing a
Pontrjagin-Thom isomorphism. To arrive at it one can use the ideas
described in \cite[Errata]{Ranicki(1992)}. Recall the spectrum
$\bOmega_\bullet^N$ from section \ref{sec:spectra}. Further recall
for a space $X$ with a $k$-dimensional spherical fibration $\nu \co
X \ra \BSG (k)$ the space $\bOmega_n^N (X,\nu)$ of normal spaces
with a degree one normal map to $(X,\nu)$. The normal transversality
described in \cite[Errata]{Ranicki(1992)} says that the classifying map induces a homotopy equivalence
\[
c \co \bOmega_0^N (X,\nu) \simeq \thom{\nu}.
\]
We have the classifying space $\BSG (k)$ with the canonical
$k$-dimensional spherical fibration $\gamma_\SG (k)$. The spectrum
$\bOmega_\bullet^N$ can be seen as the colimit of the spectra
$\bOmega_\bullet^N (\BSG(k),\gamma_\SG (k))$. The normal
transversality from \cite[Errata]{Ranicki(1992)} translates into the
homotopy equivalence
\begin{equation} \label{htpy-eq-normal-transversality}
\bOmega_\bullet^N \simeq \bMSG.
\end{equation}
There are multiplication operations on the spectra $\bOmega^\STOP_\bullet$ and $\bOmega^N_\bullet$ which make the above Pontrjagin-Thom maps homotopy equivalences of ring spectra. These operations are given by Cartesian products. However, we will not use this point of view later.
To complete the picture we denote
\begin{equation} \label{eqn:defn-of-MSGTOP}
\bMSGTOP := \textup{Fiber} \; (\bMSTOP \ra \bMSG)
\end{equation}
and observe that the above classifying maps induce yet another Pontrjagin-Thom isomorphism
\begin{equation} \label{eqn:normal-topological-transversality}
\Sigma^{-1} \bOmega^{N,\STOP}_\bullet \simeq \bMSGTOP.
\end{equation}
Furthermore we have that $\bMSGTOP$ is a module spectrum over $\bMSTOP$ and similarly $\Sigma^{-1} \bOmega^{N,\STOP}_\bullet$ is a module spectrum over $\bOmega^{\STOP}_\bullet$.
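For orientation we note, although this diagram is not drawn explicitly in the sources cited above, that the three equivalences (\ref{htpy-eq-top-transversality}), (\ref{htpy-eq-normal-transversality}) and (\ref{eqn:normal-topological-transversality}) assemble, by the definition (\ref{eqn:defn-of-MSGTOP}) of $\bMSGTOP$, into a homotopy commutative ladder comparing the two fibration sequences:
\[
\xymatrix{
\Sigma^{-1} \bOmega^{N,\STOP}_\bullet \ar[r] \ar[d]_{\simeq} & \bOmega^{\STOP}_\bullet \ar[r] \ar[d]_{\simeq}^{c} & \bOmega^N_\bullet \ar[d]^{\simeq} \\
\bMSGTOP \ar[r] & \bMSTOP \ar[r] & \bMSG }
\]
One expects the module structures just mentioned to correspond to each other under the vertical equivalences.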
\subsection{$L$-theory orientations} \label{subsec:L-theory-orientations}
Here we use the signature maps between the spectra from section \ref{sec:conn-versions} to construct orientations with respect to the $L$-theory spectra. We recall that $\bNL^\bullet \langle 1/2\rangle $ and $\bL^\bullet \langle 0 \rangle$ are ring spectra with the multiplication given by the products of \cite[section 8]{Ranicki-I-(1980)} and \cite[Appendix B]{Ranicki(1992)}. The spectrum $\bL_\bullet \langle 1 \rangle$ is a module over $\bL^\bullet \langle 0 \rangle$ again by the products of \cite[section 8]{Ranicki-I-(1980)} and \cite[Appendix B]{Ranicki(1992)}.
\begin{prop} \textup{\cite[pages 284-289]{Ranicki(1979)}} \label{canonical-L-orientations} \
\begin{enumerate}
\item Any $k$-dimensional $\ZZ$-oriented spherical fibration $\alpha \co X \ra \BSG (k)$ has a canonical orientation $u_{\bNL^\bullet}(\alpha) \in H^k (\Th(\alpha);\bNL^\bullet \langle 1/2 \rangle)$.
\item Any $k$-dimensional $\ZZ$-oriented topological block bundle $\beta \co X \ra \BSbTOP (k)$ has a canonical orientation $u_{\bL^\bullet} (\beta) \in H^k (\Th(\beta);\bL^\bullet \langle 0 \rangle)$.
\end{enumerate}
Moreover $J (u_{\bL^\bullet} (\beta)) = u_{\bNL^\bullet} (J
(\beta))$.
\end{prop}
\begin{proof}
These orientations are obtained from maps between spectra using the following homotopy commutative diagram of spectra:
\[
\xymatrix{
\bMSTOP \ar[r] \ar[d] & \bOmega_\bullet^{STOP} \ar[r]^-{\ssign} \ar[d] & \bL^\bullet \langle 0 \rangle \ar[d] \\
\bMSG \ar[r] & \bOmega_\bullet^N \ar[r]_-{\nsign} & \bNL^\bullet
\langle 1/2 \rangle }
\]
where the maps in the left hand part of the diagram are the homotopy
inverses of the transversality homotopy equivalences.
\end{proof}
\subsection{$S$-duality} \label{subsec:S-duality}
\
If $X$ is a Poincar\'e complex with the SNF $\nu_X \co X \ra \BSG
(k)$ then we have the $S$-duality $\thom{\nu_X}^\ast \simeq X_+$
producing isomorphisms
\begin{align*}
S \co H^k (\Th(\nu_X);\bNL^\bullet \langle 1/2 \rangle) & \cong H_n (X;\bNL^\bullet \langle 1/2 \rangle) \\
S \co H^k (\Th(\nu_X);\bL^\bullet \langle 0 \rangle) & \cong H_n (X;\bL^\bullet \langle 0 \rangle).
\end{align*}
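Both displayed isomorphisms are instances of one general principle, which we record for later use: for any spectrum $\bE$ the $S$-duality $\thom{\nu_X}^\ast \simeq X_+$ induces, with the usual degree conventions for an $(n+k)$-duality, an isomorphism
\[
S \co H^k (\Th(\nu_X);\bE) \cong H_{(n+k)-k} (X;\bE) = H_n (X;\bE).
\]
Taking $\bE = \bNL^\bullet \langle 1/2 \rangle$ and $\bE = \bL^\bullet \langle 0 \rangle$ gives the isomorphisms above; the case $\bE = \bF$ will also be used below.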
The following proposition describes a relation between the signatures over $X$ in homology, obtained in sections \ref{sec:gen-hlgy-thies} and \ref{sec:normal-signatures-over-X} and orientations in cohomology from this section.
\begin{prop} \textup{\cite[Proposition 16.1]{Ranicki(1992)}} \label{prop:S-duals-of-orientations-are-signatures}
If $X$ is an $n$-dimensional geometric Poincar\'e complex with the
Spivak normal fibration $\nu_X \co X \ra \BSG(k)$ then we have
\[
S (u_{\bNL^\bullet} (\nu_X)) = \nsign_X (X) \in H_n (X;\bNL^\bullet
\langle 1/2 \rangle).
\]
If $\bar \nu_X$ is a topological block bundle reduction of the SNF
of $X$ and $(f,b) \co M \ra X$ is the associated degree one normal
map, then we have
\[
S (u_{\bL^\bullet} (\bar \nu_X)) = \ssign_X (M) \in H_n (X;\bL^\bullet
\langle 0 \rangle).
\]
\end{prop}
\begin{proof}
The identification of the normal signature $\nsign_X (X)$ as the canonical orientation follows from the commutative diagram of simplicial sets
\[
\xymatrix{
\Sigma^m/\overline X \ar[d]_{i} \ar[rr]^{\gnsign_X (X)} & & \Omega_{-k}^N \ar[d]^{c} \\
\textup{Sing} \; \Th(\nu_X) \ar[rr]_-{u_{\bMSG}(\nu_X)} & & \textup{Sing} \; \bMSG (k) }
\]
This in turn is seen by inspecting the definitions of the maps in the diagram. The upper horizontal map comes from Construction \ref{con:geom-normal-signature-over-X}, the map $i$ from Construction \ref{con:simpl-model-for-the-thom-space-of-snf}, and the other two maps were defined in this section. Note that the classifying map $c$ is characterized by the property that the classified spherical fibration is obtained as the pullback of the canonical $\gamma_{\SG}$ along $c$. But this is also the characterization of the canonical orientation $u_{\bMSG} (\nu_X)$. The desired statement is obtained by composing with $\nsign \co \bOmega^N_\bullet \ra \bNL^\bullet \langle 1/2 \rangle$.
For the second part recall how the degree one normal map $(f,b)$ associated to $\bar \nu_X$ is constructed. Consider the composition $\Sigma^m \ra \Sigma^m / \overline X \ra \Th (\bar \nu_X)$. Since $\bar \nu_X$ is a stable topological block bundle, this map can be made transverse to $X$; then $M$ is the preimage, $f$ is the restriction of the map to $M$, and it is covered by a map of stable microbundles $\nu_M \ra \bar \nu_X$, where $\nu_M$ is the stable normal microbundle of $M$. In addition this can be arranged in such a way that $f$ is transverse to the dual cells of $X$. Hence we obtain a dissection of $M$ which gives rise to the symmetric signature $\ssign_X (M)$ over $X$ as in Construction \ref{con:sym-construction-over-cplxs-lower-star-mfd}. It fits into the following diagram
\[
\xymatrix{
\Sigma^m/\overline X \ar[d]_{i} \ar[rr]^{\stopsign_X (M)} & & \Omega_{-k}^{\STOP} \ar[d]^{c} \\
\textup{Sing} \; \Th(\bar \nu_X) \ar[rr]_-{u_{\bMSTOP}(\bar \nu_X)} & & \textup{Sing} \; \bMSTOP (k) }
\]
The desired statement is obtained by composing with $\ssign \co \bOmega^{\STOP}_\bullet \ra \bL^\bullet \langle 0 \rangle$.
\end{proof}
Suppose now that we are given a degree one normal map $(f,b) \co M \ra X$ between $n$-dimensional topological manifolds with $X$ triangulated. In Construction \ref{con:normal-symmetric-signature-over-X-for-a-deg-one-normal-map} we defined the (normal,symmetric Poincar\'e) signature $\nssign_X (f,b)$ over $X$ associated to $(f,b)$. In analogy with the previous proposition we would like to interpret this signature as an orientation via $S$-duality. For this recall first that specifying the degree one normal map $(f,b)$ is equivalent to specifying a pair $(\nu,h)$, with $\nu \co X \ra \BSTOP$ and $h \co J(\nu) \simeq \nu_X$, where in our situation the SNF $\nu_X$ has a preferred topological block bundle lift, also denoted $\nu_X$, coming from the stable normal bundle of $X$ (see subsection \ref{subsec:normal-invariants-revisited} if needed). The homotopy $h$ gives us a spherical fibration over $X \times I$ with the canonical orientation $u^{\bMSG} (h)$, which we view as a homotopy between the orientations $J(u^{\bMSTOP} (\nu))$ and $J(u^{\bMSTOP} (\nu_X))$. In this way we obtain an element
\[
u^{\G/\TOP} (\nu,h) \in H^k (\Th (\nu_X); \bMSGTOP)
\]
given by
\[
u^{\G/\TOP} (\nu,h) = (u^{\bMSG} (h) , u^{\bMSTOP} (\nu) - u^{\bMSTOP} (\nu_X)).
\]
The Pontrjagin-Thom isomorphism (\ref{eqn:normal-topological-transversality}) together with the normal and symmetric signature $\nssign \co \Sigma^{-1} \bOmega^{N,\STOP}_\bullet \ra \bF$ provide us with the pair
\[
u^{\bNL^\bullet,\bL^\bullet} (\nu,h) = (u^{\bNL^\bullet} (h),u^{\bL^\bullet} (\nu) - u^{\bL^\bullet} (\nu_X)) \in H^k (\Th (\nu_X) ; \bF).
\]
\begin{prop} \label{prop:S-duals-of-orientations-are-signatures-relative-case}
Let $(f,b) \co M \ra X$ be a degree one normal map of $n$-dimensional simply-connected topological manifolds with $X$ triangulated, corresponding to the pair $(\nu,h)$, where $\nu \co X \ra \BSTOP$ and $h \co J(\nu) \simeq \nu_X$. Then we have
\[
S (u^{\bNL^\bullet,\bL^\bullet} (\nu,h)) = \nssign_X (f,b) \in H_n (X ; \bF).
\]
\end{prop}
\begin{proof}
The proof is analogous to the proof of Proposition \ref{prop:S-duals-of-orientations-are-signatures}. Recall that the signature $\sign^{\G/\TOP}_X (f,b)$ is constructed using a dissection of the degree one normal map $(f,b)$. Using this dissection one checks that there is a commutative diagram
\[
\xymatrix{
\Sigma^m/\overline X \ar[d]_{i} \ar[rr]^{\sign^{\G/\TOP}_X (f,b)} & & \Sigma^{-1} \Omega_{-k}^{N,\STOP} \ar[d]^{c} \\
\textup{Sing} \; F(\nu,\nu_X) \ar[rr]_-{u^{\G/\TOP} (\nu,h)} & & \textup{Sing} \; \bMSGTOP (k) }
\]
where we use the notation $\bMSGTOP (k) := \textup{Fiber} \; (\bMSTOP (k) \ra \bMSG (k))$ and $F(\nu,\nu_X) := \textup{Pullback} \; (\Th (\nu) \ra \Th (\nu_X) \leftarrow \Th (\nu_X))$. Composing with the signature map $\nssign \co \Sigma^{-1} \Omega_{-k}^{N,\STOP} \ra \bF$ proves the claim.
\end{proof}
\subsection{Assembly} \label{subsec:assembly}
\
Keep $X$ a Poincar\'e complex with the SNF $\nu_X$ and suppose there
exists a topological block bundle reduction $\bar \nu_X$. Recall
that orientations with respect to ring spectra induce Thom
isomorphisms in corresponding cohomology theories. Hence we have Thom
isomorphisms induced by $u_{\bNL^\bullet} (\nu_X)$ and
$u_{\bL^\bullet} (\bar \nu_X)$ and these are compatible. Also recall
that although the spectrum $\bL_\bullet \langle 1 \rangle$ is not a
ring spectrum, it is a module spectrum over $\bL^\bullet \langle 0
\rangle$, see \cite[Appendix B]{Ranicki(1992)}. Therefore
$u_{\bL^\bullet} (\bar \nu_X)$ also induces a compatible Thom
isomorphism in $\bL_\bullet \langle 1 \rangle$-cohomology. In fact
we have a commutative diagram relating these Thom isomorphisms, the
$S$-duality and the assembly maps:
{\footnotesize
\[
\xymatrix{
H^0 (X;\bL_\bullet \langle 1 \rangle) \ar[r]^-{\cong} \ar[d] & H^k (\Th(\nu_X);\bL_\bullet \langle 1 \rangle) \ar[r]^-{\cong} \ar[d] & H_n (X;\bL_\bullet \langle 1 \rangle) \ar[r]^-{A} \ar[d] & L_n (\ZZ) \ar[d] \\
H^0 (X;\bL^\bullet \langle 0 \rangle) \ar[r]^-{\cong} \ar[d] & H^k (\Th(\nu_X);\bL^\bullet \langle 0 \rangle) \ar[r]^-{\cong} \ar[d] & H_n (X;\bL^\bullet \langle 0 \rangle) \ar[r]^-{A} \ar[d] & L^n (\ZZ) \ar[d] \\
H^0 (X;\bNL^\bullet \langle 1/2 \rangle) \ar[r]^-{\cong} & H^k (\Th(\nu_X);\bNL^\bullet \langle 1/2 \rangle) \ar[r]^-{\cong} & H_n (X;\bNL^\bullet \langle 1/2 \rangle) \ar[r]^-{A} & NL^n (\ZZ)
}
\]
}
If $X = S^n$ then the map $A \co H_n (S^n;\bL_\bullet \langle 1 \rangle) \ra L_n (\ZZ)$ is an isomorphism. This follows from the identification of the assembly map with the surgery obstruction map, which is presented in Proposition \ref{prop:identification}, and from the fact that the surgery obstruction map for $S^n$ is an isomorphism due to Kirby and Siebenmann \cite[Essay V, Theorem C.1]{Kirby-Siebenmann(1977)}. We note that Proposition \ref{prop:identification} is stated in greater generality than needed here. We only need the case $X = S^n$; then $X$ is a manifold and the degree one normal map $(f_0,b_0)$ in the statement of Proposition \ref{prop:identification} can be taken to be the identity on $S^n$, which is the version needed at this point.
Further observe that all the homomorphisms in the diagram are induced on homotopy groups by maps of spaces; see the definitions in sections \ref{sec:spectra} and \ref{sec:gen-hlgy-thies} for the underlying spaces.
\subsection{Classifying spaces for spherical fibrations with an orientation} \label{subsec:classifying-spaces}
\
Let $\bE$ be an Omega ring spectrum with $\pi_0 (\bE) = \ZZ$ and
recall the notion of an $\bE$-orientation of a $\ZZ$-oriented
spherical fibration $\alpha \co X \ra \BSG (k)$. In \cite{May(1977)}
a construction of a classifying space $\BEG$ for spherical
fibrations with such a structure was given. The construction is not
so important for us. Of more significance is a description of what
it means to have a map from a space to one of these classifying
spaces. If $X$ is a finite complex then there is a one-to-one
correspondence between homotopy classes of maps $\alpha_\bE \co X
\ra \textup{B}\bE\textup{G}$ and homotopy classes of pairs
$(\alpha,u_\bE(\alpha))$ where $\alpha \co X \ra \BSG (k)$ and
$u_\bE (\alpha) \co \Th(\alpha) \ra \bE_{k}$ is an $\bE$-orientation
of $\alpha$.
\begin{prop} \label{prop:canonical-L-orientations-on-class-spaces}
There is a commutative diagram
\[
\xymatrix{
\BSTOP \ar[r]^-{\ssign} \ar[d]_{J} & \textup{B} \bL^\bullet \langle 0 \rangle \textup{G} \ar[d]^{J} \\
\BSG \ar[r]_-{\nsign} & \textup{B} \bNL^\bullet \langle 1/2 \rangle
\textup{G} }
\]
\end{prop}
\begin{proof}
This follows from Proposition \ref{canonical-L-orientations}.
\end{proof}
We need to study which orientations exist for a fixed
spherical fibration $\alpha \co X \ra \BSG (k)$. Denote by
$\bE_\otimes$ the component of $1 \in \ZZ$ in any Omega ring
spectrum $\bE$ with $\pi_0 (\bE) = \ZZ$. Then there is a homotopy
fibration sequence \cite[section III.2]{May(1977)}
\begin{equation} \label{eqn:fibration-sequence-for-E-orientations}
\bE_\otimes \xra{i} \textup{B}\bE\textup{G} \ra \BSG
\end{equation}
The map $i$ can be interpreted via the Thom isomorphism. Let $c \co
X \ra \bE_\otimes$ be a map. Then $i(c) \co X \ra \BEG$ is the map
given by the trivial fibration $\varepsilon \co X \ra \BSG (k)$ with
an $\bE$-orientation given by the composition
\[
u_\bE (i(c)) \co \Th(\varepsilon) \xra{\; \tilde \Delta \;} X_+ \wedge
\Th(\varepsilon) \xra{c \wedge \Sigma^k (1)} \bE_\otimes \wedge \bE_k
\ra \bE_k
\]
We will use the spectra $\bL^\bullet \langle 0 \rangle$ and
$\bNL^\bullet \langle 1/2 \rangle$, which are both ring spectra with
$\pi_0 \cong \ZZ$. We will need the following proposition.
\begin{prop} \label{prop:fibration-sequence-of-classifying-spaces}
There is the following homotopy fibration sequence of spaces
\[
\bL_0 \langle 1 \rangle \ra \textup{B} \bL^\bullet \langle 0 \rangle
\textup{G} \ra \textup{B} \bNL^\bullet \langle 1/2 \rangle
\textup{G}
\]
\end{prop}
\begin{proof}
Consider the sequences
(\ref{eqn:fibration-sequence-for-E-orientations}) for the spectra
$\bL^\bullet \langle 0 \rangle$ and $\bNL^\bullet \langle 1/2
\rangle$ and the map between them. The induced map between the
homotopy fibers fits into the fibration sequence
\begin{equation} \label{fib-seq:quad-sym-norm-on-component-of-1}
\bL_0 \langle 1 \rangle \ra \bL^\otimes \langle 0 \rangle \ra \bNL^\otimes \langle 1/2 \rangle
\end{equation}
which is obtained from the fibration sequence of Proposition \ref{prop:fib-seq-of-quad-sym-norm-connective-version} (more precisely from the space-level version of it on the $0$-th spaces) by replacing the symmetrization map $(1 + T) \co \bL_0 \langle 1 \rangle \ra \bL^0 \langle 0 \rangle$ by the map given on the $l$-simplices as
\begin{align*}
(1 + T)^\otimes \co \bL_0 \langle 1 \rangle & \ra \bL^\otimes \langle 0 \rangle \\
(C,\psi) & \mapsto (1+T) (C,\psi) + (C (\Delta^l),\varphi([\Delta^l])).
\end{align*}
Its effect is to map the component of $0$ (which is the only component of $\bL_0 \langle 1 \rangle$) to the component of $1$ in $\bL^0 \langle 0 \rangle$ instead of the component of $0$. The proposition follows.
\end{proof}
\subsection{$L$-theory orientations versus reductions} \label{subsec:main-thm}
\
The following theorem is a crucial result.
\begin{thm} \textup{\cite[pages 290-292]{Ranicki(1979)}} \label{thm:lifts-vs-orientations}
There is a one-to-one correspondence between the isomorphism classes of
\begin{enumerate}
\item stable oriented topological block bundles over $X$, and
\item stable oriented spherical fibrations over $X$ with an $\bL^\bullet \langle 0 \rangle$-lift of the canonical $\bNL^\bullet \langle 1/2 \rangle$-orientation
\end{enumerate}
\end{thm}
\begin{proof}
In Proposition \ref{prop:canonical-L-orientations-on-class-spaces} a map from (1) to (2) was described. To prove that it gives a one-to-one correspondence is equivalent to showing that the square in Proposition \ref{prop:canonical-L-orientations-on-class-spaces} is a homotopy pullback square. This is done by showing that the induced
map between the homotopy fibers of the vertical maps in the square, which is indicated by the dashed arrow in the diagram below, is a homotopy equivalence.
\[
\xymatrix{
\G/\TOP \ar@{-->}[r]^-{\nssign} \ar[d] & \bL_0 \langle 1 \rangle \ar[d] \\
\BSTOP \ar[r]^-{\ssign} \ar[d]_{J} & \textup{B} \bL^\bullet \langle 0 \rangle \textup{G} \ar[d]^{J} \\
\BSG \ar[r]_-{\nsign} & \textup{B} \bNL^\bullet \langle 1/2 \rangle
\textup{G} }
\]
For this it is enough to show that it induces an isomorphism on the
homotopy groups, that is, to show
\[
\nssign \; \co \; [S^n ; \G/\TOP] \; \xrightarrow{\; \cong \;} \; [S^n ; \bL_0 \langle 1 \rangle].
\]
Recall that since $S^n$ is a topological manifold with the trivial
SNF we have a canonical identification of the normal invariants
\[
[S^n ; \G/\TOP] \cong \sN (S^n), \quad ((\alpha,H) \co S^n \ra \G/\TOP) \mapsto ((f,b) \co M \ra S^n)
\]
where $\alpha \co S^n \ra \BSTOP$ and $H \co S^n \times [0,1] \ra
\BSG$ is a homotopy $J(\alpha) \simeq \varepsilon$ to the constant
map, and $(f,b)$ is the associated degree one normal map.
On the other side we have
\[
A(S(i(-))) \co [S^n ; \bL_0 \langle 1 \rangle] \cong H^0
(S^n;\bL_\bullet \langle 1 \rangle) \cong H_n (S^n ; \bL_\bullet
\langle 1 \rangle) \cong L_n (\ZZ).
\]
It is well known that the surgery obstruction map $\qsign_\ZZ \co \sN (S^n) \ra L_n (\ZZ)$ from Definition \ref{defn:quad-sign} is an isomorphism for $n \geq 1$ \cite[Essay V, Theorem C.1]{Kirby-Siebenmann(1977)}. Therefore it is enough to show that
\[
A(S(i(\nssign_\ZZ (\alpha,H)))) = \qsign_\ZZ (f,b).
\]
Denote
\[
(J(\alpha),u_{\bL^\bullet} (\alpha)) = \ssign (\alpha), \quad (H,u_{\bNL^\bullet} (H)) = \nsign (H).
\]
Now we need to describe in more detail what the identification of
the homotopy fiber of the right-hand column map means. That is, we need to
produce a map
\[
\bar u_{\bL_\bullet} (\alpha,H) \co S^n \ra \bL_0 \langle 1 \rangle
\]
from $(J(\alpha),u_{\bL^\bullet} (\alpha))$ and
$(H,u_{\bNL^\bullet}(H))$. The spherical fibration $J(\alpha)$ is trivial because of the null-homotopy $H$
and therefore we obtain a map $\bar u_{\bL^\bullet} (\alpha) \co S^n
\ra \bL^{\otimes} \langle 0 \rangle$ such that $i (\bar u_{\bL^\bullet} (\alpha)) =
(J(\alpha),u_{\bL^\bullet} (\alpha))$.
Similarly the homotopy $(H,u_{\bNL^\bullet}(H)) \co S^n \times [0,1]
\ra \BNLG$ yields a homotopy $\bar u_{\bNL^\bullet} (H) \co S^n \times
[0,1] \ra \bNL^\otimes \langle 1/2 \rangle$ between $J(\bar u_{\bL^\bullet} (\alpha))$ and the constant map.
The pair $(\bar u_{\bNL^\bullet} (H),\bar u_{\bL^\bullet} (\alpha))$ produces via the homotopy fibration sequence (\ref{fib-seq:quad-sym-norm-on-component-of-1}) a lift, which is the desired $\bar u_{\bL_\bullet} (\alpha,H)$. So we have
\[
[\nssign (\alpha,H)] = [\bar u_{\bL_\bullet} (\alpha,H)] \in [S^n;\bL_0 \langle 1 \rangle]
\]
and we want to investigate $A(S(i(\bar u_{\bL_\bullet}
(\alpha,H))))$. Recall now the commutative diagram from subsection
\ref{subsec:assembly}. It shows that $A(S(i(\bar u_{\bL_\bullet}
(\alpha,H))))$ can be chased via the lower right part of the
diagram. Here we consider maps from $S^n$ and $S^n \times [0,1]$ to the underlying spaces in this diagram rather than just elements in the homotopy groups.
Observe first, using Definition \ref{defn:sym-sign-over-X}, Example \ref{expl:assembly-of-symmetric-signature-over-K} and Proposition \ref{prop:S-duals-of-orientations-are-signatures}, that the assembly of the $S$-dual of the class $u_{\bL^\bullet} (\alpha)$ is an $n$-dimensional SAPC $\ssign (M)$ over $\ZZ$.
Secondly, by Construction \ref{con:normal-symmetric-signature-over-X-for-a-deg-one-normal-map} and Proposition \ref{prop:S-duals-of-orientations-are-signatures-relative-case}, the assembly of the $S$-dual of the class $u_{\bNL^\bullet} (H)$ is an $(n+1)$-dimensional (normal, symmetric Poincar\'e) pair
\begin{equation} \label{eqn:normal-sym-pair-of-deg-one-normal-map-to-sphere}
(\nsign (W),\ssign (M) - \ssign (S^n))
\end{equation}
over $\ZZ$, with $W = \cyl (f)$. We consider this as an element in the $n$-th homotopy group of the relative term in the long exact sequence of the homotopy groups associated to the map $\bL^\otimes \langle 0 \rangle \ra \bNL^\otimes \langle 1/2 \rangle$. This group is isomorphic to $L_n (\ZZ)$ by the isomorphism of Proposition \ref{propn:LESL}. The effect of this isomorphism on an element as in (\ref{eqn:normal-sym-pair-of-deg-one-normal-map-to-sphere}) is then described in detail in Example \ref{expl:normal-symm-poincare-pair-gives-quadratic}. It tells us that the $n$-dimensional QAPC corresponding to (\ref{eqn:normal-sym-pair-of-deg-one-normal-map-to-sphere}) is the surgery obstruction $\qsign (f,b)$. Finally we obtain
\[
A(S(i(\bar u_{\bL_\bullet} (\alpha,H)))) = \qsign (f,b) \in L_n (\ZZ)
\]
which is what we wanted to show.
\end{proof}
Recall the exact sequence:
\[
\xymatrix{
\cdots \ar[r] & H_n (X;\bL^\bullet \langle 0 \rangle) \ar[r] & H_n (X;\bNL^\bullet \langle 1/2 \rangle) \ar[r] & H_{n-1} (X; \bL_\bullet \langle 1 \rangle) \ar[r] & \cdots
}
\]
Putting all together we obtain
\begin{cor}
Let $X$ be an $n$-dimensional geometric Poincar\'e complex with the Spivak normal fibration $\nu_X \co X \ra \BSG$. Then the following are equivalent:
\begin{enumerate}
\item There exists a lift $\bar \nu_X \co X \ra \BSTOP$ of $\nu_X$.
\item There exists a lift of the normal signature $\nsign_X (X) \in H_n (X;\bNL^\bullet \langle 1/2 \rangle)$ in the group $H_{n} (X;\bL^\bullet \langle 0 \rangle)$.
\item $0 = t(X) \in H_{n-1} (X;\bL_\bullet \langle 1 \rangle)$.
\end{enumerate}
\end{cor}
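The equivalence of (2) and (3) can be read off directly from the exact sequence displayed above; the following display is only a schematic restatement, assuming (as in the construction of the obstruction earlier in the paper) that $t(X)$ is the image of $\nsign_X (X)$ under the boundary map of that sequence:

```latex
\nsign_X (X) \in \im \big( H_n (X;\bL^\bullet \langle 0 \rangle) \ra
H_n (X;\bNL^\bullet \langle 1/2 \rangle) \big)
\;\Longleftrightarrow\;
0 = t(X) \in H_{n-1} (X;\bL_\bullet \langle 1 \rangle).
```

The equivalence of (1) and (2) is Theorem \ref{thm:lifts-vs-orientations}.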
\section{Proof of the Main Technical Theorem (II)} \label{sec:proof-part-2}
Let $X$ be a finite $n$-dimensional GPC and suppose that $t(X) = 0$ so that the SNF $\nu_X$ has a topological block bundle reduction and hence there exists a degree one normal map $(f,b) \co M \ra X$ from some $n$-dimensional topological manifold $M$. We want to show that the subset of $L_n (\ZZ [\pi_1 (X)])$ consisting of the inverses of the quadratic signatures of all such degree one normal maps is equal to the preimage of the total surgery obstruction $s (X) \in \SS_n (X)$ under the boundary map $\del \co L_n (\ZZ [\pi_1 (X)]) \ra \SS_n (X)$.
Let us first look at this map. Inspecting the first of the two commutative braids in section \ref{sec:surgery-sequences} we see that it is in fact obtained from the boundary map $\partial \co L_n (\Lambda (\ZZ)(X)) \ra \SS_n (X)$ using the algebraic $\pi$-$\pi$-theorem of Proposition \ref{prop:algebraic-pi-pi-theorem}. This map is more suitable for investigation since both the source and the target are the $L$-groups of algebraic bordism categories over the same underlying additive category with chain duality, which is $\ZZ_\ast (X)$.
On the other hand there is a price to pay for this point of view. Namely, in the present situation we only have the quadratic signatures $\qsign_{\ZZ[\pi_1(X)]} (f,b)$ as $n$-dimensional QAPCs over the category $\ZZ[\pi_1 (X)]$, but we need a quadratic signature $\qsign_X (f,b)$ over the category $\ZZ_\ast (X)$.\footnote{As shown in Construction \ref{con:quad-construction-over-cplxs-lower-star}, in case $X$ is a triangulated manifold we have such a signature, but here $X$ is only a Poincar\'e complex with $t(X) = 0$.} A large part of this section will be devoted to constructing such a quadratic signature; this is finally achieved in Definition \ref{defn:quad-signature-over-X-deg-one-normal-map-to-poincare}. More precisely, we define the quadratic signature
\[
\qsign_X (f,b) \in L_n (\Lambda (\ZZ)(X))
\]
represented by an $n$-dimensional QAC in the algebraic bordism category $\Lambda (\ZZ)(X)$ from Definition \ref{defn:Lambda-K-category}, that is, an $n$-dimensional quadratic complex over $\ZZ_\ast (X)$ which is globally Poincar\'e, such that it maps to $\qsign_{\ZZ[\pi_1(X)]} (f,b)$ under the isomorphism of the algebraic $\pi$-$\pi$-theorem. We emphasize that in general the quadratic signature $\qsign _X (f,b)$ does not produce an element in $H_n (X;\bL_\bullet \langle 1 \rangle)$ since it is not locally Poincar\'e.
Granting the definition of $\qsign_X (f,b)$, the proof of the desired statement starts with the obvious observation that the preimage $\del^{-1} s(X)$ is a coset of $\ker (\del) = \im (A)$, where $A \co H_n (X;\bL_\bullet \langle 1 \rangle) \ra L_n (\ZZ [\pi_1 (X)])$ is the assembly map. Then the proof proceeds in two steps as follows.
\begin{enumerate}
\item[(1)] Show that the set of the inverses of the quadratic signatures $\qsign_X (f,b)$ of degree one normal maps with target $X$ is a subset of $\del^{-1} s(X)$ and hence the two sets have non-empty intersection.
\item[(2)] Show that the set of the inverses of the quadratic signatures $\qsign_X (f,b)$ of degree one normal maps with target $X$ is a coset of $\ker (\del) = \im (A)$. Hence we have two cosets of the same subgroup with a non-empty intersection and so they are equal.
\end{enumerate}
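Schematically, writing $Q \subset L_n (\ZZ [\pi_1 (X)])$ for the set of the inverses of the quadratic signatures of degree one normal maps with target $X$, the two steps combine as follows (this display merely restates the argument above):

```latex
\text{(1)}\ Q \subseteq \del^{-1} s(X), \qquad
\text{(2)}\ Q \text{ is a coset of } \im (A) = \ker (\del)
\quad\Longrightarrow\quad
Q = \del^{-1} s(X).
```

Indeed, $\del^{-1} s(X)$ and $Q$ are then two cosets of the same subgroup $\im (A)$ with non-empty intersection, hence they coincide.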
The definition of $\qsign_X (f,b)$ and Step (1) of the proof are concentrated in subsections \ref{subsec:general-discussion-of-signatures-over-X} to \ref{subsec:quad-sign-over-X}. The main technical proposition is Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} which says that the boundary of the quadratic signature of any degree one normal map from a manifold to $X$ is $s(X)$.
Step (2) of the proof is concentrated in subsections \ref{subsec:identification-of-quad-sign-with-assembly} to \ref{subsec:signatures-versus-orientations}. It starts with an easy corollary of Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} which says that although, as noted above, $\qsign_X (f,b)$ does not produce an element in $H_n (X;\bL_\bullet \langle 1 \rangle)$, the difference $\qsign _X (f,b) - \qsign _X (f_0,b_0)$ for two degree one normal maps does produce such an element. Therefore, fixing some $(f_0,b_0)$ and letting $(f,b)$ vary provides us with a map from the normal invariants $\sN (X)$ to $H_n (X;\bL_\bullet \langle 1 \rangle)$. The main technical proposition is then Proposition \ref{prop:identification}. Via the just mentioned difference map it identifies the set of the quadratic signatures of degree one normal maps with the coset of the image of the assembly map containing $\qsign_{\ZZ[\pi_1 (X)]} (f_0,b_0)$.
The principal references are: for Step (1) \cite[sections 7.3, 7.4]{Ranicki(1981)} and for Step (2) \cite[pages 293-298]{Ranicki(1979)} and \cite[section 17]{Ranicki(1992)}.
\subsection{A general discussion of quadratic signatures over $X$} \label{subsec:general-discussion-of-signatures-over-X}
\
As noted above, in case $X$ is a triangulated manifold, we have Construction
\ref{con:quad-construction-over-cplxs-lower-star} which produces from a degree one normal map $(f,b) \co M \ra X$ a quadratic signature $\qsign _X (f,b) \in H_n (X;\bL_\bullet \langle 1 \rangle)$. Let us first look at why it is not obvious how to generalize this to our setting. The idea in \ref{con:quad-construction-over-cplxs-lower-star} was to make $f$ transverse to the dual cells of $X$ and to consider the restrictions
\begin{equation} \label{fragmented-deg-one-map}
(f(\sigma),b(\sigma)) \co (M(\sigma),\partial M(\sigma)) \ra
(D(\sigma),\partial D(\sigma)).
\end{equation}
These are degree one normal maps, but the target $(D(\sigma),\partial D(\sigma))$ is only a normal pair which can be non-Poincar\'e. Consequently we cannot define the Umkehr maps
$f(\sigma)^!$ as in
\ref{con:quad-construction-over-cplxs-lower-star}.
We need an alternative way to define the Umkehr maps. Such a
construction is a relative version of an absolute construction whose
starting point is a degree one normal map $(g,c) \co N \ra Y$ from
an $n$-dimensional manifold $N$ to an $n$-dimensional normal space
$Y$. In this case there is the normal signature $\nsign (Y) \in NL^n
(\ZZ)$ with boundary $\partial \nsign (Y) \in L_{n-1} (\ZZ)$. In
Definition \ref{QuadraticSignatureOfDegree1NormalMapNotPoincare}
below, we recall the definition of a quadratic signature $\qsign
(g,c)$ in this setting, which is an $n$-dimensional QAC over $\ZZ$, not
necessarily Poincar\'e.\footnote{The terminology
``signature'' is perhaps not the most suitable, since we do not
obtain an element in an $L$-group. It is used because this
``signature'' is defined analogously to the signatures of section
\ref{sec:algebraic-cplxs}.} As such it has a boundary, which is an
$(n-1)$-dimensional QAPC over $\ZZ$, and hence defines an element
$\del \qsign (g,c) \in L_{n-1} (\ZZ)$. The following proposition
describes the relationship between these signatures.
\begin{prop} \textup{\cite[Proposition 7.3.4]{Ranicki(1981)}} \label{prop:degree-one-normal-map-mfd-to-normal-cplx}
Let $(g,c) \co N \ra Y$ be a degree one normal map from an
$n$-dimensional manifold to an $n$-dimensional normal space. Then
there is a homotopy equivalence of symmetric complexes
\[
h \co \partial \ssign (g,c) \xra{\simeq} - (1+T) \partial
\nsign (Y)
\]
and a homotopy equivalence of quadratic refinements
\[
h \co \partial \qsign (g,c) \xra{\simeq} - \partial \nsign (Y).
\]
\end{prop}
\begin{rem}
Recall the situation in the case $Y$ is Poincar\'e. Then there is defined the algebraic Umkehr map $g^{!} \co C_\ast (\widetilde Y) \ra C_\ast (\widetilde N)$ and one obtains the symmetric signature $\ssign (g,c)$ with the underlying chain complex the algebraic mapping cone $\sC (g^{!})$. This can be further refined to a quadratic structure $\qsign (g,c)$. In addition one has (see Remark \ref{rem:symmetrization-of-surgery-obstruction})
\[
\sC(g^{!}) \oplus C_\ast (\widetilde Y) \simeq C_\ast (\widetilde
N) \qquad \textup{and} \qquad \ssign (g,c) \oplus \ssign (Y) =
\ssign (N).
\]
In the situation of Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx} one obtains instead the formula
\[
\ssign (N) \simeq \ssign (g,c) \cup_h \ssign (Y)
\]
where $\cup_h$ denotes the algebraic gluing of symmetric pairs from \cite[section 3]{Ranicki-I-(1980)}.
\end{rem}
Before going into the proof of Proposition
\ref{prop:degree-one-normal-map-mfd-to-normal-cplx} remember that we
still need its relative version. For that again some preparation is
needed. In particular we need the concept of a boundary of a
symmetric pair. Let $(f \co C \ra D, (\delta \varphi, \varphi))$ be
an $(n+1)$-dimensional symmetric pair which is not necessarily
Poincar\'e. Its boundary is the $n$-dimensional symmetric pair
$(\del f \co \del C \ra \del_+ D, (\del_+ \delta \varphi,\del
\varphi))$ with the chain complex
\[
\del_+ D = \sC \big( \smallpairmap \co D^{n+1-\ast} \ra \sC (f)
\big)
\]
defined in \cite{Milgram-Ranicki(1990)}. It is Poincar\'e. Similarly
one can define the boundary of a quadratic pair, which is again a
quadratic pair and also Poincar\'e. Finally the boundary of a normal
pair is a quadratic Poincar\'e pair.
Given $((g,c),(f,b)) \co (N,A) \ra (Y,B)$ a degree one normal map
from an $n$-dimensional manifold with boundary to an $n$-dimensional
normal pair, there are relative versions of the signatures appearing
in Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx}
that are defined in Definition
\ref{QuadraticSignatureOfDegree1NormalMapNotPoincare-relative-version}
and Construction \ref{con:quad-boundary-of-pair} below. Their
relationship is described by the promised relative version of the
previous proposition:
\begin{prop} \label{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair}
Let $((g,c),(f,b)) \co (N,A) \ra (Y,B)$ be a degree one normal map
from an $n$-dimensional manifold with boundary to an $n$-dimensional
normal pair. Then there is a homotopy equivalence of symmetric pairs
\[
h \co \partial \ssign ((g,c),(f,b)) \xra{\simeq} - (1+T) \partial
\nsign (Y,B)
\]
and a homotopy equivalence of quadratic refinements
\[
h \co \partial \qsign ((g,c),(f,b)) \xra{\simeq} - \partial \nsign
(Y,B).
\]
\end{prop}
Finally recall that our aim is to prove a certain statement about
quadratic chain complexes in the category $\Lambda(\ZZ)\langle 1
\rangle (X)$. Generalizing the definitions above one obtains for a
degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional
manifold to an $n$-dimensional Poincar\'e complex the desired
quadratic signature $\qsign_X (f,b)$ in Definition
\ref{defn:quad-signature-over-X-deg-one-normal-map-to-poincare}. Its
relationship to the normal signature of $X$ over $X$, which was
already discussed in section \ref{sec:normal-signatures-over-X}, is
described in the following proposition, which can be seen as a
global version of Proposition
\ref{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair}:
\begin{prop} \textup{\cite[page 192]{Ranicki(1992)}} \label{prop:degree-one-normal-map-mfd-to-poincare-over-X}
Let $(f,b) \co M \ra X$ be a degree one normal map from an
$n$-dimensional manifold to an $n$-dimensional Poincar\'e complex.
Then there is a homotopy equivalence of symmetric complexes over
$\ZZ_\ast (X)$
\[
h \co \partial \ssign_X (f,b) \xra{\simeq} - (1+T) \partial \nsign_X (X),
\]
a homotopy equivalence of quadratic refinements over $\ZZ_\ast
(X)$
\[
h \co \partial \qsign_X (f,b) \xra{\simeq} - \partial \nsign_X (X)
\]
and consequently a homotopy equivalence
\[
h \co \partial \qsign_X (f,b) \xra{\simeq} - \partial \vsign_X (X) = -s(X).
\]
\end{prop}
In the following subsections we will define the concepts used above and provide the proofs.
\subsection{Quadratic signature of a degree one normal map to a normal space}
\
Recall that the quadratic construction \ref{constrn:quadconstrn} was
needed in order to obtain the quadratic signature $\qsign (f,b)$ of
a degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional
manifold to an $n$-dimensional Poincar\'e space. To define $\qsign
(g,c)$ for a degree one normal map $(g,c) \co N \ra Y$ from an
$n$-dimensional manifold to an $n$-dimensional normal space we need
the spectral quadratic construction
\ref{con:SpectralQuadraticConstruction}.
\begin{defn} \cite[Proposition 7.3.4]{Ranicki(1981)} \label{QuadraticSignatureOfDegree1NormalMapNotPoincare}
Let~$(g,c) \co N \ra Y$ be a degree one normal map from an $n$-dimensional Poincar\'e
complex~$N$ to an $n$-dimensional normal complex~$Y$. The \emph{quadratic signature}
of~$(g,c)$ is the $n$-dimensional QAC
\[
\qsign (g,c) = (C,\psi)
\]
which is not necessarily Poincar\'e, obtained from a choice of the
Thom class $u (\nu_Y) \in \tilde{C}^k (\Th (\nu_Y))$ as follows.
Consider the commutative diagrams
\[
\xymatrix{ C^{n-\ast} (N) \ar[d]_{(\varphi(N))_0}^{\simeq} &
C^{n-\ast} (Y) \ar[l]_-{g^\ast} \ar[d]^{(\varphi(Y))_0}
& & \Th(\nu_N)^\ast \ar[d]_{\Gamma_N}^{\simeq} & \Th (\nu_Y)^{\ast} \ar[l]_{\Th (c)^{\ast}} \ar[d]^{\Gamma_Y} \\
C (N) \ar[r]_{g_\ast} & C(Y) & & \Sigma^p N_+ \ar[r]_{\Sigma^p g_+}
& \Sigma^p Y_+ }
\]
The maps $\Gamma_{\_}$ in the right diagram are obtained using the
$S$-duality as in Construction
\ref{con:S-duality-and-Thom-is-Poincare}. In fact using the
properties of the $S$-duality explained in Construction
\ref{con:S-duality-and-Thom-is-Poincare} we see that the left
diagram can be considered as induced from the right diagram by
applying the chain complex functor $C_{\ast+p} (-)$.
Set $g^{!} = (\varphi(N))_0 \circ g^\ast$ and define~$C = \sC
(g^{!})$ and $\psi = \Psi (u (\nu_Y)^\ast)$, where $\Psi$ denotes
the spectral quadratic construction on the map $\Gamma_N \circ \Th
(c)^\ast$.
Also note that by the properties of the spectral quadratic
construction we have
\[
(1+T) \psi \equiv e_{g^!}^{\%} (\varphi(N))
\]
where $e_{g^!} \co C(N) \ra \sC (g^!)$ is the inclusion.
\end{defn}
For the proof of Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx} we also need an
additional property of the spectral quadratic construction.
\begin{prop} \textup{\cite[Proposition 7.3.1. (v)]{Ranicki(1981)}} \label{PropertiesOfSpectralQuadraticConstruction}
Let~$F \co X \lra \Sigma^{p} Y$ and $F' \co X' \lra \Sigma^{p} Y'$
be semi-stable maps fitting into the following commutative diagram
\[
\xymatrix{
X \ar[d]_{G_X} \ar[r]^-F
& \Sigma^{p} Y \ar[d]^{G_Y}
\\
X' \ar[r]^-{F'}
& \Sigma^{p} Y'
}
\]
inducing the commutative diagram of chain complexes
\[
\xymatrix{
\Sigma^{-p}\tilde{C}(X) \ar[d]^{g_X} \ar[r]^-{f}
& \tilde{C}(Y) \ar[d]^{g_Y} \ar[r]^-{e}
& \cone(f) \ar[d]^{\smalltwoxtwomatrix{g_Y}{0}{0}{g_X}}
\\
\Sigma^{-p}\tilde{C}(X') \ar[r]^-{f'}
& \tilde{C}(Y') \ar[r]^-{e'}
& \cone(f')
}
\]
Then the spectral quadratic constructions of~$F$ and $F'$ are related
by
\[
\Psi (F') \circ g_X \equiv \smalltwoxtwomatrix{g_Y}{0}{0}{g_X}_\% \circ \Psi (F) + (e')_\% \circ \psi (G_Y) \circ f
\]
where $\Psi (-)$ and $\psi(-)$ denote the (spectral) quadratic constructions on the respective maps.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop:degree-one-normal-map-mfd-to-normal-cplx}]
For ease of notation, let
\begin{itemize}
\item $\ssign (g,c) = (\cone(g^!),\varphi (g^{!}))$,
\item $\nsign (Y) = (C(Y),\psi(Y))$,
\end{itemize}
and set
\begin{itemize}
\item $\del \ssign (g,c) = (\del\cone(g^!),\del\varphi(g^!))$,
\item $\del \nsign (Y) = (\del C(Y),\del \psi (Y))$.
\end{itemize}
Consider the following commutative diagram where all rows and
columns are cofibration sequences (the diagram also sets the
notation $\mu$, $q_g$ and $e_{g^!}$)
\begin{equation} \label{dgrm:umkehr-for-mfd-to-normal-cplx}
\begin{split}
\xymatrix{
0 \ar[r] \ar[d]
& C(Y)^{n-\ast} \ar[r]^{\id} \ar[d]^{g^!}
& C(Y)^{n-\ast} \ar[d]^{\varphi(Y)_0}
\\
\dsc(g) \ar[r]^{q_g} \ar[d]^\id
& C(N) \ar[r]^g \ar[d]_{e_{g^!}}
& C(Y)
\\
\dsc(g) \ar[r]^\mu
& \cone(g^!)
&
}
\end{split}
\end{equation}
We obtain a homotopy equivalence, say $h' \co \sC (\mu) \raeq \sC (\varphi(Y)_0) \simeq \Sigma \del C(Y)$. Inspection shows that $\sC (g^!)^{n-\ast} \simeq \dsc (g)$ and that under this homotopy equivalence the duality map $\varphi (g^{!})_0$ is identified with the map $\mu$. Hence we have
\begin{equation} \label{eqn:naturality-of-htpy-equiv-on-boundary}
e_Y \circ g = h' \circ e_{\varphi(g^!)_0} \circ e_{g^{!}}
\end{equation}
where $e_{\varphi(g^!)_0} \co \sC (g^{!}) \ra \Sigma \del \sC (g^{!})$ and $e_Y
\co C(Y) \ra \Sigma \del C (Y)$ are the inclusions. We also obtain a commutative braid of chain complexes, which we leave for the reader to draw, with chain homotopy equivalences
\[
h \co \del \cone(g^!) \xra{\simeq} \del C(Y) \quad \textup{and} \quad h' \co \sC (\varphi (g^!)_0) \simeq \sC (\mu)
\xra{\simeq} \sC (\varphi (Y)_0)
\]
which are related by $h' = - \Sigma(h)$, thanks to the sign conventions used for definitions of mapping cones and suspensions.
Next we consider the symmetric structures. Recall that, by definition, we have $S(\del\varphi(g^{!})) = e_{\varphi(g^!)_0}^\% \circ e_{g^{!}}^\% (\varphi (N))$ and $S(\del\varphi(Y))=e_Y^\% (\varphi(Y))$. Further we have $g^\% (\varphi (N)) = \varphi (Y)$ and hence
\[
(h')^\% S(\del\varphi(g^{!})) = S(\del\varphi(Y)).
\]
By the injectivity of the suspension we also have~$h^\% (\del\varphi(g^{!})) = - \del\varphi(Y)$.
Finally we study the quadratic structures. As above, write~$\del\nsign (Y) =
(\del C(Y),\del\psi(Y))$ and $\del \qsign (g,c) =
(\del\cone(g^!),\del\psi(g^{!}))$. Recall that the spectral
quadratic constructions are employed to define the quadratic
structures $\del\psi(Y)$ and $\del\psi(g^{!})$. By properties of the
$S$-duality the semi-stable maps used in these constructions fit
into the commutative diagram
\begin{equation} \label{dgrm:umkehr-for-mfd-to-normal-cplx-on-level-of-spaces}
\begin{split}
\xymatrix{
\Th(\nu_Y)^\ast \ar[d]_{\Gamma_N \circ \Th(c)^\ast} \ar[r]_{\id} & \Th(\nu_Y)^\ast \ar[d]^{\Gamma_Y} \\
\Sigma^p N_+ \ar[r]^{\Sigma^p g_+} & \Sigma^p Y_+
}
\end{split}
\end{equation}
which in fact induces the upper right part of Diagram
(\ref{dgrm:umkehr-for-mfd-to-normal-cplx}). By Proposition
\ref{PropertiesOfSpectralQuadraticConstruction} the spectral
quadratic constructions applied to the maps~$\Gamma_Y$, $\Gamma_N
\circ \Th(c)^\ast$ and $\Sigma^p g_+$ satisfy the relation
\[
\Psi (\Gamma_Y) = \smalltwoxtwomatrix{g}{0}{0}{1}_\% \circ \Psi (\Gamma_N \circ \Th(c)^\ast) + (e_{g_G})_\% \circ \psi (\Sigma^p g_+) \circ g_F
\]
where the symbols in the brackets specify the map to which the
(spectral) quadratic constructions are applied. However, the
map~$\Sigma^p g_+$ comes from the map~$g \co N \ra Y$ and so~$\psi
(\Sigma^p g_+) = 0$. This leads to the commutative diagram
\begin{equation} \label{dgrm:comparing-two-quad-str-on-cones}
\begin{split}
\xymatrix{
\tilde{C}_{n+p}(\Th(\nu_Y)^\ast) \ar[rr]^{\Psi (\Gamma_N \circ \Th(c)^\ast)} \ar[drr]_{\Psi (\Gamma_Y)}
&
& (\Wq{\cone(g^!)})_n \ar[d]^{\smalltwoxtwomatrix{g}{0}{0}{1}_\% = (h' \circ e_{\varphi(g^!)_0})_\%}
\\
&
& (\Wq{\cone(\varphi(Y)_0)})_n
}
\end{split}
\end{equation}
The identification of the vertical map comes from Diagram
(\ref{dgrm:umkehr-for-mfd-to-normal-cplx}) and equation
(\ref{eqn:naturality-of-htpy-equiv-on-boundary}). Hence we obtain
that
\begin{align*}
h'_\% (S \del \psi(g^{!})) & = h'_\% \circ (e_{\varphi(g^!)_0})_\%(S \psi(g^{!})) =
(h' \circ e_{\varphi(g^!)_0})_\% \Psi (\Gamma_N \circ \Th(c)^\ast) (u(\nu_Y)^\ast) \\
& = \Psi (\Gamma_Y) (u(\nu_Y)^\ast) = S \del \psi(Y).
\end{align*}
The uniqueness of desuspension as presented in Construction \ref{con:normal-con-via-spectral-quadratic-con} yields the desired
\[
h_\% (\del \psi(g^{!})) = - \del \psi(Y)
\]
thanks again to $h' = - \Sigma (h)$.
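For clarity, the final sign computation can be spelled out as follows: from $h'_\% (S \del \psi(g^{!})) = S \del \psi(Y)$ and $h' = - \Sigma (h)$ we obtain
\[
S(h_\% (\del \psi(g^{!}))) = \Sigma(h)_\% (S \del \psi(g^{!})) = - h'_\% (S \del \psi(g^{!})) = - S \del \psi(Y) = S(- \del \psi(Y)),
\]
and the uniqueness of the desuspension gives the claimed equation.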
\end{proof}
\subsection{Quadratic signature of a degree one normal map to a normal pair}
\
Now we aim at proving Proposition
\ref{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair},
which is a relative version of the proposition just proved. First we
need a relative version of Definition
\ref{QuadraticSignatureOfDegree1NormalMapNotPoincare}. For that we
need to know how to apply the spectral quadratic construction in the
relative setting and for that, in turn, how to apply $S$-duality in
the relative setting.
\begin{construction} \cite[Proposition 7.3.1]{Ranicki(1981)} \label{con:RelativeSpectralQuadraticConstruction}
Let~$(G,F) \co (X,A) \lra \Sigma^{p} (Y,B)$ be a semi-stable map
between pointed pairs. Consider the following diagram of induced
chain maps
\[
\xymatrix{
\tilde{C}(A)_{p+\ast} \ar[r]^-{f} \ar[d]^{i} & \tilde{C}(B) \ar[r] \ar[d]^{j} & \sC (f) \ar[d]^{(j,i)} \\
\tilde{C}(X)_{p+\ast} \ar[r]^-{g} & \tilde{C}(Y) \ar[r] & \sC (g)
}
\]
The \emph{relative spectral quadratic construction} on~$(G,F)$ is a
chain map
\[
\Psi \co \Sigma^{-p} \tilde{C}(X,A) \lra \sC ((j,i)_{\%})
\]
such that
\[
(1+T) \circ \Psi \equiv e^\% \circ \varphi \circ (g,f)
\]
where $\varphi \co \tilde{C}(Y,B) \lra \sC (j^{\%})$ is the relative
symmetric construction on $(Y,B)$ (Construction \ref{con:rel-sym})
and $e^{\%} \co \sC (j^{\%}) \ra \sC ((j,i)^{\%})$ is the map induced by the right hand square in the diagram above. The existence of $\Psi$ follows from the naturality of the diagram in Construction
\ref{con:SpectralQuadraticConstruction}.
\end{construction}
The $S$-duality is applied to normal pairs as follows. Recall that
an $(n+1)$-dimensional geometric normal pair $(Y,B)$ comes with the
map of pairs
\[
(\rho_Y,\rho_B) \co (D^{n+k+1},S^{n+k}) \ra (\Th (\nu_Y),\Th
(\nu_B))
\]
In the absolute case the map $\rho$ composed with the diagonal map
gave rise to a map $\Gamma$ which was the input for the spectral
quadratic construction. Now we have three diagonal maps producing
three compositions:
\begin{align*}
S^{n+k} \xra{\rho_B} \Th (\nu_B) & \xra{\Delta} B_+ \wedge \Th (\nu_B) \\
S^{n+k+1} \xra{\rho_Y/\rho_B} \Th (\nu_Y) / \Th (\nu_B) & \xra{\Delta} Y_+ \wedge \Th (\nu_Y) / \Th (\nu_B) \\
S^{n+k+1} \xra{\rho_Y/\rho_B} \Th (\nu_Y) / \Th (\nu_B) &
\xra{\Delta} Y/B \wedge \Th (\nu_Y)
\end{align*}
These induce three duality maps which fit into a commutative diagram
as follows:
\begin{equation} \label{diag:relative-S-duality}
\begin{split}
\xymatrix{
\Sigma^{-1} \Th (\nu_B)^\ast \ar[d]^{\Sigma^{-1} \Gamma_B} \ar[r]^(0.42){i} & (\Th (\nu_Y)/\Th (\nu_B))^\ast \ar[d]^{\Gamma_Y} \ar[r] & \Th (\nu_Y)^\ast \ar[d]^{\Gamma_{(Y,B)}} \ar[r] & \Th (\nu_B)^\ast \ar[d]^{\Gamma_B} \\
\Sigma^p B \ar[r]^{j} & \Sigma^p Y \ar[r] & \Sigma^p (Y/B) \ar[r] & \Sigma^{p+1} B.
}
\end{split}
\end{equation}
\begin{con} \label{con:quad-boundary-of-pair}
The quadratic boundary of the normal pair $(Y,B)$ is an
$n$-dimensional QAPP
\[
\del \nsign (Y,B) = (\del C(B) \ra \del_+ C(Y),(\psi(Y),\psi(B)))
\]
obtained by applying the relative spectral quadratic construction on
the pair of maps $(\Gamma_Y,\Sigma^{-1} \Gamma_B)$ with
$(\psi(Y),\psi(B))$ the image of $u(\nu_Y)^\ast \in \tilde{C} (\Th
(\nu_Y)^\ast)_{n + p}$ under
\[
\Psi \co \Sigma^{-p} \tilde{C} (\Th (\nu_Y)^\ast) \ra \sC ((j,i)_{\%})
\]
where $i$ and $j$ are as in Diagram (\ref{diag:relative-S-duality}).
\end{con}
\begin{defn}\label{QuadraticSignatureOfDegree1NormalMapNotPoincare-relative-version}
Let~$((g,c),(f,b)) \co (N,A) \ra (Y,B)$ be a degree one normal map
from a Poincar\'e pair~$(N,A)$ to a normal pair~$(Y,B)$ of dimension
$(n+1)$. The \emph{quadratic signature} of~$((g,c),(f,b))$ is the
$n$-dimensional quadratic pair
\[
\qsign ((g,c),(f,b)) = (j \co C \ra D,(\delta \psi,\psi))
\]
which is not necessarily Poincar\'e, obtained from a choice of the
Thom class $u (\nu_Y)$ as follows.
The $S$-duality produces a commutative diagram
\[
\xymatrix{
\Sigma^{p} A_+ \ar[d] & & \Sigma^{-1} \Th (\nu_B)^\ast \ar[ll]_{\Gamma_A \circ \Th(b)^\ast} \ar[d] \\
\Sigma^p N_+ & & (\Th (\nu_Y) / \Th (\nu_B) )^\ast
\ar[ll]^(0.6){\Gamma_N \circ (\Th(c)/\Th(b))^\ast} }
\]
inducing the diagram of chain complexes
\[
\xymatrix{
C(A) \ar[d] & C^{n-\ast} (B) \ar[l]_{f^!} \ar[d] \\
C(N) & C^{n+1-\ast} (Y,B) \ar[l]^(0.6){g^!} }
\]
Define~$C = \sC (f^{!})$, $D = \sC (g^!)$ and $(\delta \psi, \psi) =
\Psi (u (\nu_Y)^\ast)$, where the spectral quadratic construction is
on the pair of maps $(\Gamma_N \circ (\Th(c)/\Th(b))^\ast,\Gamma_A \circ
\Th(b)^\ast)$.
\end{defn}
\begin{proof}[Proof of Proposition
\ref{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair}]
The proof follows the same pattern as the proof of Proposition
\ref{prop:degree-one-normal-map-mfd-to-normal-cplx}. With the
notation of Construction \ref{con:quad-boundary-of-pair} and
Definition
\ref{QuadraticSignatureOfDegree1NormalMapNotPoincare-relative-version}
one first observes that we have a homotopy equivalence of pairs
\[
(\del h \ra h) \co (\del C(B) \ra \del_+ C(Y)) \xra{\simeq} (j \co C
\ra D)
\]
by studying a map of diagrams of the same shape as Diagram
(\ref{dgrm:umkehr-for-mfd-to-normal-cplx}). For the symmetric
structures observe that the homotopy equivalence $(\del h \ra h)$
satisfies an equation analogous to
(\ref{eqn:naturality-of-htpy-equiv-on-boundary}) and again use the
naturality. Finally to obtain the desired equivalence of quadratic
structures there is again a map of commutative squares of the form
as in Diagram
(\ref{dgrm:umkehr-for-mfd-to-normal-cplx-on-level-of-spaces}). A
diagram chase shows that the relative spectral quadratic construction
satisfies a formula analogous to the one appearing in Proposition
\ref{PropertiesOfSpectralQuadraticConstruction}. This leads to a
diagram analogous to Diagram
(\ref{dgrm:comparing-two-quad-str-on-cones}). As in the absolute
case unraveling what it means and using the desuspension produces
the desired equation.
\end{proof}
\begin{rem} \label{pairs-vs-k-ads}
Just as explained in section \ref{sec:cat-over-cplxs} the relative
version just proved has a generalization for $k$-ads.
\end{rem}
\subsection{Quadratic signature over $X$ of a degree one normal map to $X$} \label{subsec:quad-sign-over-X}
\
Now we want to prove Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X}. The
preparation starts with discussing $\nsign_X (X)$ for an $n$-dimensional GPC. This was defined in section \ref{sec:normal-signatures-over-X} by first passing to $\gnsign_X
(X)$ and then applying the spectrum map $\nsign \co \Omega^N_\bullet
\ra \bNL^\bullet$ from \cite{Weiss-II(1985)}. By the proof of
\cite[Theorem 7.1]{Weiss-II(1985)} the spectrum map $\nsign$
composed with the boundary fits with the quadratic boundary
construction as described in subsection
\ref{subsec:quadratic-boundary-of-gnc}, so we can think of $\del
\nsign_X (X)$ as a collection of $(n-|\sigma|)$-dimensional
quadratic $(m-|\sigma|)$-ads indexed by simplices of $X$ which fit
together and are obtained by the relative spectral construction
described in Construction \ref{con:quad-boundary-of-pair}.
\begin{defn} \label{defn:quad-signature-over-X-deg-one-normal-map-to-poincare}
Let $(f,b) \co M \ra X$ be a degree one normal map from a closed
$n$-dimensional topological manifold to an $n$-dimensional GPC. Make
$f$ transverse to the dual cell $D(\sigma,X)$ for each $\sigma \in
X$ so that we have a degree one normal map
\[
(f(\sigma),f(\del \sigma)) \co (M(\sigma),\del M(\sigma)) \ra (X(\sigma),\del X(\sigma))
\]
from an $(n-|\sigma|)$-dimensional manifold with boundary to an
$(n-|\sigma|)$-dimensional normal pair. Define the \emph{quadratic
signature over $X$} of $(f,b)$ to be the element
\[
\qsign_X (f,b) \in L_n (\Lambda (\ZZ) (X))
\]
represented by the $n$-dimensional QAC $(C,\psi)$ in $\Lambda (\ZZ)
(X)$ whose component over $\sigma \in X$ is the relative quadratic
signature
\[
\qsign \big( (f(\sigma),b(\sigma)), \del (f(\sigma),b(\sigma)) \big)
\]
obtained as in Definition
\ref{QuadraticSignatureOfDegree1NormalMapNotPoincare-relative-version}.
The resulting element is independent of all the choices.
\end{defn}
\begin{proof}[Proof of Proposition
\ref{prop:degree-one-normal-map-mfd-to-poincare-over-X}]
In order to prove the proposition we need homotopy equivalences as
in the statement for each simplex, chosen so that they fit together.
For each simplex this is exactly the statement of Proposition
\ref{prop:degree-one-normal-map-mfd-with-boundary-to-normal-pair}.
Since one can proceed inductively from the simplices of top
dimension to smaller simplices, the homotopy equivalences can be
made to fit together.
To obtain the last homotopy equivalence recall that $X$ is an $n$-dimensional Poincar\'e complex and hence the same complex that defines the normal signature defines the visible signature over $X$, see section \ref{sec:normal-signatures-over-X} if needed.
\end{proof}
\subsection{Identification of the quadratic signature with the assembly} \label{subsec:identification-of-quad-sign-with-assembly}
\
Now we proceed to Step (2) of the proof of the Main Technical Theorem part (II). We first state the following preparatory proposition which describes what happens when we consider the difference of the quadratic signatures of two degree one normal maps.
\begin{prop} \label{prop:difference-of-quad-sign-is-poincare}
Let $(f_i,b_i) \co M_i \ra X$ with $i = 0,1$ be two degree one normal maps from $n$-dimensional topological manifolds to an $n$-dimensional GPC. Then the difference of their quadratic signatures over $\ZZ_\ast (X)$ is an $n$-dimensional QAC in the algebraic bordism category $\Lambda \langle 1 \rangle (\ZZ)_\ast (X)$ and hence represents an element
\[
\qsign_X (f_1,b_1) - \qsign_X (f_0,b_0) \in H_n (X ; \bL_\bullet \langle 1 \rangle).
\]
\end{prop}
\begin{proof}
A quadratic chain complex in $\ZZ_\ast (X)$ is an $n$-dimensional QAC in the algebraic bordism category $\Lambda \langle 1 \rangle (\ZZ)_\ast (X)$ if and only if it is locally Poincar\'e, which is equivalent to saying that its boundary is contractible. So it is enough to prove that the two quadratic complexes representing $\qsign_X (f_i,b_i)$ have homotopy equivalent boundaries. This follows from Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} since they are both homotopy equivalent to $-\del \nsign_X (X)$.
\end{proof}
The degree one normal maps with target $X$ are organized into the set of normal invariants $\sN(X)$. The above proposition tells us that the quadratic signature over $X$ relative to $(f_0,b_0)$ defines a map
\begin{equation} \label{eqn:quad-sign-rel-to-x-0}
\qsign_X (-,-) - \qsign_X (f_0,b_0) \co \sN (X) \ra H_n (X ; \bL_\bullet \langle 1 \rangle).
\end{equation}
The following proposition is the main result in the proof of Step (2). It says that for $X$ an $n$-dimensional Poincar\'e complex such that $t(X) = 0$ the surgery obstruction map $\qsign_{\ZZ [\pi_1 (X)]} \co \sN (X) \ra L_n (\ZZ [\pi_1 (X)])$ can be identified with the assembly map. When $X$ is already a manifold the map $(f_0,b_0)$ can be taken to be the identity.
\begin{prop} \textup{\cite[pages 293-297]{Ranicki(1979)}, \cite[proof of Theorem 17.4]{Ranicki(1992)}} \ \label{prop:identification}
\
Let $X$ be an $n$-dimensional GPC with $\pi = \pi_1 (X)$ such that $t(X) = 0$ and let $(f_0,b_0) \co M_0 \ra X$ be any choice of a degree one normal map. Then the diagram
\[
\xymatrix{
\sN (X) \ar[rrr]^(0.43){\qsign_{\ZZ [\pi]} (-,-) - \qsign_{\ZZ [\pi]} (f_0,b_0)} \ar[d]_{\qsign_X (-,-) - \qsign_X (f_0,b_0)}^{\cong} & & & L_n (\ZZ [\pi_1 (X)]) \ar[d]^{=} \\
H_n (X ; \bL_\bullet \langle 1 \rangle) \ar[rrr]_{A} & & & L_n (\ZZ [\pi_1 (X)])
}
\]
is commutative and the left vertical map is a bijection.
\end{prop}
\begin{proof}[Overview of the proof]
\
To see the commutativity consider $(f,b) \co M \ra X$ a degree one normal map from an $n$-dimensional manifold to an $n$-dimensional Poincar\'e complex $X$ and $A \co \Lambda (X) \ra \Lambda (\ZZ[\pi_1 (X)])$ the assembly functor. Then we have
\[
A(\qsign_X (f,b)) = \qsign_{\ZZ[\pi_1 (X)]} (f,b) \in L_n (\Lambda (\ZZ[\pi_1 (X)]))
\]
since the assembly corresponds to geometric gluing, see Remark \ref{rem:hlgical-assembly}.
We are left with showing that the left hand vertical map is a bijection, which will be done by identifying it with a composition of four maps which are all bijections. In order to save space we will abbreviate (using $x_0$ for $(f_0,b_0)$)
\begin{align*}
\qsign_X (-;x_0) & := \qsign_X (-,-) - \qsign_X (f_0,b_0)
\end{align*}
The strategy of the proof can be summarized in the following diagram:
\[
\xymatrix{
\sN (X) \ar[d]^{t (-,x_0)}_{\cong} \ar@/_8pc/[dddd]^(0.35){\qsign_X (-;x_0)} \ar[r]^{=} & \sN (X) \ar[d]^{t (-,x_0)}_{\cong} \ar@/^9pc/[dddd]_(0.35){\sign^{\G/\TOP}_X (-;x_0)} \\
[X;\G/\TOP] \ar[d]^{\qsign}_{\cong} \ar[r]^{=} & [X;\G/\TOP] \ar[d]^{\widetilde \Gamma} & \\
H^0 (X ; \bL_\bullet \langle 1 \rangle) \ar[d]^{- \cup u^{\bL_\bullet} (\nu_0)}_{\cong} & H^0 (X ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet) \ar[d]^{- \cup u^{\STOP}(\nu_0)}_{\cong} \ar[l]_{\qsign} \\
H^k (\Th (\nu_X) ; \bL_\bullet \langle 1 \rangle) \ar[d]^{S-\textup{dual}}_{\cong} & H^k (\Th (\nu_X) ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet) \ar[d]^{S-\textup{dual}}_{\cong} \ar[l]_{\qsign} \\
H_n (X ; \bL_\bullet \langle 1 \rangle) & H_n (X ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet) \ar[l]_{\qsign}
}
\]
Some of the maps in the diagram have been defined already; the remaining ones will be defined shortly. We will show that those marked with $\cong$ are bijections or isomorphisms. Once this is done it is enough to show that the left hand part of the diagram is commutative, because then we have indeed identified $\qsign_X (-;x_0)$ with a composition of four bijections. The commutativity of the left hand part will be shown by proving that:
\begin{enumerate}
\item[(a)] the outer square commutes (subsection \ref{subsec:quad-sign-versus-normal-sign}),
\item[(b)] the middle part commutes (subsection \ref{subsec:proof-of-2}),
\item[(c)] the right hand part commutes (subsection \ref{subsec:signatures-versus-orientations}).
\end{enumerate}
The intermediate subsections contain the necessary definitions.
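In other words, once (a), (b) and (c) are established, the left hand column of the diagram exhibits the factorization
\[
\qsign_X (-;x_0) = (S\textup{-dual}) \circ (- \cup u^{\bL_\bullet} (\nu_0)) \circ \qsign \circ t(-,x_0),
\]
a composition of four bijections.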
\end{proof}
\subsection{Proof of (a) - Quadratic signatures versus normal signatures}
\label{subsec:quad-sign-versus-normal-sign}
\
Recall that for a degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional manifold to an $n$-dimensional GPC we have two ways to obtain its quadratic signature over $\ZZ [\pi_1 (X)]$: via the Umkehr map $f^!$ as in Construction \ref{constrn:quadconstrn}, or via the normal structure on the mapping cylinder $W$ as in Example \ref{expl:normal-symm-poincare-pair-gives-quadratic}. It was further shown in Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} that these two constructions yield the same result. In subsection \ref{subsec:quad-sign-over-X} we defined the quadratic signature of $(f,b)$ over $\ZZ_\ast (X)$ using Umkehr maps, providing an analogue of Construction \ref{constrn:quadconstrn}. Here we provide an analogue of Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} over $\ZZ_\ast (X)$.
\begin{expl} \label{expl:normal-pair-gives-quadratic-cplx-not-Poincare}
This is a generalization of the results of subsection \ref{subsec:quadratic-boundary-of-gnc}. Let us consider a degree one normal map $(g,c) \co N \ra Y$ from an $n$-dimensional manifold to an $n$-dimensional GNC. We obtain an $(n+1)$-dimensional normal pair $(W,N \sqcup Y)$ where $W$ is the mapping cylinder of $g$. However, in contrast to the situation in Example \ref{expl:normal-symm-poincare-pair-gives-quadratic}, the disjoint union $N \sqcup Y$ is no longer a Poincar\'e complex. Therefore the associated algebraic normal pair
\[
(\nsign (W),\ssign (N) - \nsign (Y))
\]
does not have a Poincar\'e boundary. Nevertheless we can still perform algebraic surgery on this pair, just as in Lemma \ref{lem:normal-sym-pair-gives-quad-poincare-cplx}, and thanks to the spectral quadratic construction the result of the surgery is an $n$-dimensional quadratic complex, which, however, will not be Poincar\'e. The proof from Example \ref{expl:normal-symm-poincare-pair-gives-quadratic} translates almost word-for-word to an identification of this quadratic complex with $\qsign (g,c)$ from Definition \ref{QuadraticSignatureOfDegree1NormalMapNotPoincare} (the only difference being the fact that the map $\varphi_0|_Y$ is no longer an equivalence). In symbols we have
\begin{equation}
\textup{Lemma } \ref{lem:normal-sym-pair-gives-quad-poincare-cplx}
\co (\nsign (W),\ssign (N) - \nsign (Y)) \mapsto \qsign (g,c).
\end{equation}
Using the relative version of $S$-duality and of the spectral quadratic construction from earlier in this section one also obtains a relative version of this identification.
\end{expl}
\begin{expl} \label{expl:normal-symmetric-pair-gives-quadratic-over-X}
Starting now with a degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional manifold to an $n$-dimensional GPC consider the dissection
\[
X = \bigcup_{\sigma \in X} X(\sigma) \quad (f,b) = \bigcup_{\sigma \in X} (f(\sigma),b(\sigma)) \co M (\sigma) \ra X (\sigma)
\]
where each $(f(\sigma),b(\sigma))$ is a degree one normal map from an $(n-|\sigma|)$-dimensional manifold $(m-|\sigma|)$-ad to an $(n-|\sigma|)$-dimensional normal $(m-|\sigma|)$-ad. As such it gives rise to an $(n+1-|\sigma|)$-dimensional pair of normal $(m-|\sigma|)$-ads
\[
(W(\sigma),\nu(b(\sigma)),\rho(b(\sigma))).
\]
Applying Example \ref{expl:normal-pair-gives-quadratic-cplx-not-Poincare} shows that the quadratic chain complex over $\ZZ_\ast (X)$ obtained this way coincides with the quadratic signature $\qsign_X (f,b)$ from Definition \ref{defn:quad-signature-over-X-deg-one-normal-map-to-poincare}.
\end{expl}
The following Lemma is a generalization of ideas from Construction \ref{con:normal-symmetric-signature-over-X-for-a-deg-one-normal-map}. In its statement use is made of the quadratic signature map $\qsign \co \Sigma ^{-1} \bOmega^{N,\STOP}_\bullet \ra \bL_\bullet \langle 1 \rangle$ from Proposition \ref{prop:signatures-on-spectra-level}.
\begin{lem} \label{lem:normal-symmetric-signature-over-X-for-a-difference-of-deg-one-normal-maps}
Let $x_i = (f_i,b_i) \co M_i \ra X$ with $i = 0,1$ be two degree one normal maps from $n$-dimensional topological manifolds to an $n$-dimensional GPC. Then there exists a $\G/\TOP$-signature
\[
\sign^{\G/\TOP}_X (x_1,x_0) \in H_n (X ; \Sigma ^{-1} \bOmega^{N,\STOP}_\bullet)
\]
such that
\[
\qsign_X (f_1,b_1) - \qsign_X (f_0,b_0) = \qsign (\sign^{\G/\TOP}_X (x_1,x_0)) \in H_n (X ; \bL_\bullet \langle 1 \rangle).
\]
\end{lem}
\begin{proof}
Consider the dissections of $(f_i,b_i) \co M_i \ra X$ as in Example \ref{expl:normal-symmetric-pair-gives-quadratic-over-X}. The assignments
\[
\sigma \mapsto (W_i(\sigma),\nu(b_i(\sigma)),\rho(b_i(\sigma)))
\]
fit together to produce cobordisms of $\bOmega^N_\bullet$-cycles in the sense of Definition \ref{defn:E-cycles}. However, they do not produce a $\Sigma ^{-1} \bOmega^{N,\STOP}_\bullet$-cycle since the ends of the cobordisms given by $X$ are not topological manifolds. But the two ends for $i=0,1$ are equal and so we can glue the two cobordisms along these ends and we obtain for each $\sigma \in X$ the $(n+1-|\sigma|)$-dimensional pairs of normal $(m-|\sigma|)$-ads
\[
(W_1(\sigma) \cup_{X(\sigma)} W_0(\sigma),\nu(b_1(\sigma)) \cup_{\nu_{X(\sigma)}} \nu(b_0(\sigma)),\rho(b_1(\sigma)) \cup_{\rho(\sigma)} \rho(b_0(\sigma)))
\]
which now fit together to produce a $\Sigma^{-1} \bOmega^{N,\STOP}_\bullet$-cycle in the sense of Definition \ref{defn:E-cycles}. This produces the desired signature
\[
\sign^{\G/\TOP}_X (x_1,x_0) \in H_n (X ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet).
\]
To prove the equation recall from Proposition \ref{prop:connective-signatures-on-spectra-level} the quadratic signature map $\qsign \co \Sigma^{-1} \bOmega^{N,\STOP}_\bullet \ra \bL_\bullet \langle 1 \rangle$. We have to investigate the value of the induced map on the just defined $\G/\TOP$-signature. By definition this value is given on each simplex $\sigma \in X$ as the $(n-|\sigma|)$-dimensional quadratic Poincar\'e $(m-|\sigma|)$-ad obtained by the algebraic surgery on the algebraic pair extracted from the (normal,topological manifold) pair $(W_1(\sigma) \cup_{X(\sigma)} W_0(\sigma),M_1 (\sigma) \sqcup M_0 (\sigma))$.
Consider now the left hand side of the desired equation. By Example \ref{expl:normal-symmetric-pair-gives-quadratic-over-X} the value of each summand on a simplex $\sigma$ is obtained via algebraic surgery on the algebraic pair extracted from the normal pair $(W_i(\sigma),M_i(\sigma) \sqcup X(\sigma))$ (whose boundaries are not Poincar\'e and so the resulting complexes are also not Poincar\'e). Subtracting these corresponds to taking the disjoint union of the normal pairs above and reversing the orientation on the one labeled with $i = 0$. On the other hand there is a geometric normal cobordism between geometric normal pairs
\[
(W_1(\sigma) \cup_{X(\sigma)} W_0(\sigma), M_1(\sigma) \sqcup -M_0(\sigma))
\]
and
\[
(W_1(\sigma) \sqcup - W_0(\sigma),M_1(\sigma) \sqcup X(\sigma) \sqcup -M_0(\sigma) \sqcup -X(\sigma))
\]
which induces an algebraic cobordism and hence the extracted algebraic data are also cobordant.
\end{proof}
\subsection{Normal invariants revisited} \label{subsec:normal-invariants-revisited}
\
Let $X$ be an $n$-dimensional GPC which admits a topological block bundle reduction of its SNF. For such an $X$ we will now discuss in more detail the bijection $\sN (X) \cong [X ; \G/\TOP]$ which was already used in the proof of Theorem \ref{thm:lifts-vs-orientations}.
We first set up some notation. An element $x \in \sN (X)$ is represented either by a degree one normal map $(f,b) \co M \ra X$ from an $n$-dimensional topological manifold $M$ to $X$ or by a pair $(\nu,h)$ where $\nu \co X \ra \BSTOP$ is a stable topological block bundle on $X$ and $h \co J(\nu) \simeq \nu_X$ is a homotopy from the underlying spherical fibration to the SNF. The two descriptions of normal invariants are identified via the usual Pontrjagin-Thom construction, see \cite[chapter 10]{Wall(1999)} if needed.
An element in the set $[X ; \G/\TOP]$ of homotopy classes of maps from $X$ to $\G/\TOP$ can be thought of as represented by a pair $(\bar \nu,\bar h)$, where $\bar \nu \co X \ra \BSTOP$ is a stable topological block bundle on $X$ and $\bar h \co J(\bar \nu) \simeq \ast$ is a homotopy from the underlying spherical fibration to the constant map (which represents the trivial spherical fibration). The set $[X ; \G/\TOP]$ is a group under the Whitney sum operation and it has an action on $\sN (X)$ by
\begin{align*}
[X ; \G/\TOP] \times \sN (X) & \ra \sN (X) \\
((\bar \nu,\bar h) , (\nu,h)) & \mapsto (\bar \nu \oplus \nu, \bar h \oplus h).
\end{align*}
The action is free and transitive \cite[chapter 10]{Wall(1999)} and hence any choice of a point $x_0 = (\nu_0,h_0) = (f_0,b_0) \in \sN (X)$ gives a bijection $[X ; \G/\TOP] \cong \sN (X)$ whose inverse is denoted by
\begin{equation}
t (-,x_0) \co \sN (X) \rightarrow [X ; \G/\TOP]
\end{equation}
So for $x = (\nu,h)$ and $t (x,x_0) = (\bar \nu,\bar h)$ we have
\begin{equation}
(\nu,h) = (\bar \nu \oplus \nu_0,\bar h \oplus h_0).
\end{equation}
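For illustration, note that the base point is sent to the trivial element: since $\nu_0 = \varepsilon \oplus \nu_0$ and $h_0 = \textup{const} \oplus h_0$ up to homotopy, the freeness of the action gives
\[
t (x_0,x_0) = (\varepsilon, \textup{const}) \in [X ; \G/\TOP].
\]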
\subsection{From normal invariants to cohomology} \label{subsec:normal-invariants-to-cohomology}
\
\begin{con} \label{con:widetilde-Gamma}
Now we construct the map
\[
\widetilde \Gamma \co \G/\TOP \ra \Sigma^{-1} \bOmega^{N,\STOP}_0.
\]
To an $l$-simplex in $\G/\TOP$ alias a degree one normal map $(f,b) \co M \ra \Delta^l$ the map $\widetilde \Gamma$ associates an $l$-simplex of $\Sigma^{-1} \bOmega^{N,\STOP}_0$ alias an $(l+1)$-dimensional $l$-ad of (normal,topological manifold) pairs $(W,M \sqcup -\Delta^l)$ where $W$ is the mapping cylinder of $f$ and the normal structure comes from the bundle map $b$.
\end{con}
The proof of Theorem \ref{thm:lifts-vs-orientations} shows that the surgery obstruction alias the quadratic signature map $\qsign \co \G/\TOP \ra \bL_0 \langle 1 \rangle$ can be thought of as a composition of two maps:
\[
\widetilde \Gamma \co \G/\TOP \ra \Sigma^{-1} \bOmega^{N,\STOP}_0 \quad \textup{and} \quad \qsign \co \Sigma^{-1} \bOmega^{N,\STOP}_0 \ra \bL_0 \langle 1 \rangle,
\]
where the second map comes from Proposition \ref{prop:connective-signatures-on-spectra-level}.
\subsection{Products and Thom isomorphism} \label{subsec:products}
\
Now we briefly review the cup products in ring and module spectra. We concentrate only on the cases which are used in this paper; in fact we only need the cup products realizing the Thom isomorphism. Let $X$ be a CW-complex and let $\xi$ be a $k$-dimensional spherical fibration over $X$. Further suppose that $\bF$ is a module spectrum over the ring spectrum $\bE$. Then there are the cup products
\begin{align*}
- \cup - \co H^p (X ; \bF) \otimes H^q (\Th (\xi) ; \bE) & \ra H^{p+q} (\Th (\xi) ; \bF) \\
x \otimes y & \mapsto x \cup y
\end{align*}
given by the composition
\[
x \cup y \co \Th (\xi) \xra{\Delta} X_+ \wedge \Th (\xi) \xra{x \wedge y} \bF_p \wedge \bE_q \ra \bF_{p+q}
\]
with $\Delta$ the diagonal map already mentioned for example in section \ref{sec:normal-cplxs}.
If we have an $\bE$-orientation $u \in H^k (\Th (\xi) ; \bE)$, then the resulting homomorphism
\begin{equation}
- \cup u \co H^p (X ; \bF) \ra H^{p+k} (\Th (\xi) ; \bF)
\end{equation}
is the Thom isomorphism.
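As a sanity check, for the trivial $k$-dimensional spherical fibration $\varepsilon$ over $X$ we have $\Th (\varepsilon) = \Sigma^k X_+$, the canonical $\bE$-orientation is the $k$-fold suspension of the unit of $\bE$, and the homomorphism $- \cup u$ becomes the suspension isomorphism
\[
H^p (X ; \bF) \cong \tilde H^{p+k} (\Sigma^k X_+ ; \bF) = H^{p+k} (\Th (\varepsilon) ; \bF).
\]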
Remember now that we are working with simplicial complexes and $\Delta$-sets rather than topological spaces. The above constructions work in this setting as long as we choose a simplicial approximation of the diagonal map $\Delta$. An explicit description of such an approximation in our situation is given in \cite[Remark 12.5]{Ranicki(1992)}. However, it does not help us much since we do not understand its behavior with respect to the signatures we have defined earlier in this section. On the other hand, as we will see, we understand the behavior of the diagonal map $\Delta$ of spaces with respect to the orientations discussed in section \ref{sec:proof-part-1}. Since we also know that the orientations correspond to the signatures via the $S$-duality (Proposition \ref{prop:S-duals-of-orientations-are-signatures}) we can work with them and therefore we can work with the version of the cup product for spaces.
\subsection{Proof of (b) - Naturality of orientations} \label{subsec:proof-of-2}
\
The commutativity of the second square follows from the second paragraph of subsection \ref{subsec:normal-invariants-to-cohomology}. The commutativity of the third square follows from the naturality of the cup product with respect to the coefficient spectra and from the fact that the canonical $\bL^\bullet$-orientation of a stable topological block bundle is the image of the canonical $\bMSTOP$-orientation (Proposition \ref{canonical-L-orientations}). The commutativity of the fourth square follows from the naturality of the $S$-duality with respect to the coefficient spectra.
\subsection{Proof of (c) - Signatures versus orientations revisited} \label{subsec:signatures-versus-orientations}
\
If $x = (f,b) \co M \ra X$ represents an element from $\sN (X)$ then (c) can be expressed by the formula
\begin{equation} \label{eqn:cup-product-on-wgamma-and-orientation-gives-signature}
\widetilde \Gamma (t(x,x_0)) \cup u^{\STOP} (\nu_0) = S^{-1} (\sign^{\G/\TOP}_X (x,x_0))
\end{equation}
in the group $H^k (\Th (\nu_X) ; \Sigma^{-1} \bOmega^{N,\STOP}_\bullet)$. Here $x_i \in \sN(X)$ are represented either by degree one normal maps $(f_i,b_i) \co M_i \ra X$ or by pairs $(\nu_i,h_i)$ as in subsection \ref{subsec:normal-invariants-revisited} and we keep this notation for the rest of this section. For the proof an even better understanding of the relationship between various signatures and orientations is needed. To bring the orientations into play we use the Pontrjagin-Thom identification
\[
\Sigma^{-1} \bOmega^{N,\STOP}_\bullet \simeq \bMSGTOP := \textup{Fiber} \; (\bMSTOP \ra \bMSG).
\]
We will show (\ref{eqn:cup-product-on-wgamma-and-orientation-gives-signature}) in two steps, namely we show that both sides are equal to a certain element $u^{\G/\TOP} (\nu,\nu_0) \in H^k (\Th (\nu_X) ; \bMSGTOP)$.
\begin{con}
Recall the canonical $\STOP$-orientations (subsection \ref{subsec:L-theory-orientations}), whose difference gives an element
\[
u^{\STOP} (\nu) - u^{\STOP} (\nu_0) \in H^k (\Th (\nu_X) ; \bMSTOP),
\]
and recall also that we have the homotopy $h_0 \cup h \co \Th (\nu_X) \times [-1,1] \ra \bMSG$ between $J (\nu)$ and $J (\nu_0)$. This homotopy can also be viewed as a null-homotopy of the map $J(u^{\STOP} (\nu) - u^{\STOP} (\nu_0))$. Hence we obtain a preferred lift which we denote
\[
u^{\G/\TOP} (\nu,\nu_0) \in H^k (\Th (\nu_X) ; \bMSGTOP).
\]
\end{con}
\begin{prop} \label{prop:S-duals-of-orientations-are-signatures-relative-case-non-mfd}
Let $X$ be an $n$-dimensional GPC and let $x$, $x_0$ be two topological block bundle reductions of the SNF. Then we have
\[
S (u^{\G/\TOP} (\nu,\nu_0)) = \sign^{\G/\TOP}_X (x,x_0) \in H_n (X ; \bMSGTOP).
\]
\end{prop}
\begin{proof}
The proof is analogous to the proof of Proposition \ref{prop:S-duals-of-orientations-are-signatures-relative-case}. Recall that the signature $\sign^{\G/\TOP}_X (x,x_0)$ is constructed using the dissections of the degree one normal maps $(f_i,b_i)$. From these dissections we inspect that we have a commutative diagram
\[
\xymatrix{
\Sigma^m/\overline X \ar[d]_{i} \ar[rr]^{\sign^{\G/\TOP}_X (x,x_0)} & & \Sigma^{-1} \Omega_{-k}^{N,\STOP} \ar[d]^{c} \\
\textup{Sing} \; F(\nu;\nu_0) \ar[rr]_-{u^{\G/\TOP} (\nu,\nu_0)} & & \textup{Sing} \; \bMSGTOP (k) }
\]
where we use the notation $\bMSGTOP (k) := \textup{Fiber} \; (\bMSTOP (k) \ra \bMSG (k))$ and $F(\nu;\nu_0) := \textup{Pullback} \; (\Th (\nu) \ra \Th (\nu_X) \leftarrow \Th (\nu_0))$. This proves the claim.
\end{proof}
Now we turn to the left hand side of the formula (\ref{eqn:cup-product-on-wgamma-and-orientation-gives-signature}). We first need to understand the composition (abusing the notation slightly):
\[
\widetilde \Gamma \co [X;\G/\TOP] \xra{\widetilde \Gamma} H^0 (X;\Sigma^{-1} \bOmega^{N,\STOP}_\bullet) \ra H^0 (X;\bMSGTOP).
\]
Let $(\bar \nu,\bar h)$ represent an element of $[X;\G/\TOP]$. Recall that $\bar h \co J(\bar \nu) \simeq \varepsilon$ and that we have the canonical orientations $u^{\bMSTOP} (\bar \nu)$ and $ u^{\bMSTOP} (\varepsilon)$ and the homotopy $u^{\bMSG} (\bar h) \co u^{\bMSG} (J(\bar \nu)) \simeq u^{\bMSG} (\varepsilon)$. We obtain
\[
\widetilde \Gamma (\bar \nu,\bar h) = (u^{\bMSTOP} (\bar \nu) - u^{\bMSTOP} (\varepsilon),u^{\bMSG} (\bar h) \co u^{\bMSG} (J(\bar \nu)) - u^{\bMSG} (\varepsilon) \simeq \ast).
\]
Hence the element $\widetilde \Gamma (\bar \nu,\bar h)$ is the unique lift of $u^{\bMSTOP} (\bar \nu) - u^{\bMSTOP} (\varepsilon)$ obtained from the homotopy $\bar h$.
Now consider our $x,x_0 \in \sN (X)$ and denote $t := t(x,x_0) = (\bar \nu,\bar h)$. As a warm-up before proving the equation (\ref{eqn:cup-product-on-wgamma-and-orientation-gives-signature}) we consider its push-forward in the group $H^k (\Th (\nu_X) ; \bMSTOP)$. Denote the composition
\[
\Gamma \co [X;\G/\TOP] \xra{\widetilde{\Gamma}} H^0 (X;\bMSGTOP) \xra{\textup{incl}} H^0 (X;\bMSTOP).
\]
This simply forgets the homotopy $u^{\bMSG} (\bar h)$. So we have
\[
\Gamma (\bar \nu,\bar h) = u^{\bMSTOP} (\bar \nu) - u^{\bMSTOP} (\varepsilon) \co \Th (\bar \nu) \simeq \Sigma^k \Delta^l_+ \simeq \Th (\varepsilon) \ra \bMSTOP.
\]
Define the following two maps
\[
\Phi \co [X;\G/\TOP] \ra H^0 (X ; \bMSTOP) \quad \textup{and} \quad 1 \co [X;\G/\TOP] \ra H^0 (X;\bMSTOP)
\]
by
\[
\Phi (\bar \nu,\bar h) = u^{\bMSTOP} (\bar \nu) \quad \textup{and} \quad 1 (\bar \nu,\bar h) = u^{\bMSTOP} (\varepsilon)
\]
so that we have $\Gamma = \Phi - 1$ and consider $\Gamma (t) = (\Phi - 1) (t)$. The Thom isomorphism
\begin{equation}
- \cup u^{\STOP}(\nu_0) \co H^0 (X ; \bMSTOP) \ra H^{k} (\Th (\nu_X) ; \bMSTOP)
\end{equation}
applied to an element $\Phi (t)\in H^0 (X ; \bMSTOP)$ is given by the composition
\[
\Th (\nu) \xra{\Delta} \Sigma^l X_+ \wedge \Th (\nu_0) \xra{\Phi(t) \wedge u^{\STOP}(\nu_0)} \bMSTOP \wedge \bMSTOP \xra{\oplus} \bMSTOP.
\]
From the relationship between the Whitney sum and the cross product and the diagonal map we obtain that
\[
u^{\STOP} (\nu) = \Phi (t (x,x_0)) \cup u^{\STOP} (\nu_0).
\]
Analogously we obtain
\[
u^{\STOP} (\nu_0) = 1 (t (x,x_0)) \cup u^{\STOP} (\nu_0).
\]
\begin{lem} \label{lem:orientations-vs-cup-product}
Let $X$ be an $n$-dimensional GPC and let $\nu$, $\nu_0 \co X \ra \BSTOP$ be two topological block bundles such that $J (\nu) \simeq \nu_X \simeq J(\nu_0)$, corresponding to normal invariants $x$, $x_0 \in \sN (X)$. Then the canonical $\STOP$-orientations satisfy
\[
u^{\STOP} (\nu) - u^{\STOP} (\nu_0) = \Gamma ( t (x,x_0)) \cup u^{\STOP} (\nu_0).
\]
\end{lem}
\begin{proof}
The desired equation follows from the definition $\Gamma = \Phi -1$.
\end{proof}
The final step is the following lemma which is a refinement of Lemma \ref{lem:orientations-vs-cup-product}.
\begin{lem} \label{lem:refined-orientations-vs-cup-product}
Let $X$ be an $n$-dimensional GPC and let $\nu$, $\nu_0 \co X \ra \BSTOP$ be two topological block bundles such that $J (\nu) \simeq \nu_X \simeq J(\nu_0)$, corresponding to normal invariants $x$, $x_0 \in \sN (X)$. Then the canonical $\STOP$-orientations satisfy
\[
u^{\G/\TOP} (\nu,\nu_0) = \widetilde \Gamma (t (x,x_0)) \cup u^{\STOP} (\nu_0).
\]
\end{lem}
\begin{proof}
The left hand side is obtained from the left hand side of Lemma \ref{lem:orientations-vs-cup-product} using the null-homotopy of $\Gamma (t(x,x_0))$ coming from $h_0 \cup h \co J(\nu) \simeq J (\nu_0)$. The right hand side is obtained from the right hand side of Lemma \ref{lem:orientations-vs-cup-product} using the null-homotopy of $\Gamma (t (x,x_0))$ coming from $\bar h \co J(t(x,x_0)) \simeq J (t (x_0,x_0)) = \nu_X$. Applying the cup product with $u^{\STOP} (\nu_0)$ to this null-homotopy corresponds to taking the Whitney sum with $\nu_0$ and produces the homotopy $\bar h \oplus \id_{\nu_0} \co J(\nu) \simeq J (\nu_0)$. The claim now follows from the property of the SNF that any two fiberwise homotopy equivalences between the stable topological block bundle reductions of the SNF are stably fiberwise homotopic.
\end{proof}
\subsection{Proof of the Main Technical Theorem (II)} \label{subsec:proof-of-main-tech-thm-2}
\begin{proof}[Proof of the Main Technical Theorem (II) assuming Propositions \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} and \ref{prop:identification}] \ \vspace{-0.4cm}
Consider the set
\begin{align*}
Q := \{ \, - &\qsign_{\ZZ[\pi_1(X)]} (f,b) \, \in \, L_n (\ZZ[\pi_1(X)]) \quad | \\
&(f,b) \colon M \rightarrow X \text{ degree one normal map, } M \text{ manifold} \}.
\end{align*}
Fix a degree one normal map~$(f_0,b_0) \colon M_0 \rightarrow X$ from a manifold~$M_0$ to our Poincar\'e complex~$X$. Proposition \ref{prop:identification} tells us that
\begin{align*}
\qsign_{\ZZ [\pi_1(X)]} (f_0,b_0) + Q = & \im \; (A \co H_n
(X;\bL_\bullet \langle 1 \rangle) \ra L_n (\ZZ[\pi_1 (X)])) \\ = & \ker \; (\del \co L_n (\ZZ[\pi_1 (X)]) \ra \SS_n (X))
\end{align*}
and it follows that $Q$ is a coset of $\ker (\del)$. The preimage~$\partial^{-1}s(X) \subseteq L_n(\mathbb{Z}[\pi_1(X)])$ is also a coset of $\ker (\del)$. Moreover, from Proposition \ref{prop:degree-one-normal-map-mfd-to-poincare-over-X} we have~$Q \subseteq \partial^{-1}s(X)$. Hence~$Q$ and~$\partial^{-1}s(X)$ are the same coset of~$\ker (\del)$ and thus~$Q = \partial^{-1}s(X)$.
\end{proof}
\section{Concluding remarks} \label{sec:conclusion}
Interesting generalizations and applications of the theory can be found in Part II of the book \cite{Ranicki(1992)}.
One important such generalization is the theory one obtains when working with the spectrum $\bL_\bullet \langle 0 \rangle$ rather than with $\bL_\bullet \langle 1 \rangle$. This yields an analogous theory for ANR-homology manifolds rather than for topological manifolds. The Quinn resolution obstruction also fits nicely into this theory. For details see \cite[chapters 24,25]{Ranicki(1992)} and \cite{Bryant-et-al(1996)}.
We note that, as already mentioned in the introduction, this generalization is especially interesting in view of the recent progress in studying the assembly maps associated to the spectrum $\bL_\bullet$. For example, thanks to this generalization, the results about the assembly maps in \cite{Bartels-Lueck(2009)} can be used to obtain an application in \cite{Bartels-Lueck-Weinberger(2009)}, which discusses when a torsion-free word-hyperbolic group $G$ has a topological manifold model for its classifying space $BG$.
Another important application is that the total surgery obstruction can be used to identify the geometric structure set of an $n$-dimensional manifold $M$ with $\SS_{n+1} (M)$. This is closely related to subsection \ref{subsec:identification-of-quad-sign-with-assembly} and in fact the geometric surgery exact sequence can be identified with the algebraic surgery exact sequence, with more details to be found in \cite[chapter 18]{Ranicki(1992)}.
Interesting examples of geometric Poincar\'e complexes with non-trivial total surgery obstruction can be found in \cite[chapter 19]{Ranicki(1992)}.
\section{Algebraic complexes} \label{sec:algebraic-cplxs}
In this section we briefly recall the basic concepts of algebraic surgery. The details can be found in \cite{Ranicki-I-(1980),Ranicki-II-(1980)} and \cite[chapter 1]{Ranicki(1992)}.
Throughout the paper $\AA$ denotes an additive category and $\BB(\AA)$ denotes the category of bounded chain complexes in $\AA$. The total complex of a double chain complex can be used to extend a contravariant functor $T \co \AA \ra \BB (\AA)$ to a contravariant functor $T \co \BB(\AA) \ra \BB (\AA)$ as explained in detail in
\cite[page 26]{Ranicki(1992)}.
\begin{defn} \label{defn:chain-duality}
A \emph{chain duality} on an additive category $\AA$ is a pair
$(T,e)$ where
\begin{itemize}
\item $T$ is a contravariant functor $T:\AA\ra\BB(\AA)$
\item $e$ is a natural transformation
$e:T^{2}\ra(\id:\AA\ra\BB(\AA))$ such that
\begin{itemize}
\item $e_{M}:T^{2}(M)\ra M$ is a chain equivalence.
\item $e_{T(M)}\circ T(e_{M}) = \id$.
\end{itemize}
\end{itemize}
\end{defn}
The extension $T \co \BB(\AA) \ra \BB (\AA)$ mentioned before the
definition defines the dual $T(C)$ for a chain complex $C \in \BB
(\AA)$. A chain duality $T \colon \AA \rightarrow \BB(\AA)$ can be
used to define a tensor product of two objects $M$, $N$ in $\AA$
over $\AA$ as
\begin{equation} \label{tensor-product}
M \otimes_\AA N = \Hom_\AA (T(M),N),
\end{equation}
which is a priori just a chain complex of abelian groups. This definition generalizes for chain complexes
$C$ and $D$ in $\BB(\AA)$:
\[
C\otimes_{\AA}D \coloneqq \Hom_{\AA}(T(C),D).
\]
\begin{expl} \label{expl:R-duality}
Let $R$ be a ring with involution $r\mapsto\bar r$; for example for $R = \ZZ[\pi]$, the group ring of a group $\pi$, we have the involution given by $\bar g=g^{-1}$ for $g\in\pi$, extended additively.
The category $\AA(R)$ of finitely generated free left $R$-modules possesses a chain duality given by $T(M) = \Hom_R (M,R)$. The involution can be used to turn the a~priori right
$R$-module $T(M)$ into a left $R$-module. The dual $T(C)$ of a bounded chain complex $C$ over $R$ is $\Hom_R(C,R)$.
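Explicitly, the left $R$-module structure on $T(M) = \Hom_R (M,R)$ obtained from the involution is given by the standard formula
\[
(r \cdot f)(x) = f(x) \cdot \bar r \qquad \textup{for} \; r \in R, \; f \in \Hom_R (M,R), \; x \in M.
\]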
\end{expl}
Chain duality is important because it enables us to define various concepts of Poincar\'e duality as we will see. Although the chain dual $T(M)$ in the above example is concentrated in dimension $0$, this is not necessarily the case in general. In section \ref{sec:cat-over-cplxs} we will see examples where this generality is important.
\begin{notation}\label{defn:W}
Let $W$ and $\widehat{W}$ be the canonical free $\ZZ[\ZZ_2]$-resolution and the free periodic $\ZZ[\ZZ_2]$-resolution of $\ZZ$ respectively:
\[
\xymatrix{
W := &
\ldots \ar[r]^-{1+T} &
\Ztwo \ar[r]^-{1-T} &
\Ztwo \ar[r] &
0 \quad & \quad
}
\]
\[
\xymatrix{
\widehat{W} := &
\ldots \ar[r]^{1+T} &
\Ztwo \ar[r]^{1-T} &
\Ztwo \ar[r]^{1+T} &
\Ztwo \ar[r]^-{1-T} &
\ldots
}
\]
\end{notation}
The chain duality $T$ can be used to define an involution $T_{C,C}$ on $C\otimes_{\AA}C$ which makes it into a $\ZZ[\ZZ_{2}]$-module chain complex, see \cite[page 29]{Ranicki(1992)}.
\begin{defn} We have the following chain complexes of abelian
groups:
\begin{align*}
\Wq C & \coloneqq W\otimes_{\ZZ[\ZZ_{2}]}(C\otimes_{\AA}C) \\
\Ws C & \coloneqq \Hom_{\ZZ[\ZZ_{2}]}(W,C\otimes_{\AA}C) \\
\Wh C & \coloneqq \Hom_{\ZZ[\ZZ_{2}]}(\widehat{W},C\otimes_{\AA}C)
\end{align*}
\end{defn}
\begin{notation}\label{defn:fstar}
Let $f \colon C \rightarrow D$ be a chain map in $\BB(\AA)$. Then
the map of $\ZZ[\ZZ_2]$-chain complexes $f \otimes f \colon C
\otimes_\AA C \rightarrow D \otimes_\AA D$ induces chain maps
\[
f_{\%} \colon \Wq{C} \rightarrow \Wq{D} \quad f^{\%} \colon \Ws{C}
\rightarrow \Ws{D} \quad \widehat{f}^{\%} \colon \Wh{C} \rightarrow
\Wh{D}
\]
\end{notation}
\begin{defn} \label{defn:structures-on-chain-complexes}
Let $C$ be a chain complex in $\BB(\AA)$. An $n$-dimensional
\emph{symmetric structure} on $C$ is an $n$-dimensional cycle
$\varphi \in \Ws C _n$. An $n$-dimensional \emph{quadratic
structure} on $C$ is an $n$-dimensional cycle $\psi \in \Wq C _n$.
An $n$-dimensional \emph{hyperquadratic structure} on $C$ is an
$n$-dimensional cycle $\theta \in \Wh C _n$.
\end{defn}
Note that the dimension $n$ refers only to the degree of the element $\varphi$, $\psi$, or $\theta$ and does not mean that the chain complex $C$ has to be concentrated between degrees $0$ and $n$.
\begin{notation} \label{notn:suspension}
On chain complexes we use the operations of {\it suspension} defined by $(\Sigma C)_n = C_{n-1}$ and {\it desuspension} defined by $(\Sigma^{-1} C)_n = C_{n+1}$. If $X$ is a well-based topological space we can consider the reduced suspension $\Sigma X$. For the singular chain complexes $C(X)$ and $C(\Sigma X)$ there is a natural chain homotopy equivalence which we denote $\Sigma \co C(X) \ra \Sigma^{-1} C (\Sigma X)$, see \cite[section 1]{Ranicki-I-(1980)}. Sometimes we use the same symbol for the associated degree one map of chain complexes $\Sigma \co C(X) \ra C(\Sigma X)$.
\end{notation}
\begin{remark}
The structures on a chain complex $C$ from Definition \ref{defn:structures-on-chain-complexes} can also be described in terms of their components. Abbreviating $C^{m-\ast} = \Sigma^m TC$, an element $\varphi \in \Ws C _n$ is a collection of maps $\{ \varphi_s \colon C^{n+s-\ast} \rightarrow C | s \in \mathbb{N} \}$, an element $\psi \in \Wq C _n$ is a collection of maps $\{ \psi_s \colon C^{n-s-\ast} \rightarrow C | s \in \mathbb{N} \}$, and an element $\theta \in \Wh C _n$ is a collection of maps $\{ \theta_s \colon C^{n+s-\ast} \rightarrow C | s \in \ZZ \}$, all of them satisfying certain identities, see \cite[page 30]{Ranicki(1992)}. In the symmetric case these identities describe each $\varphi_s$ as a chain homotopy between $\varphi_{s-1}$ and $T\varphi_{s-1}$.
\end{remark}
\begin{defn} \label{defn:Q-grps}
For a $C \in \BB(\AA)$ the \emph{$Q$-groups} of $C$ are defined by
\[
\Qq n C = H_n (\Wq C) \qquad \Qs n C = H_n (\Ws C) \qquad \Qh n C =
H_n (\Wh C)
\]
\end{defn}
\begin{prop} \textup{\cite[Proposition 1.2]{Ranicki-I-(1980)}} \label{propn:LESQ}
For a chain complex $C \in \BB(\AA)$ we have a long exact sequence of $Q$-groups
\[
\xymatrix{
\ldots \ar[r] &
\Qq{n}{C} \ar[r]^-{1+T} &
\Qs{n}{C} \ar[r]^-J &
\Qh{n}{C} \ar[r]^-H &
\Qq{n-1}{C} \ar[r] &
\ldots
}
\]
\end{prop}
The sequence is induced from the short exact sequence of chain complexes
\[
\xymatrix{ 0 \ar[r] & \Ws{C} \ar[r] & \Wh{C} \ar[r] & \Sigma \Wq{C}
\ar[r] & 0 }
\]
The connecting map
\begin{equation} \label{eqn:symmetrization-map}
1+T \co \Wq C \ra \Ws C \qquad ((1+T)\psi)_s = \begin{cases} (1+T)\psi_0 & \textup{if} \; s = 0 \\ 0 & \textup{if} \; s \geq 1 \end{cases}
\end{equation}
is called the \emph{symmetrization map}.
\begin{definition}\label{defn:nSAPC-nQAPC}
An $n$-dimensional \emph{symmetric algebraic complex}
(SAC) in $\AA$ is a pair $(C,\varphi)$ where $C \in \BB(\AA)$ and
$\varphi$ is an $n$-dimensional symmetric structure on $C$. It is called \emph{Poincar\'{e}} (SAPC) if
$\varphi_0$ is a chain homotopy equivalence.
An $n$-dimensional \emph{quadratic algebraic complex}
(QAC) in $\AA$ is a pair $(C,\psi)$ where $C \in \BB(\AA)$ and
$\psi$ is an $n$-dimensional quadratic structure on $C$. It is called \emph{Poincar\'{e}} (QAPC) if
$((1+T) \cdot \psi)_0$ is a chain homotopy equivalence.
\end{definition}
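To illustrate these notions in the lowest dimension, consider $\AA = \AA (R)$ and $C$ concentrated in degree $0$. For degree reasons a $0$-dimensional symmetric structure then has only the component $\varphi_0$, and a $0$-dimensional SAPC is essentially the same thing as a nonsingular symmetric form
\[
\varphi_0 \co C^0 = \Hom_R (C_0,R) \ra C_0.
\]
Similarly a $0$-dimensional QAPC corresponds to a nonsingular quadratic form, and the symmetrization map (\ref{eqn:symmetrization-map}) recovers the classical symmetrization $\psi_0 \mapsto \psi_0 + T\psi_0$ of a quadratic form, see \cite{Ranicki-I-(1980)}.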
An analogous notion for hyperquadratic complexes is not defined. The
following construction helps to understand the exact sequence of
Proposition \ref{propn:LESQ}.
\begin{definition}\label{defn:suspension}
Let~$C$ be a chain complex in $\BB(\AA)$. The \emph{suspension} maps
\[
S \colon \Ws{C} \rightarrow \susp^{-1} (\Ws{\susp C}) \qquad S
\colon \Wh{C} \rightarrow \susp^{-1} (\Wh{\susp C})
\]
are defined by
\[
(S(\varphi))_k := \varphi_{k-1} \qquad (S(\theta))_k := \theta_{k-1}
\]
\end{definition}
\begin{prop} \label{prop:sups-Q-groups}
The hyperquadratic $Q$-groups are the stabilization of the symmetric
$Q$-groups:
\[
\Qh{n}{C} = \underset{k \rightarrow \infty}{\textup{colim}} \,
\Qs{n+k}{ \susp^k C }.
\]
Moreover, the suspension induces an isomorphism on hyperquadratic
$Q$-groups:
\[
S \co \Qh{n}{C} \xra{\cong} \Qh{n+1}{\Sigma C}.
\]
\end{prop}
The proposition is proved in \cite[section 1]{Ranicki-I-(1980)}. It
follows that a symmetric structure $\varphi$ has a quadratic refinement if and
only if its $k$-fold suspension $S^k \varphi$ is zero in $\Qs{n+k}{\Sigma^k C}$ for some $k$.
This can be improved in the sense that a preferred quadratic refinement can be chosen if a preferred path from $S^k \varphi$ to $0$ is chosen in $\Sigma^{-k} \Ws{\Sigma^k C}$.
\begin{rem} \label{rem:Qh-is-cohomology}
There is a direct sum operation on the structured chain complexes \cite[section 1]{Ranicki-I-(1980)}. We remark that the quadratic and symmetric $Q$-groups do not respect this operation, but the hyperquadratic $Q$-groups do. In fact the assignments $C \mapsto \Qh{n}{C}$ constitute a generalized cohomology theory on the category of chain complexes in $\AA$, see \cite[Theorem 1.1]{Weiss-I(1985)}.
\end{rem}
Now we proceed to explain how the above structures arise from
geometric examples.
\begin{construction}\label{constrn:symmetric construction}
\cite[Proposition 1.1,1.2]{Ranicki-II-(1980)} Let~$X$ be a topological space with the singular chain complex~$C(X)$. The Alexander-Whitney diagonal approximation gives a chain map
\[
\varphi \colon C(X) \rightarrow \Ws{C(X)},
\]
called the \emph{symmetric construction on $X$}, such that for every $n$-dimensional cycle $[X] \in C(X)$, the component $\varphi([X])_0 \co C^{n-\ast} (X) \ra C(X)$ is the cap product with the cycle $[X]$.
There exists an equivariant version as follows. Let~$\tilde{X}$ be the universal cover of~$X$. The singular chain complex~$C(\tilde{X})$ is a chain complex over~$\Zpi$. The symmetric construction $\varphi_{\tilde{X}}$ on $\tilde{X}$ produces a chain map of $\Zpi$-module chain complexes. Applying $\ZZ \otimes_{\Zpi} -$ we obtain a chain map of chain complexes of abelian groups
\[
\varphi \colon C(X) \rightarrow \Ws{C(\tilde{X})} = \Hom_{\ZZ[\ZZ_2]} (W,C(\tilde{X}) \otimes_{\ZZ[\pi_1 (X)]} C(\tilde{X})),
\]
still called the \emph{symmetric construction on $X$}, and such that for every $n$-dimensional cycle $[X]\in C(X)$, the component $\varphi([X])_0 \co C^{n-\ast} (\tilde{X}) \ra C(\tilde{X})$ is the cap product with the cycle $[X]$, but now we obtain a map of $\Zpi$-module chain complexes. There is also a version for pointed spaces where one works with the reduced chain complexes $\tilde C (\tilde X)$.
If $X$ is an $n$-dimensional geometric Poincar\'{e} complex with the fundamental class~$[X]$, then $\varphi ([X])_0$ is the Poincar\'{e} duality chain equivalence. In this case we obtain an $n$-dimensional SAPC over $\ZZ[\pi_1 (X)]$
\[
(C(\tilde X),\varphi ([X])).
\]
\end{construction}
The symmetric construction is functorial with respect to maps of topological spaces and natural with respect to the suspension of chain complexes, as shown in \cite[Proposition 1.1, 1.2]{Ranicki-II-(1980)}. However, if we have a chain map $C(X) \ra C(Y)$ not necessarily induced by a map of spaces, it might not commute with the symmetric constructions of $X$ and $Y$. This is one motivation for the quadratic construction below.
\begin{construction}\label{constrn:quadconstrn}
\cite[Proposition 1.5]{Ranicki-II-(1980)} Let~$X,Y$ be pointed spaces and let $F \colon \Sigma^k X \rightarrow \Sigma^k Y$ be a map. Denote
\[
f \colon C(X) \overset{\Sigma}{\rightarrow} \susp^{-k} C( \Sigma^k X)
\overset{F}{\rightarrow} \susp^{-k} C( \Sigma^k Y) \overset{\Sigma^{-1}}{\rightarrow} C(Y)
\]
where $\Sigma^{-1}$ is some homotopy inverse of $\Sigma$ from Notation \ref{notn:suspension}. The following diagram does not necessarily commute, since~$f$ does not come from a geometric map:
\[
\xymatrix{
C (X) \ar[r]^-{\varphi} \ar[d]^{f_{\ast}} &
\Ws{C(X)} \ar[d]^{f^{\%}} \\
C (Y) \ar[r]^-{\varphi} &
\Ws{C(Y)}
}
\]
There is a chain map, called the \emph{quadratic construction on
$F$},
\[
\psi \colon C (X) \rightarrow \Wq{C(Y)} \quad \textup{such that}
\quad (1+T) \cdot \psi \equiv f^{\%} \varphi - \varphi f_{\ast}.
\]
To show that such a map exists we look at the difference~$f^{\%}
\varphi - \varphi f_{\ast}$, and use Proposition
\ref{propn:LESQ} to obtain the commutative diagram
\[
\xymatrix@C=1.25cm{
& & H_n(X)
\ar@{-->}[dl]_{\Psi_F}
\ar[d]|{f^{\%} \varphi - \varphi f_{\ast}}
\ar[dr]^{\equiv 0}
\\
\ldots \ar[r] &
\Qq{n}{C(Y)} \ar[r] &
\Qs{n}{C(Y)} \ar[r] &
\Qh{n}{C(Y)} \ar[r] &
\ldots
}
\]
The map~$H_n(X) \rightarrow \Qh{n}{C(Y)}$ is the stabilization of the map~$f^{\%} \varphi - \varphi f_{\ast}$. But when we stabilize~$f$ we recover, up to a preferred chain homotopy, the map induced by~$F \colon \Sigma^k X \rightarrow \Sigma^k Y$. This map comes from a geometric map, and so, by the naturality of the symmetric construction, the map~$H_n(X) \rightarrow \Qh{n}{C(Y)}$ is zero.
Then exactness tells us there is a lift. However, we are allowed to work on the chain level, and we observe that there is a preferred null-homotopy of the difference $S^k (f^{\%} \varphi - \varphi f_{\ast}) \simeq F^{\%} \varphi - \varphi F_{\ast}$ in the chain complex $\Sigma^{-k} \Ws{C(\Sigma^k Y)}$. By the remark following Proposition \ref{prop:sups-Q-groups} we obtain a preferred lift. This describes the map~$\psi$; the full details can be found in \cite{Ranicki-II-(1980)}.
Similarly as in the symmetric construction there is an equivariant version, also called \emph{quadratic construction on $F$},
\[
\psi \colon C (X) \rightarrow \Wq{C(\tilde{Y})} \quad \textup{such that}
\quad (1+T) \cdot \psi \equiv f^{\%} \varphi - \varphi f_{\ast}.
\]
\end{construction}
\begin{construction} \label{con:quad-construction-on-degree-one-normal-map}
Let~$M$, $X$ be geometric Poincar\'{e} complexes with a degree one normal map~$(f,b) \colon M \rightarrow X$. Using $\pi_1(X)$-equivariant $S$-duality
(see section \ref{sec:normal-cplxs}) we obtain a stable equivariant map~$F \colon \Sigma^k \tilde{X}_+ \rightarrow \Sigma^k \tilde{M}_+$ for some~$k \in \mathbb{N}$.
Consider the Umkehr map
\[
f^{!} \colon C(\tilde X) \rightarrow \susp^{-k} C( \Sigma^k \tilde X_+)
\overset{F}{\rightarrow} \susp^{-k} C( \Sigma^k \tilde M_+) \rightarrow C(\tilde M)
\]
and its mapping cone $\sC (f^{!})$ with the inclusion map $e \co C(\tilde M) \ra \sC (f^{!})$. We obtain an $n$-dimensional QAPC over $\Zpi$
\[
(\sC(f^{!}),e_\% \psi ([X])).
\]
\end{construction}
An example of a hyperquadratic structure on a chain complex coming from geometry is deferred to section \ref{sec:normal-cplxs}. Now we present the relative versions of the above concepts.
\begin{definition}\label{defn:sympair}
An $(n+1)$-dimensional \emph{symmetric algebraic pair} over $\AA$ is
a chain map $f \colon C \rightarrow D$ in $\BB(\AA)$ together with
an $(n+1)$-dimensional cycle $(\delta \varphi,\varphi) \in \sC
(f^{\%})$. An $(n+1)$-dimensional \emph{quadratic algebraic pair}
over $\AA$ is a chain map $f \colon C \rightarrow D$ in $\BB(\AA)$
together with an $(n+1)$-dimensional cycle $(\delta \psi,\psi) \in
\sC (f_{\%})$.
\end{definition}
Notice that an $(n+1)$-dimensional symmetric pair contains an
$n$-dimensional symmetric complex $(C,\varphi)$ and similarly an
$(n+1)$-dimensional quadratic pair contains an $n$-dimensional
quadratic complex $(C,\psi)$. The cycle condition translates into the relation between $\delta \varphi$
and $\varphi$ via the equation $d(\delta\varphi) = (-1)^n f^{\%}
(\varphi)$. It is also helpful to define the evaluation map
\[
\textup{ev} \co \sC (f^{\%}) \ra \Hom_\AA (D^{n+1-\ast},\sC(f))
\quad \textup{ev} (\delta \varphi,\varphi) = \pairmap \colon
D^{n+1-\ast} \rightarrow \sC(f)
\]
and likewise in the quadratic case.
\begin{definition}\label{defn:SAPP-and-QAPP}
An $(n+1)$-dimensional \emph{symmetric algebraic \emph{Poincar\'{e}}
pair} (SAPP) in $\AA$ is a symmetric pair~$(f \colon C \rightarrow D
, (\delta\varphi,\varphi))$ such that
\[
\pairmap \colon D^{n+1-\ast} \rightarrow \sC (f)
\]
is a chain equivalence.
An $(n+1)$-dimensional \emph{quadratic algebraic \emph{Poincar\'{e}}
pair} (QAPP) in $\AA$ is a quadratic pair~$(f \colon C \rightarrow D
, (\delta\psi,\psi))$ such that
\[
(1+T) \cdot \pairmapquad \colon D^{n+1-\ast} \rightarrow \sC (f)
\]
is a chain equivalence.
\end{definition}
\begin{construction} \label{con:rel-sym} Let $(X,Y)$ be a pair of
topological spaces, and denote the inclusion $i \co Y \ra X$. By the
naturality of the symmetric construction we obtain a chain map
\[
\varphi \co C(X,Y) \ra \sC (i^{\%})
\]
which is called the \emph{relative symmetric construction}.
If $(X,Y)$ is an $(n+1)$-dimensional Poincar\'e pair with the
fundamental class $[X] \in C_{n+1} (X,Y)$ then the evaluation
\[
\textup{ev} \circ \varphi ([X]) \co C^{n+1-\ast} (X) \ra
C(X,Y)
\]
is a chain homotopy equivalence. There also exists an equivariant version.
\end{construction}
\begin{construction} \label{con:rel-quad-htpy}
Let $(X,A)$ and $(Y,B)$ be pairs of pointed topological spaces and
let
\[
\xymatrix{
\Sigma^k A \ar[r]^{\del F} \ar[d]_{i} & \Sigma^k B \ar[d]^{j} \\
\Sigma^k X \ar[r]_{F} & \Sigma^k Y
}
\]
be a commutative diagram. Let $\del f$ and $f$ be maps defined
analogously to the map $f$ in Construction \ref{constrn:quadconstrn}.
There is a chain map, the \emph{relative quadratic construction},
\[
\psi \co C (X,A) \ra \sC (j_{\%})
\]
such that $(1+T) \cdot \psi = (f,\del f)^{\%} \varphi - \varphi (f,\del f)_\ast$. Again, there is also an equivariant version.
\end{construction}
\begin{construction} \label{con:rel-quad-mfds}
Let $((f,b),\del (f,b)) \co (M,N) \ra (X,Y)$ be a degree one normal
map of manifolds with boundary. Here we do not assume that the
restriction $\del f$ of $f$ to the boundary $N$ is a homotopy
equivalence. In this case the $S$-duality yields the
commutative diagrams
\[
\xymatrix{
T(\nu_M)/T(\nu_N) \ar[r] \ar[d] & T(\nu_X)/T(\nu_Y) \ar[d] &
\leadsto & \Sigma^k Y_+ \ar[r]^{\del F} \ar[d]_{i} & \Sigma^k N_+
\ar[d]^{j} \\
\Sigma T(\nu_N) \ar[r] & \Sigma T(\nu_Y) & \leadsto & \Sigma^k
X_+ \ar[r]_{F} & \Sigma^k M_+
}
\]
We have two Umkehr maps $\del f^{!}$ and $f^{!}$ and a commutative
square
\[
\xymatrix{
C(N) \ar[r]^{\del e} \ar[d]_{j} & \sC(\del f^{!}) \ar[d]^{k} \\
C(M) \ar[r]_{e} & \sC (f^{!})
}
\]
We obtain an $(n+1)$-dimensional QAPP
\[
\big( k \co \sC (\del f^{!}) \ra \sC (f^{!}),(e,\del e)_{\%} \psi ([X]) \big).
\]
\end{construction}
The notion of a pair allows us to define the notion of a cobordism
of structured chain complexes.
\begin{definition}\label{defn:symcobord}
A \emph{cobordism} of $n$-dimensional SAPCs~$(C,\varphi),(C',\varphi')$ in $\AA$ is
an~$(n+1)$-dimensional SAPP in $\AA$
\[
((f \, f') \colon C \oplus C' \rightarrow E ,
(\delta\varphi , \varphi \oplus -\varphi'))
\]
A \emph{cobordism} of $n$-dimensional QAPCs~$(C,\psi),(C',\psi')$ in $\AA$ is
an~$(n+1)$-dimensional QAPP in $\AA$
\[
((f \, f') \colon C \oplus C' \rightarrow E ,
(\delta\psi , \psi \oplus -\psi'))
\]
\end{definition}
A notion of a {\it union} of two adjoining cobordisms in $\AA$ is defined in \cite[section 3]{Ranicki-I-(1980)}. Using it one obtains transitivity for the cobordisms and hence an equivalence relation.
Geometrically, one obtains a symmetric cobordism from a geometric Poincar\'e triad and a quadratic cobordism from a degree one normal map of geometric Poincar\'e triads.
Recall the well-known fact that using Morse theory any geometric
cobordism can be decomposed into elementary cobordisms which are in
turn obtained via surgery. Although it has slightly different properties, there exists an analogous notion of
algebraic surgery, which we now recall. For simplicity we will only discuss it in the symmetric case, although there is an analogous
notion for quadratic complexes.
\begin{construction} \cite[Definition 1.12]{Ranicki(1992)}
Let $(C,\varphi)$ be an $n$-dimensional symmetric complex. The
\emph{data} for an algebraic surgery on $(C,\varphi)$ is an
$(n+1)$-dimensional symmetric pair~$(f \colon C \rightarrow D ,
(\delta\varphi,\varphi))$. The \emph{effect} of the algebraic
surgery on $(C,\varphi)$ using $(f \colon C \rightarrow D ,
(\delta\varphi,\varphi))$ is the $n$-dimensional symmetric complex
$(C',\varphi')$ defined by
\[
C' = \susp ^{-1} \cone{\smallpairmap}, \quad \varphi' =
\Sigma^{-1} (e')^{\%} (\delta \varphi / \varphi)
\]
Here the map $e'$ is defined by the diagram
\[
\xymatrix@R=0,4cm{
C \ar[r]_{f} & D \ar[rd] & & \susp C' \\
& & \sC (f) \ar[ur]_{e'} & \\
C' \ar[r] & D^{n+1-\ast} \ar[ur]_{\smallpairmap}
}
\]
The symmetric structure on the pair $f \co C \ra D$ defines a
symmetric structure $\delta \varphi/\varphi$ on $\sC (f)$ by the
formula as in \cite[Proposition 1.15]{Ranicki(1992)}. It is pushed
forward by $e'$ to an $(n+1)$-cycle of~$\Ws{\susp C'}$ which turns
out to have a preferred desuspension and so we obtain an~$n$-cycle~$\varphi '$.
\end{construction}
A geometric analogue is obtained from a cobordism $W$ between closed
manifolds $M$ and $M'$. Then we have a diagram
\[
\xymatrix@R=0.4cm{
C(M) \ar[r] & C(W,M') \ar[rd]-<35pt,-5pt> & \\
& & C(W,M \cup M') \\
C(M') \ar[r] & C(W,M) \ar[ur]-<35pt,5pt>
}
\]
where the chain complexes~$C(W,M^\prime)$ and~$C(W,M)$ are Poincar\'e dual.
\begin{definition}\label{defn:symbdy}
Let~$(C,\varphi)$ be an $n$-dimensional SAC. The \emph{boundary}
of~$(C,\varphi)$ is the $(n-1)$-dimensional SAC obtained from surgery on the
symmetric pair $( 0 \rightarrow C , ( \varphi , 0 ) )$. The boundary
is denoted $\partial (C,\varphi) = (\partial C, \partial \varphi)$,
with $\del C = \Sigma^{-1} \sC (\varphi_0)$ and $\del \varphi = S^{-1}
e^{\%} (\varphi)$, where $e \co C \ra \sC (\varphi_0)$.
\end{definition}
Here the geometric analogue arises from considering an
$n$-dimensional manifold with boundary, say $(N,\del N)$. Consider
the chain complex $C(N,\del N)$ and its suspended dual $C^{n-\ast}
(N,\del N)$. There is a symmetric structure on $C(N,\del N)$, which
is not Poincar\'e. However, there is the Poincar\'e duality
$C^{n-\ast} (N,\del N) \simeq C(N)$. Thus the mapping cone of the
duality map $C^{n-\ast} (N,\del N) \ra C(N,\del N)$ becomes homotopy
equivalent to the mapping cone of the map $C(N) \ra C(N,\del N)$
which is $\Sigma C(\del N)$.
\begin{remark}
Notice that an $n$-dimensional SAC is Poincar\'{e} if and only if its boundary
is contractible.
\end{remark}
We also have the following proposition which is proven in \cite{Ranicki-I-(1980)} by writing out the formulas.
\begin{proposition}\textup{\cite[Proposition 4.1]{Ranicki-I-(1980)}} \label{prop:homotopy-type-of-boundary}
Algebraic surgery preserves the homotopy type of the boundary
of~$(C,\varphi)$. In particular we have that
\[
(C,\varphi) \text{ is Poincar\'{e}} \Leftrightarrow
(C',\varphi') \text{ is Poincar\'{e}}.
\]
\end{proposition}
An algebraic surgery on $(C,\varphi)$ using $(f \co C \ra D,(\delta \varphi,\varphi))$ gives rise to a symmetric pair $(f \; f'
\co C \oplus C' \ra D',(\delta \varphi',\varphi \oplus -\varphi'))$
with $D' = \sC (\varphi_0 f^\ast)$. If $(C,\varphi)$ is Poincar\'e
then, as noted above, $(C',\varphi')$ is also Poincar\'e, and in
addition the pair is a cobordism. We remark that the data for algebraic surgery need not be a Poincar\'e pair; in fact this is the typical case, since if it is a Poincar\'e pair, then it already defines a null-cobordism of $(C,\varphi)$ and hence $C'$ is contractible.
The relationship between the algebraic cobordism and algebraic
surgery turns out to be as follows:
\begin{prop} \textup{\cite[Proposition 4.1]{Ranicki-I-(1980)}}
The equivalence relation generated by surgery and homotopy equivalence is the same as the equivalence relation given by cobordism.
\end{prop}
\begin{definition}\label{defn:Lgps} \cite[Proposition 3.2]{Ranicki-I-(1980)}\
The \emph{symmetric $L$-groups} of an additive category with chain duality $\AA$ are
\[
L^n (\AA) := \{ \text{cobordism classes of } n \text{-dimensional SAPCs in } \AA \}
\]
The \emph{quadratic $L$-groups} of an additive category with chain duality $\AA$ are
\[
L_n (\AA) := \{ \text{cobordism classes of } n \text{-dimensional QAPCs in } \AA \}
\]
The group operation is the direct sum of the structured chain complexes in both cases. The inverse of a SAPC $(C,\varphi)$ is given by $(C,-\varphi)$, and the inverse of a QAPC $(C,\psi)$ is given by $(C,-\psi)$.
\end{definition}
\begin{rem}
It is proven in \cite[sections 5,6,7]{Ranicki-I-(1980)} for $\AA = \AA (R)$, where $R$ is a ring with involution, that the groups $L_n (\AA(R))$ are isomorphic to the surgery obstruction groups $L_n (R)$ of Wall. Both symmetric and quadratic groups $L^n (\AA)$ and $L_n (\AA)$ are $4$-periodic for any $\AA$ \cite[Proposition 1.10]{Ranicki(1992)}.
\end{rem}
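For orientation let us record, as a standard illustration in the basic case $\AA = \AA (\ZZ)$, the well-known values (with $n$ read modulo $4$)
\[
L_n (\ZZ) \cong \ZZ, 0, \ZZ/2, 0 \quad \text{and} \quad L^n (\ZZ) \cong \ZZ, \ZZ/2, 0, 0 \quad \text{for} \quad n \equiv 0,1,2,3,
\]
where the quadratic groups are detected by the signature divided by $8$ and by the Arf invariant, and the symmetric groups by the signature and by the de Rham invariant.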
\begin{definition} \label{defn:sym-sign}
Let $X$ be an $n$-dimensional Poincar\'e complex. The cobordism class of the $n$-dimensional SAPC obtained from any choice of the fundamental class $[X] \in C_n (X)$ in Construction \ref{constrn:symmetric construction} does not depend on the choice of $[X]$ and hence defines an element
\[
\ssign_{\Zpi} (X) = [(C(\tilde X),\varphi ([X]))] \in L^n (\Zpi)
\]
called the \emph{symmetric signature} of $X$.\footnote{The notation is somewhat premature: the symbol $\bL^\bullet$ denotes the symmetric $L$-spectrum and will be defined later in section \ref{sec:spectra}. Likewise in the quadratic case.} If $X$ is an oriented $n$-dimensional topological manifold, then the symmetric signature only depends on the oriented cobordism class of $X$, and so it provides us with a homomorphism\footnote{If $X$ is not a manifold we can still say that the symmetric signature only depends on the oriented cobordism class of $X$ in the Poincar\'e cobordism group $\Omega^{P}_n$, but we will not need this point of view later.}
\[
\ssign_{\Zpi} \co \Omega^{\STOP}_n (K(\pi_1 (X),1)) \ra L^n (\ZZ[\pi_1 (X)]).
\]
\end{definition}
\begin{definition} \label{defn:quad-sign}
Let $(f,b) \co M \ra X$ be a degree one normal map of Poincar\'e complexes. The cobordism class of the $n$-dimensional QAPC obtained from any choice of the fundamental class $[X] \in C_n (X)$ in Construction \ref{con:quad-construction-on-degree-one-normal-map} does not depend on the choice of $[X]$ and hence defines an element
\[
\qsign_{\Zpi} (f,b) = [(\sC(f^{!}),e_\% \psi ([X]))] \in L_n (\Zpi)
\]
called the \emph{quadratic signature} of the degree one normal map $(f,b)$. If $M$ is an $n$-dimensional oriented manifold then the quadratic signature only depends on the normal cobordism class of $(f,b)$ in the set of normal invariants $\sN (X)$ and provides us with a function\footnote{Recall that for an $n$-dimensional GPC $X$ the set of normal invariants, if non-empty, is a group with respect to the group structure given by the Whitney sum. The quadratic signature is NOT a homomorphism with respect to this group structure.}
\[
\qsign_{\ZZ[\pi_1(X)]} \co \sN(X) \ra L_n (\ZZ[\pi_1 (X)]).
\]
\end{definition}
\begin{rem} \label{rem:symmetrization-of-surgery-obstruction} \cite[Proposition 2.2]{Ranicki-II-(1980)}
The symmetrization map (\ref{eqn:symmetrization-map}) carries over to the $L$-groups as
\[
(1+T) \co L_n (\AA) \ra L^n (\AA)
\]
and for $\pi = \pi_1 (X)$ we have
\[
(1+T) \; \qsign_{\ZZ[\pi]} (f,b) = \ssign_{\ZZ[\pi]} (M) - \ssign_{\ZZ[\pi]} (X).
\]
\end{rem}
\begin{rem} \label{rem:quadratic-signature-is-surgery-obstruction}
If $M$ is a closed $n$-dimensional topological manifold then the quadratic signature from Definition \ref{defn:quad-sign} coincides with the classical surgery obstruction by the result of \cite[Proposition 7.1]{Ranicki-II-(1980)}.
\end{rem}
\begin{rem}
Notice that we did not define a hyperquadratic version of the $L$-groups. In fact, hyperquadratic structures are useful when we
have a fixed chain complex $C$ and we study the relationship between
the symmetric and quadratic structures on $C$ via the sequence in
Proposition \ref{propn:LESQ}. When comparing the symmetric and
quadratic $L$-groups, that is, the cobordism groups of complexes
equipped with a symmetric or quadratic structure, a new concept of an
algebraic normal complex is needed. It is discussed in the next
section.
\end{rem}
\section{Normal complexes} \label{sec:normal-cplxs}
A geometric normal complex is a notion generalizing a geometric
Poincar\'e complex. It is motivated by the observation that although
a Poincar\'e complex is not necessarily locally Poincar\'e, it is
locally normal. On the other hand, a manifold is also locally
Poincar\'e. Hence the question of whether a Poincar\'e complex can be
modified within its homotopy type so that the locally normal
structure becomes Poincar\'e is central to our main problem.
In this section we will recall the definition of an algebraic normal
complex. In addition we recall the cobordism groups of algebraic
normal complexes, the so-called $NL$-groups, which measure the
difference between the symmetric and quadratic $L$-groups. Another
viewpoint on that same fact is that the quadratic $L$-groups measure
the difference between the symmetric $L$-groups and the $NL$-groups.
This will be crucially used in the proof of the Main Technical
Theorem.
The material from this section comes from \cite[section
2]{Ranicki(1992)}, \cite{Weiss-I(1985),Weiss-II(1985)} and
\cite[sections 7.3 and 7.4]{Ranicki(1981)}.
\begin{definition}\label{defn:GNC}
An $n$-dimensional \emph{geometric normal complex} (GNC) is a triple
$(X,\nu,\rho)$ consisting of a space~$X$ with a $k$-dimensional
oriented spherical fibration~$\nu$ and a map $\rho \colon S^{n+k}
\rightarrow \thom{\nu}$ to the Thom space of $\nu$.
The \emph{fundamental class} of~$(X,\nu,\rho)$ is the
$n$-dimensional homology class in $H_n (X)$ represented by the cycle
$[X] \in C_n (X)$ given by the formula $[X] := u(\nu) \cap h(\rho)$
where~$h$ is the Hurewicz homomorphism, and $u (\nu) \in C^k
(\thom{\nu})$ is some choice of the Thom class of $\nu$.
\end{definition}
Note that the dimension of a GNC is the dimension of the source
sphere of the map $\rho$ minus the dimension of the spherical
fibration. It does not necessarily have anything to do with the
geometric dimension of $X$. Also the cap product with the
fundamental class does not necessarily induce an isomorphism between
cohomology and homology of $X$.
\begin{example}\label{expl:nGPCtonGNC}
Let~$X$ be an $n$-dimensional geometric Poincar\'{e} complex (GPC)
with the fundamental class~$[X]$ in the sense of Poincar\'e duality.
Then, for $k$ large enough, the space~$X$ has the Spivak normal
fibration (SNF)~$\nu_X \colon X \rightarrow \BSG(k)$, which has the
property that there is a map~$\rho_X \colon S^{n+k} \rightarrow
\thom{\nu_X}$ such that
\[
[X] = u(\nu_{X}) \cap h( \rho_X ) \in H_n (X).
\]
Thus we get an $n$-dimensional geometric normal complex~$(X,
\nu_X,\rho_X)$ with the fundamental class equal to the fundamental
class in the sense of Poincar\'e duality.
\end{example}
Some properties of normal complexes can be stated in terms of the
$S$-duality from stable homotopy theory. For pointed spaces $X$, $Y$
the symbol $[X,Y]$ denotes the abelian group of stable homotopy
classes of stable pointed maps from $X$ to $Y$. Here, for
simplicity, we confine ourselves to a non-equivariant $S$-duality.
An equivariant version, which is indeed needed for our purposes, is
presented in detail in \cite[section 3]{Ranicki-I-(1980)}.
\begin{definition}\label{defn:Sduality}
Let~$X,Y$ be pointed spaces. A map~$\alpha \colon S^N \rightarrow X
\wedge Y$ is an $N$-dimensional \emph{S-duality map} if the slant
product maps
\[
\alpha_{\ast} ([S^N]) \backslash \underbar{ }
\colon \tilde{C}(X)^{N-\ast} \rightarrow
\tilde{C}(Y) \quad \textup{and} \quad \alpha_{\ast} ([S^N]) \backslash
\underbar{ } \colon \tilde{C}(Y)^{N-\ast}
\rightarrow \tilde{C}(X)
\]
are chain equivalences. We say the spaces~$X,Y$ are \emph{S-dual}.
\end{definition}
\begin{expl} \label{expl:SNF}
Let $X$ be an $n$-dimensional GPC with the $k$-dimensional SNF
$\nu_X \co X \ra \BSG (k)$. Then $\thom{\nu_X}$ is an
$(n+k)$-dimensional $S$-dual to $X_+$.
\end{expl}
\begin{prop} \label{S-duality-property}
The $S$-duality satisfies:
\begin{enumerate}
\item For every finite CW-complex $X$ there exists an $N$-dimensional S-dual, which we denote $X^\ast$, for some large $N \geq 1$.
\item If $X^\ast$ is an $N$-dimensional $S$-dual of $X$ then $\Sigma X^\ast$ is an $(N+1)$-dimensional $S$-dual of $X$.
\item For any space $Z$ we have isomorphisms
\begin{align*}
S \co [X,Z] \cong [S^N,Z \wedge Y] & \quad \gamma \mapsto S(\gamma)
=
(\gamma \wedge \id_Y) \circ \alpha, \\
S \co [Y,Z] \cong [S^N,X \wedge Z] & \quad \gamma \mapsto S(\gamma)
= (\id_X \wedge \gamma) \circ \alpha.
\end{align*}
\item A map $f \co X \ra Y$ induces a map $f^\ast \co Y^\ast \ra X^\ast$ for $N$ large enough via the isomorphism
\[
[X,Y] \cong [S^N,Y \wedge X^\ast] \cong [Y^\ast,X^\ast].
\]
\item If $X \ra Y \ra Z$ is a cofibration sequence then $Z^\ast \ra Y^\ast \ra X^\ast$ is a cofibration sequence for $N$ large enough.
\end{enumerate}
\end{prop}
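A basic example: for $X = S^n$ the canonical homeomorphism $S^N \cong S^n \wedge S^{N-n}$ is an $N$-dimensional $S$-duality map, so that we can take $(S^n)^\ast = S^{N-n}$. Example \ref{expl:SNF} is the analogue for an arbitrary GPC $X$, with the $S$-dual of $X_+$ provided by the Thom space of the SNF.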
The $S$-dual is also unique in a suitable sense. In fact the assignment $X
\mapsto X^\ast$ can be made into a functor on an appropriate stable
homotopy category. As this requires a certain amount of
technicality and we do not really need it, we skip this aspect.
The reader can find the details for example in \cite{Adams(1974)}.
Now we present a generalization of Example \ref{expl:SNF}.
\begin{construction} \label{con:S-duality-and-Thom-is-Poincare}
Let~$(X,\nu,\rho)$ be an $n$-dimensional GNC. Let~$V$ be the mapping
cylinder of the projection map of $\nu$ with~$\del V$ being the
total space of the spherical fibration $\nu$. Then we have the
generalized diagonal map
\[
\tilde{\Delta} \colon
\thom{\nu} \simeq \frac{V}{\partial V}
\overset{\Delta}{\longrightarrow}
\frac{V \times V} {V \times \partial V}
\simeq
\thom{\nu} \wedge X_+
\]
where~$\Delta$ is the actual diagonal map. Consider the composite
\[
S^{n+k}
\overset{\rho}{\longrightarrow}
\thom{\nu}
\overset{\tilde{\Delta}}{\longrightarrow}
\thom{\nu} \wedge X_+.
\]
By Proposition \ref{S-duality-property} part (1) we have an
$S$-duality map $S^N \ra \Th (\nu) \wedge \Th (\nu)^\ast$ for $N$
large enough. Setting $p = N-(n+k)$ we obtain from part (3) the
one-to-one correspondence:
\begin{align*}
S^{-1} \co [S^{n+k},\thom{\nu} \wedge X_+] & \cong [\thom{\nu}^\ast,\Sigma^p X_+] \\
\tilde{\Delta} \circ \rho & \mapsto \Gamma_X := S^{-1}
(\tilde{\Delta} \circ \rho).
\end{align*}
Moreover, we obtain the following homotopy commutative diagram in
which $\gamma_X$ is the chain map induced by $\Gamma_X$:
\begin{equation} \label{dgrm:Thom-S-dual-Poincare}
\begin{split}
\xymatrix@R=10pt@C=1cm{
C(X)^{n-\ast} \ar[d]_{- \cap [X]} \ar[r]^-{-\cup u (\nu)}
&
\tilde{C}(\thom{\nu})^{n+k-\ast} \ar[r]^-{\textup{S-dual}}
&
C(\thom{\nu}^\ast)_{p + \ast} \ar[d]^{\gamma_X}
\\
C(X) = \tilde{C}(X_+) \ar[rr]_{\Sigma^p}
& &
\tilde{C} (\Sigma^p X_+)_{p + \ast} }
\end{split}
\end{equation}
If $X$ is Poincar\'e then $p$ can be chosen to be $0$ and the maps
$\Gamma_X$ and $\gamma_X$ to be the identity. Hence the Poincar\'e duality
is seen as the composition of the Thom isomorphism for the SNF and
the $S$-duality.
\end{construction}
Now we turn to algebraic normal complexes. As a first step we
discuss the following notion which is an algebraic analogue of a
spherical fibration.
\begin{definition}\label{defn:chain bundle}
Let~$C$ be a chain complex over an additive category with chain duality~$\AA$. A \emph{chain bundle}
over~$C$ is a $0$-dimensional cycle~$\gamma$ in~$\Wh{TC}$.
\end{definition}
\begin{construction}\label{constrn:hypquadconstrn}
Let~$X$ be a finite CW-complex and let~$\nu \co X \ra \BSG (k)$ be a
$k$-dimensional spherical fibration over~$X$. The Thom space
$\thom{\nu}$ is also a finite CW-complex and hence has an
$N$-dimensional S-dual $\thom{\nu}^\ast$ for some $N$. The
\emph{hyperquadratic construction} is the chain map given by the
following composition:
\[
\xymatrix@R=0.1cm@!C=1.5cm{
*+[l]{\gamma_\nu \co \tilde{C}^k(\thom{\nu})} \ar[r]^-{\text{S-duality}}
& *+[r]{\tilde{C}_{N-k}(\thom{\nu}^\ast)}
\\
\ar[r]^-{\varphi_{\thom{\nu}^\ast}}
& *+[r]{\Ws{\tilde{C}(\thom{\nu}^\ast)}_{N-k}}
\\
\ar[r]^-{\text{S-duality}}
& *+[r]{\Ws{\tilde{C}(\thom{\nu})^{N-\ast}}_{N-k}}
\\
\ar[r]^-{\text{Thom}}
& *+[r]{\Ws{C(X)^{N-k-\ast}}_{N-k}}
\\
\ar[r]^-{J}
& *+[r]{\Wh{C(X)^{N-k-\ast}}_{N-k}}
\\
\ar[r]^-{S^{-(N-k)}}
& *+[r]{\Wh{C(X)^{-\ast}}_0}
}
\]
Given a choice of the Thom class $u (\nu) \in
\tilde{C}^k(\thom{\nu})$, the cycle $\gamma_\nu (u(\nu))$ becomes a
chain bundle over $C(X)$. An equivariant version produces a chain
bundle over $\Zpi$:
\[
(C(\tilde X),\gamma_\nu (u(\nu)))
\]
\end{construction}
Now we can define an algebraic analogue of a geometric normal
complex.
\begin{definition}\label{defn:nNAC}
An \emph{$n$-dimensional normal algebraic complex} (NAC) in $\AA$ is a pair $(C,\theta)$ where $\theta$ is a triple~$(\varphi,\gamma,\chi)$ such that
\begin{itemize}
\item $(C,\varphi)$ is an $n$-dimensional SAC
\item $\gamma \in (\Wh{TC})_{0}$ is a chain bundle over~$C$
\item $\chi \in (\Wh{C})_{n+1}$ satisfies
$d \chi = J (\varphi) - \varphizp (S^n \gamma)$.
\end{itemize}
\end{definition}
As we indicate below in the geometric example, the third condition is
a consequence of the homotopy commutativity of the diagram
(\ref{dgrm:Thom-S-dual-Poincare}) and as such can be seen as a
generalization of the equation in Example \ref{expl:nGPCtonGNC}.
Notice that there is no requirement that $\varphi_0$ be a chain
equivalence; this means that normal complexes are in no sense Poincar\'e.
Now we indicate the \emph{normal construction}, which to an
$n$-dimensional GNC functorially associates an $n$-dimensional NAC.
The full details are somewhat complicated; the reader can find them
in \cite{Weiss-I(1985),Weiss-II(1985)}.
\begin{construction} \label{con:normal-construction}
Let~$(X,\nu,\rho)$ be an $n$-dimensional GNC with a choice of the
Thom class $u(\nu) \in \tilde C^k (\Th (\nu))$ whose associated
fundamental class is denoted~$[X]$. We would like to associate to it
an $n$-dimensional NAC over $\ZZ[\pi_1 X]$. We start with
\begin{itemize}
\item $C = C(\tilde X)$
\item $\varphi = \varphi ([X])$
\item $\gamma = \gamma_{\nu} (u(\nu))$
\end{itemize}
Now we will only show that an element~$\chi$ with the required
properties exists. In other words we show that $J (\varphi) =
\varphizp (S^n \gamma)$ in $\Qh{n}{C(X)}$. Consider in our case the
symmetric construction, the hyperquadratic construction and the
diagram (\ref{dgrm:Thom-S-dual-Poincare}). We obtain the following
commutative diagram:
{\footnotesize
\begin{center}
\xymatrix@C=0.45cm{
&
H_n (X) \ar[r]^(0.45){\varphi} \ar[d]^{\Sigma^p}
&
Q^n (C(X)) \ar[r]^{J} \ar[d]^{\Sigma^p}
&
\Qh{n}{C(X)}
\ar[d]_{\Sigma^p}^{\cong}
&
\Qh{n}{C^{n-\ast} (X)}
\ar[l]_{\widehat\varphi_0^{\%}}
\ar[d]_{\Sigma^p}^{\cong}
\\
H^k (\Th(\nu))
\ar[ur]^{- \cap h (\rho)} \ar[r] \ar[dr]_{\labelstyle S-\text{dual}}
&
H_{n+p} (\Sigma^p X)
\ar[r]^(0.45){\varphi}
&
Q^{n+p} (\tilde{C}(\Sigma^p_+ X)) \ar[r]^{J}
&
\Qh{n+p}{\tilde{C}(\Sigma^p_+ X)}
&
\Qh{n+p}{C^{n+p-\ast} (X)}
\ar[l]_*!/u2pt/{\labelstyle \widehat{S^p\varphi}_0^{\%}}
\\
&
H_{n+p} (\Th(\nu)^\ast)
\ar[r]^(0.45){\varphi} \ar[u]_{\gamma_X}
&
Q^{n+p} (\tilde{C}(\Th(\nu)^\ast)) \ar[r]^{J} \ar[u]_{\gamma_X^{\%}}
&
\Qh{n+p}{\tilde{C}(\Th(\nu)^\ast)}
\ar[r]^-*!/u3pt/{\labelstyle S-\text{dual}}
\ar[u]^{\widehat\gamma_X^{\%}}
&
\Qh{n+p}{\tilde{C}^{N-\ast} (\Th(\nu))}
\ar[u]_{\text{Thom}} \\
}
\end{center}
}
The commutativity of the upper left part follows from the basic
properties of the symmetric construction. The commutativity of the
lower left part follows from the existence of the map $\Gamma_X$ and
naturality of the symmetric construction. The commutativity of the
right part follows from the commutativity of the diagram
(\ref{dgrm:Thom-S-dual-Poincare}).
As mentioned above, the construction can be made sufficiently
functorial; that is, there is a preferred choice of $\chi$. We
obtain an $n$-dimensional NAC over $\Zpi$
\[
(C(\tilde X),\theta (u(\nu))).
\]
\end{construction}
As in the previous section, we also need to discuss the relative
versions.
\begin{definition}\label{defn:GNP}
An $(n+1)$-dimensional \emph{geometric normal pair} (GNP) is a
triple $((X,Y),\nu,\rho)$ consisting of a pair of spaces~$(X,Y)$
with a $k$-dimensional spherical fibration~$\nu \co X \ra \BSG (k)$
and a map $\rho \colon (D^{n+1+k},S^{n+k}) \rightarrow
(\thom{\nu},\thom{\nu|_Y})$.
The \emph{fundamental class} of the normal pair~$((X,Y),\nu,\rho)$
is the $(n+1)$-dimensional homology class represented by the cycle
$[X,Y] \in C_{n+1} (X,Y)$ given by the formula $[X,Y] := u(\nu) \cap
h(\rho)$ where~$h$ is the Hurewicz homomorphism, and $u(\nu) \in
\tilde{C}^k (\thom{\nu})$ is some choice of the Thom class of $\nu$.
A \emph{geometric normal cobordism} between two $n$-dimensional GNCs
$(X,\nu,\rho)$ and $(X',\nu',\rho')$ is an $(n+1)$-dimensional
normal pair $((Z,X \sqcup X'),\nu'',\rho'')$ which restricts
accordingly over $X$ and $X'$.
The \emph{normal cobordism group} $\Omega^N_n (K)$ is defined as the abelian group of normal cobordism classes of $n$-dimensional GNCs with a reference map $r \co X \ra K$ and with the group operation given by the disjoint union operation.
\end{definition}
Notice that in the above setting, the triple $(Y,\nu|_Y,\rho|_{S^{n+k}})$ is an
$n$-dimensional GNC. The relative algebraic analogues come next.
\begin{defn}
A \emph{map of chain bundles} $(f,b) \co (C,\gamma) \ra
(C',\gamma')$ in $\AA$ is a map $f \co C \ra C'$ of chain complexes
in $\BB(\AA)$ together with a chain $b \in \Wh{TC}_1$ such that
\[
d(b) = \widehat f^{\%} (\gamma') - \gamma \in \Wh{TC}_0
\]
\end{defn}
\begin{defn}
An $(n+1)$-dimensional \emph{normal pair} $(f \co C \ra D,(\delta
\theta,\theta))$ in $\AA$ is an $(n+1)$-dimensional symmetric pair
$(f \co C \ra D,(\delta \varphi,\varphi))$ together with a map of
chain bundles $(f,b) \co (C,\gamma) \ra (D,\delta \gamma)$ and
chains $\chi \in \Wh C_{n+1}$ and $\delta \chi \in \Wh D_{n+2}$ such
that
\begin{align*}
J (\varphi) - \varphizp (S^n \gamma) & = d \chi \in \Wh{C}_n \\
J (\delta \varphi) - \widehat{\delta \varphi_0}^{\%} (S^{n+1} \delta
\gamma) + \widehat{f}^{\%} (\chi - \widehat{\varphi}^{\%}_0 (S^n b))
& = d ( \delta \chi) \in \Wh{D}_{n+1}
\end{align*}
where we abbreviate $(\delta \theta,\theta)$ for $((\delta
\varphi,\delta \gamma,\delta \chi),(\varphi,\gamma,\chi))$.
\end{defn}
Again notice that in the above setting $(C,\theta)$ is an $n$-dimensional NAC.
\begin{defn}
A \emph{normal cobordism} between normal complexes $(C,\theta)$ and
$(C',\theta')$ is a normal pair $((f \; f') \co C \oplus C' \ra
D,(\delta \theta,\theta \oplus -\theta'))$.
\end{defn}
The direct sum operation is defined analogously to the direct sum for the symmetric and quadratic complexes. Also there is a notion of a union of adjoining normal cobordisms and we obtain an equivalence relation. Again notice that a cobordism of normal complexes is in no sense a Poincar\'e pair.
There exists a relative normal construction. It associates to an
$(n+1)$-dimensional geometric normal pair an $(n+1)$-dimensional
algebraic normal pair in a functorial way. An $(n+1)$-dimensional
geometric normal cobordism induces an $(n+1)$-dimensional algebraic
normal cobordism in this way. These constructions are quite
complicated and therefore we again refer at this place to
\cite[section 7]{Weiss-II(1985)}.
Now we are ready to define the $NL$-groups, alias normal $L$-groups.
\begin{definition}\label{defn:NLgps} The \emph{normal $L$-groups} of an additive category with chain duality $\AA$ are
\[
NL^n (\AA) := \{ \text{normal cobordism classes of } n \text{-dimensional NACs in } \AA \}.
\]
\end{definition}
\begin{definition} \label{defn:normal-signature}
Let $(X,\nu,\rho)$ be an $n$-dimensional GNC. The cobordism class of
the $n$-dimensional NAC obtained from any choice of the Thom class
$u(\nu) \in \tilde{C}^k (\Th (\nu))$ in Construction
\ref{con:normal-construction} does not depend on the choice of
$u(\nu)$ and hence defines an element
\[
\nsign_{\Zpi} (X) = [(C(\tilde X),\theta (u(\nu)))] \in NL^n (\ZZ[\pi_1
(X)])
\]
called the \emph{normal signature} of $(X,\nu,\rho)$.
In fact the element $\nsign_{\ZZ[\pi_1 (X)]} (X)$ only depends on the normal cobordism class of $(X,\nu,\rho)$ and hence we obtain a homomorphism
\[
\nsign_{\ZZ[\pi_1 (X)]} \co \Omega^N_n (K(\pi_1 (X),1)) \ra NL^n (\ZZ[\pi_1 (X)]).
\]
\end{definition}
See also Remark \ref{rem:notation-NL-versus-L-hat} for a note on the notation.
Now we discuss the relation between the groups $L_n (\AA)$, $L^n
(\AA)$ and $NL^n (\AA)$. The details can be found in \cite[section
2]{Ranicki(1992)} and \cite{Weiss-I(1985),Weiss-II(1985)}. Here we
confine ourselves to the main ideas. We start with a lemma.
\begin{lemma} \textup{\cite[Proposition 2.6 (i)]{Ranicki(1992)}} \label{lem:normal-gives-quadratic-boundary}
Let~$(C,\varphi)$ be an $n$-dimensional SAC. Then~$(C,\varphi)$ can
be extended to a normal complex~$(C,\varphi,\gamma,\chi)$ if and
only if the boundary~$(\partial C, \partial \varphi)$ has a
quadratic refinement.
\end{lemma}
\begin{proof}
Consider the following long exact sequences
\[
\xymatrix@R=0.5cm{
\ldots \ar[r] &
\Qq{n-1}{\partial C} \ar[r]^-{1+T} &
\Qs{n-1}{\partial C} \ar[r]^-{J} &
\Qh{n-1}{\partial C} \ar[r] \ar[d]_{S}^{\cong} &
\ldots
\\
\ldots \ar[r] &
\Qh{n}{C^{n-\ast}} \ar[r]^-{\widehat{\varphi_0}^{\%}} &
\Qh{n}{C} \ar[r]^-{\widehat{e}^{\%}} &
\Qh{n}{\cone{(\varphi_0)}} \ar[r] &
\ldots
}
\]
We have~$\partial \varphi = S^{-1} (e^{\%} (\varphi)) \in
\Qs{n-1}{\partial C}$. A diagram chase (using a slightly larger
diagram than the one above) gives the equation
\[
\widehat{e}^{\%} (J(\varphi)) = S ( J (\del \varphi)) \in \Qh{n}{\cone{(\varphi_0)}}.
\]
It follows that $\del \varphi$ has a preimage in $\Qq{n-1}{\partial
C}$, that is, a quadratic refinement, if and only if $J (\varphi)$
has a preimage in $\Qh{n}{C^{n-\ast}} \cong \Qh{0}{C^{-\ast}}$, that
is, a chain bundle whose suspension maps to $J(\varphi)$ via
$(\widehat{\varphi_0})^{\%}$; in other words, if and only if there is a
normal structure refining $\varphi$.
\end{proof}
The lemma can be improved so that one obtains a one-to-one
correspondence between the normal structures extending $(C,\varphi)$
and quadratic refinements of $(\del C,\del \varphi)$, the details
are to be found in \cite[sections 4,5]{Weiss-II(1985)}.
\begin{construction} \cite[Definition 2.9]{Ranicki(1992)}
The map
\[
\del \co NL^n (\AA) \ra L_{n-1} (\AA) \quad \del
(C,\varphi,\gamma,\chi) = (\del C,\del \psi)
\]
is defined so that $\del \psi$ is the quadratic refinement of $\del
\varphi$ described in Lemma
\ref{lem:normal-gives-quadratic-boundary}.
\end{construction}
\begin{lemma} \textup{\cite[Proposition 2.6 (ii)]{Ranicki(1992)}} \label{lem:symmetric-poincare-means-unique-normal}
There is a one-to-one correspondence between the homotopy classes of
$n$-dimensional SAPCs and the homotopy classes of $n$-dimensional NACs
such that $\varphi_0$ is a chain homotopy equivalence.
\end{lemma}
\begin{proof}
Let $(C,\varphi)$ be an $n$-dimensional SAPC in $\AA$ so that $\varphi_0 \co \Sigma^n TC \ra C$ is a chain homotopy equivalence. One can associate a normal structure to $(C,\varphi)$ as follows. The chain bundle $\gamma \in \Wh{TC}_0$ is the image of $\varphi \in \Ws{C}_n$ under
\[
\Ws{C}_n \xra{J} \Wh{C}_n \xra{(\widehat{\varphi_0}^{\%})^{-1}} \Wh{\Sigma^n TC}_n \xra{\;S^{-n}\;} \Wh{TC}_0.
\]
The chain $\chi \in \Wh{C}_{n+1}$ comes from the chain homotopy $(\widehat{\varphi_0}^{\%}) \circ (\widehat{\varphi_0}^{\%})^{-1} \simeq 1$.
\end{proof}
\begin{construction} \cite[Proposition 2.6 (ii)]{Ranicki(1992)}
The map
\[
J \co L^n (\AA) \ra NL^n (\AA) \quad J(C,\varphi) = (C,\varphi,\gamma,\chi)
\]
is constructed using the above Lemma
\ref{lem:symmetric-poincare-means-unique-normal}.
\end{construction}
The maps we just described in fact fit into a long exact sequence.
\begin{proposition} \textup{\cite[Definition 2.10, Proposition
2.8]{Ranicki(1992)}} \textup{\cite[Example 6.7]{Weiss-II(1985)}}
\label{propn:LESL} Let~$\AA$ be an additive category with chain
duality. Then there is a long exact sequence
\[
\xymatrix{
\ldots \ar[r] &
L_n (\AA) \ar[r]^{1+T} &
L^n (\AA) \ar[r]^J &
NL^n (\AA) \ar[r]^{\partial} &
L_{n-1} (\AA) \ar[r] &
\ldots
}
\]
\end{proposition}
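As an illustration we record how the sequence looks for $\AA = \AA (\ZZ)$ in degrees divisible by $4$, using the standard values of the quadratic and symmetric $L$-groups of $\ZZ$:
\[
\ldots \ra L_0 (\ZZ) \cong \ZZ \xra{\;8\;} L^0 (\ZZ) \cong \ZZ \xra{\;J\;} NL^0 (\ZZ) \ra L_3 (\ZZ) = 0 \ra \ldots
\]
Here the symmetrization map is multiplication by $8$, since it takes the quadratic signature, which is the signature divided by $8$, to the signature; hence $NL^0 (\ZZ) \cong \ZZ/8$. Analogous computations give $NL^n (\ZZ) \cong \ZZ/8, \ZZ/2, 0, \ZZ/2$ for $n \equiv 0,1,2,3$ modulo $4$.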
\begin{proof}[Sketch of proof]
In \cite[chapter 2]{Ranicki(1981)} Ranicki defines the concept of a
triad of structured chain complexes and shows that one can define a
cobordism group of pairs of structured chain complexes, where the
structure on the boundary is some refinement of the structure
inherited from the pair. Such cobordism groups then fit into a
corresponding long exact sequence. The whole setup is analogous to
the definition of relative cobordism groups for a pair of spaces and
the associated long exact sequence.
In our special case we consider the map $J \co L^n (\AA) \ra NL^n
(\AA)$. So the $n$-th relative group is the cobordism group of
$n$-dimensional (normal, symmetric Poincar\'e) pairs, that is, we
have a normal pair $f \co C \ra D$ such that the symmetric structure
on $C$ is Poincar\'e. This together with the following lemma
establishes the proposition.
\end{proof}
\begin{lemma} \label{lem:normal-sym-pair-gives-quad-poincare-cplx}
\textup{\cite[Proposition 2.8 (ii)]{Ranicki(1992)}} Let~$\AA$ be an
additive category with chain duality. There is a one-to-one
correspondence between the cobordism classes of $n$-dimensional (normal, symmetric
Poincar\'e) pairs in $\AA$ and the cobordism classes of $(n-1)$-dimensional QAPCs in $\AA$.
\end{lemma}
\begin{proof}[Sketch of proof]
Let $(f \co C \ra D,(\delta \theta,\theta))$ be an $n$-dimensional
(normal, symmetric Poincar\'e) pair in $\AA$. In particular we have
an $n$-dimensional symmetric pair $(f \co C \ra D,(\delta
\varphi,\varphi))$, which we can use as data for an algebraic
surgery on the $(n-1)$-dimensional SAPC $(C,\varphi)$. The effect
$(C',\varphi')$ is again an $(n-1)$-dimensional SAPC. It turns out
to have a quadratic refinement, by a generalization of the proof of
Lemma \ref{lem:normal-gives-quadratic-boundary} (the lemma is a
special case when $f \co 0 \ra C$). The assignment $(f \co C \ra
D,(\delta \theta,\theta)) \mapsto (C',\psi')$ turns out to
induce a one-to-one correspondence on cobordism classes.
\end{proof}
\begin{rem} \label{rem:notation-NL-versus-L-hat}
Proposition \ref{propn:LESL} provides us with an isomorphism between the groups $NL^n (\AA)$ and the groups $\widehat{L}^n (\AA)$ defined in \cite{Ranicki(1981)} and used in \cite{Ranicki(1979)}.
\end{rem}
\subsection{The quadratic boundary of a GNC}
\label{subsec:quadratic-boundary-of-gnc}
In this subsection we study in more detail the passage from a GNC to
the boundary of its associated NAC. This means that from an
$n$-dimensional GNC we pass to an $(n-1)$-dimensional QAPC. The
construction was described in \cite[section 7.4]{Ranicki(1981)} even
before the invention of NACs in \cite{Weiss-I(1985),Weiss-II(1985)}.
It will be useful for geometric applications in later sections.
Before we start we need more basic technology. First we describe the
spectral quadratic construction:
\begin{construction}\label{con:SpectralQuadraticConstruction}
Let~$F \co X \lra \Sigma^{p} Y$ be a map between pointed spaces (a
map of this shape is called a \emph{semi-stable map}) inducing the
chain map
\[
f \co \tilde{C}(X)_{p+\ast} \lra \tilde{C}(\Sigma^p Y)_{p+\ast} \simeq \tilde{C}(Y)
\]
The \emph{spectral quadratic construction} on~$F$ is a chain map
\[
\Psi \co \tilde{C}(X)_{p+\ast} \lra \Wq{\sC (f)}
\]
such that
\[
(1+T) \circ \Psi \equiv e^\% \circ \varphi \circ f
\]
where $\varphi \co C(Y) \ra \Ws{C(Y)}$ is the symmetric construction
on $Y$ and $e \co C(Y) \ra \sC (f)$ is the inclusion map. The
existence of $\Psi$ can be read off the following commutative
diagram in which the lower horizontal sequence is exact by Remark \ref{rem:Qh-is-cohomology} and the right vertical sequence is exact by Proposition \ref{propn:LESQ}
\[
\xymatrix{
& \tilde{H}_{n+p} (X) \ar[dl]_{\cong} \ar[dr]_-{f} \ar@{-->}[drr]^{\Psi}
&
&
\\
\tilde{H}_{n+p} (X) \ar[d]_{\varphi_{X}} \ar[r]^-{F}
& \tilde{H}_{n+p} (\Sigma^{p} Y) \ar[d]_{\varphi_{\Sigma^{p} Y}}
& \tilde{H}_n (Y) \ar[l]_-{\cong} \ar[d]^{\varphi_Y}
& \Qq{n}{\cone(f)} \ar[d]^{1+T}
\\
\Qs{n+p}{\tilde{C}(X)} \ar[dr]^J \ar[r]^-{F^\%}
& \Qs{n+p}{\tilde{C}(\Sigma^{p} Y)} \ar[dr]^J
& \Qs{n}{\tilde{C}(Y)} \ar[l]_-{S^{p}} \ar[d]^J \ar[r]^-{e^\%}
& \Qs{n}{\cone(f)} \ar[d]^J
\\
& \Qh{n}{\Sigma^{-p} \tilde{C}(X)} \ar[r]^{\widehat{f}^\%}
& \Qh{n}{\tilde{C}(Y)} \ar[r]^-{\widehat{e}^\%}
& \Qh{n}{\cone(f)}
}
\]
The spectral quadratic construction $\Psi$ on $F$ has the property
that if $X = \Sigma^p X_0$ for some $X_0$ then it coincides with the
quadratic construction on $F$ as presented in Construction
\ref{constrn:quadconstrn} composed with $e_{\%}$.
\end{construction}
Recall that we have already encountered the semi-stable map
$\Gamma_Y$ coming from an $n$-dimensional GNC $(Y,\nu_Y,\rho_Y)$ in Construction
\ref{con:S-duality-and-Thom-is-Poincare}. The spectral quadratic
construction on $\Gamma_Y$ is identified below.
\begin{construction}
\label{con:normal-con-via-spectral-quadratic-con} See
\cite[Proposition 7.4.1]{Ranicki(1981)} and \cite[Theorem
7.1]{Weiss-II(1985)}. Let~$\Gamma_Y \co \Th(\nu_Y)^\ast \ra \Sigma^p
Y_+$ be the semi-stable map obtained in Construction
\ref{con:S-duality-and-Thom-is-Poincare} and let $\gamma_Y \co
C(\Th(\nu_Y)^\ast)_{\ast+p} \ra C(Y)$ denote the induced map. Recall
diagram (\ref{dgrm:Thom-S-dual-Poincare}) in Construction
\ref{con:S-duality-and-Thom-is-Poincare} which identifies
\[
\sC (\varphi_0) \simeq \sC (\gamma_Y)
\]
via the Thom isomorphism and $S$-duality. The spectral quadratic
construction on the map $\Gamma_Y$ produces a quadratic structure
\[
\Psi (u(\nu_Y)^\ast) \in W_{\%} \sC (\gamma_Y)_n
\]
where $u(\nu_Y)^\ast$ denotes the $S$-dual of the Thom class of
$\nu_Y$. We also have
\[
(1+T) \circ \Psi ( u (\nu_{Y})^\ast) \equiv e^{\%} (\varphi ([Y])) \stackrel{\text{def}}{=} S(\del\varphi([Y]))
\]
From the cofibration sequence of chain complexes (with $C' = \del
C(Y)$):
\[
\Sigma \Wq{C'} \xra{\left(\begin{smallmatrix} 1+T \\ S
\end{smallmatrix}\right)} \Sigma \Ws{C'} \oplus \Wq{\Sigma C'} \xra{S
- (1+T)} \Ws{\Sigma C'}
\]
we see that there exists a~$\psi (Y) \in (\Wq{\del C(Y)})_{n-1}$,
unique up to equivalence, such that~$(1+T)\psi (Y)\simeq
\del\varphi([Y])$. Hence we obtain an $(n-1)$-dimensional QAPC over
$\ZZ$ giving an element
\begin{equation*}
[(\del C(Y),\psi(Y))] \in L_{n-1} (\ZZ).
\end{equation*}
Recall from Construction \ref{con:normal-construction} that for any
geometric normal complex~$(Y,\nu_Y,\rho_Y)$ there is defined an
$n$-dimensional NAC $\nsign (Y)$ over $\ZZ$, which, as such, has a
quadratic boundary
\[
\del \nsign (Y) = [(C',\psi')] \in L_{n-1} (\ZZ)
\]
defined via Lemma \ref{lem:normal-gives-quadratic-boundary}.
Inspecting the definitions we see that $C' \simeq \del C(Y)$ and
further inspection of commutative diagrams defining the respective
quadratic structures shows that $\psi (Y)$ and $\psi'$ are
equivalent.
\end{construction}
\begin{example}
\label{expl:normal-symm-poincare-pair-gives-quadratic} See
\cite[Remark 2.16]{Ranicki(1992)}, \cite[Proposition
7.4.1]{Ranicki(1981)} and \cite[Theorem 7.1]{Weiss-II(1985)}. Recall
from the sketch proof of Lemma
\ref{lem:normal-sym-pair-gives-quad-poincare-cplx} that there is an
equivalence between cobordism classes of $n$-dimensional algebraic
(normal,~symmetric Poincar\'e) pairs and cobordism classes of
$(n-1)$-dimensional QAPCs, and that, in the special case that the
boundary in the pair we start with is $0$, the construction giving
the equivalence specializes to the construction of the quadratic
boundary of a normal complex.
In Construction \ref{con:normal-con-via-spectral-quadratic-con} it
is shown how the spectral quadratic construction can be used to
construct the quadratic boundary when we have a geometric normal
complex as input. In this example it is shown how the equivalence of
Lemma \ref{lem:normal-sym-pair-gives-quad-poincare-cplx} can be
realized using the spectral quadratic construction when we have a
degree one normal map of Poincar\'e complexes as input. In that case
the mapping cylinder of the map gives a normal pair, with Poincar\'e
boundary. Furthermore, it is shown that the quadratic complex
obtained in this way coincides with the surgery obstruction
associated to the degree one normal map. This is crucially used in
the proof of part (i) of the Main technical theorem (see proof of
Theorem \ref{thm:lifts-vs-orientations}).
Let $(f,b) \co M \ra X$ be a degree one normal map of $n$-dimensional GPCs.
Denote by $\nu_M$, $\nu_X$ the respective SNFs. We form the
$(n+1)$-dimensional geometric (normal,~Poincar\'e) pair
\begin{equation*}
\big( (W,M \sqcup X), (\nu_W,\nu_{M \sqcup X}), (\rho_W,\rho_{M
\sqcup X}) \big)
\end{equation*}
with $W = \cyl (f)$. The symbol $\nu_W$ denotes the $k$-spherical
fibration over $W$ induced by $b$ and
\[
(\rho_W,\rho_{M \sqcup X}) \co (D^{n+1+k},S^{n+k}) \ra (\Th (\nu_W),
\Th (\nu_M \sqcup \nu_X))
\]
is the map induced by $\rho_M$ and $\rho_X$. Denote by $j \co M \sqcup
X \hookrightarrow W$, $j_M \co M \hookrightarrow W$, and $j_X \co X
\hookrightarrow W$ the inclusions, and by $\pr_X \co W \ra X$ the
projection, which is also a homotopy inverse to $j_X$, and observe
that $f = \pr_X \circ j_M$.
Now we describe the passage
\begin{equation}
\label{eqn:norm-poincare-sign-of-deg-one-map-is-surgery-obstruction}
\textup{Lemma } \ref{lem:normal-sym-pair-gives-quad-poincare-cplx}
\co (\nsign (W),\ssign (M) - \ssign (X)) \mapsto [(C',\psi')].
\end{equation}
According to the proof of Lemma
\ref{lem:normal-sym-pair-gives-quad-poincare-cplx} the underlying
chain complex $C'$ is obtained by algebraic surgery on the
$(n+1)$-dimensional symmetric pair
\[
(j_\ast \co C(M) \oplus C(X) \ra C(W),(\delta \varphi,\varphi)).
\]
This is just the desuspension of the mapping cone of the
``want-to-be'' Poincar\'e duality map
\begin{equation*}
C' = S^{-1} \sC \big( C^{n+1-\ast} (W) \xra{\smallpairmapforw}
C(W,M\sqcup X) \big).
\end{equation*}
If we want to use the spectral quadratic construction we need a
semi-stable map inducing the map in the above display. Consider the
map
\[
S^N \xra{\rho_W/\rho_{M\sqcup X}} \Th (\nu_W) / \Th (\nu_{M\sqcup
X}) \xra{\Delta} \Sigma^p (W/(M \sqcup X)) \wedge \Th (\nu_W)
\]
which has an $S$-dual
\[
\Gamma_W \co \Th (\nu_W)^\ast \ra \Sigma^p (W/(M \sqcup X))
\]
which in turn induces a map of chain complexes
\[
\gamma_W \co C_{\ast+p}(\Th (\nu_W)^\ast) \ra C_\ast (W/(M \sqcup
X)).
\]
The map $\gamma_W$ coincides with the map $\smallpairmapforw$ under
the Thom isomorphism and $S$-duality (by a relative version of Diagram
\ref{dgrm:Thom-S-dual-Poincare}, see also section
\ref{sec:proof-part-2}).
The spectral quadratic construction on $\Gamma_W$
\[
\Psi \co C_{n+1+p} (\Th (\nu_W)^\ast) \ra \Wq{\sC (\gamma_W)}_{n+1}
\]
produces from the dual of the Thom class $u(\nu_W)^\ast \in
C_{n+1+p} (\Th (\nu_W)^\ast)$ an $(n+1)$-dimensional quadratic
structure on $\sC (\gamma_W)$ which has a desuspension unique up to
equivalence and that is our desired $\psi'$ such that
\[
\Psi (u(\nu_W)^\ast) = S (\psi').
\]
The construction just described comes from \cite[Proposition
7.4.1]{Ranicki(1981)}. By \cite[Proof of Theorem
7.1]{Weiss-II(1985)} we obtain that
(\ref{eqn:norm-poincare-sign-of-deg-one-map-is-surgery-obstruction})
holds.
Now recall from Definition \ref{defn:quad-sign} that we have
another way of assigning an $n$-dimensional quadratic Poincar\'e
complex to $(f,b)$, namely the surgery obstruction $\qsign (f,b) \in
L_n (\ZZ)$.
We claim that
\begin{equation*}
[(C',\psi')] = \qsign (f,b) \in L_n (\ZZ).
\end{equation*}
The following commutative diagram identifies $C' \simeq \sC (f^!)$:
\begin{equation} \label{dgrm:want-to-be-duality-in-pair-vs-umkehr}
\begin{split}
\xymatrix{
C^{n+1-\ast} (W) \ar[r]^{\smallpairmapforw} \ar[d]^{j_M^\ast} &
C(W,M\sqcup X) \ar[d]_{\simeq} \\
C^{n+1-\ast} (M) \ar[r]^{\varphi_0|_M}_{\simeq} & C(\Sigma M) \\
C^{n+1-\ast} (X) \ar[u]_{S(f^\ast)}
\ar@/^5pc/[uu]^{\pr_X^\ast}_{\simeq}
\ar[r]^{\varphi_0|_X}_{\simeq} & C (\Sigma X) \ar[u]_{S(f^!)}
}
\end{split}
\end{equation}
To identify the quadratic structures recall first that the spectral
quadratic construction $\Psi$ on a semi-stable map $F$ is the same
as the quadratic construction $\psi$ composed with $e_{\%}$ if the
semi-stable map $F \co X \ra \Sigma^p Y$ is in fact a stable map,
that is, if $X = \Sigma^p X_0$ and $F \co \Sigma^p X_0 \ra \Sigma^p
Y$. Furthermore the homotopy
equivalence $j_X$ and the $S$-duality are used to show that Diagram
\ref{dgrm:want-to-be-duality-in-pair-vs-umkehr} is induced by the
commutative diagram of maps of spaces as follows:
\begin{equation}
\begin{split}
\xymatrix{ \Th(\nu_W)^\ast \ar[r]^(0.4){\Gamma_W}
\ar[d]^{T(j_M)^\ast} &
\Sigma^p (W/(M \sqcup X)) \ar[d]^{\simeq} \\
\Th(\nu_M)^\ast \ar[r]^{\gamma_M}_{\simeq} & \Sigma^{p+1} M_+ \\
\Th(\nu_X)^\ast \ar[u]_{T(b)^\ast}
\ar@/^5pc/[uu]^{T(\pr_X)^\ast}_{\simeq}
\ar[r]^{\gamma_X}_{\simeq} & \Sigma^{p+1} X_+ \ar[u]_{F}
}
\end{split}
\end{equation}
which identifies $F$ and $\Gamma_W$.
The Thom class $u(\nu_W)$ restricts to $u(\nu_X)$ and hence the
duals $u(\nu_W)^\ast$ and $u(\nu_X)^\ast = \Sigma [X]$ are also
identified. The uniqueness of desuspensions gives the identification
of the equivalence classes of the quadratic structures
\[
e_{\%} \psi ([X]) \sim \psi'.
\]
\end{example}
\section{Algebraic bordism categories and exact sequences}
\label{sec:alg-bord-cat}
In previous sections we recalled the notions of certain structured
chain complexes over an additive category with chain duality $\AA$
and corresponding $L$-groups. In this section we review a
generalization where the category we work with is an algebraic
bordism category. This will eventually allow us to vary $\AA$, and we
also obtain certain localization sequences.
\begin{defn}
An algebraic bordism category $\Lambda =(\AA,\BB,\CC,(T,e))$
consists of an additive category with chain duality $(\AA,(T,e))$, a
full subcategory $\BB \subseteq \BB(\AA)$ and another full
subcategory $\CC \subseteq \BB$ closed under taking cones.
\end{defn}
\begin{defn}
Let $\Lambda=(\AA,\BB,\CC,(T,e))$ be an algebraic bordism category.
An \emph{$n$-dimensional symmetric algebraic complex in $\Lambda$}
is a pair $(C,\varphi)$ where $C\in\BB$ and $\varphi\in
(W^{\%}C)_{n}$ is an $n$-cycle such that $\partial C =
\dsc(\varphi_{0}:\susp^{n}TC\ra C) \in\CC$.
An \emph{$(n+1)$-dimensional symmetric algebraic pair in $\Lambda$}
is a pair\\ $(f:C\ra D, (\delta\varphi,\varphi))$ in $\Lambda$ where
$f:C\ra D$ is a chain map with $C,D\in \BB$, the pair
$(\delta\varphi,\varphi)\in \sC(f^{\%})$ is an $(n+1)$-cycle and
$\sC(\delta\varphi_{0},\varphi_{0}f^{*})\in\CC$.
A \emph{cobordism} between two $n$-dimensional symmetric algebraic
complexes $(C,\varphi)$ and $(C',\varphi')$ in $\Lambda$ is an
$(n+1)$-dimensional symmetric pair $(C\oplus C' \ra
D,(\delta\varphi,\varphi\oplus-\varphi'))$ in $\Lambda$.
\end{defn}
So informally these are complexes, pairs and cobordisms of chain
complexes which are in $\BB$ and which are Poincar\'e modulo $\CC$.
There are analogous definitions in the quadratic and normal cases. The $L$-groups are generalized to this setting as follows.
\begin{defn} \label{defn:L-groups-over-Lambda}
The symmetric, quadratic, and normal \emph{L-groups}
\[
L^{n}(\Lambda), \quad L_{n}(\Lambda) \quad \textup{and} \quad NL^{n}(\Lambda)
\]
are defined as the cobordism groups of $n$-dimensional symmetric,
quadratic, and normal algebraic complexes in $\Lambda$ respectively.
\end{defn}
\begin{expl} \label{expl:alg-bord-cat-ring}
Let $R$ be a ring with involution. By
\[
\Lambda(R)=(\AA(R),\BB(R),\CC(R),(T,e))
\]
is denoted the algebraic bordism category with
\begin{itemize}
\item[$\AA(R)$] the category of $R$-modules from Example \ref{expl:R-duality},
\item[$\BB(R)$] the bounded chain complexes in $\AA(R)$,
\item[$\CC(R)$] the contractible chain complexes in $\BB(R)$.
\end{itemize}
We also consider the algebraic bordism category
\[
\widehat \Lambda(R)=(\AA(R),\BB(R),\BB(R),(T,e)).
\]
The $L$-groups of section \ref{sec:algebraic-cplxs} and the $L$-groups of Definition \ref{defn:L-groups-over-Lambda} are related by
\[
L^n (R) \cong L^n (\Lambda (R)) \quad \textup{and} \quad L_n (R) \cong L_n (\Lambda (R)).
\]
For the $NL$-groups of section \ref{sec:normal-cplxs} we have:
\[
NL^n (R) \cong NL^n (\widehat \Lambda (R)) \quad \textup{and} \quad L^n (R) \cong NL^n (\Lambda (R)).
\]
The second isomorphism is due to Lemma \ref{lem:symmetric-poincare-means-unique-normal}.
\end{expl}
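To make the first two isomorphisms plausible, one can unwind the boundary condition in the definition of a symmetric complex in $\Lambda (R)$. Since $\CC (R)$ consists of the contractible complexes, we have
\[
\del C = \dsc (\varphi_{0} \co \susp^{n}TC \ra C) \in \CC (R)
\iff \sC (\varphi_{0}) \simeq 0
\iff \varphi_{0} \ \textup{is a chain homotopy equivalence},
\]
which is exactly the Poincar\'e duality condition of section \ref{sec:algebraic-cplxs}. In $\widehat \Lambda (R)$, where the third entry is all of $\BB (R)$, the condition is vacuous and no Poincar\'e duality is imposed, which explains the appearance of the $NL$-groups.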
The notion of a functor of algebraic bordism categories
\[
F:\Lambda=(\AA,\BB,\CC)\ra \Lambda'=(\AA',\BB',\CC')
\]
is defined in \cite[Definition 3.7]{Ranicki(1992)}. Any such functor
induces a map of $L$-groups.
\begin{prop} \textup{\cite[Prop. 3.8]{Ranicki(1992)}} \label{prop:relativ-L-groups}
For a functor $F:\Lambda\ra \Lambda'$ of algebraic bordism
categories there are relative L-groups $L_{n}(F)$, $L^{n}(F)$ and
$NL^{n}(F)$ which fit into the long exact sequences
\[\les[4]{L_{\n}(\Lambda)}{L_{\n}(\Lambda')}{L_{\n}(F)},\]
\[\les[4]{L^{\n}(\Lambda)}{L^{\n}(\Lambda')}{L^{\n}(F)},\]
\[\les[4]{NL^{\n}(\Lambda)}{NL^{\n}(\Lambda')}{NL^{\n}(F)}.\]
\end{prop}
These exact sequences are produced by the technology of \cite[chapter 2]{Ranicki(1981)} already mentioned in the previous section. An element in $L_{n}(F)$ is an $(n-1)$-dimensional quadratic complex $(C,\psi)$ in $\Lambda$ together with an $n$-dimensional quadratic
pair $(F(C)\ra D, (\delta\psi,F(\psi)))$ in $\Lambda'$. There is a notion of a cobordism of such pairs and the group $L_n (F)$ is
defined as such a cobordism group. Analogously in the symmetric and normal cases.
The following proposition improves the above statement in the sense that the relative terms are given as cobordism groups of complexes rather than pairs.
\begin{prop} \textup{\cite[Prop. 3.9]{Ranicki(1992)}} \label{prop:inclusion-les}
Let $\AA$ be an additive category with chain duality and let
$\DD\subset\CC\subset\BB\subset \BB (\AA)$ be subcategories closed
under taking cones. The relative symmetric L-groups for the
inclusion $F:(\AA,\BB,\DD)\ra(\AA,\BB,\CC)$ are given by
\begin{itemize}
\item[(i)] $L^{n}(F) \cong L^{n-1}(\AA,\CC,\DD)$
\end{itemize}
and in the quadratic and normal case by
\begin{itemize}
\item[(ii)] $L_{n}(F) \cong L_{n-1}(\AA,\CC,\DD) \cong NL^{n}(F).$
\end{itemize}
\end{prop}
Part (ii) of the proposition allows us to produce interesting
relations between the long exact sequences for various inclusions
combining the quadratic $L_n$-groups and the normal $NL^n$-groups.
In the following commutative braid we have $4$ such sequences.
Sequence (1) is given by the inclusion $(\AA,\BB,\DD) \ra
(\AA,\BB,\CC)$ in the quadratic theory, sequence (2) by the same
inclusion $(\AA,\BB,\DD) \ra (\AA,\BB,\CC)$, sequence (3) by the
inclusion $(\AA,\BB,\CC) \ra (\AA,\BB,\BB)$, and sequence (4) by the
inclusion $(\AA,\BB,\DD) \ra (\AA,\BB,\BB)$, the last three in the
normal theory:
\newcommand{\braideightlabel}[8]{\xymatrix@C=0.65cm{
& & & & \\
{#1}\ar@/^2.5pc/_{(4)}[rr]\ar^{(2)}[rd] & & {#2}\ar@/^2.5pc/[rr]\ar[rd] & & #3 \\
& {#4}\ar[ru]\ar[rd] & & {#5}\ar[rd]\ar[ru] & \\
{#6}\ar@/_2.5pc/^{(1)}[rr]\ar_{(3)}[ru] & & {#7}\ar@/_2.5pc/[rr]\ar[ru] & & #8 \\
& & & &
}}
{\footnotesize
\[
\braideightlabel{NL^{n}(\AA,\BB,\DD)}{NL^{n}(\AA,\BB,\BB)}{L_{n-1}(\AA,\BB,\CC)}{NL^{n}(\AA,\BB,\CC)}{L_{n-1}(\AA,\BB,\DD)}{L_{n}(\AA,\BB,\CC)}{L_{n-1}(\AA,\CC,\DD)}{NL^{n-1}(\AA,\BB,\DD)}
\]
}
\begin{proof}[Comments on the proof of Proposition \ref{prop:inclusion-les}]
Recall that an element in $L_{n}(F)$ is an $(n-1)$-dimensional
quadratic complex $(C,\psi)$ in $(\AA,\BB,\DD)$ together with an
$n$-dimensional quadratic pair $(C \ra D, (\delta\psi,\psi))$
in $(\AA,\BB,\CC)$. The isomorphism $L_{n}(F)\cong
L_{n-1}(\AA,\CC,\DD)$ is given by
\[
\left( (C,\psi),C \ra D, (\delta\psi,\psi)\right)\mapsto (C',\psi')
\]
where $(C',\psi')$ is the effect of algebraic surgery on $(C,\psi)$
using as data the pair $(C\ra D, (\delta\psi,\psi))$. We have $C' \in \CC$
since $C \ra D$ is Poincar\'e modulo $\CC$. Furthermore, the
observation that $(C',\psi')$ is Poincar\'e modulo $\DD$ follows
from the assumption that $(C,\psi)$ is Poincar\'e modulo $\DD$ and
from Proposition \ref{prop:homotopy-type-of-boundary} which says that the homotopy type of the boundary is
preserved by algebraic surgery.
The inverse map is given by
\[
(C,\psi)\mapsto \left( (C,\psi), C\ra 0, (0,\psi)\right).
\]
Similarly for $NL^{n}(F)\cong L_{n-1}(\AA,\CC,\DD)$. Consider $\left((C,\theta), C\ra D, (\delta \theta,\theta)\right)\in NL^{n}(F)$ and perform
algebraic surgery on $(C,\theta)$ with data $(C\ra D, (\delta \theta,\theta))$. We obtain an $(n-1)$-dimensional symmetric complex in $\CC$ which
is Poincar\'e modulo $\DD$. Using \cite[2.8(ii)]{Ranicki(1992)} we see that the symmetric structure has a quadratic refinement.
\end{proof}
\begin{expl}
Let $R$ be a ring with involution and consider the inclusion of the algebraic bordism categories $\Lambda (R) \ra \widehat \Lambda (R)$ from Example \ref{expl:alg-bord-cat-ring}. Then the long exact sequence of the associated $NL$-groups (sequence (3) in the diagram above) becomes the long exact sequence of Proposition \ref{propn:LESL}, thanks to Lemma \ref{lem:symmetric-poincare-means-unique-normal}.
\end{expl}
\section{Categories over complexes} \label{sec:cat-over-cplxs}
In this section we recall the setup for studying local Poincar\'e duality over a locally finite simplicial complex $K$. For a simplex $\sigma \in K$ we will use the notion of a dual cell $D(\sigma,K)$ which is a certain subcomplex of the barycentric subdivision $K'$, see \cite[Remark 4.10]{Ranicki(1992)} for the definition if needed.\footnote{Note that in general the dual cell $D(\sigma,K)$ is not a ``cell'' in the sense that it is not homeomorphic to $D^l$ for any $l$. Nevertheless the terminology is used in \cite{Ranicki(1992)} and we keep it.}
Observe first that there are two types of local duality for a
triangulated $n$-manifold $K$.
\begin{enumerate}
\item Each simplex $\sigma$ of $K$ is a $|\sigma|$-dimensional manifold with boundary and so there is a duality between $C_{*}(\sigma,\partial\sigma)$ and $C^{|\sigma|-*}(\sigma)$.
\end{enumerate}
\begin{minipage}{\linewidth-4.5cm}
\begin{enumerate}
\item[(2)]
Each dual cell $D(\sigma,K)$ is an $(n-|\sigma|)$-dimensional manifold with boundary and so there is a duality between the chain complexes $C_{\ast}(D(\sigma, K),\partial D(\sigma, K))$ and $C^{n-|\sigma|-\ast}(D(\sigma, K))$.
\end{enumerate}
\end{minipage}
\begin{minipage}{3.5cm}
\begin{flushright}
\dualpicture
\end{flushright}
\end{minipage}\\[0.2cm]
This observation leads to two notions of additive categories with
chain duality over $K$.
\begin{defn} Let $\AA$ be an additive category with chain duality and $K$ as above.
The additive categories of $K$-based objects $\AA^{*}(K)$ and
$\AA_{*}(K)$ are defined by
\begin{enumerate}
\item[] $\text{Obj}(\AA^{*}(K))=\text{Obj}(\AA_{*}(K))=\{\sum_{\sigma\in K}M_{\sigma}\;|\; M_{\sigma}\in\AA\}$,\\[0.3em]
\item $\text{Mor}(\AA^{*}(K)) =$\\ \indent $\{ \sum\limits_{\sigma\geq\tau}f_{\tau,\sigma}:\sum\limits_{\sigma\in K}M_{\sigma}\ra\sum\limits_{\tau\in K}N_{\tau}\;|\;(f_{\tau,\sigma}:M_{\sigma}\ra N_{\tau})\in\text{Mor}(\AA)\}$\\[0.2em]
\item $\text{Mor}(\AA_{*}(K)) =$ \\ \indent $\{ \sum\limits_{\sigma\leq\tau}f_{\tau,\sigma}:\sum\limits_{\sigma\in K}M_{\sigma}\ra\sum\limits_{\tau\in K}N_{\tau}\;|\; (f_{\tau,\sigma}:M_{\sigma}\ra N_{\tau})\in\text{Mor}(\AA)\}$
\end{enumerate}
\end{defn}
A chain complex $(C, d)$ over $\AA^{*}(K)$, respectively $\AA_\ast
(K)$, consists of chain complexes $(C(\sigma),d(\sigma))$ for each
$\sigma \in K$ and additional boundary maps
$d(\sigma,\tau) \co C(\sigma)_\ast \ra C(\tau)_{\ast-1}$ for each
$\tau \leq \sigma$, respectively $\sigma \leq \tau$.
\begin{example} \label{expl:chain-cplxs-over-K}
The simplicial chain complex $C = \Delta (K)$ is a chain complex in
$\BB (\AA^\ast (K))$, by defining $C(\sigma)\coloneqq \Delta (\sigma,\del
\sigma) = S^{|\sigma|} \ZZ$.
The simplicial chain complex $C = \Delta (K')$ of the barycentric
subdivision $K'$ is a chain complex in $\BB (\AA_\ast (K))$ by $C
(\sigma) = \Delta (D(\sigma),\del D(\sigma))$.
\end{example}
The following picture depicts the simple case of the simplicial chain complex
$\Delta_{*}(\Delta^{1})$ as a chain complex in
$\AA(\ZZ)^{*}(\Delta^{1})$:
\begin{center}
\[
\xymatrix@R=0.3cm{
*=0{\bullet}\save[]+<0cm,0.3cm>*{_{\sigma_{0}}}\restore\ar@{-}^-{\tau}[rr]
& &
*=0{\bullet}\save[]+<0cm,0.3cm>*{_{\sigma_{1}}}\restore
\\
C(\sigma_{0})=\Delta_{*}(\sigma_{0},\partial\sigma_{0})
&
C(\tau)=\Delta_{*}(\tau,\partial\tau)
&
C(\sigma_{1})=\Delta_{*}(\sigma_{1},\partial\sigma_{1})
\\
\save[]+<-1.9cm,0cm>*{C_{2}:}\restore
0\ar[dd]
&
0\ar[ddl]\ar[dd]\ar[ddr]
&
0\ar[dd]
\\
&&&
\\
\save[]+<-1.9cm,0cm>*{C_{1}:}\restore
0\ar[dd]
&
\ZZ\ar[ldd]_{\partial_{0}}\ar[rdd]^-{\partial_{1}}\ar[dd]
&
0\ar[dd]
\\
&&&
\\
\save[]+<-1.9cm,0cm>*{C_{0}:}\restore
\ZZ
&
0
&
\ZZ
}
\]
\end{center}
Now we recall the extension of the chain duality from $\AA$ to the
two new categories.
\begin{defn}
\[
T^{*}:\AA^{*}(K)\ra\BB(\AA^{*}(K)),\; (T^{*}(\sum_{\sigma\in K}
M_{\sigma}))_{r}(\tau) =
(T(\bigoplus_{\tau\geq\tilde{\tau}}M_{\tilde{\tau}}))_{r-|\tau|}.
\]
\[
T_{*}:\AA_{*}(K)\ra\BB(\AA_{*}(K)),\; (T_{*}(\sum_{\sigma\in K}
M_{\sigma}))_{r}(\tau) =
(T(\bigoplus_{\tau\leq\tilde{\tau}}M_{\tilde{\tau}}))_{r+|\tau|}.
\]
\end{defn}
\begin{example} \label{expl:duality-in-chain-cplxs-over-K}
The dual $T^\ast (C)$ of the simplicial chain complex $C = \Delta
(K)$ is a chain complex in $\BB (\AA^\ast (K))$ given by $(T^\ast
C)(\sigma) = \Delta^{|\sigma|-\ast} (\sigma)$.
The dual $T_\ast C$ of the simplicial chain complex $C = \Delta (K')$
of the barycentric subdivision $K'$ is a chain complex in $\BB
(\AA_\ast (K))$ given by $(T_\ast C)(\sigma) =
\Delta^{-|\sigma|-\ast} (D(\sigma))$.
\end{example}
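To see how the first formula follows from the definition of $T^\ast$, one can argue as follows (identifying the graded groups $\bigoplus_{\tilde{\tau} \leq \tau} \Delta (\tilde{\tau},\del \tilde{\tau}) \cong \Delta (\tau)$, since every simplex of $\tau$ lies in exactly one of these summands, and using the convention $T(D)_{r} = D^{-r}$ for the duality applied to a chain complex $D$ over $\ZZ$):
\[
(T^\ast C)_{r}(\tau)
= \Big( T \big( \bigoplus_{\tilde{\tau} \leq \tau} \Delta (\tilde{\tau},\del \tilde{\tau}) \big) \Big)_{r-|\tau|}
= T (\Delta (\tau))_{r-|\tau|}
= \Delta^{|\tau|-r} (\tau).
\]
The second formula is obtained in the same way, with the dual cells $D(\sigma)$ in place of the simplices and the shift $r+|\tau|$ in place of $r-|\tau|$.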
In the example when $K$ is a triangulated manifold, recall that the
chain duality functor $T^{*}$ on $\AA^{*}(K)$ is supposed to encode
the local Poincar\'e duality of all simplices of $K$. But the
dimensions of these local Poincar\'e dualities vary with the
dimension of the simplices and we have to deal with the boundaries.
So the dimension shift in the above formula comes from the varying
dimensions and the direct sum comes from ``dealing with the
boundary''. In the example $C = \Delta_* (\Delta^1)$ we obtain the
following picture
{\footnotesize
\[
\xymatrix@C=0.5cm@R=0.3cm{
{\Delta_{*}(\sigma_{0},\partial\sigma_{0})}
&
{\Delta_{*}(\tau,\partial\tau)}
&
{\Delta_{*}(\sigma_{1},\partial\sigma_{1})}
&
{\Delta^{*}(\sigma_{0})}
&
{\Delta^{*}(\tau)}
&
{\Delta^{*}(\sigma_{1})}
\\
\save[]+<-0.7cm,0cm>*{C_{1}:}\restore
0\ar[dd]
&
\ZZ\ar[ldd]_{\partial_{0}}\ar[rdd]^-{\partial_{1}}\ar[dd]
&
0\ar[dd]\save[]+<0.6cm,0cm>*{}\ar@{<--}^{\varphi_{0}}[r]+<-0.6cm,0cm>\restore
& &
(\ZZ\oplus\ZZ)^{*}\ar[dd]^(0.6)*!/l2pt/{{\partial_{0}^{*}}\choose{\partial_{1}^{*}}}
\ar[ddl]_{i^{*}_{0}}\ar[ddr]^{i^{*}_{1}}
&
\save[]+<1.3cm,0cm>*{:(T^{*}C)_{1}}\restore
\\
&&&&&
\\
\save[]+<-0.7cm,0cm>*{C_{0}:}\restore
\ZZ
&
0
&
\ZZ
\save[]+<0.6cm,0cm>*{}\ar@{<--}^{\varphi_{0}}[r]+<-0.6cm,0cm>\restore
&
\ZZ^{*}\ar[dd]
&
\ZZ^{*}\ar[ddr]\ar[ddl]
&
\ZZ^{*}\ar[dd]
\save[]+<1.3cm,0cm>*{:(T^{*}C)_{0}}\restore
\\
&&&&&
\\
&&&0&&0\save[]+<1.4cm,0cm>*{:(T^{*}C)_{-1}}\restore
\\
}
\]
}
In $\AA_{*}(K)$ the role of simplices is replaced by the dual cells
and so the formulas are changed accordingly.
The additive categories with chain duality $\AA^\ast (K)$ and $\AA_\ast (K)$ can be made into algebraic bordism categories in various ways, yielding chain complexes with various types of Poincar\'e duality. Here we introduce the local duality; the global duality will appear in the next section.
\begin{prop} \label{defn:Lambda-star-categories}
Let $\Lambda=(\AA,\BB,\CC)$ be an algebraic bordism category and $K$
a locally finite simplicial complex. Then the triples
\[
\Lambda^{*}(K) = (\AA^\ast (K),\BB^\ast (K),\CC^\ast (K)) \qquad
\Lambda_\ast (K) = (\AA_\ast (K),\BB_\ast (K),\CC_\ast (K))
\]
where $\BB^{*} (K)$ and $\BB_\ast (K)$ (respectively $\CC^\ast (K)$
and $\CC_\ast (K)$) are the full subcategories of $\BB (\AA^\ast
(K))$, respectively $\BB (\AA_\ast (K))$, consisting of the chain
complexes $C$ such that $C(\sigma) \in \BB$ (respectively $C(\sigma)
\in \CC$) for all $\sigma \in K$, are algebraic bordism categories.
\end{prop}
See \cite[Proposition 5.1]{Ranicki(1992)} for the proof. We remark that other useful algebraic bordism categories associated to $\Lambda$ and $K$ will be defined in Definitions \ref{defn:Lambda-K-category} and \ref{defn:Lambda-hat-category}.
\begin{prop}
Let $\Lambda=(\AA,\BB,\CC)$ be an algebraic bordism category and let
$f \co J \ra K$ be a simplicial map. Then $f$ induces
contravariantly (respectively covariantly) a functor of algebraic
bordism categories
\[
f^\ast \co \Lambda^\ast (K) \ra \Lambda^\ast (J) \qquad (f_\ast \co
\Lambda_\ast (J) \ra \Lambda_\ast (K)).
\]
\end{prop}
See \cite[Proposition 5.6]{Ranicki(1992)}. A consequence is that we
obtain induced maps on the $L$-groups as well, which we do not
write down explicitly at this stage.
Now we present constructions over the category $\AA(\ZZ)^\ast (K)$ analogous to the symmetric and quadratic construction in section \ref{sec:algebraic-cplxs}. Examples of chain complexes over $\AA(\ZZ)^{\ast} (K)$ were already presented in Examples \ref{expl:chain-cplxs-over-K} and \ref{expl:duality-in-chain-cplxs-over-K}. The underlying chain complexes below are generalizations of those. We will write $\ZZ^\ast(K)$ as short for $\AA(\ZZ)^\ast(K)$ and $\ZZ_\ast(K)$ as short for $\AA(\ZZ)_\ast(K)$.
\begin{construction} \label{con:sym-construction-over-cplxs-upper-star-cplx}
Consider a topological $k$-ad $(X,(\del_\sigma X)_{\sigma \in \Delta^k})$ and the subcomplex of the singular chain complex $C (X)$ consisting of simplices which respect the $k$-ad structure in the sense that each singular simplex is contained in $\del_\sigma X$ for some $\sigma \in \Delta^k$. By a Mayer--Vietoris type argument this chain complex is chain homotopy equivalent to $C(X)$ and by abuse of notation we still denote it $C(X)$. It becomes a chain complex over $\ZZ^\ast (\Delta^k)$ by $C(X) (\sigma) = C(\del_\sigma X,\del (\del_\sigma X))$. Its dual is a chain complex $T^\ast C(X)$ given by $(T^\ast C(X))(\sigma) = C^{|\sigma|-\ast} (\del_\sigma X)$ for $\sigma \in \Delta^k$. A generalization of the relative symmetric construction \ref{con:rel-sym} gives a chain map
\[
\varphi_{\Delta^k} \co \Sigma^{-k} C(X,\del X) \ra \Ws{C(X)} \; \textup{over} \; \ZZ^\ast (\Delta^k)
\]
called the \emph{symmetric construction over} $\Delta^k$ which evaluated on a cycle $[X] \in C_{n+k} (X, \del X)$ gives an $n$-dimensional symmetric algebraic complex $(C (X),\varphi_{\Delta^k} [X])$ in $\ZZ^\ast (\Delta^k)$ whose component
\[
\varphi_{\Delta^k} ([X]) (\sigma)_0 \co C^{n+|\sigma|-\ast} (\del_\sigma X) \ra C(\del_\sigma X,\del (\del_\sigma X))
\]
is the cap product with the cycle $[\del_\sigma X] \in C_{n+|\sigma|} (\del_\sigma X,\del (\del_\sigma X))$. Here $\del_\sigma \co C (\Delta^k) \ra C(\sigma)$ is the map defined as in \cite[Definition 8.2]{Ranicki(1992)}.
\end{construction}
\begin{construction} \label{con:sym-construction-over-cplxs-upper-star-mfd}
Consider now the special case when we have an $(n+k)$-dimensional manifold $k$-ad $(M,(\del_\sigma M)_{\sigma \in \Delta^k})$. Let $\Lambda (\ZZ)$ be the algebraic bordism category from Example \ref{expl:alg-bord-cat-ring}. Construction \ref{con:sym-construction-over-cplxs-upper-star-cplx} applied to the fundamental class $[M] \in C_{n+k} (M,\del M)$ produces an $n$-dimensional symmetric algebraic complex $(C (M),\varphi_{\Delta^k} ([M]))$ in the category $\Lambda (\ZZ)^\ast (\Delta^k)$ since the maps
\[
\varphi_{\Delta^k} ([M]) (\sigma)_0 \co C^{n+|\sigma|-\ast} (\del_\sigma M) \ra C(\del_\sigma M,\del (\del_\sigma M))
\]
are the cap products with the fundamental classes $[\del_\sigma M] \in C_{n+|\sigma|} (\del_\sigma M,\del (\del_\sigma M))$ and hence chain homotopy equivalences, and hence their mapping cones are contractible.
\end{construction}
\begin{construction} \label{con:quad-construction-over-cplxs-upper-star}
Analogously, when we have a degree one normal map of manifold $k$-ads
\[
((f,b),(f_\sigma,b_\sigma)) \co (M,\del_\sigma M) \ra (X,\del_\sigma X)
\]
with $\sigma \in \Delta^k$, the stable Umkehr map $F \co \Sigma^p X_+ \ra \Sigma^p M_+$ for some $p$ induces by a generalization of the relative quadratic construction \ref{con:rel-quad-htpy} a chain map
\[
\psi_{\Delta^k} \co \Sigma^{-k} C(X,\del X) \ra \Wq{C(M)} \; \textup{over} \; \ZZ^\ast (\Delta^k)
\]
called the \emph{quadratic construction over} $\Delta^k$. Evaluated on the fundamental class $[X] \in C_{n+k} (X, \del X)$ it produces an $n$-dimensional quadratic algebraic complex in the category $\Lambda(\ZZ)^\ast (\Delta^k)$. The mapping cone $\sC (f^!)$ becomes a complex over $\ZZ^\ast (\Delta^k)$ by $\sC (f^{!}) (\sigma) = \sC (f_\sigma^{!},\del f_\sigma^{!})$. The chain map $e \co C(M) \ra \sC (f^!)$ in $\ZZ^\ast (\Delta^k)$ produces an $n$-dimensional quadratic complex in $\Lambda (\ZZ)^\ast (\Delta^k)$
\[
\big( \sC (f^!), e_\% \psi_{\Delta^k} [X] \big).
\]
\end{construction}
Now we move to the constructions in the category $\ZZ_\ast (K)$.
\begin{construction} \label{con:sym-construction-over-cplxs-lower-star-cplx}
Let $r \co X \ra K$ be a map of simplicial complexes. Denoting for $\sigma \in K$
\[
X[\sigma] = r^{-1} (D(\sigma,K)) \subset X' \; \textup{we obtain} \;
X = \bigcup_{\sigma \in K} X[\sigma].
\]
This decomposition is called a $K$-dissection of $X$. Consider the subcomplex of the singular chain complex $C(X)$ consisting of the singular chains which respect the dissection in the sense that each singular simplex is contained in some $X[\sigma]$. This chain complex is chain homotopy equivalent to $C(X)$ and by abuse of notation we still denote it $C(X)$. It becomes a chain complex in $\BB (\ZZ_\ast (K))$ by
\[
C(X)(\sigma) = C(X[\sigma],\del X[\sigma]) \; \textup{for} \; \sigma \in K
\]
with $n$-dual
\[
\Sigma^n T_\ast C(X) (\sigma) = C^{n-|\sigma|-\ast}(X[\sigma]).
\]
There is a chain map $\del_\sigma \co C(X) \ra S^{|\sigma|}
C(X[\sigma],\del X [\sigma])$, defined in \cite[Definition
8.2]{Ranicki(1992)}; the image of a chain $[X] \in C(X)_n$ is
denoted $[X[\sigma]] \in C(X[\sigma],\del X [\sigma])_{n-|\sigma|}$.
A generalization of the relative symmetric construction \ref{con:rel-sym} gives a chain map
\[
\varphi_K \co C(X) \ra \Ws{C(X)} \; \textup{over} \; \ZZ_\ast (K)
\]
called the \emph{symmetric construction over} $K$, which evaluated on a cycle $[X] \in C(X)_n$ produces an $n$-dimensional symmetric complex $(C(X),\varphi_K [X])$ over $\ZZ_\ast (K)$ whose component
\[
\varphi_K ([X]) (\sigma)_0 \co C^{n-|\sigma|-\ast} (X[\sigma]) \ra
C(X[\sigma],\del X[\sigma])
\]
is the cap product with the class $[X[\sigma]]$.
\end{construction}
\begin{construction} \label{con:sym-construction-over-cplxs-lower-star-mfd}
More generally, let $X$ be an $n$-dimensional topological manifold and let $r \co X \ra K$ be a map, transverse to the dual cells $D(\sigma,K)$ for all $\sigma \in K$. Any map can be so deformed by topological transversality. In this situation we obtain an analogous $K$-dissection. The resulting complex $(C (X),\varphi_K [X])$ is now an $n$-dimensional symmetric algebraic complex in $\Lambda (\ZZ)_\ast (K)$ since the maps
\[
\varphi (\sigma)_0 \co C^{n-|\sigma|-\ast} (X[\sigma]) \ra C(X[\sigma],\del X[\sigma])
\]
are the cap products with the fundamental classes $[X[\sigma]] \in C_{n-|\sigma|} (X[\sigma],\del X[\sigma])$ and hence chain homotopy equivalences, and hence their mapping cones are contractible. Here we are using the fact that each $X[\sigma]$ is an $(n-|\sigma|)$-dimensional manifold with boundary and hence satisfies Poincar\'e duality.
\end{construction}
\begin{construction} \label{con:quad-construction-over-cplxs-lower-star}
Analogously let $(f,b) \co M \ra X$ be a degree one normal map of closed $n$-dimensional topological manifolds. We can make $f$ transverse to the $K$-dissection of $X$ in the sense that
each preimage
\[
(M[\sigma],\del M[\sigma]) := f^{-1} (X[\sigma],\del X[\sigma])
\]
is an $(n-|\sigma|)$-dimensional manifold with boundary and each restriction
\[
(f[\sigma],f[\del \sigma]) \co (M[\sigma],\del M[\sigma]) \ra (X[\sigma],\del X[\sigma])
\]
is a degree one normal map. The stable Umkehr map $F \co \Sigma^p X_+ \ra \Sigma^p M_+$ for some $p$ induces by a generalization of the relative quadratic construction \ref{con:rel-quad-htpy} a chain map
\[
\psi_{K} \co C(X) \ra \Wq{C(M)} \; \textup{over} \; \ZZ_\ast (K)
\]
called the \emph{quadratic construction over} $K$. Evaluated on the fundamental class $[X] \in C_{n} (X)$ it produces an $n$-dimensional quadratic algebraic complex in the category $\Lambda(\ZZ)_\ast (K)$. The mapping cone $\sC (f^!)$ becomes a complex over $\ZZ_\ast (K)$ by $\sC (f^{!}) (\sigma) = \sC (f [\sigma]^{!},f [\del \sigma]^{!})$. The chain map $e \co C(M) \ra \sC (f^!)$ in $\ZZ_\ast (K)$ produces an $n$-dimensional quadratic complex in $\Lambda (\ZZ)_\ast (K)$
\[
\big( \sC (f^!), e_\% \psi_{K} [X] \big).
\]
\end{construction}
\section{Assembly} \label{sec:assembly}
Assembly is a map that allows us to compare the concepts of the local Poincar\'e duality introduced in section \ref{sec:cat-over-cplxs} and the global Poincar\'e duality in section \ref{sec:algebraic-cplxs}. It is formulated as a functor of algebraic bordism categories.
\begin{prop}
The functor of additive categories $A:\ZZ_{*}(K)\ra \ZZ[\pi_{1}(K)]$
defined by
\[
M\mapsto \sum_{\tilde{\sigma}\in\tilde{K}}M(p(\tilde{\sigma})),
\]
where $p \co \widetilde{K} \ra K$ is the projection of the universal
cover, defines a functor of algebraic bordism categories
\[
A \co \Lambda (\ZZ)_\ast (K) \ra \Lambda (\ZZ[\pi_1 (K)])
\]
and hence homomorphisms
\[
A \co L^n (\Lambda (\ZZ)_\ast (K)) \ra L^n (\Lambda (\ZZ[\pi_1 (K)])) \qquad A \co L_n (\Lambda (\ZZ)_\ast (K)) \ra L_n (\Lambda (\ZZ[\pi_1 (K)]))
\]
\end{prop}
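To get a feeling for the assembly functor it may help to consider the simply connected case. If $\pi_1 (K)$ is trivial then $\widetilde K = K$ and the assembly simply sums the components, $A(M) = \sum_{\sigma \in K} M(\sigma)$. For example, for the chain complex $C = \Delta (K')$ of Example \ref{expl:chain-cplxs-over-K}, with $C(\sigma) = \Delta (D(\sigma),\del D(\sigma))$, one recovers
\[
A (C) = \sum_{\sigma \in K} \Delta (D(\sigma),\del D(\sigma)) \cong \Delta (K'),
\]
since every simplex of $K'$ is contained in $D(\sigma,K)$ but not in $\del D(\sigma,K)$ for exactly one $\sigma \in K$.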
\begin{expl} \label{expl:assembly-of-symmetric-signature-over-K} \cite[Example 9.6]{Ranicki(1992)}
Let $X$ be an $n$-dimensional topological manifold with a map $r \co X \ra K$. Construction \ref{con:sym-construction-over-cplxs-lower-star-cplx} describes how to associate to $X$ an $n$-dimensional SAC $(C,\varphi)$ in $\Lambda (\ZZ)_\ast (K)$. The assembly $A(C,\varphi)$ is then the $n$-dimensional SAPC $\ssign (X) = (C(\widetilde{X}),\varphi ([X]))$ in $\Lambda (\ZZ[\pi_1 (K)])$ described in Construction \ref{constrn:symmetric construction}.
\end{expl}
\begin{expl} \label{expl:assembly-of-quadratic-signature-over-K} \cite[Example 9.6]{Ranicki(1992)}
Let $(f,b) \co M \ra X$ be a degree one normal map of closed $n$-dimensional topological manifolds. Construction \ref{con:quad-construction-over-cplxs-lower-star} describes how to associate to $(f,b)$ an $n$-dimensional QAC $(C,\psi)$ in $\Lambda (\ZZ)_\ast (K)$. The assembly $A(C,\psi)$ is then the $n$-dimensional QAPC $\qsign (f,b) = (C(f^{!}),e_\% \psi [X])$ in $\Lambda (\ZZ[\pi_1 (K)])$ described in Construction \ref{constrn:quadconstrn}.
\end{expl}
It is convenient to factor the assembly map into two maps. The reason is that we have nice localization sequences for a functor of algebraic bordism categories when the underlying category with chain duality is fixed and the functor is an inclusion. Hence we define
\begin{defn} \label{defn:Lambda-K-category}
Let $\Lambda (\ZZ)$ be the algebraic bordism category of Example \ref{expl:alg-bord-cat-ring} and $K$ a locally finite simplicial complex. Then the triple
\[
\Lambda (\ZZ) (K) = (\AA_\ast (K),\BB_\ast (K),\CC (K))
\]
where the subcategory $\CC (K)$ consists of the chain complexes $C \in \BB_\ast (K)$ such that $A(C) \in \CC (\ZZ[\pi_1 (K)])$, is an algebraic bordism category.
\end{defn}
Hence, for example, an $n$-dimensional symmetric complex $(C,\varphi)$ in $\Lambda (\ZZ) (K)$ will be a complex over $\ZZ_\ast (K)$, which will only be globally Poincar\'e in the sense that $A(C,\varphi)$ will be an $n$-dimensional SAPC over $\ZZ[\pi_1 K]$, but the duality maps $\varphi (\sigma) \co \Sigma^n TC (\sigma) \ra C(\sigma)$ do not have to be chain homotopy equivalences for a particular simplex $\sigma \in K$.
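In these terms the comparison can be summarized informally as follows. For an $n$-dimensional symmetric complex $(C,\varphi)$ over $\ZZ_\ast (K)$ with boundary $\del C = \dsc (\varphi_{0} \co \susp^{n} TC \ra C)$:
\begin{itemize}
\item $(C,\varphi)$ is Poincar\'e in $\Lambda (\ZZ)_\ast (K)$ (\emph{locally} Poincar\'e) if $\del C (\sigma) \simeq 0$ for all $\sigma \in K$;
\item $(C,\varphi)$ is Poincar\'e in $\Lambda (\ZZ) (K)$ (\emph{globally} Poincar\'e) if $A (\del C) \simeq 0$ over $\ZZ [\pi_1 (K)]$.
\end{itemize}
Since the assembly of a locally contractible complex is contractible, a locally Poincar\'e complex is in particular globally Poincar\'e, but not conversely.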
\begin{prop}
The assembly functor factors as
\[
A \co \Lambda (\ZZ)_\ast (K) \ra \Lambda (\ZZ) (K) \ra \Lambda (\ZZ[\pi_1 (K)])
\]
\end{prop}
Furthermore Ranicki proves the following algebraic $\pi$-$\pi$-theorem\footnote{The name is explained at the beginning of \cite[chapter 10]{Ranicki(1992)}.}:
\begin{prop} \textup{\cite[chapter 10]{Ranicki(1992)}} \label{prop:algebraic-pi-pi-theorem}
The functor $\Lambda(\ZZ) (K) \ra \Lambda (\ZZ[\pi_1 (K)])$ induces an isomorphism on quadratic $L$-groups
\[
L_n (\Lambda (\ZZ) (K)) \cong L_n (\ZZ [\pi_1 (K)])
\]
\end{prop}
It follows that when we want to compare local and global Poincar\'e duality it is enough to study the map
\begin{equation}
A \co L_n (\Lambda (\ZZ)_\ast (K)) \ra L_n (\Lambda (\ZZ) (K)).
\end{equation}
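Note that for $K$ a single point both $\Lambda (\ZZ)_\ast (K)$ and $\Lambda (\ZZ) (K)$ can be identified with $\Lambda (\ZZ)$ and the map above becomes the identity, since the assembly over a point does nothing and $\pi_1 (\mathrm{pt})$ is trivial. The difference between local and global Poincar\'e duality is therefore only visible over a non-trivial complex $K$.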
\section{$\bL$-Spectra} \label{sec:spectra}
The technology of the previous sections also allows us to construct
$L$-theory spectra whose homotopy groups are the already defined
$L$-groups. Spectra give rise to generalized (co-)homology theories
via the standard technology of stable homotopy theory. That is also
the main reason for their introduction in $L$-theory.
These spectra are constructed as spectra of $\Delta$-sets, alias
simplicial sets without degeneracies. We refer the reader to
\cite[chapter 11]{Ranicki(1992)} for the detailed definition as well
as for the notions of Kan $\Delta$-sets, the geometric product $K
\otimes L$, the smash product $K \wedge L$, the function
$\Delta$-sets $L^K$, the fiber and the cofiber of a map of
$\Delta$-sets, the loop $\Delta$-set $\Omega K$ and the suspension
$\Sigma K$ as well as the notion of an $\Omega$-spectrum of
$\Delta$-sets.
Below, $\Delta^n$ is the standard $n$-simplex, $\Lambda$ is an
algebraic bordism category and $K$ is a finite $\Delta$-set.
\begin{defn}
Let $\bL_{n}(\Lambda)$, $\bL^{n}(\Lambda)$ and $\bNL^{n}(\Lambda)$
be pointed $\Delta$-sets defined by
\begin{align*}
\bL^{n}(\Lambda)^{(k)} & = \{n\text{-dim. symmetric complexes in }\Lambda^{*}(\Delta^{k})\}, \\
\bL_{n}(\Lambda)^{(k)} & = \{n\text{-dim. quadratic complexes in }\Lambda^{*}(\Delta^{k})\}, \\
\bNL^{n}(\Lambda)^{(k)} &= \{n\text{-dim. normal complexes in }\Lambda^{*}(\Delta^{k+n})\}.
\end{align*}
The face maps are induced by the face inclusions $\partial_{i}:\Delta^{k-1}\ra\Delta^{k}$
and the base point is the 0-chain complex.
\end{defn}
\begin{prop}
We have $\Omega$-spectra of pointed Kan $\Delta$-sets
\[
\bL^{\bullet}(\Lambda) \!\coloneqq \{\bL^{n}(\Lambda)\;|\;n\in\ZZ\}
\quad \! \bL_{\bullet}(\Lambda) \!\coloneqq
\{\bL_{n}(\Lambda)\;|\;n\in\ZZ\} \quad \! \bNL^{\bullet}(\Lambda)
\!\coloneqq \{\bNL^{n}(\Lambda)\;|\;n\in\ZZ\}
\]
with homotopy groups
\[
\pi_{n}(\bL^{\bullet}(\Lambda)) \cong L^{n}(\Lambda) \quad
\pi_{n}(\bL_{\bullet}(\Lambda)) \cong L_{n}(\Lambda) \quad
\pi_{n}(\bNL^{\bullet}(\Lambda)) \cong NL^{n}(\Lambda)
\]
\end{prop}
\begin{rem}
The indexing of the $L$-spectra above is the opposite of the usual
indexing in stable homotopy theory. Namely, if $\bE$ is any of the
spectra above we have $\bE_{n+1} \simeq \Omega \bE_n$.
\end{rem}
\begin{notation}
To save space we will abbreviate
\[
\bL^{\bullet} = \bL^{\bullet}(\Lambda (\ZZ)) \quad \bL_{\bullet} =
\bL_{\bullet}(\Lambda (\ZZ)) \quad \bNL^{\bullet} = \bNL^{\bullet}(\widehat \Lambda
(\ZZ)).
\]
\end{notation}
We note that the exact sequences from Propositions \ref{propn:LESL},
\ref{prop:relativ-L-groups}, and \ref{prop:inclusion-les} can be
seen as the long exact sequences of the homotopy groups of fibration
sequences of spectra. We are mostly interested in the following
special case.
\begin{prop} \label{prop:fib-seq-of-quad-sym-norm}
Let $R$ be a ring with involution. Then we have a fibration sequence
of spectra
\[
\bL_\bullet (\Lambda(R)) \ra \bL^\bullet (\Lambda(R)) \ra \bNL^\bullet (\Lambda(R)).
\]
\end{prop}
\begin{proof}
Consider the fiber of the map of spectra $\bL^\bullet (\Lambda(R)) \ra \bNL^\bullet (\Lambda(R))$. Use algebraic surgery to identify it with $\bL_\bullet (\Lambda(R))$ just as in the proof of Proposition \ref{propn:LESL}.
\end{proof}
In fact the $L$-theory spectra are modeled on some geometric spectra. We will use the notion of a $(k+2)$-ad (of spaces) and manifold $(k+2)$-ads as defined in \cite[\S 0]{Wall(1999)}.
\begin{defn}
Let $n\in\ZZ$ and $\bbOmega[n]{STOP}$ and $\bbOmega[n]{N}$ be pointed
$\Delta$-sets defined by
\begin{align*}
(\bbOmega[n]{STOP})^{(k)} = & \{ (M,\partial_0M,\ldots,\partial_kM) \; | \; (n+k)\textup{-dimensional manifold} \\
& \textup{$(k+2)$-ad such that } \partial_0 M \cap \ldots \cap \partial_k M = \emptyset\} \\
(\bbOmega[n]{N})^{(k)} = & \{ (X,\nu,\rho) \; | \; (n+k)\textup{-dimensional normal space $(k+2)$-ad} \\
& X = (X, \partial_0 X,\ldots,\partial_k X) \; \textup{such that} \; \partial_0 X \cap \ldots\cap \partial_k X = \emptyset, \\ & \nu \co X \ra \BSG (r) \textup{ and } \rho:\Delta^{n+k+r}\ra \Th(\nu_X) \; \textup{such that} \\ & \rho (\partial_i\Delta^{n+k+r}) \subset \Th(\nu_{\partial_{i} X}) \}
\end{align*}
Face maps $\partial_i:(\bbOmega[n]{?})^{(k)} \ra (\bbOmega[n]{?})^{(k-1)}$, $0\leq i\leq k$ are given in both cases by
\[
\partial_i(X) = (\partial_iX, \partial_0X\cap \partial_iX,\ldots, \partial_{i-1}X\cap \partial_iX,\partial_{i+1}X\cap \partial_iX,\ldots, \partial_kX\cap \partial_iX).
\]
Here the convention is used that the empty space is a manifold (normal space) of any dimension $n \in \ZZ$; it serves as the base point in all dimensions.
\end{defn}
\begin{prop}
We have $\Omega$-spectra of pointed Kan $\Delta$-sets
\[
\bbOmega[\bullet]{STOP} \coloneqq \{\bbOmega[n]{STOP}\;|\;n\in\ZZ\}
\quad \bbOmega[\bullet]{N} \coloneqq\{\bbOmega[n]{N}\;|\;n\in\ZZ\}
\]
with homotopy groups
\[
\pi_n(\bbOmega[\bullet]{STOP}) = \Omega^{\STOP}_n \quad \pi_n(\bbOmega[\bullet]{N}) =
\Omega^N_n.
\]
\end{prop}
\begin{defn}
For $n \in \ZZ$ let $\Sigma^{-1} \bbOmega[n]{N,\STOP}$ be the
pointed $\Delta$-set defined as the fiber of the map of
$\Delta$-sets
\[
\Sigma^{-1} \bbOmega[n]{N,\STOP} = \textup{Fiber} (\bbOmega[n]{STOP} \ra \bbOmega[n]{N} )
\]
The collection $\Sigma^{-1} \bbOmega[n]{N,\STOP}$ becomes an
$\Omega$-spectrum of $\Delta$-sets.
\end{defn}
\begin{rem}
Again, the indexing of the above spectra is the opposite of the
usual indexing in stable homotopy theory. To see that the spectra
are indeed $\Omega$-spectra observe that an $(n+1+k-1)$-dimensional
$(k-1+2)$-ad is the same as an $(n+k)$-dimensional $(k+2)$-ad whose faces
$\del_0$ and $\del_1 \ldots \del_k$ are empty. A similar observation
is used in the algebraic situation.
\end{rem}
Hence we have a homotopy fibration sequence of spectra
\begin{equation} \label{eqn:htpy-fib-seq-of-bordism-spectra}
\Sigma^{-1} \bbOmega[\bullet]{N,\STOP} \ra \bbOmega[\bullet]{STOP} \ra \bbOmega[\bullet]{N}
\end{equation}
The fibration sequences from Proposition
\ref{prop:fib-seq-of-quad-sym-norm} and of
(\ref{eqn:htpy-fib-seq-of-bordism-spectra}) are related by the
signature maps as follows.
\begin{prop} \label{prop:signatures-on-spectra-level}
The relative symmetric construction produces
\[
\textup{(1)}\quad \ssign \co \bbOmega[n]{STOP} \ra \bL^n(\Lambda(\ZZ)) \; \leadsto \; \ssign \co
\bbOmega[\bullet]{STOP} \ra \bL^\bullet
\]
The relative normal construction produces
\[
\textup{(2)}\quad \nsign \co \bbOmega[n]{N} \ra \bNL^n(\widehat \Lambda(\ZZ))
\; \leadsto \; \nsign \co \bbOmega[\bullet]{N} \ra \bNL^\bullet.
\]
The relative normal construction together with the fibration
sequence from Proposition \ref{prop:fib-seq-of-quad-sym-norm}
produces
\[
\textup{(3)}\quad \qsign \co \Sigma^{-1} \bbOmega[n]{N,\STOP} \ra
\bL_n(\Lambda(\ZZ)) \; \leadsto \; \qsign \co \Sigma^{-1}
\bbOmega[\bullet]{N,\STOP} \ra \bL_\bullet.
\]
\end{prop}
\begin{proof}
For (1) use Construction
\ref{con:sym-construction-over-cplxs-upper-star-cplx} which is just
a generalization of the relative symmetric construction. For (2) the
relative normal construction can be used. The full details are
complicated; they can be found in \cite[section 7]{Weiss-II(1985)}.
For (3) observe that the relative normal construction provides us
with a map to the fiber of the map $\bL^{n} (\Lambda(\ZZ)) \ra
\bNL^{n} (\Lambda(\ZZ))$. The identification of this fiber from Proposition
\ref{prop:fib-seq-of-quad-sym-norm} produces the desired map.
\end{proof}
\section{Generalized homology theories} \label{sec:gen-hlgy-thies}
Now we come to the use of the spectra just defined to produce (co-)homology. Definition \ref{defn:co-hlgy-via-spectra} below contains the formulas. In addition, $S$-duality makes it possible to express homology as cohomology and vice versa. In our application it turns out that the input we obtain is of a cohomological nature, but we would like to think of it in terms of homology. Therefore we adopt the strategy summarized by the slogan: ``homology is the cohomology of the $S$-dual''. Here a simplicial model for the $S$-duality will in fact be useful when we work with particular cycles. For the $L$-theory spectra a relation to the $L$-groups of the algebraic bordism categories from section \ref{sec:cat-over-cplxs} will be established.
The following definitions are standard.
\begin{defn} \label{defn:co-hlgy-via-spectra}
Let $\bE$ be an $\Omega$-spectrum of Kan $\Delta$-sets and let $K$
be a locally finite $\Delta$-set.
(1) The \emph{cohomology with $\bE$-coefficients} is defined by
\[
H^n(K;\bE) = \pi_{-n}(\bE^{K_{+}}) = [K_{+}, \bE_{-n}]
\]
where $\bE^{K_+}$ is the mapping $\Delta$-set given by
\[
(\bE^{K_{+}}_{-n})^{(p)} = \{ K_{+}\otimes \Delta^p\ra \bE_{-n} \}
\]
(2) The \emph{homology with $\bE$-coefficients} is defined by
\[
H_n(K;\bE) = \pi_{n}(K_+ \wedge \bE) = \textup{colim} \; \pi_{n+j} (K_+ \wedge \bE_{-j} )
\]
where $K_+ \wedge \bE$ is the $\Omega$-spectrum of $\Delta$-sets
given by
\[
(K_+ \wedge \bE) = \{ \textup{colim} \; \Omega^j (K_+ \wedge \bE_{n-j}) \; | \; n \in \ZZ \}.
\]
\end{defn}
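As an elementary example, if $K$ is a single point then $K_+ = S^0$ and both formulas recover the homotopy groups of the spectrum:
\[
H^n (\mathrm{pt};\bE) \cong \pi_{-n} (\bE) \quad \textup{and} \quad H_n (\mathrm{pt};\bE) \cong \pi_n (\bE).
\]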
What follows is a combinatorial description of $S$-duality from \cite{Whitehead(1962)} and \cite{Ranicki(1992)}.
\begin{defn}
Let $K \subset L$ be an inclusion of a simplicial subcomplex. The
\emph{supplement of} $K$ \emph{in} $L$ is the subcomplex of the
barycentric subdivision $L'$ defined by
\[
\overline K = \{ \sigma' \in L' \; | \; \textup{no face of} \;
\sigma' \; \textup{is in} \; K' \} = \bigcup_{\sigma \in L, \sigma
\notin K} D(\sigma,L) \subset L'
\]
\end{defn}
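For example, let $L = \del \Delta^2$ with vertices $\{0\},\{1\},\{2\}$ and let $K = \{0\}$ be a single vertex. The simplices of $L$ not in $K$ are $\{1\}$, $\{2\}$, $\{01\}$, $\{02\}$ and $\{12\}$, so the supplement $\overline K$ is the arc in the subdivided circle $L'$ spanned by the barycenters of these five simplices, that is the part of $L'$ ``furthest away'' from $K$.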
Next we come to the special case when $L = \del \Delta^{m+1}$. In
this case the dual cell decomposition of $\del \Delta^{m+1}$ can in
fact be considered as a simplicial complex, which turns out to be
convenient. First a definition.
\begin{defn}\label{defn:sigma-m}
Define the simplicial complex $\Sigma^m$ by
\begin{align*}
(\Sigma^m)^{(k)} &= \{ \sigma^\ast \; | \; \sigma \in (\del \Delta^{m+1})^{(m-k)} \} \\
\del_i \co (\Sigma^m)^{(k)} & \ra (\Sigma^m)^{(k-1)} \; \textup{for} \; 0 \leq i \leq k \; \textup{is} \; \del_i \co \sigma^\ast \mapsto (\delta_i \sigma)^\ast
\end{align*}
with $\delta_i \co (\del \Delta^{m+1})^{(m-k)} \ra (\del
\Delta^{m+1})^{(m-k+1)}$ given by
\[
\delta_i \co \sigma = \{0,\ldots,m+1\} \setminus \{j_0,\ldots,j_k\}
\mapsto \sigma \cup \{j_i\}, \quad (j_0 < j_1 < \cdots < j_k).
\]
\end{defn}
So $\Sigma^m$ has one $k$-simplex $\sigma^\ast$ for each
$(m-k)$-simplex $\sigma$ of $\del \Delta^{m+1}$ and $\sigma^\ast
\leq \tau^\ast$ if and only if $\sigma \geq \tau$.
\[
\deltapicture
\]
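For instance, when $m = 1$ the complex $\del \Delta^2$ is a triangle with vertices $\{0\},\{1\},\{2\}$ and edges $\{01\},\{02\},\{12\}$, so $\Sigma^1$ has the three vertices $\{01\}^\ast,\{02\}^\ast,\{12\}^\ast$ and the three edges $\{0\}^\ast,\{1\}^\ast,\{2\}^\ast$. For the edge $\{0\}^\ast$ we have $\{0\} = \{0,1,2\} \setminus \{1,2\}$, so $\del_0 \{0\}^\ast = \{01\}^\ast$ and $\del_1 \{0\}^\ast = \{02\}^\ast$. Hence $\Sigma^1$ is again a triangulated circle with three vertices and three edges.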
The usefulness of this definition is apparent from the following
proposition, namely that each dual cell in $\partial\Delta^{m+1}$
appears as a simplex in $\Sigma^m$.
\begin{prop}
There is an isomorphism of simplicial complexes
\[
\Phi \co (\Sigma^m)' \xra{\cong} (\del \Delta^{m+1})'
\]
such that for each $\sigma \in K \subset \del \Delta^{m+1}$ we have
\[
\Phi(\sigma^\ast) = D(\sigma,\partial\Delta^{m+1}) \quad
\textup{and} \quad \Phi(\sigma^\ast) \cap K' = D(\sigma,K)
\]
\end{prop}
Notice that since $\del \Delta^{m+1}$ is an $m$-dimensional manifold
the dual cell $D(\sigma,\del \Delta^{m+1})$ is a submanifold with
boundary of dimension $(m-|\sigma|)$ which coincides with the
dimension of $\sigma^\ast$.
\begin{proof}
The isomorphism $\Phi$ is given by the formula
\begin{align*}
\xymatrix{
(\sigma^\ast)'
=
\{ \hat{\sigma}^\ast_0\hat{\sigma}^\ast_1\ldots\hat{\sigma}^\ast_p \;|\; \sigma^\ast_p < \ldots <\sigma^\ast_1< \sigma^\ast_0\leq\sigma^\ast\} \ar[d]^{\Phi},
&
\hat{\sigma}^\ast_0\hat{\sigma}^\ast_1\ldots\hat{\sigma}^\ast_p \ar@{|->}[d]
\\
D(\sigma,\partial\Delta^{m+1})
=
\{ \hat{\sigma}_0\hat{\sigma}_1\ldots\hat{\sigma}_p \;|\; \sigma \leq \sigma_0< \sigma_1<\ldots<\sigma_p \},
&
\hat{\sigma}_0\hat{\sigma}_1\ldots\hat{\sigma}_p
}
\end{align*}
\end{proof}
The isomorphism of course induces a homeomorphism of geometric
realizations. For $m=2$ it looks like this:
\[
\flippedsigmapicture
\]
\begin{prop} \textup{\cite[Prop. 12.4]{Ranicki(1992)}} \label{prop:hom=cohom+S}
Let $\bE$ be a $\Omega$-spectrum of Kan $\Delta$-sets and $K$ a
finite simplicial complex. Then for $m\in\NN$ large enough we have
\[
H_n(K;\bE) \cong H^{m-n}(\Sigma^m,\splK; \bE)
\]
\end{prop}
\begin{proof}
The above proposition allows us to think of $K$ as being embedded in
$\Sigma^m$ and the complex $\Sigma^m/\splK$ is the quotient of
$\Sigma^m$ by the complement of a neighborhood of $K$. This is a
well known construction of an $m$-dimensional $S$-dual of $K$, which
is proved in detail for example in \cite[p. 265]{Whitehead(1962)}.
The construction there provides an explicit simplicial construction
of a map $\Delta' \co \Sigma^m\ra K_{+}\wedge
(\Sigma^m/\overline{K})$ which turns out to be such an $S$-duality.
\end{proof}
We remark that if $K$ is an $n$-dimensional Poincar\'e complex with
the SNF $\nu_K \co K \ra \BSG (m-n)$ then $\Sigma^m / \splK \simeq
Th(\nu_K)$.
Now we come to the promised alternative definition of the homology
of $K$.
\begin{defn} \label{defn:E-cycles}
Let $\bE$ be an $\Omega$-spectrum of $\Delta$-sets. An
$n$-dimensional $\bE$-cycle in $K$ is a collection
\[
x = \{x(\sigma) \in \bE_{n-m}^{(m-|\sigma|)} \; | \; \sigma\in K \}
\]
such that $\partial_ix(\sigma) =
\begin{cases}
x(\delta_i\sigma) & \text{if } \delta_i\sigma\in K\\
\emptyset & \text{if }\delta_i\sigma\notin K
\end{cases} \; (0\leq i\leq m-|\sigma|)$.
A cobordism of $n$-dimensional $\bE$-cycles $x_0$, $x_1$ in $K$ is a
$\Delta$-map
\[
y: (\Sigma^m,\overline{K})\otimes \Delta^1 \ra (\bE_{n-m},\emptyset)
\]
such that $y(\sigma\otimes i)= x_i(\sigma)\in \bE_{n-m}^{(m-|\sigma|)}$
for $\sigma\in K$ and $i=0,1$.
\end{defn}
\begin{prop}[{\!\cite[Prop. 2.8]{Ranicki(1992)}}]
There is a bijection between the set of cobordism equivalence classes of $n$-dimensional $\bE$-cycles in $K$ and the $n$-dimensional $\bE$-homology group $H_n(K,\bE)$.
\end{prop}
\begin{proof}
An $n$-dimensional $\bE$-cycle $x$ defines a $\Delta$-map
\[
(\Sigma^m,\splK)\ra \bE_{n-m}, \sigma^\ast\mapsto \begin{cases}
x(\sigma) & \sigma\in K \\ \emptyset & \sigma\notin K\end{cases}
\]
and cobordism relation of cycles corresponds to the homotopy
relation of $\Delta$-maps.
\end{proof}
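For example, for $\bE = \bbOmega[\bullet]{STOP}$ an $n$-dimensional $\bE$-cycle in $K$ is a collection of compatible manifold ads indexed by the simplices of $K$, glued along their faces as prescribed by $K$; this is precisely the structure of a $K$-dissection of a closed $n$-dimensional manifold, as used in Definition \ref{defn:stop-sign} below.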
\begin{prop}[{\!\cite[Prop. 13.7]{Ranicki(1992)},\cite[Remark 14.2]{Laures-McClure2009}}] \label{prop:L-thy-of-star-cat-is-co-hlgy}
\renewcommand{\labelenumi}{(\roman{enumi})}
Let $K$ be a finite simplicial complex and $\Lambda$ an algebraic
bordism category. Then
\begin{enumerate}
\item $\bL_{\bullet}(\Lambda)^{K_{+}}\simeq\bL_{\bullet}(\Lambda^{*}(K))$ and $\bL^{\bullet}(\Lambda)^{K_{+}}\simeq\bL^{\bullet}(\Lambda^{*}(K))$
\item $K_{+}\wedge\bL_{\bullet}(\Lambda)\simeq\bL_{\bullet}(\Lambda_{*}(K))$ and $K_{+}\wedge\bL^{\bullet}(\Lambda)\simeq\bL^{\bullet}(\Lambda_{*}(K))$
\end{enumerate}
\end{prop}
\begin{cor}
For the algebraic bordism category $\Lambda = \Lambda (\ZZ)$ we have
\[
L_{n}(\Lambda (\ZZ)_{*}(K)) \cong H_{n}(K, \bL_{\bullet}) \quad \textup{and} \quad
L^{n}(\Lambda (\ZZ)_{*}(K)) \cong H_{n}(K, \bL^{\bullet}).
\]
\end{cor}
\begin{proof}[Proof of Corollary]
For any $\Lambda$ we have
\[
L_{n}(\Lambda_\ast (K)) \cong \pi_{n}(\bL_{\bullet} (\Lambda_{*}(K))) \cong \pi_{n}(K_{+}\wedge \bL_{\bullet}(\Lambda)) \cong H_{n} (K,\bL_{\bullet}(\Lambda))
\]
and similarly in the symmetric case.
\end{proof}
\begin{proof}[Proof of (i)]
Since the morphisms in the category $\Lambda^\ast (K)$ only go from bigger to smaller simplices we can split an $n$-dimensional QAC $(C,\psi)\in\Lambda^{*}(K)$ over $K$ into a collection of $n$-dimensional QACs $\{(C_{\sigma},\psi_{\sigma}) \in \Lambda^{*}(\Delta^{|\sigma|})\}$ over standard simplices such that the $(C_{\sigma},\psi_{\sigma})$ are related to each other in the same way the corresponding simplices are related to each other in $K$, i.e. $C_\sigma(\partial_{i}\sigma)=C_{\partial_{i}\sigma}(\partial_{i}\sigma)$ for all $\sigma\in K$. The complex $(C_{\sigma},\psi_{\sigma})$ is a $|\sigma|$-simplex in $\bL_{n}(\Lambda)$ and the compatibility conditions are contained in the notion of a $\Delta$-map.
Hence we get
\begin{eqnarray*}
(C,\psi) & = & \{n\text{-dim. QAC }(C_\sigma,\psi_\sigma)\in\Lambda^{*}(\Delta^{|\sigma|})\;|\;
\\
& &\qquad\qquad \sigma\in K\text{ and } C_\sigma(\partial_{i}\sigma)=C_{\partial_{i}\sigma}(\partial_{i}\sigma)\}\\
& = & \Delta\text{-map } f_{C}: K_{+}\ra \bL_{n}(\Lambda)\text{ with } f_{C}(\sigma) = (C_\sigma,\psi_{\sigma}) \text{ for } \sigma\in K_{+}
\end{eqnarray*}
Thus
\begin{eqnarray*}
\bL_{n}(\Lambda^{*}(K))^{(k)} & = & \{n\text{-dim. QAC } (C,\psi)\in \Lambda^{*}(K)^{*}(\Delta^{k}) \simeq \Lambda^{*}(K\otimes \Delta^{k})\}\\
& = & \{ f:(K\otimes\Delta^{k})_{+}\ra\bL_{n}(\Lambda)\;|\; f \text{ is a pointed } \Delta\text{-map}\}\\
& = &(\bL_{n}(\Lambda)^{K_{+}})^{(k)}
\end{eqnarray*}
\end{proof}
\begin{proof}[Proof of (ii)]
For $m\in\NN$ large enough consider an embedding
$i:K\ra\partial\Delta^{m+1}$, the complex $\Sigma^{m}$ and the
supplement $\overline{K}$ in $\Sigma^m$ as in
Definition~\ref{defn:sigma-m}. The first observation is that there
is an isomorphism of algebraic bordism categories
\[
\Lambda_{*}(K) \cong \Lambda^{*}(\Sigma^m,\bar{K})
\]
This follows from the one-to-one correspondence
$\sigma \leftrightarrow \sigma^\ast$ between $k$-simplices of $K$
and $(m-k)$-simplices of $\Sigma^m \smallsetminus \overline{K}$, which has
the property that $\sigma \leq \tau$ if and only if $\sigma^\ast \geq
\tau^\ast$, together with the symmetry in the definition of the dualities
$T_\ast$ and $T^\ast$.
This observation leads to
\[
\bL_\bullet (\Lambda_\ast (K)) \cong \bL_{\bullet}(\Lambda^{*}
(\Sigma^m,\overline{K})) \simeq
\bL_{\bullet}(\Lambda)^{(\Sigma^{m},\overline{K})} \simeq K_{+} \wedge
\bL_{\bullet}(\Lambda).
\]
where the last homotopy equivalence is a spectrum version of the
isomorphism in Proposition \ref{prop:hom=cohom+S}.
\end{proof}
\begin{remark}
Recall that in section \ref{sec:cat-over-cplxs} we defined various structured algebraic complexes over $X$. By the theorems of this section some of them represent homology classes with coefficients in the $L$-theory spectra. As an alternative to the explicit construction above, a different approach in \cite{Weiss(1992)} proves that these homology groups $H_n(K,\bE)$ are induced by homotopy invariant and excisive functors $K\ra\bE(\Lambda_*(K))$ and hence that this construction is natural in $K$.
\end{remark}
\begin{definition} \label{defn:sym-sign-over-X}
Let $X$ be an $n$-dimensional closed topological manifold with a map $r \co X \ra K$ to a simplicial complex. The cobordism class of the $n$-dimensional SAC in $\Lambda (\ZZ)_\ast (K)$ obtained from any choice of the fundamental class $[X] \in C_n (X)$ in Construction \ref{con:sym-construction-over-cplxs-lower-star-mfd} does not depend on the choice of $[X]$ and hence defines an element
\[
\ssign_{K} (X) = (C(X),\varphi_K ([X])) \in H_n (K;\bL^\bullet)
\]
called the \emph{symmetric signature} of $X$ over $K$.
\end{definition}
\begin{definition} \label{defn:stop-sign}
Let $X$ be an $n$-dimensional closed topological manifold with a map $r \co X \ra K$ to a simplicial complex. Recall the spectrum $\bOmega^{\STOP}_\bullet$ from section \ref{sec:spectra}. Note that the $K$-dissection of $X$ obtained by making $r$ transverse to the dual cells gives a compatible collection of manifolds with boundary so that the assignment $\sigma \ra X[\sigma]$ is precisely an $n$-dimensional $\bOmega^{\STOP}_\bullet$-cycle. We call it the $\STOP$-\emph{signature} of $X$ over $K$ and denote
\[
\stopsign_{K} (X) \in H_n (K;\bOmega^{\STOP}_\bullet).
\]
\end{definition}
\begin{rem}
The symmetric signature $\ssign_K (X)$ can be seen as obtained from the $\STOP$-signature $\stopsign_K (X)$ by applying the symmetric signature map on the level of spectra, that is, the map $\ssign$ from Proposition \ref{prop:signatures-on-spectra-level}. In fact the $\STOP$-signature and hence the symmetric signature only depend on the oriented cobordism class of $X$, and so we obtain a homomorphism
\[
\ssign_{K} \co \Omega^{\STOP}_n (K) \ra H_n (K;\bL^\bullet).
\]
\end{rem}
\begin{definition} \label{defn:quad-sign-over-X}
Let $(f,b) \co M \ra X$ be a degree one normal map of $n$-dimensional closed topological manifolds and let $r \co X \ra K$ be a map to a simplicial complex. The cobordism class of the $n$-dimensional QAC in $\Lambda (\ZZ)_\ast (K)$ obtained from any choice of the fundamental class $[X] \in C_n (X)$ in Construction \ref{con:quad-construction-over-cplxs-lower-star} does not depend on the choice of $[X]$ and hence defines an element
\[
\qsign_{K} (f,b) = (\sC(f^{!}),e_\% \psi_K ([X])) \in H_n (K;\bL_\bullet)
\]
called the \emph{quadratic signature} of the degree one normal map $(f,b)$ over $K$. In fact the quadratic signature only depends on the normal cobordism class of $(f,b)$ in the set of normal invariants $\sN (X)$ and provides us with a function
\[
\qsign_{K} \co \sN(X) \ra H_n (K;\bL_\bullet).
\]
\end{definition}
In order to obtain an analogue of Proposition \ref{prop:L-thy-of-star-cat-is-co-hlgy} for $\bNL^\bullet$ spectra we need to introduce yet another algebraic bordism category associated to $\Lambda$ and $K$.
\begin{defn} \label{defn:Lambda-hat-category}
Let $\Lambda=(\AA,\BB,\CC)$ be an algebraic bordism category and $K$ a locally finite simplicial complex. Define the algebraic bordism category
\[
\widehat \Lambda (K) = (\AA_\ast (K),\BB_\ast (K),\BB_\ast (K))
\]
where $\AA_\ast (K)$ and $\BB_\ast (K)$ are as in section \ref{sec:cat-over-cplxs}.
\end{defn}
\begin{prop} \textup{\cite[Proposition 14.5]{Ranicki(1992)}}
Let $K$ be a finite simplicial complex and $\Lambda$ an algebraic bordism category. Then
\[
K_{+}\wedge\bNL^{\bullet}(\Lambda) \simeq \bNL^{\bullet}(\widehat \Lambda (K)).
\]
\end{prop}
To complete the picture we present the following proposition, which follows from Lemma \ref{lem:symmetric-poincare-means-unique-normal} and Proposition \ref{prop:L-thy-of-star-cat-is-co-hlgy}.
\begin{prop} \label{prop:NL-spectrum-of-lower-star-K}
We have
\[
\bNL^{\bullet} (\Lambda_\ast (K)) \simeq \bL^{\bullet} (\Lambda_\ast (K)) \simeq K_{+}\wedge\bL^{\bullet}(\Lambda)
\]
\end{prop}
\begin{rem} \label{rem:hlgical-assembly}
Recall the idea of the assembly map from section \ref{sec:assembly}. Via Proposition \ref{prop:L-thy-of-star-cat-is-co-hlgy} it induces a map
\[
A \co H_n (K;\bL_\bullet) = \pi_n (K_+ \wedge \bL_\bullet) \ra L_n (\ZZ[\pi_1 (K)]) = \pi_n (\bL_\bullet (\Lambda (\ZZ[\pi_1 (K)])))
\]
If $\pi_1 (K) = 0$ then this map can be thought of as the map induced on homology by the collapse map $K \ra \ast$. Similarly for the spectra $\bL^\bullet$ and $\bNL^\bullet$. However, this is not a phenomenon special to these spectra. In fact in \cite[chapter 12]{Ranicki(1992)} an assembly map
\[
A \co H_n (K;\bE) \ra \pi_n (\bE)
\]
is discussed for any spectrum $\bE$, hence for any homology theory. On the level of chains this map can be described via a certain ``gluing'' procedure. For the spectra $\bOmega^N_\bullet$ and $\bOmega^{\STOP}_\bullet$ this procedure coincides with the geometric gluing.
\end{rem}
\section{Connective versions} \label{sec:conn-versions}
An important technical aspect of the theory is the use of connective
versions of the $L$-theory spectra. This is related to the
difference between topological manifolds and ANR-homology manifolds.
In principle there are two ways to impose connectivity
restrictions. One is to fix the algebraic bordism category and
modify the definition of the $L$-groups and $L$-spectra. The other
is to modify the algebraic bordism category and keep the definition
of the $L$-groups and $L$-spectra. Both ways are convenient at some
stages.
\begin{prop}
Let $\Lambda$ be an algebraic bordism category and let $q \in \ZZ$.
Then there are $\Omega$-spectra of Kan $\Delta$-sets
$\bL_\bullet \langle q \rangle (\Lambda)$, $\bL^\bullet \langle q
\rangle (\Lambda)$, $\bNL^\bullet \langle q \rangle (\Lambda)$ with
homotopy groups
\begin{align*}
\pi_n \bL_\bullet \langle q \rangle (\Lambda) & = L_n (\Lambda) \; \textup{for} \; n \geq q, \; 0 \; \textup{for} \; n < q \\
\pi_n \bL^\bullet \langle q \rangle (\Lambda) & = L^n (\Lambda) \; \textup{for} \; n \geq q, \; 0 \; \textup{for} \; n < q\\
\pi_n \bNL^\bullet \langle q \rangle (\Lambda) & = NL^n (\Lambda)
\; \textup{for} \; n \geq q, \; 0 \; \textup{for} \; n < q.
\end{align*}
\end{prop}
\begin{defn} \label{defn:abc-connective-modification}
Let $\Lambda = (\AA,\BB,\CC)$ be an algebraic bordism category, and let $q \in \ZZ$. Denote by $\BB
\langle q \rangle \subset \BB$ the category of chain complexes in $\BB$ which are homotopy equivalent to $q$-connected chain complexes and $\CC \langle q \rangle = \BB \langle q \rangle \cap \CC$. The algebraic bordism categories $\Lambda \langle q \rangle$ and $\Lambda \langle 1/2 \rangle$ are defined by
\[
\Lambda \langle q \rangle = (\AA,\BB \langle q \rangle,\CC \langle q \rangle) \quad \textup{and} \quad \Lambda \langle 1/2 \rangle = (\AA,\BB\langle 0 \rangle,\CC \langle 1 \rangle)
\]
\end{defn}
\begin{notation}
In accordance with \cite{Ranicki(1992)} we will now use the notation $\Lambda \langle q \rangle (R)$ for $\Lambda (R) \langle q \rangle$ where $R$ is a ring with involution, $q \in \ZZ$ or $q = 1/2$ and $\Lambda (R)$ is the algebraic bordism category from Example \ref{expl:alg-bord-cat-ring}. Similarly $\widehat \Lambda \langle q \rangle (R)$ stands for $\widehat \Lambda (R) \langle q \rangle$.
\end{notation}
In general the groups $L^n (\Lambda \langle q \rangle (R)) = \pi_n \bL^\bullet (\Lambda \langle q \rangle (R))$ need not be isomorphic to $\pi_n \bL^\bullet \langle q \rangle (\Lambda (R))$, likewise for quadratic and normal $L$-groups. However, in certain special cases this holds, for example we have (\cite[Example 15.8]{Ranicki(1992)}):
\begin{align*}
\pi_n \bL_\bullet \langle 0 \rangle (\Lambda (\ZZ)) & \cong L_n (\Lambda \langle 0 \rangle (\ZZ)) \\
\pi_n \bL^\bullet \langle 0 \rangle (\Lambda (\ZZ)) & \cong L^n
(\Lambda \langle 0 \rangle (\ZZ))
\end{align*}
\begin{notation}
Again, to save space we abbreviate for $q \in \ZZ$ or $q = 1/2$:
\[
\bL^{\bullet} \langle q \rangle = \bL^{\bullet}(\Lambda \langle q \rangle (\ZZ)) \quad \bL_{\bullet} \langle q \rangle = \bL_{\bullet}(\Lambda \langle q \rangle (\ZZ)) \quad \bNL^{\bullet} \langle q \rangle = \bNL^{\bullet}(\widehat \Lambda \langle q \rangle (\ZZ)).
\]
\end{notation}
We also obtain a connective version of Proposition \ref{prop:fib-seq-of-quad-sym-norm}.
\begin{prop} \textup{\cite[Proposition 15.16]{Ranicki(1992)}} \label{prop:fib-seq-of-quad-sym-norm-connective-version}
We have a homotopy fibration sequence
\begin{equation}
\bL_\bullet \langle 1 \rangle \ra \bL^\bullet \langle 0 \rangle \ra
\bNL^\bullet \langle 1/2 \rangle.
\end{equation}
As a consequence we have
\[
\pi_0 \bL^\bullet \langle 0 \rangle \cong \pi_0 \bNL^\bullet \langle
1/2 \rangle \cong \ZZ.
\]
\end{prop}
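For example, combining this with the well-known computation of the quadratic $L$-groups of $\ZZ$, namely $L_n (\ZZ) \cong \ZZ, 0, \ZZ/2, 0$ for $n \equiv 0,1,2,3 \; (\textup{mod} \; 4)$, we obtain the homotopy groups of $\bL_\bullet \langle 1 \rangle$: they vanish for $n \leq 1$ and are given $4$-periodically by $\ZZ/2, 0, \ZZ, 0$ starting with $\pi_2$.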
Let $K$ be a simplicial complex. Now we consider the connective versions of the $L$-theory groups/spectra of algebraic bordism categories associated to $K$ in sections \ref{sec:cat-over-cplxs} and \ref{sec:gen-hlgy-thies}. Specifically we are interested in the algebraic bordism category $\Lambda \langle q \rangle (\ZZ)_\ast (K)$ from Definition \ref{defn:Lambda-star-categories} and $\widehat \Lambda \langle q \rangle (\ZZ) (K)$ from Definition \ref{defn:Lambda-hat-category}. The following proposition is an improvement on Propositions \ref{prop:L-thy-of-star-cat-is-co-hlgy}, \ref{prop:NL-spectrum-of-lower-star-K} and \ref{prop:algebraic-pi-pi-theorem}.
\begin{prop} \textup{\cite[Proposition 15.9,15.11]{Ranicki(1992)}} \label{prop:L-theory-over-K-are-homology-theories-conn-versions}
There are isomorphisms
\begin{align*}
\pi_n \bL^\bullet (\Lambda \langle q \rangle (\ZZ)_\ast (K)) & \cong H_n (K;\bL^\bullet \langle q \rangle (\ZZ)) \\
\pi_n \bL_\bullet (\Lambda \langle q \rangle (\ZZ)_\ast (K)) & \cong H_n (K;\bL_\bullet \langle q \rangle (\ZZ)) \\
\pi_n \bNL^\bullet (\widehat \Lambda \langle q \rangle (\ZZ) (K)) & \cong H_n
(K;\bNL^\bullet \langle q \rangle (\ZZ)) \\
\pi_n \bL_\bullet (\Lambda \langle q \rangle (\ZZ) (K)) & \cong L_n (\ZZ [\pi_1 (K)]) \quad \textup{for} \; n \geq 2q.
\end{align*}
\end{prop}
In addition there is an improved version of Proposition \ref{prop:signatures-on-spectra-level} as follows:
\begin{prop} \label{prop:connective-signatures-on-spectra-level}
The relative symmetric construction produces
\[
\textup{(1)}\quad \ssign \co \bbOmega[n]{STOP} \ra
\bL^n(\Lambda \langle 0 \rangle (\ZZ)) \; \leadsto \; \ssign \co
\bbOmega[\bullet]{STOP} \ra \bL^\bullet \langle 0 \rangle
\]
The relative normal construction produces
\[
\textup{(2)}\quad \nsign \co \bbOmega[n]{N} \ra
\bNL^n(\widehat \Lambda \langle 1/2 \rangle (\ZZ)) \; \leadsto \; \nsign \co
\bbOmega[\bullet]{N} \ra \bNL^\bullet \langle 1/2 \rangle.
\]
The relative normal construction together with the fibration sequence from Proposition \ref{prop:fib-seq-of-quad-sym-norm-connective-version} produces
\[
\textup{(3)}\quad \qsign \co \Sigma\invers\bbOmega[n]{N,\STOP} \ra
\bL_n(\Lambda \langle 1 \rangle (\ZZ)) \; \leadsto \; \qsign \co
\Sigma\invers\bbOmega[\bullet]{N,\STOP} \ra \bL_\bullet \langle 1 \rangle.
\]
\end{prop}
Part (1) is obvious, since a geometric situation provides only chain complexes concentrated in non-negative dimensions. Part (2) is shown in \cite[page 178]{Ranicki(1992)}. Part (3) follows from part (2) and the fibration sequence from Proposition \ref{prop:fib-seq-of-quad-sym-norm-connective-version}.
\section*{Experimental Section}
\textbf{Fabrication}
Aluminium films were deposited on Si(001) substrates by DC sputtering, and Si$_3$N$_4$~ and Sb$_2$S$_3$~ were deposited by radio frequency (RF) sputtering.
The chamber base pressure was 4$\times$10$^{-5}$ Pa and the sputtering pressure was 0.5 Pa.
The deposition rate was 0.5~\AA/min from an Sb$_2$S$_3$~ alloy target with a diameter of 50.8 mm and a purity of 99.9\%.
Si$_3$N$_4$~ was deposited by using a gas mixture of Ar: N$_{2}$=8:2 from a Si target.
To crystallise the Sb$_2$S$_3$~ in the optical filter, the samples were heated at 300 $^\circ$C for 30 minutes on a hot plate.\\
\textbf{Optical band gap measurement}
The optical band gaps were calculated by Tauc analyses\cite{tauc1968optical}.
As Sb$_2$S$_3$~ has a direct energy band gap, the relation $(\alpha\hbar\nu)^{2} \sim \hbar\nu - E_g$ was used, where $\alpha$ is the absorption coefficient, $\hbar\nu$ is the photon energy and $E_g$ is the band gap\cite{jellison1996parameterization}.\\
\textbf{Ellipsometry}
The optical constants of Sb$_2$S$_3$~ in the amorphous and crystalline states were measured using a spectroscopic ellipsometer (WVASE, J.A. Woollam Co.).
A 37~nm as-deposited Sb$_2$S$_3$~ film on a Si(001) substrate was used for the measurement.
A further sample with the same thickness was crystallised in a tube furnace at 300 $^\circ$C in an argon atmosphere for 20 minutes.
A heating rate of 4 K/min was used.
Both samples were measured by ellipsometry.
The wavelength range of the incident light was from 250~nm to 900~nm and the angle of incidence was 65.1$^{\circ}$.
The data was fitted by Lorentz and Tauc-Lorentz oscillator models for both amorphous and crystalline states.
The fit parameters are listed in the Supplementary Information table S2, whilst the spectroscopic ellipsometric parameters and the computed refractive indices are available at www.actalab.com.\\
\textbf{Reflectance spectra}
Reflectance spectra of the Al/Si$_3$N$_4$/Sb$_2$S$_3$/Si$_3$N$_4$/Al structures were measured using a QDI 2010 UV-visible-NIR range microspectrophotometer (CRAIC Technology Inc., California, USA).
The reflectance spectra were normalised to an Al mirror, which was considered 100\% reflective across the measured spectral range.
The absorptance maximum wavelengths were obtained by Gaussian fitting at the dip of the reflectance spectra. \\
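The dip-fitting step can be sketched as follows (an illustrative example on a synthetic spectrum, not the actual measurement pipeline; the dip position, depth and width are made-up values):

```python
import numpy as np
from scipy.optimize import curve_fit

def dip_model(wl, r0, depth, center, width):
    """Reflectance modeled as a flat baseline minus a Gaussian dip."""
    return r0 - depth * np.exp(-((wl - center) ** 2) / (2.0 * width ** 2))

# Synthetic reflectance spectrum with an absorptance maximum at 600 nm
wl = np.linspace(450.0, 750.0, 301)
refl = dip_model(wl, 0.9, 0.6, 600.0, 25.0)

# Initial guess taken from the location of the minimum, then refined
p0 = [0.9, 0.5, wl[np.argmin(refl)], 20.0]
popt, _ = curve_fit(dip_model, wl, refl, p0=p0)
absorptance_max_wl = popt[2]          # fitted dip centre (nm)
```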
\textbf{Laser switching}
A pump-probe system with pulse length from 20 ns to 2000 ns and pulse power up to 40 mW was used to switch the Al/Sb$_2$S$_3$/Al structure\cite{behera2017laser}.
A 658~nm pump laser with higher power was used to induce a phase transition in the Sb$_2$S$_3$~ film.
To measure the change in reflected signal, a 100 $\mu$W and 1 $\mu$s probe laser was used.
A 635~nm probe laser was used to focus the probe beam on the write mark.
The reflected probe signal was detected simultaneously by the fast silicon photodetectors.
The final data was measured by a digitising oscilloscope (NI PXIe-5162, 10 bit, 1.5 GHz).
A camera was used to image the sample surface.
We assumed that the crystallised fraction was directly proportional to the change in the probe light.\\
\textbf{Optical microscope images}
To compare the colours of the optical filters with different Sb$_2$S$_3$~ thicknesses, bright-field optical microscope images of the colour filters were taken using an Olympus BX51 microscope through a 10$\times$ objective lens.
Each colour square is 680 $\mu$m $\times$ 680 $\mu$m.
\section*{Acknowledgement}
This research was performed under the auspices of the SUTD-MIT International Design Center (IDC).
The research project was funded by the Samsung GRO, the A-star Singapore-China joint research program Grant No. 1420200046, and the SUTD Digital Manufacturing and Design Centre (DManD) Grant No. RGDM 1530302.
We are grateful for fruitful discussions with Seokho Yun.
\vspace{1cm}
\small
\bibliographystyle{advancedmaterials}
\section{Introduction}
Crowd analysis has gained considerable popularity in recent years, both in academia and in industrial AI. This growing trend is mainly due to population growth and the need for more precise public monitoring systems. Such systems have been shown to be effective for capturing crowd dynamics in public environment design~\cite{moussaid2011simple}, simulating crowd behavior for game design~\cite{silverman2006human}, and group activity~\cite{nabi2013temporal} and crowd monitoring for visual surveillance~\cite{su2013large,mehran2009abnormal,benabbas2011motion}.
A human expert observer may seem capable of monitoring a scene for unusual events in real time and reacting immediately~\cite{hospedales2012video}. However, psychological research shows that the ability to monitor concurrent signals is quite limited, even in humans~\cite{sulman2008effective}. In extremely crowded scenes, with multiple individuals exhibiting different behaviors, monitoring is a critical issue even for a human observer.
In the last few years, the computer vision community has focused on crowd behavior analysis and, in particular, has made considerable progress in crowd abnormality detection~\cite{mehran2009abnormal,mousavi2015crowd,mousavi2015analyzing,mousavi2015abnormality}. However, the lack of publicly available \emph{realistic} datasets (i.e., high density, various types of behaviors, etc.) has left researchers without a fair common test bed on which to compare their algorithms and evaluate the strength of their methods in real scenarios.
Besides the data issue, most of the proposed approaches employ only low-level visual features (e.g., motion and appearance; see~\cite{su2013large,mehran2009abnormal,roggen2011recognition,saxena2008crowd,rodriguez2011data}) to build abnormal-behavior models for crowds. However, many behaviors can hardly be characterized by low-level features alone and need to be represented with a more semantic description. The huge semantic gap between low-level video pixels and high-level crowd behavior concepts is itself a barrier to a deep understanding of the behaviors in the scene.
\begin{figure*}
\begin{center}
\includegraphics[width=4.5in]{images/emotion-base-description111.png}
\caption{Schematic illustration of the emotion-based crowd representation.}
\label{fig:panicfight}
\end{center}
\end{figure*}
For example, crowd behaviors that are visually very similar can appear entirely different in terms of being normal or abnormal once the \emph{emotion} of the participating individuals is considered (see Fig.~\ref{fig:panicfight}).
Despite the relatively large literature on emotion recognition from faces~\cite{pantic2000automatic,bassili1979emotion,busso2004analysis} and posture~\cite{coulson2004attributing,mota2003automated}, there are only a few works targeting emotion recognition from crowd motion~\cite{baig2015perception,baig2014crowd,mchugh2010perceiving} and, to the best of our knowledge, no work targets emotion recognition and abnormal behavior detection in a unified framework.
Inspired by recent work on attribute-based representations in the object/action recognition literature~\cite{liu2011recognizing,lampert2009learning,farhadi2010attribute}, we believe that human emotions can be used to represent crowd motion, thus helping crowd behavior understanding. In other words, to arrive at a deep understanding of crowd behavior, human emotions can be deployed to narrow the semantic gap between low-level motion information and high-level behavior semantics.
With this idea in mind, we create a dataset that includes annotations of both \emph{crowd behavior} and \emph{crowd emotion}. Such a dataset opens up avenues for both abnormality detection and emotion recognition, as well as for analyzing the correlations between these two tasks. It consists of a large set of video clips annotated with crowd behavior labels (e.g., ``panic'', ``fight'', ``congestion'', etc.) and crowd emotion labels (e.g., ``happy'', ``excited'', ``angry'', etc.).
We first evaluated a set of baseline methods on both tasks, showing that our dataset can effectively serve as a benchmark for both communities. We then analyzed the importance of crowd emotion information for crowd behavior understanding by predicting the crowd behavior label using ground-truth emotion information.
Finally, we introduced an emotion-based crowd representation,
which allows us to exploit emotion information without access to any ground-truth emotion labels at test time.
Our major contributions in this paper are: \textbf{(i)} introducing a novel crowd dataset with both behavior and emotion annotations; \textbf{(ii)} showing the tight link between the two tasks of crowd behavior understanding and emotion recognition; and \textbf{(iii)} presenting a method that jointly exploits the complementary information of these two tasks, significantly outperforming all baselines on both.
The rest of the paper is organized as follows: a short review of previous datasets and the specification of our proposed dataset are given in Sec. 2; the idea of emotion-based crowd representation for abnormal behavior detection is introduced in Sec. 3; in Sec. 4 we describe the considered baselines and the experiments on our emotion-based approaches and discuss the obtained results. Finally, in Sec. 5, we conclude the paper by briefly discussing other applications worth investigating, to promote further research on this new dataset.
\begin{table*}
\begin{center}
\renewcommand{\arraystretch}{1}
\label{tab:datasetscompare}
\scalebox{0.6}{
\begin{tabular}{p{2.9cm}|lp{1.5cm}llllll}
\noalign{\hrule height 1.3pt}
Dataset & UMN ~\cite{mehran2009abnormal} & UCSD~\cite{mahadevan2010anomaly} & CUHK~\cite{wang2009unsupervised} & PETS2009~\cite{ferryman2010pets2010} & ViF ~\cite{hassner2012violent} & Rodriguez's ~\cite{rodriguez2011data} & UCF ~\cite{solmaz2012identifying}& Our Dataset \\
\noalign{\hrule height 1.5pt}
Number of samples & 11 seq & 98 seq & 2 seq & 59 seq & 246 seq & 520 seq & 61 seq & 31 seq\\
\hline
Annotation level & frame & frame/pixel & video & frame & video & video & video & frame\\
\hline
Density & semi & semi & semi & semi & dense & dense & dense & dense\\
\hline
Type of scenarios & panic & abnormal objects & traffic & panic & fight & pedestrians & crowd & multi-category\\
\hline
Indoor/Outdoor & both & outdoor & outdoor & outdoor & outdoor & outdoor & outdoor & outdoor\\
\hline
Meta-data & no & no & no & no & no & no & no & crowd emotion\\
\noalign{\hrule height 1.5pt}
\end{tabular}
}
\end{center}
\caption{Datasets for crowd behavior analysis}
\end{table*}
\section{The crowd anomaly detection dataset}
In this section, after a brief review on the well-known datasets for the task of crowd behavior analysis, we introduce our dataset in details.
\subsection{Previous Datasets}
Despite the significant demand for understanding crowd behaviors,
there is still a large gap between the accuracy and efficiency of behavior recognition methods in research labs and in the real world. The most important reason is that the majority of proposed algorithms are evaluated on non-standard datasets with only a small number of sequences, taken under controlled circumstances and with limited behavior classes. \\
Among all the existing datasets, we select a subset of the most cited ones and analyze their characteristics. The selected datasets are
UMN~\cite{mehran2009abnormal}, UCSD~\cite{mahadevan2010anomaly}, CUHK~\cite{wang2009unsupervised}, PETS~\cite{ferryman2010pets2010}, Violent-Flows (ViF)~\cite{hassner2012violent}, Rodriguez's~\cite{rodriguez2011data} and UCF~\cite{solmaz2012identifying}.
We also select a set of criteria along which the datasets can be compared: \emph{number of samples, annotation level, crowd density, type of scenarios, indoor/outdoor, and meta-data}.\\
\emph{Number of samples} is an important characteristic of a dataset: a sufficient number of recorded videos is helpful not only for training with more samples, but also for evaluation. \emph{Annotation level} is another important criterion; it can be pixel-level, frame-level or video-level, and reflects the richness of a dataset in terms of annotation. \emph{Density} is another important factor to consider: denser crowd scenes result in more occlusion and clutter, which make different types of behaviors harder to detect.
\emph{Type of scenarios} is another critical characteristic of a dataset, reflecting the type of events happening across the video sequences. Datasets with more scenario types are more challenging, since the proposed algorithms must work under a larger variation of conditions (i.e., a real-world setup).
The \emph{Indoor/Outdoor} criterion concerns the location where the video sequences were recorded, which strongly affects illumination conditions, background clutter, occlusions, etc.\\
Last but not least, \emph{Meta-data} is another important feature of a dataset, which we emphasize in this paper. It is also one of the features that makes our dataset unique and enables researchers to move toward higher-level interpretations of the video sequences. Specifically, in our dataset we introduce ``crowd emotion'' as extra annotation.
In table \ref{tab:datasetscompare} we describe all the aforementioned crowd behavior datasets in terms of the explained features. A common shortcoming of all of them is the lack of any extra annotation, so that they can only rely on low-level features to discriminate between behavior classes. A lack of diverse subjects and scenarios, low crowd density and a limited number of video sequences are other limitations of the previous datasets.
\subsection{The Proposed Dataset}
The presented dataset consists of 31 video sequences in total, comprising about 44,000 normal and abnormal video clips. The videos were recorded at 30 frames per second using a fixed video camera elevated at a height overlooking individual walkways, with a resolution of $554\times235$.
The crowd density varied, ranging from sparse to very crowded. In addition to normal and abnormal behavior scenarios, a few scenarios involving abnormal objects that can be regarded as threats to the crowd were also recorded to make the dataset more realistic. ``A motorcycle crossing the crowded scene'', ``a suspicious backpack left by an individual in the crowd'' and ``a motorcycle left among many people'' are some examples of such scenarios.
In the proposed dataset we represent five typical types of crowd behavior. Each scenario topology was sketched in harmony with circumstances usually met in crowding issues. They correspond to the flow of individuals in a free environment (neutral), a crowded scene containing abnormal objects (obstacle), evacuation of individuals from the scene (panic), physical altercation between individuals (fight), and a group of people gathering together (congestion). For each behavior type we recorded videos from two fields of view with crowd densities ranging from sparse to very crowded.
All the videos in the dataset start with normal-behavior frames and end with abnormal ones. For the emotion annotation we follow the standard definition of emotion in psychology as ``a feeling evoked by environmental stimuli or by internal body states''~\cite{bower2014emotional}, which can modulate human behavior in terms of actions in the environment or changes in the internal status of the body. Since no crowd abnormal-behavior dataset with emotion annotation is available in the computer vision literature, following the datasets for face and gesture, we defined six types of basic emotions in our dataset: ``Angry'', ``Happy'', ``Excited'', ``Scared'', ``Sad'' and ``Neutral''.
Since the perception of each of the aforementioned emotions can be subjective, we asked different annotators to annotate separately and then combined their labels using majority voting. To ensure consistency between annotators, we conducted an agreement study, finding an overall agreement of 92\% with a Kappa value of 0.81; the maximum inconsistency was between Happy and Excited (confused 4\% of the time).
As expected, our observations confirm that emotion annotations can provide a more descriptive representation of crowd behavior, which is clear even from the annotations alone. Specifically, in different circumstances, the emotions of an individual or a crowd have a crucial effect on the decision-making process, and emotion information is much richer than the pure low-level motion information of videos.
The videos, ground-truth annotations and baseline code will be made publicly available soon after publication. We believe this dataset can serve as a benchmark for future research in both abnormal behavior detection and emotion recognition.
\begin{figure*}
\begin{center}
\includegraphics[width=0.5\linewidth]{images/method-image.png}
\end{center}
\caption{The emotion classifiers as mid-level representation for abnormal behaviors.}
\label{fig:emotionbehavelabels}
\end{figure*}
\section{Emotion-Based Crowd Representation}
If the ground-truth emotion labels are available during both training and testing, we can simply consider them as part of the input data and solve a standard classification problem (see the \emph{emotion-aware baseline} for evaluation, Sec.~\ref{seqemotionbased}). But things are not so straightforward when no ground-truth emotion label is available during testing. In this section we formally address the latter case.\\
Given a set of $N$ video clips in the dataset $\{(x^{(n)},e^{(n)},y^{(n)})\}_{n=1}^N$, the goal is to learn
a model employing the emotion labels $e$ that can be used to assign the abnormal behavior class label $y$ to an unseen test video clip $x$. In the training phase, each example is represented as a tuple $(\textbf{f},\textbf{e},y)$ where $\textbf{f}\in\mathbb{F}^d$ is the $d$-dimensional low-level feature extracted from video clip $x$ (see Fig.~\ref{fig:emotionbehavelabels}). The abnormal behavior class label of the clip is represented by $y\in\mathbb{Y}$. The crowd emotion of the video clip $x$ is denoted by a $K$-dimensional vector $\textbf{e} = (e_1, e_2, ..., e_K)$, where $e_k\in\mathbb{E}_k$ $(k = 1, 2, ..., K)$ indicates the $k$-th emotion of the video clip. For example, if the $k$-th emotion attribute is ``Angry'', we have $\mathbb{E}_k = \{0, 1\}$, where $e_k = 1$ means the crowd is ``Angry'' and $e_k = 0$ means it is not. Since our dataset is designed to also support the standard multi-label emotion recognition setup, here we describe each video clip with a binary-valued emotion attribute vector with a single non-zero value, i.e. $\mathbb{E}_k = \{0, 1\}$ $(k = 1, 2, ..., K)$, s.t. $\|\textbf{e}\|_0=1$. We emphasize, however, that our proposed method is not limited to binary-valued attributes with a single emotion and can simply be extended to multi-emotion and continuous-valued (fuzzy) attributes. Discarding the emotion information, we can simply train a classifier $ \mathcal{C}\colon\mathbb{F}^d\rightarrow\mathbb{Y}$ that maps the feature vector $f$ to a behavior class label $y$ (see the \emph{low-level visual feature baseline} for evaluation, Sec.~\ref{seqlowlevel1}). However, by employing emotion as a mid-level attribute to represent the crowd motion, the classifier $\mathcal{C}$ is decomposed into:
\begin{equation}
\label{eq1}
\begin{split}
\mathcal{C} & =\mathcal{B}(\mathcal{E}(f)) \\&
\mathcal{E}\colon\mathbb{F}^d\rightarrow\mathbb{E}_k \ \mbox{and} \ \mathcal{B}\colon\mathbb{E}_k\rightarrow\mathbb{Y}
\end{split}
\end{equation}
where $\mathcal{E}$ consists of the $K$ individual emotion classifiers $\{\mathcal{C}_{e_i}(f)\}_{i=1}^K$, each classifier $\mathcal{C}_{e_i}$ mapping $f$ to the corresponding $i$-th emotion, and $\mathcal{B}$ maps an emotion attribute $\textbf{e}$ to a behavior class label $\textbf{y}\in\mathbb{Y}$.
The emotion classifiers are learned during training using the emotion annotations provided by our dataset. Particularly, the classifier $ \mathcal{C}_{e_i}(f)$ is a binary linear SVM trained by labeling the examples of all behavior classes whose emotion value $e_i=1$ as positive examples and others as negative.
Assuming no emotion ground-truth information is available at test time, we represent each video clip $x$ by the vector $\phi(x)$:
\begin{equation} \label{eq2}
\phi(x) = [s_1(x),s_2(x),...,s_K(x)]
\end{equation}
where $s_k(x)$ is the confidence score of the $k$-th emotion classifier $\mathcal{C}_{e_k}$ in $\mathcal{E}$. This \emph{emotion-based crowd representation} vector has an entry for each emotion type, reflecting the degree to which a particular emotion is present in a video clip (see the \emph{emotion-based crowd representation experiments}, Sec.~\ref{seqemotionbased}). The mapping $\mathcal{B}$ is finally obtained by training a multi-class linear SVM for the behavior classes on the emotion-based crowd representation vectors. \\
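The decomposition above can be sketched end-to-end with scikit-learn (a toy sketch on synthetic data, not the authors' implementation; the feature dimensions, data generation and SVM hyperparameters are all assumptions):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
K, d, n = 6, 20, 300                      # emotions, feature dim, clips (toy sizes)
X = rng.normal(size=(n, d))               # stand-ins for the BoW descriptors f
emo = rng.integers(0, K, size=n)          # one emotion label per clip
X[np.arange(n), emo] += 3.0               # make the emotions weakly separable
beh = emo % 5                             # toy behavior labels derived from emotion

# E: one binary linear SVM per emotion (clips with e_k = 1 are positives)
emotion_clfs = [LinearSVC().fit(X, (emo == k).astype(int)) for k in range(K)]

def phi(feats):
    """Emotion-based representation: stack the K confidence scores s_k(x)."""
    return np.column_stack([c.decision_function(feats) for c in emotion_clfs])

# B: multi-class linear SVM for behaviors on top of the emotion scores
behavior_clf = LinearSVC().fit(phi(X), beh)
pred = behavior_clf.predict(phi(X))
```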
\noindent\textbf{Latent Emotion:}
As mentioned above, we select crowd emotions as attributes, which are discriminative and yet able to capture the intra-class variation of each behavior. Note that intra-class variation may cause video clips to be associated with different sets of emotion information despite belonging to the same behavior class. For instance, the behavior class \textit{Congestion} in some video clips might carry the \textit{Angry} emotion attribute, while in others it may carry the \textit{Happy} emotion attribute (see Fig.~\ref{fig:panicfight}). To address this problem, we treat the emotion attributes as latent variables and learn the model using the latent SVM~\cite{felzenszwalb2008discriminatively, wang2009max}. We aim at learning a classifier $f_W$ that predicts the behavior class of an unseen video clip $x$ during testing. Specifically, a linear model is defined as:
\begin{equation}\label{equation3}
W^T \Psi (x,y,e)=W^T_x \psi_1 (x) +\sum_{l\in\mathbb{E}} W^T_{e_l} \psi_2 (x,e_l) + \sum_{l,m\in\mathbb{E}} W^T_{e_l, e_m} \psi_3 (e_l,e_m)
\end{equation}
where the parameter vector is $W=\{W_x;W_{e_l};W_{e_l,e_m}\}$ and $\mathbb{E}$ is the emotion attribute set. The first term in Eq.~\ref{equation3} scores how well the raw feature $\psi_1 (x)$ of a video clip matches the template $W_x$, a set of coefficients learned from the raw features $x$. The second term in Eq.~\ref{equation3} provides the score of a specific emotion and is used to indicate the presence of an emotion in the video clip $x$.
The initial value of $e_l$ is inherited from the behavior label during training, and is given by a pre-trained emotion classifier in the testing phase (see Sec.~3). The third term is defined to capture the co-occurrence of pairs of emotions. Specifically, the feature vector $\psi_3$ of a pair $e_l$ and $e_m$ is a $|\mathbb{E}|\times|\mathbb{E}|$-dimensional indicator of the pair configuration, and the associated $W^T_{e_l, e_m}$ contains the weights of all configurations.
From a set of training instances, the model vector $W$ is learned by solving the following objective function:
\begin{equation}\label{equation4}
W^*=\arg\min_{W} \lambda||W||^2+\sum_{j=1}^{n} \max(0,1-y_j\, f_W(x_j))
\end{equation}
where $\lambda$ is the trade-off parameter controlling the amount of regularization, and the second term is a soft-margin hinge loss.
Since the objective function in Eq.~\ref{equation4} is semi-convex, a local optimum can be obtained by coordinate descent~\cite{felzenszwalb2008discriminatively}.
In our current implementation, each emotion has two states, $\{0\}$ and $\{1\}$, and belief propagation~\cite{felzenszwalb2008discriminatively} is employed to find the best emotion configuration (see the \emph{latent-emotion crowd representation experiments}, Sec.~\ref{seqemotionbased}).
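A minimal sketch of the scoring function of Eq.~\ref{equation3} and of the inference over binary emotion configurations is given below (toy sizes and random weights, purely illustrative; exhaustive enumeration over the $2^K$ configurations stands in for the belief propagation used in the paper):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, K = 8, 4                                   # feature dim, number of emotions (toy)
W_x = rng.normal(size=d)                      # raw-feature template
W_e = rng.normal(size=(K, d))                 # one template per active emotion
W_pair = rng.normal(size=(K, K))              # pairwise co-occurrence weights

def score(x, e):
    """W^T Psi(x, e): unary raw term + per-emotion terms + pairwise terms."""
    s = W_x @ x
    s += sum(W_e[l] @ x for l in range(K) if e[l])           # psi_2 terms
    s += sum(W_pair[l, m] for l in range(K)
             for m in range(l + 1, K) if e[l] and e[m])      # psi_3 terms
    return s

def best_emotion_config(x):
    """Exhaustive search over all 2^K binary emotion configurations."""
    return max(itertools.product((0, 1), repeat=K), key=lambda e: score(x, e))

x = rng.normal(size=d)
e_star = best_emotion_config(x)
```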
\section{Experiments}\label{exp}
In this section, we first explain the baseline methods used to extract low-level visual features from our dataset. We then describe the experiments regarding the emotion-based crowd representation for crowd behavior understanding. Note that the evaluation protocol was fixed across all experiments. We split the train and test data in a leave-one-sequence-out fashion: 31 times (once per video sequence), we leave the clips of one sequence out for testing and train on the remaining 30 sequences. As the evaluation measure we use the average accuracy, both in the tables and in the confusion matrices. Note that all confusion matrices are based on dense trajectory features.
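The leave-one-sequence-out protocol can be sketched as follows (an illustrative helper, not the evaluation code used for the paper; the sequence ids are hypothetical and shrunk to three sequences for the toy example):

```python
import numpy as np

def leave_one_sequence_out(seq_ids):
    """Yield (train_idx, test_idx) pairs: each sequence is held out once
    and the clips of all remaining sequences are used for training."""
    seq_ids = np.asarray(seq_ids)
    for s in np.unique(seq_ids):
        test = np.flatnonzero(seq_ids == s)
        train = np.flatnonzero(seq_ids != s)
        yield train, test

# Toy example: 6 clips spread over 3 sequences -> 3 folds
# (with the paper's 31 sequences this would give 31 folds)
folds = list(leave_one_sequence_out([0, 0, 1, 1, 2, 2]))
```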
\subsection{Baseline Methods}\label{seqlowlevel1}
\noindent\textbf{Low-level Visual Feature Baseline:}\label{baseline1}
As low-level features, we exploited the well-known dense trajectories~\cite{wang2011action,wang2012abnormal}, extracting them for each video clip using the code provided by~\cite{wang2011action}. For this purpose, we computed state-of-the-art feature descriptors, namely the histogram of oriented gradients (HOG)~\cite{dalal2005histograms}, histogram of optical flow (HOF)~\cite{laptev2008learning}, motion boundary histogram (MBH)~\cite{dalal2006human} and dense trajectories~\cite{wang2011action}, within space-time patches to leverage the motion information in dense trajectories. The size of the patch is $32\times 32$ pixels and 15 frames.\\
We use a bag-of-words representation for each clip to create a low-level visual feature from the extracted descriptors. In particular, we build a codebook for each descriptor (HOG, HOF, MBH, and Trajectories), fixing the number of visual words to $d$=1000. To save computation time, we cluster only a subset of randomly selected training features using k-means. Descriptors are assigned to their closest vocabulary word using the Euclidean distance, and the resulting histograms of visual words are used as video descriptors. For classification we use a standard one-vs-all multi-class SVM. We separately evaluate the HOG, HOF, MBH, Trajectory and Dense Trajectory descriptors with the ground-truth behavior labels; the average accuracy of each is presented in the first column of Table~\ref{tab:accuracylabel}. As can be seen, the dense trajectory feature achieves 38.71\% accuracy in crowd abnormality detection, outperforming the four other descriptors.
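The bag-of-words encoding described above can be sketched as follows (a toy sketch with reduced sizes, since the paper uses $d$=1000 visual words, and random stand-ins for the real descriptors):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_words, desc_dim = 50, 96                 # toy vocabulary size and descriptor dim
train_descs = rng.normal(size=(2000, desc_dim))   # subset of training descriptors

# Build the visual vocabulary with k-means on the descriptor subset
kmeans = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(train_descs)

def bow_histogram(clip_descs):
    """Assign each descriptor to its nearest word and L1-normalise the counts."""
    words = kmeans.predict(clip_descs)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

h = bow_histogram(rng.normal(size=(300, desc_dim)))   # one clip's BoW descriptor
```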
Fig.~\ref{fig:conf_all} (b) shows the confusion matrix over the behavior categories based on dense trajectory features. The ``\textit{Panic}'' category has the best result, 74.82\%, compared to the other behavior classes, probably because it is a simpler task. Its largest confusion was with ``\textit{Fight}'', which can be explained by the similarity of the motion patterns in these two categories (very sharp movements).
\begin{table}
\begin{center}
\scalebox{1}{
\begin{tabular}{cccc}
\noalign{\hrule height 1.5pt}
& \multicolumn{3}{c}{Our dataset}\\
{} & low-level visual feature & emotion-aware & emotion-based\\
\hline
Dense trajectory & 38.71 & 83.79 & 43.64 \\
\hline
\noalign{\hrule height 1.5pt}
\end{tabular}
}
\end{center}
\caption{Comparison of the dense trajectory descriptor on the low-level visual feature, emotion-aware and emotion-based settings in our dataset. We report the average accuracy over all classes.}
\label{tab:accuracylabel1}
\end{table}
\begin{table}
\begin{center}
\scalebox{1}{
\begin{tabular}{cccc}
\noalign{\hrule height 1.5pt}
& \multicolumn{3}{c}{Our dataset}\\
{} & low-level visual feature & emotion- based & latent-emotion\\
\hline
Trajectory & 35.30 & 40.05 & 40.04\\
HOG & 38.80 & 38.77 & 42.18 \\
HOF & 37.69 & 41.50 & 41.51 \\
MBH & 38.53 & 42.72 & 42.92 \\
\hline
Dense Trajectory & \textbf{38.71} & \textbf{43.64} & \textbf{43.90} \\
\noalign{\hrule height 1.5pt}
\end{tabular}
}
\end{center}
\caption{Comparison of different feature descriptors (Trajectory, HOG, HOF, MBH and Dense Trajectory) on low-level visual feature, emotion-based and latent-emotion categories in our dataset. We report average accuracy over all classes for our dataset.}
\label{tab:accuracylabel}
\end{table}
\subsection{Emotion-based representation experiments}\label{seqemotionbased}
In this part we describe the experiments regarding our proposed emotion-based methods. In the first experiment, we assume access to emotion labels at both training and test time; in the second, we have access to emotion labels only at training time.\\
\noindent\textbf{Emotion-aware Baseline:}
In this experiment, we use the ground-truth crowd-emotion labels for feature construction. Since we have six emotion classes, namely ``Angry'', ``Happy'', ``Excited'', ``Scared'', ``Sad'' and ``Neutral'', we first create a 6-dimensional binary feature vector for all test and train data; for example, a video clip with emotion class ``Happy'' is represented as $\{0,1,0,0,0,0\}\in\mathbb{E}^6$. Using these features, we train a multi-class SVM classifier on the abnormal behavior labels.
At test time we evaluate the trained classifier on the test examples. In the second column of Table~\ref{tab:accuracylabel1} we report the average accuracy over all behavior categories.
Such a significant margin suggests that a precise emotion recognition method can be very helpful for crowd behavior understanding. Inspired by this result, in the following experiments we employ emotion as a mid-level representation of crowd motion. \\
\noindent\textbf{Emotion-based Crowd Representation Experiment:}
In this part, we first used the ground-truth emotion labels to separately evaluate the aforementioned low-level feature descriptors on emotion recognition. Fig.~\ref{fig:conf_all} (c) shows the confusion matrix over the emotion categories based on dense trajectory features, with an average accuracy of 34.13\%. Although these results are modest, they can still be exploited for abnormal behavior detection.
We now assume that no emotion label is available for the test data, so we learn a set of binary linear SVMs using the emotion labels of the training data. We call these the emotion classifiers $ \mathcal{C}_{e_i}(f)$. Their output is a vector in which each dimension is the confidence score of an emotion prediction. We consider this vector as the emotion-based crowd representation for behavior classification: we extract it for all train and test clips, and then train a multi-class SVM with the behavior labels. This behavior classifier is finally evaluated on the test data to report the final behavior classification accuracy.\\
We applied this method separately to the HOG, HOF, MBH, Trajectory and Dense Trajectory low-level feature descriptors. The average accuracy for each is presented in the second column of Table~\ref{tab:accuracylabel}. The dense trajectory feature achieved the best precision, 43.64\%, among the low-level features. Compared with the low-level visual feature baseline, our method attains the highest accuracy, improving it by almost 5\%.
In the confusion matrix in Fig.~\ref{fig:conf_all} (a), the best detection result again belongs to the ``\textit{Panic}'' behavior class with 71.87\%, and its largest confusion is with the ``\textit{Fight}'' class at 11.88\%. On the other hand, the worst detection result belongs to the ``\textit{Congestion}'' class, whose largest confusion, 21.92\%, is with the ``\textit{Panic}'' class. These results are in line with the average accuracies obtained for the emotion classifiers and the emotion-aware baseline, and support the claim that better emotion classifiers and more precise emotion labels would further boost the performance.\\
\noindent\textbf{Latent-Emotion Crowd Representation Experiment:}
Finally, we treated the emotion labels as latent variables and learned the model using the latent SVM. In the third column of Table~\ref{tab:accuracylabel}, the best result, 43.90\%, belongs to this experiment. This suggests that using crowd emotion as a mid-level representation yields more accurate detection of the crowd behavior classes.
\begin{figure*}
\begin{tabular}{cc}
\input{resurces/midlevel.tex} &\input{resurces/behave.tex}\\
(a)&(b)\\
\multicolumn{2}{c}{\input{resurces/emotion.tex}}\\
\multicolumn{2}{c}{(c)}
\end{tabular}
\caption{(a): Confusion matrix for each emotion-based class. (b): Confusion matrix for each low-level visual feature class. (c): Confusion matrix for six predefined emotion classes. }
\label{fig:conf_all}
\end{figure*}
\section{Conclusions and Future Works}
In this paper, we have proposed a novel crowd dataset with annotations of both abnormal crowd behavior and crowd emotion. We believe this dataset can not only be used as a benchmark in the computer vision community, but can also open doors toward understanding the correlations between the two tasks of ``crowd behavior understanding'' and ``emotion recognition''. As a second contribution, we presented a method that jointly exploits the complementary information of these two tasks, significantly outperforming all baselines on both. Future work will be directed towards recognizing novel abnormal behavior classes with no training samples available, by manually defining the emotion-to-behavior mapping function.
\section{Preliminaries}
Henceforth $R$ stands for an associative unital ring and $S$ denotes a monoid which possesses a group $S^{-1}S$ of left quotients.
Recall that this is the case exactly when the monoid $S$ is left
and right cancellative and satisfies the left Ore condition. That
is, for any $s_1,s_2\in S$, there exist $t_1, t_2\in S$ such that
$t_1s_1=t_2s_2$.
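For instance, every commutative cancellative monoid satisfies the left Ore condition: given $s_1,s_2\in S$ one may take $t_1=s_2$ and $t_2=s_1$, so that
$$t_1s_1=s_2s_1=s_1s_2=t_2s_2.$$
In particular, the cyclic monoid $\langle\si\rangle$ generated by a single injective endomorphism $\si$, which appears in the examples below, is of this type.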
Let $\phi\colon
S\rightarrow \mbox{End}(R)$ denote the action of $S$ on
$R$ by injective endomorphisms. For any $s\in S$, the
endomorphism $\phi(s)\in\mbox{End}(R)$ will be denoted by
$\phi_s$.
\begin{definition}
\label{ext}
An over-ring $\A$ of $R$ is called an $S$-Cohn-Jordan extension ($CJ$-extension, for short) of
$R$ if:
\begin{enumerate}
\item the action of $S$ on $R$ extends to an action of $S$ (also denoted by $\phi$) on
$\A$ by automorphisms, i.e. $\phi_s$ is an automorphism of $\A$, for any $s\in S$.
\item every element $a\in \A$ is of the form $a=\phi_s^{-1}(b)$, for some
suitable $b\in R$ and $s\in S$.
\end{enumerate}
\end{definition}
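As a concrete instance to keep in mind (cf. Example \ref{Bezout example} below), let $R=K[x]$ over a field $K$, let $S=\langle\si\rangle$ act by the endomorphism determined by $\si(x)=x^2$, and let $\A=K[x^{\frac{1}{2^n}}\mid n\in \mathbb{N}]$. Then $\si$ extends to an automorphism of $\A$ via $\si(x^{\frac{1}{2^n}})=x^{\frac{1}{2^{n-1}}}$, and every $a\in \A$ satisfies $\si^n(a)\in R$ for a suitable $n$, so both conditions of the definition hold.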
As mentioned in the introduction, the $CJ$-extension $\A$ exists
and is unique up to an $R$-isomorphism (see also \cite{jm2}).
Hereafter, as in the above definition, $\phi_s$ will also denote
the automorphism $\phi(s)$ of $\A$ and $\phi_{s^{-1}}$ will stand for its inverse $(\phi_s)^{-1}$, where $s\in S$. In particular, the preimage in $R$ of a subset $X$ of $R$ under the action of $s\in S$ is equal to $\phi_{s^{-1}}(X)\cap R$.
\begin{definition}\label{def. admisible}
A set $\{X_s\}_{s\in S}$ of subsets of $R$ is called
$S$-admissible if, for any $k,s\in S$, we have $R\cap
\phi_{s^{-1}}(X_{sk})=X_k$. For such a set let $\Delta(\{X_s\}_{s\in
S})=\bigcup_{s\in S}\p{s}{X_s}\subseteq \A$.
\end{definition}
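In the special case $S=\langle\si\rangle\cong (\mathbb{N},+)$, the definition unwinds as follows: an $S$-admissible set is a sequence $\{X_n\}_{n\geq 0}$ of subsets of $R$ such that
$$R\cap \si^{-m}(X_{m+n})=X_n, \quad\mbox{for all } m,n\geq 0,$$
and then $\Delta(\{X_n\}_{n\geq 0})=\bigcup_{n\geq 0}\si^{-n}(X_n)\subseteq \A$, where $\si$ also denotes the extended automorphism of $\A$.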
\begin{rem}\label{remark}
Let $\{X_s\}_{s\in S}$ be an $S$-admissible set. Then
$\pn{s}{X_k}\subseteq X_{sk}$, for any $k,s\in S$. Indeed
$\pn{s}{X_k}=\pn{s}{R\cap \phi_{s^{-1}}(X_{sk})}\subseteq
\pn{s}{R}\cap X_{sk}\subseteq X_{sk}$.
\end{rem}
\begin{lem} \label{X gives admisible}
Let $X$ be a subset of $\A$ and $\Gamma(X)=\{X_s=\phi_s(X)\cap
R\}_{s\in S}$. Then $\{X_s \}_{s\in S}$ is an $S$-admissible set
of subsets of $R$ and $X=\bigcup_{s\in S}\p{s}{X_s}$, i.e.
$\Delta\Gamma(X)=X$.
\end{lem}
\begin{proof} Let $s, k\in S$. Notice that
$R\cap\p{s}{X_{sk}}=R\cap\p{s}{\pn{sk}{X}\cap R}=R\cap \pn{k}{X}\cap \p{s}{R}=R\cap
\pn{k}{X}=X_k$,
as $R\subseteq \p{s}{R}$. This shows that $\{X_s \}_{s\in S}$ is an $S$-admissible
set.
The inclusion $X\subseteq \bigcup_{s\in S}\p{s}{X_s}$ is a consequence of the fact that
for any $x\in X$, there is $s\in S$ such that $\pn{s}{x}\in R$. The reverse inclusion holds, since $\phi_s$ is monic, for every $s\in
S$.
\end{proof}
Notice that the set of all $S$-admissible sets has a
natural partial ordering given by
$$\{X_s\}_{s\in S}\leq \{Y_s\}_{s\in S} \;\mbox{ if and only if }X_s\subseteq Y_s,\;\mbox{for all } s\in S.$$
\begin{prop}\label{thm iso}
There is an order-preserving one-to-one correspondence between
the set $\mathcal{L}$ of all subsets of $\A$ ordered by inclusion
and the partially ordered set $\mathcal{R}$ of all $S$-admissible
sets of subsets of $R$. The correspondence is given by maps
$\Delta$ and $\Gamma$ defined above.
\end{prop}
\begin{proof} By Lemma \ref{X gives admisible}, the maps $\Delta$ and $\Gamma$ are well-defined and satisfy $\Delta\Gamma=\mbox{id}_{\mathcal{L}}$. Clearly both maps preserve the ordering.
Let $\{X_k\}_{k\in S}$ be an $S$-admissible set of subsets of
$R$. Then
\begin{equation}\label{eq inclusion}
\{X_k\}_{k\in S}\leq
\Gamma\Delta(\{X_k\}_{k\in S})=\{Y_k\}_{k\in S}
\end{equation}
$\mbox{where}\;\;Y_k=R\cap \pn{k}{\bigcup_{s\in S} \pn{s^{-1}}{X_s}}=R\cap\bigcup_{s\in S} \pn{ks^{-1}}{X_s} $.
Let $a \in Y_k$. Then there are $s\in S$ and
$b\in X_s$ such that $a=\pn{ks^{-1}}{b}$. Since $S$ satisfies the
left Ore condition, we can pick $t,l\in S$ such that $tk=ls$.
Hence $a=\pn{t^{-1}l}{b}$ and $\pn{t}{a}=\pn{l}{b}\in
\pn{l}{X_s}\subseteq X_{ls}=X_{tk}$, where the last inclusion is
given by Remark \ref{remark}. Therefore we obtain $a\in R\cap
\p{t}{X_{tk}}=X_{k}$, as $\{X_s\}_{s\in S}$ is an $S$-admissible
set. This shows that $Y_k\subseteq X_k$, for any $k\in S$. This
together with (\ref{eq inclusion}) yield that $\{X_k\}_{k\in S}=
\Gamma\Delta(\{X_k\}_{k\in S})$ and complete the proof of the
proposition.
\end{proof}
\begin{prop}\label{extensions inside}
Let $A$ be an over-ring of $R$ such that the
action of $S$ on $R$ extends to the action of $S$ on $A$ by automorphisms.
Then $B=\bigcup_{s\in S}\p{s}{R}$ is a $CJ$-extension of
$R$.
\end{prop}
\begin{proof}
Let $a,b\in
B$ and $k,l\in S$ be such that $\pn{k}{a}, \pn{l}{b}\in R$. Since $S$ satisfies the
left Ore condition, there are $s,t, w\in S$ such that $sk=tl=w$.
Then $\pn{w}{a}=\pn{sk}{a}, \pn{w}{b}= \pn{tl}{b}\in R$. This
implies that $a-b, ab\in \p{w}{R}\subseteq B$
and shows that $B$ is a subring of $A$.
By definition of $B$, $\p{k}{B}\subseteq B$ and $B\subseteq \pn{k}{B}$ follows, for any $k\in S$.
The left Ore condition implies that, for any $k,s\in S$,
we can find $l,t\in S$ such that $ks^{-1}=t^{-1}l$ in $S^{-1}S$. Then
$\pn{k}{\p{s}{R}}=\p{t}{\pn{l}{R}}\subseteq \p{t}{R}$. This
shows that $\pn{k}{B}\subseteq B$ as well, for any $k\in S$. Now it is easy to complete the proof.
\end{proof}
We will say that a subset $X$ of $\A$ is $S$-invariant if
$\pn{s}{X}\subseteq X$, for all $s\in S$.
Direct application of Proposition \ref{extensions inside} gives
the following:
\begin{cor}\label{cor JC ext construction}
Let $T$ be an $S$-invariant subring of $R$. Then $\bigcup_{s\in
S}\p{s}{T}\subseteq \A$ is a $CJ$-extension of $T$.
\end{cor}
\begin{prop} \label{submodules}
Let $T$ be an $S$-invariant subring of $R$
and $B=\bigcup_{s\in S}\p{s}{T}\subseteq \A$.
Let $X$ be a subset of $\A$ and $\{X_s\}_{s\in S}=\Gamma(X)$.
Then:
\begin{enumerate}
\item $X$ is an additive subgroup (a subring) of $\A$ iff for any $s\in S, $ $X_s$ is an
additive subgroup (a subring) of $R$.
\item $X$ is a left (right) $B$-submodule of $\A$ iff for any $s\in S, $ $X_s$ is a
left (right) $T$-submodule of $R$.
\end{enumerate}
\end{prop}
\begin{proof}
(1). If $X$ is an additive subgroup (a subring) of $\A$, then so
is $X_s=\pn{s}{X}\cap R$, for any $s\in S$.
Suppose now, that $\{X_s\}_{s\in S}$ consists of additive
subgroups (subrings) of $R$. Let $a,b\in X$. Then there are
$s,t\in S$ such that $\pn{s}{a}\in X_s$ and $\pn{t}{b}\in X_t$. By
the left Ore condition of $S$, we can pick $k,l\in S$ such that
$ks=lt=w$. Then, making use of Remark \ref{remark}, we have
$\pn{w}{a}=\pn{k}{\pn{s}{a}}, \pn{w}{b}=\pn{l}{\pn{t}{b}}\in X_w$.
Now it is easy to complete the proof of (1).
(2). We will prove only the left version of the statement (2).
Suppose that $X$ is a left $B$-submodule of $\A$ and let $s\in S$.
Then $TX_s\subseteq R\cap B\pn{s}{X}=R\cap\pn{s}{BX}\subseteq R\cap\pn{s}{X}=X_s$,
as $B=\pn{s}{B}$ and $BX\subseteq X$. This together with (1) shows that $X_s$ is a
left $T$-submodule of $R$.
Suppose now, that $\{X_s\}_{s\in S}$ consists of left
$T$-submodules of $R$. Let $b\in B$ and $x\in X$. Then there exist
$s,t\in S$ such that $\pn{s}{b}\in T $ and $ \pn{t}{x}\in X_t$.
Since $T$ is $S$-invariant, similarly as in the proof of (1), we
can find $w\in S$ such that $\pn{w}{b}\in T$ and $\pn{w}{x}\in
X_w$. Then $\pn{w}{bx}\in X_w$ and $bx\in \p{w}{X_w}\subseteq X$
follows. This together with (1) completes the proof.
\end{proof}
Let $T$ be an $S$-invariant subring of $R$. We will say that an $S$-admissible set $\{X_s\}_{s\in S}$ of subsets of $R$ is an
$S$-admissible set of left (right) $T$-modules if each $X_s$ is a left (right) $T$-module.
Propositions \ref{thm iso} and \ref{submodules} imply the following:
\begin{cor}\label{correspondance cor}Let $T$ be an $S$-invariant subring of $R$
and $B=\bigcup_{s\in S}\p{s}{T}\subseteq \A$.
There is a one-to-one correspondence between the set of all left (right)
$B$-submodules of $\A$ and the set of all $S$-admissible sets of left (right)
$T$-submodules of $R$.
\end{cor}
\begin{rem}\label{remark correspondance}
1. If we take $T=R$ in the above corollary, then $B=\A$ and
the corollary gives a one-to-one correspondence between the set of all left, right, or two-sided
ideals of $\A$ and the set of all $S$-admissible sets of left, right, or two-sided
ideals of $R$, respectively.
2. Let $W, T$ be $S$-invariant subrings of $R$ such that
$ \bigcup_{s\in S}\p{s}{W} =\bigcup_{s\in S}\p{s}{T} = B\subseteq
\A$ (for example assume $S$ is commutative
and take $W=R$ and $T=\pn{t}{R}$, for some $t\in S$). Then an
$S$-admissible set $\{X_s\}_{s\in S}$ consists of
left $W$-submodules iff it consists of
left $T$-submodules as it corresponds to a $B$-submodule of
$\A$. On the other hand, observe that
$T$ is a left $T$-module and it does not have to be a left
$W$-module.
\end{rem}
\begin{lem}\label{Ck}
Let $T$ be an $S$-invariant subring of $R$ and let $B=\bigcup_{s\in
S}\p{s}{T}$ be the $CJ$-extension of $T$ contained in $\A$. Then, for any subset $X$ of $R$ and $k\in S$ we have
$B\pn{k}{X}\cap R= \bigcup_{s\in S}\p{s}{T\pn{sk}{X}}\cap R$.
\end{lem}
\begin{proof}
Let $x\in B\pn{k}{X}\cap R$. Then $x=\sum_{i=1}^nb_i\pn{k}{x_i}\in R$,
where $b_i\in B$ and $x_i\in X$, for $1\le i\le n$. Let $s\in
S$ be such that $\pn{s}{b_i}\in T$, for all $1\le i\le n$. Then
$\pn{s}{x}=\sum_{i=1}^n \pn{s}{b_i}\pn{sk}{x_i}\in T\pn{sk}{X}$.
This shows that $B\pn{k}{X}\cap R\subseteq \bigcup_{s\in
S}\p{s}{T\pn{sk}{X}}\cap R$. The reverse inclusion is clear as,
for any $s\in S$, we have $\p{s}{T}\subseteq B$ and
$\p{s}{\pn{sk}{X}}\subseteq \pn{k}{X}$.
\end{proof}
\begin{definition}\label{def. closed ideals}
Let $T, W$ be $S$-invariant subrings of $R$. For any
$(T,W)$-subbimodule $M$ of $R$ and $k\in S$ we define
$c^{(T,W)}_{k}(M)=\bigcup_{s\in S}\p{s}{T\pn{sk}{M}W}\cap R$.
\end{definition}
\begin{prop}\label{BMC}
Let $M$ be a $(T,W)$-subbimodule
of $R$, where $T, W$ are $S$-invariant subrings of $R$ and $B=A(T;S),
C=A(W;S)\subseteq \A$. Then $\{c^{(T,W)}_{s}(M)\}_{s\in S}$ is
an $S$-admissible set of $(T,W)$-bimodules associated to the
$(B,C)$-subbimodule $BMC$ of $\A$.
\end{prop}
\begin{proof} Let us consider the $(B,C)$-subbimodule $BMC$ of $\A$.
Since $\pn{s}{B}=B$ and $\pn{s}{C}=C$, for all $s\in S$, we have
$\Gamma(BMC)=\{B\pn{s}{M}C\cap R\}_{s\in S}$.
Now, the proof is a direct consequence of the
bimodule versions of Corollary \ref{correspondance cor} and Lemma
\ref{Ck}.
\end{proof}
\begin{definition}\label{def. of closed} Let $M$ be a $(T,W)$-subbimodule
of $R$, where $T, W$ are $S$-invariant subrings of $R$ and $B=A(T;S),
C=A(W;S)\subseteq \A$. We say that $M$ is $(T,W)$-closed if
$M=BMC\cap R$.
\end{definition}
The following proposition offers an internal (in $R$)
characterization of $(T,W)$-closed subbimodules of $R$.
\begin{prop}\label{inner description of closed}
For a $(T,W)$-subbimodule $M$ of $R$ the following conditions are
equivalent:
\begin{enumerate}
\item $M$ is $(T,W)$-closed.
\item $c^{(T,W)}_{\mbox{\rm \scriptsize id}}(M)=M$.
\item $R\cap \p{s}{T\pn{s}{M}W}\subseteq M$, for any
$s\in S$.
\end{enumerate}
\end{prop}
\begin{proof} Recall that $c^{(T,W)}_{ \mbox{\rm \scriptsize id}}(M)
=\bigcup_{s\in S}\p{s}{T\pn{s}{M}W}\cap R$. The equivalence
$(1)\Leftrightarrow (2)$ is given by Proposition \ref{BMC}. The
implication $(2)\Rightarrow (3)$ is a tautology.
The statement $(3)$ yields that $c^{(T,W)}_{\mbox{\rm \scriptsize
id}}(M)\subseteq M$ and clearly $M\subseteq c^{(T,W)}_{\mbox{\rm
\scriptsize id}}(M)$. This shows that $(3)\Rightarrow (2)$ and
completes the proof of the proposition.
\end{proof}
Let us notice that if $V$ is a $(B,C)$-subbimodule of $\A$, then
$V\cap R$ is a $(T,W)$-subbimodule of $R$ and $V\cap R\subseteq
B(V\cap R)C\cap R\subseteq V\cap R$, i.e. $V\cap R$ is a
$(T,W)$-closed subbimodule of $R$.
\begin{prop}\label{properies of closed} Let $T, W$ be $S$-invariant subrings of $R$. Then:
\begin{enumerate}
\item If $\{X_s\}_{s\in S}$ is an $S$-admissible set of
$(T,W)$-subbimodules of $R$, then $X_s$ is a closed
$(T,W)$-subbimodule of $R$, for each $s\in S$.
\item Let $T_1\subseteq T$ and $W_1\subseteq W$ be $S$-invariant
subrings. Then any $(T,W)$-closed subbimodule $M$ of $R$ is
closed as $(T_1,W_1)$-subbimodule.
\end{enumerate}
\end{prop}
\begin{proof}
By the bimodule version of Corollary \ref{correspondance cor}, there is a
$(B,C)$-subbimodule $V$ of $\A$ such that $X_s=\pn{s}{V}\cap R$, where $B=A(T;S)$ and $C=A(W;S)$.
This together with the observation made just before the
proposition gives (1).
The statement (2) is an easy exercise if we use the characterization (3) of closedness
from Proposition \ref{inner description of closed}.
\end{proof}
\section{Applications}
In this section we restrict our attention to left ideals, i.e. we
take $T=R$ and let $W$ be the subring of $R$ generated by $1$. In this
case, for $k\in S$, we will write $c_k$ instead of $c_k^{(T,W)}$.
That is, by Proposition \ref{BMC}, $c_k(M)=\A\pn{k}{M}\cap R$, for any left ideal $M$ of $R$.
Recall (Cf. Remark \ref{remark correspondance}(1)) that there
is a one-to-one correspondence between left ideals of $\A$ and
$S$-admissible sets of left ideals of $R$. If a left ideal $L$ of $\A$
corresponds to the $S$-admissible set $\{L_s\}_{s\in S}$, we will
say that $L$ is associated to $\{L_s\}_{s\in S}$ or that $\{L_s\}_{s\in
S}$ is associated to $L$.
\begin{definition}
We say that an $S$-admissible set $\{L_s\}_{s\in S}$ of left ideals
of $R$ is stable if there exists $k\in S$ such that
$c_s(L_k)=L_{sk}$, for all $s\in S$.
\end{definition}
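In the cyclic case $S=\langle\si\rangle$, stability means that from some index $k$ onwards the whole sequence is determined by its member $L_k$: there exists $k\geq 0$ such that
$$L_{n+k}=c_{\si^n}(L_k)=\A\,\si^{n}(L_k)\cap R, \quad\mbox{for all } n\geq 0.$$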
The following proposition offers some other characterizations of
stability of $S$-admissible sets of left ideals.
\begin{prop} \label{characterization of stable sets}
Let $\{L_s\}_{s\in S}$ be an $S$-admissible set of left ideals
of $R$ and $L$ be its associated left ideal of $\A$. The following
conditions are equivalent:
\begin{enumerate}
\item $\{L_s\}_{s\in S}$ is stable.
\item There exists $k\in S$ such that
$\pn{sk}{L}=\A(\pn{sk}{L}\cap R)$, for any $s\in S$.
\item There exists $k\in S$ such that $\pn{k}{L}=\A(\pn{k}{L}\cap R)$.
\item There exist $k\in S$ and a left ideal $W$ of $R$ such that
$\pn{k}{L}=\A W$.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)\Rightarrow (2)$. Suppose $\{L_s\}_{s\in S}$
is stable, that is we can pick $k\in S$ such that
$c_s(L_k)=L_{sk}$, for all $s\in S$.
Recall that $L_s=\pn{s}{L}\cap R$. This means that
$\{L_{sk}\}_{s\in S}=\{c_s(L_k)\}_{s\in S}$ is an $S$-admissible
set of left ideals of $R$ associated to $\pn{k}{L}$. Now, Proposition \ref{BMC} applied to $M=L_k$, yields that the left ideals $\pn{k}{L}$ and $\A L_k$ of $\A$ have the same associated $S$-admissible sets. Hence, by Proposition \ref{thm iso}, $\pn{k}{L}=\A L_k$. Then, for any $s\in
S$, we have
\begin{equation*}\begin{split}
\A (\pn{s}{\pn{k}{L}}\cap R) & \subseteq \pn{s}{\pn{k}{L}}= \pn{s}{\A (\pn{k}{L}\cap R)}\subseteq \\
& \subseteq \A (\pn{s}{\pn{k}{L}} \cap \pn{s}{R})
\subseteq \A (\pn{s}{\pn{k}{L}}\cap R).
\end{split}
\end{equation*}
This shows
that $\pn{sk}{L}=\A(\pn{sk}{L}\cap R)$, i.e. (2) holds.
The implications $(2)\Rightarrow (3)\Rightarrow (4)$ are
tautologies.
$(4)\Rightarrow (1)$. Let $k\in S$ and the left ideal $W$ of $R$
be such that $\pn{k}{L}=\A W$.
Replacing $W$ by $\pn{k}{L}\cap R$ if necessary, we may additionally
assume that $W=\pn{k}{L}\cap R=L_k$.
Therefore, by Proposition \ref{BMC}, the left ideal $\pn{k}{L}$
of $\A$ is associated to the $S$-admissible set $\{c_s(L_k)\}_{s\in
S}$. Also, by definition, $\pn{k}{L}$ is associated
to $\{\pn{s}{\pn{k}{L}}\cap R\}_{s\in S}=\{L_{sk}\}_{s\in S}$.
This shows that $c_s(L_k)=L_{sk}$, for any $s\in S$ and completes
the proof of the implication.
\end{proof}
\begin{cor}\label{cor stable ideals}
Suppose that the $S$-admissible set $\{L_s\}_{s\in S}$ of left
ideals of $R$ is associated to a finitely generated left ideal
of $\A$. Then $\{L_s\}_{s\in S}$ is stable.
\end{cor}
\begin{proof}
Let $L=\A a_1+\ldots +\A a_n$ be a left ideal of $\A$
associated to $\{L_s\}_{s\in S}$ and $k\in S$ be such that $\pn{k}{a_i}=b_i\in
R$, for $1\le i\le n$. Then $\pn{k}{L}=\A W$, where
$W=\sum_{i=1}^nRb_i$. Thus the condition (4) of
Proposition \ref{characterization of stable
sets} holds, i.e. $\{L_s\}_{s\in S}$ is stable.
\end{proof}
Recall (Cf. Definition \ref{def. of closed}) that a left ideal $X$ of $R$ is closed if $X=\A X\cap R$
and that $\A X\cap R$ is always a closed left ideal of $R$. This
implies that $\A X\cap R$ is the smallest closed left ideal of $R$
containing $X$. We will call it the closure of $X$ and denote by
$\overline{X}$. Proposition \ref{inner description of closed}
offers an internal characterization of the closure of $X$, namely
$\overline{X} =\bigcup_{s\in S}\p{s}{R\pn{s}{X}}\cap R$.
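In the cyclic case $S=\langle\si\rangle$ this description reads
$$\overline{X}=\bigcup_{n\geq 0}\si^{-n}(R\,\si^{n}(X))\cap R,$$
that is, an element $r\in R$ belongs to $\overline{X}$ precisely when $\si^{n}(r)\in R\,\si^{n}(X)$ for some $n\geq 0$.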
With all the above preparation we are ready to prove the following theorem.
\begin{thm}\label{thm noetherian}
For the $CJ$-extension $\A$ of $R$ the following conditions are equivalent:
\begin{enumerate}
\item $\A$ is left noetherian;
\item The ring
$R$ has \mbox{\rm ACC} on closed left ideals and every
$S$-admissible set of left ideals is stable;
\item Every closed left ideal of $R$ is the closure of a finitely
generated left ideal of $R$ and every $S$-admissible set of
left ideals is stable.
\end{enumerate}
\end{thm}
\begin{proof}
$(1)\Rightarrow (2)$. Suppose $\A$ is left noetherian.
Let $X_1\subseteq X_2\subseteq \ldots$
be a chain of closed left ideals of $R$. Since $\A$ is left noetherian,
there exists $n\geq 1$ such that $\A X_n=\A X_{n+m}$, for all $m\ge
0$. By assumption, every $X_i$ is closed,
so $X_n=\A X_n\cap R=\A X_{n+m}\cap R=X_{n+m}$, for all $m\ge 0$.
This shows that $R$ has \mbox{\rm ACC} on closed left ideals.
Since $\A$ is left noetherian, every $S$-admissible set $\{L_s\}_{s\in S}$ of left ideals is
associated to a finitely generated left ideal of $\A$. Hence, by
Corollary \ref{cor stable ideals}, $\{L_s\}_{s\in S}$ is stable.
$(2)\Rightarrow (3)$. The proof is a version of a standard
argument. Let $W$ be a closed left ideal of $R$. Consider the set
$\mathfrak{W}$ of all closures $\overline{I}$, where $I$ ranges
over all finitely generated left ideals $I$ of $R$ contained in
$W$. Notice that if $\overline{I}\in\mathfrak{W}$ and $b\in W$,
then $\overline{I+Rb}\subseteq \overline{W}=W$. Since $R$
satisfies ACC on closed left ideals, we can pick a maximal element
$\overline{M}$ in $\mathfrak{W}$ and the remark above yields
$W=\overline{M}$.
$(3)\Rightarrow (1)$. Let $L$ be a left ideal of $\A$ and
$\{L_s\}_{s\in S}$ its $S$-admissible set of left ideals of $R$.
By assumption, $\{L_s\}_{s\in S}$ is stable. Thus, by Proposition
\ref{characterization of stable sets}, there exist $k\in S$ and a
left ideal $W$ of $R$ such that $\pn{k}{L}=\A W$. Replacing $W$ by
$\overline{ W}$ we may additionally suppose that $W$ is closed.
Then, by assumption, there exist $b_1,\ldots, b_n\in R$ such that
$W=\overline{Rb_1+\ldots+Rb_n}$. Notice that $\A b_1+\ldots +\A
b_n\subseteq \pn{k}{L}=\A ( R\cap (\A b_1+\ldots+\A b_n))\subseteq
\A b_1+\ldots +\A b_n$. This shows that $\pn{k}{L}$ is a finitely
generated left ideal of $\A$. Since $\phi_k$ is an automorphism of
$\A$, $L$ is also finitely generated.
\end{proof}
The above theorem gives immediately:
\begin{cor} Suppose that $R$ is left noetherian. Then $\A$ is left noetherian iff
every $S$-admissible set of
left ideals of $R$ is stable.
\end{cor}
The equivalence $(1)\Leftrightarrow (2)$ in Theorem \ref{thm noetherian} is a generalization of Theorem 5.6 of \cite{Jo} from the case when the monoid $S$ is cyclic to the case when $S$ is a cancellative monoid satisfying the left Ore condition. The idea of the presented proof is completely different from the one used in \cite{Jo}.
It is known that there exist rings $R$ such that only one of $R$ and $\A$ is left noetherian.
The following example, which offers such rings, is a variation of examples from \cite{Jo}.
\begin{ex}1. Let $\si$ be the endomorphism of the polynomial ring $ \mathbb{Z}[x]$ given by $\si(x)=2x$. One can check that $A(\mathbb{Z}[x];\langle\si\rangle)=\mathbb{Z}+\mathbb{Z}[\frac{1}{2}][x]x$ is not noetherian.\\
2. Let $A$ denote the field of rational functions in the set $\{x_i\}_{i\in\mathbb{Z}} $ of indeterminates over a field $F$ and $\si$ be the $F$-endomorphism of $R=F(x_i\mid i\leq 0)[x_i\mid i> 0]$ given by $\si(x_i)=x_{i-1}$, for $i\in \mathbb{Z}$. Then $R$ is not noetherian and $A=A(R;\langle\si\rangle)$ is a field.
\end{ex}
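To verify, for instance, that the ring $B=\mathbb{Z}+\mathbb{Z}[\frac{1}{2}][x]x$ from part 1 is not noetherian, consider the chain of ideals generated by $x, \frac{x}{2}, \frac{x}{4},\ldots$ It is strictly ascending: every element of the ideal $B\frac{x}{2^n}$ has its coefficient at $x$ in $\frac{1}{2^n}\mathbb{Z}$, so $\frac{x}{2^{n+1}}\not\in B\frac{x}{2^n}$.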
The following theorem offers necessary and sufficient conditions for $\A$ to be left principal ideal ring.
\begin{thm}\label{A is PLIR}
For the $CJ$-extension $\A$ of $R$ the following conditions are
equivalent:
\begin{enumerate}
\item Every left ideal of $\A$ is principal;
\item Every $S$-admissible set $\{L_s\}_{s\in S}$ of left ideals of
$R$ satisfies the following conditions:\\
(a) $\{L_s\}_{s\in S}$ is stable,\\
(b) There exist $t\in S$ and $b\in R$ such that
$L_t=\overline{Rb}$.
\end{enumerate}
\end{thm}
\begin{proof}
$(1)\Rightarrow (2)$. Let $\{L_s\}_{s\in S}$ be an $S$-admissible
set of left ideals of $R$ and $L$ be its associated left ideal of $\A$.
Since every left
ideal of $\A$ is principal,
Corollary \ref{cor stable ideals} implies that the property (a)
holds.
Let $a\in \A$ and $t\in S$ be such that $L=\A a$ and $ b=\pn{t}{a}\in
R$. Then $L_t=\pn{t}{L}\cap R=\A b\cap R=\overline{Rb}$,
i.e. the property (b) is satisfied.
$(2)\Rightarrow (1)$. Let $L$ be a left ideal of $\A$ and
$\{L_s\}_{s\in S}$ be its associated $S$-admissible
set of left ideals of $R$. By assumption, $\{L_s\}_{s\in S}$ is
stable. Thus, applying Proposition \ref{characterization of stable
sets}(2), we can pick $k\in S$ such that $\pn{sk}{L}=\A L_{sk}$,
for any $s\in S$. Observe that
$\{L_{sk}\}_{s\in S}=\{\pn{s}{\pn{k}{L}}\cap R\}_{s\in S}$ is an
$S$-admissible set of left ideals associated to $\pn{k}{L}$.
Therefore we can apply (2)(b) to $\{L_{sk}\}_{s\in S}$ and pick $l\in S$ and $b\in R$ such
that $L_{lk}=\overline{Rb}$. Let us set $t=lk$. Using the above
we have $\pn{t}{L}=\A L_t$ and
$\A b\subseteq \A L_t =\A\overline{Rb}\subseteq \A b$. This
shows that $\pn{t}{L}=\A b$ and proves that the left ideal $L=\A\p{t}{b}$ is
principal.
\end{proof}
\begin{rem} \label{rem assumptions}
1. It is not difficult to prove that
the condition (2)(b) of the above theorem is equivalent to the
condition that every closed left ideal $X$ of $R$ is of the form
$X=\p{t}{\overline{Rb}}\cap R$, for suitable $t\in S$ and $b\in
R$.\\
2. Let us remark that the condition (2)(b) always holds, provided
every closed left ideal is principal.
\end{rem}
Recall that a ring $R$ is left B\'ezout if every finitely generated left ideal of $R$ is principal.
\begin{prop}\label{prop. Bezout} For the $CJ$-extension $\A$ of $R$ the following conditions are
equivalent:
\begin{enumerate}
\item $\A$ is a left B\'ezout ring;
\item for every $S$-admissible set
$\{L_s\}_{s\in S}$ associated to a finitely generated left
ideal $L$ of $\A$, there exist $t\in S$ and $b\in R$ such
that $L_t=\overline{Rb}$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $L$ be a finitely generated left ideal of $\A$ and $\{L_s\}_{s\in
S}$ its associated $S$-admissible set.
If $\A$ is left B\'ezout, then $L$ is principal. Thus there are $t\in
S$ and $b\in R$ such that $\pn{t}{L}=\A b$ and $L_t=\pn{t}{L}\cap R=\overline{R
b}$. This shows that $(1)$ implies $(2)$.
Suppose $(2)$ holds. Then, by Corollary \ref{cor stable
ideals}, $\{L_s\}_{s\in S}$ is stable.
Now one can complete the
proof as in the proof of implication $(3)\Rightarrow(1)$ of
Theorem \ref{A is PLIR}.
\end{proof}
Notice that the characterization obtained in the above proposition is not fully satisfactory,
in the sense that statement (2) is not expressed
purely in terms of properties of $R$, as $\A$ is involved. Nevertheless, it has the following direct application:
\begin{cor}\label{cor Bezout}
Suppose that one of the following conditions is satisfied:\\
1. Every closed left ideal of $R$ is principal.\\
2. $R$ is left B\'ezout.\\
Then $\A$ is a left B\'ezout ring.
\end{cor}
\begin{proof}
Proposition \ref{prop. Bezout} and
Remark \ref{rem assumptions}(2) give the thesis when (1) holds.
Suppose (2) holds. Let $L=\A a_1+\ldots \A a_n$ and $t\in S$ be
such that $b_i=\pn{t}{a_i}\in R$, $1\le i\le n$. By assumption,
there exists $b\in R$ such that $R b_1+\ldots R b_n=Rb$. Then
$L_t=\pn{t}{L}\cap R=\A b\cap R=\overline{Rb}$ and the thesis is a
consequence of Proposition \ref{prop. Bezout}.
\end{proof}
The following example offers a principal ideal domain $R$ such that $\A$ is not noetherian. Of course, by Corollary \ref{cor Bezout}, $\A$ is left B\'ezout.
\begin{ex}\label{Bezout example}
Let $A= K[x^\frac{1}{2^n}\mid n\in \mathbb{N}]$, where $K$ is a field, and $\si$
be a $K$-linear automorphism of $A$ defined by $\si(x)=x^2$. Then
the restriction of $\si$ to
$R=K[x]$ is an endomorphism of $R$ and it is easy to check
that $A$ is a $CJ$-extension of $R$ with respect to the action of $\si$. Notice that $A$ is
not noetherian but it is B\'ezout, by the above corollary.
\end{ex}
In view of Theorem \ref{A is PLIR} and Proposition \ref{prop. Bezout} it seems interesting to know when all principal left ideals of $R$ are closed. We will concentrate on this problem till the end of the paper.
It is known (Cf. Lemma 1.16 and Theorem 2.24 of \cite{jm2}) that if $R$ is a semiprime left Goldie
ring, then:\\
(i) every regular element $c$ of $R$ is regular in $\A$;\\
(ii) $\A$ is a semiprime left Goldie ring and
$Q(\A)=A(Q(R);S)$, where $Q(B)$ denotes the classical left
quotient ring of a left Goldie ring $B$.
Therefore both $Q(R)$ and $\A$ are over-rings of $R$ included in $A(Q(R);S)$.
Keeping the above notation we have:
\begin{prop}\label{basic} For a semiprime left Goldie ring $R$, the following conditions are equivalent:
\begin{enumerate}
\item $Q(R)\cap \A=R$;
\item $Rc=\overline{Rc}$ and $cR=\overline{cR}$, for every regular element $c\in
R$;
\item $cR=\overline{cR}$, for every regular element $c\in R$;
\item If $ca\in
R$, then $a\in R$, provided $a\in \A$ and $c\in R$ is
regular.
\end{enumerate}\end{prop}
\begin{proof} Let $c\in R$ be a regular element.
$(1)\Rightarrow (2)$
Let $a\in\A$ be such that
$ac=r\in R$. Then $a=rc^{-1}\in Q(R)\cap \A=R$. This shows that
$\A c\cap R\subseteq Rc$ and implies that $Rc=\overline{Rc}$. A
similar argument works for showing that $cR=\overline{cR}$.
The implication $(2)\Rightarrow (3)$ is a tautology.
$(3)\Rightarrow (4)$ Suppose $ca\in R$, where $a\in \A$.
By $(3)$ we have $cR=\overline{cR}=c\A\cap R$. Thus there exists $r\in R$ such that $ca=cr$, and $a=r\in R$ follows, as $c$ is regular in $\A$.
$(4)\Rightarrow (1)$ Let $r\in R$ be such that $c^{-1}r=a\in
Q(R)\cap \A$. The condition (4) gives $a\in R$ and shows that
$Q(R)\cap \A=R$.
\end{proof}
The statement (2) in the above proposition is left-right symmetric. Thus, assuming additionally that the semiprime ring $R$ is also right Goldie, we can add to the proposition the left versions of statements (3) and (4). However, as the following example shows, we cannot do this when $R$ is not right Goldie.
\begin{ex} Let $D$ denote the field
of fractions of the ring $K[x^\frac{1}{2^n}\mid n\in \mathbb{N}]$
from Example \ref{Bezout example} and $\sigma$ be a $K$-linear
automorphism of $D$ defined by $\sigma(x)=x^2$. Let us consider
the skew polynomial ring of endomorphism type (with coefficients
written on the left) $A=D[t;\sigma]$. Then $\si$ can be extended
to an automorphism of $A$ by setting $\si(t)=t$. Let
$R=K(x)[t;\si]\subseteq A$. Then the restriction of $\si $
to $R$ is an endomorphism of $R$ and for any $w\in A$, there
exists $n\ge 1$ such that $\si^n(w)\in R$. This means that
$A=A(R;\langle\si\rangle)$, where $\langle\si\rangle$ denotes the
monoid generated by $\si$.
It is well known that $R$ is a left Ore domain which is not right
Ore. Observe that $t\sqrt{x}=xt\in R$, but $\sqrt{x}\not\in R$.
Thus, by Proposition \ref{basic}, $Q(R)\cap A\ne R$. In fact, the left localization of $R$ with respect to the left Ore
set consisting of all powers of $t$ is equal to $D[t,t^{-1};\si]$. Thus $A\subseteq D[t,t^{-1};\si]
\subseteq Q(R)$.
We claim that $R$ satisfies the left version of statement
(4) from Proposition \ref{basic}. Let $0\ne c\in R$ and $a\in A$ be such that
$ac\in R$. If $a\not \in R$, then we
can choose such an $a=\sum_{i=0}^na_it^i$ of minimal possible degree,
say $n$. By the choice of $n$, we have $a_n \not \in K(x)$. But then
also $a_n\si^n(c_m)\not \in K(x)$, where $c_m$ denotes the leading
coefficient of $c\in R=K(x)[t;\si]$, so $ac\not \in R$, a contradiction. Thus $a$ has to belong to $R$.
\end{ex}
Observe that the ring from Example \ref{Bezout example} satisfies the
assumption of the following proposition.
\begin{prop}
Suppose $R$ is a left Ore domain such that $Q(R)\cap \A=R$. For a left ideal $L$ of $\A$ the following
conditions are equivalent:
\begin{enumerate}
\item $L$ is principal;
\item there exist $s\in S$ and $a\in R$ such that, for all $t\in S$, $\phi_{ts}(L)\cap R=L_{ts}=R\phi_t(a)$.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)\Rightarrow (2).$ Suppose $L=\A b$ and let $s\in S$ be such that $\phi_s(b)\in R$.
Then $\phi_s(L)=\A\phi_s(b)$, so we may set $a=\phi_s(b)$. Now the implication is a direct
consequence of Proposition \ref{basic}.
$(2)\Rightarrow (1).$ Let $s\in S$ and $a\in R$ be as in $(2)$.
Then $\phi_s(L)_t=L_{ts}=R\phi_t(a)$, for all $t\in S$. This means
that $\phi_s(L)=\bigcup_{t\in S}\phi_t^{-1}(R\phi_t(a))=\bigcup_{t\in S}\phi_t^{-1}(R)a=\A a$. Thus
$L=\A\phi_s^{-1}(a)$.
\end{proof}
\section{Main claim and the importance}
We propose Deep Node Ranking (DNR), an algorithm for structural network embedding based on the ideas of personalized node ranking and representation construction using deep neural networks. The contribution of this paper is multifold.
\begin{itemize}
\item First, we propose a fast algorithm for supervised and unsupervised node embedding construction. When the constructed embeddings are evaluated against state-of-the-art baselines, we achieve comparable or better performance on nine node classification data sets.
\item We demonstrate how the proposed solution can also be used for end-to-end classification.
\item To our knowledge, this is one of the first node classification benchmarks at such a scale, as algorithms are commonly tested only on a handful of data sets.
\item We introduce three novel node classification benchmark data sets from financial and biological domains to help improve general node embedding evaluation.
\item In addition to evaluating the algorithms' performance using standard critical difference diagrams, we also use the recently introduced Bayesian hierarchical t-test; to our knowledge, this is one of the first comparisons of this type for the node classification task.
\item Finally, we demonstrate that the node embeddings obtained by DNR are also suitable for visualization purposes.
\end{itemize}
\section{Related work}
We recognize the body of structural node embedding literature as related work, more specifically the works on node2vec, DeepWalk, LINE, NetMF and HINMINE. Further, we believe the proposed methodology is complementary to contemporary graph-aggregation approaches such as GraphSAGE, graph-convolutional neural networks, and graph-attention networks.
\section{Published parts of the paper}
The ideas in the proposed manuscript partially build on the recently introduced HINMINE methodology, where personalized node ranks were used to obtain node representations.
Even though Algorithm P-PRS, named Personalized PageRank with Shrinking in this paper, was introduced as part of the HINMINE methodology by Kralj, Robnik-\v{S}ikonja and Lavra\v{c}\footnote{HINMINE: Heterogeneous information network mining with information retrieval heuristics, Journal of Intelligent Information Systems, 1--33, 2017, Springer.}, this paper provides an explicit formulation of the algorithm, its pseudocode, and its theoretical properties (i.e. its time and space complexity).
The proposed DNR methodology leverages the HINMINE idea by offering a solution whose space consumption is linear in the number of nodes, thus ensuring significant improvements in terms of scalability (the solution offered in HINMINE is quadratic in the number of nodes). To our knowledge, we are the first to exploit personalized node ranks for direct (and efficient) learning of a deep neural network, which, as we demonstrate, can be used for multiple tasks, ranging from supervised and unsupervised embedding construction to end-to-end learning and visualization.
\end{document}
\section{Introduction}
Many real-world systems consisting of interconnected entities can be represented as complex networks. The study of complex networks offers insights into latent relations between connected entities represented by nodes and has various practical applications, such as the discovery of drug targets, modeling of disease outbreaks, author profiling, modeling of transportation, and study of social dynamics \cite{benson2016higher,nowzari2016analysis,le2015novel}.
Modern machine learning approaches applied to complex networks offer intriguing opportunities for the development of fast and accurate algorithms that can learn from the topology of a given network. Recently, approaches based on network embedding \citep{grover2016node2vec,jiline2011,zitnik2017predicting,perozzi2014deepwalk} became prevalent for many common tasks, such as node classification and edge prediction, as well as for unsupervised clustering. Embedding of a network is a representation of its nodes and edges in a vector space that maintains the topological properties of the network \cite{perozzi2014deepwalk}. Embeddings are useful, as vectors are suitable for use with conventional machine learning algorithms.
Deep neural networks \citep{Goodfellow-et-al-2016,lecun2015deep} belong to a class of machine learning algorithms in which multiple layers of neurons are stacked on top of each other and trained to predict target classes via backpropagation. Deep learning is widely used in image and text analysis \cite{krizhevsky2012imagenet} and has only recently been considered for network learning \citep{lecun2015deep,Wang:2016:SDN:2939672.2939753} for tasks such as network node classification, (i.e. assigning labels to nodes), and node clustering, where nodes are grouped into clusters according to their shared properties.
In this work, we propose a new network embedding and node classification algorithm, named \emph{Deep Node Ranking (DNR)}, which combines efficient node ranking with non-linear approximation power of deep neural networks. The developed framework uses deep neural networks to obtain a network embedding directly from stationary node distributions produced by random walkers with restarts. The rationale for developing this approach is that it is currently impossible to analytically derive properties of complex networks, for example, node label distributions, that would relate with a network's topology. We solve this problem by using random walk-based network sampling.
Even though there already exist embedding approaches based on higher-order random walkers \citep{grover2016node2vec} (i.e. random walkers with memory), we believe that the stationary distribution of first-order random walkers is \emph{unexploited} in a deep learning setting. We showcase the developed algorithm on the challenging problems of node classification and network visualization, which highlights the ability of our algorithm to learn and accurately predict node labels. As test sets for the task of node classification are few, we compile three new data sets from financial and biological domains with distinct network topologies and a varying number of target classes. The key contributions of the paper are:
\begin{enumerate}
\item A fast network embedding algorithm DNR based on global, personalized node ranks, which performs comparably to the state-of-the-art embedding algorithms, and can be used for a multitude of downstream learning tasks, such as node classification, network visualization, etc. The algorithm is faster than many state-of-the-art embedding algorithms.
\item A variation of the DNR algorithm, developed specifically for end-to-end node classification, which performs comparably to or outperforms state-of-the-art embedding approaches as well as approaches based on explicit matrix factorization.
\item To our knowledge, this is one of the first node classification benchmarks at such scale, as commonly the algorithms are tested only on a handful of data sets.
\item We introduce three novel node classification benchmark data sets from financial and biological domains to help improve general node embedding evaluation.
\item In addition to evaluating the algorithms' performance using standard critical difference diagrams, we also use the recently introduced Bayesian hierarchical t-test, which is, to our knowledge, one of the first comparisons of this type for the node classification task.
\item The paper also compares the proposed method with four different variants of the personalized node rank space embedded via non-negative matrix factorization, and one where the space of node rankings is embedded via the manifold embedding method UMAP \cite{mcinnes2018umap}. These baselines were not considered previously, albeit being strong competitors in terms of efficiency.
\item Finally, we demonstrate that node embeddings obtained by DNR are also suitable for visualization purposes.
\end{enumerate}
The work is structured as follows. In Section \ref{sec:related}, we briefly review the related work on deep learning, learning from complex networks, and (personalized) network node ranking algorithms. Based on that, we provide a rationale for the development of our approach. Section \ref{sec:DNR_origin} presents the proposed network node embedding algorithm that combines deep neural networks with network node ranking. In Section \ref{sec:experimental}, we describe the experimental setting and nine different non-synthetic complex networks from different domains used in the evaluation, including the three newly composed data sets. The experimental results are discussed in Section \ref{sec:results}. In Section \ref{sec:conclusions} we conclude the work and present plans for further work.
\section{Background and related work}
\label{sec:related}
In the following subsections, we present deep learning preliminaries, describe how algorithms learn from complex networks and what is learned, followed by an overview of node ranking algorithms. Finally, we describe the rationale behind the proposed approach.
\subsection{Learning from complex networks}
\label{sec:learning_cn}
Complex networks, representing real-world phenomena, such as financial markets, transportation, biological interactions, or social dynamics \cite{benson2016higher,nowzari2016analysis,le2015novel} often possess interesting properties, such as scale invariance, partitioning, presence of hub nodes, weakly connected components, heavy-tailed node degree distributions, the occurrence of communities, significant motif pattern counts, etc. \cite{costa2007characterization,van2016random}. \emph{Learning from complex networks} considers different aspects of complex networks, e.g., network structure and node labels that are used as inputs to machine learning algorithms with the aim to do link prediction, node classification, etc.
Many different approaches to learning from complex networks exist. For example, one of the most common unsupervised approaches, the \textit{community detection} \citep{wang2017community}, groups the nodes of a network into densely connected sub-networks, and enables learning of hidden properties from complex networks. Communities in complex biological networks correspond to functionally connected biological entities, such as the proteins involved in cancerogenesis. In social networks, communities may correspond to people sharing common interests \cite{duch2005community}. Community detection algorithms use random walk-based sampling or graph spectral properties \cite{malliaros2013clustering,kuncheva2015community,rosvall2009map} to achieve unsupervised detection of communities within complex networks.
In contrast, our methodology focuses on the semi-supervised tasks of \emph{structural node classification} and \emph{network node embedding} described subsequently.
\subsubsection{Node classification}
Node classification is the problem of classifying nodes in a network into one or many possible classes. It belongs to the class of semi-supervised learning tasks, as the whole network is used to obtain representations of individual nodes, from which the network classification model is learned.
Information propagation algorithms \cite{Zhu02learningfrom} propagate label information via nodes' neighbors until all nodes are labeled. These algorithms learn in an \emph{end-to-end} manner, meaning that no intermediary representation of a network is first obtained and subsequently used as an input for learning.
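To make the end-to-end flavor of information propagation concrete, the following is a minimal, illustrative pure-Python sketch (not the implementation of \cite{Zhu02learningfrom}): each unlabeled node repeatedly adopts the majority label among its already-labeled neighbors, with no intermediary representation of the network.

```python
from collections import Counter

def propagate_labels(adj, seeds, max_iters=100):
    """Iteratively assign to each unlabeled node the majority
    label among its already-labeled neighbours."""
    labels = dict(seeds)  # node -> label, initially only the seed nodes
    for _ in range(max_iters):
        changed = False
        for node, neighbours in adj.items():
            if node in labels:
                continue
            votes = Counter(labels[n] for n in neighbours if n in labels)
            if votes:
                labels[node] = votes.most_common(1)[0][0]
                changed = True
        if not changed:  # all reachable nodes labeled
            break
    return labels

# Toy graph: two triangles joined by the bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = propagate_labels(adj, {0: "A", 5: "B"})
```

In this toy example the two seed labels spread outward until every node is labeled; the cited algorithms replace the simple majority vote with a principled, convergence-guaranteed update.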
Another class of node classification algorithms learns node labels from node representations in a vector form (embeddings) \cite{cui2018survey}. Here, the whole network is first transformed into an information-rich low-dimensional representation, for example, a dense matrix (one row for one node). This representation serves as an input to a plethora of general machine learning approaches that can be used for node classification.
We distinguish between two main branches of node classification algorithms, discussed next. The graph neural network paradigm, introduced in recent years, attempts to incorporate a given network's adjacency structure as neural network layers. Among the first such approaches were Graph Convolutional Networks \cite{kipf2016semi}, their generalization with the attention mechanism \cite{velivckovic2018graph}, as well as the isomorphism-based variants with provable properties \cite{xu2018powerful}. Treating the adjacency structure as a neural network has also shown promising results \cite{hamilton2017inductive}. The key characteristic of this branch of methods is their capability of accounting for \emph{node features} (the adjacency matrix gets multiplied as part of a special layer during learning from features). On the other hand, if node features are not available, which is the case for the majority of the freely available public data sets, more optimized methods focused on \emph{structure-based} learning were introduced.
For example, the LINE algorithm \cite{tang2015line} uses the network's eigendecomposition in order to learn a low-dimensional network representation, i.e. a representation of the network in 128 dimensions instead of the dimension matching the number of nodes. Approaches that use random walks to sample the network include DeepWalk \cite{perozzi2014deepwalk} and its generalization node2vec \cite{grover2016node2vec}. It was recently proven that DeepWalk, node2vec, and LINE can be reformulated as an implicit matrix factorization \cite{Qiu:2018:NEM:3159652.3159706}.
All the before-mentioned methods aim to preserve topological properties of the input network in the final embedding. There currently exist only a handful of approaches that leverage the deep learning methodology (discussed in the introductory section) to learn from complex networks directly. For example, an approach named Structural Deep Network Embedding (SDNE) \cite{Wang:2016:SDN:2939672.2939753} learns embeddings from the network adjacency matrices generated using deep autoencoders. The authors of this approach have experimentally shown that neural networks with up to three layers suffice for the majority of learning tasks on five different complex networks. When determining the layer sizes and other parameters, they used exhaustive search, which is computationally demanding for such problems.
Despite many promising approaches developed, a recent extensive evaluation of the network embedding techniques \cite{goyal2017graph} suggests that node2vec \cite{grover2016node2vec} remains one of the best network embedding approaches for the task of structural node classification.
\subsection{Node ranking algorithms}
\label{sec:nr}
Node ranking algorithms assess the relevance of a node in a network either globally, relative to the whole network, or locally, relative to a sub-network by assigning a \emph{score} (or a \emph{rank}) to each node in the network.
A well known global node ranking algorithm is PageRank \citep{pagerank}, which has been used in the Google search engine. Others include Weighted PageRank \citep{xing2004}, SimRank \citep{simrank2001}, diffusion kernels \citep{diffker2002}, hubs and authorities \citep{kleinberg1999}, and spreading activation \citep{spract1997}. More recent network node ranking algorithms are the PL-ranking \citep{zhang2016} and NCDawareRank \citep{nikolakopoulos2013}. Network nodes can also be ranked using network centrality measures, such as Freeman's network centrality \citep{freeman1979}, betweenness centrality \citep{freeman1977}, closeness centrality \citep{bavelas1950}, and Katz centrality \citep{katz1953}.
We consider local node ranking algorithms that compute a local relevance score of a node relative to a given subset of other nodes. A representative of this class of node ranking algorithms is the Personalized PageRank (P-PR) algorithm \citep{pagerank}, sometimes referred to as random walk with restart \citep{tong2006fast}. Personalized PageRank uses the random walk approach to calculate the relevance of nodes in a network. It measures the stationary distribution of a random walk that starts at node $u$. The algorithm at each iteration follows a random edge of the current node with a predefined probability $p$ (usually set to $0.85$), and with probability $1-p$ jumps back to the starting node. The P-PR-based approaches were used successfully to study cellular networks, social phenomena \cite{halu2013multiplex}, and many other real-world networks \cite{yu2017pagerank}. Efficient implementation of P-PR algorithms remains an active research field. Recently, a bidirectional variation of the P-PR was introduced, which significantly speeds up the node ranking process \cite{lofgren2015bidirectional,lofgren2016personalized}.
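The Personalized PageRank computation described above can be sketched with a short power iteration. The following dictionary-based toy is for illustration only (it is not the optimized implementation discussed later in the paper); with probability $p$ the walker follows a random out-edge, and with probability $1-p$ it teleports back to the starting node $u$:

```python
def personalized_pagerank(adj, u, p=0.85, eps=1e-9, max_iters=100):
    """Stationary distribution of a random walk with restart at node u."""
    rank = {n: 0.0 for n in adj}
    rank[u] = 1.0                                # walk starts at u
    for _ in range(max_iters):
        new = {n: 0.0 for n in adj}
        for j, out in adj.items():
            if out:
                share = p * rank[j] / len(out)   # spread rank over out-edges
                for i in out:
                    new[i] += share
            else:
                new[u] += p * rank[j]            # dangling node: mass back to u
        new[u] += 1.0 - p                        # restart mass
        done = sum(abs(new[n] - rank[n]) for n in adj) < eps
        rank = new
        if done:
            break
    return rank
```

On a small directed cycle, for instance, the resulting vector sums to one and assigns the highest score to the start node $u$, since the restart repeatedly concentrates probability mass there.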
The obtained stationary distribution of a random walk can be used directly for network learning tasks, as demonstrated in the recently introduced HINMINE methodology \cite{kralj2017hinmine}. Our Deep Node Ranking algorithm uses both fast personalized node rank computations, and the semantic representation learning power of deep neural networks.
The idea of augmenting learning with ranking has been explored in recent years in the context of graph neural networks. For example, ranking was used to prioritize propagation \cite{klicpera2018predict}, as well as to scale graph neural networks \cite{bojchevski2019pagerank}. A similar idea was exploited by \cite{xu2018representation}, where a more efficient propagation scheme was proposed via the use of node ranking. Note that the considered works concern graph neural networks that operate on both nodes and features; however, they are not necessarily suitable for purely structural learning, which is the key focus of this work.
\subsection{The rationale behind the proposed approach}
As discussed in the previous sections, many of the state-of-the-art network embedding algorithms based on deep learning suffer from high computational complexity due to exhaustive search of the hyperparameter space. The properties of multi-layered neural network architectures are not well understood and are commonly evaluated using grid search over various layer sizes and other hyperparameters, which is computationally inefficient.
The proposed DNR algorithm addresses these issues by exploiting the classification potential of the fast Personalized PageRank algorithm with shrinking, integrated with a neural network architecture. Compared to node2vec and similar methods, which build on simulated second order random walks, the proposed approach achieves similar predictive performance by using only the first order Markov dynamics, i.e. random walkers with no memory. Further, the proposed algorithm enables direct node classification, where node classes are obtained without the intermediary network embedding step characteristic of other approaches. Development of algorithms which are linear in space with respect to the network size is a challenging task. The proposed algorithm is implemented using efficient sparse matrix manipulations, therefore its space complexity is $\mathcal{O}(|E|+|N|)$, where $|E|$ is the number of network edges and $|N|$ is the number of network nodes (vertices).
\section{Deep Node Ranking algorithm}
\label{sec:DNR_origin}
In this section we describe the novel Deep Node Ranking (DNR) algorithm for structural network embedding and end-to-end node classification (overview shown in Figure \ref{scheme}). The name of the algorithm, Deep Node Ranking, reflects the two main ingredients of the technology: network node ranking step, and the subsequent deep neural network learning step.
In the first step of DNR, personalized node ranks are computed for each network node resulting in the Personalized PageRank (P-PR) vectors.
In the second step, the P-PR vectors enter a deep neural network consisting of a dense embedding-producing layer (optionally preceded by a convolution layer), whose size equals to the predefined embedding dimension. The third, output step, consists either of an output layer with the number of its neurons equal to the number of target classes (top) enabling direct classification of nodes, or the embeddings (bottom), which correspond to the embedding layer from Step 2. The embeddings can be used for downstream machine learning tasks, such as classification, network visualization, and comparison.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{wf_updated}
\caption{Key steps of the Deep Node Ranking algorithm. For simplicity, in this figure, the output layer consists of a single neuron (i.e. a single target class).}
\label{scheme}
\end{figure}
An outline of the three steps of the proposed DNR algorithm is as follows.
\begin{enumerate}
\item Input Preparation. Learning node representations using the Personalized PageRank with Shrinking (P-PRS) algorithm, shown in Step 1 of Figure~\ref{scheme} and described in Section \ref{sec:PPR}.
\item Neural Network formulation. A neural network architecture processes the prepared personalized PageRank vectors, shown in Step 2 of Figure~\ref{scheme}. We describe the considered neural network architectures in Section \ref{sec:architecture} and types of learning, supported by these architectures, in Section~\ref{sec:learning}.
\item Different Outputs. The output of the network can be either node classification, that is, direct learning of node labels (Figure~\ref{scheme}, Step 3, top), or a low-dimensional embedding of the network (Figure~\ref{scheme}, Step 3, bottom). The types of learning supported by DNR are given in Section~\ref{sec:typesLearning}.
\end{enumerate}
In the next subsections, we provide the details of the steps outlined above.
\subsection{Personalized PageRank with Shrinking algorithm}
\label{sec:PPR}
The authors of the HINMINE algorithm propose two approaches to network node classification: label propagation and network propositionalization \citep{kralj2017hinmine,PPRS}. In this work we build upon the latter approach, using a version of the Personalized PageRank algorithm \cite{pagerank1999}, which we refer to as Personalized PageRank with Shrinking ($\mbox{P-PRS}$) (Algorithm~\ref{algo:PPR}). This algorithm produces node representations (or $\mbox{P-PR}$ vectors) by simulating random walks for each node of the input network. Compared to rows of the network adjacency matrix, the $\mbox{P-PR}$ vector associated with a node also encodes information regarding the network topology, which proves more suitable for learning than simple adjacency-based representations.
\begin{algorithm}[H]
\tiny
\KwData{A complex network's adjacency matrix $A$, with nodes $N$ and edges $E$, starting node $u \in N$}
\Parameter{damping factor $\delta$, spread step $\sigma$, spread percent $\tau$ (default 50\%), stopping criterion $\epsilon$}
\KwResult{$\mbox{P-PR}_u$ vector describing stationary distribution of random walker visits with respect to $u \in N$}
$A$ := toRightStochasticMatrix($A$)\Comment*[r]{Transpose and normalize rows of $A$}
core\_vector := $[0,\dots,0] $\Comment*[r]{Initialize zero vector of size |N|}
core\_vector$[u]$ := $1$\;
rank\_vector := $\textrm{core\_vector}$\;
$v$ := $\textrm{core\_vector}$\;
$\textrm{steps}$ := 0\Comment*[r]{Shrinking part}
nz := 1\Comment*[r]{Number of non-zero P-PR values}
shrink := False\;
\While{$\textrm{nz} < |N| \cdot \tau$ $\wedge$ $\textrm{steps} < \sigma$}{
$\textrm{steps} := \textrm{steps} + 1$\;
$v := v + A \cdot v$\Comment*[r]{Update transition vector}
$\textrm{nzn}$ := $\textrm{nonZero}(v)$\Comment*[r]{Identify non-zero values}
\If{$\textrm{nzn} = \textrm{nz}$}{
$\textrm{shrink}$ := $\textrm{True}$\;
\textbf{break}\Comment*[r]{Exit the while loop}
}
$\textrm{nz}$ := $\textrm{nzn}$\;
}
\If{$\textrm{shrink}$}{
$\textrm{toReduce}$ := $\{i; v[i]\neq0\}$\Comment*[r]{Indices of non-zero entries in vector $v$}
core\_vector := $\textrm{core\_vector}[\textrm{toReduce}]$\;
rank\_vector := $\textrm{rank\_vector}[\textrm{toReduce}]$\;
$A$ := $A[\textrm{toReduce},\textrm{toReduce}]$\Comment*[r]{Shrink a sparse adjacency matrix}
}
$diff$ := $\infty$\;
$steps$ := 0\Comment*[r]{Node ranking - PageRank iteration}
\While{$ \textrm{diff} > \epsilon \wedge steps < max\_steps$}{
$\textrm{steps} := \textrm{steps} + 1$\;
new\_rank := $A \cdot \textrm{rank\_vector}$\;
rank\_sum := $\sum_{i}\textrm{rank\_vector}[i]$\;
\If{$rank\_sum < 1 $}{
new\_rank := new\_rank + core\_vector $\cdot$ $(1 - \textrm{rank\_sum})$\;
}
new\_rank := $\delta \cdot \textrm{new\_rank} + (1 - \delta) \cdot \textrm{core\_vector}$\;
\textrm{diff} := $\|\textrm{rank\_vector}-\textrm{new\_rank}\|$\Comment*[r]{Norm computation}
rank\_vector := new\_rank\;
}
\uIf{$shrink$}{
$\mbox{P-PR}_u$ := $[0,\dots,0]$\Comment*[r]{Zero vector of dimension |N|}
$\mbox{P-PR}_u[\textrm{toReduce}]$ := rank\_vector\;
}
\uElse{$\mbox{P-PR}_u$ := rank\_vector}
\Return $\mbox{P-PR}_u$\;
\caption{P-PRS: Personalized PageRank with Shrinking}
\label{algo:PPR}
\end{algorithm}
The algorithm consists of two main parts:
\begin{enumerate}
\item
In the first part of the algorithm, named the \emph{shrinking step} (lines 8--25 of Algorithm~\ref{algo:PPR}), in each iteration, the PageRank spreads from the nodes with non-zero PageRank values to their neighbors.
\item In the second part of the algorithm, named the \emph{P-PR computation step} (lines 26--44 of Algorithm~\ref{algo:PPR}), P-PR vectors corresponding to individual network nodes are computed using the power iteration method (Eq.~\ref{eqPR}).
\end{enumerate}
\noindent {\bf Shrinking step.} In the shrinking step we take into account the following:
\begin{itemize}
\item If no path exists between the starting node $u$ and node $i$, the $\mbox{P-PR}$ value assigned to node $i$ will be zero.
\item The $\mbox{P-PR}$ values for nodes reachable from $u$ will be equal to the $\mbox{P-PR}$ values calculated for a reduced network $G_u$, obtained from the original network by only accounting for the subset of nodes reachable from $u$ and connections between them (lines 8--18 in Algorithm \ref{algo:PPR}).
\end{itemize}
If the network is strongly connected, $G_u$ will be equal to the original network, yielding no change in performance compared to the original $\mbox{P-PR}$ algorithm. However, if the resulting network $G_u$ is smaller, the calculation of the $\mbox{P-PR}$ values will be faster as they are calculated on $G_u$ instead of on the whole network. In our implementation, we first estimate whether network $G_u$ contains less than $50\%$ (i.e. the spread percent) of the nodes of the whole network (lines 8--18 in Algorithm \ref{algo:PPR}). This is achieved by expanding all possible paths from node $u$ and checking the number of visited nodes at each step. If the number of visited nodes stops increasing after a maximum of $15$ steps, we know we have found network $G_u$ and we count its nodes. If the number of nodes is still increasing, we abort the calculation of $G_u$. We limit the maximum number of steps because each step of computing $G_u$ is computationally comparable to one step of the power iteration used in the PageRank algorithm \cite{pagerank1999}, which converges in about $50$ steps. Therefore, we can considerably reduce the computational load if we limit the number of steps in the search for $G_u$. Next, in lines 18--25, the \mbox{P-PRS} algorithm shrinks the personalized rank vectors based on the non-zero values obtained as the result of the shrinking step (lines 8--18).
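The reachability estimate above can be sketched as a capped breadth-first expansion. This is an illustrative reimplementation, not the paper's code; the names \texttt{estimate\_reachable\_set}, \texttt{spread\_step}, and \texttt{spread\_percent} are ours, mirroring the parameters of Algorithm~\ref{algo:PPR}:

```python
def estimate_reachable_set(adj, u, spread_step=15, spread_percent=0.5):
    """Expand paths from u for at most `spread_step` rounds. If growth
    stops while covering less than `spread_percent` of all nodes, the
    walk is confined to a small sub-network G_u and the P-PR computation
    can be shrunk to it. Returns (reached, shrink_flag)."""
    reached = {u}
    frontier = {u}
    for _ in range(spread_step):
        frontier = {i for j in frontier for i in adj.get(j, [])} - reached
        if not frontier:  # growth has stopped: G_u found
            return reached, len(reached) < spread_percent * len(adj)
        reached |= frontier
    return reached, False  # still growing after the cap: do not shrink
```

For a node confined to a two-node component of a six-node network, for instance, the expansion stops after one round with only a third of the nodes reached, so the shrinking step applies.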
\newline
\noindent {\bf P-PR computation step.}
In the second part of the algorithm (lines 26--44), node ranks are computed using the power iteration (Eq.~\ref{eqPR}), whose output consists of P-PR vectors.
For each node $u \in N$, a feature vector $\gamma_u$ (with components $\gamma_u(i)$, $1\leq i \leq |N|$) is computed by calculating the stationary distribution of a random walk starting at node $u$. The stationary distribution is approximated using power iteration, where the $i$-th component $\gamma_{u}(i)^{(k+1)}$ of approximation $\gamma_u^{(k+1)}$ is computed in the $(k+1)$-st iteration as follows:
\begin{align}
\label{eqPR}
\gamma_{u}(i)^{(k+1)} = \alpha \cdot \sum_{j \rightarrow i}\frac{\gamma_{u}(j)^{(k)}}{d_{j}^{out}}+(1-\alpha) \cdot v_{u}(i);k = 1, 2,\dots
\end{align}
\noindent The number of iterations $k$ is increased until the approximation converges to the stationary distribution vector (whose $i$-th component is the P-PR value of node $i$).
In the above equation, $\alpha$ is the damping factor that corresponds to the probability that a random walk follows a randomly chosen outgoing edge from the current node rather than restarting its walk. The summation index $j$ runs over all nodes of the network that have an outgoing connection toward $i$ (denoted as $j \to i$ in the sum), and $d_{j}^{out}$ is the out-degree of node $j$. The term $v_{u}(i)$ is the restart distribution that corresponds to a vector of probabilities for a walker's return to the starting node $u$, i.e. $v_{u}(u) = 1$ and $v_u(i)=0$ for $i\neq u$. This vector guarantees that the walker will jump back to the starting node $u$ in case of restart\footnote{If the restart vector was instead uniform over all nodes, the iteration would compute the global PageRank vector, and Eq.~\ref{eqPR} would reduce to the standard PageRank iteration.}.
In a single iteration ($k \rightarrow k+1$), all the stationary distribution vector components $\gamma_{u}(i)$, $1 \leq i \leq |N|$, are updated, which results in the P-PR vector $\gamma_{u}^{(k+1)}$. Increasing $k$ thus leads to $\gamma_{u}^{(k)}$ eventually converging to the PageRank vector $\gamma_u$ of a random walk starting from node $u$ (see Algorithm~\ref{algo:PPR}). Eq.~\ref{eqPR} is computed using power iteration, which is especially suitable for large sparse matrices, since it does not rely on spatially expensive matrix factorization to obtain the eigenvalue estimates\footnote{The power iteration (Eq.~\ref{eqPR}) converges exponentially, that is, the error is proportional to $\alpha^{k}$, where $\alpha$ is the damping factor and $k$ is the iteration number.}.
The \mbox{P-PRS} algorithm simulates a first-order random walk in which no past information is incorporated into the final stationary distribution. The time complexity of the described \mbox{P-PRS} algorithm for $k$ iterations is $\mathcal{O}(|N|(|E|+|N|)\cdot k)$ for the whole network, and $\mathcal{O}((|E|+|N|)\cdot k)$ for a single node. The proof of the computational complexity of the P-PRS algorithm used in this work reads as follows:
A na\"ive PageRank iteration corresponds to multiplying a rank vector with a (dense) $|N| \times |N|$ matrix. Since this matrix is sparse, a single iteration takes $\mathcal{O}(|E|+|N|)$ time, and $k$ iterations are needed for convergence with respect to a single node, giving $\mathcal{O}((|E|+|N|)\cdot k)$. For all nodes, the complexity is thus $\mathcal{O}(|N|(|E|+|N|)\cdot k)$.
In terms of spatial complexity, the \mbox{P-PRS} algorithm is linear with respect to the number of edges ($\mathcal{O}(|E|)$) if the input is a sparse matrix.
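The $\mathcal{O}(|E|+|N|)$ space and per-iteration time bounds follow from a compressed sparse row (CSR) representation of the adjacency matrix, which stores only the non-zero entries. A minimal illustrative sketch (helper names are ours):

```python
def to_csr(adj, n):
    """CSR storage of an n-node adjacency structure: O(|E|) for the
    indices/data arrays plus O(|N|) for the row pointers."""
    indptr, indices, data = [0], [], []
    for j in range(n):
        for i in adj.get(j, []):   # row j holds the out-edges of node j
            indices.append(i)
            data.append(1.0)
        indptr.append(len(indices))
    return indptr, indices, data

def csr_matvec(indptr, indices, data, x):
    """y = A x in O(|E| + |N|) time, touching each stored entry once."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y
```

Each power iteration of Eq.~\ref{eqPR} is one such matrix-vector product, which is where the per-node $\mathcal{O}((|E|+|N|)\cdot k)$ bound comes from.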
The advantage of the deep neural network architecture, discussed in the following section, is that it can learn incrementally, from small batches of the calculated \mbox{P-PR} vectors. In contrast, the previously developed HINMINE approach \cite{kralj2017hinmine} requires that all the \mbox{P-PR} vectors for the entire network are calculated prior to learning, since HINMINE uses the $k$-nearest neighbors and support vector machine classifiers. This incurs substantial space requirements, as the \mbox{P-PR} vectors for the entire network require $\mathcal{O}(|N|^{2})$ of computer memory. The DNR algorithm presented here uses a deep neural network instead, which can take as input small batches of \mbox{P-PR} vectors. Therefore, only a small percentage of the vectors needs to be computed before the second step of the algorithm (the \emph{learning the neural network} step, see Figure~\ref{scheme}) can begin. This offers a significant improvement in both the spatial and temporal complexity of the learning process. The required time is reduced, since the learning part, which is performed on a GPU, can proceed simultaneously with the \mbox{P-PRS} computation, done on a CPU.
\subsection{Deep neural network learning}
\label{sec:architecture}
In this section we address the second step of the Deep Node Ranking approach (outlined in Figure~\ref{scheme}) by
summarizing the outcomes of our empirical exploration of possible neural network (NN) architectures and their hyperparameters. First, we discuss the validation scheme used to optimize the NN topology. Next, we investigate how different activation functions impact the selected architecture's performance. We additionally explore the possibility of using convolutions over $\mbox{P-PR}$ vectors directly, and conclude with a general formulation of the NN architecture that we finally test on unseen networks.
\subsubsection{Hyperparameter optimization setting}
In the second step of the Deep Node Ranking approach, P-PR vectors are embedded into a low-dimensional space, effectively producing a network embedding. To this end, the purpose of hyperparameter optimization is to make the performance of the evaluated intermediary NN architectures comparable to the state-of-the-art node2vec method \cite{grover2016node2vec}.
Since the node2vec approach was benchmarked for the node classification task, we initially aimed at improving the performance of our DNR approach by comparing it to node2vec on one of the data sets that they used, i.e. the \textit{Homo sapiens} protein interaction network \cite{stark2010biogrid}.
The hyperparameters defining the neural network architecture were optimized on a single validation data set, the \textit{Homo sapiens} network. The performance of the hyperparameters optimized on the \textit{Homo sapiens} network was then tested on new networks from different domains that were previously not seen by the neural network. In tests, the hyperparameters were \emph{transferred} to the new networks, but parameters (weight matrices and bias vectors) were re-trained for each new network separately.
We optimized two hyperparameters of the neural network architecture for the task of node classification: the number of training epochs and the activation functions.
The \textit{Homo sapiens} protein interaction network (for further characteristics of the data set see Section~\ref{sec:experimental}) was split into a training set (10\% of nodes) and a validation set (90\% of nodes) as done in \citep{perozzi2014deepwalk,netMF} and later in \citep{grover2016node2vec}---the classification performance was evaluated by training a logistic regression classifier on the obtained embedding, so that from 10\% up to 90\% of embedded nodes were used for training. The evaluation metrics we used were the micro and macro $F_1$ scores, widely adopted to assess the performance on network classification tasks.
The authors of the node2vec algorithm do not specify how the nodes used for embedding construction are dealt with in the validation phase. We believe that these nodes should be excluded from the validation phase. Therefore, the 10\% of nodes that were used for hyperparameter tuning were always removed in the validation phase, in which the obtained embedding was used for node classification. This allowed us to avoid overfitting and assured fair comparisons.
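The micro and macro $F_1$ scores used in the evaluation can be computed from per-class true-positive, false-positive, and false-negative counts. The following is an illustrative single-label sketch (the helper name \texttt{f1\_scores} is ours, not from the benchmark code):

```python
def f1_scores(y_true, y_pred):
    """Micro F1 pools the per-class counts before computing F1;
    macro F1 averages the per-class F1 values with equal weight."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = {c: 0 for c in classes}
    fp = {c: 0 for c in classes}
    fn = {c: 0 for c in classes}
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but true class was t
            fn[t] += 1
    def f1(t, f_pos, f_neg):
        denom = 2 * t + f_pos + f_neg
        return 2 * t / denom if denom else 0.0
    macro = sum(f1(tp[c], fp[c], fn[c]) for c in classes) / len(classes)
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return micro, macro
```

In the single-label multi-class case the micro $F_1$ coincides with accuracy, while the macro $F_1$ penalizes poor performance on rare classes equally with frequent ones.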
\subsubsection{Optimizing the NN architecture and the hyperparameters}
\label{sec:nn-arch}
In the experiments, we systematically explored a subset of all possible hyperparameter settings, aiming to find the best neural network configuration to be used in the final Deep Node Ranking validation described in Section~\ref{sec:experimental}.
\begin{description}
\item[The tested single (embedding) layer NN architecture.]
We first demonstrate how a single hidden layer neural network can be optimized to learn from \mbox{P-PR} vectors. We chose a single-hidden layered network architecture based on the recent findings on deeper architectures \cite{Wang:2016:SDN:2939672.2939753} showing that shallow architectures are well suited for graph learning tasks. This architecture is formulated as:
\begin{align*}
l_{2} &= \sigma(w_{2}(a(w_{\textrm{dim}}X+b_{l_{1}}))+b_{l_{2}});
\end{align*}
\noindent where $X$ corresponds to the input of the neural network, i.e. the \mbox{P-PR} vectors generated in the input preparation phase, $w_{\textrm{dim}}$ and $w_{2}$ are matrices of trainable weights, $b_{l_{1}}$ and $b_{l_{2}}$ are the corresponding bias vectors, $l_1$ is the first and only hidden layer (the embedding layer), $l_2$ is the output layer whose number of neurons equals the number of target classes,
and $a$ is one of the standard activation functions (see Table~\ref{tbl:act} in Appendix~\ref{appendix:act}).
The hidden layer size \emph{dim} is the dimension of the embedding, which is set to the value of $128$ in this work as was also done in \cite{jiline2011,grover2016node2vec,netMF}. The deep neural network evaluated was trained using the Adam optimizer \citep{kingma2014adam}, a version of stochastic gradient descent using the binary cross-entropy loss function \cite{Goodfellow-et-al-2016}. We present the key findings regarding neural network architecture below, while the details of extensive empirical testing are provided in Appendix~\ref{appendix:empirical}.
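To make the data flow concrete, the forward pass of this architecture can be sketched in a few lines of NumPy (a minimal illustration with toy dimensions; the weight shapes and variable names are ours, not taken from the actual implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_dim, b1, W2, b2):
    """l2 = sigmoid(W2 @ a(W_dim @ x + b1) + b2), with a = ReLU."""
    l1 = relu(W_dim @ x + b1)      # embedding layer (dim neurons)
    return sigmoid(W2 @ l1 + b2)   # one sigmoid output per target class

# Toy dimensions: |N| = 6 nodes, dim = 4, |C| = 3 target classes.
rng = np.random.default_rng(0)
x = rng.random(6)                               # a single P-PR vector
W_dim, b1 = rng.normal(size=(4, 6)), np.zeros(4)
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)
probs = forward(x, W_dim, b1, W2, b2)           # per-class probabilities
```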
\item[Evaluation of different activation functions.]
As the first part of the evaluation, we explored how the selection of the activation function $a$ impacts DNR's classification performance. We evaluated the performance of ReLU, Leaky ReLU, ELU, Sigmoid, as well as no activation function (summarized in Table~\ref{tbl:act}). We observed that the Sigmoid function exhibits the slowest convergence ($\approx 20$ epochs), whereas the other non-linear activation functions perform similarly well and converge after approximately the same number of iterations ($\approx 15$ epochs). To ensure an adequate number of epochs, in the remainder of the hyperparameter exploration experiments we decided to use $20$ epochs and the ReLU function.
\item[Evaluation of additional convolutional layer in NN architecture.]
In the next series of experiments, we tested the following architecture:
\begin{align*}
l_{2} &= \sigma(w_{2}(a(w_{\textrm{dim}}\textrm{Conv1D}({X,f,k,p})+b_{l_{1}}))+b_{l_{2}});
\end{align*}
\noindent where Conv1D($X,f,k,p$) corresponds to a one-dimensional convolution over the \mbox{P-PR} vectors $X$, parameterized with the number of filters $f$, kernel size $k$, and pooling size $p$. We conducted an extensive grid search over different combinations of $f$, $k$ and $p$ (referred to as ($f,k,p$) triplets in Appendix~\ref{appendix:empirical}). Intuitively, as \mbox{P-PR} vectors are not ordered, the rationale for using an additional convolution layer is the following. Sets of \mbox{P-PR} vectors could potentially be merged into order-independent representations which, although containing less information, offer the opportunity to obtain an architecture with a number of parameters sublinear with respect to the number of nodes. This series of experiments yielded the architecture with two filters, a kernel size of eight and a pooling size of two as the best performing one, yet this variation did not outperform the simpler, fully connected single layer NN described above.
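The convolutional front end can be sketched as follows (a NumPy illustration of a fixed, untrained Conv1D with average pooling, using the best-performing setting $f=2$, $k=8$, $p=2$; in the actual architecture this layer is trained jointly with the rest of the network):

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution of vector x with f kernels of size k."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T                  # shape: (len(x)-k+1, f)

def avg_pool(y, p):
    """Non-overlapping average pooling with region size p."""
    n = (y.shape[0] // p) * p
    return y[:n].reshape(-1, p, y.shape[1]).mean(axis=1)

# Best-performing setting from the grid search: f = 2, k = 8, p = 2.
rng = np.random.default_rng(0)
x = rng.random(32)                              # toy P-PR vector, |N| = 32
kernels = rng.normal(size=(2, 8))               # (f, k) filter bank
pooled = avg_pool(conv1d(x, kernels), 2)        # reduced input for the dense layer
```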
\item[DNR and the attention mechanism.]
Recent advancements in state-of-the-art unsupervised language and speech processing exploit the attention mechanism for extraction of key features from the input space \cite{vaswani2017attention,chorowski2015attention}. We refer the interested reader to the aforementioned publications for detailed description of this technique, and explain here only the essential ideas implemented as part of DNR. The proposed neural network architecture that implements the attention mechanism can be stated as follows:
\begin{align*}
l_{\textrm{att}} &= X \otimes \textrm{softmax}( w_{\textrm{att}}X + b_{l_{\textrm{att}}})\\
l_{1} &= a (w_{dim} \cdot l_{\textrm{att}}+b_{l_{1}})\\
l_{2} &= \sigma(w_{2} \cdot l_{1} + b_{l_{2}})
\end{align*}
Here, the input \mbox{P-PR} vectors ($X$) are first used as input to a softmax-activated layer containing the same number of neurons as there are nodes, where the softmax function applied to the $j$-th element of a weight vector $v$ is defined as follows:
\begin{align*}
\textrm{softmax}(v_j) = \frac{e ^{v_{j}}}{\sum_{k=1}^{|N|}e^{v_{k}}};
\end{align*}
\noindent where $v \in \mathbb{R}^{|N|}$, and $|N|$ denotes the dimensionality of the considered weight vector. This dimensionality equals the number of nodes, so that the attention mechanism outputs a value for each node.
The $\otimes$ sign corresponds to element-wise multiplication. This layer's outputs can be intuitively understood as node importances, yet we leave an extensive evaluation of this interpretation with respect to graph centrality measures for further work. The remainder of the architecture is similar to the basic DNR formulation, i.e. an embedding layer followed by the output layer. The set of initial validation experiments showed that many more epochs are needed to train this type of architecture. After 1,000 epochs, the aforementioned architecture yielded marginally worse performance than node2vec when trained as an autoencoder: here, the neural network attempted to learn the input \mbox{P-PR} vectors and no node label information was used (see Section~\ref{sec:typesLearning}).
\end{description}
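For illustration, the attention-enhanced forward pass described above can be sketched in NumPy as follows (toy dimensions; the weight names are ours, not taken from the implementation):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())            # shift for numerical stability
    return e / e.sum()

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_attention(x, W_att, b_att, W_dim, b1, W2, b2):
    l_att = x * softmax(W_att @ x + b_att)   # element-wise re-weighting of P-PR entries
    l1 = relu(W_dim @ l_att + b1)            # embedding layer
    return sigmoid(W2 @ l1 + b2)             # output layer

# Toy dimensions: |N| = 6, embedding dim = 4, |C| = 3.
rng = np.random.default_rng(0)
x = rng.random(6)
W_att, b_att = rng.normal(size=(6, 6)), np.zeros(6)
W_dim, b1 = rng.normal(size=(4, 6)), np.zeros(4)
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)
out = forward_attention(x, W_att, b_att, W_dim, b1, W2, b2)
```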
\subsubsection{Key observations}
\label{sec:key-obs}
The goal of this paper is not to exhaustively explore the possible architecture space. Based on previous work, we explored two variations of a shallow neural network architecture and demonstrated that a simple, fully connected architecture performs best on the considered data set, improving the state-of-the-art performance by a small margin. We also recognize the use of 1D convolutions combined with average pooling for situations where the number of tunable parameters could otherwise grow too quickly. Finally, we suggest a simple rank attention mechanism. We continue the discussion by stating the DNR quality evaluation function and the tasks solvable by the proposed DNR algorithm. Additional technical details are given in Appendix~\ref{sec:appendix-tech}.
\subsubsection{End-to-end learning of node labels}
In the previous subsection, we discussed how the DNR approach can be used to obtain a node embedding using a small subset of labeled nodes. In this section, we continue with \emph{end-to-end} learning of node labels.
Formally, the constructed classification model approximates the mapping from $N$ to $[0,1]^{|N| \times |\mathcal{C}|}$, where $N$ represents a network's set of nodes, $\mathcal{C}$ a set of class values and $c_i$ a single vector of node classes. The neural network is trained as any standard machine learning algorithm in a semi-supervised setting where node ranks are initially computed. The architecture formulation and other parameters remain the same as described in Section~\ref{sec:architecture}.
\subsection{Unsupervised learning}
\label{sec:autoencoders}
So far, DNR was used in a supervised setting, either for learning a network embedding or for direct node classification. DNR can also be used for unsupervised representation learning, useful when visualizing and comparing complex networks. In this section we discuss how the proposed DNR algorithm can be used to obtain node representations without any information on class labels.
In the unsupervised learning settings addressed in this section we modified the $Loss$ function, defined in Equation~\ref{eq:loss}, as follows. Instead of the one-hot class matrix $C$ we use the unweighted adjacency matrix $A$, and thus Equation~\ref{eq:loss} translates to
$Loss(i) = \sum_{j=1}^{|N|} A_{i,j}\log p_{i,j}.$
As no node labels are used in this process, we refer to such learning as \emph{unsupervised representation learning}.
The proposed unsupervised setting is explored as part of the qualitative, as well as quantitative evaluation, where obtained network representation (embedding) is used for network visualization and node classification.
The total number of trainable parameters is sub-linear with respect to the number of nodes if the P-PR vectors' sizes \emph{are reduced} prior to being used as input to the dense (embedding) layer. In this work, we propose the use of the convolution operator for such cases, but other solutions, e.g., a minibatch PCA, could also be used for this purpose.
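The modified objective can be illustrated directly (a minimal NumPy sketch; in practice the network maximizes this log-likelihood, i.e. minimizes the corresponding cross-entropy, over minibatches of nodes):

```python
import numpy as np

def unsupervised_objective(A, P, i):
    """sum_j A[i, j] * log P[i, j]: the per-node objective obtained by
    replacing the one-hot class matrix with the adjacency matrix A."""
    eps = 1e-12                        # guard against log(0)
    return float(np.sum(A[i] * np.log(P[i] + eps)))

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # unweighted adjacency matrix
P = np.full((3, 3), 0.5)                 # hypothetical network outputs p_{i,j}
score = unsupervised_objective(A, P, 0)  # node 0 has two neighbours
```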
\section{Data sets and experimental setting}
\label{sec:experimental}
In this section we first describe the data sets used, the experimental settings, the DNR implementations tested together with their parameters, followed by a short description of compared baseline approaches.
\subsection{Data sets}
We evaluated the proposed approach on nine real-world complex networks, three of them newly introduced, which is one of the largest collections of complex networks formulated as a multi-label classification task. The \emph{Homo sapiens} (proteome) \cite{stark2010biogrid}, POS tags \cite{mahoney2011large} and Blogspot data sets \cite{zafarani2009social} are used in the same form as in \cite{grover2016node2vec}. The data sets are summarized in Table \ref{tbl:stats}. In the table, CC denotes the number of connected components. The clustering coefficient measures how nodes in a graph tend to cluster together, and is computed as the ratio between the number of closed triplets and the number of all triplets. The network density is computed as the number of actual connections divided by the number of all possible connections. The mean degree corresponds to the average number of connections of a node. Links to the data sets, along with other material presented in this paper, are given in Section~\ref{availability}.
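The reported statistics can be reproduced with a short script (a self-contained plain-Python sketch; the triangle enumeration is cubic in the number of nodes and meant only for illustration):

```python
from itertools import combinations

def basic_stats(nodes, edges):
    """Connected components, global clustering coefficient (closed
    triplets / all triplets), density, and mean degree."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Connected components via iterative depth-first search.
    seen, cc = set(), 0
    for v in nodes:
        if v not in seen:
            cc += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    # Each triangle closes three triplets; triplets centred at v: C(deg(v), 2).
    triangles = sum(1 for u, v, w in combinations(nodes, 3)
                    if v in adj[u] and w in adj[u] and w in adj[v])
    triplets = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes)
    clust = 3 * triangles / triplets if triplets else 0.0
    n, m = len(nodes), len(edges)
    return {"CC": cc, "Clust.": clust,
            "Density": 2 * m / (n * (n - 1)), "MeanDeg.": 2 * m / n}

# Toy graph: a triangle plus one isolated node.
stats = basic_stats([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3)])
```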
A more detailed description of the data sets is given below:
\begin{small}
\begin{table}[b]
\centering
\caption{Networks used in this study and their basic statistics. Three newly created datasets (marked with $^*$) are listed at the bottom of the table.}
\begin{tabular}{l|r|r|r|r|r|r|r}
\hline
Name & $|\mathcal C|$ & Nodes & Edges & CC & Clust. & Density & MeanDeg. \\ \hline
\emph{Homo Sapiens} & 50 & {3,890} & {38,739} & 35 & 0.146 & 0.00512 & 19.92 \\
POS & 40 & {4,777} & {92,517} & 1 & 0.539 & 0.00811 & 38.73 \\
Blogspot & 39 & {10,312} & {333,983} & 1 & 0.463 & 0.00628 & 64.78 \\
Citeseer & 6 & {3,327} & {4,676} & 438 & 0.141 & 0.00085 & 2.81 \\
Cora & 7 & {2,708} & {5,278} & 78 & 0.241 & 0.00144 & 3.90 \\
Ecommerce & 2 & {29,999} & {178,608} & {8,304} & 0.484 & 0.00040 & 11.91 \\
Bitcoin Alpha$^*$ & 20 & {3,783} & {14,124} & 5 & 0.177 & 0.00197 & 7.47 \\
Bitcoin$^*$ & 20 & {5,881} & {21,492} & 4 & 0.178 & 0.00124 & 7.31 \\
Ions$^*$ & 12 & {1,969} & {16,092} & 326 & 0.529 & 0.00831 & 16.35 \\ \hline
\end{tabular}
\label{tbl:stats}
\end{table}
\end{small}
\begin{itemize}
\item The \emph{Homo sapiens} data set represents a subset of the human proteome, i.e. a set of proteins which interact with each other. The sub-network consists of all proteins for which biological states are known \citep{stark2006biogrid}. The goal is to predict protein function annotations. This data set was used as the validation data set, hence its results are not included in the aggregated comparisons, yet they are visualized alongside the other micro and macro $F_1$ curves for comparison.
\item The POS data set represents part-of-speech tags obtained from Wikipedia---a co-occurrence network of words appearing in the first million bytes of the Wikipedia dump \citep{mahoney2011large}. Different POS tags are to be predicted.
\item The Blogspot data set represents a social network of bloggers (Blogspot website) \citep{zafarani2009social}. The labels
represent blogger interests inferred through the metadata
provided by the bloggers.
\item The CiteSeer citation network consists of scientific publications classified into one of six classes (categories) \citep{lu2003link}.
\item The Cora citation network consists of scientific publications classified into one of seven classes (categories) \citep{lu2003link}.
\item The E-commerce network is a heterogeneous network connecting buyers with different products. As the original DNR methodology and the compared baseline algorithms operate on homogeneous networks, the E-commerce network was transformed to a homogeneous network prior to learning using a term frequency weighting scheme \cite{kralj2017hinmine}. The edges created represent mutual purchases of two persons, i.e. two customers are connected if they purchased an item from the same item category, as defined below:
$$
\textrm{Person} \xrightarrow{purchased} \textrm{itemCategory} \xrightarrow{purchasedBy} \textrm{Person}
$$
We refer the interested reader to \cite{kralj2017hinmine} for a detailed description of the data set and its transformation to a homogeneous network. The two class values being predicted correspond to the gender of buyers.
\end{itemize}
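The transformation of the bipartite purchase data into a homogeneous customer network can be sketched as follows (a simplified, unweighted illustration; the actual HINMINE transformation additionally applies the term frequency weighting scheme \cite{kralj2017hinmine}):

```python
from itertools import combinations

def project_to_homogeneous(purchases):
    """Connect two persons whenever they purchased from the same item
    category: Person -> itemCategory -> Person."""
    by_category = {}
    for person, category in purchases:
        by_category.setdefault(category, set()).add(person)
    edges = set()
    for persons in by_category.values():
        for u, v in combinations(sorted(persons), 2):
            edges.add((u, v))
    return edges

edges = project_to_homogeneous([("ann", "books"), ("bob", "books"),
                                ("bob", "toys"), ("cat", "toys")])
# ann and bob share "books"; bob and cat share "toys".
```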
A further contribution of this work are three novel node classification data sets, obtained as follows.
\begin{itemize}
\item Two data sets are related to Bitcoin trades \cite{kumar2016edge}. The two networks correspond to transactions within two different platforms, namely Bitcoin OTC and Bitcoin Alpha. Each edge in these networks represents a transaction along with an integer score denoting trust in the range $[-10,10]$ (zero-valued entries are not possible). We reformulate this as a classification problem by collecting the trust values associated with individual nodes and considering them as target classes. The resulting integer values can thus belong to one of the 20 possible classes. Note that more than a single class is possible for an individual node, as we did not attempt to aggregate trust scores for individual nodes.
\item The Ions data set is based on the recently introduced protein-ion binding site similarity network \cite{vskrlj2018insights}. The network was constructed by structural alignment using the ProBiS family of algorithms \cite{konc2014probis,konc2012parallel,konc2017genprobis} where all known protein-ion binding sites were considered. The obtained network was pruned for structural redundancy as described in \cite{vskrlj2018insights}. Each node corresponds to one of 12 possible ions and each weighted connection corresponds to the similarity between the two considered ion binding sites.
\end{itemize}
\subsection{Experimental setting}
\label{sec-setting}
In this section, we describe the experimental setting used to evaluate the proposed method against the selected baselines.
\begin{itemize}
\item In the embedding based classification setting, we consider a randomly selected 20\% of the data set for supervised embedding construction. This part of the data set, named Construction Data, is only used in training of the embeddings. The remaining 80\% of the data set, named Evaluation Data, is used only to test the classification performance of logistic regression, as discussed next.
Once we obtain the deep learning based network embeddings (regardless of whether they were obtained by the supervised or the unsupervised variants of DNR), the classification based on the embeddings uses L2-regularized logistic regression. In training and evaluating the logistic regression classifier, we used train-test splits of the Evaluation Data. Following the standard practice in the evaluation of embedding algorithms \citep{tang2015line,grover2016node2vec,perozzi2014deepwalk,netMF}, we evaluated the performance on training sets of increasing size, i.e. from 10\% to 90\% of the size of the Evaluation Data.
\item In the end-to-end classification setting, the same train-test splits of the Evaluation Data were used for training and evaluating the entire neural network.
\end{itemize}
We repeated the classification experiments 50 times and averaged the results to obtain stable performance.
The performance of the trained classifiers was evaluated using micro and macro $F_1$ scores, as these two measures are used in the majority of other node classification studies \citep{tang2015line,grover2016node2vec,perozzi2014deepwalk,netMF}. Definitions of
$F_1$, micro $F_1$ and macro $F_1$ scores are given in Appendix~\ref{AppendixF1}.
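As a reference, the two scores can be computed from per-class true positive, false positive and false negative counts (a single-label NumPy sketch; the multi-label case aggregates the counts per label in the same way):

```python
import numpy as np

def f1_scores(y_true, y_pred, n_classes):
    """Micro and macro F1 from per-class TP/FP/FN counts."""
    tp = np.zeros(n_classes)
    fp = np.zeros(n_classes)
    fn = np.zeros(n_classes)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
    return float(micro), float(per_class.mean())   # micro F1, macro F1

micro, macro = f1_scores([0, 0, 1, 2], [0, 1, 1, 2], n_classes=3)
```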
Due to many comparisons, we utilize the Friedman's test with Nemenyi \emph{post hoc} correction to compute statistical significance of the differences. The results are visualized as critical difference diagrams, where ranks of individual algorithms according to scores across all data set splits are presented \cite{demsar2006}.
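A minimal sketch of the rank-based computation behind the CD diagrams follows (average ranks, the Friedman statistic, and the Nemenyi critical difference; ties are ignored for brevity, and the $q_\alpha$ constant must be looked up in the studentized range table, e.g. in \cite{demsar2006}):

```python
import numpy as np

def friedman_and_cd(scores, q_alpha):
    """Average ranks, Friedman chi-square and the Nemenyi critical
    difference; scores has shape (data set splits N, algorithms k).
    Ties are ignored; q_alpha comes from the studentized range table."""
    N, k = scores.shape
    # Rank per row: rank 1 = best (highest score).
    ranks = np.array([k - np.argsort(np.argsort(row)) for row in scores])
    avg_ranks = ranks.mean(axis=0)
    chi2 = 12 * N / (k * (k + 1)) * (np.sum(avg_ranks ** 2) - k * (k + 1) ** 2 / 4)
    cd = q_alpha * np.sqrt(k * (k + 1) / (6 * N))
    return avg_ranks, chi2, cd

scores = np.array([[0.5, 0.9, 0.7],        # 2 splits x 3 algorithms
                   [0.4, 0.8, 0.6]])
avg_ranks, chi2, cd = friedman_and_cd(scores, q_alpha=2.343)  # illustrative q value
```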
All experiments were conducted on a machine with 64GB RAM, a 6-core Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz and a Nvidia 1080 GTX GPU. As the maximum amount of RAM available for all approaches was 64 GB, a run is marked as unsuccessful should this amount be exceeded. Further, we gave each algorithm at most five hours for learning. We selected these constraints as the networks used are of medium size, and if a given method cannot work on these networks, it will not scale to larger networks, e.g., social networks with millions of nodes and tens of millions, or even billions, of edges without substantial adaptation of the method.
Network statistics and visualization are implemented as part of our Py3plex library \citep{py3plex,vskrlj2019py3plex}. Node ranking is implemented using sparse matrices from the SciPy module \cite{jones2014scipy} and the TensorFlow library \cite{abadi2016tensorflow}.
\subsection{DNR implementations}
\label{sec:implementations}
In this section we list the various DNR implementations tested.
\begin{itemize}
\item DNR denotes the implementation where embedding is first constructed and then used for classification.
\item DNRconv denotes the DNR approach with added initial convolution layer, consisting of 2 filters, kernel size of 8, and average pooling region of 2.
\item DNR-e2e denotes end-to-end variants of the algorithm, where the train-test splits used by logistic regression classifier for learning from embeddings are used to learn from P-PR vectors directly.
\item DNR (attention) is a variant of DNR exploiting the attention mechanism; we test only the unsupervised variant of this architecture.
\item autoDNR is the unsupervised variant of DNR.
\item autoDNRconv is the unsupervised variant of DNRconv.
\end{itemize}
The parameters of the DNR implementations, which can be subject to problem-specific tuning, are as follows:
\begin{itemize}
\item Number of epochs (i.e. the number of weight update cycles). For all experiments we set the maximum number of epochs to $20$. The exception is the DNR (attention) algorithm, for which initial experiments showed that more epochs are needed to obtain performance similar to other unsupervised methods; hence, for DNR (attention) we trained the model for $100$ epochs. Note that the DNR architectures were trained with a stopping criterion of 5 epochs, i.e. if the performance remained the same for five consecutive epochs, training was terminated.
\item Batch size. This parameter specifies how many P-PR vectors are fed into the architecture simultaneously. The computation speed highly depends on this parameter, as batches are processed in parallel. For all experiments, this parameter was set to $5$. For all DNR implementations we use the ReLU activation function with the threshold and alpha parameters set to 0 (i.e. a default setting).
\item P-PRS algorithm parameters:
\begin{itemize}
\item $\epsilon$. The error bound, which specifies the end of iteration, was set to $10^{-6}$.
\item Max steps. The maximum number of steps allowed during one iteration was set to $100{,}000$.
\item Damping factor. The probability that the random walker continues at a given step was set to $0.5$.
\item Spread step. The number of iteration steps allowed for the shrinking part was set to $10$.
\item Spread percent. Maximum percentage of the network to be explored during shrinking was set to $0.3$.
\end{itemize}
\end{itemize}
These DNR parameters were set based on the validation network (\emph{Homo Sapiens}), and are not optimized for each test network separately. As discussed, we intentionally do not perform hyperparameter tuning for each network, to showcase the general performance of the proposed algorithm.
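The core of the P-PRS computation with the listed defaults can be sketched as a standard power iteration (the shrinking-related parameters, spread step and spread percent, are omitted from this illustration):

```python
import numpy as np

def personalized_pagerank(A, i, damping=0.5, eps=1e-6, max_steps=100000):
    """Power iteration for the P-PR vector of seed node i."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / np.maximum(deg, 1.0)[:, None]    # row-stochastic transition matrix
    e = np.zeros(n)
    e[i] = 1.0                               # restart distribution
    r = e.copy()
    for _ in range(max_steps):
        r_new = (1.0 - damping) * e + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < eps:    # error bound ends the iteration
            return r_new
        r = r_new
    return r

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)       # a small star graph
r = personalized_pagerank(A, 0)              # exact solution: (2/3, 1/6, 1/6)
```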
\subsection{The baseline approaches}
We tested the proposed approach against ten different baselines outlined below. Nine of these are embedding algorithms, while Label Propagation is the only approach that performs classification directly without using embeddings.
\begin{itemize}
\item HINMINE (see Section \ref{sec:nr}) uses network decomposition with the P-PRS algorithm. The embedding produced by the original algorithm is a matrix of size $|N| \times |N|$.
\item HINMINE-PCA uses PCA to reduce the embedding obtained by HINMINE to 128 dimensions.
\item NNMF denotes non-negative matrix factorization approaches \citep{fevotte2011algorithms}, with which we explore how P-PR vectors can be compressed. We introduce these additional experiments so that PCA-based decomposition can be compared with the computationally more demanding NNMF---to our knowledge, we are the first to perform such experiments.
For this task, we use the following NNMF implementations, available as part of the Scikit-learn repository \citep{pedregosa2011scikit}.
\begin{itemize}
\item The NNMF(r) corresponds to random initialization.
\item NNMF(svd) denotes standard Nonnegative Double Singular Value Decomposition.
\item NNMF(svdar) is a variant of NNMF(svd), where zeros are replaced with small random numbers.
\item NNMF(svda) is a variant of NNMF(svd), where zeros are filled with the average of the input matrix.
\end{itemize}
\item LINE is one of the first network embedding algorithms \cite{tang2015line}.
\item node2vec implemented in C++, compiled on the machine where the benchmarks were evaluated. The same exhaustive search options were used as in the original paper \cite{grover2016node2vec}.
\item UMAP \cite{mcinnes2018umap} is a recently introduced dimensionality reduction method which leverages ideas from manifold theory; here it is applied to the embedding task. We used UMAP's default parameter setting.
\item Label Propagation (LP) \cite{zhu2002learning,kralj2017hinmine}.
\item Graph Attention Networks (GATn) \cite{velivckovic2018graph}. We employed the PyTorch-Geometric implementation \cite{Fey2019}. We trained the models for up to 1000 epochs with a stopping criterion of 100 epochs. Due to unstable performance, we report the best performance (best-scoring epoch). Further, as GATs were not initially implemented for multilabel classification, we extended them so that they minimize binary cross-entropy and output a sigmoid-activated space with the same cardinality as the number of targets (the multiclass version does not work for such problems). As this branch of models operates with additional features assigned to nodes, and the considered benchmark data sets do not possess such features, we used an identity matrix of the same dimension as the adjacency matrix as the feature space, thus expecting sub-optimal performance and rendering this a weak baseline. This algorithm was shown to outperform other variants of graph neural networks such as GCNs \cite{kipf2016semi}, and is thus the only GNN approach included as a baseline.
\end{itemize}
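The HINMINE-PCA baseline amounts to projecting the $|N| \times |N|$ P-PR matrix onto its leading principal components, which can be sketched via SVD (toy dimensions; the experiments use 128 components):

```python
import numpy as np

def pca_reduce(X, dim):
    """Project the rows of X onto its first `dim` principal components."""
    Xc = X - X.mean(axis=0)                    # centre the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T                     # the reduced embedding

rng = np.random.default_rng(0)
X = rng.random((20, 20))                       # toy |N| x |N| P-PR matrix
emb = pca_reduce(X, dim=5)                     # the experiments use dim = 128
```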
\section{Results}
\label{sec:results}
In this section, we present the empirical results and discuss their qualitative as well as quantitative aspects. We first present the results for the node classification task, followed by a qualitative evaluation of the proposed DNR algorithm for the task of node visualization.
\subsection{Classification performance: Statistical analysis of results}
\label{performance5}
We present the classification results in the form of critical difference (CD) diagrams, visualize them over various train percentages (ranging from 10\% to 90\% of the Evaluation Data, see Section~\ref{sec-setting}), and perform Bayesian comparison of the results. Figure \ref{fig:cd1} and Figure \ref{fig:cd2} present the performance of the algorithms in the form of critical difference diagrams \citep{demsar2006} (the interested reader can find full results in Table~\ref{tab-full-results} of Appendix~\ref{AppendixF1}).
In these diagrams, each algorithm's average rank is displayed along a horizontal line with marked ranks.
\begin{figure}[h]
\vspace*{-0.5cm}
\centering
\includegraphics[width=\linewidth]{macroFCD.pdf}
\vspace*{-1cm}
\caption{Macro $F_1$ critical difference diagram.}
\label{fig:cd1}
\end{figure}
\begin{figure}[h]
\vspace*{-0.5cm}
\includegraphics[width=\linewidth]{microFCD.pdf}
\vspace*{-1cm}
\caption{Micro $F_1$ critical difference diagram.}
\label{fig:cd2}
\end{figure}
In terms of Macro $F_1$, the best performing algorithm is DNR-e2e, a single hidden layer end-to-end Deep Node Ranking classifier.
It outperforms existing state-of-the-art algorithms by up to $8\%$.
In terms of the computed statistical significance, the DNR classifiers perform similarly to HINMINE and node2vec. In terms of Micro $F_1$, HINMINE-based classification is one of the best-performing approaches, indicating that the information from the whole network can offer similar results to simulated second order random walkers (node2vec). The results indicate that shallow neural network architectures are well suited for learning from complex networks, as they are comparable to the state-of-the-art embedding algorithms. This is aligned with the observation by \cite{Wang:2016:SDN:2939672.2939753} that shallow neural network architectures outperform deeper ones. This might be the result of locality of information in the used graphs, i.e. no long paths are important for the used classification task. Finally, the multilabel implementation of GATs (GATn) used in this work, where the feature space consists only of the diagonal entries (identity matrix), did not perform competitively with the other methods. The main reason for such behavior is the na\"ive treatment of structure by GNN-like approaches, indicating that simple neighborhood aggregation is not adequate for structure-only representation learning. Further, the multilabel variant considered in this work was also spatially expensive.
\subsection{Classification performance: Analysis on individual data sets}
\label{performance4}
Next, we present the results for each of the nine data sets, as shown in Figures~\ref{sp:macro} and \ref{sp:micro}. Note that the first data set (\emph{Homo sapiens}) was used for validation, hence the results obtained on this data set were not included in the construction of CD diagrams.
We observe that the label propagation (LP) algorithm strongly depends on the network's topology as its performance is subject to high variability. This behavior is expected, as the LP algorithm works by moving through the network na\"ively, meaning that its classifications can be biased to denser parts of networks. The (supervised) DNR embedding performs similarly to the state-of-the-art node2vec algorithm.
As stated previously, DNR-e2e, i.e. an architecture with a single hidden layer and end-to-end training, outperforms other approaches in terms of Macro $F_1$ score.
The HINMINE-PCA approach performs similarly to the original HINMINE methodology, indicating that raw node rank vectors can be reduced using fast PCA \cite{jolliffe2011principal} projections. As there exist batch variants of PCA algorithms \cite{mitliagkas2013memory}, the spatial complexity of the original HINMINE can also be reduced by using---similarly to the proposed DNR---minibatches of node rank vectors. We leave such experiments for further work. The Cora data set results are subject to the highest performance variability---the factorization-based approaches tend to be competitive with the other approaches only when larger portions of the embedding are used for prediction (rightmost part of the plot).
\begin{figure}[htbp]
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{macroline.png}
\caption{Classifier performance (macro $F_1$ scores).}
\label{sp:macro}
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{microline.png}
\caption{Classifier performance (micro $F_1$ scores).}
\label{sp:micro}
\end{figure}
The embedding-based approaches are more consistent---no large variation in classification performance is observed. We believe that this behavior is due to the efficient sampling schemes used for embedding construction. Finally, we observe that DNR (attention) performs similarly to, e.g., node2vec.
\subsection{Classification performance: Bayesian comparison of classifiers}
\label{performance3}
A recent methodological improvement in multiple classifier performance comparisons is the use of Bayesian tests \cite{bayesiantests2016}. We use the Bayesian variant of the hierarchical t-test to determine differences in performance of the compared classifiers\footnote{\url{https://github.com/BayesianTestsML/tutorial/blob/master/Python/Hierarchical\%20test.ipynb}}. This test samples pairs of results and is potentially more robust than the frequently used CD diagrams. Compared to CD diagrams, the considered Bayesian hierarchical t-test distinguishes between three scenarios: two scenarios where one classifier outperforms the other, and one in which the difference in classifier performance lies in the \emph{region of practical equivalence} (rope). The size of the rope region is a free parameter, p(rope), of the Bayesian hierarchical t-test.
As Bayesian multiple classifier comparisons cannot be intuitively visualized for more than two classifiers, we focus on the two best-performing embedding approaches and additionally compare the performance of node2vec against variants of the proposed DNR.
The results shown on Figure \ref{fig:cdBayes} were obtained from the differences of classifiers' performances, represented with percentages (10 repetitions). Here, we used the macro $F_1$ score for comparisons.
We subtracted the performance of the second classifier from that of the first. Negative differences indicate superior performance of the left classifier and vice versa. In the tests performed, we set the parameter p(rope) to $0.01$, meaning that two performances are considered the same if they do not differ by more than $0.01$.
Each green dot located within the triangles represents a sample obtained from the hierarchical model. As the sampling procedure is governed by the underlying data, green dots fall into one of three categories: classifier one dominates (left), classifier two dominates (right), or the difference in the classifiers' performance lies in the region of practical equivalence (up). Upon model convergence, the highest density of green dots corresponds to the highest probability of the observed classifier outperforming the other. Similarly to the critical difference diagrams, node2vec and the embedding-based DNR classifier perform similarly---the green dots in Figure \ref{fig:cdBayes}a are approximately uniformly distributed between the left and the right parts of the triangle. On the contrary, as the DNR-e2e algorithm outperforms node2vec, the highest density of green dots is located in the leftmost part of the triangle in Figure \ref{fig:cdBayes}b.
The sampled probability densities corresponding to differences in the classifiers' performances are finally aggregated into the probabilities of the three scenarios (classifier one wins, classifier two wins, or rope). The probabilities of classifier one and classifier two winning are given in the captions under the plots in Figure~\ref{fig:cdBayes}.
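Given samples of performance differences, this aggregation into the three scenario probabilities can be sketched as follows (a simplified Monte-Carlo summary; the hierarchical model that produces the samples is not reproduced here):

```python
import numpy as np

def rope_probabilities(diffs, rope=0.01):
    """Fraction of sampled differences in each scenario, following the
    sign convention in the text: the left classifier wins (diff < -rope),
    practical equivalence (|diff| <= rope), the right wins (diff > rope)."""
    diffs = np.asarray(diffs)
    return (float(np.mean(diffs < -rope)),
            float(np.mean(np.abs(diffs) <= rope)),
            float(np.mean(diffs > rope)))

p_left, p_rope, p_right = rope_probabilities([-0.05, -0.02, 0.005, 0.03])
```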
In general, this form of classifier comparison gives similar relations between the classifier as the critical difference diagrams in Figures \ref{fig:cd1} and \ref{fig:cd2}.
All other classifier comparisons are given in Appendix~\ref{appendix:comp}.
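The aggregation step above can be sketched as follows. This is a simplified illustration (not the hierarchical model itself), assuming the posterior samples of score differences are already available and using the same rope half-width of $0.01$; the function name and sign convention are ours.

```python
def rope_probabilities(diff_samples, rope=0.01):
    """Aggregate posterior samples of score differences into the three
    probabilities: left classifier wins, practical equivalence (rope),
    right classifier wins."""
    n = len(diff_samples)
    p_left = sum(1 for d in diff_samples if d < -rope) / n
    p_rope = sum(1 for d in diff_samples if -rope <= d <= rope) / n
    p_right = sum(1 for d in diff_samples if d > rope) / n
    return p_left, p_rope, p_right
```

Each returned probability corresponds to the density of green dots in one corner of the triangle plots.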
\begin{figure}[htbp]
\centering
\resizebox{0.87\textwidth}{!}{
\begin{tabular}{cc}
\subcaptionbox{node2vec (0.465) vs. DNR (0.482)}{\includegraphics[width = 2.5in]{bayesian_n2v_DNR.png}} &
\subcaptionbox{DNR-e2e (0.942) vs. node2vec (0.051)}{\includegraphics[width = 2.5in]{DNR-e2e_Node2vec.png}} \\
\subcaptionbox{DNR-e2e (0.932) vs. DNR (0.041)}{\includegraphics[width = 2.5in]{DNR-e2e_DNR.png}} &
\subcaptionbox{DNR (0.578) vs. autoDNR (0.351)}{\includegraphics[width = 2.5in]{DNR_autoDNR.png}} \\
\subcaptionbox{DNR (0.502) vs. DNRconv (0.192)}{\includegraphics[width = 2.5in]{DNR_DNRconv.png}} &
\subcaptionbox{autoDNR (0.811) vs. autoDNRconv (0.182)}{\includegraphics[width = 2.5in]{autoDNR_autoDNRconv.png}}
\end{tabular}}
\caption{Pairwise Bayesian performance comparisons of selected classifiers. The probabilities following classifier names represent the probabilities a given classifier outperforms the other.}
\label{fig:cdBayes}
\end{figure}
\subsection{Computation time analysis}
In this section, we present the results in terms of run times for node classification, shown in Figure~\ref{res:time2}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{meantime}
\vspace*{-0.5cm}
\caption{Learning times.}
\label{res:time2}
\end{figure}
We observe that label propagation is the fastest approach, which could be expected due to its simplicity and fast, BLAS-based routines. Next, the proposed DNR-e2e performs notably faster than the other embedding approaches. We believe the reason for this lies in the execution time of the logistic regression classifier. Further, the LINE algorithm remains one of the fastest embedding methods, which confirms previous results \cite{jiline2011}. Surprisingly, the PCA-reduced \mbox{P-PR} vectors are, even though computed entirely on CPUs, relatively fast. This result indicates that even a linear reduction of \mbox{P-PR} vector dimensionality can yield promising results both with respect to performance and to execution time. The matrix factorization approaches also perform well, which we believe is due to efficient factorization implementations. The DNR algorithm outperforms the original HINMINE and node2vec, indicating that DNR represents a viable embedding alternative also with respect to execution time (apart from the classification performance). Finally, the UMAP algorithm performs at the same scale as other unsupervised approaches, such as autoDNRconv, indicating that manifold projections could be a promising avenue for future research (to our knowledge, this study is one of the first to propose the use of UMAP for obtaining graph embeddings). Overall, we demonstrate that the DNR family of algorithms performs reasonably well when used as embedding constructors, and can be further sped up if considered in an end-to-end learning setting. Note that the attention-based variant of DNR is not included, as it took too long to finish given the large number of repetitions; its optimization is left for further work.
\subsection{Qualitative exploration}
The main objective of this section is to evaluate how different embedding methods capture the latent organization of the network in terms of node classes.
To demonstrate that unsupervised construction of the network embedding yields potentially interesting representations, we visualize them. For this purpose, we construct different embeddings with default parameters and embedding dimension 128 (with the exception of HINMINE, which constructs an embedding of size $|N| \times |N|$). We used the t-SNE projection \citep{maaten2008visualizing} to project the obtained embeddings to two dimensions. We plotted the obtained projections using the Py3plex library \citep{py3plex}, where class labels for individual nodes correspond to different colors. The generated visualizations of the Cora network are shown in Figure~\ref{embeddings_viz}.
\begin{figure}[htb]
\centering
\begin{tabular}{ccc}
\subcaptionbox{HINMINE-PCA}{\fbox{\includegraphics[width = 1.2in]{PCA_embedding}}} &
\subcaptionbox{HINMINE}{\fbox{\includegraphics[width = 1.2in]{HINMINE_embedding}}} &
\subcaptionbox{LINE}{\fbox{\includegraphics[width = 1.2in]{line_embedding}}} \\
\subcaptionbox{node2vec}{\fbox{\includegraphics[width = 1.2in]{n2v_embedding}}} &
\subcaptionbox{autoDNR}{\fbox{\includegraphics[width = 1.2in]{cora_embedding_trained.png}}} & \subcaptionbox{autoDNRconv}{\fbox{\includegraphics[width = 1.2in,height=0.88in]{conv_embedding_trained.png}}}
\end{tabular}
\caption{Visualizations of the Cora network. Individual classes correspond to different colors. No class information was used to train the embeddings to 128 dimensions. Vectors were reduced to two dimensions using t-SNE projections with default parameters. To obtain more distinct clusters, t-SNE's parameters can be further tuned.}
\label{embeddings_viz}
\end{figure}
HINMINE-based embeddings appear to encode node information in such a way that it corresponds well to node class labels. This observation implies that node ranks contain certain information about higher-order organization, as labeled by classes. On the contrary, no distinct separation was observed for the LINE and node2vec algorithms. As we tested only default t-SNE parameterizations, it is possible that additional tuning could produce a better separation or visualization. The produced visualizations of the DNR-based representations show that DNR (i.e. the shallow architecture) successfully groups the classes, yet some classes (e.g., green and teal points) remain scattered throughout the canvas.
\section{Conclusions and further work}
\label{sec:conclusions}
This paper presents a novel DNR approach to complex network node embedding and multilabel node classification. We demonstrate the scalability and the overall performance of the proposed approach, which is fast and performs better or comparably to the state-of-the-art approaches. We find that the best-performing DNR variation is a shallow architecture, learning labels in an end-to-end manner:
\begin{align*}
l_{2} &= \sigma(w_{2}(a(w_{\textrm{dim}}X+b_{l_{1}}))+b_{l_{2}});
\end{align*}
\noindent
This indicates that deeper architectures are possibly not needed for the selected tasks.
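The forward pass of this shallow architecture can be sketched as follows, assuming the activation $a(\cdot)$ is a ReLU and $\sigma$ the logistic sigmoid; the weight shapes and function name are illustrative.

```python
import numpy as np

def shallow_dnr_forward(X, W_dim, b1, W2, b2):
    """Forward pass of the shallow end-to-end architecture:
    sigmoid(W2 @ a(W_dim @ X + b1) + b2), with a(.) assumed to be ReLU."""
    hidden = np.maximum(0.0, W_dim @ X + b1)   # a(.) -- ReLU assumption
    logits = W2 @ hidden + b2
    return 1.0 / (1.0 + np.exp(-logits))       # elementwise sigmoid
```

With $X$ a node's \mbox{P-PR} vector, the output gives per-label probabilities for multilabel classification.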
Our node rank vector computation is friendly to parallel and distributed computations. Therefore, we believe that DNR is suitable for larger networks (i.e. $|N| \geq 10^{4}$). Our algorithm for network learning tasks is based on node ranking. We empirically show that it outperforms the state-of-the-art approaches, even though the network sampling only leverages first-order random walks. We observe that DNR performs comparably to node2vec, which uses information derived from second-order random walks.
This may offer an opportunity for further improvements with second- or higher-order random walks in the network sampling part.
Graph convolutional neural networks (GCN) also work well for the problem of end-to-end node classification, especially on block-diagonal adjacency matrix structures \cite{kipf2016semi}. The difference between GCNs and the approach proposed in this work is that, apart from network adjacency matrix and node classes, GCN algorithms require an additional feature matrix, which corresponds to properties of individual nodes. GCNs are thus useful only when features originating from different data sources can be assigned to individual nodes, e.g., the presence of individual words can be assigned as feature vectors to documents representing the network's nodes. As \mbox{P-PR} vectors can also be considered as features of individual nodes, we see the GCN family of algorithms as \emph{complementary} to the proposed DNR approach.
One of the unsolved problems in graph deep learning is the interpretability of models. For example, methods such as node2vec \cite{grover2016node2vec} and GCN-like models \cite{kipf2016semi} either construct embeddings by local graph crawling or aggregate the features of close neighbors. This means that the importance of individual nodes (e.g., for classifier performance) is masked by the embedding layer or a series of aggregations. Even though a similar problem emerges with the proposed DNR embedding algorithm, the DNR-e2e variant directly builds a classifier. This property is useful when a machine learning practitioner wishes to understand which nodes influenced the model's decisions. The recently introduced SHAP algorithm \cite{NIPS2017_7062}, based on coalitional game theory, offers a direct calculation of feature importance, which could be relevant here, e.g., when a practitioner attempts to understand the causality behind the learned patterns. We leave the evaluation of this idea on real-world data sets for further work.
The proposed neural network architecture is by no means the optimal solution. We believe that the number of parameters, needed to leverage the network's properties, can be significantly reduced whilst maintaining the classification performance, yet we leave such exploration for further work. For example, recent improvements in autoML approaches relevant for deep learning topology generation \cite{young2015optimizing} could provide better alternatives.
Let us summarize the possible further improvements of the DNR methodology:
\begin{enumerate}
\item DNR currently operates on homogeneous networks. On the other hand, the HINMINE methodology can also be used for decomposing heterogeneous networks thus taking into account different relations between the nodes. The proposed DNR algorithm is a natural extension to the embedding (propositionalization) step of the HINMINE methodology and is capable of exploiting properties of heterogeneous networks, which we demonstrate on the E-commerce data set. We leave more extensive learning tasks on heterogeneous networks for further work.
\item The current DNR implementation is based on the \mbox{P-PR} vectors, calculated for individual nodes. The input vectors used during learning are thus one dimensional. One could potentially extend individual rank vectors with other node features. The only change to the proposed DNR implementation would include three or more dimensional tensors as inputs, which would represent node features.
\item
As the current implementation of DNR leverages only first-order random walks, we believe its performance could be further improved by using higher-order Markov dynamics. As the state-of-the-art node ranking algorithms already incorporate such information, the next step for the DNR methodology is an extension to second- and higher-order random walks. One possible solution to this problem is to use the recently introduced Multilinear PageRank \cite{gleich2015multilinear}.
\item The \mbox{P-PRS} with shrinking algorithm used can be implemented on GPUs, which could offer additional speed-ups, especially for larger networks.
\item The embeddings obtained using DNR could serve as the input for a clustering algorithm, offering the opportunity for semantic enrichment using background knowledge, as demonstrated in \cite{CBSSD}.
\item In this work, we did not consider edge classification tasks, although every embedding can also be used to construct edge representations.
\end{enumerate}
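For reference, a single \mbox{P-PR} vector, the basic input of the DNR family, can be computed via plain power iteration with restarts, as sketched below. This is a textbook version, not the \mbox{P-PRS}-with-shrinking variant used in this work, and the damping factor is an assumed default.

```python
import numpy as np

def personalized_pagerank(A, source, alpha=0.85, iters=100):
    """P-PR vector for one source node via power iteration.
    A: adjacency matrix; alpha: damping factor (assumed default)."""
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    out_deg[out_deg == 0] = 1.0          # avoid division by zero on sink nodes
    P = A / out_deg[:, None]             # row-stochastic transition matrix
    r = np.zeros(n)
    r[source] = 1.0                      # restart (teleport) distribution
    p = r.copy()
    for _ in range(iters):
        p = alpha * (P.T @ p) + (1 - alpha) * r
    return p
```

Stacking such vectors for all nodes yields the $|N| \times |N|$ representation that DNR subsequently compresses into a low-dimensional embedding.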
We finally list some of our attempts which did not yield good enough performance to be included in the paper, yet they could be of potential use as a reference for future research.
\begin{enumerate}
\item \mbox{P-PR} vector discretization. We attempted to discretize the \mbox{P-PR} vectors prior to learning. We experimented with various discretizations of the interval $[0,1]$ in terms of bin sizes, which ranged from $0.0005$ to $0.01$. The results were significantly worse compared to using \mbox{P-PR} vectors directly.
\item Deeper architectures. We experimented with architectures with up to three hidden layers prior to the embedding layer. Performance notably decreased when more layers were added.
\end{enumerate}
\label{availability}
The DNR and the datasets will be freely accessible at \url{https://github.com/SkBlaz/DNR}.
\subsection*{\textbf{Acknowledgments}}
The work of the first author was funded by the Slovenian Research Agency through a young researcher grant (B\v{S}).
The work of other authors was supported by the Slovenian Research Agency (ARRS) core research program P2-0103 and P6-0411,
and research projects J7-7303, L7-8269, and N2-0078 (financed under the ERC Complementary Scheme).
\section{Introduction}
Recent years have witnessed a boom in versatile wireless devices and data-consuming mobile services. With the coming wave of a large number of new-generation mobile Internet devices, including tablets, smartphones, and smart wearables, future 5th generation (5G) mobile networks need to support the massive connectivity of wireless devices. Additionally, the emergence of new data-consuming services, such as virtual reality (VR), augmented reality (AR), high-definition (HD) video streaming, and cloud and fog computing services, has elevated traffic demands.
The ever-increasing traffic demands have raised the stakes on developing new access technologies to utilize limited spectrum resources. Non-orthogonal multiple access (NOMA), as an emerging multiple access (MA) technology for improving spectral efficiency, has recently obtained remarkable attention~\cite{dai2015non,boccardi2014five}. The innovative concept of NOMA is to serve multiple users in a single resource block, and thus is fundamentally different from conventional orthogonal multiple access (OMA) technologies, such as time division multiple access (TDMA) and orthogonal frequency division multiple access (OFDMA). Recently, a downlink version of NOMA, termed multiuser superposition transmission (MUST), has been included for the 3rd generation partnership project (3GPP) long term evolution (LTE) initiative~\cite{LTE2015}. Another NOMA technology, namely layer-division-multiplexing (LDM) has been accepted by the digital TV standard advanced television systems committee (ATSC) 3.0~\cite{zhang2016layered} for its efficiency in delivering multiple services in one TV channel.
Despite growing attempts and extensive efforts on NOMA, most studies have focused on the physical layer (PHY) or medium access control layer (MAC) performance, such as throughput and PHY latency, while few have systematically investigated its impact on user's perceived quality of experience (QoE). QoE is the perceptual quality of service (QoS) from the user's perspective~\cite{qoesurvey}. Given the context that the growth of future traffic demands is largely driven by the visual-experience-oriented services, such as VR, AR, and video streaming, there is broad consensus among leading industry and academic initiatives that improving user's QoE is a key pillar to sustaining the revenue of service providers in future networks~\cite{qoe}. {Despite this agreement on the importance of QoE, our understanding of how NOMA affects QoE is limited. The reason is that NOMA is a PHY/MAC access technology whose primary goal is to improve spectrum efficiency in the presence of massive connectivity, while QoE is an upper-layer concept related to user engagement and end-to-end quality.}
The goal of this article is to explore the upper-layer impact of NOMA on the user side, and to call attention to a clean-slate redesign of cross-layer NOMA frameworks for QoE provisioning. {The remainder of this article is structured as follows. In Section II, we start with a deep dive into NOMA system architectures, and then review QoE demands and the challenges of realizing QoE awareness in NOMA systems. Specifically, we shed light on QoE metrics and confounding factors in NOMA systems, as well as how lower-layer NOMA strategies affect user experience in the upper layers. Building on the implications of these QoE analyses, we propose a QoE-aware framework for NOMA systems in Section III. We build a model based on real-world datasets from a cellular service provider and present a case study on video streaming applications in Section IV. Merits of the QoE-awareness framework are verified, and implications for future QoE-aware NOMA schemes are provided in Section V.}
\section{Exploiting QoE Awareness in NOMA}
In this section, we describe the system architecture of NOMA from a cross-layer perspective, and discuss user experience issues that arise with NOMA. In particular, we highlight QoE demands and confounding factors in NOMA systems. We also investigate how the user experience in the upper layers is affected by NOMA in PHY/MAC.
\subsection{NOMA Premier}
\begin{figure}[t]
\centering
\includegraphics[width=5.5in]{system_new}
\caption{NOMA systems: a cross-layer perspective.}
\label{fig:system}
\end{figure}
The essential idea of NOMA is allocating non-orthogonal resources to multiple users by allowing controllable interference and tolerable decoding complexity at receivers. Basically, the gamut of NOMA technologies is divided into two categories, namely \emph{power-domain NOMA} and \emph{code-domain NOMA}. Power-domain NOMA multiplexes several users in the same subcarrier by employing superposition coding (SC) technology at transmitters. At receivers, successive interference cancelation (SIC) technology is applied such that multiple users can be distinguished via their different power levels~\cite{ding2015application}. Code-domain NOMA departs from power-domain NOMA in that it adopts multi-carrier operations. More particularly, it mainly relies on applying low-density or non-orthogonal sequence designs at transmitters over multiple carriers, then invoking joint detection technology such as message passing algorithms (MPA) at receivers for obtaining coding gains. Representative code-domain NOMA technologies include sparse coding multiple access (SCMA)~\cite{nikopour2013sparse}, pattern division multiple access (PDMA), and low density signature code domain multiple access (LDS-CDMA)~\cite{hoshyar2008novel}.
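The power-domain mechanism can be illustrated with a two-user downlink toy example. The sketch below assumes Shannon-capacity rate expressions and unit-normalized noise; the function and parameter names are ours. The far (weak-channel) user decodes its own signal directly, treating the near user's signal as interference, while the near user first removes the far user's signal via SIC.

```python
import math

def noma_two_user_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Achievable Shannon rates (bits/s/Hz) for two-user downlink
    power-domain NOMA with superposition coding and SIC."""
    # Far user: near user's superposed signal acts as interference.
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near user: far user's signal has been cancelled via SIC.
    r_near = math.log2(1 + p_near * g_near / noise)
    return r_near, r_far
```

Allocating more power to the far user (here $p_{\text{far}} > p_{\text{near}}$) is what makes its signal decodable first in the SIC chain.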
Fig.~\ref{fig:system} illustrates the system architecture of downlink NOMA from a cross-layer view. At the top level, content is delivered to users, who are in different contexts such as at home, in offices or shops, and in transportation. To this end, the content is first encoded into bit streams that are passed from the application layer to the lower layers, where packets for different users are scheduled and transmitted in a non-orthogonal manner. In particular, power-domain multi-carrier NOMA first partitions users into different clusters and assigns each cluster to one orthogonal subcarrier~\cite{ali2016dynamic}. Normally, effective user scheduling algorithms and power allocation approaches are employed. Regarding code-domain NOMA, which is originally multi-carrier based, appropriate codeword selection and power level adjustment at each subcarrier need joint consideration. Receivers perform corresponding decoding algorithms to extract their own information from the frames received at PHY. In visual-based applications, received data is buffered or played according to the downlink data rate, video quality, and display speed. Agnostic to all the above data transmissions and processes, users only sense and feel the services, basically the content delivered to and displayed on their own devices. We observe from Fig.~\ref{fig:system} that NOMA functionalities, including user clustering, packet scheduling, and power allocation, are the cornerstones of the upper-layer service quality delivered to end users. In the following sections, we investigate the user experience in the NOMA system, and take one step further to analyze the impact of NOMA functionalities on the user experience. {Architecturally, such a cross-layer design can be realized by following the software-defined networking (SDN) paradigm. In particular, a controller extracts the QoE requirements of different types of service from the upper layers.
All the collected information is forwarded to lower layers to guide traffic scheduling and resource allocation in NOMA.}
\begin{figure}[t]
\centering
\includegraphics[width=4.5in]{factor_metric_new}
\caption{Complex interdependencies between QoE metrics and confounding factors. Controllable parameters at PHY also have an impact on service quality and user engagement.}
\label{fig:metric}
\end{figure}
\subsection{QoE Metrics and Confounding Factors in NOMA Systems}
NOMA is conventionally evaluated using PHY connection-centric metrics such as throughput or sum-rate capacity. However, conventional connection-centric designs have become a barrier to meeting the diverse application requirements and the quality expectations of end users, especially for the rapidly expanding visual-experience-oriented services, such as VR, AR, and video streaming. Even though data rates and throughput are increasing, current mobile networks still suffer from poor user experience and low service quality~\cite{qoe}. The gap between PHY system performance and the higher-level user-perceived experience drives a paradigm shift from connection-centric designs to experience-centric designs. QoE is a metric that quantifies a user's perceived experience from a subjective perspective~\cite{qoesurvey}. To fully understand NOMA from the QoE perspective, we first investigate the complex relationship between QoE metrics and the confounding factors in NOMA that may affect user experience. Interdependencies between metrics and factors are depicted in Fig.~\ref{fig:metric}.
Various metrics can be used to quantify users' different subjective experiences. They are mainly divided into engagement metrics and quality metrics. Engagement metrics such as \textit{usage time} and \textit{number of visits} reflect user satisfaction and content popularity. Given a certain context and service content, user engagement largely depends on the service quality. The commonly used industry-standard quality metrics include \textit{peak signal-to-noise ratio (PSNR)} and latency-wise metrics such as \textit{join time} and \textit{stall events}. PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs in visual-based services. Join time represents the time it takes for the service to start offering content (such as video) after the user initiates a request; it indicates the start-up delay and thereby leads to different levels of satisfaction with the service. The number of stall events is a crucial indicator of end-to-end latency during service usage, and frequent stalls undermine user experience.
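As an illustration of the quality metrics above, the PSNR between a reference and a distorted frame can be computed with the textbook formula below, assuming 8-bit pixel intensities given as flat lists; the helper name is ours.

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized frames,
    given as flat lists of pixel intensities: 10*log10(MAX^2 / MSE)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")                # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```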
\textbf{Confounding factors}. There are many confounding factors impacting user engagement and service quality. They mainly include network conditions, hardware capabilities, context, and service attributes.
\begin{itemize}
\item \textbf{Network conditions} include traffic, channel, and interference conditions. Since bad network conditions significantly degrade user experience, many QoE models regard network conditions as one of the most influential factors for a user's QoE.
\item \textbf{Hardware capabilities}, including CPU processing ability, battery condition, and screen size, affect users' QoE demands as well as their perceived experience. For example, the CPU's processing capability largely determines the player's start-up delay, and users of low-end devices are more concerned with the CPU resources required by the service.
\item \textbf{Context} includes location, user's state (such as driving, walking, dining), and surrounding environments (such as office, coffee shop). Context also indirectly affects service quality as different contexts may result in different network conditions (such as interference and channel conditions).
\item \textbf{Service attributes} include type, content and popularity. In general, users tend to spend more time using services with their preferred attributes, which are valuable references for QoE evaluation.
\end{itemize}
\subsection{Understanding NOMA from the Perspective of QoE Awareness}
Conventionally, NOMA is proposed as a novel PHY/MAC multiple access paradigm and is designed to support massive connectivity with low latency and diverse service types. To fully reap the benefits of NOMA to improve end-to-end performance and user's perceived experience, we investigate how NOMA in PHY/MAC affects user experience in the upper layers.
\subsubsection{QoE Awareness in Transmission Pipeline}
The crux of NOMA is to assign multiple users to the same time/frequency/code resource units with tolerable interference. From the PHY-level perspective, two or more users are clustered to achieve maximal sum-rate capacity or throughput under fairness or latency constraints~\cite{ding2016impact}. However, from the user's perspective, this does not provide a QoE guarantee for each user. Recall that QoE demands are heterogeneous due to user preferences, hardware diversity, contextual differences, and so on; thus, data rate or throughput alone cannot ensure each user's QoE. To overcome this predicament, it is required to translate upper-layer demands and diversity into PHY objectives and constraints. Confounding factors that determine QoE metrics, such as service types and hardware conditions, should be considered when clustering users in NOMA. Users with low-end devices or high latency sensitivity can be clustered with delay-tolerant users with residual computational power.
Power allocation and packet scheduling schemes in NOMA should also be tailored to support QoE provisioning. Conventionally, {power-domain NOMA allocates different power levels to different users to achieve a good throughput-fairness tradeoff~\cite{ding2015application}. Code-domain NOMA, in turn, relies on sparse or low-density spreading sequences to multiplex users on multiple subcarriers for realizing overloading, with the aid of the so-called coding gain~\cite{nikopour2013sparse}.} From the QoE perspective, power allocation schemes should reap the benefits of NOMA without compromising any individual's QoE. Likewise, packet scheduling schemes in NOMA need to be reformulated to support QoE awareness. User engagement and quality metrics should be considered as constraints or objectives, and factors such as hardware, service type, and context information should be taken into account.
\subsubsection{QoE Awareness in Reception Pipeline}
At the receiver, a decoding algorithm with increased complexity is normally employed to cancel multi-user interference in NOMA. The extra cost in decoding NOMA packets may not be affordable to devices with limited remaining battery or computational capabilities, and thus may shorten usage time or cause CPU process delay that undermines user's QoE.
The decoding issue is more complex in power-domain NOMA, in which SIC is performed at receivers in descending order of signal-to-interference-plus-noise ratio (SINR). Once a receiver fails to decode a packet in SIC, the following packets become undecodable. On the one hand, multiplexing more users in the same resource unit can potentially achieve higher sum rates, thereby alleviating network congestion. With proper configurations and channel conditions, multiplexing more users creates greater potential to satisfy more users' QoE demands. On the other hand, it increases the probability of decoding failure and thus undermines the robustness of packet transmission. This issue becomes more severe in dynamic, unpredictable radio environments such as buses or trains. The sacrifice of robustness may lead to unstable service quality, such as stall events in video streaming. Additionally, it incurs substantially higher computational overhead at users with low SINR, which drains the battery and may incur processing delay on low-end devices. As a result, NOMA schemes are expected to consider the decoding capabilities at the user end and the corresponding QoE demands to select proper strategies.
\begin{figure}[t]
\centering
\includegraphics[width=6in]{NOMA_QoE_framework_new}
\caption{QoE-aware NOMA framework.}
\label{fig:framework}
\end{figure}
\section{QoE-Aware NOMA Framework}
In this section, we develop a cross-layer NOMA framework that provides QoE guarantee for the NOMA system.
\subsection{Overview}
Fig.~\ref{fig:framework} outlines the proposed QoE-aware framework that facilitates the NOMA system to schedule users and packets in order to fit user's diverse QoE demands. In particular, the key design components are described in a top-down fashion.
\begin{itemize}
\item \textbf{QoE Evaluation}. The function of QoE evaluation is to collect user data and estimate QoE demand for each user. User data consists of two parts: historical data and real-time data. The historical data contains complete datasets of user engagement, service quality, confounding factors and corresponding NOMA configurations and parameters. These datasets are used to train a QoE demand profile for each user indicating their preferences.
\item \textbf{Adaptive QoE mapper}. The adaptive QoE mapper translates QoE demands from users to objectives and requirements described by controllable system parameters in NOMA. The mapper is built based on the QoE model trained in QoE evaluation and the real-time network condition feedback. It interacts with the network protocol stack through a scheduler.
\item \textbf{Scheduler}. The scheduler acts as a control plane that derives feasible or optimal configurations to fulfill the requirements given by the adaptive QoE mapper. The configurations, {including power allocation, user clustering, queuing, and resource allocation strategies,} are tuned based on the updated QoE model as well as network condition feedback from users or measured by the transmitter.
\item \textbf{NOMA agent}. The NOMA agent resides atop PHY/MAC and acts as an interface between NOMA functionalities and the scheduler. It exposes commands from the scheduler to NOMA functionalities to perform user clustering/pairing, power allocation, and packet scheduling.
\end{itemize}
In the following two sections, we shed light on how to implement the above framework to achieve QoE guarantee for different users in NOMA with adaptive scheduling.
\subsection{Adaptive Mapping for QoE-Aware NOMA}
The design rationale of QoE-aware NOMA is top-down: we start with the user's perceived QoE and map QoE demands into system parameters that are taken as input to derive proper NOMA configurations. Since QoE can be quantified from different perspectives, we perform a data-driven QoE analysis based on user data collected by service providers. For each user, we identify a subset of factors that have the most significant impact on QoE while screening out insignificant factors to reduce dimensionality and noise. {To this end, we measure the importance of a factor using information gain, which quantifies how informative a feature is. Then, we rank all factors by information gain and select the top-$k$ factors.}
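The ranking step can be sketched as follows, assuming categorical factors and labels; the helper names are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H of a list of categorical labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def top_k_factors(factors, labels, k):
    """Rank named factors (dict of name -> value list) by information gain
    and keep the top k."""
    ranked = sorted(factors,
                    key=lambda name: information_gain(factors[name], labels),
                    reverse=True)
    return ranked[:k]
```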
Since QoE greatly depends on the user's personal preferences as well as the context, the QoE of different users can be quite disparate even for the same content and service quality. To take user disparity into account, we build a distinct QoE model for each user based on collective matrix factorization~\cite{singh2008relational}, which is used to discover the latent features underlying the interactions between user engagement and service quality. The users' QoE is represented by a matrix where each entry indicates the QoE of a user for a service. Similarly, user preferences, contexts, and attributes such as age, gender, and data plan form a user matrix; and a service matrix describes service attributes such as bitrates, codecs, and other system parameters in NOMA. Our key idea is to jointly factorize the QoE matrix, user matrix, and service matrix into low-rank matrices describing the latent factors in NOMA. The underlying intuition is that, by transforming both the user and service matrices into the same latent factor space, we can estimate the users' preferences for these latent factors and the content's score on these factors. We iteratively use all the data of services used by a user to train the user's preference vector, and feed all the data of users who have used the service into a model to learn the video score vector. Finally, the factorization results are combined in a weighted sum to establish a personalized QoE model that maps QoE demands to controllable parameters in the NOMA system.
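A simplified sketch of such a joint factorization is given below: a QoE matrix and a user-attribute matrix are factorized with shared user latent factors via gradient descent. This is a toy illustration of the idea, not the model of the cited work; the learning rate, rank, and regularization values are arbitrary assumptions.

```python
import numpy as np

def collective_mf(Q, A, rank=2, lr=0.01, iters=6000, lam=0.01, seed=0):
    """Jointly factorize the QoE matrix Q (users x services) and the
    user-attribute matrix A (users x attributes), sharing the user
    latent factors U, so that Q ~ U @ V.T and A ~ U @ W.T."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(Q.shape[0], rank))
    V = rng.normal(scale=0.1, size=(Q.shape[1], rank))
    W = rng.normal(scale=0.1, size=(A.shape[1], rank))
    for _ in range(iters):
        Eq = Q - U @ V.T                     # residual on the QoE matrix
        Ea = A - U @ W.T                     # residual on the attribute matrix
        U += lr * (Eq @ V + Ea @ W - lam * U)  # U is shared by both factorizations
        V += lr * (Eq.T @ U - lam * V)
        W += lr * (Ea.T @ U - lam * W)
    return U, V, W
```

Sharing $U$ is what lets information in the attribute matrix regularize the QoE predictions for users with sparse usage data.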
\subsection{QoE-Aware Scheduling in NOMA}
The adaptive QoE mapper translates users' QoE demands into system parameters and requirements that are controllable to the NOMA system. The scheduler takes these parameters and requirements as input to derive proper scheduling and configuration strategies for NOMA.
To sustain users' QoE demands over the long term, the scheduling strategies are derived by formulating the problem as a dynamic scheduling problem. We consider a scenario where the time of service $T$ is discretized into slots $t$. Our goal is to obtain an optimal scheme to cluster users in NOMA and schedule resources to minimize the QoE loss averaged over all time slots, while satisfying the QoE requirements of all the users in NOMA. For example, in a video streaming case, we minimize the time-averaged total quality loss of all video streams while maintaining long-term fluency by controlling buffered data to restrain the number of stall events. In each time slot, the scheduler checks the data queues and network conditions. Then, it derives the optimal scheduling strategy by making a joint decision to reallocate resources and update the queues.
\begin{figure}[t]
\centering
\includegraphics[width=5.5in]{scheduling}
\caption{QoE-aware scheduling in NOMA.}
\label{fig:schedule}
\end{figure}
To solve this problem, as shown in Fig.~\ref{fig:schedule}, the user clustering strategies need to be determined. Recall that confounding factors such as context, network conditions, and hardware capabilities largely determine the performance of user clustering from the QoE perspective. We set constraints for user clustering strategies based on the QoE requirements imposed by these confounding factors, and maximize $\sum_u q_u(t) \cdot p_u(t) - \omega \sum_u Q_u(t)$, where $q_u(t)$ is the data queue of user $u$ in slot $t$, $p_u(t)$ the {playing duration (the duration of the video being played by the user)} of the data received by user $u$ in slot $t$, $Q_u(t)$ the QoE loss of user $u$ in slot $t$, and $\omega$ the control parameter of the drift-plus-penalty term obtained from users' QoE preferences. In addition, we allocate resource units and power to clustered users. For illustration, we focus only on power-domain NOMA. In each slot, users are allocated different proportions of the total power $P$ in the same frequency/time/code domain so that users can perform SIC for packet decoding. Note that transmission power and channel conditions determine the data rates of clustered users in NOMA, which, together with the bitrate at the application layer, decide $p_u(t)$. We select the optimal power allocation and scheduling scheme that delivers services with the minimal QoE loss on average while providing a QoE guarantee for each user.
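For a two-user power-domain cluster, the per-slot decision can be sketched as below. The Shannon rate model, the use of rates as a stand-in for the served playing-duration term $p_u(t)$, and the grid search over power splits are simplifying assumptions of ours, not the article's exact formulation.

```python
import math

def noma_rates(p1, p2, g1, g2, noise=1e-3, bw=1.0):
    """Shannon rates for a 2-user power-domain NOMA cluster.
    User 1 (weaker channel gain g1) is decoded while treating user 2's
    signal as interference; user 2 cancels user 1's signal via SIC."""
    r1 = bw * math.log2(1 + p1 * g1 / (p2 * g1 + noise))
    r2 = bw * math.log2(1 + p2 * g2 / noise)
    return r1, r2

def schedule_slot(queues, gains, qoe_loss, omega, total_power=1.0, grid=20):
    """Drift-plus-penalty style decision: pick the power split maximizing
    sum_u q_u(t) * r_u - omega * sum_u Q_u(t) for the current slot."""
    best_split, best_val = None, -math.inf
    for i in range(1, grid):
        p1 = total_power * i / grid
        p2 = total_power - p1
        r1, r2 = noma_rates(p1, p2, gains[0], gains[1])
        val = (queues[0] * r1 + queues[1] * r2
               - omega * (qoe_loss(0, r1) + qoe_loss(1, r2)))
        if val > best_val:
            best_split, best_val = (p1, p2), val
    return best_split, best_val
```

Larger $\omega$ shifts the chosen split toward the QoE-loss term, while large backlogs $q_u(t)$ pull power toward the congested user.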
\begin{figure}[t]
\centering
\includegraphics[width=5in]{omega}
\caption{Performance under various values of $\omega$ in NOMA.}
\label{fig:omega}
\end{figure}
\section{Case Study on Video Streaming}
In this section, we evaluate the QoE-aware NOMA framework using data-driven simulations. We consider power-domain NOMA to illustrate the merits. To model QoE requirements, we collected a large-scale dataset from a tier-one cellular service provider in China containing traces of eight million users over eight months, including the complete raw IP flow traces in the core network and the user profile database. We build a reliable QoE model for each user to construct the QoE evaluation and adaptive QoE mapper components in the framework. To evaluate our scheduling scheme, we turn to simulations. {We consider a scenario where four users request video streaming services from one base station, whose transmission power is 20~dBm. Users are randomly distributed and move randomly within the radius of the base station. Rayleigh fading is used to model the wireless channel. We use the Sony Demo, which is encoded in MPEG4 with four different quality levels, as the video streaming data. In the video streaming service, the video data is divided into sequential chunks encoded at different bitrates and quality levels. In each slot, video chunks are encoded with different quality levels and buffered at the user side to sustain video playing.}
We compare our framework with a conventional framework that is oblivious to QoE demands. {In the QoE-oblivious NOMA system, resource allocation follows the max-sum-throughput rule, that is, the power allocation and channel access depend mainly on the network conditions. In contrast, QoE-aware NOMA considers users' QoE demands by minimizing the time-averaged total quality loss while maintaining long-term fluency through buffer control that restrains the number of stall events.} We evaluate the performance under various values of the control parameter $\omega$, which indicates how much users value service quality. {Note that the QoE-oblivious scheme does not depend on $\omega$ and is used as a reference to assess the merits of QoE awareness.}
\begin{figure}[t]
\centering
\includegraphics[width=5in]{bandwidth}
\caption{Performance under various bandwidths in NOMA.}
\label{fig:bandwidth}
\end{figure}
As shown in Fig.~\ref{fig:omega}, the PSNR of QoE-aware NOMA is higher than that of QoE-oblivious NOMA, and the performance gain of QoE-aware NOMA increases with the value of $\omega$. This reveals that QoE-aware NOMA better fits the diverse demands of users, in that it provides better service quality for users with higher quality preferences.
The performance of the framework is also influenced by network conditions, such as the available bandwidth. As illustrated in Fig.~\ref{fig:bandwidth}, both schemes achieve higher PSNR and fewer stall events with larger bandwidth. QoE-aware NOMA outperforms QoE-oblivious NOMA in all the cases demonstrated, which indicates that QoE awareness adapts better to changes in bandwidth.
\section{Concluding Remarks and Future Directions}
This article has envisioned the crucial role of QoE awareness in NOMA systems. Instead of merely focusing on the PHY/MAC system performance, QoE awareness exploits the interdependencies among NOMA functionalities, confounding factors, and the user's perceived experience. Through careful investigation of the interplay between upper-layer demands and lower-layer configurations, we have presented a QoE-aware NOMA framework and demonstrated its merits by conducting a video streaming case study using real-world traces and simulations. The observations from the case study offer implications for future designs of QoE provisioning in NOMA.
However, the study of QoE-aware NOMA is still in its infancy. There are many open problems that need further investigation.
\begin{itemize}
\item \textbf{Complexity-aware NOMA}. NOMA improves spectrum efficiency at the cost of increased decoding complexity at the receiver side~\cite{dai2015non}. For example, power-domain NOMA requires receivers to perform SIC for decoding. The extra computations induced by NOMA burden the user device and lead to extra processing delay. Such impacts are not considered in conventional NOMA designs, yet they may undermine the user's QoE. How to quantify these impacts and design complexity-aware NOMA is a new direction in designing QoE-aware NOMA.
\item \textbf{Energy-aware NOMA}. A typical application scenario of NOMA is 5G smart devices such as the Internet of things (IoT), wearables and smartphones with massive connectivity~\cite{ding2016mimo,wang2017spectrum}. These devices, however, are power-constrained. Users may care more about their energy consumption and consider battery life as a dimension of QoE. However, NOMA introduces extra computation and even communication overhead that consumes the user's battery. There will be new trade-offs in NOMA when energy consumption is considered in QoE. How to jointly consider energy consumption and service quality in QoE-aware NOMA is still an open problem.
\item \textbf{QoE-aware NOMA in different applications}. It has been shown in~\cite{ding2015application} that NOMA can be applied to different communication scenarios, including device-to-device (D2D), multiple-input-multiple-output (MIMO), and cooperative transmission. This article investigates the general interdependencies between QoE and NOMA, while there are still many design challenges and practical issues in realizing QoE-aware NOMA in these applications.
\item \textbf{Software-Defined NOMA}. {Recently, the software-defined network (SDN) has been deemed a new paradigm to improve spectrum management~\cite{wang2016software} from the network architecture perspective. NOMA offers new PHY opportunities to improve spectrum efficiency. NOMA and SDN can be jointly considered in future system designs to boost spectrum efficiency.}
\end{itemize}
\section*{Acknowledgment}
The research was supported in part by the National Science Foundation of China under Grant 61502114, 91738202, 61729101, and 61531011, Major Program of National Natural Science Foundation of Hubei in China with Grant 2016CFA009, 2015ZDTD012, the RGC under Contract CERG 16212714, 16203215, ITS/143/16FP-A.
\section{\@ifstar{\origsection*}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}}
\def\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}
\makeatother
\usepackage{amsmath,amssymb,amsthm}
\usepackage{verbatim}
\usepackage{mathabx}\changenotsign
\usepackage{mathrsfs}
\usepackage[babel]{microtype}
\usepackage{xcolor}
\usepackage[backref]{hyperref}
\hypersetup{
colorlinks,
linkcolor={red!60!black},
citecolor={green!60!black},
urlcolor={blue!60!black},
}
\usepackage{pgf,tikz}
\usepackage{mathrsfs}
\usetikzlibrary{arrows}
\usepackage{bookmark}
\usepackage[abbrev,msc-links,backrefs]{amsrefs}
\usepackage{doi}
\renewcommand{\doitext}{DOI\,}
\renewcommand{\PrintDOI}[1]{\doi{#1}}
\renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[english]{babel}
\numberwithin{equation}{section}
\linespread{1.3}
\usepackage{geometry}
\geometry{left=27.5mm,right=27.5mm, top=25mm, bottom=25mm}
\usepackage{enumitem}
\def\upshape({\itshape \roman*\,}){\upshape({\itshape \roman*\,})}
\def\upshape(\Roman*){\upshape(\Roman*)}
\def\upshape({\itshape \alph*\,}){\upshape({\itshape \alph*\,})}
\def\upshape({\itshape \Alph*\,}){\upshape({\itshape \Alph*\,})}
\def\nlabel{\upshape({\itshape \arabic*\,})}
\let\polishlcross=\ifmmode\ell\else\polishlcross\fi
\def\ifmmode\ell\else\polishlcross\fi{\ifmmode\ell\else\polishlcross\fi}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}
\let\emptyset=\varnothing
\let\setminus=\smallsetminus
\let\backslash=\smallsetminus
\makeatletter
\def\mathpalette\mov@rlay{\mathpalette\mov@rlay}
\def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen
\ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}}
\newcommand{\charfusion}[3][\mathord]{
#1{\ifx#1\mathop\vphantom{#2}\fi
\mathpalette\mov@rlay{#2\cr#3}
}
\ifx#1\mathop\expandafter\displaylimits\fi}
\makeatother
\newcommand{\charfusion[\mathbin]{\cup}{\cdot}}{\charfusion[\mathbin]{\cup}{\cdot}}
\newcommand{\charfusion[\mathop]{\bigcup}{\cdot}}{\charfusion[\mathop]{\bigcup}{\cdot}}
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareMathSymbol{\powerset}{\mathord}{MnSyC}{180}
\usepackage{tikz}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{calc,positioning,decorations.pathmorphing,decorations.pathreplacing}
\makeatletter
\def\namedlabel#1#2{\begingroup
#2%
\def\@currentlabel{#2}%
\phantomsection\label{#1}\endgroup
}
\makeatother
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{fact}[theorem]{Fact}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{clm}[theorem]{Claim}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conj}[theorem]{Conjecture}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{alg}[theorem]{Algorithm}
\newtheorem{exmp}[theorem]{Example}
\newtheorem{prob}[theorem]{Problem}
\newtheorem{quest}[theorem]{Question}
\theoremstyle{remark}
\newtheorem{rem}[theorem]{Remark}
\newtheorem{note}[theorem]{Note}
\def\equaldef{\smash{\displaystyle
\mathrel{\mathop{=}\limits^{\hbox{\rm\tiny def}}}}}
\def\mathop{\textrm{\rm tr}}\nolimits{\mathop{\textrm{\rm tr}}\nolimits}
\def\mathop{\text{\rm left}}\nolimits{\mathop{\text{\rm left}}\nolimits}
\def\mathop{\text{\rm right}}\nolimits{\mathop{\text{\rm right}}\nolimits}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\let\eps=\varepsilon
\let\theta=\vartheta
\let\rho=\varrho
\let\phi=\varphi
\def\mathds N{\mathds N}
\def\ZZ{\mathds Z}
\def\mathds Q{\mathds Q}
\def\mathds R{\mathds R}
\def\mathds P{\mathds P}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\cE}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\cG}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\cS}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\cJ}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\tilde{S}}{\tilde{S}}
\begin{document}
\title{Embedding Hypertrees into Steiner Triple Systems}
\author[Bradley Elliott]{Bradley Elliott}
\address{Department of Mathematics and Computer Science,
Emory University, Atlanta, GA 30322, USA}
\email{bradley.elliott@emory.edu}
\author[Vojt\v{e}ch R\"{o}dl]{Vojt\v{e}ch R\"{o}dl}
\address{Department of Mathematics and Computer Science,
Emory University, Atlanta, GA 30322, USA}
\email{rodl@mathcs.emory.edu}
\thanks{The second author was supported by NSF grant DMS 1764385.}
\dedicatory{ To Charlie Colbourn and Alex Rosa on the occasion of their round birthdays}
\begin{abstract}
In this paper we are interested in the following question: Given an arbitrary Steiner triple system $S$ on $m$ vertices and any 3-uniform hypertree $T$ on $n$ vertices, is it necessary that $S$ contains $T$ as a subgraph provided $m \geq (1+\mu)n$?
We show the answer is positive for a class of hypertrees and conjecture that the answer is always positive.
\end{abstract}
\keywords{hypergraph, tree, Steiner Triple System}
\maketitle
\section{Introduction}
The well-known Tree Packing Conjecture of Gy\'arf\'as and Lehel (\cite{GL}) states that any arbitrary set of trees $T_2,T_3, \cdots, T_{m}$ where tree $T_i$ has order $i$ can be packed into the complete graph $K_m$.
This conjecture remains open despite many partial results since its statement in 1976.
One such result by Bollob\'as (\cite{Bol}) shows that any arbitrary set of trees $T_2, T_3, \cdots, T_s$ can be packed into $K_m$ when $3\leq s < \frac{m}{\sqrt{2}}$.
Inspired by this conjecture, Peter Frankl (personal communication) asked a similar question regarding hypertrees and Steiner triple systems.
A \emph{hypertree} is a connected, simple 3-uniform hypergraph in which every two vertices are joined by a unique path.
Note that any hypertree must have odd order, and in particular any hypertree with size $s$ has order $2s+1$.
A Steiner triple system is a 3-uniform hypergraph in which every pair of vertices is contained in exactly one edge.
\begin{quest}[Frankl]\label{q:Frankl}
What is the largest value of $s$ so that any $s$ hypertrees $T_3, T_5, T_7, \cdots, T_{2s+1}$ can be packed into any Steiner triple system $S$ on $m$ vertices?
\end{quest}
Clearly $s \leq \frac{m-1}{2}$, since no hypertree can have order greater than that of $S$.
A greedy argument easily yields that any hypertree with at most $\frac{m+1}{4}$ edges (and thus $\frac{m+3}{2}$ vertices) can be embedded into any $S$.
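The greedy argument can be sketched in code as follows; the edge ordering (each processed hyperedge attaches to an already embedded vertex, e.g.\ BFS from a root edge) and the data structures are our own illustrative choices.

```python
def greedy_embed(tree_edges, sts_edges):
    """Greedily embed a hypertree into a Steiner triple system.
    tree_edges lists hyperedges (a, b, c) in an order such that vertex a
    is already embedded, except for the very first edge, whose vertex a
    may be placed anywhere."""
    incident = {}                      # STS edges through each point
    for e in sts_edges:
        for v in e:
            incident.setdefault(v, []).append(e)
    root = tree_edges[0][0]
    phi = {root: sts_edges[0][0]}      # partial embedding; root placed arbitrarily
    used = set(phi.values())
    for a, b, c in tree_edges:
        for e in incident[phi[a]]:
            rest = [v for v in e if v != phi[a] and v not in used]
            if len(rest) == 2:         # found an STS edge with two fresh points
                phi[b], phi[c] = rest
                used.update(rest)
                break
        else:
            return None                # greedy step failed
    return phi
```

Since every point of an $m$-vertex Steiner triple system lies in $\frac{m-1}{2}$ edges, the greedy step only fails once too many points are already used, which is what limits this argument to trees with at most $\frac{m+1}{4}$ edges.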
Following from this, Frankl showed using a method similar to \cite{Bol} that if $s= \frac{m+1}{4}$, then any $s$ hypertrees $T_3, T_5, T_7, \cdots, T_{2s+1}$ can be packed into any $S$ on $m$ vertices.
It is however less clear how to embed larger trees.
This paper then is motivated by the following question:
\begin{quest}\label{q:embed}
Given a Steiner triple system $S$ on $m$ vertices and a hypertree $T$ on $2s+1$ vertices, where $2s+1 < m$, can one find $T$ in $S$ as a subhypergraph?
\end{quest}
For convenience let $n=2s+1$ be the order of $T$.
One can find examples of pairs $T$ and $S$ showing that if $n$ equals $m$ then the answer is negative.
For example, for $s\geq 3$ let $T$ be the hypertree with edges $\{u, v_i, w_i \}$ for $1 \leq i \leq s-1$ and also the edge $\{w_1, x, y \}$.
One can easily check that $T$ is not contained in any Steiner triple system $S$ on $2s+1$ vertices.
In fact, even the weaker assumption $n < m$ is likely not sufficient to guarantee the embedding of every hypertree on $n$ vertices into every Steiner triple system on $m$ vertices.
Consequently we start with the following more modest conjecture.
\begin{conj}\label{conjec}
Given $\mu > 0$, there exists $n_0 = n_0(\mu)$ such that if $n > n_0$, $T$ is any hypertree on $n$ vertices, and $S$ is any Steiner triple system on $m \geq n(1+\mu)$ vertices, then $T$ is a subhypergraph of $S$.
\end{conj}
Note that if the hypertree $T$ is replaced by a matching, the analogous result is true -- in other words, every Steiner triple system contains an almost perfect matching.
Various generalizations of this fact are known and we mention some in Section 5.
Unfortunately, we are unable to resolve even Conjecture~\ref{conjec} and will address only a specific case. Here we consider a special class of trees.
\begin{definition}\label{def:subdiv}
A \emph{subdivision tree} $T$ is a hypertree in which each edge contains a vertex of degree one.
\end{definition}
Equivalently, $T$ can be obtained from a graph tree $T'$ by subdividing each edge $\{x,y\}$ of $T'$ by a vertex $z_{xy}$ and setting
$$V(T) = V(T') \cup \{z_{xy} : \{x,y\} \in E(T')\}$$
$$E(T) = \{ \{ x,y,z_{xy}\} : \{x,y\} \in E(T') \}.$$
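This construction can be made concrete in a short sketch (the encoding of the subdivision vertices is an illustrative choice):

```python
def subdivide(tree_edges):
    """Form the 3-uniform subdivision hypertree of a graph tree: every
    edge {x, y} is subdivided by a fresh vertex z_xy, yielding the
    hyperedge {x, y, z_xy}; each hyperedge thus contains a degree-one
    vertex."""
    vertices = {v for e in tree_edges for v in e}
    hyperedges = []
    for x, y in tree_edges:
        z = ('z', x, y)                # the subdivision vertex z_xy
        vertices.add(z)
        hyperedges.append(frozenset({x, y, z}))
    return vertices, hyperedges
```

A graph tree with $s$ edges has $s+1$ vertices, so the resulting hypertree has size $s$ and order $2s+1$, matching the count noted in the introduction.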
We say that a hypertree $T$ has bounded degree $d$ if no vertex of $T$ has degree greater than $d$.
\begin{theorem}[Main Theorem]\label{thm:main}
For any $d\in \mathbb{Z}^+$ and any $\mu > 0$ such that $\frac{1}{d} \gg \mu$, there exists $n_0=n_0(d,\mu)$ such that for all odd $n>n_0$, any subdivision tree $T$ on $n$ vertices with bounded degree $d$ is a subhypergraph of any Steiner triple system $S$ on $m\geq(1+\mu)n$ vertices.
\end{theorem}
We will divide the proof into four parts.
First, we will decompose the hypertree $T$ into smaller subhypertrees by removing some edges in the shape of stars and some isolated vertices.
The vast majority of $V(T)$ will be contained in the subhypertrees after the decomposition.
We will keep track of which stars we remove while decomposing $T$ so that we can restore them later.
Second, we will show that given a set of at most $d$ vertices in the Steiner triple system $S$, we can find many stars in $S$ that all contain the vertices but that are otherwise pairwise disjoint.
These stars in $S$ are the candidates for where to eventually embed the stars we removed from $T$ in part 1.
Third, we will fix a subset of the vertices of $S$, called the reservoir.
The reservoir is where the isolated vertices from part one will eventually be embedded.
Lastly, we will embed the subhypertrees from part one into the Steiner triple system $S$, though we will avoid using the reservoir.
Then we will use the reservoir to embed the isolated vertices and stars removed in part one.
The constants used in the proof of Theorem~\ref{thm:main} obey the following hierarchy.
\begin{equation}\label{eq:hier}
1 > \frac{1}{d} \gg \mu \gg \epsilon \gg \frac{1}{k} \gg \frac{1}{t} > \frac{1}{k3^k} \gg \frac{1}{l} \gg \frac{1}{n} > \frac{1}{m},
\end{equation}
where $d$, $\mu$, $n$, and $m$ are as stated in the theorem and the others are defined when needed.
The reader may think about the constants $d$, $\mu$, $\epsilon$, $k$, and $t$ as being fixed while $l$, $n$, and $m$ are tending together to infinity.
\section{Decomposing the Hypertree}
We will decompose $T$ into a set $\mathcal{P}$ of subhypertrees because the smaller hypertrees will be simpler to embed into $S$.
In the proof of Theorem~\ref{thm:main} we will describe how the embedded subhypertrees can be reassembled to form a copy of $T$ in $S$.
We will need the following definition throughout this paper.
\begin{definition}\label{def:star}
A \emph{star} $S$ in a 3-uniform hypergraph $G$ is a set of edges $\{v_i,w_i,u\}\in E(G)$, $1\leq i\leq c = deg_G(u)$.
All vertices $v_i, w_i, u$, $1\leq i \leq c$ must be distinct, so that any two edges intersect precisely at $u$, which we call the \emph{center} of the star.
\end{definition}
The following lemma describes the result of the decomposition process.
\begin{lemma}\label{lemma:sawing}
Let $T$ be any subdivision tree on $n$ vertices with bounded degree $d\ll n$, and let $k$ be any integer with $d \ll k \ll n$.
Then there exists a system $\cE$ of $e$ stars $E_j = \{\{v_{j,i},w_{j,i},u_{j}\}, 1\leq i \leq deg(u_j)\} \subset E(T)$, for $j=1,\ldots,e$, such that by removing all of the edges of the stars from $T$, $T$ is decomposed into
\begin{itemize}
\item a set $I$ of isolated vertices, and
\item a set $\mathcal{P}$ of $l$ subhypertrees
\end{itemize}
with the following properties.
\begin{enumerate}
\item\label{sawing2} $k \geq \vert V(P) \vert \text{, for any } P \in \mathcal{P}$.
\item\label{sawing1} $\left( \frac{2d^2}{k}\right) n \geq \vert I \vert$.
\item\label{sawing3} $\vert I \vert \geq \vert \mathcal{P} \vert = l$.
\item\label{sawing4} $ l \geq \vert \cE \vert = e $.
\item\label{sawing5} $ l \geq \frac{n}{k + 3}$.
\item\label{sawing6} $I = \bigcup_{j=1}^e \{w_{j,1}, w_{j,2}, \ldots, w_{j,deg(u_j)}, u_j\}$, and $v_{j,i} \not \in I$ for all $i,j$.
\end{enumerate}
\end{lemma}
\begin{note}
Some of the subhypertrees in $\mathcal{P}$ may contain just a single vertex, but for technical reasons they will still be considered as elements of $\mathcal{P}$ and not of $I$.
\end{note}
Before we prove Lemma~\ref{lemma:sawing}, we introduce some terminology.
We fix some vertex of degree at least 2 to be the root of $T$.
We say two vertices are \emph{adjacent} if they belong to the same edge of $T$.
Define a \emph{leaf} on $T$ to be any degree-one vertex that is adjacent to another degree-one vertex.
Borrowing the terminology of a family tree, for a vertex $v$ of $T$, we say that the \emph{father} of $v$ is the neighbor of $v$ that lies on the path from $v$ to the root.
Likewise, we say that a \emph{son} of $v$ is any neighbor of $v$ whose path to the root passes through $v$.
Note that the root has no father and leaf vertices have no sons.
Define a \emph{branch} of $T$ to be a sequence $\{ b_h\}_{h=1}^L$ of vertices in $T$ where $b_1$ is the root, $b_{h+1}$ is a son of $b_h$, and $b_L$ is a leaf.
We say that the \emph{progeny} of a vertex $v$ is the set of all vertices whose paths to the root must pass through $v$.
That is, the progeny of $v$ is the set of all of $v$'s sons, their sons, their sons, etc.
We say $v$ is included in the set of its own progeny.
Lastly, we say that a vertex is \emph{celibate} if it is the only degree-one vertex in its edge.
If an edge has two degree-one vertices, we choose exactly one of them to call celibate.
In this way, every edge has exactly one celibate vertex.
During the decomposition, we will use the distinction between celibate and non-celibate vertices to decide whether a single vertex should be added to $\mathcal{P}$ (as a subhypertree) or to $I$ (as an isolated vertex).
\begin{proof}[Proof of Lemma~\ref{lemma:sawing}]\label{proof:sawing}
Choose any vertex with degree at least two to be the root of $T$ and decide which vertices to call celibate.
Create empty sets $I$, $\mathcal{P}$, and $\cE$, which will be used to store isolated vertices, subhypertrees, and stars (respectively) as $T$ is decomposed.
We assign a proper coloring to $V(T)$ in the following way.
Color all celibate vertices blue.
Color the root red.
For every remaining uncolored vertex, color it red if its father is blue, and blue if its father is red.
In this coloring, every edge has exactly one red and two blue vertices.
A necessary (but not sufficient) condition for a star to belong to $\cE$ will be that its center is red.
This ensures that the centers of two stars are never adjacent vertices, which will be important as we reassemble $T$ in $S$.
To construct $\cE$, and with it $\mathcal{P}$ and $I$, repeat the following ``sawing'' procedure, each iteration of which will remove one or more stars, subhypertrees, and isolated vertices from $T$.
What remains of $T$ at the beginning of the $j^{th}$ iteration is called $H_j$, where $H_1 = T$.
To simplify the notation throughout the proof, we will drop the index $j$ and will write $v_i = v_{j,i}$, $w_i = w_{j,i}$, and $u = u_j$, as well as $b_h = b_{j,h}$, whenever it is clear from the context that $j$ is fixed.
\begin{enumerate}
\item[(a)] Let $\{b_{h}\}$ be a branch of $H_j$, where $b_{h+1}$ is the son of $b_{h}$ that has the most vertices in its progeny.
Let $b_{x}$ be the last vertex in this sequence that has more than $k$ vertices in its progeny.
If $b_x$ is red, let $u = b_{x}$.
If $b_x$ is blue, let $u = b_{x+1}$.
In either case, $u$ is red and has at least $\frac{k-d}{d-1}$ vertices in its progeny.
We will ``saw'' around the vertex $u$.
\item[(b)] Let $E_j = \{\{v_{i},w_{i},u\}$, $1 \leq i \leq deg(u)\}$ be the star centered at $u$.
Label the vertices adjacent to $u$ such that $w_{i}$ is the celibate vertex in each edge and $v_{1}$ is the father of $u$.
Figure~\ref{beforesaw} shows how all of the vertices around $u$ should be labeled.
Add $E_j$ to $\cE$ and let $H_{j}' = H_j \setminus E_j$.
Figure~\ref{duringsaw1} shows as dotted triangles which edges are removed to form $H_{j}'$.
\item[(c)] Removal of $E_j$ from $H_j$ results in some vertices and subhypergraphs in $H_{j}'$ not being connected to the root, as shown in Figure~\ref{duringsaw2}.
Specifically, $u$ is now isolated, as are the celibate vertices $w_{i}$, $1 \leq i \leq deg(u)$ (because all celibate vertices of $T$ are originally contained in just one edge).
Add these vertices $u$ and $w_{i}$ to $I$.
\item[(d)] For $i\geq 2$, the vertices $v_{i}$ are not connected to the root in $H_j'$.
(Note that $v_{1}$ is the father of $u_j$, so there is still a path from $v_{1}$ to the root.)
Let $P_{j,i}$ be the connected component containing $v_{i}$ for $i \geq 2$, and add each $P_{j,i}$ (even if it is just a single vertex) to $\mathcal{P}$.
\item[(e)] Define
$$H_{j+1} = H_j' \setminus \{u\} \setminus \{w_{i}\}_{i=1}^{deg(u)} \setminus \{P_{j,i}\}_{i=2}^{deg(u)}.$$
That is, $H_{j+1}$ is the connected component of $H_j'$ that contains the root.
Figure~\ref{aftersaw} shows that after the isolated vertices and disconnected subhypergraphs are removed from $H_{j}'$, we are left with a smaller hypertree still containing $v_{1}$.
This smaller hypertree is $H_{j+1}$.
\end{enumerate}
There are two cases that can cause this procedure to end.
First, at some point the root could be the only vertex with more than $k$ vertices in its progeny, so the root becomes $u$.
By ``sawing'' around $u$ and removing all of the isolated vertices and components not connected to the root, we remove every vertex and edge.
This completes the decomposition.
In this case, notice that the number of completed iterations of the sawing procedure is $e = \vert \cE \vert$, since exactly one star is added to $\cE$ with each iteration.
Second, at some point there could be no vertex with more than $k$ vertices in its progeny.
If this occurs, add the entire remaining tree $H_j$ to $\mathcal{P}$.
In this case, notice again that the number of completed iterations of the sawing procedure is $e = \vert \cE \vert$.
This implies the following fact.
\begin{fact}\label{prop1}
Either $H_{e+1}$ does not exist, or $H_{e+1}$ is a hypertree with at most $k$ vertices and is a member of $\mathcal{P}$.
\end{fact}
\begin{figure}[h]
\begin{minipage}{0.46\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.7cm]
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.,1.) -- (4.2,-1.) -- (4.8,-1.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.,1.) -- (5.7,-1.) -- (6.3,-1.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.,1.) -- (7.2,-1.) -- (7.8,-1.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.,1.) -- (5.4,1.) -- (4.1,3.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (3.8657942712621005,-2.9859105973373743) -- (4.41898751506418,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (5.,-3.) -- (5.6114262850375525,-3.010496963728578) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (7.,-3.) -- (7.60292196272504,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (8.15611520652712,-3.010496963728578) -- (8.758481183111607,-3.010496963728578) -- cycle;
\draw [line width=2.pt,color=zzttqq] (6.,1.)-- (4.2,-1.);
\draw [line width=2.pt,color=zzttqq] (4.2,-1.)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (6.,1.)-- (5.7,-1.);
\draw [line width=2.pt,color=zzttqq] (5.7,-1.)-- (6.3,-1.);
\draw [line width=2.pt,color=zzttqq] (6.3,-1.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (6.,1.)-- (7.2,-1.);
\draw [line width=2.pt,color=zzttqq] (7.2,-1.)-- (7.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (6.,1.)-- (5.4,1.);
\draw [line width=2.pt,color=zzttqq] (5.4,1.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (3.8657942712621005,-2.9859105973373743);
\draw [line width=2.pt,color=zzttqq] (3.8657942712621005,-2.9859105973373743)-- (4.41898751506418,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (4.41898751506418,-2.998203780532976)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (5.,-3.);
\draw [line width=2.pt,color=zzttqq] (5.,-3.)-- (5.6114262850375525,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (5.6114262850375525,-3.010496963728578)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (7.,-3.);
\draw [line width=2.pt,color=zzttqq] (7.,-3.)-- (7.60292196272504,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (7.60292196272504,-2.998203780532976)-- (7.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (8.15611520652712,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.15611520652712,-3.010496963728578)-- (8.758481183111607,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.758481183111607,-3.010496963728578)-- (7.8,-1.);
\begin{scriptsize}
\draw [fill=qqqqff] (6.,1.) circle (2.5pt);
\draw[color=qqqqff] (6.306823073590519,1.062826693367364) node {$u$};
\draw [fill=qqqqff] (4.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (3.7011987775393953,-0.9356618443805933) node {$w_{2}$};
\draw [fill=qqqqff] (4.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.117848880246802,-0.9019320800304167) node {$v_{2}$};
\draw [fill=qqqqff] (5.7,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.573200698974183,-1.188635077006917) node {$w_{3}$};
\draw [fill=qqqqff] (6.3,-1.) circle (2.5pt);
\draw[color=qqqqff] (6.332120396853152,-1.188635077006917) node {$v_{3}$};
\draw [fill=qqqqff] (7.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (7.057310330381943,-1.188635077006917) node {$w_{4}$};
\draw [fill=qqqqff] (7.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (8.153527671762674,-0.9356618443805933) node {$v_{4}$};
\draw [fill=qqqqff] (5.4,1.) circle (2.5pt);
\draw[color=qqqqff] (4.94920005849592,1.05439425227982) node {$w_{1}$};
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.392658946718011,3.162504524165851) node {$v_{1}$};
\draw [fill=qqqqff] (3.8,5.) circle (2.5pt);
\draw[color=qqqqff] (4.4179562699806425,5.312777001489602) node {toward root};
\end{scriptsize}
\end{tikzpicture}
\caption{$H_j$ with the vertices of the star labeled\label{beforesaw}}
\end{minipage}
\begin{minipage}{0.46\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.7cm]
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (4.2,-1.) -- (4.8,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (5.7,-1.) -- (6.3,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (7.2,-1.) -- (7.8,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (5.4,1.) -- (4.1,3.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (3.8657942712621005,-2.9859105973373743) -- (4.41898751506418,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (5.,-3.) -- (5.6114262850375525,-3.010496963728578) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (7.,-3.) -- (7.60292196272504,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (8.15611520652712,-3.010496963728578) -- (8.758481183111607,-3.010496963728578) -- cycle;
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (4.2,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.2,-1.)-- (4.8,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.8,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (5.7,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (5.7,-1.)-- (6.3,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.3,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (7.2,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (7.2,-1.)-- (7.8,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (7.8,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (5.4,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (5.4,1.)-- (4.1,3.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.1,3.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (3.8657942712621005,-2.9859105973373743);
\draw [line width=2.pt,color=zzttqq] (3.8657942712621005,-2.9859105973373743)-- (4.41898751506418,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (4.41898751506418,-2.998203780532976)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (5.,-3.);
\draw [line width=2.pt,color=zzttqq] (5.,-3.)-- (5.6114262850375525,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (5.6114262850375525,-3.010496963728578)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (7.,-3.);
\draw [line width=2.pt,color=zzttqq] (7.,-3.)-- (7.60292196272504,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (7.60292196272504,-2.998203780532976)-- (7.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (8.15611520652712,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.15611520652712,-3.010496963728578)-- (8.758481183111607,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.758481183111607,-3.010496963728578)-- (7.8,-1.);
\begin{scriptsize}
\draw [fill=qqqqff] (6.,1.) circle (2.5pt);
\draw[color=qqqqff] (6.306823073590519,1.062826693367364) node {$u$};
\draw [fill=qqqqff] (4.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (3.7011987775393953,-0.9356618443805933) node {$w_{2}$};
\draw [fill=qqqqff] (4.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.117848880246802,-0.9019320800304167) node {$v_{2}$};
\draw [fill=qqqqff] (5.7,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.573200698974183,-1.188635077006917) node {$w_{3}$};
\draw [fill=qqqqff] (6.3,-1.) circle (2.5pt);
\draw[color=qqqqff] (6.332120396853152,-1.188635077006917) node {$v_{3}$};
\draw [fill=qqqqff] (7.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (7.057310330381943,-1.188635077006917) node {$w_{4}$};
\draw [fill=qqqqff] (7.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (8.153527671762674,-0.9356618443805933) node {$v_{4}$};
\draw [fill=qqqqff] (5.4,1.) circle (2.5pt);
\draw[color=qqqqff] (4.94920005849592,1.05439425227982) node {$w_{1}$};
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.392658946718011,3.162504524165851) node {$v_{1}$};
\draw [fill=qqqqff] (3.8,5.) circle (2.5pt);
\draw[color=qqqqff] (4.4179562699806425,5.312777001489602) node {toward root};
\end{scriptsize}
\end{tikzpicture}
\caption{$H_j$, where the edges of star $E_j$ are the dashed triangles \label{duringsaw1}}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}{0.46\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.7cm]
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (3.8657942712621005,-2.9859105973373743) -- (4.41898751506418,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (5.,-3.) -- (5.6114262850375525,-3.010496963728578) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (7.,-3.) -- (7.60292196272504,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (8.15611520652712,-3.010496963728578) -- (8.758481183111607,-3.010496963728578) -- cycle;
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (3.8657942712621005,-2.9859105973373743);
\draw [line width=2.pt,color=zzttqq] (3.8657942712621005,-2.9859105973373743)-- (4.41898751506418,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (4.41898751506418,-2.998203780532976)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (5.,-3.);
\draw [line width=2.pt,color=zzttqq] (5.,-3.)-- (5.6114262850375525,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (5.6114262850375525,-3.010496963728578)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (7.,-3.);
\draw [line width=2.pt,color=zzttqq] (7.,-3.)-- (7.60292196272504,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (7.60292196272504,-2.998203780532976)-- (7.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (8.15611520652712,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.15611520652712,-3.010496963728578)-- (8.758481183111607,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.758481183111607,-3.010496963728578)-- (7.8,-1.);
\begin{scriptsize}
\draw [color=qqqqff] (6.,1.)-- ++(-2.5pt,-2.5pt) -- ++(5.0pt,5.0pt) ++(-5.0pt,0) -- ++(5.0pt,-5.0pt);
\draw[color=qqqqff] (6.306823073590519,1.062826693367364) node {$u$};
\draw [color=qqqqff] (4.2,-1.)-- ++(-2.5pt,-2.5pt) -- ++(5.0pt,5.0pt) ++(-5.0pt,0) -- ++(5.0pt,-5.0pt);
\draw[color=qqqqff] (3.7011987775393953,-0.9356618443805933) node {$w_{2}$};
\draw [fill=qqqqff] (4.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.117848880246802,-0.9019320800304167) node {$v_{2}$};
\draw [color=qqqqff] (5.7,-1.)-- ++(-2.5pt,-2.5pt) -- ++(5.0pt,5.0pt) ++(-5.0pt,0) -- ++(5.0pt,-5.0pt);
\draw[color=qqqqff] (5.674389992024713,-1.25609460570727) node {$w_{3}$};
\draw [fill=qqqqff] (6.3,-1.) circle (2.5pt);
\draw[color=qqqqff] (6.332120396853152,-1.188635077006917) node {$v_{3}$};
\draw [color=qqqqff] (7.2,-1.)-- ++(-2.5pt,-2.5pt) -- ++(5.0pt,5.0pt) ++(-5.0pt,0) -- ++(5.0pt,-5.0pt);
\draw[color=qqqqff] (7.057310330381943,-1.188635077006917) node {$w_{4}$};
\draw [fill=qqqqff] (7.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (8.153527671762674,-0.9356618443805933) node {$v_{4}$};
\draw [color=qqqqff] (5.4,1.)-- ++(-2.5pt,-2.5pt) -- ++(5.0pt,5.0pt) ++(-5.0pt,0) -- ++(5.0pt,-5.0pt);
\draw[color=qqqqff] (4.94920005849592,1.05439425227982) node {$w_{1}$};
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.392658946718011,3.162504524165851) node {$v_{1}$};
\draw [fill=qqqqff] (3.8,5.) circle (2.5pt);
\draw[color=qqqqff] (4.4179562699806425,5.312777001489602) node {toward root};
\end{scriptsize}
\end{tikzpicture}
\caption{$H_j'$. The vertices marked with an x will be added to $I$. The subhypertrees containing $v_2$, $v_3$, and $v_4$ will be added to $\mathcal{P}$ \label{duringsaw2}}
\end{minipage}
\begin{minipage}{0.46\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.7cm]
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\begin{scriptsize}
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.402779313850565,3.1690798886590237) node {$v_{1}$};
\draw [fill=qqqqff] (3.8,5.) circle (2.5pt);
\draw[color=qqqqff] (4.446649363730799,5.318712332790555) node {toward root};
\end{scriptsize}
\end{tikzpicture}
\caption{$H_{j+1}$, which is the connected component of $H_j'$ containing the root \label{aftersaw}}
\end{minipage}
\end{figure}
To prove~(\ref{sawing2}) of Lemma~\ref{lemma:sawing}, we recall that our choice of $u$ in step (a) for $j \leq e$ implies that all sons of $u$ have at most $k$ vertices in their progeny.
These sons and their progeny make up the vertex sets of the subhypergraphs that we add to $\mathcal{P}$ in step (d).
This, together with Fact~\ref{prop1}, accounts for the only two ways that subhypergraphs can be added to $\mathcal{P}$.
We know then that
$$k \geq \vert V(P) \vert \text{ for any } P \in \mathcal{P},$$
which proves~(\ref{sawing2}) of Lemma~\ref{lemma:sawing}.
To prove~(\ref{sawing1}), we need to show that very few of the vertices of $T$ end up in $I$ (and therefore, most of the vertices of $T$ will be in subhypertrees in $\mathcal{P}$).
For all $j \leq e$,
$$\vert V(H_{j}) \setminus V(H_{j+1})\vert > \frac{k-d}{d-1}.$$
This is because in $H_j$, $u$ has more than $\frac{k-d}{d-1}$ vertices in its progeny, and none of the progeny are in $V(H_{j+1})$.
With each iteration of the sawing procedure, at most $d+1$ vertices are added to $I$: one for $u$ and at most $d$ for the celibate neighbors of $u$.
The remaining vertices in $V(H_{j}) \setminus V(H_{j+1})$ must therefore be in an element of $\mathcal{P}$ (by step (d)), meaning that $\sum_{P \in \mathcal{P}} \vert V(P) \vert$ increases by at least $\frac{k-d}{d-1} - (d+1)$ with each iteration.
By Fact~\ref{prop1}, if $H_{e+1}$ exists, one subhypertree is added to $\mathcal{P}$ and no vertices are added to $I$.
This implies that at the end of the decomposition,
$$\frac{\sum_{P \in \mathcal{P}} \vert V(P) \vert}{\vert I \vert} \geq \frac{\frac{k-d}{d-1} - (d+1)}{d+1} = \frac{k-d}{d^2-1}-1,$$
and since $\vert I \vert + \sum_{P \in \mathcal{P}} \vert V(P) \vert = \vert V(T) \vert = n$, we have that
$$\vert I \vert \leq \left( \frac{d^2-1}{k-d}\right) n \leq \left( \frac{2d^2}{k}\right) n,$$
which is~(\ref{sawing1}) of Lemma~\ref{lemma:sawing}.
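The final inequality $\frac{d^2-1}{k-d} \leq \frac{2d^2}{k}$ holds throughout the relevant range $k \geq 2d$; the following exact-arithmetic check (an illustrative sketch, not part of the proof) confirms it for a range of parameters:

```python
from fractions import Fraction

def bound_holds(d: int, k: int) -> bool:
    """Check (d^2 - 1)/(k - d) <= 2*d^2/k in exact rational arithmetic."""
    return Fraction(d * d - 1, k - d) <= Fraction(2 * d * d, k)

# the inequality holds whenever k >= 2d (equivalently k*(d^2+1) >= 2*d^3)
assert all(bound_holds(d, k) for d in range(2, 12) for k in range(2 * d, 300))
```

Algebraically, the inequality rearranges to $k(d^2+1) \geq 2d^3$, which already holds for $k \geq 2d$.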
Now we establish~(\ref{sawing3}) of the lemma.
As described in step (d), every subhypergraph $P \in \mathcal{P}$ contains at least one vertex $v_{i}$.
Each of these $v_{i}$ has a corresponding isolated vertex $w_{i}$ that was placed into $I$ in step (c).
Therefore
\begin{equation*}\label{eq1}
\vert I \vert \geq \vert \mathcal{P} \vert,
\end{equation*}
proving~(\ref{sawing3}).
Proving~(\ref{sawing4}) is similarly clear.
For every iteration of the sawing procedure, one star is added to $\cE$ (in step (b)), and at least
one subhypertree is added to $\mathcal{P}$ (in step (d)).
Therefore
$$ \vert \mathcal{P} \vert \geq \vert \cE \vert.$$
To find the minimum cardinality of $\mathcal{P}$ over all hypertrees on $n$ vertices (and show~(\ref{sawing5})), consider how many vertices of $T$ are removed during a single iteration of the sawing procedure.
By step (c), $1+deg(u)$ vertices are put into $I$.
By step (d), $\sum_{i=2}^{deg(u)} \vert V(P_{j,i})\vert$ vertices are put into $\mathcal{P}$.
These vertices are divided between $deg(u)-1$ subhypertrees.
Therefore the number of vertices removed per subhypertree added to $\mathcal{P}$ is at most
$$\frac{1 + deg(u) + \sum_{i=2}^{deg(u)} \vert V(P_{j,i})\vert }{deg(u) - 1} \leq \frac{1 + deg(u) + \sum_{i=2}^{deg(u)} k}{deg(u) - 1} \leq k + 3.$$
Summing over all iterations of the sawing procedure, we get that
$$\frac{n}{\vert \mathcal{P} \vert} \leq k+3,$$
proving~(\ref{sawing5}).
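The per-iteration bound displayed above can be checked in exact arithmetic (an illustrative sketch, not part of the proof; it assumes $deg(u) \geq 2$, as the division by $deg(u)-1$ requires):

```python
from fractions import Fraction

def removed_per_subhypertree(deg_u: int, k: int) -> Fraction:
    """(1 + deg(u) + sum_{i=2}^{deg(u)} k) / (deg(u) - 1), where each
    subhypertree has at most k vertices as in (sawing2)."""
    return Fraction(1 + deg_u + (deg_u - 1) * k, deg_u - 1)

# equality is attained at deg(u) = 2, so the constant k + 3 is tight
assert removed_per_subhypertree(2, 5) == 5 + 3
assert all(removed_per_subhypertree(deg_u, k) <= k + 3
           for deg_u in range(2, 60) for k in range(1, 60))
```

Indeed, the ratio equals $k + \frac{1+deg(u)}{deg(u)-1}$, and the second term is at most $3$ exactly when $deg(u) \geq 2$.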
In order to verify~(\ref{sawing6}), recall that by step (c), the only vertices added to $I$ are of the form $w_{j,i}$ and $u_j$, for some $i,j$.
For some vertex $v_{j,i}$ to be in $I$, $v_{j,i}$ would need to be the same vertex as some $u_h$. ($v_{j,i}$ is not celibate so it cannot be some $w_{h,l}$.)
By the coloring of $V(T)$, $u_h$ is red.
However, $v_{j,i}$ is blue, since it is adjacent to the center (red) vertex $u_j$ of star $E_j$, and in the coloring no two red vertices are adjacent.
Therefore $v_{j,i} \not \in I$.
This completes the proof of~(\ref{sawing6}) and of Lemma~\ref{lemma:sawing}.
\end{proof}
\section{Searching for Disjoint Stars}
\begin{lemma}\label{lemma:stars}
Let $S$ be a Steiner triple system on $m$ vertices.
Then for any set of vertices $v_1, v_2, \ldots, v_c \in V(S)$, $c\leq d$, there are at least $s(c) = \frac{m}{c^2+1}$ stars $S_1, \ldots, S_{s(c)}$ so that
\begin{enumerate}
\item $S_l = \{ \{v_i,w_i^{(l)},u^{(l)}\}, 1 \leq i \leq c\} \subset E(S)$ is a star centered at $u^{(l)}$, for $l=1,\ldots,s(c)$.
\item The sets $W_l = \{w_1^{(l)},\ldots,w_c^{(l)},u^{(l)}\}$, $l=1,\ldots,s(c)$, are pairwise disjoint subsets of $V(S)$.
\end{enumerate}
\end{lemma}
\begin{proof}\label{proof:stars}
Fix any $v_1, v_2, \ldots, v_c \in V(S)$.
We prove the lemma by induction, in each step constructing a new star.
As a base case, we construct a set $V^{(1)}\subset V(S)$ of vertices that may serve as the center $u^{(1)}$ of the first star $S_1$.
Let
$$Q = \{ u\in V(S): \exists v_i, v_j \text{ such that } \{u, v_i, v_j\} \in E(S)\}.$$
The vertices of $Q$ cannot be used as $u^{(1)}$.
Set
$$V^{(1)} = V(S) \setminus \{v_1, v_2, \ldots, v_c\} \setminus Q $$
and observe that
$$\vert V^{(1)} \vert \geq m -c -\binom{c}{2} > m-(c^2+1).$$
Select any vertex in $V^{(1)}$ and call it $u^{(1)}$.
The vertex $u^{(1)}$ and each vertex $v_i$, $1\leq i \leq c$, share an edge with some vertex $w_i^{(1)} \in V(S)$.
Each $w_i^{(1)}$ is distinct from each $v_j$, because otherwise $u^{(1)}$ would be in $Q$.
The edges $\{ v_i, w_i^{(1)}, u^{(1)} \}$, $1 \leq i \leq c$, form a star $S_1$ in $S$.
Now we consider the set
$$Q^{(1)} = \{ u_{ij}^{(1)}\in V(S): \exists v_i, w_j^{(1)}, i\neq j \text{ such that } \{v_i, w_j^{(1)}, u_{ij}^{(1)}\} \in E(S)\}.$$
Note that $\vert Q^{(1)} \vert \leq c(c-1)$.
If we attempted to use $u_{ij}^{(1)}\in Q^{(1)}$ as the center of some other star, that star would contain the edge $\{v_i, w_j^{(1)}, u_{ij}^{(1)}\}$, so two different stars would use the vertex $w_j^{(1)}$.
Excluding these vertices $u_{ij}^{(1)}$ from the center of any future star $S_l, l\geq 2$, ensures that
$$V(S_1) \cap V(S_l) = \{v_1, \ldots, v_c\}$$
and so
$$W_1 \cap W_l = \emptyset.$$
Set
$$V^{(2)} = V^{(1)} \setminus W_1 \setminus Q^{(1)} .$$
It follows that
$$\vert V^{(2)} \vert \geq \vert V^{(1)} \vert -(c+1) -c(c-1) > m - 2(c^2+1).$$
Thus we have constructed one star $S_1$ and the set $V^{(2)}$ such that any star $S_l$ whose center $u^{(l)}$ is in $V^{(2)}$ will have $W_l$ disjoint from $W_1$.
Having completed the base case, we assume by induction that the stars $S_1,\ldots, S_{l-1}$ have been constructed.
We assume the set $V^{(l)}$ has the property that for any star $S_l$ whose center is in $V^{(l)}$, $W_l$ will be disjoint from $W_1$, $W_2$, \ldots, $W_{l-1}$.
We also assume inductively that $\vert V^{(l)} \vert > m - l(c^2+1)$.
Choose any vertex $u^{(l)} \in V^{(l)}$ to be the center of star $S_l$.
For each $i$, $1\leq i \leq c$, let $w_i^{(l)}$ be the unique vertex with $\{v_i, w_i^{(l)}, u^{(l)}\} \in E(S)$.
Observe that by induction,
$$w_i^{(l)} \not\in \bigcup_{k<l} V(S_k).$$
The edges $\{ v_i, w_i^{(l)}, u^{(l)} \}$, $1 \leq i \leq c$, form a star $S_l$ in $S$, and
$$W_l \cap \left( \bigcup_{k<l} W_k \right) = \emptyset.$$
Let
$$Q^{(l)} = \{ u_{ij}^{(l)}\in V(S): \exists v_i, w_j^{(l)}, i\neq j \text{ such that } \{v_i, w_j^{(l)}, u_{ij}^{(l)}\} \in E(S)\}$$
and set
$$V^{(l+1)} = V^{(l)} \setminus W_l \setminus Q^{(l)} .$$
For the same reason as in the base case, a star centered at any $u^{(l+1)} \in V^{(l+1)}$ will intersect each of $S_1,\ldots, S_{l}$ precisely at $v_1, \ldots, v_c$.
By the inductive hypothesis,
\begin{equation}\label{eq3}
\vert V^{(l+1)}\vert \geq \vert V^{(l)} \vert - (c+1) - c(c-1) > m - (l+1) (c^2+1).
\end{equation}
This completes the induction.
We can continue forming new stars $S_l$ from sets $V^{(l)}$ in this way as long as $V^{(l)}$ has at least one vertex in it to become $u^{(l)}$.
Therefore, by equation~(\ref{eq3}), we can construct at least $\frac{m}{c^2+1}$ stars.
The same process can be conducted for any $c$-tuple, where $c\leq d$, completing the proof.
\end{proof}
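The greedy construction in the proof can be made concrete. The following sketch (illustrative only, not part of the proof) runs it on the Steiner triple system on $9$ points given by the lines of the affine plane $AG(2,3)$; the variable names mirror the proof, and the choice of center via \texttt{min} is arbitrary:

```python
from itertools import combinations

# The 12 triples of the affine plane AG(2,3): a Steiner triple system on 9 points.
STS9 = [frozenset(t) for t in
        [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (1, 5, 6), (2, 3, 7), (0, 5, 7), (1, 3, 8), (2, 4, 6)]]

def third(system, a, b):
    """The unique third point of the triple through the pair {a, b}."""
    for t in system:
        if a in t and b in t:
            return next(p for p in t if p not in (a, b))
    raise ValueError("not a Steiner triple system")

def greedy_stars(system, fixed):
    """Greedily build stars through the fixed vertices, mirroring the proof:
    pick an allowed center u, let w_i be the third point of {v_i, u}, record
    W = {w_1, ..., w_c, u}, then exclude W and the 'bad' centers Q from
    future choices so that the sets W stay pairwise disjoint."""
    vertices = set().union(*system)
    # initial Q: centers that would put two fixed vertices into one edge
    q0 = {third(system, vi, vj) for vi, vj in combinations(fixed, 2)}
    available = vertices - set(fixed) - q0
    stars = []
    while available:
        u = min(available)                      # any choice works
        w = {third(system, v, u) for v in fixed}
        stars.append(w | {u})
        # Q^(l): a center through some pair {v_i, w_j} would reuse w_j
        bad = {third(system, v, x) for v in fixed for x in w}
        available -= w | {u} | bad
    return stars

stars = greedy_stars(STS9, (0,))
assert len(stars) >= 4                      # roughly m/(c^2+1) stars for c = 1
assert all(0 not in W for W in stars)
assert all(A.isdisjoint(B) for A, B in combinations(stars, 2))
```

Each iteration removes at most $(c+1) + c(c-1) \leq c^2+1$ vertices from the available set, which is the counting behind the lower bound $s(c)$.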
\section{Selecting the Reservoir}
Let $\mathcal{P} = \{ P_1, P_2, \ldots, P_l\}$ be an enumerated collection of hypertrees as formed by the decomposition of $T$ in Lemma~\ref{lemma:sawing}, and $I$ the set of independent vertices formed by the decomposition.
Consider
\begin{equation}\label{Psum}
P = \bigvee_{j=1}^l P_j
\end{equation}
as a forest consisting of (vertex-disjoint) members of $\mathcal{P}$.
Then
$$V(T) = I \cup V(P).$$
Next we are going to randomly select a set $R \subset V(S)$ and show that its properties allow us to find $T$ in $S$ in such a way that $I \subset R$ and $V(P) \subset V(S) \setminus R$.
We will call the set $R$ the \emph{reservoir}.
Let $a\sim b$ mean that $\lim_{m\to \infty} \frac{a}{b} = 1$.
\begin{lemma}\label{lemma:random}
Let $S = (V,E)$ be a Steiner triple system on $m$ vertices, and let $\epsilon>0$ be small.
Then there exists a subset $R \subset V$ and a hypergraph $\tilde{S} = S[V \setminus R]$ induced on the set $V\setminus R$ that have the following properties.
\begin{enumerate}
\item $\vert R \vert \sim \epsilon m$
\item $\vert V(\tilde{S}) \vert \sim (1-\epsilon) m$
\item Any vertex $v \in V(\tilde{S})$ has $deg (v) \sim(1-\epsilon)^2 \frac{m}{2}$ in $\tilde{S}$.
\item For any $c$-tuple of vertices in $V(S)$, where $c\leq d$, at least $r(c) = \frac{\epsilon^{c+1}m}{2(c^2+1)}$ of the disjoint sets $W_l$ guaranteed by Lemma~\ref{lemma:stars} lie entirely in $R$.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider a set $R \subset V$ such that each vertex is chosen randomly
and independently with probability $\epsilon$.
We will use the following form of Chernoff bound (inequality (2.9) in~\cite{JLR}).
We will then fix an $R$ that has all of the desired properties.
\begin{theorem}[\cite{JLR}]\label{chernoff1}
For $0 < \delta \leq 3/2$ and a binomially distributed random variable $X$,
$$\mathbb{P}\left( \vert X - \mathbb{E} (X) \vert \geq \delta \mathbb{E} (X) \right) \leq 2 \exp \left( -\frac{\delta^2}{3} \mathbb{E}(X)\right).$$
\end{theorem}
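As an illustrative numeric check of the stated inequality (not needed for the proof), the bound can be compared with the exact two-sided binomial tail for a few parameter choices with $\delta \leq 3/2$:

```python
from math import comb, exp

def exact_tail(n: int, p: float, delta: float) -> float:
    """Exact P(|X - E X| >= delta * E X) for X ~ Bin(n, p)."""
    mu = n * p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1) if abs(i - mu) >= delta * mu)

def chernoff(n: int, p: float, delta: float) -> float:
    """The stated bound: 2 * exp(-delta^2 * E X / 3)."""
    return 2 * exp(-delta**2 * n * p / 3)

# the exact tail probability never exceeds the Chernoff bound
for n, p, delta in [(100, 0.3, 0.5), (200, 0.1, 0.9), (500, 0.1, 0.25)]:
    assert exact_tail(n, p, delta) <= chernoff(n, p, delta)
```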
For a reservoir $R$ selected randomly as described above, the expected number of vertices in $R$ is $\epsilon m$.
Since vertices are selected independently in $R$, we can apply Theorem~\ref{chernoff1}.
With probability $1-o(1)$, the number of vertices chosen is close to its expectation, so
$$\vert R \vert \sim \epsilon m \hspace*{1cm} \text{ and } \hspace*{1cm} \vert V(\tilde{S}) \vert \sim (1-\epsilon)m,$$
showing that (1) and (2) of the lemma hold with probability $1-o(1)$.
Considering (3), note that any vertex $v \in V(\tilde{S})$ has degree $\frac{m-1}{2}$ in $S$.
For every edge containing $v$ in $S$, the edge contains two other vertices.
The probability that each of these vertices is in $\tilde{S}$ is $1-\epsilon$, so the expected number of edges incident to $v$ contained totally in $\tilde{S}$ is $(1-\epsilon)^2 \frac{m-1}{2}$.
Again by Theorem~\ref{chernoff1}, with probability $1-o(1)$ the degree of $v$ in $\tilde{S}$ is close to this expectation, so
$$deg_{\tilde{S}} (v) \sim(1-\epsilon)^2 \frac{m}{2}.$$
Finally we consider (4) of the lemma and fix any $c$-tuple $\{v_1, \ldots, v_c\}$ of vertices in $S$ for some $c\leq d$.
By Lemma~\ref{lemma:stars}, there are at least $s(c) = \frac{m}{c^2+1}$ sets $W_l$, $l=1,\ldots,s(c)$, that are pairwise disjoint.
We need to show that $\frac{\epsilon^{c+1}m}{2(c^2+1)}= r(c)$ of these sets $W_l$ are subsets of $R$.
Let $X_l$ be the event that all $c+1$ vertices of $W_l$ are in $R$.
Clearly $\mathbb{P}(X_l) = \epsilon^{c+1}$, and if
$$X = \sum_{l=1}^{s(c)} I(X_l)$$
(where $I(X_l)$ is an indicator random variable), then
$$\mathbb{E}( X) = \frac{\epsilon^{c+1}m}{c^2+1} = 2r(c).$$
Since the sets $W_l$, $1\leq l \leq \frac{m}{c^2+1}$, are disjoint, the events $X_l$ are independent, so we may apply Theorem~\ref{chernoff1} with $\delta=1/2$.
Then
$$\mathbb{P}\left( X \leq r(c) \right) \leq 2 \exp\left( -\frac{r(c)}{6} \right).$$
So if $Y = Y(v_1,\ldots, v_c)$ is the event that for a fixed $v_1, \ldots, v_c$, there are fewer than $r(c)$ sets $W_l$ in $R$, then the probability of $Y$ is exponentially small in $m$ (since $r(c)$ grows with $m$).
Consequently the probability of $Y(v_1, \ldots, v_c)$ happening for any choice of $v_1,\ldots,v_c$ where $c$ can be any integer between 1 and $d$ is bounded by
$$\sum_{c=1}^d \binom{m}{c}\, 2\exp\left( -\frac{r(c)}{6} \right) = o(1).$$
Thus with probability $1-o(1)$, for every $c$-tuple in $V(S)$, there are at least $r(c)$ pairwise-disjoint sets $W_l$ contained in $R$.
Since with probability $1-o(1)$, $R$ has each of the four properties listed by Lemma~\ref{lemma:random}, we can fix a set $R$ that has all four properties, completing the proof.
\end{proof}
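The random selection of the reservoir takes only a few lines to simulate. The following sketch (illustrative only; the parameters $m = 10^5$, $\epsilon = 0.1$ and the seed are arbitrary) checks properties (1) and (2) numerically:

```python
import random

# parameters chosen only for illustration
m, eps = 100_000, 0.1
rng = random.Random(0)  # fixed seed for reproducibility

# each vertex joins the reservoir R independently with probability eps
R = {v for v in range(m) if rng.random() < eps}
s_tilde_vertices = m - len(R)

assert abs(len(R) - eps * m) < 0.05 * eps * m             # property (1)
assert abs(s_tilde_vertices - (1 - eps) * m) < 0.01 * m   # property (2)
```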
\section{Embedding the Subhypertrees}
Let $P$ be as defined in Equation~(\ref{Psum}).
\begin{lemma}\label{lemma:matching}
Let $\tilde{S}$ be an induced subhypergraph of a Steiner triple system $S$ on $m\geq(1+\mu)n$ vertices, such that $\vert V(\tilde{S}) \vert \sim (1-\epsilon) m$ and every vertex $v \in V(\tilde{S})$ has $deg (v) \sim(1-\epsilon)^2 \frac{m}{2}$.
Then $\tilde{S}$ contains a copy of $P$.
\end{lemma}
Recall that
$$\vert \mathcal{P} \vert = l.$$
For the proof of Lemma~\ref{lemma:matching}, we find it convenient for $l / {k3^k}$ to be an integer, where $k$ is as in Lemma~\ref{lemma:sawing}.
If not, add isolated vertices to $\mathcal{P}$ (and to $P$), increasing $l$, until ${k3^k}$ divides $l$.
If we can find a copy of this larger $P$ in $\tilde{S}$, then surely we can find a copy of the original $P$ in $\tilde{S}$.
Consider a partition of $\mathcal{P}$ with each partition class $\mathcal{C}_i$ consisting of those members of $\mathcal{P}$ that are pairwise isomorphic hypertrees.
Let
$$T_i= \text{isomorphism type of } \mathcal{C}_i$$
and let
$$l_i = \vert \mathcal{C}_i \vert.$$
Denoting the number of isomorphism classes by $t$, then
$$\sum_{i=1}^t l_i= \vert \mathcal{P} \vert = l.$$
P\'olya (see~\cite{LPV} and~\cite{Pol}) showed an upper bound on the number of isomorphism classes of a tree on $k$ vertices.
Specifically,
$$t<3^k.$$
Recall that by assumption $k3^k \ll n.$
Since $ \frac{n}{k+3} \leq l$ (cf. Lemma~\ref{lemma:sawing}), we see that $t \ll l$, and in fact
$$ t < {k3^k} \ll l$$
for $n$ large.
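As an illustrative check of the quoted bound (for ordinary trees and small $k$; this is not part of the proof), one can enumerate isomorphism classes by brute force, decoding all Pr\"ufer sequences and comparing AHU canonical forms:

```python
from itertools import product

def prufer_to_edges(seq, k):
    """Decode a Prufer sequence into the edges of a labeled tree on k vertices."""
    degree = [1] * k
    for x in seq:
        degree[x] += 1
    edges = []
    for x in seq:
        leaf = min(i for i in range(k) if degree[i] == 1)
        edges.append((leaf, x))
        degree[leaf] -= 1
        degree[x] -= 1
    u, v = (i for i in range(k) if degree[i] == 1)
    return edges + [(u, v)]

def canon(adj, v, parent=None):
    """AHU canonical string of the subtree rooted at v."""
    return "(" + "".join(sorted(canon(adj, c, v)
                                for c in adj[v] if c != parent)) + ")"

def centers(adj):
    """The one or two center vertices of a tree, found by stripping leaves."""
    remaining = set(adj)
    deg = {v: len(adj[v]) for v in adj}
    while len(remaining) > 2:
        leaves = [v for v in remaining if deg[v] <= 1]
        for v in leaves:
            remaining.discard(v)
            for u in adj[v]:
                deg[u] -= 1
    return remaining

def count_tree_types(k):
    """Number of isomorphism classes of trees on k vertices (brute force)."""
    if k == 1:
        return 1
    forms = set()
    for seq in product(range(k), repeat=k - 2):
        adj = {i: [] for i in range(k)}
        for a, b in prufer_to_edges(seq, k):
            adj[a].append(b)
            adj[b].append(a)
        forms.add(min(canon(adj, c) for c in centers(adj)))
    return len(forms)

counts = [count_tree_types(k) for k in range(1, 8)]
assert counts == [1, 1, 1, 2, 3, 6, 11]   # known values (OEIS A000055)
assert all(t < 3 ** k for k, t in enumerate(counts, start=1))
```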
In order to find the desired embedding of $P$ in $\tilde{S}$, we will first consider a ``small'' forest (with size independent of $n$) consisting of hypertrees which form a ``statistical sample'' of $\mathcal{P}$.
Next, we will find an almost perfect packing of vertex-disjoint copies of the small forest in $\tilde{S}$.
Finally, we will show that among the union of the copies of the small forest in $\tilde{S}$, there is a copy of $P$, meaning $P$ is a subhypergraph of $\tilde{S}$.
\begin{proof}\label{proof:matching}
We want to select a sampling of hypertrees from $\mathcal{P}$ that has representatives from each partition class in proportion with the size of the class.
We first construct a forest consisting of about ${k3^k}$ hypertrees from $\mathcal{P}$.
Let
$$\lambda_i = \Big\lceil \frac{{k3^k}l_i}{l}\Big\rceil,$$ so that if we were to choose ${k3^k}$ hypertrees randomly and independently from $\mathcal{P}$, we would expect about $\lambda_i$ hypertrees of type $T_i$.
Consider the forest
$$F = \bigvee_{i=1}^t \lambda_i T_i,$$
containing $\lambda_i$ vertex-disjoint hypertrees from class $T_i$, $i=1,\ldots, t$.
Let
$$\sum_{i=1}^t \lambda_i = \lambda \geq k3^k$$
be the number of connected components in the forest $F$. In the remaining part of the proof we will show the following.
\begin{claim}\label{claim:forest}
The hypergraph $\tilde{S}$ contains $ \frac{l}{{k3^k}} $ vertex-disjoint copies of $F$.
\end{claim}
Before we establish the claim, observe that the claim immediately implies Lemma~\ref{lemma:matching}.
Indeed by Claim~\ref{claim:forest}, the number of vertex-disjoint copies of $T_i$ in $\tilde{S}$ is
$$\frac{l}{{k3^k}} \lambda_i = \frac{l}{{k3^k}} \Big\lceil \frac{{k3^k}l_i}{l} \Big\rceil \geq l_i$$
for each $i=1,\ldots, t$.
Since $P$ contains exactly $l_i$ vertex-disjoint copies of $T_i$, $P$ is contained in $\tilde{S}$.
\end{proof}
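The counting in the display above, namely that $\frac{l}{k3^k}\big\lceil \frac{k3^k l_i}{l}\big\rceil \geq l_i$ whenever $k3^k$ divides $l$, can be checked exactly with integer arithmetic (an illustrative sketch, not part of the proof):

```python
def ceil_div(a: int, b: int) -> int:
    """Ceiling division using exact integer arithmetic."""
    return -(-a // b)

def copies_available(l: int, big_k: int, l_i: int) -> int:
    """(l / K) * ceil(K * l_i / l): the vertex-disjoint copies of type T_i
    supplied by the claim, assuming K (standing in for k*3^k) divides l."""
    assert l % big_k == 0
    return (l // big_k) * ceil_div(big_k * l_i, l)

# (l/K) * ceil(K*l_i/l) >= (l/K) * (K*l_i/l) = l_i in every case
assert all(copies_available(l, K, l_i) >= l_i
           for K in (2, 3, 6, 18) for l in (18, 36, 90)
           for l_i in range(l + 1))
```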
\begin{proof}[Proof of Claim~\ref{claim:forest}]
We look for vertex-disjoint embeddings of $F$ in $\tilde{S}$.
First, we establish two upper bounds on the size of $\vert V(F) \vert$, which we denote by $r$.
\begin{prop}\label{prop:r}
For $\vert V(F) \vert = r$, the following holds.
\begin{enumerate}
\item $r \leq k(k+4)3^k$, and
\item $r \leq \frac{{k3^k} n}{l}\left( 1+\frac{\mu}{2}\right)$.
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:r}]
Recall that $t \leq 3^k$.
By Lemma~\ref{lemma:sawing}, $\vert V(T_i)\vert\leq k$, $i=1,\ldots,t$, and $l = \vert \mathcal{P} \vert \geq \frac{n}{k+3}$.
Also, $\vert V(P) \vert < n$.
Then
\begin{align*}
r &= \sum_{i=1}^t \lambda_i \vert V(T_i)\vert = \sum_{i=1}^t \Big\lceil \frac{{k3^k} l_i}{l}\Big\rceil \vert V(T_i)\vert \leq \frac{{k3^k}}{l}\sum_{i=1}^t \ l_i\vert V(T_i)\vert + \sum_{i=1}^t \vert V(T_i)\vert\\
&\leq \frac{{k3^k}}{l} \sum_{i=1}^l\vert V(P_i)\vert + kt < \frac{{k3^k}}{l}\vert V(P)\vert + k3^k \leq \frac{k3^k}{n/(k+3)}n + k3^k \\
&\leq k(k+4)3^k,
\end{align*}
which proves $(1)$ of the proposition.
For the proof of $(2)$, we will first observe that $l \leq \frac{\mu n}{2}$ or equivalently that ${k3^k} \leq \frac{{k3^k} \mu n}{2 l}$.
This follows from~(\ref{sawing1}) and~(\ref{sawing3}) of Lemma~\ref{lemma:sawing}, which imply that $l \leq \frac{2d^2}{k}n$, and from the hierarchy Inequality~\ref{eq:hier}, by which
$$\frac{2d^2}{k} \leq \frac{\mu}{2}$$
for $k \geq k_0(\mu, d)$.
Now applying in part our estimate from the proof of $(1)$ above, we infer that
\begin{align*}
r &< \frac{{k3^k}}{l}\vert V(P)\vert + k3^k \leq \frac{{k3^k}}{l}n + \frac{{k3^k} \mu n}{2 l}
\leq \frac{{k3^k} n}{l} \left( 1 + \frac{\mu }{2}\right).
\end{align*}
\end{proof}
Now consider an auxiliary hypergraph $A$ so that
\begin{equation*}\tag{$\star$}
\begin{array}{ccl}
V(A) & = & V(\tilde{S}),\text{ and}\\
E(A) & = & \{R \in \binom{V(\tilde{S})}{r}: \tilde{S}[R] \text{ contains a copy of } F \}.
\end{array}
\end{equation*}
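For concreteness, the construction $(\star)$ can be carried out by brute force at toy sizes. The following Python sketch (the function names \texttt{contains\_copy} and \texttt{auxiliary\_edges} are ours, not from the text) builds $E(A)$ for a small $\tilde{S}$ and $F$:

```python
# Brute-force illustration (toy sizes only) of the auxiliary hypergraph (*):
# the vertices of A are the vertices of S-tilde, and an r-set R is an edge of A
# exactly when the induced subhypergraph S-tilde[R] contains a copy of F.
# The function names below are ours, not from the text.
from itertools import combinations, permutations

def contains_copy(S_edges, R, F_edges, F_vertices):
    """Does the subhypergraph induced on R contain a copy of F?"""
    induced = {e for e in S_edges if e <= R}
    for image in permutations(sorted(R), len(F_vertices)):
        phi = dict(zip(F_vertices, image))
        if all(frozenset(phi[v] for v in e) in induced for e in F_edges):
            return True
    return False

def auxiliary_edges(S_vertices, S_edges, F_edges):
    """Edge set of the auxiliary hypergraph A from the definition (*)."""
    F_vertices = sorted(set().union(*F_edges))
    r = len(F_vertices)
    return [frozenset(R) for R in combinations(S_vertices, r)
            if contains_copy(S_edges, frozenset(R), F_edges, F_vertices)]

# Toy example: S-tilde is the Fano plane (the Steiner triple system on 7
# points) and F is a single hyperedge, the smallest possible forest.
fano = [frozenset(e) for e in
        [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
         (1, 4, 6), (2, 3, 6), (2, 4, 5)]]
F = [frozenset({0, 1, 2})]
A = auxiliary_edges(range(7), fano, F)
print(len(A))  # -> 7: exactly the triples of the Fano plane induce a copy of F
```

Of course, this enumeration is exponential and only illustrates the definition; the actual argument never constructs $A$ explicitly.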
In order to find vertex-disjoint copies of $F$ in $\tilde{S}$, we look for a matching in $A$.
To that end, we wish to apply the following theorem, where the co-degree $deg_A(x,y)$ of any $x,y \in V(A)$ is the number of edges of $A$ containing both $x$ and $y$.
\begin{theorem}[\cite{FR}]\label{thm:FR}
Suppose $A$ is an $r$-uniform hypergraph on $V$ which, for some $D >1$, has the following two properties:
\begin{enumerate}
\item $deg_A(x) = D(1+o(1))$ for all $x \in V$, where $o(1)\to 0$ as $\vert V \vert \to \infty$.
\item $deg_A(x,y) < D/(\log \vert V \vert)^4$ for all $x,y \in V$.
\end{enumerate}
Then $A$ contains at least $\frac{\vert V \vert(1-o(1))}{r}$ pairwise disjoint edges.
\end{theorem}
\begin{note}\label{pipspenc}
This theorem was subsequently extended and improved in a number of papers (e.g.~\cite{AKS}, \cite{KK}, \cite{PS}), but for the purposes here, it is sufficient to use this form.
\end{note}
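To make the notion of a matching in $A$ concrete, here is a toy greedy procedure in Python; it produces nothing like the near-perfect matchings guaranteed by Theorem~\ref{thm:FR}, but it shows what ``pairwise disjoint edges'' means computationally (the function name \texttt{greedy\_matching} is ours):

```python
# A toy greedy matching in a hypergraph given as a list of vertex tuples:
# repeatedly take any edge disjoint from the edges already chosen. This is
# far weaker than the near-perfect matchings of the theorem above, but it
# illustrates what a matching in the auxiliary hypergraph A is.
def greedy_matching(edges):
    used, matching = set(), []
    for e in edges:
        if not (set(e) & used):   # e touches no previously used vertex
            matching.append(e)
            used |= set(e)
    return matching

# Three edges on 7 vertices; the first two are disjoint, the third meets both.
edges = [(0, 1, 2), (3, 4, 5), (0, 3, 6)]
print(greedy_matching(edges))  # -> [(0, 1, 2), (3, 4, 5)]
```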
We will show the following result, whose proof is deferred until after the present proof is complete, where
$$f=\vert E(F) \vert.$$
\begin{claim}\label{claim:degcodeg}
There exists a constant $c=c(F)$ such that
$$D = c \vert V(A) \vert^{\lambda+f-1}$$
satisfies $(1)$ and $(2)$ of Theorem~\ref{thm:FR} for the auxiliary hypergraph $A$ defined in $(\star)$.
\end{claim}
Thus by Theorem~\ref{thm:FR}, there is a nearly perfect matching in $A$, implying a nearly perfect packing of vertex-disjoint copies of $F$ in $\tilde{S}$.
Specifically Theorem~\ref{thm:FR} says one can pack at least
$$\frac{\vert V(A) \vert (1-o(1))}{r}$$
vertex-disjoint copies of $F$ into $\tilde{S}$.
To prove Claim~\ref{claim:forest}, it remains to show that
$$\frac{\vert V(A) \vert (1-o(1))}{r} \geq \frac{l}{{k3^k}}.$$
Recall that $\epsilon \ll \mu$ and $\vert V(A) \vert \geq (1-\epsilon)(1+\mu)n$.
By part (2) of Proposition~\ref{prop:r}, it follows that
$$\frac{\vert V(A) \vert(1-o(1))}{r} \geq \frac{(1-\epsilon)(1+\mu) n (1-o(1))}{\frac{{k3^k} n}{l}\left( 1+\frac{\mu}{2}\right)} \geq \frac{(1+\frac{\mu}{2}) n }{\frac{{k3^k} n}{l}\left( 1+\frac{\mu}{2}\right)} = \frac{l}{{k3^k}}.$$
Therefore there are at least $\frac{l}{{k3^k}}$ vertex-disjoint copies of $F$ in $\tilde{S}$.
\end{proof}
For the proof of Claim~\ref{claim:degcodeg}, we formalize the definition of an embedding.
\begin{definition}
Let $V(F) = \{1,2,\ldots, r\}$ and let $R\subset V(\tilde{S})$ be a subset of $V(\tilde{S})$ with labeled vertices $\{v_1, v_2, \ldots, v_r\}$.
We say a function $\psi:V(F)\to V(\tilde{S})$ is an \emph{embedding} of $F$ into $\tilde{S}$ if $\psi(i)=v_i$ for $i=1, \ldots, r$ and if for all $\{i,j,h\} \in E(F)$, we have $\{v_i, v_j, v_h\} \in E(\tilde{S})$.
\end{definition}
\begin{proof}[Proof of Claim~\ref{claim:degcodeg}]
To count $deg_A(x)$ for any $x\in V(A)$, we first count the number of embeddings from $F$ to $\tilde{S}$ that map some vertex in $V(F)$ to $x$.
Fix a labeling $\{1,2,\ldots, r\}$ of $V(F)$ and fix any $x\in V(\tilde{S})$.
Then let
$$E_x = \{\psi: F\to \tilde{S} \text{ such that there is some }i\in V(F) \text{ with } \psi(i)=x\}$$
be the set of all embeddings of $F$ into $\tilde{S}$ where $x$ is in the image of $\psi$.
Let
\begin{equation}\label{degAx}
D_x = \{ R\in \binom{V(\tilde{S})}{r}: \text{ there exists } \psi\in E_x \text{ with } \psi(V(F)) = R\}
\end{equation}
and see that
$$deg_A(x) = \vert D_x \vert.$$
In order to determine $\vert D_x \vert$, we will find the cardinality of $E_x$.
Fix some $i\in V(F)$ and consider all embeddings $\psi: F\to \tilde{S}$ with $i\to x$.
For simplicity, first consider the case that $F$ consists of a single tree.
Consider an ordering of the edges $e_t\in E(F)$, $t= 1, \ldots, f$, satisfying
\begin{equation}\label{edgeorder}
\begin{array}{c}
e_1 = \{i,j,h\} \text{ for some vertices } j \text{ and } h, \text{ and }\\
\vert e_{t+1} \cap \left( \bigcup_{s\leq t} e_s \right) \vert =1 \text{ for } t = 1,\ldots, f-1.
\end{array}
\end{equation}
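Such an ordering always exists for a hypertree and can be produced greedily. The following Python sketch (the function name \texttt{edge\_order} is ours) computes one, assuming the tree is given as a list of vertex triples:

```python
# A sketch of producing an edge ordering as in the displayed condition above:
# e_1 contains the root i, and every later edge meets the union of the earlier
# edges in exactly one vertex. Assumes `edges` are the triples of a 3-uniform
# hypertree (a loose tree), given in arbitrary order.
def edge_order(edges, root):
    remaining = [frozenset(e) for e in edges]
    order, covered = [], {root}
    while remaining:
        for e in remaining:
            if len(e & covered) == 1:   # shares exactly one vertex so far
                order.append(e)
                covered |= e
                remaining.remove(e)
                break
        else:
            raise ValueError("not a hypertree reachable from the given root")
    return order

# A small hypertree rooted at vertex 0; edges deliberately out of order.
tree = [(2, 3, 4), (0, 1, 2), (4, 7, 8), (1, 5, 6)]
print(edge_order(tree, 0))
```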
Recall that by assumption, for all $v\in V(\tilde{S})$,
$$deg_{\tilde{S}} (v) \sim (1 -\epsilon)^2 \frac{m}{2} \sim (1-\epsilon)\frac{\vert V(\tilde{S}) \vert}{2}.$$
For edge $e_1$, there are about $(1-\epsilon)\frac{\vert V(\tilde{S}) \vert}{2}$ edges incident to $x$ in $\tilde{S}$ onto which to map $e_1$.
Choosing one of these edges, say $\{x,y_1, y_2\}$, there are two ways to map $\{i,j,h\}$ to $\{x,y_1, y_2\}$ with $i \to x$.
Therefore there are about $(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to map $e_1$ into $\tilde{S}$.
For $t>1$, one vertex of $e_t$ has already been mapped into $\tilde{S}$.
Consequently, similarly as in the $t=1$ case, there are $(1-o(1))(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to embed $e_t$ for $t=2, \ldots, f$ into $\tilde{S}$.
Therefore we have $(1-o(1))(1-\epsilon)^f \vert V(\tilde{S}) \vert^f$
embeddings with $i\to x$.
Since $i$ can be chosen in $r= \vert V(F) \vert$ ways, if $F$ is a single tree,
\begin{equation}\label{Extree}
\vert E_x \vert \sim r(1-\epsilon)^f \vert V(\tilde{S}) \vert^f.
\end{equation}
Now consider $F$ as a forest of $\lambda$ disjoint trees.
As above, map some vertex $i$ to $x$ and embed all of the edges in the same component of $i$ into $\tilde{S}$.
For each of the other $\lambda-1$ components of $F$, embed one of its vertices to some unused vertex in $\tilde{S}$. There are
\begin{equation}\label{embedseeds}
(1-o(1))\vert V(\tilde{S})\vert^{\lambda-1}
\end{equation}
ways to choose these vertices.
As with the first component, map the rest of the edges of $F$ into $\tilde{S}$ to form an embedding of $V(F)$ to some subset $R \subset V(\tilde{S})$.
Combining Equations~\ref{Extree} and~\ref{embedseeds}, we infer that
\begin{equation}\label{sizeEx}
\vert E_x \vert \sim r(1-\epsilon)^f \vert V(\tilde{S}) \vert^{f+\lambda-1}.
\end{equation}
Some embeddings $\psi$ in $E_x$ may map onto the same vertex sets but different edge sets in $\tilde{S}$.
In order to find $deg_A(x)$, we make sure that each vertex set $R$ with $x\in R$ inducing a copy of $F$ is counted precisely once.
To this end, let
$$R_x = \{ \psi \in E_x: \text{ there exists } \psi'\in E_x \text{ with } \psi(V(F))=\psi'(V(F))\text{ but } \psi(E(F)) \neq\psi'(E(F))\}.$$
For any $\psi \in R_x$, there must be some edge in $\tilde{S}[\psi(V(F))]$ that is not in $\psi(E(F))$; indeed, for $\psi'$ as in the definition of $R_x$, some edge of $\psi'(E(F)) \setminus \psi(E(F))$ lies in $\tilde{S}[\psi(V(F))]$.
In the following claim, we show that $R_x$ makes up a small portion of $E_x$ by counting how many embeddings in $E_x$ induce no extra edge in $\tilde{S}$.
This implies that the number of embeddings $\psi$ for which there is a $\psi'$ with the property above is negligible.
\begin{claim}\label{Rxsmall}
$$\vert E_x \setminus R_x \vert \sim \vert E_x \vert$$
\end{claim}
\begin{proof}
The argument will be similar to that of the proof of Equation~\ref{sizeEx}, with one additional constraint.
In order to determine $\vert E_x \setminus R_x \vert$, fix $i$ and consider all $\psi\in E_x \setminus R_x$ with $i\to x$.
First consider the case that $F$ consists of a single tree.
As before, consider an ordering of the edges $e_t\in E(F)$ as in Equation~\ref{edgeorder}.
For edge $e_1$, there are $(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to map $e_1$ into $\tilde{S}$ with $i\to x$.
For $t>1$, similarly as before, one vertex of $e_t$ has already been mapped into $\tilde{S}$.
To choose which two vertices in $\tilde{S}$ to map the other two vertices of $e_t$ to, we have to avoid creating an \emph{unwanted triple} in the image of $F$:
i.e. a triple $\{ a_1, a_2, a_3 \} \subset V(F)$ with $\{ a_1, a_2, a_3 \} \not\in E(F)$ while $\{ \psi(a_1), \psi(a_2), \psi(a_3) \}\in E(\tilde{S})$.
By avoiding unwanted triples, $\tilde{S}[\psi(V(F))]$ will equal $\psi(E(F))$.
Any two vertices
$$a_1,a_2 \in \bigcup_{s=1}^{t-1}e_s$$
have already been mapped into $\tilde{S}$ together with the first $t-1$ edges.
We need to select the images of the remaining two vertices of $e_t$ avoiding every vertex $b\in V(\tilde{S})$ for which $\{ \psi(a_1), \psi(a_2), b\} \in E(\tilde{S})$.
Since there are at most $\binom{r}{2}$ pairs $a_1, a_2$, at most $\binom{r}{2}$ vertices $b$ in $\tilde{S}$ are forbidden to be selected.
Since by Proposition~\ref{prop:r} $r \leq k(k+4)3^k \ll \vert V(\tilde{S}) \vert$, there are still $(1-o(1))(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to embed $e_t$ for $t=2,\ldots, f$ into $\tilde{S}$.
As before, since $i$ can be chosen in $r$ ways, if $F$ is a tree,
$$\vert E_x \setminus R_x \vert \sim r(1-\epsilon)^f \vert V(\tilde{S}) \vert^f.$$
If $F$ is a forest, then again proceed as before.
When mapping the first vertex of each component into $\tilde{S}$, we still must avoid creating unwanted edges in $\tilde{S}$.
The number of vertices that are forbidden to be selected here is still small though, because the number of components satisfies $\lambda \ll \vert V(\tilde{S}) \vert$.
Then as before, if $F$ is a forest,
$$\vert E_x \setminus R_x \vert \sim r(1-\epsilon)^f \vert V(\tilde{S}) \vert^{f+\lambda-1} \sim \vert E_x \vert.$$
\end{proof}
It still may be that for two distinct embeddings $\psi,\psi'\in E_x\setminus R_x$, the images $\psi(V(F))=\psi'(V(F))$ and $\psi(E(F))=\psi'(E(F))$.
Fix any copy of $F$ in $\tilde{S}$.
For each labeling of $V(F)$ that gives an automorphism of $F$, there is a distinct embedding $\psi\in E_x$ onto the copy of $F$.
Denote by $\vert Aut(F)\vert$ the number of hypergraph automorphisms of $F$.
Then there are $\vert Aut(F)\vert$ distinct embeddings in $E_x$ onto any fixed copy of $F$.
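Since $F$ has constant size, $\vert Aut(F)\vert$ can even be computed by brute force. The following Python sketch (the function name \texttt{num\_automorphisms} is ours) counts the vertex bijections preserving the edge set, on toy examples:

```python
# Brute-force count of hypergraph automorphisms of a small 3-uniform
# hypergraph: vertex bijections mapping the edge set onto itself. Only
# feasible at toy sizes, but it makes |Aut(F)| concrete.
from itertools import permutations

def num_automorphisms(edges):
    E = {frozenset(e) for e in edges}
    V = sorted(set().union(*E))
    count = 0
    for image in permutations(V):
        phi = dict(zip(V, image))
        if {frozenset(phi[v] for v in e) for e in E} == E:
            count += 1
    return count

print(num_automorphisms([(0, 1, 2)]))             # -> 6 (all of S_3)
print(num_automorphisms([(0, 1, 2), (2, 3, 4)]))  # -> 8 (swap edges and/or
                                                  #    the two leaves in each)
```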
Since $D_x$ counts unlabeled sets containing $x$ that induce a copy of $F$, we infer that
$$\frac{\vert E_x\setminus R_x \vert}{\vert Aut(F) \vert} \leq \vert D_x \vert \leq \frac{\vert E_x\vert}{\vert Aut(F) \vert}.$$
This along with Claim~\ref{Rxsmall} and Equation~\ref{sizeEx} implies that
$$deg_A(x) = \vert D_x \vert \sim \frac{r(1-\epsilon)^f \vert V(\tilde{S}) \vert^{f+\lambda-1}}{\vert Aut(F) \vert},$$
proving $(1)$ of Claim~\ref{claim:degcodeg} with
$$c = \frac{r(1-\epsilon)^f}{\vert Aut(F) \vert}.$$
To find $deg_{A}(x,y)$ for any $x,y \in V(A)$, proceed as before.
Let
$$E_{x,y} = \{\psi: F\to \tilde{S} \text{ such that there are some }i,j\in V(F) \text{ with } \psi(i)=x \text{ and } \psi(j)=y\}$$
be the set of all embeddings of $F$ into $\tilde{S}$ with $i\to x$ and $j\to y$.
Let
\begin{equation}\label{degAxy}
D_{x,y} = \{ R\in \binom{V(\tilde{S})}{r}: \text{ there exists } \psi\in E_{x,y} \text{ with } \psi(V(F)) = R\}
\end{equation}
and see that
$$deg_A(x,y) = \vert D_{x,y} \vert.$$
To find the cardinality of $E_{x,y}$, we follow the same procedure as for $E_{x}$, except that some vertex in $F$ must be mapped to $y$ by all $\psi \in E_{x,y}$.
Fix some $i,j\in V(F)$ and first count the embeddings $\psi \in E_{x,y}$ with $i\to x$ and $j\to y$.
Consider two possible cases.
\begin{enumerate}
\item[Case 1)] Suppose $j$ is in the same component of $F$ as $i$.
Call the component $C_i$.
As before, give an ordering to the edges $e_t\in E(C_i)$ such that
$$e_1 = \{i,g,h\} \text{ for some vertices } g \text{ and } h \text{, and}$$
$$\big \vert e_{t+1} \cap \left( \bigcup_{s\leq t} e_s \right) \big\vert =1.$$
Let $e_{t_j}$ be the first edge in the order that contains $j$ as a vertex, so $e_{t_j} = \{ a,b,j\}$ for some vertices $a$ and $b$.
For edge $e_1$, as before, there are $(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to map $e_1$ into $\tilde{S}$ with $i\to x$.
For $2 \leq t < t_j$, similarly as before, there are $(1-o(1))(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to map $e_t$ into $\tilde{S}$.
For $e_{t_j}$, either $a$ or $b$ has already been assigned an image in $\tilde{S}$, and since $j\to y$, there is at most one way to map $e_{t_j}$ into $\tilde{S}$.
For $e_t$, $t>t_j$, once again there are $(1-o(1))(1-\epsilon)\vert V(\tilde{S}) \vert$ ways to map $e_t$ into $\tilde{S}$.
Embed the other $\lambda-1$ components of $F$ into $\tilde{S}$ as before, so all $f$ edges are mapped into $\tilde{S}$.
Then there are at most $(1+o(1))(1-\epsilon)^{f-1} \vert V(\tilde{S}) \vert^{f+\lambda-2}$
embeddings with $i\to x$ and $j\to y$.
Since $i$ and $j$ can be chosen in $r(r-1) \leq r^2$ ways,
$$
\vert E_{x,y} \vert \lesssim r^2(1-\epsilon)^{f-1} \vert V(\tilde{S}) \vert^{f+\lambda-2}.
$$
\item[Case 2)] Suppose $j$ is not in the same component of $F$ as $i$.
Map the component containing $i$ as before, with $i\to x$.
Next map the component containing $j$ in the same way, with $j\to y$.
For each of the other $\lambda-2$ components of $F$, embed one of its vertices to some unused vertex in $\tilde{S}$.
There are
\begin{equation}\label{embedseedsnoty}
(1-o(1))\vert V(\tilde{S})\vert^{\lambda-2}
\end{equation}
ways to choose these vertices.
As with the first two components, map the rest of the edges of $F$ into $\tilde{S}$ to form an embedding of $F$.
Since each edge of $F$ can be mapped into $\tilde{S}$ in $(1-o(1))(1-\epsilon)\vert V(\tilde{S}) \vert$ ways, and since there are $r(r-1) \leq r^2$ ways to choose $i$ and $j$, applying Equation~\ref{embedseedsnoty} we infer that
\begin{equation}
\vert E_{x,y} \vert \lesssim r^2(1-\epsilon)^{f} \vert V(\tilde{S}) \vert^{f+\lambda-2}.
\end{equation}
\end{enumerate}
In either case,
\begin{equation}\label{sizeExy}
\vert E_{x,y} \vert \lesssim r^2(1-\epsilon)^{f-1} \vert V(\tilde{S}) \vert^{f+\lambda-2}.
\end{equation}
By the same argument that showed
$$\vert D_x \vert \sim \frac{r(1-\epsilon)^f \vert V(\tilde{S}) \vert^{f+\lambda-1}}{\vert Aut(F) \vert},$$
it can be shown that
$$ deg_A(x,y) = \vert D_{x,y} \vert \lesssim \frac{r^2(1-\epsilon)^{f-1} \vert V(\tilde{S}) \vert^{f+\lambda-2}}{\vert Aut(F) \vert}.$$
Since $\vert V(\tilde{S}) \vert = \vert V(A) \vert \sim (1-\epsilon)m$ by assumption, and since $r$ is bounded by a constant by Proposition~\ref{prop:r},
$$\frac{deg_A(x,y)}{D} \lesssim \frac{r^2(1-\epsilon)^{f-1} \vert V(A) \vert^{f+\lambda-2}}{r(1-\epsilon)^{f} \vert V(A) \vert^{f+\lambda-1}} = \frac{r}{(1-\epsilon)\vert V(A) \vert} \ll \frac{1}{(\log \vert V(A) \vert)^4},$$
and hence
$$deg_A(x,y) < \frac{c \vert V(A) \vert^{f+\lambda-1}}{(\log \vert V(A) \vert)^4}$$
for the same value of $c$ as above.
This proves $(2)$ of Claim~\ref{claim:degcodeg}.
\end{proof}
\section{Proof of Theorem~\ref{thm:main}}
\begin{proof}\label{proof:main}
Let $T$ be any subdivision tree on $n$ vertices with maximum degree at most $d$, and let $S$ be any Steiner triple system on $m \geq n(1+\mu)$ vertices, where $n$ is large.
Fix constants $\epsilon$ and $k$ so that they fit in the hierarchy described in Inequality \ref{eq:hier}.
Specifically, choose $k$ so that
\begin{equation}\label{sizek}
k \geq \frac{8d^5}{\epsilon^{d+1}}.
\end{equation}
First, recall that Lemma~\ref{lemma:sawing} guarantees a decomposition of $T$ into families with certain properties.
Namely, $T$ is decomposed into a set $\mathcal{P}$ of subhypertrees, a set $\cE$ of stars, and a set $I$ of independent vertices.
Second, recall that Lemma~\ref{lemma:random} guarantees a subset $R \subset V(S)$ called the \emph{reservoir} with certain properties.
Let
$$\tilde{S} = S[V(S) \setminus R].$$
Third, let $P$ be as defined in Equation~\ref{Psum}.
Lemma~\ref{lemma:matching} guarantees that there is a copy of $P$ in $\tilde{S}$, so that the hypergraph embedding
$$f: P \to \tilde{S}$$
exists.
Finally, it remains to show that $\cE$ and $I$ can be embedded in $S$ in such a way that the original configuration of $T$ is restored.
Recall that a star $E_j \in \cE$ has the form
$$E_j = \big\{ \{v_{j,i}, w_{j,i}, u_j\} : 1\leq i \leq {c_j} \big\}, \text{ where } c_j = deg(u_j).$$
To simplify the notation throughout the rest of the proof, we will drop the index $j$ and will write $v_i = v_{j,i}$, $w_i = w_{j,i}$, $u = u_j$, and $c=c_j$ when referring to vertices of $E_j$, whenever it is clear from the context that $j$ is fixed.
To embed the stars belonging to $\cE$, first take the star $E_1 = \{ \{v_{i}, w_{i}, u\} : 1\leq i\leq c\}$.
Referring to Figure~\ref{embedreminder}, recall how the vertices of a star are labeled.
By~(\ref{sawing6}) of Lemma~\ref{lemma:sawing},
$$w_{1}, w_{2},\ldots, w_{c}, u \in I, \text{ and }$$
$$v_1, \ldots, v_c \in V(P),$$
so the vertices $v_i$ already have an image in $S$ by $f$.
Specifically, all $f(v_i)$ are in $\tilde{S}$.
By Lemma~\ref{lemma:stars}, for the $c$-tuple of vertices $f(v_{1}), \ldots, f(v_{c})$, there are at least $s(c)=\frac{m}{c^2+1}$ stars of the form
$$S_l = \{ \{f(v_i),w_i^{(l)},u^{(l)}\},\hspace*{0.15cm} 1 \leq i \leq c\}, \hspace*{0.5cm} l=1,\ldots,s(c)$$
in $S$ such that the sets
$$W_l = \{w_1^{(l)},\ldots,w_c^{(l)}, u^{(l)}\}, \hspace*{0.5cm} l=1,\ldots,s(c)$$
are pairwise disjoint subsets of $V(S)$.
Further, Lemma~\ref{lemma:random} guarantees that at least $r(c)=\frac{\epsilon^{c+1}m}{2(c^2+1)}$ of the sets $W_l$ lie in the reservoir $R$.
As an example, Figure~\ref{embedding1} shows just two such sets $W_1$ and $W_2$, where a dashed line segment represents a hyperedge in $S$.
\begin{figure}[h]
\begin{minipage}{0.33\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.7cm]
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (4.2,-1.) -- (4.8,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (5.7,-1.) -- (6.3,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (7.2,-1.) -- (7.8,-1.) -- cycle;
\fill[line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill opacity=0] (6.,1.) -- (5.4,1.) -- (4.1,3.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (3.8657942712621005,-2.9859105973373743) -- (4.41898751506418,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.8,-1.) -- (5.,-3.) -- (5.6114262850375525,-3.010496963728578) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (7.,-3.) -- (7.60292196272504,-2.998203780532976) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (7.8,-1.) -- (8.15611520652712,-3.010496963728578) -- (8.758481183111607,-3.010496963728578) -- cycle;
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (4.2,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.2,-1.)-- (4.8,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.8,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (5.7,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (5.7,-1.)-- (6.3,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.3,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (7.2,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (7.2,-1.)-- (7.8,-1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (7.8,-1.)-- (6.,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (6.,1.)-- (5.4,1.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (5.4,1.)-- (4.1,3.);
\draw [line width=2.pt,dash pattern=on 4pt off 4pt,color=zzttqq] (4.1,3.)-- (6.,1.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (3.8657942712621005,-2.9859105973373743);
\draw [line width=2.pt,color=zzttqq] (3.8657942712621005,-2.9859105973373743)-- (4.41898751506418,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (4.41898751506418,-2.998203780532976)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (4.8,-1.)-- (5.,-3.);
\draw [line width=2.pt,color=zzttqq] (5.,-3.)-- (5.6114262850375525,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (5.6114262850375525,-3.010496963728578)-- (4.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (7.,-3.);
\draw [line width=2.pt,color=zzttqq] (7.,-3.)-- (7.60292196272504,-2.998203780532976);
\draw [line width=2.pt,color=zzttqq] (7.60292196272504,-2.998203780532976)-- (7.8,-1.);
\draw [line width=2.pt,color=zzttqq] (7.8,-1.)-- (8.15611520652712,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.15611520652712,-3.010496963728578)-- (8.758481183111607,-3.010496963728578);
\draw [line width=2.pt,color=zzttqq] (8.758481183111607,-3.010496963728578)-- (7.8,-1.);
\begin{scriptsize}
\draw [fill=qqqqff] (6.,1.) circle (2.5pt);
\draw[color=qqqqff] (6.306823073590519,1.062826693367364) node {u};
\draw [fill=qqqqff] (4.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (3.7011987775393953,-0.9356618443805933) node {$w_{2}$};
\draw [fill=qqqqff] (4.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.117848880246802,-0.9019320800304167) node {$v_{2}$};
\draw [fill=qqqqff] (5.7,-1.) circle (2.5pt);
\draw[color=qqqqff] (5.573200698974183,-1.3) node {$w_{3}$};
\draw [fill=qqqqff] (6.3,-1.) circle (2.5pt);
\draw[color=qqqqff] (6.332120396853152,-1.3) node {$v_{3}$};
\draw [fill=qqqqff] (7.2,-1.) circle (2.5pt);
\draw[color=qqqqff] (7.057310330381943,-1.3) node {$w_{4}$};
\draw [fill=qqqqff] (7.8,-1.) circle (2.5pt);
\draw[color=qqqqff] (8.153527671762674,-0.9356618443805933) node {$v_{4}$};
\draw [fill=qqqqff] (5.4,1.) circle (2.5pt);
\draw[color=qqqqff] (4.94920005849592,1.05439425227982) node {$w_{1}$};
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.392658946718011,3.162504524165851) node {$v_{1}$};
\draw [fill=qqqqff] (3.8,5.) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The dashed edges of star $E_j$ are removed during decomposition \label{embedreminder}}
\end{minipage}
\begin{minipage}{0.59\textwidth}
\centering
\definecolor{zzttqq}{rgb}{0,0,0}
\definecolor{qqqqff}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(0.34079231360926787,-1.2) rectangle (23.100787761327332,9.91380104385483);
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (4.3641429644735465,1.0254659940610364) -- (3.806023398089517,1.0254659940610364) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (2.832891846445569,1.0254659940610364) -- (2.3168531886162764,1.0216671244288105) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (4.1,3.) -- (3.509291958589649,3.0008696189206994) -- (3.8,5.) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.416661752659802,2.906503383945089) -- (5.155954180533691,1.0119923313931234) -- (5.763617847966775,1.011083270323857) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (6.416661752659802,2.906503383945089) -- (7.133417257810685,0.9632995699804647) -- (6.528157053461051,0.9632995699804647) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (9.745592876582792,2.8587196836016964) -- (8.996981571202982,0.8836600694081441) -- (9.634097575781544,0.8995879695226082) -- cycle;
\fill[line width=2.pt,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (9.745592876582792,2.8587196836016964) -- (10.271213580360108,0.8677321692936799) -- (10.860545884595279,0.8518042691792158) -- cycle;
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (4.3641429644735465,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (4.3641429644735465,1.0254659940610364)-- (3.806023398089517,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (3.806023398089517,1.0254659940610364)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (2.832891846445569,1.0254659940610364);
\draw [line width=2.pt,color=zzttqq] (2.832891846445569,1.0254659940610364)-- (2.3168531886162764,1.0216671244288105);
\draw [line width=2.pt,color=zzttqq] (2.3168531886162764,1.0216671244288105)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (4.1,3.)-- (3.509291958589649,3.0008696189206994);
\draw [line width=2.pt,color=zzttqq] (3.509291958589649,3.0008696189206994)-- (3.8,5.);
\draw [line width=2.pt,color=zzttqq] (3.8,5.)-- (4.1,3.);
\draw [line width=2.pt,color=zzttqq] (6.416661752659802,2.906503383945089)-- (5.155954180533691,1.0119923313931234);
\draw [line width=2.pt,color=zzttqq] (5.155954180533691,1.0119923313931234)-- (5.763617847966775,1.011083270323857);
\draw [line width=2.pt,color=zzttqq] (5.763617847966775,1.011083270323857)-- (6.416661752659802,2.906503383945089);
\draw [line width=2.pt,color=zzttqq] (6.416661752659802,2.906503383945089)-- (7.133417257810685,0.9632995699804647);
\draw [line width=2.pt,color=zzttqq] (7.133417257810685,0.9632995699804647)-- (6.528157053461051,0.9632995699804647);
\draw [line width=2.pt,color=zzttqq] (6.528157053461051,0.9632995699804647)-- (6.416661752659802,2.906503383945089);
\draw [line width=2.pt,color=zzttqq] (9.745592876582792,2.8587196836016964)-- (8.996981571202982,0.8836600694081441);
\draw [line width=2.pt,color=zzttqq] (8.996981571202982,0.8836600694081441)-- (9.634097575781544,0.8995879695226082);
\draw [line width=2.pt,color=zzttqq] (9.634097575781544,0.8995879695226082)-- (9.745592876582792,2.8587196836016964);
\draw [line width=2.pt,color=zzttqq] (9.745592876582792,2.8587196836016964)-- (10.271213580360108,0.8677321692936799);
\draw [line width=2.pt,color=zzttqq] (10.271213580360108,0.8677321692936799)-- (10.860545884595279,0.8518042691792158);
\draw [line width=2.pt,color=zzttqq] (10.860545884595279,0.8518042691792158)-- (9.745592876582792,2.8587196836016964);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (4.1,3.)-- (4.771941869802165,6.288315043333342);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (4.771941869802165,6.288315043333342)-- (4.973357758720024,7.597518321299435);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (4.973357758720024,7.597518321299435)-- (5.304255290513649,6.2451544957080865);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (5.304255290513649,6.2451544957080865)-- (6.416661752659802,2.906503383945089);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (8.16873076525085,2.8905754838306246)-- (5.721473917557786,6.331475590958598);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (5.721473917557786,6.331475590958598)-- (4.973357758720024,7.597518321299435);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (4.973357758720024,7.597518321299435)-- (6.0811451477682485,6.51850463066804);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (6.0811451477682485,6.51850463066804)-- (9.745592876582792,2.8587196836016964);
\draw [line width=2.pt] (5.2754815920968126,7.022044352962692) circle (1.2576772595439563cm);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (4.1,3.)-- (8.46936211636572,6.676759971960644);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (8.46936211636572,6.676759971960644)-- (9.577149505413944,7.554357773674179);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (9.577149505413944,7.554357773674179)-- (8.800259648159344,6.389022987792273);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (8.800259648159344,6.389022987792273)-- (6.416661752659802,2.906503383945089);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (8.16873076525085,2.8905754838306246)-- (9.203091425995062,6.273928194124924);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (9.203091425995062,6.273928194124924)-- (9.577149505413944,7.554357773674179);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (9.577149505413944,7.554357773674179)-- (9.6203100530392,6.273928194124924);
\draw [line width=2.pt,dash pattern=on 3pt off 3pt,color=zzttqq] (9.6203100530392,6.273928194124924)-- (9.745592876582792,2.8587196836016964);
\draw [line width=2.pt] (9.174317727578226,6.964496956129016) circle (1.2700418438484615cm);
\draw [rotate around={-0.5541005153679632:(7.325607604296451,6.986077229941646)},line width=2.pt] (7.325607604296451,6.986077229941646) ellipse (4.1472177807621655cm and 1.8349814213155113cm);
\begin{scriptsize}
\draw [fill=qqqqff] (4.973357758720024,7.597518321299435) circle (2.5pt);
\draw [fill=qqqqff] (5.304255290513649,6.2451544957080865) circle (2.5pt);
\draw [fill=qqqqff] (6.416661752659802,2.906503383945089) circle (2.5pt);
\draw[color=qqqqff] (6.95,2.993726574605482) node {$f(v_{2})$};
\draw [fill=qqqqff] (5.721473917557786,6.331475590958598) circle (2.5pt);
\draw [fill=qqqqff] (8.16873076525085,2.8905754838306246) circle (2.5pt);
\draw[color=qqqqff] (8.620424033054112,2.936179177771807) node {$f(v_{3})$};
\draw [fill=qqqqff] (6.0811451477682485,6.51850463066804) circle (2.5pt);
\draw [fill=qqqqff] (9.745592876582792,2.8587196836016964) circle (2.5pt);
\draw[color=qqqqff] (10.174203747563311,2.921792328563389) node {$f(v_{4})$};
\draw [fill=qqqqff] (4.771941869802165,6.288315043333342) circle (2.5pt);
\draw [fill=qqqqff] (4.1,3.) circle (2.5pt);
\draw[color=qqqqff] (4.7,3.0368871222307376) node {$f(v_{1})$};
\draw[color=black] (4.4,7.352941884756318) node {$W_1$};
\draw [fill=qqqqff] (8.46936211636572,6.676759971960644) circle (2.5pt);
\draw [fill=qqqqff] (9.577149505413944,7.554357773674179) circle (2.5pt);
\draw [fill=qqqqff] (8.800259648159344,6.389022987792273) circle (2.5pt);
\draw [fill=qqqqff] (9.203091425995062,6.273928194124924) circle (2.5pt);
\draw [fill=qqqqff] (9.6203100530392,6.273928194124924) circle (2.5pt);
\draw[color=black] (8.3,7.396102432381575) node {$W_2$};
\draw[color=black] (6.440816377978711,8.611791190492946) node {R};
\draw[color=qqqqff] (9.65,7.820514484029924) node {$u^{(2)}$};
\draw[color=qqqqff] (5.448123782597834,7.66) node {$u^{(1)}$};
\end{scriptsize}
\end{tikzpicture}
\caption{The dashed edges of each star $S_l$ indicate where $E_j$ can be mapped. Note that all vertex sets $W_l$ are disjoint.\label{embedding1}}
\end{minipage}
\end{figure}
Choose the star $S_1$ to be the image of $E_1$.
Specifically, extend the homomorphism $f$ by letting
$$f(u) = u^{(1)}, \text{ and }$$
$$f(w_i) = w_i^{(1)}, \hspace*{0.5cm} 1 \leq i \leq c.$$
Now the subhypertrees in $S$ containing $f(v_{1}),\ldots,f(v_{c})$ are connected by $f(E_1)$ in the same way they were connected originally in $T$.
Repeat the above procedure for each $E_j$.
Instead of mapping $E_j$ to $S_1$ each time, choose a star $S_l$ such that no vertex of $W_l$ is already in the image of $f$.
(We show below in Claim~\ref{enoughstars} that it is always possible to find such a star $S_l$.)
Extend $f$ so that it maps $E_j$ to $S_l$.
Now the subhypertrees in $S$ containing $f(v_{1}),\ldots,f(v_{c})$ are connected by $f(E_j)$ in the same way they were connected originally in $T$.
After this process has been completed for all $E_j\in \cE$, all stars of $\cE$ have an image in $S$ by $f$.
Consequently, by~(\ref{sawing6}) of Lemma~\ref{lemma:sawing},
all vertices of $I$ have also been embedded into $S$.
Then
$$f(T) \subset S,$$
completing the proof.
\end{proof}
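The iterative extension of $f$ above is, in effect, a greedy assignment of the edges $E_j$ to fresh stars. The following minimal sketch is our illustration, not code from the paper: star vertex sets are modelled as pairwise disjoint sets, mirroring Figure~\ref{embedding1}, and all names are ours.

```python
def embed_edges(edges, stars):
    """Map each edge id to the first star whose vertices are all unused."""
    image = set()    # vertices already used by the homomorphism f
    taken = set()    # indices of stars already serving as an image
    assignment = {}  # edge id -> index of its chosen star S_l
    for e in edges:
        for l, W in enumerate(stars):
            if l not in taken and image.isdisjoint(W):
                assignment[e] = l
                taken.add(l)
                image |= W
                break
        else:
            # Claim "enoughstars" guarantees this branch is never reached
            raise RuntimeError("no fresh star available")
    return assignment

stars = [frozenset({0, 1, 2}), frozenset({3, 4, 5}), frozenset({6, 7, 8})]
print(embed_edges(["E1", "E2"], stars))  # → {'E1': 0, 'E2': 1}
```

The `for`/`else` mirrors the structure of the proof: the loop either finds a star $S_l$ with no used vertices, or the claim below shows this failure case cannot occur.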
\begin{claim}\label{enoughstars}
There exists some star $S_l$ in $S$ onto which to embed $E_j$, such that none of the vertices of $W_l$ has been used yet.
\end{claim}
\begin{proof}[Proof of Claim~\ref{enoughstars}]
By Lemma~\ref{lemma:stars}, there are $r(c)$ stars $S_{r}$ in $S$ onto which to embed $E_j$.
If for some $i<j$,
$$f(V(E_i)) \cap W_r \neq \emptyset,$$
then $S_r$ cannot be the image of $E_j$.
On the other hand,
$$\vert f(V(E_i)) \cap R \vert \leq d+1,$$
so there are at most $(d+1)(j-1)$ stars $S_r$ that cannot be the image of $E_j$ because one of their vertices is already used in the image of some $E_i$.
Still, there were originally $r(c) = \frac{\epsilon^{c_j+1}m}{2(c_j^2+1)}$ stars $S_{r}$ from which to choose.
Recall that $d\geq c$, $\vert \cE \vert \geq j$, and $m \geq n$.
Also, combining several parts of Lemma~\ref{lemma:sawing},
$$ \left( \frac{2d^2}{k}\right) n \geq \vert \cE \vert.$$
Using these facts, Inequality~\ref{eq:hier}, and Inequality~\ref{sizek},
$$r(c) = \frac{\epsilon^{c_j+1}m}{2(c_j^2+1)} \geq \frac{\epsilon^{d+1}}{2(d^2+1)}n \geq (d+1)\frac{2d^2}{k}n > (d+1)(\vert \cE \vert-1) \geq (d+1)(j-1).$$
So there will always be at least one set $W_l$ such that for all $i<j$,
$$f(V(E_i)) \cap W_l = \emptyset.$$
Map $E_j$ onto the star $S_l$ containing $W_l$.
\end{proof}
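As a purely illustrative sanity check (not part of the argument), each link of the displayed chain of inequalities can be verified numerically for one concrete parameter choice. The values of $d$, $\epsilon$, $c_j$, $n$, $m$ below are assumptions, and the choice of $k$ is our assumed reading of the constraints from Inequality~\ref{eq:hier} and~\ref{sizek}, taken with extra slack.

```python
import math

# Illustrative parameters only; nothing here is prescribed by the paper.
d = 3        # degree bound, with d >= c >= c_j
eps = 0.5    # assumed 0 < eps <= 1
c_j = 2      # parameter of the star E_j, with c_j <= d
n = 10**7    # number of vertices of the hypertree T
m = 2 * n    # number of vertices of the Steiner system, m >= n

# Assumed reading of the constraints on k: large enough (with slack) that
# eps^(d+1) / (2(d^2+1)) >= (d+1) * 2d^2 / k.
k = math.ceil(8 * d**2 * (d + 1) * (d**2 + 1) / eps**(d + 1))

r_c = eps**(c_j + 1) * m / (2 * (c_j**2 + 1))  # available stars r(c)
num_E = math.floor((2 * d**2 / k) * n)         # upper bound on |E|
j = num_E                                      # worst case: j = |E|

# Each link of the displayed chain, for this parameter choice.
assert r_c >= eps**(d + 1) * n / (2 * (d**2 + 1))
assert eps**(d + 1) * n / (2 * (d**2 + 1)) >= (d + 1) * (2 * d**2 / k) * n
assert (d + 1) * (2 * d**2 / k) * n > (d + 1) * (num_E - 1)
assert (d + 1) * (num_E - 1) >= (d + 1) * (j - 1)
```

For these values $r(c) = 250{,}000$ while $(d+1)(j-1) = 15{,}620$, so a fresh star remains available at every step of the embedding.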
\section{Concluding Remarks}
Note that the problem of finding large matchings in Steiner systems has been extensively studied.
For some of these results, see Chapter 19 of~\cite{CR}.
The best current bound on the size of such a matching is due to Alon, Kim, and Spencer~\cite{AKS}, who proved a more general result implying that any Steiner triple system on $m$ vertices contains a matching of size $(m/3) - cm^{1/2}(\ln m)^{3/2}$ where $c>0$ is an absolute constant.
It is likely that one could extend Theorem~\ref{thm:main} along similar lines, giving a better numerical bound on the parameter $\mu$ as a function of $m$.
As another possible extension, one could study the problem of embedding hypertrees into other designs.
In our opinion the most interesting question is whether Conjecture~\ref{conjec} is true.
In other words, does every Steiner triple system on $m$ vertices contain all hypertrees with $m-o(m)$ vertices?
Note that the main evidence for stating Conjecture~\ref{conjec} is our inability to find a counterexample.
We close with the following likely easier variant of Conjecture~\ref{conjec}.
\begin{conj}\label{conjec2}
Let $d$ be a fixed constant.
Then any Steiner triple system on $n$ vertices contains all hypertrees with maximum degree $\leq d$ and $n-o(n)$ vertices.
\end{conj}
\begin{bibdiv}
\begin{biblist}
\bib{AKS}{article}{
AUTHOR = {Alon, Noga},
AUTHOR = {Kim, Jeong-Han},
AUTHOR = {Spencer, Joel},
TITLE = {Nearly perfect matchings in regular simple hypergraphs},
JOURNAL = {Israel J. Math.},
FJOURNAL = {Israel Journal of Mathematics},
VOLUME = {100},
YEAR = {1997},
PAGES = {171--187},
ISSN = {0021-2172},
MRCLASS = {05C70 (05C65)},
MRNUMBER = {1469109},
MRREVIEWER = {Italo Jos\'e Dejter},
}
\bib{Bol}{article}{
AUTHOR = {Bollob\'as, B\'ela},
TITLE = {Some remarks on packing trees},
JOURNAL = {Discrete Math.},
FJOURNAL = {Discrete Mathematics},
VOLUME = {46},
YEAR = {1983},
NUMBER = {2},
PAGES = {203--204},
ISSN = {0012-365X},
MRCLASS = {05C70 (05C05)},
MRNUMBER = {710892},
MRREVIEWER = {H. Joseph Straight},
URL = {https://doi.org/10.1016/0012-365X(83)90254-6},
}
\bib{CR}{book}{
AUTHOR = {Colbourn, Charles J.},
AUTHOR = {Rosa, Alexander},
TITLE = {Triple systems},
SERIES = {Oxford Mathematical Monographs},
PUBLISHER = {The Clarendon Press, Oxford University Press, New York},
YEAR = {1999},
PAGES = {xvi+560},
ISBN = {0-19-853576-7},
MRNUMBER = {1843379},
}
\bib{FR}{article}{
AUTHOR = {Frankl, Peter},
AUTHOR = {R\"odl, V.},
TITLE = {Near perfect coverings in graphs and hypergraphs},
JOURNAL = {European J. Combin.},
FJOURNAL = {European Journal of Combinatorics},
VOLUME = {6},
YEAR = {1985},
NUMBER = {4},
PAGES = {317--326},
ISSN = {0195-6698},
MRCLASS = {05C70 (05B40)},
MRNUMBER = {829351},
MRREVIEWER = {Zolt\'an F\"uredi},
}
\bib{GL}{book}{
AUTHOR = {Gy\'arf\'as, A.},
AUTHOR = {Lehel, J.},
TITLE = {Packing trees of different order into {$K_{n}$}},
BOOKTITLE = {Combinatorics ({P}roc. {F}ifth {H}ungarian {C}olloq.,
{K}eszthely, 1976), {V}ol. {I}},
SERIES = {Colloq. Math. Soc. J\'anos Bolyai},
VOLUME = {18},
PAGES = {463--469},
PUBLISHER = {North-Holland, Amsterdam-New York},
YEAR = {1978},
MRCLASS = {05C38 (05C05)},
MRNUMBER = {519284},
MRREVIEWER = {K. C. Stacey},
}
\bib{KK}{article}{
AUTHOR = {Kahn, Jeff},
AUTHOR = {Kayll, P. Mark},
TITLE = {Fractional v.\ integral covers in hypergraphs of bounded edge
size},
JOURNAL = {J. Combin. Theory Ser. A},
FJOURNAL = {Journal of Combinatorial Theory. Series A},
VOLUME = {78},
YEAR = {1997},
NUMBER = {2},
PAGES = {199--235},
ISSN = {0097-3165},
MRCLASS = {05C65 (05C70)},
MRNUMBER = {1445415},
MRREVIEWER = {Nigel Martin},
}
\bib{JLR}{book}{
AUTHOR = {Janson, Svante},
AUTHOR = {\L uczak, Tomasz},
AUTHOR = {Ruci\'nski, Andrzej},
TITLE = {Random graphs},
SERIES = {Wiley-Interscience Series in Discrete Mathematics and
Optimization},
PUBLISHER = {Wiley-Interscience, New York},
YEAR = {2000},
PAGES = {xii+333},
ISBN = {0-471-17541-2},
MRCLASS = {05C80 (60C05 82B41)},
MRNUMBER = {1782847},
MRREVIEWER = {Mark R. Jerrum},
}
\bib{LPV}{book}{
AUTHOR = {Lov\'asz, L.},
AUTHOR = {Pelik\'an, J.},
AUTHOR = {Vesztergombi, K.},
TITLE = {Discrete mathematics},
SERIES = {Undergraduate Texts in Mathematics},
NOTE = {Section 8.5},
PUBLISHER = {Springer-Verlag, New York},
YEAR = {2003},
PAGES = {x+290},
ISBN = {0-387-95584-4},
MRCLASS = {05-01 (11-01)},
MRNUMBER = {1952453},
MRREVIEWER = {Robin J. Wilson},
URL = {https://doi.org/10.1007/b97469},
}
\bib{PS}{article}{
AUTHOR = {Pippenger, Nicholas},
AUTHOR = {Spencer, Joel},
TITLE = {Asymptotic behavior of the chromatic index for hypergraphs},
JOURNAL = {J. Combin. Theory Ser. A},
FJOURNAL = {Journal of Combinatorial Theory. Series A},
VOLUME = {51},
YEAR = {1989},
NUMBER = {1},
PAGES = {24--42},
ISSN = {0097-3165},
MRCLASS = {05C65 (05C15 05C70)},
MRNUMBER = {993646},
}
\bib{Pol}{book}{
AUTHOR = {P\'olya, G.},
AUTHOR = {Read, R. C.},
TITLE = {Combinatorial enumeration of groups, graphs, and chemical
compounds},
NOTE = {P\'olya's contribution translated from the German by Dorothee
Aeppli},
PUBLISHER = {Springer-Verlag, New York},
YEAR = {1987},
PAGES = {viii+148},
ISBN = {0-387-96413-4},
MRCLASS = {05A15 (01A75 05C30 20B05 92A40)},
MRNUMBER = {884155},
MRREVIEWER = {Daniel Turz\'\i k},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
Addressing algorithmic harms and implementing mechanisms for enhancing fairness, accountability and transparency in AI systems is a pressing challenge for researchers, practitioners, and policymakers alike. New regulatory frameworks on transnational, national, federal, and local levels create an urgent need to rapidly assess and implement technical and socio-technical innovation that fosters fairness, accountability, and transparency. This is particularly true for AI start-ups, which are often treated as conduits for the cultural production of innovation and the acceleration of economic progress, especially in the US and European contexts \cite{sloane_ethics_2022}. They face the same regulatory pressures as large organizations but cannot fall back onto comparable resources and knowledge for capacity building in what practitioners often call “AI ethics” \cite{greene_better_2019}.
To date, we know little about how AI start-ups put into practice or operationalize “AI ethics”. This gap is indicative of a larger split in the field of AI ethics: There is a rapidly growing body of work that focuses on technical innovation via computational interpretations of ethical concerns, normative frameworks, and socio-technical innovation which is outpacing the production of empirical research on the professional practices of AI development. The most recent technical innovations in the AI ethics space include, for example, tactics for fair clustering \cite{abbasi_fair_2021} and producing intersectionally fair rankings \cite{yang_causal_2021}, as well as testing the probabilistic fairness of pre-trained logistic classifiers \cite{taskesen_statistical_2021}, assessing “leave-one-out unfairness” \cite{black_leave-one-out_2021}, or measuring robustness bias \cite{nanda_fairness_2021}, among many others.
In the academe, normative approaches that have been introduced include the introduction of technomoral virtues \cite{vallor_technology_2016}, contextual integrity \cite{nissenbaum_privacy_2009}, or artificial morality \cite{allen_artificial_2005}. In practice, normative concepts such as utilitarianism continue to play a significant role in AI applications, especially self-driving vehicles \cite{jafarinaimi_our_2018, karnouskos_role_2021}. Similarly relevant are normative AI ethics codes that seek to influence behavior of AI professionals \cite{ryan_artificial_2020}.
Socio-technical innovations to help foster fairness, accountability, and transparency include model cards for model reporting \cite{mitchell_model_2019} and datasheets for datasets \cite{gebru_datasheets_2021}, responsible data management \cite{stoyanovich_responsible_2020}, counterfactuals \cite{pearl_causality_2009, morgan_counterfactuals_2015, kusner_counterfactual_2017, kilbertus_avoiding_2017, karimi_algorithmic_2021, barocas_hidden_2020}, using measurement theory to render assumptions in mathematical models visible \cite{jacobs_measurement_2021, milli_optimizing_2021}, deploying structural causal models \cite{kacianka_designing_2021} or introducing traceability \cite{kroll_outlining_2021} to define, assess and improve models of accountability, nutritional labels \cite{holland_dataset_2018, stoyanovich_nutritional_2019}, algorithmic impact assessment \cite{metcalf_algorithmic_2021, reisman_algorithmic_2018, kaminski_algorithmic_2019, selbst_disparate_2017}, or algorithmic auditing \cite{kasy_fairness_2021, raji_closing_2020, koshiyama_towards_2021, bandy_problematic_2021, brown_algorithm_2021, sloane_silicon_2021, rhea_external_nodate}.
While this important body of work is fast expanding, less is known about how professionals treat and enact AI ethics vis-à-vis these innovations. Even though the fairness, accountability and transparency (FAccT) community increasingly turns towards an integrative understanding of \textit{socio}-technical systems, the emphasis on technology lingers and supplants advancements in understanding how “the social” in “socio-technical” systems actually pans out, especially in the context of the cultural production of AI innovation at large. The epistemological neglect of social practice and the cultural context of AI can perpetuate a mentality of framing the social impact of AI as a purely technical problem~\cite{vakkuri_this_2020}. This technical framing can foster a disconnect from cultural context and social practice and lead to mitigation strategies that only hover at the surface. Prominent examples of that are different forms of “-washing” such as ethics-washing \cite{greene_better_2019, ochigame_invention_2019, sloane_inequality_2019, wagner_ethics_2018}, participation-washing \cite{sloane_participation_2020}, potential diversity, equality, and inclusion (DEI)-washing \cite{howard_council_2020}, or woke-washing \cite{dowell_woke-washing_2020}.
The point of departure for this paper is that this epistemological gap is the \textit{actual} crux of what is often described as the polarization between techno-solutionism and techno-criticism: failure to systematically understand the cultural contexts and social practices AI gets folded into across domains intensifies the risk of a disconnect between scholarly research and application at a time when convergence is much needed to effectively and sustainably address AI harms and comply with new regulatory regimes. This issue surfaces concretely in the fairness, accountability, and transparency space as an issue of adoption \cite{kroll_outlining_2021}, or integration.\footnote{Of course, there also is a bigger political issue that comes with prioritizing quantitative ways of knowing and describing the world over qualitative approaches. For example, such an asymmetry systematically prevents genuine intersectional approaches \cite{crenshaw_mapping_1991, hammack_jr_intersectional_2018}, which center lived experience, from entering the frame, and therefore can be seen as perpetuating oppression. Those considerations, however, are out of scope for this paper.} What seems to be missing is a framework that can meaningfully connect the rich insights from qualitative works and conceptual advancements that have focused on studying the intersection of algorithmic systems and social life, particularly with regards to the professions.\footnote{This body of work is expansive and continues to grow. Important works include research on the gig economy \cite{gray_ghost_2019, rosenblat_rosenblat_2018, cameron_making_2021, ticona_trusted_2018}, journalism \cite{christin_metrics_2020, petre_all_2021}, content moderation \cite{roberts_behind_2019}, or hiring \cite{dencik_regimes_2021}.}
In this paper, we address this need. We present empirical findings from our study on the cultural interpretation, history, and operationalization of “ethics” in German AI start-ups. We use the analysis of this material to argue that AI ethics must be understood in their specific cultural, social, and historical contexts. We base this work on a practice-based approach for understanding “ethical AI” to then present an “anatomy of AI ethics” in German AI start-ups that breaks down “ethical AI” practices into principles, needs, narratives, materializations, and cultural genealogies, all of which we illustrate with data from the field.
To discuss applicability of this framework beyond the academe, we then translate our conceptual work into a research and innovation assessment guide that allows AI researchers, practitioners, and regulators to systematically analyze existing cultural understandings, histories, and social practices of “ethical AI” to define appropriate strategies for effectively implementing socio-technical innovations that emerge in the FAccT field. This guide is comprised of two sets of questions that can help researchers and practitioners conduct their own qualitative work mapping the existing cultural understandings, histories, and social practices and use this analysis as a method of assessing technical and socio-technical innovations vis-à-vis their potential for adoption. We close this paper with critical reflections on our own work and recommendations for future research.
\subsection{Why is understanding AI ethics as social practice necessary?}
Working to understand the practical accomplishment of “ethical work” in technology organizations is not a new endeavor. Fruitful, and empirically grounded, approaches have been proposed, notably Ziewitz’s work on SEO consultants and the “ethicality of optimization” \cite{ziewitz_rethinking_2019}, Rakova et al.’s work \cite{rakova_where_2021} on the impact of organizational culture and structure on the effectiveness of responsible AI initiatives in practice, or Madaio et al.’s research \cite{madaio_assessing_2022} on organizational factors shaping the work on AI fairness.
Here, however, we are taking the cue from Metcalf and Moss' \cite{metcalf_owning_2019} notion of “ethics owners”, who define ethics owners in their report \cite{moss_ethics_2020} as "Silicon Valley professionals tasked with managing the ethical footprints of tech companies”, and who handle “challenging ethical dilemmas with the tools of tech management and within familiar tech company structures”. But rather than focusing on individual ethics roles and responsibilities, we want to center the question of how meaning is ascribed to “ethics” in relation to the wider professional practice of AI design, and how this meaning stabilizes. Within that, we are looking to make space for the cultural specificity of how this dynamic unfolds, as well as its material dimension --- we are less concerned with individualized social processes. Therefore, we engage with the analytically potent notion of social practice theory \cite{bourdieu_outline_1977, shove_design_2007,hui_matters_2017, hui_variations_2017, reckwitz_toward_2002, reckwitz_kreativitat_2016, schatzki_social_1996, schatzki_site_2002, schatzki_practice_2001}, a framework that decenters the individual and focuses on a practice as a unit of inquiry \cite{heidenstrom_utility_2021}.
Specifically, we build on Shove, Pantzar, \& Watson’s \cite{shove_dynamics_2012} framework that focuses on the \textit{dynamics} of social practice as a way of charting and understanding patterns of stability and change. They suggest that social practices are comprised of the three elements: \textit{meanings}, \textit{competences}, and \textit{materials}. \textit{Meanings} designate the social and symbolic significance of participation in any given moment, including emotional knowledge; \textit{competences} are skills, multiple forms of shared understanding and the practical knowledgeability; \textit{materials} are objects, infrastructures, tools, hardware and so on, as well as the body itself \cite{shove_dynamics_2012}. These elements are individually distributed and combined but inform each other. A social practice stabilizes when the three elements are linked up, whereby this stabilization is not static, because elements change as linkages between them are continually made and remade and practices stabilize (or disintegrate) across time and space. The making, breaking, and re-making of links between the elements is “transformative” in that it (re-)shapes those elements that do not disintegrate. Important, here, is the idea that continuities shape competences, materials, and meanings, and how they link up \cite{shove_dynamics_2012}. Shove, Pantzar, \& Watson \cite{shove_dynamics_2012} use the case of the social practice of (car-)driving to work this out, discussing how the material design of a car is a continuation of the design of a horse carriage, rather than a radical new design, and that driving on the left-hand side of a road (in the UK) relates to swords typically being held on the right hand.
What is important to note in this context is that Shove, Pantzar, \& Watson \cite{shove_dynamics_2012} describe the connection of various social practices in terms of “bundles” and “complexes” of practice, as well as circuits of reproduction, that have “emergent lives of their own” \cite{shove_dynamics_2012}. Whilst we do not dispute that, we are interested in what that means in the context of the body of a profession, and the profession of AI design and the field of German AI start-ups specifically. In other words, we are concerned with an interpretation and a use of social practice theory that has a more bounded focus, that is more closely tied to the research focus on organizations, and reflects the insight that was gathered from the empirical data.
Therefore, we use social practice theory in conjunction with our empirical data to develop an “anatomy” of AI ethics. The connecting thread between the two is a concern for stabilization: the anatomy of AI ethics is comprised of a “skeleton” and of “soft tissue” which together stabilize AI ethics in German start-ups. The “skeleton” of AI ethics is comprised of five connected and mutually constitutive AI ethics elements that broadly map onto the characteristics of the social practice elements of competencies, meanings, and materials, but are more nuanced (see discussion below): principles, needs, narratives, ethics materials, and cultural genealogy. Whilst these five elements scaffold AI ethics, they are brought to life – in a specific context – by the “soft tissue” of this anatomy: the way in which principles, needs, narratives, ethics materials, and cultural genealogy are enacted in an interconnected way. In other words, the “skeleton” is the What of (the stabilization of) AI ethics in German start-ups, and the “soft tissue” is the How. Below, we illustrate this dynamic by discussing the enactment of the principles “Mitbestimmung” (co-determination) and “Verantwortung” (responsibility).\footnote{To date, our analysis shows 10 principles overall: “Mitbestimmung” (co-determination), “Verantwortung” (responsibility), “Ehrlichkeit” (honesty), “Aufklärung” (education / enlightenment), “Austausch” (exchange), “Fortbildung” (training), “Absicherung” (coverage), “Ordnung” (order), “Forschung” (research), and “Miteinander” (togetherness). Discussing all of these in this paper, however, is out of scope. We therefore decided to only showcase the two most prominent principles to show how the elements are connected.} It is important to note that the anatomy frame is not a replacement for the social practice theory approach, it is an expansion thereof.
As demonstrated in the discussion part of this paper, we argue that using social practice theory in this way is particularly fruitful for researchers and practitioners who wish to consider where and how technical and socio-technical innovation on fairness, accountability, and transparency can be best “linked into” existing social practices of AI design and stabilized as an integral element of them - or in other words: so that they “stick” better.
\section{Data Collection, Analysis and Limitations}
This paper reports on empirical findings that are part of a larger qualitative study on the operationalization of ethics in German AI start-ups.\footnote{The project is housed at the Tübingen AI Center, University of Tübingen, Germany, and the Principal Investigator is Mona Sloane.} The study aims to map out and understand the plurality of social assumptions and historical continuities that give shape to how German AI start-ups put “ethics” into practice as an organizational responsibility and as a cultural understanding. Guiding research questions are: What is the plurality of meanings of “ethics” among German AI start-ups? What are the basic social assumptions and cultural processes that underpin “ethics” in German AI start-ups? What are the principles, processes, and practices that materialize “ethics” in German AI start-ups?
\subsection{Data Collection and Analysis}
Because this study is focused on researching articulations and enactments of “ethics” through the lens of AI practitioners (in this case in German AI start-ups), a qualitative research approach in the form of semi-structured interviews was chosen.\footnote{The original research proposal also encompassed participant observation in selected AI start-ups and accelerators, primarily in locations where there are AI start-up hubs, such as the Cybervalley in Tübingen, and various AI campuses in Berlin. Due to the Covid-19 pandemic, however, in-person research was suspended and the research team had to focus on conducting interviews virtually. It is important to note that due to the lack of in-person engagement, it was extremely difficult to source research participants and grow the data pool.} Research participants were sourced through the German Federal Association of AI Businesses (KI Bundesverband), the University of Tübingen, various AI industry event pages, and personal networks of the research team. All companies that self-declared to be a start-up, and that offered AI technology, were included in the sample for outreach. As of December 2021, the research team had conducted 64 interviews. 64\% of the individuals that were interviewed identified as AI start-up founders or co-founders, 30\% as AI start-up employees. The remaining 6\% were representatives of AI start-up associations or accelerators.
For each interview, an interview protocol encompassing seven topics formed the basis for the conversation. These themes were derived from a scoping literature review on AI ethics (see section 1), and the empirical focus developed for the study. The themes were: start-up background, which included questions around the AI that the company built or used; ethics within the start-up, which included questions about how the interviewee defines ethics, as well as how ethics - as per their own definition - plays a role in the organization; ethics pertaining to technology design, including questions of how ethics are measured, if at all; ethics and AI generally, including questions about the social role of AI, and AI innovation in the context of inequality; the German start-up landscape, including questions around availability of government funding vs. venture capital, geographic or thematic AI innovation clusters, and the general understanding of “ethics” within the German AI industry; professional ethics, including the question of professional responsibility and accountability; and regulation, including questions about data protection, German and European approaches to AI regulation, and their relationship to ethics.
As per IRB protocol,\footnote{This study was approved by the IRB of the University of Tübingen (Ethikkommission).} all research participants had to sign a consent form prior to the interview. Of the 64 interviews 60 were conducted in German, 4 were conducted in English. All interviews were recorded and then transcribed using transcription software. Transcripts were cleaned manually and anonymized and entered into the transcription software Atlas.ti, where all interview material was coded.\footnote{All data is stored securely on password-protected servers. Codebooks are stored separately from interview data. As per German data protection regulation, the code list will be destroyed by 2023. The anonymized data will be kept for at least 10 years.}
The data analysis followed a grounded theory approach. Grounded theory \cite{clarke_situational_2005, glaser_discovery_2017, charmaz_constructing_2006} is an approach that generates theory from data in an iterative way. This approach was chosen as it is most appropriate for studies that are based on open-ended research questions. It was also chosen because there is little known about the phenomenon of how actors within German AI start-ups specifically construct and enact “ethics”. Grounded theory was also chosen because it allowed for flexibility in the research process. Specifically, the grounded theory approach allowed for iteration in both data collection and analysis. In terms of data collection, it allowed shifting emphasis in interviews while sticking to the interview protocol. Initial coding followed the themes in the interview protocol, as well as the social practice theory lens deployed in this study (with a particular focus on coding for the elements). As coding progressed, clearer and more nuanced patterns emerged, as well as the relationship between them. We learned, for example, that the element of “meaning” bifurcates into needs, narratives, and cultural genealogy, all of which are culturally and historically specific. As data collection progressed and researchers gained more knowledge and understanding of the field, questions could be asked in a more targeted way. For example, the question of how “ethics” are materialized within the company initially was asked in an abstract way and later became a series of questions asking the research participant to mentally visualize the company, its environment and relevant actors, projects, processes, and how they are connected, and then to articulate where they “see” ethics in that visual. In terms of analysis, it allowed letting the data point to analytical foci and theorizations that were not expected when the study was originally designed. 
The grounded theory developed here – the anatomy of AI ethics – emerged at this juncture, allowing us to move from Shove, Pantzar, \& Watson’s notion of clustered and bundled practices, to the profession- and organization-focused concept of the anatomy of AI ethics.
\subsection{Limitations}
There are a number of limitations of this study. First, this study depends on voluntary participation of participants and resulting self-selection of participants and a data set that is likely to be biased. In this case towards start-ups that were part of the ecosystems that the research team was able to tap into, towards start-ups that are already thinking about ethics and/or want to participate in the AI ethics discourse, or start-ups that have the “ethical” stance to support research (see below). Due to self-selection, the data set is also biased towards founders and co-founders rather than employees or freelancers. Second, restrictions on in-person research due to the COVID-19 pandemic severely limited the scope of the originally envisioned research. Specifically, the research team were not able to conduct in-person research via participant observation or more informal interviews (“water cooler conversations”) on site – methods that are essential for observing and recording social practices \cite{schmidt_sociology_2017} – and had to rely on the cumbersome process of formally scheduling and conducting interviews on video platforms. Because in interviews, especially virtually conducted interviews, practices are not directly observed but captured through the reporting of those enacting them, some scholars have questioned their viability for social practice research \cite{schmidt_sociology_2017, bourdieu_outline_1977}. Others, however, have rejected participant observation as the “golden standard” of social practice research and have argued for the benefit of interviews as producing equally relevant knowledge on practices, not least because narratives can then enter the analytical frame, and because interviews themselves are a social practice \cite{halkier_questioning_2017, nicolini_practice_2017}.\footnote{It must be noted that the overwhelming majority of interviewees reported that their entire organization was working remotely at the time of the interview. 
Some start-ups did not even have office space but were fully remote. Many corporate and non-corporate organizations have fully embraced a remote or hybrid office structure. Against that backdrop, social practice researchers will have to fundamentally rethink what it means to collect data on social practice.} For this research, we have taken the cue from the latter group, and argue that interviews are an appropriate method for conducting research on social practices, falling in line with scholars who have called for a “method pluralism” in social practice research \cite{littig_combining_2017}. Third, this study is limited, much like any other empirical research, through the powerful position the researchers take on as gatekeepers of both data collection and data analysis, requiring reflexivity of the researchers’ own positions and perspectives. In this case, this means acknowledging that both researchers identify as middle-class, European, white, native Germans with proficiency in English,\footnote{It must be noted that proficiency in both German and English enabled the research team to delve into relevant scholarly literature in both languages.} and with degrees from elite institutions (among other identifiers), characteristics they share with many of the research participants. While these shared identifiers may have facilitated access to some research participants, predominantly founders and co-founders, it may have prevented access to others, predominantly employees.
\section{The Anatomy of “AI Ethics” in German AI Start-Ups}
\subsection{The Skeleton}
\subsubsection{Principle/s}
The social practices that constitute “ethics” in German AI start-ups are anchored by distinct principles that permeate the field of German AI start-ups at large and that serve as actionable frameworks within the organization. These principles have a central function in that the other elements (needs, narratives, ethics materials, and cultural genealogy) are articulated \textit{in reference} to the principles. The principles presented themselves not as what is commonly understood as human values or virtues per se, or as what has been critiqued as norms and methods that have little bearing on actual practice \cite{mittelstadt_principles_2019}, but as frameworks that directly affected collective behavior, particularly within an organization. As such, they are closely related to what has been described as “business ethics”, which is “focused on the human values to coordinate activity inside a business operation” \cite{moss_ethics_2020}. In that sense, principles serve as framing devices for what is considered “ethical” practice within an organization and, perhaps more importantly, what is not, and for how this practice is organized and accounted for \cite{korte_einfuhrung_2016}.
The principles we identified were both aspirational and concrete in that they represented ethical ideals a company had set for itself, for example creating and maintaining a culture of responsibility, as well as concrete actions and activities that materialized that ideal (see “ethics materials” below). They were underpinned and carried through by distinct narratives, served particular needs of the organization and the AI community of practice at large, had clearly identifiable cultural and historic roots, and materialized as various socio-technical arrangements, roles, responsibilities, and so on.
The principles that emerged from the data showed clearly identifiable roots in German history and culture. We therefore chose to retain the original German terminology to describe them.\footnote{It is well known that language and knowledge shape each other \cite{coupland_relation_1997, kay_what_1984}, and that certain terms cannot be readily translated, partially because they provide a certain vagueness that serves as an umbrella for multiple, culturally specific interpretations. Whilst ontologically difficult to pin down, these terms can play a central role in ordering social life. For example, Bille \cite{bille_hazy_2015, bille_vagueness_2019} argues that the Danish (and Norwegian) word “hygge” does not just mean “cosy”, but describes an atmosphere, a whole way of being in the world, and of being with other people. “Hygge” is aspirational, something to strive for, as well as real; it distinctly materializes in social situations and is immediately recognizable by those familiar with the concept. It often stands in for what is “typical” Nordic culture \cite{linnet_money_2011}. The Dutch notion of “gezellig” is comparable, yet different, connoting cosiness and comfort, but with a distinct emphasis on conviviality over solitude. The principles that emerged from how our research participants described the role of “ethics” in their organizations showed a similar simultaneity of vagueness and specificity that appeared to be best grasped by retaining the original terminology.} This is as much a methodological strategy as it is an analytical stance, because it centers the idea that what counts as “AI ethics” is not only culturally specific, but stabilizes as part of the wider social practice of AI design within a field~\cite{sloane_use_nodate}. It is important to note that principles co-emerge and overlap, rather than being strictly separate from one another, and that they can encompass a wide range of concerns and activities. 
The analysis below should, therefore, be read as the beginning of an in-depth analysis and mapping of AI ethics concerns in German start-ups which will grow as the data analysis progresses.
\subsubsection{Needs}
Our data showed that “ethics” in German AI start-ups materialized not altruistically, but served to meet a wide range of (business) needs internal to the AI start-up organization. These needs arose from very specific social, political, and commercial contexts, such as German data protection regulation. In the “soft tissue” section, we describe how these contexts created pressures, issues, and concerns that materialized concretely as problems in and for the organizations we studied, such as issues around talent scarcity, transparency mandates, or client acquisition and retention. These problems were addressed through \textit{the way in which} the aforementioned principles were enacted within the organization. For example, the need for scarce talent became an ethics problem as AI start-up founders found technical talent fastidious about workplace choices, preferring organizations that allowed for their individual ethical concerns to be heard and integrated into decision-making processes, such as around client selection.
\subsubsection{Narratives}
While principles depict broader AI ethics frameworks that work to influence collective behavior, and needs represent concrete problems that organizations “solved” through ethics, narratives were concrete stories that informed the organizational and social logics that underpinned the principles and their enactment. Narratives are “ensemble [...] of texts, images, spectacles, events and cultural artefacts that ‘tell a story’” \cite{bal_narratology_2017}. Narratives are both more concrete and more elaborate than principles, and they play an important role in how individuals, communities, and organizations make sense of and act upon the social worlds they are involved in~\cite{gubrium_new_1997}. Typically, the role of narratives is to justify \textit{how} principles are enacted and needs are met. Recent research has shown that narratives around AI in particular play an important role in shaping public opinion on technology, as well as technology innovation and regulation \cite{cave_portrayals_2018, cave_ai_2020, singler_existential_2019}. Importantly, they often serve to rationalize social and racial hierarchies and the status quo \cite{benjamin_race_2019, browne_dark_2015, weheliye_habeas_2014}.
The narratives that we captured had a distinct function for the organizations we researched.\footnote{The narratives in the “soft tissue” section are not taken from the dataset verbatim, but are aggregates of statements and stories.} Sometimes, they presented in the form of if-then statements that rationalized particular stabilizations of the ethics principles. For example, a narrative that sustained how “Verantwortung” (responsibility) was enacted was that individuals had a responsibility to articulate their personal “ethical” concerns, for example about projects or clients, and that this dynamic would create a generally more ethical organization (“\textit{If} everybody takes seriously their responsibility to articulate their personal ethical concerns, then the company will be more ethical.”). It is important to note that there were multiple and continually emerging narratives connected to each of the principles we identified, but they all connected directly to the needs of the organization.
\subsubsection{Ethics Materials}
Principles, needs, narratives, and cultural genealogies concretely came to bear through what we call “ethics materials”: concrete objects, processes, roles, tools, or infrastructures focused on “AI ethics”. These could include company chats dedicated to AI ethics-related concerns, such as social impact or AI fairness, ethics advisory boards, working groups, philanthropy, data security protocols, codes of conduct, participation processes for employees, training, mission statements, environmental sustainability campaigns, and much more.\footnote{While ethics materials clearly emerged in the data analysis, it has to be noted that we also specifically asked research participants about the processes, roles, and activities which they would classify as “ethics”. We used a very specific thought experiment for that, which we called the “post-it question”, whereby we asked the research participants to mentally visualize their organization as an ecosystem of interconnected spheres (spheres could, for example, bind people and processes together by a shared responsibility, such as HR, or by hierarchy, background, interest, or product). We then asked them to imagine they had an unlimited stack of post-its with the word “ethics” at hand, and to put an ethics post-it anywhere in the ecosystem of spheres where “ethics” plays a role.} Moss and Metcalf \cite{moss_ethics_2020} term these “ethics methods” and describe them as replicable routines that foster predictability and accountability.
We, however, want to remain committed to the aforementioned social practice theory approach \cite{shove_dynamics_2012} as an analytical lens which describes “materials” as one of three key elements of any social practice. This lens allows us to examine materials, processes, and technologies that go beyond Moss and Metcalf’s focus on predictability and accountability, and that generally play an important role in the social constitution of AI ethics.
\subsubsection{Cultural Genealogy}
In the data analysis, it became clear that the principles we observed were culturally specific and rooted in distinct histories and genealogies of practice. To understand how contemporary AI ethics stabilizes as a social practice and understand its cultural specificity in the German context, it is important to trace the trajectory of the principles, and their interconnectedness with the other elements (especially needs and narratives). We propose doing so by focusing on the cultural genealogy of principles. This is not a matter of mapping traditions to argue for a structuralist notion of cultural reproduction and path dependency, but of mapping the continuity of a social practice and its elements to understand how standardized concepts, such as concepts of “ethics”, emerge in particular contexts.
The cultural genealogies that we trace in our data are not homogeneous, and they do not signify distinct events in Germany’s history. Rather, they trace the social history of concrete materializations of the principles in the context of corporate organizations. These materializations are multidimensional. For example, the principle of “Mitbestimmung” (co-determination) is linked to the cultural history of the “Betriebsrat”, the workers’ council which historically has had significant influence on decision making in German corporate organizations, but also to cultural histories and ideologies of co-determination in general that can be traced back to the medieval guild system \cite{von_heusinger_von_2010}, and that play a role in how the collective bargaining power of workers is leveraged via strong unions on the federal level in contemporary Germany. The important part is that even though tech workers in German AI start-ups are not unionized, the thread of the culturally specific continuity of “Mitbestimmung” shapes professional practice - not as industrial action, but as “ethics”.
\subsection{The Soft Tissue}
\subsubsection{AI Ethics as “Mitbestimmung”}
\hfill\break
{\bfseries AI ethics as “Mitbestimmung” (co-determination) meant employee\footnote{For this paper, we differentiate between “employee” and “worker” based on the notion that “employees” are permanently employed by an organization, whereby “workers” are precarious \cite{bodie_participation_2014}. The vast majority of individuals we interviewed reported being permanently employed by their organizations. However, it must be noted that these semantic distinctions are not the same in German, where the terms “Mitarbeiter” or “Arbeitnehmer” designate “worker”, “employee”, or “staff”, and where labor law is much stricter, generally limiting outsourcing and external contracting \cite{haipeter_angestellte_2017}.} participation in corporate decision making.}
In one of our first conversations, we spoke to a founder of an AI start-up that used computer vision technology to optimize production in industrial mechanical manufacturing. Very early in the conversation, it became clear that AI ethics as “Mitbestimmung” drove key decision-making in the organization. Employees had voiced a desire to participate in decisions about which industries and clients the company should and should not work with. Specifically, there was a clear request to be able to refuse work with clients from the military-industrial complex.\footnote{This stance reverberates across the field of German AI start-ups. When talking about ethics, almost every research participant framed refusal to work with the military-industrial complex as \textit{ethics}.} Based on that, the company leadership initiated an “ethics council” (consisting of three individuals: an employee, a doctoral candidate [who was not on the payroll], and a student) which conducted a survey among employees to gauge what was of “ethical relevance” to them. This data was then clustered into an “ethics matrix”, a spreadsheet used to vote on taking on new clients or projects. Employees would add their “ethics scoring” for a new client or project into the spreadsheet; the scores were averaged and compared against predetermined thresholds. Projects and clients that scored below the threshold were deemed “okay”, projects just above were deemed to be “in the middle” and needed to be discussed with the ethics council to reach an individual decision, and “extreme cases” were desk rejected by senior management.
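The triage logic of the “ethics matrix” can be sketched in a few lines of code. This is purely illustrative: the function name, the scoring scale, and the threshold values are our assumptions for exposition, not figures reported by the start-up.

```python
# Hypothetical sketch of the "ethics matrix" triage described above.
# Scoring scale and thresholds are illustrative assumptions, not data
# from the start-up we interviewed.

def triage_project(scores, okay_threshold=3.0, extreme_threshold=7.0):
    """Average employees' ethics scores for a prospective client or
    project and map the result to one of the three reported outcomes."""
    avg = sum(scores) / len(scores)
    if avg < okay_threshold:
        return "okay"                         # proceed without review
    if avg < extreme_threshold:
        return "discuss with ethics council"  # individual decision needed
    return "desk reject"                      # rejected by senior management
```

The point of the sketch is only to show that the practice, while framed as “ethics”, operationally resembles a simple averaging-and-threshold decision rule.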
This method was, in part, born out of the need to more deeply explore ethical issues in complex circumstances, particularly in the context of interlinked supply chains for military applications. The overarching goal of how “Mitbestimmung” was operationalized \textit{as ethics} within this organization was to ensure that employees felt that their opinion mattered and was respected, that there was active participation in company decision making (even though the founders still effectively made each decision), and to create transparency across the organization. But it was also conceptualized as a talent attraction and retention tool, and as non-monetary compensation: “We depend on young people joining our company who are willing to work for lower salaries (...) so we need to offer them an environment in which they like to work, because we cannot pay them that much money. (...) But what does environment mean? Where they like to work, where they get a free beer after work and cereal, and also participation (...) real participation.”\footnote{Extended original quote in German: "Und wir sind angewiesen, dass wir junge, talentierte Leute bekommen, die bereit sind für wenig Geld bei uns zu arbeiten. Und dann haben wir so gedacht okay, wie können wir das kompensieren? Also ganz betriebswirtschaftlich. Wir müssen denen halt ein Umfeld bieten, in dem sie gerne arbeiten wollen, weil wir könnten denen nicht soviel Geld bezahlen. Das heißt dann okay, was heißt denn Umfeld? Wo sie gern arbeiten, wo sie ein Feierabendbier kriegen und Müsli... und auch Teilhabe. Und das war halt wirklich Teilhabe, das mit den Kunden, also die moralisch fragwürdigen Deals - findet niemand in Ordnung."}
The {\bfseries needs} that “Mitbestimmung” served were threefold. They derived from the necessity of continually establishing a shared set of values and a shared moral compass. This was connected to the need to establish transparency about the underlying rationales for decision making across the company. A very distinct need that was served through “Mitbestimmung” was the need for talent attraction, alternative compensation, and retention.
The {\bfseries narratives} that underpinned “Mitbestimmung” were:
\begin{itemize}
\item “Our employees need to have a voice in some key company decisions (e.g. client acquisition) to collectively define a shared set of values (‘moral compass’) that will keep the company ethical at large.”
\item “In order to attract and retain the young tech talent that we can ‘afford’ as a start-up, we have to offer ways to be actively involved in ‘ethical’ decision making in the company, because this is a demand in this demographic.”
\item “To keep up morale in our team(s), we need to create transparency about (ethical) decision-making in the company.”
\end{itemize}
The {\bfseries ethics materials} that substantiated “Mitbestimmung” were:
\begin{itemize}
\item Formalized co-determination processes focused on ethics, such as a platform for employee voting on client selection
\item Routines for “taking temperature” from employees in all-hands team meetings to understand how they “feel” about the direction of the company, specifically with regards to ethical concerns around new projects and clients
\item Regular social gatherings (“Stammtisch”)\footnote{The “Stammtisch”, which translates to “tribal table”, refers to a tradition from Central Europe of regular meetings of a peer group at the same tavern. Historically, these gatherings formed an important public sphere for both the working class and political activists to share and discuss current affairs in a convivial setting \cite{boyer_spirit_2005}.} to discuss company matters, particularly as they pertain to “ethics” issues, such as social or environmental concerns
\end{itemize}
There is a strong {\bfseries cultural genealogy} of “Mitbestimmung” in the German workplace. It is a well-established corporate governance mechanism in German organizations with a long-standing tradition \cite{spiro_politics_1958}. It has been formalized in a range of laws: initially the “Betriebsrätegesetz” (Worker Council Law), in force from 1920 to 1933, and now the “Betriebsverfassungsgesetz” (Works Constitution Act), which was introduced in 1952 \cite{muller-jentsch_organisationssoziologie_2003}. Additionally, the “Mitbestimmungsgesetz” (Codetermination Act) from 1976 applies to corporations with 2,000 or more employees and has been particularly relevant for the manufacturing sector. This law mandates the representation and participation of employees on the supervisory board (“Aufsichtsrat”) \cite{muller-jentsch_organisationssoziologie_2003}.
The cultural practices around co-determination in the workplace date back to the early days of the industrial revolution in the mid-1800s when, for the first time, large numbers of workers worked for one organization and questions around governance and representation arose. Collective action, famously through the strikes of Silesian weavers in 1844 and across the nation in 1848, as well as large strikes among textile and garment workers and miners around the turn of the century \cite{kristof_blacksmiths_1993}, eventually prompted governmental intervention in the young Weimar Republic to formalize the representation of worker interests \cite{abelshauser_vom_1999}. Agreements between company owners, government, and workers materialized in the Weimar “Betriebsdemokratie” (business democracy): in line with the progression of democratic governance in the political arena, “Betriebsdemokratie” was the extension of democratic principles and processes into the realm of companies, permitting the representation of worker interests in a dual manner in society \cite{neumann_freiheit_2015, markovits_politics_2016}. The legacy of these institutions engrained co-determination as a guiding governance principle in the modern German economy \cite{plumpe_betriebliche_1999}, for example through the implementation of workers’ councils ("Betriebsrat" or “Betriebsräte”),\footnote{”Betriebsräte” are worker councils that are a formalized representation of workers’ interests within a company. This working relation is mandated by the “Betriebsverfassungsgesetz” (Works Constitution Act) \cite{federal_ministry_of_labour_and_social_affairs_germany_bmas_1972}. A representative or multiple representatives are elected by all employees, usually for a period of four years. The “Betriebsrat” is a voluntary position. Employees pursue those commitments on company time and receive full pay. 
They are tasked with representing employee interests at the top level of the company, spanning from the arrangement of health and safety at the workplace, matters of company orders (“Betriebsordnung”, see below), and the organization of working hours and breaks, to the introduction and application of technologies that surveil and evaluate the behavior and performance of employees.} which to this day play a prominent role in German corporate organizations \cite{schnabel_betriebliche_2020}.\footnote{Today, ”Betriebsräte” are in decline. Empirical research suggests that “Betriebsräte” have lost their appeal for employees, as small company sizes permit direct negotiation with management, and that they are less likely to be established in owner-managed companies \cite{schnabel_betriebliche_2020}, which is often the case in start-ups.
} Co-determination came under attack during the Nazi era, when these institutions were targeted, broken up, and replaced with authoritarian governance structures to stifle resistance and opposition \cite{milert_zerschlagung_2014}. Consequently, re-instituting co-determination through fostering strong unions and worker councils was a central mechanism in the rebuilding of postwar Germany \cite{markovits_politics_2016}.
Today, practices of co-determination are also prominent in the German “Mittelstand”\footnote{The “Mittelstand”, small and medium-sized companies that form an economic unit, is a phenomenon specific to German-speaking countries. The “Mittelstand” is often characterized by family ownership and leadership \cite{berghoff_end_2006}. Whilst business owners hold control over company decisions and the company's trajectory, they equally carry the responsibilities that come with entrepreneurial ventures \cite{gantzel_wesen_1962}, engendering a pronounced emotional attachment and personal commitment to the company \cite{pahnke_german_2019}. This conservative mindset also materializes in strong social bonds between owners and employees and in company strategies guided by paternalism and a longing for multi-generational continuity \cite{berghoff_end_2006}. Traditionally, internal company processes and affairs are characterized by a “patriarchal culture and informality” that promote flat hierarchies between management and staff and establish cooperation via trust and a ‘give and take’ relation \cite{berghoff_end_2006}. These social ties also engrain a strong sense of responsibility for staff, which guides a desire for stability and a preference for steady company growth, maintained through financial autonomy and careful company strategies \cite{berghoff_end_2006}.} culture in which familial-like relations between business owners and employees inform a working environment of cooperation, trust, and worker participation \cite{berghoff_end_2006}. While worker organizing, representation, and co-determination are strong in many industries across Germany, specifically those in the manufacturing sector, employees in AI or tech start-ups often face precarity and tend not to be unionized \cite{dgb_pionierarbeit_2020, beitzer_new_2020}. This trend differs in other parts of the world, for example the U.S., 
which has seen a growing “tech worker movement” (see, for example \cite{tarnoff_making_2020}), or the Israeli tech industry, which has seen growing unionization \cite{dirksen_trade_2021}.
\subsubsection{AI Ethics as “Verantwortung”}
\hfill\break
{\bfseries AI ethics as individual responsibility, as standardized processes to ensure ethical conduct, and as corporate responsibility.}
Across almost all of our conversations, the topic of individual and collective responsibility played a central role in how participants constructed and enacted “AI ethics”. In the case of individual “Verantwortung”, one data scientist employed at an AI start-up explained that they trusted their team members to flag ethical concerns if they “sensed” issues. This approach was echoed by another research participant who described regular feedback meetings that served as the space where employees were expected to raise (ethical) issues and concerns they identified within the company.
Collective “Verantwortung” in AI start-ups very concretely materialized in working groups. Generally, these working groups emerged “bottom up”, with teams organizing informal gatherings to discuss ethical topics in relation to their organization or their professional practice, ranging from internal company processes and work replacement via automation to sustainability. In some start-ups, this was time employees spent on these issues on a voluntary basis, for example during lunch, or after work in a casual setting, such as the company kitchen. Here, employees would cook together and discuss ideas pertaining to product development, but also very concrete ethical concerns, such as the risk of injury with sensor detection for automated doors, which were then shared as talking and action points with the rest of the company via an internal newsletter. Other start-ups had their employees work on ethical issues on company time. For example, one start-up employee explained to us that they were part of a voluntary working group that was tasked with developing a comprehensive ethics strategy for the growing company. Employees were allowed to spend company hours on this work, which was organized in a formal way: some employees would conduct research, others would draft parts of the strategy, and findings were presented to the whole company (including senior management) in regular meetings. This research participant remarked that the employees who were part of this working group were representative of those populations most affected by potential AI harms, such as women or queer people. They stated that “it is precisely these groups of people [...] [who join the group] where I think a lot of problems exist [...], where a lot of effort needs to go, because they are not in the majority and questions appear.”\footnote{“[...] das sind halt genau diese Gruppierungen [...], wo ich halt finde, wo es noch ganz viele Probleme gibt. [...] 
wo halt einfach Arbeit geleistet werden muss, weil das nicht die Mehrheit ist und dort halt eben dann Fragen aufkommen.”}
The {\bfseries needs} that “Verantwortung” satisfied were different from those satisfied by “Mitbestimmung”. They responded to and revolved around ensuring cohesion across the workforce, and across hierarchies. It was assumed that social bonds would be strengthened through the notion of individual and collective “Verantwortung” (explicitly including issues pertaining to the environment), and that this would also serve as a conduit for more responsible conduct across and beyond the organization. Relatedly, “Verantwortung” also derived from the need to establish and maintain “order” by way of implicit rules and a culture of compliance, especially vis-à-vis existing European and German data protection regulation. At the same time, “Verantwortung” was seen as a marketing device, serving the need to signal reliability and responsibility to existing and potential clients, which was felt acutely in the context of AI as an emerging and largely poorly understood technology.
The {\bfseries narratives} that underpinned “Verantwortung” were:
\begin{itemize}
\item “We all have an individual responsibility to raise any ethical concerns we may have.”
\item “The individual responsibility to flag ethical concerns serves as necessary checks and balances for corporate conduct and decision making, and as ‘ethics insurance’.”
\item “Responsible AI means grounding AI development and application in research, and to be transparent about the technology that is being designed.”
\item “Signaling responsibility is key for success-oriented interaction with (potential) investors, clients, and employees, as well as regulators.”
\item “To ensure individual responsibility and ethical conduct, we formalize our responsibilities, tasks, and commitments with clear frameworks into an organizational order that is transparent to all.”
\item “Ethics means to fully comply with relevant regulation through standardized processes and responsibilities, particularly in the context of EU and German data protection regulation.”
\end{itemize}
The {\bfseries ethics materials} that substantiated “Verantwortung” were:
\begin{itemize}
\item Internal and external guidelines, such as mission statements, outlining values that ought to inform and steer responsible behavior
\item Internal and external (ethics) advisory boards
\item Internal chats focused on “ethics” and societal issues, as well as employee time set aside for strike action (specifically the weekly “Fridays for Future” climate strikes)\footnote{“Fridays for Future” is an international youth-led climate-strike movement that started in 2018. Strikes take place during compulsory attendance at school \cite{noauthor_fridays_nodate}.} or collaborative, employee-driven working groups and initiatives to work on and present ethics-focused topics, and/or to define mission, strategy, or principles, such as ethics principles pertaining to the organization at large or to product development
\item Memberships in ethics-focused working groups of industry associations (such as the German AI Start-Up Association) or other organizations (such as United Nations Global Impact), as well as local philanthropic support
\item Active engagement in academic research through conference participation or research collaborations with university researchers (incl. joint research grants), working with open source models
\item Compliance officers and/or clearly designated compliance responsibilities across different domains, particularly data protection regulation, but also sustainability, supply chain, HR, or industry-specific regulation (such as in manufacturing or the medical field)
\end{itemize}
The notion of individual and collective responsibility in the corporate context is deeply rooted in German culture \cite{berthoin_antal_rediscovering_2009, palazzo_us-american_2002}. On the one hand, it is tied to the leadership figure of the business owner, or entrepreneur (“Unternehmer”): to “do good” was, and often still is, a religiously motivated virtue among prominent German business leaders, and it includes treating workers well and, increasingly, a concern for environmental protection \cite{berghoff_end_2006, berthoin_antal_rediscovering_2009}. Owners of small and medium-sized firms are often locally involved in philanthropic efforts \cite{pahnke_german_2019, lehrer_germanys_2015}. The responsibility of the entrepreneur (“Unternehmerverantwortung”) is directly linked to the principle of the social market economy, installed in postwar Germany, which seeks to tame capitalism and capitalists by way of social policies and regulation to maintain a functioning welfare state \cite{bertelsmann_stiftung_normen_1994, habisch_overcoming_2005}. The responsibility of the worker, on the other hand, is more closely tied to the principle of civil society, which centers the idea of a “good political order” in which citizens possess civil liberties that are tied to active (democratic) participation, grounded in the willingness to inform oneself politically, to participate in elections, and to take up public offices \cite{deutscher_bundestag_bericht_2002}.\footnote{In a democratic society, said behavior cannot be enforced. Therefore, civil engagement in society becomes a “political virtue” which marks a “good citizen” \cite{deutscher_bundestag_bericht_2002} or a “good organization”. Being perceived as “good” in that way was an important market signal to the AI entrepreneurs we spoke to.}
The idea of ethics as “order” (“Ordnung”) in relation to responsibility also comes to matter in the context of internal protocols, routines, and standardized processes. Often, their purpose is to ensure a shared understanding of what constitutes responsibility and ethical procedures. The cultural genealogy of this “order” can be traced via the “Betriebsordnung”, which today is anchored in German labor law and regulates sociality\footnote{The German word used in this context is “Zusammenleben” which literally means “the living together”, but not just in the sense of cohabitation in a domestic setting, but generally in the sense of being a member of different communities: at home, at work, in the family unit, among friends, and so on.
} in corporate settings. The “Betriebsordnung” focuses on the “Ordnungsverhalten” (“order conduct”) of employees, which is all conduct that is not directly related to performing a job or related tasks (“Arbeitsverhalten”). Interpretations of this law \cite{betriebsraten_ordnung_nodate} govern, for example, whether or not employees must water plants, or if they are allowed to listen to the radio while at work, but also if they are allowed to accept gifts, drink alcohol, or use company equipment privately. As such, it formalizes what is considered morally acceptable within the social space of an organization and what, therefore, is the individual’s responsibility for compliance. Typically, it also defines levels of fines and punishment in case of non-compliance.
After having started the systematic dismantling of unions in 1933 as part of the total centralization of power, the Nazi regime introduced a new law, the “Arbeitsordnungsgesetz” (AOG), which gave employers total power to install a new “Betriebsordnung”. Today, the “Betriebsordnung” cannot solely be determined by the employer, but requires involvement and co-determination via the worker council (“Betriebsrat”) as per the “Mitbestimmungsgesetz” (see above). Start-ups do not necessarily have a “Betriebsordnung” that formalizes what constitutes ethical and responsible behavior, but the cultural legacy of (“Betriebs-”) “Ordnung” permeates through the strong notion of “Verantwortung”. For example, even though there is no formalization of employees having to voice their ethical concerns about a product, there is an expectation of it, so much so that it is considered a “checks and balances” system for keeping the whole organization ethical.
\section{DISCUSSION}
The “anatomy of AI ethics” that we presented is emergent from observations and analyses of the social practice of AI ethics. We now want to propose that creating their own “anatomy of AI ethics” can help actors capture continuities, rather than disruptions and breakages. This can facilitate technical and socio-technical innovation that connects to already existing ways of “doing ethics” and ascribing meaning to them, rather than breaking them. For example, the cultural genealogy element provides a pathway for understanding how “AI ethics” is not only not a new concern, but also that its operationalization is a continuation of already existing practices of “ethics” that are culturally specific.
It is important to note that we are not proposing a normative approach of AI ethics practice, nor a simple checklist. Rather, we are proposing a framework that allows actors to develop their own grounded theory.
Understanding existing strategies for putting ethics into action can facilitate the location of connection points to technical approaches and help actors - AI (start-up) managers, engineers, regulators, researchers, professional associations, and more - decide not only what technical and/or socio-technical innovation they need, but where and how to best integrate it into their organization against the backdrop of rapidly changing regulatory regimes. This approach echoes Rakova et al.’s \cite{rakova_where_2021} call for leveraging existing practices to help actors navigate the duality of algorithmic responsibility and organizational structure. We argue that it can be particularly useful for adapting to risk-based regulatory demands, such as the AI Act proposed by the European Commission \cite{european_commission_proposal_2021}, which follows and tasks “market surveillance authorities”\footnote{A “market surveillance authority” is “an authority designated by a Member State (...) as responsible for carrying out market surveillance in the territory of that Member State” \cite{european_commission_regulation_2019} which means monitoring product conformity and compliance with the existing EU health and safety requirements \cite{european_commission_product_2021}.} with compliance control and enforcement. These authorities must individually assess the risk tier of an AI application, and this assessment forms the basis for imposing different obligations onto AI companies, such as adequate risk assessment, documentation and traceability, human oversight, and more \cite{european_commission_regulatory_2021}.
To effectively and rapidly comply, and sustainably change the social practice of AI design and deployment vis-à-vis these requirements, we suggest actors use the AI ethics anatomy framework and do so by answering two sets of questions. We outline those below in a research and innovation assessment guide. The first set of questions allows them to map the anatomy of existing AI ethics practice/s. The second set of questions allows them to map and assess existing technical and socio-technical innovation as it is relevant to both the regulatory requirements and their own organizational practices. It prompts them to spell out the type and functionality of any given technical and/or socio-technical innovation (e.g. counterfactual explanations), as well as its aims (e.g. in terms of behavioral change, such as through increasing AI literacy among users who get enrolled into AI systems, or change in policy, such as through improved pathways to recourse). It also prompts them to spell out the connection to the principle/s, needs, narratives, ethics materials, and cultural genealogy previously identified by asking how any given technical and/or socio-technical innovation fits a principle, addresses an existing need, can be embedded into an existing narrative, can materialize as part of existing ethics materials, and connects to the cultural genealogy. To conduct this work, actors can fill-in the following table:
\\
\\
\noindent
\begin{tabular}{|p{2.5cm}|p{5.2cm}|}
\hline
\multicolumn{2}{|c|}{{\bfseries 1. Anatomy of Existing AI Ethics Practice/s}} \\
\hline
Principle/s & What is the existing actionable \hfill \break framework related to AI ethics, and what is its meaning?\\
\hline
Needs & What needs are fulfilled by this \hfill \break framework?
\\
\hline
Narratives & What narratives underpin the principle/s within a specific organizational context?
\\
\hline
Ethics Materials &
How does the principle concretely\hfill \break materialize?
\\
\hline
Cultural Genealogy & What are the culturally specific roots and distinct genealogies of practice that \hfill \break underpin the principle?
\\
\hline
\end{tabular}
\\
\\
\begin{tabular}{|p{2.5cm}|p{5cm}|}
\hline
\multicolumn{2}{|c|}{ {\bfseries 2. Technical and Socio-Technical Innovation}} \\
\hline
Type \hfill \nobreak and Functionality & \begin{itemize}
\item What is the technical / \hfill \break socio-technical innovation?
\item How does it work?
\item What is the aim of this technical / socio-technical innovation?
\vspace*{-12pt}\end{itemize} \\
\hline
Integration & \begin{itemize}
\item How does it fit the XXX \hfill \break principle?
\item How does it address the existing need of XXX?
\item How can it be embedded into the existing narrative of XXX?
\item How can it materialize as part of existing ethics materials?
\item How does it connect to the \hfill \break cultural genealogy?
\end{itemize} \\
\hline
\end{tabular}
\\
Following the research and innovation assessment guide and answering these two sets of questions will prompt actors to reflect on their own existing practices and more critically examine technical and socio-technical innovations that promise to foster fairness, accountability, and transparency vis-à-vis their potential for adoptability. This is because they will begin to understand technical and socio-technical innovation \textit{as practices}, too, which underscores the flexibility of the technical approaches themselves and their vast potential to flexibly be integrated into existing ecosystems of AI ethics practices to help respond to emerging regulatory obligations.
\section{Conclusion}
In the beginning of this paper, we suggested that there is a larger split in the field of AI ethics with rapid growth in works focused on computational interpretations of ethical concerns, normative frameworks, and socio-technical innovation that is outpacing the production of empirical research on the professional practices of AI design and AI ethics specifically. This dynamic produces an epistemological blind spot of the stabilization of social practice and the cultural context of AI and can perpetuate a top-down mentality of finding ways for addressing and mitigating AI harms - whether through behavioral change, or technological innovation \cite{sloane_inequality_2019} - slimming the chances of adoption.
In this paper, we set out to address this issue. We built on social practice theory and on empirical data from our study on the operationalization of “ethics” in German AI start-ups to address the need for a framework that connects technical and socio-technical innovation to qualitative ways of understanding “ethical AI” practices. We have argued that this can help researchers and practitioners more rapidly adapt to changing regulatory regimes pertaining to AI.
Before we make suggestions on future directions of work, we want to reflect on our approach, beyond the limitations that have been outlined in the methods section, and offer critical reflection on the conceptual approach proposed here. First, we want to acknowledge that centering social practice over individual behavior is by no means a silver bullet. We are not proposing to replace important approaches that already exist in the field, such as the notion of intersectionality \cite{crenshaw_mapping_1991, hammack_jr_intersectional_2018} or design inequality \cite{sloane_need_2019, costanza-chock_design_2020}, but to complement them with a view for initiating change from the bottom up, including in policy.
Second, we want to caution against a reading of the social practice theory deployed in the context of AI design, which de-centers the individual, as a de-centering of individual responsibility, or as a-political. Specifically, we want to underline that focusing on the stages of stability and flux AI practices find themselves in does not mean to absolve individual and powerful actors from their unethical behavior, which ranges from union busting \cite{streitfeld_how_2021}, to tax evasion \cite{neate_silicon_2021}, to harassment \cite{conger_uber_2019}. Social practice theory does not replace a focus on power and oppression, but asks us to examine the relationship between how both come to structure \textit{what we do and how we do it} on both a micro- and a macro level. Third, we want to acknowledge that our work, and the approach we propose here, is anchored in our own positionality, and in the culturally specific data that we used as a basis for our grounded theory work. The strong presence of “principle/s” that gave meaning to AI ethics practices, for example, could be read as a feature that is specific to German culture. Based on our grounded theory work and our data, we can neither confirm nor deny such an interpretation. We suggest that such questions underscore the significance of noting the cultural genealogy of practices, which serves as a strategy for critical positionality and culturally specific interpretation of data.
Against this backdrop, we recommend that future research focuses on three different aspects. First, we recommend increasing the production of qualitative empirical research on the professional practices of ethics and AI development across different cultural contexts, and specifically non-Western contexts, and domains of application. We anticipate that impactful innovation in AI fairness, accountability and transparency will have to be context-, culture-, and domain-specific. Second, we recommend refining the methodology and further detailing and strengthening the conceptual framework of social practice theory, especially vis-à-vis notions of politicality, power, intersectionality, and justice. And third, based on the two catalogues of questions, we recommend developing and testing a usable framework for practical implementation. We hope that this agenda will help close the gap between technical, normative, and socio-technical innovation on fairness, accountability, and transparency, and qualitative research on AI design practices and the impact of AI on social life.
\begin{acks}
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. We thank the reviewers for their helpful comments.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{section:introduction}
SuperKEKB~\cite{Ohnishi:2013fma,Akai:2018mbz} is an intersecting double storage ring particle accelerator with a circumference of 3.016 km that collides electrons and positrons to provide luminosity for the Belle II experiment~\cite{Abe:2010gxa}. SuperKEKB is the successor of KEKB~\cite{KEKBReport}. While KEKB achieved a maximum peak luminosity of $2.11\times10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$~\cite{Abe:2010gxa}, SuperKEKB will attempt to reach a peak luminosity of $8\times10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$~\cite{Ohnishi:2013fma}, about 40 times larger than its predecessor. Since the luminosity is strongly dependent on beam-optical parameters at the Interaction Point (IP), it is important to have direct measurements of such parameters. Such information is especially crucial for a nano-beam collider such as SuperKEKB, where there is strong sensitivity to small parameter changes, in order to maximize the luminosity extracted per bunch crossing. The Large Angle Beamstrahlung Monitor (LABM) is a device designed to measure the beamstrahlung emitted at the IP of an $e^{+}e^{-}$ collider. Beamstrahlung is the radiation emitted by two beams of charged particles due to their electromagnetic interaction~\cite{Augustin:1978ah}. Beamstrahlung is directly related to the size and configuration of the beams, and provides direct information on the beams at the IP. Specifically, vertically unequal beams create an excess of y-polarized light as seen by the telescope observing the fatter beam (conversely, y-polarized light from the smaller beam will decrease).
\par
The latest version of the LABM is installed around the IP of SuperKEKB. The LABM at SuperKEKB measures 32 independent values, with different optical properties, that are directly related to the size and position of the beams. In this framework, the LABM can be extremely useful to monitor the beams and correct them in case they show an unwanted behavior that can cause luminosity degradation. One of the challenges of the LABM is to relate these 32 measurements to observables of interest. This can be done on theoretical grounds, by constructing a variable that is a function of all or some of the 32 measurements, by traditional fitting methods, e.g., a linear regression, or using machine learning techniques, e.g. a neural network. In this paper, we will present and compare results from linear regression and neural network models. This will be an experimental validation for both the LABM and, more generally, for machine learning models applied to the first particle accelerator using the nano-beam scheme. The average beam parameters at the SuperKEKB IP for the data used in this paper are given in Table~\ref{table:1}.
\begin{table}[ht]
\centering
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$ Beam $ & LER ($e^{+}$) & HER ($e^{-}$) \\ \hline
$ L(cm^{-2} s^{-1}) $ & \multicolumn{2}{c|}{2.8 $\times10^{34}$} \\ \hline
$ E(GeV) $ & 4 & 7 \\ \hline
$ N (10^{10}) $ & 3.8 & 3.1 \\ \hline
$\beta^{*}_{x}(m)$ & 0.08 & 0.06 \\ \hline
$\beta^{*}_{y}(m)$ & 0.001 & 0.001 \\ \hline
$\varepsilon_{x}(nm)$ & 3.5 & 4.7 \\ \hline
$\varepsilon_{y}(pm)$ & 46.4 & 37.7 \\ \hline
$ \sigma^{*}_{x} (\mu m)$ & 16.7 & 16.7 \\ \hline
$\sigma^{*}_{y} (nm)$ & 214.8 & 192.7 \\ \hline
$\sigma^{*}_{z} (mm)$ & 6 & 5 \\ \hline
\end{tabular}
\end{center}
\caption{SuperKEKB average beam parameters at the IP for the data used in this paper. The Low Energy Ring (LER) is the positron ring; the High Energy Ring (HER) is the electron ring. L is the luminosity, E is the energy of the beam, N the number of particles per bunch, $\beta^{*}$ the $\beta$ function at the IP, $\varepsilon$ the emittance, and $\sigma^{*}$ the size of the beam at the IP.}
\label{table:1}
\end{table}
\par
This article is organized as follows. In Section~\ref{section:labm}, we describe the LABM as it was installed at SuperKEKB. In Section~\ref{section:datas}, we describe how the data for this work were selected. In Section~\ref{section:nn_model}, we give an overview of the machine learning model used in this work, which is a deep Neural Network. In Section~\ref{section:beam_parameters}, we present the results of our Neural Network model and compare them with a traditional Linear Regression. In Section~\ref{section:discussion}, we discuss the results in detail and provide some comments. Finally, in Section~\ref{section:conclusion}, we summarize the results presented in the paper.
\section{LABM at SuperKEKB}
\label{section:labm}
The first experimental observation of beamstrahlung took place at the Stanford Linear Collider (SLC)~\cite{PhysRevLett.62.2381}, colliding $e^{+}$ and $e^{-}$. The properties, polarization and spectrum, of the beamstrahlung are directly related to the beam parameters, and the analytical relations can be found in the literature~\cite{Bassetti:1983ym,DiCarlo:2017hqj}. The LABM at SuperKEKB extracts visible beamstrahlung at about 5 meters downstream of the IP. The light is extracted by using vacuum mirrors inside the accelerator's vacuum chamber, as shown in Figure~\ref{fig:labm} (a), with the light then going through glass windows. The light is then driven through an optical channel with a series of mirrors and reaches an optical box after several meters, see Figure~\ref{fig:labm} (b, c). The mirrors are located inside a series of aluminum pipes connected to each other at 90 degrees. Since the beamstrahlung is emitted by both the electron and positron beams, the same apparatus is installed in both the electron ring, or High Energy Ring (HER), and the positron ring, or Low Energy Ring (LER). In each ring, there are two vacuum mirrors, located downstream of the IP, at the top and at the bottom of the vacuum chamber. This provides sensitivity to vertical asymmetries in the collision of the two beams. Since there are two vacuum mirrors on each ring, we have 4 optical channels, each one producing 8 PMT measurements. Optical boxes, containing all the optical elements necessary for the measurements, are located outside the radiation area in order to minimize interference with the electronics. Each box is organized in two sides, each side accommodating 8 PMTs that serve one optical channel. Figure~\ref{fig:labm} (d) shows one side of one optical box, containing all the optical elements needed and the 8 PMTs located on the rear side of the box. There are 2 boxes and each box has two sides, therefore providing a total of 32 PMT measurements.
Each PMT receives light with horizontal or vertical polarization and in a different spectral region within the range from about 390 nm to about 650 nm. Therefore, while there is some level of redundancy, in reality each PMT carries new information on the beam properties.
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm,valign=c]{vacuum_mirrors_mod.jpg} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm,valign=c]{labm_pipes.jpg} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm,valign=c]{labm_pipes_out.jpg} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm,valign=c]{optical_box.jpg} }}%
\caption{(a) Vacuum mirrors and extraction windows used to extract the light inside the vacuum chamber. (b) The light extracted from the vacuum chamber travels through a LABM optical channel. (c) The light is driven outside the radiation area to a shielded area underneath through a manhole. (d) One side of an optical box used to measure the beamstrahlung. The elements contained in the box are: Wollaston prism (1), gratings (2), mirrors (3), lenses (4), a conveyor belt (5), photo-multipliers (6), and electronics (7).}%
\label{fig:labm}%
\end{figure}
\section{Data Selection}
\label{section:datas}
In this first paper, we analyze only the data from the two electron telescopes, and therefore 16 phototubes. These two are located at the top in the daisy chain of mirror motors. Regrettably, the use of a single data bus for all motors occasionally leads to interference that affects the higher-numbered motors (those of the positron telescopes). Once this happens, the device can be reset only by access to the Interaction Region (IR). Still, with the positron telescopes not pointed at the IP we were able to cross-check that there was no sensitivity on that side. We also pointed one of the telescopes in use to a feature that was clearly a reflection, with the same results.
Moreover, the use of opposite telescopes (one located at the top of the vacuum chamber, and one at the bottom) permits an automatic correction for the varying beam orbits, which affect both signal (beamstrahlung) and background (synchrotron light from dipoles and quadrupoles). Specifically, in the case of beamstrahlung, a beam entering the IP with a small vertical angle $\delta$ will increase the rate of one telescope by a quantity of order $\delta/\theta$, and decrease the rate in the opposite telescope by $-\delta/\theta$, where $\theta$ is the angle of observation. This small correction is less than or of order 1\% at SuperKEKB, and efficiently dealt with by the neural network. The neural network can also reproduce small effects such as photomultiplier saturation (a 1-3\% effect in these data), and effectively finds beamstrahlung, including spectral effects (when beams cross at an angle, the spectrum at our observation angle, about 8 mrad, differs for x- and y-polarized light, see Ref.~\cite{DiCarlo:2017hqj}).
In order to analyze beamstrahlung, there needs to be certainty that the telescopes are pointed at the Interaction Point (IP). Fig.~\ref{fig:beampip} shows the vertical geometry for the electron telescope located below the vacuum chamber. The second mirror in the telescope can be oriented by two stepper motors to scan the field of view looking for light spots.
%
\begin{figure}%
\centering
\includegraphics[width=\textwidth]{herpipenew.pdf}%
\caption{Vacuum chamber, or Beam Pipe (BP), vertical profile at the SuperKEKB Interaction Point. Top: HER vertical vacuum chamber profile. Bottom: LER vertical vacuum chamber profile. The IP and the vacuum mirrors are shown to the right. The thick bars to the left are the locations and lengths of the last dipoles in the beam line.}%
\label{fig:beampip}%
\end{figure}
From Fig.~\ref{fig:beampip} the features of the light spot can be expected to be in the form of a lentil, with an angle equivalent to 1500 steps in the horizontal direction and 1500 in the vertical direction, or 0.7 mrad horizontal by 0.3 mrad vertical. The data presented here are taken with two 2 mm collimators in each telescope, separated by about 10 meters, providing a triangular acceptance with base 0.4 mrad in each direction.
Often telescopes, in this case the Down telescope, provide multiple spots of light. Past experience indicates that the IP spot must have the correct dimensions (specified above) and is generally more y-polarized (due to the presence of beamstrahlung; reflected synchrotron light is often strongly x-polarized), while reflections generally produce more elongated features than a proper spot. Figs.~\ref{fig:scans} show the angular scans for two PMTs for each telescope. We generally select the geometrical center of the spot as the IP.
\begin{figure}%
\centering
\includegraphics[width=\textwidth]{ou.png} %
\qquad
\includegraphics[width=\textwidth]{od.png} %
\caption{Fine angular scans near the detected Interaction Point (IP). Top: electron (HER) telescope up. Bottom: electron (HER) telescope down. The IP is generally
located very close to the maximum intensity spot.}%
\label{fig:scans}%
\end{figure}
Although we have done much data taking in scanning mode in the past (in the simplest case, by continuously measuring the presumed IP, and then two points at each horizontal side), looking to have ``side band'' measurements to measure signal and background independently, for the data presented here the motors were left at the IP without moving for 11 days. Subsequently, we took data at a spot considered fake in the down telescope, finding no correlation with beam parameters.
Data used were subject to minimal cuts. We selected only Physics data, since any tuning can easily make our spots disappear. All data with currents above 100 mA were considered for analysis. Fig.~\ref{fig:variation} shows that during data taking the transverse sizes of both beams varied considerably, with little correlation between any pair of parameters, resulting in good parameter-space coverage for our analysis.
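As a concrete illustration, the minimal cuts described above reduce to a simple boolean mask. The sketch below is ours: the variable names are placeholders rather than the actual SuperKEKB archive channels, and we assume the 100 mA threshold applies to both beam currents.

```python
import numpy as np

def select_minimal_cuts(i_ler_mA, i_her_mA, is_physics_run):
    """Keep only Physics-mode samples with both beam currents above 100 mA."""
    keep = (
        (np.asarray(i_ler_mA, dtype=float) > 100.0)
        & (np.asarray(i_her_mA, dtype=float) > 100.0)
        & np.asarray(is_physics_run, dtype=bool)
    )
    return keep
```

In practice the same mask would be applied to the 16 PMT readings and to the target beam parameters before any model training.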
%
\begin{figure}%
\centering
\includegraphics[width=\textwidth]{scatterplot_sigma.png}%
\caption{Distributions of calculated beam parameters at the IP used in this paper, based on XRM \cite{Mulyani:2019gsy} beam size measurements at other locations in SuperKEKB. The plots along the diagonal represent the distributions for each parameter. The off-diagonal scatter plots show the distribution of any two pair of parameters. First
line and first column: electron $\sigma_x$; second line and second column: electron $\sigma_y$; third line and third column: positron $\sigma_x$; fourth line and fourth column: positron $\sigma_y$.}%
\label{fig:variation}%
\end{figure}
\section{The Neural Network model}
\label{section:nn_model}
The model used in this work is a fully connected deep Neural Network (NN) with the following architecture in terms of neurons: 16-64-128-64-32-1. The model therefore consists of 19777 trainable parameters. The first layer consists of the 16 neurons of the input, i.e. the 16 PMT input values from the LABM, and the output layer consists of 1 neuron because we want to use our NN to perform a regression on one of the beam parameters. The hidden-layer architecture was determined after several trial-and-error attempts in order to optimize the result of the regression. The model was implemented using Keras~\cite{chollet2015keras}, a high-level abstraction library that works on top of the low-level TensorFlow~\cite{tensorflow2015-whitepaper} compute engine. The data set used consists of about 150000 LABM and SuperKEKB measurements, recorded every five seconds over 11 days, which are split into about 96000 points for training, 24000 for validation, and 30000 for testing of the model. These measurements are randomly shuffled before being used, meaning that there is no time correlation between successive points in the data set. The about 30000 measurements reserved for testing take no part in the model learning process, allowing for an unbiased benchmark, and will be used to present the results in the next section, as they effectively represent new measurements with respect to the model.
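As a cross-check of the quoted parameter count, the number of trainable parameters of this fully connected architecture can be computed layer by layer (weights plus biases). The minimal sketch below is ours; the hidden-layer activation functions are not specified in the text and are therefore an assumption.

```python
# Layer widths of the LABM regression network: 16 PMT inputs -> 1 output.
LAYER_SIZES = [16, 64, 128, 64, 32, 1]

def dense_param_count(sizes):
    """Trainable parameters of a fully connected net: (n_in * n_out + n_out) per layer."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

print(dense_param_count(LAYER_SIZES))  # 19777, matching the count quoted above

# A Keras equivalent (activation choice is our assumption) would look roughly like:
#   model = tf.keras.Sequential(
#       [tf.keras.Input(shape=(16,))]
#       + [tf.keras.layers.Dense(n, activation="relu") for n in LAYER_SIZES[1:-1]]
#       + [tf.keras.layers.Dense(LAYER_SIZES[-1])]
#   )
```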
\section{Reproduction of beam parameters}
\label{section:beam_parameters}
In this section we show the results of the NN model, and we will also compare them with those provided by a traditional Linear Regression (LR):
\begin{equation} \label{eq:linear_regression}
y=\beta_{0}+\beta_{1}x_{1}+...+\beta_{16}x_{16}
\end{equation}
where the 16 independent variables $x_{i}$ correspond to the 16 PMT values provided by the current data selection. The data used in this paper were collected parasitically during a physics run for the Belle II experiment. In this sense, the beam parameters tend to be quite stable, but there are still significant changes that we can observe in the experimental data and try to reproduce with the predictions provided by our models. The goal of this study is to show that the variation of the beam parameters can be reproduced by a NN model with the LABM measurements as input. In the next subsections, we will compare experimental data and predictions by sorting them from smallest to largest values with respect to the experimental data. This kind of sorted plot is sometimes called a lift chart, and it is useful to evaluate the quality of a regression. Besides the visual evaluation, the value of the Mean Absolute Error (MAE) for each set of predictions is calculated. The relative MAE, defined as
\begin{equation} \label{eq:mae}
\frac{1}{N}\sum_{i=1}^{N}\frac{|y_{i}-y_{i,pred}|}{y_{i}}
\end{equation}
where N is the number of measurements, $y_{i}$ is the measured value, and $y_{i,pred}$ the one predicted by the model, is indicated in the legend of each prediction plot as a percentage error.
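For reference, the linear regression of Eq.~(\ref{eq:linear_regression}) can be fit by ordinary least squares and scored with the relative MAE of Eq.~(\ref{eq:mae}). The snippet below is a self-contained sketch on synthetic stand-in data, not the actual LABM analysis pipeline.

```python
import numpy as np

def relative_mae(y, y_pred):
    """Relative MAE as defined in the text: mean over i of |y_i - y_pred_i| / y_i."""
    y = np.asarray(y, dtype=float)
    return float(np.mean(np.abs(y - np.asarray(y_pred, dtype=float)) / y))

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 2.0, size=(1000, 16))       # stand-in for 16 PMT readings
beta_true = rng.uniform(0.1, 1.0, size=17)       # beta_0 ... beta_16, kept positive
y = beta_true[0] + X @ beta_true[1:]             # noiseless synthetic target

A = np.hstack([np.ones((len(X), 1)), X])         # design matrix with intercept term
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None) # least-squares fit of the LR model
err = relative_mae(y, A @ beta_hat)              # essentially zero on noiseless data
```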
\subsection{Specific Luminosity}
\label{subsection:spec_lumi}
Although luminosity is not a beam parameter, it is closely related to the beam parameters at the IP and it constitutes, together with the energy, one of the two figures of merit of a collider. Therefore, it will be the first experimental measurement that we will try to reproduce with our models. The absolute luminosity at SuperKEKB is measured by the Electromagnetic Calorimeter (ECL) monitor~\cite{Belle-ECL:2015vma}, located in the Belle II detector. The ECL measures Bhabha events and, following calibration, provides an absolute value for the luminosity. However, the luminosity depends on the currents and on the number of bunches present in the beam, and in our models we only want to use the 16 measurements from the electron side of the LABM as input. Therefore, for our purposes, we will use the specific luminosity. The specific luminosity for collinear Gaussian beams is defined as:
\begin{equation} \label{eq:spec_luminosity}
L_{sp} = \frac{f_{0}}{2\,\pi\,\Sigma_{x}\,\Sigma_{y}}
\end{equation}
where $f_{0}$ (0.1 MHz for SuperKEKB) is the single-bunch revolution frequency, and $\Sigma_{i}$ (i=x,y) are the convoluted beam sizes, corresponding to the quadrature sum of the two beam sizes at the IP: $\Sigma_{i}^{2}=(\sigma_{i,1}^{*})^{2}+(\sigma_{i,2}^{*})^{2}$. It is obtained from the regular luminosity formula by dividing by the factor $N_{b}\,N_{1}\,N_{2}$, i.e., the number of bunches times the product of the numbers of particles per bunch of the two beams. In this way, the specific luminosity is independent of the beam currents and of the number of bunches present in the rings. Figure~\ref{fig:spec_luma} shows the specific luminosity as predicted by LR (a) and NN (b). From the comparison, we see that while the LR model reproduces the average value fairly well, it fails to predict the changes in specific luminosity at the low and high ends of the plot. On the other hand, the NN is able to better predict and follow these changes.
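Plugging the Table~\ref{table:1} beam sizes into Eq.~(\ref{eq:spec_luminosity}) gives the order of magnitude of the specific luminosity for collinear Gaussian beams; the numbers below are our reading of the table, and this idealized formula does not account for the large crossing angle of the nano-beam scheme.

```python
import math

f0 = 1.0e5                              # single-bunch revolution frequency, Hz
sigma_x = (16.7e-4, 16.7e-4)            # IP horizontal sizes of the two beams, cm
sigma_y = (214.8e-7, 192.7e-7)          # IP vertical sizes of the two beams, cm

Sigma_x = math.hypot(*sigma_x)          # convoluted (quadrature-summed) beam sizes
Sigma_y = math.hypot(*sigma_y)

# Specific luminosity, cm^-2 s^-1, per N_b * N_1 * N_2; ~2.3e11 for these inputs.
L_sp = f0 / (2.0 * math.pi * Sigma_x * Sigma_y)
```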
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_spec_lum_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_spec_lum_nn.png} }}%
\caption{(a) Specific Luminosity data and prediction with Linear Regression. (b) Specific Luminosity data and prediction with Neural Network.}%
\label{fig:spec_luma}%
\end{figure}
Simple geometrical considerations show that in the SuperKEKB beam crossing situation the specific luminosity should closely track the variable
\begin{equation} \label{eq:sigma_eff}
\sigma_{y,eff}=\frac{\sigma_{y,1}^{*}\sigma_{y,2}^{*}}{\Sigma_{y}}
\end{equation}
where $\sigma_{y,1}^{*}$ and $\sigma_{y,2}^{*}$ are the beam heights of the two beams at the IP and $\Sigma_{y}=\sqrt{(\sigma_{y,1}^{*})^{2}+(\sigma_{y,2}^{*})^{2}}$. In fact, SuperKEKB adopted the nano-beam scheme with a large crossing angle (83 mrad), and in such a configuration the luminosity is approximately independent of the horizontal beam sizes. The tracking of this variable by the NN is excellent, as shown in Figure~\ref{fig:sigmaeff}.
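Eq.~\eqref{eq:sigma_eff} is straightforward to evaluate from the two IP beam heights (a minimal sketch; the function name is illustrative):

```python
import math

def sigma_y_eff(sy1, sy2):
    """Effective vertical size, Eq. (sigma_eff): sy1*sy2 / sqrt(sy1^2 + sy2^2).

    For equal beams this reduces to sy / sqrt(2); for very unequal beams it is
    dominated by the smaller of the two heights.
    """
    return sy1 * sy2 / math.hypot(sy1, sy2)
```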
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_eff_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_eff_nn.png} }}%
\caption{(a) $\sigma_{y,eff}$ data and prediction with Linear Regression. (b) $\sigma_{y,eff}$ data and prediction with Neural Network.}%
\label{fig:sigmaeff}%
\end{figure}
\subsection{Vertical beam size}
\label{subsection:beam_size}
The beam size at SuperKEKB is measured by the X-ray monitor (XRM)~\cite{Mulyani:2019gsy}. The XRMs are installed in both the HER $(e^{-})$ and LER $(e^{+})$ rings, at about 650 and 1400 meters from the IP, respectively. Using the Twiss parameters at their location, it is possible to obtain an estimate of the emittance, and through the optical transfer matrices it is possible to estimate the beam size at the IP, which is the quantity we are interested in. Figure~\ref{fig:sigmayl} shows data and prediction of SIGMAY at the IP for the LER ring. In this case the predictive power of LR and NN is similar, although the NN is able to better reproduce the variation in beam size at the highest values.
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_ler_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_ler_nn.png} }}%
\caption{(a) SIGMAY at IP for the LER as predicted with Linear Regression. (b) SIGMAY at IP for the LER as predicted with a Neural Network.}%
\label{fig:sigmayl}%
\end{figure}
In the case of SIGMAY at IP for the HER beam, we see a much better predictive power for the NN. Figure~\ref{fig:sigmayh} shows that the NN prediction is much less noisy and the MAE is 6.6\% for the LR and 3.4\% for the NN.
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_her_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_her_nn.png} }}%
\caption{(a) SIGMAY at IP for the HER as predicted with Linear Regression. (b) SIGMAY at IP for the HER as predicted with a Neural Network.}%
\label{fig:sigmayh}%
\end{figure}
Finally, we are interested in the LER/HER ratio of SIGMAY at the IP. In fact, KEKB had a vertical beam size for the LER that was consistently larger than the corresponding one for the HER. This corresponds to one beam being unfocused, causing significant luminosity degradation, an effect that we want to prevent at SuperKEKB. Figure~\ref{fig:sigmayr} shows the SIGMAY LER/HER ratio; here too the NN model predicts the experimental data much better than the LR, which is very noisy, with an MAE of 9.0\% for the LR and 4.8\% for the NN.
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_ratio_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_y_ratio_nn.png} }}%
\caption{(a) Ratio SIGMA Y LER/HER data and prediction with Linear Regression. (b) Ratio SIGMA Y LER/HER data and prediction with Neural Network.}%
\label{fig:sigmayr}%
\end{figure}
We did not focus on the horizontal beam sizes since they are very stable and of little interest with respect to SuperKEKB's luminosity, as discussed above in Section~\ref{subsection:spec_lumi}. However, for completeness the plots related to the horizontal beam widths and their ratio are shown in Figs.~\ref{fig:sigmaxl} to~\ref{fig:sigmaxr}. We note the good accuracy in predicting the horizontal sizes and their ratio, also with errors at the percent level.
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_ler_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_ler_nn.png} }}%
\caption{(a) SIGMAX at IP for the LER as predicted with Linear Regression. (b) SIGMAX at IP for the LER as predicted with a Neural Network.}%
\label{fig:sigmaxl}%
\end{figure}
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_her_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_her_nn.png} }}%
\caption{(a) SIGMAX at IP for the HER as predicted with Linear Regression. (b) SIGMAX at IP for the HER as predicted with a Neural Network.}%
\label{fig:sigmaxh}%
\end{figure}
\begin{figure}%
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_ratio_lr.png} }}%
\qquad
\subfloat[]{{\includegraphics[width=5.5cm]{oho_sigma_x_ratio_nn.png} }}%
\caption{(a) Ratio SIGMA X LER/HER data and prediction with Linear Regression. (b) Ratio SIGMA X LER/HER data and prediction with Neural Network.}%
\label{fig:sigmaxr}%
\end{figure}
\section{Discussion}
\label{section:discussion}
In order to understand the results of this paper, it is worth summarizing what a single-side observation can and cannot do. The technique was first proposed as a two-side observation~\cite{Bonvicini:1999rg}, where the two-side analysis would reconstruct the beam parameters. One of the main results of this paper is that single-side measurements have good sensitivity to all four parameters. A two-side measurement next year will then be able to further reduce the error, while also providing auxiliary measurements of the beam tails. With a two-side observation, the LABM determination of the beam parameters will become truly independent of other sensors. In this paper we limit ourselves to correlating LABM parameters with another, existing sensor.
The beams are very flat (meaning that $\sigma_x\gg \sigma_y$), and the NN did not know our signal rate calculations with the parameters of Table~1.
In such a situation, the ratio of beam heights can and should be measured well, but there is minimal sensitivity to each individual beam height. Nevertheless, it is clear from Figs.~\ref{fig:sigmayh} and~\ref{fig:sigmayr} that the NN is able to predict well also the height of the electron beam (and, having the ratio and one size, the other size is also derived, Fig.~\ref{fig:sigmayl}). This necessarily entails the presence of another source of radiation which depends on the radiating beam height, which we identify as the light emitted by the vertical beam tails in the final quadrupoles. We arrive at this conclusion by exclusion (dipole sources do not depend on the beam height).
It appears that the NN is able to disentangle the beamstrahlung and quadrupole radiation, a task that is currently beyond our abilities using standard analysis techniques.
The size of the electromagnetic fields of a beam scales like $1/(\sigma_x\sigma_z)$ for flat beams. A positron beam with a smaller horizontal size will make the electron beam radiate more, and vice versa. Therefore, without knowing the normalization, only the ratio of the beams' horizontal sizes should be available through a beamstrahlung measurement. However, the same radiation from the final quadrupoles is present and can be used by the NN to provide an independent measurement.
Note that the hypothesized quadrupole radiation is not necessarily better or worse at measuring the horizontal size compared to the vertical size.
The horizontal size is much larger than the vertical size, leading to a larger average displacement of particles from the center of the quadrupole, and therefore there will generally be more x-polarized radiation.
However, the vacuum mirrors are located vertically from the beam line axis, and particles bent vertically will illuminate them at a smaller (or larger) angle compared to particles bent horizontally. From the generally observed rates (with ratios between x-polarized and y-polarized radiation of order 2), it appears that this enhancement makes the two polarized observed rates comparable.
It is noted that the NN provided measurements of beam parameters at the few percent level, which is crucial for the viability of the device as a beam monitor. Table~\ref{table:2} summarizes the relative MAE on the LR and NN predictions presented in the previous section, representing the main result of this work.
\begin{table}[ht]
\centering
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$ Model $ & LR & NN \\ \hline
$L_{sp} $ & 4.2\% & 3.5\% \\ \hline
$\sigma_{y,eff} $ & 3.7\% & 2.2\% \\ \hline
$\sigma_{x,LER}$ & 1.0\% & 0.7\% \\ \hline
$\sigma_{x,HER}$ & 1.0\% & 0.9\% \\ \hline
$\sigma_{x,LER}/\sigma_{x,HER}$ & 1.4\% & 1.1\% \\ \hline
$\sigma_{y,LER}$ & 4.5\% & 3.7\% \\ \hline
$\sigma_{y,HER}$ & 6.6\% & 3.4\% \\ \hline
$\sigma_{y,LER}/\sigma_{y,HER}$ & 9.0\% & 4.8\%\\ \hline
\end{tabular}
\end{center}
\caption{Summary of relative MAE for LR and NN models for the results presented in this paper. The NN performs consistently better than the LR with errors at a few percent level.}
\label{table:2}
\end{table}
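The relative MAE figures in Table~\ref{table:2} correspond to a computation of the following type (a sketch under the assumption that the relative MAE is the mean absolute prediction error normalized by the mean of the measured values; the exact normalization convention is an assumption here):

```python
def relative_mae(measured, predicted):
    """Mean absolute error between data and model prediction, normalized to
    the mean measured value (assumed convention); multiply by 100 for percent."""
    assert len(measured) == len(predicted) and len(measured) > 0
    n = len(measured)
    mae = sum(abs(m - p) for m, p in zip(measured, predicted)) / n
    return mae / (sum(measured) / n)
```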
We also note that the main current NN limitation is due to scarce statistics at the edges of the measured distributions, i.e., in the regions of low or high luminosity and low or high beam sizes. We stress that improvement is expected over time, as the sparsely populated edges of the parameter space in the measured data set become more populated, thereby improving the NN training and, consequently, the quality of the predictions in those regions.
\section{Conclusion}
\label{section:conclusion}
This study had two main purposes: (1) to show that the LABM measurements of the beamstrahlung signal are of excellent quality and are correlated with key beam parameters; (2) to show that machine learning, in this case in the form of a Neural Network model, can be useful in modern particle accelerators with extremely small beams and large sensitivities to beam parameters. In Section~\ref{section:beam_parameters} we showed that the NN model was excellent at predicting specific luminosity and beam sizes for unseen data, using as input only 16 of the 32 PMT values from the LABM. The data were collected parasitically during a Physics Run, and therefore large variations in the parameters are not to be expected. However, the NN model was especially good at reproducing the modest changes, always performing better, and often much better, than the LR model. The NN models, one for each of the parameters reproduced, can in principle be deployed online to provide an estimate of the given parameters by taking input from the 16 PMT values. This could also be useful when one or more of the other instruments are offline, as in this case the LABM can provide a temporary replacement value. The ability to predict specific luminosity and beam size from the LABM measurements alone was at the same time an independent validation of the LABM quality itself. The next step for the LABM would be to introduce functions of some or all of the 32 PMT values, based on theoretical grounds or machine learning techniques, to provide new and original information about the beam parameters at the IP. Finally, this study shows that machine learning techniques can be useful in modern accelerators. SuperKEKB's instrumentation provides hundreds of variables other than the LABM's that could in principle be used in larger correlation studies, significantly increasing the predictive power.
\par
This study has shown the application of machine learning techniques to the effort of reproducing key beam parameters at SuperKEKB. The Neural Network model used had a significantly larger predictive power compared to the classical Linear Regression. The model used only the LABM measurements as input, and was able to predict the specific luminosity and the vertical beam sizes. This constituted on one hand a validation of the LABM itself, and on the other hand a further validation of the use of machine learning techniques in accelerator physics.
\section*{Acknowledgments}
We thank David Cinabro for useful discussions.
This work was supported by the Frontiers of Science Program Contracts
No. FOINS-296, No. CB-221329, No. CB-236394,
No. CB-254409, No CB-A1-S-33202 and No. CB-180023, Profapi PRO-A1-018, SEP-CINVESTAV research Grant No. 237, University of Tabuk, KSA, research grant S-0265-1438, and by the U.S. Department of Energy, Office of Science, through grant SC-007983 and the US-Japan Science and Technology
Cooperation Program. We thank the beam instrumentation, commissioning, and vacuum groups of the SuperKEKB accelerator.
\section{Introduction}
The understanding of correlated matter strongly coupled to quantum light has been an intense area of research both theoretically and experimentally in the last few years. Hybrid photonic technologies for control of complex systems have been constantly improving, now acting as cornerstones for quantum simulations in cutting-edge platforms such as optical lattices. Namely, trapped ions are subjected to high control by laser beams allowing the manipulation of the main system parameters~\cite{ExtendedBH, Lewis_SwanDM, REHubbard&Spin,BrydgesRERandomized,Camacho_Guardian_2017OpticalLattices}. Strong light-matter couplings have been generated in superfluid and Bose-Einstein gases embedded in cavities now available to study systems with exquisitely tailored properties~\cite{ExpFERMBOSONS,ExpDickeSuperfluid,ExpDickeSuperfluidPRL,L_onard_2017BECCAvity,roux2020nat}. Furthermore, the analysis of light-controlled condensed matter systems has led to predictions of a rich variety of phenomena, including the enhancement of electron-photon superconductivity by cavity mediated fields~\cite{kiffner2019prb,thomasEFSC,curtis2019prl,FrankEPSC,GaoPEP}. Experimentally, new physical features as well as control opportunities in the ultrastrong and deep-strong-coupling regimes, where coupling strengths are comparable to or larger than subsystem energies, have been observed recently using circuit quantum electrodynamics microwave cavities~\cite{FornRMP2019,FriskNPR2019}.\\
\\
Motivated by these remarkable advances, we are encouraged to establish new feasible hybrid cavity scenarios for the detection and control of nonlocal correlated features in solid-state setups such as topological materials~\cite{DartiailhMajoranaPairs,frank2019prl,Nie_2020Superatom}. A great deal of attention has been recently devoted to assessing nonlocal Majorana fermion quasiparticles in chains with strong spin-orbit coupling disposed over an {\it s}-wave superconductor~\cite{Mourik1003,ExpMajorana,Exp2016,Exp2018}. Majorana fermions, as topological quasiparticles in solid-state environments, have been widely searched for due to their unconventional properties against local decoherence, and hence for possible technological solutions to fault-tolerant quantum computing protocols~\cite{FernandoTwoTimeCorrelations,DynamicalDelocalizationMajorana,EntanglementInManyBody,AasenMajoranaQuantumComputing}.\\
\\
Since the seminal work by Kitaev~\cite{Kitaev} where a one-dimensional spinless fermion chain was shown to feature Majorana physics, topological properties of hybrid semiconductor-superconductor systems~\cite{Mourik1003,ExpMajorana,Exp2016,Exp2018} have been explored looking for the presence of the so-called zero energy modes (ZEM), corresponding to quasiparticles localized at the boundaries of the chain. The fact that these quasiparticles have zero energy makes them potential candidates for the implementation of non-Abelian gate operations within two dimensional (2D) arrangements~\cite{majoranareturns,majorana,AnyonsBurton,SurvivalMajorana,ProgramableMajoranas}.
However, some open questions still remain about the experimental occurrence of these modes since the reported phenomena observed in those experiments could be caused by a variety of alternative competing effects~\cite{andreevKim}. Therefore, new experimental frames are highly desirable to find unambiguous signs of such quasiparticles.\\
\\
An important question in this context is whether the topological phase transition of Majorana polaritons, for instance in a fermion chain embedded in a microwave cavity~\cite{MirceaTrifHamiltonian,trif2019prl,TrifMajoranasSpinOrbit}, can be detected by accessing observables such as the mean number of photons, the field quadratures or the cavity Fano factor (FF). In this paper, we report on an information-theoretic approach based on the analysis of the R\'enyi entropy ($S_{\rm R}$) of order two between the light and matter subsystems, connecting its singular behavior, resulting from the topological transition, with the FF. Consequently, we show a path to characterize the bipartite entanglement of the light-matter system and how to use it as a witness to identify quantum phase transitions. Additionally, we show that in a wide coupling-parameter regime the cavity state is faithfully represented by a Gaussian-state (GS). Within this description, measurements of the Fano parameter and single-mode quadrature amplitudes give direct access to the R\'enyi entropy. This approach allows us to link directly accessible microwave observables to quantum light-matter correlations~\cite{Acevedo2_2015,Acevedo_2015,Gomez2018}, and clarifies the role of topological phases hosted by cavity-fermion coupled systems.\\
\\
Our paper is organized as follows. Section~\ref{Sec1} gives the description of the Kitaev model embedded in a microwave cavity. In Sec.~\ref{Sec:MF}, we present the mean-field approach of the system, which is useful to predict the response of the cavity. In Sec.~\ref{Sec:PhaseDiagram}, we present the phase diagram of the composite system obtained numerically. In Sec.~\ref{Sec:VNCRiticalityGS} we show that the composite system signals the phase transitions in the von Neumann entropy and that the state of the cavity can be approximated by a single-mode Gaussian-state. In Sec.~\ref{Sec:Renyi} we show the connection between the Fano factor and the R\'enyi entropy.
Finally, in Sec.~\ref{Sec:Conclusions} we present a summary of our work.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure_1.pdf}
\caption {Schematic illustration of a Kitaev chain embedded in a single-microwave cavity. The blue curve denotes the profile of the fundamental mode of the cavity. Majorana fermion quasiparticles are depicted as blue and red spheres (bulk) and as green spheres for the edge unpaired quasiparticles for an isolated Kitaev chain in the topological phase. The light red regions illustrate the hybridization effect yielding to Majorana polaritons.
}\label{fig:Sketch}
\end{center}
\end{figure}
\section{Photon-Fermion Model}\label{Sec1}
We consider a Kitaev chain embedded in a single-mode microwave cavity as schematically shown in Fig. \ref{fig:Sketch}. The system is described by the Hamiltonian
\begin{equation}\label{FullHamiltonian}
\hat{\mathcal{H}}=\hat{\mathcal{H}}_{\rm C}+\hat{\mathcal{H}}_{\rm K}+\hat{\mathcal{H}}_{{\rm Int}}.
\end{equation}
Here $\hat{\mathcal{H}}_{\rm C}=\omega \hat{a}^\dagger \hat{a}$ is the Hamiltonian describing the microwave single-mode cavity, with $\hat{a} \pap{\hat{a}^{\dagger}}$ the annihilation (creation) microwave photon operator, and $\omega$ is the energy of the cavity; we set the energy scale by taking $\omega=1$. The isolated open-end Kitaev chain Hamiltonian $\hat{\mathcal{H}}_{\rm K}$ is given by
\begin{equation}
\begin{split}
\hat{\mathcal{H}}_{\rm K}=&-\frac{\mu}{2} \sum_{j=1}^L\left[2\hat{c}_j^\dagger \hat{c}_j-\hat{1}\right] -t\sum_{j=1}^{L-1}\left[\hat{c}_{j}^{\dagger}\hat{c}_{j+1}+\hat{c}_{j+1}^{\dagger}\hat{c}_{j}\right]\\ & +\Delta \sum_{j=1}^{L-1}\left[\hat{c}_{j}\hat{c}_{j+1}+\hat{c}_{j+1}^{\dagger}\hat{c}_{j}^{\dagger}\right].\end{split}\end{equation}
Here $\hat{c} _j\pap{\hat{c} _j^{\dagger}}$ is the annihilation (creation) operator of spinless fermions at site $j=1,\ldots, L$, $\mu$ is the chemical potential, $t$ is the hopping amplitude between nearest-neighbor sites (we assume $t\geq 0$ without loss of generality) and $\Delta$ is the nearest-neighbor superconducting induced pairing interaction. The Kitaev model features two phases: a topological and a trivial phase. In the former, the Majorana ZEM emerge, which occurs whenever $|\mu| < 2\Delta$ for the symmetric hopping-pairing Kitaev Hamiltonian, i.e., $t=\Delta$, the case to which we restrict ourselves from now on~\cite{majorana,Kitaev}. Additionally, the general interaction Hamiltonian is given by~\cite{MirceaTrifHamiltonian}
\begin{equation}
\hat{\mathcal{H}}_{{\rm Int}}=\pap{\frac{\hat{a}^\dagger+\hat{a}}{\sqrt{L}}}\bigg[\lambda_0\sum_{j=1}^L \hat{c}_j^\dagger \hat{c}_j+\frac{\lambda_1}{2}\sum_{j=1}^{L-1}\pap{\hat{c}_{j}^\dagger \hat{c}_{j+1}+ \hat{c}_{j+1}^{\dagger} \hat{c}_{j}}\bigg].
\end{equation}
Thus, for the light-matter coupling, we shall consider a general case which incorporates both on-site ($\lambda_0$) as well as hoppinglike ($\lambda_1$) terms (without loss of generality we will assume $\lambda_0$, $\lambda_1>0$). In Ref.~\cite{MirceaTrifHamiltonian}, a typical value of the on-site chain-cavity coupling, $\lambda_0 \simeq 0.1\omega$ was estimated for a fermion chain length of $L=100$ sites. Note that the whole chain is assumed to be coupled to the same cavity field.
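The topological/trivial distinction in the isolated chain ($\lambda_0=\lambda_1=0$) can be checked numerically by diagonalizing the Bogoliubov--de Gennes (BdG) matrix of $\hat{\mathcal{H}}_{\rm K}$: for $t=\Delta$ and $|\mu|<2\Delta$ the spectrum contains a quasiparticle energy that is exponentially small in $L$ (the Majorana ZEM), while in the trivial phase the lowest energy stays at the bulk gap. The following is a minimal sketch of this check, not the DMRG calculation used for the cavity-coupled system:

```python
import numpy as np

def kitaev_bdg_energies(L, mu, t, delta):
    """Quasiparticle energies of the open Kitaev chain from its BdG matrix."""
    S = np.diag(np.ones(L - 1), 1)            # right-shift: S[j, j+1] = 1
    h = -mu * np.eye(L) - t * (S + S.T)       # single-particle (hopping) block
    d = delta * (S - S.T)                     # antisymmetric pairing block
    M = np.block([[h, d], [-d, -h]])          # BdG matrix in the (c, c^dagger) basis
    return np.sort(np.abs(np.linalg.eigvalsh(M)))

# Topological phase (|mu| < 2*Delta): exponentially small edge-mode energy.
e_topo = kitaev_bdg_energies(L=40, mu=0.5, t=1.0, delta=1.0)
# Trivial phase (|mu| > 2*Delta): lowest energy of order the bulk gap.
e_triv = kitaev_bdg_energies(L=40, mu=3.0, t=1.0, delta=1.0)
```

Here `e_topo[0]` is numerically zero (the edge-mode splitting decays exponentially with $L$), while `e_triv[0]` remains of order $|\mu|-2t$.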
\section{Mean-Field Approach}\label{Sec:MF}
In order to gain physical insights on how the original topological phase of the Kitaev chain is modified by its coupling to a cavity, we start by performing a Mean-Field (MF) treatment. Although we develop the MF analysis for a chain with periodic boundary conditions, the relations we will discuss in this section are indeed useful guides for interpreting the quasiexact results obtained by density matrix renormalization group (DMRG) numerical simulations in chains with open boundary conditions, as illustrated below.\\
\\
We start by separating the cavity and the chain subsystems by describing their interaction as the mean effect of one subsystem over the other. Applying the traditional MF approximation to the interaction Hamiltonian $\hat{\mathcal{H}}_{{\rm Int}}$, we set quantum fluctuations of products of bosonic and fermionic operators to 0, therefore
\begin{equation}
\pap{\hat{a}^\dagger+\hat{a}-\langle\hat{a}^\dagger+\hat{a}\rangle}\pap{\hat{c}_j^\dagger \hat{c}_j-\langle\hat{c}_j^\dagger \hat{c}_j\rangle}=0.
\end{equation}
Following a similar procedure for the hoppinglike light-matter interaction term and setting periodic boundary conditions, the new interaction Hamiltonian is given by
\begin{equation}\label{InteractionBipartition}
\begin{split}
\hat{\mathcal{H}}_{{\rm Int}}^{{\rm MF}}\approx & L\pas{\lambda_1 D+\lambda_0(1- S_z)}\pas{\hat{X}-x}\\
&+2x\bigg[\lambda_0\sum_{j=1}^L \hat{c}_j^\dagger \hat{c}_j+\frac{\lambda_1}{2}\sum_{j=1}^{L}\pap{\hat{c}_{j}^\dagger \hat{c}_{j+1}+ \hat{c}_{j+1}^{\dagger} \hat{c}_{j}}\bigg].
\end{split}
\end{equation}
Here, we define
\begin{equation}
\begin{split}
\hat{X}&=\pap{\hat{a}+\hat{a}^{\dagger}}/2\sqrt{L}, \quad x=\langle \hat{X}\rangle, \\
S_z&=1-\frac{2}{L}\sum_j\langle \hat{c}^\dagger_{j} \hat{c}_{j}\rangle,\\ D&=\frac{1}{L}\sum_j \langle \hat{c}^\dagger_{j} \hat{c}_{j+1}+\hat{c}^\dagger_{j+1}\hat{c}_{j}\rangle, \end{split} \end{equation}
where expectation values are taken with respect to the photon-fermion ground-state. The resulting Hamiltonian is that of a displaced harmonic oscillator, with photon number $\langle \hat{a}^\dagger \hat{a} \rangle\equiv \langle \hat{n} \rangle=L x^2$, and a Kitaev chain with effective chemical potential $\mu_{{\rm eff}}\equiv\mu -2\lambda_0 x$ and hopping interaction $ t_{{\rm eff}}\equiv \Delta-\lambda_1x$ (see Appendix \ref{App:MFA} for more details on this MF approach).\\
The minimization of the MF Hamiltonian expected value, $\partial \langle \hat{\mathcal{H}}_{{\rm MF}}\rangle/ \partial x=0$, yields
\begin{equation}
\label{Minimization}
\lambda_0S_z=\lambda_0+\lambda_1D+2 \omega x,
\end{equation}
which shows the interdependence of the cavity and chain state parameters. Since $x\in [-\frac{2\lambda_0+\lambda_1}{2 \omega}, 0]$, the effective MF renormalized Kitaev parameters turn out to be $\mu_{{\rm eff}} \geq \mu$ and $t_{{\rm eff}} \geq \Delta$. By choosing $\lambda_1=0$, it is easy to see that $x$ will be related to the magnetization in the equivalent transverse Ising chain~\cite{qpt,susuki,PolaritonLiberato,Ising-Kitaev}, while when choosing $\lambda_0=0$, $x$ will be associated with the occupancy of first-neighbor nonlocal Majorana fermions in the Kitaev chain~\cite{FernandoTwoTimeCorrelations}.
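The fixed-point relation Eq.~\eqref{Minimization} and the resulting effective parameters can be sketched as follows (a minimal illustration; in a full MF calculation $S_z$ and $D$ must themselves be evaluated self-consistently from the chain ground state at $\mu_{\rm eff}$, $t_{\rm eff}$, so the values passed below are placeholders):

```python
def mean_field_point(mu, delta, lam0, lam1, omega, Sz, D, L):
    """Cavity displacement x from Eq. (Minimization),
    lam0*Sz = lam0 + lam1*D + 2*omega*x,
    together with the MF-renormalized chain parameters and photon number."""
    x = (lam0 * (Sz - 1.0) - lam1 * D) / (2.0 * omega)
    mu_eff = mu - 2.0 * lam0 * x         # effective chemical potential
    t_eff = delta - lam1 * x             # effective hopping (t = Delta case)
    n_photons = L * x * x                # <n> = L x^2 for the displaced oscillator
    return x, mu_eff, t_eff, n_photons
```

With $\lambda_1=0$ and half filling ($S_z=0$) this reproduces the bounds quoted above: $x\leq 0$, hence $\mu_{\rm eff}\geq\mu$ and $t_{\rm eff}\geq\Delta$.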
\section{Phase Diagram}
\label{Sec:PhaseDiagram}
The ground-state of the system has been obtained by performing DMRG simulations in a matrix product state description~\cite{SCHOLLWOCK,Or_s_2019DMRG}, using the open-source TNT library~\cite{TNT,Sarah}. Notably, matrix product algorithms have been successfully applied to correlated systems embedded in a cavity~\cite{GandMDMRG,CavityKollath}, as well as to different interacting systems in starlike geometries~\cite{wolf2014prb,Mendoza:2017,zwolak2020prl,brenes2020}. In the following analysis, we consider separately each kind of cavity-chain coupling term and we sweep over $\mu$.\\
\\
The topological phase of the chain will be assessed through the two-end correlations $Q$, defined as $Q\equiv 2\langle \hat{c}_1 \hat{c}_L^\dagger+\hat{c}_L \hat{c}_1^\dagger \rangle$. This value is an indicator of the locality of the edge modes, which is connected to the topology of the system~\cite{ReslenQ,LeeQ}. For an infinite isolated Kitaev chain, its value is $1$ in the topological phase while it goes to $0$ in the trivial one. However, for finite sizes $Q$ takes on continuous values in between, reaching $1$ at the point of maximum correlations [cf. Figs.~\ref{fig:Q_PD}(e) and \ref{fig:Q_PD}(f), see Appendix \ref{App:PhaseCharacterization}]. Whenever $Q>Q_{{\rm Trigger}}$ the phase is said to be topological, where $Q_{{\rm Trigger}}$ is defined as the lowest $Q$ that allows for ZEM to emerge in an isolated Kitaev chain with the same $\Delta$ and $L$ as in the simulated light-coupled case. In Appendix \ref{App:PhaseCharacterization}, we show the agreement of this definition of the topological phase with the description provided by a topological invariant, namely, the Majorana number~\cite{Kitaev}.\\
For both types of cavity couplings, second-order phase transitions arise in the composite light-matter model (an example is shown in Appendix \ref{App.DMRGVsMF}), a result for which DMRG and MF are in full agreement for a wide range of coupling values.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure_2.pdf}
\caption{Photon-fermion phase diagrams. NP: normal phase, TP: topological phase and SP: asymptotically super-radiant phase. \text{(a)} Chemical potential-like coupling. \text{(b)} hoppinglike coupling. The Kitaev-cavity parameters are $L=100$ and $\Delta =0.6 \omega$.}\label{fig:PhaseDiagram}
\end{center}
\end{figure}
The phase diagram for the on-site coupling ($\lambda_0 \neq 0$ and $\lambda_1 = 0$) is presented in Fig. \ref{fig:PhaseDiagram}(a), whereas that for the hoppinglike coupling ($\lambda_0 = 0$ and $\lambda_1 \neq 0$) is depicted in Fig.~\ref{fig:PhaseDiagram}(b).\\
For the on-site coupling, the critical points and the maximum of correlations move asymmetrically to lower values of the chemical potential as the coupling strength increases (see Appendix \ref{App:PhaseCharacterization}). The boundary between the topological phase (TP) and the asymptotically super-radiant phase (SP), in which the number of photons approaches the maximum obtained by MF [cf. Fig.~\ref{fig:NumberOfPhotons}(a), see Appendix \ref{App:PhaseCharacterization}], is affected more dramatically causing the TP to disappear beyond $\lambda_0/\omega =1.39\pm 0.01$. For larger values of $\lambda_0$, there will only be one second-order phase transition between the normal phase (NP), which is topologically trivial and does not present radiation, and SP, holding only a trivial ordering of the chain.\\
\\
For the hoppinglike photon-chain coupling case, the phase transition points are symmetrical with respect to the transformation $\mu \rightarrow -\mu$ (see Appendix \ref{App.DMRGVsMF}). Whenever the cavity resides in a super-radiant phase, the chain is in the topological phase (see Appendix \ref{App:PhaseCharacterization}); thus the mean number of photons acts as an orderlike parameter that correlates well with the quantum state of the chain. It is evident that this type of cavity-chain coupling widens the topological phase allowed region. However, as the TP gets wider the maximum value of $Q$ decreases, indicating the degrading of nonlocal chain correlations at high coupling values.
\section{von Neumann entropy, criticality and Gaussian-states}
\label{Sec:VNCRiticalityGS}
A result well beyond the MF analysis for this photon-fermion system is that phase transitions are associated with singularities in the light-matter quantum von Neumann entropy, $S_{{\rm N}}$~\cite{Vidal,Sarah,Lewis_SwanDM}, as shown in Fig.~\ref{fig:Entropy}. Critical lines, as obtained from the nonlocal $Q$-correlation behavior, are fully consistent with results extracted from the second derivative of the energy and the $S_{{\rm N}}$ behavior (see Appendix \ref{App.DMRGVsMF}). Moreover, the maximum nonlocal edge correlation $Q=1$ coincides with the minimum of $S_{{\rm N}}$ [compare Figs.~\ref{fig:Q_PD}(e) and \ref{fig:Q_PD}(f) with Fig.~\ref{fig:Entropy}(a)]. Consistency with $S_{{\rm N}}$ singularities is also found for the hoppinglike coupling, as shown in Fig.~\ref{fig:Entropy}(b). In any case, singularities in $S_{{\rm N}}$ are intimately connected to the phase transitions for an ample domain of coupling strength parameters (see also the weak-coupling behaviors in Fig.~\ref{fig:Entropy} for $\lambda_0=0.1\omega$ and $\lambda_1=0.07\omega$).
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure_3.pdf}
\end{center}
\caption{von Neumann entropy $S_{N}$ as a function of the chemical potential of the chain $\mu/2\Delta$ for any subsystem in the bipartite cavity-chain system. Symbols (lines) indicate DMRG (Gaussian) results. (a) Local photon-fermion couplings $\lambda_0 = 0.1\omega$ (weak coupling, black symbols and line) and $\lambda_0 = 0.4\omega$ (moderate coupling, red symbols and line). (b) Nonlocal photon-fermion coupling $\lambda_1 = 0.07\omega$ (weak coupling, black symbols and line) and $\lambda_1 = 0.4\omega$ (moderate coupling, red symbols and line). Other parameters are $L=100$, $\omega= 1$ and $\Delta=0.6\omega$.}
\label{fig:Entropy}
\end{figure}
\\
The MF analytical description, which involves a single coherent state for the cavity, provides an accurate description of the bulk expectation values in the chain, the mean number of cavity photons, the cavity quadratures and the energy of the whole system (see Appendix \ref{App.DMRGVsMF}). However, this effective theory is unable to account for entanglement properties between subsystems and higher interaction terms such as the FF. Many of the properties of GS have been broadly studied~\cite{GaussianExpectedOlivares, GaussianRenyiDaeKil, GaussianVonNeumann, GaussianAlexanian}, one of the most outstanding being that a GS is fully characterized by its $2 \times 2$ covariance matrix $\sigma$ and the first moments of the field-quadrature canonical variables, given by $\hat{q} = \pap{\hat{a}^\dagger+\hat{a}}/\sqrt{2}$ and $\hat{p} = i \pap{\hat{a}^\dagger-\hat{a}}/\sqrt{2}$. The covariance matrix for a single-mode GS is simply
\begin{equation}
\sigma =\begin{pmatrix}
\langle \hat{q}^2 \rangle -\langle \hat{q} \rangle^2 & \langle \hat{q}\hat{p} + \hat{p}\hat{q} \rangle -\langle \hat{q} \rangle \langle \hat{p} \rangle\\
\langle \hat{q}\hat{p} + \hat{p}\hat{q} \rangle -\langle \hat{q} \rangle \langle \hat{p} \rangle & \langle \hat{p}^2 \rangle -\langle \hat{p} \rangle^2
\end{pmatrix}.
\end{equation}
\\
Remarkably, an accurate description of the reduced photon density matrix is possible by means of a single-mode GS. Any single-mode GS can be expressed in terms of a fictitious thermal state on which squeezing ($\hat{S}_\xi$) and displacement ($\hat{D}_\alpha$) operators act in the form:
\begin{equation}
\hat{\rho}_{\rm GS}=\hat{D}_\alpha \hat{S}_\xi\frac{N ^{\hat{a}^\dagger \hat{a}}}{\pap{1+N}^{\hat{a}^\dagger \hat{a}}}\hat{S}^\dagger_\xi \hat{D}^\dagger_\alpha,
\end{equation}
with the operators defined as
\begin{equation}
\begin{split}
\hat{D}_\alpha&=\exp\pas{\alpha \hat{a}^\dagger - \alpha^* \hat{a}}, \\ \hat{S}_\xi&=\exp\pas{\pap{ \xi^* (\hat{a})^2 - \xi (\hat{a}^\dagger)^2 }/2},
\end{split}
\end{equation}
where $\alpha\in \mathbb{C}$, $\xi=re^{i\phi}$ is an arbitrary complex number with modulus $r$ and argument $\phi$, and $N$ is the thermal state parameter~\cite{GaussianExpectedOlivares}. Furthermore, it is well known that, at given quadrature variances, $S_{{\rm N}}$ is maximized by a GS, for which it is simply expressed as~\cite{GaussianVonNeumann, GaussianAlexanianExpected, GaussianRenyiDaeKil}
\begin{equation}
\label{Eq:GaussianSN}
S_{{\rm N}}=\pap{N+1}\ln\pas{N+1}-N\ln\pas{N}.
\end{equation}
In order to obtain the Gaussian parameters $\alpha$, $N$, $r$, and $\phi$, the covariance matrix and quadratures are numerically extracted from the corresponding expectation values using ground-state DMRG calculations. For the ground state, the imaginary part of $\alpha$ and the angle $\phi$ must vanish. The relations that provide the Gaussian parameters are the following~\cite{GaussianExpectedOlivares}:
\begin{subequations}
\begin{align}
\langle \hat{q} \rangle&=\sqrt{2} \Re\pas{\alpha},\\
\langle \hat{p} \rangle&=\sqrt{2} \Im\pas{\alpha},\\
\langle \hat{q}^2 \rangle -\langle \hat{q} \rangle^2=\frac{1+2N}{2}&\pap{\cosh\pas{2r}+\sinh\pas{2r}\cos\pas{\phi} },\\
\langle \hat{p}^2 \rangle -\langle \hat{p} \rangle^2=\frac{1+2N}{2}&\pap{\cosh\pas{2r}-\sinh\pas{2r}\cos\pas{\phi}},\\
\langle \hat{q}\hat{p} + \hat{p}\hat{q} \rangle -\langle \hat{q} \rangle \langle \hat{p} \rangle&=\frac{1+2N}{2}\pap{\sinh\pas{2r}\sin\pas{\phi} }.
\end{align}
\end{subequations}
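The inversion of these relations has a simple closed form: since $\cosh^2(2r)-\sinh^2(2r)=1$, the determinant of the covariance matrix equals $((1+2N)/2)^2$, which fixes $N$; the sum and difference of the variances then fix $r$ and $\phi$. The following sketch (our own illustration with hypothetical helper names, not code from this work; general $\alpha$ and $\phi$ are kept even though both are real for the ground state) performs the extraction and checks the round trip:

```python
import math

def moments_from_params(alpha, N, r, phi):
    # Forward relations for a single-mode Gaussian state (relations above).
    q = math.sqrt(2) * alpha.real
    p = math.sqrt(2) * alpha.imag
    pref = (1 + 2 * N) / 2
    Vq = pref * (math.cosh(2 * r) + math.sinh(2 * r) * math.cos(phi))
    Vp = pref * (math.cosh(2 * r) - math.sinh(2 * r) * math.cos(phi))
    Cqp = pref * math.sinh(2 * r) * math.sin(phi)
    return q, p, Vq, Vp, Cqp

def params_from_moments(q, p, Vq, Vp, Cqp):
    # Invert: det(sigma) = ((1+2N)/2)^2 since cosh^2(2r) - sinh^2(2r) = 1.
    s = math.sqrt(Vq * Vp - Cqp ** 2)        # s = (1 + 2N)/2
    N = s - 0.5
    r = 0.5 * math.acosh((Vq + Vp) / (2 * s))
    phi = math.atan2(2 * Cqp, Vq - Vp)
    alpha = complex(q, p) / math.sqrt(2)
    return alpha, N, r, phi
```

In practice the moments on the left-hand sides come from the DMRG expectation values; the round trip is exact up to floating-point error.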
Results for $S_{{\rm N}}$ obtained from DMRG and analytical GS calculations of Eq. \eqref{Eq:GaussianSN} are in excellent agreement for different coupling types and strengths, as shown in Fig. \ref{fig:Entropy}, thus confirming the adequacy of a GS photon description for the present photon-fermion system (see also Appendix \ref{App:GaussianParameters}).
\section{R\'enyi entropy and Fano factor}
\label{Sec:Renyi}
The R\'enyi entropies, defined as
\begin{equation}
S_{\eta}\pap{\hat{\rho}}=\pap{1-\eta}^{-1}\ln\pas{\tr \pas{\hat{\rho}^{\eta}}}
\end{equation}
for a state $\hat{\rho}$, have been identified as powerful indicators of quantum correlations in multipartite systems~\cite{adessoPRL2012}. The von Neumann entropy $S_{{\rm N}}$ is retrieved as the limit $\eta \rightarrow 1$ of the R\'enyi entropy. It has also been established that the R\'enyi entropy of order $\eta=2$ is well suited for extracting correlation information from a GS. Thus, from now on we restrict ourselves to $S_{2}(\hat{\rho})=-\ln\pas{\tr (\hat{\rho}^{2})}$, which we will simply denote as $S_{{\rm R}}$~\cite{Lewis_SwanDM, REHubbard&Spin,EntanglementInManyBody}. Specifically, $S_{{\rm R}}$ for a GS can be simply expressed in terms of the GS covariance matrix $\sigma$ as $S_{{\rm R}}=\frac{1}{2}\ln\pas{\det (\sigma)}$ \cite{GaussianRenyiDaeKil}.\\
\\
We also consider the photon FF, defined as FF$ = {\rm Var}\pap{\hat{n}}/\langle \hat{n}\rangle$, with ${\rm Var}\pap{\hat{n}}=\langle \hat{n}^{2} \rangle - \langle \hat{n}\rangle^{2}$. For further reference, FF$=1$ for a coherent state (the MF result), while FF$<1$ (FF$>1$) indicates a sub- (super-) Poissonian photon state. We now argue that the GS approximation allows us to analytically work out a relation between the FF and the entanglement entropy $S_{\rm R}$, establishing both as reliable and accessible indicators of phase transitions in composite photon-fermion systems. For a cavity GS, the FF and $S_{\rm R}$ can be analytically expressed as~\cite{GaussianExpectedOlivares,GaussianAlexanianExpected,GaussianRenyiDaeKil}:
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{Figure_4.pdf}
\end{center}
\caption{[(a) and (c)] R\'enyi entropy $S_{\rm R}$ and [(b) and (d)] Fano factor (FF$-1$) of the cavity state as a function of the chemical potential of the chain $\mu/2\Delta$. Symbols (lines) indicate DMRG (GS) results. [(a) and (b)] Results for local photon-fermion coupling, $\lambda_0=0.1\omega$ (black) and $\lambda_0=0.4\omega$ (red). [(c) and (d)] Results for nonlocal photon-fermion coupling, $\lambda_1=0.07\omega$ (black) and $\lambda_1=0.4\omega$ (red). Other parameters are $L=100$, $\omega=1$, and $\Delta=0.6\omega$.}
\label{fig:GaussianVariables}
\end{figure}
\begin{align}
{\rm FF}&=\frac{(N+1/2)^2\cosh\pas{4r}+(1+2N)e^{2r}\alpha^2-1/2}{(N+1/2)\cosh\pas{2r}+\alpha^2-1/2},\label{eq:GaussianFano}\\
S_{\rm R}&=2\ln\pas{1+N}+\ln\left[1-\left(\frac{N}{1+N}\right)^2\right].\label{eq:GaussianRenyi}
\end{align}
For both kinds of photon-fermion couplings, results obtained from these analytical expressions closely match the numerical ones extracted from full DMRG calculations. Assuming a GS, the inequalities $N, r \ll \abs{\alpha}$ and $N,\, r \ll 1$, which make the connection between both quantities transparent, are well justified for the range of parameters of experimental interest (see Appendix \ref{App:GaussianParameters}). Keeping first-order terms in $r$ and $N$ in Eqs.~\eqref{eq:GaussianFano} and~\eqref{eq:GaussianRenyi}, we finally get FF$=1+2(r+N)$ (i.e., a super-Poissonian photon state) and $S_{\rm R}=2N$, from which a simple relation between $S_{\rm R}$, FF and the squeezing parameter $r$ immediately follows as
\begin{equation}
\label{eq:Fano&Renyi}
S_{\rm R}={\rm FF}-2r-1.
\end{equation}
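As a numerical sanity check of this expansion (a sketch with illustrative values $N=0.01$, $r=0.02$, $\alpha=5$, chosen so that $N,r\ll 1$ and $N,r\ll\abs{\alpha}$; these are not parameters taken from our simulations), one can evaluate Eqs.~\eqref{eq:GaussianFano} and~\eqref{eq:GaussianRenyi} and compare with their first-order forms:

```python
import math

def fano_exact(N, r, alpha):
    # Eq. (FF) for a single-mode GS with real alpha and phi = 0.
    num = ((N + 0.5) ** 2 * math.cosh(4 * r)
           + (1 + 2 * N) * math.exp(2 * r) * alpha ** 2 - 0.5)
    den = (N + 0.5) * math.cosh(2 * r) + alpha ** 2 - 0.5
    return num / den

def renyi_exact(N):
    # Eq. (S_R); algebraically equal to ln(1 + 2N).
    return 2 * math.log(1 + N) + math.log(1 - (N / (1 + N)) ** 2)

N, r, alpha = 0.01, 0.02, 5.0
ff = fano_exact(N, r, alpha)
sr = renyi_exact(N)
# First order: ff ~ 1 + 2*(r + N) and sr ~ 2*N, hence sr ~ ff - 2*r - 1.
```

The residual discrepancy is second order in $N$, $r$ and $1/\abs{\alpha}^2$, consistent with the truncation.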
It must be stressed that this last relation between the R\'enyi entropy and cavity observables does not rely on a MF analysis, or equivalently on the assumption of a single coherent state for the cavity; rather, it comes exclusively from numerical DMRG calculations and their analytical backing by a Gaussian-state approximation.\\
The validity of this important result is illustrated in Fig.~\ref{fig:GaussianVariables} regardless of the photon-fermion coupling type. In spite of the similar behavior through a topological phase transition (and corresponding analytical expressions for a GS) of von Neumann and R\'enyi entropies, it is important to note that an equivalent relation to that in Eq.~\eqref{eq:Fano&Renyi} but involving $S_{{\rm N}}$ instead of $S_{\rm R}$ is hardly workable. Therefore, we stress the relevance of this connection between a theoretical quantum information entropy, $S_{\rm R}$, and measurable photon field observables, FF and $r$.\\
\\
Figures~\ref{fig:GaussianVariables}(a) and \ref{fig:GaussianVariables}(b) exhibit the behavior of the different terms involved in Eq.~\eqref{eq:Fano&Renyi} for the local photon-fermion coupling ($\lambda_0=0.1 \omega$ and $0.4 \omega$), and show an excellent agreement between the results directly obtained from DMRG and those assuming a cavity GS. This validates Eq.~\eqref{eq:Fano&Renyi}, according to which $S_{\rm R}+2r$ and FF$-1$ coincide. Very small deviations between GS and DMRG results at the topological phase transition are observed for the stronger coupling value. However, the locations of the singularities predicted by the analytical and numerical results coincide. Similarly, Figs.~\ref{fig:GaussianVariables}(c) and \ref{fig:GaussianVariables}(d) display the corresponding calculations for a hoppinglike coupled system ($\lambda_1=0.07\omega$ and $0.4\omega$), showing that GS results slightly drift apart from the numerically exact DMRG ones for the highest coupling.\\
\\
We observe that the squeezing gets larger as the light-matter subsystems become more entangled at the critical point (note the behavior of the $r$ parameter when comparing the different curves in Fig.~\ref{fig:GaussianVariables}; see also Appendix \ref{App:GaussianParameters}). In order to measure the squeezing parameter $r$, one can resort to a homodyne detection technique, which has recently been extended to the microwave spectral region~\cite{Haus_2004squeezed,andrews2015photonics,2modeSqueezingWallraff}. On the other hand, the FF can be assessed from measurements of the second-order correlation function $g^{(2)}$ (for the relation between both see Refs. \cite{GaussianExpectedOlivares,GaussianAlexanianExpected}), which has successfully been measured in experiments \cite{LangG2}. Moreover, in Fig. \ref{fig:FF&RECriticality} we show a scaling analysis of the values of both FF and $S_{\rm R}$ at criticality. The results reveal a logarithmic growth with the size of the chain, which is reminiscent of the behavior of entanglement at criticality in 1D systems \cite{EntanglementAtCriticality}. From the results of Fig. \ref{fig:FF&RECriticality} together with Eq. \eqref{eq:Fano&Renyi}, it is straightforward to see that the squeezing at criticality also depends logarithmically on the size of the chain. This behavior is not unique to the hoppinglike coupling; we found the same logarithmic response of the cavity for the chemical-potential-like interaction as well. Thus, the FF behavior and its very close relation to $S_{\rm R}$ turn out to be measurable, reliable and accurate indicators of entanglement for this light-matter interacting system. Aside from the fact that it is always interesting to establish the connections between different approaches, our main result in Eq.~\eqref{eq:Fano&Renyi} raises the question of whether a GS approximation remains valid for open quantum systems and/or stronger light-matter coupling strengths.
For example, photon loss from the cavity is a ubiquitous deleterious effect in experimental setups, but it is key to measuring the state of the cavity field. These subjects merit considerable further study, motivated by our work.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Figure_5.png}
\caption{Scaling of FF (black symbols and left scale) and $S_{\rm R}$ (red symbols and right scale) with the lattice size at the critical point between the TP and SP for the on-site coupling. The corresponding colored solid lines are the result of a linear fit, reflecting the typical form of the growth of entanglement at criticality in one-dimensional systems. The parameters are $\Delta=0.6\omega$ and $\lambda_0=0.49 \omega$.}
\label{fig:FF&RECriticality}
\end{figure}
\section{Conclusions}
\label{Sec:Conclusions}
In this work, we have developed a direct link between accessible microwave observables and quantum entanglement entropies in quantum matter featuring topological phase transitions. By resorting to a GS description of the photon subsystem, as supported by DMRG calculations, we found a simple but powerful relation between the photon Fano factor, the single-mode quadrature amplitudes and the light-matter R\'enyi entropy. Singularities in the latter can then help characterize topological phase transitions and their connection to nonmonotonic nonlocal correlations in a fermionic chain. We also provide evidence of how the topological phase can be modified with both on-site and hopping photon-fermion interaction terms, yielding in some cases a more robust topological phase. The possibility of extracting nonlocal or topological information of the Kitaev chain from the photonic field itself is highly timely, given the continuing challenges in cleanly assessing Majorana features in transport experiments. Moreover, our results open novel questions which motivate further studies of the role of decoherence in this quantum light-matter system.\\
\begin{acknowledgments}
The authors acknowledge the use of the Universidad de los Andes High Performance Computing (HPC) facility in carrying out this work. J.J.M-A, F.J.R and L.Q are thankful for the support of MINCIENCIAS, through the Project ``{\it Producci\'on y Caracterizaci\'on de Nuevos Materiales Cu\'anticos de Baja Dimensionalidad: Criticalidad Cu\'antica y Transiciones de Fase Electr\'onicas}" (Grant No. 120480863414) and from Facultad de Ciencias-UniAndes, projects ``{\it Quantum thermalization and optimal control in many-body systems}" (2019-2020) and ``{\it Excited State Quantum Phase Transitions in Driven Models - Part II: Dynamical QPT}" (2019-2020). C.T acknowledges financial support from the Ministerio de Econom\'ia y Competitividad (MINECO), under Project No. MAT2017-83772-R. F.J.G-R acknowledges funding support from the {\it Ministerio de Ciencia e Innovaci\'on}, under Project No. PID2019-109007GA-I00.
\end{acknowledgments}
\section{Introduction}
The surface hopping algorithms, in particular the celebrated Tully's
fewest switches surface hopping (FSSH) algorithm \cites{Tully,Tully2},
are widely used in theoretical and computational chemistry for mixed
quantum-classical dynamics in the non-adiabatic regime.
The Schr\"odinger equation, which is often high dimensional in
chemistry applications, is impractical to solve directly due to curse
of dimensionality. Thus, development of algorithms based on
semiclassical approximation, which only involve solving ODEs, is
necessary. Within the Born-Oppenheimer approximation, the resulting
algorithm from the semiclassical approximation is the familiar ab
initio molecular dynamics \red{and related semiclassical algorithms}. However, in many applications, the
adiabatic assumption in the Born-Oppenheimer approximation is
violated, thus, we need to consider the non-adiabatic dynamics. The
surface hopping algorithms are hence proposed to incorporate in quantum behavior due to the
non-adiabaticity.
Despite the huge popularity of the algorithm and the many attempts in
the chemistry literature for corrections and further improvements (see \textit{e.g.},
\cites{Prezhdo,Schutte,Schwartz,Subotnik1,Subotnik2,Subotnik3,HannaK1}),
which is a very active area to date, the understanding of such
algorithms, in particular, how surface hopping type algorithms can be
derived from the nuclei Schr\"odinger equations, remains rather poor.
\red{In this work, we rigorously derive a surface hopping algorithm,
named frozen Gaussian approximation with surface hopping (FGA-SH),
to approximate the Schr\"odinger equations with multiple adiabatic
states in the semiclassical regime. The FGA-SH algorithm shares
similar spirit as the FSSH algorithms used in the chemistry
literature \cite{Tully}, while it also differs in some essential
ways.} Hence, besides providing a rigorously asymptotically correct
approximation, our derivation hopefully will also help clarify several
issues and mysteries around the FSSH algorithm, and lead to systematic
improvement of this type of algorithms.
The key observation behind our work is a path integral stochastic
representation of the solution of the semiclassical Schr\"odinger
equations. The surface hopping algorithm can in fact be viewed as a
direct Monte Carlo method for evaluating the path integral. Thus, the
path space average provides an approximation to the solution of a high
dimensional PDE, similar to the familiar Feynman-Kac formula for
reaction-diffusion type equations. To the best of our knowledge, this
has not been observed in the literature, and it is crucial for
understanding what the surface hopping algorithm really tries to
compute.
In this stochastic representation, the path space consists of
continuous trajectories in the phase space, whose evolution switches
between the classical Hamiltonian flows corresponding to each energy
surface, and is hence piecewise deterministic, except at the
hoppings. This is why these algorithms are called surface hopping
algorithms. Also to avoid any potential confusion, while we
approximate solutions to the Schr\"odinger equation, the path integral
we consider here (as it only works in the semiclassical limit) is very
different from the usual Feynman path integral for quantum
mechanics. In particular, the stochastic representation is well
defined and gives an accurate approximation to the solution of the
Schr\"odinger equation in the semiclassical regime.
Before we continue, let us review some related mathematical works.
Somewhat confusingly, the term ``surface hopping'' is sometimes
used for a very different algorithm \cite{Tully71} which is based on
Landau-Zener transition asymptotics \cites{Landau,Zener}. This
algorithm is designed for the situation of a single avoided crossing,
while the type of surface hopping algorithm we consider in this paper,
which is mostly often used in chemistry today, is quite different and
aims to work for general situations. The Landau-Zener asymptotics has
been mathematically studied by Hagedorn and
Joye~\cites{HagedornLZ1,HagedornLZ2}. The algorithm based on
Landau-Zener formula is also studied in the mathematics literature,
see \textit{e.g.}, \cites{Lasser1,Lasser2,SH_SQZ}. While the algorithm
we consider is very different, some of these numerical techniques
might be used in our context as well.
For the surface hopping algorithm we studied in this work, the
understanding in the chemistry literature (see e.g.,
\cites{HannaK1,Schutte,Subotnik3}) often starts from the
quantum-classical Liouville equation \cite{KCmodel}, which is a
natural generalization of the usual Moyal's evolution equation of
Wigner distribution to the matrix Schr\"odinger equations. In the
mathematics literature, the quantum-classical Liouville equation was
studied numerically in \cite{Chai} in low dimensions very
recently. While we are able to derive a surface hopping type
algorithm, our derivation is based on a different tool for
semiclassical analysis, the frozen Gaussian approximation,
\textit{aka} the Herman-Kluk propagator
\cites{HermanKluk,Kay1,Kay2,SwartRousse,FGA_Conv,FGACMS}. It is not
yet clear to us whether the surface hopping algorithms used in the
chemistry literature (and the one we derived) can be rigorously
justified from the view point of quantum-classical Liouville
equation. This remains an interesting research direction.
\medskip
The surface hopping algorithm we derive is based on asymptotic
analysis on the phase space. The ansatz of the solution, represented
as an integration over the phase space and possible configurations of
hopping times is given in Section~\ref{sec:fga}, after a brief review
of the frozen Gaussian approximation for single surface case. For the
algorithmic purpose, it is more useful to take a stochastic
representation of the ansatz as a path integral, which is given in
Section~\ref{sec:prob}. A simple Monte Carlo algorithm for the path
space average then leads to a rigorously justifiable surface hopping
algorithm, which we will compare with and connect to those used in the
chemistry literature in Section~\ref{sec:literature}. The asymptotic
derivation of the ansatz is given in Section~\ref{sec:asymptotic}. The
main rigorous approximation result is stated in
Section~\ref{sec:convergence}, together with a few illustrating
examples. Some numerical examples of the algorithm are discussed in
Section~\ref{sec:numerics}. We conclude the paper with proofs of the
main result in Section~\ref{sec:proof}.
\section{\red{Integral representation for semiclassical matrix Schr\"odinger equations}}\label{sec:fga}
\subsection{Two-state \red{matrix} Schr\"odinger equation}
Consider the rescaled Schr\"odinger equation for nuclei and electrons
\begin{equation}\label{SE2}
i\veps \frac{\partial}{\partial t} u = - \frac{\veps^2}{2} \Delta_{x} u - \frac{1}{2} \Delta_{r} u + V (x, r) u,
\end{equation}
where $u(t,x,r)$ is the total wave function, $x\in \R^m$ represents
the nuclear degrees of freedom, $r\in \R^{n}$ denotes the electronic
degrees of freedom, and $V(x,r)$ is the total interaction potential. Here, $\veps \ll 1$ is the square root of the mass ratio between the electrons and the nuclei (for simplicity, we assume that all nuclei have the same
mass).
We define the electronic Hamiltonian
\[
H_e=-\frac{1}{2}\Delta_r+V(x,r),
\]
whose eigenstates $\Psi_k(r;x)$, given by
\begin{equation}\label{eig:eH}
H_e \Psi_k(r;x)=E_k(x)\Psi_k(r;x),
\end{equation}
are called the adiabatic states. Note that in the eigenvalue problem
\eqref{eig:eH}, $x$ enters as a parameter. \red{In particular, viewed as a function of $x$, the eigenvalues $E_k(x)$ will be referred as energy surfaces.}
In this work, we will only consider a finite number of adiabatic
states, that is, we assume the following expansion of the total wave
function
\[
u(t,x,r)=\sum_{n=0}^{N-1} u_n(t,x) \Psi_n(r;x).
\]
This is justified if the rest of the spectrum of $H_e$ is far
separated from that of the states under consideration, so that the
transition between these $N$ energy surfaces and others is negligible.
For the separation condition and the corresponding spectral gap
assumption, the readers may refer to \cites{BO1,BO2} for detailed
discussions.
In fact, for simplicity of notation, we will assume that the number of
states is $N = 2$, the extension to any finite $N$ is
straightforward. In this case,
$u(t, x, r) = u_0(t, x) \Psi_0(r; x) + u_1(t, x) \Psi_1(r; x)$, the
original equation is equivalent to a system of PDEs of
$U =
\bigl(\begin{smallmatrix} u_0 \\
u_1
\end{smallmatrix} \bigr)$,
which we will henceforth refer to as the \emph{matrix Schr\"odinger
equation}:
\begin{equation}\label{vSE}
i \veps \partial_t
\begin{pmatrix} u_0 \\
u_1 \end{pmatrix} = -\frac{\veps^2}{2} \Delta_x \begin{pmatrix} u_0 \\
u_1 \end{pmatrix} +
\begin{pmatrix}
E_0 \\
& E_1
\end{pmatrix} \begin{pmatrix} u_0 \\
u_1 \end{pmatrix} -\red{\frac{\veps^2}{2}} \begin{pmatrix}
D_{00} & {D_{01}} \\
{ D_{10}} & D_{11}
\end{pmatrix} \begin{pmatrix} u_0 \\
u_1 \end{pmatrix} -\red{\veps^2} \sum_{j=1}^m
\begin{pmatrix}
d_{00} &{ d_{01}} \\
{ d_{10}} & d_{11}
\end{pmatrix}_j
\partial_{x_j} \begin{pmatrix} u_0 \\
u_1 \end{pmatrix},
\end{equation}
where
\red
{
\[
D_{kl}(x) =\langle \Psi_k(r;x), \Delta_x \Psi_l(r;x) \rangle_r, \quad
\left(d_{kl}(x)\right)_j =\langle \Psi_k(r;x), \partial_{x_j}
\Psi_l(r;x) \rangle_r, \quad \text{for } k, l = 0, 1,\; j = 1, \ldots,
m.
\]
}
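To make the coupling matrices concrete, one can look at a toy two-level avoided-crossing model (our own illustration, not the $H_e$ of this paper): with $m=1$ and the matrix Hamiltonian $H_e(x) = x\sigma_z + c\,\sigma_x$, a short computation with a mixing angle gives $|d_{01}(x)| = c/(2(x^2+c^2))$, a Lorentzian peaked at the avoided crossing. The sketch below checks this against a gauge-fixed finite difference of numerical eigenvectors:

```python
import numpy as np

def adiabatic_states(x, c):
    # Electronic eigenstates of the toy model H_e(x) = x*sigma_z + c*sigma_x.
    H = np.array([[x, c], [c, -x]])
    _, vecs = np.linalg.eigh(H)  # columns ordered by ascending eigenvalue
    return vecs

def d01(x, c, dx=1e-6):
    # d_{01}(x) = <Psi_0, d/dx Psi_1> by a central finite difference,
    # after aligning the arbitrary sign gauge of the displaced eigenvectors.
    v = adiabatic_states(x, c)
    vp = adiabatic_states(x + dx, c)
    vm = adiabatic_states(x - dx, c)
    for w in (vp, vm):
        for k in range(2):
            if v[:, k] @ w[:, k] < 0:
                w[:, k] *= -1
    return v[:, 0] @ (vp[:, 1] - vm[:, 1]) / (2 * dx)
```

The coupling is largest where the two energy surfaces approach each other, which is exactly where transitions between surfaces become important.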
\subsection{Brief review of the frozen Gaussian approximation}\label{sec:revfga}
Before we consider the matrix Schr\"odinger equation \eqref{vSE}, let us recall the ansatz of frozen Gaussian approximation (\textit{aka} Herman-Kluk propagator) \cites{HermanKluk,Kay1,Kay2,SwartRousse} for scalar Schr\"odinger equation
\begin{equation}\label{eq:singleSE}
i\veps \frac{\partial}{\partial t} u(t, x) = - \frac{\veps^2}{2} \Delta u(t, x) + E(x) u(t, x).
\end{equation}
\red{Note that if we drop the terms
depending on $d$ and $D$ in \eqref{vSE}, it decouples to two equations of the form of \eqref{eq:singleSE}. }
The algorithm that we will derive for \eqref{vSE} can be viewed as an extension of the FGA to the matrix Schr\"odinger equation.
The frozen Gaussian approximation is a convergent approximation to the
solution of \eqref{eq:singleSE} with $\Or(\veps)$ error
\cite{SwartRousse}. It is based on the following \red{integral representation of an approximate solution to \eqref{eq:singleSE}}
\begin{equation}
u_{\FGA}(t,x) =\frac{1}{(2\pi \veps)^{3m/2}} \int_{\R^{3m}} a(t,q,p)e^{\frac{i}{\veps}\Phi(t,x,y,q,p) } u(0, y) \ud y \ud q \ud p,
\end{equation}
where $u(0, \cdot)$ is the initial condition. Here, the phase function $\Phi$
is given by
\[
\Phi(t,x,y,q,p)=S(t,q,p)+P(t,q,p)\cdot (x-Q(t,q,p))-p\cdot(y-q) + \frac{i}{2}|x-Q(t,q,p)|^2 + \frac{i}{2}|y-q|^2.
\]
Given $q$ and $p$ as parameters, the evolution of $Q$ and $P$ are governed by the Hamiltonian flow according to classical Hamiltonian $h(q,p) = \frac{1}{2}\abs{p}^2 + E(q)$,
\begin{align*}
\frac{\ud}{\ud t}Q & =\partial_p h(Q,P),\\
\frac{\ud}{\ud t}P & =-\partial_q h(Q,P),
\end{align*}
with initial conditions $Q(0,p,q)=q$ and $P(0,q,p)=p$. The solution to
the Hamiltonian equations defines a trajectory on the phase space
$\R^{2m}$, which we call \emph{FGA trajectory}. $S$ is the
action corresponding to the Hamiltonian flow, with initial condition
$S(0,q,p)=0$. The equation of $a$ is obtained by matched asymptotics \red{and is given by:}
\begin{equation}
\frac{\ud}{\ud t} a = \frac 1 2 a \tr\bigl( Z^{-1}\bigl(\partial_z P - i \partial_z Q \nabla^2_Q E(Q) \bigr) \bigr)
\end{equation}
with initial condition $a(0,q,p)= 2^{m/2}$, where we have used the
short hand notations
\[
\partial_z = \partial_q - i \partial_p,\quad\text{and} \quad Z=\partial_z (Q+iP).
\]
Equivalently, we can rewrite as
\[
u_{\FGA}(t,x) =\frac{1}{(2\pi \veps)^{3m/2}} \int_{\R^{2m}} A(t,q,p) e^{\frac{i}{\veps}\Theta(t,x,q,p) } \ud q \ud p,
\]
where
\begin{align*}
\Theta(t,x,q,p)& =S(t,q,p)+P(t,q,p)\cdot (x-Q(t,q,p))+ \frac{i}{2}|x-Q(t,q,p)|^2, \\
A(t,q,p) & = a(t,q,p)\int_{\R^m} u_0 (y) e^{\frac{i}{\veps} (-p\cdot(y-q)+ \frac{i}{2}|y-q|^2)} \ud y.
\end{align*}
As $A$ only differs from $a$ by a time-independent multiplicative factor, it satisfies the same evolution equation as $a$ does (with a different initial condition).
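The classical input to the ansatz is thus the trajectory $(Q,P)$ together with the action $S$, which satisfies $\dot S = \tfrac{1}{2}|P|^2 - E(Q)$ along the flow. As a minimal illustration (our own sketch, in one dimension, with a harmonic surface chosen only because it admits an exact check), one FGA trajectory can be integrated with a classical fourth-order Runge--Kutta step:

```python
import math

def fga_trajectory(q0, p0, E, dE, t, dt=1e-3):
    """Evolve (Q, P, S) along a single FGA trajectory: dQ/dt = P,
    dP/dt = -E'(Q), dS/dt = P^2/2 - E(Q), with S(0) = 0 (RK4)."""
    def rhs(y):
        Q, P, _ = y
        return (P, -dE(Q), 0.5 * P * P - E(Q))
    y = (q0, p0, 0.0)
    n = max(1, int(round(t / dt)))
    h = t / n
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(y, k1, k2, k3, k4))
    return y

# Harmonic surface E(q) = q^2/2: the flow is Q(t) = q0 cos t + p0 sin t.
Q, P, S = fga_trajectory(1.0, 0.0, lambda q: 0.5 * q * q, lambda q: q,
                         math.pi / 4)
```

For the harmonic surface the flow is an exact rotation in phase space, which makes the sketch easy to validate against closed-form $(Q,P,S)$.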
The following lemma, which we directly quote from \cite{FGACMS}*{Lemma 3.1}, states that the FGA ansatz reproduces the initial condition.
\begin{lemma}\label{lem:initial}
For $u\in L^2(\R^m)$, we have
\[
u(x) = \frac{1}{(2\pi \veps)^{3m/2}} \int_{\R^{3m}} 2^{\frac{m}{2}} e^{\frac{i}{\veps}\Phi(0,x,y,q,p) } u(y) \ud y \ud q \ud p.
\]
\end{lemma}
The next lemma is crucial for the asymptotic matching to derive the
evolution equations. The proof can be found in \cite{FGACMS}*{Lemma
3.2} and \cite{FGA_Conv}*{Lemma 5.2}. \red{We recall this
lemma here as it will be used in the extension of frozen Gaussian
approximation to the matrix Schr\"odinger equations.}
\begin{lemma} \label{lem:asym}
For any vector $b(y,q,p)$ and any matrix $M(y,q,p)$ in Schwartz class viewed as functions of $(y,q,p)$, we have
\[
b \cdot (x-Q) \sim - \veps \partial_{z_k} (b_j Z_{jk}^{-1}),
\]
and
\[
(x-Q) \cdot M (x-Q) \sim \veps \partial_{z_l} Q_j M_{jk}Z_{kl}^{-1} +\Or(\veps^2)
\]
where Einstein's summation convention has been assumed. Moreover, for any
multi-index $\alpha$ such that $|\alpha| \ge 3$,
\[
(x-Q)^{\alpha} \sim \Or(\veps^{|\alpha|-1}).
\]
Here, we denote by $f \sim g$ that
\[
\int_{\R^{3m}} f e^{\frac i \veps \Phi} \ud y \ud q \ud p =\int_{\R^{3m}} g e^{\frac i \veps \Phi} \ud y \ud q \ud p.
\]
\end{lemma}
\subsection{\red{The integral representation for surface hopping}}\label{sec:fgash}
We now consider extending the \red{integral representation in
the previous subsection} to the matrix Schr\"odinger equation by
incorporating the coupling of the two energy surfaces, \red{
which is the basis of the FGA-SH algorithm.}
Let us assume, for simplicity of notation, that the initial condition
concentrates on energy surface $E_0$ (\textit{i.e.}, $u_1(0, x) = 0$
\red{and $u_0$ is non-zero}). The extension to general
initial condition is straightforward as the equation is linear.
\red{We construct an approximation to the total wave function
following the ansatz below. We will prove rigorously that it gives
an $\Or(\veps)$ approximation to the true solution; see the
convergence statement in Section~\ref{sec:convergence}. The
integral representation here is fully deterministic, and our FGA-SH
algorithm can be understood as a Monte Carlo algorithm for
evaluation.}
\begin{equation}\label{ansatz}
u_{\FGA}(t,x,r)= K^{(0)}_{00}(t,x,r)+ K^{(1)}_{01}(t,x,r)+K^{(2)}_{00}(t,x,r)+K^{(3)}_{01}(t,x,r)+\cdots
\end{equation}
where,
\begin{equation}\label{eq:defK}
K^{(l)}_{mn}(t,x,r)=\Psi_n(r;x) u_m^{(l)}(t,x)
\end{equation}
\red{represents the contribution to the ansatz from wave packets that are initiated on surface $m$, end on surface
$n$, and switch the propagating surface $l$ times in between --- the meaning of which will become clear below.}
Thus, we can rewrite \eqref{ansatz} as
\[
u_{\FGA}(t,x,r)=\Psi_0(r; x) \left(u_0^{(0)}(t, x)+u_0^{(2)}(t, x)+\cdots\right) + \Psi_1(r; x) \left(u_0^{(1)}(t, x)+u_0^{(3)}(t, x)+\cdots\right).
\]
We will refer this as the surface hopping ansatz. \red{The
idea of splitting the wave function in this way is similar to that
used in the work by Wu and Herman \cites{WuHerman1,WuHerman2,
WuHerman3}, which is also based on the frozen Gaussian
approximation. The two approaches are different however in several
essential ways, as we will explain in \S\ref{sec:literature}.}
\red{As we consider initial condition starting from the surface $0$,
for simplicity of notation, we will drop the subscripts $0$ in
$u_0^{(n)}$ for now. In the ansatz, $u^{(0)}$ consists of
contribution from wave packets propagating only on energy surface
$E_0$, without switching to $E_1$ surface.} It is given by the
ansatz of frozen Gaussian approximation \red{on a single surface} as
in Section \ref{sec:revfga}:
\begin{equation}\label{eq:u00}
u^{(0)}(t, x) =\frac{1}{(2\pi\veps)^{3m/2}} \int A^{(0)}(t,z_0) \exp \left( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \right) \ud z_0,
\end{equation}
where we have used $z_0=(q_0,p_0)$ to denote phase space variables,
\[
\Theta^{(0)}(t,q,p,x)=S^{(0)}(t,q,p)+P^{(0)}(t,q,p)\cdot \left( x-Q^{(0)} (t,q,p)\right) + \frac{i}{2} \left|x-Q^{(0)}(t,q,p)\right|^2,
\]
and
\begin{equation}\label{eq:c00}
A^{(0)}(t,q,p) = a^{(0)}(t,q,p)\int_{\R^m} u_0 (y) e^{\frac{i}{\veps} (-p\cdot(y-q)+ \frac{i}{2}|y-q|^2)} \ud y.
\end{equation}
Here, the evolution of the quantities $S^{(0)}$, $P^{(0)}$,
$Q^{(0)}$ and $A^{(0)}$ are determined by matched asymptotic and
will be specified below. We will refer these quantities as \emph{FGA
variables} in the sequel.
\red{For $n > 0$, the wave function $u^{(n)}$ accounts for the contribution of wave packets that switch between the two energy surfaces $n$ times. Given $n$, to specify the integral representation, let us denote by
$T_{n:1}=(t_n,\cdots,t_1)$ a sequence of times ordered backwardly, \textit{i.e.,} satisfying
\[
0 \le t_1 \le t_2 \le \cdots \le t_n \le t.
\]}
The ansatz for $u^{(n)}$
is given by
\begin{multline}\label{eq:u0n}
u^{(n)}(t, x) = \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \int_0^t \ud t_n \int_0^{t_n} \ud t_{n-1} \cdots \int_0^{t_2} \ud t_1 \; \tau^{(1)}(T_{1:1}, z_0)\cdots \tau^{(n)} (T_{n:1},z_0) \times \\
\times A^{(n)}(t, T_{n:1}, z_0) \exp\left( \frac{i}{\veps}
\Theta^{(n)}(t,T_{n:1}, z_0, x) \right)
\end{multline}
where
\red{
\[
\Theta^{(n)}(t,T_{n:1}, z_0,x)=S^{(n)}(t,T_{n:1}, z_0)+P^{(n)}(t,T_{n:1}, z_0)\cdot \left( x-Q^{(n)} (t,T_{n:1}, z_0)\right) + \frac{i}{2} \left|x-Q^{(n)}(t,T_{n:1}, z_0)\right|^2,
\]
and}
\[
A^{(n)}(t, T_{n:1}, z_0) =a^{(n)}(t, T_{n:1}, z_0)\int_{\R^m} u_0 (y) e^{\frac{i}{\veps} (-p\cdot(y-q)+ \frac{i}{2}|y-q|^2)} \ud y.
\]
To simplify the notation, we will often write \eqref{eq:u0n} as
\begin{equation}
u^{(n)}(t, x) = \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \int_{0 \le t_1 \le\cdots \le t_n \le t} \tau^{(1)}\cdots\tau^{(n)} \; A^{(n)} \; \exp\left( \frac{i}{\veps}
\Theta^{(n)}\right) \ud T_{n:1},
\end{equation}
where $\ud T_{n:1}=\ud t_1 \cdots \ud t_n$. \red{Note that in
\eqref{eq:u0n}, we integrate over all possible sequences of $n$ ordered times
in the time interval $[0, t]$.}
\red{Note that given the time sequence $T_{n:1}$,
\eqref{eq:u0n} depends on the FGA variables
$S^{(n)}, P^{(n)}, Q^{(n)}, A^{(n)}$, and also $\tau^{(k)}$ for
$k = 1, \ldots, n$. We will refer $\tau^{(k)}$ as the \emph{hopping
coefficients}, since they are related to the jumping intensity of
our stochastic algorithm. Note that as other FGA variables,
$\tau^{(k)}(T_{k:1},z_0)$ depend on the time sequence $T_{k:1}$ and
$z_0$.}
Let us now specify the evolution equations for the FGA variables and hopping coefficients involved in \eqref{eq:u00} and \eqref{eq:u0n} \red{to complete the integral representation}. The asymptotic derivation of these equations will be given in Section~\ref{sec:asymptotic}.
\red{Recall that for $n = 0$, the FGA trajectory evolves on a
single energy surface $E_0$. For $n > 0$, the trajectory will switch
between the two surfaces at given time sequences $T_{n:1}$. More
precisely, $T_{n:1} = (t_n, t_{n-1}, \ldots, t_1)$ determines a
partition of the time interval $[0,t]$.} Each FGA variable evolves
piecewisely in time on alternating energy surfaces, starting on energy
surface $0$ (due to our assumption of the initial condition). For
convenience, we take the convention $t_0 = 0$ and $t_{n+1} = t$ in the
following.
\red{When $t \in [t_k,t_{k+1})$ with $k$ an even integer,
all the FGA variables evolve on energy surface $l^{(k)}=0$, while for
$k$ odd, the trajectory evolves on energy surface $l^{(k)}=1$. The
evolution equations are given accordingly as
\begin{subequations}\label{eq:evenevolve}
\begin{align}
\frac{\ud}{\ud t} Q^{(k)} & = P^{(k)}, \\
\frac{\ud}{\ud t} P^{(k) }& =- \nabla E_{l^{(k)}} (Q^{(k)}),\\
\frac{\ud}{\ud t} S^{(k)} &=\frac{1}{2} ( P^{(k)} )^2 - E_{l^{(k)}} (Q^{(k)}),\\
\frac{\ud}{\ud t} A^{(k)} & =\frac 1 2 A^{(k)} \tr\left( (Z^{(k)})^{-1}\left(\partial_z P^{(k)} - i \partial_z Q^{(k)} \nabla^2_Q E_{l^{(k)}}(Q^{(k)}) \right) \right) - A^{(k)} d_{l^{(k)}l^{(k)}}\cdot P^{(k)}.
\end{align}
\end{subequations}
We observe that the evolution equations \eqref{eq:evenevolve} are similar to those in the single surface case. This connection will become more clear in our asymptotic derivation later in Section~\ref{sec:asymptotic}.
}
The crucial difference from the single surface case is that the
trajectory now switches between the two energy surfaces: at each time $t = t_k$ for $1 \le k \le n$, the trajectory hops
from one energy surface to the other. The FGA variables are continuous in time,
\begin{subequations}\label{eq:contcond}
\begin{align}
A^{(k)}(t_{k}, T_{k:1},z_0) = A^{(k-1)}(t_{k},T_{k-1:1},z_0), \\
S^{(k)}(t_{k}, T_{k:1},z_0) = S^{(k-1)}(t_{k},T_{k-1:1},z_0), \\
P^{(k)}(t_{k}, T_{k:1},z_0) = P^{(k-1)}(t_{k},T_{k-1:1},z_0), \\
Q^{(k)}(t_{k}, T_{k:1},z_0) = Q^{(k-1)}(t_{k},T_{k-1:1},z_0),
\end{align}
\end{subequations}
such that the left hand sides serve as the initial conditions for the
evolution equations during the next time interval $[t_k, t_{k+1})$.
The FGA trajectory \red{for two energy surfaces} is thus defined on the
extended phase space $\R^{2m} \times \{0, 1\}$ by
the piecewise Hamiltonian dynamics on each energy surface together with the
\red{continuity} condition \eqref{eq:contcond}.
Finally, the hopping coefficient $\tau^{(k)}$ is given by
\begin{equation}\label{eq:taudef}
\tau^{(k)}(T_{k:1}, z_0)=
\begin{cases}
-P^{(k)}(\red{t_k,}T_{k:1}, z_0) \cdot \red{d_{01}}\bigl(Q^{(k)}(\red{t_k,}T_{k:1}, z_0)\bigr), & k \text{ even}; \\
-P^{(k)}(\red{t_k,}T_{k:1}, z_0) \cdot \red{d_{10}}\bigl(Q^{(k)}(\red{t_k,}T_{k:1}, z_0)\bigr), & k \text{ odd}. \\
\end{cases}
\end{equation}
It is worth remarking that $\tau^{(k)}(T_{k:1}, z_0)$ is in general complex valued; we will therefore choose its modulus as the jumping intensity in the probabilistic interpretation of the ansatz in Section~\ref{probab}.
\section{\red{Frozen Gaussian approximation with surface hopping as a stochastic interpretation}}\label{sec:prob}
We have seen in Section~\ref{sec:fgash} that the \red{surface hopping
ansatz} is a sum of contributions involving integration over the phase
space and over all possible sequences of ordered times. Since the phase
space can be of high dimension in chemical applications and the number
of time sequences grows factorially fast \red{with respect to $n$}, a
direct discretization of the integral does not give a \red{practical} algorithm. Observe that we essentially have a high-dimensional
integral to deal with, and hence it is natural to look for stochastic
methods (in analogy to the Monte Carlo method for quadrature). Motivated
by this, in this section we present a stochastic representation
of the \red{surface hopping ansatz}, which can be used to numerically approximate the
\red{solution to the Schr\"odinger equation}. The resulting algorithm bears similarity to the surface
hopping algorithms developed in the chemistry literature, which will be
elaborated in Section~\ref{sec:literature}.
\subsection{Probabilistic interpretation of FGA for single surface}
Before we consider the frozen Gaussian approximation with surface
hopping, let us start with the usual FGA on a single surface. Recall that the ansatz is given in this case by
\begin{equation}\label{FGA_single}
\begin{aligned}
u_{\FGA} (t,x) & =\frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \; A(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta(t,z_0,x) \Bigr) \\
& =\frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \Abs{A(0,
z_0)} \; \frac{A(t, z_0)}{\Abs{A(0, z_0)}} \exp \Bigl(
\frac{i}{\veps} \Theta(t,z_0,x) \Bigr),
\end{aligned}
\end{equation}
\red{and from Lemma~\ref{lem:initial} that }
\begin{equation}
u_0(x) = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \; A(0, z_0)
\exp \Bigl( \frac{i}{\veps} \Theta(0,z_0,x) \Bigr).
\end{equation}
Assuming that $A(0,z_0)$ is an integrable function in $\R^{2m}$,
\textit{i.e.},
\begin{equation*}
\int_{\R^{2m}} \ud z_0 |A(0,z_0)| < \infty,
\end{equation*}
we can define a probability measure $\mathbb{P}_0$ on $\R^{2m}$ such
that
\begin{equation}\label{eq:P0single}
\mathbb{P}_0(\Omega) = \mathcal{Z}^{-1} \frac{1}{(2\pi\veps)^{3m/2}} \int_{\Omega} \ud z_0 \Abs{A(0,z_0)}
\end{equation}
for any $\Omega \subset \R^{2m}$, where $\mathcal{Z} = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \Abs{A(0,z_0)}$ is a normalization factor so that $\mathbb{P}_0$ is a probability measure. Note that in general $A(0, z_0)$ is complex valued, hence the necessity of taking the modulus in the definition \eqref{eq:P0single}. We can thus rewrite
\begin{equation}\label{eq:FGAprobsingle}
\begin{aligned}
u_{\FGA}(t,x) & = \mathcal{Z} \int \mathbb{P}_0(\ud z_0) \; \frac{A(t, z_0)}{\Abs{A(0,z_0)}} \exp \Bigl( \frac{i}{\veps} \Theta(t,z_0,x) \Bigr) \\
& = \mathcal{Z} \mathbb{E}_{z_0} \Biggl[ \frac{A(t, z_0)}{\Abs{A(0,z_0)}} \exp \Bigl( \frac{i}{\veps} \Theta(t,z_0,x) \Bigr) \Biggr],
\end{aligned}
\end{equation}
where the expectation is taken with respect to $\mathbb{P}_0$. Thus, we may use a Monte Carlo sampling for $u_{\FGA}(t, x)$ as
\begin{equation}
u_{\FGA}(t, x) \approx \frac{\mathcal{Z}}{M} \sum_{i=1}^M \frac{A(t, z_0^{(i)})}{\bigl\lvert A(0,z_0^{(i)}) \bigr\rvert} \exp \Bigl( \frac{i}{\veps} \Theta(t,z_0^{(i)},x) \Bigr),
\end{equation}
where $\{z_0^{(i)}\}_{i = 1, \ldots, M} \subset \R^{2m}$ are
independent identically distributed samples from the probability
measure $\mathbb{P}_0$. Algorithmically, once $z_0^{(i)}$ is sampled,
we evolve the FGA variables $Q, P, A, S$ up to time $t$, which gives
the value of the integrand. Denote by $z_t = (Q_t, P_t)$ the FGA
trajectory, so that $z_t$ satisfies the Hamiltonian flow with
Hamiltonian $h(q, p)$:
\[
\ud z_t = (h_p,-h_q) \ud t.
\]
The trajectory $z_t$ corresponds to a one-to-one map on the phase
space: $z_0 \mapsto z_t$. As the trajectory is deterministic once the
initial point $z_0$ is prescribed, we can equivalently view the
expectation over the initial condition in \eqref{eq:FGAprobsingle} as an
expectation over \red{an ensemble of} trajectories $z_t$; this point of view is
useful for the extension to cases with surface hopping.
\red{In summary,} in the single surface case, the FGA ansatz can be evaluated by a
stochastic approximation where the randomness comes from sampling of
initial points of the FGA trajectory.
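To make the sampling concrete, the following is a minimal numerical sketch (an illustration only, not the implementation of Section~\ref{sec:numerics}) of the Monte Carlo estimator for a one-dimensional harmonic surface $E_0(q) = q^2/2$, where the Hamiltonian flow and the action have closed forms. The initial weight $\Abs{A(0, z_0)}$ is taken to be a hypothetical Gaussian, and one can check that for this quadratic potential the amplitude ODE yields $A(t, z_0)/A(0, z_0) = e^{-it/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 0.1  # semiclassical parameter (assumed value for the demo)

def flow(q0, p0, t):
    """Exact Hamiltonian flow for h(q, p) = p^2/2 + q^2/2."""
    return (q0 * np.cos(t) + p0 * np.sin(t),
            -q0 * np.sin(t) + p0 * np.cos(t))

def action(q0, p0, t):
    """S(t) = int_0^t (|P|^2/2 - E_0(Q)) ds, in closed form."""
    return 0.25 * (p0**2 - q0**2) * np.sin(2 * t) \
        + 0.5 * q0 * p0 * (np.cos(2 * t) - 1.0)

def fga_estimate(t, x, num_samples=4000, spread=1.0):
    """Monte Carlo estimate of the single-surface FGA ansatz (m = 1).

    Initial points z0 = (q0, p0) are drawn from a density proportional
    to the hypothetical weight |A(0, z0)| = exp(-|z0|^2/(2 spread^2)),
    whose integral gives the normalization factor Z below.
    """
    q0 = rng.normal(0.0, spread, num_samples)
    p0 = rng.normal(0.0, spread, num_samples)
    q, p = flow(q0, p0, t)
    theta = action(q0, p0, t) + p * (x - q) + 0.5j * (x - q) ** 2
    amp_ratio = np.exp(-0.5j * t)   # A(t, z0)/|A(0, z0)| for E_0 = q^2/2
    # Z = (2 pi eps)^{-3m/2} * integral of |A(0, z0)| over R^2, with m = 1.
    Z = 2.0 * np.pi * spread**2 / (2.0 * np.pi * EPS) ** 1.5
    return Z * np.mean(amp_ratio * np.exp(1j / EPS * theta))
```

Each sample evolves a single deterministic trajectory, and the stochastic error decays as $M^{-1/2}$ independently of the phase-space dimension.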
\subsection{Probabilistic interpretation for FGA with surface hopping} \label{probab}
We now extend the probabilistic interpretation to the cases with
surface hopping. Since the FGA trajectory in this case depends on the
energy surface on which it evolves, to prescribe a trajectory, we need
to also keep track of the energy surface. Thus, the phase space
extends to $\wt{z}_t = (z_t, l_t) \in \R^{2m} \times \{0, 1\}$, where $l_t$
indicates the energy surface that the trajectory is on at time $t$.
To take into account the possible hopping times, we will construct a
stochastic process for $\wt{z}_t$, consistent with the ansatz we
have. The evolution of $z_t$ is deterministic on the energy surface that
$l_t$ indicates, given by the corresponding Hamiltonian flow:
\begin{equation}\label{eq:drift}
\ud z_t= \bigl( p_t, -\nabla_{q} E_{l_t}(q_t) \bigr) \ud t.
\end{equation}
This is coupled with a Markov jump process of $l_t$ which is c\`adl\`ag and hops between $0$ and $1$, with infinitesimal transition rate
\begin{equation}
\mathbb{P}\bigl(l_{t+ \delta t}=m \mid l_t=n, \,z_t = z \bigr) = \delta_{nm} + \lambda_{nm}(z) \delta t + o(\delta t)
\end{equation}
for $m, n \in \{0, 1\}$, where the rate matrix is given by
\begin{equation}
\lambda(z) =
\begin{pmatrix}
\lambda_{00}(z) & \lambda_{01}(z) \\
\lambda_{10}(z) & \lambda_{11}(z)
\end{pmatrix} =
\begin{pmatrix}
- \Abs{p \cdot \red{d_{10}}(q)} & \Abs{p \cdot \red{d_{10}}(q)} \\
\Abs{p \cdot \red{d_{01}}(q)} & - \Abs{p \cdot \red{ d_{01}}(q)}
\end{pmatrix}.
\end{equation}
\red{Note that $\lambda_{01}(z)$ corresponds to the infinitesimal rate from surface $0$ to $1$, and thus it is given by $\Abs{ p \cdot d_{10}(q)}$.}
We remark that $p \cdot d_{10}(q)$ is in general complex, and hence we take
its modulus in the rate matrix; also note that the rate is state
dependent (on $z$). Thus $\wt{z}_t$ is a Markov switching
process. Equivalently, denoting by $F_t(z, l)$ the probability distribution on the
extended phase space at time $t$, the corresponding
forward Kolmogorov equation is given by
\begin{equation}
\frac{\partial}{\partial t}
F_t(z, l) +
\bigl\{ h_l, F_t(z, l) \bigr\} = \sum_{m = 0}^1 \lambda_{ml}(z) F_t(z, m),
\end{equation}
where $\{\cdot, \cdot\}$ stands for the Poisson bracket corresponding to the Hamiltonian dynamics \eqref{eq:drift},
\begin{equation*}
\bigl\{h, F\bigr\} = \partial_p h \cdot \partial_q F - \partial_q h \cdot \partial_p F.
\end{equation*}
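The switching process just described can be sketched by a simple time-discretized simulation. The surfaces and the coupling function below are illustrative assumptions (not taken from the paper), and the optional `rate_fn` argument replaces the state-dependent intensity $\Abs{p \cdot d(q)}$ by an arbitrary rate, which is convenient for testing against a constant-rate process.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) surfaces; not taken from the paper.
E_GRAD = {0: lambda q: q,            # E_0(q) = q^2 / 2
          1: lambda q: q - 0.5}      # E_1(q) = (q - 0.5)^2 / 2

def coupling(q):
    """Hypothetical scalar stand-in for |d_01(q)| = |d_10(q)|."""
    return 0.3 / (1.0 + q * q)

def simulate(q0, p0, T, dt=1e-2, rate_fn=None):
    """Evolve (z_t, l_t): Hamiltonian drift on surface l_t plus jumps of l_t.

    The jump intensity is |p . d(q)| by default; rate_fn(q, p) overrides it.
    Returns the final (q, p, l) and the list of hopping times.
    """
    q, p, l, hops = q0, p0, 0, []
    for k in range(int(round(T / dt))):
        rate = rate_fn(q, p) if rate_fn else abs(p) * coupling(q)
        if rng.random() < rate * dt:   # first-order jump step
            l = 1 - l
            hops.append(k * dt)
        p -= dt * E_GRAD[l](q)         # symplectic Euler on surface l
        q += dt * p
    return q, p, l, hops
```

The label $l_t$ is piecewise constant with finitely many jumps, and the position-momentum part stays continuous across hops, matching the trajectory description above.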
Given a time interval $[0, t]$, thanks to \eqref{eq:drift}, the $z_s$
part of the trajectory $\wt{z}_s = (z_s, l_s)$ is continuous and
piecewise differentiable, while $l_s$ is piecewise constant with
almost surely finitely many jumps. Given a realization of the trajectory $\wt{z}_s = (z_s, l_s)$ starting from $\wt{z}_0 = (z_0, 0)$,\footnote{Generalization to initial conditions starting from both energy surfaces is straightforward.} we denote by $n$ the number of jumps of $l_s$ (thus $n$ is a random variable) and by $\bigl\{t_1, \cdots, t_n\bigr\}$ the discontinuity set of $l_s$, which is an increasingly ordered random sequence. By the properties of the associated
counting process, the probability that there is no jump $(n = 0)$ is
given by
\begin{equation}
\mathbb{P}(n = 0) = \exp\Biggl(-\int_0^t \lambda_{01}(z_s) \ud s\Biggr)
= \exp\Biggl(-\int_0^t \Abs{\tau^{(1)}(s, z_0)} \ud s \Biggr),
\end{equation}
where $\tau^{(1)}$, defined in \eqref{eq:taudef}, is the hopping coefficient in the ansatz of FGA with surface hopping. Similarly, the probability of one jump $(n=1)$ is given by
\begin{equation}
\mathbb{P}(n = 1) = \int_0^t \ud t_1 \; \Abs{\tau^{(1)}(t_1, z_0)} \exp\Biggl( - \int_0^{t_1} \Abs{\tau^{(1)}(s, z_0)} \ud s \Biggr) \exp\Biggl( - \int_{t_1}^{t} \Abs{\tau^{(2)}(s, T_{1:1}, z_0)} \ud s \Biggr).
\end{equation}
In addition, conditioning on $n = 1$, the hopping time is distributed with probability density
\begin{equation}
\varrho_1(t_1) \propto \Abs{\tau^{(1)}(t_1, z_0)} \exp\Biggl( - \int_0^{t_1} \Abs{\tau^{(1)}(s, z_0)} \ud s \Biggr) \exp\Biggl( - \int_{t_1}^{t} \Abs{\tau^{(2)}(s, T_{1:1}, z_0)} \ud s \Biggr).
\end{equation}
More generally, we have
\begin{multline}\label{eq:Pnk}
\mathbb{P}(n = k) = \int_{0<t_1<\cdots<t_k<t} \ud T_{k:1} \; \prod_{j=1}^k \Abs{\tau^{(j)}(T_{j:1}, z_0)} \\
\times \exp\Biggl(-\int_{t_k}^t \Abs{\tau^{(k+1)}(s, T_{k:1}, z_0)} \ud s \Biggr) \prod_{j=1}^{k} \exp\Biggl( - \int_{t_{j-1}}^{t_j} \Abs{\tau^{(j)}(s, T_{j-1:1}, z_0)} \ud s \Biggr),
\end{multline}
and the probability density of $(t_1, \cdots, t_k)$ given there are $k$ jumps in total is
\begin{equation}\label{eq:varrhok}
\varrho_k(t_1, \cdots, t_k) \propto
\begin{cases}
\begin{aligned}
& \prod_{j=1}^k \Abs{\tau^{(j)}(T_{j:1}, z_0)} \exp\Biggl(-\int_{t_k}^t \Abs{\tau^{(k+1)}(s, T_{k:1}, z_0)} \ud s \Biggr) \\
& \qquad \qquad \times \prod_{j=1}^{k} \exp\Biggl( - \int_{t_{j-1}}^{t_j} \Abs{\tau^{(j)}(s, T_{j-1:1}, z_0)} \ud s \Biggr),
\end{aligned} & \text{if } t_1 \le t_2 \le \cdots \le t_k; \\
0, & \text{otherwise}.
\end{cases}
\end{equation}
We remark that the complicated expressions are due to the fact that the intensity function $\lambda(z)$ of the jump process depends on the current state variable $z$, and thus on the previous hopping times. These formulas reduce to the familiar expressions for a homogeneous Poisson process when the intensity is constant.
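As a numerical sanity check of this reduction, for a constant (assumed) intensity $\lambda$ the nested integral in \eqref{eq:Pnk} with $k = 2$ collapses to the Poisson weight $(\lambda t)^2 e^{-\lambda t}/2$, since the survival factors concatenate into a single $e^{-\lambda t}$ and the ordered simplex $0 < t_1 < t_2 < t$ has volume $t^2/2$:

```python
import math

def p_n2_quadrature(lam, t, n=400):
    """Riemann-sum evaluation of P(n = 2) for constant intensity lam.

    With a constant rate, the integrand in the nested integral is the
    constant lam^2 * exp(-lam*t), so only the volume of the ordered
    simplex 0 < t1 < t2 < t remains to be computed.
    """
    h = t / n
    simplex_volume = sum(h * h for i in range(n) for j in range(i + 1, n))
    return lam * lam * math.exp(-lam * t) * simplex_volume

def poisson_weight(lam, t, k):
    """(lam*t)^k / k! * exp(-lam*t), the Poisson probability of k jumps."""
    return (lam * t) ** k / math.factorial(k) * math.exp(-lam * t)
```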
Let us now consider a \red{path integral that takes the average over the ensemble of trajectories}
\begin{multline}\label{eq:trajavg}
\wt{u}(t, x, r) = \mathcal{Z} \mathbb{E}_{\wt{z}_t} \Biggl[\Psi_{n\bmod 2}(r; x)
\biggl( \prod_{k=1}^n \frac{\tau^{(k)}(T_{k:1}, z_0)}{\Abs{\tau^{(k)}(T_{k:1}, z_0)}} \biggr) \frac{A^{(n)}(t, T_{n:1}, z_0)}{\Abs{A^{(0)}(0, z_0)}} \exp\Bigl(\frac{i}{\veps} \Theta^{(n)}(t, T_{n:1}, z_0, x)\Bigr) \\
\times \exp\Biggl( \int_{t_n}^t \Abs{\tau^{(n+1)}(s, T_{n:1}, z_0)} \ud s \Biggr) \prod_{k=1}^{n} \exp\Biggl( \int_{t_{k-1}}^{t_k} \Abs{\tau^{(k)}(s, T_{k-1:1}, z_0)} \ud s \Biggr) \Biggr],
\end{multline}
where the initial condition $z_0$ is sampled from $\mathbb{P}_0$ with probability density on $\R^{2m}$ proportional to $\abs{A^{(0)}(0, z_0)}$ and
$\mathcal{Z}$ is a normalization factor (assuming integrability of $A^{(0)}(0, z_0)$ as before)
\begin{equation}
\mathcal{Z} = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \Abs{A^{(0)}(0, z_0)} \ud z_0.
\end{equation}
Here, the terms on the second line of \eqref{eq:trajavg}, namely
\[
\exp\Biggl( \int_{t_n}^t \Abs{\tau^{(n+1)}(s, T_{n:1}, z_0)} \ud s \Biggr) \prod_{k=1}^{n} \exp\Biggl( \int_{t_{k-1}}^{t_k} \Abs{\tau^{(k)}(s, T_{k-1:1}, z_0)} \ud s \Biggr)
\]
are the weighting terms due to the non-homogeneous state dependent
Poisson process. Note that the whole term inside the square bracket
in \eqref{eq:trajavg} is determined by the trajectory $\wt{z}_t$, and
thus can be viewed as a functional (with fixed $t$ and $x$) evaluated
on the trajectory. We now show that \eqref{eq:trajavg} is in fact a
stochastic representation of the \red{FGA surface hopping ansatz given in \S\ref{sec:fgash}}, and hence we obtain an
asymptotically \red{convergent} path integral representation of the
semiclassical matrix Schr\"odinger equation. By the choice of the
initial condition, we have
\begin{multline*}
\wt{u}(t, x, r) = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \; \mathbb{E}_{\wt{z}_t} \Psi_{n\bmod 2}(r; x)
\Biggl[ \biggl( \prod_{k=1}^n \frac{\tau^{(k)}(T_{k:1}, z_0)}{\Abs{\tau^{(k)}(T_{k:1}, z_0)}} \biggr) A^{(n)}(t, T_{n:1}, z_0) \exp\Bigl(\frac{i}{\veps} \Theta^{(n)}(t, T_{n:1}, z_0, x)\Bigr) \\
\times \exp\Biggl( \int_{t_n}^t \Abs{\tau^{(n+1)}(s, T_{n:1}, z_0)} \ud s \Biggr) \prod_{k=1}^{n} \exp\Biggl( \int_{t_{k-1}}^{t_k} \Abs{\tau^{(k)}(s, T_{k-1:1}, z_0)} \ud s \Biggr) \;\Bigg\vert\; \wt{z}_t = (z_0, 0) \Biggr].
\end{multline*}
Since the randomness of the trajectory given initial condition only lies in the hopping times, we further calculate
\begin{equation*}
\begin{aligned}
\wt{u}(t, x, r) & = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0
\; \sum_{n=0}^{\infty} \mathbb{P}(n) \Psi_{n\bmod 2}(r; x)
\int_{([0, t])^{n}} \varrho_n(\ud t_1 \cdots \ud t_n)
\Biggl[ \biggl( \prod_{k=1}^n \frac{\tau^{(k)}(T_{k:1}, z_0)}{\Abs{\tau^{(k)}(T_{k:1}, z_0)}} \biggr) A^{(n)}(t, T_{n:1}, z_0)\\
& \qquad \times \exp\Bigl(\frac{i}{\veps} \Theta^{(n)}(t, T_{n:1},
z_0, x)\Bigr) \exp\Biggl( \int_{t_n}^t \Abs{\tau^{(n+1)}(s,
T_{n:1}, z_0)} \ud s \Biggr) \prod_{k=1}^{n} \exp\Biggl(
\int_{t_{k-1}}^{t_k} \Abs{\tau^{(k)}(s, T_{k-1:1}, z_0)} \ud s
\Biggr) \Biggr] \\
& = \frac{1}{(2\pi\veps)^{3m/2}} \int_{\R^{2m}} \ud z_0 \;
\sum_{n=0}^{\infty} \Psi_{n\bmod 2}(r; x)\int_{0<t_1<\cdots<t_n<t} \ud T_{n:1}
\; \prod_{k=1}^n \tau^{(k)}(T_{k:1}, z_0) A^{(n)}(t, T_{n:1}, z_0)\\
& \hspace{27em} \times \exp\Bigl(\frac{i}{\veps} \Theta^{(n)}(t, T_{n:1}, z_0, x)\Bigr) \\
& = u_{\FGA}(t, x, r),
\end{aligned}
\end{equation*}
where the second equality follows from \eqref{eq:Pnk} and \eqref{eq:varrhok}.
The above calculation assumes the summability of the terms in the FGA ansatz, which will be rigorously proved in Section~\ref{sec:absconv}.
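To see why the weighting terms matter, consider the toy case of a constant hopping coefficient $\tau \equiv -\lambda < 0$ (an assumption made only for this check): jumps then occur at rate $\lambda$, every path carries the same survival-correction weight $e^{\lambda t}$, the phase factor is $(-1)^n$, and the weighted average must reproduce $\sum_n \int \tau^n \,\ud T_{n:1} = e^{-\lambda t}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_average(lam, t, num_samples):
    """Average of (survival weight) * (phase) over sampled jump counts.

    For a constant rate the jump count is Poisson(lam*t), the weight
    exp(int |tau| ds) = exp(lam*t) is the same for every path, and the
    phase factor tau/|tau| contributes (-1)^n.
    """
    n_jumps = rng.poisson(lam * t, size=num_samples)
    return np.mean(np.exp(lam * t) * (-1.0) ** n_jumps)
```

Dropping the weight $e^{\lambda t}$ would instead give $e^{-2\lambda t}$, a systematic bias of exactly the kind the correction factors in \eqref{eq:trajavg} remove.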
\subsection{Connection to surface hopping algorithms}\label{sec:literature}
\red{As we have shown in Section~\ref{probab}, the
surface hopping ansatz is equivalent to the path integral
representation \eqref{eq:trajavg} based on averaging over
an ensemble of trajectories, so the FGA-SH algorithm is a natural Monte
Carlo sampling scheme. The FGA-SH algorithm consists of
sampling the initial points $(z_0, 0)$ of the trajectory,
numerically integrating the trajectories until the prescribed time
$t$, and finally evaluating the empirical average to obtain an
approximation to the solution. A detailed description of the algorithm
and numerical tests will be presented in
Section~\ref{sec:numerics}.}
This is a good place to connect to and compare with the surface
hopping algorithms in the chemistry literature. Our algorithm is based
on the stochastic process $\wt{z}_t$ which hops between two energy
surfaces, and thus it is very similar in spirit to the fewest switches
surface hopping (FSSH) and related algorithms. However, the jumping intensity
of $l_t$ is very different from what is used in the FSSH algorithm; in
fact, the hopping in FSSH is determined by an auxiliary ODE for the
evolution of ``population'' on the two surfaces \cite{Tully}. It is
not yet clear to us how such an ODE arises from the Schr\"odinger
equation. \red{On the other hand, given the trajectories produced as
in FSSH, one could in fact re-weight those to calculate the path
integral \eqref{eq:trajavg} which might correspond to an importance
sampling scheme. This connection would be left for future explorations.}
Another major difference with the surface hopping algorithms proposed
in the chemistry literature is that the trajectory $\wt{z}_t$ is
continuous in time on the phase space, while in FSSH and other versions
of surface hopping, a momentum shift is introduced to conserve the
classical energy along the trajectory when hopping occurs (if hopping
occurs from energy surface $0$ to $1$, it is required that $h_0(p, q)
= h_1(p', q)$ where $p'$ is the momentum after hopping). Note that as
in the FGA for single surface Schr\"odinger equation, each Gaussian
evolved in the FGA with surface hopping does not solve the matrix
Schr\"odinger equation, and only the average of trajectories gives an
approximation to the solution. Therefore, it is not necessary for each
trajectory to conserve the classical energy. The methods in the
chemistry literature perhaps over-emphasize the energy conservation of
a single trajectory.
Also, Tully's fewest switches surface hopping algorithm only
calculates the trajectory, without giving an approximation to the wave
function. It is perhaps more like Heller's frozen Gaussian packet
\cite{Heller1} for the single surface Schr\"odinger equation, which,
compared to the ensemble viewpoint of the Herman-Kluk propagator,
considers instead the evolution of a single Gaussian packet and
captures the correct semiclassical trajectory. The better
understanding of trajectory dynamics in FSSH is an interesting future
direction.
We emphasize that while an ensemble of trajectories is often used for
the surface hopping algorithm, it is rather unclear what the ensemble
average really means in the chemistry literature. There are in fact
debates on the interpretation of the surface hopping trajectories. Our
understanding of the path integral representation clarifies the
average of trajectories and hopefully will shed new light on further
development of the surface hopping algorithms.
Let us also point out that, as far as we have seen, the chemistry
literature seems to miss the weighting terms in \eqref{eq:trajavg},
resulting from the non-homogeneous state dependent Poisson process. As
the hopping rules for surface hopping algorithms all have a
similar feature, this correction factor is very important. In fact, the approximation is far
off without the correction factors in our numerical tests.
As we already mentioned, the ansatz we use shares some
similarity with those proposed by Wu and Herman in
\cites{WuHerman1,WuHerman2, WuHerman3}; in particular, the total wave
function is also split into a series of wave functions based on the
number of hoppings. However, they are crucially different in many
ways: whether the trajectory is continuous in the phase space, whether
the weighting terms as in \eqref{eq:trajavg} are included in the average of trajectories, and the fact that the work \cites{WuHerman1,WuHerman2,
WuHerman3} also employs some stationary phase argument, etc. While
we will provide a rigorous proof of the approximation error of our
methods, it is not clear to us that the heuristic asymptotics in
\cites{WuHerman1,WuHerman2, WuHerman3} can be rigorously justified.
\section{Asymptotic derivation}\label{sec:asymptotic}
We present in this section the asymptotic derivation of the FGA with
surface hopping \red{ansatz presented in \S\ref{sec:fgash}.} To
determine the equations for all the variables involved, we substitute
$u_{\FGA}$ into the Schr\"odinger equation
\eqref{SE2}\footnote{Alternatively, one can directly work with the
matrix Schr\"odinger equation \eqref{vSE}, which will be in fact
adopted in our proof in Section~\ref{sec:proof}. We present both
view points as both are often used in the literature.} and carry
out a matched asymptotics expansion. While the calculation in this
section is formal, the approximation error will be rigorously
\red{controlled} in Section~\ref{sec:convergence}.
We start by examining the term $(i \veps \partial_t - H)
K^{(0)}_{00}$. By definition \eqref{eq:defK}, we have
\begin{align*}
i \veps \partial_t K^{(0)}_{00} & = i \veps \Psi_0 \, \partial_t u^{(0)}, \\
\intertext{and}
H K^{(0)}_{00} & = \left( -\frac{\veps^2}{2} \Delta_x + H_e \right) \Psi_0 u^{(0)} \\
& = -\frac{\veps^2}{2} \Delta_x \left( \Psi_0 u^{(0)} \right) + E_0 \Psi_0 u^{(0)} \\
& = \Psi_0 H_0 u^{(0)} - \veps^2 \nabla_x \Psi_0 \cdot \nabla_x u^{(0)} + \left( -\frac{\veps^2}{2} \Delta_x \Psi_0 \right) u^{(0)},
\end{align*}
where we have used the notation $H_i = -\frac{\veps^2}{2}\Delta_x +
E_i$ for $i = 0, 1$. Expand the term $\nabla_x \Psi_0$ in the
adiabatic basis $\{ \Psi_k\}_{k=0,1}$ (recall that we have assumed
only two adiabatic basis functions are important):
\[
\nabla_x \Psi_0 = d_{00} \Psi_0 + \red{d_{10} }\Psi_1,
\]
where we recall that
$\red{d_{nm}(x)= \langle \Psi_n, \nabla_x \Psi_m \rangle}$, and
we thus obtain the expansion of $H K^{(0)}_{00}$ in terms of the
adiabatic basis functions
\begin{equation}\label{eq:expanu0}
H K^{(0)}_{00} = \Psi_0 H_0 u^{(0)} - \veps^2 \Psi_0 d_{00} \cdot \nabla u^{(0)} - \veps^2 \Psi_1\red{ d_{10}} \cdot \nabla u^{(0)} + \Or(\veps^2),
\end{equation}
where we have omitted the contribution from
$(-\frac{\veps^2}{2} \Delta_x \Psi_0) u^{(0)}$ which is of order
$\Or(\veps^2)$ \red{(note that terms like
$\veps^2 \Psi_0 d_{00}\cdot \nabla u^{(0)}$ are $\Or(\veps)$ instead
of $\Or(\veps^2)$ due to the oscillation in $u^{(0)}$)}. We see that
the first two terms in \eqref{eq:expanu0} lie in the space spanned by
$\Psi_0$, while the third term is orthogonal. Hence, it is impossible
to construct $u^{(0)}$ to satisfy equation \eqref{eq:expanu0} to the
order of $\Or(\veps)$. In fact, the term
$- \veps^2 \Psi_1 \red{ d_{10} }\cdot \nabla u^{(0)}$ has to be
canceled by terms from $(i \veps \partial_t - H) K^{(1)}_{01}$, since
$\Psi_1$ corresponds to the other energy surface. This explains the
necessity of the surface hopping ansatz.
Let us thus first try to construct $u^{(0)}$ such that
\begin{equation}\label{eq:singleu0}
i \veps \partial_t u^{(0)} =H_0 u^{(0)} - \veps^2 d_{00}\cdot \nabla_x u^{(0)} + \Or(\veps^2).
\end{equation}
Note that this is very similar to the situation of the original frozen Gaussian approximation for the single surface Schr\"odinger equation.
By direct calculation, we get
\begin{align*}
i \veps \partial_t u^{(0)} & = \frac{i \veps }{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_t \left[ A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr) \right] \\
& = \frac{i\veps }{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_t A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr) \\
& \qquad - \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \Bigl( \partial_t S^{(0)}(t, z_0) + \partial_t P^{(0)}(t, z_0) \cdot (x - Q^{(0)}(t, z_0)) \\
& \hspace{17em} - \partial_t Q^{(0)}(t, z_0) \cdot \bigl( P^{(0)}(t, z_0) + i (x - Q^{(0)}(t, z_0)) \bigr) \Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr)\\
& = \frac{i\veps }{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_t A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr) \\
& \qquad - \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \Bigl( \partial_t S^{(0)}(t, z_0) - \nabla_Q E_0(Q^{(0)}(t, z_0)) \cdot (x - Q^{(0)}(t, z_0)) \\
& \hspace{18em} - P^{(0)}(t, z_0) \cdot \bigl( P^{(0)}(t, z_0) + i (x - Q^{(0)}(t, z_0)) \bigr) \Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr).
\end{align*}
Moreover,
\begin{align*}
\veps^2 \nabla_x u^{(0)} & = \frac{i \veps}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \bigl( P^{(0)}(t, z_0) +i (x-Q^{(0)}(t, z_0)) \bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr), \\
- \frac{\veps^2}{2} \Delta_x u^{(0)} & = \frac{m\veps}{2} \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
& \qquad + \frac{1}{2} \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \bigl\lvert P^{(0)}(t, z_0) +i (x-Q^{(0)}(t, z_0)) \bigr\rvert^2 \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr).
\end{align*}
Suggested by the semiclassical limit of the single surface case, we
have imposed that $(Q^{(0)}, P^{(0)})$ follows the Hamiltonian flow
with classical Hamiltonian $h_0(q,p) = \frac{1}{2} \abs{p}^2 +
E_0(q)$, namely,
\begin{subequations}
\begin{align}
\frac{\ud Q^{(0)}}{\ud t} & = P^{(0)}, \\
\frac{\ud P^{(0)}}{\ud t} & =- \nabla E_0 (Q^{(0)}).
\end{align}
\end{subequations}
To match the term $E_0(x) u^{(0)}$, we expand $E_0(x)$ around
$Q^{(0)}$ to get
\begin{align*}
E_0(x) u^{(0)}&= \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; E_0(Q^{(0)}(t, z_0)) A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
&\qquad +\frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; (x-Q^{(0)}(t, z_0)) \cdot \nabla_Q E_0(Q^{(0)}(t, z_0)) A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr)\\
&\qquad +\frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \frac{1}{2} (x-Q^{(0)}(t, z_0)) \cdot \nabla_Q^2 E_0(Q^{(0)}(t, z_0)) (x-Q^{(0)}(t, z_0)) A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
&\qquad + \Or(\veps^2),
\end{align*}
\red{where we have used Lemma~\ref{lem:asym} to control the terms
containing $(x - Q^{(0)})^3$ and higher order terms.} To treat the
terms containing powers of $(x - Q^{(0)})$ in the above expressions,
we apply Lemma~\ref{lem:asym} and get
\begin{align*}
i \veps \partial_t u^{(0)} & =\frac{i\veps }{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_t A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr) \\
& \qquad - \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \Bigl( \partial_t S^{(0)}(t, z_0) -| P^{(0)}(t, z_0)|^2 \Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t,z_0,x) \Bigr)
\\
& \qquad - \frac{\veps}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_{z_k}\Bigl( A^{(0)}\bigl(\nabla_Q E_0(Q^{(0)})+iP^{(0)}\bigr)_j (Z^{(0)})_{jk}^{-1}\Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr),
\\
\veps^2 \nabla_x u^{(0)} & = \frac{i \veps}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) P^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) + \mathcal O(\veps^2), \\
- \frac{\veps^2}{2} \Delta_x u^{(0)} & = \frac{1}{2} \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \bigl\lvert P^{(0)}(t, z_0) \bigr\rvert^2 \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\ \nonumber
& \qquad + \frac{m\veps}{2} \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
& \qquad - \frac{i\veps}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_{z_k}\Bigl( A^{(0)}\bigl(P^{(0)}\bigr)_j (Z^{(0)})_{jk}^{-1}\Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
& \qquad - \frac{\veps}{2(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \partial_{z_l}(Q^{(0)})_j (I_m)_{jk} (Z^{(0)})^{-1}_{kl} \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) + \mathcal O(\veps^2),\\
E_0(x) u^{(0)}&= \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; E_0(Q^{(0)}(t, z_0)) A^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) \\
& \qquad- \frac{\veps}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \partial_{z_k}\Bigl( A^{(0)}\bigl(\nabla_Q E_0(Q^{(0)})\bigr)_j (Z^{(0)})_{jk}^{-1}\Bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr)\\
& \qquad + \frac{\veps}{2(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \partial_{z_l}(Q^{(0)})_j \bigl(\nabla_Q^2 E_0(Q^{(0)})\bigr)_{jk} (Z^{(0)})^{-1}_{kl} \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) + \mathcal O(\veps^2).
\end{align*}
Therefore, matching terms at the leading order, we get
\begin{equation}
\frac{\ud S^{(0)}}{\ud t} = \frac{1}{2} \Abs{ P^{(0)}}^2 - E_0 (Q^{(0)}).
\end{equation}
The next order $\Or(\veps)$ gives
\begin{equation}\label{eq:C}
\frac{\ud}{\ud t} A^{(0)} =\frac 1 2 A^{(0)} \tr\Bigl( (Z^{(0)})^{-1}\bigl(\partial_z P^{(0)} - i \partial_z Q^{(0)} \nabla^2_Q E_0(Q^{(0)}) \bigr) \Bigr) - A^{(0)} d_{00}\cdot P^{(0)},
\end{equation}
where
\[
\partial_z=\partial_q - i \partial_p,\quad\mbox{and}\quad Z^{(0)}=\partial_z \left(Q^{(0)}+i P^{(0)}\right).
\]
Note that, compared with the single surface case, the only difference is
the extra term $- A^{(0)} d_{00}\cdot P^{(0)}$, which comes from the
term $ - \veps^2 d_{00}\cdot \nabla_x u^{(0)}$ in
\eqref{eq:singleu0}. We also remark that $d_{00}$ is purely imaginary
due to the normalization of $\Psi_0$, and hence this extra term only
contributes to an extra phase of $A^{(0)}$.
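The last claim follows in one line from differentiating the normalization $\langle \Psi_0, \Psi_0 \rangle = 1$:

```latex
0 = \nabla_x \langle \Psi_0, \Psi_0 \rangle
  = \langle \nabla_x \Psi_0, \Psi_0 \rangle + \langle \Psi_0, \nabla_x \Psi_0 \rangle
  = \overline{d_{00}} + d_{00} = 2 \,\mathrm{Re}\, d_{00},
```

so each component of $d_{00}$ has vanishing real part, and the term $- A^{(0)} d_{00} \cdot P^{(0)}$ only rotates the phase of $A^{(0)}$ without changing its modulus.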
Coming back to \eqref{eq:expanu0}, we still need to take care of the
term parallel to $\Psi_1$. This extra term corresponds to intersurface
mixing and should be canceled by contributions from \red{wave packets on the other surface}. More specifically, let us examine the term
$(i \veps \partial_t - H) K^{(1)}_{01}$; by direct calculation, we
get
\begin{align}
i \veps \partial_t K^{(1)}_{01} &=i \veps \Psi_1 \partial_t u^{(1)} \nonumber \\
& = i \veps \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \int_0^t d t_1 \tau^{(1)}(t_1) \partial_t \left( A^{(1)}(t,t_1,z_0) \exp\left(\frac{i}{\veps}\Theta^{(1)}(t,t_1,z_0,x)\right) \right) \Psi_1 \label{term:2-1} \\
& \quad + i \veps \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; \tau^{(1)}(t) A^{(1)}(t,t,z_0) \exp\left(\frac{i}{\veps}\Theta^{(1)}(t,t,z_0,x)\right) \Psi_1, \label{term:2-2} \\
\intertext{and}
H K^{(1)}_{01} & = \Bigl(-\frac{\veps^2}{2} \Delta_x + H_e\Bigr) \Psi_1 u^{(1)} \nonumber \\
& = -\frac{\veps^2}{2} \Delta_x \bigl( \Psi_1 u^{(1)} \bigr) + E_1 \Psi_1 u^{(1)} \nonumber \\
& =\Psi_1 H_1 u^{(1)} - \veps^2 \nabla_x \Psi_1 \cdot \nabla_x u^{(1)} + \left( -\frac{\veps^2}{2} \Delta_x \Psi_1 \right) u^{(1)}, \label{term:2-3}
\end{align}
where $H_1=-\frac{\veps^2}{2} \Delta_x +E_1$ is the effective Hamiltonian on the second energy surface.
Therefore, in $(i \veps \partial_t - H) K^{(1)}_{01}$, all the terms
contain the time integration with respect to $t_1$ except the term
\eqref{term:2-2}, which motivates us to require that this term cancel the
term $-\veps^2 \Psi_1 \red{d_{10}}\cdot \nabla_x u^{(0)}$ from
$(i \veps \partial_t - H) K^{(0)}_{00}$. From the expression of
$\nabla_x u^{(0)}$, this suggests that we shall construct
$\tau^{(1)}, A^{(1)}$, and $\Theta^{(1)}$ such that
\begin{multline}
\frac{1}{(2\pi\veps)^{3m/2}} \red{ d_{10}}(x) \cdot \int \ud z_0 \; A^{(0)}(t, z_0) \bigl( P^{(0)}(t, z_0) +i (x-Q^{(0)}(t, z_0)) \bigr) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) = \\
= - \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \;
A^{(1)}(t,t,z_0) \tau^{(1)}(t, z_0)
\exp\Bigl(\frac{i}{\veps}\Theta^{(1)}(t,t,z_0,x)\Bigr) + \Or(\veps).
\end{multline}
Expanding $\red{ d_{10}}(x)$ around $Q^{(0)}$ and applying Lemma~\ref{lem:asym} again, we require
\begin{multline}
\frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \; A^{(0)}(t, z_0) \red{d_{10}}\bigl(Q^{(0)}(t, z_0)\bigr) \cdot P^{(0)}(t, z_0) \exp \Bigl( \frac{i}{\veps} \Theta^{(0)}(t, z_0, x) \Bigr) = \\
= - \frac{1}{(2\pi\veps)^{3m/2}} \int \ud z_0 \;
A^{(1)}(t,t,z_0) \tau^{(1)}(t, z_0)
\exp\Bigl(\frac{i}{\veps}\Theta^{(1)}(t,t,z_0,x)\Bigr) + \Or(\veps).
\end{multline}
A natural choice is then to set for any $t$ and $z_0$,
\begin{align}
& A^{(1)}(t, t, z_0) = A^{(0)}(t, z_0), \label{eq:A1A0}\\
& P^{(1)}(t, t, z_0) = P^{(0)}(t, z_0), \\
& Q^{(1)}(t, t, z_0) = Q^{(0)}(t, z_0), \\
& S^{(1)}(t, t, z_0) = S^{(0)}(t, z_0), \label{eq:S1S0}\\
& \tau^{(1)}(t, z_0) = - \red{d_{10}}\bigl(Q^{(0)}(t, z_0)\bigr) \cdot P^{(0)}(t, z_0). \label{eq:tau1}
\end{align}
Therefore, this sets the initial conditions of
$A^{(1)}(t, t_1, z_0), P^{(1)}(t, t_1, z_0), Q^{(1)}(t, t_1, z_0),
S^{(1)}(t, t_1, z_0)$
at $t = t_1$ (recall that from the definition of \eqref{eq:u0n}, those
FGA variables are only needed for $t \ge t_1$).
Now back to the other terms in
$(i \veps \partial_t - H) K^{(1)}_{01}$. Again, in \eqref{term:2-3},
the term $( -\frac{\veps^2}{2} \Delta_x \Psi_1 ) u^{(1)}$ is of order
$\mathcal{O}(\veps^2)$, which will be neglected as the FGA approximation is
determined by terms up to $\Or(\veps)$. We expand $\nabla_x \Psi_1$ in
terms of the adiabatic states as
$\nabla_x \Psi_1 = \red{d_{01}} \Psi_0 + d_{11} \Psi_1$; the contribution along
$\Psi_1$ will be asymptotically matched by imposing the appropriate evolution equation for $A^{(1)}$, while the component along
$\Psi_0$, $-\veps^2 \Psi_0 \red{d_{01} }\cdot \nabla_x u^{(1)}$, has to be
matched by contributions from $(i \veps \partial_t - H) K^{(2)}_{00}$.
Analogously to the construction of $u^{(0)}$, we impose that for $t \ge t_1$, $(P^{(1)}, Q^{(1)})$ satisfy the Hamiltonian flow with the effective Hamiltonian $h_1 = \frac{1}{2} \abs{p}^2 + E_1(q)$.
\begin{align*}
\frac{\ud}{\ud t} Q^{(1)} & = P^{(1)}, \\
\frac{\ud}{\ud t} P^{(1) }& = -\nabla E_1 (Q^{(1)}).
\end{align*}
The evolution of other FGA variables can be determined by matched
asymptotics, similarly to what was done for $u^{(0)}$. This leads
to, for $t \ge t_1$,
\begin{align*}
\frac{\ud}{\ud t} S^{(1)} & = \frac{1}{2} ( P^{(1)} )^2 - E_1 (Q^{(1)}), \\
\frac{\ud}{\ud t} A^{(1)} & = \frac{1}{2} A^{(1)} \tr\Bigl( (Z^{(1)})^{-1}\bigl(\partial_z P^{(1)} - i \partial_z Q^{(1)} \nabla^2_Q E_1(Q^{(1)}) \bigr) \Bigr) - A^{(1)} d_{11}\cdot P^{(1)},
\end{align*}
with initial conditions given by the continuity condition \red{\eqref{eq:A1A0} and \eqref{eq:S1S0}}.
Note that the equations have the same structure as the evolution
equations for $S^{(0)}$ and $A^{(0)}$, except that they now evolve on
the energy surface $E_1$.
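As an illustration of these evolution equations, the following sketch integrates the flow of $(Q^{(1)}, P^{(1)}, S^{(1)})$ with a fourth-order Runge-Kutta step, using a hypothetical harmonic surface $E_1(q) = \frac{1}{2} q^2$ (so that the exact Hamiltonian flow is a rotation in phase space and the classical energy is conserved); the function names are illustrative and not taken from the paper:

```python
import numpy as np

def E1(q):
    return 0.5 * q ** 2          # illustrative harmonic surface (assumption)

def dE1(q):
    return q

def rhs(y):
    """FGA flow: dQ/dt = P, dP/dt = -E1'(Q), dS/dt = |P|^2/2 - E1(Q)."""
    Q, P, S = y
    return np.array([P, -dE1(Q), 0.5 * P ** 2 - E1(Q)])

def rk4(y, dt, nsteps):
    """Classical fourth-order Runge-Kutta integrator."""
    for _ in range(nsteps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# For the harmonic surface the flow is a rotation in phase space:
# Q(t) = q0 cos t + p0 sin t,  P(t) = p0 cos t - q0 sin t.
q0, p0 = 1.0, 2.0
Q, P, S = rk4(np.array([q0, p0, 0.0]), dt=1e-3, nsteps=1000)
```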
In a similar way, the evolution equations \eqref{eq:evenevolve} for the FGA variables and hopping
coefficients at all orders are recursively determined, with the continuity condition
\eqref{eq:contcond} at the hoppings.
\smallskip
To end this part, let us generalize the ansatz to the case
where the initial wave function consists of both $\Psi_0$ and $\Psi_1$ components. Since the equation is linear, the solution is given by the superposition of solutions with initial conditions concentrated on each energy surface. Thus, in the general case, the FGA with surface hopping is given by
\begin{equation}\label{ansatz2}
u_{\FGA}(t,x,r)= K^{(0)}(t,x,r)+ K^{(1)}(t,x,r)+K^{(2)}(t,x,r)+K^{(3)}(t,x,r)+\cdots,
\end{equation}
where
\[
K^{(n)}=
\begin{cases}
K^{(n)}_{01}+K^{(n)}_{10}= \Psi_1 u^{(n)}_0 + \Psi_0 u^{(n)}_1, & n \; \text{odd}, \\
K^{(n)}_{00}+K^{(n)}_{11}= \Psi_0 u^{(n)}_0 + \Psi_1 u^{(n)}_1, & n \; \text{even}.
\end{cases}
\]
The expressions for $u_0^{(n)}$ and $u_1^{(n)}$ are similar to those in the
previous case, and hence are omitted.
\section{Convergence of FGA with surface
hopping}\label{sec:convergence}
In this section, we will present the main approximation theorem of FGA
with surface hopping and also provide some examples to justify \red{and understand} the assumptions.
\subsection{Assumptions and main theorem}
For the asymptotic convergence of the frozen Gaussian approximation
with surface hopping ansatz, as $\Psi_j(r; x)$ are fixed, the heart of the
matter is the approximation of $U = \bigl(\begin{smallmatrix} u_0 \\
u_1 \end{smallmatrix} \bigr)$, which solves the matrix Schr\"odinger
equation \eqref{vSE}. Recall that in the FGA ansatz, $U$ is
approximated by (note that we have assumed $u_1(0,x) = 0$)
\begin{equation}
U_{\FGA}(t, x) =
\begin{pmatrix}
u^{(0)} + u^{(2)} + \cdots \\
u^{(1)} + u^{(3)} + \cdots
\end{pmatrix}.
\end{equation}
To guarantee the validity of the asymptotic matching, we make some
natural assumptions on $E$, $d$ and $D$, the coefficients appearing in
\eqref{vSE}.
Besides the coupling terms in the matrix Schr\"odinger equation
\eqref{vSE}, the non-adiabatic transition is also related to the
gap between the adiabatic energy surfaces, given by
\begin{equation}
\delta := \inf_x \bigl( E_1(x) - E_0(x) \bigr).
\end{equation}
\red{In the most interesting non-adiabatic regime, $\delta>0$} should also be viewed as a small parameter. In
fact, if $\delta$ is fixed and $\veps \rightarrow 0$, the matrix
Schr\"odinger equation \eqref{vSE} is approaching its adiabatic limit,
namely we can neglect the transition to the other part of the spectrum; see
\cite{BO1,BO2}. To ensure a significant amount of transition as
$\veps \rightarrow 0$, it is most interesting to consider
$\delta \rightarrow 0$ simultaneously. Therefore, we will consider a family of matrix
Schr\"odinger equations with the coefficients depending on $\delta$.
We will emphasize the $\delta$ dependence and write $H^\delta_e$,
$E^\delta$, $d^\delta$, $D^\delta$, etc. when confusion might occur.
We start with the assumption on the energy surfaces $E^\delta$.
\begin{hyp}\label{assuma}
Each energy surface $E^\delta_k$, $k\in \{0, 1\}$, belongs to
$C^{\infty}(\R^m)$ and satisfies the following subquadratic condition,
where the constant $C_E$ is uniform with respect to $\delta$:
\begin{equation}\label{cond:subqua}
\sup_{q \in \RR^m} \Abs{\partial_{\alpha} E^{\delta}_k (q)} \leq C_E, \quad \forall\, \abs{\alpha} = 2.
\end{equation}
\end{hyp}
This assumption guarantees that the Hamiltonian flow of each energy
surface satisfies global Lipschitz conditions, such that the global
existence of the flow is guaranteed. In particular, \eqref{cond:subqua} immediately implies that
\begin{equation}
\bigl\lvert \nabla_q E^\delta_k(q) \bigr\rvert \leq C_E \bigl(\abs{q} + 1\bigr), \quad \forall\, q \in \RR^m,
\end{equation}
which we will use later. Note that similar subquadratic assumptions
are needed even for the validity of the FGA method for the single-surface
model. We will further investigate the related properties of the
Hamiltonian flow in Section~\ref{sec:prelim}.
We recall that the initial coefficient of the \red{surface hopping ansatz} is given by
\begin{equation}\label{eq:C00}
A^{(0)}(0, z) = 2^{m/2} \int_{\R^m} e^{\frac{i}{\veps}(-p \cdot (y-q) + \frac{i}{2} \abs{y - q}^2)} u_0(y) \ud y.
\end{equation}
By Lemma~\ref{lem:initial}, the FGA ansatz recovers the initial
condition of the Schr\"odinger equation at time $0$.
As the Gaussian is not compactly supported, $A^{(0)}$ is in general
not compactly supported either. This causes some trouble: for
example, the hopping coefficient $\tau$, given by \textit{e.g.} $\red{ -}p
\cdot d^\delta_{01}(q)$, is not uniformly bounded with respect to all
starting points $z_0$ of the trajectory. On the other hand, notice
that $A^{(0)}$ will decay very fast on the phase space, especially
when $\veps$ is small, since the Gaussian $e^{\frac{i}{\veps}(-p \cdot
(y-q) + \frac{i}{2} \abs{y - q}^2)}$ decays very fast both in
real and Fourier spaces. Therefore, we may truncate $A^{(0)}$
outside a compact set, which only introduces a small error in the
approximation of the initial condition. As this issue arises already
in the usual frozen Gaussian approximation (see \cite{FGA_Conv} for
example), we will make the assumption that there exists a compact set
$K \subset \R^{2m}$ such that the initial approximation error, given by
\begin{equation}\label{eq:defein}
\eps_{\text{in}} = \Norm{ \frac{1}{(2\pi \veps)^{3m/2}} \int_K A^{(0)}(0, z) e^{\frac{i}{\veps} \Phi^{(0)}(0, x, z)} \ud z - u_0(0, x) }_{L^2(\R^m)}
\end{equation}
is negligibly small for the accuracy requirement. Note that
$\eps_{\text{in}}$ depends on the semiclassical parameter $\veps$ and
goes to zero as $\veps \to 0$ (the rate depends on $u_0(0)$).
Therefore, we will restrict the initial condition $z_0 = (q_0, p_0)$
to the compact set $K$ in the FGA ansatz with surface hopping.
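For a concrete one-dimensional ($m=1$) illustration of the initial coefficient \eqref{eq:C00} and of its fast decay on phase space, one can take a Gaussian initial datum, for which the integral can be evaluated in closed form by completing the square. In the sketch below the closed form is our own computation (not quoted from the text) and is checked against direct quadrature:

```python
import numpy as np

eps, q0 = 0.25, 0.0
u0 = lambda y: (np.pi * eps) ** -0.25 * np.exp(-(y - q0) ** 2 / (2 * eps))

def A0_quad(q, p, ny=20001, L=6.0):
    """Direct quadrature of A^{(0)}(0,q,p); m = 1, so the prefactor is 2^{1/2}."""
    y = np.linspace(q - L, q + L, ny)
    f = np.exp(-1j * p * (y - q) / eps - (y - q) ** 2 / (2 * eps)) * u0(y)
    return np.sqrt(2) * np.sum(f) * (y[1] - y[0])  # integrand vanishes at ends

def A0_exact(q, p):
    """Closed form for this Gaussian u0, obtained by completing the square."""
    return (np.sqrt(2) * (np.pi * eps) ** 0.25
            * np.exp(-((q - q0) ** 2 + p ** 2) / (4 * eps))
            * np.exp(-1j * p * (q0 - q) / (2 * eps)))

q, p = 0.3, 0.8
err = abs(A0_quad(q, p) - A0_exact(q, p))
```

The Gaussian factor $e^{-((q-q_0)^2+p^2)/(4\veps)}$ makes the fast phase-space decay explicit, which is what justifies truncating to a compact set $K$.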
Moreover, as we shall prove in Proposition \ref{prop:comp}, given
$t>0$ and initial condition $z_0 = (q_0, p_0)$ restricted to a compact
set $K$, the FGA trajectory is confined in a compact set $K_t$ which
is independent of the hopping history within $[0,t]$. For the matrix
Schr\"odinger equation, we also need the assumption on the coupling
coefficients $d^\delta$ and $D^\delta$, and their boundedness on the
compact set $K_t$.
\begin{hyp}\label{assumb}
For $k,l \in \{0,1\}$, we have $d^\delta_{kl} \in C^{\infty}(\R^m)$
and $D^\delta_{kl} \in C^{\infty}(\R^m)$. Moreover, given $t>0$, we
assume that $E^\delta_k$, $d^\delta_{kl}$ and $D^\delta_{kl}$ and
their derivatives are uniformly bounded with respect to $\delta$ on
$K_t$, which is the compact set the trajectory stays within up to
time $t$ (see Proposition~\ref{prop:comp}).
\end{hyp}
To further understand the implications of Assumption~\ref{assumb}, we
have, for $k \neq l$ and $E^\delta_k \ne E^\delta_l$, the explicit
expression from standard perturbation theory,
\[
\red{d^\delta_{lk}(x)}=\langle \Psi^{\delta}_l, \nabla_x \Psi^{\delta}_k
\rangle=\frac{\langle \Psi^{\delta}_l, (\nabla_x H^\delta_e)
\Psi^{\delta}_k \rangle}{E^\delta_k-E^\delta_l}.
\]
As the denominator on the right hand side is given in terms of the
energy gap, when the gap approaches $0$, the coupling vector becomes
unbounded unless the numerator is also getting small.
\red{On the other hand, we allow the possibility that the gap between
the two energy surfaces is very small, even of the order of $\veps$
(but $d^\delta$ is still $\Or(1)$).} Examples will be given in the
next subsection.
Now we are ready to state the main approximation theorem.
\smallskip
\begin{theorem}\label{thm:main}
Let $U_{\FGA}(t,x)$ be the approximation given by the FGA with
surface hopping (with phase space integral restricted to $K$) for
the Schr\"odinger equation \eqref{vSE}, whose exact solution is
denoted by $U(t,x)$. Under Assumptions~\ref{assuma} and
\ref{assumb}, for any given final time $t$, there exists a constant
$C$, such that for any $\veps>0$ sufficiently small and any
$\delta > 0$, we have
\begin{equation*}
\Norm{U_{\FGA}(t, x) - U(t, x)}_{L^2(\R^m)} \le C \veps + \eps_{\text{in}},
\end{equation*}
where $\eps_{\text{in}}$ is the initial approximation error defined in \eqref{eq:defein}.
\end{theorem}
This theorem implies that, in the simultaneous limit $\veps \rightarrow 0$,
$\delta \rightarrow 0$, the FGA method with surface hopping remains a
valid approximation with $\mathcal O(\veps)$ error. This covers the
interesting regime when the transition between surfaces is not negligible
($\Or(1)$ transitions occur) and also the adiabatic regime that
$\veps \to 0$ with a fixed $\delta$ (so non-adiabatic transition is
negligible). In particular, we emphasize that the constant $C$ is
independent of both $\veps$ and $\delta$.
While we will not keep track of the precise dependence of the
constant $C$ on $t$, by our proof techniques, we would
\red{at best prove an exponential growth of the
constant as $t$ gets large,} due to the use of
Gronwall type inequalities. In our numerical experience, the error
accumulation seems milder.
\subsection{Examples of matrix Schr\"odinger equations} \label{sec:example}
Let us explore some specific examples to better understand the
Assumptions~\ref{assuma} and \ref{assumb}. Recall that we consider
the case of two adiabatic states, which means that the Hilbert space
corresponding to the electronic degree of freedom is equivalent to
$\C^2$, and hence the electronic Hamiltonian $H^\delta_e$ is
equivalent to a $2\times 2$ matrix. As discussed above, the most
interesting scenario is when the two surfaces are not well separated. We
recall that the small parameter $\delta$ indicates the gap between the
two energy surfaces and focus on the more interesting cases that the
gap goes to $0$ as $\veps \to 0$.
A general class of electronic Hamiltonians satisfying our assumptions can be given as the product of a scalar function $F^\delta(x)$ and a $2 \times 2$ matrix $M(x)$ independent of $\delta$, namely
\begin{equation*}
H_e^{\delta}(x)= F^\delta(x) M(x).
\end{equation*}
We observe that, due to the specific choice, $H_e^{\delta}$ and $M$
share the same eigenfunctions, and if we denote the eigenvalues of $M$
by $\lambda_k$, then we have
\begin{equation}\label{case1}
E_k^{\delta}(x) =F^\delta (x) \lambda_k(x).
\end{equation}
Then, we obtain that, for $k \ne l$,
\[
\red{ d^\delta_{lk}}=\frac{\langle \Psi_l, \nabla_x H_e^\delta \Psi_k \rangle}{E_k^{\delta}-E_l^{\delta}} = \frac{ F^\delta \langle \Psi_l, \nabla_x M \Psi_k \rangle}{F^\delta (\lambda_k-\lambda_l)} = \frac{ \langle \Psi_l, \nabla_x M \Psi_k \rangle}{\lambda_k-\lambda_l}.
\]
Similarly, one can show that,
\[
\red{D^\delta_{lk}}=\langle \Psi_l, \Delta_x \Psi_k \rangle=\frac{\langle \Psi_l, \Delta_x M \Psi_k \rangle-2 \nabla_x \lambda_k \cdot d_{kl}}{\lambda_k-\lambda_l}.
\]
Therefore, $d^\delta$ and $D^\delta$ are independent of $\delta$, and we thereby suppress the appearance of $\delta$. Moreover, $d$ and $D$ are independent of $F^\delta$, while
we can take $F^{\delta}$ such that energy surfaces become close and
even touch each other as $\delta \to 0$. The set of almost degenerate
points of the energy surfaces may consist of a single point, several
points, or even \red{an interval}, as we will see below.
We note that even though this construction looks rather special, it is
actually versatile enough to cover many examples considered in the
chemistry literature. We will present three model problems here, which are
adapted from Tully's original examples in \cite{Tully}.
\noindent\textbf{Example 1(a). Simple avoided crossing.} We choose
$M$ to be
\[
M=
\begin{pmatrix}
\frac{\tanh (x)}{2\pi}& \frac{1}{10}\\
\frac{1}{10} & -\frac{\tanh (x)}{2\pi}
\end{pmatrix}.
\]
The eigenvalues of $M$ are \[
\pm \sqrt{\frac{\tanh^2 (x)}{4\pi^2}+\frac{1}{100} }.
\]
We observe that the two eigenvalue surfaces are close around $x=0$. From the plots of $d_{01}$ and $D_{01}$ in Figure \ref{fig:a2}, we see that the coupling is significant around $x=0$ as well. To control the energy gap, we introduce the following $F^\delta$ function,
\[
F^\delta (x)= 1 +(\delta -1 ) e^{-10 x^2},
\]
such that $F^\delta(x) = \mathcal{O}(\delta)$ around $x=0$ and
$F^{\delta}(0) = \delta$, so that the energy gap vanishes at $x = 0$
as $\delta \to 0$. The eigenvalues of $H_e$ for different values of
$\delta$ are plotted in Figure \ref{fig:a2}.
\begin{figure}
\includegraphics[scale=0.5]{nex2f}\includegraphics[scale=0.5]{nex1} \\
\caption{(Example 1(a)) Left: Eigenvalues of $H_e$, $\delta =\frac 1 8$,
$\frac 1 {16}$, $\frac 1 {32}$, $\frac 1 {64}$ and $\frac{1}{128}$; reference $\delta=0$. Right:
the coupling information of $H_e$, invariant with respect to $\delta$.}
\label{fig:a2}
\end{figure}
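The structure of Example 1(a) can be verified numerically. The following small sketch (our own illustration, not code from the paper) checks that the gap of $H_e^\delta = F^\delta M$ at $x=0$ is exactly $\delta/5$: indeed $\tanh(0)=0$, so the eigenvalues of $M(0)$ are $\pm 1/10$, while $F^\delta(0)=\delta$.

```python
import numpy as np

def M(x):
    """The delta-independent 2x2 matrix of Example 1(a)."""
    t = np.tanh(x) / (2 * np.pi)
    return np.array([[t, 0.1], [0.1, -t]])

def F(x, delta):
    """Scalar factor controlling the gap near x = 0."""
    return 1.0 + (delta - 1.0) * np.exp(-10 * x ** 2)

def gap(x, delta):
    """Gap between the two eigenvalues of H_e^delta(x) = F^delta(x) M(x)."""
    E = np.linalg.eigvalsh(F(x, delta) * M(x))
    return E[1] - E[0]

delta = 1 / 32
g0 = gap(0.0, delta)   # avoided crossing: gap equals delta/5 at x = 0
g2 = gap(2.0, delta)   # away from x = 0 the gap is O(1)
```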
\noindent\textbf{Example 1(b). Dual avoided crossing.} We choose $M$ to be
\[
M=
\begin{pmatrix}
0 & \frac 1 {20} \\
\frac 1 {20} & -e^{-\frac{x^2}{10}}+\frac 1 2
\end{pmatrix}.
\]
The eigenvalues of $M$ are
\[
\frac 1 2 \left(-e^{- \frac{x^2}{10}}+ \frac 1 2 \right) \pm \sqrt{\frac 1 4 \left(-e^{- \frac{x^2}{10}} + \frac 1 2\right)^2+ \frac{1}{400}}.
\]
We observe that the two eigenvalues are closest to each other when $-e^{- \frac{x^2}{10}}+ \frac 1 2=0$, i.e., $x=\pm \sqrt{10 \ln 2}$. The coupling vectors around these points are significantly larger than their values elsewhere, as shown in Figure \ref{fig:b2}. This explains why the model is often referred to as the dual avoided crossing.
To control the energy gap, we may introduce the following $F^\delta$ function,
\[
F^\delta (x)= 1+ e^{- (2\sqrt{10 \ln 2})^2} +(\delta -1 ) e^{- (x+\sqrt{10 \ln 2})^2}+(\delta -1 ) e^{-(x-\sqrt{10 \ln 2})^2}.
\]
We can check that $F^\delta = \mathcal O(\delta)$ around $x=\pm \sqrt{10 \ln 2}$, and
\[
\lim_{\delta \rightarrow 0} F^{\delta} (\pm \sqrt{10 \ln 2}) =
\lim_{\delta \rightarrow 0} \delta \left(1+e^{- (2\sqrt{10 \ln 2})^2}
\right)=0.
\]
Thus the energy gap vanishes at the two points as $\delta \to 0$. This is illustrated in Figure \ref{fig:b2}.
\begin{figure}
\includegraphics[scale=0.5]{nex4f}\includegraphics[scale=0.5]{nex3} \\
\caption{(Example 1(b)) Left: Eigenvalues of $H_e$, $\delta =\frac 1 8$,
$\frac 1 {16}$, $\frac 1 {32}$, $\frac 1 {64}$ and $\frac{1}{128}$; reference $\delta=0$. Right:
the coupling information of $H_e$, invariant with respect to $\delta$.}
\label{fig:b2}
\end{figure}
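The locations of the two avoided crossings can be confirmed by minimizing the gap of $M$ over a fine grid (a small self-check of our own, not from the paper's code):

```python
import numpy as np

# Gap between the two eigenvalues of M in Example 1(b):
# gap(x) = 2 * sqrt( (1/4)(-exp(-x^2/10) + 1/2)^2 + 1/400 ).
x = np.linspace(0.0, 6.0, 600001)
gap = 2 * np.sqrt(0.25 * (-np.exp(-x ** 2 / 10) + 0.5) ** 2 + 1.0 / 400)

x_star = x[np.argmin(gap)]   # should be close to sqrt(10 ln 2) ~ 2.6327
min_gap = gap.min()          # should be 2 * (1/20) = 1/10
```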
\noindent\textbf{Example 1(c). Extended coupling with reflection.}
In this example, $M$ is set to be
\[
M=
\begin{pmatrix}
\frac 1 {20} & \frac 1 {10} \left(\arctan(2x) +\frac{\pi}{2} \right) \\
\frac 1 {10} \left(\arctan(2x) +\frac{\pi}{2} \right) & -\frac 1 {20}
\end{pmatrix}.
\]
The eigenvalues of $M$ are
\[
\pm \sqrt{ \frac 1 {100} \left(\arctan(2x) +\frac{\pi}{2} \right)^2 + \frac 1 {400}}.
\]
Hence, as $x\rightarrow \infty$, the eigenvalues of $M$ satisfy
$\lambda_{\pm}(x) \rightarrow \pm \sqrt{\frac{\pi^2}{100}+\frac{1}{400}}$, and as
$x\rightarrow -\infty$,
$\lambda_{\pm}(x) \rightarrow \pm \frac{1}{20}$. As shown in Figure
\ref{fig:c2}, this model involves an extended region of strong
non-adiabatic coupling when $x<0$. Moreover, as $x>0$, the upper
energy surface is \red{increasing} so that trajectories moving from
left to right on the excited energy surface \red{without a large
momentum} will be reflected while those on the ground energy surface
will be transmitted.
The energy gap between the two surfaces can be controlled by the
following $F^\delta$ function,
\[
F^\delta (x)=\frac{1}{\pi} \left( \arctan(100x)+\frac{\pi}2+\delta \right).
\]
We can check that $F^\delta = \mathcal O(\delta)$ when $x$ is sufficiently negative. The family of energy surfaces is illustrated for different values of $\delta$ in Figure \ref{fig:c2}.
\begin{figure}
\includegraphics[scale=0.5]{nex6f}\includegraphics[scale=0.5]{nex5} \\
\caption{(Example 1(c)) Left: Eigenvalues of $H_e$, $\delta =\frac 1 8$,
$\frac 1 {16}$, $\frac 1 {32}$, $\frac 1 {64}$ and $\frac{1}{128}$; reference $\delta=0$. Right:
the coupling information of $H_e$, invariant with respect to $\delta$.}
\label{fig:c2}
\end{figure}
\medskip
\noindent\textbf{Example 2.}
Let us mention an example that does not satisfy our assumption in the limit $\delta \to 0$, which is in fact the classical conical intersection model with
\begin{equation}
H_e^{\delta}(x) =
\begin{pmatrix}
x & \delta \\
\delta & -x
\end{pmatrix}.
\end{equation}
In fact, this is the model often analyzed for the Landau-Zener transition.
For this family of Hamiltonians, we have
\[
\Abs{E_+^{\delta}(x)-E_-^{\delta}(x)}=2 \sqrt{x^2+\delta^2},
\]
and
\[
\nabla_x H_e^{\delta} =
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\]
In this case, one can compute the analytical expression for $d_{+-}$ as
\[
d_{+-}^{\delta}(x)=-\frac{\delta}{2(x^2+\delta^2)}.
\]
Clearly, at $x = 0$,
$d_{+-}^{\delta}=\mathcal{O}(\delta^{-1})$. Therefore, around $0$,
Assumption~\ref{assumb} is violated if $\delta$ goes to zero as
$\veps \to 0$ and our theorem no longer applies.
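A quick numerical illustration of this blow-up (our own sketch) evaluates the perturbation formula for $d_{+-}^\delta$ using the eigenvectors of the $2\times 2$ matrix and compares with $\lvert d_{+-}^\delta \rvert = \delta/(2(x^2+\delta^2))$:

```python
import numpy as np

def d_plus_minus(x, delta):
    """Coupling via the perturbation formula
    d_{+-} = <Psi_+, (dH/dx) Psi_-> / (E_- - E_+)."""
    H = np.array([[x, delta], [delta, -x]])
    dH = np.array([[1.0, 0.0], [0.0, -1.0]])   # dH/dx for this model
    E, V = np.linalg.eigh(H)                   # E[0] = E_-, E[1] = E_+
    return (V[:, 1] @ dH @ V[:, 0]) / (E[0] - E[1])

# |d_{+-}(0)| = 1/(2 delta): the coupling blows up as the gap closes,
# in agreement with the analytic formula delta / (2 (x^2 + delta^2)).
vals = [abs(d_plus_minus(0.0, d)) for d in (1e-1, 1e-2, 1e-3)]
```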
We remark however that for the practical examples with avoided
crossing, the small parameters (semiclassical parameter and energy
surface gap, etc.) are fixed, rather than converging to
$0$. Therefore, given a particular example, where $H_e^{\delta}$ is
specified for some small $\delta$, we can possibly embed the model
into a different sequence as $\veps \to 0$, so that our method can
still be used. Some numerical studies are presented in Example 4 for this scenario.
Nevertheless, the fact that the asymptotic derivation breaks down for this particular case raises the question of whether one can combine fewest-switches surface hopping type algorithms with approaches based on the Landau-Zener transition. This will be an interesting direction for future research.
\section{Numerical examples}\label{sec:numerics}
In this section, we validate the algorithm based on the frozen Gaussian approximation with surface hopping and its probabilistic interpretation. The numerical examples are done for two-level matrix Schr\"odinger equations.
\subsection{Description of the algorithm}
The algorithm based on the stochastic interpretation in
Section~\ref{sec:prob} is straightforward: We sample the initial point
of the trajectory based on the weight function
$\abs{A^{(0)}(0, z_0)}$; once the initial point is given, we evolve
the trajectory and associated FGA variables with surface hopping up to
some final time; and then we reconstruct the solution based on the
trajectory average \eqref{eq:trajavg}. The value of $A^{(0)}(0,z_0)$
will be calculated on a mesh of $(q, \, p)$ with numerical quadrature
of \eqref{eq:C00}.\footnote{\red{This is of course only possible for
low-dimensional examples; approximation methods are needed for
higher-dimensional calculations, which we do not address here.}} The
time evolution ODEs are integrated using the fourth-order Runge-Kutta
scheme. After each time step, we calculate the hopping probability
during the time step, $\Delta t\,|\tau|$, and generate a random number to
determine whether a hop occurs: if a hop happens, we change the label of the
current surface and record the phase factor
$\frac{\tau}{|\tau|}$. After the trajectory is determined up to time
$t$, we can calculate the weighting factors in \eqref{eq:trajavg} by
again a numerical quadrature. Our code is implemented in
\textsf{Matlab}.
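A minimal single-trajectory sketch of the procedure just described is given below. The surfaces and the coupling function are hypothetical stand-ins (not the models used in our tests), and a simple symplectic Euler step replaces the fourth-order Runge-Kutta scheme for brevity:

```python
import numpy as np

# Gradients of toy surfaces E_k(q) = q^2/2 (surface 1 shifted by a constant,
# so both gradients coincide); these are assumptions for illustration only.
dE = [lambda q: q, lambda q: q]

def hop_trajectory(q, p, coupling, dt=1e-2, nsteps=200, seed=0):
    """Evolve (Q, P) on the current surface; after each step, hop with
    probability dt*|tau| and record the phase factor tau/|tau|."""
    rng = np.random.default_rng(seed)
    label, phase = 0, 1.0 + 0.0j
    for _ in range(nsteps):
        p -= dt * dE[label](q)       # symplectic Euler (RK4 in the paper)
        q += dt * p
        tau = -coupling(q) * p       # hopping coefficient, cf. tau^{(1)}
        if rng.random() < dt * abs(tau):
            phase *= tau / abs(tau)  # phase factor recorded at the hop
            label = 1 - label        # switch the current surface
    return q, p, label, phase

# With the coupling switched off, the trajectory never hops:
q1, p1, label1, phase1 = hop_trajectory(-1.0, 2.0, lambda q: 0.0)
# With a coupling localized near q = 0, hops may occur:
q2, p2, label2, phase2 = hop_trajectory(-1.0, 2.0,
                                        lambda q: 0.5 * np.exp(-10 * q ** 2))
```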
Note that the algorithm above is the most straightforward Monte Carlo
algorithm for evaluating the average of trajectories
\eqref{eq:trajavg}. With the path integral representation, it is
possible to design more sophisticated algorithms that further
reduce the variance. This will be considered in future work.
Comparing the numerical solution with the exact solutions to the Schr\"odinger equations, we have several sources of error, listed below:
\begin{enumerate}
\item[a.] Initial error. This is the error coming from numerical quadrature of $A^{(0)}(0, z_0)$, the mesh approximation in the phase space, and also due to the choice of a compact domain $K$ in the phase space;
\item[b.] Asymptotic error. This is the $\mathcal O (\veps)$ error coming from the higher order term we neglected in the derivation of the frozen Gaussian approximation with surface hopping ansatz;
\item[c.] Sampling error. Since the algorithm is a Monte Carlo
algorithm to compute the average of trajectories \eqref{eq:trajavg}, for
finite sample size, we will have statistical error compared to the
mean value. Since this error is due to the variance of the sampling,
it decays as $1/\sqrt{N_{\text{traj}}}$ where $N_{\text{traj}}$ is the total number of
trajectories. This is confirmed in Figure~\ref{fig:sqrtN} and Table~\ref{sqrtNdata} for a
fixed (and somewhat large) $\veps = \frac{1}{16}$. \red{In Table~\ref{sqrtNdata}, convergence rates for tests with different numbers of trajectories are computed by
\[
\text{Conv. Rate} := \log_{N^{b}_{\text{traj}}/N^a_{\text{traj}}} \frac{\mathbb E(e^a)}{\mathbb E(e^b)}.
\]
}
\begin{figure}
\includegraphics[scale=0.5]{sqrtN5}\\
\caption{For $\veps=\frac{1}{16}$ and various numbers of
trajectories $N_{\text{traj}}$, the empirical averages of the total numerical error with 95\% confidence intervals.}
\label{fig:sqrtN}
\end{figure}
\begin{table}
\begin{tabular}{ c |c| c| c|c |c}
\hline
$\veps=\frac{1}{16}$ & $N_{\text{traj}}=100$ & $N_{\text{traj}}=200$ & $N_{\text{traj}}=400$ & $N_{\text{traj}}=800$ & $N_{\text{traj}}=1600$ \\ \hline
$\mathbb E(e_0)$ & 1.9889e-01& 1.4182e-01& 9.9173e-02& 7.2472e-02& 5.2443e-02
\\ \hline
\red{ Conv. Rate} & &\red{0.4879 } &\red{0.5160} & \red{0.4525} & \red{0.4667} \\ \hline
$\text{Var}(e_0)$& 2.3702e-03 & 1.1585e-03 & 5.6254e-04 & 3.1891e-04 &1.5769e-04 \\ \hline
$\mathbb E(e_1)$ & 1.6423e-01 & 1.1624e-01 & 8.1430e-02 & 6.0873e-02 & 4.3546e-02 \\ \hline
\red{Conv. Rate }& & \red{ 0.4987} & \red{0.5135} & \red{0.4198} & \red{0.4833} \\ \hline
$\text{Var}(e_1)$&1.4000e-03 & 6.5504e-04 & 3.1067e-04 & 1.8907e-04 & 1.0734e-04 \\
\hline
\end{tabular}
\medskip
\caption{For $\veps=\frac{1}{16}$ and various numbers of
trajectories $N_{\text{traj}}$, the empirical averages and sample variance of the total numerical error based on $400$ implementations for each test.}
\label{sqrtNdata}
\end{table}
\item[d.] Quadrature error. In solving the evolution of the FGA variables
and making phase changes at hoppings, the ODE solvers introduce
numerical error. Note that high-order solvers (e.g., RK4 here)
are preferred for the FGA variables because, in the phase function,
the numerical error is magnified by $\mathcal O(1/\veps)$.
\end{enumerate}
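For completeness, the empirical convergence rate used in the tables can be computed as follows, reproducing one entry of Table~\ref{sqrtNdata} (the helper name is ours):

```python
import numpy as np

def conv_rate(N_a, N_b, err_a, err_b):
    """Empirical rate: Conv. Rate = log_{N_b/N_a}( E(e^a) / E(e^b) )."""
    return np.log(err_a / err_b) / np.log(N_b / N_a)

# E(e_0) for N_traj = 100 and 200 from Table [sqrtNdata]:
rate = conv_rate(100, 200, 1.9889e-01, 1.4182e-01)  # Monte Carlo rate ~ 1/2
```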
In the numerical tests, we use the following initial sampling
strategy. We first choose a partition integer $M \in \N^+$, and the
corresponding partition constant is defined as
\[
d_M = \max_{ (q, p) \in K} \frac{\bigl\lvert A_0^{(0)}\left(0,q,\,p\right) \bigr\rvert}{M}.
\]
For a specific grid point $(q, p)$, we generate $n_{(q, p)}$
independent trajectories starting with the initial point $(q, p)$ where
\[
n_{(q, p)}=\left \lceil \frac{\bigl\lvert A_0^{(0)}\left(0,q,\,p \right) \bigr\rvert}{d_M} \right \rceil.
\]
For each trajectory initiated from this grid point, the initial weight
$A_0^{(0)}(0, q, p)$ is equally divided. As the partition integer $M$
increases, the number of trajectories increases, and thus the
numerical error is reduced.
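This sampling rule is easy to state in code; a small sketch (function name ours) for a handful of grid values of $\lvert A_0^{(0)}(0,q,p)\rvert$:

```python
import numpy as np

def trajectory_counts(absA, M):
    """Sampling rule: d_M = max|A| / M and n_(q,p) = ceil(|A(q,p)| / d_M);
    the weight at each grid point is then split equally among its n copies."""
    d_M = absA.max() / M
    return np.ceil(absA / d_M).astype(int)

absA = np.array([1.0, 0.5, 0.25, 0.05])  # |A| on four phase-space grid points
n2 = trajectory_counts(absA, M=2)        # d_M = 0.5
n4 = trajectory_counts(absA, M=4)        # d_M = 0.25: more trajectories
```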
\subsection{Numerical tests}
All the test problems we consider in this paper are $1D$ two-state
matrix Schr\"odinger equations with the electronic Hamiltonian $H_e(x)$
assumed to be a $2 \times 2$ matrix potential. We compare the results from our surface hopping algorithm with the numerical reference solution from the time splitting spectral method (TSSP) (see e.g., \cites{TS,reviewsemiclassical,TSSL}) with a sufficiently fine mesh.
\noindent {\bf Example 3.} In this example, we take the electronic Hamiltonian $H_e(x)$ to be the same as {Example 1(a)} in Section \ref{sec:example} with $\delta=\veps$, which we recall here for convenience,
\begin{equation}
H_e(x)=\left(1+(\veps-1)e^{-10x^2} \right)\begin{pmatrix}
\frac{\tanh (x)}{2 \pi} & \frac {1} {10} \\
\frac {1} {10} & -\frac{\tanh (x)}{2 \pi}
\end{pmatrix}.
\end{equation}
As shown in Figure \ref{fig:a2}, the coupling vectors are not
negligible when $-1<x<1$, and hence hopping might occur.
We choose the initial condition to the two-state Schr\"odinger equation as
\[
u(0,r,x)=u_0(0,x)\Psi_0(r;x)=\red{(16 \veps)^{-1/4}} \exp\left(\frac{i2x}{\veps}\right) \exp \left(- 16(x+1)^2 \right)\Psi_0(r;x).
\]
This initial condition corresponds to a wave packet on the ground
state energy surface, localized at $q=-1$ and traveling to the right with
speed $p=2$.
Typical FGA trajectories with surface hopping are plotted in Figure~\ref{fig:traj} for $\veps=\frac{1}{16}$ and $\veps=\frac{1}{128}$.
\begin{figure}
\includegraphics[scale=0.5]{16traj} \includegraphics[scale=0.5]{128traj}\\
\caption{Typical trajectories in the FGA algorithm. Left: $\veps=\frac{1}{16}$. Right: $\veps=\frac{1}{128}$.}
\label{fig:traj}
\end{figure}
For $\veps=\frac{1}{16}$, $\frac{1}{32}$ and $\frac{1}{64}$, and for
the partition integer $M=1$, $2$, $4$, $8$, $16$, we use the FGA
algorithm to compute $u_0(t,x)$ and $u_1(t,x)$ up to $t=1$. At
$t=1$, the wave packet has traveled across the hopping zone. We choose
the computation domain for $y$ and for $x$ to be $[-\pi,\pi]$, and the
computation domain for $(q,p)$ to be $[-\pi,\pi]\times [0.5,3.5]$. We
choose the following mesh sizes in the FGA method for initial sampling and for reconstructing the solution.
\begin{equation}\label{ex1mesh}
\Delta q= \frac{2 \pi \veps}{8}, \quad \Delta p = \frac{3 \veps}{4}, \quad \Delta x=\Delta y = \frac{2 \pi \veps}{32}.
\end{equation}
With this mesh, the initial error $e_0^\veps$ is summarized in Table
\ref{ex1init}; it is made significantly smaller than the other
parts of the error from the approximation. Also, we choose a very small
step size $\Delta t =\frac{\veps}{32}$ and apply the fourth-order
Runge-Kutta method to solve the FGA variables. Hence, the total
error is dominated by the asymptotic error and the sampling error in
these tests. The reference solution is computed on $[-\pi,\pi]$ by a second-order (in time) TSSP method with a sufficiently fine mesh
\[
\Delta x = \frac{2 \pi \veps}{ 64},\quad \Delta t= \frac{\veps}{32}.
\]
\begin{table}
\scalebox{1.0}{
\begin{tabular}{ l |c| c| c}
\hline
$\veps$ & $\frac{1}{16}$ & $\frac{1}{32}$ & $\frac{1}{64}$ \\ \hline
$e_0^\veps$ & 8.3178e-05 & \red{1.4173e-07 }& \red{1.1697e-07 }\\
\hline
\end{tabular}
}
\caption{Initial error for $\veps=\frac{1}{16}$, $\frac{1}{32}$ and $\frac{1}{64}$ with mesh given by \eqref{ex1mesh}.}
\label{ex1init}
\end{table}
To quantify the sampling error, we repeat each test $400$ times
and estimate the empirical average $\mathbb E(e_k)$ and sample variance
$\text{Var}(e_k)$, which are summarized in Table \ref{ex1data}, where
$e_k$ denotes the $L^2$ error of the $k$-th component of the solution,
\red{and convergence rates for different $M$ are
estimated by
\[
\text{Conv. Rate} := \log_{M^b/M^a} \frac{\mathbb E(e^a)}{\mathbb E(e^b)}.
\]}
The errors with their $95\%$ confidence intervals are
plotted in Figure \ref{fig:ex1_2}. From the numerical results, we see
clearly that increasing the partition integer $M$ can effectively
reduce the numerical error.
\begin{table}
\begin{tabular}{ c |c| c| c|c|c}
\hline
$\veps=\frac{1}{16}$ & $M=1$ & $M=2$ & $M=4$ & $M=8$ & $M=16$ \\ \hline
$\mathbb E(e_0)$ & 6.9485e-02 &5.5660e-02 &4.4370e-02 & 3.3320e-02 & 2.6339e-02 \\ \hline
\red{Conv. Rate }& & \red{0.3201} & \red{0.3271} & \red{0.4132} & \red{0.3391}\\ \hline
$\text{Var}(e_0)$& 4.9452e-04 & 2.3494e-04 & 1.5885e-04 & 8.7959e-05 & 5.5006e-05 \\ \hline
$\mathbb E(e_1)$ & 5.7770e-02& 4.6380e-02& 3.8663e-02& 3.0728e-02& 2.5886e-02 \\ \hline
\red{Conv. Rate} & & \red{0.3168} & \red{0.2626} & \red{0.3314} & \red{0.2474}\\ \hline
$\text{Var}(e_1)$& 2.7984e-04& 1.3865e-04& 1.0308e-04& 6.0901e-05& 4.8058e-05 \\
\hline
\end{tabular}
\medskip
\red{
\begin{tabular}{ c |c| c| c|c|c}
\hline
$\veps=\frac{1}{32}$ & $M=1$ & $M=2$ & $M=4$ & $M=8$ & $M=16$ \\ \hline
$\mathbb E(e_0)$ & 5.6999e-02& 4.6254e-02& 3.5705e-02& 2.6951e-02& 2.0107e-02\\ \hline
Conv. Rate & & 0.3014 & 0.3735 & 0.4058 & 0.4227\\ \hline
$\text{Var}(e_0)$& 2.8623e-04& 1.5798e-04& 1.0331e-04& 5.2099e-05& 2.8266e-05 \\ \hline
$\mathbb E(e_1)$ & 5.2574e-02& 4.2267e-02& 3.3255e-02& 2.5514e-02& 2.0454e-02 \\ \hline
Conv. Rate & &0.3149 & 0.3460 & 0.3823 & 0.3189 \\ \hline
$\text{Var}(e_1)$& 1.8928e-04& 1.2575e-04& 6.7133e-05& 4.0982e-05& 2.6102e-05 \\
\hline
\end{tabular}
}
\medskip
\red{
\begin{tabular}{ c |c| c| c|c|c}
\hline
$\veps=\frac{1}{64}$ & $M=1$ & $M=2$ & $M=4$ & $M=8$ & $M=16$ \\ \hline
$\mathbb E(e_0)$ & 4.8308e-02& 3.8534e-02& 2.9118e-02& 2.2112e-02& 1.6811e-02\\ \hline
Conv. Rate & & 0.3261 & 0.4042 & 0.3970 & 0.3955 \\ \hline
$\text{Var}(e_0)$& 1.9927e-04& 9.3424e-05& 5.0794e-05& 3.1704e-05& 1.7614e-05 \\ \hline
$\mathbb E(e_1)$ &4.6589e-02& 3.6833e-02& 2.8203e-02& 2.1660e-02& 1.6785e-02 \\ \hline
Conv. Rate & & 0.3390 & 0.3852 & 0.3808 & 0.3678 \\ \hline
$\text{Var}(e_1)$& 1.5864e-04& 8.1220e-05& 5.4446e-05& 2.6814e-05& 1.5051e-05 \\
\hline
\end{tabular}
\medskip
}
\caption{\red{(Example 3)} For various $\veps$ and partition integers $M$, the empirical averages and sample variance of the total numerical error based on $400$ implementations for each test.}
\label{ex1data}
\end{table}
\begin{figure}
\includegraphics[scale=0.55]{ex1_eu0v} \includegraphics[scale=0.55]{ex1_eu1v}\\
\caption{\red{(Example 3)} For various $\veps$ and partition integers $M$, the empirical averages of the total numerical error with 95\% confidence intervals.}
\label{fig:ex1_2}
\end{figure}
\red{ Finally, we aim to demonstrate the application of the
FGA-SH method in calculating the transition rate versus time. For
$\veps=\frac{1}{16}$ and $\frac{1}{128}$, we carry out the test with
$N_{\text{traj}}=6400$ trajectories and calculate the transition
rates at different times up to $t=1.5$. The results are plotted in
Figure \ref{fig:ex1_3}, from which we observe excellent agreement
with the reference calculations.}
\begin{figure}
\includegraphics[scale=0.55]{TRtime16} \includegraphics[scale=0.55]{TRtime128}\\
\caption{\red{(Example 3) For various $\veps$, the typical behavior of the FGA-SH
method in calculating the transition rate versus time. Left:
$\veps= \frac{1}{16}$. Right: $\veps=
\frac{1}{128}$. }}
\label{fig:ex1_3}
\end{figure}
\noindent {\bf Example 4.} In this example, the electronic Hamiltonian $H_e(x)$ is given by
\begin{equation}
H_e(x)=\begin{pmatrix}
\frac{x}{5} & \frac {1} {10} \\
\frac {1} {10} & -\frac{x}{5}
\end{pmatrix}.
\end{equation}
This matrix potential is similar to Example 2 in
Section~\ref{sec:example}, except that we have fixed a small
$\delta$ as $\veps$
varies.
Hence, as $\veps$ goes to $0$, the energy surfaces of the electronic
Hamiltonian stay unchanged, and thus the FGA method applies to this
case. We plot the energy surfaces, $d_{01}$ and $D_{01}$, of this matrix
potential in Figure~\ref{fig:ex4}, from which we observe that the
coupling vector is not negligible around $x=0$.
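As a quick sanity check (illustrative, not part of the computations in the paper), the eigenvalues of $H_e(x)$ are $\pm\sqrt{x^2/25+1/100}$ in closed form, so the avoided-crossing gap is minimized at $x=0$ with value $\frac{1}{5}$; the sketch below confirms this on a grid:

```python
import math

# Eigenvalues of H_e(x) = [[x/5, 1/10], [1/10, -x/5]] in closed form
def surfaces(x):
    r = math.sqrt((x / 5.0) ** 2 + 0.01)
    return -r, r  # (E_0(x), E_1(x))

xs = [i / 100.0 - 2.0 for i in range(401)]            # grid on [-2, 2]
gaps = [surfaces(x)[1] - surfaces(x)[0] for x in xs]  # gap E_1 - E_0
min_gap = min(gaps)  # attained at x = 0, where the gap is 2 * (1/10) = 0.2
```

This matches the left panel of Figure~\ref{fig:ex4}: the gap is finite everywhere and narrowest at the avoided crossing.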
\begin{figure}
\includegraphics[scale=0.5]{avoidE}\includegraphics[scale=0.5]{avoidd} \\
\caption{(Example 4) Left: Eigenvalues of $H_e$. Right:
the coupling information of $H_e$.}
\label{fig:ex4}
\end{figure}
We choose the same initial condition for the two-state Schr\"odinger equation
\[
u(0,r,x)=u_0(0,x)\Psi_0(r;x)=\exp\left(\frac{i2x}{\veps}\right) \exp \left(- 16(x-1)^2 \right)\Psi_0(r;x),
\]
and the same computational domain and mesh sizes for the FGA algorithm and the reference solver as in Example 3. For $\veps=\frac{1}{16}$ and $\frac{1}{128}$, and for the partition integer $M=16$, we use the FGA algorithm to compute $u_0(t,x)$ and $u_1(t,x)$ up to $t=1$.
The reference solution indicates that when $\veps=\frac{1}{16}$, the
transition portion is significant, but when $\veps=\frac{1}{128}$ the
transition between the surfaces is practically negligible. This is expected since the
gap is finite and fixed while $\veps \rightarrow 0$, so that the
non-adiabatic transition approaches $0$ (see \textit{e.g.},
\cites{BO1,BO2}).
In contrast, in the FGA algorithm, the hopping rate depends only on the
coupling vectors and the momentum along the FGA trajectory. Therefore,
for $\veps=\frac{1}{16}$ and $\frac{1}{128}$, the hopping
probabilities are similar along the FGA trajectory, but when
$\veps=\frac{1}{128}$, the hopped trajectories on the excited state
should average to $0$, which is verified by the numerical results
plotted in Figure~\ref{fig:ex2_1} together with the reference
solution. Besides, we show by comparing the reference solutions in Figure~\ref{fig:ex2_1} that the weighting factors in \eqref{eq:trajavg} are crucial in reconstructing the correct wave functions.
\begin{figure}
\includegraphics[scale=0.5]{n16u0wo} \includegraphics[scale=0.5]{n16u1wo}\\
\includegraphics[scale=0.5]{n16u0} \includegraphics[scale=0.5]{n16u1}\\
\includegraphics[scale=0.5]{n128u0f} \includegraphics[scale=0.5]{n128u1f}\\
\caption{\red{(Example 4)} Comparison between the FGA algorithm and the reference solutions. Top: $\veps=\frac{1}{16}$, the FGA method is implemented without the weighting factors. Middle: $\veps=\frac{1}{16}$. Bottom: $\veps=\frac{1}{128}$, zoomed-in plots included.}
\label{fig:ex2_1}
\end{figure}
\section{Convergence proof} \label{sec:proof}
We now prove that the ansatz is a good approximation to the true
solution of the matrix Schr\"odinger equation. For simplicity of
notations, we omit the appearance of $\delta$ in the surface energy
and coupling vectors. By Assumption \ref{assumb}, the boundedness of
related quantities in the following analysis is uniform with respect
to $\delta$, and hence all the estimates below are independent of
$\delta$. In \S\ref{sec:prelim}, we study the trajectories, which
follow Hamiltonian flows on each energy surface with hopping between
surfaces. The absolute convergence of the infinite sum used in the
surface hopping ansatz is shown in \S\ref{sec:absconv}. Finally, in \S\ref{sec:error}, we prove the main convergence result Theorem~\ref{thm:main}.
\subsection{Preliminaries}\label{sec:prelim}
To study the absolute convergence of the FGA with surface hopping
ansatz, we fix a time $t$ and recall that
\begin{equation}\label{eq:FGAU}
U_{\FGA}(t, x) = \sum_{k=0}^{\infty} \begin{pmatrix}
u^{(2k)} \\
u^{(2k+1)}
\end{pmatrix}.
\end{equation}
For the reader's convenience, we also recall
\begin{multline}
u^{(j)}(t, x) = \frac{1}{(2\pi\veps)^{3m/2}} \int_{K} \ud z_0 \int_{0<t_1<\cdots<t_j<t} d T_{j:1} \; \tau^{(1)}(T_{1:1}, z_0)\cdots \tau^{(j)} (T_{j:1},z_0) \times \\
\times A^{(j)}(t, T_{j:1}, z_0) \exp\left( \frac{i}{\veps}
\Theta^{(j)}(t,T_{j:1}, z_0, x) \right),
\end{multline}
which is an integral over all possible $j$ hopping times $t_1,
\cdots, t_j$. Notice that, as discussed above Theorem~\ref{thm:main},
we restrict the phase-space domain of integration to $K$.
Also recall that the FGA variables in the integrand of each $u^{(j)}$
are evolved piecewise up to the final time $t$. To be more specific, the
hopping time sequence $\{t_k\}_{k=1,\cdots, j}$ defines a partition of
the interval $[0,t]$, $0\le t_1\le\cdots\le t_j\le t$, such that
within each interval, the FGA trajectory and associated variables
evolve on a single energy surface, and at hopping times
$\{t_k\}_{k=1,\cdots, j}$ switch to another surface with the
continuity conditions \eqref{eq:contcond}.
We remark that since we study here the case with two energy surfaces,
it suffices to specify the hopping times to uniquely determine the
trajectory. In general, with more energy surfaces, besides the hopping
times, we also need to track which surface the trajectory hops to
(which makes the notation more complicated).
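The piecewise evolution just described can be summarized in a short sketch. The 1D quadratic surfaces below are hypothetical stand-ins (the actual $E_0, E_1$ are eigenvalues of the electronic Hamiltonian), and the forward Euler step is used only for illustration; the point is the structure: flow on the current surface between hops, switch surfaces at each hopping time, and keep $(q,p)$ continuous across hops.

```python
import math

# Hypothetical 1D stand-ins for the gradients of the two energy surfaces
dE = [lambda q: q, lambda q: 0.25 * q]

def evolve(q, p, t, hop_times, dt=1e-3):
    """Evolve (q, p) piecewise: Hamiltonian flow on the current surface
    between hops, switching surfaces at each hopping time while keeping
    (q, p) continuous, mimicking the continuity conditions."""
    l, s = 0, 0.0
    for t_next in list(hop_times) + [t]:
        while s < t_next - 1e-12:
            h = min(dt, t_next - s)
            q, p = q + h * p, p - h * dE[l](q)  # explicit Euler step
            s += h
        l = 1 - l  # hop to the other surface; (q, p) unchanged
    return q, p

q1, p1 = evolve(1.0, 0.0, 1.0, hop_times=[0.3, 0.7])
```

With no hops and the harmonic stand-in $E_0(q)=q^2/2$, the trajectory reduces to the exact rotation $(\cos t, -\sin t)$ up to the Euler discretization error.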
Let us first collect some properties of the Hamiltonian flow with surface
hopping. Given the hopping times $T_{j:1} = \{t_j, \cdots, t_1\}$, we
denote the map on the phase space from initial time $0$ to time $t$ by $\kappa_{t, T_{j:1}}$ ($t$ can be smaller than $t_j$ here):
\begin{align*}
\kappa_{t, T_{j:1}}: \quad \R^{2m} & \rightarrow \R^{2m} \\
(q,p) & \longmapsto \left(Q^{\kappa_{t, T_{j:1}}}(q,p), P^{\kappa_{t, T_{j:1}}}(q,p) \right),
\end{align*}
such that
\begin{equation}
\bigl(Q^{\kappa_{t, T_{j:1}}}(q,p), P^{\kappa_{t, T_{j:1}}}(q,p) \bigr)
= \begin{cases}
\bigl(Q^{(0)}(t, q, p), P^{(0)}(t, q, p)\bigr), & t \le t_1; \\
\bigl(Q^{(i)}(t, T_{i:1}, q, p), P^{(i)}(t, T_{i:1}, q, p)\bigr), & t \in [t_i, t_{i+1}], i \in \{1, \ldots, j-1\}; \\
\bigl(Q^{(j)}(t, T_{j:1}, q, p), P^{(j)}(t, T_{j:1}, q, p)\bigr), & t \ge t_j,
\end{cases}
\end{equation}
where the trajectory follows the Hamiltonian flow on one of the energy
surfaces and hops to the other at the hopping times. Let us emphasize
that, due to the continuity condition \eqref{eq:contcond}, even with
surface hopping, the trajectory $(Q, P)$ is still continuous on the
phase space as a function of $t$.
The following proposition states that, for any possible number of hops
and any sequence of hopping times, the trajectory under $\kappa_{t, T}$
remains (uniformly) in a compact set.
\begin{proposition}\label{prop:comp}
Given $t>0$ and a compact subset $K \subset \R^{2m}$, there exists a
compact set $K_t \subset \R^{2m}$, such that $\forall j \in \N$,
$\forall \delta > 0$ and any sequence of hopping times
$T_{j:1} \subset [0,t]$
\begin{equation}
\kappa_{t, T_{j:1}}(K) \subset K_t,
\end{equation}
namely, for any $(q, p) \in K$ and any $s \in [0, t]$
\begin{equation}\label{comp2}
\left(Q^{\kappa_{s, T_{j:1}}}(q,p), P^{\kappa_{s, T_{j:1}}}(q,p) \right) \in K_{t}.
\end{equation}
\end{proposition}
\begin{proof}
Fix an arbitrary sequence of hopping times $T_{j:1} \subset [0, t]$.
Any time $s \in [0, t]$ belongs to one of the intervals
$(t_k, t_{k+1})$ for $ k = 0, \cdots, j$ (we identify $t_0 = 0$ and
$t_{j+1} = t$). Denoting the index of the energy surface during the time
interval $(t_k, t_{k+1})$ by $l_k$, we then have
\begin{align*}
& \frac{\ud}{\ud s}\Abs{P^{\kappa_{s,T_{j:1}}}} = \frac{P^{\kappa_{s,T_{j:1}}}}{\Abs{P^{\kappa_{s,T_{j:1}}}}} \cdot \frac{\ud}{\ud s} P^{\kappa_{s,T_{j:1}}} = - \frac{P^{\kappa_{s,T_{j:1}}} \cdot \nabla_Q E_{l_k}}{\Abs{P^{\kappa_{s,T_{j:1}}}}} \le C_E \Bigl(\Abs{Q^{\kappa_{s, T_{j:1}}}} + 1\Bigr), \\
& \frac{\ud}{\ud s}\Abs{Q^{\kappa_{s,T_{j:1}}}} = \frac{Q^{\kappa_{s,T_{j:1}}}}{\Abs{Q^{\kappa_{s,T_{j:1}}}}} \cdot \frac{\ud}{\ud s} Q^{\kappa_{s,T_{j:1}}} = \frac{Q^{\kappa_{s,T_{j:1}}} \cdot P^{\kappa_{s,T_{j:1}}}}{\Abs{Q^{\kappa_{s,T_{j:1}}}}} \le \Abs{P^{\kappa_{s,T_{j:1}}}},
\end{align*}
where we have used the subquadraticity of the Hamiltonian by
Assumption~\ref{assuma} (recall that $C_E$ is uniform with respect to
$\delta$). Therefore,
\begin{equation*}
\frac{\ud}{\ud s} \Bigl( \Abs{P^{\kappa_{s,T_{j:1}}}}^2 + \Abs{Q^{\kappa_{s,T_{j:1}}}}^2 \Bigr) \le 2 (C_E + 1) \Abs{P^{\kappa_{s,T_{j:1}}}} \Bigl( \Abs{Q^{\kappa_{s,T_{j:1}}}} + 1 \Bigr)\le 2 (C_E + 1) \Bigl( \Abs{P^{\kappa_{s,T_{j:1}}}}^2 + \Abs{Q^{\kappa_{s,T_{j:1}}}}^2 + 1\Bigr).
\end{equation*}
Here we emphasize that the constant on the right hand side is
universal in the sense that it does not depend on the particular
hopping time sequence. Gronwall's inequality then gives
\begin{equation*}
\Abs{P^{\kappa_{s,T_{j:1}}}}^2 + \Abs{Q^{\kappa_{s,T_{j:1}}}}^2 + 1 \le \bigl( \abs{p}^2 + \abs{q}^2 + 1 \bigr) e^{2(C_E + 1)s},
\end{equation*}
so $K_t$ can be taken as a closed ball whose radius depends only on $K$, $C_E$ and $t$, which proves the Proposition.
\end{proof}
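As a numerical illustration of the compactness argument (with the hypothetical subquadratic stand-in $E(q)=q^2/2$, so $|\nabla E(q)| \le C_E(|q|+1)$ with $C_E=1$), one can track the Gronwall bound $\Abs{P}^2+\Abs{Q}^2+1 \le (\abs{p}^2+\abs{q}^2+1)e^{2(C_E+1)s}$ along a discretized trajectory:

```python
import math

def gronwall_check(q, p, t=1.0, dt=1e-3, C_E=1.0):
    """Evolve the flow of H = p^2/2 + E(q) with E(q) = q^2/2 (a hypothetical
    stand-in satisfying |E'(q)| <= C_E (|q| + 1)) by explicit Euler, and
    verify |P|^2 + |Q|^2 + 1 <= (|p|^2 + |q|^2 + 1) exp(2 (C_E + 1) s)."""
    base = q * q + p * p + 1.0
    s = 0.0
    while s < t:
        q, p = q + dt * p, p - dt * q  # Euler step of the Hamiltonian flow
        s += dt
        if q * q + p * p + 1.0 > base * math.exp(2.0 * (C_E + 1.0) * s):
            return False
    return True

ok = gronwall_check(2.0, -1.5)
```

For this bounded flow the inequality holds with a large margin at every step, consistent with the uniform compact set $K_t$.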
As a corollary, since the trajectory uniformly stays in a compact set, given the final time $t$, we can take a constant $C_{\tau}$ such that the following estimate holds
\begin{equation} \label{assum1} \sup_{z_0\in K, T_{n:1} \subset [0,t],
n\in \N^+, \, i = 0, 1} |\tau_i^{(n)}(T_{n:1}, z_0)| \le \sup_{z
\in K_t} \max\bigl\{ \abs{p \cdot d^\delta_{10}(q)}, \abs{p \cdot
d^\delta_{01}(q)} \bigr\} \leq C_{\tau},
\end{equation}
where the second inequality uses Assumption~\ref{assumb}; recall
that the constants are uniform with respect to $\delta$. Thus,
the coupling coefficients stay $\Or(1)$ along all possible FGA
trajectories.
For a transformation of the phase space $\kappa: \R^{2m} \to \R^{2m}$,
we denote its Jacobian matrix as
\begin{equation}\label{jacobi}
J^{\kappa}(q,p)=
\begin{pmatrix}
\left(\partial_q Q^{\kappa} \right)^T (q,p) & \left(\partial_p Q^{\kappa} \right)^T (q,p) \\
\left(\partial_q P^{\kappa} \right)^T (q,p) & \left(\partial_p
P^{\kappa} \right)^T (q,p)
\end{pmatrix}.
\end{equation}
We say the transform $\kappa$ is \emph{canonical} if
$J^{\kappa}$ is symplectic for any $(q,p)\in \R^{2m}$, namely,
\begin{equation}\label{cond:symp}
\left( J^{\kappa} \right)^T
\begin{pmatrix}
0 & I_m \\
-I_{m} & 0
\end{pmatrix} J^{\kappa} =
\begin{pmatrix}
0 & I_m \\
-I_{m} & 0
\end{pmatrix}.
\end{equation}
Here, $I_m$ denotes the $m\times m$ identity matrix.
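For $m=1$ the symplectic condition \eqref{cond:symp} reduces to $\det J^{\kappa} = 1$; the following sketch (using the exactly solvable harmonic flow $H = (p^2+q^2)/2$ as a hypothetical stand-in) verifies this via finite differences of the flow map:

```python
import math

def flow(q, p, t):
    # Exact Hamiltonian flow of H = (p^2 + q^2)/2 (m = 1)
    c, s = math.cos(t), math.sin(t)
    return c * q + s * p, -s * q + c * p

def jacobian(q, p, t, h=1e-6):
    # Central finite-difference Jacobian [[dQ/dq, dQ/dp], [dP/dq, dP/dp]]
    Qq = (flow(q + h, p, t)[0] - flow(q - h, p, t)[0]) / (2 * h)
    Qp = (flow(q, p + h, t)[0] - flow(q, p - h, t)[0]) / (2 * h)
    Pq = (flow(q + h, p, t)[1] - flow(q - h, p, t)[1]) / (2 * h)
    Pp = (flow(q, p + h, t)[1] - flow(q, p - h, t)[1]) / (2 * h)
    return [[Qq, Qp], [Pq, Pp]]

J = jacobian(0.3, -0.5, 1.2)
# For m = 1 the symplectic condition J^T Omega J = Omega reduces to det J = 1
symp = J[0][0] * J[1][1] - J[0][1] * J[1][0]
```

Since the flow map is linear here, the finite differences recover $\cos t$ and $\sin t$ up to roundoff, and $\det J = \cos^2 t + \sin^2 t = 1$.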
The map given by the FGA trajectories $\kappa_t$ is always canonical,
as stated in the following proposition, which also gives bounds on the
Jacobian and its derivatives.
\begin{proposition}
Given $t>0$ and a compact subset $K \subset \R^{2m}$, the associated
map $\kappa_{t, T_{j:1}}$ is a canonical transformation for any
sequence of hopping times $T_{j:1}$, $\forall\, j$. Moreover, for
any $k \in \N$, there exists a constant $C_{k}$ such that
\begin{equation}\label{est:devF}
\sup_{(q, p) \in K} \max_{\abs{\alpha_p} + \abs{\alpha_q} \le k} \Abs{ \partial_q^{\alpha_q} \partial_p^{\alpha_p} \bigl[J^{\kappa_{t, T_{j:1}}}(q, p)\bigr]} \le C_{k},
\end{equation}
uniformly for any $j \in \N$, any $\delta$ and any sequence of
hopping times $T_{j:1}$.
\end{proposition}
\begin{proof}
Recall that the time evolution of $(Q^{\kappa}, P^{\kappa})$ is
defined piecewise in the time intervals between hops, and
remains continuous at the hopping times. During each time interval,
the symplectic condition \eqref{cond:symp} is clearly satisfied by
the Hamiltonian flow. The continuity condition guarantees the
validity of symplectic relation at the hopping times. Therefore, the
map $\kappa_{t,T_{j:1}}$ is a canonical transform.
Any time $s \in [0, t]$ belongs to one of the intervals
$(t_k, t_{k+1})$ for $ k = 0, \cdots, j$ (we identify $t_0 = 0$ and
$t_{j+1} = t$). Denoting the index of the energy surface during the
time interval $(t_k, t_{k+1})$ by $l_k$, we then have, by
differentiating $J^{\kappa_{s,T_{j:1}}}$ with
respect to $s$,
\begin{equation}\label{eq:F}
\frac{\ud}{\ud s} J^{\kappa_{s,T_{j:1}}}=
\begin{pmatrix}
\partial_P\partial_Q H_{l_k} & \partial_P\partial_P H_{l_k} \\
-\partial_Q\partial_Q H_{l_k} & -\partial_Q\partial_P H_{l_k}
\end{pmatrix} J^{\kappa_{s,T_{j:1}}}.
\end{equation}
Then, Assumption~\ref{assuma} implies there exists a constant $C$ such that
\begin{equation}
\frac{\ud}{\ud s} \Abs{J^{\kappa_{s,T_{j:1}}}}
\le \left\lvert \begin{pmatrix}
\partial_P\partial_Q H_{l_k} & \partial_P\partial_P H_{l_k} \\
-\partial_Q\partial_Q H_{l_k} & -\partial_Q\partial_P H_{l_k}
\end{pmatrix} \right\rvert \Abs{J^{\kappa_{s,T_{j:1}}}} \le C
\Abs{J^{\kappa_{s,T_{j:1}}}}.
\end{equation}
It is worth emphasizing that this constant $C$ is independent of the
hopping time sequence and $\delta$. The boundedness of
$|J^{\kappa_{t,T_{j:1}}}|$ then follows immediately from Gronwall's
inequality and the fact that $|J^{\kappa_{0, T_{j:1}}}|=1$ since
$\kappa_{0, T_{j:1}}$ is just an identity map. To get the estimate
for derivatives of $J$, we differentiate the equation \eqref{eq:F}
with respect to $(q, p)$ and use an induction argument, and we omit
the straightforward calculations here.
\end{proof}
For a canonical transform $\kappa$, we define
\[
Z^{\kappa} (q,p) = \partial_z \left( Q^{\kappa} (q,p)+ i P^{\kappa}(q,p) \right),
\]
where $\partial_z = \partial_q - i \partial_p$; $Z^{\kappa}$ is a
complex-valued $m \times m$ matrix. By mimicking the proof of
\cite{FGA_Conv}*{Lemma 5.1} and using the above Proposition, we obtain the following properties of $Z^{\kappa}$.
\begin{proposition}
Given $t>0$ and a compact subset $K \subset \R^{2m}$, for any
sequence of hopping times $T_{j:1}$, $\forall\, j$, $Z^{\kappa_{t,
T_{j:1}}}$ is invertible. Moreover, for any $k \in \N$, there
exists a constant $C_{k}$ such that
\begin{equation}\label{est:devZ}
\sup_{(q, p) \in K} \max_{\abs{\alpha_p} + \abs{\alpha_q} \le k} \Abs{ \partial_q^{\alpha_q} \partial_p^{\alpha_p} \bigl[ \bigl(Z^{\kappa_{t, T_{j:1}}}(q, p)\bigr)^{-1}\bigr]} \le C_{k},
\end{equation}
uniformly for any $j \in \N$, any $\delta > 0$ and any sequence of
hopping times $T_{j:1}$.
\end{proposition}
For the frozen Gaussian approximation, it is useful to introduce the
following Fourier integral operator. For
$M \in L^\infty (\R^{2m}; \C)$, $u\in \mathcal S (\R^m;\C)$, and an FGA
flow denoted by $\kappa_{t,T_{j:1}}$ with $T_{j:1} \subset [0, t]$, we
define
\begin{equation}
\Bigl(\mathcal I_{\kappa_{t,T_{j:1}}}^\veps (M) u\Bigr) (x) = (2 \pi \veps)^{- \frac{3m}{2} } \int_{\R^m}\int_{\R^{2m}} \exp \Bigl( \frac i \veps \Phi^{(j)} (t, x, y, z) \Bigr) M(z) u(y) \ud z \ud y,
\end{equation}
where the phase function $\Phi^{(j)}$ is given by
\[
\Phi^{(j)} (t,x,y,z)= S^{(j)}(t,T_{j:1},z) + \frac{i}{2}
\bigl\lvert x-Q^{(j)}(t,T_{j:1},z)\bigr\rvert^2 + P^{(j)}(t,T_{j:1},z)
\cdot \bigl(x-Q^{(j)}(t,T_{j:1},z)\bigr) + \frac{i}{2}|y-q|^2 - p\cdot (y-q),
\]
with the FGA variables $P^{(j)}, Q^{(j)}, S^{(j)}$ evolved as in the surface hopping ansatz for the given $t$ and hopping time sequence $T_{j:1}$.
With this Fourier integral operator, we may rewrite the surface hopping ansatz \eqref{eq:u0n} for $u_0^{(n)}$ as
\begin{equation*}
u_0^{(n)}(t)= \int_{0<t_1<\cdots<t_n<T} \ud T_{n:1} \; \mathcal{I}_{\kappa_{t,T_{n:1}}}^\veps \Bigl( a_i^{(n)} \prod_{j=1}^n \tau_i^{(j)} \chi_K\Bigr) u_{0}(0),
\end{equation*}
where $\chi_K$ is the characteristic function on the set $K$, which
restricts the initial $z_0$ in the FGA ansatz. This representation is
particularly convenient for our estimates, as we have the following
proposition for the norm of the Fourier integral operators. The
version of the Proposition without hopping was proved in
\cite{FGA_Conv}*{Proposition 3.7}; the proof carries over to the
current situation almost verbatim (with some notational changes),
and thus we skip the details here.
\begin{proposition}\label{operator}
For any $t$, any $j \in \N$, and any hopping time sequence
$\{t_1, t_2, \cdots, t_j\}$, denoting the
symplectic transform of the FGA with surface hopping flow by
$\kappa_{t,T_{j:1}}$, the operator
$\mathcal I_{\kappa_{t,T_{j:1}}}^\veps(M)$ can be extended to a
linear bounded operator on $L^2(\R^m,\C)$, and we have
\begin{equation}
\biggl\lVert \mathcal I_{\kappa_{t,T_{j:1}}}^\veps (M) \biggr\rVert_{\mathcal L (L^2(\R^m;\,\C))} \le 2^{-\frac{m}{2}} \norm{M}_{L^\infty (\R^{2m};\, \C)}.
\end{equation}
\end{proposition}
\subsection{Absolute convergence of surface hopping ansatz}
\label{sec:absconv}
Now we estimate the contribution of the terms in the FGA ansatz.
\begin{proposition}\label{FI}
For a given time $t$, there exists a constant $C_a$, depending only on
$t$ and the initial conditions of the FGA variables, such that for
any $n\in \N$ and any hopping time sequence $T_{n:1} \subset
[0,t]$, it holds that
\begin{equation}\label{est1}
\Biggl\lVert \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0 \;
\prod_{j=1}^n \tau^{(j)}(T_{j:1}, z_0)
A^{(n)}(t, T_{n:1}, z_0) \exp\Bigl( \frac{i}{\veps}
\Theta^{(n)}\bigl(t,T_{n:1}, z_0,x\bigr) \Bigr)
\Biggr\rVert_{L^2(\R^m)} \le C_a.
\end{equation}
\end{proposition}
\begin{proof}
Recall that we have
\begin{multline*}
\frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0 \;
\prod_{j=1}^n \tau^{(j)}(T_{j:1}, z_0) A^{(n)}(t, T_{n:1}, z_0) \exp\Bigl( \frac{i}{\veps} \Theta^{(n)}(T,T_{n:1}, z_0,x) \Bigr) \\
= \Bigl(\mathcal I_{\kappa_{t,T_{n:1}}}^\veps \Bigl(\prod_{j=1}^n
\tau^{(j)}(T_{j:1}, \cdot) a^{(n)}(t, T_{n:1}, \cdot) \chi_{K}
\Bigr) u_{0}(0)\Bigr)(x).
\end{multline*}
Thus, using Proposition~\ref{operator} and the bound \eqref{assum1}
for the hopping coefficient $\tau$'s, it suffices to control
$a^{(n)}(t, T_{n:1}, z_0)$ for $z_0 \in K$. Recall that
\[
A^{(k)}(t, T_{k:1}, z_0) =a^{(k)}(t, T_{k:1}, z_0) \int_{\R^m}
u_{0}(0, y) e^{\frac{i}{\veps} (-p\cdot(y-q)+ \frac{i}{2}|y-q|^2)}
\ud y,
\]
and hence $a^{(k)}$ satisfies the same linear equation as that for
$C^{(k)}$ with the continuity conditions at the hopping times,
depending on whether $k$ is even or odd:
\begin{align*}
\frac{\ud}{\ud t} a^{(k)} & = \frac 1 2 a^{(k)} \tr\left(
(Z^{(k)})^{-1}\left(\partial_z P^{(k)} - i \partial_z Q^{(k)}
\nabla^2_Q E_0(Q^{(k)}) \right) \right) - a^{(k)} d_{00}\cdot
P^{(k)}, \qquad k \text{ even}; \\
\frac{\ud}{\ud t} a^{(k)} & = \frac 1 2 a^{(k)} \tr\left(
(Z^{(k)})^{-1}\left(\partial_z P^{(k)} - i \partial_z Q^{(k)}
\nabla^2_Q E_1(Q^{(k)}) \right) \right) - a^{(k)} d_{11}\cdot
P^{(k)}, \qquad k \text{ odd}.
\end{align*}
Note that the coefficients on the right hand side are all uniformly
bounded along the trajectory thanks to \eqref{comp2},
\eqref{assum1}, \eqref{est:devF}, and \eqref{est:devZ} in the
preliminaries. Therefore, $a^{(n)}$ is also bounded uniformly with
respect to all hopping sequences, which concludes the proof.
\end{proof}
With Proposition~\ref{FI}, we further estimate the contribution $u^{(n)}$ in the surface hopping ansatz.
\begin{theorem}\label{thm:ansatz}
Under Assumptions~\ref{assuma} and \ref{assumb}, given a fixed final
time $t$, there exist constants $C_t$ and $C$, independent of
$\veps$ and $\delta$, such that for any $n \in \N$, we have
\[
\Norm{u^{(n)}(t, x)}_{L^2(\R^m)} \le C \frac{ (C_t)^{n}}{n!},
\quad\text{and}\quad \Norm{\veps \nabla_x u^{(n)}(t, x)}_{L^2(\R^m)}
\le C \frac{ (C_t)^{n}}{n!}.
\]
In particular, the surface hopping ansatz \eqref{ansatz2} is
absolutely convergent.
\end{theorem}
\begin{proof}
Note that
\begin{equation*}
u^{(n)} = \int_{0<t_1<\cdots<t_n<t} \ud T_{n:1} \; \mathcal{I}_{\kappa_{t, T_{n:1}}}^{\veps} \Bigl( \prod_{j=1}^n \tau^{(j)}( T_{j:1}, \cdot) a^{(n)}(t, T_{n:1}, \cdot) \chi_K \Bigr) u_0.
\end{equation*}
We estimate using Proposition~\ref{FI}
\begin{align*}
\Norm{u^{(n)}}_{L^2(\R^m)} & \le \int_{0<t_1<\cdots<t_n<t} \ud T_{n:1} \; \Biggl\lVert \mathcal{I}_{\kappa_{t, T_{n:1}}}^{\veps} \Bigl( \prod_{j=1}^n \tau^{(j)}( T_{j:1}, \cdot) a^{(n)}(t, T_{n:1}, \cdot) \chi_K \Bigr) u_0 \Biggr\rVert_{L^2(\R^m)} \\
& \le C \int_{0<t_1<\cdots<t_n<t} \ud T_{n:1} \; C_{\tau}^n = C \frac{(tC_{\tau})^n}{n!}.
\end{align*}
The absolute convergence of $U_{\FGA}(t, x)$ then follows from
dominated convergence.
The control of $\veps \nabla_x u^{(n)}$ is quite similar, except that we
shall use Lemma~\ref{lem:asym} to control the term $(x - Q^{(n)})$
resulting from the gradient.
Indeed,
\begin{multline}
\veps \nabla_x u^{(n)}(t, x) = \frac{1}{(2\pi\veps)^{3m/2}} \int_{K} \ud z_0 \int_{0<t_1<\cdots<t_n<t} \ud T_{n:1} \; \tau^{(1)}(T_{1:1}, z_0)\cdots \tau^{(n)} (T_{n:1},z_0) \times \\
\times i \left( P^{(n)}+ i (x - Q^{(n)}) \right) A^{(n)}(t, T_{n:1}, z_0) \exp\left( \frac{i}{\veps}
\Theta^{(n)}(t,T_{n:1}, z_0, x) \right).
\end{multline}
The control of the term involving $P^{(n)}$ is the same as that of
$u^{(n)}$. By Lemma~\ref{lem:asym}, the term involving $(x - Q^{(n)})$
is of even higher order in $\veps$, which follows from the estimate
\eqref{est:devZ} and a slight variation of Proposition~\ref{FI}.
\end{proof}
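The factorial bound means that the contributions of high hop numbers decay extremely fast, which is what makes truncating the ansatz at a modest number of hops practical. A small numerical illustration of the majorizing series $\sum_n C_t^n/n! = e^{C_t}$, with a hypothetical value $C_t = 2$:

```python
import math

def hop_bound_partial_sum(C_t, N):
    # Partial sums of the majorizing series  sum_{n=0}^{N} C_t^n / n!  -> exp(C_t)
    return sum(C_t ** n / math.factorial(n) for n in range(N + 1))

C_t = 2.0  # hypothetical value of t * C_tau
tail_after_10 = math.exp(C_t) - hop_bound_partial_sum(C_t, 10)
```

Already after ten terms the remaining tail is below $10^{-4}$ for this value of $C_t$, consistent with the absolute convergence asserted in Theorem~\ref{thm:ansatz}.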
\subsection{The analysis of approximation error}\label{sec:error}
We now estimate the approximation error of the FGA approximation with
surface hopping to the matrix Schr\"odinger equation \eqref{vSE}. We
first state a consistency result by estimating the error of
substituting $U_{\FGA}$ into \eqref{vSE}. All the estimates and
constants below are uniform in $\delta$.
\begin{theorem}\label{thm:consistency}
Under Assumptions~\ref{assuma} and \ref{assumb}, given a final time $t$, there exists a constant $C_t$, such that
\begin{multline*}
\Biggl\lVert
i \veps \partial_t U_{\FGA} + \frac{\veps^2}{2} \Delta_x
U_{\FGA} +
\begin{pmatrix}
E_0 \\
& E_1
\end{pmatrix} U_{\FGA} -\red{\frac{\veps^2}{2}} \begin{pmatrix}
D_{00} & D_{01} \\
D_{10} & D_{11}
\end{pmatrix} U_{\FGA} - \red{\veps^2} \sum_{j=1}^m
\begin{pmatrix}
d_{00} & d_{01} \\
d_{10} & d_{11}
\end{pmatrix}_j
\partial_{x_j} U_{\FGA} \Biggl\rVert_{L^2(\R^m)}
\le \veps^2 e^{C_t}.
\end{multline*}
\end{theorem}
\begin{proof}
We first consider the term arising from the time derivative and denote, for $j$ even,
\begin{equation*}
I^j_1 +I^j_2 =
i \veps \partial_t
\begin{pmatrix}
u_0^{(j)} \\
0
\end{pmatrix},
\end{equation*}
where
\begin{equation}
I^j_1 =
\begin{pmatrix}\dps
i \veps \frac{1}{(2\pi \veps)^{3m/2}} \int_{K} \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots \tau^{(j)} \partial_t \Bigl[ A^{(j)}\exp \Bigl(\frac i \veps \Theta^{(j)} \Bigr) \Bigr] \\
0
\end{pmatrix}
\end{equation}
coming from the time derivative acting on the integrand, and for $j \ge 1$,
\begin{equation}
I^j_2 =
\begin{pmatrix}\dps
i \veps \frac{1}{(2\pi \veps)^{3m/2}} \left[ \int_{K} \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots \tau^{(j)} A^{(j)}\exp \Bigl(\frac i \veps \Theta_0^{(j)} \Bigr) \right]_{t_j=t} \\
0
\end{pmatrix}
\end{equation}
resulting from the time derivative acting on the upper limit of the
integral. The expression for odd $j$ is similar except that the top
and bottom rows are flipped: $i\veps \partial_t
\bigl( \begin{smallmatrix} 0 \\ u_0^{(j)} \end{smallmatrix} \bigr)$.
We also write the terms from the right hand side of \eqref{vSE} for
even $j$ as:
\begin{equation*}
I^j_3 + I^j_4 =
-\frac{\veps^2}{2} \Delta_x \begin{pmatrix} u^{(j)} \\
0 \end{pmatrix} +
\begin{pmatrix}
E_0 \\
& E_1
\end{pmatrix} \begin{pmatrix} u^{(j)} \\
0 \end{pmatrix} -\red{\frac{\veps^2}{2}} \begin{pmatrix}
D_{00} & D_{01} \\
D_{10} & D_{11}
\end{pmatrix} \begin{pmatrix} u^{(j)} \\
0 \end{pmatrix} -\red{\veps^2} \sum_{l=1}^m
\begin{pmatrix}
d_{00} & d_{01} \\
d_{10} & d_{11}
\end{pmatrix}_l
\partial_{x_l} \begin{pmatrix} u^{(j)} \\
0 \end{pmatrix}
\end{equation*}
where
\begin{align}
I^j_3 & =
\begin{pmatrix}
\bigl(H_0 - \frac{\veps^2}{2} D_{00} \bigr) u^{(j)}- \veps^2
d_{00} \cdot \nabla_x u^{(j)} \\
0
\end{pmatrix}, \\
I^j_4 & =
\begin{pmatrix}
0 \\
- \veps^2 d_{01} \cdot \nabla_x u^{(j)} - \frac{\veps^2}{2}
D_{01} u^{(j)}
\end{pmatrix}.
\end{align}
Here, $I^j_3$ contains all the terms which govern the intra-surface evolution on each energy surface, while $I^j_4$ contains the coupling terms (note that the subscripts are swapped). The expressions for odd $j$ are similar.
Denote $U_{\FGA}^n$ the sum of the first $n$ terms in the FGA ansatz
\eqref{eq:FGAU}. Summing over the contributions up to $u^{(n)}$, we
have
\begin{multline*}
i \veps \partial_t U_{\FGA}^n + \frac{\veps^2}{2} \Delta_x
U_{\FGA}^n +
\begin{pmatrix}
E_0 \\
& E_1
\end{pmatrix} U_{\FGA}^n -\red {\frac{\veps^2}{2}} \begin{pmatrix}
D_{00} & D_{01} \\
D_{10} & D_{11}
\end{pmatrix} U_{\FGA}^n -\red {\veps^2} \sum_{l=1}^m
\begin{pmatrix}
d_{00} & d_{01} \\
d_{10} & d_{11}
\end{pmatrix}_l
\partial_{x_l} U_{\FGA}^n \\
=\sum_{j=0}^n I^j_1 + \sum_{j=1}^n I^j_2 - \sum_{j=0}^n I^j_3 -
\sum_{j=0}^n I^j_4 = \sum_{j=0}^n \left(I^j_1-I^j_3\right)+
\sum_{j=0}^{n-1} \left(I^{j+1}_2-I^j_4\right) + I^n_4.
\end{multline*}
We now estimate the three terms on the right hand side.
\smallskip
\noindent\emph{Term} $I_4^n$: By Theorem~\ref{thm:ansatz}, we have
\begin{equation}\label{eq:I4n}
\begin{aligned}
\bigl\lVert I_4^n \bigr\rVert_{L^2(\R^m)} & \le C \veps \norm{\veps \nabla_x u^{(n)}}_{L^2(\R^m)} + C \veps^2 \norm{u^{(n)}}_{L^2(\R^m)}
\le C \veps \frac{(C_t)^n}{n!}.
\end{aligned}
\end{equation}
\smallskip
\noindent
\emph{Term $\sum (I^j_1-I^j_3)$}: The difference $(I^j_1 - I^j_3)$
contains all the formally $O(\veps^2)$ terms we have dropped in
determining the equation for $A^{(j)}$. To estimate those, we use
the Taylor expansions with respect to the beam center $Q$
\begin{align*}
E_k(x) & = \sum_{|\alpha|\le 3} \frac{\partial_\alpha E_k(Q)}{\alpha !} (x-Q)^\alpha+ R_{4,Q}[E_k]; \\
d_{kl}(x) & = \sum_{|\alpha|\le 1} \frac{\partial_\alpha
d_{kl}(Q)}{\alpha !} (x-Q)^\alpha+ R_{2,Q}[d_{kl}],
\end{align*}
where $R_{k,Q}[f]$ denotes the $k$-th order remainder in the Taylor
expansion of the function $f$ at $Q$. We then have
\begin{equation*}
\bigl\lVert I^j_1-I^j_3 \bigr\rVert_{L^2(\R^m)} \le I^j_{11} + I^j_{12} + I^j_{13},
\end{equation*}
where ($k = 0$ if $j$ even and $k = 1$ if $j$ odd)
\begin{align*}
I^j_{11} & = \sum_{|\alpha|=3} \left\| \frac{1}{(2\pi\veps)^{3m/2}}
\int_K \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1}\;
\tau^{(1)}\cdots \tau^{(j)}
A^{(j)} e^{\frac{i}{\veps} \Theta^{(j)}} \frac{\partial_\alpha E_k(Q^{(j)})}{\alpha !} (x-Q^{(j)})^\alpha \right\|_{L^2} \\
& \quad + \veps \sum_{|\alpha|=1} \left\|
\frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1}\; \tau^{(1)}\cdots \tau^{(j)}
A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}} P^{(j)} \cdot \frac{\partial_\alpha d_{kk}(Q^{(j)})}{\alpha !} (x-Q^{(j)})^\alpha \right\|_{L^2} \\
& \quad +\veps \left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0
\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots
\tau^{(j)} A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}}
d_{kk}(Q^{(j)}) \cdot (x-Q^{(j)}) \right\|_{L^2}, \\
I^j_{12} &= \left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0
\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots
\tau^{(j)}
A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}} R_{4,Q^{(j)}}[E_k] \right\|_{L^2} \\
& \quad + \veps \left\|\frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0
\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots
\tau^{(j)} A^{(j)}e^{ \frac{i}{\veps} \Theta^{(j)}}
P^{(j)} \cdot R_{2,Q^{(j)}}[d_{kk}] \right\|_{L^2} \\
& \quad + \veps \left\|\frac{1}{(2\pi\veps)^{3m/2}}
\int_K \ud z_0\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \;
\tau^{(1)}\cdots \tau^{(j)} A^{(j)}e^{ \frac{i}{\veps}
\Theta^{(j)}}R_{1,Q^{(j)}}[d_{kk}] \cdot
(x-Q^{(j)})\right\|_{L^2}, \\
I^j_{13} & = \frac{\veps^2}{2} \left\|D_{kk} u^{(j)}\right\|_{L^2}.
\end{align*}
Here, $I^j_{11}$ contains the next order Taylor expansion terms after asymptotic matching, $I^j_{12}$ contains the remainder terms in the Taylor expansions, and $I^j_{13}$ contains the contribution from $D_{kk}$.
To estimate $I_{11}^j$, note that by Assumption~\ref{assuma},
Proposition~\ref{operator}, and Lemma~\ref{lem:asym}, we have for
$\abs{\alpha} = 3$
\[
\left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0\; A^{(j)} e^{\frac{i}{\veps} \Theta^{(j)}} \frac{\partial_\alpha E_k(Q^{(j)})}{\alpha !} (x-Q^{(j)})^\alpha \right\|_{L^2} \le C \veps^2.
\]
Thus by a similar calculation as in the proof of
Theorem~\ref{thm:ansatz}, we obtain
\[
\left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots \tau^{(j)}
A^{(j)} e^{\frac{i}{\veps} \Theta^{(j)}} \frac{\partial_\alpha E_k(Q^{(j)})}{\alpha !} (x-Q^{(j)})^\alpha \right\|_{L^2}\le C \veps^2 \frac{(C_t)^j}{j !}.
\]
The other two terms in $I_{11}^j$ can be estimated similarly and controlled by the same bound, which yields
\begin{equation}\label{eq:I11}
I_{11}^j \le C \veps^2 \frac{(C_t)^j}{j!}.
\end{equation}
The estimate of the term $I_{12}^j$ is similar, since by Lemma~\ref{lem:asym} the powers $(x - Q)^{\alpha}$ are of higher order in $\veps$. In particular, we have
\begin{equation*}
\left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_{K} \ud z_0\; A^{(j)} e^{\frac{i}{\veps} \Theta^{(j)}} R_{4,Q^{(j)}}[E_k] \right\|_{L^2}
\leq C \sum_{|\alpha|=4} \left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_{K}
\ud z_0\; A^{(j)} e^{\frac{i}{\veps} \Theta^{(j)}}
(x-Q^{(j)})^{\alpha}\right\|_{L^2} = O(\veps^2),
\end{equation*}
and hence,
\begin{equation*}
\left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_{K} \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots \tau^{(j)}
A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}}R_{4,Q^{(j)}}[E_k] \right\|_{L^2} \le C \veps^2 \frac{(C_t)^j}{j!}.
\end{equation*}
The other two terms in $I_{12}^j$ can be similarly bounded, and we arrive at
\begin{equation}\label{eq:I12}
I_{12}^j \le C \veps^2 \frac{(C_t)^j}{j!}.
\end{equation}
The $I_{13}^j$ term can be estimated using Assumption~\ref{assumb} and Theorem~\ref{thm:ansatz}, which yields
\begin{equation}\label{eq:I13}
I_{13}^j \le C \veps^2 \norm{u^{(j)}}_{L^2(\R^m)} \le C \veps^2 \frac{(C_t)^j}{j!}.
\end{equation}
Now adding up \eqref{eq:I11}, \eqref{eq:I12}, and \eqref{eq:I13} from
$j = 0$ to $n$, we get
\begin{equation}\label{eq:I1I3}
\sum_{j=0}^n \norm{I_1^j - I_3^j}_{L^2(\R^m)} \le C \veps^2 \sum_{j=0}^n \frac{(C_t)^j}{j!} \le C \veps^2 e^{C_t}.
\end{equation}
\smallskip
\noindent
\emph{Term $\sum (I^{j+1}_2-I^j_4)$:} The difference $(I^{j+1}_2 -
I^j_4)$ contains all the formally $O(\veps^2)$ terms we have dropped
in specifying the hopping coefficients $\tau^{(j)}$. By Taylor
expansion,
\begin{equation*}
\bigl\lVert I^{j+1}_2-I^j_4 \bigr\rVert_{L^2(\R^m)} \le I^j_{21} + I^j_{22} + I^j_{23},
\end{equation*}
where (for $j$ even, the formula for odd $j$ is similar except that
$d_{01}, D_{01}$ change to $d_{10}, D_{10}$ respectively)
\begin{align*}
I^j_{21} & = \veps \sum_{|\alpha|=1} \left\|
\frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1}\; \tau^{(1)}\cdots \tau^{(j)}
A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}} P^{(j)} \cdot \frac{\partial_\alpha d_{01}(Q^{(j)})}{\alpha !} (x-Q^{(j)})^\alpha \right\|_{L^2} \\
& \quad +\veps \left\| \frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0
\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots
\tau^{(j)} A^{(j)} e^{ \frac{i}{\veps} \Theta^{(j)}}
d_{01}(Q^{(j)}) \cdot (x-Q^{(j)}) \right\|_{L^2}, \\
I^j_{22} &= \veps \left\|\frac{1}{(2\pi\veps)^{3m/2}} \int_K \ud z_0
\int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \; \tau^{(1)}\cdots
\tau^{(j)} A^{(j)}e^{ \frac{i}{\veps} \Theta^{(j)}}
P^{(j)} \cdot R_{2,Q^{(j)}}[d_{01}] \right\|_{L^2} \\
& \quad + \veps \left\|\frac{1}{(2\pi\veps)^{3m/2}}
\int_K \ud z_0 \int_{0<t_1<\cdots<t_j<t} \ud T_{j:1} \;
\tau^{(1)}\cdots \tau^{(j)} A^{(j)}e^{ \frac{i}{\veps}
\Theta^{(j)}}R_{1,Q^{(j)}}[d_{01}] \cdot
(x-Q^{(j)})\right\|_{L^2}, \\
I^j_{23} & = \frac{\veps^2}{2} \left\|D_{01} u^{(j)}\right\|_{L^2}.
\end{align*}
The estimates of these terms are similar to those we have done for the
terms arising from $\left(I^j_1-I^j_3\right)$, and hence we omit the
details. We get
\begin{equation}\label{eq:I2I4}
\sum_{j=0}^n \norm{I_2^{j+1} - I_4^{j}}_{L^2(\R^m)} \lesssim \veps^2 \sum_{j=0}^n \frac{(C_t)^j}{j!} \le C \veps^2 e^{C_t}.
\end{equation}
\smallskip
Therefore, putting together \eqref{eq:I4n}, \eqref{eq:I1I3}, \eqref{eq:I2I4}, we get
\begin{multline*}
\Biggl\lVert
i \veps \partial_t U_{\FGA}^n + \frac{\veps^2}{2} \Delta_x
U_{\FGA}^n +
\begin{pmatrix}
E_0 \\
& E_1
\end{pmatrix} U_{\FGA}^n -\red {\frac{\veps^2}{2}} \begin{pmatrix}
D_{00} & D_{01} \\
D_{10} & D_{11}
\end{pmatrix} U_{\FGA}^n -\red{\veps^2} \sum_{j=1}^d
\begin{pmatrix}
d_{00} & d_{01} \\
d_{10} & d_{11}
\end{pmatrix}_j
\partial_{x_j} U_{\FGA}^n \Biggr\rVert_{L^2(\R^m)} \\
\le C \veps \frac{(C_t)^n}{n!} + C \veps^2 e^{C_t}.
\end{multline*}
Taking the limit $n \to \infty$ and increasing $C_t$ to absorb the
constant $C$ above, we arrive at the conclusion.
\end{proof}
To control the propagation of the consistency error of the FGA solution in time, we need the next lemma.
\begin{lemma}\label{lem:conv}
Suppose $H^\veps$ is a family of self-adjoint operators for $\veps > 0$, and let $\phi^\veps(t)$ be a time-dependent wave function that belongs to the domain of $H^\veps$ and is continuously differentiable in $t$. In addition, suppose $\phi^\veps(t)$ satisfies the following equation,
\begin{equation} \label{eq:approx_remainder}
\left( i \veps \frac{\partial}{\partial t} - H^\veps \right) \phi^\veps (t) = \zeta^\veps(t),
\end{equation}
where the remainder $\zeta^{\veps}$ satisfies the estimate
\[
\|\zeta^\veps (t)\|_{L^2} \le \mu^\veps(t).
\]
Let $\widetilde \phi^\veps$ be the solution to the Schr\"odinger equation with Hamiltonian $H^\veps$, and suppose the initial error satisfies
\[
\|\phi^\veps(0)-\widetilde \phi^\veps (0)\|_{L^2} \le e_0.
\]
Then
\begin{equation}\label{eq:stab_est}
\|\phi^\veps(t)-\widetilde \phi^\veps (t)\|_{L^2} \le e_0 + \frac{\int_0^t \mu^\veps (s) \ud s}{ \veps}.
\end{equation}
\end{lemma}
\begin{proof}
Since $H^\veps$ is self-adjoint, it generates a unitary propagator $\mathcal U^\veps(t,s)=\exp\left(-i (t-s) H^\veps /\veps\right)$, such that
\[
\mathcal U^\veps (t,s) \widetilde \phi^\veps (s) = \widetilde \phi^\veps(t).
\]
Therefore, we obtain,
\begin{align*}
\|\phi^\veps(t)-\widetilde \phi^\veps (t)\|_{L^2} & = \|\phi^\veps(t)- \mathcal U^\veps (t,0) \widetilde \phi^\veps (0)\|_{L^2} \\
&= \| \mathcal U^\veps (0,t) \phi^\veps(t)- \widetilde \phi^\veps (0)\|_{L^2}.
\end{align*}
Here, we have used $ (\mathcal U^\veps)^{-1} (t,0)=\mathcal U^\veps (0,t)$. Then, by the triangle inequality, we have
\begin{align*}
\|\phi^\veps(t)-\widetilde \phi^\veps (t)\|_{L^2} & \le \| \mathcal U^\veps (0,t) \phi^\veps(t)- \phi^\veps (0)\|_{L^2} + e_0 \\[5 pt]
& = \left\| \int_0^t \frac{\partial}{\partial s} \left( \mathcal U^\veps (0,s) \phi^\veps(s) \right) \ud s \right\|_{L^2} + e_0 \\
& = \left\| \int_0^t \left( \frac{\partial}{\partial s} \mathcal U^\veps (0,s)\phi^\veps(s) +\mathcal U^\veps (0,s) \frac{\partial}{\partial s} \phi^\veps(s) \right) \ud s \right\|_{L^2} + e_0.
\end{align*}
Then, by using properties of the unitary propagator and equation \eqref{eq:approx_remainder}, we get
\begin{align*}
\|\phi^\veps(t)-\widetilde \phi^\veps (t)\|_{L^2} & \le \left\| \frac{i}{\veps}\int_0^t \left( - \mathcal U^\veps (0,s) H^\veps \phi^\veps(s) +\mathcal U^\veps (0,s) ( H^\veps \phi^\veps(s) + \zeta^\veps(s)) \right) \ud s \right\|_{L^2} +e_0 \\
& = \frac{1}{\veps} \left\| \int_0^t \left( \mathcal U^\veps (0,s) \zeta^\veps(s)\right) \ud s \right\|_{L^2} +e_0.
\end{align*}
We arrive at \eqref{eq:stab_est} by noticing that
\[
\left\| \int_0^t \left( \mathcal U^\veps (0,s) \zeta^\veps(s)\right) \ud s \right\|_{L^2} \le \int_0^t \left\| \mathcal U^\veps (0,s) \zeta^\veps(s)\right\|_{L^2} \ud s \le \int_0^t \mu^\veps(s) \ud s.
\]
\end{proof}
In the lemma above, $\phi^\veps$ almost solves the Schr\"odinger
equation with Hamiltonian $H^\veps$ in the sense of equation
\eqref{eq:approx_remainder}, where the remainder term $\zeta^\veps$ is
controlled. Thus $\phi^\veps$ can be regarded as an approximation of
$\widetilde \phi^\veps$ whenever the right hand side of the estimate
\eqref{eq:stab_est} is small. Therefore, taking $\phi^\veps$ to be the
approximation given by the FGA with surface hopping, the stability
lemma yields the error estimate in Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Note that the initial error is given by $\eps_{\text{in}}$ by definition. The theorem is then a corollary of Theorem~\ref{thm:consistency} and Lemma~\ref{lem:conv}.
\end{proof}
\bibliographystyle{amsxport}
\section{Introduction}
The use of Bayesian methods has great potential within astrophysics, and they have been applied in areas from binary stars to cosmology \cite[see][for an introduction]{lor90}. Bayesian methods have recently been employed within some areas of solar physics, such as the analysis of radiochemical solar neutrino data \citep{stu08}, the inversion of Stokes profiles \citep{ase08} and an approach to solar flare prediction \citep{whe04}. The work of \cite{jay87}, \cite{bre88} and others has shown that the application of Bayesian statistical techniques to spectral analysis has many applications in physics, but it has not yet been exploited in solar physics research. The analysis techniques currently applied to the problem of wave detection and parameterization in solar physics are not optimal for the problem at hand. In particular, concerning the estimation of oscillation parameters and their uncertainties, it is not clear how to interpret least squares fitting or the Fourier and wavelet transforms without understanding their relation to probability theory. Much more of the information contained within the data can be extracted by applying Bayesian statistical methods than by the traditional least squares, Fourier or wavelet analyses currently employed. The Bayesian method allows extremely precise estimates of the oscillation parameters to be made, with a consistent statistical analysis of their uncertainties. For example, the probability based approach allows the obtainable frequency resolution to be estimated, which is much higher than can be interpreted from a Fourier transform.
We apply the methods described by \cite{jay87} and \cite{bre88} to the problem of frequency estimation within solar data. A Bayesian numerical code is applied to artificial time series data, typical of oscillations within the solar corona, to demonstrate the high precision parameter estimation that can be achieved.
It is shown that frequencies spaced closer than neighboring Fourier frequencies can be successfully resolved by a Bayesian model. This makes the Bayesian approach ideal for determining the number of frequencies present in a time series. Section~\ref{solar} applies the method to transition region data, demonstrating that it is possible to detect and resolve the presence of multiple frequencies in a time series where a Fourier analysis is unable to do so and its application is invalid.
\section{Bayes' Theorem}\label{sect_bayes}
The posterior probability of a hypothesis $H$, given the data $D$ and all other prior information $I$ is stated by Bayes' theorem:
\begin{equation}\label{bayes}
P(H \vert D,I) = \frac{P(H \vert I)P(D \vert H,I)}{P(D \vert I)}.
\end{equation}
Bayes' theorem derives from commutative logic and the product rule of probability theory \cite[see][]{gre05}.
Here, $P(H \vert I)$ is the prior probability of $H$ given $I$, or the prior; $P(D \vert I)$ is the probability of the data given $I$, and is usually taken as a normalizing constant; and $P(D \vert H,I)$ is the direct probability of obtaining the data given the hypothesis and prior information. The direct probability is termed the sampling distribution when the hypothesis is held constant and different sets of data are measured. This sampling distribution has become the traditional approach to estimating the probability of oscillations within astrophysics, particularly within the field of solar physics. However, unlike a laboratory experimenter or statistician, we can typically obtain only one measurement of the process under observation. To proceed, the current archetypal method is to assume that the data is one of a large number of possible measurements from a given sampling space. This sampling space is estimated by the application of Monte-Carlo or Fisher-type randomization techniques to generate a large number of artificial `datasets' \citep[e.g. see,][]{osh01, nem85}. Assuming a particular hypothesis, the probability of observing the data within this sampling space of artificial data is then used to estimate the level of confidence in the hypothesis. In the problem of oscillation detection, the level of confidence that there is no oscillating signal present within the data is usually estimated: the null hypothesis.
Since we generally have only one measurement of the data, rather than generating a distribution of artificial `observations', it appears more logical to test the probability of obtaining the measured data against different hypotheses, incorporating the prior information we have available. This is the basis of the Bayesian method. The direct probability is then termed the likelihood function when the data are considered constant and tested against different hypotheses.
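As a minimal illustration of Eqn.~\ref{bayes}, not drawn from any real data set, the posterior for two competing hypotheses can be computed directly; the prior and likelihood values below are hypothetical.

```python
# Minimal discrete illustration of Bayes' theorem for two competing
# hypotheses: H1 "an oscillation is present" vs H0 "noise only".
# The numbers are illustrative, not taken from any real observation.
prior = {"H0": 0.5, "H1": 0.5}             # P(H | I): no prior preference
likelihood = {"H0": 0.2, "H1": 0.8}        # P(D | H, I) for the one data set

evidence = sum(prior[h] * likelihood[h] for h in prior)        # P(D | I)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
```

With equal priors the posterior simply renormalizes the likelihoods, giving $P(H_1 \vert D, I)=0.8$ here; the data are held fixed and the hypotheses are compared, rather than the reverse.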
\section{Application of Bayes theorem to oscillation detection}
This section summarizes the results of \cite{jay87} and \cite{bre88} which are applied in the code to calculate the Bayesian probability density function and the results in the following sections.
\subsection{The likelihood function}
When applied to the question of oscillation detection, we wish to compute the probability of a particular time series model, given the data and all other prior information. To calculate the likelihood function, the probability of the noise must be calculated. If the true model was known, then the difference between the data and the model function would be equal to the noise distribution. Assuming Gaussian distributed noise, the probability of obtaining a particular series of noise values $e_{i}$ is given by:
\begin{equation}\label{noise}
P(e_{1}...e_{N} \vert \sigma, I) \propto \prod_{i=1}^{N}\left[\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{e_{i}^{2}}{2\sigma^{2}}\right)\right],
\end{equation}
where $N$ is the number of elements in the series, and $\sigma^{2}$ is the noise variance. The likelihood function is then given by:
\begin{equation}\label{likely}
L(\{B\}, \{p\}, \sigma) = \sigma^{-N} \exp\left\{-\frac{1}{2\sigma^{2}} \sum_{i=1}^{N} \left[d_{i} - f(t_{i})\right]^{2} \right\},
\end{equation}
where $d_{i}$ are the measured values of the data. We suppose that the measured data are a combination of the model function and the noise, i.e. $$d_{i}=f(t_{i}) + e_{i}.$$ Note that the data sampling is not required to be evenly spaced, unlike for the Fourier transform.
In the most general case, the model as a function of time can be expressed as:
\begin{equation}\label{func}
f(t)=\sum_{j=1}^{m} B_{j} G_{j}(t, \{p\}),
\end{equation}
where $B_{j}$ are the amplitudes, $m$ is the total number of component model functions $G_{j}$, which are functions of any number of other parameters $\{p\}$, such as frequency, decay rates\ldots etc.
Substituting for $f(t)$, the summation in the likelihood (Eqn~\ref{likely}) becomes:
\begin{equation}\label{Q}
Q \equiv \bar{d^{2}} - \frac{2}{N} \sum_{j=1}^{m} \sum_{i=1}^{N} B_{j} d_{i} G_{j}(t_{i}) + \frac{1}{N} \sum_{j=1}^{m} \sum_{k=1}^{m} g_{jk} B_{j} B_{k},
\end{equation}
where the cross term for the general model function $f(t)$, in Eqn~\ref{likely}, can be expressed as a matrix of the component model function products summed over time $g_{jk}$.
i.e.
\begin{equation}\label{matrix}
g_{jk} = \sum_{i=1}^{N} G_{j}(t_{i}) G_{k}(t_{i}), \hspace{0.2in} (1 \le j,k \le m).
\end{equation}
The general model function $f(t)$ in Eqn.~\ref{func} may be composed of any number of component model functions. This is more easily represented by a square matrix with indices $j,k$ in standard row-major notation. Equation~\ref{Q} is then greatly simplified if the matrix $g_{jk}$ is diagonal.
\subsection{Calculating orthonormal functions}
In the simplest case of an oscillating model function containing a single frequency $f(t)=B_{1}\cos(\omega t) + B_{2}\sin(\omega t)$, the matrix is essentially diagonal due to orthogonality.
In a more complex model containing multiple oscillations, the matrix will not generally be diagonal. To diagonalize the matrix, the component model functions in Eqn.~\ref{matrix} must be transformed to a set of orthogonal functions. The matrix $g_{jk}$ is always a symmetric $m\times m$ square matrix; any such matrix has $m$ linearly independent orthonormal eigenvectors and is orthogonally diagonalizable.
The orthonormal model functions are given by:
\begin{equation}\label{h_j}
H_{j}(t) = \frac{1}{\sqrt{\lambda_{j}}} \sum_{k=1}^{m} e_{jk} G_{k}(t),
\end{equation}
where $e_{jk}$ is the $k$th component of the $j$th normalized eigenvector of $g_{jk}$, with corresponding eigenvalue $\lambda_{j}$. The functions $H_{j}(t)$ then satisfy the orthonormality condition $\sum_{i=1}^{N} H_{j}(t_{i}) H_{k}(t_{i}) = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta. The general model equation (Eqn.~\ref{func}) can then be expressed using these orthonormal component model functions.
The matrix of these functions is then diagonal and Eqn.~\ref{Q} is greatly simplified.
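The diagonalization above can be sketched numerically; the following is a minimal example, assuming NumPy and an illustrative two-frequency model sampled at 30~s cadence, which builds $g_{jk}$ from Eqn.~\ref{matrix}, forms the orthonormal functions of Eqn.~\ref{h_j}, and verifies the orthonormality condition.

```python
import numpy as np

# Orthonormalizing the component model functions: columns of G hold
# G_j(t_i); g = G^T G is symmetric, hence orthogonally diagonalizable,
# and H_j = (1/sqrt(lambda_j)) sum_k e_jk G_k satisfies
# sum_i H_j(t_i) H_k(t_i) = delta_jk.  Parameters are illustrative.
N, dt = 100, 30.0                       # 100 samples at 30 s cadence
t = dt * np.arange(N)
w1, w2 = 2 * np.pi * 3.3e-3, 2 * np.pi * 3.0e-3
G = np.column_stack([np.cos(w1 * t), np.cos(w2 * t),
                     np.sin(w1 * t), np.sin(w2 * t)])
g = G.T @ G                             # m x m symmetric matrix g_jk
lam, E = np.linalg.eigh(g)              # eigenvalues lambda_j, eigenvectors
Hfun = G @ E / np.sqrt(lam)             # columns are the orthonormal H_j(t_i)
gram = Hfun.T @ Hfun                    # should be the identity matrix
```

The Gram matrix of the transformed functions is the identity to machine precision, so Eqn.~\ref{Q} reduces to a sum of independent quadratic terms in the new amplitudes.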
\subsection{Marginalized probability}\label{sect_margin}
The marginalization process allows us to calculate the probability independently of the parameters in which we may have no interest, such as the component model function amplitudes, noise\ldots etc.
Marginalization allows one to remove parameters from further explicit consideration in the posterior distribution, by assigning prior probabilities and integrating the posterior probability distribution over the variable to be removed.
The resulting marginal distribution has no explicit mention of the removed variable, but rather expresses the probability as a function of the remaining variables.
\cite{bre88} derives the probability density as a function of the frequency parameters as follows.
Expressing the summation from the likelihood function (Eqn.~\ref{Q}) using the orthonormal model functions allows the likelihood function to be written in independent terms for each of the component model function amplitudes. The likelihood function is marginalized to be independent of the model amplitudes, by assigning a uniform prior and integrating over each of the amplitudes; this assumes that we have no prior information to constrain the amplitudes of the component model functions. Assuming that we have no prior information to constrain the noise, it can be marginalized in a similar manner to the amplitudes by assigning a Jeffreys prior and integrating over all positive values.
These parameters are marginalized using uninformative priors, where they are not constrained to any particular values. This gives an upper limit to the uncertainty of the parameter estimates. Should we have any prior information to constrain the parameter prior probabilities, then greater precision estimates would be achieved.
This process has a great advantage compared to least squares fitting, in that the probability is evaluated only as a function of the parameters of interest, thus reducing the dimensionality of the computed parameter space; in a least squares approach, all parameters must be considered simultaneously. Even after marginalization of the posterior probability distribution, the Bayesian method still allows good estimates of the marginalized parameters to be recovered without intensive computation, as described in Sect.~\ref{sect_param_est}.
\subsection{The probability density function}\label{sect_pdf}
The resulting posterior probability density that a general oscillatory model is present within the data is given by:
\begin{equation}\label{pdf}
P\left(\{\omega\}|D,I\right) \propto \left[1 - \frac{m \overline{h^{2}}}{N \overline{d^{2}}}\right]^{\frac{m-N}{2}}.
\end{equation}
This probability density has been derived as a function of the angular frequency parameters only $\{\omega\}$,
assuming data with an unknown noise variance; where $m$ is the number of component model functions, $N$ is the number of measurements in the data time series and $\overline{d^2}$ is the mean square value of the data.
It is the $\overline{h^2}$ function which carries the frequency dependence of the probability density. Given by:
\begin{equation}\label{h2bar}
\overline{h^2} = \frac{1}{m} \sum_{j=1}^{m}{h_{j}^2},
\end{equation}
where,
\begin{equation}\label{hj}
h_{j} = \sum_{i=1}^{N}{d_{i} H_{j}(t_{i})}, \hspace{0.2in} (1 \le j \le m).
\end{equation}
The $h_{j}$ values are the projections of the data onto the orthonormal model functions defined by Eqn.~\ref{h_j}, and $\overline{h^2}$ is the mean square value of these projections as a function of $\{\omega\}$.
The maximum of this function gives the most probable frequency $\hat{\omega}$, supported by the data, for each of the component functions assumed by the model. The corresponding maximum in the probability density function (Eqn.~\ref{pdf}) is sharply peaked at these frequency values $\hat{\omega}$, since the form of the function is similar to an exponential. This allows very precise frequency estimates to be made, at a resolution much higher than can be estimated from the Fourier transform, as described in Sect.~\ref{sect_freq_par} and \ref{sect_freq_res}.
In the simplest case where we assume a general model function containing a single stationary harmonic frequency, given by $f(t)=B_{1}\cos(\omega t) + B_{2}\sin(\omega t)$, where sine and cosine are the component model functions $G_{j}$ given in Eqn.~\ref{func}, the eigenvalues and eigenvectors of the matrix $g_{jk}$ described in Eqn.~\ref{h_j} are approximately equal to $\lambda_{j}=N/2$ and $e_{jk}=\pm \frac{1}{\sqrt{2}}$ respectively. The $\overline{h^2}$ function is the exact general solution; if we approximate by neglecting the negligible non-diagonal elements of the matrix, then $\overline{h^2} \equiv \frac{1}{N} \bigl| \sum_{j=1}^{N} d_{j} e^{i \omega t_{j}} \bigr|^{2}$, which is the Schuster periodogram. It is an important, but subtle, point that probability theory shows there is a direct relation between the Schuster periodogram and the probability that there is a single harmonic frequency within the data. As described by \cite{jay87, bre88}, the maximum of the periodogram gives the most probable frequency assuming that: there is a single stationary harmonic frequency present, the value of $N$ is large, there is no constant component or low frequencies, and the data has a white noise distribution. \cite{ire08} take advantage of this fact, by applying an algorithm for automated oscillation detection within the solar corona.
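The near-equivalence between $\overline{h^2}$ and the Schuster periodogram can be checked numerically; the following sketch, with illustrative parameters and NumPy assumed, evaluates both at the signal frequency for a noisy single-frequency series.

```python
import numpy as np

# For the single-frequency model f(t) = B1 cos(w t) + B2 sin(w t), the
# mean-square projection h2bar computed through the exact orthonormal
# functions should be close to the Schuster periodogram
# C(w) = |sum_i d_i exp(i w t_i)|^2 / N.  The off-diagonal elements of
# g_jk are small but not exactly zero, hence the tolerance in the check.
N, dt = 100, 30.0
t = dt * np.arange(N)
w = 2 * np.pi * 3.3e-3
rng = np.random.default_rng(1)
d = np.cos(w * t) + np.sin(w * t) + rng.standard_normal(N)   # RMS S/N ~ 1

G = np.column_stack([np.cos(w * t), np.sin(w * t)])
lam, E = np.linalg.eigh(G.T @ G)
h = (G @ E / np.sqrt(lam)).T @ d        # projections h_j onto the H_j
h2bar = np.mean(h ** 2)

periodogram = np.abs(np.sum(d * np.exp(1j * w * t))) ** 2 / N
```

For this cadence and frequency the two quantities agree to within a few per cent, the residual coming entirely from the neglected off-diagonal elements of $g_{jk}$.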
\section{Parameter Estimation}\label{sect_param_est}
Although, in the single frequency case, the periodogram, Fourier transform and $\overline{h^2}$ are similar; $\overline{h^2}$ has been derived from probability theory and can be understood in a statistical sense. This understanding allows estimates of the model parameters, and their precision, to be derived. These estimates cannot be made from least squares fitting, the Fourier transform, or periodogram, alone without understanding their origin in probability theory. Even though the posterior probability density (Eqn.~\ref{pdf}) is derived to be independent of parameters such as the noise variance and model amplitudes, good estimates of these parameters and their uncertainties can be recovered due to the sharpness of the probability density around the most probable frequencies, as described in this section. The parameter uncertainties are not given directly by least squares fitting, or the Fourier transform, which would require a sampling distribution approach. As described in Sect. 2, this is computationally intensive, and it is questionable whether this approach is appropriate given a single measurement of the data. Here we outline estimates of the model parameters and their variance.
\subsection{The expected noise variance $\langle\sigma^2\rangle$}
The Bayesian analysis allows the expectation value of the noise variance within the data to be calculated as:
\begin{equation}\label{sigma2}
\langle \sigma^2 \rangle = \frac{1}{N-m-2}\left[\sum_{i=1}^{N} d_i^2 - \sum_{j=1}^{m} h_{j}^2 \right].
\end{equation}
The expected noise variance is a function of $\omega$ and is estimated with the $h_{j}$ functions evaluated at the most probable frequencies $\hat{\omega}$ given by the probability density function.
The expectation value of the noise variance is essentially the difference between the total square value of the data and the total square value of the data projected onto the orthonormal model functions defined by Eqn.~\ref{h_j}. It is implicit in the Bayesian model that everything within the data that is not fitted by the model is assumed to be noise. Thus the expected noise variance gives an indication of the degree to which the model represents the data. We may increase the complexity of the model by the addition of more component model functions. However, once the real signal within the data has been accounted for, the addition of further component functions will have the effect of reducing $\langle \sigma^2 \rangle$ by fitting the noise. A method to determine the point at which the model best represents the true signal is described in Sect.~\ref{best_model}.
The percentage accuracy $\epsilon$ of the expected noise variance is given by:
\begin{equation}\label{sigma2_err}
\epsilon = \sqrt{2/(N-m-4)},
\end{equation}
where $N$ is the number of elements in the series and $m$ is the number of component model functions. The standard deviation accuracy estimate of the expected noise variance is then equal to $\pm \epsilon \sigma^{2}$.
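A minimal numerical sketch of the estimator in Eqn.~\ref{sigma2}, using illustrative parameters and NumPy: for a pure signal the residual is essentially zero, while for added unit-variance noise the estimate returns a value close to one.

```python
import numpy as np

# Sketch of the expected-noise-variance estimator
# <sigma^2> = (sum d_i^2 - sum h_j^2) / (N - m - 2),
# evaluated at the true model frequency for a simulated series.
N, dt, m = 100, 30.0, 2
t = dt * np.arange(N)
w = 2 * np.pi * 3.3e-3

def sigma2_hat(d):
    G = np.column_stack([np.cos(w * t), np.sin(w * t)])
    lam, E = np.linalg.eigh(G.T @ G)
    h = (G @ E / np.sqrt(lam)).T @ d    # projections onto orthonormal H_j
    return (np.sum(d ** 2) - np.sum(h ** 2)) / (N - m - 2)

signal = np.cos(w * t) + np.sin(w * t)
rng = np.random.default_rng(2)
noisy = signal + rng.standard_normal(N)       # true noise variance = 1

clean_resid = sigma2_hat(signal)              # pure signal: residual ~ 0
noise_est = sigma2_hat(noisy)                 # should be close to 1
```

The noisy-case estimate scatters around the true variance with the relative accuracy $\epsilon=\sqrt{2/(N-m-4)}$ of Eqn.~\ref{sigma2_err}, about 14\% for these parameters.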
\subsection{The frequency parameters $\{\omega\}$}\label{sect_freq_par}
The most probable frequencies, of the applied model, are evaluated numerically from the location of the maximum within the probability density function described in Sect.~\ref{sect_pdf}. The accuracy of the frequency parameters can be estimated by expanding $\overline{h^2}$ (Eqn.~\ref{h2bar}) in a Taylor series. This accuracy is dependent on the Hessian matrix of $\overline{h^2}$ evaluated at the most probable model frequencies $\hat{\omega}$:
\begin{equation}\label{bjk}
b_{jk}=-\frac{m}{2} \frac{\partial^{2}\overline{h^2}}{\partial \omega_{j} \partial \omega_{k}}, \hspace{0.2in} (1 \le j,k \le r).
\end{equation}
The estimated angular frequency resolution is given by the variance of the probability density function for $\omega_{k}$:
\begin{equation}\label{delta_omega2}
\sigma_{\omega_{k}}^{2}= \langle \sigma^2 \rangle \sum_{j=1}^{r} \frac{u_{jk}^{2}}{v_{j}}, \hspace{0.2in} (1 \le k \le r)
\end{equation}
where $u_{jk}$ is the $k$th component of the $j$th eigenvector of $b_{jk}$ with corresponding eigenvalue $v_{j}$, $r$ is the number of model frequencies, and the expected noise variance $\langle \sigma^2 \rangle$ is evaluated at the most probable frequencies. The most probable frequencies are then equal to $\hat{\omega} \pm \sigma_{\omega_{k}}$.
Equations~\ref{bjk} and \ref{delta_omega2} show that the obtainable frequency resolution is related to how sharply the probability density is peaked around the most probable frequencies and the magnitude of the noise variance within the data. As described in Sect.~\ref{sect_pdf}, the form of the probability density function is sharply peaked around these frequencies; it is this sharpness which allows very high precision frequency estimates to be made, as described in Sect.~\ref{sect_freq_res}, and permits the results obtained in Sect.~\ref{solar}.
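For a single frequency ($r=1$), Eqns.~\ref{bjk} and \ref{delta_omega2} reduce to $\sigma_{\omega}^{2}=\langle\sigma^2\rangle/b$ with $b=-(m/2)\,\partial^2\overline{h^2}/\partial\omega^2$ at $\hat\omega$; the sketch below evaluates this with a finite-difference second derivative. All parameters are hypothetical, chosen to mimic the test series of the next section, and NumPy is assumed.

```python
import numpy as np

# Single-frequency (r = 1) frequency-uncertainty estimate:
# sigma_w^2 = <sigma^2> / b,  b = -(m/2) d^2(h2bar)/dw^2 at w-hat.
N, dt, m = 100, 30.0, 2
t = dt * np.arange(N)
w_true = 2 * np.pi * 3.3e-3
rng = np.random.default_rng(3)
d = np.cos(w_true * t) + np.sin(w_true * t) + rng.standard_normal(N)

def h2bar(w):
    G = np.column_stack([np.cos(w * t), np.sin(w * t)])
    lam, E = np.linalg.eigh(G.T @ G)
    h = (G @ E / np.sqrt(lam)).T @ d           # projections h_j
    return np.mean(h ** 2)

# locate the most probable frequency by a fine grid search near the peak
grid = w_true * (1 + np.linspace(-0.05, 0.05, 2001))
w_hat = grid[np.argmax([h2bar(w) for w in grid])]

dw = 1e-7                                      # finite-difference step
curv = (h2bar(w_hat + dw) - 2 * h2bar(w_hat) + h2bar(w_hat - dw)) / dw ** 2
b = -(m / 2) * curv                            # b of Eqn. (bjk), r = 1

G = np.column_stack([np.cos(w_hat * t), np.sin(w_hat * t)])
lam, E = np.linalg.eigh(G.T @ G)
h = (G @ E / np.sqrt(lam)).T @ d
sigma2 = (np.sum(d ** 2) - np.sum(h ** 2)) / (N - m - 2)   # <sigma^2>

sigma_f = np.sqrt(sigma2 / b) / (2 * np.pi)    # 1-sigma frequency error (Hz)
fourier_res = 1 / (N * dt)                     # FFT frequency step (Hz)
```

With these illustrative numbers $\sigma_f$ comes out of the order of $0.02$~mHz, roughly an order of magnitude below the Fourier frequency step of $0.33$~mHz.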
\subsection{The amplitude parameters $\langle B \rangle$}\label{sect_amp_par}
If each oscillating function within the model is expressed in the form $f(t)=B_{\cos}\cos(\omega t) + B_{\sin}\sin(\omega t)$, then two component model functions (sine and cosine) are used to describe each frequency component in Eqn.~\ref{func}, where $B_{\sin}$ and $B_{\cos}$ represent the amplitude parameters of the sine and cosine functions. For multiple frequency models,
\begin{eqnarray}\nonumber
f(t)=B_{1}\cos(\omega_{1} t) + B_{2}\cos(\omega_{2} t) + \\ \nonumber
B_{3}\sin(\omega_{1} t) + B_{4}\sin(\omega_{2} t) \ldots,
\end{eqnarray}
where the $B_{k}$ parameters of the cosine functions are indexed consecutively for each frequency component, followed by the sine functions.
The expectation value of each amplitude parameter is given by:
\begin{equation}\label{amps}
\langle B_{k} \rangle = \sum_{j=1}^{m} \frac{h_{j}e_{jk}}{\sqrt{\lambda_{j}}}, \hspace{0.2in} (1 \le k \le m),
\end{equation}
where $h_{j}$ are the projections of the data onto the orthonormal model functions in Eqn.~\ref{h_j}, and $e_{jk}$ are the components of the normalized eigenvectors of the matrix $g_{jk}$ given in Eqn.~\ref{matrix}, with corresponding eigenvalues $\lambda_{j}$. These amplitude parameters depend on $\omega$, and are estimated using the orthonormal model functions evaluated at the most probable frequency parameters $\hat{\omega}$ described in the previous section. This is a very good approximation, due to the sharpness of the peak in the probability density function, which is almost described by a delta-function.
The variance of each amplitude parameter is then given by:
\begin{eqnarray}\label{delta_amp2}
\nonumber \sigma_{B_{k}}^{2}= \left[ \frac{N}{N-2} \right] \left[\frac{2N-5}{2N-5-2m}\right] \left[ \frac{2N-7}{2N-7-2m} \right] \\ \left[ \overline{d^2}-\frac{m\overline{h^2}}{N} \right] \sum_{j=1}^{m} \frac{e_{jk}^{2}}{\lambda_{j}},
\end{eqnarray}
where $N$ is the number of time series elements, $\overline{d^{2}}$ is the mean value of the data squared and $\overline{h^{2}}$ is the mean square value of the data projected on to the orthonormal model functions.
\subsection{The polar amplitude $\langle A \rangle$ and phase parameters $\langle \phi \rangle$}\label{sect_phase_par}
The expected amplitudes can be used to express the model function results in polar coordinates, where each oscillation within the model is of the form:
$$f(t)=A\cos(\omega t + \phi).$$ The polar amplitude and phase for each frequency component are then given by: $$\langle A \rangle=\sqrt{B_{\cos}^2+B_{\sin}^2}, \hspace{0.2in} \langle \phi \rangle=\arctan(-\frac{B_{\sin}}{B_{\cos}}).$$
The $1\sigma$ errors $\sigma_{A}$ and $\sigma_{\phi}$ are then given by the propagation of the $B_{k}$ $1\sigma$ errors given in Eqn.~\ref{delta_amp2}.
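The amplitude and polar conversions can be checked with a noise-free series, where Eqn.~\ref{amps} recovers the true amplitudes exactly because the data lie in the span of the model functions; the parameters below are illustrative, with NumPy assumed.

```python
import numpy as np

# Noise-free sanity check of the amplitude estimates
# <B_k> = sum_j h_j e_jk / sqrt(lambda_j) and the polar conversion
# A = sqrt(B_cos^2 + B_sin^2),  phi = arctan(-B_sin / B_cos).
N, dt = 100, 30.0
t = dt * np.arange(N)
w = 2 * np.pi * 3.3e-3
A_true, phi_true = 1.3, 0.7
# f(t) = A cos(w t + phi) = B_cos cos(w t) + B_sin sin(w t)
B_cos, B_sin = A_true * np.cos(phi_true), -A_true * np.sin(phi_true)
d = B_cos * np.cos(w * t) + B_sin * np.sin(w * t)

G = np.column_stack([np.cos(w * t), np.sin(w * t)])
lam, E = np.linalg.eigh(G.T @ G)
h = (G @ E / np.sqrt(lam)).T @ d
B = E @ (h / np.sqrt(lam))                  # <B_k>, ordered (B_cos, B_sin)

A_hat = np.hypot(B[0], B[1])
phi_hat = np.arctan2(-B[1], B[0])           # phi = arctan(-B_sin / B_cos)
```

Algebraically this is just the least squares solution $g^{-1}G^{T}d$ expressed in the eigenbasis, which is why the recovery is exact in the noise-free limit.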
\section{Frequency Resolution}\label{sect_freq_res}
In this section, we apply the Bayesian model to artificial test data, typical of the type of oscillations that are observed within the solar corona using current instrumentation.
\subsection{The single frequency case}\label{sect_single_freq}
Here we compare the results obtained from the Bayesian model with those obtained by performing a Fourier, and wavelet analysis. The analyzed time series is of the form:
$$d_{i}= \cos(2\pi f t_{i}) + \sin(2\pi f t_{i}) +e_{i},$$
where $f$=3.3~mHz and $e_{i}$ is Gaussian distributed noise. This is typical of the 5-minute period band oscillations observed in coronal loops, with 100 samples at 30~s cadence, and has a low signal to noise ratio with a RMS S/N=1. The Bayesian model is applied to calculate the probability density function for a single harmonic frequency model of the time series, using Eqn.~\ref{pdf}. The peak within the probability density function, which gives the most probable frequency, can be described by a Gaussian of width $\sigma_{\omega}$ given by Eqn.~\ref{delta_omega2}, converted from angular frequency to give $\sigma_{f}$ in Hz. This width determines the theoretical frequency resolution obtainable with the Bayesian model, which is much finer than the frequency resolution obtained using the Fourier transform.
\begin{figure}[t]
\centering
\plotone{fig1.eps}
\caption{A comparison of the frequency resolution obtained for the single frequency data with a S/N=1, with the normalized Bayesian probability density function (solid line), FFT (dashed) and global wavelet transform (dotted). We see that the Bayesian PDF has an order of magnitude increase in resolution compared to the FFT.}
\label{pdf_fft_wav}
\end{figure}
\begin{table}[t]
\centering
\caption{Obtained frequency resolutions for the single frequency data with a S/N=1.}\label{tab1}
\begin{tabular}{ll} \hline
Analysis method & $\delta f$ (mHz) \\ \hline
Bayesian PDF ($\sigma_{f}$) & 0.02 \\
FFT (HWHM) & 0.2 \\
Wavelet (HWHM) & 0.5 \\
\hline
\end{tabular}
\end{table}
Figure~\ref{pdf_fft_wav} shows the obtainable resolving power of the Bayesian model, normalized to that obtained by the Fourier and wavelet transforms. The solid line indicates the probability density function for the single frequency Bayesian model, the dashed line shows the FFT and the dotted line is the global wavelet transform using the Morlet wavelet function. We can see that the Bayesian model gives a significant increase in the resolution of the estimated frequency, with the probability density almost described by a delta-function. Table~\ref{tab1} lists the obtained frequency resolution for each method, estimating the resolution of the Fourier and global wavelet transforms using their half width half maximum (HWHM). We see that, for a S/N=1, the $1\sigma$ error on the estimated frequency from the Bayesian model gives an order of magnitude increase in resolution over the FFT. The global wavelet has an even lower resolution due to the smoothing effect of the transform on the wavelet scale. In fact, if we are interested in high precision frequency measurements of stationary frequencies, or closely separated frequencies, then a wavelet analysis is one of the worst methods that we can apply.
\subsection{Two closely separated frequencies}\label{sect_2_close_freq}
We now compare the results obtained from a Bayesian model and a Fourier analysis of two closely separated frequencies within a simulated time series shown in Fig~\ref{2freq}a. Again, we generate a time series typical of coronal loop oscillations of the form:
\begin{eqnarray}
\nonumber d_{i}=B_{1}\cos(2\pi f_{1} t_{i}) + B_{2}\cos(2\pi f_{2} t_{i}) + \\ \nonumber
B_{3}\sin(2\pi f_{1} t_{i}) + B_{4}\sin(2\pi f_{2} t_{i}) + e_{i},
\end{eqnarray}
with 100 samples at 30~s cadence, a harmonic frequency of $f_{1}$=3.3~mHz, an additional frequency of $f_{2}$=3.0~mHz, Gaussian distributed noise $e_{i}$ and a RMS S/N=1. These two frequencies are separated by only one frequency step in the Fourier transform, so in principle their frequencies are directly adjacent in the FFT.
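Since the amplitudes enter linearly once $(f_{1}, f_{2})$ are fixed, a brute-force least-squares profile over the two frequencies already separates components one Fourier step apart. This is only a crude stand-in for the orthogonalized Bayesian model (unit coefficients are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 30.0 * np.arange(100)                  # 100 samples, 30 s cadence
f1_true, f2_true = 3.3e-3, 3.0e-3          # Hz; one Fourier step apart
d = (np.cos(2*np.pi*f1_true*t) + np.sin(2*np.pi*f1_true*t)
     + np.cos(2*np.pi*f2_true*t) + np.sin(2*np.pi*f2_true*t)
     + rng.normal(0.0, 1.0, t.size))

def rss(f_pair):
    """Residual sum of squares after a linear fit of cos/sin pairs at f_pair."""
    G = np.column_stack([g(2*np.pi*f*t) for f in f_pair for g in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(G, d, rcond=None)
    r = d - G @ coef
    return r @ r

# Profile the joint fit quality over a grid of frequency pairs.
f1_grid = np.linspace(3.1e-3, 3.5e-3, 81)
f2_grid = np.linspace(2.8e-3, 3.2e-3, 81)
_, f1_hat, f2_hat = min((rss((fa, fb)), fa, fb)
                        for fa in f1_grid for fb in f2_grid)
print(f1_hat, f2_hat)                      # close to 3.3e-3 and 3.0e-3
```

The joint fit recovers both frequencies well inside one Fourier step, whereas the periodogram of the same data shows only a single broad peak.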
\begin{figure}
\centering
\plotone{fig2a.eps}
\plotone{fig2b.eps}
\plotone{fig2c.eps}
\plotone{fig2d.eps}
\caption{a) Two frequency time series with a S/N=1. b) FFT of the time series containing two frequencies separated by 1 Fourier frequency step. c) Gaussian representations of the frequency resolution obtained from the Bayesian probability density function, normalized to the FFT peak. d) Signal reconstructed from the Bayesian model (solid), true signal (dot-dash).}
\label{2freq}
\end{figure}
\begin{table}[t]
\centering
\caption{Estimated frequencies and $1\sigma$ errors, of the parameters expressed in polar coordinates, obtained from the Bayesian model applied to the two frequency time series, separated by one Fourier frequency step and a RMS S/N=1.}\label{tab2}
\begin{tabular}{cc}
\tableline\tableline
Frequencies $f \pm \sigma_{f}$ (mHz) & True value (mHz) \\ \tableline
2.98 $\pm$ 0.05 & 3.00 \\
3.37 $\pm$ 0.05 & 3.33 \\
\tableline
Amplitude $A \pm \sigma_{A}$ & True value \\ \tableline
0.96 $\pm$ 0.19 & 1.00 \\
0.94 $\pm$ 0.19 & 1.00 \\ \tableline
Phase $\phi \pm \sigma_{\phi}$ (rad) & True value \\ \tableline
5.64 $\pm$ 0.14 & 5.50 \\
5.52 $\pm$ 0.15 & 5.50 \\ \tableline
$\langle\sigma^2\rangle$ & True value \\ \tableline
0.88 $\pm$ 0.13 & 0.89 \\ \tableline
\end{tabular}
\end{table}
Figure~\ref{2freq}b shows the FFT for the two frequency time series; Fig.~\ref{2freq}c shows the Gaussian representation of the resolution obtained from the Bayesian probability density function, which has been normalized to the FFT peak for comparison. Note that Fig.~\ref{2freq}c is not a power spectrum, but an illustration of the frequency resolution obtained with the Bayesian model. As expected, the FFT is unable to resolve two such closely separated frequencies. A single broad peak is observed, with a large HWHM, suggesting the possibility that more than one frequency may be present. However, the result from the Bayesian model resolves the two frequencies independently and to a very high precision even with a S/N=1. Table~\ref{tab2} lists the resolved frequencies and their $1\sigma$ errors, estimated from the probability density function of the two harmonic frequency Bayesian model. We see that the Bayesian model not only resolves the two frequencies but does so to a very high precision with $1\sigma_{f}$ errors of 0.05~mHz, even with a relatively short duration time series. Figure~\ref{2freq}d shows the signal reconstructed from the Bayesian model parameters, and the true signal within the simulated time series. We see that the Bayesian code provides a very good reconstruction of the signal even with a low signal to noise ratio.
\section{Application to Solar Data}\label{solar}
The Bayesian model is now applied to real observations of solar oscillations. \cite{me06} observe the apparent propagation of slow-magnetoacoustic waves within a sunspot region. These waves are observed to propagate from the transition region into the coronal loop system emerging from the sunspot and are interpreted as the propagation of photospheric p-modes waveguided along the magnetic field.
The original analysis applied Fourier techniques to the time series; here we apply the Bayesian model to the \ion{O}{5} data described in \cite{me06}. The results presented in \cite{me06} show the presence of two frequencies, in the 3-min period band, observed in the transition region above the sunspot umbra. The data consist of 100 samples observed at 26~s cadence obtained with the Coronal Diagnostic Spectrometer (CDS) \citep[see, ][]{har95}. The Bayesian model is applied iteratively, with the addition of further model functions to increase the complexity of the applied model.
\subsection{Model selection}\label{best_model}
In addition to the problem of fitting a model to the data, we must determine which is the most probable model of those under test. If we already have knowledge of the noise variance within the data, then the calculated expectation value of the noise variance (Eqn.~\ref{sigma2}) may be used to determine when the model has accounted for the real signal. Component functions may be added to the model until the expectation value of the noise variance equals the known noise variance. At this point the most appropriate model has been determined, and the addition of further component functions to the model will simply be fitting the noise.
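This stopping rule can be sketched with a least-squares analogue: fit an increasing number of cos/sin pairs and stop when the residual variance reaches the known noise level. The sketch replaces the marginal-posterior expectation value of Eqn.~\ref{sigma2} with a plain residual variance and assumes the candidate frequencies are supplied externally, so it illustrates only the criterion, not the full Bayesian machinery:

```python
import numpy as np

def residual_variance(t, d, freqs):
    """Residual variance after a least-squares fit of cos/sin pairs at the
    given frequencies; a stand-in for the expectation value <sigma^2>."""
    if len(freqs) == 0:
        return float(np.mean(d ** 2))
    G = np.column_stack([g(2*np.pi*f*t) for f in freqs for g in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(G, d, rcond=None)
    r = d - G @ coef
    return float(r @ r / t.size)

def select_n_frequencies(t, d, candidate_freqs, sigma2_known):
    """Add component frequencies until the residual variance drops to the
    known noise variance; further additions would only fit the noise."""
    for k in range(len(candidate_freqs) + 1):
        if residual_variance(t, d, candidate_freqs[:k]) <= sigma2_known:
            return k
    return len(candidate_freqs)

rng = np.random.default_rng(2)
t = 30.0 * np.arange(100)
d = (3*np.cos(2*np.pi*3.3e-3*t) + 3*np.cos(2*np.pi*5.0e-3*t)
     + rng.normal(0.0, 1.0, t.size))
candidates = [3.3e-3, 5.0e-3, 4.1e-3, 2.2e-3]   # strongest candidates first
print(select_n_frequencies(t, d, candidates, sigma2_known=1.5))
```

With two real components in the data, the loop stops after two frequencies; adding the remaining candidates would only reduce the residual below the known noise level by fitting noise.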
\begin{figure}[t]
\centering
\plotone{fig3.eps}
\caption{Change in the expectation value of the noise variance for an increasing number of component frequencies within the Bayesian model. The dotted line indicates the known noise variance within the CDS data.}
\label{model_var}
\end{figure}
The noise properties of the CDS instrument are described by \cite{sn49}. The noise variance of the CDS detector is given by $\sigma^{2} = 2N_{ph} + R^{2}n$, where $N_{ph}$ is the number of detected photons, $R$ is the readout noise (here we use a conservative value of 1 photon-event pixel$^{-1}$), and $n$ is the number of pixels summed over. We may use this known value of the noise variance to determine when the model has reached sufficient complexity to account for the real signal and the addition of further model functions will begin to fit the noise.
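The detector noise model is simple to encode. A small helper with the conservative readout value used in the text as its default (the parameter names are ours):

```python
def cds_noise_variance(n_photons, readout_noise=1.0, n_pixels=1):
    """sigma^2 = 2*N_ph + R^2 * n for the CDS detector, with the readout
    noise R in photon-events per pixel (R=1 is the conservative value
    used in the text) and n the number of pixels summed over."""
    return 2.0 * n_photons + readout_noise**2 * n_pixels

print(cds_noise_variance(5000, readout_noise=1.0, n_pixels=20))  # → 10020.0
```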
Figure~\ref{model_var} shows the change in the expectation value of the noise variance for increasingly complex models with the addition of more component frequencies. The error bars show the standard deviation error estimate of the expected noise variance derived from Eqn.~\ref{sigma2_err}. The dotted line shows the noise variance within the data due to the photon statistics and the noise properties of the CDS detector. As expected, with the addition of further component frequencies to the model, the expected noise variance is reduced. The expected noise variance reaches the level of the CDS data with a model containing four harmonic frequencies. Therefore we can state that the data best supports a model containing four frequencies. We are able to derive parameter estimates of these functions and their associated errors, from the probability density function of the four frequency model.
\subsection{The Bayesian results}
Figure~\ref{ov_fft_4freq}a shows the FFT of the \ion{O}{5} data presented in \cite{me06}, with two main peaks resolved in the transform. The broad width of the peaks may suggest that the data does not simply consist of two monochromatic frequencies, but this is the limit to the information available using a Fourier analysis. Fig.~\ref{ov_fft_4freq}b shows the Gaussian representation of the four frequencies resolved by the Bayesian model. In addition to the oscillation frequency, the Bayesian code also returns high precision estimates of the amplitude and phase parameters, listed in Table~\ref{tab3}. The height of the peaks in Fig.~\ref{ov_fft_4freq} are normalized to the corresponding amplitude parameters; note that this figure is not a power spectrum, but a representation of the frequency resolution obtained from the probability density function. As demonstrated in Sect.~\ref{sect_2_close_freq}, the Bayesian model is able to resolve closely spaced frequencies to a much higher resolution than is possible with the Fourier transform, even with short duration observations. Where the FFT can resolve only two frequencies, the Bayesian model is able to resolve four independent frequencies within the data, to a very high precision.
\begin{figure}
\centering
\plotone{fig4.eps}
\caption{a) The FFT of the \ion{O}{5} data originally presented in \cite{me06} showing two frequencies in the 3-min range. b) Gaussian representations of the frequency resolution obtained from the Bayesian probability density function for the four frequency model, where the peaks are normalized to the derived amplitudes given in Table~\ref{tab3}. c) Signal reconstructed from the Bayesian model (solid), original data (dot-dash). }
\label{ov_fft_4freq}
\end{figure}
\section{Conclusions}
When approaching the problem of oscillation detection, it is possible to extract much more information from the data by the application of a Bayesian model rather than the traditional least squares fitting, Fourier, or wavelet analysis. Considering the problem of frequency estimation within a time series, the Bayesian method returns very precise estimates and employs a rigorous self-consistent error analysis, due to its statistical derivation. It is not clear how to determine frequency error estimates from the Fourier transform or least squares, without understanding their relation to probability theory. It is a common misconception that the Fourier frequency spacing is the limit to the resolution with which a frequency can be resolved within a time series. This is not the case, as the resolution limit is principally determined by the signal to noise ratio. As shown in Sect.~\ref{sect_single_freq}, we are able to estimate a single frequency, with S/N=1, to a resolution an order of magnitude greater than the FFT. In Sect.~\ref{sect_2_close_freq}, we are able to resolve two oscillations with frequency separations directly adjacent in the FFT. This is not surprising as the Bayesian method has similarities with least squares fitting of a particular function. In an analogous way, the limiting resolution with which an oscillation can be fitted with a sinusoidal function is not equal to the time series cadence, nor is the resolution with which a spectral line can be fitted with a Gaussian equal to the pixel spacing on the detector; this resolution limit is largely determined by the signal to noise ratio within the data.
\begin{deluxetable}{ccc}
\tabletypesize{\scriptsize}
\tablecaption{Bayesian model parameters derived from the \cite{me06} data.\label{tab3}}
\tablewidth{0pt}
\tablehead{
\colhead{Frequencies $f \pm \sigma_{f}$ (mHz)} & \colhead{Amplitude $A \pm \sigma_{A}$ (Photon-Events)} & \colhead{Phase $\phi \pm \sigma_{\phi}$ (rad)}
}
\startdata
5.81 $\pm$ 0.05 & 102.4 $\pm$ 26.2 & 0.10 $\pm$ 0.18 \\
6.29 $\pm$ 0.03 & 117.3 $\pm$ 26.1 & 1.35 $\pm$ 0.16 \\
7.06 $\pm$ 0.05 & 143.6 $\pm$ 26.2 & 2.63 $\pm$ 0.13 \\
7.56 $\pm$ 0.04 & 133.6 $\pm$ 26.3 & 5.31 $\pm$ 0.14 \\ \tableline
Expected noise variance $\langle\sigma^2\rangle$ & 15062.0 $\pm$ 2271.0 & \\
\tablenotemark{a}CDS noise variance $\sigma^2$ & 15029.0 & \\
\enddata
\tablenotetext{a}{CDS noise variance calculated using \cite{sn49}}
\end{deluxetable}
\cite{lou78} describe how the Fourier transform gives erroneous results for closely spaced frequencies. Their results are also explained by probability theory; \cite{jay87} discovered that the periodogram or Fourier transform is only directly related to the probability of a single stationary harmonic frequency within the data. If multiple frequencies are well separated, the Fourier transform still gives good frequency estimates, as the problem separates out into independent single frequency probability problems. If the frequencies are closely spaced, however, the non-diagonal elements in the matrix $g_{jk}$ (Eqn.~\ref{matrix}) become significant and the frequencies are not orthogonal. The approximation of using a single frequency probability model for the purpose of frequency estimation is no longer valid, nor is the use of the Fourier transform. In this case, the transformation to orthogonal functions used in the Bayesian analysis is necessary to determine accurate frequency estimates and their uncertainties.
As mentioned by \cite{bre88}, the Bayesian method is similar to least squares, in that a least squares approach minimizes the summation in Eqn.~\ref{likely}, whereas the Bayesian results maximize the likelihood function. As described in Sect.~\ref{sect_margin}, the Bayesian method allows only the parameters of interest to be considered, greatly reducing the dimensionality of the parameter space compared to a least squares approach. In the previous section, a least squares approach would require a 12 dimensional parameter space, where this is reduced to 4 dimensions with the Bayesian model. In principle, least squares will produce similar frequency estimates to the Bayesian model, since uninformative priors have been used; however, least squares does not directly determine their uncertainties. The Bayesian framework allows the oscillatory model to be understood in terms of probability theory, and achieves higher precision estimates due to the sharp maximum of the posterior probability density.
We apply the Bayesian model to the \ion{O}{5} data presented in \cite{me06}, and resolve four closely spaced independent frequencies within the 3-minute period range. The observations presented in \cite{me06} are interpreted as the conversion and propagation of photospheric p-mode oscillations along the magnetic field into the corona.
5-minute period range oscillations are observed within the umbral photosphere of sunspots, and are shown to be connected to the global p-mode oscillation distribution centered on 5-minutes \citep{pen93, bal87, bra87}. Oscillations in the 3-minute period range have been observed in the chromosphere above sunspot umbrae for many years \citep{bec69, bec72, gur82, lit82}. The 3-minute period range oscillations are thought to be due to amplitude steepening of the photospheric p-mode spectrum \citep{bog00}. Recent work by \cite{cen06} supports this, demonstrating that 3-minute range power in the chromosphere is due to linear wave propagation from the 5-minute range power in the photosphere. The 3-minute oscillations are also observed in the umbral transition region \citep{tho87,flu01, osh02, ren03, bry04}.
The results presented here suggest that we are able to resolve these oscillations into four closely spaced p-mode frequencies. \cite{zhu02} calculate the spectrum of eigenmodes within the vertical magnetic field of the sunspot umbra, finding that the 3-minute umbral oscillations are due to p-modes modified by the strong magnetic field within the sunspot. \cite{zhu05} calculates the same spectrum of umbral oscillations using the method of resonant filtering and by solving the eigenvalue problem, also determining that the 3-minute oscillations are part of the photospheric p-mode spectrum which propagates through the umbral atmosphere. The frequencies detected here, and their spacing, are consistent with the model results of \cite{zhu05} and may represent the detection of the $P_{4}$, $P_{5}$, $P_{6}$ and $P_{7}$ photospheric p-modes in the solar transition region. These results provide precise observational constraints for future modeling of umbral eigenmodes. A more detailed characterization of these resolved modes is required than can be given here. A following paper will investigate these modes and their constraints on a model atmosphere.
\acknowledgments
M.S. Marsh is supported by the NASA/ORAU postdoctoral program, and would like to acknowledge the encouragement of L.E. Pickard. SOHO is a project of international cooperation between ESA and NASA. We would like to thank the anonymous referee for useful comments that improved the manuscript.
{\it Facilities:} \facility{SOHO (CDS)}.
\bibliographystyle{apj}
\section{Introduction}
Quantum computers have the potential to impact a broad range of disciplines such as quantum chemistry~\cite{Moll2018}, finance \cite{Orus2019, Egger2020}, optimization~\cite{Farhi2014, Egger2020b}, and machine learning~\cite{Biamonte2017, Havlicek2019}.
The performance of noisy quantum computers has been improving as measured by metrics such as the Quantum Volume~\cite{Cross2019, Jurcevic2021} or
the coherence of superconducting transmon-based devices~\cite{Krantz2019, Kjaergaard2020,koch2007charge} which has exceeded $100~\mu{\rm s}$~\cite{Rigetti2012, IBMQuantum}.
To overcome limitations set by the noise, several error mitigation techniques such as readout error mitigation~\cite{Bravyi2020, Barron2020a} and Richardson extrapolation~\cite{Temme2017, Kandala2018} have been developed.
Gate families with continuous parameters further improve results~\cite{LaCroix2020, Foxen2020,gokhale2020optimized} as they require less coherence time than circuits in which the CNOT is the only two-qubit gate.
Aggregating instructions and optimizing the corresponding pulses, using e.g. gradient ascent algorithms such as GRAPE~\cite{Khaneja2005}, reduces the duration of the pulse schedules~\cite{Shi2019}.
However, such pulses require calibration to overcome model errors~\cite{Egger2014, Wittler2020} which typically needs closed-loop optimization~\cite{Kelly2014, Werninghaus2021} and sophisticated readout methods~\cite{Rol2017, Werninghaus2020}.
This approach may therefore be difficult to scale, as calibration is time-consuming and becomes increasingly hard as the control pulses grow more complex.
Some of these limitations may be overcome with novel control methods~\cite{Machnes2018}.
Since calibrating a two-qubit gate is time-consuming, IBM Quantum~\cite{IBMQuantum} backends only expose a calibrated CNOT gate built from echoed cross-resonance pulses~\cite{Chow2011, Sheldon2016} with rotary tones~\cite{Sundaresan2020}.
Quantum circuit users must therefore transpile their circuits to CNOT gates, which often makes poor use of the limited coherence time.
With the help of Qiskit pulse~\cite{Mckay2018, Alexander2020} users may extend the set of two-qubit gates \cite{Garion2020, Oomura2021, Heya2021}.
Such gates can in turn generate other multi-qubit gates more effectively than when the CNOT gate is the only two-qubit gate available~\cite{Oomura2021}.
However, creating these gates comes at the expense of additional calibration which is often impractical on a queue-based quantum computer.
Furthermore, only a limited number of users can access these benefits due to the need for an intimate familiarity with quantum control.
In Ref.~\cite{Stenger2020} the authors show a pulse-scaling methodology to create the control pulses for the continuous gate set $R_{ZX}(\theta)$ which they leverage to create $R_{YX}(\theta)$ gates and manually assemble into pulse schedules.
Crucially, the scaled pulses improved gate fidelity without the need for any extra calibration.
Here, we extend the methodology of Ref.~\cite{Stenger2020} to arbitrary $SU(4)$ gates and show how to make pulse-efficient circuit transpilation available to general users without having to manipulate pulse schedules.
In Sec.~\ref{sec:rzx} we review the pulse-scaling methodology of Ref.~\cite{Stenger2020} and carefully benchmark the performance of $R_{ZZ}$ gates.
Next, in Sec.~\ref{sec:su4}, we leverage this pulse-efficient gate generation to create arbitrary $SU(4)$ gates which we benchmark with quantum process tomography~\cite{Mohseni2008, Bialczak2010}.
In Sec.~\ref{sec:transpilation} we show how pulse-efficient gates can be included in automated circuit transpiler passes.
Finally, in Sec.~\ref{sec:demo} we demonstrate the advantage of our pulse-efficient transpilation by applying it to the Quantum Approximate Optimization Algorithm (QAOA)~\cite{Farhi2014}.
\section{Scaling hardware-native cross-resonance gates\label{sec:rzx}}
We consider an all-microwave fixed-frequency transmon architecture that implements the echoed cross-resonance gate~\cite{Sheldon2016}.
A two-qubit system in which a control qubit is driven at the frequency of a target qubit evolves under the time-dependent cross-resonance Hamiltonian $H_\text{cr}(t)$.
The time-independent approximation of $H_\text{cr}(t)$ is
\begin{align}
\Bar{H}_{cr}=\frac{1}{2}\left(Z\otimes B + I\otimes C\right)
\end{align}
where $B=\omega_{ZI}I+\omega_{ZX}X+\omega_{ZY}Y+\omega_{ZZ}Z$ and $C=\omega_{IX}X+\omega_{IY}Y+\omega_{IZ}Z$.
Here, $X$, $Y$, and $Z$ are Pauli matrices, $I$ is the identity, and $\omega_{ij}$ are drive strengths.
An echo sequence \cite{Sheldon2016} and rotary tones \cite{Sundaresan2020} isolate the $ZX$ interaction which ideally results in the unitary $R_{ZX}(\theta)=\exp\{-i\theta ZX/2\}$.
The rotation angle $\theta$ is $t_{cr}\omega_{ZX}(\bar{A})$ where $t_{cr}$ is the duration of the cross-resonance drive.
The drive strength $\omega_{ZX}$ has a non-linear dependency on the average drive-amplitude $\bar{A}$ as shown by a third-order approximation of the cross-resonance Hamiltonian~\cite{Magesan2020, Alexander2020}.
IBM Quantum systems expose to their users a calibrated CNOT gate built from $R_{ZX}(\pi/2)$ rotations implemented by the echoed cross-resonance gate.
The pulse sequence of $R_{ZX}(\pi/2)$ on the control qubit is ${\rm CR}(\pi/4)X{\rm CR}(-\pi/4)X$.
Here, ${\rm CR}(\pm \pi/4)$ are flat-top pulses of amplitude $A^*$, width $w^*$, and Gaussian flanks with standard deviations $\sigma$, truncated after $n_\sigma$ times $\sigma$.
Their area is $\alpha^*=|A^*|[w^*+\sqrt{2\pi}\sigma \mathrm{erf}(n_\sigma)]$ where the star superscript refers to the parameter values of the calibrated pulses in the CNOT gate.
During each ${\rm CR}$ pulse rotary tones are applied to the target qubit to help reduce the magnitude of the undesired $\omega_{IY}$ interaction.
We can create $R_{ZX}(\theta)$-rotations by scaling the area of the $\rm CR$ and rotary pulses following $\alpha(\theta)=2\theta\alpha^*/\pi$ as done in Ref.~\cite{Stenger2020}.
To create a target area $\alpha(\theta)$ we first scale $w$ to minimize the effect of the non-linearity between the drive strength $\omega_{ZX}(\bar A)$ and the pulse amplitude.
When $\alpha(\theta)<|A^*|\sigma\sqrt{2\pi}\mathrm{erf}(n_\sigma)$ we set $w=0$ and scale the pulse amplitude such that $|A(\theta)|=\alpha(\theta)/[\sigma\sqrt{2\pi}\mathrm{erf}(n_\sigma)]$.
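The scaling rule maps directly to code. A sketch with illustrative, uncalibrated parameter values, assuming real pulse amplitudes (hardware amplitudes may be complex):

```python
import math

def scaled_cr_pulse(theta, amp_cal, width_cal, sigma, n_sigma):
    """Amplitude and width of a flat-top CR pulse implementing R_ZX(theta),
    scaled from the calibrated pulse, which implements R_ZX(pi/2).
    Gaussian-flank area: sqrt(2*pi)*sigma*erf(n_sigma)."""
    flank = math.sqrt(2.0 * math.pi) * sigma * math.erf(n_sigma)
    area_cal = abs(amp_cal) * (width_cal + flank)     # alpha*
    area = 2.0 * theta * area_cal / math.pi           # alpha(theta)
    width = area / abs(amp_cal) - flank               # scale the width first ...
    if width >= 0.0:
        return amp_cal, width
    return math.copysign(area / flank, amp_cal), 0.0  # ... then the amplitude

# theta = pi/2 recovers the calibrated pulse; small theta scales the amplitude.
print(scaled_cr_pulse(math.pi / 2, 0.3, 400.0, 64.0, 2.0))
print(scaled_cr_pulse(0.05, 0.3, 400.0, 64.0, 2.0))
```

Scaling the width first keeps the amplitude, and hence the non-linearity of $\omega_{ZX}(\bar{A})$, fixed; only below the flank-area threshold does the amplitude change.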
We investigate the effect of the pulse scaling methodology with quantum process tomography by carefully benchmarking scaled $R_{ZZ}(\theta)$ gates, see Fig.~\ref{fig:QPT_angle_dev}(a), with respect to the double-CNOT decomposition, see Fig.~\ref{fig:QPT_angle_dev}(b).
We measure the process fidelity $\mathcal{F}[U_\text{meas}, R_{ZZ}(\theta)]$ between the target gate $R_{ZZ}(\theta)$ and the measured gate $U_\text{meas}$.
To determine $U_\text{meas}$ we prepare each qubit in $\ket{0}$, $\ket{1}$, $(\ket{0}+\ket{1})/\sqrt{2}$, and $(\ket{0}+i\ket{1})/\sqrt{2}$ and measure in the $X$, $Y$, and $Z$ bases.
Two qubit process tomography therefore requires a total of 148 circuits for each angle of interest which includes four circuits needed to mitigate readout errors~\cite{Bravyi2020, Barron2020a}.
The scaled pulses consistently have a better fidelity than the double CNOT benchmark as demonstrated by the data gathered on \emph{ibmq\_mumbai} with qubits one and two, see Fig.~\ref{fig:QPT_angle_dev}(c).
Appendix~\ref{sec:appendix_additional_data} shows key device parameters and additional data taken on other IBM Quantum devices which illustrates the reliability of the methodology.
The relative error reduction of the measured gate fidelity correlates well to the relative error reduction of the coherence limited average gate fidelity~\cite{Horodecki1999, Magesan2011, Sundaresan2020}, see Fig.~\ref{fig:QPT_angle_dev}(d) and details in Appendix~\ref{sec:appendix_fidelity_limit}.
We therefore attribute the error reduction to the shorter schedules as they use less coherence time.
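Both implementations in Fig.~\ref{fig:QPT_angle_dev}(a) and (b) realize the same two-qubit unitary, which can be checked numerically: $\mathrm{CX}\,(I\otimes R_Z(\theta))\,\mathrm{CX} = R_{ZZ}(\theta)$ for the double-CNOT construction, and $(I\otimes H)R_{ZX}(\theta)(I\otimes H)=R_{ZZ}(\theta)$ for the scaled-pulse construction, where the Hadamard $H$ equals $U_{1,1}$ up to a global phase:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def rot(P, theta):
    """exp(-i*theta*P/2) for a Pauli product P (P @ P = identity)."""
    return np.cos(theta/2) * np.eye(P.shape[0]) - 1j * np.sin(theta/2) * P

theta = 0.73
rzz = rot(np.kron(Z, Z), theta)
via_cnots = CX @ np.kron(I2, rot(Z, theta)) @ CX                       # Fig. 1(a)
via_cr = np.kron(I2, H) @ rot(np.kron(Z, X), theta) @ np.kron(I2, H)   # Fig. 1(b)
print(np.allclose(via_cnots, rzz), np.allclose(via_cr, rzz))  # → True True
```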
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Figure1.png}
\caption{$R_{ZZ}(\theta)$ characterization for qubits one and two on \emph{ibmq\_mumbai}.
(a) Double-CNOT benchmark.
(b) Continuous gate implementation where $U_{1,1}=R_Z(\pi/2)\sqrt{X}R_Z(\pi/2)$.
Here, $R_{ZX}(\theta)$ is a scaled cross-resonance pulse with a built-in echo.
(c) Gate fidelity $\mathcal{F}[U_\text{meas}, R_{ZZ}(\theta)]$ of the double-CNOT implementation (blue) and the scaled cross-resonance pulses (orange).
The vertical line indicates the angle at which $w=0$.
(d) The relative error between the two implementations (green dots), and the theoretical expectations for a coherence limited gate (solid black line).
(e) The deviation angle $\Delta \theta=\theta-\theta_\text{max}$ corresponding to the data in (c) that achieves the maximum gate fidelity $\mathcal{F}[U_\text{meas}, R_{ZZ}(\theta_{\text{max}} )]$.}
\label{fig:QPT_angle_dev}
\end{figure}
In addition to the gate fidelity, we compare the deviation $\Delta\theta$ from the target angle of both implementations of the $R_{ZZ}(\theta)$ rotation.
The deviation $\Delta\theta$ is the difference between the target rotation angle $\theta$ and the angle $\theta_{\text{max}}$ which satisfies $\mathcal{F}[U_\text{meas}, R_{ZZ}(\theta_\text{max})]\geq \mathcal{F}[U_\text{meas}, R_{ZZ}(\theta')]~\forall~\theta'$.
Since the $R_Z(\theta)$ is virtual~\cite{Mckay2017} the implementation with two CNOT gates does not depend on the desired target angle, see Fig.~\ref{fig:QPT_angle_dev}(e).
However, the scaled gate has two competing non-linearities: an expected non-linearity from the amplitude scaling and an unexpected one from scaling the width.
As the width is scaled down, the angle deviation increases from $\sim\!\!10~{\rm mrad}$ to $\sim\!\!35~{\rm mrad}$.
Once the amplitude scaling begins, a non-linearity arises which reduces the deviation angle of the scaled gates.
At $\alpha(\theta) \approx |A^*|\sigma\sqrt{2\pi}\mathrm{erf}(n_\sigma)/2$ the angle deviation of the scaled gates once again matches the deviation of the benchmark within the measured standard deviation.
\section{Creating arbitrary SU(4) gates\label{sec:su4}}
\begin{figure}[htbp]
\includegraphics[width=\columnwidth]{figures/Figure2.pdf}
\caption{Cartan's $KAK$ decomposition. (a) Circuit representation of the $KAK$ decomposition of a two-qubit gate $U\in SU(4)$ with $k_1 = (A_1 \otimes A_0)$ and $k_2 = (B_1 \otimes B_0)$. (b) Circuit in (a) without $k_{1,2}$ and decomposed into three CNOT gates and transpiled to the basis gates $(R_Z(\theta), \sqrt{X}, \mathrm{CNOT})$. (c) Circuit in (a) decomposed into the hardware-native $R_{ZX}$ gates. Here, each $R_{ZX}$ gate has a built-in echo as shown in (d).
Transpiling circuit (c) to the basis $(R_Z(\theta), \sqrt{X}, R_{ZX}(\theta))$ with the echoes exposed to the transpiler results in the pulse-efficient circuit shown in (e) where the scaled $R_{ZX}$ gates do not have an echo.
We replaced $R_Z(n\pi/2)\sqrt{X}R_Z(m\pi/2)$ with $U_{n,m}$ and $U_{1,\alpha} = R_Z(\pi/2)\sqrt{X}R_Z(\alpha)$ to shorten the notation.
\label{fig:KAK}}
\end{figure}
We now generalize the results from Sec.~\ref{sec:rzx}.
Cartan's decomposition of an arbitrary two-qubit gate $U\in SU(4)$ is $U = k_1 A k_2$ which we refer to as Cartan's $KAK$ decomposition~\cite{Khaneja2001}.
Here $k_1$ and $k_2$ are local operations, i.e. $k_{1,2} \in SU(2) \otimes SU(2)$, and $A = e^{i\boldsymbol{k}^T\cdot\boldsymbol{\Sigma}/2} \in SU(4) \setminus SU(2) \otimes SU(2)$ is a non-local operation with $\boldsymbol{\Sigma}^T=(XX, YY, ZZ)$~\cite{Zhang2003,Tucci2005, Byron2008}, see Fig.~\ref{fig:KAK}(a).
The non-local term is defined by the three angles $\boldsymbol{k}^T=(\alpha, \beta, \gamma)\in\mathbb{R}^{3}$ satisfying $\alpha+\beta+\gamma\leq 3\pi/2$ and $\pi\geq\alpha\geq\beta\geq\gamma\geq0$.
Geometrically, the $KAK$ decomposition is represented in a tetrahedron known as the Weyl chamber in the three-dimensional space, see Fig.~\ref{fig:weyl_chamber}.
Every point $(\alpha, \beta, \gamma)$ in the Weyl chamber (except in the base) defines a continuous set of two-qubit gates equivalent up to single-qubit rotations~\cite{Zhang2003}.
For instance, the point $(\frac{\pi}{2}, 0, 0)$, labeled as $C$ in Fig.~\ref{fig:weyl_chamber}, corresponds to the local equivalence class of the CNOT gate, and the point $(\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2})$, labeled as $A_3$, represents the SWAP gate.
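The non-local term and its local-equivalence classes can be explored numerically. The sketch below builds $A$ from the commuting generators and checks, via the Makhlin local invariants (a standard tool, not introduced in the text), that the Weyl point $(\frac{\pi}{2},0,0)$ indeed lies in the CNOT equivalence class; the magic-basis phase convention used here is one common choice:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
XX, YY, ZZ = (np.kron(P, P) for P in (X, Y, Z))
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
# Magic (Bell) basis; this phase convention makes local gates real orthogonal.
Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
              [0, 1j, -1, 0], [1, 0, 0, -1j]], dtype=complex) / np.sqrt(2)

def weyl_gate(a, b, g):
    """A = exp(i(a XX + b YY + g ZZ)/2); the generators commute, so the
    exponential factorizes into three Pauli rotations."""
    U = np.eye(4, dtype=complex)
    for ang, P in ((a, XX), (b, YY), (g, ZZ)):
        U = U @ (np.cos(ang/2) * np.eye(4) + 1j * np.sin(ang/2) * P)
    return U

def makhlin_invariants(U):
    """Local invariants (G1, G2); they coincide for locally equivalent gates."""
    m = Q.conj().T @ U @ Q
    m = m.T @ m
    t, t2, det = np.trace(m), np.trace(m @ m), np.linalg.det(U)
    return t**2 / (16 * det), (t**2 - t2) / (4 * det)

print(np.round(makhlin_invariants(weyl_gate(np.pi/2, 0.0, 0.0)), 6))
print(np.round(makhlin_invariants(CX), 6))  # same class: (G1, G2) = (0, 1)
```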
\begin{figure}[tbp!]
\centering
\includegraphics[width=\columnwidth, clip, trim=20 0 0 0 ]{figures/Figure3.pdf}
\caption{Weyl Chamber of $SU(4)$. The coordinates of the chamber are $O=(0,0,0)$, $A_1=(\pi,0,0)$, $A_2=(\frac{\pi}{2}, \frac{\pi}{2}, 0)$, and $A_3=(\frac{\pi}{2}, \frac{\pi}{2}, \frac{\pi}{2})$.
$C$ corresponds to the $\rm CNOT$ gate.
The blue dots represent the data from Fig.~\ref{fig:SU4} taken on \emph{ibmq\_mumbai}.
}
\label{fig:weyl_chamber}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\columnwidth]{figures/Figure4.pdf}
\caption{Gate error reduction of the pulse-efficient $SU(4)$ gates relative to the three CNOT benchmark for random angles in the Weyl chamber measured on \emph{ibmq\_dublin}, qubits one and two (light blue circles), and \emph{ibmq\_mumbai}, qubits 19 and 16 (dark blue triangles).
The $x$-axis is the duration of the pulse-efficient $SU(4)$ gates relative to the three CNOT benchmark.
The angles of three gates are indicated in parentheses as examples.
\label{fig:SU4}}
\end{figure}
\begin{figure*}[htbp!]
\includegraphics[width=\textwidth]{figures/Figure5.pdf}
\caption{Pulse-efficient transpilation example. (a) Circuit of the cost operator for a QAOA circuit implemented on three qubits connected in a line. (b) and (c) Templates of the $R_{ZZ}$ and phase-swap gates, respectively.
Here, $R_{ZZ}(\theta)$ and $\mathrm{SWAP}(\theta)$ hold the rules with which to decompose them into the hardware-native $R_{ZX}$ gates.
(d) Circuit resulting from the template matching of (b) and (c) performed on circuit (a).
(e) Circuit resulting from a transpilation of (d) which uses the decompositions rules of $R_{ZZ}(\theta)$ and $\mathrm{SWAP}(\theta)$ into $R_{ZX}$.
To shorten the circuit figure we replaced $R_Z(n\pi/2)\sqrt{X}R_Z(m\pi/2)$ with $U_{n,m}$ and $c=0.215$.
\label{fig:tranpiler_example}}
\end{figure*}
Since the rotations generated by $XX$, $YY$, and $ZZ$ are locally equivalent to rotations generated by $ZX$ we decompose the non-local $e^{i\boldsymbol{k}^T\cdot\boldsymbol{\Sigma}/2}$ term into a circuit with three $R_{ZX}$ rotations, see Fig.~\ref{fig:KAK}(c).
We shorten the total duration of the circuit by exposing the echo in the cross-resonance gate to the transpiler, see Fig.~\ref{fig:KAK}(d).
This ensures that at most one single-qubit pulse is needed on each qubit between each non-echoed cross-resonance $R_{ZX}$ gate.
By scaling the cross-resonance pulses we create the $R_{ZX}$ gates for arbitrary angles and therefore generalize the methods of Sec.~\ref{sec:rzx} to arbitrary gates in $SU(4)$.
We generate $R_{ZX}$-based circuits as shown in Fig.~\ref{fig:KAK}(e) for $(\alpha, \beta, \gamma)$ angles chosen at random from the Weyl chamber and measure their fidelity using process tomography with readout error mitigation.
Each $R_{ZX}$-based circuit is benchmarked against its equivalent three CNOT decomposition presented in Ref.~\cite{Vidal2004} and shown in Fig.~\ref{fig:KAK}(b).
The experiments are run on \emph{ibmq\_dublin} and \emph{ibmq\_mumbai} with 2048 shots for each circuit which we measure three times to gain statistics.
The pulse-efficient $R_{ZX}$-based decomposition of the circuits results in a significant fidelity increase for almost all angles, see Fig.~\ref{fig:SU4}.
A subset of the data is also shown in the Weyl chamber in Fig.~\ref{fig:weyl_chamber}.
The correlation between the relative error reduction and the relative schedule duration indicates that the gains in fidelity come from a better usage of the finite coherence time as the scaled cross-resonance pulses achieve the same unitary in less time.
Remarkably, these results were achieved without recalibrating any pulses.
\section{Pulse-efficient transpiler passes\label{sec:transpilation}}
The quantum circuits of an algorithm are typically expressed using generic gates such as the CNOT or controlled-phase gate and then transpiled to the hardware on which they are run~\cite{Qiskit}.
Quantum algorithms can benefit from the continuous family of gates presented in Sec.~\ref{sec:rzx} and \ref{sec:su4} if the underlying quantum circuit is either directly built from, or transpiled to, the hardware native $R_{ZX}(\theta)$ gate.
We now show how to transpile quantum circuits to an $R_{ZX}(\theta)$-based circuit with template substitution~\cite{Iten2020}.
A template is a quantum circuit made of $|T|$ gates acting on $n_T$ qubits that compose to the identity $U_1...U_{|T|}=\mathds{1}$, see e.g. Fig.~\ref{fig:tranpiler_example}(b) and (c).
In a template substitution transpilation pass we identify a subset of the gates in the template $U_a...U_b$ that match those in a given quantum circuit.
Next, if the cost of the matched gates is higher than the cost of the unmatched gates in the template we replace $U_\text{match}=U_a...U_b$ with $U^\dagger_{a-1}...U^\dagger_1U_{|T|}^\dagger...U_{b+1}^\dagger$.
As cost we use a heuristic that sums the cost of each gate, defined as an integer weight which is higher for two-qubit gates; details are provided in Appendix~\ref{sec:appendix_implementation}.
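The substitution rule can be illustrated with a small, self-contained toy example (this is not the actual Qiskit pass; the gates, matrices, and integer costs below are chosen purely for illustration): the template $[T, T, S^\dagger]$ composes to the identity, so a matched $TT$ sequence may be replaced by the inverse of the unmatched remainder, $(S^\dagger)^\dagger = S$, whenever the cost heuristic favours it.

```python
import numpy as np

# Toy template U1 U2 U3 = identity, with U1 = U2 = T and U3 = S^dagger,
# since T.T = S.  All values are illustrative, not from the paper.
Tg = np.diag([1.0, np.exp(1j * np.pi / 4)])   # T gate
S = np.diag([1.0, 1.0j])                      # S gate
template = [Tg, Tg, S.conj().T]
assert np.allclose(template[0] @ template[1] @ template[2], np.eye(2))

# Suppose the circuit contains the match U_a...U_b = U1 U2 (a = 1, b = 2).
# The pass replaces it with the dagger of the unmatched template part,
# here U3^dagger = S.
matched = template[0] @ template[1]
replacement = template[2].conj().T
assert np.allclose(matched, replacement)

# Heuristic integer gate costs: substitute only when the matched gates
# cost more than their replacement (here two gates versus one).
cost_matched, cost_replacement = 2, 1
do_replace = cost_matched > cost_replacement
```

The real pass works on multi-qubit circuits and weighs two-qubit gates more heavily than single-qubit ones, but the bookkeeping is the same.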
The complexity of the template matching algorithm on a circuit with $|C|$ gates and $n_C$ qubits is
\begin{align}
\mathcal{O}\left(|C|^{|T|+3}|T|^{|T|+4}n_C^{n_T-1}\right),
\end{align}
i.e. exponential in the template length~\cite{Iten2020}.
We therefore create short templates where the inverse of the intended match, i.e. $U_\text{match}^\dagger$, is specified as a single gate with rules to further decompose it into $R_{ZX}$ and single-qubit gates in a subsequent transpilation pass.
In these decompositions we expose the echoed cross-resonance implementation of $R_{ZX}$ to the transpiler by writing $R_{ZX}(\theta)=X R_{ZX}(-\theta/2)XR_{ZX}(\theta/2)$.
This allows the transpiler to further simplify the single-qubit gates that would otherwise be hidden in the schedules of the two-qubit gates, as exemplified in the circuit in Fig.~\ref{fig:tranpiler_example}(e).
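The echo identity $R_{ZX}(\theta)=X R_{ZX}(-\theta/2)XR_{ZX}(\theta/2)$, with the $X$ pulses acting on the control qubit, can be verified numerically; the following numpy-only sketch uses the closed form $\exp(-i\theta ZX/2)=\cos(\theta/2)\mathds{1}-i\sin(\theta/2)\,Z\otimes X$, valid because $(Z\otimes X)^2=\mathds{1}$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ZX = np.kron(Z, X)        # Z on the control qubit, X on the target
XI = np.kron(X, I2)       # echo pulse: X on the control qubit

def rzx(theta):
    # exp(-i theta/2 Z(x)X); closed form since (Z(x)X)^2 = identity
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * ZX

theta = 0.73              # arbitrary test angle
echoed = XI @ rzx(-theta / 2) @ XI @ rzx(theta / 2)
assert np.allclose(echoed, rzx(theta))
```

Conjugating $R_{ZX}(-\theta/2)$ by $X$ on the control flips the sign of the generator, so the two halves add up to the full rotation; exposing this structure is what lets the transpiler merge adjacent single-qubit pulses.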
Finally, once the $R_{ZX}(\theta)$ gates are introduced into the quantum circuit we run a third transpilation pass to attach pulse schedules to each $R_{ZX}(\theta)$ gate built from the backend's calibrated CNOT gates following the procedure in Sec.~\ref{sec:rzx}.
The attached schedules consist of the scaled cross-resonance pulse and rotary tone without any echo.
Details on the Qiskit implementation are given in Appendix~\ref{sec:appendix_implementation}.
\section{Improving QAOA with Cartan's decomposition\label{sec:demo}}
We use the QAOA~\cite{Farhi2014, Farhi2015, Yang2017}, applied to MAXCUT, to demonstrate gains of a pulse-efficient circuit transpilation on noisy hardware.
QAOA maps a quadratic binary optimization problem with $n$ decision variables to a cost function Hamiltonian $\hat H_C=\sum_{i,j}\alpha_{i,j}Z_iZ_j$ where $\alpha_{i,j}\in\mathbb{R}$ are problem dependent and $Z_i$ are Pauli $Z$ operators.
The ground state of $\hat H_C$ encodes the solution to the problem.
Next, a classical solver minimizes the energy $\braket{\psi(\boldsymbol{\beta}, \boldsymbol{\gamma})|\hat H_C |\psi(\boldsymbol{\beta}, \boldsymbol{\gamma})}$ of a trial state $\ket{\psi{(\boldsymbol{\beta}, \boldsymbol{\gamma})}}$ created by applying $p$ layers of the operator $\exp(-i\beta_k\sum_{j=0}^{n-1} X_j)\exp(-i\gamma_k\hat H_C)$, with $k=1, ..., p$, to the equal superposition of all states.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth, clip, trim= 4 0 4 0]{figures/Figure6.pdf}
\caption{Depth-one QAOA energy landscape.
(a) Noiseless simulation of the cut value, averaged over all 4096 bit-strings sampled from $\ket{\psi(\beta, \gamma)}$, obtained using the QASM simulator for the weighted graph shown in (b).
The maximum cut, with value 28, is indicated by the color of the nodes in (b).
Figures (c) and (e) show hardware results obtained by transpiling to CNOT gates and by using the $R_{ZX}$ pulse-efficient methodology, respectively. Figures (d) and (f) share the same color scale and show the absolute deviation from the ideal averaged cut values in figures (c) and (e), respectively.}
\label{fig:qaoa}
\end{figure}
Implementing the operator $\exp(-i\gamma_k\hat H_C)$ requires applying the $R_{ZZ}(\theta)=\exp(-i\theta ZZ/2)$ gate on pairs of qubits.
However, to overcome the limited connectivity of superconducting qubit chips~\cite{Harrigan2021}, several $R_{ZZ}(\theta)$ gates are followed or preceded by a ${\rm SWAP}$, resulting in the unitary operator
\begin{align}
{\rm SWAP}(\theta)=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & e^{i\theta} & 0 \\
0 & e^{i\theta} & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\end{align}
up to a global phase.
When mapped to the $KAK$ decomposition, ${\rm SWAP}(\theta)$ corresponds to $\boldsymbol{k}^T=(\eta\pi/2, \eta\pi/2, \theta + \eta\pi/2)$ where $\eta=-1$ if $\theta>0$ and 1 otherwise.
This allows us to reduce the total cross-resonance duration using the methodology presented in Sec.~\ref{sec:su4}.
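Consistent with its construction from an $R_{ZZ}(\theta)$ combined with a ${\rm SWAP}$, the matrix above equals ${\rm SWAP}\cdot R_{ZZ}(\theta)$ up to a global phase; a short numerical check (numpy only, angle chosen arbitrarily):

```python
import numpy as np

def swap_theta(theta):
    # The SWAP(theta) matrix as given in the text
    m = np.zeros((4, 4), dtype=complex)
    m[0, 0] = m[3, 3] = 1.0
    m[1, 2] = m[2, 1] = np.exp(1j * theta)
    return m

def rzz(theta):
    # R_ZZ(theta) = exp(-i theta/2 ZZ); closed form since (ZZ)^2 = identity
    zz = np.diag([1.0, -1.0, -1.0, 1.0]).astype(complex)
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * zz

SWAP = swap_theta(0.0)          # theta = 0 reduces to the plain SWAP
theta = 1.1                     # arbitrary example angle
lhs = swap_theta(theta)
rhs = SWAP @ rzz(theta)         # R_ZZ(theta) combined with a SWAP
phase = lhs[0, 0] / rhs[0, 0]   # global phase factor
assert np.isclose(abs(phase), 1.0)
assert np.allclose(lhs, phase * rhs)
```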
We perform a depth-one QAOA circuit for an eleven node graph, shown in Fig.~\ref{fig:qaoa}(b), built from CNOT gates.
We map the decision variables zero to ten to qubits 7, 10, 12, 15, 18, 13, 8, 11, 14, 16, 19 on \emph{ibmq\_mumbai}, respectively.
Since the graph is non-hardware-native, eight $\rm SWAP$ gates are needed to implement the circuits.
In QAOA the optimal values of $(\beta, \gamma)$ are found with a classical optimizer~\cite{Bengtsson2020}.
Here, we scan $\beta$ and $\gamma$ over $\pm2~{\rm rad}$ and $\pm1~{\rm rad}$, respectively, as we submit jobs through the queue of the cloud-based IBM Quantum computers.
For each $(\beta, \gamma)$ pair we run the circuits with the noiseless QASM simulator in Qiskit, see Fig.~\ref{fig:qaoa}(a), and twice on the hardware.
The first hardware run is done using a CNOT decomposition with the Qiskit transpiler on optimization level three, see Fig.~\ref{fig:qaoa}(c) for results.
The second run is done with the pulse-efficient circuit transpilation, see Fig.~\ref{fig:qaoa}(e) for results.
Here, we first perform the template substitution with the $R_{ZZ}(\theta)$ and ${\rm SWAP}(\theta)$ templates shown in Fig.~\ref{fig:tranpiler_example}(b) and (c); see Appendix~\ref{sec:appendix_implementation} for further details.
A second transpilation pass then exposes the $R_{ZX}(\theta)$ gates to which we attach pulse schedules in a third transpilation pass following Sections \ref{sec:rzx} -- \ref{sec:transpilation}.
In each case we measure 4096 shots.
The pulse-efficient circuits produce less noisy average cut values, compare Fig.~\ref{fig:qaoa}(c) with (e), and have a lower absolute deviation from the noiseless simulation than the circuits transpiled to CNOT gates, compare Fig.~\ref{fig:qaoa}(d) with (f).
The maximum error in the cut value averaged over the sampled bit-strings is reduced by 38\% from 3.65 to 2.26.
We attribute the increased quality of the results to the decrease in total cross-resonance time and the fact that the pulse-efficient transpilation keeps the number of single-qubit pulses to a minimum.
In total, we observe a reduction in total schedule duration ranging from 42\% to 52\% depending on $\gamma$ when using the pulse-efficient transpilation methodology, see Fig.~\ref{fig:qaoa_duration}.
Since the schedule duration of $R_{ZZ}(\gamma\alpha_{i,j})$ decreases and that of ${\rm SWAP}(\gamma\alpha_{i,j})$ increases as $\gamma$ decreases, we observe a non-monotonic reduction in the schedule duration of the QAOA circuit as a function of $\gamma$.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth, clip, trim= 4 0 4 0]{figures/Figure7.pdf}
\caption{QAOA schedule durations.
(a) Duration of the scheduled quantum circuits transpiled to CNOTs with optimization level three (blue circles) and with the pulse-efficient methodology (orange triangles).
In both cases we removed the final measurements from the quantum circuits.
(b) Length of the pulse-efficient schedules relative to the CNOT-based schedules.}
\label{fig:qaoa_duration}
\end{figure}
\section{Discussion and Conclusion}
The results in Sec.~\ref{sec:rzx} and \ref{sec:su4} showed that by scaling cross-resonance gates we can automatically create a continuous family of gates which implements $SU(4)$.
These scaled gates typically have shorter pulse schedules and higher fidelities than the digital CNOT implementation.
This fidelity is limited by coherence, imperfections in the initial calibration, and non-linear effects.
Crucially, the resulting gate-tailored pulse schedules do not require additional calibration and can therefore be automatically generated by the transpiler.
Transpilation passes, as discussed in Sec.~\ref{sec:transpilation}, can be leveraged to identify and attach the scaled pulse schedules to the gates in a quantum circuit.
Furthermore, exposing the echo in the cross-resonance gate to the transpiler allows further simplifications of the single-qubit gates.
We used this pulse-efficient transpilation methodology to reduce errors in an eleven-qubit depth-one QAOA.
Scaled gates are particularly appealing for Trotter based applications, as shown in Ref.~\cite{Stenger2020}, and could therefore benefit quantum simulations~\cite{Tornow2020}.
Future work may also include scaling direct cross-resonance gates~\cite{Jurcevic2021} and benchmarking their impact on Quantum Volume~\cite{Cross2019}.
Methods to interpolate pulse parameters based on a set of reference $R_{ZX}(\theta)$ gates, calibrated at a few reference angles $\theta$, might also improve the gate fidelity and help deal with non-linearities between the rotation angle $\theta$ and pulse parameters.
For variational algorithms, such as the variational quantum eigensolver, the scaled $SU(4)$ gates may allow for better results due to the shorter schedules while still being robust to some unitary errors such as angle errors~\cite{Colless2018, Egger2019}.
We believe that the methods presented in our work will help users of noisy quantum hardware to reap the benefits of pulse-level control without having to know its intricacies.
This can improve the quality of a broad class of quantum applications running on noisy quantum hardware.
\section{Acknowledgments}
The authors acknowledge use of the IBM Quantum devices for this work.
The authors also thank L. Capelluto, N. Kanazawa, N. Bronn, T. Itoko and E. Pritchett for insightful discussions and S. Woerner for a careful read of the manuscript.
\section{Introduction}
Clusters of galaxies represent the end product of hierarchical structure formation. They play a key role in understanding the cosmological interplay of dark matter and dark energy. Their number density, baryonic content, and their growth are sensitive probes of cosmological parameters, such as the mean dark matter and dark energy density $\Omega_{\rm{m}}$ and $\Omega_{\rm{\Lambda}}$, the dark energy equation of state parameter $w$ and the normalization of the matter power spectrum $\sigma_8$ (see \citealt{allen11} for a recent review).
The idea of using cluster counts to probe cosmology is based on the halo mass function, which predicts their number density as a function of mass, redshift and cosmological parameters (see e.g. \citealt{press74,sheth99,tinker08}). The observational task consists of obtaining an ensemble of galaxy clusters with an observable that correlates with their true mass and a well defined selection function.
In recent years a number of multiwavelength, deep, and wide observations and surveys have been conducted which allow the detection of galaxy clusters with a high signal-to-noise ratio (S/N) out to high redshifts (e.g., $z\sim2.5$). Observations are based on properties of baryonic origin, among them the number count of red galaxies (called richness, see e.g. \citealt{gladders05,koester07,rykoff14}) or the inverse Compton scattering of cosmic microwave photons on the hot intra-cluster gas (the \citealt{sunyaev80} effect, see \citealt{bleem15} and \citealt{planckcollaboration15} for the latest observational results). Another approach is to select a galaxy cluster sample from X-ray observations (see e.g. \citealt{ebeling98,boehringer04,ebeling10, gozaliasl14, gozaliasl19}). However, hydrodynamical simulations have shown that even for excellently measured X-ray observables with small intrinsic scatter at fixed mass and dynamically relaxed clusters at optimal measurement radii ($r \sim r_{2500}$), non-thermal pressure support from residual gas bulk motion and other processes are expected to bias the hydrostatic X-ray mass estimates down by up to 5-10 per cent (see \citealt{nagai07,rasia12}), which represents the currently dominant systematic uncertainty in constraining cosmology from X-ray cluster samples (see \citealt{henry09,vikhlinin09,mantz10,rozo10,benson13,mantz15}).
For this reason, the idea of absolute calibration of the mass scale of large cluster samples by weak gravitational cluster lensing (see e.g. \citealt{hoekstra07,marrone12,gruen14,vonderlinden14a,vonderlinden14b,melchior16,herbonnet19}) has gained traction over the last years. Weak gravitational lensing is sensitive to the entire gravitational matter and is therefore mostly free of systematic uncertainty that relates to the more complex interaction of baryons.
However, weak lensing mass measurements for individual clusters are inherently quite noisy, as the measured ellipticities of background galaxies do not only depend on the gravitational shear induced by the analysed galaxy cluster but also on the quite broad intrinsic ellipticity distribution, and on the gravitational imprint of all matter along the line of sight, including unrelated projected structure (see e.g. \citealt{hoekstra01,hoekstra03,spinelli12}). On top of this, at fixed true mass the density profiles of clusters intrinsically vary, causing additional scatter in weak lensing mass estimation \citep{becker11,gruen11,gruen15}. For this reason, relatively large samples of galaxy clusters need to be investigated to statistically meet the calibration requirements of cosmology.
Even with large samples of clusters and sufficiently deep optical data to measure the shapes of numerous background galaxies, several systematic uncertainties limit the power of weak lensing mass calibration. Firstly, shape measurement algorithms commonly recover the amplitude of gravitational shear only with a one to several per-cent level multiplicative bias (e.g. \citealt{mandelbaum15,jarvis16,fenechconti17}, but see the recent advances of \citealt{huff17,sheldon17}). Secondly, the amplitude of the weak lensing signal depends not only on the cluster mass, but also on the geometric configuration of observer, lens and background objects, more specifically on the angular diameter distances among observer, lens, and source. For interpreting the shear signal, additional photometric data are required to obtain the necessary distance information by photometric redshifts (\citealt{lerchster11,gruen13}), colour cuts or distance estimates by colour-magnitude properties (\citealt{gruen14,cibirka16}). All these methods suffer from systematic uncertainties (see e.g. \citealt{applegate14,gruen16}) that translate to systematic errors in cluster masses. On a related note, cluster member galaxies can enter the photometrically selected background galaxy sample and lower the observed gravitational shear signal (see e.g. \citealt{sheldon04,gruen14,melchior16} for different methods of estimating and correcting the impact of this). Finally, a mismatch between the fitted density profile and the underlying true mean profile of clusters at a given mass (including the miscentring of clusters relative to the assumed positions in the lensing analysis) can cause significant uncertainty in weak lensing cluster mass estimates (see e.g. \citealt{melchior16}).
In this COnstrain Dark Energy with X-ray clusters (CODEX) study we present weak lensing mass analysis for a total of 25 galaxy clusters. The initial CODEX sample of 407 clusters, from which the main lensing sample is obtained, is cut at $0.35 < z < 0.65$ and $\lambda \geq 60$, with an X-ray based selection function.
To this end, we also develop new methods to provide a full likelihood of the lensing signal as a function of individual cluster mass, and carefully characterize the systematic uncertainty.
This paper is structured as follows. In \autoref{sec:data and analysis} we present the data and analysis, including data reduction, photometric processing, richness estimation, shape measurement and mass likelihood. In \autoref{sec:bayesmodel} we describe our Hierarchical Bayesian model, which we use to estimate richness-mass relation. In \autoref{sec:application} we apply the Hierarchical Bayesian model to find the richness--mass relation of all 25 clusters in the weak lensing mass catalog. In \autoref{sec:results} we present our results of the Bayesian analysis, and in \autoref{sec:conclusion} we summarize and conclude. In the Appendix, we detail our systematic uncertainties, fields with incomplete colour information, and present weak lensing mass measurements for 32 clusters excluded from the richness--mass calibration.
We adopt a concordance $\Lambda$CDM cosmology and WMAP7 results \citep{Komatsu_2011} with $\Omega_{\rm{m}}=0.27$, $\Omega_{\rm{\Lambda}}=0.73$ and
$H_0 = 70$ km\, s$^{-1}$\, Mpc$^{-1}$. The halo mass of galaxy clusters in this study corresponds to $M_{\mathrm{200c}}$, defined as the mass within the radius $r_{200c}$ inside which the mean density is 200 times the critical density of the Universe ($\rho_{\rm{c}}$).
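For reference, the $M_{200c}$ convention can be inverted to give $r_{200c}$ from $M_{200c}=200\,\rho_{\rm c}(z)\,\frac{4}{3}\pi r_{200c}^3$; the sketch below assumes the flat cosmology adopted here ($\Omega_{\rm m}=0.27$, $\Omega_{\rm \Lambda}=0.73$, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$) and an illustrative, hypothetical cluster mass.

```python
import numpy as np

# Minimal sketch: invert M_200c = 200 rho_c(z) (4/3) pi r_200c^3 for
# r_200c under the adopted flat LCDM cosmology.
G = 4.301e-9                 # gravitational constant in Mpc Msun^-1 (km/s)^2
H0, Om, OL = 70.0, 0.27, 0.73

def rho_crit(z):
    """Critical density rho_c(z) in Msun / Mpc^3."""
    Hz2 = H0**2 * (Om * (1.0 + z)**3 + OL)
    return 3.0 * Hz2 / (8.0 * np.pi * G)

def r200c(m200c, z):
    """Radius (Mpc) enclosing a mean density of 200 rho_c(z)."""
    return (3.0 * m200c / (800.0 * np.pi * rho_crit(z)))**(1.0 / 3.0)

r = r200c(1e15, 0.5)         # a hypothetical 10^15 Msun cluster at z = 0.5
```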
\section{Data and Analysis}
\label{sec:data and analysis}
\subsection{Cluster catalogue}
\label{sec:cluster catalog}
The CODEX sample was initially selected by a $4\sigma$ photon excess inside a wavelet aperture in the overlap of the ROSAT All-Sky Survey (RASS, \citealt{voges99}) with the Sloan Digital Sky Survey (SDSS).
We use RASS photon images and search for X-ray sources on scales from 1.5 arcmin to 6 arcmin using wavelets. Any source detected is considered as a cluster candidate and enters the redMaPPer code (see \citealt{rykoff14} and \autoref{sec:redmapper}), which associates an optical counterpart for each source and reports its richness and redshift. For this sample, we consider a high threshold of richness 60 and redshifts above 0.35, which yields the sample of the most massive X-ray selected high-$z$ clusters, for which we seek to perform a weak lensing calibration. While other X-ray source catalogues using RASS data exist (e.g. \citealt{boller16}), the advantage of our approach consists of performing detailed modelling of the cluster selection function using our detection pipeline, which takes into account the RASS sensitivity as a function of sky position, Galactic absorption, and cluster detectability as a function of mass and redshift. Availability of such a selection function enables precise modelling of the cluster appearance in the catalog, critically important for the Bayesian modelling of the scaling relations.
At the positions of these overdensities, the redMaPPer algorithm is run to extract estimates of photometric redshift, richness, and a refined position and ROSAT X-ray flux estimate. For more details on the catalog construction, see \citet{clerc16}, \citet{cibirka16} and \citet{finoguenov2019codex}.
The initial sample of 407 clusters is selected by the richness $\lambda_{\rm RM,SDSS}$ and redshift $z_{\rm{RM,SDSS}}$ estimated from the redMaPPer run on SDSS photometric catalogues, cut at $\lambda_{\rm RM,SDSS} \ge 60$ and $0.35 < z_{\rm{RM,SDSS}} < 0.65$. A subsample of the initial sample was chosen as a weak lensing follow-up with CFHT (Canada-France-Hawaii Telescope) designed to calibrate the richness--mass relation for this survey. This deeper CFHT survey of 36 clusters, which we call S-I, falls into the CFHT Legacy Survey\footnote{http://www.cfht.hawaii.edu/Science/CFHTLS/} (CFHTLS) footprint, and is selected only by observability. To have an optically clean sample without missing data in CFHT richness or weak lensing mass, we exclude a total of 11 clusters, and define the remaining 25-cluster sample as our main lensing sample. The main lensing sample of 25 clusters is listed in Table \ref{tab:cleanedWL}. The excluded clusters of S-I are described in section \ref{sec:application}, and listed in the Appendix Table \ref{tab:primaryWL}.
Since weak lensing analysis requires precise knowledge of the cluster redshift,
for 20 clusters without spectroscopic redshifts in S-I, we targeted red-sequence member galaxies for spectroscopy. The clusters observed as part of the CFHT program are targeted by several Nordic Optical Telescope (NOT) programs (PI A. Finoguenov, 48-025, 52-026, 53-020, 51-034). Each cluster is observed in multi-object spectroscopy mode, targeting $\sim$20 member galaxies including Brightest Cluster Galaxies (BCGs) and having spectral resolving power of $\sim$500. The typical exposure per mask is 2700 s with a grism that provides wavelength coverage between approximately 400 - 700 nm. The average seeing over the four programmes is near 1 arcsec. Because we are solely interested in the redshift of the Ca H+K lines, only wavelength calibration frames are additionally obtained. Standard IRAF packages are used in the data reduction, spectra extraction and wavelength calibration process. The redshifts are finally determined using RVIDLINES to measure the positions of the two calcium lines for a weighted average fit. The acquired spectroscopic cluster redshifts for the weak lensing sample are listed in Table \ref{tab:cleanedWL}, along with X-ray observables, richness estimates, and available photometric data.
\subsection{Imaging data and data reduction}
\label{sec:imaging data}
This study comprises imaging data covering 34 pointings centred on CODEX clusters observed with the wide field optical camera MegaCam \citep{boulade03} at the CFHT.
For 28 of these pointings full colour information of filters $u$.MP9301, $g$.MP9401, $r$.MP9601, $i$.MP9702, $z$.MP9801 is available. All considered pointings possess $i$-band information. A summary of the imaging data of S-I can be seen in Appendix Table \ref{tab:primary images}.
\\
A detailed description of the data reduction can be found in \citet{cibirka16}. We only give a brief overview here.
\\
We process the CODEX data using the algorithms and processing pipelines (\texttt{THELI} ) developed within the CFHTLS-Archive Research Survey (CARS, see \citealt{erben09,erben05}; \citealt{schirmer13}) and CFHT Lensing Survey\footnote{http://cfhtlens.org}, (CFHTLenS, see \citealt{erben13,heymans12}).
\\
Starting point is the CODEX data, preprocessed with \texttt{Elixir}, available at the Canadian Astronomical Data Centre\footnote{http://www4.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/cadc} (CADC). The \texttt{Elixir}\space preprocessing removes the entire instrumental signature from the raw data and provides all necessary photometric calibration information.
\\
The final data reduction comprises deselection of damaged raw images or images of low quality, astrometric and relative photometric calibration using \texttt{scamp}\footnote{http://www.astromatic.net/software/scamp} \citep{bertin02}, coaddition of the final reduced single frames with \texttt{swarp}\footnote{http://www.astromatic.net/software/swarp} and creation of image masks by running the \texttt{automask} tool\footnote{http://marvinweb.astro.uni-bonn.de/data\_products/THELIWWW/automask.html} \citep{dietrich07} to indicate photometrically defective areas (satellite and asteroid tracks, bright, saturated stars and areas which would influence the analysis of faint background sources).
\subsection{Photometric catalogue creation}
\label{sec:photometry}
The photometric redshift calibration, photometric catalogue creation and the photometric redshift estimation are presented in \citet{brimioulle13}. We only give a brief overview here.
The estimation of meaningful colours from aperture fluxes requires same or at least similar shape of the point spread function (PSF) in the different filters of one pointing. Therefore in the first step we adjust the PSF by convolving all images of one pointing/filter with a fixed Gaussian kernel, degrading the PSF to the value of the worst band (in general $u$). We select the appropriate kernel in an iterative process, so the observational stellar colours no longer depend on the diameter of the circular aperture they are measured in. We then run \texttt{SExtractor}\footnote{http://www.astromatic.net/software/sextractor} (see \citealt{bertin96}) in dual-image-mode, selecting the unconvolved $i$-band as detection band and extracting the photometric information from the convolved images. We extract all objects which are at least 2$\sigma$ above the background on at least four contiguous pixels.
Unfortunately the original magnitude zeropoint determination by the \texttt{Elixir}\space pipeline proved to be inaccurate. The colours of stars and galaxies can vary from field to field due to galactic extinction and because of remaining zero-point calibration errors. Since the CFHTLS-Wide fields are selected to be off the galactic plane, the extinction is rather small and does not change a lot over one square degree tiles: the minimum and maximum extinction in all Wide fields are 0.03 and 0.14, respectively, and the difference between maximum and minimum extinction value per square degree can be up to 0.03 for high extinction fields and 0.01 for fields with low extinction values. We account for one zero-point and extinction correction value per square degree field by shifting the observed stellar colours to those predicted from the Pickles stellar library \citep{pickles98} for the given photometric system. In this way we not only correct for the inaccurate magnitude zeropoints, but also for galactic extinction and field-to-field zeropoint variations.
\subsection{redMaPPer}
\label{sec:redmapper}
redMaPPer \citep{rykoff14} is a red-sequence photometric cluster finding procedure that builds an empirical model for the red-sequence colour-magnitude relation of early type cluster galaxies. It is built around the optimized richness estimator developed in \cite{rozo09} and \cite{rykoff2012robust}. redMaPPer detects clusters as overdensities of red galaxies, and measures the probability that each red galaxy is a member of a cluster according to a matched filter approach that models the galaxy distribution as the sum of a cluster and background component. The main design criterion for redMaPPer is to provide a galaxy count based mass proxy with as little intrinsic scatter as possible. To this end, member galaxies are selected at luminosities $L>0.2L_{\star}$, based on their match to the red-sequence model, and with an optimal spatial filter scale \citep[see][]{rykoff16}.
The redMaPPer richness of clusters is the sum of the membership probabilities of all galaxies. The aperture used as a cluster radius to estimate the cluster richness is self-consistently computed with the cluster richness, ensuring that richer clusters have larger cluster radii. This radius is selected to minimize the scatter of richness estimates at a given mass. The cluster richness estimated by redMaPPer has been shown to be strongly correlated with cluster mass by comparing the richness to well-known mass proxies such as X-ray gas mass and Sunyaev–Zel'dovich (SZ) decrements. The main (v5.2) redMaPPer algorithm was presented in \cite{rykoff14}, to which the reader is referred for more details.
Especially at higher cluster redshift, the shallow SDSS photometry only allows for a relatively uncertain estimate of richness due to incompleteness at a magnitude corresponding to galaxies fainter than the redMaPPer limit of 0.2$L_{\rm \star}$.
The acquired follow-up CFHT photometry is significantly deeper, and therefore allows for improved estimates of $\lambda$ for the observed CODEX lensing sample. This, however, requires an independent calibration of the red sequence in the used set of filters $g$.MP9401, $r$.MP9601, $i$.MP9702 and $z$.MP9801.
In section \ref{sec:application}, we calibrate the richness--mass relation based on these improved CFHT richness estimates, and use the observed SDSS richnesses only to determine the shape of the sampling function, as described in \autoref{sec:subsample_selection_function}.
Due to incomplete observations in $g$ and $z$ for some of the clusters in our sample, we perform this in three separate variants, namely based on $griz$, $gri$ and $riz$ photometry. In the case of CODEX35646, where no $i$.MP9702 band data is available, we generate artificial magnitudes by adding the $i$.MP9702-colour of a red galaxy template at the cluster redshift to the available $i$.MP9701-magnitude of all galaxies in the field.
For calibrating the red sequence, we use the spectroscopic cluster redshifts (see Table \ref{tab:primaryWL} and Table \ref{tab:subsamples}), where available. To account for masking when correcting galaxy counts for undetected members, we convert the polygon masks applied to the CFHT object catalogues to a \textsc{healpix} mask \citep{gorski05} with $N_{\rm side}=4096$.
Using the spectroscopic redshifts obtained for this sample we can verify the redMaPPer redshift determination. \mbox{Fig. \ref{fig:zspeczSDSS}} shows the spectroscopic redshift of the cluster BCGs versus the redMaPPer photometric redshift estimate $z_{\rm{RM}}$. Through this comparison the photometric redshift precision for the SDSS and CFHT samples is found to be $\sigma_{\Delta_{z_{\mathrm{RM,SDSS}}}/(1+z_{\mathrm{spec}})}=0.008$ and $\sigma_{\Delta_{z_{\mathrm{RM,CFHT}}}/(1+z_{\mathrm{spec}})}=0.003$, respectively.
For comparison, the redMaPPer photometric redshift precision of the SDSS-DR8 catalogue is $\sigma_{\Delta_{z_{\mathrm{SDSS,DR8}}}/(1+z_{\mathrm{spec}})}=0.006$, as estimated by \cite{rykoff14}.
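The quoted precisions are the scatter of the normalized residual $\Delta z/(1+z_{\rm spec})$; with hypothetical redshift pairs (the values below are made up for illustration only) the metric is computed as:

```python
import numpy as np

# Illustrative only: the redshift pairs below are invented; the paper's
# numbers come from the actual spectroscopic and redMaPPer estimates.
z_spec = np.array([0.38, 0.45, 0.52, 0.61])
z_phot = np.array([0.383, 0.447, 0.524, 0.608])

sigma = np.std((z_phot - z_spec) / (1.0 + z_spec))
```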
\begin{figure}
\includegraphics[width=8.45cm]{plots/zCFHT_SDSS.pdf}
\caption{Spectroscopic redshifts versus CFHT/SDSS photometric cluster redshift estimates by redMaPPer for all spectroscopically covered clusters. Through a comparison with the spectroscopic redshifts of clusters, we measure photometric redshift precision of $\sigma_{\Delta_{z_{\mathrm{RM,SDSS}}}/(1+z_{\mathrm{spec}})}=0.008$ and $\sigma_{\Delta_{z_{\mathrm{RM,CFHT}}}/(1+z_{\mathrm{spec}})}=0.003$. }
\label{fig:zspeczSDSS}
\end{figure}
\subsection{Shape measurement}
\label{sec:shape measurement}
We use the \textsc{lensfit} algorithm (see \citealt{miller13}) to measure galaxy shapes. We chose the $i$-band images for shape extraction as this band usually has a smaller FWHM and lower atmospheric differential refraction than the bluer bands.
The extracted quantities are the measured ellipticity components $e_1$ and $e_2$ and the weight taking into account shape measurement errors and the expected intrinsic shape distribution as defined in \citet{miller13}. In order to sort out failed measurements and stellar contamination of our background sample we only consider background objects with a \emph{lensfit weight} greater than 0 and a \emph{lensfit fitclass} equal to 0.
For our sample S-I we make use of the latest `self-calibrating' version of the \emph{lensfit} shape measurement (see \citealt{fenechconti17}). Here we only highlight a few important facts about the self-calibration; for a detailed description we refer the reader to its first application in the Kilo-Degree Survey (KiDS, see \citealt{fenechconti17,hildebrandt17}). The main motivation for the self-calibration is given by the noise bias problem plaguing shape measurement techniques (see e.g. \citealt{ melchior12,refregier12,miller13, fenechconti17, Kannawadi_2019}). However, the self-calibration is not perfect as it is shown to leave a residual calibration bias of the order of 2 per cent. \citet{fenechconti17} discussed how to further reduce this with the help of image simulations to the sub-per cent level required for cosmic shear studies as presented by \citet{hildebrandt17}, but given the residual statistical uncertainties in our cluster lensing studies, we discard this step and use the self-calibrated shapes directly.
We estimate the uncertainty associated with this step to be around 3--5 per cent of the actual shear value.
\subsection{Source selection and redshift estimation}
\label{sec:photoz}
The observable in a weak lensing analysis is the mean tangential component of reduced gravitational shear $g_{\rm t}$ (see equation~\ref{eqn:meanshear}) of an ensemble of sources. At a given projected radius $r$ from the centre of the lens, it is related to the physical surface mass density profile of the latter, $\Sigma(r)$, by
\begin{equation}
g_{\rm t}(r)=\frac{\Delta\Sigma(r)/\Sigma_{\rm crit}}{1-\Sigma(r)/\Sigma_{\rm crit}} + \rm{Noise}\; ,
\label{eqn:reducedgt}
\end{equation}
where we have defined $\Delta\Sigma(r)=\langle\Sigma(r')\rangle_{r'<r}-\Sigma(r)$. In the limit where $\Sigma\ll\Sigma_{\rm crit}$, $g_t$ is equal to the tangential gravitational shear $\gamma_t$,
\begin{equation}
g_t(r)\approx\gamma_t(r)=\Delta\Sigma(r)/\Sigma_{\rm crit} \; .
\label{eqn:gammag}
\end{equation}
The critical surface mass density,
\begin{equation}
\Sigma_{\rm crit}=\frac{c^2}{4\pi G D_{\rm d}}\frac{D_{\rm s}}{D_{\rm ds}} \; ,
\label{eqn:sigmac}
\end{equation}
is a function of the angular diameter distances between the observer and lens $D_{\rm d}$, observer and source $D_{\rm s}$, and lens and source $D_{\rm ds}$. The ratio of the latter two is denoted in the following as the shorthand
\begin{equation}
\beta=\frac{D_{\rm ds}}{D_{\rm s}} \; .
\label{eqn:beta}
\end{equation}
This is the part of equation~\ref{eqn:sigmac} that depends on source redshifts, illustrating that the latter need to be known for converting lensing observables to physical surface densities.
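For illustration, $\beta$ and $\Sigma_{\rm crit}$ can be evaluated numerically from angular diameter distances. The following minimal Python sketch (not the pipeline code) assumes a flat $\Lambda$CDM background with the $\Omega_m=0.27$, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ values used elsewhere in this work; all function names are ours.

```python
import math

OM, H0 = 0.27, 70.0            # flat LCDM parameters assumed for illustration
C_KMS = 299792.458             # speed of light [km/s]
DH = C_KMS / H0                # Hubble distance [Mpc]

def comoving_distance(z, n=2000):
    """Line-of-sight comoving distance [Mpc] by trapezoidal integration."""
    if z <= 0.0:
        return 0.0
    dz = z / n
    f = lambda zz: 1.0 / math.sqrt(OM * (1.0 + zz) ** 3 + (1.0 - OM))
    s = 0.5 * (f(0.0) + f(z)) + sum(f(k * dz) for k in range(1, n))
    return DH * s * dz

def beta(z_d, z_s):
    """beta = D_ds / D_s of equation (beta); in a flat universe the
    (1 + z_s) factors of the angular diameter distances cancel."""
    if z_s <= z_d:
        return 0.0
    dc_d, dc_s = comoving_distance(z_d), comoving_distance(z_s)
    return (dc_s - dc_d) / dc_s

def sigma_crit(z_d, z_s):
    """Critical surface mass density [kg/m^2] of equation (sigmac)."""
    C_M, G, MPC = 2.99792458e8, 6.674e-11, 3.0857e22
    d_d = comoving_distance(z_d) / (1.0 + z_d) * MPC   # D_d in metres
    return C_M ** 2 / (4.0 * math.pi * G * d_d) / beta(z_d, z_s)
```

For a lens at $z_{\rm d}=0.5$ this gives $\Sigma_{\rm crit}$ of a few kg m$^{-2}$, i.e. a few thousand $\mathrm{M_{\odot}}$ pc$^{-2}$, the typical scale for cluster lensing.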
Based on five-band photometry, redshifts of individual galaxies cannot be estimated unambiguously. However, since the lensing signal of each cluster is measured as the mean $\langle g_t\rangle$ over a large number of galaxies, for an unbiased interpretation of the signal it is sufficient to know the overall redshift \emph{distribution} of the lensing-weighted source sample. Here, we do this by defining subregions of the CFHT colour-magnitude space with a decision tree algorithm. Each source galaxy can then be assigned to one of these subregions. A reference sample of galaxies with measurements in the same and additional photometric bands can be assigned to the same subregions. The redshift distribution of galaxies in each subregion can be estimated as the histogram of the high-quality photometric redshifts for the reference sample of galaxies assigned to the same subregion. The redshift distribution of the whole sample is a linear combination of the redshift distributions of the contributing subregions.
To this end, we follow the same algorithm as in \citet{cibirka16}, described in more detail in \citet{gruen16}. In a nutshell, we divide five-band colour-magnitude space into boxes (hyper-rectangle subregions) and estimate the redshift distribution in each box from a reference catalogue of 9-band optical+near-infrared photo-$z$.
The reference catalogue of high-quality photo-$z$ is based on a magnitude-limited galaxy sample with 9-band ($u$.MP9301, $g$.MP9401, $r$.MP9601, $i$.MP9701, $i$.MP9702, $z$.MP9801, $J$.WC8101, $H$.WC8201, $Ks$.WC8302) photometry from the four pointings of the CFHTLS Deep and WIRCam Deep \citep{bielby12} Surveys. The outlier rate of these redshift estimates is $\eta=2.1$ per cent, with a photo-$z$ scatter $\sigma_{\Delta z/(1+z)}=0.026$ for $i<24$ (see \citealt{gruen16}, their Fig. 4). We emphasize that the photometric catalogues in this work and the reference catalogue in \citet{gruen16} have been created in exactly the same way. In order to reduce contamination and enhance the signal-to-noise ratio, we apply several cuts during the construction of the colour-magnitude decision tree, as in \citet{cibirka16}. This way, we remove parts of colour-magnitude space in which contamination with galaxies at the cluster redshift is possible. In addition, we identify and remove parts of colour-magnitude space in which our 9-band photometric redshifts disagree with the COSMOS2015 photo-$z$ of \citet{laigle16}. We also use the latter catalogue to identify systematic uncertainties due to potential remaining biases in the high-quality photo-$z$ (see Appendix~\ref{sec:bias_z_distribution}).
To perform the cuts described above, \emph{before} construction of the decision tree we remove all galaxies from cluster and reference fields whose colour is in the range spanned by galaxies in the reference catalogue best fitted by a red galaxy template in the redshift interval $z_{\rm{d}} \pm 0.04$.
\emph{After} construction of the decision tree we remove
\begin{itemize}
\item{all galaxies in colour-magnitude hyper-rectangles for which $\langle \beta \rangle$ from COSMOS2015 photometric redshifts is below 0.2.}
\item{all galaxies in colour-magnitude hyper-rectangles populated with any galaxies in the reference catalogue whose redshift estimate is within $z_{\rm{d}} \pm 0.06$. In particular, we remove all galaxies with a \emph{cprob} estimate different from 0, to prevent contamination of the source sample with cluster members. We estimate that the resulting estimate might still be biased at a level of up to 2 per cent.}
\item{all galaxies in colour-magnitude hyper-rectangles where the ratio of $\langle \beta \rangle$-estimates from COSMOS2015 versus our 9-band photometric redshifts deviates by more than 10 per cent from the median ratio over all hyper-rectangles.}
\end{itemize}
The final estimate of the redshift distribution of a colour-magnitude box comes from the 9-band photometric redshifts. We estimate the $\beta$ of a source galaxy as the mean $\beta$ of reference galaxies in the box it falls into. We refer the reader to Appendix \ref{sec:bias_z_distribution} for details on systematic errors in the redshift calibration.
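The box-based calibration described above can be sketched as follows; the fixed box widths, the \texttt{box\_key} discretization and all names are illustrative assumptions, not the actual decision-tree implementation.

```python
# Hypothetical sketch: sources are assigned to colour-magnitude boxes, and
# each box inherits the mean beta of the reference galaxies (those with
# high-quality photo-z) that fall into the same box.
def box_key(colour, mag, dc=0.25, dm=0.5):
    """Map a (colour, magnitude) pair to a hyper-rectangle index."""
    return (int(colour // dc), int(mag // dm))

def calibrate_beta(reference, sources):
    """reference: list of (colour, mag, beta); sources: list of (colour, mag).
    Returns the estimated beta per source (None if its box is empty)."""
    boxes = {}
    for colour, mag, b in reference:
        boxes.setdefault(box_key(colour, mag), []).append(b)
    mean = {k: sum(v) / len(v) for k, v in boxes.items()}
    return [mean.get(box_key(c, m)) for c, m in sources]
```

Sources landing in boxes with no reference galaxies (returned as \texttt{None} here) correspond to the regions of colour-magnitude space that are cut from the analysis.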
\subsection{Tangential shear and \texorpdfstring{$\Delta\Sigma$}{DeltaSigma} profile}
For a cluster $C$ and any radial bin $R$, we use the weighted mean of tangential ellipticities measured for a set of source galaxies $i$,
\begin{equation}
g_{\rm t}(C,R)=\sum_{i}w_i\epsilon_{\mathrm{t},i} \; ,
\label{eqn:meanshear}
\end{equation}
where $\epsilon_{\mathrm{t},i}$ is the component of the measured shape of galaxy $i$ tangential to the cluster centre and the sum runs over all sources around $C$ in a radial bin $R$, with weights $w_i$ that are normalized to $1=\sum_i{w_i}$.
Equivalently, in the limit of equation~\ref{eqn:gammag}, we can estimate
\begin{equation}
\Delta\Sigma(C, R)=\sum_i W_i \Delta\Sigma_i = \sum_i W_i \epsilon_{\mathrm{t},i}/\langle \Sigma_{\mathrm{crit},i}^{-1}\rangle \; ,
\label{eqn:meandeltasigma}
\end{equation}
with a different set of weights $W_i$, again with $1=\sum_i{W_i}$. The expectation value of $\Sigma_{\rm crit}^{-1}$ is calculated from equation~\ref{eqn:sigmac} with the value of $\beta$ estimated in \autoref{sec:photoz}.
The statistically optimally weighted mean (i.e., the one with the highest signal-to-noise ratio) is achieved by using weights equivalent to the $\Delta\Sigma$ estimator of \citet{sheldon04}, namely
\begin{equation}
w_i\propto\frac{\beta_i}{\sigma^2_{\rm intr}+\sigma^2_{\rm obs}} \;,
\label{eqn:sourceweight}
\end{equation}
\begin{equation}
W_i\propto\frac{\langle \Sigma_{\mathrm {crit},i}^{-1}\rangle^2}{\sigma^2_{\rm intr}+\sigma^2_{\rm obs}}\propto\frac{\beta_i^2}{\sigma^2_{\rm intr}+\sigma^2_{\rm obs}}\;,
\label{eqn:dssourceweight}
\end{equation}
where $\beta_i$ is the estimate of a galaxy's $\beta$ as described above, $\sigma^2_{\rm intr}$ is the intrinsic variance of an individual component of galaxy ellipticity, and $\sigma^2_{\rm obs}$ is the variance in an individual component of galaxy shape due to observational uncertainty, both variances obtained from \emph{lensfit}.
Equation~\ref{eqn:meanshear} with these weights $w$ yields what we will call, in the following, mean tangential shear, and equation~\ref{eqn:meandeltasigma} with $W$ what we will call mean $\Delta\Sigma$. The above definitions and normalization conditions lead to the relation
\begin{equation}
\Delta\Sigma(C,R)=g_t(C,R)/\langle\Sigma_{\rm crit}^{-1}\rangle \; ,
\end{equation}
where
\begin{equation}
\langle\Sigma_{\rm crit}^{-1}\rangle = \sum_i w_i \Sigma_{\mathrm {crit},i}^{-1} \; .
\end{equation}
Mean shear and mean $\Delta\Sigma$ are therefore identical, up to normalization by the $w$-weighted mean of $\Sigma_{\mathrm {crit},i}^{-1}$.
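A minimal sketch of equations~\ref{eqn:meanshear} and \ref{eqn:meandeltasigma} with the weights of equations~\ref{eqn:sourceweight} and \ref{eqn:dssourceweight} (per-source inputs; names are ours):

```python
def weighted_means(e_t, beta, sigma2_intr, sigma2_obs, inv_sigma_crit):
    """Per-source lists; returns (g_t, DeltaSigma) of equations (meanshear)
    and (meandeltasigma), with the weights of (sourceweight) and
    (dssourceweight)."""
    w = [b / (si + so) for b, si, so in zip(beta, sigma2_intr, sigma2_obs)]
    W = [b * b / (si + so) for b, si, so in zip(beta, sigma2_intr, sigma2_obs)]
    sw, sW = sum(w), sum(W)
    w = [x / sw for x in w]                    # normalize: sum(w) = 1
    W = [x / sW for x in W]
    g_t = sum(wi * ei for wi, ei in zip(w, e_t))
    dsig = sum(Wi * ei / ic for Wi, ei, ic in zip(W, e_t, inv_sigma_crit))
    return g_t, dsig
```

For a fixed lens, $\Sigma_{\mathrm{crit},i}^{-1}\propto\beta_i$, so the two estimators agree up to normalization by the $w$-weighted mean of $\Sigma_{\mathrm{crit},i}^{-1}$, as stated above.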
We do not show individual shear profiles, as they are rather noisy; stacked profiles of the cluster sample used in this work can be found in \citet{cibirka16}.
\subsection{Surface density model}
\label{sec:model}
Interpreting the weak lensing signal in order to derive a mass estimate for a galaxy cluster requires modelling the surface density profile $\Sigma$, which is related to the tangential reduced gravitational shear $g_t$ (equation~\ref{eqn:reducedgt}) through the critical surface mass density (equation~\ref{eqn:sigmac}).
In our analysis we assume the galaxy cluster mass profile to follow a universal density profile, also known as NFW profile (see \citealt{navarro96,navarro97}), which is described by
\begin{equation}
\rho(r)=\frac{\delta_{\rm c} \rho_{\rm c}(z) }{(r/r_{\rm s})(1 + r/r_{\rm s})^2} ,
\label{eqn:NFW}
\end{equation}
where $\rho_{\rm c} = \frac{3 H(z)^2}{8 \pi G}$ represents the critical density of the Universe at redshift $z$, $r_{\rm s}$ is the scale radius at which the logarithmic profile slope changes from $-1$ to $-3$, and $\delta_{\rm c}$ describes the characteristic over-density of the halo
\begin{equation}
\delta_{\rm c} = \frac{200}{3} \frac{c^3}{\ln(1+c) - c/(1+c)}\ .
\label{eqn:deltac}
\end{equation}
The characteristic over-density $\delta_c$ itself is a function of the so-called concentration parameter $c=r_{200}/r_s$.
For the explicit parametrizations of the NFW shear components $\gamma_{\rm{t}}$ and $g_{\rm{t}}$ and the density contrast $\Delta\Sigma$ we refer to equations 11-16 of \cite{wright00}. Note that the measured mean $\Delta\Sigma$ of equation~\ref{eqn:meandeltasigma} is equal to the true $\Delta\Sigma$ only in the weak shear limit, $\kappa \ll 1$, where $\kappa \equiv \Sigma(r)/\Sigma_{\mathrm{crit}}$ denotes the convergence, i.e. the dimensionless surface mass density (cf. equation~\ref{eqn:reducedgt}). To compensate for the effect of reduced shear, we boost our model by $(1-\kappa)^{-1}$ when comparing it to the data.
In order to evaluate the weak lensing signal, we calculate the average of $\Delta \Sigma$ in logarithmically equidistantly binned annuli, both for the observational data and the analytic NFW profile that we use as a model. The radial range around the gravitational lens has to be chosen to minimize systematic effects but maximize our statistical power. Removing too much information on small scales results in loss of the region with the highest S/N. However, it is those small scales which are affected the most by off-centring. This subject will be investigated in further detail in Appendix~\ref{sec:profilecalibration} by examining simulated galaxy cluster halo profiles. As a trade-off, we decide to discard all background sources closer to the cluster centre than 500 $h^{-1}$ kpc, reducing a possible mass bias from off-centring to a minimum.
On large scales, two effects come into play. Firstly, the integrated NFW mass diverges with radius, i.e. at some point the integrated analytic mass exceeds the physical cluster mass, biasing the mass estimate low. Secondly, large scales start to be affected by higher-order contributions such as the 2-halo term, which enhance the observed mass profile and at least partially counteract the first effect. However, since these effects are not trivial to model, the safer option in our case is to discard the regions where they become important, selecting an outer analysis radius of 2500 $h^{-1}$ kpc. In a nutshell, we logarithmically bin our data in 12 radial annuli between 500 and 2500 $h^{-1}$ kpc. Remaining biases from off-centring, large-scale effects, and other differences between our assumed NFW profile and the actual profiles of galaxy clusters are determined, as mentioned before, by calibration on masses recovered from simulated cluster halo profiles from \cite{becker11} in Appendix~\ref{sec:profilecalibration}, and are taken into account.
Given this choice of scales, we fit mass only, fixing the concentration parameter by the concentration-mass relation of \citet{dutton14} to
\begin{equation}
\log_{10} c = a + b\, \log_{10}\left(M/[10^{12}\,h^{-1}\,\mathrm{M_{\odot}}]\right) ,
\end{equation}
with
\begin{align*}
a &= 0.520 + (0.905 - 0.520)\, \exp\left(-0.617\, z^{1.21}\right)\,, \\
b &= -0.101 + 0.026\, z \; .
\end{align*}
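The concentration-mass relation above and the characteristic over-density of equation~\ref{eqn:deltac} can be sketched as follows (masses in units of $10^{12}\,h^{-1}\,\mathrm{M_{\odot}}$; hypothetical helper functions, not the fitting code):

```python
import math

def concentration(m200, z):
    """Dutton & Maccio-style c(M, z): m200 in units of 1e12 h^-1 Msun."""
    a = 0.520 + (0.905 - 0.520) * math.exp(-0.617 * z ** 1.21)
    b = -0.101 + 0.026 * z
    return 10.0 ** (a + b * math.log10(m200))

def delta_c(c):
    """Characteristic over-density of equation (deltac)."""
    return (200.0 / 3.0) * c ** 3 / (math.log(1.0 + c) - c / (1.0 + c))
```

Since $b<0$ at these redshifts, more massive haloes are less concentrated, as expected for this family of relations.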
\subsection{Covariance matrix}
The measured profile $\Delta\Sigma_{\rm obs}$ of any cluster of true mass $M$ deviates from the mean profile $\Delta\Sigma(M)$ of clusters of the same mass and redshift. In some annulus $i$, we can write
\begin{equation}
\Delta\Sigma_{\mathrm{obs},i}=\Delta\Sigma_i(M) + \delta_i \; .
\end{equation}
The covariance matrix element $C_{ij}$ required when determining a likelihood of $\Delta\Sigma_{\rm obs}$ as a function of mass is the expectation value
\begin{equation}
C_{ij}=\langle \delta_i \, \delta_j\rangle \; ,
\end{equation}
which contains several components:
\begin{enumerate}
\item \emph{shape noise}, i.e.~the scatter in measured mean shear due to intrinsic shapes and measurement uncertainty of shapes of background galaxies,
\item \emph{uncorrelated large-scale structure}, i.e.~statistical fluctuations of the matter density along the line of sight to the cluster, influencing the light path from the ensemble of background galaxies to the observer,
\item \emph{intrinsic variations of cluster profiles} that would be present even under idealized conditions of infinite background source density and perfectly homogeneous lines of sight.
\end{enumerate}
All these components can be described as independent contributions to the covariance matrix, i.e.
\begin{equation}
C_{ij}(M)=\langle \delta_i \, \delta_j\rangle=C_{ij}^{\rm shape}+C_{ij}^{\rm LSS}+C_{ij}^{\rm intr}(M) \; .
\label{eqn:covcomponents}
\end{equation}
We have made the dependence of the intrinsic variations of the cluster profile on mass $M$ explicit.
The following sections describe these terms in turn. Since the overlap of annuli of pairs of different clusters in our sample is minimal, we assume that there is no cross-correlation of shears measured around different clusters.
\subsubsection{Shape noise}
The \emph{lensfit} algorithm provides the sum of the intrinsic and measurement-related variance of the ellipticity of each source $k$, $\sigma_{g,k}^2=\sigma_{\rm intr}^2+\sigma_{\rm obs}^2$.
Using this to obtain the shape-noise-related variance in $\Delta\Sigma_k$,
\begin{equation}
\sigma^2_{\Delta\Sigma, k}=\left(\frac{\sigma_{g,k}}{\langle\Sigma_{\mathrm{crit},k}^{-1}\rangle}\right)^2\propto W_k^{-1} \; ,
\end{equation}
the mean $\Delta\Sigma$ with the weights $W_k$ of equation~\ref{eqn:dssourceweight} has, in annulus $i$, the variance
\begin{equation}
C^{\rm shape}_{ii}=\frac{1}{\sum_k \sigma^{-2}_{\Delta\Sigma, k}} \propto \frac{1}{\sum_k W_k} \; ,
\end{equation}
where the sum runs over the sources $k$ in annulus $i$.
Due to the negligible correlation of shape noise between different galaxies, off-diagonal components are set to zero.
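A sketch of this shape-noise term, combining per-source variances into the diagonal elements (the data layout is a hypothetical simplification):

```python
def shape_noise_diag(annuli):
    """annuli: per-annulus lists of (sigma_g2, inv_sigma_crit) per source.
    Returns the diagonal elements C^shape_ii, i.e. the inverse of the summed
    inverse per-source DeltaSigma variances."""
    diag = []
    for sources in annuli:
        inv_var = sum(ic ** 2 / s2 for s2, ic in sources)
        diag.append(1.0 / inv_var)
    return diag
```

As expected, the variance of an annulus shrinks as its number of background sources grows.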
\subsubsection{Uncorrelated large-scale structure}
Random structures along the line of sight towards the source galaxies used for measuring the cluster shear profiles cause an additional shear signal of their own. The latter is zero on average, but has a variance (and co-variance between different annuli) that is an integral over the convergence power spectrum and therefore depends both on the matter power spectrum and the weighted distribution of source redshifts.
We analytically account for this contribution to the covariance matrix as \citep[e.g.][]{schneider98,hoekstra03,umetsu11,gruen15}
\begin{equation}
C^{\rm LSS}_{ij}=\int \frac{\ell \mathrm{d}\ell}{2\pi} P_{\kappa}(\ell) \hat{J}_0(\ell\theta_i) \hat{J}_0(\ell\theta_j) \; .
\label{eqn:integral}
\end{equation}
Here, $\hat{J}_0(\ell\theta_i)$ is the area-weighted average of the Bessel function of the first kind $J_0$ over annulus $i$. The convergence power spectrum $P_{\kappa}$ is obtained from the matter power spectrum by the \citet{limber54} approximation as
\begin{eqnarray}
P_{\kappa}(\ell)=\frac{9H_0^2\Omega_m^2}{4c^2}&\int_{0}^{\chi_{\rm max}}\mathrm{d}\chi a^{-2}(\chi)P_{\rm nl}(\ell/\chi,\chi)\nonumber \\ &\int_{\chi}^{\chi_{\rm max}}\mathrm{d}\chi_s p(\chi_s)\left(\frac{\chi_s-\chi}{\chi_s}\right)^2 \; .
\end{eqnarray}
Here $\chi$ denotes comoving distance to a given redshift, and $p(\chi_s)$ is the PDF of comoving distance to sources in the lensing sample, defined as the sum of each individual source PDF (\autoref{sec:photoz}), weighted by the $w$ of equation~\ref{eqn:sourceweight}. For the non-linear matter power spectrum $P_{\rm nl}$ we use the model of \citet{smith03} with the \citet{eisenstein98} transfer function including baryonic effects. Note that since the source sample, weighting, and angular size of annuli are different for each cluster, we calculate a different $C^{\rm LSS}$ for each one of them.
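Equation~\ref{eqn:integral} can be approximated numerically. The sketch below uses the integral representation of $J_0$, midpoint averaging over annuli, and a toy power-law convergence spectrum standing in for the Limber-projected $P_{\rm nl}$; all names and numbers are illustrative.

```python
import math

def j0(x, n=120):
    """Bessel J0 via its integral representation (adequate for a sketch)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    s += sum(math.cos(x * math.sin(k * h)) for k in range(1, n))
    return s * h / math.pi

def j0_hat(ell, th1, th2, n=32):
    """Area-weighted average of J0(ell * theta) over the annulus [th1, th2]."""
    h = (th2 - th1) / n
    num = sum(j0(ell * (th1 + (k + 0.5) * h)) * (th1 + (k + 0.5) * h)
              for k in range(n)) * h
    return 2.0 * num / (th2 ** 2 - th1 ** 2)

def c_lss(p_kappa, bin_i, bin_j, ells):
    """Trapezoidal evaluation of one element of the C^LSS integral."""
    f = [ell / (2.0 * math.pi) * p_kappa(ell)
         * j0_hat(ell, *bin_i) * j0_hat(ell, *bin_j) for ell in ells]
    return sum(0.5 * (f[k] + f[k + 1]) * (ells[k + 1] - ells[k])
               for k in range(len(f) - 1))
```

By construction the diagonal elements are positive and the off-diagonal elements obey the Cauchy-Schwarz bound, as any valid covariance must.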
\subsubsection{Intrinsic variations of cluster profiles}
Even under perfect observing conditions without shape noise, and in the hypothetical case of a line of sight undisturbed by intervening inhomogeneities, the shear profiles of a sample of clusters of identical mass would still vary around their mean.
The reason for this lies in intrinsic variations of cluster profiles, halo ellipticity and orientation, and subhaloes in their interiors and correlated environments. We describe these variations using the semi-analytic model of \citet{gruen15}, which proposes templates for each of these components and determines their amplitudes to match the actual variations of true cluster profiles at fixed mass seen in simulations \citep{becker11}.
We write
\begin{equation}
C_{ij}^{\rm intr}(M) = C_{ij}^{\rm conc}(M) + C_{ij}^{\rm ell}(M) + C_{ij}^{\rm corr}(M) + C_{ij}^{\rm off}(M) \; ,
\end{equation}
where we assume the best-fit re-scaled model of \citet{gruen15} for the contributions from halo concentration variation $C_{ij}^{\rm conc}$, halo ellipticity and orientation $C_{ij}^{\rm ell}$ and correlated secondary haloes $C_{ij}^{\rm corr}$. For the purpose of this work, the templates in \citet{gruen15} are resampled from convergence to shear measurement and re-scaled to the $\Delta\Sigma$ units of our measurement with the weighted mean $\Sigma_{\rm crit}$ of the source sample.
The final component, $C_{ij}^{\rm off}$ is added to account for variations in off-centring width of haloes. It is calculated as the covariance of shear profiles of haloes of fixed mass, with miscentring offsets drawn according to the prescription of \citet{rykoff16}.
We note that each of these components depends on halo mass, halo redshift, and angular binning scheme. We therefore calculate a different $C_{ij}^{\rm intr}(M)$ for each cluster in our sample. The code producing these covariance matrices is available at \texttt{https://github.com/danielgruen/ccv}.
\iffalse
\begin{figure*}
\includegraphics[width=8.8cm]{plots/mean_lss.pdf}
\includegraphics[width=8.8cm]{plots/mean_intr.pdf}
\includegraphics[width=8.8cm]{plots/mean_full.pdf}
\includegraphics[width=8.8cm]{plots/mean_full_corr2.pdf}
\caption{Mean covariance for a galaxy cluster with $\rm{M_{\rm{200c}}}=5 \times 10^{14}\ h^{-1}\ \rm{M_{\odot}}$ at redshift z=0.5 induced by large scale structure only (upper left panel), intrinsic effects only (upper right panel) and the total covariance induced by large scale structure, intrinsic effects and shape noise (lower left panel). As can be seen the total covariance is strongly dominated by shape noise. The lower right panel shows the correlation matrix where the non-diagonal elements are almost negligible.}
\label{fig:covariance}
\end{figure*}
\fi
\begin{table*}
\caption{Main weak lensing sample ($\lambda_{\rm{RM,SDSS}}>60$ and $z \ge 0.35$) of 25 clusters}
\label{tab:cleanedWL}
\begin{adjustbox}{angle=90}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline \hline
CODEX & SPIDERS & R.A. & Dec & R.A. & Dec & Filters & z & $\rm{z_{RM}}$ & $\rm{\lambda_{RM}}$ & $\rm{z_{RM}}$ & $\rm{\lambda_{RM}}$ & $\rm{\log M_{200, WL}}$ & $\rm{L_X}$ \\
ID & ID & opt & opt & X-ray & X-ray & CFHT & spec & SDSS & SDSS & CFHT & CFHT & $\ \rm{M_{\odot}}$ & $\rm[h_{70}^{-2}\ 10^{44}erg/s]$\\
\hline \hline
16566 & 1\_2639 & 08:42:31 & 47:49:19 & 08:42:28 & 47:50:03 & ugriz & 0.382 & 0.368 & $108 \pm 7$ & 0.383 & $120 \pm 3$ & $ 14.61 _{ -0.29 } ^{ +0.20 } \pm 0.02 $ & $ 3.1 \pm 1.2 $ \\
24865 & 1\_5729 & 08:22:42 & 41:27:30 & 08:22:45 & 41:28:09 & ugriz & 0.486 & 0.477 & $138 \pm 23$ & 0.487 & $ 91 \pm 3$ & $ 14.91 _{ -0.27 } ^{ +0.19 } \pm 0.03 $ & $ 4.9 \pm 1.7 $ \\
24872 & 1\_5735 & 08:26:06 & 40:17:31 & 08:25:59 & 40:15:19 & ugriz & 0.402 & 0.391 & $149 \pm 10$ & 0.407 & $116 \pm 4$ & $ 14.76 _{ -0.35 } ^{ +0.23 } \pm 0.02 $ & $ 5.4 \pm 1.3 $ \\
24877 & 1\_5740 & 08:24:27 & 40:06:19 & 08:24:40 & 40:06:53 & ugriz & 0.592 & 0.539 & $ 63 \pm 59$ & 0.593 & $ 71 \pm 4$ & $ 15.28 _{ -0.24 } ^{ +0.18 } \pm 0.03 $ & $ 4.9 \pm 2.0 $ \\
24981 & 1\_5830 & 08:56:13 & 37:56:16 & 08:56:14 & 37:55:52 & ugriz & 0.411 & 0.411 & $123 \pm 12$ & 0.407 & $107 \pm 3$ & $ 14.68 _{ -0.34 } ^{ +0.23 } \pm 0.02 $ & $ 7.6 \pm 1.9 $ \\
25424 & 1\_6220 & 11:30:56 & 38:25:10 & 11:31:01 & 38:24:42 & ugriz & 0.509 & 0.513 & $ 65 \pm 17$ & 0.510 & $ 69 \pm 3$ & $ 14.51 _{ -0.35 } ^{ +0.23 } \pm 0.02 $ & $ 5.5 \pm 2.1 $ \\
25953 & 1\_6687 & 14:03:44 & 38:27:04 & 14:03:42 & 38:27:38 & ugriz & 0.478 & 0.484 & $131 \pm 19$ & 0.478 & $ 88 \pm 3$ & $ 14.70 _{ -0.29 } ^{ +0.20 } \pm 0.03 $ & $ 4.6 \pm 1.3 $ \\
27940 & 1\_7312 & 00:20:09 & 34:51:18 & 00:20:10 & 34:53:36 & ugriz & 0.449 & 0.46 & $116 \pm 24$ & 0.463 & $ 89 \pm 3$ & $ 14.82 _{ -0.31 } ^{ +0.21 } \pm 0.03 $ & $ 9.2 \pm 2.1 $ \\
27974 & 2\_6669 & 00:08:51 & 32:12:24 & 00:08:55 & 32:11:12 & ugriz & 0.475 & 0.491 & $100 \pm 25$ & 0.469 & $ 75 \pm 3$ & $ 14.74 _{ -0.26 } ^{ +0.19 } \pm 0.03 $ & $ 8.1 \pm 3.6 $ \\
29283 & 1\_7697 & 08:04:35 & 33:05:08 & 08:04:36 & 33:05:27 & ugriz & 0.549 & 0.536 & $129 \pm 30$ & 0.552 & $107 \pm 3$ & $ 15.02 _{ -0.30 } ^{ +0.21 } \pm 0.03 $ & $ 7.0 \pm 2.3 $ \\
29284 & 1\_7698 & 08:03:30 & 33:01:47 & 08:03:30 & 33:02:06 & ugriz & 0.550 & 0.557 & $122 \pm 33$ & 0.541 & $ 68 \pm 3$ & $ 14.50 _{ -0.61 } ^{ +0.31 } \pm 0.03 $ & $ 4.9 \pm 1.9 $ \\
35361 & 1\_11298 & 14:56:11 & 30:21:04 & 14:56:13 & 30:21:12 & ugriz & 0.414 & 0.411 & $103 \pm 9$ & 0.411 & $ 98 \pm 3$ & $ 14.76 _{ -0.23 } ^{ +0.18 } \pm 0.03 $ & $ 6.0 \pm 1.3 $ \\
35399 & 1\_11334 & 15:03:03 & 27:54:58 & 15:03:10 & 27:55:00 & ugriz & 0.516 & 0.534 & $153 \pm 31$ & 0.517 & $ 81 \pm 3$ & $ 14.87 _{ -0.26 } ^{ +0.19 } \pm 0.03 $ & $ 4.8 \pm 1.8 $ \\
41843 & 1\_14643 & 23:40:45 & 20:52:04 & 23:40:45 & 20:53:02 & ugriz & 0.434 & 0.435 & $119 \pm 13$ & 0.436 & $ 75 \pm 3$ & $ 14.52 _{ -0.48 } ^{ +0.28 } \pm 0.02 $ & $ 3.7 \pm 1.4 $ \\
41911 & 1\_14706 & 00:23:01 & 14:46:57 & 00:23:01 & 14:46:31 & ugriz & 0.386 & 0.372 & $104 \pm 7$ & 0.413 & $ 81 \pm 3$ & $ 14.84 _{ -0.25 } ^{ +0.19 } \pm 0.03 $ & $ 4.0 \pm 1.4 $ \\
43403 & 1\_15084 & 08:10:18 & 18:15:18 & 08:10:20 & 18:15:13 & ugriz & 0.422 & 0.418 & $130 \pm 10$ & 0.423 & $ 94 \pm 3$ & $ 14.94 _{ -0.23 } ^{ +0.17 } \pm 0.03 $ & $ 4.7 \pm 1.7 $ \\
46649 & 1\_17215 & 01:35:17 & 08:47:50 & 01:35:17 & 08:48:14 & ugriz & 0.619 & 0.536 & $ 85 \pm 31$ & 0.587 & $128 \pm 5$ & $ 15.13 _{ -0.23 } ^{ +0.18 } \pm 0.03 $ & $ 14.2 \pm 4.6 $ \\
47981 & 1\_17406 & 08:40:03 & 08:37:54 & 08:40:02 & 08:38:04 & ugriz & 0.543 & 0.551 & $136 \pm 33$ & 0.540 & $ 69 \pm 3$ & $ 14.83 _{ -0.53 } ^{ +0.30 } \pm 0.02 $ & $ 6.9 \pm 2.6 $ \\
50492 & 1\_18933 & 23:16:43 & 12:46:55 & 23:16:46 & 12:47:12 & ugriz & 0.527 & 0.524 & $163 \pm 30$ & 0.525 & $105 \pm 3$ & $ 15.24 _{ -0.22 } ^{ +0.17 } \pm 0.03 $ & $ 7.4 \pm 2.2 $ \\
50514 & 1\_18954 & 23:32:14 & 10:36:35 & 23:32:14 & 10:35:32 & ugriz & 0.466 & 0.463 & $ 82 \pm 13$ & 0.475 & $ 73 \pm 3$ & $ 14.50 _{ -0.53 } ^{ +0.29 } \pm 0.03 $ & $ 3.6 \pm 1.3 $ \\
52480 & 1\_19778 & 09:34:39 & 05:41:45 & 09:34:37 & 05:40:53 & ugriz & 0.565 & 0.546 & $106 \pm 54$ & 0.542 & $ 83 \pm 3$ & $ 15.09 _{ -0.29 } ^{ +0.21 } \pm 0.02 $ & $ 7.6 \pm 2.4 $ \\
54795 & 1\_21153 & 23:02:16 & 08:00:30 & 23:02:17 & 08:02:14 & ugriz & 0.428 & 0.429 & $125 \pm 35$ & 0.435 & $ 73 \pm 3$ & $ 14.57 _{ -0.46 } ^{ +0.27 } \pm 0.03 $ & $ 5.9 \pm 1.6 $ \\
55181 & 1\_21510 & 00:45:12 & -01:52:32 & 00:45:10 & -01:51:49 & ugriz & 0.547 & 0.542 & $149 \pm 43$ & 0.532 & $ 97 \pm 4$ & $ 14.67 _{ -0.38 } ^{ +0.24 } \pm 0.03 $ & $ 5.9 \pm 2.3 $ \\
59915 & 1\_23940 & 01:25:05 & -05:31:05 & 01:25:01 & -05:31:53 & ugriz & 0.475 & 0.489 & $143 \pm 25$ & 0.472 & $ 98 \pm 3$ & $ 15.17 _{ -0.18 } ^{ +0.15 } \pm 0.03 $ & $ 3.9 \pm 1.4 $ \\
64232 & 1\_24833 & 00:42:33 & -11:01:58 & 00:42:32 & -11:04:07 & ugriz & 0.529 & 0.529 & $112 \pm 37$ & 0.553 & $ 66 \pm 3$ & $ 14.53 _{ -0.72 } ^{ +0.34 } \pm 0.02 $ & $ 4.8 \pm 1.8 $ \\
\hline \hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsection{Mass likelihood}
\label{sec:likelihood}
The lensing likelihood for an individual cluster is proportional to the probability of observing the measured mean $\Delta\Sigma$ given a true cluster mass $M=M_{\rm{200c}}$. Assuming multivariate Gaussian errors in the observed signal, it can be written as
\begin{equation}
p(\Delta\Sigma|M)\propto\frac{1}{\sqrt{\det C(M)}}\times\exp\left(-\frac{1}{2}\bm{E}(M)^{\rm T} C^{-1}(M)\bm{E}(M)\right) \; ,
\label{eqn:likelihood}
\end{equation}
where $\bm{E}$ is the vector of residuals between data and model evaluated at mass $M$,
\begin{equation}
E_i(M)=\Delta\Sigma^{\rm obs}_i-\Delta\Sigma^{\rm model}_i(M) \; ,
\end{equation}
and $C$ is the covariance matrix (cf. equation~\ref{eqn:covcomponents}). The mass dependence of the covariance, due entirely to $C_{\rm intr}$, causes a complication relative to a simple minimum-$\chi^2$ analysis: the normalization of the Gaussian PDF depends on mass, which needs to be accounted for by the $\det^{-1/2}C(M)$ term in equation~\ref{eqn:likelihood}. If the covariance is modelled perfectly, including the mass dependence, the above is the correct likelihood (see e.g.~\citealp{kodwani19}). If, however, the mass dependence of the covariance is modelled with some statistical or systematic uncertainty, the $\det^{-1/2}C(M)$ term can cause a bias in the best-fit masses.
For this reason, we use a two-step scheme:
\begin{enumerate}
\item determine the best-fit mass using a covariance that consists of shape noise and LSS contributions only, i.e. has no mass dependence
\item evaluate $C^{\rm intr}$ at the best fit mass of step (i), add this to the covariance without mass dependence and repeat the likelihood analysis with this updated, full, yet mass-independent covariance.
\end{enumerate}
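The two-step scheme above can be sketched with a toy one-bin version of the fit (grid search in place of a proper likelihood maximization; the model and all names are illustrative):

```python
def chi2(m, data, model, var):
    """Single-bin chi^2; the full analysis uses the multivariate version."""
    r = data - model(m)
    return r * r / var

def fit_mass(data, model, var, masses):
    """Best-fit mass on a grid for a fixed (mass-independent) variance."""
    return min(masses, key=lambda m: chi2(m, data, model, var))

def two_step_fit(data, model, var_fixed, c_intr, masses):
    m1 = fit_mass(data, model, var_fixed, masses)                 # step (i)
    return fit_mass(data, model, var_fixed + c_intr(m1), masses)  # step (ii)
```

Because the covariance in step (ii) is evaluated at a fixed mass, the determinant term of equation~\ref{eqn:likelihood} no longer varies with the fitted mass and cannot bias the best fit.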
\section{Hierarchical Bayesian model}
\label{sec:bayesmodel}
Below we describe the hierarchical Bayesian model, which we use to determine the posterior distribution of the parameters of interest. The following section follows a framework similar to those of \cite{Nagarajan_2018} and \cite{Mulroy2019}, except that, instead of one selection function, we introduce two separate selection functions: the CODEX selection function and the sampling function of our lensing subsample.
The true underlying halo mass of the cluster $i$ in log-space, $\mu_i = \ln(M_i)$, is related to all other observables through a scaling model $P(\bs_i, \mu_i | \btheta)$, where $\bs_i = \ln(\bS_i)$ is the vector of true quantities in log-space and $\btheta$ represents a vector of parameters of interest. At a given redshift, the joint probability distribution that there exists a cluster of mass $\mu_i$ can be written as
\begin{equation}
P(\bs_i,\mu_i | \btheta, z_i) = P(\bs_i | \mu_i, \btheta)P(\mu_i | z_i)P(z),
\end{equation}
where we model the conditional distribution for the mass at a given redshift $z_i$, $P(\mu_i |z_i)$, as the halo mass function (HMF) $\frac{dn}{d \ln m}(\mu_i | z_i)$, and $P(z)$ is the comoving differential volume element $dV/dz(z)$. In practice, $P(\mu_i |z_i)$ is evaluated as a \citet{tinker08} mass function using a fixed $\Lambda$CDM cosmology, where $\Omega_m = 0.27$, $\Omega_{\Lambda} = 0.73$, $\Omega_b = 0.049$, $H_0 = 70$ km\, s$^{-1}$\, Mpc$^{-1}$, $\sigma_8 = 0.82$, $n_s = 0.962$, for a density contrast of $200 \times \rho_{c}$.
The underlying true values of the observables in log-space $\bs_i$ are assumed to come from a multivariate Gaussian distribution:
\begin{equation}
P(\bs_i | \mu_i, \btheta) \propto \det (\bSigma_i^{-1/2}) \exp \bigg[-\frac{1}{2}(\bs_i - \langle \bs_i \rangle)^T \bSigma_i^{-1} (\bs_i - \langle \bs_i \rangle)\bigg],
\end{equation}
where the mean of the probability distribution of observables is modelled as a linear function in log-space, $ \langle \bs_i \rangle = \balpha \mu_i + \bbeta $. The model parameters are defined as $\btheta = \{ {\balpha}, \bbeta, \bSigma_i \}$, where $\balpha$ is the vector of slopes, $\bbeta$ is the vector of intercepts and $\bSigma$ is the intrinsic covariance matrix of the cluster observables at fixed mass. The diagonal elements of the intrinsic covariance matrix, $\sigma_{\ln s_i | \mu}$, represent the intrinsic scatter of a cluster observable $s_i$ at fixed mass. The off-diagonal elements are the covariance terms between different cluster observables at fixed mass.
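A toy draw from this scaling model, keeping only the diagonal intrinsic scatter (i.e. ignoring the off-diagonal covariance terms; names are ours):

```python
import random

def draw_observables(mu, alpha, beta, sigma_intr, rng):
    """True log-observables scattered about the mean relation
    <s> = alpha * mu + beta, with diagonal intrinsic scatter only."""
    return [a * mu + b + rng.gauss(0.0, s)
            for a, b, s in zip(alpha, beta, sigma_intr)]
```

Repeating such draws at fixed $\mu$ recovers the input slope, intercept and scatter, which is the sense in which $\btheta$ parameterizes the population.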
However, we cannot directly access the cluster observables; we only have estimates through observations, which contain observational uncertainties. We denote the observed logarithmic quantities with a tilde: $\tilde{\bs}_i, \tilde{\mu}_i, \tilde{z}_i$, and the vector of all observables as $\tilde{\bf{o}} \in \{\tilde{\bs}_i, \tilde{\mu}_i, \tilde{z}_i\}$.
To connect them to their respective underlying true observables $\bf{o} \in \{ \bs_{i}, \mu_i, z_i \}$, we assume the full lensing likelihood from equation \ref{eqn:likelihood} for the mass, which we denote here $P(\tilde{\mu}_i | \mu_i)$, and, for the other parameters, a multivariate Gaussian distribution $P(\tilde{\bs}_i, \tilde{z}_i | \bs_i, z_i)$,
which acts as our measurement error model:
\begin{equation}
P(\tilde{\bs}_i, \tilde{z}_i | \bs_i, z_i) \propto \det (\tilde{\bSigma}_i^{-1/2}) \exp \bigg[-\frac{1}{2}(\tilde{\bs}_i - \bs_i )^T \tilde{\bSigma}_i^{-1} (\tilde{\bs_i} - \bs_i )\bigg].
\label{eq:measurement_error_model}
\end{equation}
The diagonal elements of the covariance matrix in equation \ref{eq:measurement_error_model} represent the relative statistical errors in the observables for cluster $i$, and the off-diagonal elements the covariance between the relative errors of different observables. In practice, instead of using the richness measurement errors evaluated by the redMaPPer algorithm, we assume a Poisson noise model, described further in equations \ref{eq:sdss} and \ref{eq:cfht}. For simplicity, we assume the measurement errors of the different observables of a single cluster to be independent.
For the total population, the probability of measuring the observed cluster property $\tilde{\bs}_i$ for a single cluster $i$ at fixed observed mass $\tilde{\mu}_i$ and observed redshift $\tilde{z}_i$, can be expressed as
\begin{equation}
\label{eq:prob_single_cluster_population}
\begin{split}
P(\tilde{\bs}_i, \tilde{\mu}_i, \tilde{z}_i | \btheta ) = &\int d\bs_i \int d\mu_i \int dz_i P(\tilde{\bs}_i, \tilde{z}_i | \bs_i, z_i) \\
\ \cdot &P(\tilde{\mu}_i | \mu_i)P(\bs_i | \mu_i, \btheta)P(\mu_i | z_i)P(z).
\end{split}
\end{equation}
Note that in equation \ref{eq:prob_single_cluster_population}, we have to marginalize over all the unobserved cluster properties, i.e., underlying halo mass, true observables and true redshift.
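A one-dimensional toy version of this marginalization, with a Gaussian measurement error and an exponentially falling prior standing in for the HMF (grid, slope and all names are illustrative), shows how the steep prior shapes the observed-mass distribution:

```python
import math

def p_obs(mu_obs, sigma_obs, hmf_slope=-2.0):
    """Unnormalized P(mu_obs): Gaussian error in observed log-mass,
    marginalized over a toy exponentially falling prior in true log-mass
    (the role played by the HMF in the full model)."""
    mu_grid = [30.0 + 0.01 * k for k in range(800)]
    h = 0.01
    total = 0.0
    for mu in mu_grid:
        like = math.exp(-0.5 * ((mu_obs - mu) / sigma_obs) ** 2)
        prior = math.exp(hmf_slope * (mu - 30.0))
        total += like * prior * h
    return total
```

Because the prior falls steeply, low-mass clusters scattered high outnumber high-mass clusters scattered low, which is why the marginalization over the HMF cannot be skipped.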
In reality, one cannot directly observe the full population of clusters, but only a subsample of it, based on some easily observable cluster property such as the luminosity or richness of the cluster. In order to correct the bias arising from this censoring of the population, one has to include the selection process in the model. If selection happens several times with different observables, e.g., when a subsample is drawn from a sample that itself represents the population, all of the selection processes should be introduced into the modelling.
In order to introduce a selection effect into the Bayesian modelling, we define a boolean variable for the selection $I$, which we will use as a conditional variable to specify whether a cluster is detected or not.
Let us first consider a single selection variable $\tilde{\lambda}$. Assume we have made a cut in $\tilde{\lambda}$, and we observe all the clusters above this limit. Then $P(I=1 | \tilde{\lambda} \ge \mathrm{cut}) = 1$ for all observed clusters, and $P(I=1 | \tilde{\lambda} < \mathrm{cut}) = 0$ for all unobserved clusters.
However, if we do not detect all the clusters above the cut, but only a subsample of them, and know how many clusters we miss, we can calculate the fraction of clusters from the subsample that belong to the sample at a given richness, $f(\tilde{\lambda}_{i,\mathrm{sub}})=\tilde{N}_{\mathrm{sub}}/\tilde{N}_{\mathrm{sample}}(\tilde{\lambda}_{i, \mathrm{sub}})$, and treat this fraction as our subsample detection probability, for which $P(I=1 | \tilde{\lambda}_{i,\mathrm{sub}}) = f(\tilde{\lambda}_{i,\mathrm{sub}}) \leq 1$. We note that $f$ reduces to the Heaviside step function if we observe all the clusters above the cut in $\tilde{\lambda}$. Below, we generalize the selection probability $P(I | \tilde{\bf{o}}_i, \btheta)$ by allowing the selection function to depend on multiple selection variables $\tilde{\bf{o}}_i$ and on the vector of parameters of interest $\btheta$.
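As an illustration, the binned fraction $f$ can be computed directly from the two richness samples; the sketch below uses synthetic richness values and arbitrary bin edges (not the CODEX data), and also shows that $f$ reduces to a step function when everything above the cut is observed.

```python
import numpy as np

def detection_fraction(lam_sub, lam_sample, bins):
    """Estimate P(I=1 | richness) as the binned ratio N_sub / N_sample.

    lam_sub    : richness values of the observed subsample
    lam_sample : richness values of the parent sample
    bins       : common bin edges in richness
    """
    n_sub, _ = np.histogram(lam_sub, bins=bins)
    n_samp, _ = np.histogram(lam_sample, bins=bins)
    # Avoid division by zero in empty parent bins.
    return np.divide(n_sub, n_samp, out=np.zeros(len(n_sub), dtype=float),
                     where=n_samp > 0)

# Toy example: if the subsample contains every cluster above the cut,
# the fraction reduces to a Heaviside step function at the cut.
rng = np.random.default_rng(0)
lam_sample = rng.uniform(20, 200, size=1000)   # synthetic parent sample
cut = 60.0
lam_sub = lam_sample[lam_sample >= cut]        # observe everything above the cut
bins = np.array([20, 60, 100, 140, 200])       # arbitrary bin edges
frac = detection_fraction(lam_sub, lam_sample, bins)
```

With real data, `lam_sub` would hold the subsample richnesses and `lam_sample` the parent-sample richnesses, and the binned ratio would generally fall between 0 and 1.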
Using Bayes' theorem, the probability of measuring the observed cluster properties $\tilde{\bf{o}}_i$, given a fixed vector of parameters $\btheta$ and the condition that the cluster passed the selection, is
\begin{equation}
\label{eq:prob_w_sel}
P(\tilde{\bf{o}}_i | I, \btheta) = \frac{P(I | \tilde{\bf{o}}_i, \btheta)P(\tilde{\bf{o}}_i | \btheta ) }{P(I | \btheta)},
\end{equation}
where $P(I | \tilde{\bf{o}}_i , \btheta)$ quantifies the probability of detecting a single cluster, and $P(I | \btheta)$ is the overall probability for a cluster to be selected, which can be evaluated by marginalizing the numerator of equation \ref{eq:prob_w_sel} over the observed cluster properties:
\begin{equation}
P(I | \btheta) = \int d\tilde{\bf{o}}_i P(I | \tilde{\bf{o}}_i, \btheta)P(\tilde{\bf{o}}_i | \btheta).
\end{equation}
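As a sanity check, the one-dimensional version of this normalization integral can be evaluated numerically; the sketch below assumes a hard-cut selection function and a Gaussian $P(\tilde{o} | \btheta)$, for which the integral reduces to a Gaussian survival function (all numbers here are illustrative, not CODEX quantities).

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def selection_norm(cut, mu, sigma, grid):
    """Evaluate P(I | theta) = integral of P(I | o) P(o | theta) do on a grid,
    assuming a hard cut P(I=1 | o) = 1[o >= cut] and P(o | theta) = N(mu, sigma)."""
    p_sel = (grid >= cut).astype(float)          # P(I = 1 | o)
    p_obs = norm.pdf(grid, loc=mu, scale=sigma)  # P(o | theta)
    return trapezoid(p_sel * p_obs, grid)

grid = np.linspace(-10, 10, 20001)
val = selection_norm(cut=1.0, mu=0.0, sigma=1.0, grid=grid)
# For this choice the integral equals the Gaussian survival function at the cut.
```

Replacing the hard cut by a smooth detection probability, as done for the CODEX selection below, only changes the integrand, not the structure of the computation.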
In the case where the selection depends on \textit{both} observed and true quantities, equation \ref{eq:prob_w_sel} becomes, according to Bayes' theorem:
\begin{equation}
\label{eq:prob_obs_unobs}
P(\tilde{\bf{o}}_i | I_{\mathrm{tot}}, \btheta)
=
P(\tilde{\bf{o}}_i | I_{\mathrm{obs}}, I_{\mathrm{true}}, \btheta) = \frac{P(I_{\mathrm{obs}} | \tilde{\bf{o}}_i, \btheta) P(I_{\mathrm{true}}, \tilde{\bf{o}}_i | \btheta ) }{P(I_{\mathrm{obs}}, I_{\mathrm{true}} | \btheta)} ,
\end{equation}
where we have introduced a second selection variable $I_{\mathrm{true}}$, which denotes the selection based on true quantities. The first term is the same selection function $P(I | \tilde{\bf{o}}_i, \btheta)$
as in equation \ref{eq:prob_w_sel}, and the second term in the numerator can be expressed as
\begin{equation}
\label{eq:sel_unobs}
\begin{split}
P(I_{\mathrm{true}}, \tilde{\bf{o}}_i | \btheta ) =
&\int d\bs_i \int d\mu_i P(I_{\mathrm{true}} | \bs_i, \mu_i ) \\ \ \cdot &P(\tilde{\bs}_i, \tilde{\mu}_i | \bs_i, \mu_i)P(\bs_i, \mu_i | \btheta).
\end{split}
\end{equation}
Equation \ref{eq:prob_single_cluster_population} holds only if no censoring is involved, whereas equation \ref{eq:sel_unobs} assumes that the observed set belongs to a larger population, and the selection $P(I_{\mathrm{true}} |\bs_i, \mu_i)$ can be modelled with simulations, where the true observables are known. In section \ref{sec:xray_selection}, we introduce the CODEX X-ray selection, $P(I_X | \bs_i, \mu_i)$, which is defined as a function of the true observables.
The normalization of the likelihood function in equation \ref{eq:prob_obs_unobs} can also be expressed as an integral over all observables:
\begin{equation}
P(I_{\mathrm{obs}}, I_{\mathrm{true}} | \btheta) = \int d\tilde{\bf{o}}_i P(I_{\mathrm{obs}} | \tilde{\bf{o}}_i, \btheta) P(I_{\mathrm{true}}, \tilde{\bf{o}}_i | \btheta).
\end{equation}
Finally, the full likelihood function for the subsample, with the inclusion of the selection effects, becomes a product of the single cluster likelihood functions from equation \ref{eq:prob_obs_unobs}:
\begin{equation}
\mathcal{L}( \tilde{\bf{o}}_{N} | \btheta ) = \prod_{i=1}^N
P(\tilde{\bf{o}}_i | I_{\mathrm{tot}}, \btheta ),
\end{equation}
where the subscript $N$ denotes the full vector of observed measurements from all clusters. The full posterior distribution, which describes the probability distribution of the parameters of interest, given the observed masses, redshifts and sets of observables, is then
\begin{equation}
P(\btheta | \tilde{\bs}_{N}, \tilde{\mu}_{N}, \tilde{z}_N) \propto \pi(\btheta)\mathcal{L}( \tilde{\bs}_{N}, \tilde{\mu}_{N}, \tilde{z}_N | \btheta ),
\end{equation}
where $\pi(\btheta)$ describes the prior knowledge of the parameters.
\section{Application to the CODEX weak lensing sample}
\label{sec:application}
We apply the Bayesian method described above to the lensing sample S-I, and exclude eleven clusters: CODEX ID 53436 and 53495, as they are missing both CFHT richness and weak lensing information; 37098, as it is missing weak lensing information; 13390, 29811 and 56934, as they are missing CFHT richness information; CODEX ID 13062 (griz) and 35646 (griz), as we only apply our method to clusters measured with five filters (ugriz); and CODEX ID 12451, 18127 and 36818, as their CFHT richnesses are below the 10\% CODEX survey completeness limit, which is further described in section \ref{sec:optical_selection}.
We aim to constrain both the intrinsic scatter in richness and the scaling relation parameters describing the richness-mass relation, see equation \ref{eq:M-l}. For that we fit a model of richness-mass relation to CFHT richness estimates and weak lensing mass likelihood (see Table \ref{tab:primaryWL} for CFHT richness estimates).
We do not fit for the SDSS richness-mass relation, as the SDSS richness estimates have a mean relative uncertainty of $\sim20 \%$, in contrast to the CFHT richness mean relative uncertainty of $\sim4 \%$. However, since the lensing sample of 25 clusters, i.e., a subsample of the initial CODEX sample, is based purely on observability, such that not all clusters above the $\tilde{\lambda}_{\mathrm{SDSS}} = 60$ cut are observed, we use the fraction of SDSS richnesses $P(I = 1 | \ln \tilde{\lambda}_{\mathrm{SDSS}})$ as our subsample selection function, and treat the SDSS richness in our likelihood function as one of the selection variables, which we marginalize over. As for the CFHT and SDSS richnesses, we assume both follow a similar log-normal richness distribution, i.e., $P(\ln \tilde{\lambda} | \ln \lambda) = \mathcal{N}(\ln \tilde{\lambda} ; \ln \lambda, \sigma_{\ln \lambda })$, but with a somewhat larger scatter for the SDSS richness, as described below.
The relation between underlying true richness and true mass of the cluster is assumed to be a Gaussian distribution in logarithmic space, with the mean of this relation given by the logarithm of a power-law:
\begin{equation}
\langle \ln\lambda_i | \mu_i \rangle = \alpha \mu_i + \beta,
\label{eq:M-l}
\end{equation}
where we have defined $\mu_i \equiv \ln(M_i/M_{\mathrm{piv}})$ with pivot mass set to $M_{\mathrm{piv}} = 10^{14.81} M_{\odot}$, i.e., the median mass of the lensing subsample. The model parameters of interest, $\alpha$ and $\beta$, describe the slope and intercept of the scaling relation, respectively. This parametrization follows \citet{saro2015}. We write the full variance in $\tilde{\lambda}_{\mathrm{SDSS}}$ as the sum of a Poisson term and an intrinsic scatter term. Thus, the total variance in observed SDSS richness at a fixed true mass $\mu_i$ can be written as \citep{spiders2018}:
\begin{equation}
\label{eq:sdss}
\sigma^2_\mathrm{tot, SDSS}(\ln \lambda_i| \mu_i) = \frac{\eta(z_i)}{\exp{\left \langle \ln\lambda_i | \mu_i \right \rangle}} +\sigma^2_{\ln \lambda|\mu, \mathrm{intr}} \ ,
\end{equation}
where $\sigma^2_{\ln \lambda|\mu, \mathrm{intr}}$ is the third free parameter of our model.
As described in \citet{spiders2018}, a redshift dependent correction factor $\eta(z)$ is estimated for high redshift clusters to remedy the effect that the SDSS photometric data is not deep enough to correctly measure the richness after a certain magnitude limit is reached. As the CFHT photometric richnesses come from a sufficiently deep survey, we can set the survey depth correction factor to unity, so that the total variance in CFHT richness can be modelled as:
\begin{equation}
\label{eq:cfht}
\sigma^2_\mathrm{tot, CFHT}(\ln \lambda_i| \mu_i) = \frac{1}{\exp{\left \langle \ln\lambda_i | \mu_i \right \rangle}} + \sigma^2_{\ln \lambda|\mu, \mathrm{intr}} \ .
\end{equation}
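Both variance models translate directly into code; the sketch below treats the survey-depth correction $\eta$ as a plain number, with $\eta = 1$ recovering the CFHT case (the numerical values used here are illustrative).

```python
import numpy as np

def total_richness_variance(mean_ln_lam, sigma_intr, eta=1.0):
    """Total variance of ln(richness) at fixed mass:
    a Poisson term eta / <lambda> plus the intrinsic variance.
    eta = 1 gives the CFHT case; a redshift-dependent eta(z) gives the SDSS case."""
    return eta / np.exp(mean_ln_lam) + sigma_intr**2

# CFHT case (eta = 1), with illustrative values:
# mean richness of 50 and intrinsic scatter of 0.17.
var_cfht = total_richness_variance(np.log(50.0), 0.17)
```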
We also tested writing the Poisson term in terms of the true richness, instead of the mean richness, and found the difference between these two error estimates to be negligible.
For the observed mass estimation, we use the single cluster mass likelihood function $P(\tilde{\mu} | \mu)$ from equation \ref{eqn:likelihood}. We introduce a fourth scalar parameter, $l_{\mathrm{sys}}$, with a standard normal prior, to model how much the noiseless logarithmic lensing masses deviate from the true logarithmic masses due to imperfect calibration of lensing shapes, redshifts, and the cluster density profiles.
We assume that the observed spectroscopic redshift is close to the true redshift of the cluster, i.e., we model the term $P(\tilde{z} |z)$ as a delta function.
In the case where the sample is only limited by the observed richness $\tilde{\lambda}_i$, with the calibration of the richness-mass scaling relation based on weak lensing data, the probability distribution can be written according to equation \ref{eq:prob_w_sel}. The initial CODEX sample, however, involves both optical and X-ray selection. The X-ray selection requires the inclusion of the CODEX selection function, replacing equation \ref{eq:prob_w_sel} with equation \ref{eq:prob_obs_unobs}.
\subsection{Optical selection functions}
\label{sec:optical_selection}
We consider two separate optical selection functions below that account for optical cleaning and incompleteness of the survey.
We describe by $P(I_{\mathrm{clean}} | \tilde{\lambda}, \tilde{z})$ the optical cleaning applied to the catalog. In practice, this is a redshift-dependent cut in observed richness, used to minimize false X-ray sources while keeping as many true systems as possible. For the CODEX survey, this cut is set by the $10 \%$ sensitivity limit of \citet{finoguenov2019codex}:
\begin{equation}
P(I_{\mathrm{clean}} | \tilde{\lambda}, \tilde{z}) = \left\{
\begin{array}{ll}
1,& \mathrm{if}\ \tilde{\lambda} > 22\left(\frac{\tilde{z}}{0.15}\right)^{0.8} \\
0,& \mathrm{otherwise}.
\end{array}
\right.
\end{equation}
We apply this cut to the CFHT richnesses to only retain clusters whose richness completeness is above 10\%. The cut excludes three clusters from S-I (CODEX ID 12451, 18127, and 36818).
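The cleaning cut is a simple step function of observed richness and redshift; a minimal sketch:

```python
import numpy as np

def p_clean(lam_obs, z_obs):
    """Optical cleaning P(I_clean | richness, z): 1 above the
    redshift-dependent 10% CODEX sensitivity limit, 0 otherwise."""
    threshold = 22.0 * (z_obs / 0.15) ** 0.8
    return np.where(lam_obs > threshold, 1.0, 0.0)
```

At $z = 0.15$ the threshold is 22, and it grows with redshift, so the same richness may pass the cut at low redshift but fail it at high redshift.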
We also consider the $50 \%$ SDSS richness completeness boundary:
\begin{equation}
\ln \lambda_{50 \%}(z) = \ln\left(17.2 + \exp\left(\frac{z}{0.32}\right)^2\right),
\end{equation}
i.e., clusters with SDSS richness above this limit have at least 50\% completeness.
We include the 50\% SDSS richness completeness as an optical selection function
\begin{equation}
P(I_{\mathrm{opt}} | \ln \lambda) = 1-\frac{1}{2}\mathrm{erfc}\left(\frac{\ln \lambda - \ln \lambda_{50\%}}{\sqrt{2}\sigma}\right)
\end{equation}
in the likelihood function with a scatter of $\sigma=0.2$, as described in \citet{finoguenov2019codex}.
This term accounts for incompleteness due to limited photometric depth of the SDSS survey causing a fraction of clusters to go unobserved.
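This smoothed completeness boundary maps directly onto {\tt scipy.special.erfc}; a minimal sketch, with illustrative richness values:

```python
import numpy as np
from scipy.special import erfc

def p_opt(ln_lam, ln_lam_50, sigma=0.2):
    """Smoothed 50% completeness boundary: P = 1 - 0.5 * erfc(x),
    with x = (ln lam - ln lam_50) / (sqrt(2) * sigma)."""
    x = (ln_lam - ln_lam_50) / (np.sqrt(2.0) * sigma)
    return 1.0 - 0.5 * erfc(x)

# Exactly at the 50% boundary, the detection probability is one half;
# well above it the probability approaches 1, well below it 0.
p_half = p_opt(np.log(30.0), np.log(30.0))
```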
\subsection{X-ray selection function}
\label{sec:xray_selection}
Details of the CODEX selection function are given in \citet{finoguenov2019codex}. The CODEX selection function $P(I_X | \mu, \mathrm{z}, \nu)$ provides an effective survey area at a given mass, redshift, and deviation from the mean richness at fixed mass, $\nu \equiv \frac{\ln \lambda_i - \left<\ln \lambda | \mu_i \right>}{\sigma_{\ln\lambda}^\mathrm{intr}}$, which accounts for the covariance between the scatter in richness and X-ray luminosity. The limits for $\nu$ are fixed at $\pm 4$. In the modelling of the CODEX selection function, the $L_{\mathrm{X}}$-mass scaling relations are fixed to those of the XMM-XXL survey (\citealt{lieu16, giles}), while the richness-mass relation is not modelled explicitly in the selection function, only the covariance between richness and luminosity.
For the selection function modelling, the covariance coefficient is fixed to $\rho_{\mathrm{L}_\mathrm{X} - \lambda} = -0.3$, which is based on results from \citet{farahi}. In this work, the CODEX selection function is evaluated at fixed cosmology with $\Omega_m = 0.27$.
This formulation of the selection function allows us to propagate these effects into the full selection function.
As the CODEX selection function depends on $\nu(\lambda, \left<\ln \lambda \right>)$, and the mean richness in $\nu$ depends on the scaling relation parameters, we can simplify the likelihood function by evaluating it in $\nu$-space instead of $\lambda$-space. In $\nu$-space, equation \ref{eq:sel_unobs} can be rewritten as
\begin{equation}
\begin{split}
P(I_X, \ln \tilde{\lambda}, \tilde{\mu}, \tilde{z} | \btheta) = &\int d\nu \int d\mu \int dz P(I_X | \mu, \nu, z) P( \tilde{\mu} | \mu)P(\tilde{z} | z) \\
&\ \cdot P(\ln \tilde{\lambda} | \nu, \theta, \mu)P(\nu)P(\mu | z)P(z), \\
\end{split}
\end{equation}
which is the probability of observing the full sample with the inclusion of the CODEX selection. However, we are dealing with a subsample, which is selected with the sampling function described below.
\subsection{Subsample selection function}
\label{sec:subsample_selection_function}
For evaluating the sampling function based on SDSS richness, we use the initial CODEX sample (407 clusters; the three light blue bins behind the three dark blue bins in Fig. \ref{fig:optical_sel}) and its subsample (25 clusters; the three dark blue bins in Fig. \ref{fig:optical_sel}).
We bin both the initial sample and the subsample, i.e., the lensing sample, into bins of equal width and evaluate the ratio of the bin heights. We then fit a piecewise linear function between the bin means, which becomes our sampling function of the observed SDSS richness, depicted by the orange curve in Fig. \ref{fig:optical_sel}.
The sampling function has the following form:
\begin{equation}
P(I_\mathrm{samp} | \tilde{\lambda}_{\mathrm{SDSS}}) = \left\{
\begin{array}{ll}
0 & \tilde{\lambda} < 60 \\
\frac{1}{1000}(\tilde{\lambda} - 60) + \frac{7}{1000} & 60 \leq \tilde{\lambda} < 91 \\
\frac{33}{1000}(\tilde{\lambda} - 91) + \frac{38}{1000} & 91 \leq \tilde{\lambda} < 136 \\
\frac{186}{1000} & 136 \leq \tilde{\lambda} \leq 163, \\
\end{array}
\right.
\end{equation}
where $\tilde{\lambda} \equiv \tilde{\lambda}_{\mathrm{SDSS}}$.
As the clusters in the 407-cluster initial sample have a cut at $\tilde{\lambda}_{\mathrm{SDSS}} \ge 60$, the sampling function assigns a null probability to clusters below this cut.
Since the lensing sample, a subsample of the initial sample, is selected purely by observability, some of the clusters in the initial sample above the richness cut remain unobserved, and the sampling function therefore differs from a typical Heaviside step function.
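The piecewise sampling function can be implemented verbatim; the sketch below copies the coefficients from the equation above (richness outside the tabulated range defaults to zero).

```python
import numpy as np

def p_samp(lam_sdss):
    """Piecewise-linear sampling function of observed SDSS richness,
    with the coefficients taken verbatim from the text."""
    lam = np.asarray(lam_sdss, dtype=float)
    conds = [lam < 60,
             (lam >= 60) & (lam < 91),
             (lam >= 91) & (lam < 136),
             (lam >= 136) & (lam <= 163)]
    funcs = [0.0,
             lambda l: (l - 60) / 1000 + 7 / 1000,
             lambda l: 33 * (l - 91) / 1000 + 38 / 1000,
             186 / 1000]
    # Values not matched by any condition (lam > 163) stay at zero.
    return np.piecewise(lam, conds, funcs)
```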
\begin{figure}
\includegraphics[width=8cm]{plots/sampling_function.pdf}
\caption{SDSS richness distributions of CODEX sample and lensing sample, from which the sampling function (weight as a function of observed richness) is derived.}
\label{fig:optical_sel}
\end{figure}
The sampling function depends only on the SDSS richness, which we can consider as an effective richness. We introduce an additional Gaussian distribution $P(\ln \tilde{\lambda}_{SDSS} | \ln \lambda)$ to account for the connection between SDSS richness and true richness, and marginalize the likelihood function over the SDSS richness.
\subsection{Full data likelihood function}
For completeness, we give the full likelihood function in $\nu$-space used to constrain the parameters of interest $\theta = \{\alpha, \beta, \sigma_{\ln \lambda}^{\mathrm{intr}}\}$:
\begin{equation}
\begin{split}
\mathcal{L} = \prod_{i=1}^{N}\phi(I_X, I_{\mathrm{samp}}, I_{\mathrm{opt}} | \theta)^{-1} &\int d\nu_i \int d \mu_i \int d\ln \tilde{\lambda}_{i,SDSS} \\
\ \cdot &P(I_\mathrm{samp} | \ln \tilde{\lambda}_{i,SDSS}) \\
\ \cdot&P(I_X | \mu_i, \tilde{z}_i, \nu_i) \\
\ \cdot&P(I_{\mathrm{opt}} | \nu_i, \mu_i, \theta ) \\
\ \cdot&P(\ln \tilde{\lambda}_{i,SDSS} | \nu_i, \mu_i, \theta, \tilde{z}_i) \\
\ \cdot&P(\ln \tilde{\lambda}_{i,CFHT} | \nu_i, \mu_i, \theta) \\
\ \cdot&P(\tilde{\mu}_i | \mu_i ) \\
\ \cdot&P(\nu_i) \\
\ \cdot&P(\mu_i,\tilde{z}_i), \\
\end{split}
\end{equation}
where the normalization of the likelihood is:
\begin{equation}
\begin{split}
\phi(I_X, I_{\mathrm{samp}}, I_{\mathrm{opt}} | \theta) =& \int d\nu \int d\mu \int d\ln\tilde{\lambda}_{SDSS} \int d\tilde{z} \\
\ \cdot&P(I_{\mathrm{samp}} | \ln \tilde{\lambda}_{SDSS}) \\
\ \cdot&P(I_X | \mu, \tilde{z}, \nu) \\
\ \cdot&P(I_{\mathrm{opt}} | \nu, \mu, \theta ) \\
\ \cdot&P(\ln \tilde{\lambda}_{SDSS} | \nu, \mu, \theta, \tilde{z}) \\
\ \cdot&P(\nu) \\
\ \cdot&P(\mu,\tilde{z}).
\end{split}
\end{equation}
The subscript $i$ is omitted in the normalization as it is identical for all clusters. We note that the full likelihood function incorporates three of the four selection effects: the X-ray selection $P(I_X | \mu_i, \tilde{z}_i, \nu_i)$, to account for the covariance of the X-ray cluster properties with richness; the optical selection $P(I_{\mathrm{opt}} | \nu_i, \mu_i, \theta )$, to account for the incompleteness of the SDSS richness; and the sampling function $P(I_{\mathrm{samp}} | \ln \tilde{\lambda}_{SDSS})$, to account for the fact that we analyse a subsample of the initial CODEX sample. We do not include the fourth selection function, the optical cleaning function $P(I_{\mathrm{clean}} | \tilde{\lambda}, \tilde{z})$, in the data likelihood, as it is only used to make the redshift-dependent cut removing cluster ID 12451, 18127, and 36818 from the S-I sample.
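Schematically, the normalization $\phi$ is a low-dimensional integral that can be evaluated on a grid. The toy sketch below keeps only $\nu$ and $\mu$ and uses Gaussian and hard-cut placeholders for every ingredient, so it illustrates the structure of the computation rather than the actual CODEX selection and mass functions.

```python
import numpy as np
from scipy.stats import norm

def toy_norm(alpha, beta, sigma_intr, cut=np.log(60.0)):
    """Toy 2-D version of the likelihood normalization:
    phi = sum over a (nu, mu) grid of P(I | ln lam) P(nu) P(mu),
    with ln lam = alpha*mu + beta + sigma_intr*nu. Placeholder densities:
    standard normal P(nu), normal P(mu), hard-cut selection."""
    nu = np.linspace(-4, 4, 161)   # nu is restricted to +/- 4
    mu = np.linspace(-2, 2, 161)   # mu = ln(M / M_piv), toy range
    NU, MU = np.meshgrid(nu, mu, indexing="ij")
    ln_lam = alpha * MU + beta + sigma_intr * NU
    integrand = (ln_lam >= cut) * norm.pdf(NU) * norm.pdf(MU, scale=0.5)
    dnu = nu[1] - nu[0]
    dmu = mu[1] - mu[0]
    return integrand.sum() * dnu * dmu

phi = toy_norm(alpha=0.5, beta=np.log(84.0), sigma_intr=0.17)
```

Raising the normalization of the toy scaling relation pushes more of the population above the cut, so $\phi$ increases, which is the qualitative behaviour the full normalization must reproduce.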
\section{Results and Discussion}
\label{sec:results}
\begin{figure}
\includegraphics[width=8cm]{plots/cfht-sdss-corner_24chains_v2.pdf}
\caption{Result from the MCMC fitting, with the one and two dimensional projections of the posterior distributions for the CFHT samples. Contours indicate the statistical $1\sigma$ ($68\%$) and $2\sigma$ ($95\%$) credible regions.}
\label{fig:cfht-sdss-corner}
\end{figure}
\begin{table*}
\caption{Summary of measured parameters, their initial values, priors and posteriors. The initial parameter values for each of the 24 random walkers in the MCMC run are randomly drawn around a circle with the center value listed in the Initial column and radius $10^{-2}$. This way, all walkers start to scan the parameter space at slightly different initial positions.
}
\begin{center}
\begin{tabular}{ccccc} \hline\hline
\centering
\input{params.tex}
\hline
\end{tabular}
\end{center}
\begin{tablenotes}
\item[1] $\alpha$ is the mass slope of the richness--mass relation $\langle \ln \lambda | \mu \rangle = \alpha\mu + \beta$.
\item[2] $\beta$ is intercept (normalization) of the richness--mass relation.
\item[3] $\sigma_{\ln \lambda}^{\mathrm{intr}}$ is the intrinsic scatter in richness, which quantifies how much true richness at given mass scatters from the mean.
\item[4] $l_{\mathrm{sys}}$ is a scalar lensing systematic parameter. It is used to draw how different the noiseless log lensing masses are from the log true masses due to imperfect calibration of lensing shapes, redshifts, and the cluster density profiles.
\end{tablenotes}
\label{tab:results}
\end{table*}
\begin{table*}
\caption{Scaling relation parameter comparison to the literature. The credible intervals refer to $1\sigma$ ($68 \%$) statistical uncertainties.}
\label{tab:scal_rel_comparison}
\begin{tabular}{c|c|c|c|}
\hline \hline
Bayesian analysis results & Intercept & Slope & Scatter \\
& $\lambda_0 = \exp{(\beta)}$ & $\alpha$& $\sigma_{\ln \lambda}^{\mathrm{intr}}$ \\
\hline
CODEX lensing sample & $84.0^{+9.2}_{-14.8} $ & $0.49^{+0.20}_{-0.15}$ & $0.17^{+0.13}_{-0.09}$ \\
\hline \hline
Previously published results & $\lambda_0(10^{14.81}M_{\odot}, z=0.5)$ & $M_{200c}^{\alpha} $ & $\sigma_{\ln \lambda}^{\mathrm{intr}}$ \\
\hline
LoCuSS prediction \textbf{\citep{Mulroy2019}} & $93.66 \pm 7.43$ & $0.74 \pm 0.06$ & $0.24 \pm 0.05$ \\
SPIDERS prediction \textbf{\citep{spiders2018}}& $65.10 \pm 7.21$ & $0.98 \pm 0.07$ & $0.22^{+0.08}_{-0.09}$ \\
SPTpol prediction \textbf{\citep{Bleem_2020}}& $79.15 \pm 8.30$ & $1.02 \pm 0.08$ & $0.23 \pm 0.16$ \\
DES Y1 prediction \textbf{\citep{desy1}}& $70.66 \pm 2.55$ & $0.73 \pm 0.03$ & $-$ \\
\hline \hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/params_v2.pdf}
\caption{Comparison between the predicted richness and other results from the literature. The predicted richnesses are evaluated at $M_{200c} = 10^{14.81} M_{\odot}$ and $z = 0.5$. Gray bands denote the statistical $1\sigma$ $(68 \%)$ uncertainty of this work. For the DES Y1 analysis, the intrinsic scatter and its $1\sigma$ uncertainty are not shown, as they are not constrained in their work.
}
\label{fig:params}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm]{plots/cfht-sdss-corner_comparison_new.pdf}
\caption{Identical MCMC fitting results as in Fig. \ref{fig:cfht-sdss-corner}, but with the inclusion of the scaling relation results from the literature, rescaled at $M_{200c} = 10^{14.81} M_{\odot}$ and $z=0.5$. Contours indicate the statistical $1\sigma$ ($68\%$) and $2\sigma$ ($95\%$) credible regions.}
\label{fig:cfht-corner-comparison}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{plots/cfht-sptpol-corner_comparison2.pdf}
\caption{Comparison of the predicted \citet{Bleem_2020} parameter distributions (in orange) with respect to this work, but assuming a similar slope as in \citet{Bleem_2020} (in cyan). For the slope, instead of using a flat prior, we use a Gaussian prior with the mean and scatter set to the SPTpol prediction listed in Table \ref{tab:scal_rel_comparison}. Contours indicate the statistical $1\sigma$ ($68\%$) and $2\sigma$ ($95\%$) credible regions.}
\label{fig:cfht-sptpol}
\end{figure}
We sample the posterior of the parameters using the \emph{EMCEE} package \citep{foremanmackey13}, an implementation of a Markov chain Monte Carlo (MCMC) algorithm. We run 24 walkers with 2000 steps each, excluding the first 400 steps of each chain to remove the burn-in region. We checked the convergence of the chains with the Gelman-Rubin and Geweke statistics, using the ChainConsumer package \citep{chainconsumer}.
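The walker and burn-in bookkeeping can be illustrated with a pure-{\tt numpy} random-walk Metropolis stand-in; EMCEE itself uses an affine-invariant ensemble sampler, so the sketch below only mirrors the chain structure (24 walkers, 2000 steps, 400 discarded), with a toy Gaussian target.

```python
import numpy as np

def run_walkers(log_prob, ndim, nwalkers=24, nsteps=2000, nburn=400, seed=1):
    """Random-walk Metropolis for several independent walkers; returns the
    post-burn-in chain of shape (nsteps - nburn, nwalkers, ndim)."""
    rng = np.random.default_rng(seed)
    # Start all walkers in a small ball around the origin.
    pos = rng.normal(scale=1e-2, size=(nwalkers, ndim))
    lp = np.array([log_prob(p) for p in pos])
    chain = np.empty((nsteps, nwalkers, ndim))
    for step in range(nsteps):
        prop = pos + rng.normal(scale=1.0, size=pos.shape)
        lp_prop = np.array([log_prob(p) for p in prop])
        accept = np.log(rng.uniform(size=nwalkers)) < lp_prop - lp
        pos = np.where(accept[:, None], prop, pos)
        lp = np.where(accept, lp_prop, lp)
        chain[step] = pos
    return chain[nburn:]

# Toy target: a standard normal in one dimension.
chain = run_walkers(lambda p: -0.5 * np.sum(p**2), ndim=1)
```

In the real analysis, the log-probability is the log of the full data likelihood times the priors, and the post-burn-in chain is what the credible intervals are computed from.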
The summary of the initial and prior parameter values used for the MCMC, together with their posterior values and $1\sigma$ statistical uncertainties, is listed in Table \ref{tab:results}. The initial values for the scaling relation are set to the results of the SPIDERS cluster work \citep{spiders2018}. Originally, we set the upper limit of the $\alpha$ prior to 3, but this introduced two additional disconnected regions of relatively good likelihood above $\alpha = 1.6$. The two regions had mean values of $\alpha = 2.4$, $\beta = 4.4$, $\sigma_{\ln \lambda}^{\mathrm{intr}}=0.25$, and $\alpha = 2.1$, $\beta = 4.2$, $\sigma_{\ln \lambda}^{\mathrm{intr}}=1.00$. The scaling relations of these two regions have nonphysically low true and mean richnesses at low masses ($< 3 \times 10^{14} M_{\odot}$).
We therefore reran the MCMC algorithm with the upper limit of the $\alpha$ prior set to 1.6, which removed the two nonphysical regions. We report the maximum of the posterior distribution as our best-fit values, and the uncertainties correspond to the interval containing $68\%$ of the points.
Fig. \ref{fig:cfht-sdss-corner} shows the results of the MCMC fitting. For the normalization $\lambda_0$ of the richness--mass relation, in logarithmic form $\left< \ln \lambda | \mu_{200c} \right> = \ln \lambda_0 + \alpha \mu_{200c}$, we found $\lambda_0 =\exp{\beta}= 84.0^{+9.2}_{-14.8}$, and for the slope $\alpha = 0.49^{+0.20}_{-0.15}$ at pivot mass $M_{\mathrm{200c, piv}} = 10^{14.81} M_{\odot}$. Our result for the intrinsic scatter in richness at fixed mass is $\sigma_{\ln{\lambda}|\mu}^{\mathrm{intr}} = 0.17^{+0.13}_{-0.09}$.
We compare our richness--mass relation to previous work from \citet{Mulroy2019}, \citet{spiders2018}, \citet{desy1}, and \citet{Bleem_2020}. We give a brief summary of each of their results below.
In \citet{Mulroy2019}, a simultaneous analysis of several galaxy cluster scaling relations between weak lensing mass and multiple cluster observables is performed, including the richness-mass relation in logarithmic space, $\left< \ln \lambda | \mu_{500c} \right> = \beta + \alpha\mu_{500c}$, using a sample of 41 X-ray luminous clusters from the Local Cluster Substructure Survey (LoCuSS), spanning the redshift range of $0.15 < z < 0.3$ and mass range of $2.1 \times 10^{14} M_{\odot} < M_{500c, WL} < 1.6 \times 10^{15} M_{\odot}$, with $z_{\mathrm{piv}} = 0.22$ and $M_{500c, \mathrm{piv}} = 7.14 \times 10^{14} M_{\odot}$. Their method for estimating the data likelihood function has the same basis as this work, thus we expect the least disagreement between their results and ours.
\citet{spiders2018} derive the richness--mass--redshift relation $\left< \lambda | \mu_{200c}, z \right> = A\mu_{200c}^{\alpha}(\frac{1+z}{1+z_{\mathrm{piv}}})^{\gamma}$ using a sample of 428 X-ray luminous clusters from the SPIDERS survey, spanning the redshift range $0.03 \leq z \leq 0.66$ and dynamical mass range $1.6 \times 10^{14} M_{\odot} < M_{200c, \mathrm{dyn}}< 1.6 \times 10^{15} M_{\odot} $ with $z_{\mathrm{piv}}=0.18$ and $M_{200c, \mathrm{piv}} = 3 \times 10^{14} M_{\odot}$. We compare our richness-mass results to their baseline analysis that accounted for the CODEX selection function.
Since the CODEX survey is part of the SPIDERS programme, they share a similar CODEX selection function with us. Between $0.4<z<0.65$, our CODEX cluster sample overlaps with that of \citet{spiders2018} in cluster mass, richness, and redshift. However, clusters with $z > 0.4$ in both \citet{spiders2018} and our work have a median number of spectroscopic members $\leq 20$, as can be seen from Fig. \ref{fig:median_spec_zbin} below; thus the quality of the dynamical mass estimates is very different from that at $z<0.2$, where there are many more than 20 members (the median is up to 60 members at $z<0.1$).
\begin{figure}
\includegraphics[width=8cm]{plots/median-spec_zbin.pdf}
\caption{Median number of spectroscopic members as a function of spectroscopic redshift for the SPIDERS sample, of which the CODEX sample is part. The redshift bin is set to $\Delta z = 0.05$, and the selection cuts are set to those of \citet{spiders2018}
($\lambda \geq 60$ and $N_{\mathrm{mem}} \geq 10$).}
\label{fig:median_spec_zbin}
\end{figure}
\citet{desy1} derive the mass-richness-redshift relation $\left< M_{200m} | \lambda, z \right> = M_0 (\lambda/40)^{F}((1+z)/1.35)^G$, and constrain the normalization of their scaling relation at the 5.0 per cent level, finding $M_0 = (3.081 \pm 0.075) \times 10^{14} M_{\odot}$ at $\lambda = 40$ and $z=0.35$. They find a richness slope of $F = 1.356 \pm 0.051$ and a redshift scaling index of $G = -0.3 \pm 0.30$.
They use the redMaPPer galaxy cluster identifier on the Dark Energy Survey Year 1 data with weak gravitational lensing, in $4 \times 3$ bins of richness $\lambda$ and redshift $z$ for $\lambda \ge 20$ and $0.2 \leq z \leq 0.65$. The analysis of \citet{desy1} is the most statistically constraining result from the literature that we consider. However, they consider purely optically selected clusters, which are known to be prone to contamination by low-mass systems.
\citet{Bleem_2020} derive the richness-mass-redshift relation $\langle \ln \lambda | M_{500c}\rangle = \ln A + B \ln(M_{500c}/ 3\times 10^{14} M_{\odot} h^{-1}) + C\ln(E(z)/E(z=0.6))$,
and find $A = 76.9 \pm 8.2$, $B = 1.020 \pm 0.08$, $C = 0.29 \pm 0.27$. They report a $28 \%$ shallower slope $F = 1/B$ than \citet{desy1}, with the difference significant at the $4\sigma$ level.
This 2770 $\mathrm{deg}^2$ survey is conducted with the polarization-sensitive receiver on the South Pole Telescope (SPTpol), using the identified Sunyaev-Zel'dovich (SZ) signal of 652 clusters to estimate the cluster masses. The cluster richnesses are estimated with the redMaPPer algorithm and matched to the DES Y3 RM catalog to calibrate the richness-mass relation, taking the SPT selection into account. This sample is the closest to ours in terms of sample definition, as both the X-ray and SZ signals require the presence of hot intracluster medium (ICM), which reduces the contamination present in optical samples.
In a recently published CODEX weak lensing analysis, \citet{Phriksee} made a mass-richness comparison to \citet{spiders2018}, with 279 clusters in the optical richness range $20 \leq \lambda \leq 110$ and $0.1 \leq z \leq 0.2$. They found excellent agreement with both dynamical and weak lensing mass estimates at $z \leq 0.15$.
We use the {\tt colossus} python package \citep{Diemer2018ab} to convert $M_{500c}$ and $M_{200m}$ to $M_{200c}$ when necessary, and evaluate the slope and intercept at $M_{200c, \mathrm{piv}} = 10^{14.81} M_{\odot}$, in order to compare our constraints with the other results. Since \citet{spiders2018}, \citet{desy1}, and \citet{Bleem_2020} included the redshift evolution of their scaling relations, we evaluate their relations at $z=0.5$, the mean redshift of our 25-cluster subsample, to make the results comparable. For \citet{Mulroy2019}, we rescale the scaling relation parameters by assuming $\lambda_0(z) = \exp{\beta(z)} = \mathrm{const}$. For the \citet{desy1} results, we use \citet{leauthaud_2009} to invert the mass-richness relation, and evaluate the relation at $z=0.5$ and $M_{\mathrm{200c, piv}} = 10^{14.81} M_{\odot}$. The inversion requires a bias term that depends on $\sigma_{\ln \lambda}^{\mathrm{intr}}$, for which we use our intrinsic scatter value of $\sigma_{\ln \lambda}^{\mathrm{intr}}=0.17^{+0.13}_{-0.09}$, as \citet{desy1} did not constrain it.
In Table \ref{tab:scal_rel_comparison}, we show the predicted richness--mass mean parameter values and their $1\sigma$ statistical uncertainties from the LoCuSS, SPIDERS, SPTpol, and DES Y1 work, all evaluated at $z=0.5$ and $M_{200c, \mathrm{piv}} = 10^{14.81} M_{\odot}$. In Fig. \ref{fig:params}, we compare the slope and predicted richness $\lambda_0 = \langle \lambda | M = 10^{14.81} M_{\odot}, z=0.5, \rangle = \exp(\beta)$ from our work (gray bands) to the ones in the literature.
Fig. \ref{fig:cfht-corner-comparison} shows the predicted mean relations from Table \ref{tab:scal_rel_comparison} overplotted on our MCMC fitting results from Fig. \ref{fig:cfht-sdss-corner}. We note that all the predicted mean results fall within the $2\sigma$ region of our posterior distributions, with the largest deviations in both slope and intercept occurring for \citet{spiders2018} and \citet{Bleem_2020}.
Our slope agrees only at the $2\sigma$ level with both \citet{spiders2018} and \citet{Bleem_2020}, whose slopes are both centered around unity, the latter with looser constraints. To see how a different prior on the slope affects our parameter estimation, we therefore redo our Bayesian analysis with the same 25 clusters as before, but using a Gaussian prior for the slope, set to the mean and scatter of the SPTpol prediction in Table \ref{tab:scal_rel_comparison}. In Fig. \ref{fig:cfht-sptpol}, we show the posterior distributions obtained with the Gaussian prior for the slope in cyan, and compare them against the predicted SPTpol parameter distributions, shown in orange. When using the Gaussian prior for the slope, we find a posterior slope $\alpha = 0.98 \pm 0.09$, normalization $\lambda_0 = \exp(\beta) = 74.4^{+21.4}_{-18.2}$, and intrinsic scatter in richness $\sigma_{\ln \lambda}^{\mathrm{intr}} = 0.28^{+0.16}_{-0.14}$. We create the SPTpol parameter distributions using a multivariate Gaussian, with the mean and the diagonal elements of the scatter matrix set to the mean and the square of the $1\sigma$ uncertainties of the SPTpol predictions from Table \ref{tab:scal_rel_comparison}. We note that a tight prior constraint on the slope loosens both the normalization and the intrinsic scatter to a wider range, pushing the mean of the normalization towards smaller values and the intrinsic scatter towards the predicted SPTpol result.
Since the number of clusters in our subsample is small, the prior shape has a larger impact on the final marginalized posterior distributions. We prefer a flat prior for the slope, as our data points span a narrow mass range with large uncertainties on the mass and small uncertainties on the richness.
In Fig. \ref{fig:best_fit}, we show the richness--mass relations from Table \ref{tab:scal_rel_comparison}. In the upper panel, we only consider the statistical $1\sigma$ (68\%) uncertainty around the mean relations, whereas in the lower panel, we consider the $1\sigma$ (68\%) interval within which new richness observations may fall at fixed mass. We do this by introducing $\sigma_{\ln \lambda}^{\mathrm{intr}}$ and its $1\sigma$ uncertainty for all surveys, except for DES Y1, which lacks intrinsic scatter information.
The $1\sigma$ confidence regions in Fig. \ref{fig:best_fit} are constructed as follows:
\begin{enumerate}
\item Draw 5000 new scaling relation parameter samples ($\alpha$, $\beta$, and $\sigma_{\ln \lambda}^{\mathrm{intr}}$) from a multivariate Gaussian distribution with mean and diagonal covariance matrix set to the results from Table \ref{tab:scal_rel_comparison},
\item Use new values of $\alpha$ and $\beta$ to generate 5000 new mean richnesses at each mass point,
\item For the upper panel, calculate the $1\sigma$ statistics of these 5000 mean richness values and plot them,
\item For the lower panel, sample 1000 new richness values for each of the 5000 mean richness values from a log-normal distribution with mean and scatter set to values sampled from the multivariate Gaussian in step (i),
\item Calculate the $1\sigma$ uncertainty from the 1000 new richness values for each of the 5000 mean richnesses and plot those uncertainties to the lower panel.
\end{enumerate}
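The procedure above can be sketched in a few lines of Python. This is a minimal illustration only: the parameter means, uncertainties, mass grid, and (reduced) sample sizes below are placeholders, not the fitted values from Table \ref{tab:scal_rel_comparison}, and the plotting step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder best-fit values and 1-sigma errors for (alpha, beta, sigma_intr);
# substitute the fitted values from the table.
mean = np.array([0.49, np.log(84.0), 0.17])
err = np.array([0.18, 0.15, 0.11])

n_par, n_rich = 2000, 200                 # reduced from 5000 and 1000 for brevity
mu = np.linspace(-0.5, 0.5, 21)           # mu = ln(M / M_piv) grid

# Step (i): parameter samples from an uncorrelated multivariate Gaussian.
pars = rng.multivariate_normal(mean, np.diag(err**2), size=n_par)

# Step (ii): mean richness per parameter sample at each mass point.
lam_mean = np.exp(pars[:, :1] * mu + pars[:, 1:2])

# Step (iii): 68% band of the mean relation (upper panel).
up_lo, up_hi = np.percentile(lam_mean, [16, 84], axis=0)

# Steps (iv)-(v): add log-normal intrinsic scatter per parameter sample,
# then take the 68% band of the predicted richness (lower panel).
sig = np.abs(pars[:, 2])[:, None, None]
lam = np.exp(np.log(lam_mean)[:, None, :]
             + sig * rng.standard_normal((n_par, n_rich, mu.size)))
low_lo, low_hi = np.percentile(lam.reshape(-1, mu.size), [16, 84], axis=0)
```

As expected, the lower-panel band is wider than the upper-panel band at every mass point, since the intrinsic scatter adds variance on top of the parameter uncertainty.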
The error envelopes in the lower panel include the $1\sigma$ uncertainties of the slope, the intercept, and the intrinsic scatter in richness. Typically in the literature, only the mean relation with its $1\sigma$ uncertainty is shown, as in the upper panel of Fig. \ref{fig:best_fit}, but this method only accounts for the uncertainty in the slope and intercept, and does not consider that individual observations may deviate from the mean relation by the intrinsic scatter. In the lower panel of Fig. \ref{fig:best_fit}, we also take into account the effect of the intrinsic scatter in richness and its $1\sigma$ uncertainty. The latter method thus accounts both for the scatter of new observations about the mean relation and for the uncertainty on the parameters. We note that the data points in Fig. \ref{fig:best_fit} refer to observed values from Table \ref{tab:cleanedWL}, not to their true values. We show them here to point out the narrow mass range of the observed data, with large statistical uncertainties in weak lensing mass and small uncertainties in the observed richness.
From Fig. \ref{fig:params}, the richness normalization $\lambda_0$, at $z=0.5$ and $M_{200c}=10^{14.81} M_{\odot}$, from our work overlaps within $1\sigma$ uncertainty with all four different survey richness normalizations that we consider.
The main difference in the normalization is between LoCuSS, which measured clusters at $0.15<z<0.3$, and the rest of the surveys. Given that the LoCuSS richness relation is estimated without a redshift-dependent evolution term, this might indicate an evolution of cluster richness at a given mass, as discussed in \citet{spiders2018}.
The relatively flat slopes found in this work and in LoCuSS could be attributed to a combination of probing a small mass range and of an intrinsic scatter in richness that increases with decreasing mass, $\sigma_{\ln \lambda}^{\mathrm{intr}}(m) \propto 1/m$.
Although our mass slope is only $1\sigma$ away from the slope found by \citet{desy1}, a steeper slope of $\alpha = 1.0^{+0.22}_{-0.22}$ was robustly established in low-z CODEX studies \citep{Phriksee}, and was attributed to CODEX X-ray clusters being less prone to possible contamination by projected low-mass groups of galaxies along the line of sight than purely optically selected clusters, such as those of \citet{desy1}.
Also from Fig. \ref{fig:params}, we see that our result on the intrinsic scatter in richness overlaps within $1\sigma$ with the other results from the literature, albeit with a smaller mean of $\sigma_{\ln \lambda}^{\mathrm{intr}} = 0.17^{+0.13}_{-0.09}$. When the same analysis is done with a Gaussian prior on the slope, $\alpha \sim \mathcal{N}(1.02, 0.08)$ (see Fig. \ref{fig:cfht-sptpol}), we find an intrinsic scatter of $\sigma_{\ln \lambda}^{\mathrm{intr}} = 0.28^{+0.16}_{-0.14}$, indicating the importance of the prior choice when a small sample size is considered.
Our comparison to the results of the dynamical mass modelling presented in \citet{spiders2018} indicates a marginally lower mass for a given richness at richness values around $80$. Considering other weak lensing calibrations performed on X-ray clusters, we note from \cite{Phriksee} that at $z<0.15$ the weak lensing calibration of CODEX clusters agrees well with \citet{spiders2018}, while we find from Fig. \ref{fig:params} that the LoCuSS \citep{Mulroy2019} results ($0.15<z<0.3$) are in significant tension with \citet{spiders2018}. These results, if confirmed, could be used to constrain models of modified gravity \citep{arnold,Sakstein2016,wilcox2016, mitchell, Tamosiunas_2019}. Improvements in the spectroscopic follow-up of high-z clusters are, however, critical: as \citet{zhang2017} showed, a low number of spectroscopic redshifts per cluster and fiber collisions of the SPIDERS tiling can have a strong effect on the bias and scatter of dynamical mass estimates.
\begin{figure}
\begin{subfigure}{8.45cm}
\centering\includegraphics[width=7cm]{plots/best_fit_nointr.pdf}
\end{subfigure}
\begin{subfigure}{8.45cm}
\centering\includegraphics[width=7cm]{plots/best_fit_itrscat.pdf}
\end{subfigure}
\caption{\textbf{Upper panel}: Mean relation comparison with the predicted results from the literature. The confidence regions (light blue, light green, light red, light orange, and light violet envelopes) represent the 1$\sigma$ uncertainty of the slope and intercept of the mean relations (blue, green, red, orange, and violet dashed lines, respectively). The predicted relations from DES Y1, SPTpol, and SPIDERS have been scaled to $z_{\mathrm{pivot}}=0.5$, and the DES Y1 relation is inverted according to \citet{leauthaud_2009}. The vertical green line is the pivot mass of this work. We limit each predicted relation to its respective mass and richness range.
\textbf{Lower panel}: Since the data likelihood function accounts for the intrinsic scatter in richness, it is meaningful to include its effect in the overall parameter uncertainty budget. The error envelopes take into account the $1\sigma$ uncertainties of the slope, intercept, and the intrinsic scatter in richness. The uncertainties on the data points represent the $1\sigma$ statistical errors in mass and observed richness.}
\label{fig:best_fit}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We present the results of a Bayesian weak lensing mass calibration analysis of a CODEX cluster sample of 25 clusters at high redshift ($0.35 < z < 0.62$), with redMaPPer richness $\ge 60$, and with a detailed consideration of systematic uncertainties.
The weak lensing data are obtained from pointed CFHT observations of CODEX clusters, to which we add a reanalysis of the public CFHTLS data. We obtain the cluster masses by running a likelihood analysis including a covariance matrix to account for contributions from large scale structure and intrinsic cluster properties. We refine the original richness estimates based on SDSS photometry by rerunning redMaPPer on CFHT photometry and obtain the richness--mass relation $\langle \ln \lambda | \mu \rangle = \alpha \mu + \beta$, with $\mu = \ln (M_{200c}/10^{14.81} M_{\odot})$, and compare this relation to the one obtained by \citet{Mulroy2019} ($z\sim0.2$), and to the $z=0.5$ predictions of \citet{spiders2018}, \citet{desy1}, and \citet{Bleem_2020}. We
measure a richness--mass relation with slope $\alpha=0.49^{+0.20}_{-0.15}$ and intercept $\lambda_0 = \exp(\beta) = 84.0^{+9.2}_{-14.8}$, using a data likelihood function that incorporates the overall error budget of the weak lensing mass calibration analysis, along with optical, X-ray, survey incompleteness, and subsample selection effects.
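As a concrete check of the quoted relation, the mean richness $\langle \lambda \rangle = \exp(\alpha \mu + \beta)$ can be evaluated at an arbitrary mass. This short sketch uses the best-fit point estimates only (no uncertainties); the helper name \texttt{mean\_richness} is ours, not part of the analysis pipeline.

```python
import math

alpha = 0.49            # best-fit slope
lam0 = 84.0             # exp(beta), richness at the pivot mass
log10_M_piv = 14.81     # pivot mass, log10(M_200c / M_sun)

def mean_richness(log10_M):
    """Predicted mean richness <lambda> at mass 10**log10_M M_sun."""
    mu = (log10_M - log10_M_piv) * math.log(10.0)   # mu = ln(M / M_piv)
    return lam0 * math.exp(alpha * mu)

# At the pivot mass the relation returns the normalization itself: 84.0.
# At 10^15 M_sun the predicted mean richness rises to roughly 104.
```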
We find our results on the slope, intercept, and intrinsic scatter in richness overlap with the weak lensing analysis of low-z ($0.15<z<0.3$) LoCuSS clusters by \citet{Mulroy2019} within $1\sigma$ uncertainty over the entire LoCuSS mass range.
At masses of $10^{14.81}M_\odot$, our 68\% credible region for the mean cluster richness overlaps with those of \citet{Mulroy2019}, \citet{desy1}, and \citet{Bleem_2020}, and at around its 16th percentile slightly overlaps the 84th percentile of \citet{spiders2018}. The $1\sigma$ statistical uncertainty in richness is comparable to the differences between results based on different cluster selections and different mass measurements.
Even though we consider a multitude of selection effects, with a narrow mass range and a small sample size we find a relatively flat slope. Thus, future improvements should not be directed solely towards increasing the sample size, but also towards understanding the selection effects and improving the mass measurements.
The importance of our work lies in extending the weak lensing calibration of massive X-ray clusters to $z \leq 0.6$, where large disagreements between weak lensing calibrations were previously reported \citep{Smith_2015}.
\section*{Acknowledgements}
We thank an anonymous referee for a thorough review of the manuscript, and Raffaella Capasso and Jacob Ider Chitham for discussions of the results.
This work is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii.
\\
We use data from the Canada-France-Hawaii Lensing Survey \citep{heymans12}, hereafter referred to as CFHTLenS. The CFHTLenS survey analysis combined weak lensing data processing with THELI \citep{erben13} and shear measurement with lensfit \citep{miller13}. A full systematic error analysis of the shear measurements in combination with the photometric redshifts is presented in \citet{heymans12}.
\\
Based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias.
\\
We acknowledge Fabrice Brimioulle for his substantial work on an early version of this manuscript, and we understand his decision not to be listed on the paper, since he is no longer working in astronomy. We thank Matthew R.~Becker and Andrey Kravtsov for making their cluster simulations available.
\\
KK and JV acknowledge financial support from the Finnish Cultural Foundation, KK the Magnus Ehrnrooth foundation, and the Academy of Finland grant 295113. This work was supported by the Department of Energy, Laboratory Directed Research and Development program at SLAC National Accelerator Laboratory, under contract DE-AC02-76SF00515 and as part of the Panofsky Fellowship awarded to DG. NC acknowledges financial support from the Brazilian agencies CNPQ and CAPES (process \#2684/2015-2 PDSE). NC also acknowledges support from the Max-Planck-Institute for Extraterrestrial Physics and the Excellence Cluster Universe. ESC acknowledges financial support from Brazilian agencies CNPQ and FAPESP (process \#2014/13723-3). LM acknowledges STFC grant ST/N000919/1. AF \& CK acknowledge the Finnish Academy award, decision 266918. HYS acknowledges the support from the Shanghai Committee of Science and Technology grant No. 19ZR1466600.
\\
We acknowledge R. Bender for the use of his photometric redshift pipeline in this work. NC acknowledges J. Weller for the hospitality.
This work made use of the astronomical data analysis software \texttt{TOPCAT} \citep{Taylor2005aa}. Data analysis has been carried out with University of Helsinki computing clusters Alcyone and Kale. We acknowledge the use of the research infrastructures Euclid Science Data Center Finland (SDC-FI, urn:nbn:fi:research-infras-2016072529) and the Finnish Grid and Cloud Computing Infrastructure (FGCI, urn:nbn:fi:research-infras-2016072533), and the Academy of Finland infrastructure grant 292882. The authors acknowledge the use of the following python packages, in alphabetical order: \texttt{astropy} \citep{Astropy-Collaboration2013aa, Astropy-Collaboration2018aa}, \texttt{chainConsumer} \citep{chainconsumer},
\texttt{emcee} \citep{emcee3}, \texttt{matplotlib} \citep{matplotlib}, \texttt{numpy} \citep{numpy1, numpy2}, and \texttt{scipy} \citep{scipy}.
\section*{Data availability}
The raw data underlying this article are available on the CFHT server at \url{https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/cfht/}.
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{mnras}
\section{Introduction}
\label{intro}
One of the main interests in high-mass star formation research today
is a thorough characterization of the expected massive accretion disks
(e.g., \citealt{yorke2002,krumholz2006b}). Observations of this type
require high spatial resolution, and disks have been suggested in
numerous systems, although not always with the same tracers (for a
recent compilation see \citealt{cesaroni2006}). A major complication
is the large degree of chemical diversity in these regions. This is
seen on large scales in terms of the large number of molecular
detections (e.g., \citealt{schilke1997b,vandishoeck1998}) but in
particular on the small scale (of the order $10^4$\,AU and even
smaller) where significant chemical differentiation is found (e.g.,
\citealt{beuther2005a}). At present, we are only beginning to
decipher the chemical structure of these objects at high spatial
resolution, whereas our understanding of the physical properties
better allows us to place regions into an evolutionary context (e.g.,
\citealt{beuther2006b,zinnecker2007}). It is not yet clear how our
emerging picture of the physical evolution is related to the observed
chemical diversity and evolution.
Various theory groups work on the chemical evolution during massive
star formation (e.g.,
\citealt{caselli1993,millar1997,charnley1997,viti2004,nomura2004,wakelam2005b,doty2002,doty2006}),
and the results are promising. However, the observational database to
test these models against is still relatively poor. Some single-dish
low-spatial-resolution line surveys toward several sources do exist,
but they are all conducted with different spatial resolution and
covering different frequency bands (e.g.,
\citealt{blake1987,macdonald1996,schilke1997b,hatchell1998b,mccutcheon2000,vandertak2000,vandertak2003,johnstone2003,bisschop2007}).
Furthermore, the chemical structure in massive star-forming regions is
far from uniform, and at high resolution one observes spatial
variations between many species; prominent examples are Orion-KL,
W3OH/H$_2$O, and Cepheus A (see, e.g.,
\citealt{wright1996,wyrowski1999,beuther2005a,brogan2007}).
Single-dish studies targeted larger source samples at low spatial
resolution and described the averaged chemical properties of the
target regions (e.g., \citealt{hatchell1998b,bisschop2007}). However,
no consistent chemical investigation of a sample of massive
star-forming regions exists at high spatial resolution. To obtain an
observational census of the chemical evolution at high spatial
resolution and to build up a database for chemical evolutionary models
of massive star formation, it is important to establish a rather
uniformly selected sample of massive star-forming regions in various
evolutionary stages. Furthermore, this sample should be observed in
the same spectral setup at high spatial resolution. While the former
is necessary for a reliable comparison, the latter is crucial to
disentangle the chemical spatial variations in the complex massive
star-forming regions. Because submm interferometric imaging is a
time-consuming task, it is impossible to observe a large sample in a
short time. Hence, it is useful to exploit synergies and observe
various sources over a few years in the same spectral lines.
We have undertaken such a chemical survey of massive molecular cores
containing high-mass protostars in different evolutionary stages using
the Submillimeter Array (SMA\footnote{The Submillimeter Array is a
joint project between the Smithsonian Astrophysical Observatory and
the Academia Sinica Institute of Astronomy and Astrophysics, and is
funded by the Smithsonian Institution and the Academia Sinica.},
\citealt{ho2004}) since 2003 in exactly the same spectral setup. The
four massive star-forming regions span a range of evolutionary stages
and luminosities: (1) the prototypical hot molecular core (HMC)
Orion-KL (\citealt{beuther2004g,beuther2005a}), (2) an HMC at larger
distance G29.96 \citep{beuther2007d}, and two regions in a presumably
earlier evolutionary phase, namely (3) the younger but similarly
luminous High-Mass Protostellar Object (HMPO) IRAS\,23151+5912
\citep{beuther2007f} and (4) the less luminous HMPO IRAS\,05358+3543
\citep{leurini2007}. Although the latter two regions also have central
temperatures $>$100\,K, qualifying them as ``hot'', their molecular
line emission is considerably weaker than from regions which are
usually termed HMCs. Therefore, we refer to them from now on as
early-HMPOs (see also the evolutionary sequence in
\citealt{beuther2006b}). Table \ref{source_parameters} lists the main
physical parameters of the selected target regions.
The SMA offers high spatial resolution (of the order $1''$) and a
large enough instantaneous bandwidth of 4\,GHz to sample numerous
molecular transitions simultaneously (e.g., $^{28}$SiO and its rarer
isotopologue $^{30}$SiO, a large series of CH$_3$OH lines in the
$v_t=0,1,2$ states, CH$_3$CN, HCOOCH$_3$, SO, SO$_2$, and many more
lines in the given setup, see \S\ref{data}). Each of these
observations has been published separately, where we provide detailed
discussions of the particularities of each source
\citep{beuther2004g,beuther2005a,beuther2007f,leurini2007}. These
objects span an evolutionary range where the molecular gas and
ice-coated grains in close proximity to the forming star are subject
to increasing degrees of heating. In this fashion, volatiles will be
released from ices near the most evolved (luminous) sources altering
the surrounding gas-phase chemical equilibrium and molecular emission.
In this paper, we synthesize these data and effectively re-observe
these systems at identical physical resolution in order to identify
coherent trends. Our main goals are to search for (a) trends in the
chemistry as a function of evolutionary state and (b) to explore, in a
unbiased manner, the capability of molecular emission to trace
coherent velocity structures that have, in the past, been attribute to
Keplerian disks. We will demonstrate that the ability of various
tracers to probe the innermost region, where a disk should reside,
changes as a function of evolution.
\begin{table*}[htb]
\caption{Source parameters}
\begin{tabular}{lrrrr}
\hline
\hline
& Orion-KL$^b$& G29.96$^b$ & 23151$^b$& 05358$^b$ \\
\hline
$L^a$\,[L$_{\odot}$] & $10^5$ & $9\times 10^4$ & $10^5$ & $10^{3.8}$ \\
$d$\,[pc] & 450 & 6000 & 5700 & 1800 \\
$M_{\rm{gas}}^{a,c}$\,[M$_{\odot}$]& 140$^d$ & 2500$^e$ & 600 & 300 \\
$T_{\rm{rot}}^f$\,[K] & 300 & 340 & 150 & 220 \\
$N_{\rm{peak}}(\rm{H_2})^g$\,[cm$^{-2}$] & $9\times 10^{24}$ & $6\times 10^{24}$ & $2\times 10^{24}$ & $2\times 10^{24}$ \\
Type & HMC & HMC & early-HMPO& early-HMPO \\
\hline
\hline
\end{tabular}
\footnotesize{~\\
$^a$ Luminosities and masses are derived from single-dish data. Since most regions split up into multiple sources, individual values for sub-members are lower.\\
$^b$ The SMA data are first published in \citealt{beuther2005a,beuther2007d,beuther2007f} and \citet{leurini2007}. Other parameters are taken from \citet{menten1995,olmi2003,sridha,beuther2002a}.\\
$^c$ The integrated masses should be accurate within a factor 5 \citep{beuther2002a}.\\
$^d$ This value was calculated from the 870\,$\mu$m flux of \citet{schilke1997b} following \citet{hildebrand1983} assuming an average temperature of 50\,K.\\
$^e$ This value was calculated from the 850\,$\mu$m flux of \citet{thompson2006} following \citet{hildebrand1983} as in comment $d$.\\
$^f$ Peak rotational temperatures derived from CH$_3$OH.\\
$^g$ H$_2$ column densities toward the peak positions derived from the submm dust continuum observations \citep{beuther2004g,beuther2007d,beuther2007f,beuther2007c}.}
\label{source_parameters}
\end{table*}
\section{Data}
\label{data}
The four sources were observed with the SMA between 2003 and 2005 in
several array configurations achieving (sub)arcsecond spatial
resolution. For detailed observational description, see
\citet{beuther2005a,beuther2007d,beuther2007f} and
\citet{leurini2007}. The main point of interest to be mentioned here
is that all four regions were observed in exactly the same spectral
setup. The receivers operated in a double-sideband mode with an IF
band of 4-6\,GHz so that the upper and lower sideband were separated
by 10\,GHz. The central frequencies of the upper and lower sideband
were 348.2 and 338.2\,GHz, respectively. The correlator had a
bandwidth of 2\,GHz and the channel spacing was 0.8125\,MHz, resulting
in a nominal spectral resolution of $\sim$0.7\,km\,s$^{-1}$. However,
for the analysis presented below, we smoothed the data-cubes to
2\,km\,s$^{-1}$. The spatial resolution of the several datasets is
given in Table \ref{resolution}. Line identifications were done in an
iterative way: we first compared our data with the single-dish line
survey of Orion by \citet{schilke1997b} and then refined the analysis
via the molecular spectroscopy catalogs of JPL and the Cologne
database of molecular spectroscopy CDMS
\citep{poynter1985,mueller2002}.
\begin{table}[htb]
\caption{Spatial resolution}
\begin{tabular}{lrrrr}
\hline
\hline
& Orion-KL & G29.96 & 23151 & 05358 \\
\hline
Cont. [$''$] & $0.8\times 0.7$ & $0.4\times 0.3$ & $0.6\times 0.5$ & $1.1\times 0.6$ \\
Av. Cont. [AU] & 340 & 2100 & 3100 & 1500 \\
Line [$''$] & $1.4\times 1.1 $& $0.6\times 0.5$ & $1.1\times 0.8$ & $1.4\times 0.8$\\
Av. Line [AU] & 560 & 3300 & 5400 & 2000 \\
\hline
\hline
\end{tabular}
\label{resolution}
\end{table}
\section{Results and Discussion}
\subsection{Submm continuum emission}
To set the four regions spatially into context, Figure
\ref{cont_sample} presents the four submm continuum images obtained
with the SMA. As one expects in a clustered mode of massive star
formation, all regions exhibit multiple structures with the number of
sub-sources $\geq 2$. As discussed in the corresponding papers, while
most of the submm continuum peaks likely correspond to embedded
protostellar sources, this is not necessarily the case for
all of them. Some of the submm continuum peaks could be
externally heated gas clumps (e.g., the Orion hot core peak,
\citealt{beuther2004g}) or may be produced by shock interactions with
molecular outflows (e.g., IRAS\,05358+3543, \citealt{beuther2007c}).
In the following spectral line images, we will always show the
corresponding submm continuum map in grey-scale as a reference frame.
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f1.eps}
\caption{862\,$\mu$m continuum images toward the four massive
star-forming regions. Positive and negative features are shown as
solid and dashed contours, respectively. The contouring always
starts at the $3\sigma$ levels and continues in $2\sigma$ steps
(with $1\sigma$ values of 35, 21, 5, 7\,mJy\,beam$^{-1}$,
respectively). These figures are adaptions from the papers by
\citet{beuther2005a,beuther2007d,beuther2007f,beuther2007c}.
Scale-bars are presented in each panel, and the wedges show the flux
density scales in Jy. The axes are labeled in R.A. (J2000) and Dec.
(J2000). The synthesized beam sizes are listed in Table
\ref{resolution}.}
\label{cont_sample}
\end{figure*}
\subsection{Spectral characteristics as a function of evolution}
Because the four regions are at different distances (Table
\ref{source_parameters}), to compare the overall spectra we smoothed
all datasets to the same linear spatial resolution of $\sim$5700\,AU.
Figure \ref{sample_spectra} presents the final spectra extracted at
this common resolution toward the peak positions of all four regions.
Furthermore, we present images at the original spatial resolution of
the four molecular species or vibrationally-torsionally excited lines
that are detected toward all four target regions (Figs.~3 to 6).
\begin{figure*}[htb]
\includegraphics[angle=-90,width=0.5\textwidth]{f2a.eps}
\includegraphics[angle=-90,width=0.5\textwidth]{f2b.eps}\\
\includegraphics[angle=-90,width=0.5\textwidth]{f2c.eps}
\includegraphics[angle=-90,width=0.5\textwidth]{f2d.eps}\\
\includegraphics[angle=-90,width=0.5\textwidth]{f2e.eps}
\includegraphics[angle=-90,width=0.5\textwidth]{f2f.eps}\\
\includegraphics[angle=-90,width=0.5\textwidth]{f2g.eps}
\includegraphics[angle=-90,width=0.5\textwidth]{f2h.eps}
\caption{SMA spectra extracted from the final data-cubes in the image
domain toward four massive star-forming regions (Orion-KL top-row,
G29.96 second row, IRAS\,23151+5912 third row and IRAS\,05358+3543
fourth row). For a better comparison, the data-cubes were all
smoothed to the same spatial resolution of $\sim$5700\,AU from the
current IRAS\,23151+5912 dataset. The circles in
Fig.~\ref{ch3oh_sample} outline the corresponding spatial regions.
The spectral resolution in all spectra is 2\,km/s.}
\label{sample_spectra}
\end{figure*}
The spectral characteristics between the HMCs and the early-HMPOs vary
considerably. Table \ref{linelistall} lists all detected lines in the
four regions with their upper energy levels $E_u/k$ and peak
intensities $S_{\rm{peak}}$ at a spatial resolution of $\sim$5700\,AU.
\subsubsection{Excitation and optical depth effects?}
\label{excitation_effects}
Since the line detections and intensities are not only affected by the
chemistry but also by excitation effects, we have to estimate
quantitatively how much the latter can influence our data. In local
thermodynamic equilibrium, the line intensities are to first order
depending on the Boltzmann factor $e^{\frac{-E_u}{kT}}$ and the
partition function $Q$:
$$\int I(T) dv \propto \frac{e^{\frac{-E_u}{kT}}}{Q(T)}$$
where $\frac{E_u}{k}$ is the upper level energy. For polyatomic
molecules in the high-temperature limit, one can approximate
$Q(T)\propto \sqrt{T^3}$ (e.g., \citealt{blake1987}). To get a feeling
how much temperature changes affect lines with different
$\frac{E_u}{k}$, one can form the ratio of $\int I(T) dv$ at two
different temperatures:
$$\frac{\int I(2T) dv}{\int I(T) dv} = \frac{e^{-\frac{E_u}{2kT}}}{e^{-\frac{E_u}{kT}}}\,\frac{Q(T)}{Q(2T)} = \frac{e^{\frac{E_u}{2kT}}}{\sqrt{2^3}} = \frac{\sqrt{e^{\frac{E_u}{kT}}}}{\sqrt{2^3}}$$
Evaluating this ratio for a few representative upper level energies
and gas temperatures of $T=100$ \& 50\,K (Table \ref{excite}), we find
that the induced intensity changes in most cases barely exceed a factor of 2.
Only for very highly excited lines like
CH$_3$OH$(7_{1,7}-6_{1,6})v_t=1$ at relatively low temperatures
($T=50$\,K) do excitation effects become significant. However, at such
low temperatures, these highly excited lines emit well below our
detection limits, hence this case is not important for the present
comparison. Therefore, excitation plays only a minor role in
producing the molecular line differences discussed below, and other
effects like the chemistry turn out to be far more important.
\begin{table}[htb]
\caption{Excitation effects}
\begin{tabular}{lrrr}
\hline
\hline
Line & $\frac{E_u}{k}$ & $\frac{\int I(2T) dv}{\int I(T) dv}$ & $\frac{\int I(2T) dv}{\int I(T) dv}$\\
& (K) & @ $T$=100K & @ $T$=50K\\
\hline
C$^{34}$S(7--6) & 65 & 0.49 & 0.68 \\
SO$_2(18_{4,1}-18_{3,1})$ & 197 & 0.95 & 2.5 \\
CH$_3$OH$(7_{1,7}-6_{1,6})v_t=1$& 356 & 2.1 & 12.4 \\
~\\
\hline
\hline
\end{tabular}
\label{excite}
\end{table}
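The entries of Table \ref{excite} follow directly from the ratio $\sqrt{e^{E_u/kT}}/\sqrt{2^3}$; a few lines of Python reproduce them (upper-level energies $E_u/k$ and temperatures in Kelvin). The line labels are shorthand only.

```python
import math

def intensity_ratio(e_u, temp):
    """LTE integrated-intensity ratio I(2T)/I(T) for Q(T) ~ T**1.5:
    sqrt(exp(E_u / (k T))) / sqrt(2**3), with E_u/k and T in Kelvin."""
    return math.sqrt(math.exp(e_u / temp)) / math.sqrt(8.0)

lines = {"C34S(7-6)": 65.0,
         "SO2(18_41-18_31)": 197.0,
         "CH3OH(7_17-6_16) vt=1": 356.0}

for name, e_u in lines.items():
    print(f"{name}: {intensity_ratio(e_u, 100.0):.2f} at 100 K, "
          f"{intensity_ratio(e_u, 50.0):.2f} at 50 K")
```

The printed values agree with the table within its rounding (0.49/0.68, 0.95/2.5, and 2.1/12.4, respectively).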
As shown in Table \ref{linelistall}, our spectral setup contains only
a few lines from rarer isotopologues. We therefore cannot
readily determine the optical depth of the molecular lines. While it
is likely that, for example, the ground state CH$_3$OH lines have
significant optical depths, rarer species and vibrationally excited
lines should be more optically thin. We checked this for a few
representative species (e.g., C$^{34}$S or SO$_2$) by running
large-velocity gradient models (LVG, \citealt{vandertak2007}) with
typical parameters for these kinds of regions, confirming the overall
validity of optically thin emission for most lines (see
\S\ref{column_para} \& \S\ref{c34s}). However, without additional
data, we cannot address this issue in more detail.
\subsubsection{Column densities}
\label{column_para}
Estimating reliable molecular column densities and/or abundances is a
relatively difficult task for interferometric datasets like those
presented here. The data are suffering from missing short spacings and
filter out large fractions of the gas and dust emission. Because of
the different nature of the sources and their varying distances, the
spatial filtering affects each dataset in a different way.
Furthermore, because of spatial variations between the molecular gas
distributions and the dust emission representing the H$_2$ column
densities, the spatial filtering affects the dust continuum and the
spectral line emission differently. On top of this, Figures
\ref{ch3oh_sample} to \ref{so2_sample} show that in several cases the
molecular line and dust continuum emission are even spatially offset,
preventing the estimation of reliable abundances.
While these problems make direct abundance estimates relative to H$_2$
impossible, nevertheless, we are at least able to estimate approximate
molecular column densities for the sources. Since we are dealing with
high-density regions, we can derive the column densities from the
spectra shown in Figure \ref{sample_spectra} assuming local
thermodynamic equilibrium (LTE) and optically thin emission. We
modeled the molecular emission of each species separately using the
XCLASS superset to the CLASS software developed by Peter Schilke
(priv.~comm.). This software package uses the line catalogs from JPL
and CDMS \citep{poynter1985,mueller2001}. The main free parameters for
the molecular spectra are temperature, source size and column density.
We used the temperatures given in Table \ref{source_parameters}
(except for Orion-KL, where we used 200\,K because of the large
smoothing to 5700\,AU) with approximate source sizes estimated from
the dust continuum and spectral line maps. Then we produced model
spectra with the column density as the remaining free parameter.
Considering the missing flux and the uncertainties for temperatures
and source sizes, the derived column densities should be taken with
caution and only be considered as order-of-magnitude estimates. Table
\ref{column} presents the results for all sources and detected
molecules.
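The essence of such a single-species fit can be sketched as follows. This is a toy version of the optically thin LTE modelling described above, not XCLASS itself; the proportionality constant, beam and source sizes, temperature, and function names are illustrative assumptions only.

```python
import math

def model_intensity(n_col, temp, e_u, theta_src, theta_beam, c=1.0e-14):
    """Toy optically thin LTE line intensity: column density times the
    Boltzmann factor over Q(T) ~ T**1.5, diluted by the source filling
    factor within the beam (all physical constants absorbed into c)."""
    dilution = theta_src**2 / (theta_src**2 + theta_beam**2)
    return c * n_col * math.exp(-e_u / temp) / temp**1.5 * dilution

def fit_column(i_obs, temp, e_u, theta_src, theta_beam):
    """The model is linear in the column density, so at fixed temperature
    and source size the best-fit N follows by simple inversion."""
    return i_obs / model_intensity(1.0, temp, e_u, theta_src, theta_beam)

# Example: T = 150 K, E_u/k = 197 K, a 2" source observed with a 1.4" beam.
n_fit = fit_column(i_obs=0.5, temp=150.0, e_u=197.0,
                   theta_src=2.0, theta_beam=1.4)
```

Because the model is linear in $N$, uncertainties in the assumed temperature and source size propagate directly into the fitted column density, which is why the resulting values should be read as order-of-magnitude estimates.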
\subsubsection{General differences}
An obvious difference between the four regions is the large line
forest observed toward the two HMCs Orion-KL and G29.96, and the
progressively fewer molecular lines detected toward IRAS\,23151+5912
and IRAS\,05358+3543. Especially prominent is the difference in
vibrationally-torsionally excited CH$_3$OH lines: we detect many
transitions in the HMCs and only a single one in the two early-HMPOs.
Since the vibrationally-torsionally excited CH$_3$OH lines have higher
excitation levels $E_u/k$, this can be explained relatively easily by
on average lower temperatures of the molecular gas in the early-HMPOs.
Assuming an evolutionary sequence, we anticipate that the two
early-HMPOs will eventually develop line forests similar to those of
the two HMCs.
Analyzing the spatial distribution of CH$_3$OH we find that it is
associated with several physical entities (Figs.~\ref{ch3oh_sample} \&
\ref{ch3oh_vt1_sample}). While it shows strong emission toward most
submm continuum peaks, it exhibits additional interesting features.
For example, the double-peaked structure in G29.96
(Fig.~\ref{ch3oh_sample}) may be caused by high optical depth of the
molecular emission, whereas the lower optical depth
vibrationally-torsionally excited lines do peak toward the central
dust and gas core (Fig.~\ref{ch3oh_vt1_sample}; smoothing the submm
continuum map to the lower spatial resolution of the line data, the
four submm sources merge into one central peak). In contrast, toward
Orion-KL the strongest CH$_3$OH features in the ground state and the
vibrationally-torsionally excited states are toward the south-western
region called the compact ridge. This is the interface between a
molecular outflow and the ambient gas. Our data confirm previous work
which suggests an abundance enrichment (e.g., \citealt{blake1987}).
For instance, in quiescent gas in Orion the CH$_3$OH/C$^{34}$S ratio
is $<$\,20 \citep{bergin1997}, while our data yield a ratio of 100 (see
Table \ref{column}). This is believed to be caused by outflow shock
processes in the dense surrounding gas (e.g., \citealt{wright1996}).
Furthermore, as will be discussed in \S\ref{disks}, there exist
observational indications in two of the sources (IRAS\,23151+5912 and
IRAS\,05358+3543) that the vibrationally-torsionally excited CH$_3$OH
may be a suitable tracer of inner rotating disk or tori structures.
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f3.eps}
\caption{CH$_3$OH contour images (line blend between
CH$_3$OH$(7_{2,5}-6_{2,4})$ and CH$_3$OH$(7_{2,6}-6_{2,5})$) toward
the four massive star-forming regions. Positive and negative
features are shown as solid and dashed contours. The contouring is
done from 15 to 95\% (step 10\%) of the peak emission, and the peak
emission values are 7.5, 0.9, 1.0 and 0.6\,Jy\,beam$^{-1}$ from left
to right, respectively. The integration regimes for the four sources
are [5,15], [90,104], [-60,-52] and [-20,-12]\,km\,s$^{-1}$. The
grey-scale with dotted contours shows the submm continuum emission
from Fig.~\ref{cont_sample}. This figure is an adaptation from the
papers by
\citet{beuther2005a,beuther2007d,beuther2007f,leurini2007}. The axes
are labeled in R.A. (J2000) and Dec. (J2000). The spatial
resolution is listed in Table \ref{resolution}. The circles
represent the regions of diameter 5700\,AU used for the comparison
spectra in Fig.~\ref{sample_spectra}.}
\label{ch3oh_sample}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f4.eps}
\caption{Vibrationally-torsionally excited
CH$_3$OH$(7_{1,7}-6_{1,6})(v_t=1)$ images toward the four massive
star-forming regions. Positive and negative features are shown as
solid and dashed contours. The contouring is done from 15 to 95\%
(step 10\%) of the peak emission, and the peak emission values are
7.5, 0.9, 1.0 and 0.6\,Jy\,beam$^{-1}$ from left to right,
respectively. The integration regimes for the four sources are
[3,13], [91,105], [-58,-54] and [-18,-10]\,km\,s$^{-1}$. The
grey-scale with dotted contours shows the submm continuum emission
from Fig.~\ref{cont_sample}. This figure is an adaptation from the
papers by
\citet{beuther2005a,beuther2007d,beuther2007f,leurini2007}. The axes
are labeled in R.A. (J2000) and Dec. (J2000). The spatial resolution
is listed in Table \ref{resolution}.}
\label{ch3oh_vt1_sample}
\end{figure*}
Toward G29.96, we detect most lines previously also observed toward
Orion-KL; a few exceptions are the $^{30}$SiO line, some of the
vibrationally-torsionally excited $v_t=2$ CH$_3$OH lines, some N-bearing
molecular lines from larger molecules like CH$_3$CH$_2$CN or
CH$_2$CHCN, as well as a few $^{34}$SO$_2$ and HCOOCH$_3$ lines.
While for some of the weaker lines this difference may partly be
attributed to the larger distance of G29.96, for other lines such an
argument is unlikely to hold. For example, the CH$_3$CH$_2$CN line at
348.55\,GHz is stronger than the neighboring H$_2$CS line in Orion-KL
whereas it remains undetected in G29.96 compared to the strong H$_2$CS
line there. This is also reflected in the different abundance ratio of
CH$_3$CH$_2$CN/H$_2$CS which is more than an order of magnitude larger
in Orion-KL compared with G29.96 (see Table \ref{column}). Therefore,
these differences are likely tracing true chemical variations between
sources. In contrast, the only lines observed toward G29.96 but not
detected toward Orion-KL are a few CH$_3$OCH$_3$ lines.
The main SiO isotopologue $^{28}$SiO(8--7) is detected in all sources
but IRAS\,05358+3543. This is relatively surprising because SiO(2--1)
is strong in this region \citep{beuther2002d}, and the upper level
energy of the $J$=8--7 transition ($\sim$75\,K) is not so
extraordinarily high as to produce a non-detection. For example, the
detected CH$_3$OH$(7_{1,7}-6_{1,6})v_t=1$ transition has an upper
level energy of 356\,K. This implies that IRAS\,05358+3543 does have
warm molecular gas close to the central sources. However, the
outflow components traced by SiO are on average at lower temperatures
(probably of the order of 30\,K, e.g., \citealt{cabrit1990}), which may
be the cause of the non-detection in IRAS\,05358+3543. Furthermore, the
critical density of the SiO(8--7) line is about two orders of
magnitude higher than that of the (2--1) transition. Hence, the
density structure of the core may cause the (8--7) non-detection in
IRAS\,05358+3543 as well.
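The critical-density part of this argument can be checked with a rough scaling estimate. The sketch below assumes a rigid linear rotor with a fixed dipole moment, so that $A_{J\to J-1}\propto\nu^3\,J/(2J+1)$, and roughly constant collisional rate coefficients; the frequencies are approximate rest values.

```python
# Scaling estimate for the SiO critical-density ratio, n_crit = A_ul / gamma.
# Assumes A(J->J-1) ~ nu^3 * J/(2J+1) (rigid rotor, fixed dipole moment) and
# roughly constant collisional rate coefficients gamma.
NU_21 = 86.85e9    # SiO(2-1) rest frequency, Hz (approximate)
NU_87 = 347.33e9   # SiO(8-7) rest frequency, Hz (approximate)

def a_scaling(nu, j_up):
    """Relative Einstein A coefficient of the J_up -> J_up - 1 transition."""
    return nu**3 * j_up / (2 * j_up + 1)

ncrit_ratio = a_scaling(NU_87, 8) / a_scaling(NU_21, 2)
print(f"n_crit(8-7) / n_crit(2-1) ~ {ncrit_ratio:.0f}")
```

The estimate gives a ratio of $\sim$75, close to the two orders of magnitude quoted above.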
While the rarer $^{30}$SiO(8--7) isotopologue is detected toward
Orion-KL with a strength nearly comparable to that of the main
isotopologue (Fig.~\ref{sample_spectra} and \citealt{beuther2005a}),
we do not detect it at all in any of the other sources.
Somewhat surprisingly, the H$_2$CS line at 338.081\,GHz is detected
toward Orion-KL, G29.96 as well as the lowest luminosity source
IRAS\,05358+3543; however, it remains undetected toward the more
luminous HMPO IRAS\,23151+5912. We are currently lacking a good
explanation for this phenomenon because H$_2$CS is predicted by most
chemistry networks as a parent molecule to be found early in the
evolutionary sequence (e.g., \citealt{nomura2004}). The sulphur and
nitrogen chemistries are also peculiar in this sample, and we outline
some examples below.
\subsubsection{Sulphur chemistry}
\label{c34s}
The rare carbon-sulphur isotopologue C$^{34}$S is detected toward all
four regions (Fig.~\ref{sample_spectra}). However, as shown in
Fig.~\ref{c34s_sample} C$^{34}$S does not peak toward the main submm
continuum peaks but is offset at the edge of the core. In the cases of
G29.96 and IRAS\,23151+5912 the C$^{34}$S morphology appears to wrap
around the main submm continuum peaks. Toward IRAS\,05358+3543
C$^{34}$S is also weak toward the strongest submm peak (at the eastern
edge of the image) but shows the strongest C$^{34}$S emission features
offset from a secondary submm continuum source (in the middle of the
image). Toward Orion-KL, weak C$^{34}$S emission is detected in the
vicinity of the hot core peak, whereas we find strong C$^{34}$S
emission peaks offset from the dust continuum emission.
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f5.eps}
\caption{C$^{34}$S(7--6) images toward the four massive star-forming
regions. Positive and negative features are shown as solid and
dashed contours. The contouring is done from 15 to 95\% (step 10\%)
of the peak emission, and the peak emission values are 7.5, 0.9, 1.0
and 0.6\,Jy\,beam$^{-1}$ from left to right, respectively. The
integration regimes for the four sources are [-2,14], [92,104],
[-58,-51] and [-17,-13]\,km\,s$^{-1}$. The grey-scale with dotted
contours shows the submm continuum emission from
Fig.~\ref{cont_sample}. This figure is an adaptation from the papers
by \citet{beuther2005a,beuther2007d,beuther2007f,leurini2007}. The
axes are labeled in R.A. (J2000) and Dec. (J2000). The spatial
resolution is listed in Table \ref{resolution}.}
\label{c34s_sample}
\end{figure*}
To check whether our optically thin assumption from \S\ref{column_para}
is valid also for C$^{34}$S, we ran LVG radiative transfer models
(RADEX, \citealt{vandertak2007}). We started with the H$_2$ column
densities from Table \ref{source_parameters} and assumed a typical
CS/H$_2$ abundance of $10^{-8}$ with a terrestrial CS/C$^{34}$S ratio
of 23 \citep{wannier1980}. Above the critical density of $2\times
10^7$\,cm$^{-3}$, with the given broad C$^{34}$S spectral FWHM
(between 5 and 12\,km\,s$^{-1}$ for the four sources), the
C$^{34}$S(7--6) emission is indeed optically thin. Hence, optical depth
effects are not causing these large offsets. Furthermore, since the
line intensities depend not only on the excitation but more strongly
on the gas column densities, excitation effects alone, as quantified in
\S\ref{excitation_effects}, cannot cause the observed offsets
either. Therefore, chemical evolution may be more important. A likely
scenario is based on different desorption temperatures of molecules
from dust grains (e.g., \citealt{viti2004}): CS and C$^{34}$S are
desorbed from grains at temperatures of a few 10\,K, and at such
temperatures, these molecules are expected to be well correlated with
the dust continuum emission. Warming up further, at 100\,K H$_2$O
desorbs and then dissociates to OH. The OH quickly reacts with the
sulphur, forming SO and SO$_2$, which will then be centrally peaked (see
\S\ref{modeling}). Toward G29.96, IRAS\,23151+5912 and
IRAS\,05358+3543 we find that the SO$_2$ emission is centrally peaked
toward the main submm continuum peaks, confirming the above outlined
chemical scenario (Fig.~\ref{so2_sample} \&
\S\ref{modeling}\footnote{For G29.96 the SO$_2$ peaks are actually
right between the better resolved submm continuum sources. This is
due to the lower resolution of the line data: smoothing the
continuum to the same spatial resolution, the continuum sources peak
very close to the SO$_2$ emission peaks (see, e.g., Fig.~2 in
\citealt{beuther2007d}).}). To further investigate the
C$^{34}$S/SO$_2$ differences, we produced column density ratio maps
between C$^{34}$S and SO$_2$ for G29.96, IRAS\,23151+5912 and
IRAS\,05358+3543, assuming local thermodynamic equilibrium and
optically thin emission (Fig.~\ref{ratio}). Since SO$_2$ is observed
toward the submm peak positions whereas C$^{34}$S is seen more toward
the core edges, for the column density calculations we assumed the
temperatures given in Table \ref{source_parameters} for SO$_2$ and
half that temperature for C$^{34}$S. Although the absolute ratio values
are highly uncertain because of the different spatial filtering
properties of the two molecules, the column density ratio maps behave
qualitatively as expected: they have the lowest values in the vicinity
of the submm continuum sources and increased values at the core edges. The
case is less clear for Orion-KL, which shows the strongest SO$_2$
emission toward the south-western region called the compact ridge. Since
this compact ridge is believed to be caused by the interaction of a
molecular outflow with the ambient dense gas (e.g., \citealt{liu2002})
and SO$_2$ is known to be enriched by shock interactions with
outflows, this shock-outflow interaction may dominate in Orion-KL
compared with the above discussed C$^{34}$S/SO$_2$ scenario. For more
details on the chemical evolution see the modeling in
\S\ref{modeling}.
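The construction of such a ratio map can be sketched as follows, again under LTE and optically thin emission. All spectroscopic constants and partition-function values in this sketch are placeholder numbers chosen for illustration; they are not the inputs used for Fig.~\ref{ratio}, and the SO$_2$ entries in particular are assumptions.

```python
import numpy as np

# cgs constants
K_B, H, C = 1.380649e-16, 6.62607e-27, 2.99792458e10

def lte_column_map(w, nu, a_ul, g_u, e_u, t, q):
    """Optically thin LTE column-density map (cm^-2).
    w: 2D integrated-intensity map in K km/s (Rayleigh-Jeans)."""
    n_u = 8.0 * np.pi * K_B * nu**2 / (H * C**3 * a_ul) * (w * 1.0e5)
    return n_u * q / g_u * np.exp(e_u / t)

def c34s_so2_ratio_map(w_c34s, w_so2, t_so2,
                       c34s=(337.396e9, 8.4e-4, 15, 65.0),  # nu, A, g_u, E_u/k
                       so2=(338.306e9, 3.0e-4, 37, 200.0),  # placeholder values
                       q_c34s=90.0, q_so2=3000.0):          # placeholder Q(T)
    """Ratio map N(C34S)/N(SO2) from two integrated-intensity maps."""
    t_c34s = t_so2 / 2.0  # half the SO2 temperature, as adopted in the text
    n1 = lte_column_map(w_c34s, *c34s[:3], c34s[3], t_c34s, q_c34s)
    n2 = lte_column_map(w_so2, *so2[:3], so2[3], t_so2, q_so2)
    return n1 / n2

# Synthetic example maps (K km/s)
ratio = c34s_so2_ratio_map(np.full((8, 8), 10.0), np.full((8, 8), 5.0), t_so2=100.0)
print(f"median ratio ~ {np.median(ratio):.2e}")
```

The absolute scale of the resulting map inherits all the uncertainties discussed above; only the relative spatial variations are meaningful.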
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f6.eps}
\caption{SO$_2(18_{4,14}-18_{3,15})$ images toward the four massive
star-forming regions. Positive and negative features are shown as
solid and dashed contours. The contouring is done from 15 to 95\%
(step 10\%) of the peak emission, and the peak emission values are
7.5, 0.9, 1.0 and 0.6\,Jy\,beam$^{-1}$ from left to right,
respectively. Only for IRAS\,05358+3543 does the contouring start at
the 35\% level because of a worse signal-to-noise ratio. The integration
regimes for the four sources are [0,20], [94,100], [-60,-50] and
[-17,-13]\,km\,s$^{-1}$. The grey-scale with dotted contours shows
the submm continuum emission from Fig.~\ref{cont_sample}. This
figure is an adaptation from the papers by
\citet{beuther2005a,beuther2007d,beuther2007f,leurini2007}. The axes
are labeled in R.A. (J2000) and Dec. (J2000). The spatial
resolution is listed in Table \ref{resolution}.}
\label{so2_sample}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[angle=-90,width=\textwidth]{f7.eps}
\caption{Column density ratios of C$^{34}$S over SO$_2$ toward
G29.96, IRAS\,23151+5912 and IRAS\,05358+3543 are shown in
grey-scale. The contours present the corresponding submm continuum
maps as shown in the previous figures. For IRAS\,05358+3543 we zoom
into the central region only, to better show the ratio variations.}
\label{ratio}
\end{figure*}
\subsubsection{Nitrogen chemistry}
\label{n}
It is intriguing that we do not detect any nitrogen-bearing molecule
toward the two younger HMPOs IRAS\,23151+5912 and IRAS\,05358+3543
(Fig.~\ref{sample_spectra} and Table \ref{linelistall}). This already
indicates that the nitrogen chemistry needs warmer gas to initiate or
requires more time to proceed. To get an idea about the more subtle
variations of the nitrogen chemistry, one may compare some specific
line pairs: For example, the HN$^{13}$C/CH$_3$CH$_2$CN line blend
(dominated by HN$^{13}$C) and the SO$_2$ line between 348.3 and
348.4\,GHz are of similar strength in the HMC Orion-KL
(Fig.~\ref{sample_spectra}). The same is approximately true for the
HMC G29.96, although SO$_2$ is relatively speaking a bit weaker there.
The more interesting differences arise if one contrasts these with the
younger sources. Toward the $10^5$\,L$_{\odot}$ early-HMPO
IRAS\,23151+5912, we detect only the SO$_2$ line, while the
HN$^{13}$C/CH$_3$CH$_2$CN line blend remains undetected in this
source. In the lower luminosity early-HMPO IRAS\,05358+3543, neither
line is detected, although another SO$_2$ line at 338.3\,GHz is
detected there.
Judging from these line ratios, one can infer that SO$_2$ is
relatively easy to excite early-on in the evolution of high-mass
star-forming regions. Other sulphur-bearing molecules like H$_2$CS or
CS are released even earlier from the grains, but SO-type molecules
are formed quickly (e.g., \citealt{charnley1997,nomura2004}). In
contrast to this, the non-detection of spectral lines like the
HN$^{13}$C/CH$_3$CH$_2$CN line blend in the early-HMPOs indicates that
the formation and excitation of such nitrogen-bearing species take
place in a more evolved evolutionary phase. This may either be due to
molecule-selective, temperature-dependent gas-dust desorption processes
or to chemical network reactions requiring higher temperatures.
Furthermore, simulations of chemical networks show that the complex
nitrogen chemistry simply needs more time to be activated (e.g.,
\citealt{charnley2001,nomura2004}). Recent modeling by
\citet{garrod2006} indicates that the gradual switch-on phase of hot
molecular cores is an important evolutionary stage to produce complex
molecules. In this picture, the HMCs have switched on their heating
sources earlier and hence had more time to form these nitrogen
molecules.
Comparing just the two HMCs, we find that the CH$_3$CH$_2$CN line at
348.55\,GHz is strong in Orion-KL but not detected in G29.96. Since
G29.96 also exhibits fewer vibrationally-torsionally excited CH$_3$OH
lines, it is likely on average still at lower temperatures than
Orion-KL and may hence not yet have formed all the complex
nitrogen-bearing molecules already present in Orion-KL.
\subsubsection{Modeling}
\label{modeling}
To examine the chemical evolution of warm regions in greater detail we
used the chemical model of \citet{bergin1997}. This model includes
gas-phase chemistry and the freeze-out onto and sublimation from grain
surfaces. The details of this model are provided in that paper, with
the only addition being the inclusion of water ice formation on grain
surfaces. The binding energies we have adopted are for bare silicate
grains, except for water ice which is assumed to have a binding energy
appropriate for hydrogen bonding between frozen water molecules
\citep{fraser2001}. To explore the chemistry of these hot evaporative
regions we have run the model for 10$^6$\,yrs with starting conditions
at $n_{\rm H_2} = 10^6$\,cm$^{-3}$ and
$T_{\rm{gas}}=T_{\rm{dust}}=20$\,K. Under these conditions most
gaseous molecules, excluding H$_2$, will freeze onto the grain surface
and the ice mantle forms, dominated by H$_2$O. This timescale
(10$^5$\,yrs) is quite short, but it is longer than the free-fall time
at this density and is chosen as a representative time that gas might
spend at very high density. After completion we assume that a massive
star forms and the gas and dust temperature is raised such that the
ice mantle evaporates and the gas-phase chemistry readjusts. We have
made one further adjustment to this model. Our data suggest that HNC
(a representative nitrogen-bearing species) is not detected in
early-HMPOs and that the release of this species (or its pre-cursor)
from the ice occurs during more evolved and warmer stages. Laboratory
data on ice desorption suggest that the process is not as simple as
generally used in models where a given molecule evaporates at its
sublimation temperature \citep{collings2004}. Rather, some species
co-desorb with water ice, and the key nitrogen-bearing species, NH$_3$,
falls into this category (see also \citealt{viti2004}). We have
therefore assumed that the ammonia evaporates at the same temperature
as water ice. Our initial abundances are taken from
\citet{aikawa1996}, except we assume that 50\% of the nitrogen is
frozen in the form of NH$_3$ ice. This assumption is consistent with
two sets of observations. First, detections of NH$_3$ in ices towards
YSOs find abundances of $\sim$2--7\% relative to H$_2$O
\citep{bottinelli2007}, and \citet{dartois2002} find a limit of $\sim$5\%
relative to H$_2$O towards other YSOs. Our assumed abundance of
NH$_3$ ice is $5\times 10^{-6}$ relative to H$_2$; assuming a water ice
abundance of 10$^{-4}$ (as appropriate for cold sources), this provides
an ammonia ice abundance of $\sim$5\% relative to H$_2$O, which is
consistent with the ice observations. Second, high resolution ammonia
observations often find high abundances of NH$_3$ in the gas phase
towards hot cores, sometimes as high as 10$^{-5}$ relative to H$_2$
(e.g., \citealt{cesaroni1994}).
Pure gas-phase chemistry would have difficulty producing such a high
abundance in the cold phase, and we therefore assume it is made on
grains during the cold phase.
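The comparison between the cold-phase duration and the free-fall time can be made explicit. This is a simple consistency check, assuming a mean mass per H$_2$ molecule of $\mu\approx2.8$ (hydrogen plus helium):

```python
import math

# Free-fall time t_ff = sqrt(3*pi / (32 G rho)) at the model's starting density,
# with rho = mu * m_H * n_H2 and mu ~ 2.8 (assumed mass per H2, including He).
G    = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
M_H  = 1.6726e-24   # hydrogen mass, g
MU   = 2.8          # mean molecular mass per H2 (assumed)
YEAR = 3.156e7      # s

n_h2 = 1.0e6                      # cm^-3, starting density of the model
rho  = MU * M_H * n_h2            # g cm^-3
t_ff_yr = math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YEAR

print(f"t_ff ~ {t_ff_yr:.1e} yr")  # a few 10^4 yr, i.e. shorter than 10^5 yr
```

At $n_{\rm H_2}=10^6$\,cm$^{-3}$ this gives a few $10^4$\,yr, confirming that the adopted cold-phase duration exceeds the free-fall time.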
\begin{figure*}[htb]
\includegraphics[width=\textwidth]{f8.eps}
\caption{The left and right columns show results from our modeling
of the ``warm'' and ``hot'' models, respectively. The two top panels
show the abundance ratios between several species important for
our observations versus time. The bottom panels present the
abundances relative to H$_2$. Time in this figure refers to time
after the onset of massive star formation; hence we do not show
any evolution during the cold starless phase.}
\label{model}
\end{figure*}
Figure \ref{model} presents our chemical model results at the point of
stellar ``turn on'', where the gas and dust become warmer and the ices
evaporate. Two different models were explored. The first, labeled
as $T_{\rm dust}=70$\,K (``warm model''), is a model in which the
temperature is insufficient to evaporate water ice, while the second,
the ``hot model'' with $T_{\rm dust}=150$\,K, evaporates the entire ice
mantle. Much of the chemical variation amongst the early-HMPO and HMC
phases is found for CS, SO, SO$_2$, and HNC (note this line is blended
with CH$_3$CH$_2$CN), and we focus on these species in our plots (along
with H$_2$O). It is also important to reiterate that due to differences in
spatial sampling between the different sources as well as between line
and continuum emission, we cannot derive accurate abundances from
these data, but rather can attempt to use the models to explain
trends. For the warm model, the main result is that the imbalance
created in the chemistry by the release of most species, with ammonia
and water remaining as ices on the grains, leads to enhanced
production of CS. Essentially, the removal of oxygen from the gas
(excluding the O found in CO) allows for rapid CS production from the
atomic sulphur frozen onto grains during the cold phase. Thus, for
early-HMPOs, which might not have a large fraction of gas above the
water sublimation point ($T\sim110$\,K), the ratios of CS to other
species are quite large. In the hot model, when the temperature can
evaporate both H$_2$O and NH$_3$ ice, ratios between the same
molecules are orders of magnitude lower (Fig.\,\ref{model}). There is
a large jump in the water vapor abundance (and that of NH$_3$, which is
not shown) between the warm and the hot model, driving the chemistry in
new directions. CS remains in the gas, but not with as high an abundance
as in the warm phase, and is gradually eroded into SO and ultimately
SO$_2$. Hence, SO$_2$ appears to be a better tracer of more evolved
stages. In this sense, even the early-HMPOs can be considered as
relatively evolved, and it will be important to extend similar studies
to even earlier evolutionary stages. HNC also appears as a brief
intermediate product of the nitrogen chemistry.
The above picture is in qualitative agreement with our observations:
CS should be a good tracer in early evolutionary stages even
prior to our observed sample, but it is less well suited for more
evolved regions like those studied here. Other sulphur-bearing
molecules, in particular SO$_2$, appear to better trace the warmest
gas near the forming star. HNC, and perhaps other nitrogen-bearing
species, are better tracers when the gas is warm enough to evaporate a
significant amount of the ice mantle.
\subsection{Searching for disk signatures}
\label{disks}
While the chemical evolution of massive star-forming regions is
interesting in itself, one also wants to use the different
characteristics of molecular lines as tools to trace various physical
processes. While molecules like CO or SiO have regularly been used to
investigate molecular outflows (e.g., \citealt{arce2006}), the problem
of identifying the right molecular tracer to investigate accretion
disks in massive star formation is much more severe (e.g.,
\citealt{cesaroni2006}). Major observational obstacles arise from the
fact that disk-tracing molecular lines are often not
unambiguously found only in the accretion disk, but that other
processes can produce such line emission as well. For example,
molecular lines from CN and HCN are high-density tracers and were
believed to be good candidates to investigate disks in embedded
protostellar sources (e.g., \citealt{aikawa2001}). However,
observations revealed that both spectral lines are strongly affected
by the associated molecular outflow and hence difficult to use for
massive disk studies (CN \citealt{beuther2004e}, HCN
\citealt{zhang2007}).
As presented in \citet{cesaroni2006}, various different molecules have
in the past been used to investigate disks/rotational signatures in
massive star formation (e.g., CH$_3$CN, C$^{34}$S, NH$_3$, HCOOCH$_3$,
C$^{17}$O, H$_2^{18}$O, see also Fig.~\ref{disk_examples}). The data
presented here add three other potential disk tracers (HN$^{13}$C and
HC$_3$N for G29.96, \citealt{beuther2007d}, and torsionally excited
CH$_3$OH for IRAS\,23151+5912 and IRAS\,05358+3543
\citealt{beuther2007f,leurini2007}, Fig.~\ref{disk_examples}). An
important point to note is that in most sources only one or the other
spectral line exclusively allows a study of rotational motions,
whereas other lines apparently do not trace the warm disks. For
example, C$^{34}$S traces the Keplerian motion in the young source
IRAS\,20126+4104 whereas it does not trace the central protostars at
all in the sources presented here (Fig.~\ref{c34s_sample}). On the
contrary, HN$^{13}$C shows rotational signatures in the HMC G29.96,
but it remains completely undetected in the younger early-HMPO sources
of our sample (Figs.~\ref{disk_examples} \& \ref{sample_spectra}). As
discussed in sections \ref{c34s} and \ref{n}, this implies that,
depending on the chemical evolution, molecules like C$^{34}$S should
be better suited for disk studies at very early evolutionary stages,
whereas complex nitrogen-bearing molecules are promising in more
evolved hot-core-type sources. While the chemical evolution is
important for these molecules, temperature effects have to be taken
into account as well. For example, the torsionally excited CH$_3$OH
line traces rotational motions in IRAS\,23151+5912 and IRAS\,05358+3543
(\citealt{beuther2007f,leurini2007}, Fig.~\ref{disk_examples}) but it
is weak and difficult to detect in colder and younger sources.
Therefore, some initial heating is required to employ highly excited
lines for kinematic studies. In addition to these evolutionary
effects, optical depth is important for many lines. For example,
\citet{cesaroni1997,cesaroni1999} have shown that CH$_3$CN traces the
rotating structure in IRAS\,20126+4104, whereas the same molecule does
not indicate any rotation in IRAS\,18089-1732 \citep{beuther2005c}. A
likely explanation for the latter is high optical depth of the
CH$_3$CN submm lines \citep{beuther2005c}. Yet other molecules are
excited in the accretion disks as well as in the surrounding envelope,
causing confusion when one tries to disentangle the various physical
components. {\it In summary, obtaining a chemically rich spectral line
census like the ones presented here shows that one can find several
disk-tracing molecules in different sources, but it also implies
that some previously assumed good tracers are not necessarily
universally useful.}
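For rotational structures such as the Keplerian motion in IRAS\,20126+4104, the expected velocity scale follows from $v=\sqrt{GM/r}$. The sketch below uses generic, hypothetical numbers for the enclosed mass and radius, not fits to any source discussed here:

```python
import math

# Keplerian velocity v = sqrt(G M / r); mass and radius are illustrative
# hypothetical values for a massive protostar, not derived from the data.
G     = 6.674e-8    # cm^3 g^-1 s^-2
M_SUN = 1.989e33    # g
AU    = 1.496e13    # cm

def v_kepler_kms(mass_msun, radius_au):
    """Keplerian orbital velocity in km/s."""
    return math.sqrt(G * mass_msun * M_SUN / (radius_au * AU)) / 1.0e5

v_kms = v_kepler_kms(10.0, 1000.0)
print(f"v_kep(10 Msun, 1000 AU) ~ {v_kms:.1f} km/s")  # a few km/s
```

Velocity gradients of a few km\,s$^{-1}$ over $\sim$1000\,AU scales are thus the signature one searches for in position-velocity diagrams.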
\begin{figure*}[htb]
\includegraphics[angle=-90,width=6.5cm]{f9a.ps}
\includegraphics[angle=-90,width=4.5cm]{f9b.eps}
\includegraphics[angle=-90,width=5.0cm]{f9c.ps}\\
\hspace*{4.0cm}\includegraphics[angle=-90,width=4.5cm]{f9d.eps}
\includegraphics[angle=-90,width=5.2cm]{f9e.eps}
\caption{Examples of rotation-tracing molecules: top-left: C$^{34}$S
in IRAS\,20126+4104 \citep{cesaroni1999,cesaroni2005}, top-middle:
HCOOCH$_3$ in IRAS\,18089-1732 \citep{beuther2005c}, top-right:
H$_2^{18}$O in AFGL2591 \citep{vandertak2006}, bottom-left: CH$_3$OH
$v_t=1$ in IRAS\,23151+5912 \citep{beuther2007f}, bottom-right:
HN$^{13}$C in G29.96 \citep{beuther2007d}.}
\label{disk_examples}
\end{figure*}
The advent of broad bandpass interferometers like the SMA now
fortunately allows many molecular lines to be observed simultaneously.
This way, one often finds a suitable rotation-tracing molecule in an
observational spectral setup. Nevertheless, one has to keep in mind
the chemical and physical complexity in such regions, and it is likely
that in many cases only combined modeling of infall, rotation and
outflow will disentangle the accretion disk from the rest of the
star-forming gas and dust core.
\section{Conclusion and Summary}
We compiled a sample of four massive star-forming regions in different
evolutionary stages with varying luminosities that were observed in
exactly the same spectral setup at high angular resolution with the
SMA. We estimated column densities for all sources and detected
species, and we compared the spatial distributions of the molecular
gas. This allows us to start investigating chemical evolutionary
effects also in a spatially resolved manner. Chemical modeling was
conducted to explain our observations in more detail.
A general result from this comparison is that many different physical
and chemical processes are important to produce the complex chemical
signatures we observe. While some features, e.g., the non-detection of
the rich vibrationally-torsionally excited CH$_3$OH line forest toward
the two early-HMPOs, can be explained by on average lower temperatures
of the molecular gas compared to the more evolved HMCs, other
observational characteristics require chemical evolutionary sequences
caused by heating, including grain-surface and gas-phase reactions.
Still other features are better explained by shock-induced
chemical networks.
The rare isotopologue C$^{34}$S is usually not detected right toward
the main submm continuum peaks, but rather at the edge of the
star-forming cores. This may be explained by temperature-selective
gas-desorption processes and successive gas chemistry networks.
Furthermore, we find some nitrogen-bearing molecular lines to be only
present in the HMCs, whereas they remain undetected at earlier
evolutionary stages. This indicates that the formation and excitation
of many nitrogen-bearing molecules needs considerably higher
temperatures and/or more time during the warm-up phase of the HMC,
perhaps relating to the fact that NH$_3$ is bonded within the water
ice mantle. Although the statistical database is still too poor to
set tighter constraints, these observations indicate how one can use
the presence {\it and} morphology of various molecular lines
to identify and study different (chemical) evolutionary sequences.
Furthermore, we discussed the observational difficulty of
unambiguously using one or the other spectral line as a tracer of
massive accretion disks. While some early spectral line candidates
have by now been discarded for such studies (e.g., CN), in many other
sources we find different lines exhibiting rotational velocity
signatures. The observational feature that in most sources apparently
only one or the other spectral line exclusively traces the desired
structures likely has to be attributed to a range of effects. (1)
Chemical effects, where for example C$^{34}$S may work in the youngest
sources whereas some nitrogen-bearing molecules like HN$^{13}$C are
better in typical HMCs. (2) Confusion from multiple gas components,
mainly outflows, infall from the envelope and rotation. (3) High
optical depth of many molecular lines. This implies that for future
statistical studies we have to select spectral setups that comprise
many molecular lines from various species. This way, one has a good
chance of identifying the right molecular tracer for each source
separately, and hence of still drawing statistically significant
conclusions.
To advance in this field and to become more quantitative, different
steps are necessary. First of all, we need to establish a larger
database of more sources at different evolutionary stages, in
particular even younger sources, as well as with varying luminosities
to better characterize the differences and similarities. From an
observational and technical point of view, although the presented data
are state-of-the-art multi-wavelength and high angular resolution
observations, the quantitative interpretation is still hampered by the
spatial filtering of the interferometer. To become more quantitative,
it is therefore necessary to complement such data with the missing
short spacing information. While we have high angular resolution in
all datasets with a similar baseline coverage and hence similarly
covered angular scales, the broad range of distances causes a
different coverage of sampled linear spatial scales. Hence the
missing short spacings affect each dataset in a different fashion,
which is currently the main limiting factor for a better quantitative
interpretation of the data. Therefore, obtaining single-dish
observations in the same spectral setup and then combining them with
the SMA observations is a crucial step to derive more reliable column
densities and from that abundances. These parameters then can be used
by theorists to better model the chemical networks, explain the
observations and predict other suitable molecules for, e.g., massive
disk studies.
\begin{acknowledgements}
H.B.~acknowledges financial support by the Emmy-Noether-Program of the
Deutsche Forschungsgemeinschaft (DFG, grant BE2578).
\end{acknowledgements}
\input{ms.bbl}
\begin{longtable}{lrrcccc}
\caption{Line peak intensities $S_{\rm{peak}}$ and upper state
energy levels $E_u/k$ from spectra toward peak positions of the
respective massive star-forming regions (Fig.~\ref{sample_spectra}).}\\
\hline \hline
Freq. & Line & $E_u/k$ & $S_{\rm{peak}}$ & $S_{\rm{peak}}$ & $S_{\rm{peak}}$ & $S_{\rm{peak}}$ \\
(GHz) & & (K) & (Jy) & (Jy) & (Jy) & (Jy)\\
& & & Orion & G29.96 & 23151 & 05358 \\
\hline
337.252 & CH$_3$OH$(7_{3,5}-6_{3,4})$A($v_t$=2) & 739 & 6.0 \\
337.274 & CH$_3$OH$(7_{4,3}-6_{4,2})$A($v_t$=2) & 695 & 7.4 \\
337.279 & CH$_3$OH$(7_{2,5}-6_{2,4})$E($v_t$=2) & 727 & 5.4 \\
337.284 & CH$_3$OH$(7_{0,7}-6_{0,6})$A($v_t$=2) & 589 & 9.9 \\
337.297 & CH$_3$OH$(7_{1,7}-6_{1,6})$A($v_t$=1) & 390 & 10.6 & 1.7 \\
337.312 & CH$_3$OH$(7_{1,6}-6_{1,5})$E($v_t$=2) & 613 & 9.2 \\
337.348 & CH$_3$CH$_2$CN$(38_{3,36}-37_{3,35})$ & 328 & 14.2 & 1.5 & & \\
337.397 & C$^{34}$S(7--6) & 65 & 12.6 & 2.0 & 0.3 & 0.6 \\
337.421 & CH$_3$OCH$_3(21_{2,19}-20_{3,18})$ & 220 & 3.0 & 0.6 \\
337.446 & CH$_3$CH$_2$CN$(37_{4,33}-36_{4,32})$ & 322 & 11.2 & 0.8 \\
337.464 & CH$_3$OH$(7_{6,1}-6_{0,0})$A($v_t$=1) & 533 & 7.2 & 1.1 \\
337.474 & UL & & 4.9 & 0.6 \\
337.490 & HCOOCH$_3(27_{8,20}-26_{8,19})$E & 267 & 6.3 & 0.7 \\
337.519 & CH$_3$OH$(7_{5,2}-6_{5,2})$E($v_t$=1) & 482 & 8.1 & 1.0 \\
337.546 & CH$_3$OH$(7_{5,3}-6_{5,2})$A($v_t$=1) & 485 & 10.0$^b$ & 1.4$^b$ & & \\
& CH$_3$OH$(7_{5,2}-6_{5,1})$A$^{-}$($v_t$=1) & 485 & 10.0$^b$ & 1.4$^b$ & & \\
337.582 & $^{34}$SO$(8_8-7_7)$ & 86 & 12.2 & 2.0 & 1.1 &\\
337.605 & CH$_3$OH$(7_{2,5}-6_{2,4})$E($v_t$=1) & 429 & 9.7 & 2.4 & & \\
337.611 & CH$_3$OH$(7_{6,1}-6_{6,0})$E($v_t$=1) & 657 & 6.2$^b$ & 2.0$^b$ & & \\
& CH$_3$OH$(7_{3,4}-6_{3,3})$E($v_t$=1) & 388 & 6.2$^b$ & 2.0$^b$ & & \\
337.626 & CH$_3$OH$(7_{2,5}-6_{2,4})$A($v_t$=1) & 364 & 11.0 & 1.9 & & \\
337.636 & CH$_3$OH$(7_{2,6}-6_{2,5})$A$^-$($v_t$=1) & 364 & 8.2 & 2.5 & & \\
337.642 & CH$_3$OH$(7_{1,7}-6_{1,6})$E($v_t$=1) & 356 & 10.9$^b$ & 2.9$^b$ & 0.6$^b$ & 1.1$^b$\\
337.644 & CH$_3$OH$(7_{0,7}-6_{0,6})$E($v_t$=1) & 365 & 10.9$^b$ & 2.9$^b$ & 0.6$^b$ & 1.1$^b$\\
337.646 & CH$_3$OH$(7_{4,3}-6_{4,2})$E($v_t$=1) & 470 & 10.9$^b$ & 2.9$^b$ & 0.6$^b$ & 1.1$^b$ \\
337.648 & CH$_3$OH$(7_{5,3}-6_{5,2})$E($v_t$=1) & 611 & 10.9$^b$ & 2.9$^b$ & 0.6$^b$ & 1.1$^b$ \\
337.655 & CH$_3$OH$(7_{3,5}-6_{3,4})$A($v_t$=1) & 461 & 10.8$^b$ & 2.0$^b$ & & \\
& CH$_3$OH$(7_{3,4}-6_{3,3})$A$^-$($v_t$=1) & 461 & 10.8$^b$ & 2.0$^b$ & & \\
337.671 & CH$_3$OH$(7_{2,6}-6_{2,5})$E($v_t$=1) & 465 & 10.2 & 2.1 & & \\
337.686 & CH$_3$OH$(7_{4,3}-6_{4,2})$A($v_t$=1) & 546 & 9.5$^b$ & 2.0$^b$ & & \\
& CH$_3$OH$(7_{4,4}-6_{4,3})$A$^-$($v_t$=1) & 546 & 9.5$^b$ & 2.0$^b$ & & \\
& CH$_3$OH$(7_{5,2}-6_{5,1})$E($v_t$=1) & 494 & 9.5$^b$ & 2.0$^b$ & & \\
337.708 & CH$_3$OH$(7_{1,6}-6_{1,5})$E($v_t$=1) & 489 & 7.9 & 1.8 & & \\
337.722 & CH$_3$OCH$_3(7_{4,4}-6_{3,3})$EE & 48 & & 0.9 & & \\
337.732 & CH$_3$OCH$_3(7_{4,3}-6_{3,3})$EE & 48 & & 1.4 & &\\
337.749 & CH$_3$OH$(7_{0,7}-6_{0,6})$A($v_t$=1) & 489 & 8.7 & 1.9 & & \\
337.778 & CH$_3$OCH$_3(7_{4,4}-6_{3,4})$EE & 48 & & 1.3 & &\\
337.787 & CH$_3$OCH$_3(7_{4,3}-6_{3,4})$AA & 48 & & 1.4 & &\\
337.825 & HC$_3$N$(37-36)v_7=1$ & 629 & 14.8 & 1.4 & & \\
337.838 & CH$_3$OH$(20_{6,14}-21_{5,16})$E & 676 & 5.6 & 1.1 & & \\
337.878 & CH$_3$OH$(7_{1,6}-6_{1,5})$A($v_t$=2) & 748 & 2.7 & 0.6 & & \\
337.969 & CH$_3$OH$(7_{1,6}-6_{1,5})$A($v_t$=1) & 390 & 12.0 & 2.1 & & \\
338.081 & H$_2$CS$(10_{1,10}-9_{1,9})$ & 102 & 5.8 & 2.3 & & 0.5 \\
338.125 & CH$_3$OH$(7_{0,7}-6_{0,6})$E & 78 & 6.9 & 2.8 & 1.4 & 1.9 \\
338.143 & CH$_3$CH$_2$CN$(37_{3,34}-36_{3,33})$ & 317 & 14.4 & 0.9 & & \\
338.214 & CH$_2$CHCN$(37_{1,37}-36_{1,36})$ & 312 & 4.0 \\
338.306 & SO$_2(18_{4,1}-18_{3,1})$ & 197 & x$^c$ & 0.8 & 1.2 & 0.7 \\
338.345 & CH$_3$OH$(7_{1,7}-6_{1,6})$E & 71 & 13.4 & 2.1 & 1.3 & 2.3\\
338.405 & CH$_3$OH$(7_{6,2}-6_{6,1})$E & 244 & 13.1$^b$ & 3.0$^b$ & & \\
338.409 & CH$_3$OH$(7_{0,7}-6_{0,6})$A & 65 & 13.1$^b$ & 3.0$^b$ & 1.5 & 2.4\\
338.431 & CH$_3$OH$(7_{6,1}-6_{6,0})$E & 254 & 9.5 & 1.8 & & \\
338.442 & CH$_3$OH$(7_{6,1}-6_{6,0})$A & 259 & 11.7$^b$ & 2.7$^b$ & & 0.5$^b$ \\
& CH$_3$OH$(7_{6,2}-6_{6,1})$A$^-$ & 259 & 11.7$^b$ & 2.7$^b$ & & 0.5$^b$ \\
338.457 & CH$_3$OH$(7_{5,2}-6_{5,1})$E & 189 & 8.9 & 2.0 & 0.4 & 0.6 \\
338.475 & CH$_3$OH$(7_{5,3}-6_{5,2})$E & 201 & 12.6 & 2.5 & 0.5 & \\
338.486 & CH$_3$OH$(7_{5,3}-6_{5,2})$A & 203 & 9.4$^b$ & 2.3$^b$ & 0.8$^b$ & 0.8$^b$ \\
& CH$_3$OH$(7_{5,2}-6_{5,1})$A$^-$ & 203 & 9.4$^b$ & 2.3$^b$ & 0.8$^b$ & 0.8$^b$ \\
338.504 & CH$_3$OH$(7_{4,4}-6_{4,3})$E & 153 & 8.5 & 2.7 & 0.7 & 0.7 \\
338.513 & CH$_3$OH$(7_{4,4}-6_{4,3})$A$^-$ & 145 & 13.7$^b$ & 2.8$^b$ & 1.4$^b$ & 1.3$^b$ \\
& CH$_3$OH$(7_{4,3}-6_{4,2})$A & 145 & 13.7$^b$ & 2.8$^b$ & 1.4$^b$ & 1.3$^b$ \\
& CH$_3$OH$(7_{2,6}-6_{2,5})$A$^-$ & 103 & 13.7$^b$ & 2.8$^b$ & 1.4$^b$ & 1.3$^b$ \\
338.530 & CH$_3$OH$(7_{4,3}-6_{4,2})$E & 161 & 5.8 & 2.7 & 0.7 & 0.9 \\
338.541 & CH$_3$OH$(7_{3,5}-6_{3,4})$A$^+$ & 115 & 12.5$^b$ & 3.0$^b$ & 2.0$^b$ & 1.4$^b$ \\
338.543 & CH$_3$OH$(7_{3,4}-6_{3,3})$A$^-$ & 115 & 12.5$^b$ & 3.0$^b$ & 2.0$^b$ & 1.4$^b$ \\
338.560 & CH$_3$OH$(7_{3,5}-6_{3,4})$E & 128 & 15.6 & 2.5 & 0.9 & 0.6 \\
338.583 & CH$_3$OH$(7_{3,4}-6_{3,3})$E & 113 & 11.5 & 3.4 & 1.0 & 1.1 \\
338.612 & SO$_2(20_{1,19}-19_{2,18})$ & 199 & x$^c$ & 2.9 & 1.5 & 1.9 \\
338.615 & CH$_3$OH$(7_{1,6}-6_{1,5})$E & 86 & x$^d$ & 2.9$^d$ & 1.5$^d$ & 1.9$^d$\\
338.640 & CH$_3$OH$(7_{2,5}-6_{2,4})$A & 103 & 7.2 & 2.5 & 1.0 & 1.0 \\
338.722 & CH$_3$OH$(7_{2,5}-6_{2,4})$E & 87 & 10.2$^b$ & 3.2$^b$ & 2.1$^b$ & 2.7$^b$\\
338.723 & CH$_3$OH$(7_{2,6}-6_{2,5})$E & 91 & 10.2$^b$ & 3.2$^b$ & 2.1$^b$ & 2.7$^b$\\
338.760 & $^{13}$CH$_3$OH$(13_{7,7}-12_{7,6})$A & 206 & 4.0 & 1.1 & & \\
338.769 & HC$_3$N$(37-36)v_7=2$ & 525 & ? & ? & & \\
338.786 & $^{34}$SO$_2(14_{4,10}-14_{3,1})$ & 134 & 6.2 \\
338.886 & C$_2$H$_5$OH$(15_{7,8}-15_{6,19})$ & 162 & 5.3 & 0.8 & & \\
338.930 & $^{30}$SiO(8--7) & 73 & 24.4 \\
339.058 & C$_2$H$_5$OH$(14_{7,7}-14_{6,8})$ & 150 & x$^e$ & 0.6 & & \\
347.232 & CH$_2$CHCN$(38_{1,38}-37_{1,37})$ & 329 & 4.9 & 0.6 & & \\
347.331 & $^{28}$SiO(8--7) & 75 & 22.1 & 0.9 & 0.7 &\\
347.438 & UL & & 7.5 & & & \\
347.446 & UL & & 3.4 & 0.8 & & \\
347.478 & HCOOCH$_3(27_{1,26}-26_{1,25})$E & 247 & 4.8 \\
347.494 & HCOOCH$_3(27_{5,22}-26_{5,21})$A & 247 & 2.9 & 0.6 & & \\
347.590 & HCOOCH$_3(16_{6,10}-15_{5,11})$A & 104 & 2.4 \\
347.599 & HCOOCH$_3(16_{6,10}-15_{5,11})$E & 105 & 1.7 \\
347.617 & HCOOCH$_3(28_{10,19}-27_{10,18})$A & 307 & 2.6 \\
347.628 & HCOOCH$_3(28_{10,19}-27_{10,18})$E & 307 & 3.6 \\
347.667 & UL & & 4.5 \\
347.759 & CH$_2$CHCN$(36_{2,34}-35_{2,32})$ & 317 & 8.3 & 0.7 & & \\
347.792 & UL & & 5.4 & 0.7 & & \\
347.842 & UL, $^{13}$CH$_3$OH & & 3.1 & 0.5 & & \\
347.916 & C$_2$H$_5$OH$(20_{4,17}-19_{4,16})$ & 251 & 3.3 & 0.7 & & \\
347.983 & UL & & & 0.6 & & \\
348.050 & HCOOCH$_3(28_{4,24}-27_{4,23})$E & 266 & 2.8 \\
348.066 & HCOOCH$_3(28_{6,23}-27_{6,22})$A & 266 & 3.0 \\
348.118 & $^{34}$SO$_2(19_{4,16}-19_{3,17})$ & 213 & 5.8 \\
348.261 & CH$_3$CH$_2$CN$(39_{2,37}-38_{2,36})$ & 344 & 11.2 & 1.2 & & \\
348.340 & HN$^{13}$C(4--3) & 42 & 16.1$^b$ & 2.0$^b$ & &\\
348.345 & CH$_3$CH$_2$CN$(40_{2,39}-39_{2,38})$ & 351 & 16.1$^b$ & 2.0$^b$ & & \\
348.388 & SO$_2(24_{2,22}-23_{3,21})$ & 293 & 9.3 & 0.5 & 1.0 & \\
348.518 & UL, HNOS$(1_{1,1}-2_{0,2})$ & & 10.6 & 0.7 \\
348.532 & H$_2$CS$(10_{1,9}-9_{1,8})$ & 105 & 7.4 & 1.9 & & \\
348.553 & CH$_3$CH$_2$CN$(40_{1,39}-39_{1,38})$ & 351 & 20.1 \\
348.910 & HCOOCH$_3(28_{9,20}-27_{9,19})$E & 295 & 11.0$^b$ & 1.6$^b$ & & \\
348.911 & CH$_3$CN$(19_{9}-18_{9})$ & 745 & 11.0$^b$ & 1.6$^b$ & & \\
348.991 & CH$_2$CHCN$(37_{1,36}-36_{1,35})$ & 325 & 5.9 \\
349.025 & CH$_3$CN$(19_{8}-18_{8})$ & 624 & 9.5 & 1.1 \\
349.107 & CH$_3$OH$(14_{1,13}-14_{0,14})$ & 43 & 12.2 & 3.1 & 1.3 & 1.1\\
\hline \hline
\multicolumn{7}{l}{\footnotesize $^a$ Doubtful detection since other close $v_t=2$ lines with similar upper }\\
\multicolumn{7}{l}{\footnotesize energy levels were not detected.}\\
\multicolumn{7}{l}{\footnotesize $^b$ Line blend.}\\
\multicolumn{7}{l}{\footnotesize $^c$ No flux measurement possible because, averaged over the given 5700\,AU, negative }\\
\multicolumn{7}{l}{\footnotesize features due to missing short spacings overwhelm the positive features (see Figs.~\ref{ch3oh_sample} \& \ref{so2_sample}).}\\
\multicolumn{7}{l}{\footnotesize $^d$ Peak flux corrupted by neighboring SO$_2$ line.}\\
\multicolumn{7}{l}{\footnotesize $^e$ Only detectable with higher
spatial resolution \citep{beuther2005a}.}\label{linelistall}
\end{longtable}
\begin{table}[htb]
\caption{Molecular column densities (cm$^{-2}$).}
\begin{tabular}{lrrrr}
\hline
\hline
& Orion-KL$^a$ & G29.96 & 23151 & 05358 \\
\hline
CH$_3$OH & $2\times10^{16}$ & $4\times10^{17}\,^b$ & $3\times10^{16}-1\times10^{17}\,^c$ & $4\times10^{15}-4\times10^{18}\,^d$ \\
CH$_3$CH$_2$CN & $5\times10^{15}$ & $1\times10^{16}$ & -- & -- \\
CH$_2$CHCN & $5\times10^{15}$ & $7\times10^{15}$ & -- & -- \\
C$^{34}$S & $2\times10^{14}$ & $2\times10^{15}$ & $2\times10^{14}$ & $6\times10^{13}$ \\
CH$_3$OCH$_3$ & $1\times10^{16}$ & $2\times10^{17}\,^e$ & -- & -- \\
HCOOCH$_3$ & $6\times10^{15}$ & $8\times10^{16}$ & -- & -- \\
$^{34}$SO & $6\times10^{14}$ & $1\times10^{16}$ & $4\times10^{15}$ & -- \\
SO$_2$ & $2\times10^{15}$ & $3\times10^{16}$ & $3\times10^{16}$ & $1\times10^{16}\,^d$ \\
HC$_3$N & $6\times10^{14}$ & $2\times10^{15}$ & -- & -- \\
HN$^{13}$C & blend & blend & -- & -- \\
H$_2$CS & $4\times10^{14}$ & $2\times10^{16}$ & -- & $4\times10^{14}$ \\
C$_2$H$_5$OH & $2\times10^{15}$ & $6\times10^{16}$ & -- & -- \\
SiO & $2\times10^{14}$ & $4\times10^{14}$ & $2\times10^{14}$ & -- \\
CH$_3$CN & $2\times10^{15}$ & $1\times10^{16}$ & -- & $8\times10^{16}\,^d$ \\
\hline
\hline
\end{tabular}
\footnotesize{~\\
$^a$ Calculated for lower average $T$ of 200\,K because of smoothing to 5700\,AU resolution (Figs.~\ref{sample_spectra} \& \ref{ch3oh_sample}). The source size was approximated by half the spatial resolution. The on average lower Orion-KL column densities are likely due to the largest amount of missing flux for the closest source of the sample.\\
$^b$ From \citet{beuther2007d}. \\
$^c$ From \citet{beuther2007f} for different sub-sources.\\
$^d$ From \citet{leurini2007} for different sub-sources.\\
$^e$ At lower temperature of 100\,K, because otherwise different lines would get excited.
}
\label{column}
\end{table}
\end{document}
\section{Introduction}
Paired photons entangled in the spatial degree of freedom are
represented by an infinite dimensional Hilbert space. This offers
the possibility to implement quantum algorithms that either
inherently use dimensions higher than two or exhibit enhanced
efficiency in increasingly higher dimensions (see \cite{nature1}
and references therein). These include the demonstration of the
violation of bipartite, three dimensional Bell inequalities
\cite{vaziri1}, the implementation of the {\it quantum coin
tossing} protocol with qutrits \cite{molina1}, and the generation
of quantum states in ultra-high dimensional spaces
\cite{barreiro1}. Actually, the amount of spatial bandwidth, and
the degree of spatial entanglement, can be tailored
\cite{torres1,eberly1}, making it possible to control the effective
dimensionality where spatial entanglement resides.
The most widely used source for generating paired photons with
entangled spatial properties is spontaneous parametric
down-conversion (SPDC) \cite{arnaut1,mair1}. In this process,
photons are known to be emitted in cones whose shape depends on
the phase matching conditions inside the nonlinear crystal. All
relevant experiments reported to date make use of a small section
of the full down-conversion cone. But the spatial properties of
different sections of the cone have remained experimentally
unexplored until now. This could be done, for example, by relocating the
single photon counting modules. Then, one question naturally
arises: {\em Are the entangled spatial properties of the photons
modified depending on the location in the down-conversion cone
where they are detected?}
The answer to this question is of great relevance for the
implementation of many quantum information schemes. When
considering entanglement in the spatial degree of freedom, one
should determine whether pairs of photons with different azimuthal
angle of emission might show different spatial quantum
correlations, since all quantum information applications are based
on the availability and use of specific quantum states.
Additionally, the spatial properties of entangled two-photon
states have to be taken into account even when entanglement takes
place in other degrees of freedom, such as polarization. In
general, it is required to suppress any spatial ``which-path''
information that otherwise degrades the degree of entanglement.
This is especially true for configurations that make use of a
large spatial bandwidth \cite{lee1} and in certain SPDC
configurations where horizontally and vertically polarized photons
are generated in different sections of the down-conversion cone
\cite{kwiat1, kwiat2}. Finally, the generation of heralded single
photons with well defined spatial properties, i.e. a gaussian
shape for optimum coupling into monomode optical fibers, depends
on the angle of emission \cite{torres2}.
Here we experimentally demonstrate that the presence of Poynting
vector walk-off, which is unavoidable in most SPDC configurations
currently being used, introduces {\em azimuthal distinguishing
information in the down-conversion cone}. Paired photons generated
with different azimuthal angles show correspondingly different
spatial quantum correlations and amount of entanglement. We also
show that this spatial distinguishing information can severely
degrade the quality of polarization entanglement, since the full
quantum state that describes the entangled photons is a
nonseparable mixture of polarization and spatial variables.
\begin{figure}[t]
\centering
\includegraphics[scale=0.60]{figure1}
\caption{(a) Diagram of the experimental setup and (b) the
down-conversion cone. Single-photon detectors are located on
opposite sides of the cone, forming an angle $\alpha$ with the
$YZ$ plane.} \label{figSPDC}
\end{figure}
\section{Experimental set-up and results}
In Fig. \ref{figSPDC} we present a scheme of our experimental
set-up. The output beam of a CW diode laser emitting at
$\lambda_p=405$\,nm is appropriately spatially filtered to obtain a
beam with a gaussian profile, while a half wave plate (HWP) is
used to control the polarization. The pump beam is focused to a
$w_0=136\,\mu$m beam waist at the input face of a $L=5$\,mm thick
lithium iodate crystal, cut at $42^{\circ}$ for Type I degenerate
collinear phase matching. The generated photons, signal and idler,
are ordinarily polarized, in contrast to the extraordinarily
polarized pump beam. The crystal is tilted to generate paired
photons which propagate inside the nonlinear crystal with a
non-collinear angle of $\varphi=4^{\circ}$. Due to the crystal
birefringence, the pump beam exhibits Poynting vector walk-off
with angle $\rho_0=4.9^{\circ}$, while the generated
photons do not exhibit spatial walk-off. Fig. \ref{figSPDC}(b)
represents the transverse section of the down-conversion cone.
The directions of propagation of the signal and the idler photons
over this ring are determined by the azimuthal angle $\alpha$,
which is the angle between the plane of propagation of the
down-converted photons and the $YZ$ plane. To determine
experimentally the position of the crystal optic axis, and the
origin of $\alpha$, we measure the relative position of the pump
beam in the plane $XY$ at the input and output faces of the
nonlinear crystal using a CCD camera.
Right after the crystal, each of the generated photons traverses a
$2f$ system with focal length $f=50$\,cm. Low-pass filters are used
to remove the remaining pump beam radiation. After the filters,
the photons are coupled into multimode fibers. In order to
increase our spatial resolution, we use small pinholes of
$300\,\mu$m diameter. We keep the idler pinhole fixed and measure
the coincidence rate while scanning the signal photon transverse
spatial shape with a motorized $XY$ translation stage. Finally, as
we are interested in the different spatial correlations at
different azimuthal positions of the downconversion ring, instead
of rotating the whole detection system, the nonlinear crystal and
the polarization of the pump beam are rotated around the
propagation direction. Due to slight misalignments of the rotation
axis of the crystal, after every rotation it is necessary to
adjust the tilt of the crystal to achieve generation of photons at
the same non-collinear angle in all the cases.
Images for different azimuthal sections of the cone were taken. We
present a sample of them in the upper row of Fig.
\ref{figresults}, which summarizes our main experimental results.
Each column shows the coincidence rate for $\alpha=0^{\circ}$,
$90^{\circ}$, $180^{\circ}$ and $270^{\circ}$. The movie shows the
experimental and theoretical spatial shape of the signal photon
corresponding to other values of the angle $\alpha$. Each point of
these images corresponds to the recording of a $10$\,s measurement.
The typical number of coincidences at the maximum is around $10$
photons per second. The resolution of the experimental images is
$50 \times 50$ pixels. The different measured spatial shapes of
the mode function of the signal photons clearly show that the
down-conversion cone does not possess azimuthal symmetry. This
agrees with the theoretical predictions presented in the lower row
of Fig. \ref{figresults}. Note that no fitting parameter has been
used whatsoever. Slight discrepancies between experimental data
and theoretical predictions might be due to the small, but not
negligible, bandwidth of the pump beam and to the fact that the
resolution of our system is limited by the detection pinholes
size.
\begin{figure}[ht]
\centering
\includegraphics[bb=28 331 447 581, scale=0.65]{figure2}
\caption{Images showing the spatial shape of the mode function of
the signal photon when measuring coincidence rates. The upper row
corresponds to experimental data, and the lower row to theoretical
predictions. (a) $\alpha=0^{\circ}$; (b)
$\alpha=90^{\circ}$; (c) $\alpha=180^{\circ}$ and (d)
$\alpha=270^{\circ}$. See also the corresponding movie.
\label{figresults}}
\end{figure}
An interesting feature that can be observed in these images is
that the mode function in Fig. \ref{figresults}(b), corresponding
to the case $\alpha=90^{\circ}$ presents a nearly gaussian shape.
We will show below that this effect happens whenever $\varphi
\simeq \rho_0$, which corresponds to our experimental conditions.
On the other hand the mode function shown in Figs.
\ref{figresults} (a) and (c) are highly elliptical.
\section{Azimuthal distinguishability of paired photons generated in different
sections of the down-conversion cone}
To gain further insight,
we turn to the theoretical description of this problem. The signal
photon propagates along the direction ${\mathbf z_1}$ (see Fig.
\ref{figSPDC}) with longitudinal wavevector $k_s=[(\omega_s
n_s/c)^2-|\mathbf{p}|^2]^{1/2}$, and transverse wavevector ${\bf
p}=(p_x,p_y)$. Similarly, the idler photon propagates along the
${\mathbf z_2}$ direction with longitudinal wavevector $k_i$, and
transverse wavevector ${\bf q}$. Here we consider the signal and
idler photons as purely monochromatic, due to the use of a narrow
pinhole on the idler side, which selects a very small bandwidth of
frequencies of the down-converted ring. Although photons detected
in different parts of the down-conversion cone might present
slightly different polarizations \cite{migdall1}, this is a small
effect, and therefore we neglect it.
The quantum two-photon state at the output face of the nonlinear
crystal, within the first order perturbation theory, can be
written as $|\Psi\rangle=\int d {\bf p} d {\bf q} \Phi ({\bf
p},{\bf q})a_s^{\dag} ({\bf p}) a_i^{\dag} ({\bf q}) |0,0\rangle$,
where the mode function writes \cite{rubin1,torres2}
\begin{eqnarray}
& & \Phi \left( {\bf p},{\bf q} \right)={\cal N} \exp \left\{
-\frac{\left( \Gamma L \right)^2}{4} \Delta_k^2+ i \frac{\Delta_k
L}{2} \right\} \nonumber \\
& & \times \exp \left\{ -\frac{ \left( p_x+q_x \right)^2 w_0^2 +
\left( p_y+q_y \right)^2 w_0^2 \cos^2 \varphi}{4} \right\}
\nonumber \\
& & \times \exp \left\{ -\frac{|{\bf p}|^2 w_s^2}{4}-\frac{|{\bf
q}|^2 w_s^2}{4} \right\} \label{eqmo2}
\end{eqnarray}
where $\Delta_k=\tan \rho_0 \left[ (p_x+q_x)\cos \alpha +(p_y+q_y)
\cos \varphi \sin\alpha \right]-(p_y-q_y)\sin\varphi$ comes from
the phase matching condition in the $z$ direction, ${\cal N}$ is a
normalization constant, and we assume that the pump beam shows a
gaussian beam profile with beam width $w_0$ at the input face of
the nonlinear crystal. We neglect the transverse momentum
dependence of all longitudinal wavevectors. The phase-matching
function $\mathrm{sinc}(\Delta_k L/2)$ has been approximated by an
exponential function with the same width at $1/e^{2}$ of
the intensity: $\mathrm{sinc}(b x)\simeq \exp[-(\Gamma b)^{2}x^{2}]$, with
$\Gamma=0.455$. The value of $w_s$ describes the effect of the
unavoidable spatial filtering produced by the specific optical
detection system used. In our experimental set-up, the probability
to detect a signal photon at $\mathbf{x_1}$ in coincidence with an
idler photon at the fixed pinhole position $\mathbf{x_2}=0$ is
given by $R_c(\mathbf{x_1},\mathbf{x_2}=0)=|\Phi \left( 2 \pi
\mathbf{x_1}/ \left( \lambda_s f \right),\mathbf{x_2}=0 \right)
|^2$.
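As a side numerical check (an illustration of ours, not part of the original analysis), the quoted constant $\Gamma=0.455$ can be recovered by locating the first point where $\mathrm{sinc}^2(u)$ drops to $1/e^{2}$; matching the gaussian to the sinc at that point gives $\Gamma=1/u$.

```python
import math

# Bisection for the first u > 0 with sinc(u)^2 = sin(u)^2 / u^2 = 1/e^2;
# matching exp[-(Gamma*b*x)^2] to sinc(b*x) at this point gives Gamma = 1/u.
def f(x):
    return (math.sin(x) / x) ** 2 - math.exp(-2.0)

lo, hi = 1.0, 3.0          # f(1) > 0 > f(3): a single crossing in between
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid
u = 0.5 * (lo + hi)        # u ~ 2.199
Gamma = 1.0 / u            # ~ 0.455, as quoted in the text
```

With this matching, the gaussian replacement reproduces the $1/e^{2}$ intensity width of the sinc exactly.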
Eq. (\ref{eqmo2}) shows that the spatial mode function is
elliptical. The amount of ellipticity depends on the
non-collinear configuration \cite{molina2}, and on the azimuthal
angle of emission ($\alpha$) due to the presence of spatial
walk-off. {\em The latter is the cause of the azimuthal symmetry
breaking of the down-conversion cone}. Both effects turn out to be
important when the length of the crystal $L$ is larger than the
non-collinear length $L_{nc}=w_0/\sin{\varphi}$ and the walk-off
length $L_{w}=w_0/\tan{\rho_0}$. Our experimental configuration is
fully in this regime. We should notice that in a collinear SPDC
configuration, Poynting vector walk-off also introduces
ellipticity of the mode function \cite{fedorov}.
The theory also predicts the orientation of the spatial mode
function of the signal photon, as shown in Fig.~\ref{figresults}. This
is given by the slope $\tan \beta$ in the $(p_x,p_y)$ plane of the
loci of perfect phase matching transverse momentum, which writes
$\tan \beta= \left( \sin \varphi-\tan \rho_0 \cos \varphi \sin
\alpha \right)/\left(\tan \rho_0 \cos \alpha \right)$. If $\varphi
\simeq \rho_0$ and $\alpha=90^\circ$, the spatial mode function of
the signal photons shows a nearly gaussian shape, due to the
compensation of the non-collinear and walk-off effects. All these
results are in agreement with experimental data in Fig.
\ref{figresults}.
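Both features, the strong tilted ellipticity at $\alpha=0^{\circ}$ and the nearly round mode at $\alpha=90^{\circ}$ when $\varphi\simeq\rho_0$, can be reproduced directly from the mode function with the idler projected onto ${\bf q}=0$. The following sketch (our illustration; the collection-mode width $w_s$ is an assumed value, the remaining parameters follow the experiment) evaluates $|\Phi({\bf p},{\bf q}=0)|^2$ on a grid and extracts its second-moment ellipse:

```python
import numpy as np

# Experimental parameters (lengths in mm): L = 5 mm, w0 = 136 um,
# phi = 4 deg, rho0 = 4.9 deg, Gamma = 0.455; ws is an assumed filter width.
L, w0, ws, Gamma = 5.0, 0.136, 0.05, 0.455
phi, rho0 = np.deg2rad(4.0), np.deg2rad(4.9)

def intensity(alpha, n=201, pmax=25.0):
    """|Phi(p, q=0)|^2 from the mode function, on a momentum grid (1/mm)."""
    p = np.linspace(-pmax, pmax, n)
    px, py = np.meshgrid(p, p, indexing="ij")
    dk = np.tan(rho0) * (px*np.cos(alpha) + py*np.cos(phi)*np.sin(alpha)) \
         - py*np.sin(phi)
    amp = np.exp(-(Gamma*L)**2 * dk**2 / 4
                 - (px**2 + (py*np.cos(phi))**2) * w0**2 / 4
                 - (px**2 + py**2) * ws**2 / 4)
    return px, py, amp**2

def mode_ellipse(alpha):
    """Anisotropy (major/minor variance ratio) and major-axis direction."""
    px, py, I = intensity(alpha)
    w = I / I.sum()
    cov = np.array([[(w*px*px).sum(), (w*px*py).sum()],
                    [(w*px*py).sum(), (w*py*py).sum()]])
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    return evals[1] / evals[0], evecs[:, 1]

r0, v0 = mode_ellipse(0.0)        # alpha = 0: strongly elliptical
r90, _ = mode_ellipse(np.pi/2)    # alpha = 90 deg: nearly round
slope = v0[0] / v0[1]             # dp_x/dp_y of the major axis
```

At $\alpha=0$ the major axis follows the perfect-phase-matching line, with slope $\tan\beta=\sin\varphi/\tan\rho_0\simeq 0.81$, while at $\alpha=90^{\circ}$ the variance anisotropy collapses to nearly $1$, consistent with the compensation discussed above.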
This azimuthal variation of the spatial correlations can be made
clearer if we express the mode function of the signal photon,
$\Phi_s \left( {\bf p} \right)=\Phi\left( {\bf p},{\bf q=0}
\right)$ in terms of orbital angular momentum modes. The mode
function can be described by superposition of spiral harmonics
\cite{management} $\Phi_{s} \left( \rho,\varphi \right)=\left(
2\pi \right)^{-1/2} \sum_{m} a_{m} \left( \rho \right) \exp
\left(i m \varphi \right)$, where $a_{m} \left( \rho \right)=
1/(2\pi)^{1/2} \int d\varphi \Phi_s \left(\rho,\varphi \right)
\exp \left(-i m \varphi \right)$, and $\rho$ and $\varphi$ are
cylindrical coordinates in the transverse wave-number space. The
weight of the $m$-harmonic is given by $C_{m}=\int \rho d\rho
|a_{m} \left( \rho \right)|^2$.
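These weights can be estimated numerically (our illustration; parameters as in the experiment, with an assumed collection width $w_s$) by sampling $\Phi_s$ on a polar grid and Fourier transforming over the azimuthal coordinate:

```python
import numpy as np

# Experimental parameters (lengths in mm); ws is an assumed filter width.
L, w0, ws, Gamma = 5.0, 0.136, 0.05, 0.455
phi, rho0 = np.deg2rad(4.0), np.deg2rad(4.9)

def l0_fraction(alpha, nr=200, nth=256, rmax=25.0):
    """Relative weight C_0 / sum_m C_m of the m = 0 spiral harmonic of
    Phi_s(p) = Phi(p, q=0), including the phase-matching phase."""
    rho = np.linspace(1e-4, rmax, nr)
    th = np.linspace(0.0, 2.0*np.pi, nth, endpoint=False)
    R, T = np.meshgrid(rho, th, indexing="ij")
    px, py = R*np.cos(T), R*np.sin(T)
    dk = np.tan(rho0)*(px*np.cos(alpha) + py*np.cos(phi)*np.sin(alpha)) \
         - py*np.sin(phi)
    Phi = np.exp(-(Gamma*L)**2*dk**2/4 + 1j*dk*L/2
                 - (px**2 + (py*np.cos(phi))**2)*w0**2/4
                 - (px**2 + py**2)*ws**2/4)
    a_m = np.fft.fft(Phi, axis=1) / nth               # harmonics a_m(rho)
    weights = (rho[:, None] * np.abs(a_m)**2).sum(axis=0)  # ~ C_m (const. factor)
    return weights[0] / weights.sum()

f0 = l0_fraction(0.0)          # alpha = 0
f90 = l0_fraction(np.pi/2.0)   # alpha = 90 deg (phi ~ rho0)
```

In this sketch the $m=0$ weight is markedly larger at $\alpha=90^{\circ}$ than at $\alpha=0$, in line with the azimuthal variation discussed in the text.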
\begin{figure}
\centering
\includegraphics[scale=0.6]{figure3}
\caption{Weight of the OAM modes $l_s=0$ (solid line), and all
other modes (dashed lines) as a function of the angle $\alpha$.
(a), (c) and (d) $w_0=100\mu m$; (b), (e) and (f) $w_0=600\mu m$.
(c) and (e) show the OAM distribution for $\alpha=0^{\circ}$, and
(d) and (f) correspond to $\alpha=90^{\circ}$. We assume
negligible spatial filtering ($w_s \simeq 0$). Dot-dashed lines:
no spatial walk-off. \label{figmodes}}
\end{figure}
The gaussian pump beam corresponds to a mode with $l_p=0$, while
the idler photon is projected into ${\bf q}=0$, which corresponds
to projection into a large area gaussian mode ($l_i=0$). Fig.
\ref{figmodes}(a) and (b) show the weight of the mode $l_s=0$, and
the weight of all other OAM modes, as a function of the angle
$\alpha$ for two different pump beam widths. We observe that the
OAM correlations of the two-photon state change along the
down-conversion cone due to the azimuthal symmetry breaking
induced by the spatial walk-off. This implies that the
correlations between OAM modes do not follow the relationship
$l_p=l_s+l_i$. From Fig. \ref{figmodes} it is clearly observed
that for larger pump beams the azimuthal changes are smoothed out,
since in this case the non-collinear and walk-off lengths are much
larger than the crystal length.
Figures \ref{figmodes}(c) and \ref{figmodes}(d) plot the OAM
decomposition for $w_0=100 \mu$m, and Figs. \ref{figmodes}(e) and
\ref{figmodes}(f) for $w_0=600 \mu$m, for $\alpha=0,90^\circ$.
Notice that the weight of the $l_s=0$ mode is maximum for
$\alpha=90^\circ$, which therefore is the optimum angle for the
generation of heralded single photons with a gaussian-like shape.
This effect can be clearly observed in Figs. \ref{figresults}(b),
\ref{figmodes}(d) and \ref{figmodes}(f). On the contrary, for
$\alpha=270^\circ$, the combined noncollinear and walk-off
effects make the weight of the $l_s=0$ mode reach its
minimum value. This is of relevance in any quantum information
protocol where the generated photons, no matter the degree of
freedom where the quantum information is encoded, are to be
coupled into single mode fibers.
\begin{figure}
\centering
\includegraphics[scale=0.60]{figure4}
\caption{(a) Schmidt number (K) as a function of the angle
$\alpha$. The width of the collection mode is $w_s=50 \mu$m. (b)
Schmidt number as a function of the width of the collection mode
for different values of $\alpha$. In all cases, the pump beam
width is $w_0=100 \mu$m. The Schmidt number for the case with
negligible walk-off (dashed lines) is shown for comparison.
\label{fig4}}
\end{figure}
Importantly, the degree of spatial entanglement of the two-photon
state also shows azimuthal variations, depending on the direction
of emission of the down-converted photons. Fig.~\ref{fig4} shows
the Schmidt number $K=1/\mathrm{Tr}\,\rho_s^2$, where $\rho_s=\mathrm{Tr}_i\,
|\Psi\rangle\langle \Psi|$ is the density matrix that describes
the quantum state of the signal photon after tracing out the
spatial variables corresponding to the idler photon. The Schmidt
number \cite{eberly1} is a measure of the degree of entanglement
of the spatial two-photon state: $K=1$ corresponds to a product
state, while larger values of $K$ correspond to increasingly
higher degrees of entanglement. The degree of
entanglement is maximum for $\alpha=0$, and minimum for
$\alpha=90^\circ$, as shown in Fig. \ref{fig4}(a). The degree of
entanglement is known to decrease with increasing filtering
\cite{vanexter1}, i.e., larger values of $w_s$, as shown in Fig.
\ref{fig4}(b), and to increase for larger values of the pump beam
width ($w_0$).
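The behaviour of $K$ can be illustrated on a reduced, analytically solvable toy model (our illustration; a one-dimensional double gaussian, not the full transverse calculation of this paper). For $\Phi(x_1,x_2)\propto\exp[-(x_1+x_2)^2/(4a^2)-(x_1-x_2)^2/(4b^2)]$ the Schmidt number is $K=\tfrac{1}{2}(a/b+b/a)$, which a discretised singular-value decomposition reproduces:

```python
import numpy as np

# 1D toy amplitude Phi(x1, x2) ~ exp(-(x1+x2)^2/(4 a^2) - (x1-x2)^2/(4 b^2));
# its Schmidt number is K = (a/b + b/a)/2, i.e. cosh(2r) for a two-mode
# squeezed state with a/b = exp(2r). Check against a discretised SVD.
a, b = 3.0, 1.0
x = np.linspace(-12.0, 12.0, 400)
X1, X2 = np.meshgrid(x, x, indexing="ij")
Phi = np.exp(-(X1 + X2)**2/(4*a**2) - (X1 - X2)**2/(4*b**2))

s = np.linalg.svd(Phi, compute_uv=False)   # singular values of the kernel
lam = s**2 / np.sum(s**2)                  # normalised Schmidt coefficients
K_num = 1.0 / np.sum(lam**2)               # Schmidt number 1 / sum lam^2
K_exact = 0.5 * (a/b + b/a)
```

Shrinking the two widths toward each other (the analogue of strong filtering) drives $K$ toward $1$, consistent with the trend of Fig.~\ref{fig4}(b).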
\section{Effects on the generation of polarization entanglement}
The azimuthal distinguishing information introduced by walk-off
in SPDC affects the quantum properties of polarization-entangled
states, when photons generated in different sections of the
down-conversion cone are used. This is the case when using two
type-I SPDC crystals whose optic axes are rotated by $90^{\circ}$.
This configuration, originally demonstrated for the generation of
polarization-entangled photons \cite{kwiat1}, has been used as
well for the generation of hyperentangled quantum states
\cite{barreiro1}. The two-photon quantum state writes
\begin{eqnarray}
\label{kwiat} & & |\Psi\rangle=\frac{1}{\sqrt{2}}\int d{\bf p}
d{\bf q} \left[ \Phi_1 \left({\bf p},{\bf q} \right) |H,{\bf
p}\rangle_s |H,{\bf
q}\rangle_i \nonumber \right. \\
& & \left. + \Phi_2 \left( {\bf p},{\bf q} \right)|V,{\bf
p}\rangle_s |V,{\bf q}\rangle_i \right]
\end{eqnarray}
$\Phi_1 \left({\bf p},{\bf q} \right)= \Phi \left(\alpha=0,{\bf
p},{\bf q} \right)\exp \left( i p_y \tan \rho_s L + i q_y \tan
\rho_i L\right)$ describes the spatial shape of the photons
generated in the first nonlinear crystal, $\rho_{s,i}$ are the
spatial walk-off angles of the down-converted photons traversing
the second nonlinear crystal, and $\Phi_2 \left({\bf p},{\bf q}
\right)= \Phi \left(\alpha=90^\circ,{\bf p},{\bf q} \right)$
corresponds to the photons generated in the second nonlinear
crystal. The quantum state in the polarization space is obtained
by tracing out the spatial variables, i.e., $\rho_p=\mathrm{Tr}_{s}\,
|\Psi\rangle \langle\Psi|$, which gives
\begin{equation}
\label{pol} \rho_p=\frac{1}{2} \left\{ |H \rangle_s |H\rangle_i
\langle H|_s \langle H |_i+ |V \rangle_s |V\rangle_i \langle V|_s
\langle V |_i+ \xi \left[ |H \rangle_s |H\rangle_i \langle V|_s
\langle V |_i+ |V \rangle_s |V\rangle_i \langle H|_s \langle H
|_i\right] \right\}
\end{equation}
where $\xi=\int d{\bf p} d{\bf q} \Phi_1 \left({\bf p},{\bf q}
\right) \Phi_2^{*} \left({\bf p},{\bf q} \right)$.
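As a short consistency check of the expressions used below, squaring Eq.~(\ref{pol}) term by term (the cross terms $|H\rangle_s|H\rangle_i\langle V|_s\langle V|_i$ and its conjugate are traceless, while their mutual products restore the diagonal projectors with weight $|\xi|^2$) gives the purity
\begin{equation*}
P=\mathrm{Tr}\,\rho_p^2=\frac{1}{4}\left(2+2|\xi|^2\right)=\frac{1}{2}\left(1+|\xi|^2\right),
\end{equation*}
and, since $\rho_p$ has no $|H\rangle_s|V\rangle_i$ or $|V\rangle_s|H\rangle_i$ populations, Wootters' concurrence reduces to twice the coherence, $C=2\,(|\xi|/2)=|\xi|$.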
\begin{figure}
\centering
\includegraphics[scale=0.7]{figure5}
\caption{Concurrence ($C$) of the polarization-entangled biphoton
state generated in a two-crystal configuration, as a function of
the pump beam width, for two different values of the crystal
length. The non-collinear angle is $\varphi=2^{\circ}$.
\label{fig5}}
\end{figure}
The degree of mixture of polarization and spatial variables is
determined by the purity ($P$) of the quantum state given by Eq.
(\ref{pol}), which writes $P=1/2 \left(1+|\xi|^2 \right)$. The
concurrence of the polarization-entangled state, which writes
$C=|\xi|$, quantifies the quality of the
entanglement. Fig.~\ref{fig5} shows the concurrence of the quantum
state for two different crystal lengths. If spatial walk-off
effects are negligible, $|\xi|=1$ and spatial and polarization
variables can be separated. Therefore, both the purity and the
concurrence are equal to $1$. This is the case shown in Fig.
\ref{fig5} for a crystal length of $L=0.5$\,mm. Nevertheless,
this is not generally the case, as demonstrated above.
Interestingly, the degree of spatial entanglement of the
horizontally polarized photons is unchanged when traversing the
second crystal, despite the fact that the down-converted photons
show walk-off. Nevertheless, the spatial correlations are
modified due to the presence of walk-off. It is this effect which
enhances the spatial distinguishing information and thus degrades the
quality of polarization entanglement.
\section{Conclusions}
We have shown, theoretically and experimentally, that the presence
of Poynting vector walk-off in SPDC configurations introduces
azimuthal distinguishing information of paired photons emitted in
different directions of propagation. The quantum correlations of
the spatial two-photon state and, consequently, the degree of
entanglement show azimuthal variations that are enhanced when
using highly focused pump beams and broadband spatial filters.
This breaking of the azimuthal symmetry of the down-conversion
cone has important consequences when designing and implementing
sources of paired photons with entangled properties.
\section{Acknowledgements}
We want to thank X. Vidal and M. Navascues for helpful
discussions. This work was supported by projects FIS2004-03556 and
Consolider-Ingenio 2010 QOIT from Spain, by the European
Commission under the Integrated Project Qubit Applications
(Contract No. 015848) and by the Generalitat de Catalunya.
\section*{Abstract}
We consider tripartite entangled states for continuous variable systems of EPR type, which generalise the famous bipartite CV EPR states (eigenvectors of conjugate choices $X_1 - X_2, P_1+ P_2$, of the systems' relative position and total momentum variables). We give the regularised forms of such tripartite EPR states in second-quantised formulation, and derive their Wigner functions. This is directly compared with the established NOPA-like states from quantum optics. Whereas the multipartite entangled states of NOPA type have singular Wigner functions in the limit of large squeezing, $r \rightarrow \infty$, or $\tanh r \rightarrow 1^-$ (approaching the EPR states in the bipartite case), our regularised tripartite EPR states show singular behaviour not only in the approach to the EPR-type region ($s \rightarrow 1$ in our notation), but also for an additional, auxiliary regime of the regulator ($s \rightarrow \sqrt{2}$). While the $s\rightarrow 1$ limit pertains to tripartite CV states with singular eigenstates of the relative coordinates and remaining squeezed in the total momentum, the $s\rightarrow \sqrt{2}$ limit yields singular eigenstates of the total momentum, but squeezed in the relative coordinates. Regarded as expectation values of displaced parity measurements, the tripartite Wigner functions provide the ingredients for generalised CHSH inequalities. Violations of the tripartite CHSH bound ($B_3 \le 2$) are established, with $B_3 \cong 2.09$ in the canonical regime ($s \rightarrow 1^+$), as well as $B_3 \cong 2.32$ in the auxiliary regime ($s \rightarrow \sqrt{2^+}$).
\end{titlepage}
\section{Introduction}
The nature of quantum entanglement has been pursued almost since the inception of quantum mechanics itself. Whereas the early insights of Schr\"{o}dinger, as well as of Einstein, Podolsky and Rosen (EPR) \cite{EPR1935} were framed in the \emph{Gedankenexperiment} mode of discussion with continuous degrees of freedom (particle position and momentum eigenstates), the issues were taken up quantitatively in Bell's theorem \cite{Bell1964} for the case of spin degrees of freedom, via the transcription to this context given by Bohm \cite{Bohm1951}. However, continuous variable (CV) systems are the natural framework for most quantum optics and quantum communication work \cite{NielsenandChuang1968, WallsandMilburn1994}, and Bell, and the more general Clauser, Horne, Shimony and Holt (CHSH) inequalities \cite{Clauseretal1969}, are important measures of entanglement. A technical difficulty in working with continuous variables is that the theoretical ideal EPR type states are singular, whereas experimental investigations require regularised states. In the bipartite case these may be provided by so-called NOPA squeezed states \cite{Reid1989, Ouetal1988, Ouetal1992, BanaszekandWodkiewicz1998, BanaszekandWodkiewicz1999} in the large squeezing limit. In an alternative approach, Fan and Klauder \cite{FanandKlauder1994} constructed somewhat more general classes of EPR states, but without providing a regularisation.
In the extension to multipartite cases, a natural question is the choice of relative variables, chosen from amongst the positions and momenta of the constituent particles of the system, which will provide an appropriate generalisation of the bipartite EPR states (with or without regularisation)\footnote{We refer conventionally to the subsystems as `particles', but it should be borne in mind that the CV systems could equally be independent photon polarisation modes, photon modes or even joint photon and phonon degrees of freedom.}. One possibility is provided by the multipartite Greenberger-Horne-Zeilinger NOPA-like squeezed states \cite{vanLoockandBraunstein2001, Chen2002}. These have the virtue of experimental accessibility \cite{Ouetal1988, Ouetal1992}, and do show singular behaviour in the large squeezing limit which moreover leads to violations of the multipartite CHSH inequalities \cite{Kuzmichetal2000}. For a derivation of the CHSH inequalities for $N$-particle systems see for example \cite{Mermin1990}. However, many other choices of relative variables exist -- see for example \cite{Trifonov1998} and references therein.
In this paper we take up a logical generalisation of the original EPR suggestion, in selecting simultaneously diagonalisable joint degrees of freedom from amongst the canonical Jacobi relative coordinates of the particles. \S 2 is divided into two subsections. In the first we introduce the bipartite Fan and Klauder EPR-like state and propose a possible regularisation. This is shown to be identical to the bipartite NOPA state (which approximates the ideal EPR limit) with squeezing parameter $r\rightarrow \infty$. In fact the correspondence is that our regularisation $s \rightarrow 1^+$ coincides with $\tanh r \rightarrow 1^-$ as $r\rightarrow \infty$, with $\tanh r=1/s^2$. In the bipartite case the Fan and Klauder states are more general than the original EPR states in that they realise explicit nonzero eigenvalues of total or relative positions and momenta; but a study of the Wigner functions reveals that such nonzero eigenvalues can be absorbed into shifts of the complex displacement parameters, and so in the bipartite case the NOPA states do not lose any generality in not allowing for such nonzero eigenvalues. In the next subsection we follow the Fan-Klauder approach, in second quantised formalism, to derive the explicit theoretical tripartite CV states of EPR type conforming to this structure, and we develop the methods to provide a plausible regularisation.
In \S 3, we derive explicit Wigner functions for our regularised tripartite states of EPR type by interpreting the Wigner functions themselves as expectation values of displaced parity measurement operators. As could be expected from their different second-quantised forms, the tripartite EPR and NOPA Wigner functions differ significantly (the appendix provides a comparison of the second-quantised forms and their respective Wigner functions, including an explicit evaluation of the former in the NOPA-like case, and a detailed derivation of the latter for our EPR-type states). Specifically, whereas the multipartite NOPA Wigner functions are singular in the large squeezing limit, our regularised tripartite states of EPR type admit two different singular regimes: not only in the EPR-type regime ($s \rightarrow 1$ in our notation), where of course the Wigner function still differs from that of NOPA, but also, rather unexpectedly, for an additional, auxiliary regime of the regulator ($s \rightarrow \sqrt{2}$). In \S 4 we exploit the fact that Wigner functions are immediately applicable as summands in the appropriate tripartite CHSH inequalities. We explore the two singular regimes and their Bell operator expectation values which control the classical-quantum boundary via the CHSH bound ($B_3 \le 2$) and identify some instances of violations for each of the cases $s\rightarrow 1^+$ and $s\rightarrow \sqrt{2^+}$. \S 5 includes further discussion of our findings, as well as comparison with the recent work \cite{FanandZhang1998, FanandLiu2007}, which provides a general construction of ideal EPR states, and some concluding remarks.
\section{Regularised CV EPR states}
\subsection{Bipartite states}\label{Bipsub}
The case considered by EPR in \cite{EPR1935} discusses the simultaneous diagonalisation of the two commuting variables of difference in position $(X_1-X_2)$ and total momentum $(P_1+P_2)$, where $X_j,\,P_j,\,\,j=1,2$ are a standard pair of canonically conjugate variables with $[X_j,P_k]=i\delta_{jk}$. Fan and Klauder \cite{FanandKlauder1994} give an explicit form for the common eigenvectors of the relative position and total momentum for two EPR particles in terms of creation operators as follows:
\begin{eqnarray}
|\eta\rangle = e^{-\frac 12 |\eta|{}^2 + \eta a{}^\dagger - \eta ^* b{}^\dagger +a{}^\dagger b{}^\dagger}|00\rangle,
\end{eqnarray}
where $\eta = \eta_1 + i\eta_2$ is an arbitrary complex number, $[a,a^\dagger]=1$, $[b,b^\dagger]=1$ and $|00\rangle \equiv |0,0\rangle$, the two-mode vacuum state. Thus
\begin{eqnarray}
\left(X_1 - X_2\right)|\eta\rangle = \sqrt{2}\eta_1|\eta\rangle, \hspace{0.5cm} \left(P_1+P_2\right)|\eta\rangle=\sqrt{2}\eta_2|\eta\rangle,
\end{eqnarray}
with the coordinate and momentum operators definable as:
\begin{eqnarray}
X_1=\frac{1}{\sqrt{2}}\left(a+a^\dagger\right), \hspace{0.4cm} X_2=\frac{1}{\sqrt{2}}\left(b+b^\dagger\right), \hspace{0.4cm} P_1=\frac{1}{i\sqrt{2}}\left(a-a^\dagger\right), \hspace{0.4cm} P_2=\frac{1}{i\sqrt{2}}\left(b-b^\dagger\right).
\end{eqnarray}
As a genuine representation of ideal generalised EPR states, with appropriate orthonormality and completeness, $|\eta\rangle$ is singular. In this paper we consider the following regularised version:
\begin{eqnarray}
|\eta\rangle_s &:=& N_2\,e^{-\frac{1}{2s^2} |\eta|^2 + \frac 1s \eta a^{\dagger} - \frac{1}{s} \eta^* b^{\dagger} + \frac{1}{s^2} a^{\dagger}b^{\dagger}} |00\rangle\label{Bieta},
\end{eqnarray}
with normalisation $|N_2|^2=|s^4-1|/s^4$.
The bipartite CV state (\ref{Bieta}) is to be compared with a regularised EPR-like state which has already appeared in the literature -- the so-called NOPA state from quantum optics \cite{Reid1989}-\cite{BanaszekandWodkiewicz1999}:
\begin{eqnarray}
|NOPA\rangle &=& e^{r\left( a^\dagger b^\dagger - a b\right)} |00\rangle.\label{NOPA}
\end{eqnarray}
NOPA states are produced by Nondegenerate Optical Parametric Amplification, and are the optical analog to the EPR state in the limit of strong squeezing. The NOPA state has already been shown to be a genuinely entangled state that produces violations of the CHSH inequality \cite{BanaszekandWodkiewicz1998, Ouetal1992, Kuzmichetal2000}.
Following \cite{Yurkeetal1986} on reordering $SU(1,1)$ operators, we can reorder the expression for NOPA (\ref{NOPA}) into the following form:
\begin{eqnarray}
|NOPA\rangle &=& e^{r(a^{\dagger}b^{\dagger} - ab)}|00\rangle\nonumber\\
&=& e^{ra^{\dagger}b^{\dagger}}e^{-2\ln\,\cosh(r)\frac 12(a^{\dagger}a + b^{\dagger}b + 1)}e^{-rab}|00\rangle\nonumber\\
&=& \sqrt{1-\tanh{}^2 r}\;e^{\tanh ra^{\dagger}b^{\dagger}}|00\rangle.\label{RearrangedNOPA}
\end{eqnarray}
Note from (\ref{RearrangedNOPA}) that in the number basis, as $\tanh r\rightarrow1$ the $|NOPA\rangle$ state approximates the ideal EPR limit:
\begin{eqnarray}
\lim_{r\rightarrow\infty}|NOPA\rangle \approx |EPR\rangle \approx |0,0\rangle + |1,1\rangle +|2,2\rangle +\ldots.
\end{eqnarray}
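The $SU(1,1)$ disentangling identity (\ref{RearrangedNOPA}) can be verified numerically in a truncated Fock space (an illustrative sketch, not from the paper; the cutoff and squeezing value are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

N = 18                                         # Fock-space cutoff per mode
ad = np.diag(np.sqrt(np.arange(1, N)), -1)     # creation operator a^dagger
a = ad.T                                       # annihilation operator a

AdBd = np.kron(ad, ad)                         # a^dagger b^dagger on the two-mode space
AB = np.kron(a, a)                             # a b
vac = np.zeros(N * N); vac[0] = 1.0            # two-mode vacuum |00>

r = 0.5
lhs = expm(r * (AdBd - AB)) @ vac              # exp(r(a+b+ - ab))|00>
t = np.tanh(r)
rhs = np.sqrt(1.0 - t**2) * expm(t * AdBd) @ vac   # sqrt(1 - t^2) exp(t a+b+)|00>

# The two forms agree up to truncation error at the top of the Fock space
assert np.max(np.abs(lhs - rhs)) < 1e-4
```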
In \ref{etaappendix} it is argued that taking $\eta = 0$ corresponds to a shift in the parameters of the displacement operators (with some constraints on the choice of new parameters), such that we may rearrange the $|\eta\rangle_s$ regularisation to show that it approaches the NOPA regularisation, with $\tanh r = 1/s^2$. For $\eta = 0$ we therefore have:
\begin{eqnarray}
|\eta=0\rangle_s = N_{2}\,e^{\frac{1}{s^2}a^\dagger b^\dagger}|00\rangle\label{Bizeroeta}.
\end{eqnarray}
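The normalisation can be checked directly in the number basis at $\eta=0$, where $|\eta=0\rangle_s=N_2\sum_n s^{-2n}|n,n\rangle$, so unit norm requires $|N_2|^2=(s^4-1)/s^4$ (a quick numerical sketch, with an arbitrary value of $s$):

```python
s = 1.5
x = 1.0 / s**4                      # ratio |c_{n+1}/c_n|^2 of successive |n,n> weights
N2_sq = (s**4 - 1.0) / s**4         # |N_2|^2 required for unit norm at eta = 0

# <eta=0|eta=0>_s = |N_2|^2 * sum_n s^(-4n), a geometric series
norm_sq = N2_sq * sum(x**n for n in range(400))
assert abs(norm_sq - 1.0) < 1e-12
```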
\subsection{Tripartite states}
A suitable analogue of equations (\ref{Bieta}) or (\ref{RearrangedNOPA}) which has the features required of an entangled state, which we analyse in detail below, is defined by:
\begin{eqnarray}
|\eta, \eta', \eta''\rangle_s= N_3 \,e^{-\frac{1}{4s^2} |\eta|^2-\frac{1}{4s^2} |\eta'|^2-\frac{1}{4s^2} |\eta''|^2 + \frac 1s (\eta a^{\dagger} + \eta' b^{\dagger} + \eta'' c^{\dagger} ) + \frac{1}{s^2} (a^{\dagger}b^{\dagger} + a^{\dagger}c^{\dagger} + b^{\dagger}c^{\dagger})} |000\rangle\label{Trieta},
\end{eqnarray}
with normalisation $|N_3|^2=|(s^4-1)^2\left(s^4-4\right)|^{1/2}/s^6$. For the case $\eta=\eta'=\eta'' = 0$, the tripartite EPR-like state becomes:
\begin{eqnarray}
|\eta=\eta'=\eta'' = 0\rangle_s= N_{3}e^{\frac{1}{s^2} (a^{\dagger}b^{\dagger} + a^{\dagger}c^{\dagger} + b^{\dagger}c^{\dagger})} |000\rangle.\label{trizeroeta}
\end{eqnarray}
Note here that, while the set of states (\ref{trizeroeta}) belongs to the well-known pure, fully symmetric three-mode Gaussian states, the more general case of (\ref{Trieta}), where the parameters $\eta$, $\eta'$ and $\eta''$ are retained, is not symmetric, since the parameters can all differ. For discussion of Gaussian states in relation to entanglement in CV systems, see \cite{AdessoandIlluminatiREVIEW2007} and references therein.
Whereas the bipartite state $|\eta\rangle_{s}$ was a simultaneous eigenstate of $(X_1\!-\!X_2)$ and $(P_1\!+\!P_2)$, in the tripartite case the choice of relative variables is no longer immediately apparent. In a similar manner to the derivation of (\ref{Bieta}), it is readily established using manipulations of the type:
\begin{eqnarray}
ae^A = e^A\left\{a-[A,a]+\frac 12 [A,[A,a]]+\ldots\right\},
\end{eqnarray}
that generically $|\eta,\eta',\eta''\rangle_s$ is an eigenstate of the following combinations:
\begin{eqnarray}
\left( a-\frac{1}{s^2}\left(b^\dagger+c^\dagger\right)\right)|\eta,\eta',\eta''\rangle_s &=& \frac 1s \eta|\eta,\eta',\eta''\rangle_s,\nonumber\\
\left( b-\frac{1}{s^2}\left(c^\dagger+a^\dagger\right)\right)|\eta,\eta',\eta''\rangle_s &=& \frac 1s \eta'|\eta,\eta',\eta''\rangle_s,\nonumber\\
\left( c-\frac{1}{s^2}\left(a^\dagger+b^\dagger\right)\right)|\eta,\eta',\eta''\rangle_s &=& \frac 1s \eta''|\eta,\eta',\eta''\rangle_s.
\end{eqnarray}
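These relations can be checked numerically for the $\eta=\eta'=\eta''=0$ state (\ref{trizeroeta}) in a truncated Fock space (an illustrative sketch; the cutoff and the value $s=3$ are arbitrary, and the small residual is truncation error):

```python
import numpy as np

N = 10                                         # Fock cutoff per mode
ad = np.diag(np.sqrt(np.arange(1, N)), -1)     # single-mode a^dagger
a = ad.T
I = np.eye(N)

def mode(op, k):
    # embed a single-mode operator as mode k of three modes
    ops = [I, I, I]; ops[k] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

A = mode(a, 0)
Ad, Bd, Cd = mode(ad, 0), mode(ad, 1), mode(ad, 2)

s = 3.0
T = (Ad @ Bd + Ad @ Cd + Bd @ Cd) / s**2
psi = np.zeros(N**3); psi[0] = 1.0
term = psi.copy()
for k in range(1, 40):                         # psi = exp(T)|000> by Taylor series
    term = T @ term / k
    psi = psi + term

# (a - (b^dagger + c^dagger)/s^2)|psi> should vanish, up to truncation
resid = (A - (Bd + Cd) / s**2) @ psi
assert np.linalg.norm(resid) / np.linalg.norm(psi) < 1e-5
```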
From this it is clear that different values of $s$ will dictate limiting cases wherein $|\eta,\eta',\eta''\rangle_s$ becomes a singular eigenvalue of various choices of relative variables. (Note that in the bipartite case we could have introduced $|\eta\rangle_s$ as $|\eta,\eta'\rangle_s$ analogously, recovering (\ref{Bieta}) in the case $\eta'=-\eta^*$.) Keeping $s$ general, the eigenvalue equations become:
\begin{eqnarray}
\frac{1}{\sqrt{2}}\left(s\!+\!\frac 1s\right)(X_1\!-\!X_2)+\frac{i}{\sqrt{2}}\left(s\!-\!\frac 1s\right)(P_1\!-\!P_2)|\eta,\eta',\eta''\rangle_{s} &=& (\eta\!-\!\eta')|\eta,\eta',\eta''\rangle_{s},\nonumber\\
\frac{1}{\sqrt{2}}\left(s\!+\!\frac 1s\right)(X_2\!-\!X_3)+\frac{i}{\sqrt{2}}\left(s\!-\!\frac 1s\right)(P_2\!-\!P_3)|\eta,\eta',\eta''\rangle_{s} &=& (\eta'\!-\!\eta'')|\eta,\eta',\eta''\rangle_{s},\nonumber\\
\frac{1}{\sqrt{2}}\left(s\!-\!\frac 2s\right)(X_1\!+\!X_2\!+\!X_3)+\frac{i}{\sqrt{2}}\left(s\!+\!\frac 2s\right)(P_1\!+\!P_2\!+\!P_3)|\eta,\eta',\eta''\rangle_{s} &=& (\eta\!+\!\eta'\!+\!\eta'')|\eta,\eta',\eta''\rangle_{s}\label{NewEigen}.
\end{eqnarray}
From (\ref{NewEigen}) it is clear that the singular cases will occur for $s=1$ and $s=\sqrt{2}$. For the case $s=1$ we evidently have a singular eigenstate of the relative coordinates, while remaining a \emph{squeezed} state \cite{WallsandMilburn1994} of the total momentum. Conversely, for $s=\sqrt{2}$ we have a singular eigenstate of the total momentum, but a squeezed state of the relative coordinates.
If we construct mode operators corresponding to the Jacobi relative variables and the canonical centre-of-mass variables, say
\begin{eqnarray}
\mathfrak{a}_{rel} &=& \frac 12\left(X_1-X_3\right)+\frac i2\left(P_1-P_3\right),\nonumber\\
\mathfrak{b}_{rel} &=& \frac{1}{2\sqrt{3}}\left(X_1+X_3-2X_2\right)+\frac{i}{2\sqrt{3}}\left(P_1+P_3-2P_2\right),\nonumber\\
\mathfrak{a}_{cm} &=& \frac{1}{\sqrt{6}}\left(X_1+X_2+X_3\right)+\frac{i}{\sqrt{6}}\left(P_1+P_2+P_3\right),
\end{eqnarray}
then we find, from (\ref{NewEigen}) for general $s$:
\begin{eqnarray}
\left(s\mathfrak{a}_{rel} +\frac{1}{s}\mathfrak{a}^\dagger_{rel}\right)|\eta,\eta',\eta''\rangle_s &=& \frac{1}{\sqrt{2}} \left(\eta-\eta''\right)|\eta,\eta',\eta''\rangle_s,\nonumber\\
\left(s\mathfrak{b}_{rel} + \frac{1}{s} \mathfrak{b}^\dagger_{rel}\right)|\eta,\eta',\eta''\rangle_s &=& \frac{1}{\sqrt{6}} \left(\eta-2\eta'+\eta''\right)|\eta,\eta',\eta''\rangle_s,\nonumber\\
\left(s\mathfrak{a}_{cm} - \frac 2s \mathfrak{a}_{cm}^\dagger\right)|\eta,\eta',\eta''\rangle_s &=& \frac{1}{\sqrt{3}}\left(\eta+\eta'+\eta''\right)|\eta,\eta',\eta''\rangle_s,
\end{eqnarray}
from which it is again obvious that for $s=1$ or $s=\sqrt{2}$, canonical combinations arise in the first two, and last cases respectively. On the other hand, the non-canonical combinations appearing for $s=1$ in the third, and $s=\sqrt{2}$ in the first two cases, indicate that the squeezing parameters have the values $\frac 12 \ln 3$ in each instance.
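The quoted squeezing parameter follows from one line of arithmetic: normalising $s\,\mathfrak{a}_{rel}+\frac 1s\mathfrak{a}^\dagger_{rel}$ at $s=\sqrt{2}$ to the canonical Bogoliubov form $\cosh r\,\mathfrak{a}_{rel}+\sinh r\,\mathfrak{a}^\dagger_{rel}$ gives $\tanh r=1/2$, i.e. $r=\frac 12\ln 3$ (a quick check):

```python
import math

# At s = sqrt(2):  s*a + (1/s)*a^dagger  is proportional to
# cosh(r)*a + sinh(r)*a^dagger with cosh(r)/sinh(r) = s^2 = 2, i.e. tanh(r) = 1/2.
c, sh = 2.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)   # cosh r, sinh r
assert abs(c**2 - sh**2 - 1.0) < 1e-12               # canonical normalisation
r = math.atanh(sh / c)
assert abs(r - 0.5 * math.log(3.0)) < 1e-12          # r = (1/2) ln 3
```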
Having established the structure of the tripartite EPR-like states, we can examine their behaviour when applied to Wigner functions, and the consequences of using these states in CHSH inequalities.
\section{Tripartite Wigner Function}\label{Wignersec}
The Wigner function \cite{Wigner1932, Hilleryetal1984} was an attempt to provide the Schr\"odinger wavefunction with a probability distribution in phase space. The time-independent function for one pair of $x$ and $p$ variables is:
\begin{eqnarray}
W(x,p) = \frac{1}{\pi \hbar} \int^{\infty}_{-\infty} dy \psi^* (x+y)\psi (x-y) e^{2ipy/\hbar}.\label{wignerfn}
\end{eqnarray}
Alternatively, it has been shown \cite{Moyal1949, Royer1977} that the Wigner function can usefully be expressed in the form of quantum expectation values. For $N$ modes, the Wigner function for a state $|\psi\rangle$ may be expressed as the expectation value of the displaced parity operator, where the parity operator itself performs reflections about the phase-space points $\alpha_j=\frac{1}{\sqrt{2}}(x_j+ip_j)$, with $j=1,2,\ldots,N$ denoting the mode, and:
\begin{eqnarray}
W(\alpha_1, \alpha_2, \ldots,\alpha_N) &=& \left(\textstyle{\frac{2}{\pi}}\right)^N\langle\Pi(\alpha_1, \alpha_2, \ldots,\alpha_N)\rangle.\label{Wigner}
\end{eqnarray}
The displaced parity operator is:
\begin{eqnarray}
\Pi(\alpha_1, \alpha_2, \ldots,\alpha_N) &=& {\otimes^{N}_{j=1}}D_j(\alpha_j)(-1)^{n_j}D_j^\dagger(\alpha_j),
\end{eqnarray}
where $n_j$ are the number operators, and for each mode the Glauber displacement operators are of the form:
\begin{eqnarray}
D(\alpha)=e^{\alpha a^\dagger - \alpha^*a}.
\end{eqnarray}
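As a sanity check of this construction (an illustrative sketch, not from the paper), the displaced-parity expectation in the vacuum reproduces the known Gaussian Wigner function of $|0\rangle$, $W(\alpha)=\frac{2}{\pi}e^{-2|\alpha|^2}$:

```python
import numpy as np
from scipy.linalg import expm

N = 40                                        # Fock cutoff (ample for |alpha| < 1)
ad = np.diag(np.sqrt(np.arange(1, N)), -1)
a = ad.T
parity = np.diag((-1.0) ** np.arange(N))      # (-1)^n

alpha = 0.6 + 0.3j
D = expm(alpha * ad - np.conj(alpha) * a)     # Glauber displacement operator
vac = np.zeros(N); vac[0] = 1.0

# W(alpha) = (2/pi) <0| D(alpha) (-1)^n D^dagger(alpha) |0>
W = (2.0 / np.pi) * np.real(vac @ D @ parity @ D.conj().T @ vac)
assert abs(W - (2.0 / np.pi) * np.exp(-2.0 * abs(alpha) ** 2)) < 1e-8
```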
When we express the Wigner function in the form of (\ref{Wigner}), we can derive a set of these functions to construct the inequalities that will be discussed in section \ref{CHSHsect}. The tripartite Wigner function for $|\eta,\eta',\eta''\rangle_s$ becomes:
\begin{eqnarray}
W(\alpha,\beta,\gamma) &=& \left(\frac{2}{\pi}\right)^3 \, N_3^2 \, e^{-\frac{1}{2s^2} |\eta|^2-\frac{1}{2s^2} |\eta'|^2-\frac{1}{2s^2} |\eta''|^2}\nonumber\\
&&\times \langle 000|\exp\left(\frac 1s \left(\eta^*a+\eta'^* b + \eta''^* c\right) + \frac{1}{s^2}\left(ab+ac+bc\right)\right)\nonumber\\
&&\times\; e^{\alpha a^\dagger - \alpha^*a}e^{\beta b^\dagger -\beta^*b}e^{\gamma c^\dagger - \gamma^* c}\left(-1\right)^{n_a+n_b+n_c}e^{\alpha^* a - \alpha a^\dagger}e^{\beta^*b -\beta b^\dagger}e^{\gamma^* c - \gamma c^\dagger}\nonumber\\
&&\times\; \exp\left(\frac 1s \left(\eta a^\dagger+\eta' b^\dagger + \eta'' c^\dagger\right) + \frac{1}{s^2}\left(a^\dagger b^\dagger+a^\dagger c^\dagger +b^\dagger c^\dagger \right)\right)|000\rangle\label{WignerStart}.
\end{eqnarray}
We evaluate such matrix elements by commuting mode operators with the parity operator and rearranging using BCH identities before casting the operators into anti-normal ordered form.
Then a complete set of coherent states is inserted and integrated over. As indicated in section \ref{Bipsub} and shown in \ref{etaappendix}, we can absorb the $\eta$, $\eta'$, $\eta''$ parameters by shifting the displacement parameters up to a factor: $W_{\eta,\eta',\eta''}(\alpha',\beta',\gamma')=E(\alpha',\beta',\gamma',\eta,\eta',\eta'')W_{0,0,0}(\alpha,\beta,\gamma)$. We are free to choose instances where $E(\alpha',\beta',\gamma',\eta,\eta',\eta'')=1$, and henceforth we assume $\eta=\eta'=\eta''=0$ unless otherwise stated, and write simply $W(\alpha,\beta,\gamma)$. With this shift in displacements, the Wigner function for our tripartite state becomes:
\begin{eqnarray}
W(\alpha,\beta,\gamma) &=& \frac{8}{\pi^3} \exp \left(\frac{1}{(s^4-4)(s^4-1)}\left[ C_1(|\alpha|^2 + |\beta|^2 + |\gamma|^2) \right.\right.\nonumber\\
&&+\;\; C_2(\alpha\beta + \alpha\gamma + \beta\gamma + \alpha^*\beta^* + \alpha^*\gamma^* + \beta^*\gamma^*) \nonumber\\
&&+\;\; C_3(\alpha\beta^* + \alpha\gamma^* + \beta\alpha^* + \beta\gamma^* + \gamma\alpha^* + \gamma\beta^*)\nonumber\\
&&+\;\; \left.C_4(\alpha^2 + \beta^2 + \gamma^2 + \alpha^{*2} + \beta^{*2} + \gamma^{*2}) \right]\Bigg)\label{Wigner2},
\end{eqnarray}
where
\begin{eqnarray}
C_1=-2(s^8-s^4-4)\,,\hspace{0.25cm} C_2=4s^2(s^4-2)\,,\hspace{0.25cm} C_3=-4s^4\,,\hspace{0.25cm} C_4=4s^2.\nonumber
\end{eqnarray}
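A consistency check on these coefficients (an illustrative sketch using sympy): as $s\rightarrow\infty$ the state (\ref{trizeroeta}) reduces to the three-mode vacuum, so the exponent must reduce to $-2(|\alpha|^2+|\beta|^2+|\gamma|^2)$:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
D = (s**4 - 4) * (s**4 - 1)
C1 = -2 * (s**8 - s**4 - 4)
C2 = 4 * s**2 * (s**4 - 2)
C3 = -4 * s**4
C4 = 4 * s**2

# Vacuum limit: only the |alpha|^2 + |beta|^2 + |gamma|^2 term survives, with weight -2
assert sp.limit(C1 / D, s, sp.oo) == -2
for C in (C2, C3, C4):
    assert sp.limit(C / D, s, sp.oo) == 0
```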
The most important point to note here is the emergence of mixed conjugate/non-conjugate pairs, which do not appear in the Wigner function for the second-quantised NOPA-like optical analogue (see \ref{A1}). To make the behaviour of the Wigner function in the asymptotic region clearer, the parameters $\alpha$, $\beta$ and $\gamma$ are written in polar form, $\alpha = |\alpha|e^{i\phi_\alpha}$ etc. The Wigner function thus becomes:
\begin{eqnarray}
W(\alpha,\beta,\gamma) &=&\frac{8}{\pi^3}\exp\left(\frac{1}{(s^4-4)(s^4-1)}\left[ C_1\left(|\alpha|^2+|\beta|^2+|\gamma|^2\right)\right.\right.\nonumber\\
&& + \;2C_2\left(|\alpha||\beta|\cos(\phi_\alpha+\phi_\beta) +|\beta||\gamma|\cos(\phi_\beta+\phi_\gamma) + |\gamma||\alpha|\cos(\phi_\gamma+\phi_\alpha)\right)\nonumber\\
&&+ \;2C_3\left(|\alpha||\beta|\cos(\phi_\beta-\phi_\alpha) +
|\beta||\gamma|\cos(\phi_\gamma-\phi_\beta) + |\gamma||\alpha|\cos(\phi_\gamma-\phi_\alpha)\right)\nonumber\\
&&\left. + \;2C_4\left(|\alpha|^2\cos(2\phi_\alpha) + |\beta|^2\cos(2\phi_\beta)+ |\gamma|^2\cos(2\phi_\gamma)\right)\right]\Bigg)\label{PolarWigner}.
\end{eqnarray}
\section{CHSH inequalities and violations}\label{CHSHsect}
The CHSH inequalities \cite{Clauseretal1969} generalise the original Bell inequalities, which were set up to test the scheme proposed by Einstein, Podolsky and Rosen (EPR) in 1935 \cite{EPR1935}. Considering the measurement of an entangled pair of particles performed after they have been separated, such that no classical communication channels are open when the wavefunction collapses, EPR posited that either quantum mechanics must be incomplete, with room for a hidden variable theory, or spatiotemporal locality is violated. Bell showed that no local hidden variable theory can reproduce all the probabilities predicted by quantum mechanics.
Generalised $N$-mode Bell inequalities -- CHSH inequalities, in terms of the Bell operator expectation values $B_N$ -- exist (see for example \cite{Mermin1990}). In their bi- and tripartite form we can apply these to our regularised EPR-like states. Following \cite{BanaszekandWodkiewicz1998, vanLoockandBraunstein2001}, the CHSH inequalities for the bi- and tripartite forms are the possible combinations:
\begin{eqnarray}
B_2 &=& \Pi(0,0) + \Pi(0,\beta) + \Pi(\alpha,0) - \Pi(\alpha,\beta),\\
|B_2| &\leq& 2,\nonumber\\
B_3 &=& \Pi(0,0,\gamma) + \Pi(0,\beta,0) + \Pi(\alpha,0,0) - \Pi(\alpha,\beta,\gamma),\label{B3}\\
|B_3| &\leq& 2.\nonumber
\end{eqnarray}
In \cite{vanLoockandBraunstein2001}, the CHSH inequality constructed with the tripartite NOPA-like state (see equation (\ref{NOPA3})) is maximised by taking an all-imaginary substitution $\alpha =\beta=\gamma=i\sqrt{J}$, where $J$ is some distance measure. In the bipartite case (Figure \ref{Bipartite}), if we look in the region $s\rightarrow 1^+$ for (\ref{Bizeroeta}), an all-imaginary substitution obviously gives exactly the same maximum violation as NOPA with $r\rightarrow \infty$, both having a maximum value of $B_2^{max} \approx 2.19$ \cite{BanaszekandWodkiewicz1998, vanLoockandBraunstein2001}. The figure shows clearly that the value of $B_2$ increases as $s\rightarrow 1^+$ and $J\rightarrow 0$.
In the tripartite case, however, we must examine the wealth of other possible choices which extremise the inequality. From the form of the Wigner function in (\ref{PolarWigner}), there are some clear choices that will minimise the last term in $B_3$. Choosing all the phases $\phi_\alpha=\phi_\beta=\phi_\gamma=\frac{\pi}{2}$, and all the magnitudes $|\alpha|=|\beta|=|\gamma|=\sqrt{J}$, such that all the parameters are imaginary $(i\sqrt{J})$, equation (\ref{PolarWigner}) becomes:
\begin{eqnarray}
W(i\sqrt{J},0,0)&=&W(0,i\sqrt{J},0)=W(0,0,i\sqrt{J})= \frac{8}{\pi^3}\exp\left(-\frac{J(s^4-s^2+2)}{(s^2+1)(s^2-2)}\right),\nonumber\\
W(i\sqrt{J},i\sqrt{J},i\sqrt{J}) &=& \frac{8}{\pi^3}\exp\left(-\frac{3J(s^2+2)}{s^2-2}\right).
\end{eqnarray}
Consequently $B_3$ is (from (\ref{B3})):
\begin{eqnarray}
B_3=3\exp\left\{-\frac{J(s^4-s^2+2)}{(s^2+1)(s^2-2)}\right\}-\exp\left\{-\frac{3J(s^2+2)}{(s^2-2)}\right\}\label{ImB3}.
\end{eqnarray}
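For $1<s<\sqrt{2}$, a quick numerical scan of (\ref{ImB3}) (an illustrative sketch; the sample values of $s$ are arbitrary) shows that this all-imaginary choice never exceeds the classical bound, the maximum $B_3=2$ sitting at $J=0$:

```python
import numpy as np

def b3_imaginary(s, J):
    # Eq. (ImB3): all-imaginary substitution alpha = beta = gamma = i*sqrt(J)
    a = (s**4 - s**2 + 2.0) / ((s**2 + 1.0) * (s**2 - 2.0))
    b = 3.0 * (s**2 + 2.0) / (s**2 - 2.0)
    return 3.0 * np.exp(-a * J) - np.exp(-b * J)

J = np.linspace(0.0, 2.0, 20001)
for s in (1.01, 1.1, 1.3):
    # For 1 < s < sqrt(2) both exponents grow with J and the (negative) second
    # term dominates, so B_3 decreases monotonically from its value 2 at J = 0.
    assert b3_imaginary(s, J).max() <= 2.0 + 1e-9
```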
In the region $s\rightarrow 1^+$, $B_3$ never reaches a value greater than $2$ (Figure \ref{Tripartite2}). A violation corresponding to the EPR limit $s\rightarrow1^+$ can be found by making the choice $\alpha = -\beta = -\sqrt{J}$; $\gamma=0$, for which (\ref{PolarWigner}) gives:
\begin{eqnarray}
W(-\sqrt{J},0,0)&=&W(0,\sqrt{J},0) = \frac{8}{\pi^3}\exp\left\{-\frac {J \left( {s}^{4}+{s}^{2}+2 \right) }{ \left( s^2-1 \right) \left( s^2+2 \right) }\right\},\nonumber\\
W(-\sqrt{J},\sqrt{J},0) &=& \frac{8}{\pi^3}\exp\left(-\frac{2J(s^2+1)}{(s-1)(s+1)}\right),\nonumber\\
W(0,0,0)&=&\frac{8}{\pi^3},
\end{eqnarray}
and $B_3$ becomes (Figure \ref{Tripartite2.09}):
\begin{eqnarray}
B_3&=& 1+2\,\exp\left\{-\frac {J \left( {s}^{4}+{s}^{2}+2 \right) }{ \left( s^2-1 \right) \left( s^2+2 \right) }\right\}
-\exp\left\{-\frac {2 J \left( {s}^{2}+1 \right)}{ \left( s^2-1\right) }\right\}.
\end{eqnarray}
As $s\rightarrow 1^+$, $J\rightarrow 0$, the maximum value is $B_3^{max} \approx 2.09$, which can be checked both analytically and numerically.
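The numerical check is a few lines (an illustrative sketch): as $s\rightarrow 1^+$ the two exponents above scale as $-4u/3$ and $-4u$ with $u=J/(s^2-1)$, so $B_3$ tends to $1+2x-x^3$ with $x=e^{-4u/3}$, whose maximum over $x\in(0,1]$ is the quoted value:

```python
import numpy as np

# As s -> 1+, B_3 -> f(x) = 1 + 2x - x^3, where x = exp(-4u/3) ranges over (0, 1]
x = np.linspace(0.0, 1.0, 200001)
b3 = 1.0 + 2.0 * x - x**3
b3_max = b3.max()

x_star = np.sqrt(2.0 / 3.0)          # stationary point: f'(x) = 2 - 3x^2 = 0
assert abs(b3_max - (1.0 + 2.0 * x_star - x_star**3)) < 1e-8
print(round(b3_max, 4))              # approximately 2.0887
```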
However, what is more interesting still is exploring an auxiliary regime of the regulator, $s\rightarrow \sqrt{2^+}$, in equation (\ref{ImB3}). This is shown in Figure \ref{Tripartite2.32}. Analytically, we can approximate the maximum to lowest order in $s^2-2=\epsilon$ by writing $B_3=3x-x^\lambda$, where $x=\exp(-4J/3\epsilon)$, and $\lambda=9$, with maximum
\begin{eqnarray}
B_3^{max} &\cong& (\lambda-1)\left(\frac{3}{\lambda}\right)^{\frac{\lambda}{\lambda-1}} \cong 2.32,\label{B3max}
\end{eqnarray}
at $x=\left(\frac{3}{\lambda}\right)^{\frac{1}{\lambda-1}}$. This can be confirmed numerically for $s\rightarrow \sqrt{2^+}$, $J\rightarrow 0$. The values of $B_3^{max}$ correspond exactly to those calculated for the experimentally verified NOPA-like states, whose maximisation as $r \rightarrow \infty$ is also governed by (\ref{B3max}).
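The closed form (\ref{B3max}) can be confirmed against a direct numerical maximisation of $3x-x^\lambda$ (an illustrative sketch):

```python
import numpy as np

# Maximum of B_3 = 3x - x**lambda for lambda = 9 (auxiliary regime s -> sqrt(2)+),
# compared with the closed form (lambda-1)*(3/lambda)**(lambda/(lambda-1)).
lam = 9
x = np.linspace(0.0, 1.0, 200001)
b3 = 3.0 * x - x**lam
closed_form = (lam - 1) * (3.0 / lam) ** (lam / (lam - 1.0))

assert abs(b3.max() - closed_form) < 1e-6
print(round(closed_form, 4))         # approximately 2.3245
```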
\section{Discussion}
In this paper we have analysed tripartite CV entangled states which are natural generalisations of the classic bipartite EPR-type states (for two systems with canonical variables $X_1$, $P_1$, $X_2$, $P_2$). Given the necessity of working with normalisable states which still approximate the ideal EPR-type limit for practical implementation of CHSH inequalities, we examined a family of such regulated states parameterised by a regulating parameter $s$. This family of states was compared with the multipartite NOPA-like states. The NOPA states have been shown to manifest CHSH violations, and have the advantage of being directly accessible by experiment via standard quantum optics protocols such as multiparametric heterodyne detection techniques and beam splitter operations. However, as an extension of a direct transcription of the EPR paradox, this new family of regularised states provides an alternative, systematic description of the approach to the ideal EPR states for relative variables.
By finding expressions for the eigenstates of the regularised tripartite CV EPR-like states it became apparent that there are two regimes of the regularisation parameter in which these states become singular: in one case ($s \rightarrow 1$) we have a singular eigenstate of the relative coordinates while remaining squeezed in the total momentum; in the other, $s\rightarrow \sqrt{2}$ limit we have a singular eigenstate of the total momentum, but squeezed in the relative coordinates. In these two regimes we have explored CHSH inequalities via Wigner functions regarded as expectation values of displaced parity operators. Violations of the tripartite CHSH bound ($B_3 \le 2$) are established analytically and numerically, with $B_3 \cong 2.09$ in the canonical regime ($s \rightarrow 1^+$), as well as $B_3 \cong 2.32$ in the auxiliary regime ($s \rightarrow \sqrt{2^+}$).
Related tripartite entangled states have recently been constructed by Fan \cite{Fan2006}. Although these states are also accessible by standard quantum optics techniques, they are not true generalisations of `EPR' states. In this case, while they diagonalise one centre-of-mass variable (for example, $X_1\!+\!X_2\!+\!X_3$), they are \emph{coherent} states \cite{KlauderandSkagerstam1985} of the remaining relative Jacobi observables (that is, they diagonalise their annihilation mode operators ${\mathfrak a}$, ${\mathfrak b}$), in contrast to the $s\rightarrow \sqrt{2}$ limit of our EPR-type tripartite states, which as stated above turn out to be \emph{squeezed} states of these relative degrees of freedom (eigenstates of a linear combination $\frac{1}{\sqrt{3}}(2{\mathfrak a} + {\mathfrak a}^\dagger)$, $\frac{1}{\sqrt{3}}(2{\mathfrak b} + {\mathfrak b}^\dagger)$ in the relative mode operators, with the value $\frac 12 \ln 3$ for the squeezing parameter). In the case of the tripartite entangled states of \cite{Fan2006}, no regularisation has been given. A construction of true multipartite ideal EPR states has, however, been provided in \cite{FanandZhang1998, FanandLiu2007}, with a second-quantised form for the tripartite state (\ref{Fan3}), which may be compared with the form for the NOPA-like state (\ref{QuantizedNOPA3}). Although the Wigner functions for the tripartite NOPA-like states show peaks at zeroes of $X_i-X_j$ and $P_i+P_j$ for all distinct pairs $i,j$ \cite{BanaszekandWodkiewicz1999}, which does not appear to be consistent with simultaneous diagonalisation of commuting observables, it can be inferred from the agreement of (\ref{Fan3}) and (\ref{QuantizedNOPA3}) that indeed in the infinite squeezing limit, $\tanh(r)=1$, and with relative parameters equal to zero, the NOPA-like state does again tend to the ideal EPR state.
Since the NOPA-like states are constructed with a view to experimental realisability, and, in the bipartite case, to manufacture the specific properties of the Wigner function, this new regularisation, stemming from a direct transcription of the EPR paradox in terms of the simultaneous diagonalisation of commuting observables, could be seen as a more general or fundamental description. It also treats in more detail the specific instance of tripartite EPR-type states, compared with the comprehensive \cite{FanandLiu2007}, which finds $n$-partite representations of entangled states through their Gaussian-form completeness relation without exploring regularisations and Wigner function properties. As the proposed EPR-type regularised state produces a different Wigner function from the NOPA-type, with two singular limits, the regularisation proposed here suggests that alternative experimental routes to violations of the CHSH inequalities may be possible. For a review of Gaussian states, and discussions of the realisability of entangled states, we refer to \cite{AdessoandIlluminatiREVIEW2007, AdessoandIlluminati2005, FerraroandParis2005}, and references therein. It will be worth investigating the full extent of the constraints placed on the choices of displacement parameters entailed by the shift in $\eta$ (see \ref{etaappendix}). The current discussion might also easily be extended to include a presentation of the alternative bipartite starting point of conjugate variable choice $X_1+X_2$ and $P_1-P_2$. \cite{Trifonov1998} discusses the canonical combinations for any number of modes, but in our case it is reasonable to assume that an $N$-partite generalisation would be of the form $\exp\left(\frac{1}{s^2}\sum_{i<j}a_i^\dagger a_j^\dagger\right)$ acting on the $N$-mode vacuum. We would also expect that these states would admit standard completeness relations in the singular cases.
In conclusion, we have presented a rigorous extension of Fan and Klauder's general EPR-like states to the regularised tripartite CV case for relative variables, and highlighted the connection to current quantum optics implementations. The CHSH inequalities constructed with component Wigner functions for this case show significant violation of the classical bound, and the different choices of regularisation parameter making the state singular illustrate an interesting new feature of the structure of generalised CV EPR-like states.
\section*{Acknowledgements}
This research was partially supported by the Commonwealth of Australia through the International Endeavour Awards. We thank Robert Delbourgo for a careful reading of the manuscript, and the referees for their suggestions helping to improve the presentation of this work.
\pagebreak
\begin{appendix}
\section*{Deriving the tripartite NOPA and EPR-like Wigner functions}
\setcounter{section}{1}
\noindent In this appendix the tripartite NOPA and $|\eta, \eta', \eta''\rangle_s$ Wigner functions are derived for comparison.
\subsection{Second-quantised form and Wigner function of tripartite NOPA-like state}\label{A1}
Following \cite{vanLoockandBraunstein2000}, the tripartite NOPA-like states can be derived by applying two phase-free beamsplitters at specified angles to one momentum-squeezed state and two position-squeezed states in modes $1$, $2$ and $3$ respectively, starting from the following expression:
\begin{eqnarray}
|NOPA^{(3)}\rangle &=& B_{23}\left(\frac{\pi}{4}\right) B_{12}\left(\arccos\frac{1}{\sqrt{3}}\right)\nonumber\\ &&\times\exp\left(\frac{r}{2}\left(a^2\!-\!a^{\dagger 2}\right)\right) \exp\left(\frac{-r}{2}\left(b^2\!-\!b^{\dagger 2}\right)\right) \exp\left(\frac{-r}{2}\left(c^2\!-\!c^{\dagger 2}\right)\right)|000\rangle.
\end{eqnarray}
We can therefore use the following formula quoted in \cite{Truax1985}\footnote{Note the misprint in the sign of the last exponential in \cite{Truax1985}; see \cite{FisherNietoandSandberg1984}} for the squeezing operator $S(z)$ (where $z=re^{i\theta}$) with the Baker-Campbell-Hausdorff (BCH) relation:
\begin{eqnarray}
S(z) &=& \exp \left[\frac 12 (za^{\dagger 2} - \overline{z} a^2)\right]\nonumber\\
&=& \exp\left[\frac 12 (e^{i\theta}\tanh r)a^{\dagger 2}\right] \exp\left[-2(\ln\,\cosh r)(\frac 12 a^{\dagger}a \!+ \!\frac 14)\right]\exp \left[-\frac 12 (e^{-i\theta}\tanh r)a^2\right].\label{BCH}
\end{eqnarray}
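The disentangled form (\ref{BCH}) can be spot-checked numerically on a truncated Fock space, where truncation effects are negligible for the low-lying matrix elements at modest squeezing. A minimal sketch (illustrative only; $\theta=0$, so $z=r$):

```python
import numpy as np
from scipy.linalg import expm

dim, r = 60, 0.3                                  # truncation and squeeze amplitude
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)      # annihilation operator
ad = a.T                                          # creation operator

S = expm(0.5*r*(ad @ ad - a @ a))                 # exp[(z a^dag^2 - zbar a^2)/2]
t, c = np.tanh(r), np.log(np.cosh(r))
rhs = (expm(0.5*t*(ad @ ad))
       @ expm(-2.0*c*(0.5*(ad @ a) + 0.25*np.eye(dim)))
       @ expm(-0.5*t*(a @ a)))

# compare low-lying matrix elements, which are insensitive to the cutoff
err = np.max(np.abs(S[:6, :6] - rhs[:6, :6]))
print(err)
```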
The following beamsplitter operation\footnote{An additional overall relative sign ($180^\circ$ phase shift) between the two modes has been omitted; see for example \cite{Hamilton2000}} can then be applied, where $\theta$ here refers to the angles $\pi/4$ and $\arccos(1/\sqrt{3})$ for the $B_{23}$ and $B_{12}$ splitters respectively:
\begin{eqnarray}
B_{ab}(\theta) : \left\{\begin{array}{c} a \rightarrow a \cos\theta + b\sin\theta \\ b \rightarrow - a\sin\theta + b\cos\theta \end{array} \right.
\end{eqnarray}
and normalising, the tripartite NOPA is expressible in second-quantised form as:
\begin{eqnarray}
|NOPA^{(3)}\rangle &=& \left(1-\tanh^2(r)\right)^{3/4}\nonumber\\
&& \times\exp\left(-\frac 16\tanh r\left( a^{\dagger 2}\! + \! b^{\dagger 2} \!+ \!c^{\dagger 2}\right) + \frac 23 \tanh r\left(b^{\dagger}c^{\dagger}\! + \! a^{\dagger}b^{\dagger}\! +\! a^{\dagger}c^{\dagger}\right)\right)|000\rangle\label{QuantizedNOPA3}.
\end{eqnarray}
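The normalisation in (\ref{QuantizedNOPA3}) can likewise be checked numerically on a truncated three-mode Fock space; the following sketch (illustrative only) builds the state from the quoted exponent and confirms unit norm at modest squeezing:

```python
import numpy as np
from scipy.linalg import expm

dim, r = 10, 0.3
t = np.tanh(r)
a1 = np.diag(np.sqrt(np.arange(1.0, dim)), 1)     # single-mode annihilation
I = np.eye(dim)
a = np.kron(np.kron(a1, I), I)                    # mode operators on the triple space
b = np.kron(np.kron(I, a1), I)
c = np.kron(np.kron(I, I), a1)

ad, bd, cd = a.T, b.T, c.T
G = (-t/6.0)*(ad@ad + bd@bd + cd@cd) + (2.0*t/3.0)*(bd@cd + ad@bd + ad@cd)
vac = np.zeros(dim**3); vac[0] = 1.0              # |000>
psi = (1.0 - t**2)**0.75 * (expm(G) @ vac)        # quoted prefactor (1-tanh^2 r)^{3/4}
norm = np.linalg.norm(psi)                        # should be 1 up to truncation
print(norm)
```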
Using this state to derive the Wigner function of the form (\ref{Wigner}), the complex exponential that is produced may be rearranged using the formula (\ref{Berezin}). The tripartite NOPA Wigner function then becomes:
\begin{eqnarray}
W_{NOPA} &=&\left(\frac{2}{\pi}\right)^3 \exp\left\{\left(2-\frac{4}{1-\tanh{}^2 r}\right)\left(|\alpha|^2+|\beta|^2+|\gamma|^2\right)\right.\nonumber\\
&& -\frac {2\tanh r}{3\left(1-\tanh{}^2 r\right)} \left(\alpha^{*2}+\beta^{*2}+\gamma^{*2}+\alpha^2+\beta^2+\gamma^2\right) \nonumber\\
&&\left.+ \frac{8\tanh r}{3\left(1-\tanh{}^2 r\right)}\left(\alpha\beta + \beta\gamma + \gamma\alpha + \alpha^*\beta^* + \beta^*\gamma^* + \gamma^*\alpha^*\right)\right\}\nonumber\\
&=& \frac{8}{\pi^3} \exp\left\{\left(-2\cosh(2r)\right)\left(|\alpha|^2+|\beta|^2+|\gamma|^2\right)\right.\nonumber\\
&& -\frac 13 \sinh(2r) \left(\alpha^{*2}+\beta^{*2}+\gamma^{*2}+\alpha^2+\beta^2+\gamma^2\right)\nonumber\\
&&\left. + \frac 43 \sinh(2r)\left(\alpha\beta + \beta\gamma + \gamma\alpha + \alpha^*\beta^* + \beta^*\gamma^* + \gamma^*\alpha^*\right) \right\}.\label{NOPA3}
\end{eqnarray}
This is the result quoted in \cite{vanLoockandBraunstein2001}, and further explication can be found in that paper. This function should be compared with the Wigner function of our regularised tripartite EPR-like state, $|\eta,\eta',\eta''\rangle_s$. Further details of that derivation are given below.
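The equality of the two forms of the exponent in (\ref{NOPA3}) rests on standard hyperbolic identities; as a quick symbolic check (illustrative, using sympy), the three coefficients can be matched term by term:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
t = sp.tanh(r)

# coefficient of |alpha|^2 + |beta|^2 + |gamma|^2
c1 = sp.simplify((2 - 4/(1 - t**2) + 2*sp.cosh(2*r)).rewrite(sp.exp))
# coefficient of alpha^2 + beta^2 + gamma^2 (and conjugates)
c2 = sp.simplify((2*t/(3*(1 - t**2)) - sp.sinh(2*r)/3).rewrite(sp.exp))
# coefficient of the cross terms alpha*beta + ...
c3 = sp.simplify((8*t/(3*(1 - t**2)) - 4*sp.sinh(2*r)/3).rewrite(sp.exp))
print(c1, c2, c3)   # all zero
```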
\subsection{Derivation of conditions for $\eta=0$}\label{etaappendix}
In the interest of brevity, the conditions for shifting the $\eta$ parameters are shown below for the bipartite case. However, the analysis extends in an obvious way to the tripartite case. After BCH and anti-normal ordering, the bipartite Wigner function (tripartite given in equation (\ref{WignerStart})) becomes:
\begin{eqnarray}
W(\alpha, \beta) &=& _s\langle \eta,\eta'|\underbrace{e^{2|\alpha|^2}e^{2|\beta|^2}e^{-2\alpha^*a}e^{-2\beta^*b}e^{2\alpha a^\dagger}e^{2\beta b^\dagger}}_{F(\alpha,\beta)}(-1)^{n_a+n_b}|\eta,\eta'\rangle_s\nonumber\\
&=& \langle 00|\exp\left(-\frac {1}{4s^2}|\eta|^2 -\frac {1}{4s^2}|\eta'|^2 +\frac 1s\eta^*a +\frac 1s\eta'^*b +\frac {1}{s^2}ab\right) F(\alpha,\beta)\nonumber\\
&&\times\exp\left(-\frac {1}{4s^2}|\eta|^2 -\frac {1}{4s^2}|\eta'|^2 -\frac 1s\eta a^\dagger -\frac 1s\eta' b^\dagger +\frac {1}{s^2}a^\dagger b^\dagger\right)|00\rangle.
\end{eqnarray}
We then make the generic substitutions
\begin{eqnarray}
\alpha &=& \alpha' + A(\eta,s),\nonumber\\
\beta &=& \beta' + B(\eta',s),
\end{eqnarray}
into $F(\alpha,\beta)$. To find the expressions for $A(\eta,s)$ and $B(\eta',s)$ that will allow us to set $\eta=\eta'(=\eta'')=0$, we solve the following:
\begin{eqnarray}
2\alpha'A^* + 2\alpha'^*A+2|A|^2 - 2A^*a+2Aa^\dagger &=& \frac {1}{2s^2} |\eta|^2 - \frac 1s \eta^*a + \frac 1s \eta a^\dagger,\nonumber\\
2\beta'B^* + 2\beta'^*B+2|B|^2 - 2B^*b+2Bb^\dagger &=& \frac {1}{2s^2} |\eta'|^2 - \frac 1s \eta'^*b + \frac 1s \eta' b^\dagger.
\end{eqnarray}
Thus we can see that, if we impose the constraints
\begin{eqnarray}
\frac{\alpha'\eta^*}{s}+\frac{\alpha'^*\eta}{s}&=&0,\nonumber\\
\frac{\beta'\eta'^*}{s}+\frac{\beta'^*\eta'}{s}&=&0,
\end{eqnarray}
(i.e. $\alpha'$ real and $\eta$ imaginary or vice versa), then the expressions for $A(\eta,s)$ and $B(\eta',s)$ become
\begin{eqnarray}
A(\eta,s)=\frac{\eta}{2s}, \hspace{0.5cm} A^*(\eta,s)=\frac{\eta^*}{2s},\nonumber\\
B(\eta',s)=\frac{\eta'}{2s}, \hspace{0.5cm} B^*(\eta',s)=\frac{\eta'^*}{2s}.
\end{eqnarray}
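Because both sides of these relations are at most linear in the mode operators, the matching can be verified with commuting placeholder symbols. A minimal sympy sketch (illustrative only; starred quantities treated as independent symbols) shows that the residual mismatch is precisely the constrained combination:

```python
import sympy as sp

s, eta, etac, al, alc, a, ad = sp.symbols('s eta etac alpha alphac a ad')
A, Ac = eta/(2*s), etac/(2*s)                 # proposed shift A = eta/(2s)

lhs = 2*al*Ac + 2*alc*A + 2*A*Ac - 2*Ac*a + 2*A*ad
rhs = eta*etac/(2*s**2) - etac*a/s + eta*ad/s

# mismatch equals the combination killed by the constraint alpha'*eta^* + c.c. = 0
residual = sp.simplify(lhs - rhs - (al*etac + alc*eta)/s)
print(residual)   # 0
```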
Therefore, up to a factor, taking $\eta=\eta'(=\eta'')=0$ corresponds to a shift in the parameters of the displacement operators $\alpha'=\alpha-\frac{\eta}{2s}$, $\beta'=\beta-\frac{\eta'}{2s}$ (and $\gamma'=\gamma-\frac{\eta''}{2s}$ in the tripartite case). For the tripartite Wigner function as used in section (\ref{Wignersec}), this can be expressed as:
\begin{eqnarray} W_{\eta,\eta',\eta''}(\alpha',\beta',\gamma')&=&E(\alpha',\beta',\gamma',\eta,\eta',\eta'')W_{0,0,0}(\alpha,\beta,\gamma)\\
E(\alpha',\beta',\gamma',\eta,\eta',\eta'')&=&\exp\left(\frac 1s(\alpha'\eta^* +\alpha'^*\eta +\beta'\eta'^* +\beta'^*\eta' +\gamma'\eta''^* +\gamma'^*\eta'')\right)\ .
\end{eqnarray}
\subsection{Details of derivation of tripartite $|\eta,\eta',\eta''\rangle_s$ Wigner function}\label{A2}
The second-quantised EPR-like state is expressed as (\ref{Trieta}). This is used to find the Wigner function in the form of (\ref{WignerStart}). By commuting mode operators with the parity operator and rearranging using BCH identities, the expression becomes, in anti-normal ordered form:
\begin{eqnarray}
W &=& \left(\frac{2}{\pi}\right)^3 \, N_3^2 \, e^{-\frac{1}{2s^2} |\eta|^2-\frac{1}{2s^2} |\eta'|^2-\frac{1}{2s^2} |\eta''|^2}\nonumber\\
&&\times \langle 000|\exp\left(\frac 1s \left(\eta^*a+\eta'^* b + \eta''^* c\right) + \frac{1}{s^2}\left(ab+ac+bc\right)\right)\nonumber\\
&&\times\; e^{2|\alpha|^2}e^{2|\beta|^2}e^{2|\gamma|^2}e^{-2\alpha^*a}e^{-2\beta^*b}e^{-2\gamma^*c}e^{2\alpha a^\dagger}e^{2\beta b^\dagger}e^{2\gamma c^\dagger}\nonumber\\
&&\times\; \exp\left(\frac 1s \left(\eta a^\dagger+\eta' b^\dagger + \eta'' c^\dagger\right) + \frac{1}{s^2}\left(a^\dagger b^\dagger+a^\dagger c^\dagger +b^\dagger c^\dagger \right)\right)|000\rangle.
\end{eqnarray}
In anti-normal ordered form we may insert a complete set of coherent states $\int |u,v,w\rangle\langle u,v,w|\frac{d^2u\,d^2v\,d^2w}{\pi^3}$ such that we may rearrange the exponential according to the formula \cite{Berezin1966, Fan1990}:
\begin{eqnarray}
\int \prod^{n}_{i}\left[ \frac {d^2 z_{i}}{\pi}\right] \exp\left(-\frac 12 (z,z^*)\left(\begin{array}{cc}A & B \\ C & D \end{array}\right)\left(\begin{array}{c} z\\z^* \end{array}\right) + (\mu, \nu^*)\left(\begin{array}{c} z\\z^* \end{array}\right)\right)\nonumber\\
= \left[\det\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)\right]^{-\frac 12} \exp\left[\frac 12(\mu,\nu^*)\left(\begin{array}{cc}A & B \\ C & D \end{array}\right)^{-1} \left(\begin{array}{c} \mu\\\nu^* \end{array}\right)\right]\nonumber\\
= \left[\det\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)\right]^{-\frac 12} \exp\left[\frac 12(\mu,\nu^*)\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)^{-1} \left(\begin{array}{c} \nu^*\\\mu \end{array}\right)\right]\label{Berezin},
\end{eqnarray}
where the matrices $A$ and $D$ must be symmetric, and $C=B^T$. In this instance
\begin{eqnarray}
(z,z^*)&=& \left(u,v,w,u^*,v^*,w^*\right),\nonumber\\
(\mu,\nu^*)&=&\left(\frac 1s \eta^*-2\alpha^*,\frac 1s \eta'^*-2\beta^*,\frac 1s \eta''^*-2\gamma^*,-\frac 1s \eta+2\alpha,-\frac 1s \eta'+2\beta,-\frac 1s \eta''+2\gamma \right),
\end{eqnarray}
and we have
\begin{eqnarray}
\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)= \left(\begin{array}{cccccc} 1&0&0&0&-\frac{1}{s^2}&-\frac{1}{s^2}\\0&1&0&-\frac{1}{s^2}&0&-\frac{1}{s^2}\\0&0&1&-\frac{1}{s^2}&-\frac{1}{s^2}&0\\0&-\frac{1}{s^2}&-\frac{1}{s^2}&1&0&0\\-\frac{1}{s^2}&0&-\frac{1}{s^2}&0&1&0\\-\frac{1}{s^2}&-\frac{1}{s^2}&0&0&0&1 \end{array}\right),
\end{eqnarray}
with inverse
\begin{eqnarray}
\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)^{-1} = \frac{s^4}{\left(s^4 -4\right)\left(s^4-1\right)}\left(\begin{array}{cccccc}
s^4-3 & 1 & 1 & \frac{2}{s^2} & \frac{s^4-2}{s^2} & \frac{s^4-2}{s^2}\\
1 & s^4-3 & 1 & \frac{s^4-2}{s^2} & \frac{2}{s^2} & \frac{s^4-2}{s^2}\\
1 & 1 & s^4-3 & \frac{s^4-2}{s^2} & \frac{s^4-2}{s^2} & \frac{2}{s^2}\\
\frac{2}{s^2} & \frac{s^4-2}{s^2} & \frac{s^4-2}{s^2} & s^4-3 & 1 & 1\\
\frac{s^4-2}{s^2} & \frac{2}{s^2} & \frac{s^4-2}{s^2} & 1 & s^4-3 & 1\\
\frac{s^4-2}{s^2} & \frac{s^4-2}{s^2} & \frac{2}{s^2} & 1 & 1 & s^4-3
\end{array}\right).
\end{eqnarray}
Note also that
\begin{eqnarray}
\left[\det\left(\begin{array}{cc}C & D \\ A & B \end{array}\right)\right]^{-\frac 12} = \left[(s^{12}-6s^8+9s^4-4)/s^{12}\right]^{-\frac 12} = \frac{1}{N_3^2},
\end{eqnarray}
such that the $N_3^2$ cancel in the Wigner function.
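The determinant and the quoted inverse can be reproduced symbolically from the block structure of the matrix. A sympy sketch (illustrative only):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
K = sp.ones(3, 3) - sp.eye(3)                  # zero diagonal, ones off-diagonal
M = sp.BlockMatrix([[sp.eye(3), -K/s**2],
                    [-K/s**2, sp.eye(3)]]).as_explicit()

det = sp.cancel(M.det())
det_ok = sp.simplify(det - (s**4 - 4)*(s**4 - 1)**2/s**12) == 0

pref = s**4/((s**4 - 4)*(s**4 - 1))
Minv = M.inv()
entries_ok = all(sp.simplify(e) == 0 for e in [
    Minv[0, 0] - pref*(s**4 - 3),              # diagonal entry
    Minv[0, 1] - pref,                         # off-diagonal, same block
    Minv[0, 3] - pref*2/s**2,                  # paired entry across blocks
    Minv[0, 4] - pref*(s**4 - 2)/s**2,         # remaining cross-block entries
])
print(det_ok, entries_ok)   # True True
```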
From the argument in \ref{etaappendix}, we assume that $\eta=\eta'=\eta''=0$ unless otherwise specified, and continue to use $W(\alpha,\beta,\gamma)$. This gives equation (\ref{Wigner2}), which may now easily be compared with the Wigner function derived for the NOPA-like case (equation (\ref{NOPA3})). Further discussion of similar manipulations of the Wigner function can be found in \cite{deGosson2004}.
\subsection{Tripartite entangled state from \cite{FanandLiu2007}}
Equation $(27)$ in \cite{FanandLiu2007} provides the ideal EPR state for the tripartite entangled state:
\begin{eqnarray}
|p,\xi_2,\xi_3\rangle &=& \frac{1}{\sqrt{3}\pi^{\frac 34}}\exp\left[A+\frac{i\sqrt{2}p}{3}\sum^{3}_{i=1}a_i^\dagger+\frac{\sqrt{2}\xi_2}{3}(a_1^\dagger-2a_2^\dagger +a_3^\dagger)+\frac{\sqrt{2}\xi_3}{3}(a_1^\dagger+a_2^\dagger-2a_3^\dagger)+S^\dagger\right]|000\rangle\label{Fan3},\nonumber\\
A&\equiv& -\frac{p^2}{6} -\frac 13(\xi_2^2 +\xi_3^2-\xi_2\xi_3),\nonumber\\
S&\equiv&\frac 23 \sum^3_{i<j=1}a_ia_j-\frac 16 \sum_{i=1}^3a_i^2.
\end{eqnarray}
\end{appendix}
\pagebreak
\chapter{Introduction}
Recently classical, long wavelength information of M theory has been
successfully used to understand detailed quantum properties
of four-dimensional supersymmetric gauge theories. This work was initiated
by Witten [\Wittenone], who argued that the auxiliary Riemann surface
which appears in the Seiberg-Witten low energy effective action of
$N=2$ Yang-Mills can be naturally interpreted and moreover derived from
the geometric structure of a single M theory fivebrane.
In a recent paper
it was shown [\HLW]
that not only the geometrical structure of the Seiberg-Witten
solution, but in fact the entire low energy effective action can
be deduced from a knowledge of the classical M-fivebrane's equations of
motion describing threebrane solitons [\three].
However, while the full low energy
effective action was derived in [\HLW], the discussion
relied heavily on the $N=2$ supersymmetry to deduce
the vector zero mode equations from a knowledge of the scalar action alone.
In particular this
calculation did not discuss the origin of the vector zero
modes in detail
or how their equations might be determined independently of the
scalar zero modes. Furthermore this calculation only made use of
the purely scalar part of the M-fivebrane equations, which can be derived
from the standard `brane' action $\sqrt{-\det g}$. Thus no use was made
of the rich and unique structure of the M-fivebrane arising from the
self-dual three-form.
The purpose of this paper is to explicitly derive the
Seiberg-Witten effective action from the M-fivebrane's equations of motion
for both the scalar and vector fields.
In addition the literature has also been
concerned with deriving information on $N=1$ Yang-Mills
from the M-fivebrane. Another motivation for considering the method
presented in this paper is
as a first step towards calculating the M-fivebrane's low energy
effective action in these cases. It is also not known to
what extent the M-fivebrane can reproduce the correct low energy quantum
corrections to the classical action in these cases.
Such calculations are then potentially particularly
significant
since there is no known analogue of the Seiberg-Witten effective action
for $N=1$ Yang-Mills coupled to massless scalars.
It is commonly written that the vector part of the
Seiberg-Witten effective action may be obtained from the
six dimensional action
$$
S = \int d^6 x\ H\wedge\star H\ ,
\eqn\thewrongthingtodo
$$
as follows.
First one decomposes the three form
into $H = F^I\wedge\lambda_I$ where $\lambda_I$, $I=1,...,N$
are a basis of non-trivial one forms of the Riemann Surface and $F^{I}$
are four-dimensional $U(1)$ field strengths. The next step is to
impose the self-duality constraint on $H$ and dimensionally
reduce the resulting expression in \thewrongthingtodo\ over
the Riemann surface $\Sigma$. However,
as a text book calculation quickly reveals, since
$H$ is an odd form $H\wedge \star H = H\wedge H = 0$. Thus it is highly
unlikely that an interesting action in four dimensions can be obtained by
this procedure. Another problem with an action as a starting point
can be seen in the case that the Riemann surface has genus one. One then
has only one holomorphic form $\lambda$ and its complex conjugate ${\bar \lambda}$.
Thus the only non-zero expression that occurs is
$\int_{\Sigma} \lambda\wedge{\bar \lambda}$. However this integral is
pure imaginary and is related to ${\rm Im} \tau$ whereas both
${\rm Im} \tau$ and ${\rm Re}\tau$ appear in the Seiberg-Witten
effective action.
Indeed, there are strong objections to the use of an
action for the M-fivebrane equations of motion,
due to the chiral nature of its three form. In fact it has been shown that
no action can capture the full physics of the M-fivebrane
[\Wittentwo]. This point has again been stressed more recently by
Witten [\Witten].
In this paper we shall only consider the
equations of motion of the M-fivebrane and not attempt to
invoke an action at any time. At the end of the day it will turn out
that the resulting four-dimensional equations of motion do possess an
action formulation
(the Seiberg-Witten effective action), but this is not
surprising since there are no longer any chiral forms.
In this paper we shall use the manifestly covariant form of the
M-fivebrane's equations of motion found in [\HSW]. These equations were derived
from the superembedding formalism applied to the M-fivebrane [\HS,\BS]. In the
next section we obtain the relevant equations for the low energy motion of
$N$ threebrane solitons moving in the M-fivebrane. In section three we discuss
the solution to the self-duality condition. In sections four and five
we reduce the vector and scalar equations over the Riemann surface
respectively. In section six we discuss the exact form of an infinite number
of higher derivative contributions to the purely
scalar part of the low energy action, as predicted by the M-fivebrane. These
terms are compared with the corrections to the Seiberg-Witten effective
action obtained from calculations in Yang-Mills theory. Finally we close
with a discussion of our work in section seven.
It should be noted that there are other
formulations of the M-fivebrane equations, which furthermore can be
derived from an action [\S]. This action
relies on the appearance of an auxiliary, closed vector field $V$ with unit
length.\foot{There is a non-covariant form without an auxiliary
field; however, this may be viewed as resulting from a particular choice of
the vector field, namely $V = dx^5$.} While the form of this action is
different from \thewrongthingtodo\ and so not manifestly zero,
it is not clear what one should
take for the vector field $V$. Indeed on a generic Riemann Surface
(with genus different from one) there are topological obstructions to the
global existence of $V$. Once this problem is overcome, one would
have to determine the r\^ole of the auxiliary field in relation to
the Seiberg-Witten effective action. As mentioned above, the
derivation presented in this paper does not invoke a six-dimensional action.
Furthermore, some details of the derivation presented here could be
interpreted as suggesting that there are substantial difficulties
in obtaining the Seiberg-Witten effective
action from a six-dimensional action. This may be in accord with the statement
[\Wittentwo] that one cannot derive all of the M-fivebrane
physics from an action.
\chapter{Fivebrane Dynamics in the Presence of Threebranes}
The M theory fivebrane has a six-dimensional $(2,0)$
tensor multiplet of
massless fields on its worldvolume. The component fields of this
supermultiplet are five real scalars $X^{a'}$, a gauge field $B_{\hat m \hat n}$
whose field strength satisfies a modified self-duality condition and
sixteen spinors $\Theta ^i_\beta$. The scalars are the
coordinates transverse to the fivebrane and correspond to the
breaking of 11 dimensional translation invariance by the presence
of the fivebrane. The sixteen spinors correspond to the breaking
of half of the 32 component supersymmetry of M-theory. The
classical equations of motion of the fivebrane in the absence of
fermions and background fields are [\HSW]
$$
G^{\hat m\hat n} \nabla_{\hat m} \nabla_{\hat n} X^{a'}= 0\ ,
\eqn\eqomone
$$
and
$$
G^{\hat m \hat n} \nabla_{\hat m}H_{\hat n\hat p\hat q} = 0\ ,
\eqn\eqomtwo
$$
where the worldvolume indices are $\hat m,\hat n, \hat p=0,1,...,5$
and the world tangent indices $\hat a,\hat b,\hat c=0,1,...,5$.
The transverse indices are $a',b'=6,7,8,9,10$. We now define the
symbols that occur in the equation of motion. The usual induced metric
for a $p$-brane is given, in static gauge, by
$$
g_{\hat m \hat n} = \eta _{\hat m \hat n}+
\partial _{\hat m}X^{a'} \partial _{\hat n}X^{b'}\delta _{a' b'}\ .
\eqn\gdef
$$
The covariant derivative in the equations of motion
is defined with the Levi-Civita connection with respect to the metric
$g_{\hat m \hat n} $.
Its action on a vector field $T_{\hat n}$ is given by
$\nabla_{\hat m} T_{\hat n} = \partial _{\hat m} T_{\hat n}-
\Gamma _{\hat m \hat n}^{\hat p}T_{\hat p}$
where
$$
\Gamma _{\hat m \hat n}^{\ \ \hat p}
= \partial _{\hat m } \partial _{\hat n} X^{a'}
\partial _{\hat r} X^{b'}g^{\hat r \hat s}\delta _{a' b' }\ .
\eqn\Gammadef
$$
We define the vielbein associated with the above metric in the
usual way
$g_{\hat m\hat n}=
e_{\hat m}^{\ \hat a} \eta _{\hat a \hat b} e_{\hat n}^{\ \hat b}$.
The inverse metric $G^{\hat m\hat n}$ which occurs in the equations
of motion
is related to the usual induced metric given above by the equation
$$
G^{\hat m\hat n} = {(e^{-1})}^{\hat m}_{\ \hat c} \eta ^{\hat c \hat a}
m_{\hat a}^{\ \hat d} m_{\hat d} ^{\ \hat b} {(e^{-1})}^{\hat n}_{\ \hat b}\ .
\eqn\Gdef
$$
The matrix $m$ is given by
$$
m_{\hat a}^{\ \hat b} = \delta_{\hat a}^{\ \hat b}
-2h_{\hat a\hat c\hat d}h^{\hat b\hat c\hat d}\ .
\eqn\mdef
$$
The field $h_{\hat a\hat b\hat c}$ is an anti-symmetric three form
which is self-dual;
$$
h_{\hat a\hat b\hat c}=
{1\over3!}\varepsilon_{\hat a\hat b\hat c\hat d\hat
e\hat f}h^{\hat d\hat e\hat f}\ ,
\eqn\hsd
$$
but it is not the curl of a three form gauge field. It is related to
the field
$H_{\hat m \hat n \hat p}$ which appears in the
equations of motion
and is the curl of a gauge field, but
$H_{\hat m \hat n \hat p}$ is not self-dual.
The relationship between
the two fields is given by
$$
H_{\hat m \hat n \hat p}= e_{\hat m}^{\ \hat a}
e_{\hat n}^{\ \hat b} e_{\hat p}^{\ \hat c} {({m }^{-1})}_{\hat
c}^{\ \hat d} h_{\hat a\hat b\hat d}\ .
\eqn\Hh
$$
Clearly, the self-duality condition on $h_{\hat a\hat b\hat d}$
transforms into a
condition on $H_{\hat m \hat n \hat p}$ and vice-versa
for the Bianchi identity $dH=0$.
The appearance of a metric in the equations of motion which is
different to the usual induced metric has its origins in the
fact that the natural metric that appears for the fivebrane
has an associated inverse vielbein denoted by
${(E^{-1})}_{\hat a}^{\ \hat m}$ which
is related in the usual way through $G^{\hat m\hat n} =
{(E^{-1})}_{\hat a}^{\ \hat m}
{(E^{-1})}_{\hat b}^{\ \hat n}\eta^{\hat a\hat b}$.
The relationship between the two inverse vielbeins being
${(e^{-1})}_{\hat a}^{\ \hat m} =(m^{-1})_{\hat a}^{\ \hat b}
{(E^{-1})}_{\hat b}^{\ \hat
m}$. The inverse vielbein ${(E^{-1})}_{\hat a}^{\ \hat m}$ will play no
further
role in this paper.
This completes our discussion of the fivebrane equations of
motion and we refer the reader to reference [\HSW] for more details
of the formalism and notation.
We will be interested in fivebrane configurations that contain within
them threebrane solutions. Such solutions, which were found in [\three],
play a crucial part in the recovery of the
$N=2$ Yang-Mills theory in four dimensions and we now
summarise them. Given that the six-dimensional
coordinates of the fivebrane are denoted by
hatted variables $\hat m,\hat n = 0,1,2,...,5$
we take the world-volume
of the threebrane to be in the plane $x^\mu=(x^0,x^1,x^2,x^3)$. We
let unhatted variables refer to the coordinates transverse
to the threebrane, i.e. $x^n=(x^4,x^5)$. We will assume all
fields to depend only on these transverse coordinates. In fact, of the
five transverse scalars of the fivebrane we take only two of them,
$X^6$ and $X^{10}$, to be non-constant.
We also take
the gauge field strength
$H_{\hat \mu \hat \nu \hat \rho}=0$. Examining the
supersymmetric variation of the spinor we find that this configuration
preserves half of the original sixteen supersymmetries, leaving eight
supersymmetries, provided the two transverse coordinates $X^6$ and
$X^{10}$ obey the Cauchy-Riemann equation with respect to $x^4$ and
$x^5$ [\three]. As such, we introduce the variables
$$
z= x^4 + i x^5,\ \ s= {X^6+iX^{10}} \ ,
\eqn\zsdef
$$
and conclude that the Bogomol'nyi condition is simply that
$s$ depends only on $z$ and not $\bar z$ (i.e.
$s=s(z)$). As seen from the M-theory perspective this result means
that the presence of threebranes within the fivebrane
implies that the fivebrane is
wrapped on a Riemann surface in the space with complex coordinates
$s$ and $z$ [\HLW]. In fact, the threebranes correspond to the self
intersections of the M-theory fivebrane.
The fivebrane equations then reduce to the
flat Laplacian [\three]
$$
\delta^{mn} \partial _n \partial _m s =0\ ,
\eqn\flatlap
$$
which is automatically satisfied due to the Bogomol'nyi condition.
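Indeed, since the flat Laplacian factorises as $\delta^{mn}\partial_m\partial_n = 4\partial\bar\partial$, any function of $z$ alone is annihilated by it. A one-line sympy illustration with an arbitrarily chosen holomorphic sample (illustrative only):

```python
import sympy as sp

x4, x5 = sp.symbols('x4 x5', real=True)
z = x4 + sp.I*x5
s = sp.log(z**2 + 3*z + 1)                 # any holomorphic function of z alone
lap = sp.simplify(sp.diff(s, x4, 2) + sp.diff(s, x5, 2))
print(lap)   # 0
```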
We are interested in field configurations which are everywhere
smooth except at $z=\infty$. Some such configurations are given
by [\HLW]
$$
s = s_0-\ln\left(B +\sqrt{Q} \right)\ ,
\eqn\hats
$$
where we have introduced the polynomial
$Q = B^2(z) - \Lambda^{2N}$ and
$s_0$ and $\Lambda$
are constants.
The quantity $B(z)$ is a $N$th order polynomial in $z$ which can be
written in the form
$$
B(z) = z^{N} - u_{N-1} z^{N-2} - u_{N-2} z^{N-3} - ... - u_{1}\ .
\eqn\Bdef
$$
Following [\Wittenone] we introduce the variable $t= e^{-s}$
whereupon we find
that the threebrane solution implies the equation
$$
F(t,z) = t^2 - 2B(z)t + \Lambda^{2N} = 0 \ .
\eqn\Ftwo
$$
We recognise this equation, after a
suitable shift in $t$ to absorb the term linear in $t$, to be
the standard equation for a
hyper-elliptic curve of genus
$N-1$. The
$u_i$'s correspond to the moduli of this Riemann surface.
For simplicity, we will be interested for most of this paper in
the case of $N=2$, but the results are readily extended to
the case of $N\ge 3$.
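That the solution lands on this curve can be checked directly: with $s_0=0$ one has $t=e^{-s}=B+\sqrt{Q}$, and $F(t,z)$ vanishes identically. A sympy sketch for the illustrative case $N=2$:

```python
import sympy as sp

z, Lam, u1 = sp.symbols('z Lambda u1')
N = 2
B = z**N - u1                              # B(z) for N = 2
Q = B**2 - Lam**(2*N)

t = B + sp.sqrt(Q)                         # t = e^{-s} with s_0 = 0
F = sp.simplify(sp.expand(t**2 - 2*B*t + Lam**(2*N)))
print(F)   # 0
```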
We are interested in the low energy motion of $N$ threebranes and so
take the zero modes of the threebrane to depend on
the worldvolume coordinates $x^\mu,\ \mu=0,1,\dots,3$, of the threebrane.
Thus we will arrive at a theory with eight supersymmetries
living on the four dimensional threebrane
worldvolume.
The moduli $u_i$ of the Riemann surface are related to the positions of
the $N$ threebranes and as such we take them to depend on the
world-volume i.e.
$u_i(x^{\mu})$. For example the $u_N$ coefficient, which is the sum of the
roots of $B$, represents the centre of mass coordinate for the threebrane and
has been set to zero.
These will turn out to be the $2N$ complex scalars in the four
dimensional theory. There are also
$4N$ further bosonic moduli associated with large two form gauge
transformations at infinity. These are associated with the field
strength
$H_{\hat m\hat n\hat p}$ and their precise form will be discussed
later. These zero modes correspond to the gauge fields of
the resulting four dimensional theory on the four
dimensional threebrane world-volume. We also have $8N$ fermionic zero modes
corresponding to the broken supersymmetries in the presence of
the threebrane, these correspond to the spinors in the resulting four
dimensional theory. Taken together the zero modes make up
a $N=2$ super ${(U(1))}^N$ multiplet which lives on the four dimensional
threebrane world-volume.
This procedure is analogous to the more simple case of
monopole solutions to
$N=2$ Yang-Mills theory in four dimensions. At low energy the
motion of the monopoles can be described by taking the
moduli of the monopole solution to depend on the worldline
coordinate $t$. For the case of one monopole the moduli
space is just given by the transverse coordinates
$\underline x$ and the coordinate corresponding to large gauge
transformations $\theta$.
Our main task in this section is to derive the consequences of the
fivebrane classical dynamics, encoded in the equations of motion
\eqomone\ and \eqomtwo ,
for the $x^\mu$ dependent moduli of the threebrane.
The first step
in this direction is to work out in detail the geometry of the
fivebrane in the presence of these zero modes. We work with the
coordinates $z = x^4+ix^5\quad\bar z=x^4-ix^5$,
introduced above, for
which the Euclidean metric takes the form
$\eta_{z\bar z}={1\over2}\quad\eta^{z\bar z}=2$ and
$\eta_{z z}=0 =\eta_{\bar z\bar z}$. We also define the derivatives
$\partial= {\partial \over \partial z}$ and
$\bar \partial= {\partial \over \partial \bar z}$. In these
coordinates we find, for example, that
$ |\partial s|^2=\partial s\bar \partial \bar s
={1\over2}\delta ^{mn}\partial_ns\partial_m\bar s$.
The usual induced metric of the fivebrane, in the static gauge, and
in the presence of the threebrane takes the form
$$
g_{\hat n\hat m}
=\eta_{\hat n\hat m}+{1\over2}(\partial_{\hat
m}s\partial_{\hat n}\bar s+\partial_{\hat n}s\partial_{\hat m}\bar s)\ .
\eqn\gbrane
$$
For future use we list the individual components in the
longitudinal and transverse directions to the threebrane
$$\eqalign{
g_{z\mu}&={1\over2}\partial s\partial_\mu\bar s= {(g_{\bar z \mu})}^*\ ,\cr
g_{\mu \nu}&=\eta_{\mu\nu}+{1\over2}(\partial_\mu
s\partial_\nu\bar s+\partial_\nu s\partial_\mu\bar s)\ ,\cr
g_{z\bar z}&= g_{\bar z z}={1\over 2}(1+|\partial s|^2)\ ,\cr
g_{\bar z\bar z}&= g_{z z} = 0\ .\cr}
\eqn\gcomplex
$$
It is straightforward, if a little tedious, to construct the
inverse of this metric and the result is
$$
g^{\hat m\hat n}
=\eta^{\hat m\hat n}+\alpha(\partial^{\hat
m}s\partial^{\hat n}\bar s
+\partial^{\hat m}\bar s\partial^{\hat n}s)
+\beta\partial^{\hat m}s\partial^{\hat n}s
+\bar\beta\partial^{\hat
m}\bar s\partial^{\hat n}\bar s\ ,
\eqn\ginv
$$
where
$$\eqalign{
\alpha&={1\over2}{(1+|\partial s|^2+{1\over2}|\partial_\mu
s|^2)\over\left\{{1\over4}(\partial_\mu\bar s)^2(\partial_\mu
s)^2-\big(1+|\partial s|^2+{1\over2}(|\partial_\mu
s|)^2\big)^2\right\}} \ ,\cr
\beta
&=-{\partial_\mu\bar s \partial^\mu\bar s\over\left\{(\partial_\mu\bar
s)^2(\partial_\mu s)^2-4(1+|\partial s|^2+{1\over2}|\partial_\mu
s|^2)^2\right\}}\ .\cr}
\eqn\alphabetadef
$$
We will only be interested in the low energy action and
it will prove useful to list the
components of the inverse metric to second order in spacetime derivatives.
The results are given by
$$\eqalign{
g^{\mu\nu}&=\eta^{\mu\nu}
-{1\over2}(1+|\partial s|^2)^{-1}(\partial^\mu s\partial^\nu\bar s
+\partial^\mu\bar s\partial^\nu s)
+O((\partial_\mu s)^3)\ , \cr
g^{\mu z}&= -(1+|\partial s|^2)^{-1}\partial ^\mu s {\bar \partial} \bar
s +O((\partial_\mu s)^3) = {(g^{\mu \bar z})}^*\ ,\cr
g^{zz}&= (1+|\partial s|^2)^{-2} \partial _\rho s
\partial ^\rho s
\bar \partial \bar s\bar \partial\bar s +
O((\partial_\mu s)^3)= {(g^{\bar z \bar z})}^*\ , \cr
g^{z\bar z}&={2 \over 1 + |\partial s|^2}
+ {\partial_{\mu}s\partial^{\mu}{\bar s}|\partial s|^2\over (1 + |\partial s|^2)^2}
+ O((\partial_\mu s)^3)\ .\cr}
\eqn\glowest
$$
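These expansions can be checked numerically by building the full induced metric for randomly chosen small $\partial_\mu s$ and a finite $\partial s$, inverting it exactly, and comparing with the low-order expressions. A sketch (illustrative only; coordinate order $x^0,\dots,x^3,z,\bar z$ with $\eta_{z\bar z}=\frac12$ and one choice of signature):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3
s_mu = eps*(rng.standard_normal(4) + 1j*rng.standard_normal(4))   # d_mu s
p = 0.7 + 0.3j                                                    # ds = d_z s

eta4 = np.diag([-1.0, 1.0, 1.0, 1.0])
ds = np.concatenate([s_mu, [p, 0.0]])                  # (d_mu s, d_z s, d_zbar s)
dsb = np.concatenate([np.conj(s_mu), [0.0, np.conj(p)]])

eta6 = np.zeros((6, 6), dtype=complex)
eta6[:4, :4] = eta4
eta6[4, 5] = eta6[5, 4] = 0.5
g = eta6 + 0.5*(np.outer(ds, dsb) + np.outer(dsb, ds))
ginv = np.linalg.inv(g)

P2 = abs(p)**2
s_up = eta4 @ s_mu                                     # raise the 4d index
pred = eta4 - 0.5/(1 + P2)*(np.outer(s_up, np.conj(s_up))
                            + np.outer(np.conj(s_up), s_up))
err_munu = np.max(np.abs(ginv[:4, :4] - pred))
err_muz = np.max(np.abs(ginv[:4, 4] + s_up*np.conj(p)/(1 + P2)))
print(err_munu, err_muz)    # both of order eps^3 or smaller
```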
We also require the vielbein associated with the usual induced
metric (i.e.
$e_{\hat n}{}^{\hat a}\eta_{\hat a\hat b}e_{\hat m}{}^{\hat b}
=g_{\hat n\hat m}$).
To the order in spacetime derivatives to which we are working
the components of the vielbein are given by
$$\eqalign{
e_\mu{}^z
&={1\over(1+|\partial s|^2)^{1\over2}}\partial_\mu s
{\bar \partial} \bar s;\quad e_z{}^\mu=0 \ ,\cr
e_\mu{}^{\bar z}&={1\over(1+|\partial s|^2)^{1\over2}}\partial_\mu\bar
s\partial s;
\quad e_{\bar z}^{\ \mu}=0 \ ,\cr
e_z ^{\ z} &= e_{\bar z}^{\ \bar z} = (1+|\partial s|^2)^{1\over2} \ ,\cr
e_\mu ^{\ a}&=\delta_\mu ^{\ a}\ ,\quad
e_{\bar z}^{\ z}=e_{z}^{\ \bar z}=0 \ .\cr}
\eqn\vielbein
$$
Finally, we compute the Christoffel symbol given in equation \Gammadef. For
the configuration of interest to us it becomes
$$
\Gamma_{\hat n\hat m}^{\ \ \ \hat r}
=(\partial_{\hat n}\partial_{\hat m}
X^{a'})\partial_{\hat p}
X_{a'}g^{\hat p\hat r}
={1\over2}(\partial_{\hat n}\partial_{\hat m} s
\partial_{\hat p}\bar s g^{\hat p\hat r}+\partial_{\hat
n}\partial_{\hat m}\bar s\partial_{\hat p} s g^{\hat p\hat r})\ .
\eqn\Gammais
$$
To the order to which we are working we find, for example, that
$$\eqalign{
\Gamma_{\mu\nu}{}^{\rho}&=0\ ,\cr
\Gamma_{\mu\nu}{}^z&
=\partial_{\mu}\partial_{\nu} s\bar \partial \bar s
(1+|\partial s|^2)^{-1}\ ,\cr
\Gamma_{\mu\nu}{}^{\bar z}
&=\partial_\mu\partial_\nu\bar s\partial s (1+|\partial s|^2)^{-1}\ .\cr}
\eqn\Gammalowest
$$
Having computed the geometry of the fivebrane in the presence of the
threebrane zero modes, we can now evaluate the bosonic equations of
motion for the fivebrane. While the order of the $\partial _\mu s$
is clear from the above expressions we must also establish the order
of the spacetime derivatives in the
gauge field strength $H_{\hat m\hat n\hat p}$. The precise form of this
object follows from solving the self-duality condition and is
given in the next section. We note here that
$H_{\mu\nu z}= (H_{\mu\nu \bar z})^*$ is first
order in spacetime derivatives, while $H_{\mu\nu\rho}$ is second order
in spacetime derivatives and $H_{\mu z{\bar z}}=0$.
We begin with the scalar equation
of equation \eqomone. Using \mdef\ to second order in spacetime
derivatives this equation can be written as
$$
g^{\hat m\hat n}\nabla_{\hat m}\nabla_{\hat n} s
-4(e^{-1})^{z}_{\ z}h^{z\hat c\hat d}h_{z\hat c\hat d}
(e^{-1})^z_{\ z}\nabla_z \nabla_z s =0\ .
\eqn\scalarone
$$
Using equation \Gammalowest\ for the
Christoffel symbol the
term $g^{\mu \nu}\nabla_\mu \nabla_\nu s$ becomes
$$
g^{\mu\nu}\nabla_\mu\nabla_\nu s
=g^{\mu\nu}(\partial_\mu\partial_\nu
s-\Gamma_{\mu\nu}{}^{\hat n}\partial_{\hat n}s)
=g ^{\mu\nu}\partial_\mu\partial_\nu s
\left( 1-
{1\over 2} \partial _{\hat p }\bar s g^{\hat p \hat n} \partial _{\hat n } s
\right) \ ,
\eqn\scalartwo
$$
The final factor in this equation can be evaluated
to be ${(1+|\partial s|^2)}^{-1}$. The other terms
can be processed in a similar way and one finds that the scalar
equation of motion is given by
$$
(g^{\mu\nu}\partial_\mu\partial_\nu s+2g^{\mu
z}\partial_\mu\partial_zs+g^{zz}\partial_z\partial_zs
-4(e^{-1})^{z}_{\ z}h^{z\hat c\hat d}h_{z\hat c\hat d}
(e^{-1})^z{}_{\ z}\partial \partial s)
{1\over(1+|\partial s|^2)}=0\ .
\eqn\scalarthree
$$
Substituting the expressions for the
inverse metric to the appropriate order in (four-dimensional)
spacetime derivatives, we
find the scalar equation becomes
$$
{1\over(1+|\partial s|^2)} E=0\ ,
\eqn\scalarfour
$$
where
$$
E\equiv \eta^{\mu\nu}\partial_\mu\partial_\nu s
-\partial_z\left\{{(\partial_\varrho s\partial^\varrho s)
\bar \partial\bar s\over(1+|\partial s|^2)}\right\}
-{16\over(1+|\partial s|^2)^2}
H_{\mu\nu\bar z}H^{\mu\nu}{}_{\bar z}\partial \partial s
=0\ .
\eqn\Escalar
$$
Let us now evaluate the vector equation \eqomtwo. To the order in
spacetime derivatives to which we are working we can set
$m^{\ \hat a}_{\hat b}= \delta ^{\ \hat a}_{\hat b}$
and the vector equation becomes
$$
g^{\hat m\hat n}\nabla_{\hat m}H_{\hat n\hat p\hat q}=0\ .
\eqn\vectorone
$$
Taking $(\hat p,\hat q)=(\mu,\nu)$
the equation can be shown to become
$$E_{\mu\nu}\equiv
\partial_zH_{\mu\nu\bar z}+\partial_{\bar z}H_{\mu\nu z}=0\ ,
\eqn\Eoff
$$
after discarding the factor $(1+|\partial s|^2)^{-1}$.
In finding this last result we
have, for example, discarded spacetime derivatives acting on
$H_{\mu\nu\rho}$
as such terms would be cubic in spacetime derivatives.
Taking $(\hat p,\hat q)=(\nu,z)$ the equation becomes
$$
g^{\hat m \hat n}\{\partial_{\hat m}H_{\hat n\nu z}-\Gamma_{\hat
m \hat n}{}^{\hat p}
H_{\hat p\nu z}-\Gamma_{\hat m\nu}{}^{\hat p}H_{\hat n
\hat pz}-\Gamma_{\hat m z}{}^{\hat p}H_{\hat n \nu\hat p}\}=0\ .
\eqn\vectortwo
$$
These terms can be processed as for the scalar equation,
for example, we find that
$$
-g^{\hat m \hat n }\Gamma_{\hat m z}^{\ \ \ \hat r}H_{\hat n \nu \hat r}
=- \partial_z \left(
{\partial^\mu s\bar \partial \bar s\over
(1+|\partial s|^2)}\right) H_{\mu\nu z}
+ \partial_z \left(
{\partial^\mu {\bar s} \partial s\over
(1+|\partial s|^2)}\right) H_{\mu\nu {\bar z}}\ .
\eqn\vectorfive
$$
Evaluating the other terms in a similar way the vector equation
becomes
$$
\partial^\mu H_{\mu\nu z}-{\partial_\mu\bar s
\partial s\over(1+|\partial s|^2)}\partial_{\bar z}H_{\mu\nu z}
-\partial_z\left\{{\bar \partial \bar s\partial_\mu sH_{\mu\nu z}
\over(1+|\partial s|^2)}\right\}
+\partial_z\left\{{\partial s\partial_\mu{\bar s}\over(1+|\partial
s|^2)}\right\}H_{\mu\nu\bar z}=0\ .
\eqn\vectorsix
$$
Finally, using equation \Eoff\
we can rewrite this as
$$
E_{\nu z}\equiv
\partial^\mu H_{\mu\nu z}-\partial_zT_{\nu}=0\ ,
\eqn\Evect
$$
where
$$
T_{\nu} ={\bar \partial \bar
s\partial^\mu s\over(1+|\partial s|^2)}H_{\mu\nu z}
-{\partial s\partial^\mu\bar s\over(1+|\partial s|^2)}H_{\mu\nu\bar z}\ .
\eqn\Tdef
$$
\chapter{The Self-Dual Three Form}
Although the field $H_{\hat m \hat n \hat p}$ is a curl it does not obey a
simple self-duality condition. On the other hand,
the field $h_{\hat a \hat b \hat c}$
is not a curl but does obey a simple self-duality condition,
namely
$$
h_{\hat a\hat b\hat c}={1\over3!}\varepsilon_{\hat a\hat b\hat c\hat d\hat
e\hat f}h^{\hat d\hat e\hat f}\ .
\eqn\hsd
$$
The strategy we adopt is to solve the self-duality condition for
$h_{\hat a\hat b\hat c}$, use equation \Hh\ which relates
$H_{\hat m \hat n \hat p}$ to $h_{\hat a\hat b\hat c}$, and deduce
the consequences for $H_{\hat m \hat n \hat p}$.
We adopt the convention that all indices on $h_{\dots}$ are always tangent
indices.
Upon taking the various choices for the indices ${\hat a\hat b\hat c}$
we find that equation \hsd\ becomes
$$\eqalign{
h_{abz}&={i\over2}\varepsilon_{abcd}h^{cd}_{\ \ z}\ ,\cr
h_{ab\bar z}&=-{i\over2}\varepsilon_{abcd}h^{cd}_{\ \ \bar z}\ ,\cr
h_{z\bar za}&={1\over3!}{i\over2}\varepsilon_{abcd}h^{bcd}\ ,\cr
h_{bcd}&=-2i\varepsilon_{bcde}h_{z\bar z}^{\ \ \ e}\ .\cr}
\eqn\hsdcmpt
$$
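As a consistency check on the factors of $i$ in equation \hsdcmpt, note that in Lorentzian signature the four-dimensional duality map $h_{ab}\mapsto {i\over2}\varepsilon_{abcd}h^{cd}$ squares to the identity on antisymmetric tensors, so that its $\pm1$ eigenspaces (the self-dual and anti-self-dual parts) are well defined. The following numerical sketch is illustrative only and not part of the original text; the conventions $\varepsilon_{0123}=+1$ and $\eta_{ab}={\rm diag}(-1,1,1,1)$ are our assumptions.

```python
import itertools
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat tangent-space metric, indices a=0..3

# Levi-Civita symbol eps[a,b,c,d] with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sign = 1
    p = list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[perm] = sign

def dual(h):
    """The map h_{ab} -> (i/2) eps_{abcd} h^{cd}, raising indices with eta."""
    return 0.5j * np.einsum('abcd,cd->ab', eps, eta @ h @ eta)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
h = A - A.T                             # random antisymmetric h_{ab}

# In Lorentzian signature the map is an involution, so eigenvalues are +1 and -1
assert np.allclose(dual(dual(h)), h)
```

In particular, $h+{i\over2}\varepsilon h$ is then automatically self-dual for any antisymmetric $h$.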
Hence the independent components can be taken to be
$h_{abz},\ h_{ab\bar z}$ and $h_{z\bar z e}$.
We are not interested in the most general such three form, but only in
one which corresponds to the zero modes of the three form field
strength associated with
finite gauge transformations at infinity. As such, we
set $h^{}_{z\bar ze}=h^{}_{abc}=0$.
We now deduce $H_{\hat n\hat m\hat p}$ from
Equation \Hh\ which at the order we are working takes the form
$$
H_{\hat n\hat m\hat p}=e_{\hat n}{}^{\hat a}e_{\hat m}{}^{\hat b}
e_{\hat p}{}^{\hat c}h_{\hat a\hat b\hat c}\ .
\eqn\Hhtwo
$$
Taking $(\hat n\hat m\hat p)=(\mu,\nu,z)$, using the form of the vielbein
of equation \vielbein, and working to at most second order in
spacetime derivatives we find that
$$
H_{\mu\nu z}=\delta_\mu{}^a\delta_\nu{}^b(1+|\partial
s|^2)^{1\over2}h^{}_{abz}\ ,
\eqn\Hhthree
$$
since the vielbein $e_z{}^{\hat c}$ can only have $\hat c=z$ to order
$(\partial_\mu s)^1$.
Taking $(\hat n\hat m\hat p)=(\mu,z,\bar z)$ equation \Hh\ becomes
$$
H_{\mu z\bar z}= e_\mu{}^ae_z{}^{z}e_{\bar z}{}^{\bar z}
h^{}_{az\bar z}=0 \ .
\eqn\Hhfour
$$
Finally taking $(\hat n\hat m\hat p)=(\mu,\nu, \rho)$, we find that
$$\eqalign{
H_{\mu\nu\varrho}&=
3\delta_{[\mu}{}^a\delta_\nu{}^b(e_{\varrho]}{}^z
h_{abz}+e_{\varrho]}{}^{\bar z}h_{ab\bar z})\ , \cr
&={3\over (1+|\partial s|^2)^{1\over2}}
\delta_{[\mu}{}^a\delta_\nu{}^b(\partial_{\varrho]}s\partial_{\bar z}
{\bar s}h_{abz}+\partial_{\varrho]}\bar s\partial_zsh_{ab\bar z})\ .\cr}
\eqn\Hhfive
$$
Since $h_{abz}$ and $h_{ab\bar z}$ obey the self-duality conditions of equation
\hsd,
it follows that $H_{\mu\nu z}$ and $H_{\mu\nu \bar z}$ will obey the
self-duality relations
$$
H_{\mu\nu z}={i\over2}\varepsilon_{\mu\nu\varrho\kappa}
H^{\varrho \kappa}_{\ \ \ z};\
H_{\mu\nu\bar z}=-{i\over2}\varepsilon_{\mu\nu\varrho \kappa}
H^{\varrho \kappa}_{\ \ \ \bar z}\ .
\eqn\Hsd
$$
Substituting for $H_{\mu\nu z}$ and $H_{\mu\nu \bar z}$ in equation \Hhfive\
we find that $H_{\mu\nu\rho}$ can be written as
$$
H_{\mu\nu\varrho}={3\over(1+|\partial
s|^2)}(\partial_{[\varrho}s\bar \partial \bar
sH_{\mu\nu]z}+\partial_{[\varrho}\bar s\partial s H_{\mu\nu]\bar z})\ ,
\eqn\Hsdtwo
$$
which in turn can be rewritten as
$$
H_{\mu\nu\varrho}={i\over(1+|\partial
s|^2)}\varepsilon_{\mu\nu\varrho\lambda}(\partial^\tau\bar
s\partial s H_{\lambda\tau\bar z}-\partial^\tau s\bar \partial {\bar s}
H_{\lambda\tau z})\ .
\eqn\Hoff
$$
In the previous section we worked out the equations of motion
for the three form and in this section we have solved the self-duality
condition arising in the fivebrane dynamics to find
$$\eqalign{
H = & {1\over 2!}{H}_{\mu\nu z}dx^{\mu}\wedge dx^{\nu} \wedge dz
+ {1\over 2!}{\bar H}_{\mu\nu{\bar z}}dx^{\mu}\wedge dx^{\nu} \wedge d{\bar z} \cr
&+{1\over 2!}{i\over(1+|\partial s|^2)}
\varepsilon_{\mu\nu\varrho\lambda}(\partial^\tau\bar
s\partial_zsH_{\lambda\tau\bar z}-\partial^\tau s\partial_{\bar z}{\bar s}
H_{\lambda\tau z})
dx^{\mu}\wedge dx^{\nu} \wedge dx^{\varrho}\ .}
\eqn\Hform
$$
Finally we should
check that the three form
$H$ is closed (to second order in four-dimensional derivatives):
$$\eqalign{
dH &=
{1\over 2!}\partial_{\lambda}{H}_{\mu\nu z}
dx^{\lambda}\wedge dx^{\mu}\wedge dx^{\nu} \wedge dz
+ {1\over 2!}\partial_{\lambda}{\bar H}_{\mu\nu{\bar z}}
dx^{\lambda}\wedge dx^{\mu}\wedge dx^{\nu} \wedge d{\bar z}\ \cr
&-{1\over 3!}\partial_{z}H_{\mu\nu\lambda}
dx^{\mu}\wedge dx^{\nu} \wedge dx^{\lambda}\wedge dz
-{1\over 3!}\partial_{{\bar z}}H_{\mu\nu\lambda}
dx^{\mu}\wedge dx^{\nu} \wedge dx^{\lambda}\wedge d{\bar z} \cr
&-{1\over 2!}\partial_{z}H_{\mu\nu{\bar z}}
dx^{\mu}\wedge dx^{\nu} \wedge d{\bar z}\wedge dz
+{1\over 2!}\partial_{{\bar z}}H_{\mu\nu z}
dx^{\mu}\wedge dx^{\nu} \wedge dz\wedge d{\bar z}\ ,\cr
&= {i\over 3!}\varepsilon^{\ \ \ \ \mu}_{\rho\lambda\nu}E_{\mu z}
dx^{\rho}\wedge dx^{\lambda}\wedge dx^{\nu} \wedge dz
- {i\over 3!}\varepsilon^{\ \ \ \ \mu}_{\rho\lambda\nu}E_{\mu {\bar z}}
dx^{\rho}\wedge dx^{\lambda}\wedge dx^{\nu} \wedge d{\bar z} \ ,\cr
&+ {1\over 2!}
E_{\mu\nu}dx^{\mu}\wedge dx^{\nu}\wedge dz\wedge d{\bar z}\ .\cr}
\eqn\Hclosed
$$
Thus
one can readily verify that the conditions following from $dH=0$ are just the
equations of motion found in the previous section, as indeed
should be the case.
We complete this discussion by solving equation \Hsd\ in terms of a
real field $F_{\mu\nu}$ and its four-dimensional
dual $\star F_{\mu\nu}= {1\over 2}
\epsilon _{\mu\nu\lambda\tau}F^{\lambda \tau}$. Writing $H_{\mu\nu z}$
as an arbitrary linear combination of these two fields we find that the
only solution to equation \Hsd\ is given by
$$
H_{\mu\nu z} = \kappa {\cal F}_{\mu\nu}
\eqn\Hdef
$$
where ${\cal F}_{\mu\nu}= F_{\mu\nu} + i\star F_{\mu\nu}$ and
$\kappa$ is an as yet undetermined quantity.
Since $H_{\mu\nu \bar z}$ is the complex conjugate of
$H_{\mu\nu z}$ we conclude that
$$
H_{\mu\nu \bar z} = {\bar \kappa}{\bar {\cal F}}_{\mu\nu}
\eqn\Hbardef
$$
where ${\bar {\cal F}}_{\mu\nu}= F_{\mu\nu} -i\star F_{\mu\nu}$.
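One can check directly that this combination solves the self-duality condition \Hsd: since $\star\star F=-F$ in Lorentzian signature, $\star{\cal F}=\star F-iF=-i{\cal F}$, and hence ${i\over2}\varepsilon_{\mu\nu\varrho\kappa}{\cal F}^{\varrho\kappa}={\cal F}_{\mu\nu}$ for any real antisymmetric $F_{\mu\nu}$. A short numerical illustration of this fact (not part of the original derivation; the conventions $\varepsilon_{0123}=+1$ and $\eta_{\mu\nu}={\rm diag}(-1,1,1,1)$ are our assumptions):

```python
import itertools
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sign = 1
    p = list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[perm] = sign

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = A - A.T                                  # arbitrary real antisymmetric F_{mu nu}

starF = 0.5 * np.einsum('mnlt,lt->mn', eps, eta @ F @ eta)
calF = F + 1j * starF                        # {\cal F} = F + i (star F)

# star(star F) = -F in Lorentzian signature ...
assert np.allclose(0.5 * np.einsum('mnlt,lt->mn', eps, eta @ starF @ eta), -F)

# ... hence calF = (i/2) eps calF with indices raised by eta
assert np.allclose(0.5j * np.einsum('mnrk,rk->mn', eps, eta @ calF @ eta), calF)
```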
In order to satisfy equation \Eoff\ one finds
that $\kappa dz$ must be a holomorphic one form on $\Sigma$.
Therefore, for a Riemann surface of
genus one, we must set
$\kappa = \kappa_0\lambda_z$, where
$\lambda _z = ds/du$
is the unique (up to scaling)
holomorphic one form on $\Sigma$
and $\kappa_0$ is independent of $z$ and ${\bar z}$.
In fact, we will take
$$
H_{\mu\nu z} = {ds\over da}{\cal F}_{\mu\nu}
= \left({da\over du}\right)^{-1} {\cal F}_{\mu\nu} \lambda_z \ .
\eqn\H
$$
Here $a$ is the scalar mode used in the Seiberg-Witten theory [\SW]
which we will define below. We will also see below that $\kappa dz$ is a
holomorphic one form whose integral around the $A$-cycle is normalised to one.
Of course until one specifies $F_{\mu\nu}$ the coefficient $\kappa_0$
has no independent meaning.
However, with this choice it will turn out that the equations of motion
imply that $F_{\mu\nu}$ satisfies a simple Bianchi identity and so
can be written as the curl of the four dimensional gauge field.
We point out that our final result for $H$ is significantly different
to those proposed in a number of recent works.
\chapter{The Vector Equation of Motion}
In this section we will obtain the four-dimensional equations of motion for
the vector zero modes of the threebrane soliton.
It is important to assume that
$X^{10}$ is compactified on a circle of radius $R$ and redefine
$s =(X^6+iX^{10})/R$. This allows
the connection between the M-fivebrane and
perturbative type IIA string theory and quantum
Yang-Mills theory to be made for small $R$ [\Wittenone] (it also ensures
that $s(t)$ is well defined).
Also, we promote
$\Lambda$ and $z$ to have dimensions of
mass which facilitates a more immediate contact with
the Seiberg-Witten solution. Thus in using the previous formulae
we must rescale
$s\rightarrow R s$ and $z\rightarrow \Lambda^{-2} z$ (except that
$\lambda_z = ds/du$ as before).
We have seen in equation \Evect\ that the three form equation of motion
to lowest order is $E_{\mu z} dz-{\bar E}_{\mu {\bar z}}d{\bar z}=0$.
To obtain the equation of motion for the
vector zero modes in four dimensions it is instructive to perform the
reduction over the Riemann Surface in two ways. First consider the integral
$$
\eqalign{
0 & = \int_B \star dH \int_A {\bar \lambda}
- \int_A \star dH\int_B {\bar \lambda} \ ,\cr
&=\int_B (E_{\mu z}dz-{\bar E}_{\mu {\bar z}}d{\bar z})\int_A {\bar \lambda}
- \int_A (E_{\mu z}dz-{\bar E}_{\mu{\bar z}}d{\bar z})\int_B {\bar \lambda} \ ,\cr}
\eqn\bilinear
$$
where $A$ and $B$ are a basis of cycles of the Riemann surface.
Before proceeding with the integrals in \bilinear\ it is necessary to
remind the reader of the scalar fields $a$ and $a_D$. In [\SW] it was shown
that a global description of the moduli space was given by a pair of local
coordinates $a(u)$ and $a_D(u)$ defined as the periods of a single holomorphic
form $\lambda_{SW}$
$$
a \equiv \int_{A} \lambda_{SW}\ , \ \ \ a_D \equiv \int_{B} \lambda_{SW}\ .
\eqn\aad
$$
Furthermore
the Seiberg-Witten differential $\lambda_{SW}$ is itself defined so that
$\partial \lambda_{SW}/\partial u=\lambda$. From the definition of $s$ one
can check that $\lambda_z = \partial s/\partial u$.
From these definitions one sees that
$$
\tau = {\int_{B}\lambda\over\int_{A} \lambda} = {da_D/du\over da/du} \ .
\eqn\tdef
$$
Here one sees that the coefficient in \H\ serves to normalise the
$A$-period of $\kappa dz$ to unity.
Returning to the equations of motion, if
we substitute $H_{\mu\nu z}= R(ds/da){\cal F}_{\mu\nu}$
into $E_{\mu z}$ and \bilinear\ we find that the
terms involving $T_{\mu}$ combine to form a total derivative which can
be ignored in the line integrals. The remaining terms can then simply be
evaluated to give
$$\eqalign{
0= &\partial^{\nu}{\cal F}_{\mu\nu}\left({da\over du}\right)^{-1}
\left({da\over du}{d{\bar a}_D\over d{\bar u}} - {da_D\over du}{d{\bar a}\over d{\bar u}}\right)\cr
&- {\cal F}_{\mu\nu}\partial^{\nu}u {d^2 a\over du^2}
\left({da\over du}\right)^{-2}
\left({da\over du}{d{\bar a}_D\over d{\bar u}} - {da_D\over du}{d{\bar a}\over d{\bar u}}\right)\cr
&+ {\cal F}_{\mu\nu}\partial^{\nu}u\left({da\over du}\right)^{-1}
\left({d^2a\over du^2}{d{\bar a}_D\over d{\bar u}}
- {d^2a_D\over du^2}{d{\bar a}\over d{\bar u}}\right)\cr
&- {\bar{\cal F}}_{\mu\nu}\partial^{\nu}{\bar u}
\left({d{\bar a}\over d{\bar u}}\right)^{-1}
\left({d^2{\bar a}\over d{\bar u}^2}{d{\bar a}_D\over d{\bar u}}-
{d^2{\bar a}_D\over d{\bar u}^2}{d{\bar a}\over d{\bar u}}\right)\ .\cr}
\eqn\bilineartwo
$$
Recalling that $\tau = da_D/da$ one easily obtains the equation
of motion
$$
0= \partial^{\nu}{\cal F}_{\mu\nu}(\tau-{\bar \tau})
+ {\cal F}_{\mu\nu}\partial^{\nu}u{d\tau\over du}
- {\bar{\cal F}}_{\mu\nu}\partial^{\nu}{\bar u}{d{\bar \tau}\over d{\bar u}} \ .
\eqn\eqofmvect
$$
Examining the real and imaginary
parts of \eqofmvect\ we find
$$\eqalign{
0 &= \partial_{[\lambda} F_{\mu\nu]}\ ,\cr
0 &= {\rm Im}(\partial_{\mu}(\tau{\cal F}^{\mu\nu}))\ . \cr}
\eqn\reim
$$
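To make this split explicit, one may write $\tau=\tau_1+i\tau_2$,
${\cal F}_{\mu\nu}=F_{\mu\nu}+i\star F_{\mu\nu}$ and use
$\partial^\nu u\,d\tau/du=\partial^\nu\tau$, whereupon equation
\eqofmvect\ becomes
$$
2i\tau_2\partial^\nu F_{\mu\nu}-2\tau_2\partial^\nu\star F_{\mu\nu}
+2iF_{\mu\nu}\partial^\nu\tau_2+2i\star F_{\mu\nu}\partial^\nu\tau_1=0\ .
$$
The real part yields $\partial^\nu\star F_{\mu\nu}=0$, which is equivalent
to the Bianchi identity, while the imaginary part, upon using the real part,
reproduces the second equation of \reim.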
Thus the choice of $F$ given in \H\ does indeed
obey the standard Bianchi identity, justifying our
ansatz for $H$.
These are precisely the vector equation of motion obtained
from the Seiberg-Witten effective action.
While the above derivation of the vector equation of motion using differential
forms is simple and
direct, the analogous procedure for the scalar equation of motion is not
so straightforward.
To this end let us consider another derivation of \eqofmvect . Now we simply
start from
$$\eqalign{
0 &= \int_{\Sigma} \star dH\wedge{\bar \lambda} \cr
&= \int_{\Sigma} E_{\mu z}dz\wedge {\bar \lambda} \ ,\cr}
\eqn\surface
$$
and directly substitute the same expressions in for $H$. In principle
one may consider any
one form in the integral in
\surface, rather than $\lambda$.
However, since one needs the integrand to be well defined over
the entire Riemann surface, there is effectively a unique choice, i.e.
the holomorphic one form. Thus we find
$$\eqalign{
0 = &\partial^{\nu}{\cal F}_{\mu\nu}\left({da\over du}\right)^{-1}I_0
-{\cal F}_{\mu\nu}\partial^{\nu}u{d^2 a\over du^2}
\left({da\over du}\right)^{-2}I_0\
+ {\cal F}_{\mu\nu}\partial^{\nu}u\left({da\over du}\right)^{-1}{dI_0\over du}
\cr&
-{\cal F}_{\mu\nu}\partial^{\nu}u \left({da\over du}\right)^{-1}J
+{\bar{\cal F}}_{\mu\nu}\partial^{\nu}{\bar u}
\left({d{\bar a}\over d{\bar u}}\right)^{-1}K
\ ,\cr}
\eqn\surfacetwo
$$
where
$$\eqalign{
I_0 &= \int_{\Sigma}\lambda\wedge{\bar \lambda} \ ,\cr
J&= R^2\Lambda^4\int_{\Sigma}\partial_z\left(
{\lambda_z^2\partial_{{\bar z}}{\bar s}\over 1+R^2\Lambda^4\partial_z s\partial_{{\bar z}}{\bar s}}
\right)dz\wedge{\bar \lambda}\ ,\cr
K &= R^2\Lambda^4 \int_{\Sigma}\partial_{z}\left(
{{\bar \lambda}_{{\bar z}}^2\partial_{z}s\over 1+R^2\Lambda^4\partial_z s\partial_{{\bar z}}
{\bar s}}
\right)dz\wedge{\bar \lambda} \ .\cr}
\eqn\Idefs
$$
Here we see that we arrive at some non-holomorphic integrals over $\Sigma$.
While it is straightforward to evaluate $I_0$ using the Riemann Bilinear
relation to find
$$
I_0 = {da_D\over du}{d{\bar a}\over d{\bar u}} - {da\over du}{d{\bar a}_D\over d{\bar u}} \ ,
\eqn\Io
$$
the $J$ and $K$ integrals require a more sophisticated analysis
to evaluate them directly. However,
upon comparing \surfacetwo\ with \bilineartwo\ we learn that
$$\eqalign{
J &= 0\ , \cr
K &= \left({d^2{\bar a}\over d{\bar u}^2}{d{\bar a}_D\over d{\bar u}}-
{d^2{\bar a}_D\over d{\bar u}^2}{d{\bar a}\over d{\bar u}}\right)\ ,\cr
&= -\left({d{\bar a}\over d{\bar u}}\right)^{2}{d{\bar \tau}\over d{\bar u}}\ .\cr}
\eqn\JK
$$
Only with these identifications do we find that the equation of
motion for the vectors is obtained in agreement with the first method. We
will see that the $J$ and $K$ integrals will appear again in the scalar
equation. We will discuss the explicit evaluation of these integrals elsewhere.
\chapter{The Scalar Equation of Motion}
In this section we will derive the equation of motion for the scalar
zero modes when the vectors are non-zero. As seen in equation \Escalar\
above the
equation of motion for the scalar zero modes in six dimensions is just
$E=0$.
To reduce this equation to four dimensions we consider the analogue of
\surface\
$$
0 = \int_{\Sigma} Edz\wedge{\bar \lambda} \ .
\eqn\scalartwo
$$
If we note that
$\partial_{\mu}s = \lambda_{z}\partial_{\mu}u$ and substitute
\H\ for the three form we find
$$
0 = \partial^{\mu}\partial_{\mu}u I_0
+ \partial_{\mu}u\partial^{\mu}u {dI_0\over du}
- \partial_{\mu}u\partial^{\mu} u J
- 16{\bar {\cal F}}_{\mu\nu}{\bar {\cal F}}^{\mu\nu}
\left({d{\bar a}\over d{\bar u}}\right)^{-2}K .
\eqn\scalartwo
$$
Thus again the $I_0$, $J$ and $K$ integrals appear. Since we have deduced
the values of these integrals previously we may simply write down the
four-dimensional scalar equation of motion as
$$\eqalign{
0=&\partial^{\mu}\partial_{\mu}u
{da\over du}{d{\bar a}\over d{\bar u}}(\tau-{\bar \tau}) +
\partial^{\mu}u\partial_{\mu}u
\left({da\over du}{d{\bar a}\over d{\bar u}}{d\tau\over du}
+(\tau-{\bar \tau}){d^2a\over du^2}{d{\bar a}\over d{\bar u}}\right) \cr
&+ 16{\bar {\cal F}}_{\mu\nu}{\bar {\cal F}}^{\mu\nu}
{d {\bar \tau}\over d{\bar u}}\ .\cr}
\eqn\scalarthree
$$
This may then be rewritten as
$$
0=\partial^{\mu}\partial_{\mu}a(\tau-{\bar \tau}) +
\partial^{\mu}a\partial_{\mu}a{d\tau\over da}
+ 16 {\bar {\cal F}}_{\mu\nu}{\bar {\cal F}}^{\mu\nu}
{d {\bar \tau}\over d{\bar a}} \ .
\eqn\eqofmscalar
$$
Thus we have obtained the complete low energy effective equations of
motion for the
M-fivebrane in the presence of threebranes. We note that both
\eqofmvect\ and \eqofmscalar\ can be obtained from the four-dimensional action
$$
S_{SW} = \int d^4 x\ {\rm Im} \left(
\tau\partial_{\mu}a\partial^{\mu}{\bar a}
+16 \tau{\cal F}_{\mu\nu}{\cal F}^{\mu\nu}\right)\ ,
\eqn\action
$$
which
is precisely the bosonic part of
the full Seiberg-Witten effective action for
$N=2$ supersymmetric $SU(2)$ Yang-Mills [\SW].
\chapter{Higher Derivative Terms}
In the absence of the vectors the dynamics for the scalars
can be encoded in the more familiar $p$-brane action
$$
S_5= M_p^6\int d^6 x \sqrt {-{\rm det} g_{\hat m \hat n}}\ ,
\eqn\scalaraction
$$
where $M_p$ is the eleven-dimensional Planck mass.
It was shown in [\HLW] that the terms only quadratic in spacetime derivatives
were precisely those of the Seiberg-Witten action. In [\HLW] it was
pointed out that \scalaraction\ predicts an infinite number of
higher derivatives terms and the fourth order correction was explicitly
given.
In this section we examine these terms in more detail.
The determinant of $g_{\hat n\hat m}$ can be shown to be given by
$$
\sqrt{-\det g_{\hat n\hat m}}=\left(1+{1\over2}\partial_{\hat m}s\partial^{
\hat m}\bar s\right)\left\{1-{1\over4}{|\partial_{\hat m}s\partial^{\hat m}s|^2
\over(1+{1\over2}
\partial_{\hat m}s\partial^{\hat m}\bar s)^2}\right\}^{1\over2}\ .
\eqn\detexp
$$
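Such a closed form exists because the induced metric differs from $\eta_{\hat n\hat m}$ by a rank-two update built from $\partial_{\hat m}s$ and $\partial_{\hat m}\bar s$: assuming $g_{\hat n\hat m}=\eta_{\hat n\hat m}+{1\over2}(\partial_{\hat n}s\partial_{\hat m}\bar s+\partial_{\hat n}\bar s\partial_{\hat m}s)$ one finds $-\det g=(1+{1\over2}\partial_{\hat m}s\partial^{\hat m}\bar s)^2-{1\over4}|\partial_{\hat m}s\partial^{\hat m}s|^2$. The following sketch checks this determinant identity numerically; the explicit form of the induced metric, the mostly-plus flat signature and the random complex gradient are our assumptions here.

```python
import numpy as np

# Flat 6D worldvolume metric (signature -+++++ assumed)
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(2)
a = 0.3 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))  # a_m ~ partial_m s

# Induced metric g_mn = eta_mn + (1/2)(a_m abar_n + abar_m a_n) = eta + Re(a abar^T)
g = eta + np.real(np.outer(a, a.conj()))

# Scalar invariants (indices raised with eta; note eta^{-1} = eta here)
s1 = (a @ eta @ a.conj()).real        # partial_m s partial^m sbar
s2 = a @ eta @ a                      # partial_m s partial^m s (complex)

closed_form = (1 + 0.5 * s1) ** 2 - 0.25 * abs(s2) ** 2
assert np.isclose(-np.linalg.det(g), closed_form)
```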
It is straightforward to substitute this expression into the above action
and then Taylor expand in the number of spacetime derivatives. The result
after subtracting a total derivative is
$$\eqalign{
S_5=
& {M^6_pR^2\over \Lambda^4}{1\over 2i}
\int d^4xd z\wedge d {\bar z}\left\{
{1\over 2}\partial_{\mu} u\partial^{\mu}{\bar u}{1\over Q{\bar Q}}\right.\cr
&\left. + \sum^{\infty}_{n=1, p=0}R^{2p+4n-2}
C_{n,p}{(\partial_\mu u\partial^\mu\bar u)^p|\partial_\mu u\partial^\mu u|^{2n}
\over (Q\bar Q)(Q\bar Q+4R^2\Lambda^4 z \bar z)^{p+2n-1}}\right\} \ ,\cr}
\eqn\expansion
$$
where
$$
C_{n,p}=(-1)^n\left({1\over2}\right)^{2n+p}\left({{1\over 2}\atop n}\right)
\left({-2n+1\atop p}\right), \ \ n\geq 1,\quad p\ge 0\ ,
\eqn\Cnpdef
$$
and $\left(n \atop m\right)$ is the $m$th binomial coefficient in the
expansion of $(1+x)^n$.
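The coefficients $C_{n,p}$ arise from expanding the square root in \detexp\ first via the binomial series for $(1-x)^{1\over2}$ and then in powers of the remaining factor. As a scalar consistency check, with $x$ and $y$ standing in for the two small expansion parameters one should have $\sqrt{(1+{x\over2})^2-{y\over4}}=1+{x\over2}+\sum_{n\ge1,p\ge0}C_{n,p}\,x^p y^n$. The following sketch (illustrative only, with arbitrarily chosen small values of $x$ and $y$) verifies this numerically:

```python
from math import factorial, sqrt

def gbinom(a, k):
    # Generalized binomial coefficient: a(a-1)...(a-k+1)/k!
    num = 1.0
    for i in range(k):
        num *= a - i
    return num / factorial(k)

def C(n, p):
    # C_{n,p} = (-1)^n (1/2)^{2n+p} binom(1/2, n) binom(-2n+1, p)
    return (-1) ** n * 0.5 ** (2 * n + p) * gbinom(0.5, n) * gbinom(-2 * n + 1, p)

x, y = 0.1, 0.05   # stand-ins for the small expansion parameters
exact = sqrt((1 + x / 2) ** 2 - y / 4)
series = 1 + x / 2 + sum(C(n, p) * x ** p * y ** n
                         for n in range(1, 12) for p in range(12))
assert abs(exact - series) < 1e-12
```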
The first term is none other than the Seiberg-Witten action. Rewriting
the remaining terms, \expansion\ can be cast in the form
$$\eqalign{
S_5= &{M^6_pR^2\over \Lambda^4}{1\over 2i} \int d^4x\left\{ {1\over 2}
\partial_\mu u \partial^\mu {\bar u} I_0\right.\cr
&\left.+ \sum^{\infty}_{n=1, p=0} {C_{n,p}I_{(p+2n-1)}}
(\partial_\mu u\partial^\mu\bar u)^p
|\partial_\mu u\partial^\mu u|^{2n} \right\}\ ,\cr}
\eqn\expansiontwo
$$
where
$$\eqalign{
I_{k}&= {1\over \Lambda^{4k}}\int dz\wedge d{\bar z}
\left({1\over Q \bar Q}\right)
\left({R^2\Lambda^4\over Q\bar Q+4R^2\Lambda^4 z {\bar z}}\right)^{k}\ ,\cr
&={1\over \Lambda^{6k+2}}\int d z_0 \wedge d{\bar z}_0
\left({1\over Q_0 \bar Q_0}\right)
\left({\sigma^2\over Q_0{\bar Q}_0+4\sigma^2 z_0 {\bar z}_0}\right)^{k}
\ .\cr}
\eqn\Ipdef
$$
Here $\sigma =R\Lambda$, $z_0 = z/\Lambda$,
$Q_0 =\sqrt{(z_0^2 + u_0)^2 - 1}$ and $u_0=u/\Lambda^2$.
This form for $I_k$ makes it clear that they obey the scaling relation
$I_k(\rho^{-1}R,\rho \Lambda,\rho^2 u) = \rho^{-6k-2} I_k (R, \Lambda, u)$.
The integrals $I_{k}$ that occur in the action are finite,
that is, there are no singularities of the integrand on the Riemann
surface which lead the integrals to diverge. Given the type of
singularities that can occur at $z=\infty$ and at the roots of $Q$,
it is remarkable how the above integrals avoid
the possible divergences. This is presumably a tribute to the
consistency of M theory.
Using the Cauchy-Schwarz inequality we may place a bound on the integrals
$$\eqalign{
I_{k} &\le {1\over \Lambda^{4k}}
\left|\int d z \wedge d\bar z {1\over Q\bar Q}\right|
\left|\int d z \wedge d\bar z
\left({R^2\Lambda^4\over Q\bar Q+4 R^2\Lambda^4 z\bar z}\right)^k\right| \cr
&\le R^{2k}\left|\int d z \wedge d\bar z
{1\over Q\bar Q}\right|^{k+1}\ .}
\eqn\ineq
$$
To obtain the last line we used the fact that
$|Q\bar Q|\le |Q\bar Q+4 R^2\Lambda^4 z\bar z|$. The expression that occurs
in the final line is just that for the Seiberg-Witten action
which we can evaluate exactly. In particular in the region $|u|\to \infty $
we therefore find that
$$
I_{k} \le R^{2k}
\left(\left| {da \over du}\right|^2{\rm Im}\tau \right)^{k+1}
\approx R^{2k}\left|{{\rm ln} u \over u}\right|^{k+1}\ .
\eqn\bound
$$
The most interesting question is whether the above higher derivative
terms which originate from the classical fivebrane equations of motion
are related to those that occur in the $N=2$ Yang-Mills theory.
Yang-Mills theory has an effective action which in principle
depends on $g_{YM},\hbar,u$ and the renormalisation scale $\mu$. However,
as can be seen from the classical action,
$g_{YM}$ and $\hbar$ always appear as $\hbar g^2_{YM}$. It is also well known
from the renormalisation group that $\hbar g^2_{YM}$
and $\mu$ appear as a single scale $\Lambda_{QCD}$ in the quantum theory.
Thus the low energy
effective action for Yang-Mills theory only
depends on $\Lambda_{QCD}$ and $u$.
By comparing $I_0$ with the
Seiberg-Witten solution one learns that $\Lambda = \Lambda_{QCD}$. However
the extra parameter $\sigma$ appears in
$I_k$ for $k\ge 1$, hence the higher derivative terms
of \scalaraction\ also depend on $\sigma$.
From this observation
it is clear that the higher derivative terms in \expansiontwo\ can never
reproduce those obtained from the Yang-Mills equations. The appearance
of the extra parameter in the M-fivebrane effective action reflects
the fact that these are really the long wave length equations of a
self-dual string theory, not a Yang-Mills theory.
However, let us see how they
qualitatively compare.
The higher derivative terms of Yang-Mills theory are of the form
$$
\int d^4 x \int d^4\theta d^4 \bar \theta K(A, \bar A)\ .
\eqn\Kaction
$$
Unfortunately, the exact form of $K$ is not known,
but arguments have been given to suggest that $K$ has the form
(at lowest order)
$K\propto {\rm ln} A \ {\rm ln} \bar A$ [\Grisaru].
Using the expression of [\Grisaru] we may evaluate this in terms of $N=1$
superfields from which it is apparent that the result is of the form
(ignoring logarithmic corrections)
$|\partial_\mu a \partial^\mu \bar a|^2 |a|^{-4}$,
in the region of large $a$.
This must be compared with the first correction term in the expansion \expansion.
Due to its subtle form, it is difficult to evaluate the integral $I_1$,
even in the large $u$ limit.
One could assume that the dominant contributions come from the
zeroes in $Q_0$ or make the substitution $Q_0=|z_0|^2 + u_0$.
Both these approximations are consistent with the bound \bound\ and
lead to the behaviour $I_1 \approx k|u|^{-2}$ where the
constant $k$ is independent
of $\sigma$. Since
we have in this region $u\propto a^2$, we must
conclude that if the suggested higher derivative corrections to the
$N=2$ Yang-Mills theory are correct and our approximation methods reliable
then the higher derivative
corrections obtained from the classical M-fivebrane dynamics
have a weaker fall-off for large $u$ than those of the Yang-Mills theory.
Since it is believed that these additional
terms come from a string theory it
is natural to see that the high energy behaviour is qualitatively different
from that of a field theory.
\chapter{Discussion}
In this paper we have presented the complete details concerning the
evaluation of the bosonic
low energy effective action for threebrane solitons in the M-fivebrane.
In particular we explicitly derived the equations of motion for the
vector degrees of freedom and verified that they exactly reproduce
those of the Seiberg-Witten effective action. We also discussed an
infinite number of higher derivative terms predicted by the M-fivebrane
theory and compared them with the expected higher derivative terms of
$N=2$ $SU(2)$ Yang-Mills. We note here that the
generalisation to the gauge group $SU(N)$ is
easily obtained by considering $N$ threebranes with $N-1$ moduli $u_i$
as in [\HLW]. In this case the natural choice for the three form is
$H_{\mu\nu z}= (ds/du_i){\cal F}^i_{\mu\nu}$. In addition one
may also consider $SO(N)$ and $Sp(N)$
groups by substituting in the curves of [\BSTY] for $F(t,s)$.
Finally, since one motivation of this
paper was to set up the appropriate formalism to calculate four-dimensional
effective actions with $N=1$ supersymmetry, let us briefly describe how this
might work. The simplest generalisation would be to consider threebrane
solitons obtained by the intersection of three M-fivebranes over a threebrane.
This is achieved by turning on an additional complex scalar $w = X^7 + iX^8$.
Following [\three] one again finds that a configuration with
$X^6,X^{10}$ and $X^7,X^{8}$ active will preserve one quarter of the
M-fivebrane
supersymmetry, provided that both $s$ and $w$ are holomorphic functions
of $z$. Furthermore the M-fivebrane equations of motion will also be
automatically satisfied in this case. Thus, a
configuration with both $s$ and $w$ holomorphic will lead to a
threebrane soliton with $N=1$ supersymmetry on its worldvolume with both
vector and scalar zero modes. One can then follow the analysis presented in
this paper and derive the low energy effective equations of motion for both
the vector and scalar zero modes. However, unlike the $N=2$ case considered
here, there will be no supersymmetry which relates the two. It would be
interesting to see if the correct quantum corrections to the low energy
effective action are also predicted
by these models.
\endpage
\noindent Note Added
While this paper was being written we received a copy of [\dHOO] which has some
overlap with section six of this paper.
There the derivation
of the Seiberg-Witten effective action from the M-fivebrane presented in
[\HLW] was repeated. The first higher derivative
correction found in [\HLW] is considered and it is concluded
that these terms are not from a Yang-Mills theory.
\refout
\end
\section{Introduction}
\sloppy
\subsection{Motivation}
Little is known about the compositional and structural diversity of super-Earths.
We often consider super-Earths to be distinct from sub-Neptunes in terms of their {volatile fraction}. In fact, there is an intriguing transition around 1.5 R$_{\rm \Earth}$\xspace, above which most planets appear to contain a significant amount of volatiles \citep[e.g.,][]{ Rogers, Dressing}. The distribution of planet densities and radii suggests a transition that is continuous rather than stepwise \citep{leconte}, although the limited number of available observations might not allow a firm conclusion yet \citep{Rogers}.
A key criterion to distinguish super-Earths from sub-Neptunes is the origin of their atmospheres. Super-Earth atmospheres are thought to be dominated by outgassing from the interior, whereas sub-Neptunes have accreted and retained a substantial amount of \reve{primordial} hydrogen and helium. The atmospheric scale height will be significantly larger in the latter case since it scales as the reciprocal of the mean molecular mass. In consequence, the radius fraction of volatiles is often used to distinguish between super-Earths and sub-Neptunes.
The nature of an atmosphere, be it \reve{primordial} or secondary, helps to clearly categorize a planet. Atmospheres can have three different origins. (1) Accreted nebular gas from the protoplanetary disk (\reve{primordial} origin), (2) gas-release during disruption of accreting volatile-enriched planetesimals, or (3) outgassing from the interior (secondary origin). The time-scales associated with (1-2) and (3) are very different. An atmosphere dominated by gas released during planetesimal disruption (2) can theoretically be significantly different from a hydrogen-helium atmosphere \reve{\cite[e.g.,][]{fortney, elkins, schaefer, hashimoto, zahnle, venturini}}. \citet{venturini} show that enriched gas layers speed up the accretion of gas from the \reve{primordial} disk, which explains large fractions of H/He for intermediate mass planets. However, to what extent atmospheres of low-mass planets can be enriched (e.g., in water) and sustain their metallicity over their lifetime is the subject of ongoing research. For the close-in super-Earths HD~219134~b and c, we consider the two scenarios (1) and (3). \reve{In other words, we use the term {\it primordial} to refer to H$_2$-dominated atmospheres that are pristine and compositionally unaffected by subsequent physical or chemical processing including atmospheric escape \citep[e.g.,][]{Hu2015, Lammer} or interaction with the rocky interior. }
The atmospheres of close-in planets are subject to significant mass loss (atmospheric escape), driven by extreme-ultraviolet and X-ray heating from their stars. The goal of this study is to present a method for determining whether a planet may host a gaseous layer, and whether this gas layer is \reve{hydrogen-dominated (primordial) or dominated by high mean molecular masses} (secondary). \reve{Our method is different from and complementary to studies that use spectroscopic signatures to distinguish between hydrogen-rich and hydrogen-poor atmospheres \citep[e.g.,][]{millerricci}.} We focus on the HD 219134 system, which hosts multiple planets, two of which fall in the super-Earth regime. Both planets b and c are transiting \citep{vogt, gillon} and together represent the coolest super-Earth pair yet detected in a star system (Figure \ref{fig1}).
The characterization of two planets from the same system benefits from possible compositional correlations between them. We can expect a correlation in the relative abundances of refractory elements \citep{sotin07}. Abundances measured in the photosphere of the host star can be used as proxies for the relative bulk abundances, namely Fe/Si and Mg/Si \citep{dorn}. Here, we use different photospheric measurements of HD~219134, compiled by \citet{hinkel}.
These bulk abundance constraints, in addition to mass, radius, and stellar irradiation, are the data that we use to infer the structure and composition of the planets.
A rigorous interior characterization that accounts for data and model uncertainty can be done sensibly using Bayesian inference analysis, for which we use the generalized method of \citet{dornA}.
\rev{The previous work of \citet{dorn} and \citet{dornA} showed that Bayesian inference analysis is a robust method for quantifying interior parameter degeneracy for a given (observed) exoplanet. While \citet{dorn} focused on purely rocky planets, a generalized method for super-Earths and mini-Neptunes was developed by \citet{dornA} by including volatiles (liquid and high-pressure ices, and gas layers). Inferred confidence regions of interior parameters are generally large, which emphasizes the need to utilise additional data that further inform us about a planet's composition and structure. Here, we investigate additional considerations on atmospheric escape to further constrain the nature of the atmosphere.
As in the previous works, we assume planets that are made of distinct layers, i.e., iron core, silicate mantle, water layer, and gas layer, as illustrated in Figure \ref{figsketch}. The use of an inference analysis allows us to account for the degeneracy among the layer properties, i.e., core size, mantle size and composition, water mass fraction, gas mass fraction and metallicity, and intrinsic luminosity. In this study, we account for interior degeneracy and calculate robust confidence regions of atmospheric thicknesses ($r_{\rm env}$). These inferred thicknesses $r_{\rm env}$ are then compared to theoretically possible thicknesses of a H$_2$-dominated atmosphere. The theoretically possible range of H$_2$-dominated atmosphere thicknesses is restricted by atmospheric escape, i.e., H$_2$-dominated atmospheres that are too thin cannot be retained over a planet's lifetime. This implies a threshold thickness below which H$_2$-dominated atmospheres cannot be retained. Here, we present how this threshold thickness ($\Delta R$) can be estimated. The comparison between $\Delta R$ and $r_{\rm env}$ is a key aspect of our study and allows us to draw conclusions about the nature of possible planetary atmospheres.
}
\subsection{Concept and method}
{We wish to first describe the method conceptually, before providing the technical details later in the paper. Consider a planet orbiting close to its star, which emits ultraviolet and X-ray radiation. Planet formation occurs on short time-scales ($\sim 10^6-10^8$ years) and is essentially instantaneous over the lifetime of a $\sim 1$--10 Gyr-old star. Immediately after the planet has formed, it retains a hydrogen-dominated atmosphere, which is then continuously eroded until the present time. We take the total elapsed time to be the age of the star ($t_\star$).
The total mass of lost \reve{primordial} atmosphere, $M_{\rm env, lost}(t)$, increases over time due to atmospheric escape. Over the lifetime $t_\star$, the total escaped mass is $M_{\rm env, lost}(t_\star)$, which we convert to a fraction of the planetary radius, $\Delta R/R$. Atmospheric escape can erode $\Delta R$ worth of atmosphere over the age of the star.
\rev{Independent of $\Delta R$ and from our Bayesian inference analysis, we can estimate the possible range of atmospheric thicknesses} at the present time, $r_{\rm env}(t_\star)$. If $r_{\rm env}(t_\star) < \Delta R$, then the atmosphere is not H$_2$-dominated, because any H$_2$ atmosphere would have been eroded away. {Thus, $\Delta R$ may be visualized as being the threshold thickness above which a \reve{primordial} atmosphere can be retained against atmospheric escape over a time $t_\star$. The comparison between $r_{\rm env}(t_\star)$ and $\Delta R$ is a key aspect of this study.} }
The outline of this study is as follows: We first discuss the method of characterizing planet interiors. We explain how we approximate the amount of \reve{primordial} atmosphere that may be lost due to stellar irradiation and how we relate this to a threshold thickness of a \reve{primordial} atmosphere. Based on these estimates, we demonstrate how we infer the atmospheric origin. We show results for HD~219134~b and c, and compare them to 55~Cnc~e, HD~97658~b, and GJ~1214~b. To obtain a first estimate of the distribution of \reve{enriched (secondary)} atmospheres, we apply the method to low-mass planets ($<10$~M$_{\rm \Earth}$\xspace). We finish with a discussion and conclusions.
Note that we use the terms \emph{atmosphere} and \emph{gas layer} synonymously. The atmosphere/gas layer model comprises a radiative layer on top of a convection-dominated envelope.
\begin{figure}[ht]
\centering
\includegraphics[width = .5\textwidth, trim = 0cm 1cm 0cm 0cm, clip]{plot_MR.pdf}\\
\caption{Mass-radius diagram for planets below 2.7 R$_{\rm \Earth}$\xspace and 10 M$_{\rm \Earth}$\xspace with mass uncertainties generally better than 20\%. HD~219134~b and c are among the coolest exoplanets yet detected in terms of their equilibrium temperature (in color). Planets in bold face are included in the comparative study in Section \ref{comparison}.}
\label{fig1}
\end{figure}
\section{Methodology}
\label{Methodology}
\subsection{Interior characterization}
Using the generalized Bayesian inference analysis of \citet{dornA} \rev{that employs a Markov chain Monte Carlo (McMC) method}, we rigorously quantify the degeneracy of the following interior parameters for a general planet:
\begin{itemize}
\item {\bf core}: core size ($r_{\rm core}$\xspace),
\item {\bf mantle}: mantle composition (${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$, ${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$) and size ($r_{\rm mantle}$\xspace),
\item {\bf water}: water mass fraction ($m_{\rm water}$\xspace),
\item {\bf gas}: intrinsic luminosity ($L_{\rm env}$\xspace), gas mass ($m_{\rm env}$\xspace), and metallicity ($Z_{\rm env}$\xspace).
\end{itemize}
\reve{ From the posterior distribution of those interior parameters, we can compute the posterior distribution of the thickness of a possible gas layer ($r_{\rm env}$), which we then use to infer if the gas layer is hydrogen-rich or poor (Section \ref{compa}). Regarding the volatile-rich layers, our parameterization allows us to produce planet structures that range from (1) purely-rocky to (2) thick water layers with no additional gas layer to (3) thick gas layers without water layers below. The latter structure (3) determines the largest values of $r_{\rm env}$. }
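The extraction of confidence regions for the derived quantity $r_{\rm env}/R$ from posterior samples can be sketched as follows; the samples here are synthetic stand-ins for the actual McMC output, and the assumed interior radius fraction is purely illustrative:

```python
import random

# Schematic post-processing of McMC output: the gas-layer thickness r_env/R is
# not sampled directly but derived from the sampled layer radii, and its
# confidence region follows from simple percentiles. The "posterior" samples
# below are synthetic stand-ins, not output of the actual inference analysis.
rng = random.Random(42)
n = 10_000

# assumed toy posterior: rock+water interior occupies 82-100% of the radius
interior_frac = [rng.uniform(0.82, 1.0) for _ in range(n)]
r_env_frac = [1.0 - f for f in interior_frac]     # derived parameter r_env / R

def percentile(xs, p):
    """Empirical percentile of a sample list."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(p / 100.0 * len(s)))]

lo, med, hi = (percentile(r_env_frac, p) for p in (1, 50, 99))
print(f"r_env/R: median {med:.3f}, 1--99% interval [{lo:.3f}, {hi:.3f}]")
```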
Figure \ref{figsketch} illustrates the interior parameters of interest.
\begin{figure}[ht]
\centering
\includegraphics[width = .4\textwidth, trim = 0cm 0cm 0cm 0cm, clip]{illustr1.pdf}\\
\caption{\rev{Illustration of interior parameters: core size ($r_{\rm core}$\xspace), mantle composition (${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$, ${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$), mantle size ($r_{\rm mantle}$\xspace), water mass fraction ($m_{\rm water}$\xspace), intrinsic luminosity ($L_{\rm env}$\xspace), gas mass ($m_{\rm env}$\xspace), gas metallicity ($Z_{\rm env}$\xspace), and atmospheric thickness ($r_{\rm env}$).}}
\label{figsketch}
\end{figure}
The considered data comprise:
\begin{itemize}
\item mass $M$,
\item radius $R$,
\item bulk abundance constraints on ${\rm Fe}/{\rm Si}_{\rm bulk}\xspace$ and ${\rm Mg}/{\rm Si}_{\rm bulk}\xspace$, and minor elements Na, Ca, Al,
\item semi-major axes $a$,
\item stellar irradiation (namely, effective temperature $T_{\rm eff}$ and stellar radius $R_\star$).
\end{itemize}
For ${\rm Fe}/{\rm Si}_{\rm bulk}\xspace$, ${\rm Mg}/{\rm Si}_{\rm bulk}\xspace$ and minor elements, we use their equivalent stellar ratios as proxies that can be measured in the stellar photosphere \citep{dorn}.
The prior distributions of the interior parameters are listed in Table \ref{tableprior}. The priors are chosen conservatively. The cubic uniform priors on $r_{\rm core}$\xspace and $r_{\rm mantle}$\xspace reflect equal weighting of masses for both core and mantle. Prior bounds on ${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$ and ${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$ are determined by the host star's photospheric abundance proxies. Since iron is distributed between core and mantle, ${\rm Fe}/{\rm Si}_{\rm bulk}\xspace$ only sets an upper bound on ${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$.
A log-uniform prior is set for $m_{\rm env}$\xspace and $L_{\rm env}$\xspace. A uniform prior in $Z_{\rm env}$\xspace equally favors metal-poor and metal-rich atmospheres, which seems appropriate for secondary atmospheres. In Section \ref{HDstuff}, we investigate the effect of different priors on $Z_{\rm env}$\xspace.
In this study, the \reve{planetary} interior is assumed to be composed of a pure iron core, a silicate mantle comprising the oxides Na$_2$O--CaO--FeO--MgO--Al$_2$O$_3$--SiO$_2$, a pure water layer, and an atmosphere of H, He, C, and O.
The structural model for the interior uses self-consistent thermodynamics for core, mantle, high-pressure ice, and water ocean, and to some extent also atmosphere.
For the core density profile, we use the equation of state (EoS) fit of iron in the hcp (hexagonal close-packed) structure provided by \citet{bouchet}, based on {\it ab initio} molecular dynamics simulations. \reve{We assume the iron to be in the solid state, since the density increase due to solidification in the Earth's core is small (0.4 g/cm$^3$, or 3\%) \citep{Dziewonski}.}
For the silicate mantle, we compute equilibrium mineralogy and density as a function of pressure, temperature, and bulk composition by minimizing Gibbs free energy \citep{connolly09}. For the water layers, we follow \citet{Vazan} using a quotidian equation of state (QEOS) and above 44.3 GPa, we use the tabulated EoS from \citet{seager2007} that is derived from DFT simulations. \reve{Depending on pressure and temperature, the water can be in solid, liquid or vapour phase.}
We assume an adiabatic temperature profile within core, mantle, and water layers.
The surface temperature of the water layer is set equal to the temperature of the bottom of the gas layer.
For the gas layer, we solve the equations of hydrostatic equilibrium, mass conservation, and energy transport. For the EoS of elemental compositions of H, He, C, and O, we employ the CEA (Chemical Equilibrium with Applications) package \citep{CEA}, which performs chemical equilibrium calculations for an arbitrary gaseous mixture, including dissociation and ionization and assuming ideal gas behavior. The metallicity $Z_{\rm env}$\xspace is the mass fraction of C and O in the gas layer, which can range from 0 to 1.
\reve{For the gas layer, we assume an irradiated layer on} top of a \reve{convection-dominated} envelope,
for which we assume a semi-gray, analytic, global temperature averaged profile \citep{GUILLOT2010, Heng2014}.
The boundary between the irradiated layer and the \reve{underlying} envelope is defined where the optical depth in visible wavelength is $100 / \sqrt{3}$ \citep{JIN2014}. Within the envelope, the usual Schwarzschild criterion is used to distinguish between convective and radiative layers.
The planet radius is defined where the chord optical depth becomes 0.56 \citep{Lecavelier08}.
We refer to model I in \citet{dornA} for more
details on both the inference analysis and the structural model.
\begin{table}[ht]
\caption{Prior ranges. \label{tableprior}}
\begin{center}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
parameter & prior range & distribution \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$r_{\rm core}$ & (0.01 -- 1) $r_{\rm mantle}$ &uniform in $r_{\rm core}^3$\\
${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$ & 0 -- ${\rm Fe}/{\rm Si}_{\rm star}\xspace$&uniform\\
${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$ & ${\rm Mg}/{\rm Si}_{\rm star}\xspace$ &Gaussian\\
$r_{\rm mantle}$ & (0.01 -- 1) $R$& uniform in $r_{\rm mantle}^3$\\
$m_{\rm water}$ & 0 -- 0.98 $M$& uniform\\
$m_{\rm env}$\xspace & 0 -- $m_{\rm env, max}$ &uniform in log-scale\\
$L_{\rm env}$\xspace & $10^{18} - 10^{23}$ erg/s&uniform in log-scale\\
$Z_{\rm env}$\xspace & 0 -- 1&uniform\\
\hline
\end{tabular}
\end{center}
\end{table}
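Drawing from the prior distributions in Table \ref{tableprior} can be sketched as follows. Note that the lower cutoff on $m_{\rm env}$\xspace and the value of $m_{\rm env, max}$ are illustrative assumptions here, since a log-uniform prior requires a positive lower bound:

```python
import math
import random

# Sketch of sampling the priors of Table 1. Taking the cube root of a uniform
# variate realises a prior uniform in r^3 (equal weighting of enclosed
# volume/mass); exponentiating a uniform draw realises the log-uniform priors.
# The m_env lower cutoff (1e-8 M) and m_env_max (1e-2 M) are assumed values.
rng = random.Random(0)

def sample_prior(m_env_max=1e-2):
    r_mantle = rng.uniform(0.01 ** 3, 1.0) ** (1.0 / 3.0)   # x R, uniform in r^3
    r_core = r_mantle * rng.uniform(0.01 ** 3, 1.0) ** (1.0 / 3.0)  # x r_mantle
    m_water = rng.uniform(0.0, 0.98)                        # x M, uniform
    m_env = 10 ** rng.uniform(-8.0, math.log10(m_env_max))  # log-uniform
    L_env = 10 ** rng.uniform(18.0, 23.0)                   # erg/s, log-uniform
    Z_env = rng.uniform(0.0, 1.0)                           # uniform
    return r_core, r_mantle, m_water, m_env, L_env, Z_env

draws = [sample_prior() for _ in range(100_000)]
print(all(d[0] <= d[1] for d in draws))  # core never larger than mantle
```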
\subsection{Estimating the threshold thickness $\Delta R$ of a \reve{primordial} atmosphere \rev{layer} considering atmospheric escape}
We approximate $\Delta R$ by the atmospheric \rev{layer} thickness that corresponds to the \rev{accumulated} mass of hydrogen that may be lost over the planet's lifetime \rev{($M_{\rm env,lost}$)}. Loss rates are determined by the X-ray irradiation from the star and mass-loss efficiencies. \reve{Hydrostatic balance} is used to calculate the layer thickness $\Delta R$ corresponding to a \reve{primordial} atmosphere of mass $M_{\rm env,lost}$. The detailed calculation of $\Delta R$ involves several steps, which are discussed in the following.
\rev{Let the layer thickness $\Delta R$ be the difference in radius attributed to a \reve{primordial} atmosphere. If we assume this layer to be in hydrostatic equilibrium, then this difference in radius is}
\begin{equation}\label{eq0}
\Delta R = H \ln{\left(\frac{P_{\rm b}}{P_{\rm t}}\right)},
\end{equation}
where $H$ is the pressure scale height and $P_{\rm b}$ is the pressure at the bottom of the \rev{layer}, which we will derive in the following paragraphs.
$P_{\rm t}$ is \rev{the pressure at the top of the layer, corresponding to the} transit radius \citep{heng16},
\begin{equation}
P_{\rm t} \approx \frac{g}{\kappa} \sqrt{\frac{H}{2\pi R}}.
\end{equation}
If we assume a mean opacity of $\kappa = 0.1$ cm$^2$ g$^{-1}$ \citep{freedman14}, then for both planets b and c we obtain $P_{\rm t} \approx 1$ mbar.
The pressure scale height $H$ is calculated assuming a hydrogen-dominated layer (mean molecular mass $\mu=2~$g/mol) and using the equilibrium temperature $T_{\rm eq}$,
\begin{equation}\label{eq3}
H = \frac{T_{\rm eq} R^{*}}{g_{\rm surf} \mu },
\end{equation}
where $g_{\rm surf}$ is surface gravity and $R^{*}$ is the universal gas constant (8.3144598 J mol$^{-1}$ K$^{-1}$). The estimates in \citet{heng16} suggest that the assumption of $T = T_{\rm eq}$ is reasonable. Values for $g_{\rm surf}$ and $T_{\rm eq}$ are listed in Tables \ref{data1} and \ref{data2}.
\rev{The pressure at the bottom of the layer $P_{\rm b}$ corresponds to the accumulated mass of escaped hydrogen over the planet's lifetime $M_{\rm env,lost}$,}
\begin{equation}\label{eq2}
P_{\rm b} = \frac{ g M_{\rm env,lost}}{4 \pi R^2},
\end{equation}
which is simply a restatement of Newton's second law: $P_{\rm b}$ is the weight of the escaped envelope distributed over the planetary surface area.
$M_{\rm env,lost}$ is approximated by atmospheric escape considerations. Dimensional analysis yields an expression for the atmospheric escape rate,
\begin{equation}\label{eq1}
\dot{M} = \frac{\pi \eta F_{\rm X} R^2}{E_g},
\end{equation}
where $F_{\rm X}$ is the X-ray flux of the star and $E_g = GM/R$ is the gravitational potential energy per unit mass. The evaporation efficiency, $\eta$, is the fraction of the input stellar energy that is converted to escaping outflow from the planet. It is often assumed to be a constant, but it is more likely that its value varies with the age of the system \citep{ow13}. The evaporation efficiency $\eta$ has been studied by various authors \citep[e.g.,][]{Shematovich,salz}, who demonstrate that values between 0.01 and 0.2 are reasonable for the planets of interest here. In other words, $\eta$ hides the complexity of atmospheric radiative transfer of X-ray photons \reve{as well as unknown quantities such as the planetary albedo}.
The strongest assumption we make is that the mass-loss rate is constant over the planet's lifetime, such that $M_{\rm env,lost} = t_\star \dot{M}$ ($t_\star=12.9$ Gyr; \citealt{takeda07}).
Thus, Equations \ref{eq2} and \ref{eq1} yield the expression
\begin{equation}
P_{\rm b} = \frac{\eta L_{\rm X} t_\star}{16 \pi a^2 R},
\end{equation}
with $L_{\rm X} = 4 \pi a^2 F_{\rm X} = 4 \times 10^{26}$ erg s$^{-1}$ \citep{porto06} being the X-ray luminosity of the star. {In Figure \ref{fig:dr}, we compute $\Delta R/R$ as a function of $\eta$, since the exact value of $\eta$ is not well known. Fortunately, $\Delta R/R$ depends only weakly on $\eta$. Also, the uncertainty in stellar age has only \reve{a small} effect on $\Delta R/R$: a difference in stellar age of 1 Gyr introduces variations in $\Delta R/R$ of less than one percent. The spread in $\Delta R/R$ is mainly due to the uncertainties in planetary mass and radius. }
The physical interpretation of the preceding expressions for $P_{\rm b}$ and $\Delta R$ is worth emphasizing. {The former corresponds to the amount of \reve{primordial} atmosphere that may be lost by atmospheric escape during the lifetime of the star.} It provides a conservative estimate, because we have assumed the X-ray luminosity to be constant, whereas in reality stars tend to be brighter in X-rays earlier in their lifetimes:
\begin{equation}
M_{\rm env,lost} = \int_0^{t_\star} \dot{M}(t) dt > \dot{M}t_\star \,.
\end{equation}
The expression for $\Delta R$ is then a lower limit for a \rev{\reve{primordial}} atmosphere thickness corresponding to this atmospheric mass loss scenario.
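The chain of equations above can be checked numerically. The following sketch evaluates the threshold thickness for HD~219134~b using the nominal values quoted in the text and tables, without the mass and radius uncertainties that produce the spread in Figure \ref{fig:dr}:

```python
import math

# Numerical sketch of Equations (1)-(6) for HD 219134 b, using the values
# quoted in the text and tables (kappa = 0.1 cm^2/g, mu = 2 g/mol,
# L_X = 4e26 erg/s, t_star = 12.9 Gyr, T_eq = 1025 K). SI units throughout;
# these are single-point checks, not the full spread of Figure 3.
R_GAS, G = 8.3144598, 6.674e-11
M_EARTH, R_EARTH, AU = 5.972e24, 6.371e6, 1.496e11

M, R = 4.36 * M_EARTH, 1.606 * R_EARTH
a, T_eq = 0.038 * AU, 1025.0
L_X = 4e19                        # 4e26 erg/s expressed in W
t_star = 12.9e9 * 3.156e7         # stellar age in s
kappa, mu = 1e-2, 2e-3            # m^2/kg and kg/mol

g = G * M / R ** 2                                     # surface gravity
H = R_GAS * T_eq / (g * mu)                            # Eq. (3): scale height
P_t = (g / kappa) * math.sqrt(H / (2 * math.pi * R))   # Eq. (2): transit pressure

def delta_R_over_R(eta):
    """Threshold thickness from Eqs. (1) and (6) for evaporation efficiency eta."""
    P_b = eta * L_X * t_star / (16 * math.pi * a ** 2 * R)
    return (H / R) * math.log(P_b / P_t)

print(f"P_t ~ {P_t / 100:.1f} mbar")   # ~1 mbar, as quoted in the text
print(f"Delta R/R = {delta_R_over_R(0.01):.2f} (eta = 0.01), "
      f"{delta_R_over_R(0.2):.2f} (eta = 0.2)")
```

Because $\eta$ enters only through the logarithm of $P_{\rm b}$, a factor of 20 in $\eta$ changes $\Delta R/R$ by only a few hundredths, consistent with the weak dependence shown in Figure \ref{fig:dr}.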
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\columnwidth]{plot_paper_eta.pdf}
\end{center}
\caption{Threshold thickness $\Delta R$ as a function of the evaporation efficiency $\eta$. The spread accounts for the uncertainties in planet mass, radius, and age. If the inferred radius $r_{\rm env}$ is less than $\Delta R$, then the atmosphere is most likely enriched and not dominated by H$_2$.}
\label{fig:dr}
\end{figure}
\subsection{Assessing secondary/\reve{primordial} nature of an atmosphere}
\label{compa}
We wish to compare $\Delta R$ with the gas thickness $r_{\rm env}$ inferred from the interior characterization.
There are three possible scenarios:
\begin{itemize}
\item $r_{\rm env} > \Delta R$:
Atmospheric escape is not efficient enough in removing \rev{a possible \reve{primordial}} atmosphere. This suggests that a large portion of the atmosphere can be \reve{primordial}. \rev{However, a secondary atmosphere is also possible.}
\item $r_{\rm env} \approx \Delta R $:
Mass loss can still be ongoing and no conclusion can be drawn about the nature of the atmosphere.
\item $r_{\rm env} < \Delta R $:
Atmospheric escape should have efficiently removed any \reve{primordial} H$_2$ atmosphere. If a finite $r_{\rm env}/{R}$\xspace is inferred, the atmosphere is likely enriched and thus of secondary origin. Since the calculation of the threshold thickness $\Delta R$ is conservative, this is the only scenario that can be used for a conclusive statement on the atmospheric origin.
This conclusion is illustrated in Figure \ref{fig9}, where the time-evolutions of H$_2$-dominated atmosphere thicknesses for HD219134~b are shown for $\eta=0.01$. The curves are constructed such that at $t=t_\star$ the relative thicknesses $r_{\rm env}/{R}$\xspace are equal to 0 (blue), 0.1 (black), 0.17 (red), and 0.23 (green). Furthermore, the solid curves include the time-evolution of the X-ray flux, which we assume to be solar-like ($F_{\rm X} \propto t^{-1.83}$ for $t > t_{\rm sat}$ and $F_{\rm X} = F_{\rm X}(t_{\rm sat})$ for $t < t_{\rm sat}$, where the saturation time $t_{\rm sat}$ is equal to 100 Myr and $F_{\rm X}(t_\star)$ is the observed value) \citep{Ribas}. Compared to a constant X-ray flux, the higher stellar activity of a young star implies that an atmosphere of a given thickness $r_{\rm env}/{R}$\xspace at $t_\star$ must have started with a higher gas fraction at $t=0$ (see the difference between solid and dashed curves in Figure \ref{fig9}).
In both scenarios, we find that the smaller the observed thickness $r_{\rm env}/{R}$\xspace compared to $\Delta R/R$, the shorter the time a planet spends with this atmosphere thickness.
\reve{Thus it is possible for a planet to host a remaining small amount of an initially thick \reve{primordial} atmosphere with an $r_{\rm env}/{R}$\xspace lower than the threshold thickness. However, we find that this state lasts only a very short fraction of the planet's lifetime, so it is unlikely that planets observed with $r_{\rm env}/{R}$\xspace lower than the threshold value are remnants of thicker \reve{primordial} atmospheres.}
In consequence, inferred atmospheres with thicknesses less than $\Delta R/R$ are likely to be \reve{enriched (secondary)}.
\end{itemize}
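The degree to which the constant-flux assumption underestimates the lost mass can be quantified with the same solar-like flux history used for the solid curves in Figure \ref{fig9}; the time integral has a simple closed form:

```python
# Sketch of why the constant-flux assumption is conservative. We integrate a
# solar-like X-ray history (F_X proportional to t^-1.83 after a saturation
# time t_sat, constant before, normalised to the observed flux at t_star) and
# compare the resulting fluence to the constant-flux estimate F_X(t_star)*t_star.
t_star, t_sat, alpha = 12.9, 0.1, 1.83     # Gyr, Gyr, power-law decay index

def fluence_ratio(t_star, t_sat, alpha):
    """Time-integrated X-ray fluence divided by the constant-flux estimate."""
    x = t_star / t_sat
    decay = (x ** (alpha - 1.0) - 1.0) / (alpha - 1.0)  # analytic integral, t_sat..t_star
    saturated = x ** (alpha - 1.0)                      # saturated phase: F(t_sat) * t_sat
    return decay + saturated

ratio = fluence_ratio(t_star, t_sat, alpha)
print(f"true fluence exceeds the constant-flux estimate by a factor ~{ratio:.0f}")
```

With these parameters the true X-ray fluence, and hence $M_{\rm env,lost}$, exceeds the constant-flux estimate by roughly two orders of magnitude, so the threshold thickness $\Delta R$ derived above is indeed a firm lower limit.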
\begin{figure}[ht]
\centering
\includegraphics[width = .5\textwidth, trim = 0cm 0cm 0cm 0cm, clip]{plot_forLUND-01.pdf}\\
\caption{Evolution of H$_2$-dominated atmosphere thicknesses $r_{\rm env}/{R}$\xspace for HD~219134~b, leading to different thicknesses at $t=t_\star$ ($\eta$ = 0.01 in all cases). Solid curves account for a time-variable stellar X-ray flux $F_X(t)$ \citep{Ribas}, whereas dashed curves assume a constant $F_X$. The blue-shaded area depicts the evolution of $\Delta R/R$; its spread accounts for the uncertainties in planet mass and radius. }
\label{fig9}
\end{figure}
\section{Results}
\label{Results}
\subsection{Interiors of HD~219134~b and c}
\label{HDstuff}
We apply the inference method to HD~219134~b and c with the data listed in Tables \ref{data1} and \ref{data2}. The latter lists different stellar abundance estimates from the literature \citep[][]{Mishenina15,Ramirez,Valenti,Thevenin,Thevenin2} that were compiled by \citet{dornB} to \reve{examine} different bulk abundance scenarios. Besides a median abundance estimate (V0), they provide an iron-rich (V1) and an iron-poor (V2) scenario, which \reve{reflect} the limited accuracy of stellar abundance estimates \reve{(Table \ref{bulk})}. First, we use the median stellar abundance estimate, denoted V0.
Figures \ref{corrB} and \ref{corrC} show the two and one-dimensional (2-D and 1-D) marginal posteriors for all eight model parameters.
The best-constrained parameters are the layer thicknesses, represented by $m_{\rm water}$\xspace, $r_{\rm mantle}$\xspace, and $r_{\rm core}$\xspace. We summarize our findings on the interiors of HD~219134~b and c with respect to the models that fit the data within the 1-$\sigma$ uncertainty (blue dots in Figs. \ref{corrB} and \ref{corrC}):
\begin{itemize}
\item The possible interiors of HD~219134~b and c span a large region, including purely rocky and volatile-rich scenarios.
\item For both planets b and c, {less than 0.1\%} of the model solutions are rocky ($r_{\rm rocks}/R > 0.98$).
\item The possible water mass fractions of HD~219134~b and c range from 0 to 0.2 and from 0 to 0.1, respectively.
\item Unsurprisingly, the individual atmosphere properties ($m_{\rm env}$\xspace, $L_{\rm env}$\xspace, $Z_{\rm env}$\xspace) are weakly constrained. Consequently, their pdfs are dominated by prior information. However, the possible range of atmosphere thickness is well constrained to 0 -- 0.18 and 0 -- 0.13 for planets b and c, respectively (see Section \ref{vary}).
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[width = .8\textwidth, trim = 0cm 0cm 1.5cm 0cm, clip]{Figure_2CorrT_HD219134b.pdf}\\
\caption{Sampled two and one-dimensional marginal posterior for HD~219134~b interior parameters: gas mass $m_{\rm env}$\xspace, gas metallicity $Z_{\rm env}$\xspace, intrinsic luminosity $L_{\rm env}$\xspace, mass of water $m_{\rm water}$\xspace, radius of rocky interior $r_{\rm mantle}$\xspace, core radius $r_{\rm core}$\xspace, and mantle's relative abundances ${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$ and ${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$. Blue dots explain the data within 1-$\sigma$ uncertainty. \reve{Dashed curves represent the prior distributions assumed.}}
\label{corrB}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width =.8\textwidth, trim = 0cm 0cm 1.5cm 0cm, clip]{Figure_2CorrT_HD219134c.pdf}\\
\caption{Sampled two and one-dimensional marginal posterior for HD~219134~c interior parameters: gas mass $m_{\rm env}$\xspace, gas metallicity $Z_{\rm env}$\xspace, intrinsic luminosity $L_{\rm env}$\xspace, mass of water $m_{\rm water}$\xspace, radius of rocky interior $r_{\rm mantle}$\xspace, core radius $r_{\rm core}$\xspace, and mantle's relative abundances ${\rm Fe}/{\rm Si}_{\rm mantle}\xspace$ and ${\rm Mg}/{\rm Si}_{\rm mantle}\xspace$. Blue dots explain the data within 1-$\sigma$ uncertainty. \reve{Dashed curves represent the prior distributions assumed.}}
\label{corrC}
\end{figure*}
\begin{table}[ht]
\caption{Summary of \reve{planetary} data \citep{motalebi, gillon}. \label{data1}}
\begin{center}
\begin{tabular}{lcccccc}
\hline\noalign{\smallskip}
parameter &\multicolumn{3}{c}{ HD~219134~b}& \multicolumn{3}{c}{HD~219134~c} \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$R$/R$_{\rm \Earth}$\xspace &\multicolumn{3}{c}{ 1.606$\pm$0.086}&\multicolumn{3}{c}{1.515 $\pm$ 0.047}\\
$M$/M$_{\rm \Earth}$\xspace &\multicolumn{3}{c}{4.36$\pm$0.44}&\multicolumn{3}{c}{4.34 $\pm$ 0.22}\\
$g_{\rm surf}$ [cm s$^{-2}$] &\multicolumn{3}{c}{1656}&\multicolumn{3}{c}{1865}\\
$T_{\rm eq}$ [K] &\multicolumn{3}{c}{1025}&\multicolumn{3}{c}{784}\\
\vspace{2mm}
$a$ [AU] &\multicolumn{3}{c}{0.038}&\multicolumn{3}{c}{0.065}\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\caption{Summary of \reve{stellar} data \citep{motalebi}. \label{data2}}
\begin{center}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
parameter &\multicolumn{1}{c}{ HD~219134}\\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$R_{\rm star}$/R$_{\rm sun}$ &0.778~$\pm$~0.005 \\
$T_{\rm eff}$ in K& 4699~$\pm$~16 \\
$[\rm Fe/H]$ & 0.04--0.84\\
\vspace{2mm}
$[\rm Fe/H]_{median}$ & 0.13 \\
$[\rm Mg/H]$ & 0.09--0.37 \\
\vspace{2mm}
$[\rm Mg/H]_{median}$ & 0.32 \\
$[\rm Si/H]$& 0.04--0.27\\
\vspace{2mm}
$[\rm Si/H]_{median}$ &0.12\\
$[\rm Na/H]$& 0.17--0.32 \\
\vspace{2mm}
$[\rm Na/H]_{median}$&0.19\\
$[\rm Al/H]$& 0.16--0.29 \\
\vspace{2mm}
$[\rm Al/H]_{median}$& 0.23\\
$[\rm Ca/H]$& 0.18--0.25\\
$[\rm Ca/H]_{median}$ & 0.21\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\caption{Considered planet bulk abundance cases. V0 represents median abundance estimates, whereas V1 and V2 refer to iron-rich and iron-poor cases, respectively. \label{bulk}}
\begin{center}
\begin{tabular}{lccccccc}
\hline\noalign{\smallskip}
parameter& &\multicolumn{2}{c}{V0} &\multicolumn{2}{c}{V1}& \multicolumn{2}{c}{V2}\\
\hline\noalign{\smallskip}
${\rm Fe}/{\rm Si}_{\rm bulk}\xspace$ & &\multicolumn{2}{c}{1.73$\pm$1.55} &\multicolumn{2}{c}{10.68$\pm$1.55}& \multicolumn{2}{c}{1.00$\pm$1.55}\\
${\rm Mg}/{\rm Si}_{\rm bulk}\xspace$ & &\multicolumn{2}{c}{1.44$\pm$0.91} &\multicolumn{2}{c}{1.02$\pm$0.91}& \multicolumn{2}{c}{1.14$\pm$0.91}\\
Na$_2$O [wt\%] & &\multicolumn{2}{c}{0.021 } &\multicolumn{2}{c}{0.01}& \multicolumn{2}{c}{0.025 }\\
Al$_2$O$_3$ [wt\%] & &\multicolumn{2}{c}{0.055} &\multicolumn{2}{c}{0.023}& \multicolumn{2}{c}{0.057}\\
CaO [wt\%] & &\multicolumn{2}{c}{0.021} &\multicolumn{2}{c}{0.01}& \multicolumn{2}{c}{0.021}\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Influence of stellar abundances}
\label{vary}
\citet{dornB} investigated the influence of different bulk abundance constraints on interior estimates. In Figures \ref{fig3} and \ref{fig5}, we similarly show this influence on key interior parameters ($r_{\rm env}/{R}$\xspace, $m_{\rm water}$\xspace, $r_{\rm mantle}$\xspace, and $r_{\rm core}$\xspace) for the median abundance estimate (V0, blue), the iron-rich case (V1, light green), and the iron-poor case (V2, dark green) \reve{(Table \ref{bulk})}. As discussed by \citet{dornB}, the largest effects are seen on $r_{\rm mantle}$\xspace and $r_{\rm core}$\xspace: if the planets are iron-rich, the core size is significantly larger, which implies a smaller rocky interior ($r_{\rm mantle}$\xspace) in order to fit the mass. The effect on $r_{\rm env}/{R}$\xspace is apparent in the comparison between the iron-rich case V1 and V0. For an iron-rich planet, the density of the rocky interior is higher. In order to fit the mass, $r_{\rm mantle}$\xspace is smaller. Consequently, to fit the radius, $r_{\rm env}/{R}$\xspace can be larger. For the iron-rich case (V1), the upper 99\% percentile of $r_{\rm env}/{R}$\xspace is 0.02 (HD~219134~b) and 0.04 (HD~219134~c) larger than for V0. Although the iron-rich case is in agreement with the spectroscopic data, we believe that V1 represents a limitation in accuracy rather than the actual planet bulk abundance.
\subsubsection{Influence of data uncertainty}
In Figures \ref{fig3} and \ref{fig5}, we also investigate the improvement in constraining interior parameters for the hypothetical case of doubling the precision of (1) the observed mass (light purple), and (2) mass and radius (dark purple). A significant improvement in constraining interior parameters is only obvious for HD~219134~b, when the precision of \reve{both mass and radius} is doubled. For HD~219134~c, the improved precision in both mass and radius leads to only a moderate improvement of the parameter estimates. The different potential to improve interior estimates by reducing data uncertainty for planets b and c stems from the fact that the uncertainties are much smaller for planet c ($\sigma_{R} = 3.1\%$) than for b ($\sigma_{R} = 5.4\%$).
\reve{For the considered planets, improved constraints on interior parameters are dominantly gained through better precision in radius.} This is expected, since mass-radius curves flatten out at higher masses (Fig. \ref{fig1}).
\subsubsection{Influence of prior on $Z_{\rm env}$\xspace}
We have shown that the individual parameters $L_{\rm env}$\xspace, $Z_{\rm env}$\xspace, and $m_{\rm env}$\xspace are weakly constrained and thereby dominated by their prior distributions. However, $r_{\rm env}/{R}$\xspace, which is not itself a model parameter in this study, is well constrained (Figures \ref{fig3} and \ref{fig5}). Here, we investigate the effect of different priors on the radius fraction $r_{\rm env}/{R}$\xspace. An obvious prior to test is that on $Z_{\rm env}$\xspace. The prior on gas metallicity can be chosen such that it favors \reve{H$_2$-dominated} (uniform in 1/$Z_{\rm env}$\xspace) or \reve{enriched} atmospheres (uniform in $Z_{\rm env}$\xspace). In Figures \ref{fig3} and \ref{fig5} (comparing the blue and dark grey curves), we demonstrate that different priors on $Z_{\rm env}$\xspace have only small effects on the possible distribution of radius fractions $r_{\rm env}/{R}$\xspace.
\begin{figure*}[ht]
\centering
\includegraphics[width = 1.\textwidth, trim = 3cm 0cm 2.9cm 0cm, clip]{Figure_HD219134b_pdf.pdf}\\
\caption{Sampled one-dimensional marginal posterior for selected parameters of HD~219134~b: (a) gas radius fraction $r_{\rm env}/{R}$\xspace, (b) water mass fraction $m_{\rm water}$\xspace$/M$, (c) rock radius fraction $r_{\rm mantle}$\xspace$/R$, and (d) relative core radius $r_{\rm core}$\xspace/$r_{\rm mantle}$\xspace. The posterior distributions depend on precision on bulk abundance constraints (light and dark green curves), and mass and radius uncertainties (light and dark purple curves). For comparison, the Earth-like solution is highlighted in red.}
\label{fig3}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width = 1.\textwidth, trim = 3cm 0cm 2.9cm 0cm, clip]{Figure_HD219134c_pdf.pdf}\\
\caption{Sampled one-dimensional marginal posteriors for selected parameters of HD~219134~c: (a) gas radius fraction $r_{\rm env}/{R}$\xspace, (b) water mass fraction $m_{\rm water}$\xspace$/M$, (c) rock radius fraction $r_{\rm mantle}$\xspace$/R$, and (d) relative core radius $r_{\rm core}$\xspace/$r_{\rm mantle}$\xspace. The posterior distributions depend on the precision of the bulk abundance constraints (light and dark green curves) and on the mass and radius uncertainties (light and dark purple curves). For comparison, the Earth-like solution is highlighted in red.}
\label{fig5}
\end{figure*}
\subsection{Secondary or \reve{primordial} atmosphere?}
The comparison in Figure \ref{fig6} of the inferred $r_{\rm env}/{R}$\xspace (solid lines) to {the threshold thickness} $\Delta R/R$ (dashed areas) shows that the possible atmospheres of planets b and c are significantly smaller than $\Delta R/R$. This indicates that the possible atmospheres {are not dominated by hydrogen} but must be secondary in nature. This provides a simple {test to identify \reve{H$_2$-rich versus enriched} atmospheres, which may then guide future spectroscopic campaigns to characterize atmospheres (e.g., JWST, E-ELT)}.
\subsection{Comparison to other planets}
\label{comparison}
Similarly to HD~219134~b and c, we compare $r_{\rm env}/{R}$\xspace with $\Delta R/R$ (Figure \ref{fig6} and \reve{Table \ref{deltaR1}}) for GJ~1214~b,
HD~97658~b, and 55~Cnc~e. This serves as a benchmark for our proposed determination of \reve{H$_2$-dominated and enriched} atmospheres, since large efforts have been put into understanding the composition and nature of the atmospheres of these three planets.
For GJ~1214~b, the distribution of $r_{\rm env}/{R}$\xspace is large and overlaps with $\Delta R/R$. The possible atmosphere is consistent with both a \reve{H$_2$-dominated and an enriched} atmosphere. Prior to our study, much effort had been invested in characterizing the atmosphere of GJ~1214~b \citep[e.g.,][]{berta, kreidberg}. Studies on interior structure suggested either a hydrogen-rich atmosphere that formed by recent outgassing or a maintained hydrogen-helium atmosphere of \reve{primordial} nature \citep{Rogers2010}. A third scenario of a massive water layer surrounded by a dense water atmosphere has been disfavored by \citet{nettelmann} based on thermal evolution calculations that argued that the water-to-rock ratios would be unreasonably large. Transmission spectroscopy and photometric transit observations revealed that the atmosphere has clouds and/or shows features from a high mean-molecular-mass composition \citep{berta, kreidberg}.
For HD~97658~b, we find that $r_{\rm env}/{R}$\xspace is very likely smaller than $\Delta R/R$. This suggests an atmosphere that \reve{is enriched and thus possibly} of secondary nature; however, a \reve{primordial} atmosphere cannot be ruled out with certainty. Previous transmission spectroscopy results of \citet{knutson} are in agreement with a flat transmission spectrum, indicating either a cloudy or water-rich atmosphere. The latter scenario would involve photodissociation of water into OH and H at high altitudes. Evidence for this would be neutral hydrogen escape. \citet{bourrier} undertook a dedicated Lyman-$\alpha$ line search of three transits but could not find any signature. Any neutral hydrogen escape could thus happen at low rates only. Consequently, a low hydrogen content in the upper atmosphere is a likely scenario. This is consistent with our finding that a secondary atmosphere is probable.
For 55~Cnc~e, our prediction clearly indicates a secondary atmosphere, since $r_{\rm env}/{R}$\xspace is significantly lower than $\Delta R/R$. This is in agreement with previous interpretations based on infrared and optical observations of transits, occultations, and phase curves \reve{\citep{demory12,demory16,angelo}}. This planet has a large day-night-side temperature contrast of about 1300 K and its hottest spot is shifted eastwards from the substellar point \reve{\citep{demory16,angelo}. This implies an optically thick atmosphere with inefficient heat redistribution; a bare rocky planet is disfavored \citep{angelo}.} Furthermore, \citet{ehrenreich} find no evidence for an extended hydrogen atmosphere \citep[but see][]{tsiaras}. If an atmosphere is present, it would be of secondary nature. Our \reve{approximate approach} leads to the same conclusion.
\reve{Furthermore, the study of 55~Cnc~e's thermal evolution and atmospheric evaporation by \citet{lopez} suggests either a bare rocky planet or a water-rich interior. Although the composition of 55~Cnc~e is a matter of debate, a hydrogen-dominated atmosphere seems unlikely.}
We also note that this test holds for Earth and Venus, {although their atmospheric loss mechanisms are very different (i.e., Jeans escape and non-thermal escape) \citep{shizgal}.} The threshold thicknesses of possible \reve{primordial} atmospheres are larger than 10 \%, whereas the actual thicknesses are no more than a few percent. Thus, our tests would correctly predict a secondary atmosphere for Earth and Venus.
\begin{figure}[ht]
\centering
\includegraphics[width = .5\textwidth, trim = 0cm 0cm 0cm 0cm, clip]{Figure_comparison_pdf1.pdf}\\
\caption{Comparison of $r_{\rm env}/{R}$\xspace for the five planets highlighted in Figure \ref{fig1}: inferred radius fractions (solid lines) and approximate {threshold thicknesses $\Delta R/R$ (colored areas with dashed borders)}. $\Delta R/R$ is listed in \reve{Table \ref{deltaR1}} for all five planets.}
\label{fig6}
\end{figure}
\begin{table}[ht]
\caption{Threshold {thickness} $\Delta R/R$ for different evaporation efficiencies $\eta$. \label{deltaR1}}
\begin{center}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
planet & L$_{\rm x}$ [erg/s] & $\Delta R/R$ & $\Delta R/R$ \\
& & ($\eta = 0.01$) & ($\eta = 0.2$) \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
HD~219134~b& 4$\times 10^{26}$ &0.28&0.36 \\
HD~219134~c &4$\times 10^{26}$ &0.19 & 0.24 \\
GJ~1214~b& 7.4$\times 10^{25}$&0.17 & 0.22 \\
55~Cnc~e & 4$\times 10^{26}$ &0.37 & 0.46 \\
HD~97658~b &1.2$\times 10^{28}$& 0.18 & 0.22\\
Earth &2.24$\times 10^{27}$& 0.12 &0.16\\
Venus &2.24$\times 10^{27}$& 0.15 & 0.21\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\caption{\rev{Threshold {thickness} $\Delta R/R$ for an evaporation efficiency $\eta$ of 0.01 and 95th percentiles of inferred atmosphere thicknesses $r_{\rm env}/{R}$\xspace. $^*$In case of planets for which stellar X-ray luminosities are not available, we assume solar X-ray luminosities. \label{deltaR2}}}
\begin{center}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
planet &95th-percentile of $r_{\rm env}/{R}$\xspace& $\Delta R/R$ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
Kepler-78 b & 0.15 & 1.0$^*$\\
GJ 1132 b& 0.1 & 0.40$^*$\\
Kepler-93 b& $<$0.05& 0.31$^*$\\
Kepler-10 b& $<$0.05& 0.76$^*$\\
Kepler-36 b& $<$0.05& 0.23$^*$\\
HD 219134 c& 0.13& 0.19\\ %
HD 219134 b& 0.18& 0.28 \\%
CoRoT-7 b &0.1 & 0.59$^*$\\
Kepler-21 b& 0.05& 0.64$^*$\\
Kepler-20 b & 0.05& 0.18$^*$\\
55 Cnc e& 0.18& 0.37 \\%
Kepler-19 b& 0.27& 0.25$^*$\\
Kepler-102 e&0.17& 0.10$^*$\\%---
HD 97658 b& 0.21&0.18 \\ %
Kepler-68 b&0.28 & 0.25$^*$\\
Kepler-454 b& 0.28& 0.16$^*$\\%---
GJ 1214 b& 0.39& 0.17 \\
Kepler-11 d&0.43 & 0.20$^*$\\%---
Kepler-33 c& 0.48& 0.18$^*$\\%---
Kepler-79 e& 0.58& 0.25$^*$\\
Kepler-36 c& 0.49& 0.30$^*$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width = .5\textwidth, trim = 0cm 0cm 0cm 0cm, clip]{plot_rhoTeff.pdf}\\
\caption{Possible origin of atmospheres depending on effective temperature and planet mass. For labeled planets, we use our method described in the text. For unlabeled planets, stellar X-ray luminosities are not available, and we thus assume solar X-ray luminosities, which is a fair assumption given that the Kepler mission targeted Sun-like stars. Radii and masses of the considered planets are shown in Figure \ref{fig1}.}
\label{fig8}
\end{figure}
A comparison of atmospheric origin for a larger set of exoplanets is limited by the lack of estimated stellar X-ray luminosities. \reve{For simplicity, we assume solar X-ray luminosities whenever stellar X-ray luminosities are not available, which is a fair assumption given that the Kepler mission targeted Sun-like stars}. Under this simple assumption, the distribution of planets with secondary atmospheres depends to first order on planet mass and equilibrium temperature (Figure \ref{fig8} and \reve{Table \ref{deltaR2}}).
In comparison to the tested HD~219134~b and c, most planets have higher equilibrium temperatures and are thus more vulnerable to atmospheric loss. Also, \citet{dornB} concluded that it is unlikely that the planets HD~219134~b, Kepler-10~b, Kepler-93~b, CoRoT-7b, and 55~Cnc~e could retain a hydrogen-dominated atmosphere against evaporative mass loss.
The \reve{possible} transition between secondary and \reve{primordial} atmospheres depending on $T_{\rm eq}$ is positively correlated with planet mass (Figure \ref{fig8}). Theoretical photo-evaporation studies \citep[e.g.,][]{JIN2014, lopez2013} {and the study on observed planets by \citet{Lecavelier}} predict similar trends, in that planets need to be more massive when receiving higher incident flux in order to retain their \reve{primordial} atmospheres.
For a better understanding of the observed distribution of secondary atmospheres,
future estimates of X-ray stellar luminosities are required.
\section{Discussion}
\label{Discussion}
As already mentioned, the strongest assumption we make is that mass loss is constant over the stellar age. A more accurate approach is to calculate
\begin{equation}
M_{\rm env,lost} = \int_0^{t_\star} \frac{\pi \eta F_{\rm X} R^2}{E_g} dt,
\end{equation}
where $\eta(t)$ and $F_{\rm X}(t)$ are both functions of time. The X-ray luminosity evolves over the lifetime of the star, which in turn causes the efficiency of atmospheric escape to evolve. \rev{Also, the planetary radius $R$ depends on the gas mass fraction, which changes over time.} We emphasize that, while the $M_{\rm env,lost} = \dot{M} t_\star$ approximation may lack precision, the logical structure of our approach is robust. The reasoning remains that $M_{\rm env,lost}$ (corresponding to a thickness of $\Delta R$) worth of atmosphere may be eroded over the stellar lifetime, so any inferred atmosphere with a thickness less than this threshold is very unlikely to be \reve{primordial} {(H$_2$-dominated)}.
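To make the role of the time dependence concrete, the following is an illustrative numerical sketch, not the calculation performed in this paper: it compares the constant-rate estimate $M_{\rm env,lost} = \dot{M} t_\star$ with the time integral above, assuming an energy-limited rate with specific binding energy $E_g = GM/R$ and a hypothetical saturated-then-power-law X-ray history $L_{\rm X}(t)$. The history, its decay index of $-1.5$, and all parameter values are assumptions chosen purely for illustration.

```python
from math import pi

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
GYR = 3.156e16   # one Gyr in seconds

def mass_loss_rate(L_x, a, R, M, eta):
    """Energy-limited escape rate Mdot = eta * pi * F_X * R^2 / E_g [kg/s],
    with X-ray flux F_X = L_x / (4 pi a^2) at orbital distance a, and
    specific binding energy E_g = G M / R (an assumed convention)."""
    F_x = L_x / (4.0 * pi * a**2)
    return eta * pi * F_x * R**3 / (G * M)

def lost_mass_constant(L_x_now, a, R, M, eta, t_star):
    """The M_env,lost = Mdot * t_star approximation at fixed luminosity."""
    return mass_loss_rate(L_x_now, a, R, M, eta) * t_star

def lost_mass_integrated(L_sat, t_sat, a, R, M, eta, t_star, n=20000):
    """Midpoint-rule integral of Mdot(t) for a hypothetical X-ray history:
    L_X = L_sat while t < t_sat, then decaying as (t / t_sat)**-1.5."""
    dt = t_star / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        L = L_sat if t < t_sat else L_sat * (t / t_sat) ** -1.5
        total += mass_loss_rate(L, a, R, M, eta) * dt
    return total
```

With a flat history ($t_{\rm sat} \geq t_\star$) the two estimates coincide, while an early-saturated, decaying $L_{\rm X}(t)$ makes the constant-rate estimate evaluated at the saturated luminosity an overestimate.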
\reve{In addition, we assume $T = T_{\rm eq}$ while estimating the threshold thickness (Equation \ref{eq3}). \citet{heng16} finds differences on the order of a few tens of percent when approximating the scale height with the isothermal scale height at $T = T_{\rm eq}$. If temperatures are higher, hydrogen escape would be more efficient and $\Delta R/R$ would be higher (and vice versa). The uncertainty on the temperature is accounted for by the variability in $\eta$.}
Furthermore, our estimates of the radius fraction $r_{\rm env}/{R}$\xspace are subject to our choices of interior model and assumptions. Changes in the interior model, especially the atmosphere model, can affect the estimated $r_{\rm env}/{R}$\xspace, as discussed by \citet{dornA}. Furthermore, we assume distinct layers of core, mantle, water, and gas. This may not hold, as discussed for giant planets \citep{stevenson1985,helled}.
Following the outlined strategy, it is possible to test for other types of atmospheres (e.g., N$_2$- or CO$_2$-dominated atmospheres). Here, we focused on an atmosphere type that informs us about formation processes, i.e., we have assumed that a \reve{primordial} atmosphere is dominated by hydrogen. In principle, a \reve{primordial} atmosphere can be enriched by planetesimal disruption during the accretion. However, initial gas fractions for super-Earths are small, and it is not clear whether atmospheric enrichment can be efficient in these cases or whether thin metal-enriched atmospheres remain well-mixed over long timescales.
We have demonstrated that the possible atmospheres on HD~219134~b and c are very likely to be secondary in nature. We have shown that this result is robust against different assumptions of bulk abundance constraints and prior choices, as shown for $Z_{\rm env}$\xspace.
Based on bulk density, both planets could potentially be rocky. However, we would expect rocky planets that formed within the same disk to lie roughly on the same mass-radius curve, because we expect a compositional correlation, i.e., similar relative abundances of refractory elements \citep[e.g.,][]{sotin07}. The fact that HD~219134~b and c do not fall on one mass-radius curve suggests that the larger planet b must harbor a substantial volatile layer.
\reve{Our use of stellar composition as a proxy for the planet bulk composition excludes Mercury-like rocky interiors. If such interiors were applicable to the HD~219134 planets, the rocky interiors would be iron-rich surrounded by substantially thick volatile envelopes in order to fit mass and radius. It remains an open question whether Mercury-like interiors are common or not.}
\section{Conclusions and Outlook}
\label{Conclusions}
We have presented a method to determine the nature of a possible atmosphere. Since close-in planets suffer from evaporative mass loss, the amount of \reve{primordial} atmosphere that can be lost is determined by the irradiation from the star, the lifetime of the system, and the evaporation efficiency. Fortunately, the amount of \reve{primordial} atmosphere loss depends only weakly on evaporation efficiency and system lifetime \reve{in the case of the typically Gyr-old observed exoplanets}. {A comparison between the threshold thickness above which a \reve{primordial} atmosphere can be retained against atmospheric escape and the actual possible atmosphere thickness is a clear indicator of whether an atmosphere is secondary.} We performed this analysis for HD~219134~b and HD~219134~c.
The possible thicknesses of their atmospheres were inferred using a generalized Bayesian inference method. For this, we used the data on planet mass, radius, stellar irradiation, and bulk abundance constraints from the star to constrain the interiors of HD~219134~b and c. Interior parameters include core size, mantle composition and size, water mass fraction, intrinsic luminosity, gas mass, and gas metallicity. Although the individual parameters of the gas layer ($m_{\rm env}$\xspace, $L_{\rm env}$\xspace, $Z_{\rm env}$\xspace) are only weakly constrained, the thickness is well constrained. Inferred thicknesses $r_{\rm env}/{R}$\xspace are robust against different assumed priors and bulk abundance constraints.
We summarize our findings on HD~219134~b and HD~219134~c below:
\begin{itemize}
\item maximum radius fractions of possible gas layers are 0.18 (HD~219134~b) and 0.13 (HD~219134~c),
\item the possible atmospheres are likely secondary in nature,
\item HD~219134~b must contain a significant amount of volatiles.
\end{itemize}
Here, we have proposed a simple quantitative determination of the nature of an exoplanetary atmosphere that does not require spectroscopic measurements. In order to check our method against planets whose atmospheres have been intensively studied, we applied it to GJ~1214~b, HD~97658~b, and 55~Cnc~e. Our predictions agree with previous findings on their atmospheres, and may be tested by future infrared transmission spectroscopy of these exoplanets.
\begin{acknowledgements}
We thank Yann Alibert and an anonymous referee for constructive comments.
This work was supported by the Swiss National Foundation under grant 15-144 and PZ00P2\_174028. It was in part carried out within the frame of the National Centre for Competence in Research PlanetS.
\end{acknowledgements}
\section{Introduction}
Classical 1-categories define an important special case of $(\infty,1)$-categories. The fact that $(\infty,1)$-category theory restricts to ordinary 1-category theory can be understood, in part, by the observation that the inclusion of 1-categories into $(\infty,1)$-categories is full as an inclusion of $(\infty,2)$-categories. This full inclusion is reflective---with the left adjoint given by the functor that sends an $(\infty,1)$-category to its quotient ``homotopy category''---but not coreflective, and as a consequence colimits of ordinary 1-categories need not be preserved by the passage to $(\infty,1)$-categories. Indeed, there are known examples of colimits of 1-categories that generate non-trivial higher-dimensional structure when the colimit is formed in the category of $(\infty,1)$-categories.
For example, consider the span of posets:
\[
\begin{tikzcd}[sep=tiny]
& & [-6pt] \bullet \arrow[dd] \arrow[dl] \arrow[dr] \\ [+5pt] & \bullet \arrow[dr, shorten >= -.25em, shorten <= -.45em] & & [-6pt] \bullet \arrow[dl, shorten >= -.25em, shorten <= -.45em] \\ [-3pt] & & \bullet \\ \bullet \arrow[uur] \arrow[urr] \arrow[rr] & & \bullet \arrow[u, shorten >= -.25em, shorten <= -.25em] & & \bullet \arrow[ull] \arrow[uul] \arrow[ll]
\end{tikzcd} \hookleftarrow
\begin{tikzcd}[sep=tiny]
& & [-7pt] \bullet \arrow[dl] \arrow[dr] \\ [+5pt] & \bullet & & [-7pt] \bullet \\ [-3pt] & & \makebox*{$\bullet$}{$~$} \\ \bullet \arrow[uur] \arrow[rr] & & \bullet & & \bullet \arrow[uul] \arrow[ll]
\end{tikzcd} \to \quad \bullet
\]
The pushout in 1-categories is the arrow category $\bullet \to \bullet$, while the pushout in $(\infty,1)$-categories defines an $(\infty,1)$-category with two objects in which the non-trivial hom-space has the homotopy type of the 2-sphere.
As a second example, let $M$ be the monoid with five elements $e, x_{11}, x_{12}, x_{21}, x_{22}$ and multiplication rule given by $x_{ij}x_{k\ell}=x_{i\ell}$.
Inverting all elements of $M$ yields the trivial group.
That is, if one considers $M$ as a 1-category with a single object, then the pushout of the span $M \leftarrow \amalg_{M} \ensuremath{\mathbbb{2}} \rightarrow \amalg_{M} \mathbb{I}$ (where $\ensuremath{\mathbbb{2}}$ is the free-living arrow and $\mathbb{I}$ is the free-living isomorphism) in categories is the terminal category $\ensuremath{\mathbbb{1}}$.
On the other hand, the pushout of this span in $(\infty,1)$-categories is the $\infty$-groupoid $S^2$ as follows from \cite[Lemma]{FiedorowiczCounterexample}.
The results of \cite{McDuffMonoids} imply that this example is generalizable to a vast class of monoids.
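The stated properties of this monoid can be checked mechanically. The following brute-force sketch (with an ad hoc string encoding, supplied here only as an illustration) verifies that the multiplication rule is associative, and that every element $x$ admits some $y$ with $xy = y$; in the group completion the latter forces $x = (xy)y^{-1} = e$, which is why inverting all elements of $M$ yields the trivial group.

```python
E = "e"
elems = [E, "x11", "x12", "x21", "x22"]

def mult(a, b):
    """The multiplication of M: e is the unit and x_ij * x_kl = x_il."""
    if a == E:
        return b
    if b == E:
        return a
    return "x" + a[1] + b[2]   # keep the first index of a, second of b

# M is a monoid: the rule x_ij * x_kl = x_il is associative.
associative = all(
    mult(mult(a, b), c) == mult(a, mult(b, c))
    for a in elems for b in elems for c in elems
)

# Every element absorbs into some y (x * y = y), so each element must
# become the identity once all elements are invertible.
absorbed = all(any(mult(x, y) == y for y in elems) for x in elems)
```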
More generally, the Gabriel--Zisman category of fractions $\mathcal{C}[\mathcal{W}^{-1}]$ is formed by freely inverting the morphisms in a class of arrows $\mathcal{W}$ in a 1-category $\mathcal{C}$.
This can also be constructed as a pushout of 1-categories of the span
\begin{equation*}
\label{spanLoc}
\begin{tikzcd} \mathcal{C} & \aamalg{w \in \mathcal{W}} \ensuremath{\mathbbb{2}} \arrow[l] \arrow[r, hook] & \aamalg{w \in \mathcal{W}} \mathbb{I}
\end{tikzcd}
\end{equation*}
where each arrow in $\mathcal{W}$ is replaced by a free-living isomorphism. By contrast, the $(\infty,1)$-category defined by this pushout is modelled by the Dwyer--Kan simplicial localization, which has non-trivial higher dimensional structure in many instances \cite{DwyerKan:SLC}, \cite[Lemma 18]{Stevenson:CMSSL}, \cite[p.~168]{JoyalVolumeII}. Indeed, all $(\infty,1)$-categories arise in this way \cite{BarwickKan:RCAMHTHT}.
As the examples above show, pushouts of 1-categories in particular are problematic. Our aim in this paper is to prove that a certain class of pushout diagrams of 1-categories is guaranteed to be $(\infty,1)$-categorical. The requirement is that one of the two maps in the span that generates the pushout belongs to a class of functors between 1-categories first considered by Thomason under the name ``Dwyer maps'' \cite[Definition 4.1]{ThomasonModelCat} that feature in a central way in the construction of the Thomason model structure on categories.
\begin{defn}[Thomason]\label{defn:Dwyer-map}
A full sub-$1$-category inclusion $I \colon \mathcal{A} \hookrightarrow \mathcal{B}$ is a \textbf{Dwyer map}
if the following conditions hold.
\begin{enumerate}[label=(\roman*)]
\item The category $\mathcal{A}$ is a \emph{sieve} in $\mathcal{B}$, meaning there is a necessarily unique functor $\chi \colon \mathcal{B} \to \ensuremath{\mathbbb{2}}$ with $\chi^{-1}(0) = \mathcal{A}$. We write $\mathcal{V}:=\chi^{-1}(1)$ for the complementary \emph{cosieve} of $\mathcal{A}$ in $\mathcal{B}$.
\item The inclusion $I \colon \mathcal{A} \hookrightarrow\mathcal{W}$ into the \emph{minimal cosieve}\footnote{Explicitly $\mathcal{W}$ is the full subcategory of $\mathcal{B}$ containing every object that arises as the codomain of an arrow with domain in $\mathcal{A}$.} $\mathcal{W} \subset \mathcal{B}$ containing $\mathcal{A}$ admits a right adjoint left inverse $R\colon \mathcal{W}\to \mathcal{A}$, a right adjoint for which the unit is an identity.
\end{enumerate}
\end{defn}
Schwede describes Dwyer maps as ``categorical analogs of the inclusion of a neighborhood deformation retract'' \cite{SchwedeOrbispaces}. In fact, many examples of Dwyer maps are more like deformation retracts, in that the cosieve $\mathcal{W}$ generated by $\mathcal{A}$ is the full codomain category $\mathcal{B}$.
\begin{ex}
\label{basicDwyer}
The vertex inclusion $0 \colon \ensuremath{\mathbbb{1}} \to \ensuremath{\mathbbb{2}}$ is a Dwyer map, with $! \colon \ensuremath{\mathbbb{2}} \to \ensuremath{\mathbbb{1}}$ the right adjoint left inverse.
The other vertex inclusion $1 \colon \ensuremath{\mathbbb{1}} \to \ensuremath{\mathbbb{2}}$ is not a Dwyer map.
\end{ex}
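As a minimal illustration of the sieve condition in \cref{defn:Dwyer-map}, the following sketch (an informal aid, not part of the paper's formalism) encodes the poset $\ensuremath{\mathbbb{2}}$ by its arrows and checks which full subcategories are sieves, i.e., closed under precomposition. This is the obstruction in the example above: $\{0\}$ is a sieve in $\ensuremath{\mathbbb{2}}$, while $\{1\}$ is not.

```python
# Arrows of the poset 2 = {0 -> 1}, recorded as (domain, codomain) pairs.
ARROWS = [(0, 0), (0, 1), (1, 1)]

def is_sieve(objects):
    """A set of objects is a sieve when every arrow whose codomain lies
    in the set also has its domain in the set."""
    return all(d in objects for (d, c) in ARROWS if c in objects)
```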
Generalizing the previous example:
\begin{ex}\label{ex:new-terminal-Dwyer} If $\mathcal{A}$ is a category with a terminal object and $\mathcal{A}^{\triangleright}$ is the category which formally adds a new terminal object, then the inclusion $\mathcal{A} \hookrightarrow \mathcal{A}^{\triangleright}$ is a Dwyer map.\footnote{If $\mathcal{A}$ does not have a terminal object, then $\mathcal{A} \to \mathcal{A}^{\triangleright}$ need not be a Dwyer map. Indeed, if $\mathcal{A}=\ensuremath{\mathbbb{1}}\amalg\ensuremath{\mathbbb{1}}$, the only cosieve containing $\mathcal{A}$ is $\mathcal{A}^\triangleright$ itself, and there cannot be a right adjoint $\mathcal{A}^\triangleright \to \mathcal{A}$ as $\mathcal{A}$ does not have a terminal object.}
\end{ex}
We warn the reader that we are using the original notion of Dwyer map, not the pseudo-Dwyer maps introduced by Cisinski \cite{CisinskiDwyer}, which are retracts of Dwyer maps. In particular, our Dwyer maps are not closed under retracts. Thomason observed, however, that they are stable under pushouts, as we now recall:
\begin{lem}[{\cite[Proposition 4.3]{ThomasonModelCat}}]
\label{pushoutDwyer}
Any pushout of a Dwyer map $I$ defines a Dwyer map $J$:
\[
\begin{tikzcd}
\mathcal{A} \arrow[d, "I"', hook]\arrow[r, "F"]\arrow[dr, phantom, "\ulcorner" very near end] &\mathcal{C} \ar[d, "J", hook]\\
\mathcal{B} \arrow[r, "G" swap] &\mathcal{D}.
\end{tikzcd}
\]
\end{lem}
Note, for example, that \cref{pushoutDwyer} explains the Dwyer map of \cref{ex:new-terminal-Dwyer}: if $\mathcal{A}$ has a terminal object $t$, then the pushout
\[
\begin{tikzcd} \arrow[dr, phantom, "\ulcorner" very near end] \ensuremath{\mathbbb{1}} \arrow[d, hook, "0"'] \arrow[r, "t"] & \mathcal{A} \arrow[d, hook] \\ \ensuremath{\mathbbb{2}} \arrow[r] & \mathcal{A}^\triangleright
\end{tikzcd}
\]
defines the category $\mathcal{A}^\triangleright$.
Our aim is to show that pushouts of categories involving at least one Dwyer map can also be regarded as pushouts of $(\infty,1)$-categories in the sense made precise by considering the nerve embedding from categories into quasi-categories:
\begin{thm}
\label{DwyerPushout}
Let
\[
\begin{tikzcd}
\mathcal{A} \arrow[d, "I"', hook]\arrow[r, "F"]\arrow[dr, phantom, "\ulcorner" very near end] &\mathcal{C} \ar[d, "J", hook]\\
\mathcal{B} \arrow[r, "G" swap] &\mathcal{D}
\end{tikzcd}
\]
be a pushout of categories, and assume $I$ to be a Dwyer map. Then the induced map of simplicial sets \[N\mathcal{C}\aamalg{N\mathcal{A}} N\mathcal{B} \to N\mathcal{D}\] is a weak categorical equivalence.
\end{thm}
By a weak categorical equivalence, we mean a weak equivalence in Joyal's model structure for quasi-categories \cite[\S 1]{JoyalTierney:QCSS}. \cref{DwyerPushout} is a refinement of a similar result of Thomason \cite[Proposition 4.3]{ThomasonModelCat} which proves that the same map is a weak homotopy equivalence.
In a companion paper, we give an application of \cref{DwyerPushout} to the theory of $(\infty,2)$-categories. There we prove:
\begin{thm}[{\cite[4.4.2]{HORR}}] The space of composites of any pasting diagram in any $(\infty,2)$-category is contractible.
\end{thm}
To prove this, we make use of Lurie's model structure of $(\infty,2)$-categories as categories enriched over quasi-categories \cite{LurieGoodwillie}. In this model, a \emph{pasting diagram} is a simplicially enriched functor out of the free simplicially enriched category defined by gluing together the objects, atomic 1-cells, and atomic 2-cells of a pasting scheme, while the composites of these cells belong to the homotopy coherent diagram indexed by the nerve of the free 2-category generated by the pasting scheme.
This pair of $(\infty,2)$-categories has a common set of objects so the difference lies in their hom-spaces. The essential difference between the procedure of attaching an atomic 2-cell along the bottom of a pasting diagram or along the bottom of the free 2-category it generates is the difference between forming a pushout of hom-categories in the category of $(\infty,1)$-categories or in the category of 1-categories. Since one of the functors in the span that defines the pushout under consideration is a Dwyer map, \cref{DwyerPushout} proves that the resulting $(\infty,2)$-categories are equivalent.
In \S\ref{sec:reduction}, we explain how \cref{DwyerPushout} reduces to two special cases in which we are free to make additional assumptions on the non-Dwyer functor $F$. In \S\ref{sec:pushout}, we analyze 1-categorical pushouts of Dwyer maps, establish some common notation to be used in the remaining sections, and develop some terminology for the simplices in their nerves that require the greatest attention. In \S\ref{sec:anodyne-dwyer-pushout}, we state and prove our first main result. Theorem \ref{AnodyneDwyer} observes that the canonical comparison between the pushout of nerves of categories and the nerve of the pushout is inner anodyne, provided that one of the functors in the span is a Dwyer map and the other is an injective-on-objects faithful functor. The remaining special case, where $F$ is instead bijective-on-objects and full, is proven in \S\ref{sec:full}.
\section{A reduction of the problem}\label{sec:reduction}
We prove \cref{DwyerPushout} in stages in which we are allowed to assume some special properties of the functor $F \colon \mathcal{A} \to \mathcal{C}$, obtained by factoring this functor using the following (weak) factorization systems on $\cat{Cat}$.
\begin{rec}\label{bofull-faithful} A functor $F \colon \mathcal{A} \to \mathcal{C}$ of 1-categories may be factored as $\mathcal{A}\to\mathcal{E}\to\mathcal{C}$ by
\begin{enumerate}[label=(\roman*)]
\item factoring the object map trivially as an identity followed by an arbitrary map:
\[
\begin{tikzcd}[row sep=tiny] \textup{ob}{\mathcal{A}} \arrow[rr, "F"] \arrow[dr, dashed, equals] & & \textup{ob}{\mathcal{C}} \\ & \textup{ob}{\mathcal{E}} \arrow[ur, dashed, "F"']
\end{tikzcd}
\]
\item then factoring each hom-set function as an epimorphism followed by a monomorphism:
\[
\begin{tikzcd}[row sep=tiny] \mathcal{A}(a,a') \arrow[rr, "F_{a,a'}", dashed] \arrow[dr, two heads] & & \mathcal{C}(Fa,Fa') \\ & \mathcal{E}(a,a') \arrow[ur, dashed, tail]
\end{tikzcd}
\]
\end{enumerate}
This has the effect of factoring $F$ as a bijective-on-objects full functor followed by a faithful functor.
\end{rec}
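The hom-set step above is the usual image factorization of a function, which the following sketch makes explicit (finite hom-sets encoded as lists, purely for illustration); applied hom-set by hom-set, it produces the full and faithful factors.

```python
def image_factorization(f, domain):
    """Factor f as a surjection onto its image followed by an injective
    inclusion into the codomain (the epi-mono factorization of a function)."""
    image = {f(x) for x in domain}
    surjection = {x: f(x) for x in domain}   # epimorphism onto the image
    inclusion = {y: y for y in image}        # monomorphism into the codomain
    return image, surjection, inclusion
```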
\begin{rec}\label{bo-ff} A functor $F \colon \mathcal{A} \to \mathcal{C}$ of 1-categories may be factored as $\mathcal{A}\to\mathcal{E}\to\mathcal{C}$ by
\begin{enumerate}[label=(\roman*)]
\item factoring the object map trivially as an identity followed by an arbitrary map:
\[
\begin{tikzcd}[row sep=tiny] \textup{ob}{\mathcal{A}} \arrow[rr, "F"] \arrow[dr, dashed, equals] & & \textup{ob}{\mathcal{C}} \\ & \textup{ob}{\mathcal{E}} \arrow[ur, dashed, "F"']
\end{tikzcd}
\]
\item then factoring each hom-set function trivially as an arbitrary map followed by an identity:
\[
\begin{tikzcd}[row sep=tiny] \mathcal{A}(a,a') \arrow[rr, "F_{a,a'}"] \arrow[dr, dashed, "F_{a,a'}"'] & & \mathcal{C}(Fa,Fa') \\ & \mathcal{E}(a,a') \arrow[ur, dashed, equals]
\end{tikzcd}
\]
\end{enumerate}
This has the effect of factoring $F$ as a bijective-on-objects functor followed by a fully-faithful functor.
\end{rec}
\begin{rec}\label{inj-surjequiv} A functor $F \colon \mathcal{A} \to \mathcal{C}$ of $1$-categories may be factored by
\begin{enumerate}[label=(\roman*)]
\item forming the cograph factorization:
\[
\begin{tikzcd}[row sep=tiny] \mathcal{A} \arrow[rr, "F"] \arrow[dr, dashed, "i_1"'] & & \mathcal{C} \\ & \mathcal{A} \amalg \mathcal{C} \arrow[ur, dashed, "{\langle F,\textup{id}\rangle} "']
\end{tikzcd}
\]
\item then taking the bijective-on-objects, fully-faithful factorization of the right factor defined by \cref{bo-ff}.
\end{enumerate}
This has the effect of factoring $F$ as an injective-on-objects functor followed by a surjective-on-objects equivalence.
\end{rec}
By combining these factorizations, we may first factor a functor $F \colon \mathcal{A} \to \mathcal{C}$ as a bijective-on-objects full functor $\mathcal{A} \to \mathcal{C}_0$ followed by a faithful functor $\mathcal{C}_0 \to \mathcal{C}=\mathcal{C}_2$.
Then factor the faithful functor as an injective-on-objects functor $\mathcal{C}_0 \to \mathcal{C}_1$ followed by a surjective-on-objects equivalence $\mathcal{C}_1\to \mathcal{C}_2$.
Notice that $\mathcal{C}_0 \rightarrowtail \mathcal{C}_1$ is now both injective-on-objects and faithful.
A Dwyer map $I \colon \mathcal{A} \hookrightarrow \mathcal{B}$ is in particular injective-on-objects and faithful. Form the following pushouts in $\cat{Cat}$:
\[ \begin{tikzcd}
\mathcal{A} \rar["\text{full}", "\text{bij ob}"'] \dar[hook] \ar[dr, phantom, "\ulcorner" very near end] & \mathcal{C}_0 \rar[tail,"\text{faith}"', "\text{inj ob}"] \dar[hook] \ar[dr, phantom, "\ulcorner" very near end] & \mathcal{C}_1 \rar[two heads, we] \dar[hook] \ar[dr, phantom, "\ulcorner" very near end] & \mathcal{C}_2 \dar[hook] \\
\mathcal{B} \rar & \mathcal{D}_0 \rar[tail] & \mathcal{D}_1 \arrow[r, we] & \mathcal{D}_2
\end{tikzcd} \]
The canonical model structure on $\cat{Cat}$, whose cofibrations are the injective-on-objects functors, whose fibrations are the isofibrations, and whose weak equivalences are the equivalences of categories, is left proper. Thus $\mathcal{D}_1 \to \mathcal{D}_2$, being the pushout of the equivalence $\mathcal{C}_1 \to \mathcal{C}_2$ along the cofibration $\mathcal{C}_1 \hookrightarrow \mathcal{D}_1$, is an equivalence.
Applying nerves and then taking iterated pushouts gives rise to the following diagram of simplicial sets:
\[ \begin{tikzcd}
N\mathcal{A} \rar \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_0 \rar[tail] \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_1 \rar[two heads, we] \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_2 \dar[tail] \\
N\mathcal{B} \rar & P_0 \rar[tail] \dar \ar[dr, phantom, "\ulcorner" very near end] & P_1 \rar[we] \dar \ar[dr, phantom, "\ulcorner" very near end] & P_2 \dar \\
& N\mathcal{D}_0 \rar[tail] & Q_1 \rar \dar \ar[dr, phantom, "\ulcorner" very near end] & Q_2 \dar \\
& & N\mathcal{D}_1 \rar \ar[dr, bend right, we] & R_2 \dar \\
& & & N\mathcal{D}_2
\end{tikzcd} \]
The nerve functor carries injective-on-objects faithful functors to monomorphisms of simplicial sets, which are the cofibrations in the Joyal model structure. Thus, the maps decorated with a tail are cofibrations. The nerve functor also carries equivalences of categories to weak equivalences in the Joyal model structure. Since this model structure is left proper, the maps decorated with a tilde are weak equivalences.
Using this diagram we may reduce \cref{DwyerPushout} to two special cases:
\begin{lem}\label{lemma on reductions}
Consider a pushout diagram of categories in which $I$ is a Dwyer map:
\begin{equation*}
\begin{tikzcd}
\mathcal{A} \arrow[d, "I"', hook]\arrow[r, "F"]\arrow[dr, phantom, "\ulcorner" very near end] &\mathcal{C} \ar[d, "J", hook]\\
\mathcal{B} \arrow[r, "G" swap] &\mathcal{D}
\end{tikzcd}
\end{equation*}
If the induced map of simplicial sets $N\mathcal{C} \amalg_{N\mathcal{A}} N\mathcal{B} \to N\mathcal{D}$ is a weak categorical equivalence whenever
\begin{enumerate}[label=(\roman*), ref=\roman*]
\item\label{assum:injobfaith} the functor $F$ is injective-on-objects and faithful, or
\item\label{assum:bijobfull} the functor $F$ is bijective-on-objects and full,
\end{enumerate}
then this map is a weak categorical equivalence for any functor $F$.
\end{lem}
\begin{proof}
The second of these stated assumptions tells us that the map $P_0 \to N\mathcal{D}_0$ is a weak equivalence while the first assumption tells us that the map $Q_1 \to N\mathcal{D}_1$ is a weak equivalence, as indicated in the following diagram
\[ \begin{tikzcd}
N\mathcal{A} \rar \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_0 \rar[tail] \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_1 \rar[two heads, tfibarrow] \dar[tail] \ar[dr, phantom, "\ulcorner" very near end] & N\mathcal{C}_2 \dar[tail] \\
N\mathcal{B} \rar & P_0 \rar[tail] \dar[we] \ar[dr, phantom, "\ulcorner" very near end] & P_1 \rar[we] \dar[we] \ar[dr, phantom, "\ulcorner" very near end] & P_2 \dar \ar[ddd, bend left, we] \\
& N\mathcal{D}_0 \rar[tail] & Q_1 \rar \dar[we] \ar[dr, phantom, "\ulcorner" very near end] & Q_2 \dar \\
& & N\mathcal{D}_1 \rar \ar[dr, bend right, we] & R_2 \dar \\
& & & N\mathcal{D}_2
\end{tikzcd} \]
Since the Joyal model structure is left proper, the map $P_1 \to Q_1$ is a weak equivalence. Hence, by two-of-three, the map $P_2 \to N\mathcal{D}_2$ is a weak equivalence, which proves the general case of the Dwyer map theorem.
\end{proof}
Thus, to prove \cref{DwyerPushout} it suffices to prove the two special cases \eqref{assum:injobfaith} and \eqref{assum:bijobfull}, which appear as \cref{AnodyneDwyer} and \cref{DwyerThmBijObjFull} below.
\section{Dwyer pushouts and their nerves}\label{sec:pushout}
We now establish some notation that we will freely reference in the remainder of this paper. By \cref{defn:Dwyer-map}, a Dwyer map $I \colon \mathcal{A} \hookrightarrow \mathcal{B}$ uniquely determines a functor
$\chi \colon \mathcal{B} \to \ensuremath{\mathbbb{2}}$ that classifies the sieve $\mathcal{A} = \chi^{-1}(0)$ and its complementary cosieve $\mathcal{V} \coloneqq \chi^{-1}(1)$
\begin{equation}\label{eq:basic-non-bridge}
\begin{tikzcd} \mathcal{V} \arrow[r, hook] \arrow[d] \arrow[dr, phantom, "\lrcorner" very near start] & \mathcal{B} \arrow[d, "\chi"] & \arrow[dl, phantom, "\llcorner" very near start] \mathcal{A} \arrow[l, hook'] \arrow[d] \\ \ensuremath{\mathbbb{1}} \arrow[r, hook, "1"'] & \ensuremath{\mathbbb{2}} & \ensuremath{\mathbbb{1}} \arrow[l, hook', "0"]
\end{tikzcd}
\end{equation}
as well as a right adjoint left inverse adjunction $(I \dashv R,\varepsilon \colon IR \Rightarrow \textup{id}_\mathcal{W})$ associated to the inclusion of $\mathcal{A}$ into the minimal cosieve $\mathcal{A} \subset\mathcal{W}\subset\mathcal{B}$. This data may be summarized by the diagram
\begin{equation}\label{eq:DwyerData}
\begin{tikzcd}
& & \varnothing \arrow[dl] \arrow[dr] \arrow[dd, phantom, "\rotatebox{135}{$\ulcorner$}" very near start] \\ & \mathcal{U} \arrow[dl, hook'] \arrow[dr, hook] \arrow[dd, phantom, "\rotatebox{135}{$\ulcorner$}" very near start]&
& \mathcal{A} \arrow[dl, hook'] \arrow[d] \arrow[ddl, phantom, "\llcorner" very near start] \\
\mathcal{V} \arrow[d] \arrow[dr, hook] \arrow[ddr, phantom, "\lrcorner" very near start] & & \mathcal{W} \arrow[dl, hook'] \arrow[ur, dashed, "R", bend left, "\rotatebox{45}{$\top$}"' outer sep=-2pt]& \ensuremath{\mathbbb{1}} \arrow[ddll, "0", hook'] \\
\ensuremath{\mathbbb{1}} \arrow[dr, hook, "1"'] & \mathcal{B} \arrow[d] & ~ & \\& \ensuremath{\mathbbb{2}}
\end{tikzcd}
\end{equation}
in which $\mathcal{U} \coloneqq\mathcal{W}\cap\mathcal{V}\cong \mathcal{W}\backslash\mathcal{A} $. By inspection, the codomain category of a Dwyer map admits the following description:
\begin{lem}\label{lem:DwyerCodomain} For any Dwyer map $I \colon \mathcal{A} \hookrightarrow \mathcal{B}$ as in \eqref{eq:DwyerData}, there is a partitioning
\[
\begin{tikzcd} \textup{ob}\mathcal{A}\amalg\textup{ob}\mathcal{V} \arrow[r, iso] & \textup{ob}\mathcal{B}
\end{tikzcd}
\]
of the objects of the codomain category while the hom-sets are given by
\[ \mathcal{A}(a,a') \cong \mathcal{B}(a,a') \qquad \mathcal{V}(v,v') \cong \mathcal{B}(v,v') \qquad \mathcal{A}(a,Ru) \cong \mathcal{B}(a,u) \]
for all $a,a' \in \mathcal{A}$, $v, v' \in \mathcal{V}$, and $u \in \mathcal{U}$, and are empty otherwise. For objects $a,a' \in \mathcal{A}$ and $u,u' \in \mathcal{U}$, the composition map
\[
\begin{tikzcd} \mathcal{B}(u,u') \times \mathcal{B}(a, u) \times \mathcal{B}(a',a) \arrow[r, "\circ"] & \mathcal{B}(a', u') \\
\arrow[u, iso]
\mathcal{B}(u,u') \times \mathcal{A}(a,Ru) \times \mathcal{A}(a',a) \arrow[d, "R \times \textup{id}"'] \\ \mathcal{A}(Ru,Ru') \times \mathcal{A}(a, Ru) \times \mathcal{A}(a',a) \arrow[r, "\circ"] & \mathcal{A}(a',Ru') \arrow[uu, iso']
\end{tikzcd}
\]
is the unique map making the diagram commute.
\qed
\end{lem}
In particular, a sequence of composable morphisms in $\mathcal{B}$ may start with morphisms in the full subcategory $\mathcal{A}$ and may then leave this subcategory by means of a \textbf{bridging morphism} $f \colon a \to u$ from $a \in \mathcal{A}$ to $u \in \mathcal{U}$, but if it does so, it will necessarily continue in, and never leave, the full subcategory $\mathcal{U}$. This gives us a natural partitioning of the simplices in the nerve of $\mathcal{B}$.
\begin{defn}\label{defn:bridge-index}
Each $n$-simplex $\sigma$ of $N\mathcal{B}$ defines a composite map
\[
\begin{tikzcd} \Delta^n \arrow[r, "\sigma"] & N\mathcal{B} \arrow[r, "\chi"] & \Delta^1
\end{tikzcd}
\]
the data of which is determined by the \textbf{bridge index} $r \in \{+,n,\ldots, 1, -\}$ of the $n$-simplex $\sigma$. When the composite map $\chi\sigma \colon \Delta^n \to \Delta^1$ is surjective, the bridge index is an integer that corresponds to the minimal vertex of $\Delta^n$ that maps to the vertex $1$. When $\chi\sigma \colon \Delta^n \to \Delta^1$ is constant at 0 or 1, the bridge index is $+$ or $-$, respectively.
\end{defn}
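For instance, one can check that the inclusion $\ensuremath{\mathbbb{1}} \xrightarrow{0} \ensuremath{\mathbbb{2}}$ of the initial object is a Dwyer map classified by $\chi = \textup{id}_{\ensuremath{\mathbbb{2}}}$, with $\mathcal{W} = \ensuremath{\mathbbb{2}}$ and $R$ the constant functor. An $n$-simplex of $N\ensuremath{\mathbbb{2}} = \Delta^1$ is a map $\Delta^n \to \Delta^1$, which we may record as a word
\[ 0 \cdots 0 \, 1 \cdots 1 \]
of length $n+1$. The bridge index of this simplex is the position of the first letter $1$ when both letters occur, and is $+$ or $-$ for the words constant at $0$ or $1$, respectively.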
The simplices with bridge index $+$ or $-$ are exactly those that lie in the images of the full inclusions $N\mathcal{A}\hookrightarrow N\mathcal{B}$ and $N\mathcal{V} \hookrightarrow N\mathcal{B}$ induced by \eqref{eq:basic-non-bridge}, respectively.
\begin{defn}\label{defn:bridging-simplex}
By \cref{lem:DwyerCodomain}, an $n$-simplex with integer bridge index $r$ has the form
\begin{equation}\label{eq:basic-bridging-n-simplex}
\begin{tikzcd}
a_0 \arrow[r, "f_1"] & \cdots \arrow[r, "f_{r-1}"] & a_{r-1} \arrow[r, "\hat{f}"] & u_r \arrow[r, "h_{r+1}"] & \cdots \arrow[r, "h_n"] & u_n
\end{tikzcd}
\end{equation}
where $f_i \colon a_{i-1} \to a_i \in \mathcal{A}$ for $1 \leq i \leq r-1$, $h_j \colon u_{j-1} \to u_j \in \mathcal{U}$ for $r < j \leq n$, and $\hat{f} \colon a_{r-1} \to u_r$, the adjoint transpose of a morphism $f \colon a_{r-1} \to Ru_r \in \mathcal{A}$, is the unique bridging morphism in the sequence. We refer to the simplices with integer bridge index as \textbf{bridging simplices}.
\end{defn}
\begin{defn}
\label{defn:bascule}
A bridging simplex is \textbf{bascule}\footnote{A bascule bridge is also known as a drawbridge. The idea, as explained by \cref{cor:bascule-bijection} is that the relation between a bascule simplex of bridge index $r$ and its $(r{-}1)$st face is analogous to that between a bascule bridge that is raised to allow boat traffic and a bascule bridge that is flat for car traffic.} if its bridging morphism $\hat{f} \colon a \to u$ is a component of the counit of the adjunction
\[ \begin{tikzcd} \mathcal{A} \arrow[r, bend left, hook, "I", start anchor=north east, end anchor=north west] \arrow[r, phantom, "\bot"] & \mathcal{W} \subset \mathcal{B} \arrow[l, bend left, "R", start anchor=south west, end anchor=south east]
\end{tikzcd}
\] i.e., if $\hat{f} = \varepsilon_{u} \colon Ru \to u \in \mathcal{B}$ is the transpose of $\textup{id}_{Ru} \colon Ru \to Ru \in \mathcal{A}$, in which case $a = Ru$.
\end{defn}
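In the minimal case $n = r = 1$, for instance, the bascule 1-simplices are precisely the counit components $\varepsilon_u \colon Ru \to u$ for objects $u \in \mathcal{U}$, since a bridging 1-simplex consists of nothing but its bridging morphism.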
\begin{lem}
\label{BoundarySuspectDwyer}
Let $\sigma$ be a bascule $n$-simplex in $N\mathcal{B}$ of bridge index $r$. Then the $k$th face $\sigma\cdot\delta^{k}$ is
\begin{itemize}
\item a non-bridging simplex if $k=r-1=0$ or if $k=r=n$,
\item a non-bascule simplex of bridge index $r-1$ if $k=r-1 > 0$ and the edge from the $(r{-}2)$nd vertex to the $(r{-}1)$st vertex is not an identity,
\item a simplex of bridge index $r$ if $k=r < n$, or
\item a bascule simplex otherwise.
\end{itemize}
\end{lem}
\begin{proof}
We prove the case $k=r-1 > 0$, which is the most subtle and most important, and leave the others to the reader.
The face $\sigma\cdot\delta^{r-1}$ of a bascule simplex is formed by composing the indicated morphisms
\[
\begin{tikzcd}[sep=small]
& && Ru_r\arrow[rd, "{\varepsilon_{u_{r}}}" ]&& &\\
a_0 \arrow[r] & \cdots \arrow[r]& a_{r-2} \arrow[rr, "\widehat{f_{r-1}}"']\arrow[ru, "f_{r-1}"] && u_{r} \arrow[r] & \cdots \arrow[r] & u_{n}
\end{tikzcd}
\]
The bridging morphism $\widehat{f_{r-1}} \colon a_{r-2} \to u_r$ in $\sigma\cdot\delta^{r-1}$ is a counit component if and only if its transpose $f_{r-1}$ is the identity. Thus $\sigma\cdot\delta^{r-1}$ is a non-bascule simplex except in this case.
\end{proof}
Reversing this process, we see that any bridging $n$-simplex of bridge index $r$ arises as the $r$th face of a unique bascule $(n{+}1)$-simplex of bridge index $r+1$---its \textbf{bascule lift}---obtained by factoring the bridging morphism $\hat{f} \colon a \to u \in \mathcal{W} \subset \mathcal{B}$ through its adjoint transpose $f \colon a \to Ru \in \mathcal{A}$:
\[
\begin{tikzcd}[sep=small] & Ru \arrow[rd, "\varepsilon_{u}"] \\ a \arrow[ur, "f"] \arrow[rr, "\hat{f}"'] & & u.
\end{tikzcd}
\]
Consequently:
\begin{cor}\label{cor:bascule-bijection}
Fix a positive integer $n$. For any $1 \leq r \leq n$, the $r$th face map defines a bijection between
\begin{itemize}
\item the bascule $(n{+}1)$-simplices of bridge index $r+1$ and
\item the bridging $n$-simplices of bridge index $r$
\end{itemize}
that restricts to a bijection between
\begin{itemize}
\item the non-degenerate bascule $(n{+}1)$-simplices of bridge index $r+1$ and
\item the non-degenerate non-bascule bridging $n$-simplices of bridge index $r$. \qed
\end{itemize}
\end{cor}
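For example, when $n = r = 1$, this bijection matches a bascule 2-simplex
\[ a \xrightarrow{f} Ru \xrightarrow{\varepsilon_u} u \]
of bridge index 2 with the bridging 1-simplex $\hat{f} = \varepsilon_u \circ f \colon a \to u$ given by its $1$st face. Such a 2-simplex is degenerate exactly when $f$ is an identity, which, since post-composition with $\varepsilon_u$ defines a bijection $\mathcal{A}(a,Ru) \cong \mathcal{B}(a,u)$, is also exactly when its face $\hat{f} = \varepsilon_u$ is bascule, illustrating the restricted bijection.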
Now consider the pushout of a Dwyer map along an arbitrary functor $F \colon \mathcal{A} \to \mathcal{C}$.
\begin{equation}\label{eq:pi-defn}
\begin{tikzcd}[sep=small] & [+10pt] ~& \mathcal{A} \arrow[dl, hook', "I"'] \arrow[dr, phantom, "\ulcorner" very near end] \arrow[rr, "F"] \arrow[dd] & & \mathcal{C} \arrow[dd] \arrow[dl, "J", hook'] \\ \mathcal{V} \arrow[r, hook] \arrow[dd] & \mathcal{B} \arrow[dd, "\chi"'] \arrow[rr, "G"' near end, crossing over] & & \mathcal{D} \\
& & \ensuremath{\mathbbb{1}} \arrow[rr, equals] \arrow[dl, hook', "0"'] & & \ensuremath{\mathbbb{1}} \arrow[dl, hook', "0"] \\ \ensuremath{\mathbbb{1}} \arrow[r, hook, "1"] & \ensuremath{\mathbbb{2}} \arrow[rr, equals] & & \ensuremath{\mathbbb{2}} \arrow[from=uu, "\pi" near start, dashed, crossing over] \end{tikzcd}
\end{equation}
The induced functor $\pi \colon \mathcal{D} \to \ensuremath{\mathbbb{2}}$ partitions the objects of $\mathcal{D}$ into the two fibers $\textup{ob}(\pi^{-1}(0)) \cong \textup{ob}\mathcal{C}$ and $\textup{ob}(\pi^{-1}(1))\cong \textup{ob}\mathcal{V}$ and prohibits any morphisms from the latter to the former.
The right adjoint left inverse adjunction $(I \dashv R,\varepsilon \colon IR \Rightarrow \textup{id}_\mathcal{W})$ associated to the inclusion of $\mathcal{A}$ into the minimal cosieve $\mathcal{A} \subset\mathcal{W}\subset\mathcal{B}$ pushes out to define a right adjoint left inverse $(J \dashv S, \nu \colon JS \Rightarrow \textup{id}_{\mathcal{Y}})$ to the inclusion of $\mathcal{C}$ into the minimal cosieve $\mathcal{C} \subset\mathcal{Y}\subset\mathcal{D}$.
\[
\begin{tikzcd}
\mathcal{A} \arrow[r, "F"] \arrow[d, hook, bend right, "I"'] \arrow[d, phantom, "\dashv"] \arrow[dr, phantom, "\ulcorner" very near end] & \mathcal{C} \arrow[d, hook, bend right, "J"'] \arrow[d, phantom, "\dashv"] & \mathcal{A} \arrow[r, "F"] \arrow[d, hook, "I"'] \arrow[dr, phantom, "\ulcorner" very near end] & \mathcal{C} \arrow[d, hook, "J"] \arrow[ddr, bend left, equals] & &\mathcal{A} \arrow[r, "F"] \arrow[d, hook, "I"'] \arrow[dr, phantom, "\ulcorner" very near end] & \mathcal{C} \arrow[d, hook, "J"] \arrow[dr, "\Delta"] \\
\mathcal{W} \arrow[r, "G"'] \arrow[d, hook] \arrow[dr, phantom, "\ulcorner" very near end] \arrow[u, bend right, "R"'] & \mathcal{Y} \arrow[d, hook] \arrow[u, bend right, "S"'] & \mathcal{W} \arrow[r, "G"'] \arrow[dr, "R"'] & \mathcal{Y} \arrow[dr, dashed, "S"] & & \mathcal{W} \arrow[dr, "\varepsilon"'] \arrow[r, "G"'] & \mathcal{Y} \arrow[dr, dashed, "\nu"] & \mathcal{C}^\ensuremath{\mathbbb{2}} \arrow[d, "J^\ensuremath{\mathbbb{2}}"] \\ \mathcal{B} \arrow[r, "G"'] & \mathcal{D} & & \mathcal{A} \arrow[r, "F"'] & \mathcal{C} & & \mathcal{W}^\ensuremath{\mathbbb{2}} \arrow[r, "G^\ensuremath{\mathbbb{2}}"'] & \mathcal{Y}^\ensuremath{\mathbbb{2}}
\end{tikzcd}
\]
These observations explain the closure of Dwyer maps under pushout and furthermore can be used to explicitly describe the structure of the category $\mathcal{D}$ defined by the pushout of a Dwyer map, as proven in \cite[Proof of Lemma 2.5]{BMOOPY}; cf.\ also \cite[Construction 1.2]{SchwedeOrbispaces} and \cite[\textsection7.1]{AraMaltsiniotisVers}.
\begin{prop}
\label{prop:PushoutDwyerHom}
The objects in the pushout category $\mathcal{D}$ are given by
\[
\begin{tikzcd} \textup{ob}\mathcal{C}\amalg\textup{ob}\mathcal{V} \arrow[r, iso] & \textup{ob}\mathcal{D}
\end{tikzcd}
\]
while the hom-sets are given by
\[
\begin{tikzcd}[row sep=tiny]
\mathcal{C}(c,c') \cong \mathcal{D}(c,c') & \mathcal{V}(v,v') \cong \mathcal{B}(v,v') \cong \mathcal{D}(v,v') & \mathcal{C}(c,Su) \arrow[r, "\nu_{u} \circ (-)", iso'] & \mathcal{D}(c,u) \\ & & f \rar[mapsto] &\hat f
\end{tikzcd}
\]
for all $c,c' \in \mathcal{C}$, $v,v' \in \mathcal{V}$, and $u \in \mathcal{U}$, and are empty otherwise. Functoriality of the inclusions $J$ and $G$ defines the composition on the image of $\mathcal{C}$ and $\mathcal{V}$. For objects $c,c' \in \mathcal{C}$ and $u,u' \in \mathcal{U}$, the composition map
\[
\begin{tikzcd} \mathcal{D}(u,u') \times \mathcal{D}(c, u) \times \mathcal{D}(c',c) \arrow[r, "\circ"] & \mathcal{D}(c', u') \\
\arrow[u, iso]
\mathcal{D}(u,u') \times \mathcal{C}(c,Su) \times \mathcal{C}(c',c) \arrow[d, "S \times \textup{id}"'] \\ \mathcal{C}(Su,Su') \times \mathcal{C}(c, Su) \times \mathcal{C}(c',c) \arrow[r, "\circ"] & \mathcal{C}(c',Su') \arrow[uu, iso']
\end{tikzcd}
\]
is the unique map making the diagram commute.\footnote{Note if $u \in \mathcal{U}$ and $v \in \mathcal{V}\backslash\mathcal{U}$, then $\mathcal{B}(u,v)=\varnothing$.}
\end{prop}
To summarize, $J$ and $G$ define fully-faithful inclusions
\begin{equation}\label{eq:non-bridge}
\begin{tikzcd} \mathcal{V} \arrow[d] \arrow[r, hook] \arrow[dr, phantom, "\lrcorner" very near start] & \mathcal{D} \arrow[d, "\pi"] & \mathcal{C} \arrow[l, hook'] \arrow[d] \arrow[dl, phantom, "\llcorner" very near start] \\ \ensuremath{\mathbbb{1}} \arrow[r, hook, "1"] & \ensuremath{\mathbbb{2}} & \ensuremath{\mathbbb{1}} \arrow[l, hook', "0"']
\end{tikzcd}
\end{equation}
that are jointly surjective on objects. In particular, we may identify $\mathcal{V}$ with the complementary cosieve of $\mathcal{C}$ in $\mathcal{D}$. Adopting the terminology introduced above, the category $\mathcal{D}$ contains additional \textbf{bridging morphisms} $\hat{f} \colon c \to u$ from the fiber over 0 to the fiber over 1 but only with codomain in $\mathcal{U} = \mathcal{V} \cap \mathcal{W} = \mathcal{V} \cap \mathcal{Y}$ and these all arise as adjoint transposes under $J\dashv S$ of some morphism $f \colon c \to Su$ in $\mathcal{C}$.
\begin{rmk}\label{rmk:image-of-B} Under the identifications of \cref{prop:PushoutDwyerHom}, the action of the functor $G \colon \mathcal{B} \to \mathcal{D}$ on hom-sets is described by
\[
\begin{tikzcd}[column sep=large, row sep=tiny]
\mathcal{B}(a,a') \arrow[d, phantom, "\rotatebox{90}{$\cong$}"] \arrow[rr, "G"] & & \mathcal{D}(Ga,Ga') \arrow[d, phantom, "\rotatebox{90}{$=$}"] \\
\mathcal{A}(a,a') \arrow[r, "F"] & \mathcal{C}(Fa,Fa') \arrow[r, "J", iso'] &\mathcal{D}(Fa, Fa')
\end{tikzcd}
\]
\[
\begin{tikzcd}[column sep=large] \mathcal{B}(v,v') \arrow[r, "G", iso'] & \mathcal{D}(v,v') \end{tikzcd}
\]
\begin{equation}\label{eq:strict-adjunction-morphism}
\begin{tikzcd}
\mathcal{A}(a, Ru) \arrow[d, "\varepsilon_{u} \circ (-)"', iso] \arrow[r, "F"] & \mathcal{C}(Fa, FRu) \arrow[r, equals] & \mathcal{C}(Fa,Su) \arrow[d, "\nu_{u} \circ (-)", iso'] \\
\mathcal{B}(a,u) \arrow[r, "G"] & \mathcal{D}(Ga,u) \arrow[r, equals] & \mathcal{D}(Fa,u)
\end{tikzcd}
\end{equation}
for all $a,a' \in \mathcal{A}$, $v,v' \in \mathcal{V}$, and $u \in \mathcal{U}$. Note these three cases are disjoint and cover all the non-empty hom-sets in $\mathcal{B}$ and $\mathcal{D}$.
In particular, a bridging morphism $\hat{f} \colon c \to u$ in $\mathcal{D}$ lies in the image of $G$ if and only if the transpose $f \colon c \to Su = FRu$ in $\mathcal{C}$ equals $Fg$ for some $g \colon a \to Ru$ in $\mathcal{A}$. In this case, $G$ carries the transpose $\hat{g} \colon a \to u$ in $\mathcal{B}$ of $g$ to $\hat{f} \colon Ga = Fc \to u$ in $\mathcal{D}$.
\end{rmk}
Since the pushout $J \colon \mathcal{C} \hookrightarrow\mathcal{D}$ is a Dwyer map classified by $\pi \colon \mathcal{D} \to \ensuremath{\mathbbb{2}}$, the terminology introduced in \cref{defn:bridge-index,defn:bridging-simplex,defn:bascule} also apply to simplices in $N\mathcal{D}$, as do the results proven in \cref{BoundarySuspectDwyer} and \cref{cor:bascule-bijection}. Moreover, the pair of functors $F$ and $G$ define a strict adjunction morphism from the adjunction $I \dashv R$ to the adjunction $J \dashv S$, meaning that the action on homs commutes with adjoint transposition in the sense expressed by the commutative diagram \eqref{eq:strict-adjunction-morphism}. Consequently:
\begin{lem}\label{lem:bascule-preservation} The simplicial map $NG \colon N\mathcal{B} \to N\mathcal{D}$ preserves bascule simplices and bascule lifts.
\end{lem}
\begin{proof} A bridging simplex in $N\mathcal{B}$ is bascule if its bridging morphism is a component $\varepsilon_{u}$ of the counit $\varepsilon$. The image of this simplex in $N\mathcal{D}$ then has $G\varepsilon_{u} = \nu_{Gu}$ as the bridging morphism, which is a component of the counit $\nu$. Thus, $NG$ preserves bascule simplices. As a map of simplicial sets, $NG$ commutes with taking the $r$th face map of bascule simplices of bridge index $r+1$. Thus, it commutes with bascule lifts, which can be understood by \cref{cor:bascule-bijection} as the inverse of this operation.
\end{proof}
\begin{rmk}
While any simplex in $N\mathcal{B}$ that maps to a bridging simplex in $N\mathcal{D}$ is bridging, when $G \colon \mathcal{B} \to \mathcal{D}$ is not injective, basculeness is not necessarily reflected. For instance, if there are any objects $u \in \mathcal{U}$ so that $\textup{id}_{Su} \in \mathcal{C}$ has a non-identity preimage in $\mathcal{A}$, then the transpose of this map defines a bridging non-bascule 1-simplex in $\mathcal{B}$ that maps to a bascule 1-simplex in $\mathcal{D}$.
\end{rmk}
Now let $P = N\mathcal{C}\amalg_{N\mathcal{A}} N\mathcal{B}$ be the simplicial set defined as the pushout of the nerves. Via the composite map
$P \to N\mathcal{D} \xrightarrow{\pi} \Delta^1$, each of its simplices is assigned a bridge index, as in \cref{defn:bridge-index}. The set of $n$-simplices may be partitioned as follows:
\[ P_n \cong N\mathcal{C}_n \amalg (N\mathcal{A}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{A}\overset{1}{\vert} N\mathcal{U})_n\amalg N\mathcal{V}_n\]
where $N\mathcal{C}_n$ and $N\mathcal{V}_n$ are identified with the subsets of bridge indices $+$ and $-$, respectively, and $(N\mathcal{A}\vert^k N\mathcal{U})_n \subset N\mathcal{B}_n$ is the subset of bridging $n$-simplices in $\mathcal{B}$ of bridge index $k$. The natural comparison map
\[
\begin{tikzcd}[column sep=0em]
P_n \arrow[d] \arrow[r, phantom, "\cong"] & N\mathcal{C}_n \amalg (N\mathcal{A}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{A}\overset{1}{\vert} N\mathcal{U})_n \amalg N\mathcal{V}_n \arrow[d] \\
N\mathcal{D}_n \arrow[start anchor=south east, dr] \arrow[r, phantom, "\cong"] & N\mathcal{C}_n \amalg (N\mathcal{C}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{C}\overset{1}{\vert} N\mathcal{U})_n \amalg N\mathcal{V}_n \arrow[d]\\ &
{\{+,n,\ldots, 1,-\}}
\end{tikzcd}
\]
is bijective on the fibers over $-$ and $+$ but need neither be injective nor surjective on the other fibers.
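For example, on the fiber of 1-simplices of bridge index 1, the comparison map sends the bridging morphism $\hat{g} \colon a \to u$ of $\mathcal{B}$ corresponding to its transpose $g \colon a \to Ru$ in $\mathcal{A}$ to the bridging morphism of $\mathcal{D}$ corresponding to $Fg \colon Fa \to Su$ in $\mathcal{C}$. This assignment fails to be injective when $F$ identifies distinct transposes and fails to be surjective when some $f \colon c \to Su$ in $\mathcal{C}$ is not of the form $Fg$; cf.\ \cref{rmk:image-of-B}.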
We refer to a bridging $n$-simplex in $P$ as \textbf{bascule}, if it is bascule when identified with a simplex in $\amalg_k (N\mathcal{A}\vert^k N\mathcal{U})_n \subset N\mathcal{B}_n$.
\section{Dwyer pushouts along injective functors}\label{sec:anodyne-dwyer-pushout}
We begin with the case where $F \colon \mathcal{A} \to \mathcal{C}$ is injective-on-objects and faithful, where we are able to strengthen our conclusion and prove that the canonical comparison map is inner anodyne.
\begin{thm}
\label{AnodyneDwyer}
Let
\[
\begin{tikzcd}
\mathcal{A} \arrow[d, "I"', hook]\arrow[r, "F"]\arrow[dr, phantom, "\ulcorner" very near end] &\mathcal{C} \ar[d, "J", hook]\\
\mathcal{B} \arrow[r, "G" swap] &\mathcal{D}
\end{tikzcd}
\]
be a pushout of categories, in which $I$ is a Dwyer map and $F$ is faithful and injective on objects. Then the induced inclusion of simplicial sets \[N\mathcal{C}\aamalg{N\mathcal{A}} N\mathcal{B} \hookrightarrow N\mathcal{D}\] is inner anodyne and in particular a weak categorical equivalence.
\end{thm}
\begin{proof}
Observe that in this case the canonical map $N\mathcal{C}\amalg_{N\mathcal{A}}N\mathcal{B} \hookrightarrow N(\mathcal{C}\amalg_\mathcal{A}\mathcal{B})$ is an inclusion of simplicial sets: since $F$ is injective-on-objects and faithful, the comparison map of the previous section is injective on each fiber over a bridge index.
We will filter this inclusion through a series of subcomplexes
\[
N\mathcal{C}\aamalg{N\mathcal{A}}N\mathcal{B} = K^{0,\infty} \hookrightarrow \cdots \hookrightarrow K^{m-1,\infty} \hookrightarrow K^{m,\infty} \hookrightarrow \cdots \hookrightarrow N(\mathcal{C}\aamalg{\mathcal{A}}\mathcal{B})=N\mathcal{D}\]
and then further filter each step as follows
\[ K^{m-1,\infty} = K^{m,0} \hookrightarrow \cdots \hookrightarrow K^{m,t-1} \hookrightarrow K^{m,t} \hookrightarrow \cdots \hookrightarrow K^{m,\infty}.\]
We will then express each $K^{m,t-1}\hookrightarrow K^{m,t}$ as a pushout of inner horn inclusions.
For $m$ and $t$ non-negative integers, let $K^{m,t} \subset N\mathcal{D}$ be the smallest simplicial subset of $N\mathcal{D}$ containing:
\begin{itemize}
\item $N\mathcal{C} \amalg_{N\mathcal{A}} N\mathcal{B}$,
\item all simplices of dimension strictly less than $m$,
\item all bridging $m$-simplices of bridge index at least $m-t+1$,
\item all bascule $m$-simplices, and
\item all bascule $(m{+}1)$-simplices of bridge index at least $(m+1) - t+1$.
\end{itemize}
Note that if $m < m'$ and $t,t'$ are arbitrary, then $K^{m,t} \subset K^{m',t'}$.
If $t < t'$, then $K^{m,t} \subset K^{m,t'}$.
The simplicial subsets $K^{m,0}$ can be described without explicit reference to bridge index, as the indicated bridge indices are out of range.
It is immediate that $K^{m,m+1} = K^{m,m+2} = K^{m,m+3} = \cdots = K^{m+1,0}$ since integral bridge index is always positive.
We will observe below that we can do better, and the $K^{m,t}$ sequence for fixed $m$ actually stabilizes at $t=m$, rather than $m+1$. We write $K^{m,\infty}$ for this stable value.
To warm up, let us start with low dimensions beginning with $m=0$.
Every $0$-simplex is non-bridging, so in particular is non-bascule, hence $K^{0,0} = N\mathcal{C} \amalg_{N\mathcal{A}} N\mathcal{B}$.
The simplicial subset $K^{0,1}$ adds the bascule 1-simplices of bridge index 1, that is, all bascule 1-simplices.
These $\nu_{u} \colon Su \to u$ are indexed by the objects $u \in \mathcal{U} \subset \mathcal{V}$, that is, those objects which are in $\mathcal{W}$ but not in $\mathcal{A}$. Since $\nu_{u} = G\varepsilon_{u}$, such a 1-simplex is the image under $NG \colon N\mathcal{B} \to N\mathcal{D}$ of the 1-simplex $\varepsilon_u \colon Ru \to u$, hence was already present in $K^{0,0}= N\mathcal{C} \amalg_{N\mathcal{A}} N\mathcal{B}$. Therefore
\[
N\mathcal{C} \aamalg{N\mathcal{A}} N\mathcal{B} = K^{0,0} = K^{0,1} = \cdots = K^{0,\infty} = K^{1,0}.
\]
We now turn to the case of $m=1$.
The simplicial subset $K^{1,1}$ adds 1-simplices of bridge index 1 and bascule 2-simplices of bridge index 2.
But some of these simplices are already in $K^{1,0}$, namely
\begin{itemize}
\item the bascule $1$-simplices,
\item the degenerate bascule 2-simplices of bridge index 2,
\item the non-bascule 1-simplices of bridge index 1 which are in the image of $N\mathcal{B}$, and
\item the non-degenerate bascule 2-simplices of bridge index 2 which are in the image of $N\mathcal{B}$.
\end{itemize}
The last two are in bijection via \cref{cor:bascule-bijection}.
But the simplices that are \emph{actually} added to $K^{1,1}$ are
\begin{itemize}
\item the
\emph{non-bascule} $1$-simplices of bridge index $1$ which are \emph{not} in the image of $N\mathcal{B}$ and
\item the \emph{non-degenerate} bascule 2-simplices of bridge index 2 which are \emph{not} in the image of $N\mathcal{B}$.
\end{itemize}
\cref{cor:bascule-bijection} tells us these two sets are in bijection via the first face map, so it suffices to add the latter set of simplices.
If $\sigma = (c_0 \to c_1 \xrightarrow{\nu_{u_2}} u_2)$ is a bascule 2-simplex of bridge index 2 which is not in the image of $N\mathcal{B}$, then $\sigma\cdot\delta^0$ is a bascule 1-simplex, hence is in $K^{1,0}$, while $\sigma\cdot\delta^2$ is a $1$-simplex in $N\mathcal{C} \subset K^{1,0}$.
We thus can form $K^{1,1}$ via the following pushout
\[
\begin{tikzcd}
\coprod\limits_{\sigma \in B_{1,1}} \Lambda^2_1 \arrow[r, hook] \arrow[d]\arrow[dr, phantom, "\ulcorner" very near end]&\coprod\limits_{\sigma \in B_{1,1}} \Delta^2 \arrow[d]\\
K^{1,0} \arrow[r, hook]& K^{1,1}
\end{tikzcd}
\]
where the coproducts are indexed by the set $B_{1,1}$ of non-degenerate bascule 2-simplices of bridge index 2. This shows that $K^{1,0} \hookrightarrow K^{1,1}$ is inner anodyne.
To build $K^{1,2}$, we are meant to add bascule $2$-simplices of bridge index $1$. But any such bascule 2-simplex $c_0 \xrightarrow{\nu_{u_1}} u_1 \to u_2$ satisfies $c_0 = Su_1 = FRu_1$ and is precisely $NG(Ru_1 \xrightarrow{\varepsilon_{u_1}} u_1 \to u_2)$, hence already in $K^{0,0} \subset K^{1,1}$. Thus $K^{1,1}= K^{1,2}$.
This last argument generalizes: a bascule $(m{+}1)$-simplex $\sigma$ of bridge index $1$
\[
c_0 \xrightarrow{\nu_{u_1}} u_1 \to u_2 \to \cdots \to u_{m+1}
\]
is in the image of $G$, hence is contained in $K^{0,0} \subset K^{m,0} \subset K^{m,m}$.
But these are the only elements that could be added in passing from $K^{m,m}$ to $K^{m,m+1}$, which is why we always have $K^{m,m} = K^{m,m+1}$ for any $m$.
Let's now pass to the general case.
For $1\leq t \leq m < \infty$, let $B_{m,t}$ be the set of non-degenerate bascule $(m{+}1)$-simplices in $N\mathcal{D}$ of bridge index $m+2-t$ which are not in the image of $N\mathcal{B}$.
Then the $(m{+}1{-}t)$th face map is a bijection from the set $B_{m,t}$ to the set of non-degenerate, non-bascule $m$-simplices in $N\mathcal{D}$ of bridge index $m+1-t$ which are not in the image of $N\mathcal{B}$, since $N\mathcal{B} \hookrightarrow N\mathcal{D}$, as an injective map of simplicial sets, respects the bijections of \cref{cor:bascule-bijection}.
These two sets are precisely what is new that must be added to go from $K^{m,t-1}$ to $K^{m,t}$.
We will attach these simplices to $K^{m,t-1}$ via a horn $\Lambda^{m+1}_{m+1-t} \hookrightarrow\Delta^{m+1}$ for each $\sigma \in B_{m,t}$. To do so, we must argue that all of the other faces of such simplices $\sigma$ belong to $K^{m,t-1}$ already. By \cref{BoundarySuspectDwyer}, the $(m{+}2{-}t)$th face map takes elements of $B_{m,t}$ to $m$-simplices of bridge index $m+2-t$ when $t>1$, or to non-bridging simplices when $t=1$; in both cases, the $(m{+}2{-}t)$th face is contained in $K^{m,t-1}$.
By \cref{BoundarySuspectDwyer}, for $\sigma \in B_{m,t}$ and all other $k \neq m+1-t, m+2-t$, the face $\sigma\cdot\delta^k$ is either bascule or non-bridging, and hence is contained in $K^{m,0} \subset K^{m,t-1}$.
We can thus form $K^{m,t}$ as the pushout
\[
\begin{tikzcd}
\coprod\limits_{\sigma \in B_{m,t}} \Lambda^{m+1}_{m+1-t} \arrow[r, hook] \arrow[d]\arrow[dr, phantom, "\ulcorner" very near end]&\coprod\limits_{\sigma\in B_{m,t}} \Delta^{m+1} \arrow[d]\\
K^{m,t-1} \arrow[r, hook]& K^{m,t}.
\end{tikzcd}
\]
Since $1\leq t \leq m$, we have $m \geq m+1-t \geq 1$, so this is an inner anodyne extension.
\end{proof}
\section{Dwyer pushouts along bijective-on-objects and full functors}\label{sec:full}
Suppose $F \colon \mathcal{A} \to \mathcal{C}$ is a bijective-on-objects and full functor and $I \colon \mathcal{A} \hookrightarrow \mathcal{B}$ is a Dwyer map. In this case, the object bijection of \cref{prop:PushoutDwyerHom} restricts to define a bijection
\[
\begin{tikzcd} \textup{ob}\mathcal{B} & \arrow[l, iso] \textup{ob}\mathcal{A}\amalg\textup{ob}\mathcal{V} \arrow[r, iso', "{(F,{\textup{id}})}"] & \textup{ob}\mathcal{C}\amalg\textup{ob}\mathcal{V} \arrow[r, iso'] & \textup{ob}\mathcal{D}
\end{tikzcd}
\]
which is the action on objects of the functor $G \colon \mathcal{B} \to \mathcal{D}$. While pushouts of full functors need not be full in general, in this case, by \cref{rmk:image-of-B}, $G\colon \mathcal{B} \to \mathcal{D}$ is a bijective-on-objects and full functor whose actions on hom-sets are given by
\[
\begin{tikzcd}[column sep=large, row sep=tiny]
\mathcal{B}(Ia,Ia') \arrow[d, phantom, "\rotatebox{90}{$\cong$}"] \arrow[rr, "G", two heads] & & \mathcal{D}(GIa,GIa') \arrow[d, phantom, "\rotatebox{90}{$=$}"] \\
\mathcal{A}(a,a') \arrow[r, "F", two heads] & \mathcal{C}(Fa,Fa') \arrow[r, "J", iso'] &\mathcal{D}(JFa, JFa')
\end{tikzcd}
\]
\[
\begin{tikzcd}[column sep=large] \mathcal{B}(v,v') \arrow[r, "G", iso'] & \mathcal{D}(Gv,Gv') \end{tikzcd}
\]
\[
\begin{tikzcd}[column sep=large, row sep=tiny]
\mathcal{B}(Ia,u)
\arrow[d, phantom, "\rotatebox{90}{$\cong$}"] \arrow[rr, "G", two heads] & & \mathcal{D}(GIa,Gu) \arrow[d, phantom, "\rotatebox{90}{$=$}"] \\
\mathcal{A}(a, Ru) \arrow[r, "F", two heads] & \mathcal{C}(Fa, FRu) = \mathcal{C}(Fa,SGu) \arrow[r, "\nu_{Gu} \circ J(-)", iso'] & \mathcal{D}(JFa,Gu)
\end{tikzcd}
\]
for all $a,a' \in \mathcal{A}$, $v,v' \in \mathcal{V}$, and $u \in \mathcal{U}$. Note these three cases are disjoint and cover all the non-empty hom-sets in $\mathcal{B}$ and $\mathcal{D}$. The functor $G \colon \mathcal{B} \to \mathcal{D}$ need not be faithful, however: it will identify any parallel morphisms from $\mathcal{A}$ that are identified in $\mathcal{C}$ by $F$ and will also identify parallel morphisms from an object of $\mathcal{A}$ to an object of $\mathcal{U}$ whenever the transposes in $\mathcal{A}$ are identified in $\mathcal{C}$ by $F$.
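For instance, if $g, g' \colon a \to Ru$ are distinct parallel morphisms in $\mathcal{A}$ with $Fg = Fg'$ in $\mathcal{C}$, then by \eqref{eq:strict-adjunction-morphism} the distinct bridging morphisms $\hat{g}, \hat{g}' \colon a \to u$ in $\mathcal{B}$ have the same image $G\hat{g} = G\hat{g}' \colon Fa \to u$ in $\mathcal{D}$.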
\begin{thm}
\label{DwyerThmBijObjFull}
Let
\[
\begin{tikzcd}
\mathcal{A} \arrow[d, "I"', hook]\arrow[r, "F"]\arrow[dr, phantom, "\ulcorner" very near end] &\mathcal{C} \ar[d, "J", hook]\\
\mathcal{B} \arrow[r, "G" swap] &\mathcal{D}
\end{tikzcd}
\]
be a pushout of categories, in which $I$ is a Dwyer map and $F$ is bijective on objects and full. Then the induced map of simplicial sets \[N\mathcal{C}\aamalg{N\mathcal{A}} N\mathcal{B} \rightarrow N\mathcal{D}\]
is a weak categorical equivalence.
\end{thm}
Write $P = N\mathcal{C}\amalg_{N\mathcal{A}} N\mathcal{B}$ for the pushout of the nerves, and $N\mathcal{D}$ for the nerve of the pushout. The map $NG \colon N\mathcal{B} \twoheadrightarrow N\mathcal{D}$ factors as a pair of epimorphisms $N\mathcal{B} \twoheadrightarrow P$ and $P \twoheadrightarrow N\mathcal{D}$. The first map only identifies simplices that are in the simplicial subset $N\mathcal{A} \subset N\mathcal{B}$ and which become identified under $NF\colon N\mathcal{A} \twoheadrightarrow N\mathcal{C}$ --- the first case just mentioned --- while the job of the second map is to identify bridging simplices in $N\mathcal{B}$ --- the second case just described. In each simplicial degree $n$, these maps decompose as follows:
\[
\begin{tikzcd}[column sep=0em]
N\mathcal{B}_n \arrow[d,two heads] \arrow[r, phantom, "\cong"] & N\mathcal{A}_n \amalg (N\mathcal{A}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{A}\overset{1}{\vert} N\mathcal{U})_n \amalg N\mathcal{V}_n \arrow[d,two heads] \\
P_n \arrow[d, two heads] \arrow[r, phantom, "\cong"] & N\mathcal{C}_n \amalg (N\mathcal{A}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{A}\overset{1}{\vert} N\mathcal{U})_n \amalg N\mathcal{V}_n \arrow[d, two heads] \\
N\mathcal{D}_n \arrow[start anchor=south, dr] \arrow[r, phantom, "\cong"] & N\mathcal{C}_n \amalg (N\mathcal{C}\overset{n}{\vert} N\mathcal{U})_n \amalg \cdots \amalg (N\mathcal{C}\overset{1}{\vert} N\mathcal{U})_n \amalg N\mathcal{V}_n \arrow[d]\\ &
{\{+,n,\ldots, 1,-\}}
\end{tikzcd}
\]
\begin{ntn}\label{notation bascule}
Write ${B} \mathcal{B}_n \subset N\mathcal{B}_n$ and ${B} \mathcal{D}_n \subset N\mathcal{D}_n$ for the subsets of bascule $n$-simplices and ${B}^r\mathcal{B}_n$ and ${B}^r\mathcal{D}_n$ for the bascule $n$-simplices with bridge index $r$.
\end{ntn}
When $G$ is bijective on objects and full, the epimorphism $NG \colon N\mathcal{B}_n \twoheadrightarrow N\mathcal{D}_n$ restricts to an epimorphism $NG \colon {B}^r \mathcal{B}_n \twoheadrightarrow {B}^r \mathcal{D}_n$ for each $1 \leq r \leq n$.
We define an `anti-filtration'
\[
\begin{tikzcd} P \eqqcolon Q^1 \arrow[r, two heads] \arrow[drrr, two heads] & \cdots \arrow[r, two heads] & Q^{n-1} \arrow[r, two heads] \arrow[dr, two heads]& Q^{n}\arrow[d, two heads] \arrow[r, two heads] & \cdots \arrow[r, two heads] & \textup{colim}_n Q^n \arrow[dll, dashed, iso] \\ & & & N\mathcal{D}
\end{tikzcd}
\]
with the aim of proving that each $Q^{n-1} \twoheadrightarrow Q^n$ is a weak categorical equivalence.
\begin{const}\label{def antifiltration}
For each $n \geq 0$, the simplicial set $Q^n$ is defined by the pushout:
\[
\begin{tikzcd}[column sep=small]
& N\mathcal{A} \arrow[d] \arrow[r] \arrow[dr, phantom, "\ulcorner" very near end] & N \mathcal{C} \arrow[d] \ar[dddr, bend left=20]
\\
\coprod\limits_{k=1}^{n} \coprod\limits_{{B} \mathcal{B}_k} \Delta^k \rar \dar \ar[drr, phantom, "\ulcorner" very near end]
& N\mathcal{B} \rar
& P \dar &\\
\coprod\limits_{k=1}^{n} \coprod\limits_{
{B} \mathcal{D}_k} \Delta^k \ar[rr] \ar[drrr, bend right=20]
&
& Q^n \ar[dr, dashed] \\[-12pt]
& & & N\mathcal{D}
\end{tikzcd} \qquad \rightsquigarrow \qquad
\begin{tikzcd}
\coprod\limits_{{B}\mathcal{B}_n} \Delta^n \arrow[dr, phantom, "\ulcorner" very near end] \arrow[r] \arrow[d, two heads] & Q^{n-1} \arrow[d, two heads] \\ \coprod\limits_{{B}\mathcal{D}_n} \Delta^n\arrow[r]& Q^n
\end{tikzcd}
\]
Since the bascule 1-simplices of $N\mathcal{B}$ and $N\mathcal{D}$ are identical---both are in bijection with the objects in $\mathcal{U}$---$P \cong Q^1$.
\end{const}
\begin{lem}\label{lem:anti-convergence}
By construction, the map $\textup{sk}_{n-1}Q^n \to \textup{sk}_{n-1}N\mathcal{D}$ is an isomorphism.
Consequently $\textup{colim}_nQ^n \cong N\mathcal{D}$, and $Q^{n-1} \twoheadrightarrow Q^{n}$ is bijective on $(n{-}2)$-skeleta.
\end{lem}
\begin{proof}
By construction of the quotient $Q^n$ of $P$, each bascule simplex in $N\mathcal{B}$ of dimension up to and including $n$ is identified with its image in $N\mathcal{D}$. Since, by \cref{cor:bascule-bijection}, each simplex of dimension at most $n-1$ in $N\mathcal{D}$ arises as a face of such a simplex, the map $Q^{n} \to N\mathcal{D}$ is bijective on $(n{-}1)$-skeleta. Thus, the anti-filtration converges to $N\mathcal{D}$. By the 2-of-3 property of isomorphisms it follows that $Q^{n-1} \to Q^n$ is bijective on $(n{-}2)$-skeleta.
\end{proof}
In order to show that $Q^{n-1} \to Q^n$ is a weak categorical equivalence, we factor it as a finite composite
\[
\begin{tikzcd} Q^{n-1} = Q^{n,n+1} \arrow[r, two heads] & Q^{n,n} \arrow[r, two heads] & \cdots \arrow[r, two heads] & Q^{n,1} = Q^n
\end{tikzcd}
\]
using the partition of the bascule simplices according to their bridge index:\[ {B} \mathcal{B}_{n} = {B}^n\mathcal{B}_n \amalg \cdots \amalg {B}^1\mathcal{B}_n. \]
\begin{const}
For each $n \geq 0$ and $r = n, \ldots, 1$, we form the pushout below-left, which may equivalently be computed one stage at a time via the pushout below-right:
\[
\begin{tikzcd}
\coprod\limits_{t \geq r}
{B}^{t} \mathcal{B}_{n} \times \Delta^{n} \rar \dar[two heads] \ar[dr, phantom, "\ulcorner" very near end]
& Q^{n-1} \dar[two heads] \ar[ddr, two heads, bend left=20]
& & \coprod\limits_{{B}^r\mathcal{B}_n} \Delta^n \arrow[r] \arrow[d, two heads] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \arrow[d, two heads]\\
\coprod\limits_{t \geq r } {B}^{t} \mathcal{D}_{n} \times \Delta^{n} \rar \ar[drr, bend right=20]
& Q^{n,r} \ar[dr, dashed, two heads] & & \coprod\limits_{{B}^r\mathcal{D}_n} \Delta^n \arrow[r] & Q^{n,r} \\[-12pt]
& & Q^n
\end{tikzcd}
\]
\end{const}
By construction $Q^{n,1} = Q^n$. In fact:
\begin{lem} The quotient map $Q^{n,2} \twoheadrightarrow Q^{n,1}$ is the identity.
\end{lem}
\begin{proof}
Since the functor $G \colon \mathcal{B} \to \mathcal{D}$ is bijective on objects, the map $NG \colon N\mathcal{B} \to N\mathcal{D}$ is bijective on bascule simplices of bridge index 1. Thus, no quotienting happens in the last step.
\end{proof}
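Concretely, and as an aside not needed for the proof: in the notation for bascule simplices used below, a bascule $n$-simplex of bridge index $1$ in $N\mathcal{B}$ has the form
\[
\begin{tikzcd}
Ru_1 \arrow[r, "\varepsilon"] & u_1 \arrow[r, "h_{2}"] & \cdots \arrow[r, "h_{n}"] & u_n
\end{tikzcd}
\]
and so is determined by the string $u_1 \to \cdots \to u_n$ in $\mathcal{U}$, data which is unchanged by $G$.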
Thus, we assume that $r>1$ henceforth and seek to analyze the quotient maps $Q^{n,r+1} \to Q^{n,r}$. Our aim, finally achieved in \cref{Qnr pushout}, is to express this quotient map as a pushout of a weak categorical equivalence along a monomorphism, allowing us to conclude that $Q^{n,r+1} \to Q^{n,r}$ is itself a weak categorical equivalence.
As a first step towards achieving this, we define simplicial sets that capture the shape of the data in $Q^{n,r+1}$ that will be quotiented. For $r >1$ and $\sigma \in {B}^r\mathcal{D}_n$, consider the set ${B}_\sigma \subset {B}^{r}\mathcal{B}_n$ of \emph{bascule} preimages of $\sigma$, i.e., the fiber of the map ${B}^r\mathcal{B}_n \to {B}^r\mathcal{D}_n$ over $\sigma$.
We next define a simplicial set $H_\sigma$ as the colimit of a star-shaped diagram whose legs are indexed by ${B}_\sigma$.
\begin{const} Let $\sigma \in {B}^{r} \mathcal{D}_n$ and define $H_\sigma$ to be the simplicial set formed as the pushout
\[
\begin{tikzcd}
\coprod\limits_{\tau \in {B}_\sigma} \Lambda^{n}_{r-1} \rar[tail,we] \dar[two heads] \ar[dr, phantom, "\ulcorner" very near end]
&
\coprod\limits_{\tau \in {B}_\sigma} \Delta^{n} \dar[two heads] \ar[ddr, bend left=20, two heads] &[-8pt]\\
\Lambda^{n}_{r-1} \rar[tail,we] \ar[drr, bend right=20, tail, we] & H_\sigma \ar[dr,dashed, two heads, we] \\[-10pt]
& & \Delta^{n}
\end{tikzcd}
\]
which comes equipped with a weak equivalence to $\Delta^{n}$.
\end{const}
\begin{lem}\label{claim about where landing}
If $\sigma \in {B}^{r}\mathcal{D}_n$ is a bascule $n$-simplex of bridge index $r$, then the diagram
\[ \begin{tikzcd}
\coprod\limits_{\tau \in {B}_\sigma} \Delta^{n} \ar[rr] \ar[dd, two heads] & & N\mathcal{B} \dar[two heads] \\
& & Q^{n,r+1} \dar[two heads] \\
H_\sigma \rar{\sim} \ar[urr,dashed, bend left=10] & \Delta^{n} \rar{\sigma} & N\mathcal{D}
\end{tikzcd} \]
admits a unique lift through $Q^{n,r+1}$.
\end{lem}
\begin{proof}
Suppose $\tau_1$ and $\tau_2$ are two elements of ${B}_\sigma$.
We want to show that $\tau_1\cdot\delta^k$ and $\tau_2\cdot\delta^k$ map to the same element in $Q^{n,r+1}$ for $k\neq r-1$.
For $k\neq r-1, r$, the faces $\sigma\cdot\delta^k$, $\tau_1\cdot\delta^k$, and $\tau_2\cdot\delta^k$ are bascule $(n{-}1)$-simplices (of bridge index $r-1$ or $r$) by \cref{BoundarySuspectDwyer}, with $\tau_1\cdot\delta^k, \tau_2\cdot\delta^k \in {B}_{\sigma\cdot\delta^k}$.
This implies that $\tau_1\cdot\delta^k$ and $\tau_2\cdot\delta^k$ are identified in $Q^{n-1}$.
The (possibly non-bascule) face $\sigma\cdot\delta^r$ is non-bridging if $r=n$ and otherwise has bridge index $r$. In the former case, $\sigma\cdot\delta^r$ has a unique lift to $P$, so we suppose $r < n$. In this case, the bascule lift $\sigma'$ of $\sigma\cdot\delta^r$ has bridge index $r+1$ and satisfies $\sigma'\cdot\delta^{r} = \sigma\cdot\delta^r$.
For $\tau = \tau_1$ or $\tau_2$, we can likewise form the bascule lifts $\tau'$ of $\tau\cdot\delta^{r}$, which again have bridge index $r+1$ and satisfy $\tau'\cdot\delta^{r} = \tau\cdot\delta^{r}$. These bascule lifts can be described explicitly:
for $\sigma\cdot\delta^r$, the bascule lift (see \cref{cor:bascule-bijection}) takes the form
\[
\begin{tikzcd}
a_0 \arrow[r, "f_1"] & \cdots \arrow[r, "f_{r-1}"] & a_{r-1} \arrow[r, "{Sh_{r+1}}"] & Su_{r+1} \arrow[r, "\nu_{u_{r+1}}"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \arrow[r, "h_{n}"] & u_{n}
\end{tikzcd}
\]
where $a_{r-1} = Ru_{r}$. In other words, the edge from the $(r{-}1)$st vertex to the $r$th vertex is obtained by applying the functor $S$ to the edge from the $r$th vertex to the $(r{+}1)$st vertex of $\sigma$. The bascule lifts of $\tau_1$ and $\tau_2$ may be described similarly using the functor $R$. Thus, by \cref{lem:bascule-preservation}, the bascule lifts $\tau_1'$ and $\tau_2'$ are bascule preimages of $\sigma'$. Hence $\tau_1',\tau_2' \in {B}_{\sigma'}$ are identified in $Q^{n,r+1}$, so their faces must be identified as well.
\end{proof}
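For orientation, here is a hypothetical low-dimensional instance of this description, not needed for the argument: specializing the displayed lift to $n = 3$ and $r = 2$, the face $\sigma\cdot\delta^2$ of a bascule $3$-simplex $\sigma$ of bridge index $2$ has bascule lift
\[
\begin{tikzcd}
a_0 \arrow[r, "f_1"] & a_{1} \arrow[r, "{Sh_{3}}"] & Su_{3} \arrow[r, "\nu_{u_{3}}"] & u_{3}
\end{tikzcd}
\]
where $a_1 = Ru_2$, a bascule simplex of bridge index $3 = r+1$.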
Using the lift constructed in \cref{claim about where landing}, we have a commutative square whose left-hand leg is a weak categorical equivalence:
\[
\begin{tikzcd} H_\sigma \arrow[r, dashed] \arrow[d, two heads, we'] & Q^{n, r+1} \arrow[d, two heads]\\ \Delta^n \arrow[r, "\sigma"'] & N\mathcal{D}
\end{tikzcd}
\]
However, the map $H_\sigma \to Q^{n,r+1}$ is not necessarily injective, so this weak categorical equivalence would not necessarily be preserved under pushout. Thus, in two steps, we will introduce further quotients of $H_\sigma$ in order to obtain a simplicial set that maps injectively into $Q^{n,r+1}$.
The first replacement of $H_{\sigma}$, the simplicial set $U_{\sigma}$ of \cref{Uconstruction}, takes care of the fact that when $\sigma$ is degenerate, some of the $n$-simplices of $H_{\sigma}$ arise from degenerate simplices of $N\mathcal{B}$. In the second step, we also take into account that some faces of the $n$-simplices of $U_{\sigma}$ might be identified in $N\mathcal{D}$. We will start with some preparations for these constructions.
Given some $\tau \in {B}_\sigma \subseteq N\mathcal{B}_{n}$, we use the Eilenberg--Zilber property to produce a unique pair $(s_\tau,y_\tau)$ with $s_\tau \colon \Delta^{n} \twoheadrightarrow \Delta^{m_\tau}$ a degeneracy operator and $y_\tau$ a non-degenerate $m_\tau$-simplex of $N\mathcal{B}$, such that $y_\tau \cdot s_\tau = \tau$.
Likewise, we can form $(s_\sigma \colon \Delta^{n} \twoheadrightarrow \Delta^{m_\sigma},y_\sigma \in N\mathcal{D}_{m_\sigma})$ with $y_\sigma$ non-degenerate in $N\mathcal{D}$ and $y_\sigma \cdot s_\sigma = \sigma$.
Since $\sigma = NG(\tau)$ and $\tau = y_\tau \cdot s_\tau$, the Eilenberg--Zilber decomposition of $NG(y_\tau)$ supplies, for each $\tau$, a factorization of $s_\sigma$:
\[ \begin{tikzcd}[column sep=small]
\Delta^{n} \ar[rr, two heads, "s_\sigma"] \ar[dr, two heads, "s_\tau"'] & & \Delta^{m_\sigma} \\
& \Delta^{m_\tau} \ar[ur,two heads, "s_\tau'"']
\end{tikzcd} \]
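For instance, with a purely illustrative choice of operators: if $s_\tau = s^1 \colon \Delta^3 \twoheadrightarrow \Delta^2$ while $\sigma$ degenerates further, with $m_\sigma = 1$, then the factorization reads
\[
\begin{tikzcd}[column sep=small]
\Delta^{3} \ar[rr, two heads, "s_\sigma"] \ar[dr, two heads, "s^1"'] & & \Delta^{1} \\
& \Delta^{2} \ar[ur,two heads, "s^1"']
\end{tikzcd}
\]
where $s_\sigma = s^1 s^1$ collapses the vertices $1,2,3$ of $\Delta^3$.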
Further, $m_\sigma$ is the minimal value of $m_{\tau}$ amongst all bascule preimages $\tau$ of $\sigma$: the factorizations above show $m_\sigma \leq m_\tau$ for every $\tau$, and the minimum is attained by some $\tau_0 \in {B}_\sigma$ with $s'_{\tau_0} = \textup{id}$. Note that $m_\sigma < n$ if and only if $\sigma$ is degenerate.
\begin{const}\label{Uconstruction}
We create a new star-shaped diagram indexed over ${B}_\sigma$, whose leg at $\tau$ is of the form
\[ \begin{tikzcd}[column sep=small]
\Lambda_{r-1}^{n} \rar & \Delta^{n} \rar{s_\tau} & \Delta^{m_\tau}
\end{tikzcd} \]
and define $U_\sigma$ to be the pushout:
\[
\begin{tikzcd}
\coprod\limits_{\tau \in {B}_\sigma} \Lambda^{n}_{r-1} \rar \dar[two heads] \ar[dr, phantom, "\ulcorner" very near end]
&
\coprod\limits_{\tau \in {B}_\sigma} \Delta^{m_\tau} \dar[two heads] &[-8pt]\\
\Lambda^{n}_{r-1} \rar & U_\sigma \\
\end{tikzcd}
\]
The simplicial sets $H_{\sigma}$ and $U_\sigma$ fit into the following diagram:
\begin{equation}
\label{large_diagram_Usigma}
\begin{tikzcd}
\coprod\limits_{\tau \in {B}_\sigma} \Lambda^{n}_{r-1} \rar[tail,we] \dar \ar[dr, phantom, "\ulcorner" very near end]
&
\coprod\limits_{\tau \in {B}_\sigma} \Delta^{n} \dar \rar[two heads, "s_\tau"]
\ar[dr, phantom, "\ulcorner" very near end]
&
\coprod\limits_{\tau \in {B}_\sigma} \Delta^{m_\tau} \dar \arrow[r] & N\mathcal{B} \arrow[d, two heads]
\\
\Lambda^{n}_{r-1} \rar[tail,we] & H_\sigma \dar["\rotatebox{90}{$\sim$}"] \rar[ two heads]
&
U_\sigma \dar \arrow[r] & Q^{n,r+1} \arrow[d]
\\
& \Delta^{n} \rar[two heads, "s_\sigma"']& \Delta^{m_\sigma} \arrow[r, "y_\sigma"'] & N\mathcal{D}
\end{tikzcd} \end{equation}
\end{const}
Since $H_\sigma \twoheadrightarrow U_\sigma$ is a pushout of the degeneracy maps $s_\tau$ associated to $\tau \in {B}_\sigma$, the canonical map $H_\sigma \to Q^{n,r+1}$ of \cref{claim about where landing} factors uniquely through $U_\sigma$ as displayed above.
\begin{lem}\label{lem pushout of H to U}
For each $\sigma \in {B}^r\mathcal{D}_n$ a bascule $n$-simplex of bridge index $r$, the square
\[ \begin{tikzcd}
H_\sigma \dar["\rotatebox{90}{$\sim$}"] \rar
&
U_\sigma \dar
\\
\Delta^{n} \rar{s_\sigma}& \Delta^{m_\sigma}
\end{tikzcd} \]
is a pushout.
\end{lem}
\begin{proof}
By \eqref{large_diagram_Usigma}, it suffices to show that the interior commutative square below is a pushout, so we consider a cone with nadir $Z$ under the span:
\[ \begin{tikzcd}
\coprod\limits_{\tau} \Delta^{n} \dar["\nabla"'] \rar{ s_\tau }
\ar[dr, phantom, "\ulcorner" very near end]
&
\coprod\limits_{\tau} \Delta^{m_\tau} \dar{s_\tau'} \arrow[ddr, bend left, "z_\tau"]
\\
\Delta^{n} \rar[swap]{s_\sigma} \arrow[drr, bend right, "z"'] & \Delta^{m_\sigma} \arrow[dr, "z_{\tau_0}", dashed] \\
& & Z
\end{tikzcd} \]
We know that there is some $\tau_0$ for which $s_{\tau_0}' = \textup{id}$. Thus, if this cone factors through the claimed pushout it must factor via the map $z_{\tau_0}$. Since $s_\sigma = s_{\tau_0}$, the bottom triangle commutes. It remains to argue that the rightmost triangle commutes, which we may show one component at a time.
To verify that $z_\tau = z_{\tau_0} \cdot s_\tau'$ it suffices to verify this commutativity after precomposing with the epimorphism $s_\tau$. Now we have
\[ z_\tau \cdot s_\tau = z = z_{\tau_0} \cdot s_{\tau_0} = z_{\tau_0} \cdot s_\sigma = z_{\tau_0} \cdot s_\tau' \cdot s_\tau,\] as desired.
\end{proof}
\begin{lem}\label{lem:Usigma=Usigma'}
Let $\sigma \in {B}^{r} \mathcal{D}_n$.
If $\sigma$ is degenerate, then we can
exhibit $U_\sigma$ as the colimit of the smaller star-shaped diagram
\begin{equation}\label{eq:new-U-pushout}
\begin{tikzcd}
\coprod\limits_{\mathrm{nd}{B}_\sigma} \Lambda_{r-1}^{n} \rar[tail,we] \dar \ar[dr, phantom, "\ulcorner" very near end] & \coprod\limits_{\mathrm{nd}{B}_\sigma} \Delta^{n} \dar
\\
\Lambda^{n}_{r-1} \rar[tail,we] \arrow[d, two heads, "s_\sigma"'] \ar[dr, phantom, "\ulcorner" very near end]
&
\bullet
\dar[two heads] \\
\Delta^{m_\sigma} \rar[tail,we] & U_\sigma
\end{tikzcd}
\end{equation}
where all but one of the legs are indexed by the set $\mathrm{nd}{B}_\sigma$ of non-degenerate elements of ${B}_\sigma$, the final leg being $\Lambda^{n}_{r-1} \hookrightarrow \Delta^n \xrightarrow{s_\sigma} \Delta^{m_\sigma}$.
\end{lem}
\begin{proof}
We will exhibit a natural isomorphism between cones under the diagram \eqref{eq:new-U-pushout} and cones under the defining pushout of \cref{Uconstruction}.
We can choose and fix a simplex $\rho$ of dimension $m_{\rho}=m_{\sigma}$ in the defining diagram for $U_{\sigma}$ (and then in particular we necessarily have $s_{\rho}=s_{\sigma}$). Sending the last leg to $\rho$, we can view the smaller diagram as a subdiagram of the defining diagram for $U_{\sigma}$. Every cone under the latter gives in particular a cone under the former by restriction. Now assume we are given a cone under the smaller diagram. We will show that it extends uniquely to a cone under the larger diagram, thus proving the claim.
Let $\tau$ be one of the simplices in the definition of $U_\sigma$ with $m_{\tau}<n$ and $\tau\neq \rho$. Since $s_\tau\colon \Delta^{n}\to \Delta^{m_{\tau}}$ is not the identity, it has sections $\Delta^{m_{\tau}}\to \Delta^{n}$, at least one of which has image contained in $\Lambda^{n}_{r-1}$: any section omits at least one vertex of $\Delta^{n}$, and it may be chosen to omit some vertex $j \neq r-1$, so that it factors through the $j$th face.
Choose and fix one such section $d_{\tau}$. Define the map on the summand $\Delta^{m_{\tau}}$ as
\[
\Delta^{m_{\tau}} \xrightarrow{d_{\tau}} \Lambda^{n}_{r-1} \to \Delta^{n} \xrightarrow{s_{\rho}}\Delta^{m_{\sigma}}.
\]
We observe that the composite
\[
\Lambda^{n}_{r-1} \to \Delta^{n} \xrightarrow{s_{\tau}}\Delta^{m_{\tau}} \xrightarrow{d_{\tau}} \Lambda^{n}_{r-1} \to \Delta^{n} \xrightarrow{s_{\rho}} \Delta^{m_{\sigma}}
\]
equals
\[
\Lambda^{n}_{r-1} \to \Delta^{n} \xrightarrow{s_{\rho}} \Delta^{m_{\sigma}}
\]
since $s_{\rho}=s_{\sigma}=s'_{\tau}s_{\tau}$ and $s_{\tau}d_{\tau}=\textup{id}$ give
\[
s_{\rho}\, d_{\tau}\, s_{\tau} = s'_{\tau}\, s_{\tau}\, d_{\tau}\, s_{\tau} = s'_{\tau}\, s_{\tau} = s_{\rho}.
\]
Thus, the newly defined maps indeed extend the cone under the small diagram into a cone under the large one. As for uniqueness, we observe that the morphism
\[
\Lambda^{n}_{r-1} \to \Delta^{n} \xrightarrow{s_{\tau}}\Delta^{m_{\tau}}
\]
is an epimorphism since it has a section, implying the desired uniqueness.
In total, this proves the bijection between the cones under the respective diagrams, yielding the claim.
\end{proof}
When $\sigma$ is degenerate, the 2-of-3 property implies that the canonical retraction of the weak equivalence $\begin{tikzcd}[cramped, sep=small]\Delta^{m_\sigma} \arrow[r, we] & U_\sigma \end{tikzcd}$ of \eqref{eq:new-U-pushout} is again an equivalence; when $\sigma$ is non-degenerate, $U_\sigma \cong H_\sigma$ and the map in question is the weak equivalence $H_\sigma \twoheadrightarrow \Delta^{n}$:
\begin{cor}\label{lem U vs Delta equiv}
Let $\sigma \in {B}^{r} \mathcal{D}_n$. The map $U_\sigma \twoheadrightarrow \Delta^{m_\sigma}$ is an equivalence. \qed
\end{cor}
We now have a pushout
\[
\begin{tikzcd} H_\sigma \arrow[r, two heads] \arrow[d, two heads, we'] \arrow[dr, phantom, "\ulcorner" very near end] & U_\sigma \arrow[d, two heads, we'] \arrow[r] & Q^{n,r+1} \arrow[d, two heads]\\ \Delta^n \arrow[r, two heads] & \Delta^{m_\sigma} \arrow[r, "y_\sigma"'] & N\mathcal{D}
\end{tikzcd}
\]
Unfortunately, the map $U_\sigma \to Q^{n,r+1}$ may still fail to be injective. Over the next sequence of lemmas, we will identify the image of the map
\[ \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_\sigma \to Q^{n,r+1}.\]
\begin{const}\label{def core}
For $\sigma \in {B}^{r} \mathcal{D}_n$,
let $\core(\sigma) \subseteq U_\sigma$ be the image
\[
\begin{tikzcd}[column sep=large, row sep=small]
\Lambda^{n}_{r-1} \rar[tail] \ar[dr, two heads] & H_\sigma \rar & U_\sigma \\
& \core(\sigma) \ar[ur, tail]
\end{tikzcd} \] of the canonical map $\Lambda_{r-1}^{n} \to U_\sigma$.
\end{const}
\begin{lem}\label{lem identification of core}
Let $\sigma \in {B}^{r} \mathcal{D}_n$.
If $\sigma$ is non-degenerate, then $\Lambda^{n}_{r-1} \to \core(\sigma)$ is an isomorphism. If $\sigma$ is degenerate, then the composite
\[
\core(\sigma) \rightarrowtail U_\sigma \twoheadrightarrow \Delta^{m_\sigma}
\]
is an isomorphism.
\end{lem}
\begin{proof}
If $\sigma$ is non-degenerate, then every element in ${B}_\sigma$ is also non-degenerate.
This implies $s_\tau = \textup{id}_{[n]}$ for all $\tau \in {B}_\sigma$, so $H_\sigma \to U_\sigma$ is an isomorphism, and the monomorphism $\Lambda^{n}_{r-1} \to H_\sigma$ identifies $\Lambda^n_{r-1}$ with its image.
If $\sigma$ is degenerate, the map $\Lambda^{n}_{r-1} \to H_\sigma \to U_\sigma$ is identified with the diagonal composite in the lower square of \eqref{eq:new-U-pushout}. That square then displays the image factorization.
\end{proof}
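As a small hypothetical illustration, with $n = 2$ and $r = 2$: the two cases of the lemma read
\[
\core(\sigma) \cong \Lambda^{2}_{1} \quad (\sigma \text{ non-degenerate}),
\qquad
\core(\sigma) \cong \Delta^{1} \quad (\sigma = \tilde\sigma\cdot s^{0} \text{ degenerate, } m_\sigma = 1),
\]
where $\Lambda^{2}_{1}$ is the horn spanned by the faces $\delta^0$ and $\delta^2$.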
\begin{const}
Define a simplicial set $I^{n,r}$ via the image factorization of the composite map $\coprod_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma)\to N\mathcal{D}$ given by
\[ \begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \rar[tail] \arrow[dd, two heads, dashed] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_{\sigma} \rar \arrow[d, two heads, we'] & Q^{n,r+1} \arrow[dd, two heads] \\ & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^{m_\sigma} \arrow[dr, ] \\
I^{n,r} \ar[rr, tail, dashed] & & N\mathcal{D}
\end{tikzcd} \]
where the sums are over all bascule $n$-simplices $\sigma$ of bridge index $r$, for fixed $r > 1$.
\end{const}
This gives the diagram
\[ \begin{tikzcd}
I^{n,r} \dar[equals] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \dar[equals] \lar[two heads] \rar[tail] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_{\sigma}\arrow[d, we] & I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \arrow[d, wedashed] \\
I^{n,r} & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \lar[two heads] \rar[tail] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^{m_{\sigma}} & I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma})
\end{tikzcd} \]
Taking pushouts of each row yields the map above-right, which is an equivalence since both the top and bottom pushouts are homotopy pushouts. Note this map is the pushout of the coproduct of the weak equivalences $U_\sigma \to \Delta^{m_\sigma}$, as displayed below-left. We will demonstrate in \cref{prop lift exists} that the map $\coprod_\sigma \core(\sigma) \to \coprod_\sigma U_{\sigma} \to Q^{n,r+1}$ factors through $I^{n,r}$,
allowing us to form the square
\[ \begin{tikzcd}
\coprod U_{\sigma} \arrow[d, we'] \rar \arrow[dr, phantom, "\ulcorner" very near end] & I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \rar[dashed] \arrow[d, we] & Q^{n,r+1} \dar \\
\coprod \Delta^{m_{\sigma}} \rar & I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma}) \rar & N\mathcal{D}
\end{tikzcd}\]
Then, we will show in \cref{Qnr pushout} that, when we form the pushout in the right-hand square, we recover the simplicial set $Q^{n,r}$.
\[ \begin{tikzcd}
\coprod_\sigma U_{\sigma} \arrow[d, we'] \arrow[dr, phantom, "\ulcorner" very near end] \rar & I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \rar[dashed] \arrow[d, we] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \dar \\
\coprod_\sigma \Delta^{m_{\sigma}} \rar & I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma}) \rar & Q^{n,r}.
\end{tikzcd} \]
Finally, we will demonstrate in \cref{prop injectivity} that the map $I^{n,r} \amalg_{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma U_\sigma) \to Q^{n,r+1}$ is injective. It follows that $Q^{n,r+1} \to Q^{n,r}$ is a weak equivalence, as desired.
\begin{prop}\label{prop lift exists}
The dashed lift exists in the following diagram:
\[ \begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \rar \dar[two heads] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_{\sigma} \rar & Q^{n,r+1} \dar{\pi} \\
I^{n,r} \ar[rr,tail] \ar[urr,dashed] & & N\mathcal{D}
\end{tikzcd} \]
\end{prop}
\begin{proof}
The quotient map $\coprod_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \twoheadrightarrow I^{n,r}$ may:
\begin{itemize}
\item identify simplices in $\core(\sigma)$ for a fixed $\sigma$, and may also
\item identify simplices in $\core(\sigma)$ with simplices in $\core(\sigma')$ for $\sigma \neq \sigma'$.
\end{itemize}
We argue that both sorts of identifications in $N\mathcal{D}$ lift to $Q^{n,r+1}$, and in fact we treat both cases simultaneously by considering a pair of simplices $\sigma, \sigma' \in {B}^r\mathcal{D}_n$ which may or may not be distinct. Since $\textup{sk}_{n-2} Q^{n,r+1} \to \textup{sk}_{n-2} N\mathcal{D}$ is an isomorphism, the only quotienting that occurs is in higher dimensions. Since each $\core(\sigma)$ and thus $I^{n,r}$ is $(n{-}1)$-skeletal, it suffices to consider identifications of $(n{-}1)$-simplices, at least one of which is non-degenerate. We will argue that any $(n{-}1)$-simplices $\gamma \in \core(\sigma)$ and $\gamma' \in \core(\sigma')$ that are identified in $N\mathcal{D}$ are also identified in $Q^{n,r+1}$.
Write $\bar \gamma, \bar\gamma' \in Q^{n,r+1}$ for their images.
We introduce some further notation to better describe these images. We write the bascule $n$-simplex $\sigma$ as
\[
\begin{tikzcd}
a_0 \arrow[r, "f_1"] & \cdots \arrow[r, "f_{r-1}"] & a_{r-1} \arrow[r, "\varepsilon"] & u_r \arrow[r, "h_{r+1}"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \arrow[r, "h_{n}"] & u_n
\end{tikzcd}
\]
and let $\tau \in N\mathcal{B}$ be a fixed bascule preimage of $\sigma$
\[
\begin{tikzcd}
a_0 \arrow[r, "p_1"] & \cdots \arrow[r, "p_{r-1}"] & a_{r-1} \arrow[r, "\varepsilon"] & u_r \arrow[r, "h_{r+1}"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \arrow[r, "h_{n}"] & u_n
\end{tikzcd}
\]
that is ``maximally degenerate,'' meaning it is
chosen with the property that $p_k$ is an identity if and only if $F(p_k) = f_k$ is an identity. Note there is a map $\mu$ as displayed below
\[ \begin{tikzcd}
& & \Delta^n \dar \rar{\tau} & N\mathcal{B} \dar{\chi}
\\
\Delta^{n-1} \arrow[r, "\gamma"] \arrow[rrr, bend right=10, "{\bar\gamma}"'] & \core(\sigma) \rar \ar[ur, dashed,"\mu"] & U_{\sigma} \rar & Q^{n,r+1} \dar{\pi} \\ &
& & N\mathcal{D}
\end{tikzcd} \]
that lands in $\Lambda^n_{r-1}\subset \Delta^n$. The map $\mu$ is either the inclusion $\Lambda^n_{r-1} \to \Delta^n$ if $\sigma$ is non-degenerate, or is a section of the degeneracy operator involved in $\sigma$. Thus we may use our lift $\tau$ of $\sigma$ to describe the image of $\core(\sigma)$ in $Q^{n,r+1}$: in particular $\bar\gamma = \chi\tau\mu\gamma$. Define $\tau'$ and $\mu'$ analogously for $\sigma'$ so that $\bar\gamma' = \chi\tau'\mu'\gamma'$.
Our goal is to show that if $\pi \bar \gamma = \pi \bar \gamma'$, then $\bar \gamma = \bar \gamma'$.
As noted above, we may assume without loss of generality that $\gamma$ is a non-degenerate $(n{-}1)$-simplex, which presents two possibilities for $\sigma$:
\begin{enumerate}[label=(\roman*), ref=\roman*]
\item If $\sigma$ is non-degenerate, then $\core(\sigma) = \Lambda^n_{r-1}$ and $\mu\gamma = \delta^{k} \colon \Delta^{n-1} \to \Delta^n$ for some $k \neq r-1$.
In this case, $\bar \gamma = \chi \tau \delta^{k}$.\label{option one a}
\item If $\sigma$ is degenerate, then the only way for $\core(\sigma) = \Delta^{m_{\sigma}}$ to have a non-degenerate $(n{-}1)$-simplex is if $m_{\sigma} = n-1$. That is, $\sigma = \tilde \sigma \cdot s^{\ell}$ for $\ell \neq r-1$ and $\tilde \sigma$ non-degenerate.
In this case, $\mu$ is given by $\delta^{\ell}$, so $\bar \gamma = \chi \tau \delta^{\ell}$.\label{option one b}
\end{enumerate}
Thus in either case $\bar \gamma = \chi \tau \delta^k$ for $k\neq r-1$.
Similarly, if $\gamma'$ is non-degenerate, then $\bar \gamma' = \chi \tau' \delta^{k'}$ for some $k' \neq r-1$.
If both $\gamma$ and $\gamma'$ are non-degenerate, then we have $\bar \gamma = \bar \gamma'$ by \cref{lem:non-degenerate-core-quotient} below.
If $\gamma'$ is degenerate, then we will show that $\bar \gamma = \bar \gamma'$ in \cref{lem:mixed-core-quotient} below.
This uses the following description of $\bar \gamma'$.
\begin{enumerate}[start=3, label=(\roman*), ref=\roman*]
\item If $\gamma'$ is degenerate, then $\mu' \gamma' \colon \Delta^{n-1} \to \Delta^n$ is of the form
\[ \begin{tikzcd}
\Delta^{n-1} \arrow[r, "s", two heads] & \Delta^\ell \arrow[r, "d", tail] & \Delta^{n-1} \rar{\delta^{k'}} & \Delta^n
\end{tikzcd} \]
where $\ell < n-1$ and $k' \neq r-1$ and $s\neq \textup{id}$. \label{option two}
\end{enumerate}
Thus after we have proved \cref{lem:non-degenerate-core-quotient,lem:mixed-core-quotient} below, the current proposition will be established.
\end{proof}
\begin{lem}\label{lem:non-degenerate-core-quotient}
Using the notation from the proof of \cref{prop lift exists}, suppose that $\gamma \in \core(\sigma)$ and $\gamma' \in \core(\sigma')$ are non-degenerate $(n{-}1)$-simplices. If $\pi\bar \gamma = \pi \bar \gamma'$,
then $\bar \gamma = \bar \gamma' \in Q^{n,r+1}$.
\end{lem}
\begin{proof}
As we saw in the proof of \cref{prop lift exists}, the non-degeneracy assumption implies $\pi \bar \gamma = \pi \chi \tau \delta^k = \sigma \delta^k$ and $\pi \bar \gamma' = \pi \chi \tau' \delta^{k'} = \sigma' \delta^{k'}$ with $k,k' \neq r-1$.
If the simplices $\tau \delta^k$, $\tau' \delta^{k'}$, and $\sigma \delta^k = \sigma' \delta^{k'}$ are non-bridging, then $\tau \delta^k$ and $\tau' \delta^{k'}$ are identified in $P$, hence in $Q^{n,r+1}$.
Below we assume these simplices are bridging simplices.
Suppose that both $\tau \delta^{k}$ and $\tau' \delta^{k'}$ are bascule.
Then their common image $\sigma \delta^{k} = \sigma'\delta^{k'}$ is bascule as well, and since we have already identified all bascule preimages of $(n{-}1)$-simplices by the stage $Q^{n-1}$, we know that the bascule preimages $\tau \delta^{k}$ and $\tau' \delta^{k'}$ are identified in $Q^{n,r+1}$.
We must still consider the case when one or both of $\tau \delta^k$ or $\tau' \delta^{k'}$ are non-bascule bridging simplices. Without loss of generality, suppose that $\tau \delta^k$ is non-bascule.
By \cref{BoundarySuspectDwyer} this means that $k = r$; since $\tau\delta^r$ is a bridging simplex, it has bridge index $r$.
Since $\tau \delta^r = \tau \delta^k$ and $\tau' \delta^{k'}$ both map to the same element in $N\mathcal{D}$, they have the same bridge index.
Write $\tilde \tau$ and $\tilde \tau'$ for the bascule lifts of $\tau \delta^r = \tau \delta^k$ and $\tau' \delta^{k'}$.
The bascule lift $\tilde\tau$ of $\tau \delta^r$ is pictured below, but note that $\tilde \tau'$ may not be of the same form if $k' > r$:
\[
\begin{tikzcd}
a_0 \arrow[r, "p_1"] & \cdots \arrow[r, "p_{r-1}"] & a_{r-1} \rar{Rh_{r+1}} & Ru_{r+1} \arrow[r, "\varepsilon"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \arrow[r, "h_{n}"] & u_n
\end{tikzcd}
\]
We know that $NG (\tau \delta^k) = NG(\tau' \delta^{k'})$, hence $NG$ identifies their bascule lifts $NG(\tilde \tau) = NG(\tilde \tau')$.
Since $\tilde \tau$ and $\tilde \tau'$ are bascule $n$-simplices of bridge index $r+1$, they are identified in $Q^{n,r+1}$. Thus their faces $\tilde \tau \delta^r = \tau \delta^k$ and $\tilde \tau' \delta^r = \tau' \delta^{k'}$ are identified there as well.
\end{proof}
\begin{lem}\label{lem:mixed-core-quotient}
Using the notation from the proof of \cref{prop lift exists}, suppose that $\gamma \in \core(\sigma)$ is non-degenerate, $\gamma' \in \core(\sigma')$ is degenerate, and $\pi \bar \gamma = \pi \bar \gamma'$.
Then $\bar \gamma = \bar \gamma' \in Q^{n,r+1}$.
\end{lem}
\begin{proof}
By \eqref{option one a} and \eqref{option one b} from the proof of \cref{prop lift exists}, we have $\bar \gamma = \chi \tau \delta^{k}$ and $k \neq r-1$.
By \eqref{option two} we have $\bar \gamma' = \chi \tau' \delta^{k'} d s$, where $s$ is not the identity and $k' \neq r-1$.
Our assumption is that
\begin{equation}\label{eq:mixed-hypothesis}
\sigma \delta^k = \sigma' \delta^{k'} d s
\end{equation}
which is a degenerate simplex in $N\mathcal{D}$.
We first argue that we can rule out the case where $\sigma = \tilde \sigma s^{\ell}$ is degenerate as in \eqref{option one b}. In this case, we have $k = \ell$ and $\sigma \delta^{\ell} = \tilde \sigma s^{\ell} \delta^{\ell} = \tilde \sigma$. Thus, the left-hand side of \eqref{eq:mixed-hypothesis} is non-degenerate, while the right-hand side is degenerate, a contradiction.
Hence we must be in the situation of \eqref{option one a}, meaning that $\sigma$ is non-degenerate. For its face $\sigma \delta^{k}$ to be degenerate, composing at the $k$th vertex must produce an identity, which can only happen if $k \neq r-1,r$. Thus one of the following holds:
\begin{align*}
f_{k+1} f_{k} &= \textup{id} && \text{and} && a_{k-1}=a_{k+1} && \text{if} && 1 \leq k < r-1, \\
h_{k+1} h_{k} &= \textup{id} && \text{and} && u_{k-1}=u_{k+1} && \text{if} && r+1 \leq k < n.
\end{align*}
Since $\sigma$ is non-degenerate, this is the only identity that appears in this simplex $\sigma \delta^k$.
Also note, since $k \neq r-1, r$, Lemma \ref{BoundarySuspectDwyer} tells us that $\sigma \delta^k$ is bascule.
The element $\tau \delta^k$ is a bascule preimage of the bascule $(n{-}1)$-simplex $\sigma \delta^k$.
Using this information, we will define an $(n{-}2)$-simplex $\tilde\tau$ of $N\mathcal{B}$ so that $\pi(\tilde\tau s^{k-1}) = \pi(\tau\delta^k)$. Define $\tilde \tau$ to be one of the following two $(n{-}2)$-simplices in $N\mathcal{B}$, depending on whether $k < r-1$ or $k > r$:\footnote{In the first case, $\tilde \tau s^{k-1}$ may be distinct from $\tau \delta^{k}$, since we don't know that $p_{k+1} p_{k} = \textup{id}$ holds, only that it is in the $F$-preimage of the identity.
But in the second case, $h_{k+1} h_{k} = \textup{id}$, so $\tilde \tau s^{k-1} = \tau \delta^k$ already holds in $N\mathcal{B}$.}
\[
\begin{tikzcd}[column sep=1.9em]
a_0 \arrow[r, "p_1"] & \cdots \rar{p_{k-1}} & a_{k+1} \rar{p_{k+2}} & \cdots \arrow[r, "p_{r-1}"] & a_{r-1} \arrow[r, "\varepsilon"] & u_r \arrow[r, "h_{r+1}"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \arrow[r, "h_{n}"] & u_n \\
a_0 \arrow[r, "p_1"] & \cdots \arrow[r, "p_{r-1}"] & a_{r-1} \arrow[r, "\varepsilon"] & u_r \arrow[r, "h_{r+1}"] & u_{r+1} \arrow[r, "h_{r+2}"] & \cdots \rar{h_{k-1}} & u_{k+1} \rar{h_{k+2}} & \cdots \arrow[r, "h_{n}"] & u_n
\end{tikzcd}
\]
Let $\tilde\sigma$ be the image of $\tilde\tau$ in $N\mathcal{D}$. We have
\[
\tilde \sigma s^{k-1} = \sigma \delta^{k} = \sigma' \delta^{k'} d s
\]
with $\tilde \sigma$ non-degenerate and $s$ non-identity, so it follows by the Eilenberg--Zilber Lemma that $\sigma' \delta^{k'} d = \tilde \sigma$ and $s=s^{k-1}$ and $\ell = n-2$. Moreover, the map $\textup{sk}_{n-2} Q^{n,r+1} \to \textup{sk}_{n-2} N\mathcal{D}$ is an isomorphism.
This implies that the $(n{-}2)$-simplices $\tilde \tau$ and $\tau' \delta^{k'} d$ of $N\mathcal{B}$ are identified in $Q^{n,r+1}$, hence we know $\chi \tilde \tau s^{k-1}= \chi \tau' \delta^{k'} d s^{k-1} = \bar \gamma'$.
But $\tau \delta^k$ and $\tilde \tau s^{k-1}$ are both bascule $(n{-}1)$-simplices with common image in $N\mathcal{D}$, hence become equal in $Q^{n,r+1}$.
Thus $\bar \gamma = \chi \tau \delta^k = \chi \tilde \tau s^{k-1} = \bar \gamma'$.
\end{proof}
\begin{lem}\label{Qnr pushout} The square
\[ \begin{tikzcd}
I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \rar[dashed] \arrow[d, "{\simeq}"'] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \dar \\
I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma}) \rar & Q^{n,r}
\end{tikzcd} \] is a pushout.
\end{lem}
\begin{proof} Recall that $Q^{n,r}$ is built from $Q^{n,r+1}$ via the pushout
\[
\begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \coprod\limits_{\tau \in {B}_\sigma} \Delta^n \arrow[r] \arrow[d, two heads] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \arrow[d, two heads]\\ \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^n \arrow[r] & Q^{n,r}
\end{tikzcd}
\]
which factors through the left-hand pushout square by \eqref{large_diagram_Usigma} and \cref{lem pushout of H to U}
\[
\begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \coprod\limits_{\tau \in {B}_\sigma} \Delta^n \arrow[r] \arrow[d, two heads] \arrow[dr, phantom, "\ulcorner" very near end] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \coprod\limits_{\tau \in {B}_\sigma} \Delta^{m_\tau} \arrow[r] \arrow[d, two heads] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \arrow[d, two heads]\\ \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^n \arrow[r] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^{m_\sigma} \arrow[r]& Q^{n,r}
\end{tikzcd}
\]
By \eqref{large_diagram_Usigma}, the right-hand pushout square factors as
\[
\begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \coprod\limits_{\tau \in {B}_\sigma} \Delta^{m_\tau} \arrow[rd, two heads]\arrow[drr, start anchor=east, bend left=15] \arrow[ddr, bend right=15] \\&
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_\sigma \arrow[r] \arrow[d, two heads] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \arrow[d, two heads]\\ &\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \Delta^{m_\sigma} \arrow[r]& Q^{n,r}
\end{tikzcd}
\]
and since the map $\coprod_\sigma \coprod_\tau \Delta^{m_\tau} \to \coprod_\sigma U_\sigma$ is an epimorphism, this smaller square is still a pushout square. Finally, this smaller square factors through the left-hand pushout
\[ \begin{tikzcd}
\coprod_{\sigma \in {B}^r\mathcal{D}_n} U_{\sigma} \arrow[d, we'] \arrow[dr, phantom, "\ulcorner" very near end] \rar & I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \rar[dashed] \arrow[d, we] & Q^{n,r+1} \dar \\
\coprod_{\sigma \in {B}^r\mathcal{D}_n} \Delta^{m_{\sigma}} \rar & I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma}) \rar & Q^{n,r}
\end{tikzcd} \]
so we may conclude that the right-hand square is a pushout as claimed.
\end{proof}
\begin{prop}\label{prop injectivity}
The dashed map in the diagram is injective:
\[ \begin{tikzcd}
\coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} \core(\sigma) \rar[tail] \dar[two heads] \arrow[dr, phantom, "\ulcorner" very near end] & \coprod\limits_{\sigma \in {B}^r\mathcal{D}_n} U_{\sigma} \dar \ar[ddr, bend left=15] \\
I^{n,r} \rar \ar[drr,bend right=15, tail] & \bullet \ar[dr,dashed]\\[-2em]
& &[-2em] Q^{n,r+1}
\end{tikzcd} \]
\end{prop}
By its construction as a lift of the monomorphism $I^{n,r} \rightarrowtail N\mathcal{D}$, the map $I^{n,r} \rightarrowtail Q^{n,r+1}$ constructed in \cref{prop lift exists} is automatically a monomorphism. So it remains only to consider simplices in the complement of the image of $I^{n,r}$ in the pushout, or equivalently in the complement of the image of $\coprod \core(\sigma) \rightarrowtail \coprod U_\sigma$, and argue that any two such are neither identified with each other nor with something in the image of $I^{n,r}$. As everything in the pushout part of the diagram is $n$-skeletal, it is enough, by \cite[Lemme 8.1.28]{Cisinski-thesis}, to restrict attention to simplices of dimension at most $n$. To argue this, we make use of the following observations.
There are two types of $n$-simplices in $U_\sigma$ that are not in the image of $\core(\sigma) \subset U_\sigma$:
\begin{itemize}
\item The $n$-simplices $\tau$ associated to some non-degenerate bascule preimage of $\sigma$.
\item Degenerate $n$-simplices on the face $\tau\delta^{r-1}$ for some $\tau$ as above.
\end{itemize}
Note that both cases are canonically identified with simplices in $N\mathcal{B}$. Our aim is to show that neither variety of $n$-simplex is identified with any other simplex under the canonical projection $N\mathcal{B} \to Q^{n,r+1}$.
\begin{lem}\label{lem first unsquash}
Suppose that $\tau \in N\mathcal{B}$ is a non-degenerate bascule $n$-simplex of bridge index $r$, $z$ is one of the simplices $\tau$, $\tau\delta^{r-1}$, $\tau\delta^{r-1} s^i$, and $\chi \colon N\mathcal{B} \to Q^{n,r+1}$ is the defining projection map.
Then $\chi^{-1}\chi(z) = \{ z \}$.
\end{lem}
\begin{proof}
By construction, $Q^{n,r+1}$ is a quotient of $N\mathcal{B}$ identifying certain bridging simplices that become identified in $N\mathcal{D}$, namely those with dimension less than $n$ and with bridge index greater than $r$. Thus, to see that these elements are not identified with any others, it is enough to show that if $\tau'\colon \Delta^{n'} \to N\mathcal{B}$ is a bascule simplex
\begin{itemize}
\item with dimension $n' < n$, or
\item with dimension $n$ and bridge index $r'> r$,
\end{itemize}
then $\tau\delta^{r-1}$, $\tau\delta^{r-1} s^i$, and $\tau$ are not in the image of $\tau' \colon \Delta^{n'} \to N\mathcal{B}$.
Notice that if either $\tau$ or $\tau\delta^{r-1} s^i$ is in the image of such a map, then so is $\tau\delta^{r-1}$, hence it is enough to show that $\tau\delta^{r-1}$ is not in such an image.
Assume first that $\tau'$ is of dimension $n'<n$. Since $\tau\delta^{r-1}$ is a non-degenerate simplex of dimension $n-1$, this necessarily implies $n'=n-1$. This would in turn mean $\tau\delta^{r-1}=\tau'$, which is a contradiction because $\tau\delta^{r-1}$ is non-bascule and $\tau'$ is bascule by assumption.
Suppose $\tau'$ has dimension $n$ and bridge index $r' > r$ and that $r > 1$.
If $\tau'\delta^{k} = \tau\delta^{r-1}$, then $\tau'\delta^{k}$ is non-bascule and has bridge index $r-1$.
By \cref{BoundarySuspectDwyer} this means $k=r'-1=r-1$ or $k=r'=r-1$.
Neither of these can happen since $r' > r$.
Suppose $\tau'$ has dimension $n$ and bridge index $r' > r$ and $r=1$.
If $\tau'\delta^{k} = \tau\delta^{0}$, which is non-bridging, then by \cref{BoundarySuspectDwyer} we either have $k=r'-1=0$ or $k=r'=n$.
The first case means that $r'=r$, which can't happen.
The second case can't happen, since all vertices of $\tau'\delta^{n}$ are in $\mathcal{A}$ and all of the vertices of $\tau\delta^{0}$ are in $\mathcal{V}$.
\end{proof}
\begin{lem}\label{missing lemma from injectivity document}
Let $\tau, \tau' \in {B}^r\mathcal{B}_n$, and suppose that $x,x'\in \Delta^n$ are two elements of dimension $n-1$ or $n$ so that $\tau(x), \tau'(x') \in N\mathcal{B}$ map to the same element of $Q^{n,r+1}$.
If $x \notin \Lambda^n_{r-1}$ and $\tau$ is non-degenerate, then $\tau = \tau'$ and $x=x'$.
\end{lem}
\begin{proof}
Since $x \notin \Lambda_{r-1}^n$, we either have $x = \textup{id}$ or $x = \delta^{r-1}$ or $x = \delta^{r-1}s^i$.
We have $\tau x = \tau' x'$ by \cref{lem first unsquash}.
Letting $d = \textup{id}$ in the first two cases or $d = \delta^i$ in the last case, we have $\tau = \tau' x' d$ or $\tau \delta^{r-1} = \tau' x' d$.
Since these elements of $N\mathcal{B}$ are non-degenerate, $x'd$ is either an identity or a coface map.
We can conclude that $\tau = \tau'$.
In the first case, this is because $\tau = \tau' x' d = \tau' \textup{id}$.
For the second case, we have that $\tau \delta^{r-1} = \tau' \delta$ for some codimension one coface map $\delta$.
Since $\tau$ and $\tau'$ are bascule simplices of bridge index $r$ and $\tau\delta^{r-1} = \tau' \delta$ is non-bascule of bridge index $r-1$ (or non-integral bridge index $-$ when $r=1$), it follows from \cref{BoundarySuspectDwyer} (or observation when $r=1$) that $\delta = \delta^{r-1}$.
Since $(-)\cdot \delta^{r-1}$ is injective on bascule simplices of bridge index $r$ (for $r>1$ this is \cref{cor:bascule-bijection} while for $r=1$ this is just observation), we have $\tau = \tau'$ in the second case as well.
Our final goal is to show that $x = x'$.
If $x'd = \textup{id}$ then, for dimension reasons, we have $x'=\textup{id}$, hence $x=x'$.
If $x'd$ is a coface, with $\tau \delta^{r-1} = \tau' x' d = \tau x' d$, we must have $x'd = \delta^{r-1}$ by \cref{BoundarySuspectDwyer}.
If $x= \delta^{r-1}$ then $d = \textup{id}$ so $x' = x' d = \delta^{r-1}$.
If $x = \delta^{r-1} s^i$, then $d = \delta^i$ and $x' \colon \Delta^n \to \Delta^n$ factors as $\delta^j s^k \colon \Delta^n \to \Delta^{n-1} \to \Delta^n$.
Since $\delta^j s^k \delta^i = \delta^{r-1}$ we have $s^k \delta^i = \textup{id}$ and so $j=r-1$.
But then $\tau \delta^{r-1}s^i = \tau x = \tau x' = \tau \delta^{r-1} s^k$, and since $\tau \delta^{r-1}$ is non-degenerate we have $i=k$.
Thus $x = \delta^{r-1} s^i = x'$.
\end{proof}
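For the reader's convenience, the manipulations of cofaces and codegeneracies in the proof above use the standard cosimplicial identities; the one invoked in the last step is
\[
s^{k}\delta^{i} \;=\;
\begin{cases}
\delta^{i}s^{k-1}, & i<k,\\
\textup{id}, & i=k \text{ or } i=k+1,\\
\delta^{i-1}s^{k}, & i>k+1,
\end{cases}
\]
so that $s^{k}\delta^{i}=\textup{id}$ forces $i\in\{k,k+1\}$.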
\begin{proof}[Proof of \cref{prop injectivity}]
Note that the pushout $I \cup (\coprod U_\sigma)$ is $n$-skeletal, so by \cite[Lemme 8.1.28]{Cisinski-thesis} it is enough to show that $(I \cup (\coprod U_\sigma))_k \to Q_k^{n,r+1}$ is injective for $k \leq n$.
To do so, we will show that if two elements in $\coprod U_\sigma$ of dimension at most $n$ map to the same element of $Q^{n,r+1}$, then either they are equal or they are both in $\coprod \core(\sigma)$.
We assume that at least one of the elements is not in $\coprod \core(\sigma)$; this implies that both elements have dimension at least $n-1$ since $\textup{sk}_{n-2} \core(\sigma) = \textup{sk}_{n-2} U_\sigma$ for all $\sigma \in {B}^r\mathcal{D}_n$.
Suppose $\bar x \in U_\sigma$ is a simplex of dimension $n-1$ or $n$ which does not lie in $\core(\sigma)$.
Then there is a non-degenerate simplex $\tau \in \mathrm{nd}{B}_\sigma \subseteq {B}^r\mathcal{B}_n$ along with $x\in \Delta^n$ so that $\bar x$ is represented by $x$ in the $\tau$-leg defining $U_\sigma$.
Since $\bar x \notin \core(\sigma)$, we have $x\notin \Lambda^n_{r-1}$.
We can also represent another element $\bar x' \in U_{\sigma'} \subseteq \coprod U_\sigma$ by an element $x' \in \Delta^n$ and $\tau' \in {B}_{\sigma'}$ since the map $s_{\tau'}$ in \eqref{large_diagram_Usigma} above \cref{lem pushout of H to U} admits a section.
Again writing $\chi \colon N\mathcal{B} \to Q^{n,r+1}$, the element $\bar x$ of $\coprod U_\sigma$ maps to the element $\chi( \tau(x))$ of $Q^{n,r+1}$, while $\bar x'$ maps to $\chi (\tau'(x'))$.
If these images in $Q^{n,r+1}$ coincide, then $\tau = \tau'$ and $x=x'$ by \cref{missing lemma from injectivity document}, which implies that $\bar x = \bar x'$.
\end{proof}
\begin{cor}\label{cor rplusone to r}
The map $Q^{n,r+1} \to Q^{n,r}$ is a weak equivalence.
\end{cor}
\begin{proof}
By \cref{Qnr pushout,prop injectivity}, the map $Q^{n,r+1} \to Q^{n,r}$ is a pushout of a weak equivalence along a monomorphism
\[ \begin{tikzcd}
I^{n,r} \aamalg{(\coprod_\sigma \core(\sigma))} (\coprod_\sigma U_\sigma) \arrow[r, tail] \arrow[d, we'] \arrow[dr, phantom, "\ulcorner" very near end] & Q^{n,r+1} \arrow[d, we] \\
I^{n,r} \aamalg{(\coprod_\sigma\core(\sigma))} (\coprod_\sigma \Delta^{m_\sigma}) \arrow[r, tail] & Q^{n,r}
\end{tikzcd} \]
and thus is a weak equivalence by left properness of the Joyal model structure.
\end{proof}
\begin{proof}[Proof of \cref{DwyerThmBijObjFull}]
The previous corollary implies that $Q^{n-1} \to Q^n$ is a weak categorical equivalence for all $n$.
As weak categorical equivalences are closed under filtered colimits \cite[Corollary 3.9.8]{CisinskiBook}, this implies that $P \to \textup{colim}_n Q^n \cong N\mathcal{D}$ is a weak categorical equivalence as well.
\end{proof}
Since we have established \cref{AnodyneDwyer} and \cref{DwyerThmBijObjFull}, our main result, \cref{DwyerPushout}, now follows from \cref{lemma on reductions}.
\bibliographystyle{alpha}
\section{Introduction}
\label{int}
Since the early sixties, research has paid increasing attention to the study of reflected stochastic differential equations, the reflection process being approached in different ways. Skorohod, for instance, considered the problem of reflection for diffusion processes in a bounded domain (see, e.g., \cite{Skorohod62}). Tanaka focused on the problem of reflecting boundary conditions into convex sets for stochastic differential equations (see \cite{Tanaka78}). This kind of problem attracted the interest of many other authors, who considered state processes reflected at one or two reflecting barriers (see, e.g., \cite{Cepa:93}, \cite{Cepa:98}, \cite{Mckean63}, \cite{Hamadene/Hassani:2005} and the references therein). While, during the first studies, the trajectories of the system were reflected in the normal direction, in 1984 Lions and Sznitman, in the paper \cite{Lions/Sznitman-84}, studied for the first time the following problem of oblique reflection in a domain:
\begin{equation}
\left\{
\begin{array}{l}
dX_{t}+dK_{t}=f\left(t,X_{t}\right) dt+g\left(t,X_{t}\right) dB_{t},\quad
t>0,\smallskip \\
X_{0}=x,\quad K_{t}={\displaystyle\int_{0}^{t}}1_{\left\{ X_{s}\in Bd(\emph{E})\right\}}\gamma(X_{s})d\left\updownarrow K\right\updownarrow_{s},
\end{array}
\right.\label{firstSkorohod problem}
\end{equation}
where, for the bounded oblique reflection $\gamma\in\mathcal{C}^{2}\left(\mathbb{R}^{d}\right)$, there exists a positive constant $\nu$ such that $\left(\gamma(x),n(x)\right)\geq\nu$, for every $x\in Bd(E)$, $n(x)$ being the unit outward normal vector. A generalization of the result of Lions and Sznitman, with respect to the smoothness of the domain, was later given by Dupuis and Ishii in the paper \cite{Dupuis/Ishii:93}. They assumed that the domain in which the oblique reflection takes place satisfies some additional regularity properties.
The aim of our paper consists in extending the problem of oblique reflection to the framework of deterministic and stochastic variational inequalities. This kind of multivalued stochastic differential equation was introduced in the literature by Asiminoaei \& R\u{a}\c{s}canu in \cite{Asiminoaei/Rascanu:97}, Barbu \& R\u{a}\c{s}canu in \cite{Barbu/Rascanu:97} and Bensoussan \& R\u{a}\c{s}canu in \cite{Bensoussan/Rascanu:97}. They proved existence and uniqueness results for stochastic variational differential systems involving
subdifferential operators and, moreover, they provided approximation and splitting-up schemes for this type of equation. The general result, for stochastic differential equations governed by maximal monotone operators
\begin{equation}
\left\{
\begin{array}{l}
dX_{t}+A\left(X_{t}\right) \left(dt\right) \ni f\left(t,X_{t}\right)
dt+g\left(t,X_{t}\right) dB_{t},\smallskip \\
X_{0}=\xi,\ t\in \left[0,T\right]
\end{array}
\right.\nonumber
\end{equation}
was given by R\u{a}\c{s}canu in \cite{Rascanu:96}, the existence and uniqueness being proved via a deterministic multivalued equation with singular input.
A different approach for solving this type of equation was introduced by R\u{a}\c{s}canu \& Rotenstein in the paper \cite{Rascanu/Rotenstein:11}. They reduced the existence problem for multivalued stochastic differential equations to a minimization problem for a convex lower semicontinuous function. The solutions of these equations were identified with the minimum points of suitably constructed convex lower semicontinuous functionals, defined on well-chosen Banach spaces.
As the main objective of this paper we prove the existence and uniqueness of the solution for the following stochastic variational inequality
\begin{equation}
\left\{
\begin{array}{l}
dX_{t}+H\left(X_{t}\right) \partial \varphi \left(X_{t}\right)\left(dt\right)\ni f\left(t,X_{t}\right)dt+g\left(t,X_{t}\right)dB_{t},\quad
\quad t>0,\smallskip \\
X_{0}=x_{0},
\end{array}
\right. \label{eqstgenerales}
\end{equation}
where $B$ is a standard Brownian motion defined on a complete probability space and the new quantity that appears acts on the set of subgradients; it will be called, from now on, an \textit{oblique subgradient}. The problem becomes challenging due to the presence of this new term, which imposes the use of specific approaches, since it preserves neither the monotonicity of the subdifferential operator nor the Lipschitz property of the matrix involved. First, we will
focus on the deterministic case, considering a generalized Skorohod problem
with oblique reflection of the form
\begin{equation}
\left\{
\begin{array}{l}
x\left(t\right) +{\displaystyle\int_{0}^{t}}H\left(x\left(s\right)
\right)dk\left(s\right)=x_{0}+{\displaystyle\int_{0}^{t}}f\left(s,x\left(s\right)\right)ds+m\left(t\right),\quad t\geq 0,\smallskip \\
dk\left(s\right)\in\partial\varphi\left(x\left(s\right)\right)
\left(ds\right),
\end{array}
\right. \label{generalesko}
\end{equation}
where the singular input $m:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}$ is a continuous function. The existence results are obtained via Yosida penalization techniques.
The paper is organized as follows. Section 2 presents the notations and assumptions that will be used throughout this article and constructs a deterministic
generalized Skorohod problem with oblique reflection. The existence and uniqueness result for this problem can also be found there.
Section 3 is dedicated to the main result of our work; more precisely, the existence of a unique strong solution for our stochastic variational
inequa\-li\-ty with oblique subgradients is proved. The last part of the paper groups together some useful results that are used throughout this article.
\section{Generalized convex Skorohod problem with oblique subgradients}
\subsection{Notations. Hypotheses}
We first study the following deterministic generalized convex Skorohod problem with oblique subgradients:
\begin{equation}
\left\{
\begin{array}{l}
dx\left(t\right)+H\left(x\left(t\right)\right)\partial\varphi\left(x\left(t\right)\right)\left(dt\right)\ni dm\left(t\right),\quad
t>0,\smallskip \\
x\left( 0\right) =x_{0},
\end{array}
\right. \label{osp-eq1}
\end{equation}
where
\begin{equation}
\left\{
\begin{array}{rl}
\left(i\right)\quad & x_{0}\in Dom\left( \varphi \right) \overset{def}{=}\{x\in \mathbb{R}^{d}:\varphi(x)<\infty \},\medskip \\
\left(ii\right)\quad & m\in C\left(\mathbb{R}_{+};\mathbb{R}^{d}\right),\quad m\left( 0\right)=0,
\end{array}
\right. \label{osp-h0}
\end{equation}
$H=\left(h_{i,j}\right)_{d\times d}\in C_{b}^{2}\left(\mathbb{R}^{d};
\mathbb{R}^{d\times d}\right)$ is a matrix, such that for all $x\in \mathbb{R}^{d}$,
\begin{equation}
\left\{
\begin{array}{rl}
\left(i\right)\quad & h_{i,j}\left(x\right)=h_{j,i}\left(x\right),\quad \textrm{for every\ }i,j\in \overline{1,d},\medskip \\
\left(ii\right)\quad & \dfrac{1}{c}\left\vert u\right\vert^{2}\leq
\left\langle H\left(x\right)u,u\right\rangle\leq c\left\vert u\right\vert
^{2},\quad \forall ~u\in \mathbb{R}^{d}\textrm{\ (for some\ }c\geq 1\textrm{)}
\end{array}
\right. \label{osp-h0-A}
\end{equation}
and
\begin{equation}
\varphi:\mathbb{R}^{d}\rightarrow\left]-\infty,+\infty\right]\textrm{\ is
a proper l.s.c. convex function.} \label{osp-h2}
\end{equation}
Denote by $\partial \varphi$ the subdifferential operator of $\varphi$:
$$\partial\varphi\left(x\right)\overset{def}{=}\left\{ \hat{x}\in \mathbb{R}^{d}:\left\langle \hat{x},y-x\right\rangle +\varphi \left( x\right) \leq
\varphi \left(y\right),\textrm{ for all }y\in \mathbb{R}^{d}\right\}$$ and
$Dom(\partial \varphi )=\{x\in \mathbb{R}^{d}:\partial \varphi (x)\neq\emptyset \}$. We will use the notation $(x,\hat{x})\in \partial \varphi$ in order to express that $x\in Dom(\partial \varphi )$ and $\hat{x}\in \partial \varphi (x)$.\medskip
The vector defined by the quantity $H\left(x\right) h$, with $h\in \partial
\varphi \left( x\right)$, will be called in what follows {\it oblique subgradient}.
\begin{remark}
If $E$ is a closed convex subset of $\mathbb{R}^{d}$, then
$$\varphi\left(x\right)=I_{E}\left(x\right)=\left\{\begin{array}{cc}
0, & \textrm{if }x\in E,\medskip \\
+\infty, & \textrm{if\ }x\notin E
\end{array}
\right.$$
is a convex l.s.c. function and, for $x\in E$,
$$\partial I_{E}\left(x\right)=\{\hat{x}\in \mathbb{R}^{d}:\left\langle \hat{x},y-x\right\rangle \leq 0,\;\forall ~y\in E\}=N_{E}\left(x\right),$$ where $N_{E}\left(x\right) $ is the closed external normal cone to $E$ at $x$. We have $N_{E}\left( x\right)=\emptyset$ if $x\notin E$ and $N_{E}\left(x\right)=\left\{0\right\}$ if $x\in int\left(E\right) $ (we denoted by $int\left(E\right)$ the interior of the set $E$).
\end{remark}
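As a concrete illustration of the preceding remark, take $E=\overline{B}\left(0,1\right)\subset\mathbb{R}^{d}$, the closed unit ball. Then, for every boundary point $x$ with $\left\vert x\right\vert=1$,
\[
N_{E}\left(x\right)=\left\{\lambda x:\lambda\geq0\right\},
\]
since $\left\langle \hat{x},y-x\right\rangle\leq0$ for all $y\in E$ holds exactly when $\hat{x}$ is a nonnegative multiple of the outward unit normal $x$.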
\begin{remark}
A vector $\nu_{x}$ associated to $x\in Bd\left(E\right)$ (we denoted by $Bd\left(E\right)$ the boundary of the set $E$) is called
external direction if there exists $\rho_{0}>0$ such that $x+\rho \nu
_{x}\notin E$ for all $0<\rho \leq \rho_{0}.$ In this case there exists $
c^{\prime }>0$, $n_{x}\in N_{E}\left(x\right),$ $\left\vert
n_{x}\right\vert=1$, such that $\left\langle n_{x},\nu_{x}\right\rangle
\geq c^{\prime}$. Remark that, if we consider the symmetric matrix
\begin{equation}
M\left( x\right) =\left\langle \nu _{x},n_{x}\right\rangle I_{d\times d}-\nu
_{x}\otimes n_{x}-n_{x}\otimes \nu _{x}+\frac{2}{\left\langle \nu
_{x},n_{x}\right\rangle }\nu _{x}\otimes \nu _{x}~, \label{osp-A1}
\end{equation}
then%
$$\nu _{x}=M\left( x\right) n_{x}~,\textrm{ for all\ } x\in Bd\left(E\right).$$\smallskip
\end{remark}
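Indeed, with the convention $\left(a\otimes b\right)u=\left\langle b,u\right\rangle a$ and using $\left\vert n_{x}\right\vert=1$, the identity $\nu_{x}=M\left(x\right)n_{x}$ follows by a direct computation: since $\left(\nu_{x}\otimes n_{x}\right)n_{x}=\nu_{x}$, $\left(n_{x}\otimes\nu_{x}\right)n_{x}=\left\langle\nu_{x},n_{x}\right\rangle n_{x}$ and $\left(\nu_{x}\otimes\nu_{x}\right)n_{x}=\left\langle\nu_{x},n_{x}\right\rangle\nu_{x}$, we obtain
\[
M\left(x\right)n_{x}=\left\langle\nu_{x},n_{x}\right\rangle n_{x}-\nu_{x}-\left\langle\nu_{x},n_{x}\right\rangle n_{x}+2\nu_{x}=\nu_{x}.
\]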
Let $\left[ H\left( x\right) \right] ^{-1}$ be the inverse matrix of $H\left( x\right)$. Then $\left[ H\left(x\right)\right]^{-1}$ has the
same properties (\ref{osp-h0-A}) as $H\left( x\right)$. Denote
$$b=\sup\limits_{x,y\in \mathbb{R}^{d},~x\neq y}\frac{\left\vert H\left(
x\right) -H\left( y\right) \right\vert }{\left\vert x-y\right\vert }
+\sup\limits_{x,y\in \mathbb{R}^{d},~x\neq y}\frac{|\left[ H\left(x\right)
\right] ^{-1}-\left[ H\left(y\right)\right]^{-1}|}{\left\vert
x-y\right\vert},$$
where $\left\vert H\left( x\right) \right\vert \overset{def}{=}\left(\sum_{i,j=1}^{d}\left\vert h_{i,j}\left(x\right)\right\vert^{2}\right)
^{1/2}.$\medskip \newline
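Note that the constant $b$ is finite: $H$ is Lipschitz because $H\in C_{b}^{2}$, and the resolvent-type identity
\[
\left[H\left(x\right)\right]^{-1}-\left[H\left(y\right)\right]^{-1}=\left[H\left(x\right)\right]^{-1}\left(H\left(y\right)-H\left(x\right)\right)\left[H\left(y\right)\right]^{-1},
\]
combined with the operator-norm bound $\Vert\left[H\left(z\right)\right]^{-1}\Vert\leq c$ (by (\ref{osp-h0-A}-($ii$)), the eigenvalues of the symmetric matrix $H\left(z\right)$ lie in $\left[1/c,c\right]$), shows that $\left[H\left(\cdot\right)\right]^{-1}$ is Lipschitz with constant at most $c^{2}$ times that of $H$.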
We shall call {\it oblique reflection directions} the directions of the form
$$\nu_{x}=H\left( x\right) n_{x},\quad \textrm{with }x\in Bd\left(E\right),$$
where $n_{x}\in N_{E}\left(x\right)$.\medskip
If $E=\overline{E}\subset \mathbb{R}^{d}$ and $E^{c}=\mathbb{R}^{d}\setminus
E$, then we denote by
$$E_{\varepsilon }=\left\{x\in E:dist\left(x,E^{c}\right)\geq\varepsilon
\right\}=\overline{\left\{x\in E:B\left(x,\varepsilon \right)\subset
E\right\}}$$
the $\varepsilon-$interior of $E$.\medskip
We impose the following supplementary assumptions
\begin{equation}
\left\{
\begin{array}{rl}
\left(i\right) \quad & D=Dom\left(\varphi \right) \;\textrm{is a closed
subset of\ }\mathbb{R}^{d},\medskip \\
\left(ii\right) \quad & \exists \textrm{~}r_{0}>0,\;D_{r_{0}}\neq \emptyset
\quad \textrm{and}\quad h_{0}=\sup_{z\in D}dist\left( z,D_{r_{0}}\right)<\infty,\medskip \\
\left(iii\right) \quad & \exists ~L\geq 0,\;\textrm{such that }\left\vert
\varphi \left(x\right) -\varphi \left(y\right)\right\vert \leq
L+L\left\vert x-y\right\vert,\\ & \hfill \textrm{for all\ }x,y\in D.
\end{array}
\right. \label{osp-h3}
\end{equation}
\noindent For example, condition (\ref{osp-h3}-($iii$)) is verified by functions $\varphi:\mathbb{R}^{d}\rightarrow\left]-\infty,+\infty\right]$ of the following type:
\[
\varphi\left(x\right)=\varphi_{1}\left(x\right)+\varphi_{2}\left(x\right)+I_{D}\left(x\right)\text{,}
\]
where $D$ is a convex set satisfying (\ref{osp-h3}-($ii$)), $\varphi_{1}:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a convex lower semicontinuous function, $\varphi_{2}:D\rightarrow\mathbb{R}$ is a Lipschitz function and $I_{D}$ is the convex indicator of the set $D$.
\subsection{A generalized Skorohod problem}
In this section we present the notion of solution for the generalized convex Skorohod problem with oblique subgradients (\ref{osp-eq1}) and, also, we provide full proofs for its existence and uniqueness.\medskip
If $k:\left[ 0,T\right] \rightarrow \mathbb{R}^{d}$ and $\mathcal{D}\left[ 0,T\right] $ is the set of the partitions of the time interval $\left[ 0,T\right] $, of the form $\Delta =(0=t_{0}<t_{1}<...<t_{n}=T)$, we denote
\[
S_{\Delta }(k)=\sum\limits_{i=0}^{n-1}|k(t_{i+1})-k(t_{i})|
\]%
and $\left\updownarrow k\right\updownarrow _{T}\overset{def}{=}%
\sup\limits_{\Delta \in \mathcal{D}}S_{\Delta }(k)$. In the sequel we
consider the space of bounded variation functions $BV(\left[ 0,T\right] ;%
\mathbb{R}^{d})=\{k~|~k:\left[ 0,T\right] \rightarrow \mathbb{R}^{d},$ $%
\left\updownarrow k\right\updownarrow _{T}<\infty \}.$
Taking on the space of continuous functions $C\left(\left[ 0,T\right];\mathbb{R}^{d}\right)$ the usual norm
\[
\left\Vert y\right\Vert _{T}\overset{def}{=}\left\Vert y\right\Vert _{C(\left[ 0,T\right];\mathbb{R}^{d})}=\sup \left\{ \left\vert y\left( s\right) \right\vert :0\leq s\leq T\right\},
\]
then $(C(\left[ 0,T\right] ;\mathbb{R}^{d}))^{\ast }=\{k\in BV(\left[ 0,T\right]; \mathbb{R}^{d}):k(0)=0\}$. The duality between these spaces is given
by the Riemann--Stieltjes integral $\left( y,k\right) \mapsto
\int_{0}^{T}\left\langle y\left( t\right) ,dk\left( t\right) \right\rangle.$
We say that a function $k$ belongs to $BV_{loc}([0,+\infty \lbrack ;\mathbb{R}^{d})$ if, for every $T>0$, $k\in BV(\left[ 0,T\right] ;\mathbb{R}^{d})$.$\smallskip$
\begin{definition}
Given two functions $x,k:\mathbb{R}_{+}\rightarrow$ $\mathbb{R}^{d}$, we say
that $dk\left( t\right) \in\partial\varphi\left( x\left( t\right) \right)
\left( dt\right) $ if
$$\begin{array}{ll}
\left(a\right) & x,k:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}\textrm{\ are continuous,}\medskip\\
\left(b\right) & x\left(t\right)\in\overline{Dom\left(\varphi\right)},\medskip\\
\left(c\right) & k\in BV_{loc}\left([0,+\infty \lbrack;\mathbb{R}^{d}\right),k\left(0\right)=0,\medskip\\
\left(d\right) & \displaystyle\int_{s}^{t}\left\langle y\left(r\right)-x(r),dk\left(r\right)\right\rangle+\displaystyle\int
_{s}^{t}\varphi\left(x\left(r\right)\right) dr\leq\displaystyle\int_{s}^{t}\varphi\left(y\left(r\right)\right)dr,\smallskip \\
\multicolumn{1}{r}{} & \multicolumn{1}{r}{\textrm{\ for all\ }0\leq s\leq t\leq
T\textrm{\ and\ }y\in C\left(\left[0,T\right];\mathbb{R}^{d}\right).}
\end{array}$$
\end{definition}
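In the particular case $\varphi=I_{E}$, with $E$ a closed convex set, condition $\left(d\right)$ reads
\[
\int_{s}^{t}\left\langle y\left(r\right)-x\left(r\right),dk\left(r\right)\right\rangle\leq0,\quad\textrm{for all }0\leq s\leq t\leq T\textrm{ and all continuous }y\textrm{ with values in }E,
\]
which is the classical Skorohod reflection condition: the measure $dk$ is carried by the set $\left\{r:x\left(r\right)\in Bd\left(E\right)\right\}$ and points along the outward normal cone.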
\noindent We state that
\begin{definition}
\label{def1}A pair of functions $\left(x,k\right)$ is a solution of the Skorohod problem with $H-$oblique subgradients (\ref{osp-eq1}) (and we write $\left(x,k\right) \in \mathcal{SP}\left(H\partial \varphi ;x_{0},m\right)$) if $x,k:\mathbb{R}_{+}\rightarrow$ $\mathbb{R}^{d}$ are continuous functions
and
\begin{equation}
\left\{
\begin{array}{ll}
\left( i\right)\; & x\left(t\right) +\displaystyle\int_{0}^{t}H\left(x\left(r\right)\right)dk\left(r\right)=x_{0}+m\left(t\right),\quad
\forall ~t\geq 0,\smallskip \\
\left(ii\right)\; & dk\left(r\right) \in \partial\varphi\left(x\left(r\right)\right)\left(dr\right).
\end{array}
\right. \label{osp-sp}
\end{equation}
\end{definition}
In the Annex, Section 4.1, we present some lemmas with a priori estimates for the solutions $\left(x,k\right)\in \mathcal{SP}\left(H\partial \varphi;x_{0},m\right)$.
We recall here the result from Lemma \ref{oSP-l4-compact}.
\begin{proposition}
\label{p1-apri-estim}If $\left(x,k\right) \in \mathcal{SP}\left( H\partial
\varphi ;x_{0},m\right) $ then, under assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (\ref{osp-h2}) and (\ref{osp-h3}) there exists a constant $C_{T}\left(\left\Vert m\right\Vert_{T}\right)=C\left(T,\left\Vert m\right\Vert_{T},b,c,r_{0},h_{0}\right)$, increasing function with respect to $\left\Vert m\right\Vert_{T}$, such that, for all $0\leq s\leq t\leq T$,
\begin{equation}
\begin{array}{l}
(a)\quad \left\Vert x\right\Vert_{T}+\left\updownarrow k\right\updownarrow_{T}\leq C_{T}\left(\left\Vert m\right\Vert_{T}\right),\medskip \\
(b)\quad \left\vert x\left(t\right)-x\left(s\right)\right\vert +\left\updownarrow k\right\updownarrow_{t}-\left\updownarrow k\right\updownarrow_{s}\leq C_{T}\left(\left\Vert m\right\Vert_{T}\right)\times \sqrt{t-s+\mathbf{m}_{m}\left(t-s\right)},
\end{array}
\label{prop of the sol}
\end{equation}
where $\mathbf{m}_{m}$ represents the modulus of continuity of the continuous function $m$.
\end{proposition}
We now drop the restriction that the function $f$ is identically $0$ and we consider the equation written in differential form%
\begin{equation}
\left\{
\begin{array}{l}
dx\left( t\right) +H\left( x\left( t\right) \right) \partial \varphi \left(
x\left( t\right) \right) \left( dt\right) \ni f\left( t,x\left( t\right)
\right) dt+dm\left( t\right),\quad t>0,\smallskip \\
x\left( 0\right)=x_{0},
\end{array}%
\right. \label{ob5}
\end{equation}%
where%
\begin{equation}
\begin{array}{ll}
\left( i\right) \; & \left( t,x\right) \longmapsto f\left( t,x\right):%
\mathbb{R}_{+}\times \mathbb{R}^{d}\rightarrow \mathbb{R}^{d}\textrm{ is a
Carath\'{e}odory function} \\
& \quad \quad \quad \textrm{(i.e. measurable with respect to }t\textrm{ and continuous
with respect to }x\textrm{),} \\
\left( ii\right) \; & {\displaystyle\int_{0}^{T}}\left( f^{\#}\left(
t\right) \right) ^{2}dt<\infty ,\quad \textrm{where}\quad f^{\#}\left(
t\right) =\sup_{x\in Dom\left( \varphi \right) }\left\vert f\left(
t,x\right) \right\vert.%
\end{array}
\label{osp-h4}
\end{equation}
The estimates (\ref{prop of the sol}) also hold for a solution of Eq. (\ref{ob5}), but now the constant $C_{T}(\left\Vert m\right\Vert _{T})$ depends, in addition, on the quantity $\int_{0}^{T}f^{\#}(t)dt$. We are now in a position to formulate the main result of this section.
\begin{theorem}
\label{oSP-t1} Let the assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (\ref%
{osp-h2}), (\ref{osp-h3}) and (\ref{osp-h4}) be satisfied. Then the
differential equation (\ref{ob5}) has at least one solution in the sense of
Definition \ref{def1}, i.e. $x,k:\mathbb{R}_{+}\rightarrow $ $\mathbb{R}^{d}$
are continuous functions and%
\begin{equation}
\left\{
\begin{array}{rl}
\left(j\right) \; & x\left(t\right) +\displaystyle\int_{0}^{t}H\left(x\left(r\right) \right) dk\left(r\right) =x_{0}+\displaystyle
\int_{0}^{t}f\left(r,x\left(r\right)\right) dr+m\left(t\right),\quad
\forall ~t\geq 0,\smallskip \\
\left(jj\right)\; & dk\left(r\right)\in\partial\varphi\left(x\left(r\right)\right)\left(dr\right).
\end{array}
\right. \label{osp-eq}
\end{equation}
\end{theorem}
\begin{proof}[{\bf Proof}]
We divide the proof into two steps. First we analyze the case of a regular function $m$ and, afterwards, we consider the situation of a singular input $m$.\newline
\noindent {\it{Step 1.}} {\it{Case}} $m\in C^{1}\left(\mathbb{R}_{+};
\mathbb{R}^{d}\right)$.\medskip
It is sufficient to prove the existence of a solution on an arbitrary, fixed interval $\left[0,T\right]$.
\noindent Let $n\in \mathbb{N}^{\ast }$, $n\geq T$, be fixed, set $\varepsilon =\frac{T}{n}$ and consider the extensions $f\left(s,x\right)=0$ and $m\left(s\right)=s \cdot m^{\prime}\left(0+\right)$ for $s<0$. Using the notations from Annex 4.2, we consider the penalized problem
$$\begin{array}{l}
x_{\varepsilon}\left(t\right)=x_{0},\quad \textrm{if\ }t<0,\medskip \\
\multicolumn{1}{r}{x_{\varepsilon}\left(t\right)+\displaystyle
\int_{0}^{t}H\left( x_{\varepsilon}\left(s\right)\right) dk_{\varepsilon
}\left( s\right)=x_{0}+\displaystyle\int_{0}^{t}\left[ f\left(s-\varepsilon,\mathbb{\pi }_{D}\left(x_{\varepsilon}\left(s-\varepsilon
\right)\right)\right)+m^{\prime }\left(s-\varepsilon\right)\right]ds,}
\\ \multicolumn{1}{r}{\;t\in \left[0,T\right],}
\end{array}$$
or, equivalently,
\begin{equation}
\begin{array}{l}
x_{\varepsilon}\left(t\right)=x_{0},\quad \textrm{if\ }t<0,\medskip \\
x_{\varepsilon}\left(t\right)+\displaystyle\int_{0}^{t}H\left(x_{\varepsilon}\left(s\right)\right)\nabla \varphi
_{\varepsilon}\left(x_{\varepsilon}\left(s\right)\right)ds
=x_{0}+\displaystyle\int_{-\varepsilon}^{t-\varepsilon}\left[f\left(s,
\mathbb{\pi}_{D}\left(x_{\varepsilon}\left(s\right)\right)\right)
+m^{\prime}\left(s\right)\right]ds,\;\;t\in \left[0,T\right],
\end{array}
\label{ea-inte}
\end{equation}
where
\[
k_{\varepsilon }(t)=\int_{0}^{t}\nabla \varphi _{\varepsilon}(x_{\varepsilon }(s))ds
\]
and $\pi _{D}(x)$ is the projection of $x$ on the set $D=\overline{Dom(\varphi )}=Dom(\varphi )$, uniquely defined by $\pi _{D}(x)\in D$ and $dist(x,D)=|x-\pi_{D}(x)|$.
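As a numerical aside (not part of the proof), the recursive structure of the penalized problem (\ref{ea-inte}) can be sketched in one dimension: on each interval $[i\varepsilon,(i+1)\varepsilon]$ the delayed terms are already known, so a forward Euler step suffices. All concrete choices below ($H$, $\varphi$, $f$, $m$ and the step sizes) are our illustrative assumptions.

```python
import numpy as np

# One-dimensional sketch of the penalized, delayed equation: on each
# interval [i*eps, (i+1)*eps] the delayed forcing is already known, so
# the equation reduces to an ODE integrated here by forward Euler.
# All concrete choices (H, phi, f, m, step sizes) are illustrative only.
eps, T, h = 0.1, 1.0, 1e-3
H = lambda x: 1.0 + 0.5 * np.tanh(x) ** 2          # bounded, positive
grad_phi_eps = lambda x: max(x - 1.0, 0.0) / eps   # Yosida gradient of the
                                                   # indicator of D = (-inf, 1]
f = lambda t, x: 0.0 if t < 0 else np.cos(t) * x   # extended by 0 for t < 0
dm = lambda t: 0.0 if t < 0 else 1.0               # m(t) = t, so m'(t) = 1

ts = np.arange(0.0, T + h, h)
x = np.empty_like(ts)
x[0] = 0.0

def delayed(i):
    """x_eps(t_i - eps), with x_eps(t) = x_0 for t < 0."""
    j = i - int(round(eps / h))
    return x[0] if j < 0 else x[j]

for i in range(len(ts) - 1):
    t = ts[i]
    pi_D = min(delayed(i), 1.0)                    # projection on D
    x[i + 1] = x[i] + h * (-H(x[i]) * grad_phi_eps(x[i])
                           + f(t - eps, pi_D) + dm(t - eps))

# The penalization keeps the trajectory close to D = (-inf, 1].
assert np.isfinite(x).all() and x.max() <= 1.0 + 5.0 * eps
```

The point of the sketch is only the recursive solvability: once the delayed arguments are frozen, each sub-interval carries an ordinary, locally Lipschitz ODE.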
Since $x\longmapsto H\left(x\right)\nabla\varphi_{\varepsilon}\left(x\right):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is a sublinear and
locally Lipschitz continuous function and, for $s\leq t-\varepsilon$,
$$\left\vert f\left(s,\mathbb{\pi}_{D}\left(x_{\varepsilon}\left(s\right)
\right)\right) \right\vert \leq f^{\#}\left(s\right),$$
it follows, recursively on the intervals $[i\varepsilon,\left(i+1\right)
\varepsilon]$, that the approximating equation admits a unique solution $x_{\varepsilon}\in C\left(\left[0,T\right];\mathbb{R}^{d}\right)$. The regularity of the function $x\mapsto \left\vert x-a\right\vert
^{2}+\varphi _{\varepsilon }(x)$ and the definition of the approximating sequence $\{x_{\varepsilon }\}_{\varepsilon }$ imply that, for $u_{0}\in
Dom(\varphi )$, we have
\begin{equation}
\begin{array}{l}
\left\vert x_{\varepsilon}\left(t\right)-u_{0}\right\vert^{2}+\varphi_{\varepsilon}\left( x_{\varepsilon}\left(t\right)\right)\smallskip \\
\quad \quad \quad \quad +\displaystyle\int_{0}^{t}\left\langle H\left(x_{\varepsilon}\left(s\right)\right)\nabla \varphi_{\varepsilon}\left(x_{\varepsilon}\left(s\right)\right),2\left[x_{\varepsilon}\left(s\right)-u_{0}\right]+
\nabla\varphi_{\varepsilon}\left(x_{\varepsilon}\left(s\right)\right)\right\rangle ds\smallskip \\
\quad =\left\vert x_{0}-u_{0}\right\vert ^{2}+\varphi_{\varepsilon}\left(x_{0}\right)\smallskip \\
+\displaystyle\int_{0}^{t}\left\langle 2\left[ x_{\varepsilon}\left(s\right)-u_{0}\right] +\nabla\varphi_{\varepsilon }\left(x_{\varepsilon}\left(s\right)\right),f\left(s-\varepsilon,\mathbb{\pi}_{D}\left(x_{\varepsilon}
\left(s-\varepsilon\right)\right)\right)+m^{\prime}\left(s-\varepsilon\right)ds\right\rangle.
\end{array}
\label{ea-ini}
\end{equation}
Let us consider an arbitrarily fixed pair $\left(u_{0},\hat{u}_{0}\right)\in\partial \varphi$. Since $\nabla\varphi_{\varepsilon}\left(u_{0}\right)=\partial\varphi_{\varepsilon}
\left(u_{0}\right)$, it is easy to verify, from the definition of the subdifferential operator, that
$$\left\vert \varphi_{\varepsilon}\left(x_{\varepsilon}\right)-\varphi
_{\varepsilon }\left(u_{0}\right)\right\vert +\varphi_{\varepsilon
}\left(u_{0}\right)-2\left\vert\nabla \varphi_{\varepsilon}\left(u_{0}\right)\right\vert \left\vert x_{\varepsilon }-u_{0}\right\vert \leq
\varphi_{\varepsilon}\left( x_{\varepsilon }\right).$$
\noindent Also, since $\nabla \varphi _{\varepsilon }(u_{0})\in \partial \varphi
(J_{\varepsilon }(u_{0}))$, where $J_{\varepsilon }(x)=x-\varepsilon \nabla
\varphi _{\varepsilon }(x)$, then%
\[
\left\langle \hat{u}_{0}-\nabla \varphi _{\varepsilon
}(u_{0}),u_{0}-(u_{0}-\varepsilon \nabla \varphi _{\varepsilon
}(u_{0}))\right\rangle \geq 0,
\]%
which yields, after short computations, $|\nabla \varphi _{\varepsilon
}(u_{0})|\leq |\hat{u}_{0}|$. Moreover,%
\begin{eqnarray*}
-\varepsilon |\hat{u}_{0}|^{2} &\leq &-\varepsilon \left\langle \hat{u}%
_{0},\nabla \varphi _{\varepsilon }(u_{0})\right\rangle =\left\langle \hat{u}%
_{0},J_{\varepsilon }(u_{0})-u_{0}\right\rangle \\
&\leq &\varphi (J_{\varepsilon }(u_{0}))-\varphi (u_{0}) \\
&\leq &\varphi _{\varepsilon }(u_{0})-\varphi (u_{0})\leq 0.
\end{eqnarray*}%
Due to%
\[
\varphi _{\varepsilon }(x_{\varepsilon }(t))\geq |\varphi _{\varepsilon
}(x_{\varepsilon }(t))-\varphi _{\varepsilon }(u_{0})|+\varphi (u_{0})-|\hat{%
u}_{0}|^{2}-2|\hat{u}_{0}||x_{\varepsilon }(t)-u_{0}|,
\]
from Eq.(\ref{ea-ini}) we obtain
\begin{equation}
\begin{array}{l}
\left\vert x_{\varepsilon }\left( t\right) -u_{0}\right\vert ^{2}+\left\vert
\varphi _{\varepsilon }\left( x_{\varepsilon }\left( t\right) \right)
-\varphi _{\varepsilon }\left( u_{0}\right) \right\vert \smallskip \\
\quad \quad \quad \quad +\displaystyle\int_{0}^{t}\left\langle H\left(
x_{\varepsilon }\left( s\right) \right) \nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( s\right) \right) ,2\left[ x_{\varepsilon }\left(
s\right) -u_{0}\right] +\nabla \varphi _{\varepsilon }\left( x_{\varepsilon
}\left( s\right) \right) \right\rangle ds\smallskip \\
\quad \leq \left\vert x_{0}-u_{0}\right\vert ^{2}+\varphi \left(
x_{0}\right) -\varphi \left( u_{0}\right) +\left\vert \hat{u}_{0}\right\vert
^{2}+2\left\vert \hat{u}_{0}\right\vert \left\vert x_{\varepsilon }\left(
t\right) -u_{0}\right\vert \smallskip \\
+\displaystyle\int_{0}^{t}\left\langle 2\left[ x_{\varepsilon }\left(
s\right) -u_{0}\right] +\nabla \varphi _{\varepsilon }\left( x_{\varepsilon
}\left( s\right) \right) ,f\left( s-\varepsilon,\mathbb{\pi }_{D}\left(
x_{\varepsilon }\left( s-\varepsilon \right) \right) \right)+m^{\prime
}\left( s-\varepsilon \right) \right\rangle ds.%
\end{array}
\label{ea-ini2}
\end{equation}%
Denoting by $C$ a generic constant independent of $\varepsilon$ ($C$
depends only on $c$ and $u_{0}$), the following estimates hold (for
brevity we omit the argument $s$, writing $x_{\varepsilon}$ in place
of $x_{\varepsilon }\left( s\right) $):\medskip
\begin{itemize}
\item $\dfrac{1}{c}\left\vert \nabla\varphi_{\varepsilon}\left(
x_{\varepsilon}\right) \right\vert ^{2}\leq\left\langle H\left(
x_{\varepsilon}\right) \nabla\varphi_{\varepsilon}\left( x_{\varepsilon
}\right) ,\nabla\varphi_{\varepsilon}\left( x_{\varepsilon}\right)
\right\rangle,$
\item\begin{align*}
\left\langle H\left( x_{\varepsilon }\right) \nabla \varphi _{\varepsilon
}\left( x_{\varepsilon }\right) ,2\left( x_{\varepsilon }-u_{0}\right)
\right\rangle & \geq -2\left\vert x_{\varepsilon }-u_{0}\right\vert
\left\vert H\left( x_{\varepsilon }\right) \nabla \varphi _{\varepsilon
}\left( x_{\varepsilon }\right) \right\vert \\
& \geq -2c\left\vert x_{\varepsilon }-u_{0}\right\vert \left\vert \nabla
\varphi _{\varepsilon }\left( x_{\varepsilon }\right) \right\vert \\
& \geq -C\sup_{r\leq s}\left\vert x_{\varepsilon }\left( r\right)
-u_{0}\right\vert ^{2}-\dfrac{1}{4c}\left\vert \nabla \varphi _{\varepsilon
}\left( x_{\varepsilon }\right) \right\vert ^{2},
\end{align*}
\item $2\left\vert \hat{u}_{0}\right\vert \left\vert x_{\varepsilon }\left(
t\right) -u_{0}\right\vert \leq \dfrac{1}{2}\sup\limits_{r\leq t}\left\vert
x_{\varepsilon }\left( r\right) -u_{0}\right\vert ^{2}+2\left\vert \hat{u}%
_{0}\right\vert ^{2},$
\item
\begin{align*}
& \left\langle 2\left( x_{\varepsilon }\left( s\right) -u_{0}\right) +\nabla
\varphi _{\varepsilon }\left( x_{\varepsilon }\left( s\right) \right),f\left(s-\varepsilon,\mathbb{\pi}_{D}\left(x_{\varepsilon}\left(
s-\varepsilon \right) \right) \right) +m^{\prime }\left( s-\varepsilon
\right) \right\rangle \\
& \leq \dfrac{1}{8c}\left\vert 2\left( x_{\varepsilon }\left( s\right)
-u_{0}\right) +\nabla \varphi _{\varepsilon }\left( x_{\varepsilon }\right)
\right\vert ^{2}+2c\left\vert f\left( s-\varepsilon ,\mathbb{\pi }_{D}\left(
x_{\varepsilon }\left( s-\varepsilon \right) \right) \right) +m^{\prime
}\left( s-\varepsilon \right) \right\vert ^{2} \\
& \leq \dfrac{1}{4c}\left\vert \nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( s\right) \right) \right\vert ^{2}+\dfrac{1}{c}%
\left\vert x_{\varepsilon }\left( s\right) -u_{0}\right\vert ^{2}+4c\left[
(f^{\#}\left( s-\varepsilon \right) )^{2}+\left\vert m^{\prime }\left(
s-\varepsilon \right) \right\vert ^{2}\right] .
\end{align*}
\end{itemize}
\noindent Using the above estimates in (\ref{ea-ini2}) we infer%
$$
\begin{array}{c}
\left\vert x_{\varepsilon }\left( t\right) -u_{0}\right\vert ^{2}+\left\vert
\varphi _{\varepsilon }\left( x_{\varepsilon }\left( t\right) \right)
-\varphi _{\varepsilon }\left( u_{0}\right) \right\vert +\dfrac{1}{2c}%
\displaystyle\int_{0}^{t}\left\vert \nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( r\right) \right) \right\vert ^{2}dr\smallskip \\
\leq \left\vert x_{0}-u_{0}\right\vert ^{2}+\varphi \left( x_{0}\right) -\varphi
\left( u_{0}\right) +3\left\vert \hat{u}_{0}\right\vert ^{2}\smallskip \\
+\dfrac{1}{2}\sup\limits_{\theta \leq t}\left\vert x_{\varepsilon }\left(
\theta \right) -u_{0}\right\vert ^{2}+4c\displaystyle\int_{0}^{t}\left[
\left( f^{\#}\left( r-\varepsilon \right) \right) ^{2}+\left\vert m^{\prime
}\left( r-\varepsilon \right) \right\vert ^{2}\right] dr\smallskip \\
+C\displaystyle\int_{0}^{t}\sup\limits_{\theta \leq r}\left\vert
x_{\varepsilon }\left( \theta \right) -u_{0}\right\vert ^{2}dr.%
\end{array}%
$$
We write the inequality for $s\in \left[ 0,t\right] $ and then take the
supremum over $s\leq t$. Hence
$$
\begin{array}{l}
\left\Vert x_{\varepsilon }-u_{0}\right\Vert _{t}^{2}+\sup\limits_{s\leq
t}\left\vert \varphi _{\varepsilon }\left( x_{\varepsilon }\left( s\right)
\right) -\varphi _{\varepsilon }\left( u_{0}\right) \right\vert +%
\displaystyle\int_{0}^{t}\left\vert \nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( r\right) \right) \right\vert ^{2}dr\smallskip \\
\leq 2\left[ \left\vert x_{0}-u_{0}\right\vert ^{2}+\varphi \left(
x_{0}\right) -\varphi \left( u_{0}\right) +\left\vert \hat{u}_{0}\right\vert
^{2}\right] \smallskip \\
\quad +8c\displaystyle\int_{-1}^{t}\left[ \left( f^{\#}\left( r\right)
\right) ^{2}+\left\vert m^{\prime }\left( r\right) \right\vert ^{2}\right]
dr+C\displaystyle\int_{0}^{t}\left\Vert x_{\varepsilon }-u_{0}\right\Vert
_{r}^{2}dr.%
\end{array}%
$$
By the Gronwall inequality we have%
$$
\left\Vert x_{\varepsilon }-u_{0}\right\Vert _{t}^{2}\leq Ce^{Ct}\left[ %
\left[ \left\vert x_{0}-u_{0}\right\vert ^{2}+\varphi \left( x_{0}\right)
-\varphi \left( u_{0}\right) +\left\vert \hat{u}_{0}\right\vert ^{2}\right] +%
\displaystyle\int_{-1}^{t}\left[ (f^{\#}\left( r\right) )^{2}+\left\vert
m^{\prime }\left( r\right) \right\vert ^{2}\right] dr\right].
$$
Hence, there exists a constant $C_{T},$ independent of $\varepsilon,$ such
that%
\begin{equation}
\sup_{t\in \left[ 0,T\right] }\left\vert x_{\varepsilon }\left( t\right)
\right\vert ^{2}+\sup_{t\in \left[ 0,T\right] }\left\vert \varphi
_{\varepsilon }\left( x_{\varepsilon }\left( t\right) \right) \right\vert +%
\displaystyle\int_{0}^{T}\left\vert \nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( s\right) \right) \right\vert ^{2}ds\leq C_{T}~.
\label{est1}
\end{equation}%
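As a side remark, the Gronwall step leading to (\ref{est1}) can be sanity-checked numerically on the extremal case $y^{\prime}=Cy$, $y(0)=a$, for which the bound $y(t)\leq a\,e^{Ct}$ is attained in the limit; the constants and the discretization below are arbitrary choices of ours.

```python
import math

def gronwall_bound(a, C, t):
    """If y(t) <= a + C * int_0^t y(r) dr, Gronwall gives y(t) <= a * exp(C*t)."""
    return a * math.exp(C * t)

# Extremal case y' = C*y, y(0) = a, integrated by forward Euler
# (the step size h is an arbitrary discretization choice).
a, C, T, n = 2.0, 1.5, 1.0, 100_000
h = T / n
y = a
for _ in range(n):
    y += h * C * y
# Euler gives y = a*(1 + h*C)**n <= a*exp(C*T), i.e. below the Gronwall bound.
assert y < gronwall_bound(a, C, T)
```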
Since $\nabla \varphi _{\varepsilon }\left( x\right) =\dfrac{1}{\varepsilon }%
\left( x-J_{\varepsilon }x\right)$, we also obtain%
\begin{equation}
\displaystyle\int_{0}^{T}\left\vert x_{\varepsilon }\left( s\right)
-J_{\varepsilon }\left( x_{\varepsilon }\left( s\right) \right) \right\vert
^{2}ds\leq \varepsilon C_{T}. \label{est11}
\end{equation}%
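The resolvent identity $\nabla \varphi _{\varepsilon }\left( x\right) =\frac{1}{\varepsilon }\left( x-J_{\varepsilon }x\right)$ invoked here can be verified on an example with a closed-form resolvent; the quadratic $\varphi$ below and the helper names are our illustrative choices, not part of the text.

```python
import numpy as np

def yosida(resolvent, x, eps):
    """Return J_eps(x) = (I + eps*dphi)^{-1}(x) and the Yosida
    approximation grad phi_eps(x) = (x - J_eps(x)) / eps."""
    J = resolvent(x, eps)
    return J, (x - J) / eps

# Illustrative choice: phi(x) = |x|^2 / 2, whose resolvent is
# J_eps(x) = x / (1 + eps), hence grad phi_eps(x) = x / (1 + eps).
prox_quad = lambda x, eps: x / (1.0 + eps)

x, eps = np.array([3.0, -1.0]), 0.5
J, g = yosida(prox_quad, x, eps)
assert np.allclose(g, x / (1.0 + eps))   # closed form of grad phi_eps
assert np.allclose(g, J)                 # g = grad phi(J_eps x) = J here
```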
From the approximating equation, for all $0\leq s\leq t\leq T,$ we have%
\begin{align*}
& \left\vert x_{\varepsilon }\left( t\right) -x_{\varepsilon }\left(
s\right) \right\vert \\
& \leq \left\vert \displaystyle\int_{s}^{t}H\left( x_{\varepsilon }\left(
r\right) \right) \nabla \varphi _{\varepsilon }\left( x_{\varepsilon }\left(
r\right) \right) dr\right\vert +\left\vert \displaystyle\int_{s-\varepsilon
}^{t-\varepsilon }f\left( r,\mathbb{\pi }_{D}\left( x_{\varepsilon }\left(
r\right) \right) \right) dr\right\vert \\
& +\left\vert m\left( t-\varepsilon \right) -m\left( s-\varepsilon \right)
\right\vert \\
& \leq c\displaystyle\int_{s}^{t}\left\vert \nabla \varphi _{\varepsilon
}\left( x_{\varepsilon }\left( r\right) \right) \right\vert dr+\displaystyle%
\int_{s-\varepsilon }^{t-\varepsilon }f^{\#}\left( r\right) dr+\mathbf{m}%
_{m}\left( t-s\right) \\
& \leq c\sqrt{t-s}\left( \displaystyle\int_{s}^{t}\left\vert \nabla \varphi
_{\varepsilon }\left( x_{\varepsilon }\left( r\right) \right) \right\vert
^{2}dr\right) ^{1/2}+\sqrt{t-s}\left( \displaystyle\int_{s-\varepsilon
}^{t-\varepsilon }(f^{\#}\left( r\right) )^{2}dr\right) ^{1/2}+\mathbf{m}%
_{m}\left( t-s\right) \\
& \leq C_{T}^{\prime }\left[ \sqrt{t-s}+\mathbf{m}_{m}\left( t-s\right) %
\right].
\end{align*}%
Moreover, we have%
\begin{align*}
\left\updownarrow x_{\varepsilon }\right\updownarrow _{\left[ s,t\right] }&
\leq \displaystyle\int_{s}^{t}\left\vert H\left( x_{\varepsilon }\left(
r\right) \right) \nabla \varphi _{\varepsilon }\left( x_{\varepsilon }\left(
r\right) \right) \right\vert dr+\displaystyle\int_{s-\varepsilon
}^{t-\varepsilon }\left\vert f\left( r,\mathbb{\pi }_{D}\left(
x_{\varepsilon }\left( r\right) \right) \right) \right\vert dr+\displaystyle%
\int_{s-\varepsilon }^{t-\varepsilon }\left\vert m^{\prime }\left( r\right)
\right\vert dr \\
& \leq C_{T}\sqrt{t-s}.
\end{align*}%
Hence $\left\{ x_{\varepsilon}:\varepsilon \in (0,1]\right\}$ is a bounded
and uniformly equicontinuous subset of $C\left( \left[ 0,T\right];\mathbb{R}%
^{d}\right)$. By the Ascoli-Arzel\`{a} theorem, there exist a sequence $%
\varepsilon_{n}\rightarrow 0$ and $x\in C\left( \left[ 0,T\right];\mathbb{%
R}^{d}\right)$ such that%
$$
\lim_{n\rightarrow \infty }\left[ \sup_{t\in \left[ 0,T\right] }\left\vert
x_{\varepsilon _{n}}\left( t\right) -x\left( t\right) \right\vert \right]=0.
$$
By (\ref{est11}), there exists $h\in L^{2}\left( 0,T;\mathbb{R}^{d}\right) $
such that, on a subsequence, denoted also $\varepsilon_{n},$ we have
$$
J_{\varepsilon _{n}}\left( x_{\varepsilon _{n}}\right) \rightarrow x\quad
\textrm{in }L^{2}(0,T;\mathbb{R}^{d})\textrm{ and }a.e.\textrm{ in }\left[ 0,T%
\right],\quad \textrm{as }\varepsilon _{n}\rightarrow 0
$$ and
$$
\nabla \varphi_{\varepsilon _{n}} \left( x_{\varepsilon _{n}}\right) \rightharpoonup h,\quad
\textrm{weakly in }L^{2}(0,T;\mathbb{R}^{d}).
$$
Therefore, for all $t\in \left[ 0,T\right] $,%
\begin{equation}
\lim_{n\rightarrow \infty }\int_{0}^{t}H(x_{\varepsilon _{n}}(s))\nabla
\varphi _{\varepsilon _{n}}(x_{\varepsilon
_{n}}(s))ds=\int_{0}^{t}H(x(s))h(s)ds. \label{limhh}
\end{equation}
The lower semicontinuity property of $\varphi$ yields, a.e. $t\in \left[ 0,T%
\right],$%
$$
\varphi \left( x\left( t\right) \right) \leq \liminf_{n\rightarrow +\infty
}\varphi \left( J_{\varepsilon _{n}}\left( x_{\varepsilon _{n}}\left(
t\right) \right) \right) \leq \liminf_{n\rightarrow +\infty }\varphi
_{\varepsilon _{n}}\left( x_{\varepsilon _{n}}\left( t\right) \right) \leq
C_{T}~.
$$
Since $\nabla \varphi_{\varepsilon} \left( x_{\varepsilon }\right) \in \partial \varphi
\left( J_{\varepsilon }\left( x_{\varepsilon }\right) \right)$, we have, for
all $y\in C\left( \left[ 0,T\right];\mathbb{R}^{d}\right),$
$$
\displaystyle\int_{s}^{t}\left\langle \nabla \varphi_{\varepsilon} \left( x_{\varepsilon
}\left( r\right) \right) ,y\left( r\right) -J_{\varepsilon }(x_{\varepsilon }(r))
\right\rangle dr+\displaystyle\int_{s}^{t}\varphi \left( J_{\varepsilon
}\left( x_{\varepsilon }\left( r\right) \right) \right) dr\leq \displaystyle%
\int_{s}^{t}\varphi \left( y\left( r\right) \right) dr;
$$
passing to $\liminf_{\varepsilon _{n}\rightarrow 0}$ we obtain%
$$
\displaystyle\int_{s}^{t}\left\langle h\left( r\right),y\left( r\right)
-x\left( r\right) \right\rangle dr+\displaystyle\int_{s}^{t}\varphi \left(
x\left( r\right) \right) dr\leq \displaystyle\int_{s}^{t}\varphi \left(
y\left( r\right) \right) dr,
$$
for all $0\leq s\leq t\leq T$ and $y\in C\left( \left[ 0,T\right] ;\mathbb{R}%
^{d}\right)$, that is, $h\left( r\right) \in \partial \varphi \left( x\left(
r\right) \right) $ a.e. $r\in \left[0,T\right].\smallskip$
Finally, taking into account (\ref{limhh}), by passing to limit for $\varepsilon =\varepsilon_{n}\rightarrow 0$ in the approximating equation (\ref{ea-inte}), via the Lebesgue dominated convergence theorem for the integral from the right-hand side, we get
$$x\left(t\right)+\displaystyle\int_{0}^{t}H\left(x\left(s\right)\right)
dk\left(s\right)=x_{0}+\displaystyle\int_{0}^{t}f\left(s,x\left(s\right)
\right)ds+m\left(t\right),$$
where
$$k\left( t\right)=\displaystyle\int_{0}^{t}h\left(s\right)ds.$$
\noindent {\it{Step 2.}} {\it{Case}} $m\in C\left(\left[0,T\right];\mathbb{R}^{d}\right).$\medskip
Let us extend again $m\left(s\right)=0$ for $s\leq 0$ and define
$$m_{\varepsilon}\left(t\right)=\dfrac{1}{\varepsilon}\displaystyle
\int_{t-\varepsilon}^{t}m\left(s\right) ds=\dfrac{1}{\varepsilon}
\displaystyle\int_{0}^{\varepsilon }m\left( t+r-\varepsilon\right)dr.$$
We have
$$m_{\varepsilon}\in C^{1}(\left[0,T\right];\mathbb{R}^{d}),\quad
\left\Vert m_{\varepsilon}\right\Vert _{T}\leq \left\Vert m\right\Vert
_{T}\quad \textrm{and}\quad \mathbf{m}_{m_{\varepsilon}}\left(\delta \right)
\leq \mathbf{m}_{m}\left(\delta \right).
$$
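The smoothing $m_{\varepsilon}$ is a Steklov (moving) average, and the sup-norm contraction $\left\Vert m_{\varepsilon}\right\Vert _{T}\leq \left\Vert m\right\Vert _{T}$ is easy to observe numerically; the concrete $m$ and the trapezoidal discretization below are our illustrative assumptions, not part of the proof.

```python
import numpy as np

def mollify(m, t, eps, n=2000):
    """Steklov average m_eps(t) = (1/eps) * int_{t-eps}^t m(s) ds,
    computed with a trapezoidal rule (n is a discretization choice)."""
    s = np.linspace(t - eps, t, n + 1)
    v = m(s)
    return (v[0] / 2.0 + v[1:-1].sum() + v[-1] / 2.0) / n

# m extended by 0 for s <= 0, as in the proof; the concrete m is ours.
m = lambda s: np.where(s <= 0.0, 0.0, np.sin(5.0 * s))

T, eps = 1.0, 0.1
ts = np.linspace(0.0, T, 50)
vals = np.array([mollify(m, t, eps) for t in ts])
# Sup-norm contraction: ||m_eps||_T <= ||m||_T (= 1 here).
assert np.max(np.abs(vals)) <= np.max(np.abs(m(ts))) + 1e-8
```

Averaging over the window $[t-\varepsilon,t]$ also explains why the modulus of continuity of $m_{\varepsilon}$ is controlled by that of $m$, as stated above.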
Let $(x_{\varepsilon },k_{\varepsilon })$ be a solution of the approximating equation
$$
\left\{
\begin{array}{l}
x_{\varepsilon}\left(t\right)+\displaystyle\int_{0}^{t}H\left(x_{\varepsilon}\left(r\right)\right)dk_{\varepsilon}\left(r\right)
=x_{0}+\displaystyle\int_{0}^{t}f\left(r,x_{\varepsilon}\left(r\right)
\right)dr+m_{\varepsilon}\left(t\right),\;t\geq 0,\smallskip \\
dk_{\varepsilon}\left(r\right)\in\partial \varphi \left(x_{\varepsilon}\left(r\right)\right)\left(dr\right),
\end{array}
\right.
$$
a solution which exists according to the first step of the proof. We have
$$
k_{\varepsilon}\left(t\right)=\displaystyle\int_{0}^{t}h_{\varepsilon}\left(s\right)ds,\quad h_{\varepsilon}\in L^{2}(0,T;\mathbb{R}^{d}),
$$
and
\begin{equation}
\displaystyle\int_{s}^{t}\left\langle y\left(r\right)-x_{\varepsilon}\left(r\right),dk_{\varepsilon}\left(r\right)\right\rangle+
\displaystyle\int_{s}^{t}\varphi\left(x_{\varepsilon}\left(r\right)
\right)dr\leq\displaystyle\int_{s}^{t}\varphi\left(y\left(r\right)
\right)dr, \label{ob-7}
\end{equation}
for all $0\leq s\leq t\leq T$ and $y\in C\left(\left[0,T\right];\mathbb{R}^{d}\right).\smallskip$
From Lemma \ref{oSP-l4-compact}, with $m$ replaced by
$$M_{\varepsilon }\left(t\right)=\displaystyle\int_{0}^{t}f\left(r,x_{\varepsilon}\left(r\right)\right) dr+m_{\varepsilon }\left( t\right),$$
we have%
\begin{align*}
\left\Vert x_{\varepsilon}\right\Vert _{T}+\left\updownarrow k_{\varepsilon
}\right\updownarrow _{T}& \leq C_{T}\left( \left\Vert M_{\varepsilon
}\right\Vert _{T}\right) \quad \textrm{and} \\
\left\vert x_{\varepsilon }\left( t\right)-x_{\varepsilon }\left( s\right)
\right\vert +\left\updownarrow k_{\varepsilon }\right\updownarrow
_{t}-\left\updownarrow k_{\varepsilon }\right\updownarrow _{s}& \leq
C_{T}\left( \left\Vert M_{\varepsilon }\right\Vert_{T}\right) \times \sqrt{%
\mathbf{\mu}_{M_{\varepsilon }}\left(t-s\right)},
\end{align*}%
where, for $\delta >0$, $\mu _{g}(\delta )\overset{def}{=}\delta +\mathbf{m}_{g}(\delta )$ and $\mathbf{m}_{g}$ is the modulus of continuity of the continuous function
$g:\left[ 0,T\right] \rightarrow \mathbb{R}^{d}$ (for more details see Annex 4.1.). Since, for all $0\leq s\leq t\leq T,$%
\begin{align*}
\mathbf{\mu }_{M_{\varepsilon }}\left( t-s\right) & \leq t-s+\sqrt{t-s}%
\left( \displaystyle\int_{0}^{T}(f^{\#}\left( r\right) )^{2}dr\right) ^{1/2}+\mathbf{m}_{m}\left(
t-s\right) \overset{def}{=}\gamma \left( t-s\right) \quad \textrm{and} \\
\left\Vert M_{\varepsilon }\right\Vert _{T}& =\mathbf{m}_{M_{\varepsilon
}}\left( T\right) \leq \displaystyle\int_{0}^{T}f^{\#}\left( r\right)
dr+\left\Vert m\right\Vert _{T}\overset{def}{=}\gamma_{T},
\end{align*}%
then there exist the positive constants $C_{T}(\gamma_{T})$ and $\tilde{C}_{T}(\gamma_{T})$ such that%
\begin{align*}
\left\Vert x_{\varepsilon }\right\Vert _{T}+\left\updownarrow k_{\varepsilon
}\right\updownarrow _{T}& \leq C_{T}\left(\gamma_{T}\right) \quad \textrm{and} \\
\mathbf{m}_{x_{\varepsilon }}\left( t-s\right) +\left\updownarrow
k_{\varepsilon }\right\updownarrow _{t}-\left\updownarrow k_{\varepsilon
}\right\updownarrow _{s}& \leq \tilde{C}_{T}\left(\gamma_{T}\right) \times \sqrt{%
\gamma \left( t-s\right)}.
\end{align*}%
By the Ascoli-Arzel\`{a} theorem, there exist a sequence $\varepsilon
_{n}\rightarrow 0$ and $x,k\in C\left( \left[ 0,T\right];\mathbb{R}%
^{d}\right) $ such that%
$$
x_{\varepsilon _{n}}\rightarrow x\quad \textrm{and}\quad
k_{\varepsilon _{n}}\rightarrow k\quad \textrm{in }C(\left[ 0,T\right];%
\mathbb{R}^{d}).
$$
Moreover, since $\left\updownarrow \cdot \right\updownarrow:C\left( \left[
0,T\right];\mathbb{R}^{d}\right) \rightarrow \mathbb{R}$ is a lower
semicontinuous function, we have%
$$
\left\updownarrow k\right\updownarrow _{T}\leq \liminf_{n\rightarrow +\infty
}\left\updownarrow k_{\varepsilon _{n}}\right\updownarrow _{T}\leq C_{T,m}~.
$$
By the Helly-Bray theorem, we can pass to the limit and we have, for all $0\leq s\leq t\leq T$,%
\[
\lim_{n\rightarrow \infty }\int_{s}^{t}\left\langle y(r)-x_{\varepsilon_{n}}(r),dk_{\varepsilon _{n}}(r)\right\rangle =\int_{s}^{t}\left\langle y(r)-x(r),dk(r)\right\rangle.
\]
Passing now to $\liminf_{n\rightarrow +\infty }$ in (\ref{ob-7}) we infer $%
dk\left( r\right) \in \partial \varphi \left( x\left( r\right) \right)
\left( dr\right)$. Finally, taking $\lim_{n\rightarrow \infty }$ in the
approximating equation we obtain that $\left( x,k\right)$ is a solution of
the equation (\ref{osp-eq}). The proof is now complete.\hfill
\end{proof}
In the next step we show under which additional conditions the equation (\ref{ob5}) admits a unique solution.\medskip
\begin{proposition}
\label{oSP-p1-uniq}Let the assumptions (\ref{osp-h0-A}), (\ref{osp-h0}), (%
\ref{osp-h2}), (\ref{osp-h3}) and (\ref{osp-h4}) be satisfied. Assume also that
there exists $\mu \in L_{loc}^{1}\left( \mathbb{R}_{+};\mathbb{R}_{+}\right)$ such that, for all $x,y\in \mathbb{R}^{d},$%
\begin{equation}
\left\vert f\left( t,x\right) -f\left( t,y\right) \right\vert \leq \mu
\left( t\right) \left\vert x-y\right\vert,\quad a.e.\ t\geq 0.
\label{osp-h5}
\end{equation}%
If $m\in BV_{loc}\left( \mathbb{R}_{+};\mathbb{R}^{d}\right) $, then the
generalized convex Skorohod problem with oblique subgradients (\ref{osp-eq1}%
) admits a unique solution $(x,k)$ in the space $C(%
\mathbb{R}_{+};\mathbb{R}^{d})\times \lbrack C(\mathbb{R}_{+};\mathbb{R}^{d})\cap BV_{loc}(\mathbb{R}_{+};\mathbb{R}^{d})]$.
Moreover, if $\left( x,k\right)$ and $(\hat{x},\hat{k})$ are two solutions corresponding to $m$ and $\hat{m}$, respectively,
then%
\begin{equation}
\left\vert x\left( t\right) -\hat{x}\left( t\right) \right\vert \leq
Ce^{CV\left( t\right) }\left[ \left\vert x_{0}-\hat{x}_{0}\right\vert
+\left\updownarrow m-\hat{m}\right\updownarrow _{t}\right], \label{ob8}
\end{equation}%
where $V\left( t\right) =\left\updownarrow x\right\updownarrow
_{t}+\updownarrow \!\hat{x}\!\updownarrow _{t}+\left\updownarrow
k\right\updownarrow _{t}+\updownarrow\hat{k}\updownarrow_{t}+\displaystyle%
\int_{0}^{t}\mu \left( r\right) dr$ and $C$ is a constant depending only on $b$ and $c$.
\end{proposition}
\begin{proof}[{\bf Proof}]
The existence was proved in Theorem \ref{oSP-t1}. Let us prove the
inequality (\ref{ob8}) which clearly yields the uniqueness.
Consider the symmetric and strictly positive matrix $Q\left( r\right) =\left[
H\left( x\left( r\right) \right) \right] ^{-1}+\left[ H\left( \hat{x}\left(
r\right) \right) \right] ^{-1}$. Remark that%
\begin{equation}
\begin{array}{l}
Q\left( r\right) \left[ H\left( \hat{x}\left( r\right) \right) d\hat{k}%
\left( r\right) -H\left( x\left( r\right) \right) dk\left( r\right) \right]
\\
=\left( \left[ H\left( x\left( r\right) \right) \right] ^{-1}-\left[ H\left(
\hat{x}\left( r\right) \right) \right] ^{-1}\right) \left[ H\left( \hat{x}%
\left( r\right) \right) d\hat{k}\left( r\right) +H\left( x\left( r\right)
\right) dk\left( r\right) \right] \\
+2\left[ d\hat{k}\left( r\right) -dk\left( r\right) \right].%
\end{array}
\label{eui_1}
\end{equation}%
Let $u\left( r\right) =Q^{1/2}\left( r\right) \left( x\left( r\right) -\hat{x%
}\left( r\right) \right).$ Then%
\begin{align*}
du\left( r\right) & =\left[ dQ^{1/2}\left( r\right) \right] \left( x\left(
r\right) -\hat{x}\left( r\right) \right) +Q^{1/2}\left( r\right) d\left[
x\left( r\right) -\hat{x}\left( r\right) \right] \\
& =\left[ \alpha \left( r\right) dx\left( r\right) +\hat{\alpha}\left(
r\right) d\hat{x}\left( r\right) \right] \left( x\left( r\right) -\hat{x}%
\left( r\right) \right) \\
& +Q^{1/2}\left( r\right) \left[ f\left( r,x\left( r\right) \right) -f\left(
r,\hat{x}\left( r\right) \right) \right] dr \\
& +Q^{1/2}\left( r\right) \left[ dm\left( r\right) -d\hat{m}\left( r\right) %
\right] \\
& +Q^{1/2}\left( r\right) \left[ -H\left( x\left( r\right) \right) dk\left(
r\right) +H\left( \hat{x}\left( r\right) \right) d\hat{k}\left( r\right) %
\right],
\end{align*}%
with $\alpha$, $\hat{\alpha}\in \mathcal{L(%
\mathbb{R}_{+}};\mathbb{R}^{d\times d}\mathcal{)}$, where $\mathcal{L(%
\mathbb{R}_{+}};\mathbb{R}^{d\times d}\mathcal{)}$ is the space of
continuous linear operators from $\mathbb{R}_{+}$ into $\mathbb{R}%
^{d\times d}$.
Using (\ref{eui_1}) and the assumptions on the matrix-valued functions $%
x\longmapsto H\left( x\right) $ and $x\longmapsto \left[ H\left( x\right) %
\right]^{-1},$ we have (as signed measures on $\mathbb{R}_{+}$), for some
positive constants $C_{1},C_{2},C_{3},C$ depending only on the constants $c$
and $b,$%
\begin{align*}
\left\langle u\left( r\right) ,du\left( r\right) \right\rangle & \leq
C_{1}\left\vert u\left( r\right) \right\vert ^{2}\left( d\left\updownarrow
x\right\updownarrow _{r}+d\updownarrow \!\hat{x}\!\updownarrow _{r}\right)
+C_{2}\mu \left( r\right) \left\vert u\left( r\right) \right\vert ^{2}dr \\
& +C_{3}\left\vert u\left( r\right) \right\vert d\left\updownarrow m-\hat{m}%
\right\updownarrow _{r} \\
& +\left\langle x\left( r\right) -\hat{x}\left( r\right) ,Q\left( r\right) %
\left[ H\left( \hat{x}\left( r\right) \right) d\hat{k}\left( r\right)
-H\left( x\left( r\right) \right) dk\left( r\right) \right] \right\rangle \\
& \leq C\left\vert u\left( r\right) \right\vert d\left\updownarrow m-\hat{m}%
\right\updownarrow _{r}+C\left\vert u\left( r\right) \right\vert
^{2}dV\left( r\right),
\end{align*}%
with $V\left( t\right) =\left\updownarrow x\right\updownarrow
_{t}+\updownarrow \!\hat{x}\!\updownarrow _{t}+\left\updownarrow
k\right\updownarrow _{t}+\updownarrow \hat{k}\updownarrow _{t}+\displaystyle%
\int_{0}^{t}\mu \left( r\right) dr.$ Now, by (\ref{AnC-dxRNV}), we infer, for
all $t\geq 0$,%
\begin{equation}
\left\vert u\left( t\right) \right\vert \leq e^{CV\left( t\right)
}\left\vert x_{0}-\hat{x}_{0}\right\vert +\displaystyle\int_{0}^{t}Ce^{C%
\left[ V\left( t\right) -V\left( r\right) \right] }d\left\updownarrow m-\hat{%
m}\right\updownarrow _{r}~, \label{o2unq}
\end{equation}%
and the inequality (\ref{ob8}) follows.\hfill
\end{proof}
\begin{proposition}
\label{p-aproxm}Under the assumptions of Proposition \ref{oSP-p1-uniq} and
for $m\in C^{1}\left( \mathbb{R}_{+};\mathbb{R}^{d}\right)$, the family $%
\left( x_{\varepsilon }\right) _{0<\varepsilon \leq 1}$ of solutions of the
approximating equation%
\begin{equation}
\begin{array}{l}
x_{\varepsilon }\left( t\right) +\displaystyle\int_{0}^{t}H\left(
x_{\varepsilon }\left( s\right) \right) dk_{\varepsilon }\left( s\right)
=x_{0}+\displaystyle\int_{0}^{t}f\left( s,\mathbb{\pi }_{D}\left(
x_{\varepsilon }\left( s\right) \right) \right) ds+m\left( t\right),\quad
t\geq 0,\smallskip \\
dk_{\varepsilon }\left( s\right) =\nabla \varphi _{\varepsilon }\left(
x_{\varepsilon }\left( s\right) \right) ds,%
\end{array}
\label{aproxeq}
\end{equation}%
has the following properties:\newline
$\bullet $ for all $T>0$ there exists a constant $C_{T}$, independent of $%
\varepsilon,\delta \in ]0,1],$ such that%
$$
\begin{array}{rl}
\left( j\right) \quad & \sup\limits_{t\in \left[ 0,T\right] }\left\vert
x_{\varepsilon }\left( t\right) \right\vert ^{2}+\sup_{t\in \left[ 0,T\right]
}\left\vert \varphi _{\varepsilon }\left( x_{\varepsilon }\left( t\right)
\right) \right\vert +\displaystyle\int_{0}^{T}\left\vert \nabla \varphi
_{\varepsilon }\left( x_{\varepsilon }\left( s\right) \right) \right\vert
^{2}ds\leq C_{T}~,\medskip \\
\left( jj\right) \quad & \left\updownarrow x_{\varepsilon
}\right\updownarrow _{\left[ s,t\right] }\leq C_{T}\sqrt{t-s},\quad \textrm{%
for all }0\leq s\leq t\leq T~,\medskip \\
\left( jjj\right) \quad & \left\Vert x_{\varepsilon }-x_{\delta }\right\Vert
_{T}\leq C_{T}\sqrt{\varepsilon +\delta }~.%
\end{array}%
$$
$\bullet$ Moreover, there exist $x,k\in C\left( \left[ 0,T\right];\mathbb{R%
}^{d}\right)$ and $h\in L^{2}\left( 0,T;\mathbb{R}^{d}\right) ,$ such that%
\newline
$$
\lim_{\varepsilon \rightarrow 0}k_{\varepsilon }\left( t\right) =k\left(
t\right) =\int_{0}^{t}h\left( s\right) ds\textrm{, for all }t\in \left[ 0,T%
\right],
$$
$$\lim\limits_{\varepsilon \rightarrow 0}\left\Vert x_{\varepsilon}-x\right\Vert_{T}=0$$
and $\left(x,k\right)$ is the unique solution of the variational
inequality with oblique subgradients (\ref{osp-eq}).
\end{proposition}
\begin{proof}[{\bf Proof}]
The proofs of the estimates $\left(j\right)$ and $\left(jj\right)$ are
exactly as in the proof of Theorem \ref{oSP-t1}.
Let us prove $\left(jjj\right)$. Similarly to the proof of the uniqueness
result (Proposition \ref{oSP-p1-uniq}), we introduce
$Q_{\varepsilon,\delta}\left(s\right)=\left[H\left(x_{\varepsilon}\left(s\right)\right)\right]^{-1}+\left[H\left(x_{\delta}\left(s\right)\right)\right]^{-1}$. Once again, to simplify the reading, we
omit $s$ in the argument of $x_{\varepsilon}\left(s\right)$ and $x_{\delta}\left(s\right)$. Remark that
\begin{align*}
Q_{\varepsilon,\delta}\left(s\right) & \left[H\left(x_{\delta}\right)
\nabla\varphi_{\delta}\left(x_{\delta}\right)-H\left(x_{\varepsilon}\right)\nabla\varphi_{\varepsilon}\left(x_{\varepsilon}\right)\right]
\\
&=\left(\left[H\left(x_{\varepsilon}\right)\right]^{-1}-\left[H\left(x_{\delta}\right)\right]^{-1}\right)\left[H\left(x_{\delta}\right)\nabla\varphi _{\delta}\left(x_{\delta}\right)
+H\left(x_{\varepsilon}\right)\nabla\varphi_{\varepsilon}\left(x_{\varepsilon}\right)\right]\\
& +2\left[dk_{\delta}\left(s\right)-dk_{\varepsilon}\left(s\right)\right].
\end{align*}
Let $u_{\varepsilon,\delta}\left(s\right)=Q_{\varepsilon,\delta}^{1/2}\left(s\right)\left(x_{\varepsilon}\left(s\right)-x_{\delta}\left(s\right)\right)$. Then
\begin{align*}
du_{\varepsilon,\delta}\left(s\right)&=\left[ dQ_{\varepsilon,\delta
}^{1/2}\left( s\right) \right] \left(x_{\varepsilon}-x_{\delta}\right)
+Q_{\varepsilon,\delta}^{1/2}\left(s\right) d\left[x_{\varepsilon
}-x_{\delta}\right] \\
& =\left[\alpha_{\varepsilon,\delta}\left( s\right) dx_{\varepsilon
}+\beta_{\varepsilon,\delta}\left( s\right) dx_{\delta}\right] \left(
x_{\varepsilon}-x_{\delta}\right) \\
& +Q_{\varepsilon,\delta}^{1/2}\left( s\right) \left[ f\left( s,\mathbb{\pi}%
_{D}\left( x_{\varepsilon}\right) \right) -f\left( s,\mathbb{\pi}_{D}\left(
x_{\delta}\right) \right) \right] ds \\
& +Q_{\varepsilon,\delta}^{1/2}\left( s\right) \left[
-H\left( x_{\varepsilon}\right) \nabla\varphi_{\varepsilon}\left(
x_{\varepsilon}\right) +H\left( x_{\delta}\right) \nabla\varphi_{\delta
}\left( x_{\delta}\right) \right] ds,
\end{align*}
where $\alpha_{\varepsilon,\delta}$, $\beta_{\varepsilon,\delta}:\mathbb{R}%
_{+}\rightarrow\mathbb{R}^{d\times d}$ are some continuous functions which
are bounded uniformly in $\varepsilon,\delta.$
Therefore, for $s\in \left[0,T\right],$%
\begin{align*}
\left\langle u_{\varepsilon,\delta }\left( s\right) ,du_{\varepsilon
,\delta }\left(s\right)\right\rangle & \leq C\left\vert u_{\varepsilon
,\delta }\left(s\right)\right\vert ^{2}\left( d\left\updownarrow
x_{\varepsilon }\right\updownarrow _{s}+d\left\updownarrow x_{\delta
}\right\updownarrow _{s}\right) +C\mu \left( s\right) \left\vert
u_{\varepsilon,\delta }\left( s\right) \right\vert ^{2}ds \\
& +2\left\langle x_{\varepsilon }-x_{\delta },Q_{\varepsilon ,\delta }\left(
s\right) \left[ H\left( x_{\delta }\right) \nabla \varphi _{\delta }\left(
x_{\delta }\right) -H\left( x_{\varepsilon }\right) \nabla \varphi
_{\varepsilon }\left( x_{\varepsilon }\right) \right] \right\rangle ds \\
& \leq C\left\vert u_{\varepsilon ,\delta }\left( s\right) \right\vert
^{2}dV\left( s\right) +4\left\langle x_{\varepsilon }-x_{\delta },\nabla
\varphi _{\delta }\left( x_{\delta }\right) -\nabla \varphi _{\varepsilon
}\left( x_{\varepsilon }\right) \right\rangle ds,
\end{align*}%
with $V\left( s\right) =\left\updownarrow x_{\varepsilon }\right\updownarrow
_{s}+\left\updownarrow x_{\delta }\right\updownarrow _{s}+\left\updownarrow
k_{\varepsilon }\right\updownarrow _{s}+\left\updownarrow k_{\delta
}\right\updownarrow _{s}+\displaystyle\int_{0}^{s}\mu \left( r\right) dr\leq
C_{T}.$
\noindent Since, according to Asiminoaei \& R\u{a}\c{s}canu \cite{Asiminoaei/Rascanu:97},
$$
\left\langle \nabla \varphi _{\varepsilon }(x)-\nabla \varphi _{\delta
}(y),x-y\right\rangle \geq -\left( \varepsilon +\delta \right) |\nabla
\varphi _{\varepsilon }(x)||\nabla \varphi _{\delta }(y)|,
$$
we have
\begin{align*}
\left\langle x_{\varepsilon }\left( r\right) -x_{\delta }\left( r\right)
,dk_{\delta }\left( r\right) -dk_{\varepsilon }\left( r\right) \right\rangle
& =\left\langle x_{\varepsilon }\left( r\right) -x_{\delta }\left( r\right)
,\nabla \varphi _{\delta }\left( x_{\delta }\left( r\right) \right) -\nabla
\varphi _{\varepsilon }\left( x_{\varepsilon }\left( r\right) \right)
\right\rangle dr \\
& \leq \left( \varepsilon +\delta \right) \left\vert \nabla \varphi _{\delta
}\left( x_{\delta }\left( r\right) \right) \right\vert \left\vert \nabla
\varphi _{\varepsilon }\left( x_{\varepsilon }\left( r\right) \right)
\right\vert dr.
\end{align*}
\noindent Consequently,
$$
\left\langle u_{\varepsilon,\delta }\left( r\right),du_{\varepsilon,\delta }\left( r\right) \right\rangle \leq 4\left( \varepsilon +\delta
\right) \left\vert \nabla \varphi _{\delta }\left( x_{\delta }\left( r\right) \right)
\right\vert \left\vert \nabla \varphi _{\varepsilon }\left(x_{\varepsilon}\left( r\right)
\right) \right\vert dr+C\left\vert u_{\varepsilon,\delta }\left( r\right)
\right\vert ^{2}dV\left(r\right).
$$
\noindent Using inequality (\ref{ineq1-Anex}) from Annex, Section 4.3, we deduce that there exist positive constants, all denoted by a generic constant $C$, such that
\begin{align*}
\left\Vert x_{\varepsilon}-x_{\delta}\right\Vert _{T} & \leq C\left\Vert
u_{\varepsilon,\delta}\right\Vert _{T} \\
& \leq C\sqrt{\varepsilon+\delta}\left( \displaystyle\int
_{0}^{T}\left\vert \nabla\varphi_{\delta}\left( x_{\delta}\left( r\right) \right)
\right\vert \left\vert \nabla\varphi_{\varepsilon}\left( x_{\varepsilon}\left( r\right)
\right) \right\vert dr\right) ^{1/2} \\
& \leq C\sqrt{\varepsilon+\delta}\left[ \left( \displaystyle\int
_{0}^{T}\left\vert \nabla\varphi_{\delta}\left( x_{\delta}\left( r\right) \right)
\right\vert ^{2}dr\right) ^{1/2}+\left( \displaystyle\int _{0}^{T}\left\vert
\nabla\varphi_{\varepsilon}\left( x_{\varepsilon}\left( r\right) \right) \right\vert
^{2}dr\right) ^{1/2}\right] \\
& \leq C\sqrt{\varepsilon+\delta}.
\end{align*}
\noindent The other assertions now clearly follow and the proof is complete.\hfill
\end{proof}
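For completeness, the inequality of Asiminoaei \& R\u{a}\c{s}canu used in the proof above can be recovered by a standard Moreau--Yosida computation (a sketch, assuming the usual properties $x=J_{\varepsilon }x+\varepsilon \nabla \varphi _{\varepsilon }\left( x\right) $ and $\nabla \varphi _{\varepsilon }\left( x\right) \in \partial \varphi \left( J_{\varepsilon }x\right) $ of the Yosida approximation):

```latex
\begin{align*}
\left\langle \nabla \varphi _{\varepsilon }(x)-\nabla \varphi _{\delta
}(y),x-y\right\rangle & =\left\langle \nabla \varphi _{\varepsilon
}(x)-\nabla \varphi _{\delta }(y),J_{\varepsilon }x-J_{\delta
}y\right\rangle \\
& \quad +\varepsilon |\nabla \varphi _{\varepsilon }(x)|^{2}+\delta |\nabla
\varphi _{\delta }(y)|^{2}-\left( \varepsilon +\delta \right) \left\langle
\nabla \varphi _{\varepsilon }(x),\nabla \varphi _{\delta
}(y)\right\rangle \\
& \geq -\left( \varepsilon +\delta \right) |\nabla \varphi _{\varepsilon
}(x)||\nabla \varphi _{\delta }(y)|,
\end{align*}
```

since the first term on the right-hand side is nonnegative by the monotonicity of $\partial \varphi $, the squared terms are nonnegative, and the cross term is controlled by the Cauchy--Schwarz inequality.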
\begin{corollary}
If $\left(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq
0}\right) $ is a stochastic basis and $M$ a $\mathcal{F}_{t}-$progressively
measurable stochastic process such that $M_{\cdot }\left( \omega \right) \in
C^{1}\left( \mathbb{R}_{+};\mathbb{R}^{d}\right),\;\mathbb{P}-a.s.\;\omega
\in \Omega$, then, under the assumptions of Proposition \ref{oSP-p1-uniq}, $%
\mathbb{P}-a.s.\;\omega \in \Omega $, the random generalized Skorohod
problem with oblique subgradients:\newline
$$
\left\{
\begin{array}{r}
X_{t}\left( \omega \right) +\displaystyle\int_{0}^{t}H\left( X_{t}\left(
\omega \right) \right) dK_{t}\left( \omega \right) =x_{0}+\displaystyle%
\int_{0}^{t}f\left( s,X_{s}\left( \omega \right) \right) ds+M_{t}\left(
\omega \right),\quad t\geq 0,\medskip \\
\multicolumn{1}{l}{dK_{t}\left( \omega \right) \in \partial \varphi \left(
X_{t}\left( \omega \right) \right) \left( dt\right)}%
\end{array}%
\right.
$$
admits a unique solution $\left( X_{\cdot }\left( \omega \right) ,K_{\cdot
}\left(\omega\right)\right).$ Moreover $X$ and $K$ are $\mathcal{F}_{t}-$%
progressively measurable stochastic processes.
\end{corollary}
\begin{proof}[{\bf Proof}]
It remains to prove that $X$ and $K$ are $\mathcal{F}_{t}-$%
progressively measurable stochastic processes. But this follows from
Proposition \ref{p-aproxm}, since the approximating equation (\ref{aproxeq})
admits a unique solution $\left( X^{\varepsilon},K^{\varepsilon }\right)$,
which is a pair of progressively measurable continuous stochastic processes.\hfill
\end{proof}
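A simple example of an admissible driver (an illustration, not taken from the text): the time average of a Brownian motion $B$,

```latex
M_{t}\left( \omega \right) =n\displaystyle\int_{t-1/n}^{t}B_{s}\left( \omega
\right) ds,\quad \textrm{with }B_{s}:=0\textrm{ for }s\leq 0,
```

has $C^{1}$ paths, with $\frac{d}{dt}M_{t}=n\left( B_{t}-B_{t-1/n}\right) $ continuous, and is progressively measurable; mollified drivers of exactly this type appear in Step 1 of the proof of Theorem \ref{weak-ex-OSVI}.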
\section{SVI with oblique subgradients}
\subsection{Notations. Hypotheses}
In this section we present the Stochastic Variational Inequalities (for
short, SVI) with oblique subgradients and the definition of their strong and
weak solutions. The proofs of the existence and uniqueness results are given
in the next subsection.
Let $\left( \Omega ,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq
0}\right) $ be a stochastic basis and $\left\{ B_{t}:t\geq 0\right\} $ a $%
\mathbb{R}^{k}-$valued Brownian motion. Our objective is to solve the SVI
with oblique reflection%
\begin{equation}
\left\{
\begin{array}{l}
X_{t}+\displaystyle\int_{0}^{t}H\left( X_{t}\right) dK_{t}=x_{0}+%
\displaystyle\int_{0}^{t}f\left( s,X_{s}\right) ds+\displaystyle%
\int_{0}^{t}g\left( s,X_{s}\right) dB_{s},\quad t\geq 0,\smallskip \\
dK_{t}\in \partial \varphi \left( X_{t}\right) \left( dt\right),%
\end{array}%
\right. \label{oSP-eq2}
\end{equation}%
where $x_{0}\in \mathbb{R}^{d}$ and%
\begin{equation}
\begin{array}{rl}
\left( i\right) \quad & \left( t,x\right) \longmapsto f\left( t,x\right) :%
\mathbb{R}_{+}\times \mathbb{R}^{d}\rightarrow \mathbb{R}^{d}\textrm{\ and}%
\;\left( t,x\right) \longmapsto g\left( t,x\right) :\mathbb{R}_{+}\times
\mathbb{R}^{d}\rightarrow \mathbb{R}^{d\times k}\;\textrm{are}\smallskip \\
& \quad \textrm{Carath\'{e}odory functions (i.e. measurable with respect to }t\textrm{
and continuous with respect to }x\textrm{),}\medskip \\
\left( ii\right) \quad & \displaystyle\int_{0}^{T}(f^{\#}\left(
t\right)) ^{2}dt+\displaystyle\int_{0}^{T}(g^{\#}\left(t\right)
)^{4}dt<\infty,%
\end{array}
\label{ob-h6}
\end{equation}%
with%
$$
f^{\#}\left( t\right) \overset{def}{=}\sup_{x\in Dom\left( \varphi \right)
}\left\vert f\left( t,x\right) \right\vert \quad \textrm{and}\quad
g^{\#}\left( t\right) \overset{def}{=}\sup_{x\in Dom\left( \varphi \right)
}\left\vert g\left( t,x\right) \right\vert.
$$
We also add Lipschitz continuity conditions:%
\begin{equation}
\begin{array}{rl}
& \exists ~\mu \in L_{loc}^{1}\left( \mathbb{R}_{+}\right) ,\;\;\exists
~\ell \in L_{loc}^{2}\left( \mathbb{R}_{+}\right) \textrm{ s.t. }\forall
~x,y\in \mathbb{R}^{d},\quad a.e.\ t\geq 0,\medskip \\
\left( i\right) \quad & \quad \quad \left\vert f\left( t,x\right) -f\left(
t,y\right) \right\vert \leq \mu \left( t\right) \left\vert x-y\right\vert
,\medskip \\
\left( ii\right) \quad & \quad \quad \left\vert g\left( t,x\right) -g\left(
t,y\right) \right\vert \leq \ell \left( t\right) \left\vert x-y\right\vert .%
\end{array}
\label{ob-h7}
\end{equation}
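For orientation, a simple class of coefficients satisfying both (\ref{ob-h6}) and (\ref{ob-h7}) (an illustration only; $F$, $G$, $a$, $b$ below are not part of the hypotheses) is

```latex
f\left( t,x\right) =a\left( t\right) +\mu \left( t\right) F\left( x\right)
,\qquad g\left( t,x\right) =b\left( t\right) +\ell \left( t\right) G\left(
x\right),
```

with $F:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ and $G:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d\times k}$ bounded and $1$-Lipschitz, $a,\mu \in L_{loc}^{2}\left( \mathbb{R}_{+}\right) $ and $b,\ell \in L_{loc}^{4}\left( \mathbb{R}_{+}\right) $ (slightly stronger integrability than (\ref{ob-h7}) requires): indeed, $f^{\#}\left( t\right) \leq \left\vert a\left( t\right) \right\vert +\mu \left( t\right) \left\Vert F\right\Vert _{\infty }$ and $g^{\#}\left( t\right) \leq \left\vert b\left( t\right) \right\vert +\ell \left( t\right) \left\Vert G\right\Vert _{\infty }$, so (\ref{ob-h6}) holds.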
\begin{definition}
\label{def-weak-strong-sol} $\left( I\right) $ Given a stochastic basis $%
(\Omega ,\mathcal{F},\mathbb{P},\left\{ \mathcal{F}_{t}\right\} _{t\geq 0})$
and a $\mathbb{R}^{k}-$valued $\mathcal{F}_{t}-$Brownian motion $\left\{
B_{t}:t\geq 0\right\} ,$ a pair $\left( X,K\right) :\Omega \times \left[
0,\infty \right[ \rightarrow \mathbb{R}^{d}\times \mathbb{R}^{d}$ of
continuous $\mathcal{F}_{t}-$progressively measurable stochastic processes
is a strong solution of the SDE (\ref{oSP-eq2}) if, $\mathbb{P}-a.s.\;\omega
\in \Omega :$%
\begin{equation}
\left\{
\begin{array}{rl}
i)\; & X_{t}\in \overline{Dom\left( \varphi \right) },\textrm{ \thinspace }%
\forall \,t\geq 0,\;\varphi \left( X_{\cdot }\right) \in L_{loc}^{1}\left(
\mathbb{R}_{+}\right) ,\smallskip \\
ii)\; & K_{\cdot }\in BV_{loc}\left( \left[ 0,\infty \right[ ;\mathbb{R}%
^{d}\right) ,\textrm{\quad\ }K_{0}=0\textrm{,}\smallskip \\
iii)\; & X_{t}+\displaystyle\int_{0}^{t}H\left( X_{s}\right) dK_{s}=x_{0}+%
\displaystyle\int_{0}^{t}f\left( s,X_{s}\right) ds+\displaystyle%
\int_{0}^{t}g\left( s,X_{s}\right) dB_{s},\ \forall ~t\geq 0,\smallskip \\
iv)\; & \forall \,0\leq s\leq t,\;\forall y:\mathbb{R}_{+}\rightarrow
\mathbb{R}^{d}\textrm{ continuous}:\smallskip \\
& \quad \quad \displaystyle\int_{s}^{t}\left\langle y\left( r\right)
-X_{r},dK_{r}\right\rangle +\displaystyle\int_{s}^{t}\varphi \left(
X_{r}\right) dr\leq \displaystyle\int_{s}^{t}\varphi \left( y\left( r\right)
\right) dr.%
\end{array}%
\right. \label{sp-20a}
\end{equation}%
That is%
$$
\left( X_{\cdot }\left( \omega \right) ,K_{\cdot }\left( \omega \right)
\right) \in \mathcal{SP}\left( H\partial \varphi ;x_{0},M_{\cdot }\left( \omega
\right)\right),\quad \mathbb{P}-a.s.\;\omega \in \Omega,
$$
with%
$$
M_{t}=\displaystyle\int_{0}^{t}f\left( s,X_{s}\right) ds+\displaystyle%
\int_{0}^{t}g\left( s,X_{s}\right) dB_{s}~.
$$
$\left( II\right) \quad $If there exists a stochastic basis $\left( \Omega ,%
\mathcal{F},\mathbb{P},\mathcal{F}_{t}\right)_{t\geq 0}$, a $\mathbb{R}%
^{k}-$valued $\mathcal{F}_{t}-$Brownian motion $\left\{ B_{t}:t\geq
0\right\} $ and a pair $\left( X_{\cdot },K_{\cdot }\right) :\Omega \times
\mathbb{R}_{+}\rightarrow \mathbb{R}^{d}\times \mathbb{R}^{d}$ of $\mathcal{F%
}_{t}-$progressively measurable continuous stochastic processes such that
$$
\left( X_{\cdot}\left(\omega\right),K_{\cdot}\left(\omega\right)
\right) \in \mathcal{SP}\left(H\partial \varphi;x_{0},M_{\cdot}\left(\omega
\right)\right),\quad \mathbb{P}-a.s.\;\omega \in \Omega,
$$
then the collection $\left(\Omega,\mathcal{F},\mathbb{P},\mathcal{F}%
_{t},B_{t},X_{t},K_{t}\right)_{t\geq 0}$ is called a weak solution of the
SVI (\ref{oSP-eq2}).
\noindent (In both cases $\left(I\right)$ and $\left(II\right)$ we will
say that $\left( X_{t},K_{t}\right) $ is a solution of the oblique reflected
SVI (\ref{oSP-eq2}).)
\end{definition}
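As a sanity check (not a new statement), in the particular case $H\left( x\right) \equiv I_{d}$ the oblique term simplifies, $\displaystyle\int_{0}^{t}H\left( X_{s}\right) dK_{s}=K_{t}$, and (\ref{oSP-eq2}) reduces to the classical SVI with subgradient reflection

```latex
\left\{
\begin{array}{l}
X_{t}+K_{t}=x_{0}+\displaystyle\int_{0}^{t}f\left( s,X_{s}\right) ds+%
\displaystyle\int_{0}^{t}g\left( s,X_{s}\right) dB_{s},\quad t\geq
0,\smallskip \\
dK_{t}\in \partial \varphi \left( X_{t}\right) \left( dt\right),
\end{array}%
\right.
```

of the type studied in the monograph of Pardoux \& R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}.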
\subsection{Existence and uniqueness}
In this section we will give the result of existence and uniqueness of the
solution for the stochastic variational inequality with oblique subgradients
introduced before. Theorem \ref{weak-ex-OSVI} deals with the existence of
a weak solution in the sense of Definition \ref{def-weak-strong-sol},
while Theorem \ref{path-w-uni} proves the uniqueness of a strong solution.
\begin{theorem}
\label{weak-ex-OSVI} Let the assumptions (\ref{osp-h0-A}), (\ref{osp-h2}), (%
\ref{osp-h3}) and (\ref{ob-h6}) be satisfied. Then the SVI (\ref{oSP-eq2})
has at least one weak solution $\left( \Omega,\mathcal{F},\mathbb{P},%
\mathcal{F}_{t},B_{t},X_{t},K_{t}\right) _{t\geq0}.$
\end{theorem}
\begin{proof}[{\bf Proof}]
The main ideas of the proof come from R\u{a}\c{s}canu \cite{Rascanu:10}. We extend $f$
and $g$ by setting $f\left( t,x\right) =g\left( t,x\right) =0$ for $t<0$.
{\bf Step 1.} {\it Approximating problem.}$\smallskip $
Let $n\in \mathbb{N}^{\ast }$ and consider the approximating equation%
\begin{equation}
\left\{
\begin{array}{l}
X_{t}^{n}=x_{0},\quad \textrm{if }t<0,\medskip \\
X_{t}^{n}+\displaystyle\int_{0}^{t}H\left( X_{t}^{n}\right)
dK_{t}^{n}=x_{0}+M_{t}^{n},\quad t\geq 0,\smallskip \\
dK_{t}^{n}\in \partial \varphi \left( X_{t}^{n}\right) dt,%
\end{array}%
\right. \label{oea_stoch}
\end{equation}%
where%
\begin{align*}
M_{t}^{n}& =\displaystyle\int_{0}^{t}f(s,\mathbb{\pi }%
_{D}(X_{s-1/n}^{n}))ds+n\displaystyle\int_{t-1/n}^{t}\left[ \displaystyle%
\int_{0}^{s}g(r,\mathbb{\pi }_{D}(X_{r-1/n}^{n}))dB_{r}\right] ds \\
& =\displaystyle\int_{0}^{t}f(s,\mathbb{\pi }_{D}(X_{s-1/n}^{n}))ds+%
\displaystyle\int_{0}^{1}\left[ \displaystyle\int_{0}^{t-\frac{1}{n}+\frac{1%
}{n}u}g(r,\mathbb{\pi }_{D}(X_{r-1/n}^{n}))dB_{r}\right] du
\end{align*}%
and $\mathbb{\pi }_{D}\left( x\right) $ is the orthogonal projection of $x$
on $D=\overline{Dom\left( \varphi \right) }.$ Since $M^{n}$ is a
progressively measurable stochastic process with $C^{1}$ paths, Corollary $%
\mathbf{1}$ shows that the approximating equation (\ref{oea_stoch}) has a unique
solution $\left( X^{n},K^{n}\right) $, a pair of continuous progressively measurable
stochastic processes.$\smallskip $
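Let us record why the mollification produces $C^{1}$ paths (an elementary verification): for any continuous $F$ with $F\left( s\right) =0$ for $s\leq 0$,

```latex
\frac{d}{dt}\left( n\displaystyle\int_{t-1/n}^{t}F\left( s\right) ds\right)
=n\left[ F\left( t\right) -F\left( t-1/n\right) \right],
```

which is continuous in $t$; applied pathwise to $F\left( s\right) =\displaystyle\int_{0}^{s}g(r,\mathbb{\pi }_{D}(X_{r-1/n}^{n}))dB_{r}$, this gives the $C^{1}$ regularity of $M^{n}$ required by Corollary $\mathbf{1}$.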
{\bf Step 2.} {\it Tightness.}$\smallskip $
Let $T\geq 0$ be arbitrarily fixed. We point out the main arguments of this step.
\begin{itemize}
\item Since, by standard arguments,%
\begin{align*}
& \mathbb{E}\left[ \sup\limits_{0\leq \theta \leq \varepsilon }\left\vert
M_{t+\theta }^{n}-M_{t}^{n}\right\vert ^{4}\right] \\
& \leq 8\left( \displaystyle\int_{t}^{t+\varepsilon }f^{\#}\left( r\right)
dr\right) ^{4}+8\displaystyle\int_{0}^{1}\mathbb{E}\sup_{0\leq \theta \leq
\varepsilon }\left( \displaystyle\int_{t-\frac{1}{n}+\frac{1}{n}u}^{t+\theta
-\frac{1}{n}+\frac{1}{n}u}g(r,\mathbb{\pi }_{D}(X_{r-1/n}^{n}))dB_{r}\right)
^{4}du \\
& \leq 8\varepsilon \left( \displaystyle\int_{t}^{t+\varepsilon
}|f^{\#}\left( r\right) |^{2}dr\right) ^{2}+C\displaystyle\int_{0}^{1}\left( %
\displaystyle\int_{t-\frac{1}{n}+\frac{1}{n}u}^{t+\varepsilon -\frac{1}{n}+%
\frac{1}{n}u}|g^{\#}\left( r\right) |^{2}dr\right) ^{2}du \\
& \leq 8\varepsilon \left( \displaystyle\int_{t}^{t+\varepsilon
}|f^{\#}\left( r\right) |^{2}dr\right) ^{2}+C\varepsilon \displaystyle%
\int_{0}^{1}\left( \displaystyle\int_{t-\frac{1}{n}+\frac{1}{n}%
u}^{t+\varepsilon -\frac{1}{n}+\frac{1}{n}u}|g^{\#}\left( r\right)
|^{4}dr\right) du \\
& \leq C^{\prime }\varepsilon \times \sup \left\{ \left( \displaystyle%
\int_{s}^{\tau }|f^{\#}\left( r\right) |^{2}dr\right) ^{2}+\displaystyle%
\int_{s}^{\tau }|g^{\#}\left( r\right) |^{4}dr;0\leq s<\tau \leq T,\;\;\tau
-s\leq \varepsilon \right\},
\end{align*}
by Proposition \ref{ch1-p1-tight} the family of laws of $\left\{
M^{n}:n\geq 1\right\} $ is tight on $C\left( \left[ 0,T\right];\mathbb{R}%
^{d}\right)$.
\item We now show that the family of laws of the random variables $%
U^{n}=\left( X^{n},K^{n},\left\updownarrow K^{n}\right\updownarrow \right) $
is tight on $C\left( \left[ 0,T\right] ;\mathbb{R}^{d}\right) \times C\left( %
\left[ 0,T\right] ;\mathbb{R}^{d}\right) \times C\left( \left[ 0,T\right] ;%
\mathbb{R}\right) \left[ =C\left( \left[ 0,T\right] ;\mathbb{R}%
^{2d+1}\right) \right].$ From Proposition \ref{p1-apri-estim} we deduce%
\begin{align*}
\left\Vert U^{n}\right\Vert _{T}& \leq C_{T}\left( \left\Vert
M^{n}\right\Vert _{T}\right), \\
\mathbf{m}_{U^{n}}\left( \varepsilon \right) & \leq C_{T}\left( \left\Vert
M^{n}\right\Vert _{T}\right) \times \sqrt{\varepsilon +\mathbf{m}%
_{M^{n}}\left( \varepsilon \right)},
\end{align*}%
and, from Lemma \ref{ch1-p2-tight}, it follows that $\left\{ U^{n};n\in
\mathbb{N}^{\ast }\right\} $ is tight on $C\left(\left[ 0,T\right];\mathbb{%
R}^{2d+1}\right).$
\item By the Prohorov theorem there exists a subsequence such that, as $%
n\rightarrow \infty,$%
$$
\left( X^{n},K^{n},\left\updownarrow K^{n}\right\updownarrow,B\right)
\rightarrow \left( X,K,V,B\right),\quad \textrm{in law}
$$
on $C\left( \left[ 0,T\right] ;\mathbb{R}^{2d+1+k}\right)$ and, by the
Skorohod theorem, we can choose a probability space $\left( \Omega,\mathcal{%
F},\mathbb{P}\right) $ and some random quadruples $(\bar{X}^{n},\bar{K}^{n},%
\bar{V}^{n},\bar{B}^{n})$, $(\bar{X},\bar{K},\bar{V},\bar{B})$ defined on $%
\left( \Omega ,\mathcal{F},\mathbb{P}\right)$, having the same laws as
resp. $\left( X^{n},K^{n},\left\updownarrow K^{n}\right\updownarrow
,B\right) $ and $(X,K,V,B),$ such that, in $C\left(\left[0,T\right];\mathbb{%
R}^{2d+1+k}\right)$, as $n\rightarrow \infty,$%
$$
(\bar{X}^{n},\bar{K}^{n},\bar{V}^{n},\bar{B}^{n}){\xrightarrow[]{\mathbb{P}-a.s.}}(\bar{X},\bar{K},\bar{V},\bar{B}).
$$
\item Remark that, by Lemma \ref{ch2-p-conv}, $(\bar{B}^{n},\{\mathcal{F}%
_{t}^{\bar{X}^{n},\bar{K}^{n},\bar{V}^{n},\bar{B}^{n}}\}),n\geq 1,\;$and $(%
\bar{B},\{\mathcal{F}_{t}^{\bar{X},\bar{K},\bar{V},\bar{B}}\})$ are $\mathbb{%
R}^{k}-$Brownian motions.
\end{itemize}
{\bf Step 3.} {\it Passing to the limit.}$\smallskip$
Since $\left( X^{n},K^{n},\left\updownarrow
K^{n}\right\updownarrow,B\right) \rightarrow (\bar{X},\bar{K},\bar{V},\bar{B})$ in law, by Proposition \ref{ch1-lsc-SI} we deduce that, for all $
0\leq s\leq t,$ $\mathbb{P}-a.s.,$
\begin{equation}
\begin{array}{c}
\bar{X}_{0}=x_{0}~,\quad \quad \bar{K}_{0}=0,\quad \quad \bar{X}_{t}\in
E,\medskip \\
\left\updownarrow \bar{K}\right\updownarrow _{t}-\left\updownarrow \bar{K}%
\right\updownarrow _{s}\leq \bar{V}_{t}-\bar{V}_{s}\quad \textrm{and}\quad 0=%
\bar{V}_{0}\leq \bar{V}_{s}\leq \bar{V}_{t}.%
\end{array}
\label{sp-24}
\end{equation}%
Moreover, since for all $0\leq s<t$ and $n\in \mathbb{N}^{\ast }$,%
$$
\int_{s}^{t}\varphi \left( X_{r}^{n}\right) dr\leq \int_{s}^{t}\varphi
\left( y\left( r\right) \right) dr-\int_{s}^{t}\left\langle y\left( r\right)
-X_{r}^{n},dK_{r}^{n}\right\rangle \;\;a.s.,
$$
then, by Proposition \ref{ch1-lsc-SI}, we infer
\begin{equation}
\displaystyle\int_{s}^{t}\varphi \left( \bar{X}_{r}\right) dr\leq %
\displaystyle\int_{s}^{t}\varphi \left( y\left( r\right) \right) dr-%
\displaystyle\int_{s}^{t}\left\langle y\left( r\right) -\bar{X}_{r},d\bar{K}%
_{r}\right\rangle. \label{sp-25}
\end{equation}%
Hence, based on (\ref{sp-24}) and (\ref{sp-25}), we have
$$d\bar{K}_{r}\in \partial \varphi \left( \bar{X}_{r}\right)\left( dr\right).$$
\noindent Using the Lebesgue theorem and, once again, Lemma \ref{ch2-p-conv},
we infer for $n\rightarrow \infty $,%
\begin{align*}
\bar{M}_{\cdot }^{n}& =x_{0}+\displaystyle\int_{0}^{\cdot }f(s,\mathbb{\pi }%
_{D}(\bar{X}_{s-1/n}^{n}))ds+n\displaystyle\int_{\cdot -1/n}^{\cdot }\left[ %
\displaystyle\int_{0}^{s}g(r,\mathbb{\pi }_{D}(\bar{X}_{r-1/n}^{n}))dB_{r}%
\right] ds \\
& \longrightarrow \bar{M}_{\cdot }=x_{0}+\displaystyle\int_{0}^{\cdot }f(s,%
\bar{X}_{s})ds+\displaystyle\int_{0}^{\cdot }g(s,\bar{X}_{s})d\bar{B}%
_{s},\quad \textrm{in }S_{d}^{0}\left[ 0,T\right],
\end{align*}
where $S_{d}^{0}\left[ 0,T\right]$ is the space of progressively measurable continuous stochastic processes defined in Annex, Section 4.3.{\smallskip}
\noindent By Proposition \ref{ch3-c4-cont}, the following equality of probability laws holds:%
$$
\mathcal{L}\left( \bar{X}^{n},\bar{K}^{n},\bar{B}^{n},\bar{M}^{n}\right) =%
\mathcal{L}\left( X^{n},K^{n},B^{n},M^{n}\right) \quad \textrm{on\ }C(\mathbb{R}%
_{+};\mathbb{R}^{d+d+k+d}),
$$
where by $\mathcal{L}(\cdot)$ we mean the probability law of the random variable.
Since, for every $t\geq 0,$%
$$
X_{t}^{n}+\displaystyle\int_{0}^{t}H\left( X_{s}^{n}\right)
dK_{s}^{n}-M_{t}^{n}=0,\;\;a.s.,
$$
then, by Proposition \ref{ch1-lsc-SI}, we have%
$$
\bar{X}_{t}^{n}+\displaystyle\int_{0}^{t}H\left( \bar{X}_{s}^{n}\right) d%
\bar{K}_{s}^{n}-\bar{M}_{t}^{n}=0,\;\;a.s.
$$
Letting $n\rightarrow \infty,$%
$$
\bar{X}_{t}+\displaystyle\int_{0}^{t}H\left( \bar{X}_{s}\right) d\bar{K}_{s}-%
\bar{M}_{t}=0,\;\;a.s.,
$$
that is, $\mathbb{P}-a.s.,$%
$$
\bar{X}_{t}+\displaystyle\int_{0}^{t}H\left(\bar{X}_{s}\right)d\bar{K}%
_{s}=x_{0}+\displaystyle\int_{0}^{t}f\left(s,\bar{X}_{s}\right)ds+%
\displaystyle\int_{0}^{t}g\left(s,\bar{X}_{s}\right) d\bar{B}_{s},\;\forall
~t\in \left[ 0,T\right].
$$
Consequently $(\bar{\Omega},\mathcal{\bar{F}},\mathbb{\bar{P}},\mathcal{F}%
_{t}^{\bar{B},\bar{X}},\bar{X}_{t},\bar{K}_{t},\bar{B}_{t})_{t\geq 0}$ is a
weak solution of the SVI (\ref{oSP-eq2}). The proof is complete.\hfill
\end{proof}
\begin{theorem}
\label{path-w-uni} If the assumptions (\ref{osp-h0-A}), (\ref{osp-h2}), (\ref%
{osp-h3}), (\ref{ob-h6}) and (\ref{ob-h7}) are satisfied, then the SVI (\ref%
{oSP-eq2}) has a unique strong solution $\left( X,K\right) \in
S_{d}^{0}\times S_{d}^{0}.$
\end{theorem}
\begin{proof}[{\bf Proof}]
It is sufficient to prove the {\it pathwise uniqueness}, since by Theorem
1.1, page 149, from Ikeda \& Watanabe \cite{Ikeda/Watanabe:81} {\it the
existence of a weak solution} and {\it the pathwise uniqueness} imply
the existence of a strong solution.\smallskip
Let $\left( X,K\right)$, $(\hat{X},\hat{K})\in S_{d}^{0}\times S_{d}^{0}$
be two solutions of the SVI with oblique reflection (\ref{oSP-eq2}). Consider
the symmetric and strictly positive matrix
$$
Q_{r}=H^{-1}\left( X_{r}\right) +H^{-1}(\hat{X}_{r}).
$$
We have that%
$$
dQ_{r}^{1/2}=dN_{r}+\displaystyle\sum_{j=1}^{k}\beta _{r}^{\left( j\right)
}dB_{r}^{\left( j\right)},
$$
where $N$ is a $\mathbb{R}^{d\times d}-$valued $\mathcal{P}-$measurable bounded variation continuous stochastic process
(for short, m.b-v.c.s.p.), $%
N_{0}=0$ and, for each $j\in \overline{1,k}$, $\beta ^{\left( j\right)}$ is
a $\mathbb{R}^{d\times d}-$valued $\mathcal{P}-$measurable stochastic process (for short, m.s.p.) such that $%
\displaystyle\int_{0}^{T}|\beta _{r}^{\left( j\right) }|^{2}dr<\infty$, $%
a.s.,$ for all $T>0.$ \newline
Letting%
$$
U_{r}=Q_{r}^{1/2}(X_{r}-\hat{X}_{r}),
$$
then%
\begin{align*}
dU_{r}& =\left[ dQ_{r}^{1/2}\right] (X_{r}-\hat{X}_{r})+Q_{r}^{1/2}d(X_{r}-%
\hat{X}_{r})+\displaystyle\sum_{j=1}^{k}\beta _{r}^{\left(j\right)
}(g(r,X_{r})-g(r,\hat{X}_{r}))e_{j} \\
& =d\mathcal{K}_{r}+\mathcal{G}_{r}dB_{r},
\end{align*}%
where%
\begin{align*}
d\mathcal{K}_{r}& =\left( dN_{r}\right) Q_{r}^{-1/2}U_{r}+Q_{r}^{1/2}\left[
H(\hat{X}_{r})d\hat{K}_{r}-H\left( X_{r}\right) dK_{r}\right] \\
& +Q_{r}^{1/2}\left[ f\left( r,X_{r}\right) -f(r,\hat{X}_{r})\right] dr+%
\displaystyle\sum_{j=1}^{k}\beta _{r}^{\left( j\right) }(g(r,X_{r})-g(r,\hat{%
X}_{r}))e_{j},\medskip \\
\mathcal{G}_{r}& =\Gamma _{r}+Q_{r}^{1/2}\left[ g(r,X_{r})-g(r,\hat{X}_{r})%
\right],
\end{align*}%
and $\Gamma _{r}$ is a $\mathbb{R}^{d\times k}$ matrix with the columns $%
\beta _{r}^{\left( 1\right) }(X_{r}-\hat{X}_{r})$, \ldots, $\beta
_{r}^{\left( k\right) }(X_{r}-\hat{X}_{r}).$
Using (\ref{eui_1}) and the properties of $H$ and $H^{-1}$, we have%
\begin{align*}
& \left\langle U_{r},Q_{r}^{1/2}\left[ H(\hat{X}_{r})d\hat{K}_{r}-H\left(
X_{r}\right) dK_{r}\right] \right\rangle \\
& =\left\langle X_{r}-\hat{X}_{r},\left( \left[ H\left( X_{r}\right) \right]
^{-1}-\left[ H(\hat{X}_{r})\right] ^{-1}\right) \left[ H(\hat{X}_{r})d\hat{K}%
_{r}+H\left( X_{r}\right) dK_{r}\right] \right\rangle \\
& -2\left\langle X_{r}-\hat{X}_{r},dK_{r}-d\hat{K}_{r}\right\rangle \\
& \leq bc|X_{r}-\hat{X}_{r}|^{2}(d\left\updownarrow
K\right\updownarrow _{r}+d\updownarrow \!\hat{K}\!\updownarrow _{r}).
\end{align*}
\noindent Hence, there exists a positive constant $C=C(b,c,r_{0})$ such that%
$$
\left\langle U_{r},d\mathcal{K}_{r}\right\rangle +\frac{1}{2}\left\vert
\mathcal{G}_{r}\right\vert ^{2}dr\leq |U_{r}|^{2}dV_{r},
$$
where%
$$
dV_{r}=C\times \left( \mu \left( r\right) dr+\ell ^{2}\left( r\right)
dr+d\left\updownarrow N\right\updownarrow _{r}+d\left\updownarrow
K\right\updownarrow _{r}+d\updownarrow \!\hat{K}\!\updownarrow _{r}\right) +C%
\displaystyle\sum_{j=1}^{k}|\beta _{r}^{\left( j\right) }|^{2}dr.
$$
By Proposition \ref{AnexC-p0-fsi} we infer%
$$
\mathbb{E}\frac{e^{-2V_{s}}\left\vert U_{s}\right\vert ^{2}}{%
1+e^{-2V_{s}}\left\vert U_{s}\right\vert ^{2}}\leq \mathbb{E}\frac{%
e^{-2V_{0}}\left\vert U_{0}\right\vert ^{2}}{1+e^{-2V_{0}}\left\vert
U_{0}\right\vert ^{2}}=0.
$$
\noindent Since the random variable under the expectation is nonnegative, it follows that%
$$
Q_{s}^{1/2}(X_{s}-\hat{X}_{s})=U_{s}=0,\;\mathbb{P}-a.s.,\textrm{ for all }%
s\geq 0
$$
and, by the continuity of $X$ and $\hat{X},$ we conclude that, $\mathbb{P}%
-a.s.,$%
$$
X_{s}=\hat{X}_{s}\quad \textrm{for all\ }s\geq 0.
$$
\hfill
\end{proof}
\section{Annex}
For the clarity of the proofs in the main body of this article, we group in this section some useful results that are used throughout the paper.
\subsection{A priori estimates}
We give five lemmas with a priori estimates of the solutions $\left(
x,k\right) \in \mathcal{SP}\left( H\partial \varphi ;x_{0},m\right).$ These
lemmas and their proofs are similar to those from the monograph of Pardoux \&
R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}, but for the reader's convenience we
give here the proofs in this new framework.$\smallskip $
\begin{lemma}
If $\left( x,k\right) \in \mathcal{SP}\left( H\partial\varphi;x_{0},m\right) $
and $(\hat{x},\hat{k}) \in \mathcal{SP}\left( H\partial\varphi;\hat{x}_{0},\hat{m%
}\right), $ then for all $0\leq$ $s\leq t:$%
\begin{equation}
\displaystyle\int _{s}^{t}\left\langle x\left( r\right) -\hat{x}\left(
r\right),dk\left( r\right) -d\hat{k}\left( r\right) \right\rangle \geq0.
\label{osp-2}
\end{equation}
\end{lemma}
\noindent We recall the notation for the modulus of continuity of a function $g:%
\left[ 0,T\right] \rightarrow \mathbb{R}^{d}:$%
$$
\mathbf{m}_{g}\left( \varepsilon \right) =\sup \left\{ \left\vert g\left(
u\right)-g\left( v\right) \right\vert :u,v\in \left[0,T\right],\;\left\vert u-v\right\vert \leq \varepsilon \right\}.
$$
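For instance (a quick illustration), if $g$ is $\alpha $-H\"{o}lder continuous, $\left\vert g\left( u\right) -g\left( v\right) \right\vert \leq C\left\vert u-v\right\vert ^{\alpha }$, then

```latex
\mathbf{m}_{g}\left( \varepsilon \right) \leq C\varepsilon ^{\alpha },\quad
\varepsilon \in \left[ 0,T\right],
```

and, in general, $g$ is uniformly continuous on $\left[ 0,T\right] $ if and only if $\mathbf{m}_{g}\left( \varepsilon \right) \rightarrow 0$ as $\varepsilon \searrow 0$; the estimate (\ref{ob1}) below is a bound of precisely this type for $x$.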
\begin{lemma}
\label{l1-mc-x}Let the assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (\ref%
{osp-h2}) and (\ref{osp-h3}) be satisfied. If $\left( x,k\right) \in\mathcal{%
SP}\left( H\partial\varphi;x_{0},m\right) ,$ then for all $0\leq s\leq t\leq
T:$%
\begin{equation}
\begin{array}{r}
\mathbf{m}_{x}\left( t-s\right) \leq\left[ \left( t-s\right) +\mathbf{m}%
_{m}\left( t-s\right) +~\sqrt{\mathbf{m}_{m}\left( t-s\right) \left(
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) }\right] \medskip \\
\times\exp\left\{ C\left[ 1+\left( t-s\right) +\left( \left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}+1\right)
\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) \right] \right\},%
\end{array}
\label{ob1}
\end{equation}
where $C=C\left( b,c,L\right) >0.$
\end{lemma}
\begin{proof}[{\bf Proof}]
Let $0\leq s\leq t$ and%
$$
h\left(t\right) =\left\langle H^{-1}\left( x\left( t\right) \right) \left[
x\left(t\right) -m\left( t\right) -x\left( s\right) +m\left( s\right) %
\right],x\left( t\right) -m\left( t\right) -x\left( s\right) +m\left(
s\right)\right\rangle.
$$
We have%
\begin{align*}
h\left( t\right) & =2\displaystyle\int_{s}^{t}\left\langle H^{-1}\left(
x\left( t\right) \right) \left[ x\left( r\right) -m\left( r\right) -x\left(
s\right) +m\left( s\right) \right] ,d\left[ x\left( r\right) -m\left(
r\right) -x\left( s\right) +m\left( s\right) \right] \right\rangle \\
& =-2\displaystyle\int_{s}^{t}\left\langle H^{-1}\left( x\left( t\right)
\right) \left[ x\left( r\right) -m\left( r\right) -x\left( s\right) +m\left(
s\right) \right] ,H\left( x\left( r\right) \right) dk\left( r\right)
\right\rangle \\
& =2\displaystyle\int_{s}^{t}\left\langle H^{-1}\left( x\left( t\right)
\right) \left[ m\left( r\right) -m\left( s\right) \right] ,H\left( x\left(
r\right) \right) dk\left( r\right) \right\rangle +2\displaystyle%
\int_{s}^{t}\left\langle x\left( s\right) -x\left( r\right) ,dk\left(
r\right) \right\rangle \\
& +2\displaystyle\int_{s}^{t}\left\langle \left[ H^{-1}\left( x\left(
r\right) \right) -H^{-1}\left( x\left( t\right) \right) \right] \left[
x\left( r\right) -x\left( s\right) \right] ,H\left( x\left( r\right) \right)
dk\left( r\right) \right\rangle .
\end{align*}%
Since%
\begin{align*}
\displaystyle\int_{s}^{t}\left\langle x\left( s\right) -x\left( r\right)
,dk\left( r\right) \right\rangle & \leq \displaystyle\int_{s}^{t}\left[
\varphi \left( x\left( s\right) \right) -\varphi \left( x\left( r\right)
\right) \right] dr \\
& \leq L\left( t-s\right) +L\displaystyle\int_{s}^{t}\left\vert x\left(
s\right) -x\left( r\right) \right\vert dr \\
& \leq L\left( t-s\right) +\frac{L}{2}\left( t-s\right) +\frac{L}{2}%
\displaystyle\int_{s}^{t}\left\vert x\left( r\right) -x\left( s\right)
\right\vert ^{2}dr
\end{align*}%
and%
$$
\frac{1}{2c}\left\vert x\left( t\right) -x\left( s\right) \right\vert ^{2}-%
\frac{1}{c}\left\vert m\left( t\right) -m\left( s\right) \right\vert
^{2}\leq h\left( t\right),
$$
then%
\begin{align*}
\left\vert x\left( t\right) -x\left( s\right) \right\vert ^{2}& \leq 2~%
\mathbf{m}_{m}^{2}\left( t-s\right) +4c^{3}~\mathbf{m}_{m}\left( t-s\right)
\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) +6cL\left( t-s\right) \\
& +\displaystyle\int_{s}^{t}\left[ 2cL\left\vert x\left( r\right) -x\left(
s\right) \right\vert ^{2}dr+4bc^{2}\left\vert x\left( r\right) -x\left(
t\right) \right\vert \left\vert x\left( r\right) -x\left( s\right)
\right\vert d\left\updownarrow k\right\updownarrow _{r}\right] .
\end{align*}%
Here we continue the estimates by
\begin{align*}
& 4bc^{2}\displaystyle\int_{s}^{t}\left\vert x\left( r\right) -x\left(
t\right) \right\vert \left\vert x\left( r\right) -x\left( s\right)
\right\vert d\left\updownarrow k\right\updownarrow _{r} \\
& \leq 4bc^{2}\left\vert x\left( s\right) -x\left( t\right) \right\vert %
\displaystyle\int_{s}^{t}\left\vert x\left( r\right) -x\left( s\right)
\right\vert d\left\updownarrow k\right\updownarrow _{r}+4bc^{2}\displaystyle%
\int_{s}^{t}\left\vert x\left( r\right) -x\left( s\right) \right\vert
^{2}d\left\updownarrow k\right\updownarrow _{r} \\
& \leq \frac{1}{2}\left\vert x\left( s\right) -x\left( t\right) \right\vert
^{2}+\frac{1}{2}\left( 4bc^{2}\right) ^{2}\left( \displaystyle%
\int_{s}^{t}\left\vert x\left( r\right) -x\left( s\right) \right\vert
d\left\updownarrow k\right\updownarrow _{r}\right) ^{2} \\
& +4bc^{2}\displaystyle\int_{s}^{t}\left\vert x\left( r\right) -x\left(
s\right) \right\vert ^{2}d\left\updownarrow k\right\updownarrow _{r}
\end{align*}%
and we obtain%
\begin{align*}
& \left\vert x\left( t\right) -x\left( s\right) \right\vert ^{2} \\
& \leq 4~\mathbf{m}_{m}^{2}\left( t-s\right) +8c^{3}~\mathbf{m}_{m}\left(
t-s\right) \left( \left\updownarrow k\right\updownarrow
_{t}-\left\updownarrow k\right\updownarrow _{s}\right) +12cL\left(
t-s\right) \\
& +4cL\displaystyle\int_{s}^{t}\left\vert x\left( r\right) -x\left( s\right)
\right\vert ^{2}dr+\left[ 16b^{2}c^{4}\left( \left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}\right)
+8bc^{2}\right] \displaystyle\int_{s}^{t}\left\vert x\left( r\right)
-x\left( s\right) \right\vert ^{2}d\left\updownarrow k\right\updownarrow
_{r}~.
\end{align*}%
By the Stieltjes-Gronwall inequality, from this last inequality, the
estimate (\ref{ob1}) follows.\hfill
\end{proof}
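For the reader's convenience, we recall the Stieltjes--Gronwall inequality in the standard form used above (the statement below is classical and is given only for orientation; the constants are not tied to the notation of the present paper): if $u:\left[ s,T\right] \rightarrow \left[ 0,\infty \right[ $ is a bounded measurable function, $a\geq 0$ and $V:\left[ s,T\right] \rightarrow \left[ 0,\infty \right[ $ is a continuous increasing function such that%
$$
u\left( t\right) \leq a+\int_{s}^{t}u\left( r\right) dV\left( r\right)
,\quad \forall ~t\in \left[ s,T\right] ,
$$
then%
$$
u\left( t\right) \leq a~e^{V\left( t\right) -V\left( s\right) },\quad
\forall ~t\in \left[ s,T\right] .
$$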
For the next result we first remark that, if $E\subset \mathbb{R}^{d}$ is a closed convex set such that%
$$
\exists \textrm{~}r_{0}>0,\;E_{r_{0}}\neq \emptyset \quad \textrm{and}\quad
h_{0}=\sup_{z\in E}dist\left( z,E_{r_{0}}\right) <\infty
$$
(in particular if $E$ is bounded), then for every $0<\delta \leq \frac{r_{0}%
}{2\left( 1+h_{0}\right)},$ $y\in E,$ $\hat{y}=\pi_{E_{r_{0}}}\left(
y\right),$ $v_{y}=\frac{1}{1+h_{0}}\left( \hat{y}-y\right)$ and for all $%
x\in E\cap \overline{B}\left(y,\delta \right)$ we have%
\begin{equation}
\overline{B}\left( x+v_{y},\delta \right) \subset \overline{B}\left( y+v_{y},%
\frac{r_{0}}{1+h_{0}}\right) \subset conv\left\{y,\overline{B}\left(\hat{y}%
,r_{0}\right) \right\} \subset E. \label{osp-suibc}
\end{equation}
\begin{lemma}
\label{l1-oSp}Let the assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (\ref%
{osp-h2}) and (\ref{osp-h3}) be satisfied. If $\left( x,k\right) \in
\mathcal{SP}\left( H\partial \varphi ;x_{0},m\right) ,$ $0\leq s\leq t\leq T$
and%
$$
\sup_{r\in \left[ s,t\right] }\left\vert x\left( r\right) -x\left( s\right)
\right\vert \leq 2\delta _{0}=\frac{\rho _{0}}{2bc}\wedge \rho _{0}~,\quad
\textrm{with }\rho _{0}=\frac{r_{0}}{2\left( 1+r_{0}+h_{0}\right) },
$$
then%
\begin{equation}
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\leq \frac{1}{\rho _{0}}\left\vert k\left( t\right)
-k\left( s\right) \right\vert +\frac{3L}{\rho _{0}}\left( t-s\right)
\label{osp-3}
\end{equation}%
and%
\begin{equation}
\left\vert x\left( t\right) -x\left( s\right) \right\vert +\left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}\leq ~%
\sqrt{t-s+\mathbf{m}_{m}\left( t-s\right)}\times e^{C_{T}\left(
1+\left\Vert m\right\Vert _{T}^{2}\right)}, \label{ob2}
\end{equation}%
where $C_{T}=C\left( b,c,r_{0},h_{0},L,T\right) >0.$
\end{lemma}
\begin{proof}
Remark first that $D_{r_{0}}\subset D_{\delta _{0}}~.$ Let $\alpha \in
C\left( \left[ 0,\infty \right[ ;\mathbb{R}^{d}\right),$ $\left\Vert \alpha
\right\Vert _{\left[ s,t\right] }\leq 1,$ be arbitrary. Consider $y=x\left(
s\right) \in D,$ $\hat{y}=\pi_{D_{r_{0}}}\left( y\right) $ and%
$$
v_{y}=\dfrac{1}{1+h_{0}}\left( \hat{y}-y\right).
$$
Let $z\left( r\right) =x\left( r\right) +v_{y}+\rho _{0}\alpha \left(
r\right) ,$ $r\in \left[ s,t\right] $. Since $\left\vert x\left( r\right)
-y\right\vert \leq 2\delta _{0}\leq \rho _{0},$ then%
\begin{align*}
x\left( r\right) +v_{y}+\rho _{0}\alpha \left( r\right) & \in \overline{B}%
\left( x\left( r\right) +v_{y},\rho _{0}\right) \\
& \subset \overline{B}\left( y+v_{y},\frac{r_{0}}{1+h_{0}}\right) \\
& \subset D.
\end{align*}%
Remark that%
$$
\left\vert z\left( r\right) -x\left( r\right) \right\vert \leq \frac{h_{0}}{%
1+h_{0}}+\rho _{0}\leq 2
$$
and%
$$
\left\vert \varphi \left( z\left( r\right) \right) -\varphi \left( x\left(
r\right) \right) \right\vert \leq 3L.
$$
Therefore%
\begin{align*}
\rho _{0}\displaystyle\int_{s}^{t}\left\langle \alpha \left( r\right)
,dk\left( r\right) \right\rangle & \leq -\displaystyle\int_{s}^{t}\left%
\langle v_{y},dk\left( r\right) \right\rangle +\displaystyle\int_{s}^{t}%
\left[ \varphi \left( z\left( r\right) \right) -\varphi \left( x\left(
r\right) \right) \right] dr \\
& \leq -\left\langle v_{y},k\left( t\right) -k\left( s\right) \right\rangle
+3L\left( t-s\right) .
\end{align*}%
Taking the $\sup_{\left\Vert \alpha \right\Vert _{\left[ s,t\right] }\leq 1}$,
we infer%
$$
\rho _{0}\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) \leq \left\vert k\left( t\right) -k\left(
s\right) \right\vert +3L\left( t-s\right) ,
$$
that is (\ref{osp-3}).
We have also%
\begin{align*}
& \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s} \\
& \leq \frac{1}{\rho _{0}}\left\vert k\left( t\right) -k\left( s\right)
\right\vert +\frac{3L}{\rho_{0}}\left(t-s\right) \\
& =\frac{1}{\rho _{0}}\displaystyle\int_{s}^{t}\left[ H^{-1}\left( x\left(
r\right) \right) -H^{-1}\left( x\left( s\right) \right) \right] H\left(
x\left( r\right) \right) dk\left( r\right) +\frac{1}{\rho _{0}}H^{-1}\left(
x\left( s\right) \right) \displaystyle\int_{s}^{t}H\left( x\left( r\right)
\right) dk\left( r\right) \\
& +\frac{3L}{\rho _{0}}\left( t-s\right) \\
& \leq \frac{bc}{\rho _{0}}\displaystyle\int_{s}^{t}\left\vert x\left(
r\right) -x\left( s\right) \right\vert d\left\updownarrow
k\right\updownarrow _{r}+\frac{c}{\rho _{0}}\left\vert -x\left( t\right)
+x\left( s\right) +m\left( t\right) -m\left( s\right) \right\vert +\frac{3L}{%
\rho _{0}}\left( t-s\right) \\
& \leq \frac{bc}{\rho _{0}}2\delta _{0}\left( \left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}\right) +%
\frac{c}{\rho _{0}}\left\vert x\left( t\right) -x\left( s\right) \right\vert
+\frac{c}{\rho _{0}}\mathbf{m}_{m}\left( t-s\right) +\frac{3L}{\rho _{0}}%
\left( t-s\right) \\
& \leq \frac{1}{2}\left( \left\updownarrow k\right\updownarrow
_{t}-\left\updownarrow k\right\updownarrow _{s}\right) +\frac{c}{\rho _{0}}%
\left\vert x\left( t\right) -x\left( s\right) \right\vert +\frac{c}{\rho _{0}%
}\mathbf{m}_{m}\left( t-s\right) +\frac{3L}{\rho _{0}}\left( t-s\right)
\end{align*}%
and, consequently,%
\begin{equation}
\begin{array}{lll}
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s} & \leq & \dfrac{2c}{\rho _{0}}\left\vert x\left(
t\right) -x\left( s\right) \right\vert +\dfrac{2c}{\rho _{0}}\mathbf{m}%
_{m}\left( t-s\right) +\dfrac{6L}{\rho _{0}}\left( t-s\right) \medskip \\
& \leq & \dfrac{1}{b}+\dfrac{2c}{\rho _{0}}\mathbf{m}_{m}\left( t-s\right) +%
\dfrac{6L}{\rho _{0}}T\medskip \\
& \leq & C_{1}\left( 1+\left\Vert m\right\Vert _{T}\right) ,%
\end{array}
\label{o4}
\end{equation}%
with $C_{1}=C_{1}\left( T,b,c,\rho _{0},L\right) .$
Now, plugging this estimate into (\ref{ob1}), it clearly follows that
$$
\mathbf{m}_{x}\left( t-s\right) \leq \left[ \left( t-s\right) +\mathbf{m}%
_{m}\left( t-s\right) +~\sqrt{\mathbf{m}_{m}\left( t-s\right) }\right] \exp %
\left[ C^{\prime }(1+\left\Vert m\right\Vert _{T}^{2})\right] ,
$$
with $C^{\prime }=C^{\prime }\left( b,c,L,r_{0},h_{0},T\right) .$ This last
inequality, used in (\ref{o4}), yields the estimate (\ref{ob2}).\hfill
\end{proof}
\begin{lemma}
\label{sp-l2}Let the assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (\ref%
{osp-h2}) and (\ref{osp-h3}) be satisfied. Let $\left( x,k\right) \in%
\mathcal{SP}\left( H\partial\varphi;x_{0},m\right) ,$ $0\leq s\leq t\leq T$
and $x\left( r\right) \in D_{\delta_{0}}$ for all $r\in\left[ s,t\right] $.
Then%
$$
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\leq L\left( 1+\frac{2}{\delta_{0}}\right)\left(
t-s\right)
$$
and%
$$
\mathbf{m}_{x}\left( t-s\right) \leq C_{T}\times\left[ \left( t-s\right) +%
\mathbf{m}_{m}\left( t-s\right) \right],
$$
where $C_{T}=C_{T}\left( b,c,r_{0},h_{0},L,T\right) >0.$
\end{lemma}
\begin{proof}
Let $y\left( r\right) =x\left( r\right) +\dfrac{\delta _{0}}{2}\alpha \left(
r\right) $, with $\alpha \in C\left( \mathbb{R}_{+};\mathbb{R}^{d}\right) ,$ $%
\left\Vert \alpha \right\Vert _{\left[ s,t\right] }\leq 1.$ Then $y\left(
r\right) \in D$ and
\begin{align*}
\dfrac{\delta _{0}}{2}\displaystyle\int_{s}^{t}\left\langle \alpha \left(
r\right) ,dk\left( r\right) \right\rangle & =\displaystyle%
\int_{s}^{t}\left\langle y\left( r\right) -x\left( r\right) ,dk\left(
r\right) \right\rangle \\
& \leq \displaystyle\int_{s}^{t}\left[ \varphi \left( y\left( r\right)
\right) -\varphi \left( x\left( r\right) \right) \right] dr \\
& \leq L\left( t-s\right) +L\displaystyle\int_{s}^{t}\left\vert y\left(
r\right) -x\left( r\right) \right\vert dr \\
& \leq L\left( t-s\right) +L\dfrac{\delta _{0}}{2}\left( t-s\right) .
\end{align*}%
Taking the supremum over all $\alpha $ such that $\left\Vert \alpha
\right\Vert _{\left[ s,t\right] }\leq 1$, we have
$$
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s} \leq \left( \frac{2L}{\delta _{0}}+L\right)
\left( t-s\right)
$$
and, by Lemma \ref{l1-mc-x}, the result follows.\hfill
\end{proof}
Denote now $\mathbf{\mu}_{m}\left( \varepsilon\right) =\varepsilon +\mathbf{m}%
_{m}\left( \varepsilon\right) ,$ $\varepsilon\geq0.$
\begin{lemma}
\label{oSP-l4-compact}Let the assumptions (\ref{osp-h0}), (\ref{osp-h0-A}), (%
\ref{osp-h2}) and (\ref{osp-h3}) be satisfied and $\left( x,k\right) \in
\mathcal{SP}\left( H\partial \varphi ;x_{0},m\right) .$ Then, there exists a
positive constant $C_{T}\left( \left\Vert m\right\Vert _{T}\right) =C\left(
x_{0},b,c,r_{0},h_{0},L,T,\left\Vert m\right\Vert _{T}\right) ,$ increasing
function with respect to $\left\Vert m\right\Vert _{T},$ such that, for all $%
0\leq s\leq t\leq T:$%
\begin{equation}
\begin{array}{ll}
\left( a\right) \quad & \left\Vert x\right\Vert _{T}+\left\updownarrow
k\right\updownarrow _{T}\leq C_{T}\left( \left\Vert m\right\Vert _{T}\right)
,\medskip \\
\left( b\right) \quad & \left\vert x\left( t\right) -x\left( s\right)
\right\vert +\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\leq C_{T}\left( \left\Vert m\right\Vert _{T}\right)
\times \sqrt{\mathbf{\mu }_{m}\left( t-s\right) }.%
\end{array}
\label{ob4}
\end{equation}
\end{lemma}
\begin{proof}
We will follow the ideas of Lions and Sznitman from \cite{Lions/Sznitman-84}.
\textbf{\textit{Step 1.}} Define the sequence%
\begin{align*}
t_{0}& =T_{0}=0 \\
T_{1}& =\inf \left\{ t\in \left[ t_{0},T\right] :dist\left( x\left( t\right)
,Bd(D)\right) \leq \frac{\delta _{0}}{2}\right\} , \\
t_{1}& =\inf \left\{ t\in \left[ T_{1},T\right] :\left\vert x\left( t\right)
-x\left( T_{1}\right) \right\vert >\delta _{0}\right\} , \\
T_{2}& =\inf \left\{ t\in \left[ t_{1},T\right] :dist\left( x\left( t\right)
,Bd(D)\right) \leq \frac{\delta _{0}}{2}\right\} , \\
& \cdots \;\cdots \;\cdots \\
t_{i}& =\inf \left\{ t\in \left[ T_{i},T\right] :\left\vert x\left( t\right)
-x\left( T_{i}\right) \right\vert >\delta _{0}\right\} , \\
T_{i+1}& =\inf \left\{ t\in \left[ t_{i},T\right] :dist\left( x\left(
t\right) ,Bd(D)\right) \leq \frac{\delta _{0}}{2}\right\} , \\
& \cdots \;\cdots \;\cdots
\end{align*}%
Clearly, we have%
$$
0=T_{0}=t_{0}\leq T_{1}<t_{1}\leq T_{2}<\cdots \leq T_{i}<t_{i}\leq
T_{i+1}<t_{i+1}\leq \cdots \leq T.
$$
Denote%
$$
K\left( t\right) ={\int_{0}^{t}}H\left( x(r)\right) dk\left( r\right)
$$
and it follows that there exists a positive constant $\tilde{c}$ such that
$$
\left\updownarrow K\right\updownarrow _{t}\leq c\left\updownarrow
k\right\updownarrow _{t}\leq \tilde{c}\left\updownarrow K\right\updownarrow
_{t}.
$$
We have
\begin{itemize}
\item for $t_{i}\leq s\leq t\leq T_{i+1}:$%
$$
\left\vert x\left( t\right) -x\left( s\right) \right\vert \leq
\left\updownarrow K\right\updownarrow _{t}-\left\updownarrow
K\right\updownarrow _{s}+\left\vert m\left( t\right) -m\left( s\right)
\right\vert .
$$
Since for $t_{i}\leq r\leq T_{i+1}$, $x\left( r\right) \in D_{\delta _{0}}$
then, by Lemma \ref{sp-l2}, for $t_{i}\leq s\leq t\leq T_{i+1}\,,$%
$$
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\leq L\left( 1+\frac{2}{\delta _{0}}%
\right) \left( t-s\right)
$$
and%
$$
\mathbf{m}_{x}\left( t-s\right) \leq \left[ \left( t-s\right) +\mathbf{m}%
_{m}\left( t-s\right) \right] \times C_{T}.
$$
Hence, denoting in what follows by $C_{T}(\left\Vert m\right\Vert _{T})$ a generic constant depending on the supremum norm of the continuous function $m$, we have
\begin{align*}
\mathbf{m}_{x}\left( t-s\right) +\left\updownarrow k\right\updownarrow
_{t}-\left\updownarrow k\right\updownarrow _{s}& \leq \mathbf{\mu }%
_{m}\left( t-s\right) \times C_{T} \\
& \leq \sqrt{\mathbf{\mu }_{m}\left( t-s\right) }\times C_{T}\left(
\left\Vert m\right\Vert _{T}\right).
\end{align*}
\item for $T_{i}\leq s\leq t\leq t_{i},$ by Lemma \ref{l1-oSp} we have%
$$
\left\vert x\left( t\right) -x\left( s\right) \right\vert +\left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}\leq\sqrt{%
\mathbf{\mu}_{m}\left( t-s\right) }\times C_{T}\left( \left\Vert
m\right\Vert _{T}\right).
$$
\item for $T_{i}\leq s\leq t_{i}\leq t\leq T_{i+1}~$,%
\begin{align*}
& \left\vert x\left( t\right) -x\left( s\right) \right\vert
+\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s} \\
& \leq \left\vert x\left( t\right) -x\left( t_{i}\right) \right\vert
+\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{t_{i}}+\left\vert x\left( t_{i}\right) -x\left(
s\right) \right\vert +\left\updownarrow k\right\updownarrow
_{t_{i}}-\left\updownarrow k\right\updownarrow _{s} \\
& \leq \sqrt{\mathbf{\mu }_{m}\left( t-t_{i}\right) }\times C_{T}+\sqrt{%
\mathbf{\mu }_{m}\left( t_{i}-s\right) }\times C_{T}\left( \left\Vert
m\right\Vert _{T}\right) \\
& \leq \sqrt{\mathbf{\mu }_{m}\left( t-s\right) }\times C_{T}\left( \left\Vert m\right\Vert _{T}\right).
\end{align*}
\end{itemize}
Consequently, for all $i\in \mathbb{N}$ and $T_{i}\leq s\leq t\leq T_{i+1}~,$%
$$
\left\vert x\left( t\right) -x\left( s\right) \right\vert +\left\updownarrow
k\right\updownarrow _{t}-\left\updownarrow k\right\updownarrow _{s}\leq
\sqrt{\mathbf{\mu }_{m}\left( t-s\right) }\times C_{T}\left( \left\Vert
m\right\Vert _{T}\right),
$$
where $C_{T}\left( \left\Vert m\right\Vert _{T}\right) =C\left(
b,c,r_{0},h_{0},L,\left\Vert m\right\Vert _{T}\right) $ is increasing with respect to $%
\left\Vert m\right\Vert _{T}.\smallskip $
\textbf{\textit{Step 2.}} Since $\mathbf{\mu }_{m}:\left[ 0,T%
\right] \rightarrow \left[ 0,\mathbf{\mu }_{m}\left( T\right) \right] $ is a
strictly increasing continuous function, then the inverse function $\mathbf{%
\mu }_{m}^{-1}:\left[ 0,\mathbf{\mu }_{m}\left( T\right) \right] \rightarrow %
\left[ 0,T\right] $ is well defined and it is, also, a strictly increasing
continuous function. We have%
\begin{align*}
\delta _{0}& \leq \left\vert x\left( t_{i}\right) -x\left( T_{i}\right)
\right\vert \\
& \leq \sqrt{\mathbf{\mu }_{m}\left( t_{i}-T_{i}\right) }\times C_{T}\left(
\left\Vert m\right\Vert _{T}\right) \\
& \leq \sqrt{\mathbf{\mu }_{m}\left( T_{i+1}-T_{i}\right) }\times
C_{T}\left( \left\Vert m\right\Vert _{T}\right)
\end{align*}%
and, consequently,%
$$
T_{i+1}-T_{i}\geq \mathbf{\mu }_{m}^{-1}\left[ \left( \frac{\delta _{0}}{%
C_{T}\left( \left\Vert m\right\Vert _{T}\right) }\right) ^{2}\right] \overset%
{def}{=}\frac{1}{\Delta _{m}}>0.
$$
Therefore, the increasing sequence $\left( T_{i}\right) _{i\geq 0}$ reaches $T$
after a finite number of steps.\newline
Taking $j$ such that $T=T_{j},$ we have%
$$
T=T_{j}=\sum_{i=1}^{j}\left( T_{i}-T_{i-1}\right) \geq \frac{j}{\Delta _{m}}%
~.
$$
Let $0\leq s\leq t\leq T$. We have
\begin{align*}
\left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}& =\sum_{i=1}^{j}\left( \left\updownarrow
k\right\updownarrow _{\left( t\wedge T_{i}\right) \vee s}-\left\updownarrow
k\right\updownarrow _{\left( t\wedge T_{i-1}\right) \vee s}\right) \\
& \leq \sum_{i=1}^{j}\sqrt{\mathbf{\mu }_{m}\Big(\left( t\wedge T_{i}\right)
\vee s-\left( t\wedge T_{i-1}\right) \vee s\Big)}\times C_{T}\left(
\left\Vert m\right\Vert _{T}\right) \\
& \leq j\times \sqrt{\mathbf{\mu }_{m}\left( t-s\right) }\times C_{T}\left(
\left\Vert m\right\Vert _{T}\right) \\
& \leq T~\Delta _{m}~\sqrt{\mathbf{\mu }_{m}\left( t-s\right) }\times
C_{T}\left( \left\Vert m\right\Vert _{T}\right).
\end{align*}%
Consequently,%
$$
\left\updownarrow k\right\updownarrow _{T}\leq T~\Delta _{m}~\sqrt{\mathbf{%
\mu }_{m}\left( T\right) }\times C_{T}\left( \left\Vert m\right\Vert
_{T}\right) \leq C_{T}^{\prime }\left( \left\Vert m\right\Vert _{T}\right)
$$
and
\begin{align*}
\left\vert x\left( t\right) \right\vert & =\left\vert x_{0}+m\left( t\right)
-\displaystyle\int_{0}^{t}H\left( x\left( s\right) \right) dk\left( s\right)
\right\vert \\
& \leq \left\vert x_{0}\right\vert +\left\Vert m\right\Vert
_{t}+c\left\updownarrow k\right\updownarrow _{t} \\
& \leq \left\vert x_{0}\right\vert +\left\Vert m\right\Vert
_{T}+c\left\updownarrow k\right\updownarrow _{T}.
\end{align*}%
We conclude that there exists a positive constant $C_{T}\left( \left\Vert m\right\Vert
_{T}\right) =C\left( b,c,r_{0},h_{0},L,\left\Vert m\right\Vert _{T}\right)
>0 $ (increasing with respect to $\left\Vert m\right\Vert _{T}$) such that%
$$
\left\updownarrow k\right\updownarrow _{T}\leq C_{T}\left( \left\Vert
m\right\Vert _{T}\right) \;\;\textrm{and \ \ }\left\Vert x\right\Vert _{T}\leq
\left\vert x_{0}\right\vert +C_{T}\left( \left\Vert m\right\Vert _{T}\right),
$$
that is (\ref{ob4}-a).$\smallskip $
By Lemma \ref{l1-mc-x}, for every $\,0\leq s\leq t\leq T:$%
\begin{align*}
\mathbf{m}_{x}\left( t-s\right) & \leq \left[ \left( t-s\right) +\mathbf{m}%
_{m}\left( t-s\right) +~\sqrt{\mathbf{m}_{m}\left( t-s\right) C_{T}\left(
\left\Vert m\right\Vert _{T}\right) }\right] \times C_{T}\left( \left\Vert
m\right\Vert _{T}\right) \\
& \leq C_{T}^{\prime }\left( \left\Vert m\right\Vert _{T}\right) \times
\sqrt{\mathbf{\mu }_{m}\left( t-s\right) },
\end{align*}%
that means (\ref{ob4}-b) holds. The proof is now complete.\hfill
\end{proof}
\subsection{Moreau-Yosida regularization of a convex function}
By $\nabla \varphi _{\varepsilon }$ we denote the gradient of the
Moreau--Yosida regularization $\varphi _{\varepsilon }$ of the convex lower semicontinuous function $\varphi $, that is
\begin{align*}
\varphi _{\varepsilon }(x)& =\inf \,\{\frac{1}{2\varepsilon }%
|z-x|^{2}+\varphi (z):z\in \mathbb{R}^{d}\} \\
& =\dfrac{1}{2\varepsilon }|x-J_{\varepsilon }x|^{2}+\varphi (J_{\varepsilon
}x),
\end{align*}%
where $J_{\varepsilon }x=x-\varepsilon \nabla \varphi _{\varepsilon }(x).$
The function $\varphi _{\varepsilon }:\mathbb{R}^{d}\rightarrow \mathbb{R}$
is convex and differentiable and, for all $x,y\in \mathbb{R}^{d},$ $%
\varepsilon >0:$%
\begin{equation}
\begin{array}{ll}
(a)\quad & \nabla \varphi _{\varepsilon }(x)=\partial \varphi _{\varepsilon
}\left( x\right) \in \partial \varphi (J_{\varepsilon }x)\textrm{ and }\varphi
(J_{\varepsilon }x)\leq \varphi _{\varepsilon }(x)\leq \varphi
(x),\smallskip \smallskip \\
(b)\quad & \left\vert \nabla \varphi _{\varepsilon }(x)-\nabla \varphi
_{\varepsilon }(y)\right\vert \leq \dfrac{1}{\varepsilon }\left\vert
x-y\right\vert,\smallskip \smallskip \\
(c)\quad & \left\langle \nabla \varphi _{\varepsilon }(x)-\nabla \varphi
_{\varepsilon }(y),x-y\right\rangle \geq 0,\smallskip \smallskip \\
(d)\quad & \left\langle \nabla \varphi _{\varepsilon }(x)-\nabla \varphi
_{\delta }(y),x-y\right\rangle \geq -(\varepsilon +\delta )\left\langle
\nabla \varphi _{\varepsilon }(x),\nabla \varphi _{\delta }(y)\right\rangle .%
\end{array}
\label{sub6a}
\end{equation}%
Moreover, in the case $0=\varphi \left( 0\right) \leq \varphi \left( x\right) $ for
all $x\in \mathbb{R}^{d}$, we have%
\begin{equation}
\begin{array}{l}
\left( a\right) \quad \quad 0=\varphi _{\varepsilon }(0)\leq \varphi
_{\varepsilon }(x)\quad \textrm{and}\quad J_{\varepsilon }\left( 0\right)
=\nabla \varphi _{\varepsilon }\left( 0\right) =0,\smallskip \\
\left( b\right) \quad \quad \dfrac{\varepsilon }{2}|\nabla \varphi
_{\varepsilon }(x)|^{2}\leq \varphi _{\varepsilon }(x)\leq \left\langle
\nabla \varphi _{\varepsilon }(x),x\right\rangle,\quad \forall x\in \mathbb{%
R}^{d}.%
\end{array}
\label{sub6c}
\end{equation}
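Two classical examples may help to fix the ideas (they are stated here only for illustration). For $d=1$ and $\varphi \left( x\right) =\left\vert x\right\vert $, the Moreau--Yosida regularization is the Huber function:%
$$
\varphi _{\varepsilon }(x)=\left\{
\begin{array}{ll}
\dfrac{x^{2}}{2\varepsilon }, & \textrm{if }\left\vert x\right\vert \leq
\varepsilon ,\medskip \\
\left\vert x\right\vert -\dfrac{\varepsilon }{2}, & \textrm{if }\left\vert
x\right\vert >\varepsilon ,%
\end{array}%
\right. \quad \nabla \varphi _{\varepsilon }(x)=\left\{
\begin{array}{ll}
\dfrac{x}{\varepsilon }, & \textrm{if }\left\vert x\right\vert \leq
\varepsilon ,\medskip \\
sign\left( x\right) , & \textrm{if }\left\vert x\right\vert >\varepsilon .%
\end{array}%
\right.
$$
If $\varphi =I_{E}$ is the convex indicator function of a nonempty closed convex set $E\subset \mathbb{R}^{d}$ (that is, $\varphi =0$ on $E$ and $+\infty $ outside of $E$), then%
$$
\varphi _{\varepsilon }(x)=\frac{1}{2\varepsilon }dist^{2}\left( x,E\right)
,\quad J_{\varepsilon }x=\pi _{E}\left( x\right) \quad \textrm{and}\quad
\nabla \varphi _{\varepsilon }(x)=\frac{1}{\varepsilon }\left( x-\pi
_{E}\left( x\right) \right) ,
$$
so that the properties (\ref{sub6a}) reduce to well-known properties of the metric projection.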
\begin{proposition}
\label{p12annexB}Let\ $\varphi :\mathbb{R}^{d}\rightarrow ]-\infty ,+\infty
] $ be a proper convex l.s.c. function such that $int\left( Dom\left(
\varphi \right) \right) \neq \emptyset .$ Let $\left( u_{0},\hat{u}%
_{0}\right) \in \partial \varphi,$ $r_{0}\geq 0$ and%
$$
\varphi _{u_{0},r_{0}}^{\#}\overset{def}{=}\sup \left\{ \varphi \left(
u_{0}+r_{0}v\right) :\left\vert v\right\vert \leq 1\right\} .
$$
Then, for all $\,0\leq s\leq t$ and $dk\left( t\right) \in \partial \varphi
\left( x\left( t\right) \right) \left( dt\right) $,%
\begin{equation}
r_{0}\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) +\displaystyle\int_{s}^{t}\varphi
(x(r))dr\leq \displaystyle\int_{s}^{t}\left\langle x\left( r\right)
-u_{0},dk\left( r\right) \right\rangle +\left( t-s\right) \varphi
_{u_{0},r_{0}}^{\#}~, \label{Ba6a}
\end{equation}%
and, moreover,%
\begin{equation}
\begin{array}{l}
r_{0}\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) +\displaystyle\int_{s}^{t}\left\vert \varphi
(x(r))-\varphi \left( u_{0}\right) \right\vert dr\leq \displaystyle%
\int_{s}^{t}\left\langle x\left( r\right) -u_{0},dk\left( r\right)
\right\rangle \smallskip \smallskip \\
\quad \quad \quad +\displaystyle\int_{s}^{t}\left( 2\left\vert \hat{u}%
_{0}\right\vert \left\vert x(r)-u_{0}\right\vert +\varphi
_{u_{0},r_{0}}^{\#}-\varphi \left( u_{0}\right) \right) dr.%
\end{array}
\label{Ba6b}
\end{equation}
\end{proposition}
\subsection{Useful inequalities}
Let us now introduce the spaces that will appear in the next results.\smallskip
Denote by $S_{d}^{p}\left[0,T\right]$, $p\geq 0$, the space of
progressively measurable continuous stochastic processes $X:\Omega \times %
\left[0,T\right]\rightarrow\mathbb{R}^{d}$, such that%
\[
\left\Vert X\right\Vert _{S_{d}^{p}}=\left\{
\begin{array}{ll}
\left(\mathbb{E}\left\Vert X\right\Vert_{T}^{p}\right)^{\frac{1}{p}\wedge
1}<{\infty}, & \;\text{if }p>0,\bigskip \\
\mathbb{E}\left[1\wedge\left\Vert X\right\Vert_{T}\right], & \;\text{if }
p=0,
\end{array}
\right.
\]
where $\left\Vert X\right\Vert _{T}=\sup_{t\in \left[ 0,T\right]
}\left\vert X_{t}\right\vert$. The space $(S_{d}^{p}\left[ 0,T\right],\left\Vert \cdot \right\Vert_{S_{d}^{p}}),\ p\!\geq 1,$ is a Banach space
and $S_{d}^{p}\left[ 0,T\right]$, $0\leq p<1$, is a complete metric space
with the metric $\rho(Z_{1},Z_{2})=\left\Vert Z_{1}-Z_{2}\right\Vert
_{S_{d}^{p}}$ (when $p=0$ the metric convergence coincides with the
probability convergence).\smallskip
Denote by $\Lambda_{d\times k}^{p}\left( 0,T\right),\ p\in \lbrack 0,{\infty}[$, the space of progressively measurable stochastic processes $Z:{\Omega }\times ]0,T[\rightarrow \mathbb{R}^{d\times k}$ such that
\[
\left\Vert Z\right\Vert _{\Lambda ^{p}}=\left\{
\begin{array}{ll}
\left[ \mathbb{E}\left( \displaystyle\int_{0}^{T}\Vert Z_{s}\Vert ^{2}ds\right) ^{\frac{p}{2}}\right] ^{\frac{1}{p}\wedge 1}, & \;\text{if }p>0,\bigskip \\
\mathbb{E}\left[ 1\wedge \left(\displaystyle\int_{0}^{T}\Vert Z_{s}\Vert^{2}ds\right)
^{\frac{1}{2}}\right], & \;\text{if }p=0.
\end{array}
\right.
\]%
The space $(\Lambda_{d\times k}^{p}\left(0,T\right),\left\Vert \cdot
\right\Vert_{\Lambda ^{p}}),\ p\geq 1,$ is a Banach space and $\Lambda
_{d\times k}^{p}\left( 0,T\right) $, $0\leq p<1,$ is a complete metric space
with the metric $\rho (Z_{1},Z_{2})=\left\Vert Z_{1}-Z_{2}\right\Vert
_{\Lambda ^{p}}$.
\begin{proposition}
\label{AnC-GGI} Let $x\in BV_{loc}\left( \left[ 0,\infty \right[;\mathbb{R}%
^{d}\right)$ and $V\in BV_{loc}\left( \left[ 0,\infty \right[ ;\mathbb{R}%
\right)$ be continuous functions. Let $R,$ $N:\left[0,\infty \right[
\rightarrow \left[0,\infty \right[$ be two continuous increasing
functions. If%
$$
\left\langle x\left(t\right),dx\left(t\right)\right\rangle\leq dR\left(
t\right) +\left\vert x\left(t\right)\right\vert dN\left(t\right)
+\left\vert x\left(t\right)\right\vert ^{2}dV\left(t\right)
$$
as signed measures on $\left[0,\infty\right[$, then for all $0\leq t\leq
T,$%
\begin{equation}
\left\Vert e^{-V}x\right\Vert_{\left[t,T\right]}\leq 2\left[\left\vert
e^{-V\left( t\right) }x\left(t\right)\right\vert +\left(
\int_{t}^{T}e^{-2V\left( s\right)}dR\left(s\right)\right)
^{1/2}+\int_{t}^{T}e^{-V\left(s\right)}dN\left(s\right)\right].
\label{ineq1-Anex}
\end{equation}
If $R=0$ then, for all $0\leq t\leq s$,
\begin{equation}
|x(s)|\leq e^{V(s)-V(t)}|x(t)|+\int_{t}^{s}e^{V(s)-V(r)}dN(r).
\label{AnC-dxRNV}
\end{equation}
\end{proposition}
\begin{proof}
Let $u_{\varepsilon
}(r)=|x(r)|^{2}e^{-2V(r)}+\varepsilon $, for $\varepsilon >0$. We have as
signed measures on $[0,\infty )$%
\begin{eqnarray*}
du_{\varepsilon }(r) &=&-2e^{-2V(r)}|x(r)|^{2}dV(r)+2e^{-2V(r)}\left\langle
x(r),dx(r)\right\rangle \medskip \\
&\leq &2e^{-2V(r)}dR(r)+2e^{-2V(r)}|x(r)|dN(r)\medskip \\
&\leq &2e^{-2V(r)}dR(r)+2e^{-V(r)}\sqrt{u_{\varepsilon }(r)}dN(r).
\end{eqnarray*}
If $R=0$ then%
\begin{equation*}
d\left( \sqrt{u_{\varepsilon }(r)}\right) =\frac{du_{\varepsilon }(r)}{2%
\sqrt{u_{\varepsilon }(r)}}\leq e^{-V(r)}dN(r),
\end{equation*}
and, consequently, for $0\leq t\leq s$, $\sqrt{u_{\varepsilon }(s)}\leq
\sqrt{u_{\varepsilon }(t)}+\int_{t}^{s}e^{-V(r)}dN(r)$, that yields (\ref{AnC-dxRNV}) by passing to limit as $\varepsilon \rightarrow 0$.
If $R\neq 0$ we have%
\[
\begin{array}{l}
e^{-2V(s)}|x(s)|^{2}\medskip \\
\quad \leq
e^{-2V(t)}|x(t)|^{2}+2\int_{t}^{s}e^{-2V(r)}dR(r)+2%
\int_{t}^{s}e^{-2V(r)}|x(r)|dN(r)\medskip \\
\quad \leq e^{-2V(t)}|x(t)|^{2}+2\int_{t}^{s}e^{-2V(r)}dR(r)+2\left\Vert
e^{-V}x\right\Vert _{\left[ t,T\right] }\int_{t}^{s}e^{-V(r)}dN(r)\medskip
\\
\quad \leq |e^{-V(t)}x(t)|^{2}+2\int_{t}^{T}e^{-2V(r)}dR(r)+\dfrac{1}{2}%
\left\Vert e^{-V}x\right\Vert _{\left[ t,T\right] }^{2}+2\left(
\int_{t}^{T}e^{-V(r)}dN(r)\right) ^{2}.%
\end{array}%
\]%
Hence, for all $t\leq \tau \leq T$,%
\begin{eqnarray*}
e^{-2V(\tau )}|x(\tau )|^{2} &\leq &\left\Vert e^{-V}x\right\Vert _{\left[
t,T\right] }^{2}\medskip \\
&\leq &2e^{-2V(t)}|x(t)|^{2}+4\int_{t}^{T}e^{-2V(s)}dR(s)+4\left(
\int_{t}^{T}e^{-V(s)}dN(s)\right) ^{2}
\end{eqnarray*}%
and the result follows.
\end{proof}
Recall, from Pardoux \& R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}, an
estimate on the local semimartingale $X\in S_{d}^{0}$ of the form%
\begin{equation}
X_{t}=X_{0}+K_{t}+\int_{0}^{t}G_{s}dB_{s},\;\,t\geq 0,\quad \mathbb{P}-a.s.,
\label{AnexC-fsde0}
\end{equation}%
where\medskip \newline
$\lozenge $\quad $K\in S_{d}^{0},\;$ $K \in BV_{loc}\left(\left[
0,\infty \right[;\mathbb{R}^{d}\right),\;K_{0}=0,\;\mathbb{P}
-a.s.,\smallskip $\newline
$\lozenge $\quad $G\in \Lambda _{d\times k}^{0}.$\smallskip
\noindent For $p\geq 1$, denote $m_{p}\overset{def}{=}1\vee \left( p-1\right) $. We have the following result.
\begin{proposition}
\label{AnexC-p0-fsi} Let $X\in S_{d}^{0}$ be a local semimartingale of the
form (\ref{AnexC-fsde0}). Assume there exist $p\geq 1$ and $V$ a $\mathcal{P}-$m.b-v.c.s.p.$,$ $V_{0}=0,$ such that as signed measures on $[0,\infty[:$
\begin{equation}
\left\langle X_{t},dK_{t}\right\rangle +\frac{1}{2}m_{p}\left\vert
G_{t}\right\vert ^{2}dt\leq |X_{t}|^{2}dV_{t},\;\;\mathbb{P}-a.s..
\label{AnexC-fsde-ip3}
\end{equation}%
Then, for all $\delta \geq 0$, $0\leq t\leq s,$ we have that%
\begin{equation}
\mathbb{E}^{\mathcal{F}_{t}}\frac{\left\vert e^{-V_{s}}X_{s}\right\vert ^{p}%
}{\left( 1+\delta \left\vert e^{-V_{s}}X_{s}\right\vert ^{2}\right) ^{p/2}}%
\leq \frac{\left\vert e^{-V_{t}}X_{t}\right\vert ^{p}}{\left( 1+\delta
\left\vert e^{-V_{t}}X_{t}\right\vert ^{2}\right) ^{p/2}}~,\;\mathbb{P}-a.s..
\label{AnexC-fsde-p1}
\end{equation}
\end{proposition}
\subsection{Tightness results}
The next five results are stated without proofs; they can be found in the monograph \cite{Pardoux/Rascanu:09}.
\begin{proposition}
\label{ch1-p1-tight}Let $\left\{X_{t}^{n}:t\geq 0\right\}$, $n\in\mathbb{N}^{\ast}$, be a family of $\mathbb{R}^{d}-$valued continuous stochastic
processes defined on a probability space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$. Suppose that, for every $T\geq 0$, there exist $\alpha=\alpha_{T}>0$ and $b=b_{T}\in C\left(\mathbb{R}_{+}\right)$ with $b(0)=0$ (both independent of $n$), such that
$$
\begin{array}{ll}
\left(i\right)\quad & \lim\limits_{N\rightarrow \infty}\left[\sup\limits_{n\in \mathbb{N}^{\ast}}\mathbb{P}(\{\left\vert
X_{0}^{n}\right\vert\geq N\})\right]=0,\medskip \\
\left(ii\right) \quad & \mathbb{E}\left[1\wedge \sup\limits_{0\leq s\leq
\varepsilon}\left\vert X_{t+s}^{n}-X_{t}^{n}\right\vert^{\alpha}\right]
\leq \varepsilon\cdot b(\varepsilon),\,\forall ~\varepsilon>0,n\geq 1,\;t\in \left[0,T\right].
\end{array}
$$
Then $\left\{ X^{n}:n\in\mathbb{N}^{\ast}\right\}$ is tight in $C(\mathbb{R}_{+};\mathbb{R}^{d})$.
\end{proposition}
\begin{proposition}
\label{ch1-lsc-SI} Consider $\varphi:\mathbb{R}^{d}\rightarrow]-\infty,+\infty]$ a l.s.c. function. Let $\left(X,K,V\right)$, $\left(X^{n},K^{n},V^{n}\right),~n\in\mathbb{N}$, be $C\left(\left[0,T\right];
\mathbb{R}^{d}\right)^{2}\times C\left(\left[0,T\right];\mathbb{R}\right)-$valued random variables, such that
$$
\left(X^{n},K^{n},V^{n}\right)
\xrightarrow
[n\rightarrow\infty]{law}
\left(X,K,V\right)
$$
and, for all $0\leq s<t$, and $n\in\mathbb{N}^{\ast}$
$$
\left\updownarrow K^{n}\right\updownarrow _{t}-\left\updownarrow
K^{n}\right\updownarrow _{s}\leq V_{t}^{n}-V_{s}^{n}\;\;a.s.
$$
and%
$$
{\displaystyle\int_{s}^{t}}\varphi \left( X_{r}^{n}\right)dr\leq \displaystyle\int_{s}^{t}\left\langle X_{r}^{n}~,dK_{r}^{n}\right\rangle,\;\;a.s..
$$
Then $\left\updownarrow K\right\updownarrow_{t}-\left\updownarrow
K\right\updownarrow_{s}\leq V_{t}-V_{s}~,\;a.s.$ and
$$
\displaystyle\int_{s}^{t}\varphi\left(X_{r}\right)dr\leq\displaystyle\int_{s}^{t}\left\langle X_{r}~,dK_{r}\right\rangle,\;\;a.s..
$$
\end{proposition}
\begin{proposition}
\label{ch3-c4-cont} Let $X,\hat{X}\in S_{d}^{0}\left[0,T\right]$ and $B,\hat{B}$ be two $\mathbb{R}^{k}-$Brownian motions and $g:\mathbb{R}_{+}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times k}$ be a function
satisfying
$$
\begin{array}{l}
g\left(\cdot,y\right) \textrm{ is measurable\ }\forall ~y\in \mathbb{R}^{d},\;\;\textrm{and}\medskip \\
y\mapsto g\left(t,y\right) \textrm{ is continuous, }dt-a.e..
\end{array}
$$
If
$$
\mathcal{L}\left( X,B\right) =\mathcal{L}(\hat{X},\hat{B}),\;\;\textrm{on\ }C(\mathbb{R}_{+},\mathbb{R}^{d+k}),
$$
then
$$
\mathcal{L}\left(X,B,\int_{0}^{\cdot}g\left( s,X_{s}\right) dB_{s}\right)=
\mathcal{L}\left(\hat{X},\hat{B},\int_{0}^{\cdot}g\left(s,\hat{X}_{s}\right) d\hat{B}_{s}\right),\;\;\textrm{on\ } C(\mathbb{R}_{+},\mathbb{R}^{d+k+d}).
$$
\end{proposition}
\begin{lemma}
\label{ch1-p2-tight} Let $g:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be a
continuous function satisfying $g\left(0\right)=0$ and $G:C\left(\mathbb{R}_{+};\mathbb{R}^{d}\right)\rightarrow \mathbb{R}_{+}$ be a mapping which
is bounded on compact subsets of $C\left(\mathbb{R}_{+};\mathbb{R}
^{d}\right).$ Let $X^{n},Y^{n}$, $n\in\mathbb{N}^{\ast}$, be random
variables with values in $C\left(\mathbb{R}_{+};\mathbb{R}^{d}\right)$. If
$\left\{ Y^{n}:n\in \mathbb{N}^{\ast }\right\}$ is tight and, for all $n\in
\mathbb{N}^{\ast},$%
$$
\begin{array}{rl}
\left(i\right) \quad & \left\vert X_{0}^{n}\right\vert \leq G\left(Y^{n}\right),\;a.s.,\medskip \\
\left(ii\right)\quad & \mathbf{m}_{X^{n}}\left(\varepsilon;\left[0,T\right]\right)\leq G\left(Y^{n}\right) g\left(\mathbf{m}_{Y^{n}}\left(\varepsilon;
\left[0,T\right]\right)\right),\;a.s.,\;\;\forall ~\varepsilon,T>0,
\end{array}
$$
then $\left\{ X^{n}:n\in \mathbb{N}^{\ast }\right\}$ is tight.
\end{lemma}
\begin{lemma}
\label{ch2-p-conv}Let $B,$ $B^{n}$, $\bar{B}^{n}:\Omega\times\left[
0,\infty \right[ \rightarrow \mathbb{R}^{k}$ and $X$, $X^{n}$, $\bar{X}^{n}:\Omega \times\left[0,\infty\right[\rightarrow\mathbb{R}^{d\times k}$ be continuous stochastic processes such that
\begin{itemize}
\item[$\left(i\right)$] $\quad B^{n}$ is $\mathcal{F}_{t}^{B^{n},X^{n}}-$ Brownian motion, for all $n\geq 1,$
\item[$\left(ii\right)$] $\quad\mathcal{L}(X^{n},B^{n})=\mathcal{L}\left(
\bar{X}^{n},\bar{B}^{n}\right)$ on $C(\mathbb{R}_{+},\mathbb{R}^{d\times
k}\times \mathbb{R}^{k})$, for all $n\geq 1$,
\item[$\left( iii\right)$] $\quad\displaystyle\int_{0}^{T}\left\vert \bar{X}_{s}^{n}-\bar{X}_{s}\right\vert^{2}ds+\sup\limits_{t\in \left[0,T\right]}\left\vert \bar{B}_{t}^{n}-\bar{B}_{t}\right\vert\longrightarrow 0$ in
probability, as $n\rightarrow \infty$, for all $T>0.$
\end{itemize}
\noindent Then $(\bar{B}^{n},\{\mathcal{F}_{t}^{\bar{B}^{n},\bar{X}^{n}}\}),n\geq 1,\;$and $(\bar{B},\{\mathcal{F}_{t}^{\bar{B},\bar{X}}\})$
are Brownian motions and, as $n\rightarrow\infty$,
$$
\sup_{t\in\left[0,T\right]}\left\vert\int_{0}^{t}\bar{X}_{s}^{n}d\bar{B}_{s}^{n}-\int_{0}^{t}\bar{X}_{s}d\bar{B}_{s}\right\vert
\underset{n\rightarrow \infty}{\longrightarrow}0\quad \textrm{in probability}.
$$
\end{lemma} \medskip
\section*{Acknowledgements}
The authors are grateful to the referees for their careful reading of this paper and for their very useful suggestions.
\bibliographystyle{model1-num-names}
\section{Main}
The linear nature of quantum mechanics poses a persistent problem for the reconciliation of classical and quantum mechanics. The superposition principle is foreign to classical physics: we do not see macroscopic objects in two different places, or a cat that is alive and dead at the same time. However, such states are commonplace in the microworld. Notable attempts to resolve the situation include non-linear modifications of the Schr{\"o}dinger equation that account for the transition to the states observed in a measurement, the de Broglie-Bohm theory of classical particles guided by a pilot wave, an appeal to many coexisting worlds representing the components of a superposition, and the decoherence program, which aims to explain how superpositions decohere to probabilistic mixtures. None of these attempts is generally accepted as successful in resolving the problem. There is no experimental evidence for a need to modify the Schr{\"o}dinger equation. The simultaneous presence of quantum and classical trajectories in the pilot-wave theory seems redundant. The many-worlds approach fails to explain the world that is real and unique to us. The decoherence program derives the laws of probability valid for macroscopic bodies but fails to account for a specific outcome of a measurement.
On the other hand, the Newtonian and Schr{\"o}dinger dynamics have a simple relationship that does not seem to be known or used in the above-mentioned attempts. Namely, the Newtonian dynamics can be identified with a constrained Schr{\"o}dinger dynamics. The latter is analogous to the Newtonian dynamics of a constrained system, e.g., a bead on a wire. However, since the Schr{\"o}dinger equation describes the dynamics in a Hilbert space of states, the constraint is imposed on the state of the system.
Consider the subset $M^{\sigma}_{3,3}$ of the Hilbert space $L_{2}(\mathbb{R}^3)$, formed by the states
\begin{equation}
\varphi({\bf x})=g_{{\bf a}, \sigma}({\bf x})e^{i{\bf p}{\bf x}/\hbar},
\end{equation}
where
\begin{equation}
g_{{\bf a}, \sigma}=\left(\frac{1}{2\pi\sigma^{2}}\right)^{3/4}e^{-\tfrac{({\bf x}-{\bf a})^{2}}{4\sigma^{2}}}
\end{equation}
is the Gaussian function of a sufficiently small variance $2\sigma^2$ centered at a point ${\bf a}$ in the Euclidean space $\mathbb{R}^3$ and ${\bf p}$ is a fixed vector in $\mathbb{R}^3$. The state $\varphi({\bf x})$ represents a narrow wave packet with group velocity ${\bf p}/m$, where $m$ is the mass of the particle.
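As a minimal numerical sketch (in one dimension, since the three-dimensional packet factorizes, and with arbitrary sample values for $\sigma$, ${\bf a}$, ${\bf p}$), one can check that $\varphi$ is unit-normalized and that its expected momentum, and hence its group velocity, is ${\bf p}/m$:

```python
import numpy as np

hbar, m = 1.0, 1.0            # natural units (illustrative choice)
sigma, a, p = 0.5, 1.0, 2.0   # packet width, center, momentum (arbitrary)

x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]
g = (1.0 / (2.0 * np.pi * sigma**2))**0.25 * np.exp(-(x - a)**2 / (4.0 * sigma**2))
phi = g * np.exp(1j * p * x / hbar)

# the state is unit-normalized: the integral of |phi|^2 is 1
norm = np.sum(np.abs(phi)**2) * dx

# expected momentum <phi|(-i hbar d/dx)|phi> is p, so the group velocity is p/m
p_mean = np.real(np.sum(np.conj(phi) * (-1j * hbar) * np.gradient(phi, x)) * dx)

print(norm, p_mean / m)
```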
Let us identify the set of all pairs $({\bf a}, {\bf p})$ with the classical phase space $\mathbb{R}^3 \times \mathbb{R}^3$ of possible positions ${\bf a}$ and momenta ${\bf p}$ of a particle. The map $\Omega: ({\bf a}, {\bf p}) \longrightarrow g_{{\bf a}, \sigma}e^{i{\bf p}{\bf x}/\hbar}$ identifies then the classical phase space with the submanifold $M^{\sigma}_{3,3}$ of $L_{2}(\mathbb{R}^3)$. The equivalence classes of states in $L_{2}(\mathbb{R}^3)$ differing only by a constant phase factor $e^{i\alpha}$ form the projective space $CP^{L_{2}}$.
Under the equivalence relation, the embedded manifold $M^{\sigma}_{3,3}$ becomes a submanifold of $CP^{L_{2}}$, denoted here by the same symbol.
The six-dimensional manifold $M^{\sigma}_{3,3}$ is embedded into the space of states in a very special way. There are no vectors in the Hilbert space $L_{2}(\mathbb{R}^3)$ orthogonal to all of $M^{\sigma}_{3,3}$. Instead, the points of $M^{\sigma}_{3,3}$ represent an overcomplete basis in the Hilbert space \cite{KryukovMath}. Furthermore, the projective space $CP^{L_{2}}$ possesses the Fubini-Study metric, induced by the embedding of $CP^{L_{2}}$ into the sphere $S^{L_{2}}$ of unit-normalized states in $L_{2}(\mathbb{R}^3)$ furnished itself with the induced round Riemannian metric. For any two vectors $\xi, \eta$ in $L_{2}(\mathbb{R}^3)$ tangent to the sphere $S^{L_{2}}$ and the corresponding vectors $X=(\mathrm{Re} \xi, \mathrm{Im} \xi)$ and $Y=(\mathrm{Re} \eta, \mathrm{Im} \eta)$ in the realization $L_{2R}(\mathbb{R}^3)$ of $L_{2}(\mathbb{R}^3)$, the Riemannian metric $G_{\varphi}$ on the sphere is defined by
\begin{equation}
G_{\varphi}(X,Y)=\mathrm{Re}(\xi,\eta).
\end{equation}
Perhaps surprisingly, the metric induced on the submanifold $M^{\sigma}_{3,3}$ of $CP^{L_{2}}$ with $2\sigma$ as a unit of length turns out to be the ordinary Euclidean metric. In other words, the map $\Omega: \mathbb{R}^3 \times \mathbb{R}^3 \longrightarrow CP^{L_{2}}$ is an isometric embedding of the Euclidean space into the space of states. The manifold $M^{\sigma}_{3,3}$ can be also furnished with a compatible linear structure, making it isomorphic to the Euclidean space $\mathbb{R}^3 \times \mathbb{R}^3$.
A simple way to prove that Newtonian dynamics is a constrained Schr{\"o}dinger dynamics is by using the variational principle. The variation of the functional
\begin{equation}
\label{SS}
S[\varphi]=\int \overline{\varphi}({\bf x},t) \left[i\hbar \frac{\partial}{\partial t}-{\widehat h}\right] \varphi({\bf x},t) d^3 {\bf x} dt
\end{equation}
with the Hamiltonian ${\widehat h}=-\frac{\hbar^{2}}{2m}\Delta+V({\bf x},t)$ yields the Schr{\"o}dinger equation for $\varphi$. For the states $\varphi$ constrained to the manifold $M^{\sigma}_{3,3}$, this functional reduces to the classical action
\begin{equation}
S=\int \left[{\bf p}\frac{d {\bf a}}{dt}-h({\bf p},{\bf a},t)\right]dt,
\end{equation}
where $h({\bf p},{\bf a},t)=\frac{{\bf p}^2}{2m}+V({\bf a},t)+const$ is the Hamiltonian function for the particle. We used here the fact that for a sufficiently small $\sigma$, the terms $V({\bf a},t)$ and $\int g^2_{{\bf a}, \sigma}({\bf x})V({\bf x},t)d^3 {\bf x}$ are arbitrarily close to each other. In fact, as $\sigma$ approaches $0$, the terms $g^2_{{\bf a}, \sigma}$ form a delta sequence.
It follows that the variation of the functional (\ref{SS}) subject to the constraint that $\varphi$ belongs to $M^{\sigma}_{3,3}$ yields Newtonian equations of motion.
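The delta-sequence claim behind this reduction can be illustrated numerically in one dimension: for a smooth test potential, $\int g^{2}_{{\bf a}, \sigma}({\bf x})V({\bf x},t)d^3 {\bf x}$ approaches $V({\bf a},t)$ as $\sigma \to 0$. A sketch with an arbitrary choice of $V$ and ${\bf a}$:

```python
import numpy as np

a = 0.7                                # arbitrary evaluation point
V = lambda x: np.sin(3.0 * x) + x**2   # arbitrary smooth test potential

def smeared_V(sigma, n=200001, L=20.0):
    """Integral of g_{a,sigma}^2(x) V(x) dx in one dimension."""
    x = np.linspace(a - L, a + L, n)
    g2 = np.exp(-(x - a)**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    return np.sum(g2 * V(x)) * (x[1] - x[0])

# g^2 forms a delta sequence: the smeared potential converges to V(a)
errors = [abs(smeared_V(s) - V(a)) for s in (0.5, 0.1, 0.02)]
print(errors)
```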
The transition from Schr{\"o}dinger to Newtonian dynamics can be expressed in terms of the transition from quantum commutators to Poisson brackets.
First of all, a simple calculation with the above $h({\bf p},{\bf a},t)$ and ${\widehat h}$, and $\varphi$ in $M^{\sigma}_{3,3}$ yields the following:
\begin{equation}
\label{aa}
(\varphi, \tfrac{1}{i\hbar}[{\widehat {\bf x}}, {\widehat h}]\varphi)= \{{\bf a}, h \},
\end{equation}
and
\begin{equation}
\label{bb}
(\varphi, \tfrac{1}{i\hbar}[{\widehat {\bf p}}, {\widehat h}]\varphi)=\{{\bf p}, h\}.
\end{equation}
The brackets on the right side of (\ref{aa}) and (\ref{bb}) are the usual Poisson brackets. Consider the linear vector fields $x_{\varphi}={\widehat {\bf x}}\varphi$ and $p_{\varphi}={\widehat {\bf p}}\varphi$ on $L_{2}(\mathbb{R}^3)$ associated with the operators of position and momentum. For a given state $\varphi$, let $\overline{\bf x}$ and $\overline{\bf p}$ be the expected values of the operators on this state. The components $x_{\varphi \perp}=({\widehat {\bf x}}-\overline{\bf x} I)\varphi$ and $p_{\varphi \perp}=({\widehat {\bf p}}-\overline{\bf p} I)\varphi$ are tangent to the sphere $S^{L_{2}}$ at $\varphi$ and orthogonal to the fibres $\{\varphi \}$ of the fibre bundle $CP^{L_{2}}=S^{L_{2}}/S^1$. Therefore, they are tangent to the projective space $CP^{L_{2}}$ itself. Moreover, the vector fields $x_{\varphi \perp}$ and $p_{\varphi \perp}$ constrained to the submanifold $M^{\sigma}_{3,3}$ of $CP^{L_{2}}$ are tangent to it. The integral curves of these fields are lines of constant position and momentum that provide the coordinate grid in the classical phase space $\mathbb{R}^3 \times \mathbb{R}^3$. This can be seen directly by comparing the right hand sides of the equations
\begin{equation}
({\widehat {\bf x}}-\overline{\bf x}I)\varphi=({\bf x}-{\bf a})\varphi
\end{equation}
and
\begin{equation}
({\widehat {\bf p}}-\overline{\bf p}I)\varphi=\frac{i\hbar}{2\sigma^2}({\bf x}-{\bf a})\varphi,
\end{equation}
valid for $\varphi$ in $M^{\sigma}_{3,3}$, with the gradients $\nabla_{\bf a} \varphi$ and $\nabla_{\bf p}\varphi$. The latter gradients represent vectors tangent to the lines of constant values of ${\bf p}$ and ${\bf a}$ through a point of the projective manifold $M^{\sigma}_{3,3}$.
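Both relations can be verified symbolically in one dimension. A sketch with sympy, using the standard convention ${\widehat p}=-i\hbar\,d/dx$ (which fixes the sign of the proportionality constant; for the tangency argument only the proportionality to $({\bf x}-{\bf a})\varphi$ matters):

```python
import sympy as sp

x, a, p = sp.symbols('x a p', real=True)
sigma, hbar = sp.symbols('sigma hbar', positive=True)

# one-dimensional coherent state phi = g_{a,sigma} * exp(i p x / hbar)
g = (1 / (2 * sp.pi * sigma**2))**sp.Rational(1, 4) * sp.exp(-(x - a)**2 / (4 * sigma**2))
phi = g * sp.exp(sp.I * p * x / hbar)

# momentum relation: (p_hat - p) phi is proportional to (x - a) phi
lhs = -sp.I * hbar * sp.diff(phi, x) - p * phi
rhs = (sp.I * hbar / (2 * sigma**2)) * (x - a) * phi
residual = sp.simplify(lhs - rhs)
print(residual)   # 0
```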
Using the Ehrenfest theorem written in terms of the vector fields $x_{\varphi \perp}$ and $p_{\varphi \perp}$ and (\ref{aa}) and (\ref{bb}), we have for the initial state $\varphi$ in $M^{\sigma}_{3,3}$ at $t=0$:
\begin{equation}
\label{EE1}
2\mathrm{Re} \left(\frac{d\varphi}{dt}, x_{\varphi \perp}\right)
=\{{\bf a}, h\}=\frac{d{\bf a}}{dt}
\end{equation}
and
\begin{equation}
\label{EE2}
2\mathrm{Re} \left(\frac{d\varphi}{dt}, p_{\varphi \perp} \right)
=\{{\bf p}, h\}=\frac{d{\bf p}}{dt}.
\end{equation}
So, at any point $({\bf a}, {\bf p})$ of the phase space $\mathbb{R}^3\times \mathbb{R}^3$, the derivatives $\frac{d{\bf a}}{dt}$ and $\frac{d{\bf p}}{dt}$ of a Newtonian motion are projections at the point $\Omega({\bf a}, {\bf p})$ of the velocity $\frac{d\varphi}{dt}$ of the Schr{\"o}dinger evolution onto the coordinate lines of the phase space submanifold $M^{\sigma}_{3,3}$. This is a restatement of the variational principle for the functional (\ref{SS}) with the state $\varphi$ constrained to manifold $M^{\sigma}_{3,3}$.
Furthermore, as explained in \cite{Kryukov2020}, it is possible to decompose the initial velocity of state of a particle in a potential $V$ with the initial state in $M^{\sigma}_{3,3}$ into four physically meaningful orthogonal components. The two components tangent to the phase space submanifold $M^{\sigma}_{3,3}$ were already identified with the usual classical velocity ${\bf v}=d{\bf a}/dt$ and acceleration ${\bf w}=d{\bf v}/dt=-\nabla V/m$ of the particle. The other two components orthogonal to $M^{\sigma}_{3,3}$ are the {\em phase velocity} (projection of $d\varphi/dt$ onto the unit vector $-i\varphi$) and the {\em velocity of spreading} of the wave-packet (projection of $d\varphi/dt$ onto the unit vector in the direction $id\varphi/d\sigma$).
The norm of the total velocity $d \varphi/dt$ at $t=0$, i.e., the speed of motion of the initial state $\varphi$, is given by the formula
\begin{equation}
\label{decomposition}
\left\|\frac{d\varphi}{dt}\right\|^{2}=\frac{{\overline E}^{2}}{\hbar^{2}}+\frac{{\bf v}^{2}}{4\sigma^{2}}+\frac{m^{2}{\bf w}^{2}{\sigma}^{2}}{\hbar^{2}}+\frac{\hbar^{2}}{32\sigma^{4}m^{2}}.
\end{equation}
When the state is constrained to the manifold $M^{\sigma}_{3,3}$, the first term (the phase velocity squared) and the last term (the velocity of spreading squared) disappear. The only surviving terms are associated with the classical velocity and acceleration and the motion follows the classical Newtonian dynamics in the phase space.
Similar results are true for systems of $n$ particles. In this case, the transition to classical Newtonian dynamics happens when the state of the system is constrained to the phase space submanifold $M^{\sigma}_{3n,3n}= M^{\sigma}_{3,3} \otimes ... \otimes M^{\sigma}_{3,3}$ in the space of states of the system. That is, the constrained state is the product of states of the $n$ particles in the system with the state of each particle constrained to a copy of $M^{\sigma}_{3,3} $. A more detailed analysis leads to the following theorem \cite{KryukovPosition}:
\begin{itquote}{\bf{(A)}}
The Newtonian dynamics of an arbitrary mechanical system is the Schr{\"o}dinger dynamics of that system with the state constrained to the classical phase space submanifold of the space of states of the system. Furthermore, the Schr{\"o}dinger dynamics is the only unitary evolution that reduces under the constraint to the Newtonian one.
\end{itquote}
We note that the relationship between commutators and Poisson brackets established in (\ref{aa}) and (\ref{bb}) generalizes directly to systems of $n$ particles.
The identification of the classical phase space of a particle with the manifold $M^{\sigma}_{3,3}$ yields a useful metric relationship. Let $M^{\sigma}_{3}$ denote the submanifold of $M^{\sigma}_{3,3}$ of the Gaussian states $g_{{\bf a}, \sigma}=\left(\frac{1}{2\pi\sigma^{2}}\right)^{3/4}e^{-\tfrac{({\bf x}-{\bf a})^{2}}{4\sigma^{2}}}$. The map $\omega: {\bf a} \longrightarrow g_{{\bf a}, \sigma}$, $\omega(\mathbb{R}^3)=M^{\sigma}_{3}$ identifies the submanifold $M^{\sigma}_{3}$ of $CP^{L_{2}}$ with the classical space $\mathbb{R}^3$.
If $\theta(g_{{\bf a}, \sigma}, g_{{\bf b}, \sigma})$ denotes the distance from $g_{{\bf a}, \sigma}$ to $g_{{\bf b}, \sigma}$ in $M^{\sigma}_{3}$ in the Fubini-Study metric on the space of states $CP^{L_{2}}$ and $({\bf a}-{\bf b})^{2}$ is the square of the Euclidean distance between the corresponding points ${\bf a}$ and ${\bf b}$ in $\mathbb{R}^3$, then
\begin{equation}
\label{mainO}
e^{-\frac{({\bf a}-{\bf b})^{2}}{4\sigma^{2}}}=\cos^{2}\theta(g_{{\bf a}, \sigma}, g_{{\bf b}, \sigma}).
\end{equation}
Likewise, for arbitrary states $\varphi=g_{{\bf a}, \sigma}e^{i{\bf p}{\bf x}/\hbar}$ and $\psi=g_{{\bf b}, \sigma}e^{i{\bf q}{\bf x}/\hbar}$ in the classical phase space $M^{\sigma}_{3,3}$, the distances in the Euclidean phase space and the space of states are related by
\begin{equation}
\label{mainOG}
e^{-\frac{({\bf a}-{\bf b})^{2}}{4\sigma^{2}}-\frac{({\bf p}-{\bf q})^{2}}{\hbar^2/\sigma^{2}}}=\cos^{2}\theta(\varphi, \psi).
\end{equation}
The derivation of (\ref{mainO}) and (\ref{mainOG}) is explained in the Appendix.
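A one-dimensional numerical check of (\ref{mainOG}), with (\ref{mainO}) as the ${\bf p}={\bf q}$ special case, is straightforward (the three-dimensional relations factorize into products of such one-dimensional terms); a sketch with arbitrary sample values:

```python
import numpy as np

hbar, sigma = 1.0, 0.6
a, b, p, q = 0.3, 1.1, 0.8, -0.5   # two phase-space points (arbitrary)

x = np.linspace(-30.0, 30.0, 40001)
dx = x[1] - x[0]

def packet(center, momentum):
    g = (1.0 / (2.0 * np.pi * sigma**2))**0.25 * np.exp(-(x - center)**2 / (4.0 * sigma**2))
    return g * np.exp(1j * momentum * x / hbar)

# for unit states, cos^2(theta) is the squared modulus of the inner product
overlap = np.sum(np.conj(packet(a, p)) * packet(b, q)) * dx
cos2 = np.abs(overlap)**2
predicted = np.exp(-(a - b)**2 / (4.0 * sigma**2) - (p - q)**2 * sigma**2 / hbar**2)
print(cos2, predicted)
```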
The relation (\ref{mainO}) implies a connection between probability distributions of multivariate random variables valued in the classical space and the space of states.
Namely, consider a random variable $\varphi$ with values in the space of states $CP^{L_{2}}$.
Because the classical space $M^{\sigma}_{3}$ is a submanifold of $CP^{L_{2}}$, we can restrict $\varphi$ to take values in $M^{\sigma}_{3}$. We then have the following result (see \cite{KryukovPosition} and the Appendix):
\begin{itquote}{\bf{(B)}}
Suppose the conditional probability of $\varphi$, given that $\varphi$ is in the classical space, is described by the normal distribution. Suppose the probability $P$ of transition between any two states depends only on the distance between the states. Then $P$ is given by the Born rule, i.e., $P= |(\varphi, \varphi_0)|^2$, where $\varphi_0$ is the initial and $\varphi$ is the observed state. The converse is also true: for states in the classical space, the Born rule yields the normal distribution of the observed position.
\end{itquote}
So, the normal probability distribution is the Born rule in disguise!
These results establish a deep connection between Newtonian and Schr{\"o}dinger dynamics and between the classical space and classical phase space, and the submanifolds $M^{\sigma}_{3}$ and $M^{\sigma}_{3,3}$ of the space of states. Based on these results we put forward the following embedding hypothesis:
\begin{itquote}{\bf{(EH)}}
The constructed mathematical embedding of the classical space and classical phase space into the space of states and the resulting identification of Newtonian dynamics with the constrained Schr{\"o}dinger dynamics are physical. That is, the classical space, phase space and the Newtonian dynamics of a system are not only fully derivable from, but also physically originate in the Schr{\"o}dinger dynamics in the Hilbert space of states of the system.
\end{itquote}
The hypothesis asserts that the provided mathematical derivation of the classical Newtonian dynamics from the Schr{\"o}dinger dynamics of the system is the true physical correspondence between classical and quantum systems.
To validate the hypothesis, we need to show that it is consistent with all observed classical and quantum phenomena. This involves showing that all observable classical phenomena can be derived from the corresponding quantum phenomena by constraining the state to the classical phase space submanifold in the space of states. This also involves physically explaining the constraint itself. Now, an arbitrary deterministic classical motion is described by the Newtonian equations of motion. The theorem ${\bf (A)}$ demonstrates that these equations follow from the Schr{\"o}dinger equation with the constraint. Therefore, given the constraint, the hypothesis is consistent with an arbitrary deterministic classical motion. To validate the hypothesis, it remains then to: (1) verify its consistency for the motion of particles described in statistical mechanics, specifically, for the Brownian motion, and (2) explain the origin of the constraint. The rest of the paper will be dedicated to these two tasks.
Instead of diving into the subject of a quantum version of the Brownian motion, note that a macroscopic Brownian particle in a medium can be considered a classical chaotic system. For instance, the motion of the particle through an appropriate lattice of round obstacles provides a classical chaotic realization of the Brownian motion. A lively discussion of the stochastic, deterministic chaotic, and regular characterizations of the Brownian motion can be found in \cite{Nature1, Nature2, Cecconi}. With the view that the Brownian particle in a medium is a chaotic system comes the applicability of the BGS-conjecture to the system \cite{BGS}. It asserts that the Hamiltonian of the corresponding quantum system can be represented by a random matrix.
Random matrices were originally introduced into quantum mechanics by Wigner \cite{Wigner} in a study of excitation spectra of heavy nuclei. Wigner reasoned that the motion of nucleons within the nucleus is so complex that the Hamiltonian of the system can be modeled by a random matrix from an ensemble that respects symmetries of the system but otherwise contains no additional information.
It was later discovered that correlations in the spectrum of random matrices possess a remarkable universality, applying to a large number of quantum systems. These include nuclear, atomic, molecular and scattering systems, chaotic systems with few degrees of freedom, as well as complex systems such as solid state systems and lattice systems in field theory.
This wealth of experimental evidence suggests that all quantum systems whose classical counterpart is chaotic exhibit random matrix statistics. This is the essence of the BGS-conjecture.
As mentioned, from the possibility of attributing a chaotic character to the Brownian motion and from the BGS-conjecture, it follows that the Hamiltonian of the quantum analogue of the Brownian motion at any time is given by a random matrix. On physical grounds, we can also claim that the random matrices representing the Hamiltonian at two different moments of time must be independent. This leads us to the following version of the BGS-conjecture:
\begin{itquote}{\bf{(RM)}}
The quantum-mechanical analogue of the Brownian motion can be modeled by a random walk of state in the space of states of the system. The steps of the random walk without drift satisfy the Schr{\"o}dinger equation with the Hamiltonian represented by a random matrix from the Gaussian unitary ensemble (GUE). The matrices representing the Hamiltonian at two different times are independent and belong to the same ensemble.
\end{itquote}
Note that the Schr{\"o}dinger equation with the Hamiltonian represented by a random matrix describes evolution of the state and not of the density matrix for the system. In that sense, it is analogous to the Langevin equation for the position of a Brownian particle rather than the diffusion equation for the probability density function.
First of all, let us prove that the random walk in ${\bf (RM)}$ yields the Brownian motion in $\mathbb{R}^3$. More precisely,
\begin{itquote}{\bf{(C)}}
The random walk described in ${\bf (RM)}$ but conditioned to stay on the submanifold $M^{\sigma}_{3}$ in the space of states yields a random walk on $\mathbb{R}^3$ that approximates the Brownian motion of a particle in a medium.
\end{itquote}
In fact, a general Schr{\"o}dinger evolution with Hamiltonian ${\widehat h}$ can be thought of as a sequence of steps connecting the points $\varphi_{t_{0}}, \varphi_{t_{1}}, . . . $ in the space of states. For small time intervals $\Delta t=t_{k}-t_{k-1}$, the state $\varphi_{t_{N}}$ at time $t_N$ is given by the time ordered product
\begin{equation}
\label{tN}
\varphi_{t_{N}}=e^{-\frac{i}{\hbar}{\widehat h}(t_N)\Delta t}e^{-\frac{i}{\hbar}{\widehat h}(t_{N-1})\Delta t}... e^{-\frac{i}{\hbar}{\widehat h}(t_1)\Delta t}\varphi_{t_{0}}.
\end{equation}
Suppose the evolution of the state $\varphi$ of a particle is constrained to the classical space submanifold $M^{\sigma}_{3}$. The points $\varphi_{t_{0}}, \varphi_{t_{1}}, . . . $ belong then to the submanifold $M^{\sigma}_{3}$ and the steps can be identified with translations in the classical space. This is to say that for each $k$, the operator ${\widehat h}(t_{k})$ acts as the generator of translation by a vector ${\bf \xi}_{k}$ in $\mathbb{R}^3$, so that ${\widehat h}(t_{k})={\bf \xi}_{k}{\widehat {\bf p}}$, where ${\widehat {\bf p}}$ is the momentum operator. Because all operators of translation commute with each other, the equation (\ref{tN}) yields the following expression:
\begin{equation}
\label{tN1}
\varphi_{t_{N}}({\bf x})=\varphi_{t_{0}}({\bf x}-{\bf \xi}_{1}\Delta t-{\bf \xi}_{2}\Delta t- ... -{\bf \xi}_{N}\Delta t).
\end{equation}
That is, the initial state is simply translated by the vector
\begin{equation}
\label{walk}
{\bf d}=\sum^{N}_{k=1}{\bf \xi}_{k}\Delta t
\end{equation}
in $\mathbb{R}^3$.
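The action of each factor $e^{-\frac{i}{\hbar}({\bf \xi}_{k}{\widehat {\bf p}})\Delta t}$ as a translation, and the composition of successive steps into the single shift (\ref{walk}), can be illustrated numerically on a sampled one-dimensional packet. A sketch in which the momentum-generated phase is applied in Fourier space:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4096, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)   # wavenumbers, p = hbar * k

sigma, a = 0.5, -3.0
gauss = lambda c: (1.0 / (2.0 * np.pi * sigma**2))**0.25 * np.exp(-(x - c)**2 / (4.0 * sigma**2))

def translate(phi, d):
    """Apply exp(-i d p_hat / hbar), i.e. translation by d, via FFT."""
    return np.fft.ifft(np.fft.fft(phi) * np.exp(-1j * k * d))

# three successive steps compose into one shift by their sum
steps = [0.7, -0.2, 1.5]
phi = gauss(a).astype(complex)
for s in steps:
    phi = translate(phi, s)

shift_error = np.max(np.abs(phi - gauss(a + sum(steps))))
print(shift_error)   # ~ 0: the packet was simply translated
```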
Now, the probability distribution of steps $-\frac{i}{\hbar} {\widehat h}(t_{k+1})\varphi_{t_{k}}$ in the tangent space $T_{\varphi_{k}}M^{\sigma}_{3}$ must be the conditional probability distribution of steps for the Hamiltonian satisfying ${\bf (RM)}$ under the condition that the steps take place in $T_{\varphi_{k}}M^{\sigma}_{3}$.
From the properties of the random matrix, it follows that ${\bf \xi}_k$ are independent and identically normally distributed random vectors, so that the equation (\ref{walk}) defines a random walk with Gaussian steps in $\mathbb{R}^3$. This is known to approximate the Brownian motion in $\mathbb{R}^3$ and yield the normal distribution of the position vector ${\bf d}$ at any time $t$, proving the claim.
$\hfill\square$
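As a numerical sketch of this claim, a random walk with independent Gaussian steps in $\mathbb{R}^3$ reproduces the Brownian signature $\mathbb{E}\left\vert {\bf d}\right\vert^{2}=6Dt$ (with an illustrative diffusion constant $D$):

```python
import numpy as np

rng = np.random.default_rng(0)
n_walks, n_steps, dt, D = 4000, 400, 0.01, 1.0

# i.i.d. Gaussian steps in R^3, each component with variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_walks, n_steps, 3))
d = np.cumsum(steps, axis=1)                    # displacement after each step

# mean-square displacement grows linearly: E|d|^2 = 6 D t in three dimensions
msd = np.mean(np.sum(d**2, axis=2), axis=0)
t = dt * np.arange(1, n_steps + 1)
max_rel_dev = np.max(np.abs(msd / (6.0 * D * t) - 1.0))
print(max_rel_dev)   # small: the walk scales diffusively
```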
By the extension of a random walk on $\mathbb{R}^3$ to the space of states $CP^{L_{2}}$, we understand a walk in $CP^{L_{2}}$ that satisfies ${\bf (RM)}$ and that reduces to the original random walk on $\mathbb{R}^3$ when conditioned to stay on $M^{\sigma}_{3}$.
From ${\bf (C)}$, we know that an extension of the walk with Gaussian steps on $\mathbb{R}^3$ exists. We claim that such an extension is {\it unique}. In fact, because the random walk conditioned to stay on $M^{\sigma}_{3}$ must be Gaussian, the entries of the matrix of the Hamiltonian associated with the directions tangent to $M^{\sigma}_{3}$ at a point are distributed normally. But the probability distribution of a single entry of the matrix defines the corresponding Gaussian unitary ensemble. Because of that, the random walk with Gaussian steps in $\mathbb{R}^3$ defines a unique random walk satisfying ${\bf (RM)}$.
In what follows, the random walk of the state of a particle in ${\bf (RM)}$ will always be assumed to be this unique extension of the random walk with Gaussian steps that approximate the Brownian motion of the particle. We are free to make this choice as long as the resulting extension satisfies our needs. At the same time, this choice makes sense physically, because a displacement of the particle in $\mathbb{R}^3$ results in the like displacement of its state in $M^{\sigma}_{3}$, tying the two motions together.
The theorems ${\bf (A)}$ and ${\bf (C)}$ prove the consistency of the hypothesis ${\bf (EH)}$ for the deterministic Newtonian dynamics and the Brownian motion.
Our goal now is to explain the origin of the constraint to the classical space and phase space submanifolds in the Hilbert space of states. For this, we need to establish properties of the random walk on the space of states in the conjecture ${\bf (RM)}$ and establish its applicability to the process of measurement.
\begin{itquote}{\bf{(D)}}
The probability distribution of steps of the random walk specified in ${\bf (RM)}$ is isotropic and homogeneous. That is, for all initial states $\{\varphi\}$ in the space of states $CP^{H}$,
the vector $d\varphi=-\frac{i}{\hbar}{\widehat h}\varphi dt$ is a normal random vector in the tangent space $T_{\{\varphi\}}CP^{H}$, with spherical distribution.
The probability of reaching a certain state during the walk depends only on the distance between the initial and the final state.
\end{itquote}
To prove this, note that because for any $t$ the matrix of ${\widehat h}$ is in GUE, the probability density function $P({\widehat h})$ of ${\widehat h}$ is invariant with respect to conjugation by unitary transformations. That is, $P(U^{-1}{\widehat h}U)=P({\widehat h})$ for all unitary transformations $U$ acting in the Hilbert space of states.
Also, for all unitary transformations $U$ that leave $\{\varphi\}$ unchanged and therefore all $U$ that act in the tangent space $T_{\{\varphi\}}CP^{H}$, we have
\begin{equation}
\label{v}
(U^{-1}{\widehat h}U \varphi, v)=({\widehat h}U \varphi, Uv)=({\widehat h}\varphi, Uv),
\end{equation}
where $v$ is a unit vector in $T_{\{\varphi\}}CP^{H}$. It follows that
\begin{equation}
\label{vv}
P({\widehat h} \varphi, v)=P({\widehat h} \varphi, Uv),
\end{equation}
where $P$ is the probability density of the components of ${\widehat h}\varphi$ in the given directions.
By a proper choice of $U$, we can make $Uv$ to be an arbitrary unit vector in $T_{\{\varphi\}}CP^{H}$, proving the isotropy of the distribution.
On the other hand, for all unitary operators $V$ in $H$ and a unit vector $v$ in $T_{\{\varphi\}}CP^{H}$, we have
\begin{equation}
\label{vw}
P({\widehat h} \varphi, v)=P(V^{-1}{\widehat h}V \varphi, v)=P({\widehat h}V\varphi, Vv).
\end{equation}
Because $V\varphi$ is an arbitrary state and $Vv$ is in the tangent space $T_{\{V\varphi\}}CP^{H}$, we conclude with the help of (\ref{v}) that the probability density function is independent of the initial state of the system, proving the homogeneity of the distribution. The components of the vector ${\widehat h}\varphi dt$ are independent, because the entries of the matrix of ${\widehat h}$ are independent. It follows that $-\frac{i}{\hbar}{\widehat h}\varphi dt$ is a normal random vector with spherical distribution.
Finally, because different steps of the walk are independent, the probability of reaching a certain state during the walk may depend only on the distance between the initial and the final state.
$\hfill\square$
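The isotropy claim lends itself to a direct numerical sketch: sampling Hamiltonians from a finite-dimensional stand-in for the GUE, the variance of the step component $({\widehat h}\varphi, v)$ comes out the same for arbitrarily chosen directions $v$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 8, 20000   # illustrative finite-dimensional stand-in for GUE

def gue(n):
    """Draw a random Hermitian matrix from the Gaussian unitary ensemble."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2.0

phi = np.zeros(N, dtype=complex)
phi[0] = 1.0                      # fixed initial state

# a few arbitrary unit directions in the space of states
dirs = rng.normal(size=(3, N)) + 1j * rng.normal(size=(3, N))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

comps = np.empty((samples, 3), dtype=complex)
for i in range(samples):
    step = gue(N) @ phi           # the (unnormalized) step h*phi of the walk
    comps[i] = [np.vdot(u, step) for u in dirs]

variances = np.var(comps, axis=0).real
print(variances)   # nearly equal in all directions
```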
Let us show now that the version ${\bf{(RM)}}$ of the BGS-conjecture provides a consistent approach to the process of observation of the position of a single particle in classical and quantum mechanics alike.
\begin{itquote}{\bf{(E)}}
The Schr{\"o}dinger evolution with the Hamiltonian that satisfies ${\bf (RM)}$
models the measurement of the position of a particle in classical and quantum physics. Namely, under the constraint that the random walk that approximates the evolution of state takes place on the submanifold $M^{\sigma}_{3}$, the probability distribution of the position random vector is normal. Without the constraint,
the probability to find the initial state $\varphi_{0}$ at a point $\varphi$ is given by the Born rule. The transition from the initial to final state is time-irreversible.
\end{itquote}
In fact, according to ${\bf (C)}$, the random walk that satisfies ${\bf (RM)}$ but is conditioned to stay on $M^{\sigma}_{3}$ is the random walk with Gaussian steps on $\mathbb{R}^3$. The latter random walk considered over the time interval of observation yields the normal distribution of the position random vector.
Note that the normal distribution of the position agrees with observations in the macro world. It is also consistent with the central limit theorem applied to describe the cumulative effect of uncontrollable fluctuations from the mean in a series of measurement outcomes.
On the other hand, according to ${\bf (D)}$, the probability $P$ of reaching a state $\varphi$ from the initial state $\varphi_0$ by means of an unconstrained random walk that satisfies ${\bf (RM)}$ depends only on the distance between the states. From ${\bf (B)}$, it then follows that $P$ is given by the Born rule: $P=|(\varphi, \varphi_0)|^2$. Also, the choice of the Gaussian unitary ensemble corresponds to time-irreversible systems \cite{Wigner1}. It follows that the model satisfies the basic properties of observation of the position in classical and quantum mechanics.
$\hfill\square$
Quantum mechanics does not explain how and when the deterministic Schr{\"o}dinger evolution is replaced by probabilistic evolution, whose outcomes obey the Born rule. It does not explain why macroscopic bodies are never observed in superpositions of position eigenstates.
The wave-function collapse models \cite{Bassi1},\cite{Bassi2} aim to answer these questions by introducing a non-linear stochastic modification of the Schr{\"o}dinger equation. It is assumed that the modified equation must be non-linear in order to suppress superpositions of states. It is also assumed that the modified equation must vary with a change in the observed quantity. However, here the Born rule was derived from the linear Schr{\"o}dinger equation with the Hamiltonian satisfying ${\bf (RM)}$. Moreover, it was derived for all observed quantities at once, without needing to change the equation of motion. Is there a contradiction?
The modified Schr{\"o}dinger equations in the collapse models make the state of the system converge to an eigenstate of the measured quantity, usually the position of a particle. An equation like that must break an arbitrary initial superposition of states, so it must be non-linear. Under the Schr{\"o}dinger evolution with the Hamiltonian satisfying the conjecture ${\bf (RM)}$, the state of the system does not converge to a position eigenstate.
The equation does not suppress superpositions. It makes the state wander around the space of states and ensures that the probability of reaching a particular neighborhood in the space is given by the Born rule. That explains why the two approaches do not contradict each other.
We are now ready to investigate why the state of a macroscopic body is confined to the classical space submanifold of the space of states.
The key to this is that the Brownian motion and the motion of state under a measurement are now derived from the same dynamics. We know that large particles do not experience a Brownian motion. That is because the total force acting on any such particle from the particles of the medium is nearly zero. As a result, in the absence of an external potential, the particle remains at rest in the medium. A similar mechanism explains why the state of a particle may be confined to the submanifold $M^{\sigma}_{3}$:
\begin{itquote}{\bf{(F)}}
Suppose the Hamiltonian of a particle in the natural surroundings satisfies the conjecture ${\bf (RM)}$. Then, the state of the particle is constrained to the submanifold $M^{\sigma}_{3}$ precisely when the induced Brownian motion of the particle described in ${\bf (C)}$ vanishes.
More precisely, under ${\bf (RM)}$, the boundary between the quantum and the classical occurs for particles that satisfy the following two conditions. The particles must be sufficiently large in size so that their Brownian motion in the natural surroundings is observable. At the same time, the particles must not be too large: once the Brownian motion trivializes, the state initially in $M^{\sigma}_{3}$ becomes constrained to $M^{\sigma}_{3}$ and the Schr{\"o}dinger evolution becomes Newtonian motion.
\end{itquote}
Recall that the spaces $M^{\sigma}_{3}$ and $\mathbb{R}^3$ are metrically identical. Recall also that the random walk in ${\bf (RM)}$ is the unique extension of the random walk in $\mathbb{R}^3$ that approximates the Brownian motion of the particle in the surroundings.
Consider a particle whose initial state is in $M^{\sigma}_{3}$.
Suppose the particle is sufficiently large in size so that the Brownian motion of the particle in the natural surroundings is negligible. Therefore, the motion of state in the directions tangent to $M^{\sigma}_{3}$ is negligible. From ${\bf (D)}$, we know that for the Hamiltonian satisfying ${\bf (RM)}$, the probability distribution of steps of the random walk in the space of states of the particle is isotropic. Because the probability of steps in the directions tangent to $M^{\sigma}_{3}$ vanishes, the same holds true for any other direction tangent to the space of states at the same point.
Therefore, in the absence of an external potential, the motion of state of the particle in the space of states is trivial. When an external potential is applied to the particle, the two middle terms in the decomposition (\ref{decomposition}) may appear. However, these terms can only contribute to the motion of state in the direction tangent to the classical phase space submanifold $M^{\sigma}_{3,3}$. Therefore, the state will remain constrained to the submanifold. The theorem ${\bf{(A)}}$ asserts then that the particle in such a state will move in accord with the Newtonian dynamics.
Conversely, suppose the particle is sufficiently small, but not too small, so that the interaction between the particle and the surroundings cannot be ignored and results in a noticeable Brownian motion of the particle. By the isotropy of the probability distribution of steps of the random walk of state, a displacement of the particle away from $M^{\sigma}_{3}$ is then equally likely. Such a displacement would mean that the particle is now in a superposition of states of different positions. If ${\bf a}$ is the initial position and ${\bf l}$ is the observed displacement of the particle in $M^{\sigma}_{3}$ during the measurement, then the states $g_{{\bf a}, \sigma}$ and $g_{{\bf a}+{\bf l}, \sigma}$ are distinguishable in the experiment. It means that the superpositions of these states can be observed as well, indicating that we are dealing with a quantum system.
$\hfill\square$
We see how the properties of the Hamiltonian in the conjecture ${\bf (RM)}$ are responsible for the fact that the state of a macroscopic particle driven by the Schr{\"o}dinger dynamics with this Hamiltonian is constrained to the classical phase space manifold. One might be tempted to say that the resulting ``freezing'' of the state is in agreement with the quantum Zeno effect for the particle whose position is continuously measured. However, the essential difference is that the result is derived here from the unitary evolution generated by a random Hamiltonian without ever needing to involve the projection operators.
As an example, in the Appendix we find the displacement of a particle of radius $1mm$ in still air under normal conditions. Assuming the particle's position is measured by scattering and observing visible light, the displacement during the time interval of the measurement is found to be of the order of $10^{-12}m$. The corresponding displacement of state in the space of states is of the order of $10^{-7}rad$. These displacements are too small to be observed in the described experiment. The state of the particle remains ``frozen''. The particle is constrained to $M^{\sigma}_{3}$ and no superpositions of states of a different position of the particle can be observed.
As already discussed, the phase space of $n$ particles in the classical space $\mathbb{R}^3$ can be identified with the $6n$-dimensional submanifold $M^{\sigma}_{3n, 3n}=M^{\sigma}_{3,3}\otimes M^{\sigma}_{3,3} \otimes ... \otimes M^{\sigma}_{3,3}$ in the space of states of the system. By ${\bf (A)}$, the Schr{\"o}dinger dynamics for the system with the state constrained to $M^{\sigma}_{3n,3n}$ is equivalent to its Newtonian dynamics.
We know that sufficiently small macroscopic particles in a medium exhibit a Brownian motion. To establish the underlying quantum process, we need to find the Hilbert space of states of the system and to apply the conjecture ${\bf (RM)}$. For instance, whenever the interaction between the particles can be neglected, the Brownian motion of each particle can be considered independently. In this case ${\bf (RM)}$ is applicable to each particle and the previously obtained results apply.
In the opposite case, when the particles interact strongly and form a rigid body, the body as a whole undergoes a Brownian motion in the medium.
As an example of the latter case, consider two particles connected by a weightless rigid rod of length $\Delta$. Then the state of the pair in $M^{\sigma}_{3}\otimes M^{\sigma}_{3}$ is given by ${\widetilde \delta}^3_{\bf a}\otimes {\widetilde \delta}^3_{{\bf a}+{\bf \Delta}}$, where ${\bf \Delta}$ is a vector of length $\Delta$ in $\mathbb{R}^3$. The Hilbert space $H$ of states for the system is a completion of the space of all linear combinations of such states. Under ${\bf (RM)}$, the state of the measured system undergoes a random walk in the space $CP^H$ of the obtained entangled states. The positions of both particles are known when the position of one of them is known, as the path of the walk passes in this case through a point of the manifold $M^{\sigma}_{3}\otimes M^{\sigma}_{3}$. As before, the probability of passing through a particular point is given by the Born rule.
Consider now a system consisting of a microscopic particle $P$ and a macroscopic device $D$ capable of measuring the position of $P$. The particle and the device form a two-particle system whose state belongs to the product Hilbert space $H=H_P \otimes H_D$. Consider the motion of the corresponding classical system. Say $D$ is a cloud chamber, small enough to be considered a material point when treated classically.
The macroscopic device interacts with the surroundings. It also interacts with the particle. However, in classical physics, the effect of the particle on the device can be neglected. The interaction of the device with the surroundings results in a Brownian motion of the device. When the device is sufficiently large, its Brownian motion is trivial and the device is at rest in the lab system. By
applying ${\bf (RM)}$ to the device, we conclude that the state of the device, positioned initially in $M^{\sigma}_{3}$, can be treated independently of the state of the particle and is at rest in the space of states $CP^{H_D}$. On the other hand, a small macroscopic particle placed in the medium of the chamber would undergo a Brownian motion. By
applying ${\bf (RM)}$ to this case, we conclude that the state of the corresponding quantum system will perform a random walk in the space $CP^{H_P}$. According to ${\bf (E)}$, the probability of finding the particle at a particular point of $\mathbb{R}^3$ during the walk is given by the Born rule.
By the previous discussion, the deterministic Schr{\"o}dinger evolution and the random walk that satisfies ${\bf (RM)}$ are unique extensions of their classical counterparts from a classical space submanifold to the space of states of the system. We conclude that the particle-device system initially in a product state must remain in a product state during the evolution. Furthermore, according to ${\bf (F)}$, the state of the device initially positioned in $M^{\sigma}_{3}$ will be confined to this submanifold.
In particular, the theorem ${\bf (A)}$ applies and ensures that in the presence of an external potential, the device will be evolving in accord with the Newtonian dynamics. Under these conditions, the system remains in a product state with both factors being able to change. In particular, the position of the particle can be mechanically recorded with no entanglement between the particle and the device ever appearing or needing to appear in the process.
To summarize:
\begin{itquote}{\bf{(G)}}
Suppose the initial state of the system consisting of a microscopic and a macroscopic particle in the natural surroundings is a product state, with the state of the macroscopic particle in the manifold $M^{\sigma}_{3}$. Suppose also that the conjecture ${\bf (RM)}$ holds true. Then, during the evolution, the state of the system remains in product form, with the state of the macroscopic particle confined to $M^{\sigma}_{3}$ and evolving classically.
\end{itquote}
Theorem ${\bf (G)}$ resolves the apparent inconsistency in the descriptions of quantum mechanics given by macroscopic observers. The issue was first pointed out by Wigner \cite{Wigner2} and has attracted significant interest in recent times.
The reason for the contradiction in the accounts of Wigner and the friends in Wigner's-friend-type thought experiments is the assumption that a macroscopic observer or the whole lab may exist in a superposition of classical states. Provided the conjecture ${\bf (RM)}$ for systems in the natural surroundings holds true, the evolution of the state of the observers and the lab in the space of states can only happen within the classical space submanifold. The joint state of the measured system and the observer or the lab remains a product state. This invalidates the no-go theorem in \cite{Renner} and related results claiming the inconsistency of accounts by the observers.
Note that the experiment reported in \cite{photonObserver}, in which the photons played the role of an observer, does not change this conclusion. The reality as ``described'' by microscopic particles is different from the one the macroscopic observers are aware of.
Note also that the random matrix description of the interaction between macroscopic bodies and the surroundings allowed us to bypass the decoherence-based approach to the transition from quantum to classical. The modifications of the Schr{\"o}dinger equation with the goal of accounting for the measurement results and explaining the transition to classicality \cite{Bassi1,Bassi2} become unnecessary as well. The single outcome of a measurement, the probability of the outcome and the transition from quantum to classical all now have a simple explanation, rooted in the Schr{\"o}dinger equation, the embedding of the classical space into the space of states and the conjecture ${\bf (RM)}$.
Let us also remark that the elegance and the scope of the proposed solution to the issue of the relationship between the quantum and classical systems suggest that natural phenomena are happening in the space of states and not in the classical space. If this is the case, the space of states becomes a new arena for all physical processes, with the classical space of macroscopic objects as a submanifold of the space of states.
\section{Appendix}
The isomorphism between the classical space and the classical phase space and the submanifolds $M^{\sigma}_{3}$ and $M^{\sigma}_{3,3}$ in the space of states $CP^{L_{2}}$ is explained in \cite{Kryukov2020,KryukovMath}. The decomposition (\ref{decomposition}) is obtained by projecting the velocity $\frac{d \varphi}{dt}$ of state under the Schr{\"o}dinger evolution with an arbitrary Hamiltonian ${\widehat h}=-\frac{\hbar^{2}}{2m}\Delta+V({\bf x},t)$ at a point $\varphi=g_{{\bf a}, \sigma}e^{i{\bf p}{\bf x}/\hbar}$ in the classical phase space $M^{\sigma}_{3,3}$ onto an orthonormal set of vectors obtained by varying the parameters ${\bf a}, {\bf p}, \sigma$ that define $\{\varphi\}$, together with the phase parameter $\theta$ of a possible constant phase factor $e^{-i\theta}$ of $\varphi$. Calculation of the classical space components of $\frac{d \varphi}{dt}$ at an arbitrary point $\varphi$ in $M^{\sigma}_{3,3}$ yields
\begin{equation}
\label{pproj}
\left.\mathrm{Re}\left(\frac{d \varphi}{dt}, -\widehat{\frac{\partial \varphi}{\partial a^{\alpha}}}\right)\right|_{t=0}=\frac{p^{\alpha}}{2m\sigma},
\end{equation}
where the hat here and in other calculations denotes normalization.
For the momentum space components of $\frac{d \varphi}{dt}$ at $\varphi$ we similarly obtain, assuming that $\sigma$ is small enough to make the linear approximation of $V({\bf x})$ valid:
\begin{equation}
\label{pproj1}
\left.\mathrm{Re}\left(\frac{d \varphi}{dt}, \widehat{\frac{\partial \varphi}{\partial p^{\alpha}}}\right)\right|_{t=0}=\frac{mw^{\alpha} \sigma}{\hbar}, \quad \text{where} \quad
mw^{\alpha}=-\left.\frac{\partial V({\bf x})}{\partial x^{\alpha}}\right|_{{\bf x}={\bf a}}.
\end{equation}
The components (\ref{pproj}) and (\ref{pproj1}) are tangent to $M^{\sigma}_{3,3}$ and orthogonal to the fibre $\{\varphi\}$.
The component of the velocity $\frac{d\varphi}{dt}$ due to change in $\sigma$ (spreading) is orthogonal to the phase space $M^{\sigma}_{3,3}$ and the fibre $\{\varphi\}$, and is equal to
\begin{equation}
\label{spreadcomp}
\mathrm{Re} \left (\frac{d\varphi}{dt}, i\widehat{\frac{d\varphi}{d\sigma}}\right)=\frac{\sqrt{2}\hbar}{8\sigma^{2}m}.
\end{equation}
The component of the velocity parallel to the fibre $\{\varphi\}$ is the expected value of energy divided by $\hbar$:
\begin{equation}
\label{phase}
\mathrm{Re} \left (\frac{d\varphi}{dt}, -\widehat{\frac{d\varphi}{d\theta}}\right)=\frac{1}{\hbar}(i{\widehat h}\varphi, i\varphi)=\frac{{\overline E}}{\hbar}.
\end{equation}
Calculation of the norm of $\frac{d\varphi}{dt}=\frac{i}{\hbar}{\widehat h}\varphi$ at $t=0$ gives
\begin{equation}
\label{decompositionP}
\left\|\frac{d\varphi}{dt}\right\|^{2}=\frac{{\overline E}^{2}}{\hbar^{2}}+\frac{{\bf p}^{2}}{4m^2\sigma^{2}}+\frac{m^{2}{\bf w}^{2}{\sigma}^{2}}{\hbar^{2}}+\frac{\hbar^{2}}{32\sigma^{4}m^{2}},
\end{equation}
which is the sum of squares of the found components. This completes a decomposition of the velocity of state at any point $\varphi$ in $M^{\sigma}_{3,3}$.
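Indeed, squaring the components (\ref{pproj})--(\ref{phase}) term by term reproduces the right-hand side of (\ref{decompositionP}):
\begin{equation*}
\sum_{\alpha}\Big(\frac{p^{\alpha}}{2m\sigma}\Big)^{2}=\frac{{\bf p}^{2}}{4m^{2}\sigma^{2}},\qquad
\sum_{\alpha}\Big(\frac{mw^{\alpha}\sigma}{\hbar}\Big)^{2}=\frac{m^{2}{\bf w}^{2}\sigma^{2}}{\hbar^{2}},\qquad
\Big(\frac{\sqrt{2}\hbar}{8\sigma^{2}m}\Big)^{2}=\frac{\hbar^{2}}{32\sigma^{4}m^{2}},\qquad
\Big(\frac{\overline{E}}{\hbar}\Big)^{2}=\frac{{\overline E}^{2}}{\hbar^{2}}.
\end{equation*}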
The metric relationship (\ref{mainO}) follows from the inner product of two states in $M^{\sigma}_{3}$:
\begin{equation}
\label{mainOP}
(g_{{\bf a}, \sigma}, g_{{\bf b}, \sigma})=e^{-\frac{({\bf a}-{\bf b})^{2}}{8\sigma^{2}}}.
\end{equation}
This expression squared is equal to the right hand side of (\ref{mainO}) by the definition of the Fubini-Study distance between states in $CP^{L_{2}}$. The result (\ref{mainOG}) is obtained in a similar way by evaluating the Fourier transform of a Gaussian function along the way.
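Since the three-dimensional overlap factorizes over the Cartesian directions, Eq.\,(\ref{mainOP}) can be sanity-checked numerically with a one-dimensional quadrature. The sketch below uses arbitrary illustrative values of $a$, $b$ and $\sigma$ (dimensionless units) and a midpoint rule on a finite grid; these choices are not taken from the text.

```python
import math

def gaussian_overlap_1d(a, b, sigma, half_width=8.0, n=4000):
    """Midpoint-rule estimate of (g_a, g_b) for normalized one-dimensional
    Gaussian states g_a(x) = (2*pi*sigma**2)**(-1/4) * exp(-(x-a)**2/(4*sigma**2))."""
    lo = min(a, b) - half_width * sigma
    hi = max(a, b) + half_width * sigma
    dx = (hi - lo) / n
    norm2 = (2.0 * math.pi * sigma ** 2) ** (-0.5)  # product of the two normalizations
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += norm2 * math.exp(-(x - a) ** 2 / (4 * sigma ** 2)) \
                       * math.exp(-(x - b) ** 2 / (4 * sigma ** 2)) * dx
    return total

a, b, sigma = 0.0, 0.3, 0.1
numeric = gaussian_overlap_1d(a, b, sigma)
analytic = math.exp(-(a - b) ** 2 / (8 * sigma ** 2))  # Eq. (mainOP), per direction
assert abs(numeric - analytic) < 1e-4
```

The quadrature agrees with the closed form to well below the grid error, confirming the per-direction factor $e^{-(a-b)^{2}/(8\sigma^{2})}$.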
The estimate of the diffusion coefficient of a macroscopic spherical particle of radius $1mm$ in still air in normal conditions is based on the Stokes-Einstein relationship
\begin{equation}
D=\frac{k_{B}T}{6\pi \eta r},
\end{equation}
where $D$ is the diffusion coefficient, $r$ is the radius of the particle, $\eta$ is the dynamic viscosity, $T$ is the temperature of the medium and $k_{B}$ is the Boltzmann constant. Using room temperature and the known value of the dynamic viscosity $\eta \sim 10^{-5}N\cdot s/m^2$, we get $D \sim 10^{-12}m^2/s$. The variance of the $x$-coordinate of the position of the particle is given by
$\overline{x^{2}}=2Dt$. If we scatter visible light off the particle to determine its position, the time interval of observation can be estimated to be as short as $10^{-13}s$. This amounts to a displacement of the order of $10^{-12}m$. The accuracy of measurement is limited by the wavelength $\lambda \sim 10^{-5}m$. The Fubini-Study distance between Gaussian states that are $10^{-12}m$ apart in $M^{\sigma}_{3}$ with $\sigma \sim 10^{-5}m$ is calculated via (\ref{mainO}) and is about $10^{-7}rad$.
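The chain of estimates can be reproduced numerically. The sketch below assumes standard textbook values for air at room conditions ($\eta \approx 1.8\times 10^{-5}N\cdot s/m^2$, $T=300K$; these inputs are assumptions, not taken from the text). With them, the displacement and the Fubini-Study angle come out at or below the orders quoted above, which only reinforces the conclusion that they are unobservable.

```python
import math

# Assumed textbook inputs for still air at room conditions
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # temperature, K
eta = 1.8e-5              # dynamic viscosity of air, Pa*s
r = 1.0e-3                # particle radius, m

# Stokes-Einstein diffusion coefficient
D = k_B * T / (6.0 * math.pi * eta * r)

# rms displacement over the assumed observation time of a light-scattering event
t = 1.0e-13               # observation time, s
x_rms = math.sqrt(2.0 * D * t)

# Fubini-Study angle between g_a and g_{a+l}: for small separations,
# arccos of the overlap (mainOP) reduces to l/(2*sigma)
sigma = 1.0e-5            # Gaussian width, of the order of the optical wavelength, m
d_fs = x_rms / (2.0 * sigma)

# the displacement and angle are no larger than the orders quoted in the text
assert D < 1e-12 and x_rms < 1e-12 and d_fs < 1e-7
```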
\section*{References}
\section{Introduction}
Symmetries play a pivotal role in our understanding of interactions that dominate nuclear structure. Their importance extends from quantum field theories such as the SU(3) color group used to proffer an understanding of the quark and gluon dynamics in individual nucleons, to effective field theories and the breaking of chiral symmetry which imposes a critical constraint that generates and controls the dynamics of nuclei in the low-energy regime that can be used to establish a connection with QCD \cite{MACHLEIDT20111, ROBERTS2000S1, CLOET20141, PhysRevLett.106.162002, PhysRevD.91.054035, Eichmann:2016yit}.
Similarly, Effective Field Theories (EFTs) have proven useful in providing model-independent approaches in various other analyses \cite{WeinbergEFT,GEORGI1990447,PhysRevLett.114.052501,doi:10.1146/annurev.nucl.52.050102.090637,RevModPhys.81.1773,PAPENBROCK201136}. The main foundation of any EFT is its ability to exploit a separation of scales between two classes of phenomena: those of interest, like low-energy collective nuclear excitations such as rotations and vibrations, and those associated with the higher-energy aspects of the interaction. Notable nuclear collective models \cite{BohrMottelson,Iachello} have been identified as leading-order Hamiltonians of an EFT approach.
Significant progress achieved in the past decade using continuum Schwinger function methods (CSM) paves a way to observe how an effective strong interaction relevant for low-energy nuclear physics emerges from quark and gluon interactions in the strong QCD regime, characterized by a coupling $\alpha_s/\pi \simeq 1$ \cite{Proceedings:2020fyd, Barabanov:2020jvn, Chen:2020wuq, Chen:2021guo}. Furthermore, the CSM results have been rigorously checked in comparisons with a broad array of different experimental results on meson and baryon structure \cite{Carman:2020qmb, Mokeev:2022xfo, Barabanov:2020jvn, Qin:2020rad, Roberts:2021nhw}.
The importance of symmetries is not limited to EFT analyses. For example, it is well-known that SU(3) is the symmetry group of the spherical harmonic oscillator that underpins the many-particle microscopic nuclear shell model \cite{mayer1955elementary}, patterned in large part after the atomic case, which treats the nucleus as a closed-core system of interacting single particles in valence shells with residual interactions. The latter successfully describes single-particle separation energies and binding energies at the shell closures called magic numbers; however, it fails to describe effects due to the collective motion of the core, such as the emergence of rotational bands in heavy nuclei that can be described phenomenologically by the Bohr-Mottelson collective model \cite{BohrMottelson}, or the fact that the first excited state of the doubly closed-shell nucleus ${}^{16}$O is part of a strongly deformed rotational band, which leads to an experimentally observed non-zero quadrupole moment for its ground state.
The SU(3) model advanced by Elliott \cite{Elliot} was the first group-theoretical model that captured rotational collectivity in a shell-model framework. One can find its roots in the Nilsson model \cite{Nilsson}, which is simply a deformed version of the single-particle shell model. This unveiling of the microscopic origin of collectivity within a nuclear shell-model framework through an algebraic model and the fact that most nuclei are deformed, along with the coexistence of low-lying states in a single nucleus with different quadrupole moments \cite{RevModPhys.83.1467}, paved the way to the development of the Sp(3,\B{R}) Symplectic Model \cite{PhysRevLett.38.10}.
The Sp(3,\B{R}) model is a multi-shell extension of the SU(3) model that allows one to organize the spatial parts of many-particle nuclear configurations into a collection of Pauli-allowed shapes. This is a logical first step of a far more robust theory for grouping many-nucleon configurations into cluster-like shell-model configurations on the lowest rung of what is now known to be an algebraically defined pyramid of deformed eigensolutions coupled through enhanced B(E2) linkages.
Multiple phenomenological and schematic interactions that employ the symplectic symmetry group have been found to give energy spectra, B(E2) quadrupole transitions and radii that are in remarkable agreement with experimental data across the nuclear chart from the lightest systems like ${}^{16}$O \cite{PhysRevLett.97.202501} and ${}^{12}$C \cite{DREYFUSS2013511} through to intermediate-mass nuclei spanning from ${}^{20}$Ne \cite{DRAAYER1984215,PhysRevC.89.034312} and ${}^{24}$Mg \cite{ROSENSTEEL19841,CASTANOS1989349,PhysRevLett.82.5221}, up to strongly deformed nuclei of the rare-earth and actinide regions like ${}^{166}$Er \cite{BAHRI2000125} and even ${}^{238}$U \cite{CASTANOS1991469}.
While such applications of the symplectic model reproduce observed collective patterns in nuclei, they typically rely on schematic or phenomenological interactions. However, recent results from the {\it{ab initio}} Symmetry-adapted No-Core Shell Model (SA-NCSM) \cite{LAUNEY2016101,PhysRevLett.98.162503,PhysRevLett.111.252501,PhysRevLett.124.042501}, which employs realistic chiral effective field theory interactions, strongly suggest that symplectic symmetry is a natural symmetry for nuclei and that its origins should be investigated starting from first principles; that is, from a symplectic effective field theory. Below we show the construction of such a symplectic effective field theory, one which, when applied to symplectic basis states, yields a polynomial of quantum mechanical Hamiltonians for nuclear structure applications. As an application of the theory, results for the ${}^{20}$Ne, ${}^{22}$Ne and ${}^{22}$Mg isotopes are presented.
\section{Symplectic Effective Field Theory}
In this section we present a step-by-step method for building the symplectic effective field theory referenced above. The main concept is to formulate a self-interacting real scalar effective field theory that represents the excitations of a system of $\mathcal{A}$ interacting nucleons. The degrees of freedom of the system are the real scalar fields that represent harmonic oscillator excitations. At leading order, the fields are plane waves that satisfy the equations of motion. At every next-to-leading order, the fields can still be taken to be plane waves, without the need for perturbative corrections, if one imposes a specific requirement on the coupling coefficient of the theory. That requirement, together with the scalar field constraint, sets the overall scale of the theory. This scale can be stated simply in terms of the ratio of the average spherical volume of a nucleus in its ground state to its average volume determined by the number of harmonic oscillator excitations it hosts, which allows the system to stretch in ways that are consistent with the pervasive plane-wave constraint.
The construction of this EFT is done through the following steps: In the first subsection (A) we introduce the harmonic oscillator Lagrangian, extend it to $n$-th order, and present the corresponding solutions. In the second subsection (B) we review features of the symplectic Sp(3,\B{R}) group, its generators, and the nomenclature we will use in defining actions of these generators on states within an irreducible representation (irrep) of the symplectic group, especially on the irrep's lowest-weight state from which all others can be built. In subsection (C) we move to the more familiar Hamiltonian rendering of the dynamics, the details of which -- being quite expansive -- are relegated to an appendix. In subsection (D), the physical features associated with the diagonal elements of the Hamiltonian are examined; while in subsection (E), we do the same for the various off-diagonal elements, resulting in a quantum mechanical Hamiltonian applicable to nuclear systems.
Throughout this paper we use the Einstein convention for repeated indices and natural units ($\hbar=c=1$), along with the following four-vector notation: $x^{\mu}=(t,\B{r})$ for the contravariant position vector and $x_{\mu}=(t,-\B{r})$ for its covariant counterpart, $k^{\mu}=(E,\B{k})$ for the momentum four-vector and $\partial^{\mu}=(\frac{\partial}{\partial t},\frac{\partial}{\partial\B{r}})$ for the derivative. For overall simplicity, we use $L$ and $H$ for the regular Lagrangian and Hamiltonian, and $\mathcal{L}, \mathcal{H}$ for their density equivalents, respectively.
\subsection{The Harmonic Oscillator (HO) Lagrangian and its $n$-th order extension}
The simplest Lagrangian density one can write for a real scalar field $\varphi$ is
\begin{equation}\label{HOLagrangian}
\mathcal{L}=\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi,
\end{equation}
which is the Lagrangian density of a harmonic oscillator (HO) for massless excitations (bosons). The classical (not yet quantized) fields that satisfy the equations of motion of this Lagrangian are given by the plane-wave solution
\begin{equation}\label{PW}
\varphi(\B{r},t)=\frac{1}{(2\pi)^{3/2}}\int_{-\infty}^{+\infty}\psi(\B{k},E)e^{ik^{\mu}x_{\mu}}dEd\B{k}.
\end{equation}
The integration is over four variables: the three momentum components (\B{k}, a spatial vector) and the energy ($E$, a scalar). The construction of the effective field theory is accomplished by taking the Lagrangian in Eq.\,\eqref{HOLagrangian}, extending it naturally to its $n$-th order and adding a mass term at every order,
\begin{equation}\label{EFTLagrangian}
\mathcal{L}^{(n)}=\frac{\alpha^n}{2^{n+1}(n+1)!}\big(\partial_{\mu}\varphi\partial^{\mu}\varphi-nm^2\varphi^2\big)^{n+1}.
\end{equation}
The total Lagrangian density is $\mathcal{L}=\sum_n\mathcal{L}^{(n)}$, where for the $n=0$ term we recover Eq.\,\eqref{HOLagrangian}.
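Explicitly, the two lowest orders of Eq.\,\eqref{EFTLagrangian} read
\begin{equation*}
\mathcal{L}^{(0)}=\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi,\qquad
\mathcal{L}^{(1)}=\frac{\alpha}{8}\big(\partial_{\mu}\varphi\partial^{\mu}\varphi-m^2\varphi^2\big)^{2},
\end{equation*}
so the first correction already contains the quartic self-interaction $m^{4}\varphi^{4}$ together with the derivative couplings $(\partial_{\mu}\varphi\partial^{\mu}\varphi)^{2}$ and $m^{2}\varphi^{2}\,\partial_{\mu}\varphi\partial^{\mu}\varphi$.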
We have added a term $nm^2\varphi^2$, often called the ``mass'' term, at every order. It is added to capture all possible combinations of interaction terms that could result from the lowest powers of $\varphi$ and $\partial_{\mu}\varphi\partial^{\mu}\varphi$. Including $\varphi$ alone only shifts the equations of motion by a constant; therefore, the lowest possible power is $\varphi^2$. In this formulation $\alpha$ is the coupling coefficient of the theory. Since the dimension of a Lagrangian density always has to be $[\mathcal{L}]=4$, it follows that $[\alpha]=-4$ and, consequently, that this is a \emph{non-renormalizable} EFT.
The main advantage of this systematic construction of the Lagrangian density given by Eq.\,\eqref{EFTLagrangian} is its uniqueness: the plane-wave solution given by Eq.\,\eqref{PW} satisfies the equations of motion at every $n$-th order, without the need for perturbative corrections, provided one imposes a specific condition on the coupling, $\alpha\sim 1/N^{3/2}_{av}$, where $N_{av}=\sqrt{N_fN_i}$ is the geometric average of the total number of excitations (bosons) between the initial $\ket{N_i}$ and final $\ket{N_f}$ Fock states that the interaction acts on, as shown in Appendix (A). At $n=0$ the excitations (bosons) are massless, with energy $|E_k|=|\B{k}|$, and for any $n>0$ a mass-like term is introduced through the self-interaction, which turns out to be the main driver of a $Q\cdot Q$ quadrupole-quadrupole type interaction. This, alongside the imposed condition on $\alpha$, allows us to include all terms up to an arbitrary $n$-th order.
The EFT has to be suitable for describing nuclei, and therefore it is necessary to replace Eq.\,\eqref{PW} with discretized fields: localized plane waves within cubic elements of volume $V$ with periodic boundary conditions. This condition requires the plane waves to take the following discrete form:
\begin{gather}
\varphi(\B{r},t)=\frac{1}{\sqrt{V}}\Big(\sum_{\B{k}}\frac{b^+_{\B{k}}}{\sqrt{|2E_k|}}e^{\iota k^{\mu}x_{\mu}}
+ \frac{b^-_{\B{k}}}{\sqrt{|2E_k|}}e^{-\iota k^{\mu}x_{\mu}}\Big),
\label{df1}
\end{gather}
where $b^+_{\B{k}}$ creates and $b^-_{\B{k}}$ destroys an excitation (boson) with energy $|E_k|=|\B{k}|$ by acting on a $\ket{N_k}$ state, where $N_k$ is the number of excitations (bosons) with momentum $\B{k}$.
Now that we have identified the required fields and operators, we need them to describe excitations of a system of $\mathcal{A}$ nucleons. This means the fields must enter pair-wise (quadratically, to preserve the parity of each single-nucleon wave function), and therefore for a nucleus with $\mathcal{A}$ nucleons the Lagrangian density given in Eq.\,\eqref{EFTLagrangian} has to be generalized to
\begin{equation}
\mathcal{L}^{(n)}=\frac{\alpha^n}{2^{n+1}(n+1)!}\big(\partial_{\mu}\varphi_{p}\partial^{\mu}\varphi_{p}-nm^2\varphi_{p}^2\big)^{n+1}, \label{EFTLp}
\end{equation}
which is the Lagrangian density of an $\mathcal{A}$-component real scalar field and is O$(\mathcal{A}-1)$ symmetric [the $(\mathcal{A}-1)$ removes the center-of-mass contribution]. It has been established that the symplectic Sp(3,\B{R}) group is a complementary dual of the O$(\mathcal{A}-1)$ symmetry group [32]. This implies that the Lagrangian itself respects the symplectic structure, meaning that the quantum mechanical Hamiltonian resulting from it, after making specific couplings, preserves symplectic symmetry and does not mix configurations belonging to different symplectic irreps. The subscript $p$ denotes the sum over all nucleons in the system.
\subsection{The Sp(3,\B{R}) algebra and Symplectic basis}
The symplectic symmetry is the natural extension of the SU(3) symmetry and is realized by its 21 many-body generators in their Cartesian form. Since we are constructing a 4-dimensional EFT, it is appropriate to represent the symplectic generators in the interaction picture (Heisenberg representation), where the operators explicitly depend on time. This is done through
\begin{equation}
b^{\pm}(t)=b^{\pm}e^{\pm \iota\Omega t}.
\end{equation}
Using this definition, we obtain
\begin{gather}
\nonumber
A_{ij}= \frac{1}{2}b^+_{ip}b^+_{jp}e^{2\iota\Omega t}, \\
B_{ij}= \frac{1}{2}b^-_{ip}b^-_{jp}e^{-2\iota\Omega t}, \label{SpO} \\
\nonumber
C_{ij}= \frac{1}{2}(b^+_{ip}b^-_{jp}+b^-_{jp}b^+_{ip}),
\end{gather}
where the $i,j$ subscripts denote the spatial directions and the repeated index $p$ implies a sum over the number of nucleons in the system being described. The objects $2\mathcal{Q}_{ij}=C_{ij}+C_{ji}$ are the generators of the Elliott SU(3) group that act within a major harmonic oscillator shell, whereas the symplectic raising operator $A_{ij}$ and its conjugate lowering operator $B_{ij}$ connect states differing in energy by $2\Omega$, twice the harmonic oscillator energy.
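As a concrete illustration of the algebra generated by the operators in (\ref{SpO}), the sketch below restricts to a toy single-mode case (one spatial direction, one particle, $t=0$; these restrictions, the truncation size and the tolerance are arbitrary choices, not part of the model) and verifies the standard Sp(2,\B{R}) commutation relations $[C,A]=2A$, $[C,B]=-2B$ and $[A,B]=-C$ in a truncated Fock basis, away from the truncation edge.

```python
import math

D = 10  # truncated Fock basis |0>, ..., |D-1>

def zeros():
    return [[0.0] * D for _ in range(D)]

def mul(X, Y):
    Z = zeros()
    for i in range(D):
        for k in range(D):
            if X[i][k]:
                for j in range(D):
                    Z[i][j] += X[i][k] * Y[k][j]
    return Z

def add(X, Y, cx=1.0, cy=1.0):
    return [[cx * X[i][j] + cy * Y[i][j] for j in range(D)] for i in range(D)]

def comm(X, Y):
    return add(mul(X, Y), mul(Y, X), 1.0, -1.0)

# single-mode ladder operators: <n-1| b |n> = sqrt(n)
b = zeros()
for n in range(1, D):
    b[n - 1][n] = math.sqrt(n)
bdag = [[b[j][i] for j in range(D)] for i in range(D)]  # b+ is the transpose of b-

A = add(mul(bdag, bdag), zeros(), 0.5, 0.0)    # A = (1/2) b+ b+
B = add(mul(b, b), zeros(), 0.5, 0.0)          # B = (1/2) b- b-
C = add(mul(bdag, b), mul(b, bdag), 0.5, 0.5)  # C = (1/2)(b+ b- + b- b+)

def agrees(X, Y, cut=6):
    # compare only far from the truncation edge, where the cutoff is invisible
    return all(abs(X[i][j] - Y[i][j]) < 1e-9
               for i in range(cut) for j in range(cut))

assert agrees(comm(C, A), add(A, zeros(), 2.0, 0.0))   # [C, A] = 2A
assert agrees(comm(C, B), add(B, zeros(), -2.0, 0.0))  # [C, B] = -2B
assert agrees(comm(A, B), add(C, zeros(), -1.0, 0.0))  # [A, B] = -C
```

In the full model each spatial pair $(i,j)$ of the operators in (\ref{SpO}) closes the 21-dimensional Sp(3,\B{R}) algebra in the same way.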
The interaction picture makes explicit that the symplectic operators $A$ and $B$ are the ones responsible for the dynamics in nuclei (they depend on time), which can be interpreted as vibrations in space and time, whereas the $C$ operators (independent of time) are responsible for the static deformed configurations in nuclei that can rotate freely. This was perhaps implicitly evident from the fact that the Sp(3,\B{R}) symmetry ($A$ and $B$) is the dynamical extension of the SU(3) symmetry ($C$), but it is now explicitly evident through their representation in the interaction picture.
To further understand the significance of these operators it is useful to define the following set of operators which are more suitable for a physical interpretation of the symplectic operators
\begin{gather}
\nonumber
Q_{ij}= \mathcal{Q}_{ij}+A_{ij}+B_{ij}, \\
K_{ij}= \mathcal{Q}_{ij}-A_{ij}-B_{ij}, \label{PSPO} \\
\nonumber
L_{ij}= -\iota(C_{ij}-C_{ji}), \\
\nonumber
S_{ij}= 2\iota(A_{ij}-B_{ji}),
\end{gather}
where $Q_{ij}$ is the quadrupole tensor and is responsible for deformation, $K_{ij}$ is the many-body kinetic tensor, $L_{ij}$ is the angular momentum tensor responsible for rotations, and $S_{ij}$ is the vorticity tensor responsible for the flow of deformation.
Symplectic basis states are constructed by acting with the symplectic raising operator $A$ on the so-called bandhead of the symplectic irrep, $\ket{\sigma}$, which is unique and is defined to be a lowest-weight state, $B\ket{\sigma}=0$. The bandhead $\ket{\sigma}$ is mathematically similar to how the vacuum behaves, $b\ket{0}=0$. However, unlike the vacuum, $\ket{\sigma}$ can contain physical particles; namely, nucleons.
The complete labeling of a symplectic basis state that is constructed from its $\ket{\sigma}$ bandhead irrep is $\ket{\sigma n\rho\omega\kappa LM}$, where $\sigma\equiv N_{\sigma}(\lambda_{\sigma}\mu_{\sigma})$ labels the bandhead, $n\equiv N_n(\lambda_n\mu_n)$ labels the excited state that, coupled to the bandhead, yields the final configuration labeled by $\omega\equiv N_{\omega}(\lambda_{\omega}\mu_{\omega})$ with $\rho$ multiplicity, and $L$ is the angular momentum with $\kappa$ multiplicity and $M$ projection. $N_{\omega}=N_{\sigma}+N_{n}$ is the total number of bosons (oscillator quanta), and $\lambda_a=(N_a)_z-(N_a)_x$ and $\mu_a=(N_a)_x-(N_a)_y$ are the SU(3) quantum numbers, where $a=\sigma,\omega,n$. They denote the intrinsic deformation of the state since they count the difference in the number of oscillator quanta in the $z$ and $x$, and $x$ and $y$ directions, respectively.
These states are ideal for describing collective features of nuclei and for serving as basis states for the EFT. Similar to the Fock state notation $\ket{N_k}$, symplectic basis states also describe bosons, numbered by $N_{\omega}$, making them suitable basis states to which an application of our EFT interaction yields a quantum mechanical Hamiltonian that can be utilized for carrying out nuclear structure studies.
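To make the labeling concrete, the SU(3) quantum numbers follow directly from the Cartesian quanta counts defined above, $\lambda=N_z-N_x$ and $\mu=N_x-N_y$. A small illustrative helper in Python; the particular quanta split below is hypothetical, chosen only to yield a prolate $(\lambda,\mu)=(8,0)$ shape, and is not the actual ${}^{20}$Ne bookkeeping:

```python
def su3_labels(Nz, Nx, Ny):
    """SU(3) deformation labels from Cartesian oscillator-quanta counts."""
    return Nz - Nx, Nx - Ny

# Quanta concentrated along z, with x and y balanced -> prolate shape
lam, mu = su3_labels(Nz=10, Nx=2, Ny=2)
print(lam, mu)  # (8, 0)
```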
\subsection{SpEFT Hamiltonian}
The $n$-th order Hamiltonian density, derived in Appendix (B), is
{\allowdisplaybreaks
\begin{gather}
\mathcal{H}^{(n)}=\frac{\alpha^n}{2^{n+1}(n+1)!}\big(\dot{\varphi}_{p_1}^2-\varphi_{p_1}^{\prime}\cdot \varphi_{p_1}^{\prime}-nm^2\varphi_{p_1}^2\big)^n \nonumber \\ \label{EFTHamiltonian}
\times\big((2n+1)\dot{\varphi}_{p_2}^2+\varphi_{p_2}^{\prime}\cdot \varphi_{p_2}^{\prime} + nm^2\varphi_{p_2}^2\big),
\end{gather}
where $\dot{\varphi}\equiv \frac{\partial\varphi}{\partial t}$ and $\varphi^{\prime}\equiv \frac{\partial\varphi}{\partial \B{r}}$.
The total Hamiltonian density at the $n$-th order is a sum over all possible $n+1$ combinations of the $(n+1)$-th term in the second parenthesis in Eq.\,\eqref{EFTHamiltonian} with respect to the $n$ terms in the first parenthesis; it is therefore Hermitian (see Appendix (B) for proof). However, for purposes of calculating matrix elements it is sufficient to only consider Eq.\,\eqref{EFTHamiltonian} and then add all the possible other combinatorial terms as described.
}
The coupled fields in Eq.\,\eqref{EFTHamiltonian} are
\begin{gather}
\varphi_{p}^2=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{1}{\sqrt{|2E_k||2E_q|}} \nonumber
\times \\ \nonumber
\Big(b^+_{p\B{k}}e^{\iota k^{\mu}x_{\mu}} + b^-_{p\B{k}}e^{-\iota k^{\mu}x_{\mu}}\Big) \Big(b^+_{p\B{q}}e^{\iota q^{\mu}x_{\mu}}+b^-_{p\B{q}}e^{-\iota q^{\mu}x_{\mu}} \Big), \\ \nonumber
\dot{\varphi}_{p}^2=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{\iota E_k }{\sqrt{|2E_k|}}\frac{\iota E_q}{\sqrt{|2E_q|}} \times \\ \nonumber
\Big(b^+_{p\B{k}}e^{\iota k^{\mu}x_{\mu}}
- b^-_{p\B{k}}e^{-\iota k^{\mu}x_{\mu}}\Big) \Big(b^+_{p\B{q}}e^{\iota q^{\mu}x_{\mu}}-b^-_{p\B{q}}e^{-\iota q^{\mu}x_{\mu}} \Big) , \\ \nonumber
\varphi_{p}^{\prime}\cdot\varphi_{p}^{\prime}=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{(-\iota \B{k})}{\sqrt{|2E_k|}}\frac{(-\iota \B{q})}{\sqrt{|2E_q|}} \times \\ \label{FP}
\Big(b^+_{p\B{k}}e^{\iota k^{\mu}x_{\mu}} - b^-_{p\B{k}}e^{-\iota k^{\mu}x_{\mu}}\Big) \Big(b^+_{p\B{q}}e^{\iota q^{\mu}x_{\mu}}-b^-_{p\B{q}}e^{-\iota q^{\mu}x_{\mu}} \Big) .
\end{gather}
For convenience, from here on we will drop the index $p$ denoting the sum over particle numbers from the fields, since it does not affect any of the follow-on derivations and can be recovered as may be required at any time. As is evident from the formulas above, the creation and annihilation operators enter into the Hamiltonian density in pairs of $b^+b^+$, $b^-b^-$, $b^+b^-$ and $b^-b^+$.
This allows us to describe them through the symplectic operators defined in Eqs.\,\eqref{SpO}.
This definition further enables us to transition from the Fock states $\ket{N_k}$ to the symplectic states $\ket{\sigma}$, where $N_{\omega}$ plays the role of the number of bosons $N_k$ created (destroyed) by $b^+_{\B{k}}$ ($b^-_{\B{k}}$) with momentum $\B{k}$.
Knowing this we can rewrite the fields in Eqs.\,\eqref{FP} as follows:
\begin{gather}
\varphi^2=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{1}{\sqrt{|2E_k||2E_q|}} \nonumber
\times \\ \nonumber
\Big(Z^+_{\B{k}}Z^+_{\B{q}}+Z^-_{\B{k}}Z^-_{\B{q}}+Z^+_{\B{k}}Z^-_{\B{q}}+Z^-_{\B{k}}Z^+_{\B{q}} \Big), \\ \nonumber
\dot{\varphi}^2=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{\iota E_k }{\sqrt{|2E_k|}}\frac{\iota E_q}{\sqrt{|2E_q|}} \times \\ \nonumber
\Big(Z^+_{\B{k}}Z^+_{\B{q}}+Z^-_{\B{k}}Z^-_{\B{q}}-Z^+_{\B{k}}Z^-_{\B{q}}-Z^-_{\B{k}}Z^+_{\B{q}} \Big) , \\ \nonumber
\varphi^{\prime}\cdot\varphi^{\prime}=\frac{1}{V}\sum_{\B{k},\B{q}}\frac{(-\iota\B{k})}{\sqrt{|2E_k|}}\frac{(-\iota\B{q})}{\sqrt{|2E_q|}} \times \\ \label{ZFP}
\Big(Z^+_{\B{k}}Z^+_{\B{q}}+Z^-_{\B{k}}Z^-_{\B{q}}-Z^+_{\B{k}}Z^-_{\B{q}}-Z^-_{\B{k}}Z^+_{\B{q}} \Big) ,
\end{gather}
where, for further convenience, we use the notation
$Z^{\pm}_{\B{k}}=b^{\pm}_{\B{k}}e^{\pm \iota k^{\mu}x_{\mu}}$. Finally, with these simplifying definitions in place, the Hamiltonian density in Eq.\,\eqref{EFTHamiltonian} can be rewritten as follows:
\begin{gather}
\label{Hn}
\mathcal{H}^{(n)}=\frac{1}{2^{n+1}(n+1)!}\frac{\alpha^n}{V^{n+1}}\sum_{\B{k}_1\B{k}_2\cdots\B{k}_{n+1}}\sum_{\B{q}_1\B{q}_2\cdots\B{q}_{n+1}} \\ \nonumber \frac{\mathcal{Z}_1\mathcal{Z}_2\cdots\mathcal{Z}_n\Xi_{n+1}}{2^{n+1}\sqrt{E_{k_1}E_{k_2}\cdots E_{k_{n+1}}E_{q_1}E_{q_2}\cdots E_{q_{n+1}}}},
\end{gather}
where we further use the following notation:
\begin{gather}
\mathcal{Z}_n=\bigg((-E_{k_n}E_{q_n}+\B{k}_n\cdot\B{q}_n-nm^2)(Z^+_{\B{k}_n}Z^+_{\B{q}_n}+Z^-_{\B{k}_n}Z^-_{\B{q}_n})\nonumber \\ -(-E_{k_n}E_{q_n}+\B{k}_n\cdot\B{q}_n+nm^2)(Z^+_{\B{k}_n}Z^-_{\B{q}_n}+Z^-_{\B{k}_n}Z^+_{\B{q}_n})\bigg).
\end{gather}
\begin{gather}
\Xi_{n+1}=\bigg((-(2n+1)E_{k_{n+1}}E_{q_{n+1}}-\B{k}_{n+1}\cdot\B{q}_{n+1}+nm^2) \nonumber \times \\ (Z^+_{\B{k}_{n+1}}Z^+_{\B{q}_{n+1}}+Z^-_{\B{k}_{n+1}}Z^-_{\B{q}_{n+1}}) \nonumber \\ -(-(2n+1)E_{k_{n+1}}E_{q_{n+1}}-\B{k}_{n+1}\cdot\B{q}_{n+1}-nm^2) \nonumber \times \\ (Z^+_{\B{k}_{n+1}}Z^-_{\B{q}_{n+1}}+Z^-_{\B{k}_{n+1}}Z^+_{\B{q}_{n+1}})\bigg).
\end{gather}
With all of this in place, we finally come to an expression for the Hamiltonian density, $\mathcal{H}$, that enters into the integral for $H$ that we consider next.
\subsection{Diagonal coupling and Monopole Hamiltonian}
The Hamiltonian is
\begin{equation}
H= \int_{-L}^{+L}\mathcal{H}dV.
\end{equation}
Substituting $\mathcal{H}$ into the integral, we notice that for every order $n$ we have $n+1$ pairs of $Z^+Z^+$, $Z^+Z^-$ and their conjugates multiplied with each other inside the integration. To demonstrate this, let us consider a term like $Z^+Z^+$ for the simplest case, $n=0$, which gives a term like
\begin{equation}
\int_{-L}^{+L}b_{\B{k}}^+(t)b_{\B{q}}^+(t)e^{ i(\B{k}+\B{q})\cdot\B{r}}dV,
\end{equation}
where we absorbed the time component of the exponent into the operators. This integral is zero because of the periodic boundary condition on $\B{k}$ and $\B{q}$, unless $\B{k}+\B{q}=0$. To generalize to $n+1$ pairs, each term $Z^+Z^+$ and $Z^-Z^-$ has to have $\B{k}+\B{q}=0$, and each term $Z^+Z^-$ and $Z^-Z^+$ has to have $\B{k}-\B{q}=0$. We call this ``diagonal coupling'' because for each pair in the integral we couple $\B{k}$ to $\B{q}$, for example $\B{k}_1$ to $\B{q}_1$, $\B{k}_2$ to $\B{q}_2$, $\B{k}_n$ to $\B{q}_n$, etc. The result of this simple coupling is that each $\mathcal{Z}$ term in Eq.\,\eqref{Hn} does not interact with other similar terms, meaning the sums do not mix with each other inside the integral, which leads to the Hamiltonian presented in Eq.\,\eqref{3-1}, below. (See Appendix (C) for this derivation.)
\begin{widetext}
\begin{gather}
H^{(n)}=\frac{1}{2^{n+1}(n+1)!}\sum_{\B{k}_1\B{k}_2\cdots\B{k}_{n+1}}\frac{1}{2^{n+1}E_{k_1}E_{k_2}\cdots E_{k_{n+1}}}\frac{\alpha^n}{V^{n}}\times \nonumber \\ \bigg((-2E_{k_1}^2-2(2n-1)E_{k_1}^2)(b^+_{\B{k}_1}b^+_{-\B{k}_1}e^{2iE_{k_1}t}+b^-_{\B{k}_1}b^-_{-\B{k}_1}e^{-2iE_{k_1}t})\bigg)\times \nonumber \\ \cdots \times
\bigg((-2E_{k_n}^2-2(2n-1)E_{k_n}^2)(b^+_{\B{k}_n}b^+_{-\B{k}_n}e^{2iE_{k_n}t}+b^-_{\B{k}_n}b^-_{-\B{k}_n}e^{-2iE_{k_n}t})\bigg)\times \nonumber \\
\bigg(-2E_{k_{n+1}}^2(b^+_{\B{k}_{n+1}}b^+_{-\B{k}_{n+1}}e^{2iE_{k_{n+1}}t}+b^-_{\B{k}_{n+1}}b^-_{-\B{k}_{n+1}}e^{-2iE_{k_{n+1}}t}) +(2n+2)E_{k_{n+1}}^2(b^+_{\B{k}_{n+1}}b^-_{\B{k}_{n+1}}+b^-_{\B{k}_{n+1}}b^+_{\B{k}_{n+1}})\bigg). \label{3-1}
\end{gather}
\end{widetext}
Comparing the terms in the Hamiltonian to the symplectic operators defined in Eq.\,\eqref{SpO}, it is straightforward to see that this Hamiltonian is symplectic in nature because, for example, $b^-_{\B{k}_1}b^-_{-\B{k}_1}$ is simply $b^-_ib^-_i$, which is the symplectic lowering operator $B_{ii}$ after we recover the sum over particle number $p$. This implies that symplectic symmetry emerges naturally from the EFT Lagrangian in Eq.\,\eqref{EFTLagrangian}, which was constructed simply by extending the harmonic oscillator Lagrangian out to its $n$-th order. It also implies that symplectic symmetry is an extension of the SU(3) symmetry of the harmonic oscillator, which algebraically was known and well understood through nuclear physics applications. Here, however, as seen through these developments, symplectic symmetry in nuclear physics has its origin at a more fundamental level than previously considered; specifically, it derives from and is underpinned by a logical EFT formulation.
In this effective field theory for determining the structure of atomic nuclei, the nucleons in a nucleus are represented through the total energy quanta (bosons) they bring forward, which in turn can be created and destroyed at all possible energy values. This can be envisioned as having an infinite set of quantum mechanical harmonic oscillator systems, each with energy $E_k$, wherein the nucleons are contained. Such excitations are constrained to a very narrow range of possible energy values. Studies done with mean field models and also with realistic interactions all support such a claim, as do calculations using the NCSpM \cite{PhysRevC.89.034312} and SA-NCSM \cite{PhysRevLett.124.042501}. Specifically, in the latter cases, one typically finds that utilizing a single symplectic irrep suffices to recover nearly $70\%$-$80\%$ of the probability distribution, and more specifically, accounts for nearly $90\%$-$100\%$ of observables, such as energy spectra, B(E2) values and radii.
Given the fact that in all such cases the excitation quanta include only a single energy mode, $\hbar\Omega=41\mathcal{A}^{-1/3}$ MeV,
the application of the Hamiltonian on a single symplectic state further reduces it to only one surviving term from each sum over $\B{k}$; namely, the term where $E_k=\hbar\Omega$. This reduces Eq.\,\eqref{3-1} to
\begin{gather}
H^{(n>1)}_d=(-n)^n(2\hbar\Omega)^{n+1}\frac{\alpha^n}{V^n}(A_{ii}+B_{ii})^n\times \nonumber \\
\big((n+1)C_{jj}-(A_{jj}+B_{jj})\big). \label{3-2}
\end{gather}
This is a quantum mechanical Hamiltonian that is the natural extension of the harmonic oscillator Hamiltonian for $n\geq 1$. It represents a one-body interaction extended to an arbitrary $n$-th order. It results from the diagonal coupling discussed above (hence the $d$ subscript), and is solely responsible for generating monopole excitations in nuclei that do not contribute to the dynamics, since they are just powers of $A_{ii}+B_{ii}$. They destroy the leading-order harmonic oscillator at every $n\geq 1$, which is unphysical, and hence they have to be removed from the final Hamiltonian. What is responsible for dynamics are vibrations in space and time due to the quadrupole excitations in nuclei that result from off-diagonal couplings in the Hamiltonian in Eq.\,\eqref{Hn}. The only term from the diagonal coupling that contributes to the Hamiltonian is the harmonic oscillator, which is $H^{(0)}=\hbar\Omega C_{ii}$.
\subsection{Off-Diagonal Coupling and the Quadrupole Hamiltonian}
In this section, we consider, for example, two pairs of $Z$'s, $Z^+Z^+$, inside the integral resulting from multiplying two $\mathcal{Z}$ factors in Eq.\,\eqref{Hn}, namely,
\begin{equation}
\int_{-L}^{+L}b_{\B{k}_1}^+(t)b_{\B{q}_1}^+(t)b_{\B{k}_2}^+(t)b_{\B{q}_2}^+(t)e^{ i(\B{k}_1+\B{q}_1+\B{k}_2+\B{q}_2)\cdot\B{r}}dV.
\end{equation}
For $n=1$, as we discussed before, this will be zero unless $\B{k}_1+\B{q}_1+\B{k}_2+\B{q}_2=0$. We managed this before by picking $\B{k}_1+\B{q}_1=0$ and $\B{k}_2+\B{q}_2=0$, etc., which resulted in the diagonal coupling. However, there are many ways to make $\B{k}_1+\B{q}_1+\B{k}_2+\B{q}_2=0$. What is particularly interesting is if we choose $\B{k}_1+\B{q}_2=0$ and $\B{k}_2+\B{q}_1=0$. This results in a pair $A_{\B{k}_1-\B{k}_2}A_{\B{k}_2-\B{k}_1}$ which creates a boson pair with momenta $\B{k}_1$ and $-\B{k}_2$, respectively, and creates another pair with momenta $\B{k}_2$ and $-\B{k}_1$, respectively, such that the total momentum of both pairs is conserved. If we pick $\B{k}_1=-\B{k}_2$, this will result in the diagonal coupling Hamiltonian in Eq.\,\eqref{3-2} derived in the previous section. However, we can pick $|\B{k}_1|=|\B{k}_2|$ such that $\B{k}_1\perp\B{k}_2$. This reduces $A_{\B{k}_1-\B{k}_2}A_{\B{k}_2-\B{k}_1}$ to $A_{ij}A_{ji}$, which creates two boson pairs in the $i$-th and $j$-th directions such that $i\neq j$. The same argument applies to other terms like $Z^+Z^+Z^-Z^-$, $Z^+Z^+Z^+Z^-$, $Z^-Z^-Z^+Z^-$, etc.
The expression for the off-diagonal Hamiltonian depends on $n$. If $n$ is odd, then we have an even number, $n+1$, of pairs of $Z^{\pm}$ that can all be coupled to each other, resulting in $(n-1)/2$ identical pairs and one unique pair.
If $n$ is even, then we have an odd number, $n+1$, of pairs, from which we can form either $n/2$ identical pairs and a unique pair, or $(n-2)/2$ identical pairs and two unique pairs.
This results in three off-diagonal Hamiltonians, one for odd $n$ and two for even $n$, for $n>0$, since for $n=0$ only diagonal coupling is possible. The resulting expressions will contain terms like $A_{ij}A_{ji}$, $C_{ij}C_{ji}$, $B_{ij}C_{ji}$, etc., which can be represented in terms of $Q_{ij}$ and $K_{ij}$, resulting in three two-body Hamiltonian expansions (see Appendix (D) for this derivation). The expansion resulting from the off-diagonal coupling in Eq.\,\eqref{Hn} at every $n=odd$ is
\begin{gather}
H^{(n=odd)}_{od}=\frac{(\hbar\Omega)^{n+1}}{2^{n+1}}\frac{\alpha^n}{V^{n}}\times \nonumber \\ \big(g_n^2Q_{ij}Q_{ji}+K_{ij}K_{ji} -g_n\lbrace Q_{ij},K_{ji}\rbrace\big)^{(n-1)/2} \times \nonumber \\\big(-g_n^2Q_{ij}Q_{ji}+(2n+1)K_{ij}K_{ji} -ng_n\lbrace Q_{ij},K_{ji}\rbrace\big).
\end{gather}
In the above equation, $g_n$ denotes the strength of the quadrupole operator and is tied to the mass parameter introduced in Eq.\,\eqref{EFTLagrangian} (see Appendix (A) for derivation).
The expansion resulting from the off-diagonal coupling in Eq.\,\eqref{Hn} at every $n=even$ is
\begin{gather}
H^{(n=even)}_{od}=\frac{(\hbar\Omega)^{n+1}}{2^{n+1}}\frac{\alpha^n}{V^{n}}\times \nonumber \\ \big(g_n^2Q_{ij}Q_{ji}+K_{ij}K_{ji}-g_n\lbrace Q_{ij},K_{ji}\rbrace\big)^{n/2} \times \nonumber \\ \big((n+1)C_{ll}-(A_{jj}+B_{jj})\big).
\end{gather}
This is one of the two expansions resulting from the off-diagonal coupling in Eq.\,\eqref{Hn} at every $n=even$; the other one is removed, since every term in it is proportional to $A_{ii}+B_{ii}$ (see Appendix (D) for additional details).
\section{Analysis and Results}
Incorporating all the above considerations, the final quantum mechanical Hamiltonian is as follows:
\begin{equation}
H= \hbar\Omega C_{ii} + \sum_{n=1}H^{(n=odd)}_{od}+\sum_{n=2}H^{(n=even)}_{od}. \label{QMH}
\end{equation}
Each $n$-th term in this Hamiltonian is an $(n+1)$-body interaction. Therefore this Hamiltonian, as formulated above, includes all possible interaction terms up to infinity, but excludes terms resulting from triple, quadruple or even higher off-diagonal couplings that are naturally associated with new power series of 3-body and 4-body character, for example terms like $Q_{ij}Q_{jf}Q_{fi}$ and $Q_{ij}Q_{jf}Q_{fl}Q_{li}$, respectively. These terms lie outside the scope of the present paper, as here we have chosen to limit the theory to at most two off-diagonal coupling terms, and the resulting interactions and their respective powers.
In this section we will outline in subsection (A) how the parameters of the theory are chosen, how the effective parameters of the Hamiltonian tie to the parameters introduced in the Lagrangian density, and finally discuss their physical implications. In subsection (B) the dynamical effects of the interaction and the time average of the Hamiltonian will be derived. Finally, in subsection (C) we will present some results for applications of this Hamiltonian to ${}^{20}$Ne, ${}^{22}$Ne and ${}^{22}$Mg.
\subsection{Parameters of the EFT}
The resulting Hamiltonian for this EFT [Eq.\,\eqref{QMH}] is effectively a two-parameter theory, the parameters being $\frac{\alpha}{V}\hbar\Omega$ and $g_n$. The parameter $\frac{\alpha}{V}\hbar\Omega=\frac{V_{\mathcal{A}}}{b^3N^3_{av}}$ establishes a clear separation of scales. It is simply the ratio of the average spherical nuclear volume $V_{\mathcal{A}}=\frac{4}{3}\pi R^3$, where $R=1.2\mathcal{A}^{1/3}$ is the average radius of the ground state, over the volume corresponding to the harmonic oscillator excitations $V=b^3N_{av}^{3/2}$, where $b$ is the oscillator length, along with the plane-wave condition $\alpha\sim 1/N^{3/2}_{av}$ that allows further stretching of $V$.
If $\frac{V_{\mathcal{A}}}{b^3N^3_{av}}\ll 1$, the average volume of the nucleus is less than the volume determined by adding up the total number of its excitation quanta, in which case a plane-wave solution is valid, thereby preserving the harmonic oscillator structure. But if $\frac{V_{\mathcal{A}}}{b^3N^3_{av}} \sim 1$, signaling that the volume determined by adding up the number of oscillator excitations in play is approaching the average volume, a plane-wave solution becomes untenable, with higher-order corrections becoming ever more relevant, ending in a complete breakdown when the $Q\cdot Q$ interaction destroys the HO structure. In short, as more nucleons are added to the system, their corresponding boson excitations have to also increase appropriately such that they can be represented by plane waves, as indicated by the $\frac{V_{\mathcal{A}}}{b^3N^3_{av}}$ measure. This further emphasizes the fact that this scale is valid as long as a shell structure description is appropriate for the systems being studied.
The second parameter of the theory is $g_n$, which is the strength of the quadrupole operator, because only the quadrupole terms in Eq.\,\eqref{QMH} carry this parameter as a multiplier. The value of this parameter determines the strength with which the quadrupole tensor enters into any analysis relative to that of the kinetic tensor. In particular, it should be clear that if $g_n=0$ one gets a power series in $K_{ij}K_{ji}$. It is therefore important that $g_n>1$ for all $n$, which is required to balance the interaction between $Q_{ij}Q_{ji}$, $K_{ij}K_{ji}$ and $Q_{ij}K_{ji}$.
The parameter $g_n$ is expressed in terms of the mass-like parameter introduced in Eq.\,\eqref{massterm} to represent the strength of the interaction. It results from off-diagonal couplings that depend on $n$, and it has a simple physical interpretation,
\begin{equation}
g_n=\frac{2n-1}{n}g.
\end{equation}
The consequence of this is that the energy of the bosons contributing to the formation of a given final symplectic configuration, starting from an initial one, increases uniformly as more pairs are considered. This simple picture suggests that, since $g_1=g$ and $g_{\infty}=2g$, as $n\to\infty$ the weight of $Q_{ij}Q_{ji}$ will scale as
\begin{equation}
\lim_{n\to\infty}\frac{(\hbar\Omega)^{n+1}}{2^{n+1}}\frac{\alpha^n}{V^{n}}g_n^{n+1}=0.
\end{equation}
Although this means there will be a new parameter $g_n$ at every order of the Hamiltonian, they are all determined once $g$ is fixed, and therefore the theory is effectively a two parameter theory; namely, $\frac{V_A}{b^3N^3_{av}}$ and $g$.
In applications of the theory, unlike $\frac{V_A}{b^3N^3_{av}}$, $g$ can be fitted to known observables.
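The scaling argument above can be checked numerically. The sketch below is a hedged illustration: the values of $\hbar\Omega$, the scale, and $g$ are taken from the ${}^{20}$Ne discussion and Table (\ref{tablescale}) later in this section, and the order-$n$ weight of $Q_{ij}Q_{ji}$ is written via the dimensionless scale $s=\frac{\alpha}{V}\hbar\Omega$. It confirms that $g_n$ rises monotonically from $g$ toward $2g$ while the weight is suppressed order by order:

```python
hw = 41 * 20 ** (-1 / 3)   # hbar*Omega for A = 20, ~15.1 MeV
s = 2.79e-4                # maximum scale V_A/(b^3 N_av^3) for 20Ne (Table II)
g = 14.0                   # fitted quadrupole strength for 20Ne

def g_n(n):
    # g_n = (2n-1)/n * g: g_1 = g, approaching 2g as n -> infinity
    return (2 * n - 1) / n * g

def qq_weight(n):
    # (hbar Omega)^(n+1)/2^(n+1) (alpha/V)^n g_n^(n+1), rewritten via s
    return hw / 2 ** (n + 1) * s ** n * g_n(n) ** (n + 1)

assert g_n(1) == g and all(g_n(n) < g_n(n + 1) < 2 * g for n in range(1, 50))
weights = [qq_weight(n) for n in range(1, 8)]
assert all(a > b for a, b in zip(weights, weights[1:]))  # order-by-order suppression
```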
\subsection{Time Average of the Hamiltonian}
The Hamiltonian in Eq.\,\eqref{QMH} depends on time implicitly. This dependence comes through the symplectic operators defined in Eqs.\,\eqref{SpO}, and since they enter in pairs, for example, in the two-body interaction terms they will have the following time factors: $e^{\pm4\iota\Omega t}$ for $A_{ij}A_{ji}$ and $B_{ij}B_{ji}$, $e^{\pm2\iota\Omega t}$ for $A_{ij}C_{ji}$ and $B_{ij}C_{ji}$, and unity for $A_{ij}B_{ji}$ and $C_{ij}C_{ji}$ (``$+$'' for $A$s and ``$-$'' for $B$s). It is evident that the time-independent terms, like $A_{ij}B_{ji}$ and $C_{ij}C_{ji}$, are responsible for rotations. As for the dynamical terms, like $A_{ij}A_{ji}$ and $B_{ij}B_{ji}$, they are responsible for vibrations in nuclei.
The time dependence has to be integrated out. This is done by averaging the Hamiltonian over the time period that the nucleons interact with each other.
\begin{equation}
H=\frac{1}{T}\int^T_0 H(t) dt,
\end{equation}
where $H(t)$ is the Hamiltonian given in Eq.\,\eqref{QMH} with its time dependence written explicitly. $T$ is the upper time limit over which the strong interaction propagates, and it is of order
$T=\frac{2R}{c}=\frac{2\times10^{-15}\,{\rm m}}{3\times10^8\,{\rm m/s}}\sim 10^{-23}\,{\rm s}$. This allows us to evaluate the integral in the limit $T\to0$, which implies that the self-interacting fields in our EFT interact almost simultaneously
\begin{equation}
H=\lim_{T\to0}\frac{1}{T}\int^T_0 H(t) dt.
\end{equation}
The time-independent terms come out of this integral unchanged. As for the time-dependent ones, let us consider an explicit example, $e^{4\iota\Omega t}$, which will appear to the power $\frac{n+1}{2}$ for $n=odd$ and to the power $\frac{n}{2}$ for $n=even$. For the odd case we have
\begin{equation}
\lim_{T\to0}\frac{1}{T}\int^T_0 e^{2(n+1)\iota\Omega t} dt=\lim_{T\to0}\frac{e^{2(n+1)\iota\Omega T} -1}{2(n+1)\iota\Omega T}=1 .
\end{equation}
This proves that the time-dependent terms also come out unchanged, except that they drop their exponential time factors. The same proof applies to the even expansion terms as well.
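The limit used in this proof can also be verified numerically: the averaging factor $f(x)=\frac{e^{\iota x}-1}{\iota x}$, with $x=2(n+1)\Omega T$, approaches unity as $x\to 0$. A minimal check:

```python
import cmath

def avg_factor(x):
    """Time-average factor (e^{ix} - 1)/(ix) of a single oscillating term."""
    return (cmath.exp(1j * x) - 1) / (1j * x)

for x in (1e-1, 1e-3, 1e-6):
    # |f(x) - 1| ~ x/2, so the factor tends to 1 linearly as T -> 0
    assert abs(avg_factor(x) - 1) < x
```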
So finally, the Hamiltonian in Eq.\,\eqref{QMH} can be applied to any nucleus as though it were independent of time.
\subsection{${}^{20}$Ne, ${}^{22}$Ne and ${}^{22}$Mg}
The ground state rotational bands of ${}^{20}$Ne, ${}^{22}$Ne and ${}^{22}$Mg all display a structure that is close to that of a rigid rotor, which makes them good tests of our EFT, to see if the theory can reproduce this rotational behavior. Moreover, the $K_{ij}K_{ji}$ and $Q_{ij}K_{ji}$ terms in Eq.\,\eqref{QMH} should introduce irrotational-like departures from the simple $L(L+1)$ rule for the spectrum of a rigid rotor, which can also be seen in the ${}^{20}$Ne, ${}^{22}$Ne, and ${}^{22}$Mg set, making these ideal candidates for probing a range of rotational features within the context of our EFT.
Specifically, we carried out SpEFT calculations in the $N_{\rm{max}}=12$ model space, which was required for gaining good convergence of the observed B(E2) transitions, by adjusting only one parameter: $g=14$ for ${}^{20}$Ne and $g=14.7$ for ${}^{22}$Ne and ${}^{22}$Mg. For these nuclei, the $N_{\rm{max}}=12$ model space is down-selected to only one spin-0 leading symplectic irrep; namely, $48.5(8,0)$ for ${}^{20}$Ne and $55.5(8,2)$ for ${}^{22}$Ne and ${}^{22}$Mg. These selections also proved sufficient to simultaneously reproduce reasonably well-converged values for the observed energy spectra and nuclear radii. Most importantly, the results very clearly demonstrate that the SpEFT is able to do this without the need for introducing effective charges, which is confirmed pictorially in Figs. (\ref{f1})-(\ref{f3}), and by a comparison with the rms radii given in Table (\ref{tablerms}), which are likewise in very good agreement with observations, all with only a single fitting parameter, $g$.
In Table (\ref{tablescale}) we further give the maximum and minimum values of the scale parameter, $\frac{V_{\mathcal{A}}}{b^3N^3_{av}}$, for these nuclei. All the presented results are calculated by including terms up to $n\leq 4$ in the Hamiltonian, which, as stressed above, was necessary for gaining good convergence of the theory to known observables. Additionally, we note that for the case of ${}^{20}$Ne, terms with $4<n\leq 6$ contribute $\sim -0.004$ MeV to the ground state energy and $\sim -0.004$ W.u. to the $2^+\rightarrow 0^+$ B(E2) transition. Beyond these data-focused measures, it is interesting to note that these calculations were all carried out on a laptop, taking about 10 minutes for the ${}^{20}$Ne case and up to approximately 2 hours for the ${}^{22}$Ne and ${}^{22}$Mg cases, a feature which serves to stress that, as complex as the underpinning algebraic structure may seem to be (Section II together with the associated Appendices), its applications are computationally quite simple, even to the point of rendering the formalism suitable for more pedagogical uses.
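As a consistency check on Table (\ref{tablescale}), the ${}^{20}$Ne entries can be reproduced from quantities already defined in the text: $R=1.2\mathcal{A}^{1/3}$, $\hbar\Omega=41\mathcal{A}^{-1/3}$ MeV, and $N_{av}$ ranging from $N_\sigma=48.5$ to $N_\sigma+N_{\rm max}=60.5$. The oscillator-length formula $b=\hbar c/\sqrt{mc^2\,\hbar\Omega}$ and the constants $\hbar c=197.327$ MeV\,fm, $mc^2=938.92$ MeV are our assumptions for this sketch, not taken from the text:

```python
import math

A = 20
hw = 41 * A ** (-1 / 3)               # hbar*Omega, ~15.1 MeV
b = 197.327 / math.sqrt(938.92 * hw)  # oscillator length, ~1.66 fm (assumed formula)
R = 1.2 * A ** (1 / 3)                # average ground-state radius in fm
V_A = 4 / 3 * math.pi * R ** 3        # average spherical nuclear volume

def scale(N_av):
    return V_A / (b ** 3 * N_av ** 3)

print(f"{scale(48.5):.2e}")   # ~2.79e-04, the maximum scale for 20Ne
print(f"{scale(60.5):.2e}")   # ~1.44e-04, the minimum scale for 20Ne
```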
\begin{figure}
\centering
\includegraphics[scale=0.27]{Ne20.png}
\caption{The energy spectrum and B(E2) values of the $48.5(8,0)$ symplectic irrep for ${}^{20}$Ne using the SpEFT in a $N_{\rm{max}}=12$ model space (EFT) compared to experimental data (Expt.) \cite{TILLEY1998249}. B(E2) values are in W.u.}
\label{f1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.27]{Ne22.png}
\caption{SpEFT energy spectrum and B(E2) values of the $55.5(8,2)$ symplectic irrep for ${}^{22}$Ne in a $N_{\rm{max}}=12$ model space compared to experimental data (Expt.) \cite{BASUNIA201569}. B(E2) values are in W.u. }
\label{f2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.27]{Mg22.png}
\caption{SpEFT energy spectrum and B(E2) values of the $55.5(8,2)$ symplectic irrep for ${}^{22}$Mg in a $N_{\rm{max}}=12$ model space compared to experimental data (Expt.) \cite{BASUNIA201569}. B(E2) values are in W.u. }
\label{f3}
\end{figure}
\begin{table}
\caption{SpEFT matter rms radii $r_m$ (fm) of the ground states compared to their experimental counterparts for the nuclei under consideration \cite{CHULKOV1996219,SUZUKI1998661}.}
\centering \label{tablerms}
\begin{tabular}{ |m{1.5cm}|m{1.5cm}||m{1.5cm}| }
\hline
Nucleus & SpEFT & Expt. \\
\hline
${}^{20}$Ne & 2.80 & 2.87(3) \\
\hline
${}^{22}$Ne & 2.84 & ----- \\
\hline
${}^{22}$Mg & 2.84 & 2.89(6) \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The maximum and minimum values of the SpEFT scale, $\frac{V_{\mathcal{A}}}{b^3N^3_{av}}$, obtained for $N_{av}=N_{\sigma}$ and $N_{av}=N_{\sigma}+N_{max}$, respectively.}
\centering \label{tablescale}
\begin{tabular}{ |m{1.5cm}|m{3cm}||m{3cm}| }
\hline
Nucleus & The maximum scale & The minimum scale \\
\hline
${}^{20}$Ne & $2.79\times 10^{-4}$ & $1.44\times 10^{-4}$ \\
\hline
${}^{22}$Ne & $1.95\times 10^{-4}$ & $1.08\times 10^{-4}$ \\
\hline
${}^{22}$Mg & $1.95\times 10^{-4}$ & $1.08\times 10^{-4}$ \\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
In this paper we outlined a step-by-step process for how symplectic symmetry emerges from a simple self-interacting scalar field theory extended to $n$-th order. We then went on to identify the scale of this interaction and imposed the necessary conditions that justify treating the scalar fields as plane waves. Furthermore, the application of the fields onto a single symplectic irrep generated a quantum mechanical Hamiltonian. It is composed of a harmonic oscillator at leading order and a dominant quadrupole-quadrupole interaction at next-to-leading and higher orders. These results explain why, from an EFT perspective, phenomenological models using a simple harmonic oscillator with quadrupole-type interactions have been successful in capturing the relevant physics in nuclei. Their origin lies in a simple scalar-field theory framework. Moreover, for the first time, we identified the dynamical operators of the symplectic algebra, and showed how they explicitly behave under vibrations in time.
The resulting SpEFT Hamiltonian is a complex, yet simple-to-apply interaction for studying nuclear observables. It can produce energy spectra, enhanced electromagnetic transitions, and rms matter radii without the need for effective charges. Moreover, it does this with only a single fitted parameter, $g$. The main advantage of this theory lies in its simplicity in explaining how deformation arises and drives nuclear dynamics. This key feature allows the SpEFT to take advantage of the underlying symmetry, and therefore it does not require access to large-scale computational resources. This was successfully demonstrated by its application to ${}^{20}$Ne, ${}^{22}$Ne and ${}^{22}$Mg and its ability to produce results that are in very reasonable agreement with experiment on just a laptop.
\begin{acknowledgments}
Work supported by: DK \& JPD -- Louisiana State University (College of Science, Department of Physics \& Astronomy) as well as the Southeastern Universities Research Association (SURA, a U.S. based Non-profit Organization);
VIM -- In part by the U.S. Department of Energy (SURA operates the Thomas Jefferson National Accelerator Facility for the U.S.; Department of Energy under Contract No. DE-AC05-06OR23177); and CDR -- National Natural Science Foundation of China (Grant No.\,12135007).
\end{acknowledgments}
\section{Introduction}\label{aba:sec0}
A look at the history of elementary particles reveals that a majority
of the constituents of the Standard Model were conceived by theory
before they were experimentally established. This includes the quarks that
were predicted by Murray Gell-Mann\cite{one} and George Zweig\cite{oneb}
to explain hadron spectra. In this note we want to discuss the
elementary constituents of space-time and their interactions from
which emerges a theory of gravitation.
We hope that the theoretical
conceptions we dwell upon in this note contribute to a quantum theory
of gravity that enables a description of phenomena like the formation
and evaporation process of a black hole and basic issues in cosmology
like the origin and fate of the universe. Besides being a framework
to address these difficult questions, `string theoretic' methods seem to
have applications to various problems in gauge theories, fluid
dynamics and condensed matter physics. This note is written with the
hope that the ideas and methods of string theory are accessible to a
large community of physicists.
In order to pose the question better we first consider the more
familiar setting of quantum chromo-dynamics (QCD) where the quarks
interact via color gluons. This theory accounts for the spectrum of
hadrons, their interactions and properties of nuclei. In particular in
the limit of long wave lengths the chiral non-linear sigma
model\cite{two} (another invention of Gell-Mann with Levy) describes
the interactions of pions. This theory is characterized by a
dimensional coupling, the pion coupling constant $f^{-1}_{\pi}$. In 3+1
dimensions $f_{\pi}$ has the dimensions of $({\rm mass})^2$ and if we
generalize the QCD gauge group from $SU(3)$ to $SU(N)$, then
$f_{\pi}\sim N$. Hence in the limit of large N the pions are weakly
coupled and the theory has soliton solutions with `baryon number'. The
mass of the baryon is proportional to N, the number of quark
constituents\footnote{This theory, which emerges from QCD, had
phenomenological antecedents in the theory of super-conductivity in
the work of Nambu and Jona-Lasinio\cite{three,four}.}. In modern
terminology one would say that the chiral model is `emergent' from an
underlying theory of more elementary constituents and their
interactions. The phenomenon of `emergence' occurs in complex systems
in many areas of science, and also in the social sciences. Gell-Mann has
also contributed to this area\cite{five}.
{\it Over the last 25 years one question that has occupied theoretical
physicists is: In what sense is gravity an emergent phenomenon? What
are the fundamental constituents of `space-time' and their
interactions?} The question is quite akin to that of the emergence
of the chiral model from QCD. If gravity is the analogue of the chiral
model, where the gravitational (Newton) coupling is dimensional and
akin to the `pion coupling constant', what is the QCD analogue for
gravity, from which gravity is emergent?
Perturbative string theory gave the first hint that gravity may be
derived from a more microscopic theory because its spectrum contains a
massless spin 2 excitation\cite{six,seven}. However real progress
towards answering this question came about with the discovery of
D-branes\cite{eight}.
\section{D-branes the building blocks of string theory}
A D-p brane (in the simplest geometrical
configuration) is a domain wall of dimension $p$, where $0
\leq p \leq 9$. It is
characterized by a charge and it couples to a $(p+1)$ form
abelian gauge field $A^{(p+1)}$, e.g. a D0 brane couples to a
1-form gauge field $A^{(1)}_\mu$, a D1 brane couples to a 2-form
gauge field $A^{(2)}_{\mu\nu}$ etc. The D-p brane has a brane
tension $T_p$ which is its mass per unit volume. The crucial point is
that $T_p \propto 1/g_s$. This dependence on the coupling
constant (instead of $g^{-2}_s$) is peculiar to string theory. It has
a very important consequence. A quick estimate of the gravitational
field of a D-p brane gives, $G_N^{(10)} T_p \sim g^2_s/g_s \sim
g_s$. Hence as $g_s \rightarrow 0$, the gravitational field goes to
zero! If we stack $N$ D-p branes on top of each other then the
gravitational field of the stack $\sim Ng_s$. A useful limit to study
is to hold $g_s N = \lambda$ fixed, as $g_s \rightarrow 0$ and $N
\rightarrow \infty$. In this limit when $\lambda \gg 1$ the stack of
branes can source a solution of supergravity. On the other hand when
$\lambda \ll 1$ there is a better
description of the stack of $D$-branes in terms of open strings.
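The parametric estimate quoted above can be made slightly more explicit. In string units (dropping numerical factors, with the powers of $\ell_s$ fixed by dimensional analysis), the brane tension and the 10-dim. Newton constant scale as
\[
T_p \sim \frac{1}{g_s\,\ell_s^{\,p+1}}\,,\qquad
G_N^{(10)} \sim g_s^2\,\ell_s^{\,8}\,,\qquad
{\rm so~that}\qquad G_N^{(10)}\,T_p \sim g_s\,\ell_s^{\,7-p}\,,
\]
and for a stack of $N$ branes the source strength is $\sim N g_s = \lambda$, which is why $\lambda$ is the natural parameter controlling the back-reaction on the geometry.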
A stack of $D$-branes interacts by the exchange of open strings very
much like quarks which interact by the exchange of gluons. Fig. 1a
illustrates the self-interaction of a D2-brane by the emission and
absorption of an open string and Fig. 1b illustrates
the interaction of 2 D2-branes by the exchange of an open string.
In the infra-red limit only the lowest mode of the open
string contributes and hence the stack of $N$ $D$-branes can be
equivalently described as a familiar $SU(N)$ non-abelian gauge theory
in $p+1$ dim.
\begin{center}
\includegraphics[scale=.4]{srw4.eps}
\end{center}
\begin{center} Fig. 1 \end{center}
\section{Statistical Mechanics of D-brane Systems and Black Hole
Thermodynamics}\label{aba:sec2}
One of the earliest applications of the idea that $D$-branes are the
basic building blocks of `string theory' (and hence of a theory of
gravity) was to account for the entropy and dynamics of certain near
extremal black holes in $4+1$ dim. As is well known, Strominger and
Vafa\cite{nine} in a landmark paper showed that the
Bekenstein-Hawking entropy of these black holes is equal to the
Boltzmann entropy calculated from the micro-states of a system of D1
and D5 branes
\[
S_{BH} = {A_h \over 4G_N} = k_B \ln \Omega = S_{\rm Boltzmann}
\]
This result established that black hole entropy can be obtained from
the statistical mechanics of the collective states of the brane system
and it provided a microscopic basis for the first law of thermodynamics,
$dM = T\,dS_{BH}$, where $M$ is the mass of the black hole. Hawking
radiation can be accounted for from the averaged scattering amplitude of
external particles and the micro-states\cite{ten}.
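For the five-dimensional black holes considered by Strominger and Vafa the count can be quoted explicitly: for $Q_1$ D1 branes, $Q_5$ D5 branes and $n$ units of momentum along their common direction, the degeneracy of BPS micro-states gives
\[
S_{\rm Boltzmann} = k_B \ln \Omega = 2\pi k_B \sqrt{Q_1 Q_5\, n}\,,
\]
in exact agreement with $A_h/4G_N$ of the corresponding extremal black hole.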
\section{$D3$ branes and the AdS/CFT
corres\-pondence {\small \cite{eleven,twelve,thirteen,fourteen,fifteen}}:}\label{aba:sec3}
We now discuss the dynamics of a large number $N$, of D3 branes. A D3
brane is a 3+1 dim. object. A stack of $N$ D3 branes interacts by the
exchange of open strings. In the long wavelength limit ($\ell_s
\rightarrow 0$, $\ell_s$ is the string length), only the massless
modes of the open string are relevant. These correspond to 4 gauge
fields $A_\mu$, 6 scalar fields $\phi^I \ (I = 1,\cdots,6)$
(corresponding to the fact that the brane extends in 6 transverse
dimensions) and their supersymmetric partners. These massless degrees
of freedom are described by ${\cal N} = 4, \ SU(N)$ Yang-Mills theory
in 3+1 dim. This is a maximally supersymmetric, conformally invariant
superconformal field theory in 3+1 dimensions. The coupling constant
of this gauge theory $g_{YM}$, is simply related to the string
coupling: $g_s = g^2_{YM}$. The 't~Hooft coupling is $\lambda = g_s N$
and the theory admits a systematic expansion in $1/N$, for fixed
$\lambda$. Further as $\ell_s \rightarrow 0$ the coupling of the D3
branes to gravitons also vanishes, and hence we are left with the
${\cal N}=4$ SYM theory and free gravitons.
On the other hand when $\lambda \gg 1$, various operators of the large
$N$ gauge theory source supergravity fluctuations in 10 dimensions, e.g. the
energy-momentum tensor $T_{\mu\nu}$ sources the gravitational field in
one higher dimension. The supergravity fields include the metric, two
scalars, two 2-form potentials, and a 4-form potential whose field
strength $F_5$ is self-dual and proportional to the volume form of
$S^5$. The fact that there are $N$ D3 branes is expressed as
$\displaystyle{\int_{S^5}} F_5 = N$. There are also fermionic fields required by
supersymmetry. It is instructive to write down the supergravity
metric: \begin{eqnarray} ds^2 &=& H^{-1/2} (-dt^2 + d\vec x \cdot d\vec x) +
H^{1/2} (dr^2 + r^2 d\Omega^2_5) \nonumber \\[2mm] && \\ H &=& \left(1
+ {R^4 \over r^4}\right), \ \left({R \over \ell_s}\right)^4 = 4\pi g_s
N \nonumber
\label{fourteen}
\end{eqnarray}
Since $|g_{00}| = H^{-1/2}$ the energy depends on the 5th coordinate
$r$. In fact the energy at $r$ is related to the energy at $r =
\infty$ (where $g_{00} = 1$) by $E_\infty = \sqrt{|g_{00}|} E_r$. As
$r \rightarrow 0$ (the near horizon limit), $E_\infty = {r \over R}
E_r$ and this says that $E_\infty$ is red-shifted as $r \rightarrow
0$. We can allow for an arbitrary excitation energy in string units
(i.e. arbitrary $E_r\ell_s$) as $r \rightarrow 0$ and $\ell_s
\rightarrow 0$, by holding a mass scale `$U$' fixed:
\begin{equation}
{E_\infty \over \ell_s E_r} \sim {r \over \ell^2_s} = U
\label{fifteen}
\end{equation}
Note that in this limit the gravitons in the asymptotically flat
region also decouple from the near horizon region. This is the famous
near horizon limit of Maldacena\cite{twelve} and in this limit the metric
(\ref{fourteen}) becomes \begin{equation} ds^2 = \ell^2_s \left[{U^2 \over
\sqrt{4\pi\lambda}} \left(-dt^2 + d\vec x \cdot d\vec x\right) +
\sqrt{4\pi\lambda} {dU^2 \over U^2} + \sqrt{4\pi\lambda}
d\Omega^2_5\right]
\label{sixteen}
\end{equation}
This is locally the metric of ${\rm AdS}_5 \times {\rm S}^5$. ${\rm AdS}_5$
is the anti-de Sitter space in 5 dim. This space has a boundary at $U
\rightarrow \infty$, which is conformally equivalent to 3+1
dim. Minkowski space-time.
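The step from (\ref{fourteen}) to (\ref{sixteen}) is easily traced: in the near horizon region $r \ll R$ one has $H \simeq R^4/r^4$, so that
\[
ds^2 \simeq \frac{r^2}{R^2}\left(-dt^2 + d\vec x\cdot d\vec x\right)
+ \frac{R^2}{r^2}\,dr^2 + R^2\,d\Omega^2_5\,,
\]
and the substitution $r = \ell_s^2 U$, together with $R^2 = \ell_s^2\sqrt{4\pi\lambda}$, expresses the metric in terms of the fixed scale $U$.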
\bigskip
\noindent{\bf The AdS/CFT conjecture}
{\it The conjecture of Maldacena is that ${\cal N} = 4$, $SU(N)$ super
Yang-Mills theory
in 3+1 dim. $\!\!$ is dual to type IIB string theory with ${\rm AdS}_5
\times S^5$ boundary conditions}.
The gauge/gravity parameters are related as $g^2_{YM}=g_s$ and
$R/\ell_s = (4\pi g^2_{YM} N)^{1/4}$. It is natural to consider the
$SU(N)$ gauge theory living on the boundary of ${\rm AdS}_5$. The
gauge theory is conformally invariant and its global exact symmetry
$SO(2,4) \times SO(6)$, is also an isometry of ${\rm AdS}_5 \times
{\rm S}^5$. The metric (\ref{sixteen}) has a ``horizon'' at $U=0$
where $g_{tt} = 0$. It admits an extension to the full AdS$_5$
geometry which has a globally defined timelike Killing vector. The
boundary of this space is conformal to $S^3 \times {I\!\!\!R}^1$ and
the gauge theory on the boundary is well-defined in the IR since $S^3$
is compact.
The AdS/CFT conjecture is difficult to test because at $\lambda \ll 1$
the gauge theory is perturbatively calculable but the dual string
theory is defined in ${\rm AdS}_5 \times S^5$ with $R \ll \ell_s$. On
the other hand for $\lambda \gg 1$, the gauge theory is strongly
coupled and hard to calculate. In this regime $R \gg \ell_s$ and the
string theory can be approximated by supergravity in a derivative
expansion in $\ell_s/R$. It turns out that for large $N$ and large
$\lambda$, $D$-branes source supergravity fields of spin $\leq 2$. The
gravitational coupling is given by
\[
G_N \sim g^2_s \sim {\lambda^2 \over N^2} \ll 1
\]
Note the analogy with the constituent formula $f^{-1}_\pi \sim {1 \over N}$.
The region $\lambda \sim 1$ is most intractable as we can study
neither the gauge theory nor the string theory in a reliable way.
{\it However since the conjecture can be verified for supersymmetric states
on both sides of the duality, one assumes that the duality is true in
general and then uses it to derive interesting consequences for both
the gauge theory and the dual string theory (which includes quantum gravity).}
\noindent {\bf Interpretation of the radial direction of AdS}:
Before we discuss the duality further we would like to explain the
significance of the extra dimension `$r$'. Let us recast the ${\rm
AdS}_5$ metric by a redefinition: $\displaystyle{r \over R} = e^{-\phi}$
\begin{equation}
ds^2 = e^{-2\phi} \left(-dt^2 + d\vec x \cdot d\vec x\right) +
R^2(d\phi)^2 + R^2 d\Omega^2_5
\label{seventeen}
\end{equation}
The boundary in these coordinates is situated at $\phi = -\infty$.
Now this metric has a scaling symmetry. For $\alpha > 0$, $\phi
\rightarrow \phi + \log \alpha$, $t \rightarrow \alpha t$ and $\vec x
\rightarrow \alpha \vec x$, leaves the metric invariant. From this it
is clear that the additional dimension `$Re^\phi$' represents a length
scale in the boundary space-time: $\phi \rightarrow -\infty$
corresponds to $\alpha \rightarrow 0$ which represents
a localization or short distances in the boundary
coordinates $(\vec x,t)$, while $\phi \rightarrow +\infty$ represents
long distances on the boundary. $\phi$ is reminiscent of the
Liouville or conformal mode of non-critical string theory, where the
idea of the emergence of a space-time dim. from string theory was
first seen\cite{sixteen}.
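The invariance of (\ref{seventeen}) under this rescaling can be checked term by term: under $\phi \rightarrow \phi + \log\alpha$, $t \rightarrow \alpha t$, $\vec x \rightarrow \alpha \vec x$,
\[
e^{-2\phi} \rightarrow \alpha^{-2}\, e^{-2\phi}\,,\qquad
\left(-dt^2 + d\vec x\cdot d\vec x\right) \rightarrow
\alpha^{2}\left(-dt^2 + d\vec x\cdot d\vec x\right)\,,\qquad
d\phi \rightarrow d\phi\,,
\]
so the first term of the metric is invariant while the last two are untouched.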
The AdS/CFT correspondence clearly indicates that gravity is an
emergent phenomenon. What this means is that all gravitational
phenomena can be calculated in terms of the correlators of the
energy-momentum tensor of the gauge theory, whose microscopic
constituents are D-branes interacting via open strings.
\section{Black holes and AdS/CFT}
The ${\cal N} = 4$, super Yang-Mills theory defined on $S^3 \times
R^1$ can be considered at finite temperature if we work with euclidean
time and compactify it to be a circle of radius $\beta = 1/T$, where
$T$ is the temperature of the gauge theory. We have to
supply boundary conditions which are periodic for bosonic fields and
are anti-periodic for fermions. These boundary conditions break the
${\cal N} = 4$ supersymmetry, and the conformal symmetry. However the
AdS/CFT conjecture continues to hold and we will discuss the
relationship of the thermal gauge theory with the physics of black
holes in AdS.
As we have mentioned, in the limit of large $N$ (i.e. $G_N \ll 1$) and
large $\lambda$ (i.e. $R \gg \ell_s$), the string theory is well
approximated by supergravity, and we can imagine considering the
Euclidean string theory partition function as a path integral over all
metrics which are asymptotic to AdS$_5$ space-time. (For the moment we
ignore $S^5$).
The saddle points are given by the solutions to Einstein's equations
in 5-dim. with a negative cosmological constant
\begin{equation}
R_{ij} + {4 \over R^2} g_{ij} = 0
\label{twentyfive}
\end{equation}
As was found by Hawking and Page, a long time ago, there are only two
spherically symmetric metrics which satisfy these equations with
AdS$_5$ boundary conditions: AdS$_5$ itself and a black hole solution.
It was shown in Ref.~\cite{fifteen} that the `deconfinement' phase of the
gauge theory corresponds to the presence of a large black hole in AdS.
The temperature of the black hole is the temperature of the
deconfinement phase. The AdS/CFT correspondence says that the
equilibrium thermal properties of the gauge theory in the regime when
$\lambda \rightarrow \infty$ are the same as those of the black
hole. This correspondence enables us to make precise quantitative
statements about the gauge theory at strong coupling $(\lambda \gg
1)$, using the fact that on the AdS side the calculation in gravity is
semi-classical.
We list a few exact results of thermodynamics of the gauge theory at
strong coupling\cite{fourteen}.
\begin{enumerate}
\item[{(i)}] the temperature at which the first order
confinement-deconfinement transition occurs:
\begin{equation}
T_c = {3 \over 2\pi R_{S^3}}
\label{six}
\end{equation}
where $R_{S^3}$ is the radius of $S^3$.
\item[{(ii)}] the free energy for $T > T_c$
\[
F(T) = - N^2 {\pi^2 \over 8} T^4
\]
\end{enumerate}
Here we see a typical use of the AdS/CFT correspondence: calculations
in the strongly coupled gauge theory $(\lambda \gg 1)$ can be done
using semi-classical gravity since $G_N
\sim {1 \over N^2} \ll 1$ and ${R \over \ell_s} \sim \lambda^{1/4} \gg
1$.
\vspace{3ex}
\noindent {\bf Conformal Fluid Dynamics and Dynamical Horizons}
We have seen that the thermodynamics of the strongly coupled gauge
theory in the limit of large $N$ and large $\lambda$ is calculable, in
the AdS/CFT correspondence, using the thermodynamic properties of a
large black hole in AdS$_5$ with horizon $r_h \gg R$. Similar results
hold for a black brane, except that in this case the gauge theory in
$R^3 \times S^1$ is always in the deconfinement phase since $T_c = 0$
if $R_{S^3} = \infty$, by Eq. (\ref{six}). We now discuss how this
correspondence can be generalized to real time dynamics in this gauge
theory when both $N$ and $\lambda$ are large.
Let us generalize black brane (hole) thermodynamics to fluid
dynamics. In conformal fluid dynamics the system is in local thermodynamic
equilibrium over a length scale $L$ so that $L \gg {1 \over T}$. In
the bulk theory in one higher dim. this corresponds to a horizon that is
a slowly varying (see Fig. 2) function of the boundary co-ordinates
$(\vec x,t)$.
\[
r_h \rightarrow r_h + \delta r_h(\vec x,t)
\]
\[
T \rightarrow T + \delta T(\vec x,t)
\]
\[
{1 \over T} {\partial \over \partial x^\mu} {\delta T \over T} \sim {1 \over LT} \ll 1
\]
The ripples on the horizon of a black brane at the linearized level are
analysed in terms of quasi-normal modes with complex frequencies
$\omega = \omega_R + i\omega_I, \ \omega_I \propto T$, where $T$ is
the temperature of the non-fluctuating brane. The complex frequency
arises because of the presence of a horizon when we impose only
`in-falling' boundary conditions. For the dual gauge theory the
quasi-normal mode spectrum implies the dissipation of a small
disturbance of the fluid in a characteristic time. This is the
qualitative reasoning behind the calculation of `transport
coefficients' of the gauge theory like viscosity and thermal
conductivity, which can be done using semi-classical gravity and the
Kubo formula for retarded Green's functions of the corresponding
conserved currents. This important step was taken by Policastro, Son
and Starinets\cite{seventeen}.
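For the shear viscosity, for instance, the relevant Kubo formula involves the retarded correlator of the transverse stress tensor component,
\[
\eta = -\lim_{\omega\rightarrow 0}\,\frac{1}{\omega}\,{\rm Im}\,G^R(\omega)\,,\qquad
G^R(\omega) = -i\int d^4x\; e^{i\omega t}\,\theta(t)\,
\big\langle\,[\,T_{xy}(x),\,T_{xy}(0)\,]\,\big\rangle\,,
\]
where on the gravity side $G^R$ is obtained from a metric fluctuation $h_{xy}$ with in-falling boundary conditions at the horizon.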
\begin{center}
\includegraphics[scale=.4]{spc11.eps}
\end{center}
\begin{center} Fig. 2 \end{center}
While linear response theory enables us to calculate transport
coefficients of fluid dynamics, we now briefly discuss non-linear
fluid dynamics and gravity, and indicate a remarkable connection
between the (relativistic) Navier-Stokes equations of fluid dynamics
and the long wavelength oscillations of the horizon of a black brane
which is described by Einstein's equations of general relativity with
a negative cosmological constant.
On general physical grounds a local quantum field theory at very high
density can be approximated by fluid dynamics. In a conformal field
theory in $3+1$ dim. we expect the energy density $\epsilon \propto T^4$, where
$T$ is the local temperature of the fluid. Hence fluid dynamics is a
good approximation for length scales $L \gg 1/T$. The
dynamical variables of relativistic fluid dynamics are the four
velocities: $u_\mu (x)$ $(u_\mu u^\mu = -1)$, and
the densities of local conserved currents. The conserved currents are
expressed as local functions of the velocities, charge densities and
their derivatives. The equations of motion are given by the
conservation laws. An example is the conserved energy-momentum tensor
of a charge neutral conformal fluid:
\begin{equation}
T^{\mu\nu} = (\epsilon + P) u^\mu u^\nu + P\eta^{\mu\nu} -
\eta\left(P^{\mu\alpha} P^{\nu\beta}(\partial_\alpha u_\beta +
\partial_\beta u_\alpha) - {2\over3} P^{\mu\nu} \partial_\alpha
u^\alpha\right) + \cdots
\label{thirtythree}
\end{equation}
where $\epsilon$ is the energy density, $P$ the pressure, $\eta$
is the shear viscosity and $P^{\mu\nu} = u^\mu u^\nu + \eta^{\mu\nu}$.
These are functions of the local temperature. Since the fluid
dynamics is conformally invariant (inheriting this property from the
parent field theory) we have $\eta_{\mu\nu} T^{\mu\nu} = 0$ which
implies $\epsilon = 3P$. Since the speed of sound in the fluid is
given by $v^2_s = \displaystyle{\partial P \over \partial \epsilon}$, $v_s = \displaystyle{1
\over \sqrt{3}}$ or re-instating units $v_s = \displaystyle{c \over \sqrt{3}}$,
where $c$ is the speed of light in vacuum. The pressure and the
viscosity are then determined in terms of temperature from the
microscopic theory. In this case conformal symmetry and the
dimensionality of space-time tells us that $P \sim T^4$ and $\eta \sim
T^3$. However the numerical coefficients need a microscopic
calculation. The Navier-Stokes equations are given by (\ref{thirtythree}) and
\begin{equation}
\partial_\mu T^{\mu\nu} = 0
\label{thirtyfour}
\end{equation}
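At the ideal order the content of (\ref{thirtyfour}) is transparent: projecting along and transverse to $u^\mu$ gives
\[
u^\mu\partial_\mu \epsilon + (\epsilon + P)\,\partial_\mu u^\mu = 0\,,\qquad
(\epsilon + P)\,u^\mu\partial_\mu u^\nu = -P^{\nu\mu}\,\partial_\mu P\,,
\]
the relativistic continuity and Euler equations; the terms proportional to $\eta$ in (\ref{thirtythree}) then supply the dissipative corrections.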
The conformal field theory of interest to us is a gauge theory and a
gauge theory expressed in a fixed gauge or in terms of manifestly gauge
invariant variables is not a local theory. In spite of this
(\ref{thirtythree}) seems to be a reasonable assumption and the local
derivative expansion in (\ref{thirtythree}) can be justified using the
AdS/CFT correspondence.
We now briefly indicate that the eqns.(\ref{thirtythree}),
(\ref{thirtyfour}) can be deduced systematically from black brane
dynamics\cite{eighteen}. Einstein's equation (\ref{twentyfive}) admits a boosted
black-brane solution
\begin{equation}
ds^2 = -2u_\mu dx^\mu dr - r^2 f(br)u_\mu u_\nu dx^\mu dx^\nu + r^2
P_{\mu\nu} dx^\mu dx^\nu
\label{thirtyfive}
\end{equation}
where $r$ is the radial coordinate, $x^\mu = (v,x^i)$ are in-going Eddington-Finkelstein coordinates and
\begin{eqnarray}
f(r) &=& 1 - {1 \over r^4} \nonumber \\ && \\
u^v &=& {1 \over \sqrt{1 - \beta^2_i}}, \ u^i = {\beta^i \over
\sqrt{1 - \beta^2_i}} \nonumber
\label{thirtysix}
\end{eqnarray}
where the temperature $T = 1/\pi b$ and the velocities
$\beta_i$ are all constants. This 4-parameter solution can be
obtained from the solution with $\beta^i = 0$ and $b=1$ by a boost
and a scale transformation. The key idea is to make $b$ and $\beta^i$
slowly varying functions of the brane world-volume co-ordinates
$x^\mu$. One can then develop a perturbative non-singular solution of
(\ref{twentyfive}) as an expansion in powers of $1/LT$. Einstein's
equations are satisfied provided the velocities and pressure that
characterise (\ref{thirtyfive}) satisfy the Navier-Stokes
eqns. The pressure $P$ and viscosity $\eta$ can be
exactly calculated to be\cite{eighteen,nineteen}
\begin{equation}
P = (\pi T)^4 \ {\rm and} \ \eta = (\pi T)^3
\label{thirtyseven}
\end{equation}
Using the thermodynamic relation $dP = sdT$ we get the entropy density
to be $s = 4\pi^4 T^3$ and hence obtain the famous equation of
Policastro, Son and Starinets,
\begin{equation}
{\eta \over s} = {1 \over 4\pi}
\label{thirtyeight}
\end{equation}
which is a relation between viscosity of the fluid and the entropy
density. A strongly coupled fluid behaves more like a liquid than a
gas. Systematic higher order corrections to (\ref{thirtythree})
can also be worked out.
The experiments at RHIC seem to support very rapid thermalization and
a strongly coupled quark-gluon plasma with very low viscosity
coefficient, ${\eta \over s} \:\raisebox{-0.75ex}{$\stackrel{\textstyle>}{\sim}$}\: {1 \over 4\pi}$.
The fluid dynamics/gravity correspondence can also be used to study
non-equilibrium processes like thermalization which are dual to black
hole formation in the gravity theory. An important result in this
study is that the thermalization time is shorter than the expected
value $\propto {1 \over T}$ where $T$ is the temperature\cite{twenty}.
Another important result is the connection between the area theorems
of general relativity and the positivity of entropy in fluid
dynamics\cite{1one22}.
The fluid/gravity correspondence is firmly established for a $3+1$
dim. conformal fluid dynamics which is dual to gravity in AdS$_5$
space-time. A similar connection holds for $2+1$ dim. fluids and
AdS$_4$ space-time. We shall discuss the case of non-conformal fluid
dynamics in $1+1$ dim. separately. A special (asymmetric) scaling
limit of the relativistic Navier-Stokes equations, where we send $v_s
= \displaystyle{c \over \sqrt{3}} \rightarrow \infty$ leads to the standard
non-relativistic Navier-Stokes equations for an incompressible
fluid\cite{twentyone}. {\it In summary we have a truly remarkable
relationship between two famous equations of physics viz. Einstein's
equations of general relativity and the Navier-Stokes equations}.
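In this scaling limit one takes $v^i \sim \delta$, $\partial_i \sim \delta$, $\partial_t \sim \delta^2$ with $\delta \rightarrow 0$, and at leading order the conservation law (\ref{thirtyfour}) reduces to the incompressible Navier-Stokes equations
\[
\partial_i v^i = 0\,,\qquad
\partial_t v^i + v^j\partial_j v^i = -\partial_i p + \nu\,\partial^2 v^i\,,\qquad
\nu = \frac{\eta}{\epsilon + P}\,,
\]
where $p$ is the rescaled pressure fluctuation and $\nu$ the kinematic viscosity.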
Finally it is hoped that the AdS/CFT correspondence lends new
insights to the age old problem of turbulence in fluids. Towards this goal
the AdS/CFT correspondence has also been established for forced
fluids, where the `stirring' term is provided by an external metric and
dilaton field\cite{twentytwo}.
\section{Non-conformal fluid dynamics in 1+1 dim.
from gravity {\small \cite{twentythree,twentyfour}}}\label{aba:sec12}
The famous Policastro, Son and Starinets result (\ref{thirtyeight}) is
indeed a cornerstone of the gauge/gravity duality. It was originally
derived in the context of conformal fluid dynamics. However one
suspects that the conjectured bound ${\eta \over s} \geq {1 \over
4\pi}$ may be more generally valid. We present a summary of a
project of the fluid dynamics description, via the gauge/gravity
duality for the case of $N$ D1 branes at finite temperature $T$. The
gauge theory describing the collective excitations of this system is a
$1+1$ dim. $SU(N)$ gauge theory with 16 supersymmetries. Note that
this gauge theory is not conformally invariant. At high temperatures
we expect the theory to have a fluid dynamics description, in terms of
a 2-velocity $u^\mu$ and stress tensor
\[
T^{\mu\nu} = (\epsilon + P) u^\mu u^\nu + P \eta^{\mu\nu} - \xi P^{\mu\nu}
\partial_\lambda u^\lambda
\]
Note that $\eta_{\mu\nu} T^{\mu\nu} = -3\xi \partial_\lambda
u^\lambda$, where $\xi$ is the bulk viscosity. The dual gravity
description corresponds to 2 regimes. For $\sqrt{\lambda} N^{-2/3}
\ll T \ll \sqrt{\lambda}$, the gravity dual is a classical solution
corresponding to a non-extremal D1 brane. For $\sqrt{\lambda} N^{-1}
\ll T \ll \sqrt{\lambda} N^{-2/3}$ the gravity solution corresponds to
a fundamental string. Here $\lambda = g^2_{YM} N$.
For both regimes we find the following exact answers for the strongly
coupled fluid dynamics. There is exactly one gauge invariant
quasi-normal mode with dispersion:
\begin{equation}
\omega = {q \over \sqrt{2}} - {i \over 8\pi T} q^2
\label{thirtynine}
\end{equation}
The linearized fluid dynamics equations lead to the dispersion relation:
\begin{equation}
\omega = v_s q - {i\xi \over 2(\epsilon + P)} q^2
\label{fourty}
\end{equation}
Here $v^2_s = {\partial P \over \partial \epsilon}$ is the velocity of
the sound mode. Using $v^2_s = {1\over2}$ and the relation $\epsilon
+ P = Ts$, we once more arrive at
\begin{equation}
{\xi \over s} = {1 \over 4\pi}
\label{fourtyone}
\end{equation}
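Explicitly, matching (\ref{thirtynine}) with (\ref{fourty}) term by term gives
\[
v_s = \frac{1}{\sqrt 2}\,,\qquad
\frac{\xi}{2(\epsilon + P)} = \frac{1}{8\pi T}
\;\;\Rightarrow\;\;
\xi = \frac{\epsilon + P}{4\pi T} = \frac{Ts}{4\pi T} = \frac{s}{4\pi}\,,
\]
which is the content of (\ref{fourtyone}).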
It is worth pointing out that (\ref{fourtyone}) is valid even if we work
with the geometry of D1 branes at cones over Sasaki-Einstein
manifolds. Here the corresponding gauge theory is different from the
gauge theory with 16 supercharges that we mentioned before. Using
similar techniques we have also studied the case of the $SU(N)$ gauge
theory in $1+1$ dim. with finite $R$-charge density. The dual
supergravity solution is that of a non-extremal D1 brane spinning
along one of the Cartan directions of $SO(8)$ which reflects the isometry of
$S^7$ present in the near horizon geometry. In this case, besides
energy transport, there is also charge transport. The transport
coefficients like electrical and heat conductivity can be
calculated, and the Wiedemann-Franz law can be verified. Once again (\ref{fourtyone})
is valid.
\bigskip
\section{A New Term in Fluid dynamics:}
The fluid dynamics of a charged fluid is described by the conserved
stress tensor $T_{\mu\nu}$ and charged current $J_\mu$. The
constitutive equations are (to leading order in the derivative
expansion)
\begin{equation}
T_{\mu\nu} = P(\eta_{\mu\nu} + 4u_\mu u_\nu) - 2\eta \sigma_{\mu\nu} + \cdots
\label{fourtytwo}
\end{equation}
\begin{equation}
J_\mu = n u_\mu - D P_\mu^\nu D_\nu n.
\label{fourtythree}
\end{equation}
where $n$ is the charge density. However in the study of the
charged black brane dual to a fluid at temperature $T$ and chemical
potential $\mu$, a new term was discovered in the charged current \begin{equation}
J_\mu = n u_\mu - D P_\mu^\nu D_\nu n + \zeta \ell_\mu
\label{fourtyfour}
\end{equation}
$\ell_\mu = \epsilon_{\mu\nu\rho\sigma} u_\nu
\omega_{\rho\sigma}$, $\omega_{\rho\sigma} = \partial_\rho u_\sigma -
\partial_\sigma u_\rho$ (the vorticity). The appearance of the new
vorticity-induced current in (\ref{fourtyfour}) is directly related
to the presence of the Chern-Simons term in the Einstein-Maxwell
lagrangian in the dual gravity description\cite{twentyfive,twentysix}.
In a remarkable paper Son and Surowka\cite{twentyseven} showed that
the vorticity dependent term in (\ref{fourtyfour}) always arises in a
relativistic fluid dynamics in which there is an anomalous axial
$U(1)$ current: $\partial_\mu J_\mu^A = -{1 \over 8}
CF_{\mu\nu} F_{\rho\sigma} \epsilon_{\mu\nu\rho\sigma}$. They showed
on general thermodynamic grounds that
\[
\zeta = C\left(\mu^2 - {2\over3} {\mu^3 n \over \epsilon + P}\right)
\]
where $\epsilon$ and $P$ are the energy density and pressure, and
$\mu$ is the chemical potential. If $C = 0$ one recovers the result
(\ref{fourtythree}) of Landau and Lifshitz. This new term may be
relevant in understanding bubbles of strong parity violation observed
at RHIC and generally in the description of rotating charged fluids.
\section{Implications of the gauge fluid/gravity correspondence for the
information paradox of black hole physics}
The Navier-Stokes equations imply dissipation and violate time
reversal invariance. The scale of this violation is set by
$\eta/\rho$ ($\eta$ is the viscosity and $\rho$ is the density) which
has the dim. of length (in units where the speed of light $c = 1$).
There is no paradox here with the fact that the underlying theory is
non-dissipative and time reversal invariant, because we know that the
Navier-Stokes equations are not a valid description of the system for
length scales $\ll \eta/\rho$, where the micro-states should be taken
into account. {\it An immediate important implication of this fact
via the AdS/CFT correspondence is that there will always be
information loss in a semi-classical treatment of black holes in
general relativity.} This fact raises an important question: while
we understand information loss in fluid dynamics because we know
the underlying constituent gauge theory, a similar level of
understanding does not exist on the string/gravity side, because we as
yet do not know the exact equations for all values of the string
coupling.
\section{Concluding remarks:}
In this note we have reviewed the emergence, via the AdS/CFT
correspondence, of a quantum theory of gravity from an interacting
theory of D-branes. Besides giving a precise definition of quantum
gravity in terms of non-abelian gauge theory, this correspondence
turns out to be a very useful tool to calculate properties of strongly
coupled gauge theories using semi-classical gravity. The
correspondence of dynamical horizons and the fluid dynamics limit of the
gauge theory enables calculation of transport coefficients like
viscosity and conductivity. We also indicated
that dissipation in fluid dynamics implies that in semi-classical
gravity there will always be `information loss'.
We conclude with a brief mention of other applications of the AdS/CFT
correspondence to various problems in physics.
\medskip
\noindent {\bf Condensed Matter:}
\medskip
The AdS/CFT correspondence offers a tool to explore many questions
in strongly coupled condensed matter systems in the vicinity of a
quantum critical point. It enables calculation of transport properties,
non-fermi liquid behavior, quantum oscillations and properties of
fermi-surfaces etc.\cite{twentyeight,twentynine,thirty,thirtyone}. There is a
puzzling aspect in the application of semi-classical gravity to
condensed matter systems: what determines the smallness of the
gravitational coupling, which in the gauge theory goes as $N^{-2}$?
Another interesting development is bulk superconductivity i.e. the
presence of a charged scalar condensate in a black hole
geometry\cite{thirtytwo}. This has interesting implications for
superfluidity in the quantum field theory on the boundary.
\medskip
\noindent {\bf QCD and Gauge theories:}
\medskip
The AdS/CFT correspondence is a powerful tool to calculate multi-gluon
scattering amplitudes in ${\cal N} = 4$ gauge theories in 3+1 dims. This
is done by relating the amplitude to the calculation of polygonal Wilson
lines in a momentum space version of $AdS_5$.\cite{thirtythree,thirtyfour}
Even though the basic theory of the quark-gluon plasma is QCD,
calculations in ${\cal N} = 4$ gauge
theories do indicate qualitative agreement with RHIC observations. We
have already remarked that the observed value of ${\eta \over s}$ in
(\ref{thirtyeight}) is in qualitative agreement with RHIC data. Another
calculation of interest is that of jet quenching which corresponds
to computing the drag force exerted by a trailing string attached to a
quark on the boundary, in the presence of an AdS black hole\cite{thirtyfive}.
The AdS/CFT correspondence has also yielded a geometric understanding
of the phenomenon of chiral symmetry breaking\cite{2two34,3three35}.
\medskip
\noindent {\bf Singularities in Quantum Gravity}
\medskip
The AdS/CFT correspondence provides a way to discuss the quantum
resolution of the singularities of classical general relativity. One
strategy would be to study the resolution of singularities that occur
in the gauge theory in the $N \rightarrow \infty$ limit. Among these
are singularities corresponding to transitions of order greater than
two\cite{4four37,5five38} which admit a resolution in a double scaling
limit. The difficult part here is the construction of the map between
the gauge theory singularity and the gravitational singularity. A
proposal in the context of the Horowitz-Polchinski cross-over was made
in reference 42.
\section{Acknowledgements}
I would like to thank K.K. Phua and Belal Baaquie for their warm
hospitality and Gautam Mandal for a critical reading of the draft and
very useful discussions.
\section{Introduction}
\label{intro}
Globular clusters (GCs) are among the oldest stellar structures in the Universe. Their redshift of formation is estimated to be around $z = 2-6$ from their stellar population but their formation channels are still debated \citep[see reviews by][]{Forbes18, Renaud18}. In particular, the formation environment must be able to host very dense and massive gas clouds to allow the formation of these bound stellar clusters. Based on theoretical grounds, it has for instance been proposed that galaxy mergers at high redshift could be an important formation channel of current metal-rich GC populations \citep{Ashman92,Li14,Kim17}. Giant gas clumps in high-redshift, gas dominated galaxies could also host a favorable environment for GC formation \citep{Shapiro10, Kruijssen15}. The metal-poor part of the GC populations is proposed to be formed in the high-redshift highly turbulent gas-rich dwarf galaxies, such as the \emph{little blue dots} seen at redshifts 0.5-4 in the Hubble Frontier Fields \citep{Elmegreen17}, which would be accreted, with their GC populations, onto more massive galaxies \citep{Cote98,Elmegreen12b,Tonini13,Renaud17}. Unfortunately, current instrumentation cannot probe the physical conditions of the GC birth environment at high redshifts, except for exceptional cases of strong gravitational lenses \citep{Vanzella17a, Vanzella17b,Bouwens17}. Observational studies of star cluster formation have thus mainly focused on favorable environments for massive star cluster formation in the Local Universe so far.
Local dwarf galaxies are particularly interesting for the problem of GC formation. In starbursting dwarfs one may typically find young massive star clusters (YMCs) with masses above $10^{5}~\Msun$ and radii around 3~pc, that is, in the mass and size range of GCs \citep[see e.g.][and references therein]{deGrijs13,Hunter16}. Furthermore, local starbursting dwarf galaxies, such as the blue compact dwarf galaxies (BCDGs), can have up to 50\% of their current star formation rate (SFR) occurring in YMCs \citep[see e.g.][]{Adamo11}. Finally, old evolved dwarf galaxies typically have a very large number of old GCs per unit luminosity, also called specific frequency\footnote{The specific frequency (S$_{\mathrm{N}}$) is the number of GCs per unit -15 absolute magnitude in the V-band ($M_V$): S$_{\mathrm{N}}$ = N$_{\mathrm{GC}} \times 10^{0.4(M_V + 15)}$.}, which is much larger than for late-type galaxies and similar to massive early-type galaxies \citep[see e.g.][]{Lotz04, Peng08, Georgiev10}. Although the conditions of formation of present-day YMCs and old GCs are most certainly significantly different, these observations suggest that dwarf galaxies provide a favorable environment for both the formation and the survival of massive star clusters.
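The specific frequency defined in the footnote is straightforward to evaluate; the following minimal sketch uses illustrative numbers, not measurements from this paper:

```python
def specific_frequency(n_gc, m_v):
    """Specific frequency: S_N = N_GC * 10**(0.4 * (M_V + 15))."""
    return n_gc * 10 ** (0.4 * (m_v + 15))

# Illustrative values only: 10 GCs at M_V = -15 give S_N = 10, while the same
# 10 GCs in a 5 mag brighter host (M_V = -20) give S_N = 0.1.
sn_dwarf = specific_frequency(10, -15.0)
sn_bright = specific_frequency(10, -20.0)
```

The normalization to $M_V = -15$ makes hosts of different luminosities directly comparable, which is why dwarfs can reach high S$_{\mathrm{N}}$ with few clusters.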
To extend the parameter space of dwarf galaxy environments, we present a study of massive star cluster formation and survival in dwarf galaxies which differ significantly from typical starbursting galaxies: tidal dwarf galaxies (TDGs). These galaxies are formed from gas and stars originating from the outskirts of a massive galaxy after a galaxy-galaxy interaction \citep[see review by][]{Duc99}. Because of this particular mode of formation, they are typically young, gas-dominated and are expected to be dark-matter free. Most importantly, their gas content is pre-enriched in metals and may already have a metallicity of one third to half solar. Thus, they deviate from the luminosity-metallicity diagram and have a significantly higher metallicity than starbursting dwarfs for a similar luminosity \citep[see e.g.][]{Weilbacher03}. Previous studies of the formation of tidal tails have shown that star clusters and TDGs may be able to form together in some cases \citep{Knierman03, Mullan11}. Possible examples of tidal dwarfs at redshifts of 0.5-1 are presented by \citet{Elmegreen07b}. It should be noted that the merger frequency was much higher at high redshift, so that the contribution of mergers to GC and TDG formation may have been more important. However, mergers at high redshift seem to be much less efficient at triggering an enhancement of star formation \citep{Rodighiero11, Perret14, Lofthouse2017}, probably because of their high gas fraction \citep{Fensch17}. The formation of GCs and TDGs in high-redshift galaxies still needs to be investigated.
The system studied in this paper is composed of young TDGs\footnote{These galaxies formed in collisional rather than in tidal debris. Even though they are thus not formally of tidal origin they are also found in the halo of a more massive galaxy and share the same physical properties as \emph{bona fide} TDGs. We will therefore also use the term TDGs for these galaxies.} located in a huge HI ring (M$_{\mathrm{HI}} > 10^{11} \Msun$, \citealp{Duc98}) expelled from the massive galaxy NGC~5291 (distance: 63.5~Mpc\footnote{The previous studies used this NED distance, assuming the following cosmological parameters: \emph{h} = 73, $\Omega_m$= 0.27, $\Omega_\Lambda$ = 0.73.}, distance modulus of 34.0~mag), most probably after an encounter with a bullet galaxy around 360~Myr ago \citep{Bournaud07}. This ring hosts four gravitationally bound objects with masses as high as $2 \times 10^{9}~\Msun$ \citep{Lelli15}, in the range of dwarf galaxies. These TDGs have a gas to stellar mass fraction of $\sim 50\%$ \citep{Bournaud07, Lelli15} and their spectral energy distribution (SED) is consistent with no stellar population older than 1~Gyr \citep{Boquien09}. Their material has been pre-enriched inside the host galaxy: they typically show half-solar metallicity \citep{Duc98, Fensch16}.
This unique system has extensive wavelength coverage: 21-cm HI line observations with the VLA \citep{Bournaud07}, molecular gas (\citealp{Braine01}, Lelli et al., in prep.), far-infrared with PACS and SPIRE on Herschel (Boquien et al., in prep.), mid-infrared with Spitzer \citep{Boquien07}, H${\alpha}$ with Fabry-Perot interferometry on the ESO 3.2m \citep{Bournaud04}, optical IFU with MUSE \citep{Fensch16}, and far- and near-ultraviolet with GALEX \citep{Boquien07}. Radio and optical spectroscopy have shown the kinematical decoupling of the TDGs from the ring and their complex internal dynamics. MUSE has probed the variation of ionization processes throughout the most massive TDG of this system. However, none of the previously used instruments had the spatial resolution to investigate the TDG substructures and star cluster population.
In this paper we present optical and near-IR imaging data from the \emph{Hubble Space Telescope} (\emph{HST}) which covers three of these TDGs. The pixel size in the optical is 0.04$\arcsec$, which corresponds to 12~pc at the distance of NGC~5291, and is small enough to allow us to distinguish the expected YMCs formed inside the dwarfs. We obtained broadband imaging covering a wavelength range from the near-UV to the near-IR. This allows us to derive the mass and age distributions of these TDGs' star cluster populations and study their formation and survival up to several hundred Myr in this particular environment. \\
We present the data acquisition and reduction in Section~\ref{Obs}. The star cluster selection and photometry measurements are presented in Section~\ref{Technics}. The derivation of their physical parameters (masses and ages) is described in Section~\ref{Derivation}. We discuss the cluster formation efficiency and cluster evolution in Section~\ref{Discussion} and conclude the paper in Section~\ref{Conclusion}.
\section{Observation and Data Reduction}
\label{Obs}
The NGC~5291 collisional ring was observed with the WFC3 instrument on board the \emph{HST} (Project ID 14727, PI: Duc). The location of the field of view and the collisional ring are shown in Fig.~\ref{map}. We obtained photometry in the F336W, F475W, F606W, F814W and F160W bands. As will be discussed in Section~\ref{Derivation}, this set of filters was chosen for its ability to disentangle color effects from metallicity, age, and extinction in young star clusters \citep{Anders04}. The respective exposure times are given in Table~\ref{exptime}. We used the product of the regular MultiDrizzle reduction pipeline \citep{Koekemoer02}. The pixel size is 0.04$\arcsec$ for F336W, F475W, F606W, and F814W, and 0.12$\arcsec$ for F160W. At the distance of NGC~5291 this corresponds to 12~pc and 36~pc, respectively.\\
Only Field 1 and Field 4 were observed in the F336W and F160W bands. The field of view of the F160W data is slightly different, and is shown in the right part of Fig.~\ref{map}. The massive galaxy NGC~5291 and its companion (the Seashell Galaxy, \citealt{Duc99}) can be seen in the top part of Field 3, the TDG N in Field 1, and the TDGs S and SW in Field 4. \\
The right-hand side image shows instrument artefacts: a bright saturation shape in Field 2 and the presence of 8-shaped reflection effects in Fields 1 and 4. Furthermore, one can see a small stripe of higher noise in the middle of each field of view: it is the location of the gap between the two UVIS CCDs of the camera, where we only have one exposure, and thus no cosmic ray removal. We allowed for any orientation to maximize the chance of observability. Unfortunately, the gap fell on both TDGs of Field 4. Only Fields 1 and 4 will be considered in the rest of the paper. Fields 2 and 3 will be the subject of a companion paper (Fensch et al., \emph{in prep.}).\\
We use the \emph{HST} image header keyword \textsc{PHOTFNU} to convert image units to Jansky. All magnitude values will be given according to the AB system in the following, unless specified otherwise.
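The conversion chain above ends in AB magnitudes, which follow from a flux density in Jansky through the standard 3631~Jy zero point (the \textsc{PHOTFNU} keyword only supplies the counts-to-Jansky factor); a minimal sketch with illustrative input fluxes:

```python
import math

def ab_magnitude(flux_jy):
    """AB magnitude from a flux density in Jansky: m_AB = -2.5 log10(f / 3631 Jy)."""
    return -2.5 * math.log10(flux_jy / 3631.0)

# By definition a 3631 Jy source has m_AB = 0; a 1 microJy source has m_AB ~ 23.9.
m_zero = ab_magnitude(3631.0)
m_faint = ab_magnitude(1.0e-6)
```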
\begin{table}[h!]
\centering
\caption{Exposure times for each field and filter in the following format: (number of exposures) x (time in seconds for a single exposure).\label{exptime}}
\begin{tabular}{l c c c c c}
\\ \hline \hline
& F336W & F475W & F606W & F814W & F160W \\ \hline
Field 1 & 4 x 378. & 2 x 368. & 2 x 368. & 2 x 368. & 4 x 903. \\
Field 2 & - & 2 x 368. & 2 x 368. & 2 x 368. & - \\
Field 3 & - & 2 x 368. & 2 x 368. & 2 x 368. & - \\
Field 4 & 4 x 378. & 2 x 368. & 2 x 368. & 2 x 368. & 4 x 903.\\ \hline
\end{tabular}
\end{table}
\begin{figure*}
\centering{
\includegraphics[width=18.5cm]{Fig/Fig1.png}
\caption{Left: Composite color \emph{HST} image of the system using the F475W (blue), F606W (green) and F814W (red) filters. North is up and East is to the left. Each field of view is 49.55~kpc $\times$ 53.34~kpc. Three regions contaminated by strong artefacts were masked. They are shown with black rectangles in the right image. Right: F475W image. The blue contour encircles regions where the HI column density is higher than $10^{20}$ N$_{\mathrm{HI}}$~cm$^{-2}$ \citep[VLA data, ][]{Bournaud07}. The two F160W-band fields of view are shown by the red rectangles. The central galaxy NGC~5291, the Seashell, and the three TDGs (N, S and SW) are indicated by black arrows. \label{map}}}
\end{figure*}
\section{Cluster selection and photometry}
\label{Technics}
We extracted the star cluster candidates using SExtractor \citep{Bertin96} in the optical bands (F475W, F606W and F814W). The images were convolved through a \emph{mexican hat}-type filter\footnote{We used the filters provided in the SExtractor repository of astromatic.iap.fr. } with a width of two pixels to enhance the contrast with respect to the diffuse stellar light, and the detection threshold was set to 1.25~$\sigma$ for at least three adjacent pixels.
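The detection step described above (Mexican-hat filtering followed by a 1.25~$\sigma$ threshold over at least three adjacent pixels) can be sketched without SExtractor itself. The following pure-NumPy re-implementation is illustrative only: the kernel size, toy image, and injected source are assumptions, not the actual pipeline:

```python
import numpy as np

def mexican_hat_kernel(sigma, size=9):
    """2D Mexican-hat (Ricker-like) kernel, made zero-sum so that a flat
    background filters to zero."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    k = (1.0 - r2) * np.exp(-r2)
    return k - k.mean()

def convolve2d(image, kernel):
    """Direct 2D convolution with edge padding (pure NumPy, small kernels)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(image)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def detect(image, nsigma=1.25, min_pix=3):
    """Return 4-connected groups of at least min_pix pixels lying above
    nsigma times the standard deviation of the filtered image."""
    filt = convolve2d(image, mexican_hat_kernel(sigma=2.0))
    mask = filt > nsigma * filt.std()
    seen = np.zeros_like(mask)
    found = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, comp = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:       # flood fill over adjacent bright pixels
                    yy, xx = stack.pop()
                    comp.append((yy, xx))
                    for ny, nx in ((yy + 1, xx), (yy - 1, xx), (yy, xx + 1), (yy, xx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_pix:
                    found.append(comp)
    return found

# Toy frame: Gaussian noise plus one bright point-like source centered at (31, 31).
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:33, 30:33] += 20.0
sources = detect(img)
```

The zero-sum kernel is what makes the filter insensitive to the diffuse stellar light, which is the role the \emph{mexican hat} filter plays in the text.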
As we only have two exposures for each of the three optical bands (F475W, F606W, F814W), the standard pipeline cannot remove cosmic rays which are coincident on the two exposures. We proceeded to apply a first cosmic ray subtraction by matching the location of the sources in these three filters. Only sources detected in the F606W image and in at least one of the F475W or F814W images are considered for subsequent analysis. We also rejected 13 sources which are part of the GAIA DR2 \citep{GaiaDR2} catalog with a non-zero parallax and proper motion, which are likely foreground stars. After this step, we have 826 detections. This catalog of detections is then applied to the five bands to extract the photometry of the detected clusters.\\
The crowdedness of the sources in the TDGs prevented us from using a standard aperture photometry method. Instead, we performed point spread function (PSF) fitting with GALFIT \citep{Peng02,Peng10b}. We first removed the background light using the sigma-clipping method implemented in SExtractor. In order to remove the diffuse stellar light in the TDGs we chose a tight mesh of 6x6 pixels, further smoothed with a 3x3 pixel kernel. The photometry was computed by fitting the PSF of the brightest unsaturated star available in the field. To avoid blending issues, we restricted the peak of the PSF to vary by less than 0.08$\arcsec$ from the center of the detection in the F606W band. Some extracted sources appeared extended and were not well fitted in the F336W, F475W, F606W, and F814W bands. They were identified by a high pixel value dispersion in the residual image. The pixel size in these bands is 12~pc. In the early evolutionary stages of YMCs (1-10~Myr), the ionized gas surrounding the cluster may have a radius of around 20~pc, with a dependence on age \citep[see e.g.][]{Whitmore11}. One therefore expects to have barely resolved star clusters in these bands. For 40 sources out of the 826 detected sources, we performed S\'ersic photometry for proper subtraction.
To avoid unrealistic fits, we constrain the half-light radius to be smaller than 3~pixels and the S\'ersic index to be below 5. For consistency, we also fit the data without these constraints, and the resulting values change only by less than half a standard deviation.
These sources were not resolved in the F160W image, which has a coarser resolution, hence we keep using PSF models for these sources. To ensure that the background subtraction method did not remove flux from either our point-like or extended sources, we verified the consistency within the error bars between the GALFIT method and an aperture photometry method for isolated sources. We used a 6~pixel radius and estimated the background as the median of 8 other same-size apertures located around the source, with offsets of -13, 0, +13 pixels in both the vertical and horizontal directions. Some sources were too faint for the GALFIT subtraction to converge. For those we used aperture photometry and considered this value an upper limit to the flux.
The error on the flux is obtained by combination of the Poissonian noise from the source and the removed background, the pixel-to-pixel root mean-square of the background, the pixel-to-pixel root mean square of the residuals, the systematic flux-dependent GALFIT flux uncertainty and the read noise from WFC3.\\
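Since the listed error terms are independent, the combination is a quadrature sum; a minimal sketch with purely illustrative inputs:

```python
import math

def flux_error(poisson_src, poisson_bkg, rms_bkg, rms_resid, galfit_sys, read_noise):
    """Combine independent error terms in quadrature (all in the same flux units)."""
    terms = (poisson_src, poisson_bkg, rms_bkg, rms_resid, galfit_sys, read_noise)
    return math.sqrt(sum(t ** 2 for t in terms))

# Illustrative numbers only; the total is dominated by the largest terms.
err = flux_error(3.0, 4.0, 1.0, 1.0, 2.0, 0.5)
```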
A comparison of the original image and the residual after background and source subtraction is shown in Fig.~\ref{residual} for the three TDGs. In this figure some small extended stellar features remain on the location of star formation complexes. These features were not considered as detections by SExtractor, because of their elongated shape through the \emph{mexican hat} filter. They were also not accounted for by the background subtraction, being smaller than the background mesh. In our subsequent analysis we restrict ourselves to sources which have a signal-to-noise ratio higher than three in at least four bands, which leaves us with 439 cluster candidates. \\
\begin{figure*}
\centering
\includegraphics[width=5.0cm]{Fig/Maps/01040_N_im-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01040_N_mod-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01040_N_res-min.png}\\
\includegraphics[width=5.0cm]{Fig/Maps/01010_N_im-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01010_N_mod-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01010_N_res-min.png}\\
\includegraphics[width=5.0cm]{Fig/Maps/01020_N_im-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01020_N_mod-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01020_N_res-min.png}\\
\includegraphics[width=5.0cm]{Fig/Maps/01030_N_im-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01030_N_mod-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01030_N_res-min.png}\\
\includegraphics[width=5.0cm]{Fig/Maps/01050_N_im-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01050_N_mod-min.png}
\includegraphics[width=5.0cm]{Fig/Maps/01050_N_res-min.png}\\
\caption{Data, model and residual images for the TDG N. The two other TDGs are shown in Appendix~\ref{appendix::fits}. For each filter we show the data in the left column, the model of the clusters in the middle column and the background-subtracted residuals in the right column. From top to bottom: F336W, F475W, F606W, F814W and F160W. We used the L.A. Cosmics algorithm \citep{vanDokkum01} to remove the cosmic rays. North is up and East is to the left. The field of view covers 14.4~kpc~x~14.4~kpc.\label{residual}}
\end{figure*}
The completeness of our star cluster candidate extraction was computed by simulating point-like sources in the image using GALFIT and testing their detection and correct flux measurement using the same analysis as described above. A simulated source was considered recovered if detected by SExtractor and if its flux was recovered within 0.3~mag by GALFIT, as we accept a minimum signal-to-noise ratio down to 3 in a given filter. The completeness curves are shown in Fig.~\ref{complfig}. The 95\% completeness limit will be considered in the following.\\
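The injection-recovery test can be sketched as a simple Monte Carlo. Here a toy detector model stands in for SExtractor/GALFIT; only the 0.3~mag recovery tolerance follows the text, while the detection curve and scatter are assumptions:

```python
import random

def completeness(mag, n_trials=2000, seed=0):
    """Fraction of injected sources that are both detected and have their flux
    recovered within 0.3 mag, for a toy detector whose detection probability
    drops linearly between magnitudes 24 and 26 (an assumed curve)."""
    rng = random.Random(seed)
    recovered = 0
    for _ in range(n_trials):
        p_detect = min(1.0, max(0.0, (26.0 - mag) / 2.0))  # toy detection curve
        if rng.random() < p_detect:
            measured = mag + rng.gauss(0.0, 0.1)           # toy measurement scatter
            if abs(measured - mag) <= 0.3:
                recovered += 1
    return recovered / n_trials

frac_bright = completeness(23.0)  # well above the toy limit: near 100% complete
frac_faint = completeness(26.5)   # beyond the toy limit: never detected
```

Scanning such a curve over magnitude and reading off where it crosses 0.95 gives the kind of completeness limit quoted in the text.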
\begin{figure}
\centering
\includegraphics[width=9cm]{Fig/completeness.png}
\caption{Completeness curve of the star cluster detection algorithm for each filter. The horizontal dashed line at 0.95 shows the 95\% completeness limit. \label{complfig}}
\end{figure}
\section{Deriving cluster physical properties}
\label{Derivation}
\subsection{SED fitting procedure}
\label{CIGALE}
The set of filters we used was chosen for its ability to recover ages and extinction for young star clusters using SED fitting procedures \citep{Anders04}. In this work, we used the SED fitting code CIGALE\footnote{Code available at https://cigale.lam.fr} \citep{Burgarella05, Noll09, Giovannoli11, Boquien19}. This code first computes a grid of flux models for a given input of discrete parameters from the stellar models of \citet{Bruzual03}, normalized to a fixed mass. In a second step, the code performs a $\chi^{2}$ analysis between the source and the flux grid, including a normalization to obtain the mass corresponding to the fit. \\
We have chosen the following range of parameters:
\begin{itemize}
\item{{\bf Star formation history:} We use the \citet{Chabrier03} initial stellar mass function with lower and upper mass limits of respectively 0.1 and 100~$\Msun$. We model our clusters as a single quasi-instantaneous burst of star formation with an exponential decay with a 0.1~Myr timescale. To quantify the sensitivity of the results to this parameter, we also modelled the star formation burst with a 1~Myr timescale. The small variations in the resulting values are quantified in the following and do not affect our conclusions.}
\item{{\bf Age:} to account for both very young star clusters as well as GCs, we used models from 1~Myr to 12~Gyr. We use an adaptive spacing to account for the rapid change of the spectra at young ages. In particular, we have one model per Myr from 1 to 20~Myr and one per 5~Myr from 20 to 50~Myr. The weights of the fits depend on the age grid spacing, in order to have a flat age prior.}
\item{{\bf Metallicity:} the metallicity of the ring is approximately constant at around half solar metallicity \citep{Duc98, Fensch16}. We therefore fix the metallicity to Z = 0.008 to avoid degeneracies with age and extinction. The impact of changing the metallicity prior will be discussed in the following sections.}
\item{{\bf Extinction:} we use the LMC extinction curve from \citet{Gordon03} as it is the best suited to our half-solar metallicity. The extinction obtained with MUSE from the Balmer decrement, using the LMC extinction curve, gave values of the order of A$_{V} = 0.6 \pm 0.2$ mag throughout the northern TDG, on a spatial scale of 180~pc~x~180~pc \citep{Fensch16}. The NASA Infrared Science Archive service\footnote{https://irsa.ipac.caltech.edu/applications/DUST/} indicates a Milky Way extinction along the line of sight to NGC~5291 of around 0.15 mag. In order to stay conservative, we allow for extinction ranging from A$_{V}$ = 0 to 2~mag.}
\end{itemize}
Furthermore, we allow the ionisation parameter log U to vary between -4 and -2 in 0.5 dex steps, according to the range determined from emission line ratios with MUSE \citep[see Fig.10 in ][]{Fensch16}. Finally, we allow for a fraction of escaping Lyman continuum photons between 0 and 20\% \citep[dwarfs with strong outflows have a fraction of escaping Lyman continuum photons of around 15\%, see e.g.,][]{Bik15}. We assume a gas density of 100~cm$^{-3}$ \citep{Fensch16}.
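Because the model flux scales linearly with mass, the $\chi^{2}$ scan over the grid admits an analytic mass normalization for each model. The following sketch illustrates this second step with a toy two-model grid (the grid and fluxes are stand-ins, not the CIGALE models):

```python
import numpy as np

def fit_mass_and_chi2(f_obs, sigma, grid):
    """For each unit-mass model f_mod, chi2 = sum(((f_obs - m * f_mod)/sigma)**2)
    is minimized analytically over the mass m; return the best (name, m, chi2)."""
    best = None
    for name, f_mod in grid.items():
        m = np.sum(f_obs * f_mod / sigma ** 2) / np.sum(f_mod ** 2 / sigma ** 2)
        chi2 = np.sum(((f_obs - m * f_mod) / sigma) ** 2)
        if best is None or chi2 < best[2]:
            best = (name, m, chi2)
    return best

# Toy grid of two unit-mass SED shapes in four bands (illustrative numbers).
grid = {"young": np.array([5.0, 4.0, 3.0, 2.0]),
        "old": np.array([1.0, 2.0, 3.0, 4.0])}
f_obs = 2.0e4 * grid["young"]   # a noiseless "young" cluster of mass 2e4
sigma = np.full(4, 1.0e3)
name, mass, chi2 = fit_mass_and_chi2(f_obs, sigma, grid)
```

Setting $\partial\chi^{2}/\partial m = 0$ gives the closed-form mass used above, so only the discrete parameters (age, extinction, etc.) need to be scanned.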
\subsection{Physical parameters and degeneracies}
\label{sub_deg}
We are interested in recovering good estimates of the ages and masses of the clusters. However, for our set of filters, there is a degeneracy between extinction and age, which is illustrated in Fig.~\ref{deg}. In the top panels one can see two models which fit the data well, with very different ages and extinction values. The cumulative probability density function shown in the bottom-left panel shows two characteristic values for the age and its rise is quite extended. The origin of this wide distribution is an age-extinction degeneracy: in the bottom right panel we see that both young and attenuated models, and old and unattenuated models can reproduce our photometry for this particular cluster. Even though their metallicity is known, this degeneracy prevents us from deriving precise ages for all clusters.
\begin{figure*}[h!]
\centering
\includegraphics[width=6.5cm]{Fig/specatt.png}
\includegraphics[width=6.5cm]{Fig/specatt0.png}\\
\includegraphics[width=6.5cm]{Fig/age_pdf.png}
\includegraphics[width=6.5cm]{Fig/deg.png}
\caption{Top left: example of the best fit for one cluster candidate. The retrieved physical parameters and the reduced $\chi^{2}$ are given in the title of the plot. Top right: Best fit for the same cluster candidate if one imposes A$_V = 0$~mag. Bottom left: cumulative age PDF for the cluster. The star shows the best fitting age. The blue dot shows the output value of CIGALE. Bottom right: Normalized likelihood distribution for age and extinction for the given cluster candidate.\label{deg}}
\end{figure*}
To quantify this effect, we use a proxy for the width of the age PDF: $r_{\mathrm{age}} = \mathrm{F}_{0.95} / \mathrm{F}_{0.05}$, where $\mathrm{F}_{x}$ is the age at which the cumulative age probability distribution function reaches $x$. The extent between $\mathrm{F}_{0.95}$ and $\mathrm{F}_{0.05}$ is shown in the bottom left panel of Fig.~\ref{deg}. The particular cluster candidate shown in Fig.~\ref{deg} has $r_{\mathrm{age}} = 58$. We note that emission-line information, such as H$\alpha$ emission mapping at the scale of a star cluster (10-20~pc), would help break this degeneracy: the presence of ionized gas would classify a given cluster as unambiguously young \citep[see e.g.][]{deGrijs13}. However, we cannot distinguish the star clusters on the emission-line maps we have obtained with MUSE and Fabry-Perot interferometry (see Section~\ref{intro}).\\
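The width proxy $r_{\mathrm{age}}$ can be computed directly from a tabulated age PDF; a minimal sketch on a toy uniform PDF:

```python
import numpy as np

def age_width_ratio(ages, pdf):
    """r_age = F_0.95 / F_0.05, where F_x is the age at which the cumulative
    probability distribution function reaches x (linear interpolation)."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]
    return np.interp(0.95, cdf, ages) / np.interp(0.05, cdf, ages)

# Toy PDF, uniform between 10 and 110 Myr: F_0.05 and F_0.95 fall near
# 15 and 105 Myr, so r_age comes out close to 7.
ages = np.linspace(10.0, 110.0, 101)
pdf = np.ones_like(ages)
r_age = age_width_ratio(ages, pdf)
```

A narrow, single-peaked posterior gives $r_{\mathrm{age}}$ near 1, while a bimodal age-extinction degenerate posterior like the one in Fig.~\ref{deg} inflates it by orders of magnitude.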
Fig.~\ref{mass_age} shows the retrieved masses and ages for the cluster candidates, where the width of their age PDF is color-coded. We note an increase of $r_{\mathrm{age}}$ with age, which is due to the slower spectral evolution with age, and also a significant number of sources for which $r_{\mathrm{age}} > 50$, which might be subject to degeneracies. In this study we are not interested in old GCs; this is why we only consider clusters with a retrieved age below 3~Gyr in Fig.~\ref{mass_age}. Older clusters, for which the fixed half-solar metallicity prior is not adapted for mass and age determination, will be presented in a companion paper, along with the study of Fields 2 and 3 (Fensch et al., \emph{in prep.}).\\
Fig.~\ref{mass_age} also shows the completeness limit as obtained in Section~\ref{Technics}. These curves were obtained from the flux models computed by CIGALE. For each age, the curves show the minimum mass for which a cluster would have a 95\% probability to be detected with a S/N ratio above 3 in at least 4 bands. We show the completeness curves for two assumed extinctions, A$_{\mathrm{V}} $ = 0 and 1.1~mag. This latter value was the maximum extinction obtained using the Balmer decrement from the field of the Northern TDG in \citet{Fensch16}. Based on this figure, we assume that our sample is complete for clusters younger than 30~Myr above a mass of $1.5\times10^4~\Msun$.
\begin{figure}
\centering
\includegraphics[width=10cm]{Fig/mass_age.png}
\caption{Estimated mass-age distribution for the cluster candidates. The color indicates the width of the age PDF, as defined in the text. The two dashed lines indicate the 95\% completeness limit in the age-mass plane, assuming a given extinction. The black line shows the estimated time of the interaction which created this system (see text). \label{mass_age}}
\end{figure}
\subsection{Young cluster mass function}
\label{young}
In the following we explore the properties of the young clusters. In order to have a sufficient number of detections, we chose to consider only clusters with an age below 30~Myr.
Given the degeneracy effect, one cannot compile a complete sample. Indeed, young and strongly attenuated clusters could in principle masquerade as old and unattenuated clusters. To define our sample, we will use the shape of the age PDF. In particular, we write
\begin{equation}
\mathrm{P}[\mathrm{age} < X] = \int_{0}^{X} \mathrm{PDF}(t) ~ \mathrm{d}t
\end{equation}
with X in Myr, the probability that the cluster candidate has an age younger than X~Myr.\\
In the following, we define our young cluster sample adopting $\mathrm{P}[\mathrm{age} < 40] > 0.5$ and the modal value of the age PDF being enclosed in [1~Myr, 30~Myr].\\
We used 40~Myr as the upper bound for the PDF integral because using 30~Myr led to rejecting clusters with ages between 20 and 30~Myr, which have a large fraction of their PDFs extending beyond 30~Myr. We then chose a higher upper bound for the integral calculation, and use the condition on the mode of the PDF to ensure that the highest likelihood is still reached within the [0,30] Myr interval. To quantify this effect, we use this sample selection in association with the spectral models created by CIGALE with input ages between 20 and 40~Myr. Only 28\% of the clusters with input age in the range [20,30]~Myr are included in our sample if we use 30~Myr as upper bound, while this fraction rises to 71\% if one uses 40~Myr as upper bound. The contamination fraction, that is, the fraction of models assigned to the sample which have ages in (30,40]~Myr, is 0\% in the first case and 6\% in the second case. Using 40~Myr instead of 30~Myr in the definition of our sample therefore gives a better representation of the clusters with ages genuinely younger than 30~Myr. \\
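The resulting selection (P[age < 40~Myr] > 0.5 together with the PDF mode falling in [1~Myr, 30~Myr]) can be sketched as follows, with toy Gaussian PDFs standing in for real cluster posteriors:

```python
import numpy as np

def is_young(ages_myr, pdf, p_lim=0.5, t_int=40.0, mode_range=(1.0, 30.0)):
    """Young-sample cut: the probability mass below t_int exceeds p_lim AND the
    modal age falls inside mode_range (a regular age grid is assumed)."""
    pdf = np.asarray(pdf, dtype=float)
    dx = ages_myr[1] - ages_myr[0]
    pdf = pdf / (pdf.sum() * dx)                # normalize to unit probability
    p_young = pdf[ages_myr < t_int].sum() * dx  # P[age < t_int]
    mode = ages_myr[np.argmax(pdf)]
    return bool(p_young > p_lim and mode_range[0] <= mode <= mode_range[1])

# Toy posteriors: one peaked at 15 Myr (kept), one peaked at 120 Myr (rejected).
ages = np.linspace(1.0, 200.0, 400)
young_pdf = np.exp(-0.5 * ((ages - 15.0) / 10.0) ** 2)
old_pdf = np.exp(-0.5 * ((ages - 120.0) / 30.0) ** 2)
```

The mode condition is what keeps a broad posterior whose tail merely leaks below 40~Myr out of the sample.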
We discuss in Appendix~\ref{Annex_deg} two other sample selections: a \emph{Secure} sample, defined by P[age < 40] > 0.9, and an \emph{Inclusive} sample defined by P[age < 40] > 0.1. The exact same analysis is performed on these two samples for comparison purposes. Since the former is very restrictive and the latter will include clusters that are too old, this additional analysis gives an idea of the strict boundaries within which our result may vary.
Fig.~\ref{CMF} shows the cluster mass function (CMF) for our young cluster sample. A power-law fit to the diagram, for bins more massive than $1.5\times10^4~\Msun$, gives a slope of $-1.16\pm0.19$ for the evolution of $\frac{dN}{d\log M}$ with $M$. This gives $\frac{dN}{dM} \propto \frac{1}{M}\frac{dN}{d\log M} \propto M^{\alpha}$, with $\alpha = -2.16 \pm 0.19$. The values obtained for the lower metallicity prior (Z=0.004) and for the 1~Myr timescale are consistent within the 1-sigma uncertainty. The obtained mass distribution is consistent with a power-law slope of $\alpha \sim -2$, as many other studies of young star cluster formation have found \citep[see e.g.][]{Portegies10}. This suggests that the formation of star clusters in the gas ring and TDGs occurs in a similar fashion to that of the other studied environments. This can be interpreted as a legacy of the hierarchical collapse of gas clouds \citep[see e.g.][]{Elmegreen97}.
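A power-law fit of this kind amounts to a straight line in $\log_{10} M$ versus $\log_{10}(dN/d\log M)$. The sketch below recovers the slope from synthetic masses drawn from $dN/dM \propto M^{-2}$, for which $dN/d\log M$ has slope close to $-1$ (the sampling details are illustrative, not the paper's data):

```python
import numpy as np

def cmf_slope(masses, n_bins=8):
    """Slope of log10(dN/dlogM) vs log10(M): histogram the masses in equal
    log-mass bins, then fit a straight line to log10(counts)."""
    logm = np.log10(masses)
    counts, edges = np.histogram(logm, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    slope, _ = np.polyfit(centers[keep], np.log10(counts[keep]), 1)
    return slope

# Draw synthetic masses from dN/dM ~ M^-2 between 1.5e4 and 1e6 Msun by
# inverting the cumulative distribution; the fitted dN/dlogM slope is ~ -1.
rng = np.random.default_rng(1)
u = rng.uniform(size=5000)
m_lo, m_hi = 1.5e4, 1.0e6
masses = 1.0 / (1.0 / m_lo - u * (1.0 / m_lo - 1.0 / m_hi))
beta = cmf_slope(masses)
```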
\begin{figure}
\centering
\includegraphics[width=10cm]{Fig/CMF_fid.png}
\caption{CMFs for the young cluster sample described in the text. The power-law fit is determined for bins with masses higher than $1.5\times10^4~\Msun$, shown by the black vertical line. The legend shows the slope and uncertainty of the corresponding fit. \label{CMF}}
\end{figure}
\subsection{Star cluster formation efficiency in the TDGs}
\label{res_gamma}
\begin{table}[]
\centering{
\caption{SFR, Area and CFE for the presented TDGs. The last two columns show the expectation of the CFE from the \citet{Kruijssen12} and \citet{Johnson16} models for the measured SFR surface density, named respectively K12 and J16.}
\label{cfe_fid}
\begin{tabular}{l c c c c c }
\hline \hline
& SFR & Area & CFE & K12 & J16 \\
Galaxy & [$\Msun$/yr] & [kpc$^{2}$] & [$\%$] & [$\%$] & [$\%$] \\ \hline
TDG N & $ 0.19 \pm 0.06 $ &$ 12.76 $ & 47$^{+21}_{-21}$ & $14^{+2}_{-2} $ & $22^{+10}_{-9} $ \\
TDG SW & $ 0.14 \pm 0.05 $ & $ 17.44 $ & 33$^{+17}_{-16}$ & $10^{+2}_{-2} $ & $15^{+8}_{-7} $ \\
TDG S & $ 0.12 \pm 0.03 $ & $ 17.17 $ & 45$^{+16}_{-15}$ & $10^{+1}_{-1} $ & $14^{+8}_{-6} $ \\
including S* & $ 0.08 \pm 0.03 $ & $ 4.59 $ & 60$^{+26}_{-26}$ & $ 15^{+2}_{-3}$ & $23^{+11}_{-9} $ \\
\hline
\end{tabular}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Fig/map_N.png}
\includegraphics[width=7.5cm]{Fig/map_SW.png}
\includegraphics[width=7.5cm]{Fig/map_S.png}
\caption{True-color image of the three TDGs: NGC~5291N (top panel), NGC~5291SW (middle panel) and NGC~5291S (bottom panel). The definitions of the young sample, degenerate clusters and older than 30~Myr samples are given in the text. The Intermediate clusters are part of the older than 30~Myr sample and are discussed in Sect.~\ref{res_int}. We also show detected clusters which do not have S/N > 3 in at least four bands, as the \emph{Low S/N} sample. The dashed white contours show the area considered to compute $\Sigma_\mathrm{SFR}$. The bottom panel shows two contours (see text). The inset shows a VRI image from FORS \citep{Fensch16}, with the same contours. Only clusters inside the white contours are considered for the computation of the CFE. \label{map_gamma}}
\end{figure}
\begin{table}[]
\centering{
\caption{Significance, in standard deviations, of the offset of the data points compared to the three relations: \citet{Kruijssen12}, \citet{Johnson16} and \citet{Chandar17}, named respectively K12, J16 and C17.}
\label{sigma_fid}
\begin{tabular}{l c c c }
\hline \hline
Galaxy & K12 & J16 & C17 \\ \hline
TDG N & 1.6 & 1.1 & 1.0 \\
TDG SW & 1.3 & 1.0 & 0.5 \\
TDG S & 2.2 & 1.7 & 1.1 \\
including S* & 1.7 & 1.3 & 1.3 \\
TDG N+SW+S & 3.8 & 3.1 & 2.5 \\
TDG N+SW+S* & 3.5 & 2.8 & 2.6 \\
\hline
\end{tabular}}
\end{table}
One may characterise the \emph{cluster formation efficiency} (CFE) of galaxies as the ratio $\mathrm{CFR}/\mathrm{SFR}$, where CFR is the cluster formation rate (in $\Msun$/yr). It has been argued that galaxies follow a power-law relation with positive index between the CFE and the SFR surface density ($\Sigma_\mathrm{SFR}$) \citep{Larsen00, Billett02, Goddard10}. A similar relation was derived on theoretical grounds by \citet{Kruijssen12}. However, \citet{Chandar17} claim that the former empirical relation was driven by an under-estimation of the CFR of both the LMC and the SMC due to an inconsistent age range selection. On the contrary, they find a constant value of the CFE, of $24\% \pm 9\%$, independent of $\Sigma_\mathrm{SFR}$. \\
To compute the CFE for our system, we construct young cluster samples based on the same definition as above, but for each of the three TDGs. To limit the effects of degeneracies, we impose a minimum value of A$_V$ > 0.3~mag on the fitting prior, justified by the extinction maps obtained with MUSE by \citet{Fensch16}. We show the location of our young cluster sample in the three TDGs in Fig.~\ref{map_gamma}, together with the detections that are \emph{degenerate} and \emph{securely old} (see definitions in Sect.~\ref{young}). We also consider a smaller star cluster sub-sample for the S dwarf, shown in Fig.~\ref{map_gamma}, which we will call S* in the following. This is motivated by the elongated shape of this TDG, which suggests that TDG~5291S could actually be composed of two distinct objects, despite the apparent coherent HI rotation \citep{Lelli15}. \\
\begin{figure}
\includegraphics[width=9cm]{Fig/gamma_likely.png}
\caption{Distribution of our TDGs in the CFE-$\Sigma_\mathrm{SFR}$ plane. For the TDG NGC~5291S we show two points, the full TDG (with lower CFE and $\Sigma_\mathrm{SFR}$) and only S*. The dataset of \citet{Goddard10} is shown in black. The sample of \citet{Chandar17}, and their fit to their data is shown in purple. For the SMC and the LMC we only show the values computed by the latter reference (see text). In red are shown the BCDGs of \citet{Adamo11}. The continuous blue line shows the prediction of the model by \citet{Kruijssen12} (see text). The grey band shows a modified version of this model using the \citet{Bigiel08} relation \citep{Johnson16}. \label{gamma}}
\end{figure}
Following previous studies on the CFE \citep[see e.g.][]{Goddard10, Adamo11}, we use the CMF to infer the total mass in clusters down to $10^{2}~\Msun$ by fitting to the histogram a canonical power law $\frac{dN}{dM} \propto M^{\alpha}$ with fixed index $\alpha = -2$.
The SFR is obtained from the H$\alpha$ luminosity \citep{Boquien07}.
This fixed index of $-2$ is also used in the studies we refer to in this section, and is also supported by the shape of the CMF covering the full cluster sample (see Fig.~\ref{CMF}). Note that \citet{Boquien07} used a \citet{Salpeter55} IMF, whereas our mass estimates were obtained using a \citet{Chabrier03} IMF. We therefore multiply the H$\alpha$-based SFR of \citet{Boquien07} by a factor 0.70 to account for the different SFR to H$\alpha$ flux ratio obtained for the two IMFs \citep[see e.g.][]{Kennicutt09}. We also correct the SFR by the mean extinction measured in TDG~N, A$_V$ = 0.6 mag \citep[][]{Fensch16}. The obtained values of the CFE are summarized in Table~\ref{cfe_fid}. Using the lower metallicity prior (Z=0.004) gives consistent results within the 1-sigma uncertainty and changes the CFE values by less than $10\%$. Using the 1~Myr star formation timescale changes the CFE values by less than 3\%.\\
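The mass extrapolation and CFE computation described above can be sketched as follows. This is a minimal illustration, not the pipeline used in the paper: the observed cluster mass and SFR below are hypothetical placeholders, and only the fixed $\alpha = -2$ index, the $10^2~\Msun$ lower limit, the 30~Myr age range and the 0.70 IMF conversion factor come from the text.

```python
import math

# Sketch of the CMF extrapolation and CFE.  All input numbers below
# are made up for illustration; they are NOT the measured values.

def total_cluster_mass(m_obs, m_lo_obs, m_hi_obs, m_lo=1e2):
    """Total mass in clusters down to m_lo for dN/dM ∝ M^-2.

    For this index the mass per logarithmic mass bin is constant, so
    the mass observed between m_lo_obs and m_hi_obs rescales to the
    full range by a ratio of logarithms."""
    amplitude = m_obs / math.log(m_hi_obs / m_lo_obs)
    return amplitude * math.log(m_hi_obs / m_lo)

def cluster_formation_efficiency(cluster_mass, age_range_myr, sfr):
    """CFE = CFR / SFR, with CFR = cluster mass / age range."""
    cfr = cluster_mass / (age_range_myr * 1e6)  # Msun / yr
    return cfr / sfr

# hypothetical observed cluster mass: 1e5 Msun between 5e3 and 1e5 Msun
m_tot = total_cluster_mass(1e5, 5e3, 1e5)

# hypothetical Salpeter-based Halpha SFR of 0.2 Msun/yr, rescaled to a
# Chabrier IMF with the factor 0.70 used in the text
sfr = 0.70 * 0.2
gamma = cluster_formation_efficiency(m_tot, 30.0, sfr)
```

With these placeholder inputs the extrapolation roughly doubles the observed cluster mass, illustrating how sensitive the CFE is to the assumed lower mass limit.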
To compare the TDGs with other star cluster forming galaxies, we place these values in the CFE-$\Sigma_\mathrm{SFR}$ plane in Fig.~\ref{gamma}. We see that the TDGs are located in the same regime as the BCDGs, with CFEs above 45$\%$ for TDG N and TDG S. They are located systematically above the empirical \citet{Chandar17} relation, although consistent within 0.5 to 1.3~$\sigma$.
In Figure~\ref{gamma} we show the current model and empirical predictions. The blue curve shows the model\footnote{Model accessible at: https://wwwmpa.mpa-garching.mpg.de/cfe/. We used the integrated CFE model.} by \citet[][K12 in the following]{Kruijssen12}, for a gas velocity dispersion of 30~km~s$^{-1}$. We also show the version of the model calibrated with the \citet{Bigiel08} relation between the SFR and the gas density \citep[][J16]{Johnson16}. In purple is shown the universal value of 24\% suggested by \citet[][C17]{Chandar17}.
The computed CFEs are systematically above these three relations. The significance of this deviation for the TDGs and the full system is measured with random draws of the relation and data values, assuming Gaussian distributions. For the combined TDGs, this is equivalent to multiplying the probabilities that each TDG's CFE is compatible with the relation. The significances are given in Table~\ref{sigma_fid}. While each individual TDG measurement is less than 2.1$\sigma$ off each relation, the combination of TDG N, SW and S is above these relations by 3.8$\sigma$ for K12, 3.1$\sigma$ for J16 and 2.5$\sigma$ for C17. These numbers change slightly if one uses S* instead of S in the sample. Our sample of TDGs is thus significantly above the current model and empirical relations.
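The Monte-Carlo estimate sketched above can be written down compactly. This is one plausible reading of the described procedure, with invented numbers: both the relation and the measured CFEs are drawn from Gaussians, and all galaxies must be simultaneously consistent with (at or below) the relation, which, for independent draws, multiplies the per-galaxy probabilities as stated in the text.

```python
import random

def joint_consistency_probability(data, relation, n=100_000, seed=1):
    """Probability that every galaxy's drawn CFE lies at or below the
    drawn relation value.

    data, relation: lists of (mean, sigma) pairs, one per galaxy.
    Because the draws are independent, the result equals the product
    of the per-galaxy probabilities."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if all(rng.gauss(m, s) <= rng.gauss(mr, sr)
               for (m, s), (mr, sr) in zip(data, relation)):
            hits += 1
    return hits / n

# three hypothetical galaxies, each ~1 sigma above a model relation:
# individually unremarkable (~18% each), but jointly rare (~0.5%)
p = joint_consistency_probability([(40.0, 10.0)] * 3, [(30.0, 4.0)] * 3)
```

This illustrates why the combined sample can deviate at the 2.5-3.8$\sigma$ level even though each TDG alone is within ~2$\sigma$ of the relations.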
We use an age range [1-30]~Myr which is broader than that typically used in these studies ([1-10]~Myr). We chose this range because there were not enough clusters younger than 10~Myr to properly measure the CFE. We did not correct for the mass evolution or destruction that may have happened, in particular cluster disruption by gas removal \citep[\emph{infant mortality,}][]{Boutloukos03, Whitmore07} which has a time-scale of 10-40~Myr \citep{Kroupa02, Fall05, Goodwin06}. This means that we are missing clusters which have been disrupted and mass which has been lost from the detected clusters. The fact that we do not correct for this effect suggests that we might be under-estimating the CFE of our TDGs (see discussion in C17). Finally we note that, at the distance of NGC~5291, we are contaminated by young star associations that are unbound and did not have time to dissolve \citep[see e.g.][]{Messa18}. This unresolved process might lead to an overestimation of the computed CFE. \\
As explained in Section~\ref{sub_deg}, we also performed the same analysis on two different samples of clusters with more restrictive or more relaxed age constraints. The analysis is presented in Appendix~\ref{Annex_deg}. In particular, we introduce the {\it Secure} sample, which only contains clusters that are almost unaffected by degeneracies and have a narrow age PDF, and thus underestimates the genuine sample of clusters younger than 30~Myr. For this sample, the combined TDGs (N, SW, S) are above the relations by 2.8$\sigma$ for K12, 1.8$\sigma$ for J16 and 1$\sigma$ for C17.
The fact that the CFE of the full sample is above the model relation of K12 by 2.8$\sigma$ confirms that this mismatch is robust against the age selection procedure. However, the CFE of the full sample of TDGs is only 1.8$\sigma$ from the J16 relation and is consistent with the C17 relation within 1$\sigma$. The combination of the CFEs of the TDGs is thus not statistically significantly above these two relations if one considers only this restricted sample.
Finally, we combined bands with different PSFs. We added the F160W band as it provides a good filter combination to reduce degeneracies \citep{Anders04}. However, the coarser spatial resolution of the F160W band might lead to an over-estimation of the photometry in this band \citep[see e.g.][but with aperture photometry]{Bastian14}. This effect is discussed in Sect.~\ref{noIR}.
\subsection{Brightest cluster - SFR relation}
\label{res_Mv}
\begin{figure}
\includegraphics[width=9.5cm]{Fig/Mv_SFR.png}
\caption{Brightest cluster M$_V$-SFR relation for different galaxy samples. The NGC~5291S data point with the lowest SFR shows the value for S* only. The black line shows the fit of the \citet{Larsen02} sample. The dashed green line shows the maximum M$_V$ expected for a given SFR if all the star formation occurs in clusters with a $\frac{dN}{dM} \propto M^{-2}$ power-law \citep{Bastian08}. \label{Mv}}
\end{figure}
\citet{Larsen02} found a positive correlation between the V-band absolute magnitude ($M_\mathrm{V}$) of the brightest cluster and the SFR of the host, which was interpreted as a size-of-sample effect: the higher the SFR, the more clusters form, and thus the more likely high-mass clusters are to be found. The location of the three TDGs and other cluster forming systems in the $M_\mathrm{V}$-SFR plane is shown in Fig.~\ref{Mv}. We see that our three TDGs are located within the intrinsic scatter of the \citet{Larsen02} relation. This suggests that the magnitude of the brightest cluster is a good tracer of the SFR for these systems, similar to what has been generally observed for star forming galaxies.
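The size-of-sample expectation can be illustrated with a hedged sketch. Assuming all star formation over a window $\tau$ goes into clusters following $\frac{dN}{dM} \propto M^{-2}$ (the upper-envelope assumption of the dashed line in the figure), the expected most-massive cluster solves $N(>M_\mathrm{max}) = 1$, i.e. $M_\mathrm{max} = M_\mathrm{tot} / \ln(M_\mathrm{max}/M_\mathrm{min})$ with $M_\mathrm{tot} = \mathrm{SFR}\,\tau$. The SFR, window and magnitude anchor below are illustrative choices, not fitted values.

```python
import math

def expected_max_mass(sfr, duration_yr, m_min=1e2):
    """Expected brightest-cluster mass for a dN/dM ∝ M^-2 CMF,
    solving M_max = M_tot / ln(M_max / m_min) by fixed-point iteration
    (the map is contracting near the root, so this converges fast)."""
    m_tot = sfr * duration_yr
    m_max = m_tot  # starting guess
    for _ in range(50):
        m_max = m_tot / math.log(m_max / m_min)
    return m_max

def mv_of_mass(mass):
    # anchored on the model value quoted later in the text:
    # M_V = -12.4 for a 10 Myr old cluster of 2e5 Msun
    return -12.4 - 2.5 * math.log10(mass / 2e5)

# TDG-like SFR of 0.15 Msun/yr over a 10 Myr window (illustrative)
m_max = expected_max_mass(sfr=0.15, duration_yr=1e7)
mv_max = mv_of_mass(m_max)
```

Because $M_\mathrm{max}$ grows almost linearly with the SFR while $M_\mathrm{V}$ is logarithmic in mass, this simple argument naturally yields a shallow $M_\mathrm{V}$-$\log\mathrm{SFR}$ trend of the kind seen in the figure.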
\subsection{Presence of intermediate age clusters}
\label{res_int}
It is interesting to study whether the peculiar environment of NGC 5291 may allow for the survival of clusters over timescales of $\simeq$100~Myr. \citet{Bournaud07} estimated that the interaction which triggered the formation of the ring happened around 360~Myr ago. As we have seen in Sect.~\ref{sub_deg} and Fig.~\ref{mass_age}, ages cannot be estimated with high precision. In Fig.~\ref{mass_age}, we can see that some cluster candidates with an estimated age between 100 and 2000~Myr have a relatively low r$_{\mathrm{age}}$ for the inferred age, that is, below 30. A broad PDF is expected around these ages, as the stellar spectrum does not change much in this part of the stellar evolution period. Some of these clusters might therefore have formed at the time of formation of the ring and survived for several 100~Myr in this environment.\\
To construct a conservative sample of candidates with intermediate ages we first select clusters with P[50 < age < 2000] > 0.9. We chose an upper limit of 2000~Myr because the age PDFs can be quite extended for this age range (see Fig.~\ref{mass_age}). We note that this selection does not change if we allow for an extended star formation history with an exponential decrease timescale of 1 or 5~Myr, compared with our fiducial value of 0.1~Myr, chosen to model a quasi-instantaneous burst.\\
Moreover, we ensure that the photometry of these clusters is not consistent with them being old metal-poor or metal-rich GCs from the GC system of NGC~5291. For this, we run CIGALE with a broader metallicity prior (Z can be 0.0004, 0.004 or 0.008, instead of only 0.008), and we rule out clusters for which P[ age > 2500 ] > 0.1. We end up with seven clusters. Their ages and masses are shown in Fig.~\ref{int}. Their mass range is between $2\times10^{4}$ and $2\times10^{5}~\Msun$. Their location is shown with purple squares in Fig.~\ref{map_gamma}. Three are located close to the TDG N, one close to TDG SW and three close to TDG S. \\
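The probabilistic age cuts above can be sketched as follows. This is a toy illustration with an invented, coarsely tabulated age PDF; the actual PDFs come from the CIGALE fits, but the selection logic (probability mass inside the window, then a threshold) is the same.

```python
# Sketch of the intermediate-age selection: compute the probability
# mass of a cluster's tabulated age PDF inside the [50, 2000] Myr
# window and apply the P > 0.9 cut.  The toy PDF below is invented.

def prob_in_window(ages_myr, pdf, lo=50.0, hi=2000.0):
    """Fraction of the (not necessarily normalised) PDF with lo < age < hi."""
    total = sum(pdf)
    inside = sum(p for a, p in zip(ages_myr, pdf) if lo < a < hi)
    return inside / total

ages = [10, 30, 100, 300, 1000, 3000, 10000]   # Myr, coarse grid
pdf  = [0.01, 0.02, 0.30, 0.40, 0.22, 0.04, 0.01]
p_int = prob_in_window(ages, pdf)
is_intermediate = p_int > 0.9  # passes the conservative cut
```

The same helper with `lo=2500, hi=1e5` and the broader metallicity prior would implement the complementary P[age > 2500] > 0.1 rejection of old GC interlopers.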
One should note that clusters with similar masses and ages have already been seen in a number of dwarf galaxies \citep[see e.g.][]{Larsen04, deGrijs13}. However, their presence in NGC~5291 shows that massive star clusters can survive the very turbulent and gaseous environment of a tidal dwarf galaxy from its formation up to hundreds of millions of years.
\begin{figure}
\includegraphics[width=9.5cm]{Fig/intermediates.png}
\caption{Masses and ages of our conservative sample of intermediate age star clusters. The x-axis error-bar shows the width between the first and last decile of the age PDF. The y-axis error-bar shows the standard deviation for the mass estimate. The vertical black line shows the time of the formation of the ring structure, $\sim$360~Myr, as determined by \citet{Bournaud07}. \label{int}}
\end{figure}
\section{Discussion}
\label{Discussion}
\subsection{Effect of including the F160W band}
\label{noIR}
\begin{table}[]
\centering
\caption{The second column shows the values of the CFE for the three TDGs, without the F160W band. The last three columns show the significance, in standard deviations, of the offset of the data with respect to the models. \label{table::CFE_noIR}}
\begin{tabular}{l c c c c c} \hline \hline
Galaxy & CFE [$\%$] & K12 & J16 & C17\\ \hline
TDG N & 37$^{+15}_{-15}$ & 1.5 & 0.8 & 0.7 \\
TDG SW & 26$^{+11}_{-10}$ & 1.4 & 0.8 & 0.1 \\
TDG S & 33$^{+14}_{-14}$ & 1.6 & 1.2 & 0.5 \\
including S* & 46$^{+25}_{-25}$ & 1.2 & 0.8 & 0.8 \\
TDG N+SW+S & - & 3.5 & 2.6 & 1.9 \\
TDG N+SW+S* & - & 3.3 & 2.4 & 2.0 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=9cm]{Fig/gamma_noIR.png}
\caption{Same legend as Fig.~\ref{gamma}. The analysis was done without considering the F160W band. \label{CFE_noIR}}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{Fig/IR_mass_change.png}
\caption{Comparison between the mass estimation between the analysis including or not the F160W band. The blue points show clusters that are included as {\it younger than 30~Myrs} only in the analysis without F160W. The red points show clusters that are included in this category only when the F160W band is included. The thick line shows the identity function. The two dashed lines show the mass completeness limit. \label{mass_change}}
\end{figure}
In Section~\ref{res_gamma} we saw that the TDGs have high CFEs (above 45\% for TDG N and TDG S), with an average value of 42\%.
As pointed out in Section~\ref{res_gamma}, the estimated masses could have been affected by an overestimation of the flux in the F160W band. We included the F160W filter to reduce the degeneracies on the estimation of the age and mass of the clusters \citep{Anders04}. However, this band has a coarser spatial resolution than the four other ones. The F160W band PSF has a FWHM of about 0.18\arcsec, while the four other bands have PSF FWHMs of 0.06\arcsec. Some F160W flux measurements might have been contaminated by regions that are very close to the clusters and that are not included in the other bands, which might affect the derivation of the physical quantities \citep[e.g.][in the case of aperture photometry]{Bastian14}.
To test the effect of adding the F160W band, we removed it from the analysis for the measurement of the CFE. The CFEs we obtain are summarised in Table~\ref{table::CFE_noIR}. They are lower than the ones obtained with the five bands, by typically 22\%.
In Fig.~\ref{mass_change} we see the change in the mass estimation of the clusters that are considered as {\it younger than 30~Myr} and located in the TDGs. Of the clusters that we determined to be younger than 30~Myr, a few are estimated to be older than this limit when ignoring the F160W filter (5 of them above the completeness limit), while others are classified as younger only when the F160W band is included (8 above the limit). We also see that, while the estimated masses are very similar, there is a trend towards lower masses if one does not consider the F160W band.
In Fig.~\ref{CFE_noIR} we see that even without the F160W band the CFEs of the TDGs are systematically above the three relations from the literature. The computation of the significance of the discrepancies is given in Table~\ref{table::CFE_noIR}. The full sample of TDGs (N,SW,S) is respectively 3.5$\sigma$ and 2.6$\sigma$ above the K12 and J16 relations, that is, with more than 99.5\% certainty. However, the offset with respect to the C17 relation is only 1.9$\sigma$.
Thus, the exclusion of the F160W band reduces the discrepancy between the data and the relations from the literature, which becomes statistically insignificant (< 2.5$\sigma$) only for C17. This suggests that the discrepancy with the three relations from the literature is robust against the possible overestimation of the F160W photometry due to its coarser spatial resolution.
\subsection{What is the origin of the high cluster formation efficiency?}
\label{disc_gamma}
In Section~\ref{res_gamma} we saw that the TDGs in our sample have high CFEs, above 30\%, similar to what is observed for BCDGs. While it has been argued on theoretical grounds that star cluster formation should be more efficient in low-metallicity environments, all other factors being equal \citep{Peebles84, Kimm16}, these metal-rich TDGs reach a CFE similar to that of metal-poor BCDGs. \\
It is interesting to note that all galaxies from the literature which have a CFE above $20\%$ are involved in an interaction of some sort, with the exception of the center of M~83 in the sample of \citet{Goddard10}. It is also interesting to note that the late stages of mergers, while leading to similar $\Sigma_\mathrm{SFR}$ as the early stages, trigger the formation of only a few clusters \citep{Renaud15}, and thus have a much lower CFE. Their interpretation is that cluster formation is driven by the onset of compressive turbulence, which arises mainly during the early stages of a galaxy interaction.\\
Although the TDGs in NGC~5291 were not formed in bona fide merging galaxies, they are located in a gas-dominated environment and are probably not fully relaxed yet. It is thus possible that their dynamical state, in terms of compressive turbulence, is similar to that of interacting galaxies, possibly because of accretion from the gas ring \citep{Fensch16}.\\
\subsection{Evolution of the star cluster system}
\label{disc_int}
We saw in Section~\ref{res_int} that some clusters could survive their birth environment for several $100$~Myr. The fact that we could find some in the gaseous ring shows that we can expect the survival of massive star clusters from the formation of the tidal dwarf galaxy to at least several $100$~Myr.\\
One may now wonder how this specific star cluster system will evolve in the future. In the following, we consider that our clusters survived after gas expulsion and we do not consider their \emph{infant mortality} rate, which is due to the internal feedback expelling the gas and destabilizing the cluster, and which has a timescale of 10-40~Myr (see Sect.~\ref{young}). We model the mass loss due to cluster evaporation during its relaxation as $\Delta$M(t) = $\mu_{\mathrm{ev}} t$, with $\mu_{\mathrm{ev}}$ the evaporation rate \citep{Henon61, Fall01, Jordan07}, given by:
\begin{equation}
\mu_{\mathrm{ev}} = 345~\Msun~\mathrm{Gyr}^{-1} \Big( \frac{\rho}{\Msun~\mathrm{pc}^{-3}}\Big) ^{1/2}
\end{equation}
where $\rho$ = 3M / (8$\pi$R$_{\mathrm{eff}}^{3}$) is the half-mass density of the cluster, with M and R$_{\mathrm{eff}}$ its mass and half-mass radius. This is a likely lower limit to the genuine evaporation rate of stellar clusters, as it does not include the effects of stellar evolution, gas cloud encounters and tidal effects from the host TDG.\\
The typical density of YMCs is $10^3~\Msun$~pc$^{-3}$ \citep[see review by][]{Portegies10}. For such a density, we obtain $\mu_{\mathrm{ev}} \sim 10^4~\Msun~\mathrm{Gyr}^{-1} $. Under this hypothesis, one may conclude that most of the stellar clusters of our system will be destroyed in a few Gyr at most, at least by internal relaxation. \\
Now consider a cluster with a mass of $2\times10^4~\Msun$. Reaching the typical density of YMCs implies a typical half-mass radius of 1.3~pc. As one pixel corresponds to a physical size of 12~pc (36~pc for the F160W band) at the distance of NGC~5291, we cannot constrain the size of our clusters. Most of our sources are well fitted by a PSF, which implies a half-mass radius securely below 6~pc. The few sources which are not well fitted by a PSF (see Sect.~\ref{Technics}) have half-light radii which can reach up to 2.5~pixels. However, they could also be blended detections or extended nebular emission from the ionized outskirts of young clusters. For a mass of $2\times10^4~\Msun$ and a half-mass radius of 6~pc, we obtain $\mu_{\mathrm{ev}} \sim 10^3 ~\Msun~\mathrm{Gyr}^{-1}$. The timescale for destruction -- considering only the effects of relaxation -- is therefore a Hubble time for $2\times10^4~\Msun$ clusters. Under this hypothesis it is possible that the most massive clusters in our sample (reaching typically $2\times10^5~\Msun$) can survive evaporation from internal processes for several Gyr. However, disruption from tidal effects will be faster \citep{Gieles06}.\\
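The order-of-magnitude estimates above follow directly from the evaporation-rate equation and can be reproduced with a short sketch, using only the formula and the two mass/radius cases discussed in the text:

```python
import math

# Numerical check of the evaporation-rate estimates:
# mu_ev = 345 Msun/Gyr * sqrt(rho / (Msun pc^-3)),
# with half-mass density rho = 3 M / (8 pi R_eff^3).

def evaporation_rate(mass_msun, r_eff_pc):
    rho = 3.0 * mass_msun / (8.0 * math.pi * r_eff_pc**3)  # Msun / pc^3
    return 345.0 * math.sqrt(rho)                          # Msun / Gyr

def dissolution_time_gyr(mass_msun, r_eff_pc):
    # linear mass loss Delta M = mu_ev * t  =>  t = M / mu_ev
    return mass_msun / evaporation_rate(mass_msun, r_eff_pc)

# a 2e4 Msun cluster at the typical YMC density (R_eff ~ 1.3 pc,
# i.e. rho ~ 1e3 Msun/pc^3): mu_ev ~ 1e4 Msun/Gyr
mu_dense = evaporation_rate(2e4, 1.3)

# the same mass at the PSF upper limit R_eff = 6 pc:
# mu_ev ~ 1e3 Msun/Gyr, i.e. survival for roughly a Hubble time
mu_loose = evaporation_rate(2e4, 6.0)
t_loose = dissolution_time_gyr(2e4, 6.0)
```

The dense case dissolves in $\sim$2~Gyr, the loose case in $\sim$17~Gyr, which is the dichotomy discussed in the text.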
Mass loss from stellar evolution (40\% to 60\% of the mass over 12~Gyr; \citealt{Kruijssen08, Sippel12}), from gas cloud encounters and from tidal harassment still needs to be taken into account. The study of the latter two mass-loss processes goes beyond the scope of the present study. Given the very gaseous environment of these TDGs and the high gas turbulence, one may expect these two processes to be more efficient than in isolated and kinematically relaxed systems. At the same time, star clusters in such a gas-rich environment may continue to accrete gas from their surroundings \citep{Pflamm09, Naiman11, Li16}.\\
Thus, if the YMCs of these TDGs are similar to YMCs observed in other environments in terms of density, we do not expect these dwarfs to form a system of massive star clusters which could last for a Hubble time. This conclusion is mitigated if we allow for a lower star density, which remains empirically unconstrained given our spatial resolution.
\subsection{Evolution of the TDGs}
The $\Lambda$CDM paradigm predicts a different dark matter (DM) content for two classes of dwarf galaxies: TDGs, formed during interactions, which should be devoid of DM, and \emph{normal} dwarf galaxies, such as the dwarf ellipticals (dE), dwarf irregulars or BCDGs, formed inside a DM halo. This difference in DM content would result in different kinematics and provides us with a new test of the $\Lambda$CDM paradigm \citep[see e.g.][]{Kroupa10}. Even though the absence of dark matter in TDGs is predicted by numerical simulations \citep{Bournaud06,Wetzstein07, Bournaud08}, it is hard to prove as these are young and turbulent systems. Under the assumption of dynamical equilibrium, suggested by simulations to be reached in less than one orbital time \citep{Bournaud06, Bournaud07}, the HI kinematics are consistent with a purely baryonic content \citep{Lelli15}. One needs to investigate the kinematics of old TDGs, which are kinematically relaxed, to confirm such a purely baryonic content, which requires distinguishing TDGs from dEs.
TDGs are known to be outliers of the luminosity-metallicity relation \citep[e.g.][]{Duc00, Weilbacher03}. However, for old, gas-poor TDGs, obtaining the metallicity from the stellar population might still be very challenging with current observing facilities. Moreover, as the metallicity of the host is a decreasing function of redshift, one may argue that the deviation from the magnitude-metallicity relation will decrease, making it harder to separate old TDGs from equally aged dEs. TDGs are also known to be outliers from the size-mass relation for dwarf galaxies, having unusually large effective radii for their mass \citep{Duc14}.\\
A final means to distinguish these two categories could be to use their stellar cluster content \citep{Dabringhausen13}. dEs are known to host a significant number of GCs compared to their mass, with specific frequencies reaching up to 100 \citep{Peng08, Georgiev10}. Our analysis showed that even quite massive star clusters may form in TDGs. Some may be able to survive for several Gyr and thus be visible in rather old TDGs, but they will likely evaporate within a Hubble time. This is because their SFR is too low to form clusters that are massive enough to survive evaporation for several Gyr. To survive for a Hubble time, the minimum mass would be around the turn-over value of the GC mass function (GCMF), $2\times 10^5~\Msun$ \citep{Fall01, Jordan07}. \\
Moreover, as a TDG potential well does not trap significant amounts of DM or old stars from the host, one may argue that the capture of GCs from the host, which are kinematically coupled to either the DM halo or the bulge component \citep[see review by][]{Brodie06} is also unlikely. This will be verified for our system in a future paper which will focus on the old cluster population, as described in Section~\ref{Obs}. The accretion of old GCs onto TDGs also needs to be investigated by means of numerical simulations to understand the effect of varying the orbital parameters. \\
However the conditions at higher redshift are most likely different, as the host galaxy is likely to have a more substantial gas component \citep[see e.g.][]{Combes13}. Among the rare literature on TDG formation at high redshift, simulations by \citet{Wetzstein07} showed that more gas-rich disk galaxies are more likely to form TDGs, and \citet{Elmegreen07} found five young TDG candidates at $z = 0.15-0.7$, which have higher stellar masses than typical local TDGs (up to $5\times10^9~\Msun$). As claimed by the latter, the higher velocity dispersion of both the gaseous and stellar components of higher redshift galaxies could lead to Jeans masses of up to $10^{10}~\Msun$ in tidal tails.
Thus, one may argue that at a given higher redshift TDGs will have higher gas masses and higher SFRs. If star cluster formation at this cosmic epoch follows the empirical relation between the SFR of a galaxy and the magnitude of its brightest star cluster, given that the stellar models we used predict a M$_\mathrm{V} = -12.4$~mag for a 10~Myr old cluster of $2\times10^5~\Msun$, then a SFR of 5-10$\Msun$ yr$^{-1}$ would be sufficient to form some clusters more massive than the peak of the GCMF, which would be able to survive cluster dissolution for a Hubble time. \\
Although our analysis shows that TDGs formed under the current conditions are not likely to keep a GC system, more investigation is needed to understand if TDGs formed at higher redshifts would be able to harbour a GC system until the present epoch, and if one could distinguish them from other dwarfs using this criterion. Recently, two UDG candidates, DF2 and DF4, were found sharing several of the properties expected for TDGs: a putative lack of DM, a large effective radius and the proximity of a massive galaxy \citep{vanDokkum18a}, which led to speculation of a tidal origin. Note that the DM content of these galaxies is still the subject of intense debate in the community \citep[see e.g.][]{Martin18,Trujillo18, Blakeslee18, Emsellem19, Danieli19}. However, one unique feature is their large number of massive GCs \citep{vanDokkum18b}. \citet{Fensch19} found that the metallicity of the stellar body of DF2 and of its GCs could be consistent with DF2 being an old TDG. However, massive TDGs like those around NGC~5291 did not form such massive clusters.
\subsection{Link to the formation of GCs in high-redshift galaxies}
A prevailing theory for the formation of the metal-rich population of GCs around present-day massive galaxies is that they may have formed in the star-forming disk of the host galaxy at high redshift \citep{Shapiro10, Kruijssen12}, when their morphology was dominated by 5 to 10 UV-bright giant clumps (mass $\sim 10^{7-9}~\Msun$, radius $\sim~1-3$~kpc, \citealt{Cowie96, Elmegreen09}). A resolved study of clustered star formation in these clumps is unfortunately not possible with current instrumentation, except in some fortuitous cases of strong gravitational lensing \citep{Cava18}. Thus local analogues, such as the nearby BCDGs \citep{Elmegreen12}, are often used as laboratories to investigate the possible ISM conditions and star cluster formation. In particular, they have been shown to be very efficient at forming YMCs \citep[][and Section~\ref{res_gamma}]{Ostlin03, Adamo10, Lagos11}. BCDGs are characterized by high gas fractions and turbulence \citep[see e.g.][]{Lelli14}, similar to what is expected for higher redshift galaxies. However, they usually have low to very low metallicities (typically 0.2~Z$_\odot$, \citealt{Zhao13}), while the giant clumps at high redshift already reach moderate metallicities, between 1/3 and 1/2 solar \citep{Erb06, Cresci10, Zanella15}, which may have a strong impact on gas fragmentation \citep{Krumholz12}. Moreover, most of their stellar mass resides in an old stellar component \citep{Loose86, Papaderos96}, making these systems dynamically old. \\
TDGs are gas-rich, dynamically young and have moderate metallicity, and thus should be better analogues to the clumps of high-redshift galaxies. The high to very high CFEs (up to 50$\%$) observed in the TDGs presented in this study suggest that the physical conditions in high-redshift galaxies could be very favorable to the formation of star clusters. Moreover, if the empirical relation between the SFR of a galaxy and the magnitude of its brightest star cluster holds at these redshifts, since giant clumps have SFRs of about $1-10~\Msun$ yr$^{-1}$ \citep{Guo12}, one may expect them to produce star clusters more massive than $2\times10^5~\Msun$, the likely threshold mass which would allow them to survive dissolution over a Hubble time (see previous subsection).\\
It should be noted that the molecular surface gas density of TDGs is much lower than that of high-redshift galaxies, by 2 orders of magnitude \citep{Lisenfeld16}, and their depletion timescale is higher by a factor of 10 (2~Gyr for TDGs, \citealt{Braine01}, 0.2~Gyr for $z \simeq 2$ galaxies, \citealt{Combes13}). Moreover, the tidal forces from the host are likely different, and are important for the formation of YMCs in colliding galaxies \citep{Renaud15}, as well as for their survival \citep{Baumgardt03, Renaud11}. Numerical simulation work is therefore still needed to understand cluster formation in the giant clumps of high-redshift galaxies.\\
\section{Conclusion}
\label{Conclusion}
We investigated star cluster formation and evolution in three tidal dwarf galaxies, whose physical properties differ from those of starbursting dwarfs. In particular, they are gas-rich, highly turbulent and have a gas metallicity already enriched up to half-solar. \\
The three TDGs are located in a huge collisional ring around NGC~5291. We observed this system with the \emph{HST} using five broad bands from the near-UV to the near-IR. The photometry was extracted using PSF and S\'ersic-fitting, and we compared the obtained SED with stellar evolution models using the CIGALE code. \\
We find that star clusters form in TDGs, with masses of up to $10^5~\Msun$ and a mass distribution similar to those observed in other star cluster forming systems. After taking into account the effect of the extinction-age degeneracies, we studied the star cluster formation efficiency in the TDGs. We showed that the three TDGs have high CFEs, above 30\%, with an average of 42\%. This is comparable to BCDGs, but with a lower SFR surface density, a higher metallicity and without being bona fide merging systems. The full sample of TDGs is located 2.5 to 3.8~$\sigma$ above the relations from the literature. There may be uncertainties not yet recognised which still allow a constant CFE at this time \citep[see e.g.][]{Chandar17}, and more data are needed for similar, special types of galaxies. Nevertheless, our results suggest that such a constant CFE relation would have a large scatter, and that there would be structure within this scatter, correlated with galaxy type and/or environment.
We next probed the existence of intermediate age clusters, which could have formed during the early stages of the formation of the gaseous ring structure and may have survived for several 100~Myr. The fact that we could find some of them shows that cluster formation started early and that we can expect the survival of young massive (above $10^4~\Msun$) star clusters from the formation of their host dwarf to several $100$~Myr. However, if they have a density similar to what is observed for YMCs in other known environments (BCDGs, mergers), they might be present for a few Gyr but destroyed within a Hubble time by relaxation-driven dissolution. If TDGs formed at high redshift have a higher SFR, we may expect them to form more massive clusters that would be able to survive cluster dissolution for a Hubble time.
\begin{acknowledgements}
The authors thank the referee for their very useful comments which helped improve the paper.
MB acknowledges support of FONDECYT regular grant 1170618.
Support for Program number HST-GO-14727 was provided by NASA through a
grant from the Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Incorporated, under NASA
contract NAS5-26555. DME acknowledges support from grant HST-GO-14727.002-A. B.G.E. acknowledges support from grant HST-GO-14727.004-A. EB acknowledges support from the UK Science and Technology Facilities Council [grant number ST/M001008/1]. FR acknowledges support from the Knut and Alice Wallenberg Foundation.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
A wide variety of systems exhibit critical phenomena. Near a critical
point, some quantities obey scaling laws. As an example, consider
\begin{equation}
A(t, h) = t^x\Psi(ht^{-y}),
\eqlabel{s_f}
\end{equation}
where $t$ and $h$ are variables describing a system, and the critical
point is located at $t=h=0$. The scaling law is derived by the
renormalization group
argument\cite{goldenfeld92:_lectur_phase_trans_and_renor_group,
*cardy96:_scalin_and_renor_in_statis_physic}. The scaling exponents
$x$ and $y$ are called \textit{critical exponents}. The universality
of critical phenomena means that different systems share the same set
of critical exponents. Thus, this set defines a \textit{universality
class} of critical phenomena. In addition, the \textit{scaling
function} $\Psi$ also exhibits universality. For example, Mangazeev
et al. numerically obtained scaling functions of the Ising models on
square and triangular lattices\cite{2010PhRvE..81f0103M}. Since the Ising
models on both lattices belong to the same universality class, the two
scaling functions coincide once nonuniversal metric factors are taken
into account.
An important issue in the study of critical phenomena is the
determination of the universality class. The object of \textit{scaling
analysis} is to determine the universality class from data. We assume the scaling law
of \eqref{s_f} for data. If we plot data with rescaled coordinates as
$(X_i, Y_i) \equiv (h_it_i^{-y}, t_i^{-x}A(t_i, h_i))$, all points
must collapse on a scaling function as $Y_i=\Psi(X_i) $. To determine
critical exponents, we need a mathematical method to estimate how well
all rescaled points collapse on a function for a given parameter set. In
other words, we need to estimate the goodness of data
collapse. Unfortunately, we do not know the form of $\Psi$
\textit{a priori}. The conventional method for the scaling analysis is a
least-square method that assumes a polynomial form. However, it may be
difficult to choose the degree of the polynomial for given data, because
overfitting problems arise as the degree
increases. To use a polynomial of low degree, we usually limit the data
to a narrow region near a critical point. However, this requires data of
high accuracy. In addition, it may be difficult to obtain a universal
scaling function in a wide critical region. Thus, the scaling analysis
by the least-square method must be carefully done as shown in the
reference \cite{slevin99:_correc_to_scalin_at_ander_trans}.
In this paper, we propose a method of statistical inference in the
scaling analysis of critical phenomena. The method is based on
Bayesian statistics. Bayesian statistics has been widely used for
data analysis \cite{bishop06:_patter_recog_and_machin_learn}. However,
to the best of our knowledge, it has not been applied to the scaling
analysis of critical phenomena. In particular, since our method
assumes only the smoothness of a scaling function, it can be applied
to data for which the least-square method cannot be used.
In \secref{method}, we first introduce a Bayesian framework in the
scaling analysis of critical phenomena. Next, we propose a Bayesian
inference using a Gaussian process (GP) in this framework. In
\secref{demo}, we demonstrate this method for critical phenomena of
the Ising models on square and triangular lattices. Finally, we
give the conclusions in \secref{conclusion}.
\section{Bayesian framework and Bayesian inference in scaling
analysis}
\label{sec:method}
By using two functions $X$ and $Y$ that calculate rescaled
coordinates, the scaling law of an observable $A$ can be rewritten as
\begin{equation}
\label{eq:law}
Y(A(\vec{v}), \vec{v}, \vec{\theta_p}) = \Psi(X(\vec{v}, \vec{\theta_p})),
\end{equation}
where $\vec{v}$ denotes the variables describing a system and
$\vec{\theta_p}$ denotes the additional parameters as critical
exponents. Our purpose is to infer $\vec{\theta_p}$ so that data
$A(\vec{v_i}), (1 \le i \le N)$ obey the scaling law of \eqref{law}.
In the following, for convenience, we abbreviate $X(\vec{v_i},
\vec{\theta_p})$ and $Y(A(\vec{v_i}), \vec{v_i}, \vec{\theta_p})$ to
$X_i$ and $Y_i$, respectively.
When the statistical error of $Y_i$ is $E_i$, the distribution
function of $\{Y_i\}$, $P(\vec{Y}|\Psi, \vec{\theta_p})$, is a
multivariate Gaussian distribution with mean vector $\vec{\Psi}$ and
covariance matrix $\mathcal{E}$:
\begin{equation}
P(\vec{Y}|\Psi,\vec{\theta_p}) \equiv
\mathcal{N}(\vec{Y}|\vec{\Psi}, \mathcal{E}),
\eqlabel{pofp}
\end{equation}
where $(\vec{Y})_i \equiv Y_i$, $(\vec{\Psi})_i \equiv \Psi(X_i)$,
$(\mathcal{E})_{ij}\equiv E_i^2\delta_{ij} $, and
$$
\mathcal{N}(\vec{y}|\vec{\mu}, \Sigma) \equiv
\frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left(-\frac12
(\vec{y}-\vec{\mu})^{t} \Sigma^{-1} (\vec{y}-\vec{\mu}) \right) .
$$
Next, we introduce a statistical model for a scaling function as
$P(\Psi | \vec{\theta_h})$. Here, $\vec{\theta_h}$ denotes the
control parameters and is referred to as \textit{hyper parameters}.
Then, the conditional probability of $\vec{Y}$ for $\vec{\theta_p}$
and $\vec{\theta_h}$ is formally defined as
\begin{eqnarray}
P(\vec{Y} | \vec{\theta_p}, \vec{\theta_h})
\equiv \int P(\vec{Y}|\Psi, \vec{\theta_p})
P(\Psi | \vec{\theta_h}) d\Psi.
\eqlabel{conditional}
\end{eqnarray}
According to Bayes' theorem, a conditional probability of
$\vec{\theta_p}$ and $\vec{\theta_h}$ for $\vec{Y}$ can be written as
\begin{equation}
P(\vec{\theta_p},\vec{\theta_h} | \vec{Y}) =
{P(\vec{Y} | \vec{\theta_p}, \vec{\theta_h}) P(\vec{\theta_p},
\vec{\theta_h})}/ {P(\vec{Y})},
\eqlabel{bayes-th}
\end{equation}
where $P(\vec{\theta_p}, \vec{\theta_h})$ and $P(\vec{Y})$ denote the
\textit{prior distributions} of $\vec{\theta_p}$ and $\vec{\theta_h}$
and that of $\vec{Y}$, respectively. In Bayesian statistics,
$P(\vec{\theta_p}, \vec{\theta_h} | \vec{Y})$ is called a
\textit{posterior distribution} of $\vec{\theta_p}$ and
$\vec{\theta_h}$. Using \eqref{bayes-th}, a posterior probability of
$\vec{\theta_p}$ and $\vec{\theta_h}$ for $\vec{Y}$ can be
estimated. This is a Bayesian framework for the scaling analysis of
critical phenomena.
In Bayesian statistics, the conventional method of inferring
parameters is the maximum a posteriori (MAP) estimate. In this paper,
for simplicity, we assume that all prior distributions are
uniform. Then,
\begin{equation}
P(\vec{\theta_p},\vec{\theta_h} | \vec{Y})
\propto P(\vec{Y} | \vec{\theta_p}, \vec{\theta_h}).
\eqlabel{bayes_th}
\end{equation}
Therefore, the MAP estimate is equal to a maximum
\textit{likelihood} (ML) estimate with a likelihood function of
$\vec{\theta_p}$ and $\vec{\theta_h}$, defined as
\begin{equation}
\eqlabel{likelihood}
\mathcal{L}(\vec{\theta_p}, \vec{\theta_h}) =
P(\vec{Y}| \vec{\theta_p}, \vec{\theta_h}).
\end{equation}
In addition, the confidence intervals of the parameters can be
estimated through \eqref{bayes_th}.
In this framework, the statistical model of a scaling function plays
an important role. We start from a polynomial scaling function as
$\Psi(X) \equiv \sum_{k} c_k X^k$. If a coefficient $c_k$ is
distributed by a probability density $P(c_k|\vec{\theta_h})$, then
$P(\Psi|\vec{\theta_h})d\Psi \equiv \prod_k
P(c_k|\vec{\theta_h})dc_k$. We first consider the strong constraint
for $c_k$ as $ P(c_k|\vec{\theta_h}) \equiv \delta(c_k-m_k)$, where
$m_k$ is a hyper parameter. Then, $P(\vec{Y}|\vec{\theta_p},
\vec{\theta_h})$ is a multivariate Gaussian distribution with mean
vector $\vec{\mu}$ and covariance matrix $\Sigma$:
\begin{equation}
\label{eq:mean-vector}
(\vec{\mu})_i \equiv \sum_k m_k {X_i}^k, \quad \Sigma \equiv \mathcal{E}.
\end{equation}
Thus, the ML estimate in \eqref{likelihood} is equal to the
least-square method. We soften this constraint as
$P(c_k|\vec{\theta_h})\equiv\mathcal{N}(c_k|m_k, \sigma_k^2)$, where
$m_k$ and $\sigma_k$ are hyper parameters. Then,
$P(\vec{Y}|\vec{\theta_p}, \vec{\theta_h})$ is again a multivariate
Gaussian distribution, and the covariance matrix changes as follows:
\begin{equation}
\Sigma \equiv \mathcal{E} + \Sigma', \quad (\Sigma')_{ij} \equiv \sum_k
(X_i X_j)^k \sigma_k^2.
\eqlabel{sigma-poly}
\end{equation}
This includes the case of a strong constraint such as $\sigma_k^2 =
0$.
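As an illustration, the covariance matrix above can be assembled directly. The following fragment is an illustrative sketch (not part of the original analysis; the function name is ours) that builds the matrix for a polynomial model with hyper parameters $\sigma_k$:

```python
import numpy as np

def poly_covariance(X, errors, sigmas):
    """Covariance Sigma = E + Sigma', with (Sigma')_ij = sum_k (X_i X_j)^k sigma_k^2,
    as in the soft-constraint covariance above."""
    X = np.asarray(X, dtype=float)
    P = X[:, None] * X[None, :]                      # (X_i X_j) for all pairs
    Sigma_prime = sum(s**2 * P**k for k, s in enumerate(sigmas))
    return Sigma_prime + np.diag(np.asarray(errors, dtype=float) ** 2)

# Two data points, statistical errors E_i = 0.1, and (sigma_0, sigma_1) = (1, 2):
S = poly_covariance([1.0, 2.0], [0.1, 0.1], [1.0, 2.0])
```

Setting all $\sigma_k$ to zero recovers the strong constraint, i.e., the least-square case.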
To calculate a MAP estimate, a log-likelihood function is used. If a
posterior distribution is described by a multivariate Gaussian
function as $P(\vec{\theta_p}, \vec{\theta_h} | \vec{Y}) \propto
\mathcal{N}(\vec{Y}|\vec{\mu},\Sigma)$, the log-likelihood function
can be written as
\begin{equation}
\log \mathcal{L}(\vec{\theta_p}, \vec{\theta_h}) \equiv
-\frac{1}{2} \log\left|2\pi\Sigma\right|
-\frac12 (\vec{Y}-\vec{\mu})^t \Sigma^{-1}(\vec{Y}-\vec{\mu}).
\eqlabel{BayesGP0}
\end{equation}
Although the likelihood function is nonlinear in parameters
$\vec{\theta_p}$ and $\vec{\theta_h}$, a multidimensional maximization
method may be applied to calculate a MAP estimate. Under a strong
constraint such as $\sigma_k^2 = 0$, the Levenberg-Marquardt algorithm
is efficient. Under a weak constraint such as $\sigma_k^2 > 0$, we may
use an efficient maximization algorithm such as the Fletcher-Reeves
conjugate gradient algorithm. In such efficient algorithms, we
sometimes need the derivative of \eqref{BayesGP0} for a parameter
$\theta$. Then, we can use the following formula:
\begin{eqnarray}
\frac{\partial \log \mathcal{L}(\vec{\theta_p}, \vec{\theta_h})}{
\partial \theta} &=& -\frac{1}{2}\mathbf{Tr}\left(\Sigma^{-1}
\frac{\partial \Sigma}{\partial \theta}\right)\nonumber\\
&-& (\vec{Y}-\vec{\mu})^t \Sigma^{-1}\frac{\partial
(\vec{Y}-\vec{\mu})}{\partial \theta}
\nonumber\\
&+& \frac{1}{2}(\vec{Y}-\vec{\mu})^t \Sigma^{-1}
\frac{\partial\Sigma}{\partial \theta} \Sigma^{-1}
(\vec{Y}-\vec{\mu}).
\label{eq:derivative}
\end{eqnarray}
However, because the inverse of a covariance matrix must be computed,
the computational cost of an iteration is $O(N^3)$, whereas it is
$O(N^2)$ for the least-square method. Fortunately, using a
high-performance numerical library for linear algebra, the code is easy
to write and the calculation remains efficient for several hundred data
points. Another method is based on Monte Carlo (MC) sampling. In
particular, MC sampling may be useful for estimating the
confidence intervals of parameters.
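To make the $O(N^3)$ step concrete, the Gaussian log-likelihood above can be evaluated with a Cholesky factorization. The fragment below is an illustrative sketch (our own code and names, not part of the original analysis):

```python
import numpy as np

def log_likelihood(Y, mu, Sigma):
    """Gaussian log-likelihood of the data, as in the equation above.

    A Cholesky factorization Sigma = L L^t yields both log|Sigma| and
    Sigma^{-1}(Y - mu); this factorization is the O(N^3) step noted in the text.
    """
    r = np.asarray(Y, dtype=float) - np.asarray(mu, dtype=float)
    L = np.linalg.cholesky(Sigma)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))   # Sigma^{-1} r
    logdet = 2.0 * np.sum(np.log(np.diag(L)))             # log|Sigma|
    return -0.5 * (len(r) * np.log(2.0 * np.pi) + logdet + r @ alpha)

# Sanity check on two points with a diagonal covariance (errors E_i = 0.2):
ll = log_likelihood([0.1, -0.2], [0.0, 0.0], np.diag([0.04, 0.04]))
```

The parameters $\vec{\theta_p}$ and $\vec{\theta_h}$ enter through $\vec{\mu}$ and $\Sigma$, so a maximization routine simply calls such a function repeatedly.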
We demonstrate the MAP estimate based on \eqref{BayesGP0} and
\eqref{sigma-poly}. \figref{test-sub1} shows the data points rescaled
by a MAP estimate. Here, we assume that a scaling function is
linear. To show the flexibility of Bayesian inference, we fix
$m_0=m_1=0$. Thus, $\sigma_0$ and $\sigma_1$ are the only free
parameters. We artificially generate mock data so that they obey a
scaling law:
\begin{equation}
\label{eq:fss}
A(T,L)=L^{-\beta/\nu}\Psi((T-T_c)L^{1/\nu}),
\end{equation}
where $T$ and $L$ denote the temperature and linear dimension of a
system, respectively. This is a well-known scaling law for
finite-size systems. In \figref{test-sub1}, we set $T_c=\beta/\nu=1,
1/\nu=2 $ and $\Psi(X)=2+X$. Then,
\begin{equation}
\eqlabel{mock}
A_i = \frac{2}{L_i} + (T_i-1)L_i + r_i / 50,
\end{equation}
where $r_i$ is Gaussian noise. These mock data are shown in the
inset of the left panel of \figref{test-sub1}. The right panel of
\figref{test-sub1} shows the maximization of a likelihood, when we
start from $T_c=1/\nu=\beta/\nu=0$ and $\sigma_0=\sigma_1=2$. The
results for $T_c, \beta/\nu$, and $1/\nu$ are $1.00745$, $0.999008$,
and $2.00638$, respectively. They are close to the correct values.
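The mock-data construction and its rescaling can be sketched as follows (illustrative code with arbitrary system sizes and random seed; not the script used for the figure):

```python
import numpy as np

rng = np.random.default_rng(1)
T_c, beta_over_nu, inv_nu = 1.0, 1.0, 2.0     # true values behind the mock data

data = []
for L in (8, 16, 32):                         # arbitrary sizes for illustration
    for T in np.linspace(0.9, 1.1, 11):
        A = 2.0 / L + (T - 1.0) * L + rng.normal() / 50.0   # mock observable
        data.append((T, L, A))
T, L, A = np.array(data).T

# Rescaled coordinates: all points should collapse on Psi(X) = 2 + X.
X = (T - T_c) * L**inv_nu
Y = A * L**beta_over_nu
residual = Y - (2.0 + X)                      # equals r_i * L_i / 50
```

With the true exponents, the residual contains only the rescaled noise term, which is what a data-collapse criterion must quantify.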
\begin{figure}
{\includegraphics[width=0.25\textwidth]{fig-test-1-2-new.eps}}
{\includegraphics[width=0.225\textwidth]{fig-test-2-2-new.eps}}
\caption{(Color on-line) Left panel:
The data points rescaled by a MAP estimate. We assume that a scaling
function is linear. The results of the MAP estimate for $T_c$,
$\beta/\nu$, and $1/\nu$ are $1.00745$, $0.999008$, and $2.00638$, respectively.
The dotted (pink) line is the scaling function inferred from the
MAP estimate. Inset of left panel: Mock data set. Right panel:
Maximization of a likelihood. \figlabel{test-sub1}}
\end{figure}
Unfortunately, we usually do not know the form of a scaling function
\textit{a priori}. The Bayesian inference based on \eqref{BayesGP0} and
\eqref{sigma-poly} may not be effective in some cases. Thus, we
consider an extension of \eqref{sigma-poly}. From \eqref{BayesGP0}, we
may regard data points as obeying a GP. Since the covariance matrix
represents statistical correlations in data, we may design it for a
wide class of scaling functions. Thus, we introduce a generalized
covariance matrix $\Sigma$ as
\begin{equation}
\Sigma = \mathcal{E} + \Sigma', (\Sigma')_{ij} \equiv K(X_i, X_j),
\label{eq:covariance}
\end{equation}
where $K(X_i, X_j)$ is called a \textit{kernel function}. Note that
$\Sigma'$ must be positive definite. The Bayesian inference based on
\eqref{BayesGP0} and \eqref{covariance} is called a \textit{GP
regression}. Equation~\eqref{sigma-poly} is a special case of
\eqref{covariance}. As shown in \figref{test-sub1}, even if
$\vec{\mu}=0$, the GP regression is successful. For simplicity, we
consider only a zero mean vector ($\vec{\mu}=0$) in this paper.
In the GP regression, we can also infer the scaling function. In fact,
we assume that all data points obey a GP. In other words, the joint
probability distribution of obtained data points and a new additional
point $(X, Y)$ is also a multivariate Gaussian
distribution. Therefore, a conditional probability of $Y$ for obtained
data can be written by a Gaussian distribution with mean $\mu(X)$ and
variance $\sigma^{2}(X)$:
\begin{equation}
\eqlabel{meanGPK}
\mu(X) \equiv
\vec{k}^t\Sigma^{-1}\vec{Y},\quad
\sigma^{2}(X)
\equiv K(X, X)-\vec{k}^t\Sigma^{-1}\vec{k},
\end{equation}
where $(\vec{k})_{i} \equiv K(X_{i}, X)$. We regard $\mu(X)$ in
\eqref{meanGPK} as a scaling function. For example, the dotted (pink)
line in \figref{test-sub1} is $\mu(X)$ in \eqref{meanGPK} for mock
data with a MAP estimate.
In general, a scaling function is smooth. Since $\mu(X)$ in
\eqref{meanGPK} is the weighted sum of kernel functions, the kernel
function should smoothly decrease for increasing distance between two
arguments. In this paper, we propose the use of a \textit{Gaussian
kernel function} (GKF) for the scaling analysis of critical
phenomena. GKF is defined as
\begin{equation}
\label{eq:GKF}
K_G(X_i, X_j) \equiv \theta_0^2 \exp \left( -\frac{(X_i - X_j)^2}{2\theta_1^2}
\right),
\end{equation}
where $\theta_0$ and $\theta_1$ are hyper parameters. Since GKF is
smooth and local, the GP regression with GKF may be effective for a
wide class of scaling functions.
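A direct transcription of the GKF and of the resulting covariance matrix reads as follows (an illustrative sketch; function names are ours):

```python
import numpy as np

def gaussian_kernel(xi, xj, theta0, theta1):
    """Gaussian kernel function K_G of the equation above;
    theta0 and theta1 are hyper parameters."""
    return theta0**2 * np.exp(-((xi - xj) ** 2) / (2.0 * theta1**2))

def covariance_matrix(X, errors, theta0, theta1):
    """Covariance Sigma = E + Sigma' with the Gaussian kernel."""
    X = np.asarray(X, dtype=float)
    Sigma = gaussian_kernel(X[:, None], X[None, :], theta0, theta1)
    return Sigma + np.diag(np.asarray(errors, dtype=float) ** 2)

S = covariance_matrix([0.0, 0.5, 1.0], errors=[0.1, 0.1, 0.1],
                      theta0=2.0, theta1=0.5)
```

The kernel equals $\theta_0^2$ for coinciding arguments and decays smoothly over the length scale $\theta_1$, which encodes the smoothness assumption on $\Psi$.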
\section{Bayesian finite-size scaling analysis of the two-dimensional
Ising model}
\label{sec:demo}
We demonstrate the GP regression with GKF for the finite-size scaling
(FSS) analysis of the two-dimensional Ising model. FSS is widely used
in numerical studies of critical phenomena for finite-size systems.
It is based on the FSS law derived by the renormalization group
argument. The Hamiltonian of the Ising model can be written as
\begin{equation}
\label{eq:Ising}
\mathcal{H}(\{s_i\}) \equiv - J \sum_{\langle ij \rangle} s_i s_j,
\end{equation}
where $s_i$ is the spin variable ($\pm 1$) of site $i$, $\langle ij
\rangle$ runs over nearest-neighbor pairs, and $J$ is a positive
coupling constant. The partition function can be written as
\begin{equation}
\label{eq:partition}
Z \equiv \sum_{\{s_i\}} \exp[-\mathcal{H}(\{s_i\})/k_BT],
\end{equation}
where $k_B$ is the Boltzmann constant. For simplicity, we set
$J/k_B=1$ in the following. The two-dimensional Ising model has a
continuous phase transition at a finite temperature. Since there are
exact results for the Ising models on square and triangular
lattices\cite{onsager44:_cryst_statis, *1952PhRv...85..808Y}, we can
check the results of FSS.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-S-B.eps}
\caption{\figlabel{Binder-ratio} (Color on-line) The Binder ratios of
the Ising model on three square lattices. The total number of data items
is 86. Inset: Binder ratio near a critical point. The value of the
Binder ratio is limited to the region $[0.8, 0.97]$. The number of
data items in the inset is $24$. }
\end{figure}
To obtain the Binder ratios\cite{1981ZPhyB..43..119B} and magnetic
susceptibility on square and triangular lattices, we performed MC
simulations. For the square lattice, $L=L_r=64$, $128$, and $256$,
where $L_r$ and $L$ denote the number of rows and columns of the
lattice, respectively. For the triangular lattice, $L=65L_r/75=65$,
$130$, and $260$, so that the aspect ratio of a triangular lattice is
approximately $1$. We impose periodic boundary conditions for both
lattices. The number of MC sweeps by the cluster
algorithm\cite{swendsen87:_nonun_critic_dynam_in_monte_carlo_simul} is
$80000$ for each simulation. The Binder ratio is based on the ratio of
the fourth and second moments of an order parameter. The order
parameter of the Ising model is a magnetization defined as $M \equiv
\sum_i s_i$. Then, the Binder ratio can be written as
\begin{equation}
\label{eq:def-binder}
U \equiv \frac{1}{2}\left(3 - \frac{\langle M^4\rangle}{\langle
M^2\rangle^2}\right),
\end{equation}
where $\langle \cdot \rangle$ denotes the canonical ensemble average.
In the thermodynamic limit, the Binder ratio takes values 1 and 0 in
the order and disorder phases, respectively. Since the Binder ratio is
dimensionless, the FSS form is
\begin{equation}
\label{eq:scaling-form-B}
U(T, L) = \Psi_B ((1/T -1/T_c)L^{1/\nu}),
\end{equation}
where $T_c$ is a critical temperature and $\nu$ is a critical exponent
that characterizes the divergence of a magnetic correlation length.
From \eqref{scaling-form-B}, the value of the Binder ratio at the
critical temperature is universal. Magnetic susceptibility can be
written as
\begin{equation}
\label{eq:sus}
\chi \equiv \frac{1}{TV} \left(\langle M^2 \rangle - \langle M \rangle^2\right),
\end{equation}
where $V$ is the number of spins. The scaling form of magnetic
susceptibility is
\begin{equation}
\label{eq:sf-sus}
\chi(T,L) = L^{\gamma/\nu} \Psi_\chi((1/T - 1/T_c)L^{1/\nu}),
\end{equation}
where $\gamma$ and $\nu$ are critical exponents.
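Both observables are simple moment combinations of the magnetization samples. A minimal sketch (our own code, not the simulation program used here) reads:

```python
import numpy as np

def binder_ratio(M):
    """Binder ratio U = (3 - <M^4>/<M^2>^2) / 2 from magnetization samples."""
    M = np.asarray(M, dtype=float)
    return 0.5 * (3.0 - np.mean(M**4) / np.mean(M**2) ** 2)

def susceptibility(M, T, V):
    """Magnetic susceptibility chi = (<M^2> - <M>^2) / (T V) for V spins."""
    M = np.asarray(M, dtype=float)
    return (np.mean(M**2) - np.mean(M) ** 2) / (T * V)

# Limits quoted in the text: in the ordered phase M = +-M0, so U = 1;
# for Gaussian-distributed M (disordered phase), <M^4>/<M^2>^2 = 3 and U -> 0.
U_ordered = binder_ratio([1.0, -1.0, 1.0, -1.0])
U_gauss = binder_ratio(np.random.default_rng(0).normal(size=200_000))
```

In practice the moments are accumulated over the MC sweeps, and the statistical errors $E_i$ entering the covariance matrix follow from the same samples.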
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-S-B-1.eps}
\caption{\figlabel{Bayes-S-B} (Color on-line) Result of a Bayesian FSS
of the Binder ratio of the Ising model on square lattices. We
apply the GP regression with \eqref{kernel} to the data shown in
\figref{Binder-ratio}. The results of the MC estimate are
$1/T_c=0.440683(7)$ and $1/\nu = 0.996(2)$. The dotted (pink) curve is
the scaling function inferred from a MAP estimate. }
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-S-B-0.eps}
\caption{\figlabel{CS-BR} (Color on-line) Result of a FSS of the
Binder ratio of the Ising model on square lattices by the
least-square method. In the least-square method, we use only the data in the
inset of \figref{Binder-ratio} and assume that the scaling
function is a quadratic function. The best estimate of the
least-square method is $1/T_c=0.44069(2)$ and
$1/\nu=1.00(2)$. All data points are rescaled by these values.
The data points used in the least-square method are shown in the filled
gray (pink) region. The dotted (pink) curve is the scaling function
inferred from the best estimate of the least-square method. Inset:
Rescaled data points used in the least-square method. }
\end{figure}
\begin{figure}
\includegraphics[width=0.235\textwidth]{fig-ising-S-B-2.eps}
\includegraphics[width=0.235\textwidth]{fig-ising-S-B-3.eps}
\caption{\figlabel{Bayes2-S-B} (Color on-line) Left panel: Result of
a Bayesian FSS for the data in the inset of
\figref{Binder-ratio}. The results of the MC estimate are
$1/T_c=0.44070(2)$ and $1/\nu = 1.00(1)$. Right panel: Result of
a Bayesian FSS for data not included in the inset of
\figref{Binder-ratio}. The results of the MC estimate are
$1/T_c=0.440675(9)$ and $1/\nu = 0.997(2)$. The dotted (pink)
curves in left and right panels are the scaling functions inferred
from the MAP estimates. }
\end{figure}
We first apply the GP regression to the Binder ratios of square
lattices shown in \figref{Binder-ratio}. The kernel function based on
GKF can be written as
\begin{equation}
\eqlabel{kernel}
K(X_i, X_j) \equiv K_G(X_i, X_j) + \theta_2^2 \delta_{ij},
\end{equation}
where the hyper parameter $\theta_2$ represents the data fidelity. We note
that the maximization of a likelihood is much improved by
$\theta_2$. Although $\theta_2$ finally goes to zero, it helps the search to
escape from a local maximum of a likelihood. \figref{Bayes-S-B} shows
the result of the GP regression for Binder ratios. The results of the
MC estimate are $1/T_c=0.440683(7)$ and $1/\nu = 0.996(2)$. This is
consistent with the exact results $1/T_c =
\ln(1+\sqrt{2})/2=0.4406867925\cdots$ and $1/\nu=1$. The dotted
(pink) curve in \figref{Bayes-S-B} is the scaling function inferred
from a MAP estimate by using \eqref{meanGPK}. All points collapse on
this curve. The value of the Binder ratio at the critical temperature
is $0.9158(4)$. This is consistent with the exact value
$0.916038\cdots$\cite{Salas:2000fk}. It is difficult to represent this
curve as a polynomial of low degree. Thus, we limit the value of a
Binder ratio to the region $[0.8, 0.97]$ (see the inset of
\figref{Binder-ratio}). We apply the least-square method with a
quadratic function to the limited data. The result is shown in
\figref{CS-BR}. The inset of \figref{CS-BR} shows the data points
rescaled by the best estimate of the least-square method. All points
in the inset collapse on a quadratic function (see the dotted (pink)
curve in \figref{CS-BR}). The reduced chi-square is $2.96$. The
results of the least-square method are $1/T_c = 0.44069(2)$ and $1/\nu
=1.00(2)$. This is consistent with the exact result. However, it may
be difficult to extend the region of data for the least-square
method. The main panel of \figref{CS-BR} shows all data points
rescaled by the best estimate of the least-square method. While all
points again collapse on a smooth curve, the curve is not equal to the
quadratic function outside the limited region (see the filled gray
(pink) region in \figref{CS-BR}). The left panel in
\figref{Bayes2-S-B} shows the result of the GP regression applied to the
same data used for the least-square method. The results of the MC estimate are
$1/T_c=0.44070(2)$ and $1/\nu = 1.00(1)$. This is consistent with the
exact results and similar to that of the least-square method. The GP
regression with GKF assumes only the smoothness of a scaling
function. Thus, it may be effective even for data that are not near a
critical point. In fact, even if we use only the data not included in the
inset of \figref{Binder-ratio}, we can still perform FSS by the GP
regression. The result is shown in the right panel of
\figref{Bayes2-S-B}. The results of the MC estimate are
$1/T_c=0.440675(9)$ and $1/\nu=0.997(2)$. Although we do not use the
important data near a critical point, the result of the GP regression
is close to the exact result.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-S-X.eps}
\caption{\figlabel{Bayes-S-X} (Color on-line) Result of a Bayesian FSS
of the magnetic susceptibility of the Ising model on square
lattices. We apply the GP regression with \eqref{kernel} to data
with the same temperatures and lattice sizes of data as in
\figref{Binder-ratio}. The results of the MC estimate are
$1/T_c=0.44072(8)$, $1/\nu = 0.98(2)$, and $\gamma / \nu =
1.74(2)$. The dotted (pink) curve is the scaling function inferred
from a MAP estimate. }
\end{figure}
We also apply the GP regression to the magnetic susceptibility of
square lattices. The result is shown in \figref{Bayes-S-X}. The
results of the MC estimate are $1/T_c=0.44072(8)$, $1/\nu=0.98(2)$,
and $\gamma/\nu=1.74(2)$. This is consistent with the exact result
($\gamma/\nu=7/4=1.75$). The dotted (pink) curve is the scaling
function inferred from the MAP estimate by using \eqref{meanGPK}. All
points collapse on this curve. However, it is difficult to represent
this curve as a polynomial of low degree.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-T-B.eps}
\caption{\figlabel{Bayes-T-B} (Color on-line) Result of a Bayesian
FSS of the Binder ratio of the Ising model on triangular
lattices. We apply the GP regression with \eqref{kernel}. The
results of the MC estimate are $1/T_c=0.274652(7)$ and $1/\nu =
0.989(4)$. The dashed (light-blue) curve is the scaling function
of a square lattice in \figref{Bayes-S-B} with a nonuniversal
metric factor $C_1=1.748(3)$. Inset: Binder ratios of the Ising
model on triangular lattices. The number of data items is 86. }
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig-ising-T-X.eps}
\caption{\figlabel{Bayes-T-X} (Color on-line) Result of a Bayesian
FSS of the magnetic susceptibility of the Ising model on
triangular lattices. We apply the GP regression with
\eqref{kernel} to the data with the same temperatures and lattice
sizes of data as in \figref{Bayes-T-B}. The results of the MC
estimate are $1/T_c=0.27466(7)$, $1/\nu = 0.95(2)$, and $\gamma /
\nu = 1.71(2)$. The dashed (light-blue) curve is the scaling
function of a square lattice in \figref{Bayes-S-X} with
nonuniversal metric factors $C_1=1.70(2)$ and $C_2=0.777(7)$. }
\end{figure}
Next, we apply the GP regression to the Binder ratio and magnetic
susceptibility on triangular lattices. These results are shown in
\figref{Bayes-T-B} and \figref{Bayes-T-X}, respectively. All points of
each quantity collapse on a curve. The results of the MC estimate for
$1/T_c$, $1/\nu$, and $\gamma/\nu$ are summarized in
Tab.~\ref{tab:all-results}. Although they are almost consistent with
the exact results, the accuracy of inference is lower than that for
the data of square lattices. Since the data of the
triangular lattices cover a wide region (compare \figref{Bayes-T-B} with
\figref{Bayes-S-B}), we may need to consider corrections to scaling.
\begin{table*}[ht]
\centering
\caption{\label{tab:all-results}Results of the MC estimates for $1/T_c$, $1/\nu$, and $\gamma/\nu$.
The exact values of $1/T_c$ for square and triangular lattices
\cite{onsager44:_cryst_statis, *1952PhRv...85..808Y} are
$\ln(1+\sqrt{2})/2=0.4406867925\cdots$ and
$(\ln3)/4=0.2746530723\cdots$, respectively. The exact values of
$1/\nu$ and $\gamma/\nu$ are $1$ and $\frac74$, respectively.
}
\begin{ruledtabular}
\begin{tabular}{|l|l|l|l|l|l|}
Data & Lattice & Method & $1/T_c$ & $1/\nu$ & $\gamma/\nu$\\
\hline
Binder ratio & Square & GP regression & $0.440683(7)$ & $0.996(2)$ & -\\
Binder ratio & Triangular & GP regression &
$0.274652(7)$ & $0.989(4)$ & - \\
Binder ratio\footnotemark[1] & Square & Least-square & $0.44069(2)$
& $1.00(2)$ &-\\
Binder ratio\footnotemark[1] & Square & GP regression & $0.44070(2)$ &
$1.00(1)$ & -\\
Binder ratio\footnotemark[2] & Square & GP regression &
$0.440675(9)$ & $0.997(2)$ & -\\
Magnetic susceptibility & Square & GP regression & $0.44072(8)$ & $0.98(2)$ & $1.74(2)$ \\
Magnetic susceptibility & Triangular & GP regression & $0.27466(7)$ & $0.95(2)$ & $1.71(2)$\\
\end{tabular}
\footnotetext[1]{Data in the inset of \figref{Binder-ratio}.}
\footnotetext[2]{Data not included in the inset of \figref{Binder-ratio}.}
\end{ruledtabular}
\end{table*}
Privman and Fisher proposed the universality of the finite-size
scaling function \cite{1984PhRvB..30..322P}. If two critical systems
belong to the same universality class, the two finite-size scaling
functions with nonuniversal metric factors are equal as
\begin{equation}
\label{eq:UFSS}
\Psi(x) = C_2 \ \Psi'(C_1x),
\end{equation}
where $\Psi$ and $\Psi'$ are finite-size scaling functions and $C_1$
and $C_2$ are nonuniversal metric factors. Hu et al. checked this idea
for bond and site percolation on various
lattices\cite{1995PhRvL..75..193H}.
The Ising models on square and triangular lattices belong to the
same universality class. Thus, the two scaling functions must be equal via
nonuniversal metric factors as in \eqref{UFSS}. To check the
universality of finite-size scaling functions, we compared the data on
triangular lattices with the scaling function inferred from the data
on square lattices. We estimated nonuniversal metric factors to
minimize the residual between them. The result for the Binder ratio is
$C_1=1.748(3)$. The results for the magnetic susceptibility are
$C_1=1.70(2)$ and $C_2=0.777(7)$. Note that there is no metric factor
$C_2$ for the Binder ratio, because the Binder ratio is
dimensionless. The scaling functions of a square lattice with
nonuniversal metric factors are shown using the dashed (light-blue)
curves in \figref{Bayes-T-B} and \figref{Bayes-T-X}. They agree well
with the data on triangular lattices. The reduced chi-square of the
Binder ratio is $2.65$, and that of magnetic susceptibility is
$0.36$. Therefore, we confirm the universality of finite-size scaling
functions for the Binder ratio and magnetic susceptibility of the
two-dimensional Ising model. We note that Tomita et al.
\cite{Tomita:1999fk} confirmed the universality of finite-size scaling
functions for other quantities, and Mangazeev et al.
\cite{2010PhRvE..81f0103M} studied the universality of the scaling
function in the thermodynamic limit.
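The estimation of the nonuniversal metric factors can be sketched as a chi-square minimization. The fragment below is an illustrative toy example with a linear scaling function and a simple grid search (the actual minimizer used here is not specified in the text); it recovers known factors:

```python
import numpy as np

def metric_factors(X, Y, E, psi, c1_grid, c2_grid):
    """Estimate (C1, C2) in Psi(x) = C2 * Psi'(C1 x) by minimizing the
    chi-square residual between data (X, Y, errors E) and psi on a grid."""
    chi2 = lambda c1, c2: float(np.sum(((Y - c2 * psi(c1 * X)) / E) ** 2))
    _, C1, C2 = min((chi2(a, b), a, b) for a in c1_grid for b in c2_grid)
    return C1, C2

# Toy check: data generated from psi with known factors C1 = 1.7, C2 = 0.8.
psi = lambda x: 2.0 + x
X = np.linspace(-1.0, 1.0, 21)
Y = 0.8 * psi(1.7 * X)
C1, C2 = metric_factors(X, Y, np.ones_like(X), psi,
                        np.linspace(1.0, 2.5, 151), np.linspace(0.5, 1.0, 51))
```

For a dimensionless quantity such as the Binder ratio, $C_2$ is fixed to unity and only $C_1$ is searched.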
\section{Conclusions}
\label{sec:conclusion}
In this paper, we introduced a Bayesian framework in the scaling
analysis of critical phenomena. This framework includes the
least-square method for the scaling analysis as a special case. It can
be applied to a wide variety of scaling hypotheses, as shown in
\eqref{law}. In this framework, we proposed the GP regression with GKF
defined by Eqs. (\ref{eq:BayesGP0}), (\ref{eq:covariance}), and
(\ref{eq:GKF}). This method assumes only the smoothness of a scaling
function, and it does not require an explicit functional form. We demonstrated it for the FSS
of the Ising models on square and triangular lattices. For the data
limited to a narrow region near a critical point, the accuracy of the
GP regression was comparable to that of the least-square method. In
addition, for the data to which we cannot apply the least-square
method with a polynomial of low degree, our method worked
well. Therefore, we confirm the advantage of the GP regression with
GKF for the scaling analysis of critical phenomena.
The GP regression can also infer a scaling function as the mean
function $\mu$ in \eqref{meanGPK}. By comparing the data on triangular
lattices with the scaling function inferred from the data on square
lattices, we confirmed the universality of the FSS function of the
two-dimensional Ising model. The use of the scaling function may help
in the determination of a universality class.
In this paper, we assume that the data obey a scaling law. However, in
some cases, a part of the data may not obey the scaling law. In such a
case, we usually introduce a correction to scaling. If we can assume
the form of a correction to scaling, we change only the function $Y$
in \eqref{law}. However, the assumed form of a correction term may
itself cause problems; in other words, the identification of a critical
region remains an open issue.
As shown in this paper, the GP regression is a powerful method. In
particular, it can be applied to statistical checks of data
collapse. For example, it can be used to estimate the nonuniversal
metric factors in \figref{Bayes-T-B} and \figref{Bayes-T-X}. Other
interesting applications may be found throughout the data analysis of
physics.
\begin{acknowledgments}
I would like to thank Toshio Aoyagi, Naoki Kawashima, Jie Lou, and
Yusuke Tomita for the fruitful discussions, and the Kavli Institute
for Theoretical Physics for the hospitality. This research was
supported in part by Grants-in-Aid for Scientific Research
(No. 19740237, No. 22340111, No. 23540450), and in part by the
National Science Foundation under Grant No. PHY05-51164.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
The foundations of analyzing thermal and vacuum fluctuations of the electromagnetic field inside matter were laid in the seminal work of S.~M. Rytov\cite{rytov_principles_1989}. This later gave rise to a unified approach to understanding fluctuational forces\cite{lifshitz_theory_1956} (the Lifshitz theory of Casimir forces) as well as near-field thermal emission and radiative heat transfer (the Polder--van Hove theory)\cite{polder_theory_1971,zhang_nano/microscale_2007,chen_nanoscale_2005,biehs_nanoscale_2011,shchegrov_near-field_2000,guo_broadband_2012,joulain_surface_2005,laroche_near-field_2006,lussange_radiative_2012,nefedov_giant_2011,mulet_enhanced_2002,narayanaswamy_thermal_2004,rousseau_radiative_2009,volokitin_radiative_2001,volokitin_near-field_2007,pendry_radiative_1999,joulain_noncontact_2010,vinogradov_thermally_2009,fu_nanoscale_2006,francoeur_near-field_2008,otey_thermal_2010,hu_near-field_2008,shen_surface_2009,liu_taming_2011}. Recent developments in nanoengineering and detection have led to experimental regimes\cite{guha_near-field_2012,de_wilde_thermal_2006,hu_near-field_2008,narayanaswamy_surface_2003,shen_surface_2009,jones_thermal_2012,liu_taming_2011} where these effects can play a dominant role. Simultaneously, theoretical work has shed light on the fact that the classical scattering matrix, along with the temperatures of objects of various geometries, can completely characterize these fluctuations in both equilibrium and non-equilibrium situations\cite{bimonte_scattering_2009,kruger_nonequilibrium_2011,kruger_trace_2012,maghrebi_scattering_2013,messina_casimir-lifshitz_2011,messina_scattering-matrix_2011,rahi_scattering_2009,antezza_casimir-lifshitz_2006,bimonte_general_2007,bimonte_theory_2006,reid_fluctuation-induced_2013,rodriguez_fluctuating-surface-current_2012,rodriguez_fluctuating-surface-current_2013}.
Metamaterials are artificial media designed to achieve exotic electromagnetic responses beyond those available in conventional materials\cite{engheta_metamaterials:_2006,shalaev_optical_2007,smith_metamaterials_2004}. A large body of work has emerged in the last decade which in principle engineers the classical scattering matrix to achieve effects such as negative refraction\cite{pendry_negative_2000,veselago_electrodynamics_1968}, enhanced chirality\cite{plum_metamaterial_2009,wang_chiral_2009,zhang_negative_2009}, invisibility\cite{cai_optical_2007,pendry_controlling_2006,schurig_metamaterial_2006} and subwavelength imaging\cite{fang_subdiffraction-limited_2005,jacob_optical_2006}. Recently, it was shown that a specific class of metamaterials, known as hyperbolic (indefinite) media\cite{cortes_quantum_2012,guo_applications_2012,jacob_optical_2006,krishnamoorthy_topological_2012,smith_electromagnetic_2003,smith_negative_2004,podolskiy_strongly_2005}, has the potential for thermal engineering. Such media support unique modes which can be thermally excited and detected in the near field due to the super-Planckian nature of their thermal emission spectrum\cite{biehs_hyperbolic_2012,guo_broadband_2012,guo_thermal_2013,simovski_optimization_2013,biehs_super-planckian_2013}.
In this paper, we adopt the techniques of fluctuational electrodynamics to provide a first-principles account of the thermal emission characteristics of hyperbolic media. We show that the conventional approach of utilizing the second kind of fluctuation dissipation theorem\cite{rytov_principles_1989,eckhardt_first_1982,eckhardt_macroscopic_1984} is equivalent to the scattering matrix method\cite{bimonte_general_2007,bimonte_scattering_2009,eckhardt_first_1982,eckhardt_macroscopic_1984} for calculating the metamaterial energy density. We specifically provide the derivations of the fluctuational effects both in effective medium theory and for practical thin-film multilayer metamaterial designs\cite{kidwai_effective-medium_2012,cortes_quantum_2012}. While these characteristics can in principle be obtained from formulas related to the reflection coefficients, such formulas do not by themselves shed light on various aspects of equilibrium or non-equilibrium fluctuations in the context of metamaterials. Our aim is to provide an insightful look at the prevailing approaches adapted to the case of hyperbolic media.
We also consider the case of a practical phonon-polaritonic metamaterial\cite{guo_broadband_2012,korobkin_measurements_2010} and show the stark contrast between the far-field and near-field thermal emission characteristics\cite{shchegrov_near-field_2000}. This should help experimentalists design experiments starting from analyzing the far-field characteristics, retrieving effective medium parameters, and then looking for our predicted near-field effects. We show that the far-field characteristics are dominated by the epsilon-near-zero and epsilon-near-pole responses, as expected from Kirchhoff's laws\cite{molesky_high_2013}. This is true independent of material choice and can occur for both nanowire and multilayer hyperbolic media\cite{molesky_high_2013}. We note that practical applications would require high-temperature plasmonics and metamaterials\cite{molesky_high_2013}.
We also study the limitations of effective medium theory (EMT) but focus on cases where there is good agreement between practical structures and EMT\cite{tschikin_limits_2013,cortes_quantum_2012,kidwai_effective-medium_2012}. We emphasize that it is known in the metamaterials community that the unit cell of a metamaterial can show characteristics similar to the bulk medium\cite{cortes_quantum_2012}. In the context of thin film hyperbolic media, this was experimentally elucidated in Ref.~\onlinecite{kim_improving_2012} and theoretically explained in detail in Ref.~\onlinecite{cortes_quantum_2012}.
In this paper we also describe another effect connected to hyperbolic super-Planckian thermal emission\cite{guo_broadband_2012}. We analyze the spatial coherence\cite{carminati_near-field_1999,henkel_spatial_2000,joulain_near_2008,joulain_surface_2005,lau_spatial_2007} of the near-field thermal emission and relate it to the metamaterial modes. We show that there is a subtle interplay in near-field spatial coherence due to competition between surface waves and hyperbolic modes. We expect our work to aid experimentalists in isolating thermal effects related to metamaterials and also form the theoretical foundation for developing the macroscopic quantum electrodynamics\cite{scheel_macroscopic_2008} of hyperbolic media.
\section{Fluctuation dissipation theorem}
In global thermal equilibrium, the first kind of fluctuation dissipation theorem (FDT)\cite{rytov_principles_1989,eckhardt_macroscopic_1984} directly specifies the correlation function of electric fields. It is expressed by
\begin{align}
\label{first kind}
\left\langle{\vec E(\boldsymbol r_1,\omega)\otimes \vec E^*(\boldsymbol r_2,\omega')}\right\rangle
&= \nonumber \\
\frac{\mu_0\omega}{\pi}\Theta(\omega,T)&\operatorname{Im}{\stackrel{\leftrightarrow}{G}}(\boldsymbol r_1,\boldsymbol r_2,\omega)\delta(\omega-\omega').
\end{align}
Here ${\stackrel{\leftrightarrow}{G}}$ is the dyadic Green's function\cite{tai_dyadic_1994,kong_electromagnetic_1990} (DGF), and $\Theta(\omega,T)=\hbar\omega/(e^{\hbar\omega/{k_BT}}-1)$ is the mean energy of a thermal oscillator.
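As a numerical aside (our own sketch, not part of the original derivation), the mean oscillator energy $\Theta(\omega,T)$ reduces to the classical equipartition value $k_B T$ when $\hbar\omega \ll k_B T$ and is exponentially suppressed in the opposite limit:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K

def theta(omega, T):
    """Mean energy of a thermal oscillator:
    Theta(omega, T) = hbar*omega / (exp(hbar*omega/(kB*T)) - 1)."""
    x = HBAR * omega / (KB * T)
    # expm1 keeps the low-frequency (classical) limit numerically accurate
    return HBAR * omega / math.expm1(x)
```

At $T=300$\,K and $\omega=10^{10}$\,rad/s, for example, $\Theta$ agrees with $k_B T$ to better than $0.1\%$, while at $\omega=10^{16}$\,rad/s it is negligibly small.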
Eq. (\ref{first kind}) has two main applications. Firstly, it can be used to derive the electromagnetic stress tensor at a certain point. Secondly, it directly gives the cross-spectral density tensor\cite{carminati_near-field_1999,lau_spatial_2007} which characterizes the spatial coherence of a thermal radiative source. The second kind of FDT\cite{rytov_principles_1989,eckhardt_macroscopic_1984} that specifies the correlation function of thermally generated random currents is
\begin{align}
&\left\langle{\vec{j}(\boldsymbol r_1,\omega)\otimes\vec{j}^*(\boldsymbol r_2,\omega')}\right\rangle=\nonumber \\
&\frac{\omega \epsilon_0}{\pi} \boldsymbol \epsilon''(\omega)\Theta(\omega,T)\delta(\boldsymbol r_1-\boldsymbol r_2)\delta(\omega-\omega').
\end{align}
We assume that the permittivity $\boldsymbol \epsilon$ is a diagonal matrix; $\boldsymbol \epsilon''$ denotes its imaginary part.
The first kind of FDT can only be used in global thermal equilibrium. In a non-equilibrium situation, we should first employ Maxwell's equations to obtain the electromagnetic fields generated by the random currents through the DGF,
\begin{align}
&\vec E(\boldsymbol r) = i\omega {\mu _0}\iiint {{\stackrel{\leftrightarrow}{G}} (\boldsymbol r,\boldsymbol r')\vec j(\boldsymbol r')}d\boldsymbol r', \\
&\vec H(\boldsymbol r)=\iiint \nabla \times {\stackrel{\leftrightarrow}{G}} (\boldsymbol r, \boldsymbol r')\vec j(\boldsymbol r')d \boldsymbol r',
\end{align}
and then calculate the electromagnetic stress tensor or the cross-spectral density tensor.
The dyadic Green's function (DGF) satisfies an important identity\cite{eckhardt_macroscopic_1984,novotny_principles_2006},
\begin{align}
&\operatorname{Im} {\stackrel{\leftrightarrow}{G}} (\boldsymbol r_1, \boldsymbol r_2,\omega)=\nonumber \\
&\frac{\omega^2}{c^2}\int_V {\stackrel{\leftrightarrow}{G}} (\boldsymbol r_1,\boldsymbol r',\omega)\boldsymbol \epsilon''(\boldsymbol r',\omega){\stackrel{\leftrightarrow}{G}}^\dag(\boldsymbol r_2,\boldsymbol r',\omega)d^3 \boldsymbol r'.
\end{align}
This identity ensures that at global thermal equilibrium the first kind and the second kind of FDT lead to identical results.
\section{Thermal emission from half space uniaxial media}
In this section, we consider a uniaxial medium located in the lower half space $(z<0)$ at temperature $T$, while the upper half space is vacuum at zero temperature. The relative permittivity of the uniaxial medium is a diagonal matrix, $\boldsymbol{\epsilon}=\mathrm{diag}[\epsilon_{\parallel};\epsilon_{\parallel};\epsilon_{\perp}]$. Note that hyperbolic metamaterials (HMMs) are a special kind of uniaxial medium satisfying $\epsilon_{\parallel}\epsilon_{\perp}<0$. As mentioned before, we should employ the second kind of FDT because this is a non-equilibrium problem.
To find the DGF in planar structures, it is convenient to work in wavevector space. The DGF in vacuum\cite{kong_electromagnetic_1990} is ($z>z'$)
\begin{align}
{\stackrel{\leftrightarrow}{G}}(\boldsymbol r,\boldsymbol r',\omega) =& \frac{i}{{8{\pi ^2}}}\iint \frac{{d{k_x}d{k_y}}}{k_{z0}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber \\
&\{{{\hat s}_ + ^0 }{{\hat s}_ +^0 }{e^{i{k_{z0}}(z - z')}} + {{\hat p}_ +^0 }{{\hat p}_ + ^0}{e^{i{k_{z0}}(z - z')}}\}
\end{align}
Here we define ${\hat k_ + } = ({k_x},{k_y},{k_{z0}})/k_0$ as the normalized wave-vector of upward waves ($z > z'$) in free space,
$\boldsymbol k_\perp=(k_x,k_y)$, ${k_\rho} = \sqrt {k_x^2 + k_y^2}$, $k_{z0} = \sqrt {k_0^2 - k_\rho ^2}$, and $\boldsymbol r_\perp=(x,y)$. ${\hat s_+^0 } = {{{\hat k}_ + } \times \hat z} = {({k_y}, - {k_x},0)} / {k_\rho}$ is the unit direction vector of s-polarized waves, and ${\hat p_ +^0} = {\hat s_ + ^0} \times {\hat k_ + } = {{( - {k_x}{k_{z0}}, - {k_y}{k_{z0}},k_\rho ^2)}/{{k_0}{k_\rho }}}$ is the unit direction vector of p-polarized waves. Correspondingly, and for later use, ${\hat k_ - } = ({k_x},{k_y},-{k_{z0}})/{k_0}$ is the normalized wave-vector of downward waves (when $z < z'$), ${\hat s_ - ^0 } = {{{\hat k}_ - } \times \hat z} = {({k_y}, - {k_x},0)} / {k_\rho}$ is the same as ${\hat s_ + ^0}$, and ${\hat p_ -^0 } = {\hat s_ -^0 } \times {\hat k_ -} = {{({k_x}{k_{z0}},{k_y}{k_{z0}},k_\rho ^2)} / {{k_0}{k_\rho }}}$.
The DGF relating thermally generated random currents inside the medium in the lower space to the fields in upper space vacuum is
\begin{align}
&{\stackrel{\leftrightarrow}{G}}_{01}(\boldsymbol r,\boldsymbol r') = \frac{i}{{8{\pi ^2}}}\iint \frac{{d{k_x}d{k_y}}}{{{k_{z0}}}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber \\
& \{ {t^s}{{\hat s}_ + ^0 }{{\hat s}_+^1 }{e^{i{k_{z0}}z-i{k_{zs}}z'}} + {t^p}{{\hat p}_ +^0 }{{\hat p}_ +^1 }{e^{i{k_{z0}}z-i{k_{zp}}z'}} \}.
\end{align}
Here, ${k_{zs}} = \sqrt {\epsilon_\parallel k_0^2 - k_\rho ^2}$, ${k_{zp}} = \sqrt {\epsilon_\parallel k_0^2 - \frac{\epsilon_\parallel}{\epsilon_\perp} k_\rho ^2}$, $\hat s_+^1=\hat s_+^0$, and ${\hat p_ + ^1 } = {{( -{k_x}{k_{zp}}, -{k_y}{k_{zp}},k_\rho ^2{\epsilon_\parallel}/{\epsilon_\perp})}/{{k_0}{k_\rho }}\sqrt{\epsilon_\parallel}}$, which are the unit direction vectors of s- and p-polarized waves inside the uniaxial medium, respectively.
Note that the transmission coefficients for waves incident from the vacuum side are defined in terms of the electric fields,
\begin{equation}
t^s=\frac{2k_{z0}}{k_{z0}+k_{zs}},\quad t^p=\frac{2k_{z0}\sqrt{\epsilon_{\parallel}}}{\epsilon_{\parallel}k_{z0}+k_{zp}}.
\end{equation}
To calculate the magnetic fields, we should evaluate $\nabla\times{\stackrel{\leftrightarrow}{G}}_{01}$, which can be easily done in the wavevector space. The curl operator will work on the first vector of ${\stackrel{\leftrightarrow}{G}}_{01}$,
\begin{align}
&\nabla\times{\stackrel{\leftrightarrow}{G}}_{01}(\boldsymbol r,\boldsymbol r')=\frac{k_0}{{8{\pi ^2}}}\iint \frac{{d{k_x}d{k_y}}}{{{k_{z0}}}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber \\
&\{{t^s}{{\hat p}_ + ^0 }{{\hat s}_+^1 }{e^{i{k_{z0}}z-i{k_{zs}}z'}} - {t^p}{{\hat s}_ +^0 }{{\hat p}_ +^1 }{e^{i{k_{z0}}z-i{k_{zp}}z'}}\}.
\end{align}
The free space energy density is defined by
\begin{align}
u(\omega,\boldsymbol r)=2\left(\frac{1}{2}\epsilon_0\operatorname {Tr}\left\langle{\vec E(\omega,\boldsymbol r)\otimes \vec E^*(\omega,\boldsymbol r)}\right\rangle \right. \nonumber \\
\left. +\frac{1}{2}\mu_0\operatorname {Tr}\left\langle{\vec H(\omega,\boldsymbol r)\otimes \vec H^*(\omega,\boldsymbol r)}\right\rangle \right),
\end{align}
where the prefactor 2 accounts for the negative frequency counterpart.
Following the formalism in Ref.~\onlinecite{lau_spatial_2007}, we define
\begin{align}
&g_{e}(\boldsymbol k_{\perp},z,z',\omega)=-\frac{1}{2k_{z0}} \nonumber \\
&\left\{ {{t^s}{{\hat s}_ + ^0 }{{\hat s}_ +^1 }{e^{ik_{z0}z-i{k_{zs}}z'}} + {t^p}{{\hat p}_ +^0 }{{\hat p}_ +^1 }{e^{ik_{z0}z-i{k_{zp}}z'}}} \right\},\\
&g_{h}(\boldsymbol k_{\perp},z,z',\omega)=\frac{1}{2k_{z0}} \nonumber \\
& \left\{{{t^s}{{\hat p}_ + ^0 }{{\hat s}_+^1 }{e^{ik_{z0}z-i{k_{zs}}z'}} - {t^p}{{\hat s}_ +^0 }{{\hat p}_ +^1 }{e^{ik_{z0}z-i{k_{zp}}z'}}}\right\}.
\end{align}
\begin{widetext}
One can then find
\begin{equation}
u(\omega,z)=\frac{{\omega}^3}{\pi c^4}\Theta(\omega,T)\int _{-\infty}^0 dz' \int _{-\infty}^{+\infty} \frac{d^2 \boldsymbol k_{\perp}}{4{\pi}^2}
\left( \operatorname {Tr}\left( g_{e}^{}\boldsymbol \epsilon'' g_{e}^\dagger \right) + \operatorname {Tr}\left( g_{h}^{}\boldsymbol \epsilon'' g_{h}^\dagger \right) \right)
\end{equation}
Inserting the expressions of $g_e$ and $g_h$, we have
\begin{align}
u(\omega,z)=\frac{{\omega}^3}{8\pi^2 c^4}\Theta(\omega,T)e^{-2\operatorname{Im}(k_{z0})z}\int _{-\infty}^0 dz' \int _{0}^{+\infty}k_{\rho}{dk_{\rho}}\frac{1}{|k_{z0}|^2}\left(1+\frac{k_{\rho}^2+|{k_{z0}^2|}}{k_0^2}\right) \nonumber \\
\left(\epsilon_{\parallel}''|t^s|^2e^{2\operatorname {Im}(k_{zs})z'}+\left(\dfrac{\epsilon_{\perp}''|\epsilon_{\parallel}/\epsilon_{\perp}|^2k_{\rho}^2+\epsilon_{\parallel}''|{k_{zp}^{2}|}}{k_0^2|\epsilon_{\parallel}^2|}\right)|t^p|^2e^{2\operatorname {Im}(k_{zp})z'}\right).
\end{align}
\end{widetext}
The integration over $z'$ can be easily done. Further, by taking the imaginary part of the dispersion relation
\begin{equation}
\frac{k_\rho^2}{\epsilon_\parallel}+\frac{k_{zs}^2}{\epsilon_\parallel}=\frac{\omega^2}{c^2},\quad \frac{k_\rho^2}{\epsilon_\perp}+\frac{k_{zp}^2}{\epsilon_\parallel}=\frac{\omega^2}{c^2}
\end{equation}
for s- and p-polarized waves, this result can be simplified as
\begin{align}
\label{general expression}
&u(\omega,z) = \frac{{U{}_{BB}(\omega ,T)}}{2} \nonumber \\
&\left\{ {\int_0^{{k_0}} {\frac{{{k_\rho }d{k_\rho }}}{{{k_0}\left| {{k_{z0}}} \right|}}} } \right.\frac{{(1 - {{\left| {{r^s}} \right|}^2}) +(1- {{\left| {{r^p}} \right|}^2})}}{2} \nonumber \\
& + \left. {\int_{{k_0}}^\infty {\frac{{k_\rho ^3d{k_\rho }}}{{k_0^3\left| {{k_{z0}}} \right|}}{e^{ - 2\operatorname{Im} ({k_{z0}})z}}(\operatorname{Im} ({r^s}) + \operatorname{Im} ({r^p}))} } \right\}.
\end{align}
Here $U_{BB}=\frac{\omega^2}{\pi^2 c^3}\Theta(\omega,T)$ is the blackbody energy density, and $r^s$ and $r^p$ are the standard reflection coefficients given by
\begin{equation}
r^s=\frac{k_{z0}-k_{zs}}{k_{z0}+k_{zs}},\quad r^p=\frac{\epsilon_{\parallel}k_{z0}-k_{zp}}{\epsilon_{\parallel}k_{z0}+k_{zp}}.
\end{equation}
The propagating wave part $ 1-|r|^2 $ in Eq.~(\ref{general expression}) is the far-field emissivity, equivalent to Kirchhoff's law. Correspondingly, the evanescent wave part can be interpreted as Kirchhoff's law in the near field, and $ 2\operatorname{Im}(r) $ is the near-field emissivity\cite{pendry_radiative_1999,biehs_mesoscopic_2010,guo_thermal_2013,mulet_enhanced_2002}, which is widely used in heat transfer problems. $ 2\operatorname{Im}(r) $ is also proportional to the near-field local density of states (LDOS) proposed in Ref.~\onlinecite{pendry_radiative_1999} and is related to the tunneling and subsequent absorption of energy carried by evanescent waves. Recently, extensive theoretical and experimental work has demonstrated the ability of HMMs to enhance the near-field LDOS\cite{cortes_quantum_2012,jacob_engineering_2010,krishnamoorthy_topological_2012}. Thus we expect HMMs to find use in thermal and energy management.
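As a sanity check on these coefficients (a sketch of our own, not taken from the original text), the field-continuity relations $1+r^s=t^s$ and $1+r^p=\sqrt{\epsilon_\parallel}\,t^p$ follow algebraically from the definitions quoted above, and both reflection coefficients vanish when the lower half space is vacuum:

```python
import cmath

def interface_coeffs(eps_par, eps_perp, krho, k0=1.0):
    """Fresnel coefficients of a vacuum/uniaxial interface,
    using the k_zs, k_zp, r and t definitions given in the text."""
    kz0 = cmath.sqrt(k0**2 - krho**2)
    kzs = cmath.sqrt(eps_par * k0**2 - krho**2)
    kzp = cmath.sqrt(eps_par * k0**2 - (eps_par / eps_perp) * krho**2)
    rs = (kz0 - kzs) / (kz0 + kzs)
    rp = (eps_par * kz0 - kzp) / (eps_par * kz0 + kzp)
    ts = 2 * kz0 / (kz0 + kzs)
    tp = 2 * kz0 * cmath.sqrt(eps_par) / (eps_par * kz0 + kzp)
    return rs, rp, ts, tp
```

The identities hold exactly for any complex permittivities, since the same branch of the square root enters $r$ and $t$ consistently.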
\subsection{Energy in matter and fields}
We can use the above definitions to compare the energy density in the near field of a hyperbolic medium to that of any other control sample. A pertinent question is how much of the energy density resides in matter degrees of freedom as opposed to the fields. This is difficult to answer inside the medium but can be done unambiguously in the near field.
In the high-k approximation, where the wavevector parallel to the interface $k_\rho$ is sufficiently large, the near-field energy density is governed by the tunneling parameter, which we define as the imaginary part of the p-polarized reflection coefficient. Thus studying the behavior of this tunneling parameter sheds light on the near-field energy density. In the low-loss limit, the reflection coefficient for p-polarized waves incident on an interface between vacuum and an HMM can be expressed as\cite{guo_broadband_2012,miller_effectiveness_2013}
\begin{equation}
\label{high_k_HMM}
\operatorname{Im}(r_p^{\text{HMM}})\approx \frac{2\sqrt{|\epsilon_{\parallel}\epsilon_{\perp}|}}{1+|\epsilon_{\parallel}\epsilon_{\perp}|}.
\end{equation}
While for an isotropic medium, the high-k approximation gives
\begin{equation}
\label{high_k_iso}
\operatorname{Im}(r_p^{\text{iso}})\approx \frac{2\epsilon''}{|1+\epsilon|^2}.
\end{equation}
The most striking difference between the above equations is that for a conventional isotropic medium the near-field energy density is completely dominated by the imaginary part of the dielectric constant. These fluctuations disappear in the low-loss limit and can be attributed to matter degrees of freedom. This is because the imaginary part of the dielectric constant, which governs field fluctuations, also characterizes the irreversible conversion of electromagnetic energy into thermal energy of matter degrees of freedom. On the other hand, the hyperbolic medium shows near-field fluctuations arising from high-k modes completely independent of material losses, and the energy resides in the field.
Let us analyze what happens at mid-infrared frequencies, where phonon-polaritonic materials can give rise to this low-loss high-k limit for hyperbolic media. We clearly see from Eq.~(\ref{high_k_iso}) that the near-field emissivity is very small when the frequency is away from the surface phonon polariton resonance (SPhPR) frequency, where $\operatorname{Re}(\epsilon)=-1$. However, for HMMs made of phonon-polaritonic materials and dielectrics, the near-field emissivity (Eq.~(\ref{high_k_HMM})) can be comparably large over a broad frequency region, though in this approximation its magnitude cannot exceed one. Note that here we do not account for surface wave resonances, which can change the picture considerably, especially if one wants to optimize near-field heat transfer\cite{miller_effectiveness_2013}. Our aim is to focus on the bulk modes only.
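A quick numerical comparison (our own illustrative sketch, with permittivity values chosen arbitrarily by us) confirms that the exact $\operatorname{Im}(r^p)$ approaches the hyperbolic high-k estimate above for $k_\rho \gg k_0$, while a weakly lossy isotropic medium gives a near-field emissivity orders of magnitude smaller away from resonance:

```python
import cmath
import math

def im_rp(eps_par, eps_perp, krho, k0=1.0):
    """Exact Im(r^p) at a vacuum/uniaxial interface
    (the isotropic case is eps_par == eps_perp)."""
    kz0 = cmath.sqrt(k0**2 - krho**2)   # principal branch: decaying evanescent waves
    kzp = cmath.sqrt(eps_par * k0**2 - (eps_par / eps_perp) * krho**2)
    rp = (eps_par * kz0 - kzp) / (eps_par * kz0 + kzp)
    return rp.imag

def im_rp_hmm(eps_par, eps_perp):
    """Low-loss, high-k approximation for a hyperbolic medium (from the text)."""
    prod = abs(eps_par * eps_perp)
    return 2.0 * math.sqrt(prod) / (1.0 + prod)
```

For example, with $\epsilon_\parallel=2$, $\epsilon_\perp=-3$ and $k_\rho=10^3 k_0$, the exact and approximate values agree to a few parts in $10^6$, while a lossy dielectric with $\epsilon = 2 + 0.01i$ gives $\operatorname{Im}(r^p)$ of only a few times $10^{-3}$.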
\section{Thermal emission from multilayered structures}
In this section we consider multilayered structures. In the field of metamaterials, multilayered structures are widely used to realize effective uniaxial media. The aim here is to go beyond effective medium theory and calculate the exact thermal emission from multilayered structures using the second kind of FDT. We assume that the medium in all layers is isotropic and non-magneto-optical for simplicity. To find the DGFs relating the random currents in each layer to the vacuum region, we follow the method in Ref.~\onlinecite{kong_electromagnetic_1990}. First, assuming the current source is in the vacuum region, we calculate the fields induced by the source in all the layers by the transfer matrix method, which matches the boundary conditions at all the interfaces; the DGFs with the source in the vacuum region are then readily obtained. Next, we use the reciprocity of the DGF to obtain the DGF when the sources are in the lower space.
DGF in the vacuum region ($z<z'$) is
\begin{align}
&{\stackrel{\leftrightarrow}{G}}_{00}(\boldsymbol r,\boldsymbol r') = \frac{i}{{8{\pi ^2}}}\iint {\frac{{d{k_x}d{k_y}}}{k_{z0}}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber \\
&\Big\{\left({\hat s}_-^0 e^{-ik_{z0}z}+r^s {\hat s}_+^0 e^{ik_{z0}z}\right){\hat s}_-^0 e^{ik_{z0}z'} \nonumber \\
&+ ({\hat p}_-^0 e^{-ik_{z0}z}+r^p {\hat p}_+^0 e^{ik_{z0}z}){\hat p}_-^0 e^{ik_{z0}z'}\Big\}
\end{align}
DGF in the intermediate slabs are
\begin{align}
&{\stackrel{\leftrightarrow}{G}}_{l0}(\boldsymbol r,\boldsymbol r') = \frac{i}{{8{\pi ^2}}}\iint {\frac{{d{k_x}d{k_y}}}{k_{z0}}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber \\
&\Big\{\left(B_l{\hat s}_-^l e^{-ik_{zl}z}+A_l {\hat s}_+^l e^{ik_{zl}z}\right){\hat s}_-^0 e^{ik_{z0}z'} \nonumber \\
&+ (D_l{\hat p}_-^l e^{-ik_{zl}z}+C_l {\hat p}_+^l e^{ik_{zl}z}){\hat p}_-^0 e^{ik_{z0}z'}\Big\}
\end{align}
DGF in the last layer is
\begin{align}
&{\stackrel{\leftrightarrow}{G}}_{(N+1)0}(\boldsymbol r,\boldsymbol r') = \frac{i}{{8{\pi ^2}}}\iint {\frac{{d{k_x}d{k_y}}}{k_{z0}}}{e^{i{\boldsymbol k_\perp}\cdot(\boldsymbol r_\perp^{} -\boldsymbol r_\perp^\prime)}} \nonumber\\
&\left\{t_s{\hat s}_-^t e^{-ik_{zt}z}{\hat s}_-^0 e^{ik_{z0}z'} \right.
+ \left.t_p{\hat p}_-^t e^{-ik_{zt}z}{\hat p}_-^0 e^{ik_{z0}z'}\right\}
\end{align}
Note that in the last layer there are only downward waves, namely the transmitted waves.
The boundary conditions give\cite{kong_electromagnetic_1990}
\begin{align}
& A_l e^{ik_{zl}z_l}+B_l e^{-ik_{zl}z_l} =\nonumber\\
& A_{l+1} e^{ik_{z(l+1)}z_l}+B_{l+1} e^{-ik_{z(l+1)}z_l} \\
& k_{zl}(A_l e^{ik_{zl}z_l}-B_l e^{-ik_{zl}z_l}) =\nonumber \\
& k_{z(l+1)}(A_{l+1} e^{ik_{z(l+1)}z_l}-B_{l+1} e^{-ik_{z(l+1)}z_l})
\end{align}
for s-polarized waves, and
\begin{align}
&\sqrt{\epsilon_l}(C_l e^{ik_{zl}z_l}+D_l e^{-ik_{zl}z_l})=\nonumber\\
&\sqrt{\epsilon_{l+1}}(C_{l+1} e^{ik_{z({l+1})}z_l}+D_{l+1} e^{-ik_{z(l+1)}z_l}) \\
&\frac{k_{zl}}{\sqrt{\epsilon_l}}(C_l e^{ik_{zl}z_l}-D_l e^{-ik_{zl}z_l}) = \nonumber\\
&\frac{k_{z({l+1})}}{\sqrt{\epsilon_{l+1}}}(C_{l+1} e^{ik_{z(l+1)}z_l}-D_{l+1} e^{-ik_{z(l+1)}z_l})
\end{align}
for p-polarized waves.
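These recursions can be solved numerically. The following is a minimal sketch of our own (s-polarization only, using the standard composition rule for interface reflections rather than the $A_l$, $B_l$ amplitudes explicitly); it reproduces the bare Fresnel coefficient when an inner layer has zero thickness, and a half-wave layer is correctly invisible:

```python
import cmath

def rs_stack(eps_layers, thicknesses, krho, k0=1.0):
    """s-polarized reflection coefficient of vacuum / (inner layers) / substrate.
    eps_layers: permittivities of the inner layers followed by the substrate;
    thicknesses: thicknesses of the inner layers (one fewer entry)."""
    eps = [1.0] + list(eps_layers)                      # prepend the vacuum half-space
    kz = [cmath.sqrt(e * k0**2 - krho**2) for e in eps]
    # start from the bottom interface and fold upward
    r = (kz[-2] - kz[-1]) / (kz[-2] + kz[-1])
    for l in range(len(eps) - 3, -1, -1):
        r_l = (kz[l] - kz[l + 1]) / (kz[l] + kz[l + 1])
        phase = cmath.exp(2j * kz[l + 1] * thicknesses[l])
        r = (r_l + r * phase) / (1 + r_l * r * phase)
    return r
```

For a half-wave layer ($k_{z1} d = \pi$) the stack reflects exactly like the bare substrate, a standard check of the recursion.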
Following the same steps as in the uniaxial case, the final expression is
\begin{widetext}
\begin{align}
u(\omega,z)=\frac{{\omega}^3}{8 \pi^2 c^4}\Theta(\omega,T)e^{-2\operatorname{Im}(k_{z0})z}\sum \limits _{l=1}^{N+1}\int _{z_l}^{z_{l-1}} dz' \int_{0}^{+\infty}k_{\rho}{dk_{\rho}}\frac{1}{|k_{z0}|^2}\left(1+\frac{k_{\rho}^2+|{k_{z0}^2|}}{k_0^2}\right)\epsilon_{l}'' \nonumber \\
\left(\left |A_le^{ik_{zl}z'}+B_le^{-ik_{zl}z'}\right |^2+\left |\frac{k_{zl}(C_le^{ik_{zl}z'}-D_le^{-ik_{zl}z'})}{k_0\sqrt{\epsilon_l}}\right |^2+\left |\frac{k_{\rho}(C_le^{ik_{zl}z'}+D_le^{-ik_{zl}z'})}{k_0\sqrt{\epsilon_l}}\right |^2 \right),
\end{align}
\end{widetext}
where $N$ is the total number of layers in the structure.
To simplify the above result, we first note that the integral
\begin{align}
\label{last layer}
&\int _{z_l}^{z_{l-1}} dz' k_0^2\epsilon_l''\left |A_le^{ik_{zl}z'}+B_le^{-ik_{zl}z'}\right |^2= \nonumber \\
&\operatorname{Re} \left[ k_{zl}(-A_l e^{ik_{zl}z}+B_l e^{-ik_{zl}z})(A_l e^{ik_{zl}z}+B_l e^{-ik_{zl}z})^*\right]\Big |_{z_{l}}^{z_{l-1}} \nonumber \\
&=Q_l(z_{l-1})-Q_l(z_{l}),
\end{align}
which is valid for all layers.
From the boundary condition, we have
\begin{equation}
Q_l(z_{l})=Q_{l+1}(z_{l})
\end{equation}
Thus we find
\begin{align}
&\sum \limits _{l=1}^{N+1} \int _{z_l}^{z_{l-1}} dz' k_0^2\epsilon_l''\left |A_le^{ik_{zl}z'}+B_le^{-ik_{zl}z'}\right |^2= \nonumber \\
&=Q_0(z_{0})-Q_{N+1}(z_{N+1}).
\end{align}
For the last term, $z_{N+1}=-\infty$, so $Q_{N+1}(z_{N+1})=0$, and in our convention, $z_0=0$.
Evaluating the remaining term $Q_0(z_0)$, the final result is
\begin{align}
\operatorname{Re} \left[ k_{z0}(1-r^s)(1+r^s)^*\right]= \nonumber \\
\left\{
\begin{gathered}
(1-|r^s|^2)|k_{z0}|,\qquad k_\rho<k_0 \\
2\operatorname{Im}(r^s)|k_{z0}|,\qquad k_\rho>k_0
\end{gathered} \right. .
\end{align}
This is the contribution from s-polarized waves. For p-polarized waves, the corresponding identity is
\begin{align}
\int _{z_l}^{z_{l-1}} dz' k_0^2\epsilon_l''\left (\left | \frac{k_{zl}(C_le^{ik_{zl}z'}-D_le^{-ik_{zl}z'})}{k_0\sqrt{\epsilon_l}}\right |^2 \right. \nonumber\\
\left.+\left |\frac{k_{\rho}(C_le^{ik_{zl}z'}+D_le^{-ik_{zl}z'})}{k_0\sqrt{\epsilon_l}}\right |^2 \right) \nonumber\\
=\operatorname{Re} \left[ \frac{k_{zl}}{\sqrt{\epsilon_l}}(C_l e^{ik_{zl}z}-D_l e^{-ik_{zl}z}) \right. \nonumber\\
\left.(\sqrt{\epsilon_l}(C_l e^{ik_{zl}z}+D_l e^{-ik_{zl}z}))^*\right]\Big |_{z_{l}}^{z_{l-1}}
\end{align}
The contribution from p-polarized waves can then be evaluated in a similar way.
The final expression for thermal emission from a half-space multilayered structure is again given by Eq.~(\ref{general expression}), where the reflection coefficients are now those of the whole structure.
If we are interested in a slab in vacuum rather than a half-space structure, we can eliminate the contribution from the vacuum part of the last layer. To do so, note that in Eq.~(\ref{last layer}) the last layer has $A_{N+1}=0$ and $B_{N+1}=t^s$, so the right-hand side is $\operatorname{Re} (k_{z0}) |t^s|^2$, which vanishes for evanescent waves. Subtracting this term from Eq.~(\ref{general expression}) gives the thermal emission from a multilayered slab in vacuum,
\begin{align}
\label{thermal slab}
&u(\omega,z) = \frac{{U{}_{BB}(\omega ,T)}}{2} \nonumber \\
&\left\{ {\int_0^{{k_0}} {\frac{{{k_\rho }d{k_\rho }}}{{{k_0}\left| {{k_{z0}}} \right|}}} } \right.\frac{(1 - |r^s|^2-|t^s|^2) +(1- |r^p|^2-|t^p|^2)}{2} \nonumber \\
& + \left. {\int_{{k_0}}^\infty {\frac{{k_\rho ^3d{k_\rho }}}{{k_0^3\left| {{k_{z0}}} \right|}}{e^{ - 2\operatorname{Im} ({k_{z0}})z}}(\operatorname{Im} ({r^s}) + \operatorname{Im} ({r^p}))} } \right\}.
\end{align}
The above expression can also be obtained by replacing $ 1-|r|^2 $ in Eq.~(\ref{general expression}) with $ 1-|r|^2-|t|^2 $, which is consistent with Kirchhoff's law.
\section{Scattering matrix method and spatial coherence}
We now describe another approach to evaluating the near-field energy density near metamaterials: the scattering matrix method. First, however, we discuss a few important points related to the concept of the thermal environment. We note that when the lower space is vacuum, the reflection coefficients are zero. As a result, in Eq.~(\ref{general expression}) the contribution from the evanescent waves is zero while that from the propagating waves is nonzero. However, this is not very intuitive from the FDT. The reason is that the loss of vacuum, i.e.\ its $\epsilon''$, is zero, and from the second kind of FDT the correlation function of the random currents in vacuum should then vanish, suggesting zero field correlation. It turns out that for an unbounded vacuum region, we should add an infinitesimal imaginary part to $\epsilon_0$, integrate over the region, and then take the limit of vanishing imaginary part in the final expression\cite{landau_statistical_1980-1,kruger_trace_2012}. This is needed to preserve causality requirements. In the derivation of Eq.~(\ref{general expression}), we have integrated the source region $z'$ from $-\infty$ to $0$. However, for a vacuum gap of any finite width, the field correlations originating from the gap can be shown to vanish\cite{eckhardt_macroscopic_1984}. For this reason, fluctuations in vacuum can be interpreted as coming from infinity.
It is then natural to think about the thermal emission from the upper space vacuum region as well. If the vacuum region is also at temperature T, the system is at global thermal equilibrium. Therefore we can employ the first kind of FDT to calculate the thermal energy density. This approach is used in Ref.~\onlinecite{joulain_definition_2003} to define the local density of states. Here we directly cite the final result,
\begin{align}
\label{total emission}
&u_{eq}(z,\omega ,T) = \frac{{U{}_{BB}(\omega ,T)}}{2} \nonumber \\
&\left\{ {\int_0^{{k_0}} {\frac{{{k_\rho }d{k_\rho }}}{{{k_0}\left| {{k_{z0}}} \right|}}} } \right.(2+\frac{k_\rho^2}{k_0^2}[\operatorname{Re}(r^s e^{2ik_{z0}z})+\operatorname{Re}(r^p e^{2ik_{z0}z})]) \nonumber \\
& + \left. {\int_{{k_0}}^\infty {\frac{{k_\rho ^3d{k_\rho }}}{{k_0^3\left| {{k_{z0}}} \right|}}{e^{ - 2\operatorname{Im} ({k_{z0}})z}}(\operatorname{Im} ({r^s}) + \operatorname{Im} ({r^p}))} } \right\}
\end{align}
Note again that the contribution from evanescent waves equals that of Eq.~(\ref{general expression}), implying that there is no evanescent-wave contribution from the upper-space vacuum region.
However, out of equilibrium, determining the electromagnetic fields induced by every random current inside the medium using the second kind of FDT is quite laborious. We note from the second kind of FDT that the currents are not spatially correlated, which suggests that the thermal emission from different spatial regions can be calculated separately. In thermal equilibrium, we can calculate the thermal energy density by the first kind of FDT. Thus, if we can calculate the thermal emission from the upper-space vacuum part at temperature $T$, the thermal emission from the lower space alone can be obtained by subtracting the vacuum contribution from the total thermal energy density.
The electric field generated by the upper half vacuum space can be written as\cite{bimonte_general_2007}
\begin{equation}
\vec E_f(\omega,\boldsymbol r)=\int \frac{d^2 \boldsymbol k_\perp}{4\pi^2} \vec E_f(\omega,\boldsymbol k_\perp, z) e^{i\boldsymbol k_\perp \cdot \boldsymbol r_\perp}
\end{equation}
where
\begin{equation}
\vec E_f(\omega,\boldsymbol k_\perp, z)=(a_s(\omega,\boldsymbol k_\perp)\hat s_-^0 + a_p(\omega,\boldsymbol k_\perp)\hat p_-^0 )e^{-ik_{z0}z}.
\end{equation}
$a_s$ and $a_p$ are the field amplitudes for s- and p-polarized waves, respectively. The operator $a=(a_s,a_p)^T$ satisfies the correlation function\cite{bimonte_general_2007},
\begin{align}
\left\langle {a(\omega,\boldsymbol k_\perp) \otimes a^\dagger (\omega',\boldsymbol k_\perp^\prime)} \right\rangle &=\nonumber\\
4\pi^2 C(\omega,\boldsymbol k_\perp) &\delta(\omega-\omega')\delta^2(\boldsymbol k_\perp-\boldsymbol k_\perp^\prime).
\end{align}
The coefficient $C$ can be read directly from FDT and the free space DGF,
\begin{equation}
C(\omega,k_\perp)=\frac{\mu_0\omega}{4\pi }\Theta(\omega,T)\operatorname{Re}{(\frac{1}{k_{z0}})},
\end{equation}
which vanishes for evanescent waves.
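That $C$ vanishes for evanescent waves follows directly from the factor $\operatorname{Re}(1/k_{z0})$; a two-line check (with $k_0$ normalized to 1 for illustration):

```python
import numpy as np

# Re(1/k_z0) selects propagating waves only: for k_rho > k0 the vacuum
# normal wavevector k_z0 is purely imaginary, so the coefficient C above
# vanishes and free-space fluctuations carry no evanescent component.
k0 = 1.0
kz_prop = np.sqrt(k0**2 - (0.5 * k0)**2 + 0j)   # k_rho = 0.5 k0, propagating
kz_evan = np.sqrt(k0**2 - (2.0 * k0)**2 + 0j)   # k_rho = 2.0 k0, evanescent
print(np.real(1 / kz_prop), np.real(1 / kz_evan))
```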
These fluctuations from the upper vacuum region shine on the interface and are reflected. The total fields due to fluctuations in the vacuum part are
\begin{align}
\label{eq:vacuum fluctuation}
\vec E_0(z, & \omega,\boldsymbol k_\perp)=(a_s(\omega,\boldsymbol k_\perp)s_-^0 + a_p(\omega,\boldsymbol k_\perp)p_-^0 )e^{-ik_{z0}z} \nonumber \\
&+(r^s a_s(\omega,\boldsymbol k_\perp)s_+^0 + r^p a_p(\omega,\boldsymbol k_\perp)p_+^0 )e^{ik_{z0}z}.
\end{align}
The magnetic fields can be calculated using Eq.~(\ref{eq:vacuum fluctuation}) and Maxwell equations.
Then one can find the energy density due to the fluctuations in the upper space vacuum,
\begin{align}
\label{thermal vacuum}
& u_0(z,\omega,T)=\frac{{U{}_{BB}(\omega ,T)}}{2} \int_0^{k_0} \frac{{{k_\rho }d{k_\rho }}}{k_0|k_{z0}|} \Big \{1+ \nonumber \\
& \frac{|r^s|^2+|r^p|^2}{2}+ \frac{k_\rho^2}{k_0^2}[\operatorname{Re}(r^s e^{2ik_{z0}z})+\operatorname{Re}(r^p e^{2ik_{z0}z})]\Big \}
\end{align}
Subtracting Eq.~(\ref{thermal vacuum}) from Eq.~(\ref{total emission}), we recover the expression by the second kind of FDT.
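The recovery can be checked per wavevector: the $z$-dependent interference terms are common to the two propagating-wave integrands and drop out of the difference, leaving the $z$-independent absorptivity-like factor $(1-|r^s|^2)/2+(1-|r^p|^2)/2$. A minimal numeric sketch (illustrative permittivity; the interference phase is written at height $z$ in both expressions):

```python
import numpy as np

# Illustrative permittivity of the lower half-space; k0 sets the scale.
eps, k0 = -4.0 + 0.5j, 1.0
k_rho = 0.6 * k0                                  # a propagating wavevector
kz0 = np.sqrt(k0**2 - k_rho**2 + 0j)
kz1 = np.sqrt(eps * k0**2 - k_rho**2 + 0j)
rs = (kz0 - kz1) / (kz0 + kz1)                    # Fresnel coefficients
rp = (eps * kz0 - kz1) / (eps * kz0 + kz1)

def interference(z):
    """z-dependent standing-wave term common to both energy densities."""
    ph = np.exp(2j * kz0 * z)
    return (k_rho / k0)**2 * (np.real(rs * ph) + np.real(rp * ph))

for z in (0.1, 1.0, 7.3):
    diff = (2 + interference(z)) - (1 + (abs(rs)**2 + abs(rp)**2) / 2
                                    + interference(z))
    print(round(diff, 12))  # identical for every z
```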
From the definition of the cross-spectral density tensor
\begin{equation}
W(\boldsymbol r_1,\boldsymbol r_2, \omega)\delta(\omega-\omega')=\left\langle{\vec E(\boldsymbol r_1,\omega)\otimes \vec E^*(\boldsymbol r_2,\omega')}\right\rangle,
\end{equation}
one can find the spatial coherence due to fluctuations in the upper space vacuum,
\begin{align}
W_{zz}^0(\boldsymbol r_1,\boldsymbol r_2,\omega)=&\frac{{U{}_{BB}(\omega ,T)}}{4\epsilon_0} \int_0^{k_0} \frac{{k_\rho^3 d{k_\rho }}}{k_0^3|k_{z0}|} J_0(k_\rho d)\nonumber \\
&\big[\frac{1+|r^p|^2}{2}+\operatorname{Re}(r^p e^{2ik_{z0}z})\big]
\end{align}
where $ \boldsymbol r_1=(0,0,z) $, $ \boldsymbol r_2=(d,0,z) $, and $ \int_{0}^{2\pi}d\theta\, e^{ik_\rho d \cos \theta}=2\pi J_0(k_\rho d) $ is used; $ J_0(k_\rho d) $ is the zeroth-order Bessel function of the first kind.
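The angular identity used here is easy to verify numerically, comparing the $\theta$ integral against the Bessel power series (a standalone check, not part of the derivation):

```python
import numpy as np

def j0_series(x, terms=40):
    """J0(x) from its power series sum_m (-1)^m (x/2)^(2m) / (m!)^2."""
    out, term = 0.0, 1.0
    for m in range(terms):
        out += term
        term *= -(x / 2) ** 2 / (m + 1) ** 2
    return out

# Verify: int_0^{2pi} exp(i*x*cos(theta)) dtheta = 2*pi*J0(x), with x = k_rho*d.
x = 2.7                                             # an arbitrary k_rho * d
theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
lhs = 2 * np.pi * np.mean(np.exp(1j * x * np.cos(theta)))
print(abs(lhs - 2 * np.pi * j0_series(x)) < 1e-10)  # → True
```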
Further, from Eq.~(\ref{first kind}), the first kind of FDT, we have
\begin{align}
W_{zz}^{eq}(\boldsymbol r_1,\boldsymbol r_2,\omega)=& \frac{{U{}_{BB}(\omega ,T)}}{4\epsilon_0} \Big \{\int_0^{k_0} \frac{{k_\rho^3 d{k_\rho }}}{k_0^3 |k_{z0}|} J_0(k_\rho d) \nonumber \\
&\big[1+\operatorname{Re}(r^p e^{2ik_{z0}z})\big]+ \int_{k_0}^{\infty} \frac{{k_\rho^3 d{k_\rho }}}{k_0^3 |k_{z0}|} \nonumber \\
& J_0(k_\rho d) \operatorname{Im}(r^p)e^{-2\operatorname{Im}(k_{z0})z} \Big \}
\end{align}
Then the contribution from the lower space structure is
\begin{align}
W_{zz}(\boldsymbol r_1,\boldsymbol r_2,\omega)=& \frac{{U{}_{BB}(\omega ,T)}}{4\epsilon_0} \Big \{\int_0^{k_0} \frac{{k_\rho^3 d{k_\rho }}}{k_0^3 |k_{z0}|} J_0(k_\rho d) \nonumber \\
&\frac{1-|r^p|^2}{2}+ \int_{k_0}^{\infty} \frac{{k_\rho^3 d{k_\rho }}}{k_0^3 |k_{z0}|} \nonumber \\
& J_0(k_\rho d) \operatorname{Im}(r^p)e^{-2\operatorname{Im}(k_{z0})z} \Big \}
\end{align}
Only p-polarized waves contribute to $W_{zz}$, since s-polarized waves have no $E_z$ component.
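The geometric reason is immediate from the polarization unit vectors; a small numpy check (arbitrary illustrative wavevector components):

```python
import numpy as np

# For a plane wave with wavevector k = (k_rho, 0, k_z), the s-polarization
# unit vector is perpendicular to both k and z-hat and so has no
# z-component; the p-polarization vector has |p_z| = k_rho/|k|, which is
# why only p-polarized waves enter W_zz (and with extra powers of k_rho).
k_rho, k_z = 0.7, 0.5                     # arbitrary illustrative components
k = np.array([k_rho, 0.0, k_z])
z_hat = np.array([0.0, 0.0, 1.0])
s_hat = np.cross(z_hat, k)
s_hat = s_hat / np.linalg.norm(s_hat)
p_hat = np.cross(s_hat, k) / np.linalg.norm(k)
print(s_hat[2], abs(p_hat[2]))
```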
Once again, if the structure is a multilayered slab in vacuum, the contribution from the lower vacuum space can be evaluated using the scattering matrix method in a similar way to the upper vacuum space. The fields due to the vacuum fluctuations in the lower space transmit through the planar structure,
\begin{equation}
\vec E_t(\omega,\boldsymbol k_\perp, z)=(t^s a_s(\omega,\boldsymbol k_\perp)\hat s_-^0 + t^p a_p(\omega,\boldsymbol k_\perp)\hat p_-^0 )e^{ik_{z0}z}.
\end{equation}
It is clear that the contributed energy density is proportional to $|t|^2$, so we recover the result of Eq.~(\ref{thermal slab}). Note that, by reciprocity, the transmission coefficients from the two sides of the structure are identical.
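This reciprocity statement can be illustrated with a textbook 2x2 transfer-matrix construction for a planar stack (a generic sketch with invented parameter values, not the specific structure studied here):

```python
import numpy as np

def t_s_multilayer(eps_layers, d_layers, k_rho, k0):
    """s-polarization transmission amplitude of a planar stack in vacuum,
    built from standard 2x2 interface and propagation transfer matrices."""
    eps = [1.0] + list(eps_layers) + [1.0]          # vacuum on both sides
    kz = [np.sqrt(e * k0**2 - k_rho**2 + 0j) for e in eps]
    M = np.eye(2, dtype=complex)
    for i in range(len(eps) - 1):
        r = (kz[i] - kz[i + 1]) / (kz[i] + kz[i + 1])
        t = 2 * kz[i] / (kz[i] + kz[i + 1])
        M = M @ (np.array([[1, r], [r, 1]], dtype=complex) / t)
        if i < len(eps) - 2:                        # propagate across layer i+1
            ph = np.exp(1j * kz[i + 1] * d_layers[i])
            M = M @ np.array([[1 / ph, 0], [0, ph]], dtype=complex)
    return 1.0 / M[0, 0]

# An asymmetric lossy two-layer stack transmits identically from both sides,
# even though its two reflection coefficients differ.
eps_stack, widths = [2.1 + 0.05j, -3.0 + 0.4j], [0.3, 0.8]
t_fwd = t_s_multilayer(eps_stack, widths, k_rho=0.4, k0=1.0)
t_bwd = t_s_multilayer(eps_stack[::-1], widths[::-1], k_rho=0.4, k0=1.0)
print(abs(t_fwd - t_bwd) < 1e-10)  # → True
```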
Generally speaking, for a single object in thermal equilibrium, the energy density can be determined by the first kind of FDT, which involves only a single scattering event. To find the contribution from the object alone, we exclude the contribution from the environment, which can also be expressed through the scattering matrix of the object. If there are several objects at different temperatures, we first determine the thermal emission from one specific object in the absence of the others and then build in the scattering from the other objects; in this procedure, the temperatures of the other objects and of the environment are set to zero. Note that this is the basic idea of M. Kardar and co-authors in subsequent works\cite{kruger_nonequilibrium_2011,golyk_heat_2012,kruger_trace_2012}. Beyond the multilayered structures considered here, these authors also give the scattering matrices of various geometries, including the sphere and cylinder. For more complicated objects, numerical methods are also well developed.\cite{mccauley_modeling_2012,rodriguez_fluctuating-surface-current_2012,rodriguez_fluctuating-surface-current_2013,otey_fluctuational_2014}
\section{Results and discussions}
\begin{figure}
\includegraphics{fig_1.pdf}
\caption{\label{fig:emt_para} (a) Schematic of the multilayered structure and the coordinates. The spatial coherence is calculated between $\boldsymbol r_1=(0,0,z)$ and $\boldsymbol r_2=(d,0,z)$. (b) Effective permittivities of a SiO$_2$-SiC multilayered structure, where the fill fraction of SiC is 0.4. Only the real part of the permittivity is plotted. The insets, from left to right, denote the iso-frequency dispersion of a dielectric, a type II HMM, and a type I HMM.}
\end{figure}
\begin{figure}
\includegraphics{fig_2.pdf}
\caption{\label{fig:far field} Normalized far-field thermal emission of a 3$\mu$m SiO$_2$-SiC multilayered structure, where the fill fraction of SiC is 0.4.}
\end{figure}
There are multiple approaches to achieving hyperbolic dispersion\cite{cortes_quantum_2012,guo_applications_2012}. Two of the prominent geometries consist of 1D or 2D periodic metal-dielectric structures. We consider here a multilayer combination of silicon dioxide (SiO$_2$) and silicon carbide (SiC), which has a metallic response in the Reststrahlen band due to phonon polaritons ($\operatorname{Re}(\epsilon)<0$ between $\omega_{TO}=149.5 \times 10^{12} $ Hz and $\omega_{LO}=182.7\times 10^{12}$ Hz, the transverse and longitudinal optical phonon resonance frequencies). The permittivity of SiC is given by $\epsilon_m=\epsilon_\infty(\omega_{LO}^2-\omega^2-i\gamma\omega)/(\omega_{TO}^2-\omega^2-i\gamma\omega)$, where $\omega$ is the frequency of operation, $\epsilon_\infty=6.7$, and $\gamma=0.9\times10^{12}$ Hz. We note that this realization formed the testbed for the first complete characterization of the modes of hyperbolic media, owing to their low loss compared to plasmonic media\cite{korobkin_measurements_2010}. The modes of this HMM can be excited at relatively low temperatures (400-500K), when the peak of blackbody emission lies within the Reststrahlen band of SiC.
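For reference, the quoted SiC Lorentz model can be sketched directly (the numbers are treated as angular frequencies, consistent with the values of $\omega$ used later in the text):

```python
import numpy as np

# Lorentz model of SiC quoted above.
w_TO, w_LO, gamma, eps_inf = 149.5e12, 182.7e12, 0.9e12, 6.7

def eps_sic(w):
    return eps_inf * (w_LO**2 - w**2 - 1j * gamma * w) \
                   / (w_TO**2 - w**2 - 1j * gamma * w)

# Metallic (Re(eps) < 0) inside the Reststrahlen band, dielectric outside.
print(eps_sic(1.6e14).real < 0)   # inside the band  → True
print(eps_sic(1.4e14).real > 0)   # below omega_TO   → True
```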
\begin{figure}
\includegraphics{fig_3.pdf}
\caption{\label{fig:1_EMT_dm} Wavevector resolved thermal emission (normalized to blackbody emission into the upper half-space, log scale) from a SiO$_2$-SiC multilayered structure at z=200nm, calculated by (a) the transfer matrix method and (b) EMT. The structure consists of 40 layers of SiO$_2$/SiC, 30nm/20nm, achieving a net thickness of 1$\mu$m. The presence of high-k modes is clearly evident in both the EMT calculation and the practical multilayer realization, which takes into account all non-idealities due to dispersion, losses, finite unit cell size, and finite sample size. The bright curves denote the enhanced thermal emission due to high-k modes in the HMM. In the practical multilayered structure, the high-k modes come from the coupled short-range surface phonon polaritons at the silicon carbide and silicon dioxide interfaces.}
\end{figure}
To understand the thermal properties of phonon-polaritonic hyperbolic metamaterials we need to focus only on the Reststrahlen band of SiC, where it is metallic. The multilayer structure (see schematic in Fig.~\ref{fig:emt_para}(a)) shows a host of different electromagnetic responses, as predicted by effective medium theory: $\epsilon_\parallel=\epsilon_mf+\epsilon_d(1-f)$ and $\epsilon_\perp=\epsilon_m\epsilon_d/(\epsilon_df+\epsilon_m(1-f))$, where $f$ is the fill fraction of the metallic medium\cite{cortes_quantum_2012}.
We classify the effective uniaxial medium\cite{cortes_quantum_2012,guo_applications_2012} using the isofrequency surface of extraordinary waves which follow $k_z^2/{\epsilon_\parallel}+(k_x^2+k_y^2)/{\epsilon_\perp}=\omega ^2/c^2$ and the media are hyperboloidal only when $\epsilon_\parallel\epsilon_\perp<0$ . We can effectively achieve a type I hyperbolic metamaterial with only one negative component in the dielectric tensor ($\epsilon_\parallel>0$, $\epsilon_\perp<0$), type II hyperbolic metamaterial with two negative components ($\epsilon_\parallel<0$, $\epsilon_\perp>0$), effective anisotropic dielectric ($\epsilon_\parallel>0$, $\epsilon_\perp>0$) or effective anisotropic metal ($\epsilon_\parallel<0$, $\epsilon_\perp<0$). In Fig.~\ref{fig:emt_para}(b), we plot the effective permittivities of a SiO$_2$-SiC multilayered structure with the fill fraction 0.4 and label the two hyperbolic regions. As the purpose of this work is to examine how extraordinary waves in HMMs impact thermal emission properties, we only consider p-polarized waves in our numerical simulations.
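A minimal sketch of this EMT classification is given below. The SiO$_2$ permittivity is approximated by a constant $\epsilon_d = 4.0$ purely for illustration (the paper's figures use the real dispersive SiO$_2$ data); the classification at $\omega=1.6\times10^{14}$ then lands in the type II region, consistent with the phase plots:

```python
import numpy as np

def emt(eps_m, eps_d, f):
    """Effective parallel and perpendicular permittivities quoted above."""
    eps_par = eps_m * f + eps_d * (1 - f)
    eps_perp = eps_m * eps_d / (eps_d * f + eps_m * (1 - f))
    return eps_par, eps_perp

def classify(eps_par, eps_perp):
    a, b = eps_par.real, eps_perp.real
    if a > 0 and b > 0:
        return "dielectric"
    if a < 0 and b < 0:
        return "metal"
    return "type I HMM" if a > 0 else "type II HMM"

# SiC inside the Reststrahlen band at w = 1.6e14 (Lorentz model from the
# text), fill fraction f = 0.4, illustrative constant eps_d = 4.0:
w_TO, w_LO, gamma, eps_inf = 149.5e12, 182.7e12, 0.9e12, 6.7
w = 1.6e14
eps_m = eps_inf * (w_LO**2 - w**2 - 1j * gamma * w) \
                / (w_TO**2 - w**2 - 1j * gamma * w)
print(classify(*emt(eps_m, 4.0, 0.4)))  # → type II HMM
```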
\subsection{Far field thermal emission}
We first characterize the thermal emission of an HMM slab in the far field. This is extremely important for experiments currently being pursued in multiple groups. We clearly observe two peaks in Fig.~\ref{fig:far field}, in agreement with previous work on epsilon-near-zero (ENZ) and epsilon-near-pole (ENP) resonances for thermal emission\cite{molesky_high_2013}. The right peak occurs when $\epsilon_\perp$ is close to zero. From the displacement-field boundary condition, $\epsilon_{0}E_{0\perp}=\epsilon_{\perp}E_{1\perp}$, when $\epsilon_{\perp}\rightarrow 0$ the field inside the HMM, $E_{1\perp}$, must be very large. Thus large absorption is expected in this epsilon-near-zero region. The epsilon-near-pole resonance results in narrowband thermal emission due to the increase in the imaginary part of the dielectric constant in this ENP spectral region. The most critical aspect is which direction of the dielectric-tensor components shows ENZ or ENP behavior\cite{molesky_high_2013}: an ENZ in the component parallel to the interface, or an ENP in the component perpendicular to the interface, does not show such effects.
\subsection{Near field thermal emission}
\begin{figure}
\includegraphics{fig_4.pdf}
\caption{\label{fig:3_30_EMT} Wavevector resolved thermal emission (normalized to blackbody emission into the upper space) from (a) a 3$\mu$m thick HMM slab and (b) a 30$\mu$m thick HMM slab. The fill fraction of SiC is 0.4, the same as for the 1$\mu$m HMM slab. The two hyperbolic regions where the thermal emission is enhanced are evident. The modes supported by the 3$\mu$m slab are denser than those of the 1$\mu$m slab, and the modes supported by the 30$\mu$m slab are almost continuous.}
\end{figure}
\begin{figure}
\includegraphics{fig_5.pdf}
\caption{\label{fig:emission} (a) Normalized thermal emission from slabs of various thicknesses. The dashed black line is calculated using the transfer matrix method, while the solid lines are calculated using EMT parameters; 'DM' in the legend means the top layer of the SiO$_2$(dielectric)-SiC(metal) multilayer is SiO$_2$. Despite the clear difference in the density of modes supported by the slabs, shown in Fig.~\ref{fig:1_EMT_dm} and \ref{fig:3_30_EMT}, the thermal emission spectra are interestingly in good agreement. The two main peaks, where the thermal emission is largely enhanced, are due to the high-k states in the two hyperbolic regions. (b) Wavevector resolved thermal emission at $\omega=1.6\times10^{14}$Hz. The sharp peaks on the left ($k_\rho/k_0<2 $) are the surface modes. When $k_\rho/k_0>3$, the curve for the 30$\mu$m slab is almost flat with no oscillations, while those of the 1$\mu$m and 3$\mu$m slabs show the discrete modes denoted by crests and troughs.}
\end{figure}
\begin{figure}
\includegraphics{fig_6.pdf}
\caption{\label{fig:SiC_phase} Thermal emission by a 30$\mu$m SiC slab. The red bright curve represents the dispersion of the SPhP mode between the vacuum and SiC interface since the slab is very thick.}
\end{figure}
\begin{figure}
\includegraphics{fig_7.pdf}
\caption{\label{fig:spatial} Spatial coherence of (a) a 30$\mu$m SiC slab and (b) a 30$\mu$m HMM slab at 0.2$\mu$m and 1$\mu$m from the surface, at $\omega=1.6\times10^{14}$Hz and $\omega=1.79\times10^{14}$Hz. (a) At $\omega=1.6\times10^{14}$Hz, the SiC slab supports a single degenerate SPhP mode. As a result, the SiC slab has large spatial coherence at both 0.2$\mu$m and 1$\mu$m. $\omega=1.79\times10^{14}$Hz is the SPhP resonance frequency, where $\operatorname{Re}\epsilon_{\text{SiC}}=-1$; it corresponds to the bright horizontal line in the SPhP dispersion curve shown in Fig.~\ref{fig:SiC_phase}. This means that at this frequency multiple modes with different wavevectors can be thermally excited, so the spatial coherence is poor at both 0.2$\mu$m and 1$\mu$m. (b) At $\omega=1.6\times10^{14}$Hz, the HMM slab supports high-k states besides the SPhP mode. At 0.2$\mu$m, the high-k states contribute strongly to the fluctuating electric fields, and consequently the spatial coherence is poor. At the larger distance of 1$\mu$m, the high-k states do not reach that far because of their large wavevector $k_\rho$, so the fields are dominated by the surface mode, which has smaller $k_\rho$; the spatial coherence length is large due to this dominant surface mode. At $\omega=1.79\times10^{14}$Hz, the HMM slab supports only multiple high-k states, and unlike the type II hyperbolic region, there is no lower bound for the high-k wavevectors. Thus the spatial coherence is poor at both 0.2$\mu$m and 1$\mu$m.}
\end{figure}
Here we analyze the near-field thermal emission from multilayer hyperbolic media\cite{guo_broadband_2012}. We first focus on how the thermal emission depends on the thickness of the slabs. In Fig.~\ref{fig:1_EMT_dm}, we plot the wavevector resolved thermal emission from a structure consisting of 40 layers of SiO$_2$/SiC, 30nm/20nm, achieving a net thickness of 1$\mu$m. We clearly see multiple discrete high-k modes in both the type I and type II hyperbolic regions. Note that the thickness of 1$\mu$m is about one tenth of the operating wavelength, so these high-k modes will not occur in conventional isotropic dielectrics. Excellent agreement between the EMT prediction and the practical multilayered structure is seen, which validates the use of EMT for our structure. Further, we increase the thickness of the slab to 3$\mu$m and 30$\mu$m while keeping the same unit cell. The waveguide modes become denser, as expected. At a thickness of 30$\mu$m, the high-k modes are almost continuous and result in two bright bands in Fig.~\ref{fig:3_30_EMT}(b). This is close to the bulk metamaterial limit.
We show the thermal emission spectra in Fig.~\ref{fig:emission}(a) for various thicknesses of the metamaterial. The two main peaks are due to the high-k modes in the hyperbolic regions. In Fig.~\ref{fig:emission}(b), we plot the wavevector resolved thermal emission at a specific frequency $\omega=1.6\times10^{14}$Hz within the type II hyperbolic region, where the structure supports both a surface mode and high-k modes. The sharp peaks at the left are due to the surface mode, while the high-k modes emerge at larger $k_\rho$. In the high-k region, the curve for the 30$\mu$m slab is almost flat, indicative of a continuum of high-k modes. In contrast, the curves for the 1$\mu$m and 3$\mu$m slabs clearly show the existence of discrete high-k waveguide modes, featured by crests and troughs.
\subsection{Spatial coherence of hyperbolic metamaterial slab}
Surface waves can lead to a large spatial coherence length in the near field\cite{carminati_near-field_1999}. To see this, we first show in Fig.~\ref{fig:SiC_phase} the wavevector resolved thermal emission from a 30$\mu$m thick SiC slab. The bright curve gives the dispersion of the surface phonon polariton (SPhP) at the vacuum-SiC interface. Note that we will not see the splitting of the vacuum-SiC interface SPhP mode into long-range and short-range modes, since 30$\mu$m is on the order of several operating wavelengths. In the time domain, temporal coherence is best for monochromatic waves. By analogy, one can imagine that spatial coherence is favored when a single wavevector dominates the fields. This is indeed the case for surface waves. In Fig.~\ref{fig:spatial}(a), we plot the spatial coherence of the SiC slab at $\omega=1.6\times 10^{14}$Hz and $\omega=1.79\times 10^{14}$Hz. At the frequency $\omega=1.6\times 10^{14}$Hz, the SPhP mode wavevector $k_\rho$ is about $1.1k_0$. A large spatial coherence length is seen at both 0.2$\mu$m and 1$\mu$m from the interface. However, near the surface phonon polariton resonance (SPhPR) frequency $\omega=1.79\times 10^{14}$Hz, where $\epsilon_\text{SiC}=-1$, the mode dispersion curve is almost a horizontal line, which means that multiple modes with different wavevectors can be thermally excited. Thus a poor spatial coherence is expected. In Fig.~\ref{fig:spatial}(a), the spatial coherence is indeed poor at both 0.2$\mu$m and 1$\mu$m from the interface. This feature could be used to determine the resonance frequency.
Hyperbolic metamaterials can support multiple high-k modes, so the spatial coherence length should not be long in the hyperbolic region. This is indeed the case for the type I HMM. In Fig.~\ref{fig:spatial}(b), we plot $W_{zz}$ at $\omega=1.79\times 10^{14}$Hz, where the multilayered structure is effectively in the type I hyperbolic region. The spatial coherence lengths are only a fraction of the operating wavelength at both 0.2$\mu$m and 1$\mu$m from the interface.
However, the situation in the type II hyperbolic region is interestingly different. For an HMM slab in the type II hyperbolic region ($\epsilon_\parallel<0$, $\epsilon_\perp>0$), the slab can support a surface wave mode as well as multiple high-k modes. Thus we have two sets of modes that can result in a unique interplay of spatial coherence effects. Furthermore, these modes are separated in wavevector space because of the lower bound of the high-k states in the type II hyperbolic region\cite{cortes_quantum_2012}. High-k modes are confined to the surface more tightly than surface waves, and these high-k waves dominate at shorter distances from the interface. We choose $\omega=1.6\times 10^{14}$Hz within the type II hyperbolic region to confirm this point. At a distance of 0.2$\mu$m, the spatial coherence is very poor. However, at the larger distance of 1$\mu$m, the fluctuating fields have a large spatial coherence length. This is because at this distance the surface wave mode dominates the electric fields, while the high-k states rarely contribute. This distance-dependent behavior can have applications such as obtaining the mode distribution at a given frequency.
\subsection{Thermal Topological Transitions}
\begin{figure*}
\includegraphics{fig_8.pdf}
\caption{\label{fig:TTT} (a) Optical phase diagram of the SiC-SiO$_2$ multilayered structure predicted by EMT. The red region denotes effective dielectric, the blue region effective metal, the yellow region type I hyperbolic metamaterial, and the green region type II hyperbolic metamaterial. Thermal emission at z=200nm (log-scale plot, normalized to the blackbody radiation into the upper half-space) from the multilayered structure as a function of operating frequency and fill fraction, calculated by (b) EMT, (c) a SiO$_2$-SiC multilayer (first layer SiO$_2$), and (d) a SiC-SiO$_2$ multilayer (first layer SiC). In the effective metal region, the dark red line is due to the surface phonon polariton resonance. Both the type I and type II regions show a clear thermal emission enhancement due to bulk high-k modes, in agreement with the optical phase diagram.}
\end{figure*}
Until now, we have fixed the fill fraction to be 0.4. It is useful to examine the structure's behavior at various fill fractions. In Fig.~\ref{fig:TTT}(a), we plot the optical phase diagram\cite{cortes_quantum_2012,guo_broadband_2012} of this metamaterial which shows the isofrequency surfaces achieved at different frequencies and fill fractions of SiC. The phase diagram is classified as effective dielectric, effective metal, type I and type II HMM as introduced before\cite{cortes_quantum_2012,guo_applications_2012}.
Figure~\ref{fig:TTT}(b) shows the thermal energy density (normalized to blackbody radiation into the upper half-space) evaluated using Rytov's fluctuational electrodynamics for an effective medium slab at a distance of z=200nm from the metamaterial. It is seen that the regions of hyperbolic behavior exhibit super-Planckian thermal emission, in agreement with our previous analytical approximation; here, however, we go beyond effective medium theory and consider practical structures. The role of the surface waves is very important and can lead to significant deviations when the unit cell size is not significantly subwavelength\cite{kidwai_effective-medium_2012,tschikin_limits_2013,guo_thermal_2013}.
The macroscopic homogenization utilized to define a bulk electromagnetic response is valid when the wavelength of operation exceeds the unit cell size ($\lambda\gg a$). However, even at such wavelengths, if one considers evanescent waves incident on the metamaterial, the unit-cell microstructure causes significant deviations from EMT. This is an important issue for quantum and thermal applications, where the near-field properties essentially arise from evanescent-wave engineering (high-k modes)\cite{cortes_quantum_2012,guo_applications_2012}. For the multilayer HMM, at distances below the unit cell size, the thermal emission is dominated by evanescent waves with lateral wavevectors $k_\rho\gg1/a$. Since this is above the unit-cell cut-off of the metamaterial, the high-k modes do not contribute to thermal emission at such distances. It is therefore necessary to consider thermal emission from a practical multilayer structure, taking into account the layer thicknesses. This is shown in Fig.~\ref{fig:TTT}(c) and Fig.~\ref{fig:TTT}(d). The unit cell size is 200nm, and we consider a semi-infinite multilayer medium using the formalism outlined in Ref.~\onlinecite{kidwai_effective-medium_2012}. Excellent agreement is seen between the optical phases of the multilayer structure and the EMT calculation.
\section{Conclusion}
This work shows that extension of equilibrium and non-equilibrium fluctuational electrodynamics to the case of metamaterials can lead to novel phenomena and applications in thermal photonics.
We presented a unified picture of the far-field and near-field spectra for experimentalists and also introduced the near-field spatial coherence properties of hyperbolic media. We analyzed in detail thermal topological transitions and super-Planckian thermal emission in practical phonon-polaritonic hyperbolic metamaterials. We paid particular attention not only to the effective medium approximation but also discussed all the non-idealities limiting super-Planckian thermal emission from HMMs. We provided practical designs to experimentally measure and isolate the predicted effects. Our work should lead to a class of thermal engineering applications of metamaterials.
\section{Acknowledgment}
Z. Jacob wishes to acknowledge discussions with E. E. Narimanov. This work was partially supported by funding from Helmholtz Alberta Initiative, Alberta Innovates Technology Futures and National Science and Engineering Research Council of Canada.
\section{Introduction}
In the real world, noise and speaker interference can degrade the system performance of back-end speech applications. Speech separation effectively solves this problem by extracting the target speech from the mixed utterance.
Early blind speech separation methods, such as Deep Clustering (DPCL)~\cite{hershey2016deep,Isik2016SingleChannelMS}, Deep Attractor Network (DANet)~\cite{chen2017deep}, and Permutation Invariant Training (PIT)~\cite{yu2017permutation, kolbaek2017multitalker}, can separate each source from a mixed speech. These algorithms, formulated in the time-frequency domain, have an upper bound on reconstructing the waveform~\cite{luo2018tasnet}. Recent solutions in the time domain, such as the Time-domain Audio Separation Network (TasNet)~\cite{luo2018tasnet,luo2019conv,luo2018real} and Dual-Path RNN (DPRNN)~\cite{luo2020dual}, break through this constraint and achieve state-of-the-art performance on the separation task. Despite this, the unknown number of speakers and the global permutation problem remain two challenges for blind speech separation.
To address the above two problems, a framework called speech extraction~\cite{ge2021multi,9414998} or target speech separation~\cite{wang2018voicefilter,Li2020AtssNetTS} can extract a target speaker's speech from the mixed audio by utilizing an auxiliary reference speech of that speaker. However, certain tasks, e.g., meeting scenarios, require extracting multiple target speakers. The common approach is to run inference on the mixed speech several times, with each pass independent of the others, ignoring the interrelationship between the speech of different speakers at each frame. In addition, obtaining the reference speech of multiple target speakers in advance is difficult. Considering these problems, repeatedly processing the mixture speech for different target speakers separately is not a sensible solution.
It is worth noting that conference speech usually has a long duration and contains both single-talker and overlapped voice segments. Thus, it is feasible to use the single-talker segments labeled by a diarization system as the reference speech for participants, instead of obtaining additional enrollment speech. Speaker Diarization (SD)~\cite{wang2018speaker,sell2018diarization} technology is well suited to this role: SD aims to segment a continuous multi-speaker conversation into speaker turns and determine which speaker each segment belongs to. More recently, MC-TS-VAD~\cite{wang2022cross}, which adopts TSVAD as the post-processing module and employs cross-channel self-attention, achieved the best result in the Multi-party Meeting Transcription Challenge (M2Met).
In this work, we propose the SD-MTSS framework, a speech extraction method for multiple target speakers. It couples target speech separation with speaker diarization and selects TSVAD as the speaker diarization network. Based on the decisions from TSVAD~\cite{Ding2020PersonalVS}, we can obtain each speaker's reference speech directly from the mixed audio. In the separation stage, each speaker's reference speech is fed into the Multiple Target Speech Separation network (MTSS). The MTSS model infers each speaker's mask simultaneously and constrains the estimated masks to sum to 1, since the energies of different speakers at each time-frequency cell are not independent of each other. Moreover, SD-MTSS sets the non-target single-talker voice to silence using the binarized TSVAD decision, as in~\cite{Lin2021SparselyOS}. To the best of our knowledge, this work is the first to use diarization results for simultaneously extracting the speech of multiple target speakers.
The rest of this work is organized as follows. In Section 2, we present the architecture of the proposed SD-MTSS method. In Section 3, we report the experimental results and discussions. The conclusions are drawn in Section 4.
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\linewidth]{system.pdf}
\caption{Schematic diagram of the SD-MTSS system. \(\gamma\) is a threshold (often 0.5). \(s_1\) and \(s_2\) represent the two speakers in the mixture. \(m\), \(aux_*\), and \(est_*\) denote the input sequence, the reference speeches, and the inference results, respectively. \(d_*\) indicates the binarized TSVAD decision. }
\label{fig:system}
\vspace{-0.4cm}
\end{figure*}
\section{Methods}
\subsection{System design}
The system architecture is shown in Figure~\ref{fig:system}; it takes only the mixed speech as input for the target speech separation task. The SD-MTSS system consists of a speaker diarization (SD) module and a multiple target speech separation (MTSS) module. The SD module produces the TSVAD decisions for multiple speakers, i.e., the probabilities of each speaker's presence at the frame level. The MTSS module takes each speaker's reference speech from the SD module and the mixture audio as inputs, and then outputs the estimates for multiple target speakers. We describe these two modules in this section.
\subsection{Speaker diarization module}
The SD module in this system consists of a clustering-based module for target speaker embedding extraction and a TSVAD module for diarization results refinement~\cite{wang2022cross}.
\subsubsection{Clustering-based module}
The affinity matrix extraction model of TSVAD is based on the neural network in~\cite{Lin2019LSTMBS}, an LSTM-based model for similarity measurement in speaker diarization. It consists of two bidirectional long short-term memory (Bi-LSTM) layers and two fully connected layers. The LSTM-based model first splits the entire audio into short speech clips and extracts a speaker embedding for each segment. It then takes these segments as inputs and produces the initial diarization result by spectral clustering.
\subsubsection{TSVAD system}
The architecture of the TSVAD~\cite{wang2022cross} system is shown in Figure~\ref{fig:TSVAD}, and it mainly has three parts:
\begin{enumerate}
\item A pre-trained ResNet-based~\cite{he2016deep} speaker embedding model with ArcFace~\cite{deng2019arcface} and cosine similarity scoring.
\item A front-end model with the same architecture as the pre-trained model is used to extract the frame-level speaker embedding.
\item A back-end model consisting of an encoder layer, a BiLSTM layer, a linear layer, and a sigmoid function.
\end{enumerate}
First, the pre-trained speaker embedding model extracts the target speaker embeddings. Meanwhile, the front-end network loads its parameters to extract the frame-level speaker embeddings. The target speaker embeddings are repeatedly concatenated with the frame-level speaker embeddings and then fed into the back-end. Next, the encoder layer of the back-end model produces each target speaker's detection state. The BiLSTM layer takes these detection states as input and models the relationship between speakers. Finally, the linear layer coupled with a sigmoid function generates each speaker's final decision (the TSVAD decision). More details can be found in~\cite{wang2022cross}.
Using the TSVAD decision, we can obtain single-talker audio segments as the reference speech for each speaker. The scheme for obtaining the reference speech is shown in Figure~\ref{fig:system}. \(m\) \(\in\) \(\mathbb{R}^{1 \times T}\) is the input sequence. \(s_1\) and \(s_2\) indicate the two different speakers in the mixture. First, the TSVAD decision is passed through a threshold mechanism to produce the binarized results \(d_{s_1}\) and \(d_{s_2}\), whose values are 0 or 1. Then the decision for the frames where both \(s_1\) and \(s_2\) are present can be formulated as:
\begin{equation}
d_{mix} = d_{s_1} \otimes d_{s_2}
\label{eq1}
\end{equation}
where \(d_{s_1}, d_{s_2} \in \mathbb{R}^{1 \times T}\) and \(\otimes\) denotes the element-wise product. The reference speech can thus be formulated as:
\begin{equation}
aux_{s_1} = m \otimes {({d_{s_1}} - {d_{mix}})}
\label{eq2}
\end{equation}
\begin{equation}
aux_{s_2} = m \otimes {({d_{s_2}} - {d_{mix}})}
\label{eq3}
\end{equation}
where \(({d_{s_*}} - {d_{mix}})\) indicates the decision over the frames in which only \(s_1\) or \(s_2\) is present. Selected continuous audio segments of \(aux_{s_1}\) and \(aux_{s_2}\) are fed into the MTSS module as the reference speech for the subsequent separation task.
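Equations~(\ref{eq1})--(\ref{eq3}) amount to a few element-wise array operations. The following is a minimal NumPy sketch, not the authors' implementation: the function name, the 0.5 threshold, and the per-sample alignment of the decisions with \(m\) are illustrative assumptions.

```python
import numpy as np

def extract_reference(m, p_s1, p_s2, thr=0.5):
    # Binarize the soft TSVAD decisions (threshold value is illustrative).
    d_s1 = (p_s1 > thr).astype(float)
    d_s2 = (p_s2 > thr).astype(float)
    # Eq. (1): frames where both speakers are active simultaneously.
    d_mix = d_s1 * d_s2
    # Eqs. (2)-(3): keep only the frames where exactly one speaker talks.
    aux_s1 = m * (d_s1 - d_mix)
    aux_s2 = m * (d_s2 - d_mix)
    return aux_s1, aux_s2
```

Continuous nonzero stretches of \(aux_{s_1}\) and \(aux_{s_2}\) then serve as the enrollment-free reference speech.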
\subsection{Multiple target speech separation module}
\subsubsection{Backbone}
The backbone of the MTSS module is SpEx+~\cite{ge2020spex+}, which consists of a twin speech encoder, a speaker encoder, a speaker extractor, and a speech decoder. The twin speech encoder models the input sequence and the auxiliary speech in a common latent space by sharing structure and parameters. The speaker encoder is a ResNet-based speaker classifier used to generate the speaker embedding of the reference speech. The speaker extractor takes both the speaker embedding and the output of the twin speech encoder as inputs, and produces masks at three different scales. The speech decoder outputs the target speaker's speech by multiplying the input sequence with the multi-scale masks.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.85\linewidth]{TSVAD.pdf}
\caption{The structure of the TSVAD system. The front-end shares the same architecture with the pre-trained speaker embedding model. The target speaker embedding is repeatedly concatenated with the frame-level speaker embedding and then fed into the back-end model.}
\label{fig:TSVAD}
\vspace{-0.4cm}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.65\linewidth]{mtss.pdf}
\caption{The details of the MTSS module. \(\otimes\) denotes the element-wise product. \(aux_*\) and \(d_*\) are both obtained from the speaker diarization module.}
\label{fig:MTSS}
\vspace{-0.5cm}
\end{figure}
\subsubsection{MTSS module}
Here, we propose a speech extraction model for multiple target speakers (MTSS), which can simultaneously separate the speech of each speaker present in the conversation. The schematic diagram of the MTSS is shown in Figure~\ref{fig:MTSS}. Unlike the original SpEx+ network, which takes only one speaker's reference speech, MTSS takes both speakers' reference speech as inputs. Rather than requiring additional enrollment, we directly obtain the reference speech from the long utterance itself through the SD module; in real applications, the online TSVAD~\cite{wang2022} can be used here. Moreover, we replace the ReLU with a softmax to establish the relationship between the masks of the speakers in the same utterance. We believe that taking this interrelation into account improves the final separation performance, because by the definition of binary masks each time-frequency cell belongs to the speaker with the stronger energy. Specifically, the responses of MTSS, \({s^{\prime}}_1\) and \({s^{\prime}}_2\), can be formulated as:
\begin{equation}
({{s^{\prime}}_1, {{s^{\prime}}_2}}) = m \otimes \{ \text{softmax}(cat(mask_{s_1}, mask_{s_2})) \}
\label{eq4}
\end{equation}
where \(mask_{s_1}, mask_{s_2} \in \mathbb{R}^{N \times B \times 1 \times T}\) and \(\otimes\) is the element-wise product. \(\text{softmax}(*)\) and \(cat(*)\) indicate a softmax function and a concatenation operating on the penultimate dimension, respectively. We also implement a multi-task learning framework for the target speech separation, with the loss function shown in Equation~(\ref{eq6}). In addition, we refine the results with the binarized decisions \(d_{s_1}\) and \(d_{s_2}\) in the inference stage under meeting scenarios.
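The mask binding in Equation~(\ref{eq4}) can be sketched in NumPy as follows. This is a simplified illustration: the new stacking axis stands in for the penultimate dimension of the actual \(N \times B \times 1 \times T\) tensors.

```python
import numpy as np

def bind_masks(m, mask_s1, mask_s2):
    # Stack the two speakers' mask logits on a new axis (the "cat" of Eq. (4)).
    stacked = np.stack([mask_s1, mask_s2])
    # Softmax over the speaker axis: the masks now sum to 1 at every cell.
    e = np.exp(stacked - stacked.max(axis=0, keepdims=True))
    soft = e / e.sum(axis=0, keepdims=True)
    # Element-wise product with the mixture gives the two responses.
    return m * soft[0], m * soft[1]
```

By construction the two estimates add up to the mixture, encoding the assumption that each time-frequency cell belongs mostly to the speaker with the stronger energy.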
\begin{equation}
\begin{aligned}
\mathscr{L}({\theta}|m,ref_{s_{1,2}},s_{1,2},I_{s_{1,2}})&={\lambda}_1{\mathscr{L}}_{SI-SDR_{s_1}}+{\lambda}_2{\mathscr{L}}_{CE_{s_1}} + \\
& {\lambda}_1{\mathscr{L}}_{SI-SDR_{s_2}}+{\lambda}_2{\mathscr{L}}_{CE_{s_2}}
\label{eq6}
\end{aligned}
\end{equation}
where \(m\) is the input sequence, \(ref_*\) is the reference speech, \(s_*\) is the target speech, and \({\lambda}_{1}\) and \({\lambda}_{2}\) are the weights of the SI-SDR loss and the cross-entropy loss, respectively. In this work, we set \({\lambda}_1 = 0.5\) and \({\lambda}_2 = 0.25\) as the default values.
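For reference, the SI-SDR terms in the loss (and in the evaluation below) follow the standard scale-invariant definition; a minimal sketch:

```python
import numpy as np

def si_sdr(est, ref, eps=1e-8):
    # Zero-mean both signals, then project the estimate onto the reference.
    ref = ref - ref.mean()
    est = est - est.mean()
    target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    noise = est - target
    # Ratio of target power to residual power, in dB.
    return 10.0 * np.log10(np.dot(target, target) / (np.dot(noise, noise) + eps))
```

The training loss takes the negative of this quantity, so maximizing SI-SDR minimizes the loss; rescaling the estimate leaves the score unchanged.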
\begin{table*}[t]
\caption{SDR (dB), SI-SDR (dB), and PESQ of separated speech based on the MTSS method. N indicates the number of outputs per inference.}
\label{tab:wsj}
\centering
\begin{tabular}{c|c|cc|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Methods}} & \multirow{2}{*}{\textbf{N}} & \multicolumn{2}{c|}{\textbf{SDR}} & \multicolumn{2}{c|}{\textbf{SI-SDR}} & \multicolumn{2}{c}{\textbf{PESQ}} \\ \cline{3-8}
& & s1 & s2 & s1 & s2 & s1 & s2 \\ \hline
Mixture & - & 2.60 & -2.14 & 2.50 & -2.50 & 2.31 & 1.86 \\ \hline
SpeakerBeam~\cite{delcroix2018single} & 1 & 9.62 & - & 9.22 & - & 2.64 & - \\
SBF-MTSAL-Concat~\cite{xu2019optimization} & 1 & 11.39 & - & 10.60 & - & 2.77 & - \\
TseNet~\cite{xu2019time} & 1 & 15.24 & - & 14.73 & - & 3.14 & - \\
SpEx~\cite{xu2020spex} & 1 & 17.15 & - & 16.68 & - & 3.36 & - \\
SpEx+~\cite{ge2020spex+} & 1 & 18.54 & - & 18.20 & - & 3.49 & - \\ \hline
Pre-trained model & 1 & \multicolumn{1}{l}{18.15} & \multicolumn{1}{l|}{16.42} & \multicolumn{1}{l}{17.55} & \multicolumn{1}{l|}{15.89} & \multicolumn{1}{l}{3.44} & \multicolumn{1}{l}{3.28} \\
MTSS-unbind & 2 & 19.18 & 17.29 & 18.72 & 16.84 & 3.56 & 3.39 \\
MTSS-bind & 2 & \textbf{19.92} & \textbf{17.42} & \textbf{19.54} & \textbf{16.99} & \textbf{3.62} & \textbf{3.41} \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table*}
\section{Experimental Results and Discussions}
\subsection{Dataset}
\textbf{SD module:} We use the training and evaluation sets of Alimeeting to train the clustering-based affinity matrix extraction neural network. For the TSVAD model, we create a simulated dataset based on the Alimeeting training set; the simulation scheme is the same as in~\cite{ge2020spex+}. Alimeeting contains 118.75 hours of speech data, including 104.75 hours (426 speakers) in the training set, 4 hours (25 speakers) in the validation set, and 10 hours in the test set. The training and validation sets contain 212 and 8 meeting sessions, respectively, in which each session consists of 15 to 30 minutes of discussion by multiple speakers. The speaker diarization module achieves a 4.12\% Diarization Error Rate (DER) on the Alimeeting test set.
\noindent\textbf{MTSS module:} First, we simulated a two-speaker database, WSJ0-2mix-extr\footnotemark[1]\footnotetext[1]{\url{https://github.com/xuchenglin28/speaker_extraction}}. The simulation process is the same as in~\cite{ge2020spex+}; the only difference is that we produce a pair of target speaker signals (\(s_1\), \(s_2\)) and reference speech (\(ref_{s_1}\), \(ref_{s_2}\)) for each mixture utterance, whereas~\cite{ge2020spex+} only selects the first talker as the target speaker. The utterances from \(s_1\) and \(s_2\) are mixed at a relative SNR between 0 and 5\,dB. The average SI-SDR of the mixed speech is 2.50\,dB and -2.50\,dB when taking \(s_1\) and \(s_2\) as the reference, respectively. Second, we select the samples with only two speakers from the training and test sets of the Alimeeting dataset and use the audio signal on channel 0. We set the far-field audio as the mixture and the data from the close-talking microphone as the ground truth. The sizes of the selected training, validation, and test sets are 20, 2, and 2 hours, respectively.
\subsection{Experimental setup}
\textbf{SD module:} The single-channel TSVAD model uses the Adam optimizer and the binary cross-entropy loss. The input chunk size is 16\,s, and the acoustic feature is 80-dim log Mel-filterbank energies (Fbank) with a frame length of 25\,ms and a frame shift of 10\,ms. The training details can be found in \cite{wang2021dku}.
\noindent\textbf{MTSS module:} To compare with the baseline, the hyperparameters and learning schedule of the speech separation model are set roughly the same as in \cite{ge2020spex+}. Specifically, the number of filters in the encoder is 256, the number of convolutional blocks in each repeat is 8, the number of repeats is 4, the number of channels in the convolutional blocks is 512, the kernel size of the convolutional blocks is 3, and the sampling rate is 8\,kHz. We downsample the Alimeeting data to 8\,kHz to match the WSJ0-2mix-extr data.
We evaluate our proposed SD-MTSS system in two steps: 1) Examining the performance of MTSS on the WSJ0-2mix-extr dataset. We train the MTSS model from a pre-trained model\footnotemark[2]\footnotetext[2]{\url{https://github.com/gemengtju/SpEx_Plus}} for 30 epochs on the WSJ0-2mix-extr training set. Then, we compare MTSS-bind (using the softmax function to constrain the sum of \(mask_{s_1}\) and \(mask_{s_2}\) to 1) and MTSS-unbind (using ReLU as the activation function, with no constraint on the masks) in terms of SDR, SI-SDR, and PESQ. 2) Examining the performance of the SD-MTSS system on Alimeeting. We compare SD-MTSS-modified (modifying the estimation with the binarized TSVAD decision) and SD-MTSS in terms of SDRi and SI-SDRi, which are the SDR and SI-SDR improvements over the mixture.
\subsection{Results on WSJ0-2mix-extr}
The results of our proposed MTSS model and the baseline systems are shown in Table~\ref{tab:wsj}. Since we use the same simulated test set as \cite{ge2020spex+}, we directly quote the evaluation results of SpeakerBeam, SBF-MTSAL-Concat, TseNet, SpEx, and SpEx+ from \cite{ge2020spex+}. As shown in Table~\ref{tab:wsj}, SpEx+~\cite{ge2020spex+} is the baseline we implemented, and MTSS is the model we propose. Our proposed MTSS model achieves significantly better results across all metrics.\footnotemark[3]\footnotetext[3]{Samples of separated audio are available at \url{https://github.com/ZBang/SD-MTSS}} Specifically, MTSS-bind outperforms SpEx+ with relative improvements of 7.4\% in SDR, 7.3\% in SI-SDR, and 3.2\% in PESQ. In addition, we obtain better improvements on each speaker (\(s_1\), \(s_2\)) while extracting their target speech simultaneously. Comparing the results of MTSS-unbind and MTSS-bind, we conclude that constraining each speaker's mask is the main contributor to the improvements.
\begin{table}[t]
\caption{Results of SD-MTSS on Alimeeting.}
\label{tab:ali}
\centering
\begin{tabular}{c|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{c|}{\textbf{SDRi}} & \multicolumn{2}{c}{\textbf{SI-SDRi}} \\ \cline{2-5}
& s1 & s2 & s1 & s2 \\ \hline
\multicolumn{1}{l|}{pre-trained model} & \multicolumn{1}{l}{1.31} & \multicolumn{1}{l|}{1.91} & \multicolumn{1}{l}{-5.77} & \multicolumn{1}{l}{-5.28} \\
SD-MTSS & 0.8 & 0.78 & 0.89 & -1.73 \\
SD-MTSS-modified & \textbf{3.3} & \textbf{2.01} & \textbf{4.15} & \textbf{3.7} \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\subsection{Results on Alimeeting}
The results of our proposed SD-MTSS system are shown in Table~\ref{tab:ali}. Since we evaluate the system on the far-field data and use the corresponding close-talking data as the ground truth, the absolute SDRi and SI-SDRi values are modest. Nevertheless, from Table~\ref{tab:ali} we can draw two conclusions: 1) Our proposed multiple target speech separation model surpasses the pre-trained model (SpEx+) by a large margin in terms of SI-SDRi. 2) Modifying the estimated speech with the TSVAD decision, as shown in Figure~\ref{fig:MTSS}, significantly enhances the system's performance, since this approach alleviates the non-target speaker residual problem.
\section{Conclusions}
In this work, we proposed a speech extraction system (SD-MTSS), which consists of a speaker diarization (SD) module and a multiple target speech separation (MTSS) module. By linking the speaker diarization task and the target speech separation task, we do not require additional reference speech for enrollment. Moreover, we propose to constrain the sum of the speakers' estimated masks to 1 when extracting their speech simultaneously. We also refine the separated meeting-style audio with the binarized TSVAD decision. The experimental results show that our proposed system achieves significant improvements on the simulated mixed data. In future work, we will implement our method with different state-of-the-art networks and improve the system's performance in far-field scenarios.
\section{Acknowledgements}
This research is funded in part by the National Natural Science Foundation of China (62171207), Science and Technology Program of Guangzhou City (202007030011). Many thanks for the computational resource provided by the Advanced Computing East China Sub-Center.
\bibliographystyle{IEEEtran}
\normalem
\section{Introduction}
\label{Sect:1}
It is commonly believed that the primary energy release regions in solar flares are
located in the low corona \cite{Aschwanden05}. Radio spectral and imaging observations in
the decimetric and metric wavelength ranges in combination with magnetic field
extrapolations are considered to be very promising tools for a~study of these processes.
Unfortunately, imaging observations of solar flares in the decimetric range are very
rare. In the metric range, such observations are commonly done by the Nan\c{c}ay
radioheliograph \cite{Kerdraon97}.
To successfully combine radio observations and magnetic extrapolations, the radio
observations have to include sufficiently precise positional information. Such
a~combination was done \textit{e.g.} by \inlinecite{Trottet06}, where a~detailed analysis
of radio spectral and imaging observations in the 10--4500\,MHz range was presented for
the 5~November~1998 flare. \inlinecite{Subramanian07} have studied a~post-flare source
imaged at 1060\,MHz to calculate the power budget for the efficiency of the plasma
emission mechanism in a~post-flare decimetric continuum source. \inlinecite{Aurass07}
have analyzed the topology of the potential coronal magnetic field near the source site
of the meter-decimeter radio continuum to find that this radio source occurs near the
contact of three separatrixes between magnetic flux cells. \inlinecite{Aurass11} have
examined meter-decimeter dynamic radio spectra and imaging with longitudinal magnetic
field magnetograms to describe meter-wave sources. \inlinecite{Chen11} have used an
interferometric dm-radio observation and nonlinear force-free field extrapolation to
explore the zebra pattern source in relation to the magnetic field configuration.
\inlinecite{Zuccarello11} investigated the morphological and magnetic evolution of an
active region before and during an X3.8 long duration event. They found that coronal
magnetic null points played an important role in this flare.
A~comprehensive review of magnetic fields and magnetic reconnection theory as well as
observational findings was provided by \inlinecite{Aschwanden05}. The topological methods
for the analysis of magnetic fields were reviewed in \inlinecite{Longcope05} and the
3D~null point reconnection regimes in \inlinecite{Priest09}. \inlinecite{McLaughlin11}
have presented a~review of the theoretical studies of the MHD waves in the vicinity of
the magnetic null points. Furthermore, \inlinecite{Afana12} have studied analytically the
propagation of a~fast-mode magnetohydrodynamic wave near a~2D magnetic null point. Using
the nonlinear geometrical acoustic method they have found complex behavior of these waves
in the vicinity of this point. In spite of the wealth of theoretical work presented in
these papers, the authors concluded that there is still no clear observational evidence
for the presence of MHD waves near null points of the magnetic field. We note that this
is also in spite the fact that MHD waves are commonly observed in the solar corona
\cite{DeMoortel00,Kliem02,Harrison02,Tomczyk07,Ofman08,DeMoortel09,Marsh09,Marsh11}.
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.60]{figure01.eps}
\caption{Solar maps with main sources associated with the 26~November~2005 radio event
observed at 7:01:30\,UT by the GMRT instrument. Regions U1--U3 and D1--D3 were
identified as the main radio sources at 244 and 611\,MHz, respectively. The magenta
circle in the left panel has the diameter of 32\,arc\,min and indicates an
approximate position and size of the visible solar disk. Disk centers are
shown by a~cross. The cross has dimensions of 400'' in both directions to indicate
the scale in the maps. Synthesized beam dimensions giving the error in GMRT positions
are represented by the small ovals shown on the bottom right corners.}
\label{figure1}
\end{center}
\end{figure}
Since both the standing and propagating magnetoacoustic waves modulate the
plasma density and the magnetic field in the radio source \cite{Aschwanden05},
some modulation of the radio emission by both these waves can be expected.
\citeauthor{Roberts83} (\citeyear{Roberts83,Roberts84}) studied impulsively generated
fast magnetoacoustic waves trapped in a~structure with enhanced density (\textit{e.g.}
loop). They showed that these propagating waves exhibit both periodic and quasi-periodic
phases. \inlinecite{Nakariakov04} numerically modelled impulsively generated fast
magnetoacoustic wave trains and showed that the quasi-periodicity is a~consequence of the
dispersion of the guided modes. Using wavelet analysis, these authors found that typical
wavelet spectrum of such fast magnetoacoustic wave trains is a~tadpole consisting of
a~broadband head preceded by a~narrowband tail.
The tadpoles as characteristic wavelet signatures of fast magnetoacoustic wave trains
were observed in solar eclipse data \cite{Katsiyannis03} as well as in radio spectra of
decimetric gyrosynchrotron emission \cite{Meszarosova09a}, and also in decimetric plasma
emission \cite{Meszarosova09b}. While the tadpoles in the gyrosynchrotron emission were
detected simultaneously at all radio frequencies, the tadpoles in the plasma emission
drifted towards low frequencies. This type of ``drifting tadpoles'' was studied in
detail by \inlinecite{Meszarosova11} in a~radio dynamic spectrum with fibers.
The observed parameters of fast magnetoacoustic waves reflect properties of the plasma in
the waveguides where these waves are propagating. Therefore, one could use observed
waves and their wavelet tadpoles as a~potentially useful diagnostic tool
\cite{Jelinek09,Jelinek10} for determining physical conditions in these waveguides
(\textit{e.g.} loops or current sheets). \inlinecite{Karlicky11} compared parameters of
wavelet tadpoles detected in the radio dynamical spectra with narrowband spikes to those
computed in the model with the Harris current sheet. Based on this comparison the authors
proposed that the spikes are generated by driven coalescence and fragmentation processes
in turbulent reconnection outflows. We note here that, in general, flare current sheets
can be formed not only near magnetic null points, but also \textit{e.g.} between
interacting magnetic loops. \inlinecite{Jelinek12} numerically studied impulsively
generated magnetoacoustic waves for the Harris current sheet and a density slab. In both
cases they found that wave trains were generated and propagated in a~similar way for
similar geometrical and plasma parameters.
In this paper, we analyze a~rare decimetric imaging observation of the 26~November~2005
solar flare made by the \textit{Giant Metrewave Radio Telescope} (GMRT). Combining the results of
this analysis with the magnetic field extrapolation (Section~\ref{Sect:3}), we present
a~scenario of this radio event. For the first time, we detected the magnetoacoustic waves
in the radio sources (Section~\ref{Sect:4}) located in the fan of magnetic field lines
connected with a~coronal null point. The basic plasma parameters in the radio sources are
estimated and the results are discussed (Section~\ref{Sect:5}).
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.65]{figure02.eps}
\caption{Selected contour maps showing the time evolution of the emission sources at 244\,MHz
(upper panel) and 611\,MHz (bottom panel). The times (UT) are in blue.
Synthesized beam dimensions representing the error in GMRT positions are shown as
small green circles on the bottom left corners of maps at 6:57\,UT.
Upper panel: Positions of the sources U1--U3 (in red) are indicated on the map at 6:58:30\,UT.
Bottom panel: Positions of the sources D1--D3 (in red) are indicated on the map at 6:59\,UT.}
\label{figure2}
\end{center}
\end{figure}
\begin{table}[]
\caption[]{Time evolution of the individual GMRT sources.}
\label{table1}
\begin{tabular}{cccc}
\hline
\hline
Source & Start time & Time of max & End time \\
& $[$UT] & [UT] & [UT] \\
\hline
U1 & 6:58:04 & 7:01:30 & 7:04:03 \\
U2 & 6:57:59 & 7:01:30 & \\
U3 & 6:58:20 & 7:06:45 & \\
D1 & 6:58:05 & 7:03:52 & 7:05:11 \\
D2 & 6:58:54 & 6:59:33 & 7:04:00 \\
D3 & 6:58:13 & 6:58:43 & 7:01:59 \\
\hline
\end{tabular}
\end{table}
\section{Observations and data analysis}
\label{Sect:2}
The B8.9~solar flare occurred on 26~November~2005 in the active regions NOAA AR~10824 and
10825 located near the disk center. The flare lasted from approximately 06:31 to
07:49\,UT with the GOES maximum at 07:05\,UT.
The radio counterpart of this flare was a~22-minute-long radio event lasting from
06:50 to 07:12\,UT, recorded by the \textit{Giant Metrewave Radio Telescope} (GMRT)
in Pune, India. The \textit{Michelson Doppler Imager} (MDI, \citeauthor{Scherrer95} \citeyear{Scherrer95}) onboard the SoHO
spacecraft \cite{Domingo95} performed routine full-disk observations with a~96~minute
cadence. The magnetogram nearest to the radio event was observed at 06:24\,UT. The EUV
counterparts of this flare were observed by the \textit{Extreme Ultraviolet Imaging
Telescope} \cite{Delaboudiniere95} onboard the SoHO spacecraft.
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.50]{figure03.eps}
\caption{Light curves of the 26~November~2005 radio event obtained from the GMRT maps.
Upper panel: Radio fluxes at 244\,MHz corresponding to the sources U1~(blue),
U2~(dashed green) and U3~(dotted red).
Bottom panel: Radio fluxes at 611\,MHz corresponding to the sources D1~(blue),
D2~(dashed green) and D3~(dotted red).}
\label{figure3}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.8]{figure04.eps}
\caption{Superposition of the radio sources at 244\,MHz (orange contours)
and 611\,MHz (blue contours) observed at 06:59\,UT on top of the
photospheric magnetic
map obtained with the SoHO/MDI at 6:24\,UT. Individual radio sources
are labelled. The MDI magnetogram is saturated to $\pm$\,10$^3$\,G.
Heliographic coordinates are shown for orientation. Solar North is up.}
\label{figure4}
\end{center}
\end{figure}
\subsection{GMRT observations and data analysis}
\label{Sect:2.1}
On 26~November~2005, the GMRT observed the Sun at two
frequencies, 244 and 611\,MHz. The GMRT instrument \cite{Swarup91,Ana02,Mercier06} is
a~radio interferometer consisting of 30 fully steerable individual radio telescopes. Each
telescope has a~parabolic antenna with a~diameter of 45\,m and 16 of these individual
antennas are arranged in a~Y-shaped array with each arm extending 14\,km from the
array centre. The remaining 14 telescopes are located in the central area of 1\,km$^2$.
The interferometer operates at wavelengths longer than 21 cm, with six frequency bands
centered on the 38, 153, 233, 327, 610, and 1420\,MHz frequencies. The maximum resolution
depends on the configuration, and varies between 2~and 60\,arc\,secs.
The observed interferometric data at 244 and 611\,MHz were Fourier transformed to
generate a~series of 1320 snapshot images of the Sun at 1~second time-cadence, from 06:50
to 07:12\,UT. The images were cleaned using the algorithm developed by
\inlinecite{Schwab84} and rotated to correct for the solar North. The synthesized beam
dimensions giving the GMRT positional error are 77.7~$\times$~50.8 and
17.7~$\times$~13.4\,arc\,sec at 244 and 611\,MHz, respectively.
An example of the observations showing the main radio sources is shown in Figure
\ref{figure1}. In this figure, six sources can be identified, three for each frequency.
At the frequency of 244\,MHz, the sources are outlined in the left panel and denoted as
U1, U2, and U3. The main sources at the frequency 611\,MHz are shown in the right panel
of Figure \ref{figure1} and labelled as D1, D2, and D3. The evolution of these sources
during the radio event is shown in Figure \ref{figure2}. Positions of the individual
radio sources U1--U3 and D1--D3 are shown in red. Notable is the merging of the sources
U1 and U2 at around 6:59\,UT. The source U3 remains far from the others (U1 and U2).
We constructed light curves of the individual radio sources by enclosing these sources in
rectangular regions on the GMRT maps. The light curves are shown in Figure \ref{figure3}.
The radio fluxes at 244\,MHz corresponding to the sources U1, U2, and U3 are shown in the
upper panel as the blue, dashed green and dotted red lines, respectively. The radio
fluxes at 611\,MHz corresponding to the sources D1~(blue), D2~(dashed green)
and D3~(dotted red) are presented in the bottom panel. The calibration of the individual
fluxes is relative only and is expressed in arbitrary units (a.u.).
Analysis of these time series of radio fluxes shows different time evolution profiles for
the different radio sources. The sources U1, D1, and D3 show about 5~minutes durations
with a~well defined main peak. On the other hand, the sources U2, U3, and D2 show
profiles with a~gradual rise and several peaks on top of a~roughly constant background.
The temporal properties of these sources are presented in Table~\ref{table1}. The start and end
times of each source were determined as the times when the fluxes were above or below
half the average flux of the whole burst profile of the source, respectively.
The cross-correlation coefficients between pairs of all GMRT U~and~D sources show that
there is only one pair with correlation higher than~75\%. This pair is the sources U1 and
D1 with the cross-correlation coefficient of~81\%. This degree of correlation indicates
that these radio sources can have a~common origin.
The maximum of the cross-correlation coefficient is flat when the time lag of U1 with
respect to D1 ranges from 0 to 50\,s. This indicates that the first parts of the radio
fluxes D1 and U1 are correlated by some fast-moving agents (possibly beams) and the
second parts by slowly moving agents (possibly waves).
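The lag analysis above reduces to computing the Pearson correlation of one light curve against time-shifted copies of the other. A minimal sketch (the function name and the 1\,s-per-sample assumption are ours):

```python
import numpy as np

def lagged_corr(a, b, max_lag):
    # Correlate a (the delayed curve) against b shifted by 0..max_lag samples;
    # at 1 s cadence, one sample of lag corresponds to one second.
    corr = {}
    for lag in range(max_lag + 1):
        n = len(a) - lag
        corr[lag] = np.corrcoef(a[lag:], b[:n])[0, 1]
    return corr
```

Applied to the U1 and D1 fluxes, a flat maximum over the 0--50\,s lag range is what motivates the two-agent (beams plus waves) interpretation.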
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.9]{figure05.eps}
\caption{Superposition of the radio sources (same as in Figure \ref{figure4}) on top of the
SoHO/EIT 195\,\AA~observations taken at 7:48\,UT. The
EIT 195\,\AA~observations are saturated to 2$\times$10$^3$ DN\,s$^{-1}$\,px$^{-1}$
and shown in logarithmic intensity scale for better visibility of fainter
emitting loop systems. The flare arcade footpoints observed using the
filter 304\,\AA~at 07:19\,UT are shown as white contours corresponding
to 10$^3$ DN\,s$^{-1}$\,px$^{-1}$.}
\label{figure5}
\end{center}
\end{figure}
\subsection{Photospheric magnetic field and the EUV flare}
\label{Sect:2.3}
The GMRT interferometric observations offer the advantage of direct spatial comparison
with the photospheric magnetic field and the EUV flare morphology, as observed by the
SoHO/MDI and SoHO/EIT instruments, respectively. The comparison of the radio sources loci
with the MDI magnetogram is presented in Figure \ref{figure4}. The magnetogram was
observed at 06:24\,UT and rotated to the time 06:59 corresponding to the radio
observations using the SolarSoftware routine {\it drot\_map.pro}. The time of the radio
observations, 06:59\,UT, is chosen because it corresponds to the times of the first radio
peaks in the sources U1 and D1.
The radio sources are located in a~quadrupolar magnetic configuration consisting of two
active regions, NOAA 10824 and 10825 (Figure \ref{figure4}). Both active regions are
bipolar with a~$\beta$-configuration. The AR 10824 is located approximately at the
heliographic latitude of $\approx -13^\circ$ and contains a~well-developed, leading
negative-polarity sunspot. Other polarities consist of plage regions or pores, which is
the case also for the AR 10825 located at latitudes of $\approx -7^\circ$. A~notable
feature is that the radio sources are {\it not} located on top of the main magnetic
polarities, but, in general, they overlie weak-field regions.
The SoHO/EIT instrument was performing full-disc observations in the 195\,\AA~filter with
a~cadence of approximately 12 minutes. Observations in other filters, \textit{i.e.} 171,
284 and 304\,\AA~were performed at 07:00, 07:06 and 07:19\,UT, respectively. Due to poor
pointing information of the EIT instrument, the EIT 304\,\AA~observations were coaligned
manually with the MDI observations by matching the EIT 304\,\AA~brightenings with small
magnetic polarities observed by MDI, which show good spatial correlation
\cite{Ravindra03}. We estimate the error of this manual coalignment to be
$\approx$~5\,arc\,sec.
In the EIT 304\,\AA~observations, three flare arcade footpoints are discernible, shown in
Figure \ref{figure5} as white contours corresponding to the observed intensity of 10$^3$
DN\,s$^{-1}$\,px$^{-1}$. All three footpoints are located well within the AR 10824, with
one of them in the positive polarity and the other two in the negative polarities North
of the sunspot. EIT 195\,\AA~observations show a~cooling system of flare loops connecting
these three footpoints. The flare loops are well-visible at 195\,\AA~at 07:48\,UT and are
shown in Figure \ref{figure5}. The global magnetic configuration of AR~10824 is that
of a~sigmoid with large shear. The magnetic configuration of the neighboring AR 10825 is
near-potential.
Comparison of the loci of the radio sources with the EUV flare morphology (Figure
\ref{figure5}) shows that the radio sources have no direct spatial correspondence with
the EUV flare loops or their footpoints. The source D3 is an exception, since it overlies
a~portion of the flare loops. The sources U1, U3, D1 and D2 are located in the area of
weak EUV emission.
\begin{table}[]
\caption[]{Basic parameters derived for radio bursts at GMRT frequencies.}
\label{table2}
\begin{tabular}{cccc|c}
\hline
\hline
Frequency & Source & Fundamental & Plasma density & First harmonic \\
& & frequency altitude & & altitude \\
$[$MHz] & & [Mm] & [cm$^{-3}$] & [Mm] \\
\hline
244 & U1--U3 & 48 & 7.4\,$\times$\,10$^{8}$ & 86 \\
611 & D1--D3 & 22 & 4.6\,$\times$\,10$^{9}$ & 40 \\
\hline
\end{tabular}
\end{table}
\section{Magnetic structure of active regions NOAA 10824 and 10825}
\label{Sect:3}
\subsection{Magnetic field extrapolation and the altitude of the radio emission}
\label{Sect:3.1}
To investigate the relationship between the radio sources and the structure of the
magnetic field of active regions NOAA 10824 and 10825, we performed an extrapolation of
the SoHO/MDI magnetogram (Figure \ref{figure4}) observed at 06:24\,UT prior to the flare
and associated radio events. The extrapolation was carried out in linear force-free
approximation, where the magnetic field $\vec{B}$ given by the solution of the equation
\begin{equation}
\vec{\nabla} \times \vec{B} = \alpha\vec{B}\,,
\label{potential}
\end{equation}
for $\alpha = const$. The solution is subject to the boundary condition $B_z(x,y,z=0)$
given by the observed magnetogram, where $0 < x < L_x$ and $0 < y < L_y$. The constant
$\alpha$ is subject to the condition $\alpha < \alpha_\mathrm{max} = 2\pi /
\mathrm{max}(L_x, L_y)$, otherwise the magnetic field is non-physical. We utilized the
Fourier transform method developed by \inlinecite{Alissandrakis81} and
\inlinecite{Gary89}. This method allows for extrapolation of a~part of the observed
magnetogram in a~Cartesian geometry. The computational box is shown in Figures
\ref{figure6} and~\ref{figure7}.
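In the potential limit ($\alpha = 0$) that is adopted below, the Fourier method reduces to
letting each horizontal Fourier mode of the photospheric $B_z$ decay exponentially with
height. The following minimal numerical sketch illustrates only this $\alpha = 0$ case; it
is not the actual extrapolation code, which also handles $\alpha \neq 0$:

```python
import numpy as np

def potential_bz(bz0, dx, z):
    """Potential (alpha = 0) extrapolation of the vertical field: each
    Fourier mode of the photospheric B_z decays with height as exp(-k z).
    bz0: 2D magnetogram array, dx: pixel size, z: height (same units as dx)."""
    ny, nx = bz0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)  # horizontal wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(bz0) * np.exp(-k * z)))
```

A single cosine mode of wavenumber $k$ in the input magnetogram is attenuated by exactly
$e^{-kz}$ at height $z$, which is the behaviour the full linear force-free solver
generalises.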
We calculated a~range of linear force-free models with various values of~$\alpha$.
However, the flare loops are poorly approximated with $\alpha = \mathrm{const}$, even with
large values of $\alpha$ close to $\alpha_\mathrm{max}$. The reason is probably the
presence of differential shear within the active region \cite{Schmieder96}. Moreover,
using large values of $\alpha$ leads to a~poor fit to the observed shape of the coronal
loops in AR 10825, which is close to the potential state ($\alpha = 0$). Therefore,
we chose to extrapolate in the potential approximation. We also note that the potential
approximation usually does a~good job in capturing the topological structure of the
active region (though not at sigmoid locations, \textit{e.g.} \citeauthor{Schmieder03}
\citeyear{Schmieder03}).
To compare the 3D~magnetic field geometry with the loci of the radio sources, observed in
a~2D~plane of the sky, the approximate altitude at which the radio emission originates
must be determined. To do that, we consider the parameters of the radio bursts (at GMRT
frequencies) and the solar atmosphere density model \cite{Aschwanden02} for the radio
plasma emission at fundamental frequency and the first harmonic (see also
Section~\ref{Sect:3.2}). We use Aschwanden's density model because it was derived
from radio observations. The basic parameters of the radio sources are derived and
summarized in Table~\ref{table2} where the plasma density values belong to the
fundamental frequency altitude.
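As a cross-check (illustrative arithmetic only, not part of the original analysis), the
densities in Table~\ref{table2} can be related to the observing frequencies through the
standard fundamental plasma-emission relation
$f_\mathrm{pe}\,[\mathrm{Hz}] \approx 8980\sqrt{n_e\,[\mathrm{cm^{-3}}]}$:

```python
import math

def plasma_freq_mhz(n_e):
    """Fundamental electron plasma frequency in MHz for an electron
    density n_e in cm^-3: f_pe [Hz] ~ 8980 * sqrt(n_e [cm^-3])."""
    return 8980.0 * math.sqrt(n_e) / 1.0e6
```

With the tabulated densities, `plasma_freq_mhz(7.4e8)` gives about 244\,MHz and
`plasma_freq_mhz(4.6e9)` about 609\,MHz, matching the two GMRT observing frequencies to
within a few MHz.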
\begin{figure}[]
\begin{center}
\includegraphics[scale=1.0]{figure06.eps}
\caption{Plane-of-the-sky projection of the extrapolated magnetic field.
Red (SE) and blue (NW) arrows show the locations of the positive and negative coronal magnetic null
points, respectively. The separator lying at the intersection of the respective fans is
plotted as thick red line. The magnetic field lines are colored according to the local
magnetic field, in the range of $\log_{10}(B/\mathrm{T}) \in \langle -1.0, 3.5\rangle$.
The thin, dark-red and dark-blue contours denote positive and negative photospheric
polarities, respectively. The two contour levels correspond to $B_z(z=0) = \pm$ 50 and
500\,G. Shades of gray show the photospheric intersections of the quasi-separatrix
layers. The contours of the GMRT sources are located at the altitudes corresponding to
the fundamental frequency (Table~\ref{table2}) and are shown as thick orange (U1--3) and
blue (D1--3) contours, respectively.}
\label{figure6}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[scale=1.0]{figure07.eps}
\caption{Same as in Figure~\ref{figure6}, but in a~3D~projection to show the
altitudes of the individual field lines as well as individual radio
sources. Only the field lines lying in the vicinity of the negative
null point or passing through the source D2 are shown for clarity.}
\label{figure7}
\end{center}
\end{figure}
\subsection{Magnetic topology and its relation to radio sources}
\label{Sect:3.2}
The extrapolated magnetic field contains a~pair of coronal magnetic null points between
the two active regions. The negative and positive coronal null points are denoted
by red (SE) and blue (NW) arrows in Figure \ref{figure6}. In this figure, the dark-red and
dark-blue contours correspond to positive and negative photospheric polarities,
respectively, with $B_z$\,=\,$\pm$50\,and\,$\pm$500\,G. Since the magnetic field of
AR~10825 is weaker, the null points are located closer to this active region than to the
larger AR 10824. The existence of these null points is a~consequence of the multipolar
structure of the magnetic field (\textit{e.g.} \citeauthor{Longcope05}
\citeyear{Longcope05} and references therein). The magnetic structure in the vicinity of
these null points is shown using several field lines regularly distributed
near the null points. The fan surfaces of these null points intersect, forming
a~separator \cite{Baum80,Longcope05}. The separator is found manually by trial and error
and is shown by a~thick red line.
Both the spine and the fan surface of the negative null point are closed within
the computational domain, while the fan surface of the positive null point is
open. This is not typical of coronal null points \cite{Pariat09,DelZanna11}.
Both fan surfaces form a~part of the quasi-separatrix layers
\cite{Priest95,Demoulin96,Demoulin97} associated with the active regions. The
intersections of these quasi-separatrix layers with the photosphere are shown
in Figures \ref{figure6} and~\ref{figure7} by shades of gray corresponding to
the unsigned norm~$N$ of the field line footpoint mapping, defined as
\begin{equation}
N(x,y) = \left( \left(\frac{dX}{dx}\right)^2 +\left(\frac{dX}{dy}\right)^2 +\left(\frac{dY}{dx}\right)^2 +\left(\frac{dY}{dy}\right)^2 \right)^{1/2}\,,
\label{norm}
\end{equation}
where the starting footpoint of a~field line at $z~=~0$ is denoted as $(x,y)$ and its
opposite footpoint as $(X,Y)$. The unsigned norm is computed irrespective of the
direction of the footpoint mapping. We note that the correct definition of
quasi-separatrix layers is through the squashing factor $Q$ or the expansion-contraction
factor $K$ \cite{Titov02}, which are invariants with respect to the footpoint mapping
direction. Here, we use the unsigned norm, since it is much less sensitive to the errors
arising from measurement noise and numerical differentiation at the finite
magnetogram resolution, and at the same time gives a~good indication of the
expansion-contraction factor~$K$.
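On a gridded footpoint mapping, the norm~$N$ defined above can be evaluated by finite
differences. The sketch below is illustrative only; the field-line tracing that produces
the mapping arrays $X$ and $Y$ is assumed to have been done elsewhere:

```python
import numpy as np

def footpoint_norm(X, Y, dx=1.0, dy=1.0):
    """Unsigned norm N of the footpoint mapping (x, y) -> (X, Y),
    evaluated by finite differences on gridded mapping arrays
    (axis 0 runs along y, axis 1 along x)."""
    dXdy, dXdx = np.gradient(X, dy, dx)
    dYdy, dYdx = np.gradient(Y, dy, dx)
    return np.sqrt(dXdx**2 + dXdy**2 + dYdx**2 + dYdy**2)
```

For the identity mapping the norm is $\sqrt{2}$ everywhere, and a uniform stretch of the
$x$ footpoints by a factor of two yields $\sqrt{5}$, so large values of $N$ flag strong
gradients of the connectivity, i.e. quasi-separatrix layers.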
The sources~U and~D are plotted as thick orange and thick blue contours, respectively.
These sources are plotted at heights corresponding to the altitudes given by the
fundamental frequencies (Table~\ref{table2}). It is clearly seen that both U1 and D1
lie along the fan of the negative null point, with the field lines passing through
D1 being separated by the separator (Figures \ref{figure6} and~\ref{figure7}). Therefore,
both U1 and D1 lie along a~common magnetic structure, providing a~straightforward
explanation for the observed correlation between these two sources
(Section~\ref{Sect:2.1}). The separator also passes through U1, as do some other fan
field lines rooted in AR~10825.
We note that the fan of the negative null point covers only a~part of the source U1. The
rest of U1 is associated with closed field lines rooted in the positive polarity of
AR 10825, in the direct vicinity of the footpoints of the fan field lines. One example of
such field lines is plotted in Figures~\ref{figure6} and~\ref{figure7}. The fact that
structures other than the fan of the negative null point pass through the source U1
offers an explanation for the correlation coefficient between U1 and D1 being smaller
than one.
The sources D2 and U3 are located on the field lines that are open within the
computational box. The footpoints of these field lines are located in the AR~10824 in the
vicinity of the photospheric quasi-separatrices with highest $N$ separating the open and
closed magnetic flux (Figure \ref{figure6}). Overall, we conclude that the field lines
passing through the sources D1, D2, U1 and U3 are rooted in the vicinity of the flare
arcade footpoints (compare Figures \ref{figure5} and~\ref{figure6}), while the source D3
appears to be connected directly to the flare loops. These observations demonstrate that
even a~localized, weak flare can cause a~widespread perturbation involving a~large
portion of the surrounding magnetic field and its topological features.
We stress that the goodness of the match between the extrapolated magnetic field and the
radio sources is subject to the approximations used. For example, the presence of
electric currents in the model of the magnetic field would alter the geometry of the
magnetic field. In particular, using a~linear force-free approximation would mostly
affect the longest field lines passing through the source U1. Additionally, the density
model of \inlinecite{Aschwanden02} provides only a~single altitude for the emission of
each radio frequency, while in reality the density profile, and thus the altitude of
emission, can be different for each field line. However, the available longitudinal MDI
magnetogram and the EIT data do not provide the necessary constraints
(Section~\ref{Sect:2}) for using more elaborate models of the magnetic field or density
throughout the observed radio-emitting atmosphere.
We also note that the magnetic connectivity between U1 and D1 exists only if these
sources are assumed to emit at the fundamental frequency (Table~\ref{table2}). If the
radio emission originated at the first harmonic instead, it would occur at altitudes
higher than the vertical extent of the negative null-point fan. This point is discussed
further in Section~\ref{Sect:5}.
\begin{table}[]
\caption[]{Characteristic periods~$P$ of the wavelet power spectra showing a~tadpole pattern.}
\label{table3}
\begin{tabular}{ccc}
\hline
\hline
Source & Frequency & Period $P$ \\
& $[$MHz] & [s] \\
\hline
U1 & 244 & 83, 25, 22, 11, 10 \\
D1 & 611 & 76, 22, 13 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.60]{figure08.eps}
\caption{Examples of the wavelet power spectra showing tadpole patterns with
characteristic periods~$P$ (bottom panels) in comparison with the GMRT radio
fluxes (upper panels) at the 244\,MHz (source U1) and 611\,MHz (source D1) frequencies.
Lighter areas indicate greater power in the power spectrum and are bounded
by the solid contour of the $>$95\,\% confidence level. The hatched regions belong
to the cone of influence, where edge effects due to the finite-length time series occur.
The arrows (1) and (2) correspond to 07:01:30 and 07:03:52\,UT, respectively.}
\label{figure8}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.4]{figure09.eps}
\caption{Scheme of the suggested scenario for the well-correlated sources U1 and D1
lying along the fan of the coronal null point. Magnetoacoustic waves move along
magnetic field lines (trajectories (1) and (2)). The separator is located between
these trajectories.}
\label{figure9}
\end{center}
\end{figure}
\section{Wavelet analysis of the GMRT time series}
\label{Sect:4}
To investigate the nature of the disturbance giving rise to the radio sources, an
analysis of possible periodicities of the radio time series at all frequencies detected
by the GMRT instrument was made using the Morlet wavelet transform method
\cite{Torrence98}. We focused on the well-correlated GMRT sources U1 and~D1.
The wavelet power spectra at all frequencies of the GMRT instrument show tadpole
patterns \cite{Nakariakov04} with characteristic periods~$P$
(\citeauthor{Meszarosova09a}, \citeyear{Meszarosova09a, Meszarosova09b}) in the range
10--83\,s.
These periods were detected in the entire studied time interval 06:50--07:12\,UT and are
listed in Table~\ref{table3}. The periods of about 10\,s were found at all frequencies.
Figure~\ref{figure8} shows two examples of the wavelet power spectra of the
well-correlated radio sources U1 and D1 showing tadpole patterns (bottom panels) with the
longest characteristic periods $P$ listed in Table~\ref{table3}. In this figure, the
arrows (1) and (2) denote the times of 07:01:30 and 07:03:52\,UT, respectively.
These times correspond to the main peaks of the flux shown in the upper panels and to the
tadpole head maxima shown in the bottom panels. It has been proposed that the tadpoles
indicate the arrival of magnetoacoustic wave trains at the radio source
(\citeauthor{Nakariakov04}, \citeyear{Nakariakov04}; \citeauthor{Meszarosova09a},
\citeyear{Meszarosova09a, Meszarosova09b}).
Although the U1 and D1 radio fluxes are well correlated (81\,\%), the sources U1 and D1
are not the same in all details. The different appearance of the tadpoles in
Figure~\ref{figure8} reflects different plasma parameters in the sources U1 and D1
\cite{Jelinek12}. This is expected, since these tadpoles have different characteristic
periods~$P$, which means that the waves are propagating in different waveguides (see
Section~\ref{Sect:5}).
\section{Discussion and Conclusions}
\label{Sect:5}
We studied solar interferometric maps (Figures \ref{figure1} and~\ref{figure2}) and time
series of the six individual radio sources, U1--U3 at 244\,MHz and D1--D3 at 611\,MHz
(Figure~\ref{figure3}), observed by the GMRT instrument with 1\,s time resolution during
the B8.9~flare on 26~November~2005. We determined that only sources U1 and D1 are well
correlated with the cross-correlation coefficient of~81\%.
Generally, the positions of the radio sources can be influenced by refractive
effects. They play a~role in the solar atmosphere as well as in the Earth's ionosphere.
Regarding the effects in the solar atmosphere, our low-frequency 244\,MHz source U1
(which is the main feature under study together with the source D1) is nearly at the disk
center (see Figure~\ref{figure1} or~\ref{figure4}), where the refractive effects vanish.
Similarly, the D1 source at 611\,MHz is close to the disk center. We note that the phase
calibration process at GMRT corrects for the ionospheric refraction.
Comparison of the GMRT interferometric maps with the EUV observations of the flare by the
SoHO/EIT instrument showed that only the source D3 can be connected to the system of
flare loops. All other radio sources are located well away from the X-ray and EUV flare
(\textit{cf.} \citeauthor{Benz01} \citeyear{Benz01}). Potential extrapolation of the
observed SoHO/MDI magnetogram in combination with the solar coronal density model of
\inlinecite{Aschwanden02} allowed us to find the connection of these other radio sources
to the magnetic field of the observed active regions and their magnetic topology. The
sources U1 and D1 were found to lie along the fan of a~negative coronal null point. The
fact that the sources U1 and D1 lie along a~common magnetic structure offers an
explanation for their observed correlation. The sources D2 and U3 lie on open field lines
anchored near the quasi-separatrices in the vicinity of the flare arcade footpoints.
The connection between the radio sources and the magnetic field, as found in Section
\ref{Sect:3.2}, is valid only if the radio emission originates at the fundamental
frequency. This is because the altitudes corresponding to the first harmonic are too high
(Table~\ref{table2}). We note that the emission at the first harmonic is usually
considered to be stronger than that at the fundamental frequency, due to the strong
absorption of the fundamental emission. However, there are conditions (\textit{e.g.}
enhanced plasma turbulence in the radio sources) which reduce this absorption, so that
the emission at the fundamental frequency can be stronger than at the first harmonic.
Based on the results above, we conclude that even the observed localized flare is able to
cause a~widespread perturbation involving a~large portion of the topological structure of
the two active regions. This perturbation can be understood in the following terms: As
the flare progresses, the flare arcade footpoints move away from each other. Since these
footpoints correspond to the intersections of the quasi-separatrix layers with the
photosphere \cite{Demoulin96,Demoulin97}, the increase of their distance perturbs the
surrounding magnetic field and excites waves. If the surrounding, perturbed magnetic field
contains null points, these can collapse and form a~current sheet, thereby commencing
reconnection in regions not directly involved in the original flare. This can then be
also the mechanism for sympathetic flares and eruptions
\cite{Moon02,Khan06,Jiang08,Torok11,Shen12}.
For the first time, wavelet tadpole patterns with the characteristic periods of 10--83\,s
(Table~\ref{table3}) were found at metric radio frequencies. We have interpreted them in
accordance with the works of \inlinecite{Nakariakov04} and \citeauthor{Meszarosova09a}
(\citeyear{Meszarosova09a,Meszarosova09b}) as signatures of the fast magnetoacoustic
waves propagating from their initiation site to the studied radio sources U1 and D1.
The mechanism for initiation of the inferred magnetoacoustic waves present in the radio
sources can thus be both the observed EUV flare and the flare-induced collapse of the
null point located in the surrounding magnetic field. These waves propagated towards
radio sources along magnetic field lines. They arrived at the radio sources (see the
wavelet tadpoles in Figure~\ref{figure8}) and modulated the radio emission there. We
expect that the magnetoacoustic waves, through their density and magnetic field
variations, modulate the growth rates of instabilities (like the bump-on-tail or
loss-cone instabilities), which generate plasma- as well as the observed radio waves.
The schematic scenario for the well-correlated sources U1 and D1 lying along the fan of the
coronal null point is shown in Figure~\ref{figure9}. Magnetoacoustic wave trains move
along magnetic field lines (trajectories~1 and~2). The separator is located between these
trajectories (\textit{i.e.} between two groups of magnetic field lines; \textit{cf.}
Figure~\ref{figure6}). The distances between the null point and the radio sources along
the magnetic field lines are obtained directly from the extrapolation and allow the
determination of the velocities of the magnetoacoustic waves. The averaged distances
between the radio sources and the null point are about 53 and 103\,Mm for U1 and D1,
respectively. We considered the time intervals $t_{2}-t_{1}$ and $t_{3}-t_{1}$, where
$t_{1}$ is the time of triggering of the magnetoacoustic wave trains. We assume that
these trains, propagating to the source U1 as well as to the source D1, were generated
simultaneously at the beginning of the radio event, \textit{i.e.} at the start time of
the U1 and D1 sources (Table~\ref{table1}). The times $t_{2}$ and $t_{3}$ correspond to
the times of the tadpole head maxima (arrows (1) and (2), Figure~\ref{figure8}),
respectively. Thus, the mean velocities of the magnetoacoustic waves are 260 and
300\,km\,s$^{-1}$ for the U1 and D1 sources, respectively.
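As a quick consistency check (illustrative arithmetic only, not part of the original
analysis), the travel times implied by the quoted path lengths and mean speeds can be
computed and compared with the tadpole-head times:

```python
def travel_time_s(dist_mm, speed_km_s):
    """Wave travel time (in s) implied by a path length in Mm and a
    mean propagation speed in km/s."""
    return dist_mm * 1.0e3 / speed_km_s
```

Here `travel_time_s(53, 260)` gives about 204\,s and `travel_time_s(103, 300)` about
343\,s; subtracting these intervals from the tadpole-head times 07:01:30 and
07:03:52\,UT places the common trigger time within a few seconds of the same instant,
consistent with the assumed simultaneous generation of both wave trains.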
Now, let us compare these mean velocities with the Alfv\'en velocities at the radio
sources U1 and D1. Taking the plasma densities from Table~\ref{table2} and the magnetic
field from the extrapolation ($B$~=~3\,G at U1 and 13\,G at D1), the Alfv\'en velocities
at these sources are $v_A$~=~220 and 390\,km\,s$^{-1}$, respectively. These velocities
are in agreement with the mean wave velocities derived above, considering that the
Alfv\'en velocities change along the trajectory of these magnetoacoustic waves, starting
from the coronal null point.
Knowing the Alfv\'en velocities in the radio sources, we can also estimate the
width~$w$ of the structure through which the magnetoacoustic wave trains are guided.
Namely, the period of such a~magnetoacoustic wave can be estimated as $P~\simeq~w/v_A$
\cite{Nakariakov04}. Thus, the widths~$w$ of the structures guiding the
magnetoacoustic waves (modulating the radio emission) are 18\,Mm
(83\,s~$\times$~220\,km\,s$^{-1}$) and 30\,Mm (76\,s~$\times$~390\,km\,s$^{-1}$) in the
U1 and D1 radio sources, respectively. This rough estimate confirms that the extent of
the structure guiding magnetoacoustic waves is within the width of the fan of magnetic
field lines in both radio sources. The wavelet tadpoles with shorter periods
(Table~\ref{table3}) show that within the large structure guiding the long-period
magnetoacoustic wave trains ($P\sim$~80\,s), there are also narrower waveguides guiding
shorter-period waves.
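The quoted Alfv\'en speeds and waveguide widths can be reproduced with the standard cgs
formula $v_A = B/\sqrt{4\pi\rho}$. In the sketch below, the mean molecular weight
$\mu \approx 1.2$ is an assumption of ours (the exact coronal composition is not stated
in the text); with it, the tabulated densities and extrapolated field strengths give
values close to the quoted 220 and 390\,km\,s$^{-1}$:

```python
import math

M_P = 1.6726e-24  # proton mass [g]

def alfven_speed_km_s(b_gauss, n_e, mu=1.2):
    """Alfven speed v_A = B / sqrt(4 pi rho) in km/s (cgs input), with
    rho = mu * n_e * m_p; mu ~ 1.2 is an assumed mean molecular weight."""
    rho = mu * n_e * M_P
    return b_gauss / math.sqrt(4.0 * math.pi * rho) / 1.0e5

def waveguide_width_mm(period_s, v_a_km_s):
    """Waveguide width in Mm from P ~ w / v_A (Nakariakov et al., 2004)."""
    return period_s * v_a_km_s / 1.0e3
```

For example, `alfven_speed_km_s(3.0, 7.4e8)` returns about 220\,km\,s$^{-1}$, and
`waveguide_width_mm(83.0, 220.0)` returns about 18\,Mm, as in the text.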
The complex of radio sources U1--U3 and D1--D3 encompasses the whole topological
structure of the active area, in contrast to the rather localized EUV sources
(Figure~\ref{figure5}). This is in line with observed radio sources usually being larger
than the real sources due to the scattering of radio waves (\textit{e.g.}
\citeauthor{Benz02}, \citeyear{Benz02}).
The estimates presented above support the scenario (Figure~\ref{figure9}) of
magnetoacoustic waves (wavelet tadpoles) propagating in a~fan structure above the coronal
magnetic null point.
\acknowledgements We are thankful to Dr. E.~Dzif\v{c}\'akov\'a for encouraging
discussions. H.~M. thanks Dr.~J.~Ryb\'ak for his help with the wavelet analysis that was
performed using the software based on tools provided by C.~Torrence and G.~P.~Compo at
\texttt{http://paos.colorado.edu/research/wavelets}. H.~M. acknowledges support from
the PCI/INPE Grant 33/2011 of the National Space Research Institute in Brazil. H.~M. and
M.~K. acknowledge support from the Grant GACR P209/12/0103, the research project
RVO:67985815 of the Astronomical Institute AS and the Marie Curie PIRSES-GA-2011-295272
RadioSun project. J.~D. and M.~K. acknowledge support from the bilateral project No.
SK-CZ-11-0153. J.~D. also acknowledges the support from the Scientific Grant Agency,
VEGA, Slovakia, Grant No. 1/0240/11, Grant No. P209/12/1652 of the Grant Agency of the
Czech Republic, Collaborative grant SK-CZ-0153-11 and Comenius University Grant
UK/11/2012. We also thank the staff of the GMRT who have made these observations possible
as well as the SoHO/MDI and SoHO/EIT consortia for their data. GMRT is run by the
National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
\bibliographystyle{spr-mp-sola-cnd}
\section{Introduction}
The magnification bias \citep{Sch92} is a gravitational lensing effect that consists of an increase or decrease in the detection probability of background sources near the positions of lenses. It is due to a modification of the integrated source number counts of the background objects, which produces an excess or lack of sources at a given flux density limit, and is mainly related to the logarithmic slope of the integrated number counts ($\beta$; $n(>S)= A S^{-\beta}$). Indeed, very steep source number counts ($\beta>2$) enhance the effect of a magnification bias, making the more frequent (though less easily identified) weak lensing events more easily detectable. We can estimate the magnification bias through the non-zero signal that is produced in the cross-correlation function (CCF) between two source samples with non-overlapping redshift distributions \citep{Scr05,Men10,Hil13,Bar01}.
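For integrated counts of the form $n(>S)=A S^{-\beta}$, a magnification $\mu$ boosts fluxes by $\mu$ and dilutes the solid angle by $1/\mu$, so the observed counts behind the lens change by the factor $n'(>S)/n(>S) = \mu^{\beta-1}$. This standard relation, which makes the role of the slope $\beta$ explicit, can be sketched in one line (illustrative, not part of the paper's pipeline):

```python
def lensed_count_ratio(mu, beta):
    """Fractional change in integrated counts n(>S) = A * S**(-beta)
    behind a lens of magnification mu:
    n'(>S) = (1/mu) * n(>S/mu) = mu**(beta - 1) * n(>S)."""
    return mu ** (beta - 1.0)
```

For $\beta = 3$ a modest magnification $\mu = 1.1$ already gives a $21\,\%$ excess, whereas for the critical slope $\beta = 1$ the dilution exactly cancels the flux boost and no bias remains.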
Not only do the submillimetre galaxies (SMGs) discovered within the
\textit{Herschel} Astrophysical Terahertz Large Area Survey
\citep[H-ATLAS;][]{Eal10} data have steep source number counts, $\beta > 3$, but
many of them are also at high redshift, $z>1$, and show very low
cross-contamination with respect to the optical band (i.e. the foreground lens
is `transparent' at submillimetre wavelengths and the background source is
invisible in the optical band). These properties make SMGs promising candidates
to be successfully used as a background sample for magnification bias studies,
as in \citet{GON14,GON17} where the CCF was measured with high significance, and
as directly observed by \citet{DUN20} with the Atacama Large Millimetre Array
(ALMA). Moreover, the SMGs are used to investigate the halo mass, projected mass
density profile, and concentration of foreground samples of quasi-stellar
objects \citep[QSOs; ][]{BON19}, for cosmological studies \citep{BON20, GON21, BON21} and to observationally constrain the halo mass function \citep[][Cueli et al., in prep.]{CUE21}.
On the other hand, galaxy clusters are massive bound systems, usually located at the knots of filamentary structures, that are used to trace the large-scale structure of the Universe.
They have been exploited for cosmological studies \citep[e.g.][]{ALL11}, for exploring the evolution of galaxies \citep[][]{DRE80,BUT78,BUT84,GOT03}, and for studying lensed high-redshift galaxies \citep[e.g.][]{BLA99}. Moreover, clusters have been correlated with background objects to investigate potential lensing effects \citep[][]{MYE05,LOP08}.
Stacking techniques are often used when the signal to be detected is faint but highly probable, as in the case of weak lensing. These methods allow a statistical study of the overall signal by co-adding the emission from many weak or undetected objects, because single weak lensing events are hardly detectable in general.
Some examples of the applications of stacking techniques are: exploiting \textit{Planck} data to recover the very weak integrated signal of the Sachs–Wolfe effect \citep{Pla14,Pla16b}, studying the faint polarised signal of radio and infrared sources in the NRAO VLA Sky Survey (NVSS) and \textit{Planck} \citep[see][]{Stil14, BON17a, BON17b}, obtaining the mean spectral energy distribution of optically selected quasars \citep{Bia19}, detecting weak gravitational lensing of the cosmic microwave background in the \textit{Planck} lensing convergence map \citep{Bia18}, and probing star formation in dense environments of $z\sim 1$ lensing haloes \citep{Wel16}.
In addition, \cite{UME16} estimated the average surface mass density profile of an X-ray-selected subsample of galaxy clusters by stacking their individual profiles. They found that the stacked density profile is well described by the Navarro–Frenk–White (NFW), Einasto, and DARKexp models and that cuspy halo models with a large-scale two-halo term improve the agreement with the data. In particular, a concentration of $C_{200c} = 3.79^{+0.30}_{-0.28}$ at M$_{200c} = 14.1^{+1.0}_{-1.0} M_{\odot}$ is found for the NFW halo model.
In this work, we apply the stacking technique to obtain the mass density profile of galaxy clusters. In particular, the paper is organised as follows. Section \ref{sec:data} describes the data, while section \ref{sec:method} gives details of the methodology applied for the stacking and CCF estimation. Section \ref{sec:denprof} addresses the theoretical framework for the CCF, weak gravitational lensing, and halo density profiles. Our results and conclusions are presented in Sects. \ref{sec:results} and \ref{sec:concl}.
A flat $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology has been adopted throughout the paper, with the cosmological parameters estimated by \cite{PLA18_VI} ($\Omega_m$ = 0.31, $\sigma_8$ = 0.81 and $h = H_0 /100$ $km$ $s^{-1} Mpc^{-1} = 0.67$).
\section{Data}
\label{sec:data}
The background sample consists of the officially detected galaxies in the three H-ATLAS \citep{Pil10} GAMA fields from the first data release (DR1) \citep[][in the equatorial regions at 9, 12, and 14.5 h]{Val16,BOU16,Rig11,Pas11,Iba10} and the field centred at the North Galactic Pole \citep[NGP,][]{Smi17,MAD18} from DR2.
In both H-ATLAS DRs there is an implicit $4\sigma$ detection limit at 250 $\mu$m ($\sim S_{250} > 29$ mJy) \citep{Val16,MAD18} and a $3\sigma$ limit at 350 $\mu$m has been applied to increase the robustness of the photometric redshift estimation \citep[as in][]{GON17}.
In addition, we select sources with a photometric redshift $z>1$ in order to avoid any overlap in the redshift distribution of lenses and background sources (see top panel of Fig. \ref{fig:histograms}).
The photometric redshifts were estimated by means of a minimum $\chi^2$ fit of a template spectral energy distribution (SED) to the Spectral and Photometric Imaging REceiver \citep[SPIRE;][]{GRI10} data \citep[using Photodetector Array Camera and Spectrometer, PACS,][data when possible]{POG10}. It was shown that a good template is the SED of SMM J2135-0102 (`The Cosmic Eyelash' at $z = 2.3$; \citealt{Ivi10,Swi10}), which was found to be the best overall template, with $\Delta z/(1 + z) = -0.07$ and a dispersion of 0.153 \citep{Ivi16,GON12,Lap11}.
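A minimum-$\chi^2$ photometric-redshift fit of this kind can be sketched as follows (illustrative only; `template` stands in for the redshifted Cosmic Eyelash SED evaluated in the observed bands, and the free overall amplitude is marginalised analytically at each trial redshift):

```python
import numpy as np

def photoz_chi2(fluxes, errors, template, zgrid):
    """Minimal sketch of a minimum-chi^2 photometric redshift fit.
    `template(z)` returns model fluxes in the observed bands at trial
    redshift z; the best-fit amplitude is solved for in closed form."""
    best_z, best_chi2 = None, np.inf
    for z in zgrid:
        model = template(z)
        # amplitude minimising chi^2 for this trial redshift
        amp = np.sum(fluxes * model / errors**2) / np.sum(model**2 / errors**2)
        chi2 = np.sum(((fluxes - amp * model) / errors) ** 2)
        if chi2 < best_chi2:
            best_z, best_chi2 = z, chi2
    return best_z, best_chi2
```

With noiseless synthetic fluxes generated from the template itself, the fit recovers the input redshift exactly, which makes this a convenient self-test of the machinery.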
We are finally left with 70707 sources that constitute approximately 29 $\%$ of the initial number of sources.
The redshift distribution of the background sample is shown in Fig. \ref{fig:histograms} (top panel, black line). The mean redshift of the sample is $\left< z\right> = 2.3_{-0.5}^{+0.4}$ (the uncertainty indicates the $1\sigma$ limits).
The potential effect of blazars or local galaxy interlopers is considered negligible: the number of detectable blazars is vanishingly small, while local galaxies would have photometric redshifts much lower than 1 or, even in the event of a catastrophic photometric redshift failure with resolved individual star-forming regions with abnormal temperatures, they would have redshifts lower than those of the clusters themselves \citep[see][for more details]{GON10,Lap11,GON12,LOP13}.
As for the potential lenses, the galaxy cluster sample has been extracted from the catalogue presented in \citet{WEN12} (hereafter WHL12), which contains $132684$ galaxy clusters from the Sloan Digital Sky Survey III (SDSS-III) with given photometric redshifts in the range of $0.05\leq z< 0.8$.
We select those objects corresponding to the NGP region and the three H-ATLAS GAMA fields. This leads to a total of $3651$ galaxy clusters, which constitute our sample of target lenses. Figure \ref{fig:histograms} (top) shows in red the redshift distribution of the foreground sources. The mean redshift of the sample is $\langle z\rangle=0.38$.
Furthermore, following \cite{BAU14}, we divide the galaxy clusters into five bins according to the richness information provided in WHL12. The cluster richness estimated by WHL12 is defined as $R=L_{200}/L_*$ with $L_{200}$ as the total r-band luminosity within the radius $r_{200}$ (the radius where the mean density of a cluster is 200 times the critical density of the Universe). $L_*$ is the evolved characteristic luminosity of galaxies in the r-band, defined as $L_*(z)=L_*(z=0)10^{0.4Qz}$, adopting a passive evolution with $Q=1.62$ \citep{Bla03}.
Table \ref{tab:richness} shows the number of target lenses in each bin and the associated richness range. The redshift distributions of the richness subsamples are depicted and compared in Fig. \ref{fig:histograms} (bottom panel).
\begin{table}[h]
\centering
\caption{Cluster sample information for different richness ranges.}
\begin{tabular}{ccrr}
\hline\hline
Bin number&Richness& \# Targets & \# CG pairs\\
\hline
Total& 12-220 & 3651 & 11789\\
1& 12-17&1977 & 6158\\
2& 18-25& 1102 & 3723\\
3&26-40& 430 & 1427\\
4&41-70& 127 & 424\\
5&71-220& 15 & 57\\
\hline\hline
\end{tabular}
\label{tab:richness}
\tablefoot{Richness subdivision of the cluster sample with the number of targets in each richness bin and the number of cluster--galaxy pairs. The richness ranges are chosen following \citet{BAU14}.}
\end{table}
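The richness definition and the passive evolution of $L_*$ adopted above can be sketched as follows (illustrative only; luminosities are expressed in units of $L_*(z=0)$):

```python
def lstar(z, lstar0=1.0, q=1.62):
    """Evolved characteristic r-band luminosity,
    L*(z) = L*(0) * 10**(0.4 * Q * z), with the passive-evolution
    slope Q = 1.62 adopted in the text."""
    return lstar0 * 10.0 ** (0.4 * q * z)

def richness(l200, z):
    """Cluster richness R = L_200 / L*(z) as defined in WHL12."""
    return l200 / lstar(z)
```

At the mean lens redshift $z = 0.38$, for instance, $L_*$ has brightened by a factor of about 1.76 relative to $z = 0$, so the same $L_{200}$ corresponds to a proportionally lower richness than it would locally.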
\begin{figure}[ht]
\includegraphics[width=0.49\textwidth]{PLOTS/histogram2.png}
\caption{Top: Redshift distribution of the target lenses from the \citet{WEN12} catalogue (in red) and the background sources from the H-ATLAS sample (in black). Bottom: Redshift distribution of the galaxy clusters for each of the five richness bins.}
\label{fig:histograms}
\end{figure}
\section{Measurements}
\label{sec:method}
\subsection{Stacking}
\label{sec:stack}
Stacking is a statistical method that proves useful when the desired signal is frequent but too weak: by adding up many regions of the sky centred in previously selected positions \citep[see][]{Dol06,Mar09, Bet12}, the signal is enhanced.
In this way, overall statistical information might be obtained when the single event does not have a sufficiently high signal-to-noise ratio (S/N) to be detected.
Similar to what is done in \cite{BON19}, the signal of interest in this work is the cross-correlation due to the magnification bias: the excess of detected background sources within a certain angular separation of the lenses with respect to the random scenario. It should be stressed that, as we are looking for the number of background sources near the lens positions, we stack the source positions, not their flux densities. This is a very similar approach to the traditional cross-correlation function (CCF) estimator, with the additional advantage that it accounts for positional errors and identifies the foreground--background pairs in the stacked map.
\cite{BON19} derived the stacked magnification bias of lensed SMGs in lens positions signposted by QSOs. In this case, we study the stacked magnification bias produced by clusters acting as lenses on background SMGs. Given a lens, we search for background sources within a circular region centred on its position and within an angular radius of $250$ arcsec. In this way, we obtain a map of 500 $\times$ 500 pixels (the chosen pixel size is 1 arcsec) centred at the target position (the position of the brightest cluster galaxy, or BCG, of each galaxy cluster) containing the nearest background sources of the lens, hereafter referred to as cluster--galaxy (CG) pairs.
This procedure is repeated for all the clusters in the target sample and all the maps are then added to obtain the final stacked map, which is normalised to the number of clusters (the 3651 total targets). Then, a $\sigma=2.4$ arcsec Gaussian filter is applied to the map in order to take into account the positional accuracy present in the catalogues. As the cluster centre positional uncertainty is negligible compared to that of the SMGs (the SDSS positional accuracy is better than 0.1 arcsec), the selected $\sigma$ value corresponds to the positional uncertainty estimated for the H-ATLAS catalogues \citep[][]{BOU16,MAD18}.
The smoothing step is equivalent to replacing the single pixel at the position of every CG pair with a 2D isotropic Gaussian centred on that pixel, to take into account the positional uncertainty (i.e. the fact that the background galaxy might not be exactly at the position that appears in the catalogue). This additional step is taken because random positional displacements towards the lens would produce higher excess probabilities than displacements in the opposite direction, introducing an observational bias (a different mass density shape or concentration, and therefore mass) that is more important at the smallest angular scales.
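As a rough illustration, the stacking-plus-smoothing procedure can be sketched as follows. This is a toy, pure-Python version with a reduced map size; the function name and the integer pixel offsets are our own illustrative choices, and it exploits the equivalence described above by accumulating a unit-area Gaussian per CG pair directly (the real maps are $500\times500$ pixels of 1 arcsec):

```python
import math

def stack_cg_pairs(pair_offsets, n_targets, half_size=250, sigma=2.4):
    """Toy sketch of the stacking: each cluster--galaxy pair contributes
    a unit-area 2D isotropic Gaussian (sigma in pixels, with 1 pixel =
    1 arcsec) centred on its integer (dx, dy) offset from the lens, and
    the summed map is normalised to the number of target clusters."""
    size = 2 * half_size + 1
    stacked = [[0.0] * size for _ in range(size)]
    amp = 1.0 / (2.0 * math.pi * sigma ** 2)
    trunc = int(4 * sigma) + 1          # truncate the kernel at ~4 sigma
    for dx, dy in pair_offsets:
        cx, cy = half_size + dx, half_size + dy
        for ix in range(max(0, cx - trunc), min(size, cx + trunc + 1)):
            for iy in range(max(0, cy - trunc), min(size, cy + trunc + 1)):
                d2 = (ix - cx) ** 2 + (iy - cy) ** 2
                stacked[ix][iy] += amp * math.exp(-d2 / (2.0 * sigma ** 2))
    return [[v / n_targets for v in row] for row in stacked]
```

Because each pair carries unit total weight, the stacked map integrates to the mean number of CG pairs per target, as in the procedure described above.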
The resulting map with the identified CG pairs is plotted in the top panel of Fig. \ref{fig:totaldensitymap}. The corresponding bottom panel shows the expected signal in the absence of lensing: random cluster positions are simulated and the corresponding distribution maps of CG pairs are produced. To these maps, we apply the same procedure as that applied to the data. As in \citet{BON19}, we simulate ten times the total number of targets, namely 36510, to obtain a homogeneous random map (as we average the results over the number of targets, their total number becomes irrelevant, apart from when calculating statistical uncertainties).
For both panels of Fig. \ref{fig:totaldensitymap}, we decided to use a colour scale that represents the relative excess probability of finding a CG pair with respect to the random mean value (stacked pairs/random mean - 1). The mean and standard deviation per pixel of the random stacked image are $1.5\times10^{-5}$ and $2.5\times10^{-6}$, respectively.
By comparing both images we can extract some preliminary conclusions.
There is an excess of CG pairs with respect to the random alignment case (at least twice as probable, i.e. more than five times the expected random statistical deviation of 0.16), especially in the central region, where a much higher probability is clearly visible (a peak relative excess of 4.76, or about 30 times the random statistical deviation), corresponding to a larger lensing effect. As discussed in more detail below, even if most of our signal is in general produced by weak lensing, this stronger excess of CG pairs below 10--20 arcsec is due to the strong lensing effect. At larger angular distances, the distribution of CG pairs is almost isotropic, even if not completely homogeneous. Moreover, its intensity tends to decrease towards the border, as expected.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{PLOTS/maps.png}
\caption{Bottom: Relative excess probability (stacked image/random mean - 1) of random pairs, obtained by placing random targets at the centre and considering the background sources within an angular radius of $250$ arcsec from the position of the target. The pixel size is $1$ arcsec and we apply a Gaussian filter with $\sigma=2.4$ arcsec to take into account the positional uncertainties (see text for more details). The mean and standard deviation per pixel of the random stacked image are $1.5\times10^{-5}$ and $2.5\times10^{-6}$.
Top: Relative excess probability (stacked image/random mean - 1) for the actual CG pairs using the same radius and pixel size as for the random case and smoothed with the same Gaussian filter.}
\label{fig:totaldensitymap}
\end{figure}
Furthermore, we repeat the same procedure but considering only the galaxy clusters in each of the richness bins that were described earlier. The relative excess probability images of stacked CG pairs are shown in Fig. \ref{fig:bindensitymap}.
The number of CG pairs is typically more than three times the number of targets (see Table \ref{tab:richness}). As expected, this number decreases proportionally to the number of targets in each richness bin. We find a marginal tendency for more massive clusters to have more pairs per target.
For the bins with the lowest richness, and therefore a larger number of targets and CG pairs, the images are more or less similar to the total case.
In fact, bins 1, 2, and 3 show a higher density in the centre and an almost isotropic distribution of CG pairs at angular distances $<250$ arcsec. In bins 4 and 5, the distribution is much sparser owing to the poorer statistics (see the number of targets and CG pairs in Table \ref{tab:richness}). As already discussed for the total case, most of the signal is produced by the weak lensing effect, except at angular scales below 10--20 arcsec, where the strong lensing effect causes the higher density region located close to the centre in all the bins. In bin 4, and especially bin 5, this effect is less evident, which is expected given the lower statistics and the fact that strong lensing is a rare event. However, it is remarkable that even bin 5 (the one with the highest richness and just 15 galaxy clusters) retains a reasonable number of pairs: with $57$ CG pairs, each target galaxy cluster has more than 3 CG pairs on average.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{PLOTS/imgbins.png}
\caption{Relative excess probability (stacked image/random mean - 1) of CG pairs for each richness bin, in analogy to the total case in Fig. \ref{fig:totaldensitymap}. The maps are for bins 1 to 5, in the panels from left to right, top to bottom. The colour bars have the same meaning as in Fig. \ref{fig:totaldensitymap}.}
\label{fig:bindensitymap}
\end{figure}
\subsection{Estimation of the cross-correlation function}
\label{sec:xcorr}
We analyse the stacked images through the measurement of the CCF, which allows us to extract physical information about the average properties of the lensing system in general and the lens sample in particular. For this reason, we estimate the CCF using the stacked CG pair maps as in \cite{BON19} instead of applying the traditional methodology as in \cite{GON14,GON17}.
We draw a finite set of concentric circles centred at the central position of the images. The radii increase in logarithmic steps of 0.05 dex, starting at $1$ arcsec (the first measurements are limited by the pixel size). This defines one initial circle and a set of rings. The pixel values in each circular annulus are added up as DD. The same procedure is applied to the random map (RR). The standard estimator \citep{Dav83} is then calculated as:
\begin{equation}
\tilde{w}_{x}(\theta)=\frac{\text{DD}}{\text{RR}}-1
\label{eq:wx}
.\end{equation}
The errors on $\tilde{w}_{x}$ were calculated in two steps: first, we divided each ring into $15$ equal sections, except for the first four rings where there are no more than eight pixels (in these cases, we divided the rings into fewer sections). We then applied a Jackknife method as in \citet{BON19} to estimate the uncertainties for $\text{DD}$ and $\text{RR}$. Finally, we used error propagation according to eq. \eqref{eq:wx}.
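The ring summation and the estimator of eq. \eqref{eq:wx} can be sketched as follows. This is a minimal toy implementation on small square maps (the jackknife resampling is omitted, and `ring_ccf` and its arguments are hypothetical names):

```python
import math

def ring_ccf(data_map, random_map, theta_min=1.0, dlog=0.05):
    """Sketch of the estimator w_x = DD/RR - 1: pixel values of the
    stacked data map (DD) and of the random map (RR) are summed in
    concentric logarithmic annuli (0.05 dex wide, starting at
    theta_min pixels) around the central pixel.  Maps are square
    lists of lists with the lens at the centre."""
    size = len(data_map)
    c = size // 2
    edges = [theta_min]
    while edges[-1] < c:
        edges.append(edges[-1] * 10.0 ** dlog)
    n_bins = len(edges) - 1
    dd = [0.0] * n_bins
    rr = [0.0] * n_bins
    for ix in range(size):
        for iy in range(size):
            r = math.hypot(ix - c, iy - c)
            if r < edges[0] or r >= edges[-1]:
                continue
            # index of the logarithmic annulus containing radius r
            k = min(int(math.log10(r / theta_min) / dlog), n_bins - 1)
            dd[k] += data_map[ix][iy]
            rr[k] += random_map[ix][iy]
    return [d / r_ - 1.0 if r_ > 0 else float("nan")
            for d, r_ in zip(dd, rr)]
```

In the actual analysis each annulus is further split into (up to) 15 sections to build the jackknife realisations from which the DD and RR uncertainties are estimated.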
The estimated CCFs are shown in Figs. \ref{fig:SISSA_plots} and \ref{fig:NFW_plots} (black points) for the five richness bins (numbered 1 to 5, from left to right, top to bottom) and the total case (bottom right panel).
With the current stacking configuration, we obtain CCFs from 2 to 250 arcsec. These angular scales correspond to $\sim 10$ kpc to $\sim 1.3$ Mpc at $z=0.38$. It should be noted that, by using a single method, we are able to study the mass density profile over a wide range of physical scales. This is an interesting novel characteristic of this methodology with respect to the strong and weak lensing analyses of individual clusters, which have to be combined to cover a comparable range of angular scales: strong lensing can only be measured in the central part of the clusters, where the strong lensing features are produced, while weak lensing cannot be measured towards the most central part of the cluster because of the influence of the central galaxies.
In addition, the CCFs confirm the conclusions derived preliminarily from the images: a stronger signal at the smallest angular separations that decreases logarithmically towards the largest ones. Moreover, the maximum of the CCFs increases with richness (mass), as expected for an effect related to gravitational lensing. As is clear from the measurements, the CCFs of the bins with the highest richness (4 and 5) show strong oscillations due to the lack of CG pairs. We discuss this matter in more detail in Sect. \ref{sec:feature}.
\section{Theoretical framework}
\label{sec:denprof}
In order to extract physical information about the mass distribution in a galaxy cluster halo from the measured magnification bias, we need to rely on a theoretical framework that firstly connects the observed cross-correlation function with the gravitational lensing amplification, and secondly relates this amplification to a mass density profile. The analysis then consists in determining the mass density profile (or a combination of two profiles) that best explains the observations, which can in turn be used to infer the physical characteristics of the halo, such as its mass or concentration.
\subsection{Gravitational lensing and the cross-correlation function}
\label{sec:theory}
Let
\begin{equation}
n_0(>S,z)\equiv \int_S^{\infty} \frac{dN}{dSdz\,d\Omega}dS
\end{equation}
denote the unlensed integrated background source number counts, $n_0$, which is the number of background sources per solid angle and redshift with observed flux density larger than $S$ in the absence of gravitational lensing. Due to the influence of foreground lenses, this quantity is modified at every angular position in the image on account of two separate effects, namely a magnification that allows fainter sources to be observed and a dilution that enlarges the solid angle subtended by the sources in question. More precisely, at an angular position $\vec{\theta}$ within an image, we have \citep{Bar01}
\begin{equation}
n(>S,z;\vec{\theta})=\frac{1}{\mu(\vec{\theta})}n_0\Big(>\frac{S}{\mu(\vec{\theta})},z \Big),\label{numbercounts1}
\end{equation}
where $\mu(\vec{\theta})$ is the magnification field at angular position $\vec{\theta}$.
Assuming a redshift-independent power-law behaviour of the unlensed integrated number counts, that is, $n_0(>S,z)=AS^{-\beta}$,
\eqref{numbercounts1} becomes
\begin{equation}
\frac{n(>S,z;\, \vec{\theta})}{n_0(>S,z)}=\mu^{\,\beta-1}(\vec{\theta}).
\end{equation}
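For power-law counts, this $\mu^{\beta-1}$ scaling can be checked directly against Eq. \eqref{numbercounts1}; a minimal numeric sketch (the function names, and the normalisation $A$ and slope $\beta$ values, are arbitrary illustrative choices):

```python
def n0(S, A=1.0, beta=2.5):
    """Unlensed integrated counts, n_0(>S) = A * S**(-beta)."""
    return A * S ** (-beta)

def n_lensed(S, mu, A=1.0, beta=2.5):
    """Lensed counts: n(>S) = (1/mu) * n_0(>S/mu), combining the
    flux magnification and the solid-angle dilution."""
    return n0(S / mu, A, beta) / mu
```

The ratio `n_lensed(S, mu) / n0(S)` is independent of both $S$ and $A$ and equals $\mu^{\beta-1}$, as in the equation above.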
As we aim to relate the magnification field to a direct observable based on galaxy counting, from the point of view of a lens at a certain redshift $z_d$, the quantity $n(>\!\!S,z_s;\vec{\theta})/n_0(>\!\!S,z_s;\vec{\theta})$ can be interpreted as the excess (or deficit) of background sources (in direction $\vec{\theta}$ relative to the lens) at redshift $z_s>z_d$ with respect to what would be expected in the absence of lensing. Indeed, the angular cross-correlation function between a set of foreground lenses at redshift $z_d$ and a set of background sources at redshift $z_s$ is defined as
\begin{equation}
w_x(\vec{\theta};z_d,z_s)\equiv \langle\delta n_f(\vec{\phi},z_d)\,\delta n_b(\vec{\phi}+\vec{\theta},z_s)\rangle,
\end{equation}
where $\delta n_f$ is the foreground galaxy density contrast, which is due to pure clustering, and
\begin{equation}
\delta n_b(\vec{\theta},z)=\frac{n(>S,z;\vec{\theta})}{n_0(>S,z;\vec{\theta})}-1=\mu^{\,\beta-1}(\vec{\theta})-1
,\end{equation}
is the background galaxy density contrast, which is due to magnification. As we are stacking the lenses at a fixed position, which we take as the origin, we have
\begin{equation}
w_x(\vec{\theta},z_d,z_s)=\mu^{\,\beta-1}(\vec{\theta})-1.\label{crossmag}
\end{equation}
The magnification field can in turn be written in terms of the convergence ($\kappa$) and the shear $(\vec{\gamma})$ fields, which describe the local matter density and the tidal gravitational field, respectively. Indeed, we have \citep{Bar01}
\begin{equation}
\mu(\vec{\theta})=\frac{1}{(1-\kappa(\vec{\theta}))^2-|\vec{\gamma}(\vec{\theta})|^2}.\label{magkapgam}
\end{equation}
Therefore, plugging \eqref{magkapgam} into \eqref{crossmag} yields a relation between the angular cross-correlation function and the convergence and shear fields, which are determined by the mass density profile of the lens.
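Combining these relations, the model prediction for the cross-correlation at a given position follows from the local convergence and shear alone; a minimal sketch (the function name is ours):

```python
def wx_from_fields(kappa, gamma, beta):
    """Cross-correlation at a given angular position, combining
    w_x = mu**(beta - 1) - 1 with mu = 1/((1 - kappa)**2 - |gamma|**2).
    `beta` is the slope of the unlensed integrated counts."""
    mu = 1.0 / ((1.0 - kappa) ** 2 - gamma ** 2)
    return mu ** (beta - 1.0) - 1.0
```

In the weak lensing regime ($\kappa,|\gamma|\ll 1$), this reduces to $w_x \approx 2(\beta-1)\kappa$, so for $\beta>1$ an overdensity of matter produces a positive cross-correlation.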
\subsection{Mass density profiles}
\label{sec:profiles}
Let us assume that a lens at an angular diameter distance $D_d$ from the observer deflects the light rays from a source at an angular diameter distance $D_s$. If $\vec{\theta}=\vec{\xi}/D_d$ denotes the angular position of a point on the image plane, then the convergence at that point, $\kappa(\vec{\theta})$, is defined as a dimensionless surface mass density, that is,
\begin{equation}
\kappa(\vec{\theta})\equiv\frac{\Sigma(D_d\vec{\theta})}{\Sigma_{\text{cr}}},
\end{equation}
where $\Sigma(\vec{\xi})$ is the mass density projected onto a plane perpendicular to the incoming light ray, and
\begin{equation}
\Sigma_{\text{cr}}=\frac{c^2}{4\pi G}\frac{D_s}{D_dD_{ds}}
\end{equation}
is the so-called critical surface mass density, where $D_{ds}$ is the angular diameter distance from the lens to the background source.
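In SI units, the critical surface mass density can be evaluated directly; a short sketch (the function name is ours, and distances are assumed to be given in metres):

```python
import math

def sigma_crit_si(d_s, d_d, d_ds):
    """Critical surface mass density, Sigma_cr = c^2/(4 pi G) * D_s/(D_d D_ds),
    in kg m^-2 for angular diameter distances in metres."""
    G = 6.67430e-11   # m^3 kg^-1 s^-2
    c = 299792458.0   # m s^-1
    return c ** 2 / (4.0 * math.pi * G) * d_s / (d_d * d_ds)
```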
If a lens is axially symmetric, that is, if $\Sigma(\vec{\xi})=\Sigma(\xi)$, then choosing the symmetry centre as the origin, we have $\kappa(\vec{\theta})=\kappa(\theta)$ and the magnification field is given by
\begin{equation}\label{eq:mu}
\mu(\theta)=\frac{1}{(1-\bar{\kappa}(\theta))(1+\bar{\kappa}(\theta)-2\kappa(\theta))},
\end{equation}
where $\bar{\kappa}(\theta)$ is the mean surface mass density inside the angular radius $\theta$.
\subsubsection{Navarro-Frenk-White profile}
Let us now assume that the mass of the lens is dominated by dark matter. The best known model for describing its mass density is the Navarro-Frenk-White (NFW) profile \citep{NAV96},
\begin{equation}
\rho_{\text{NFW}}(r;r_s,\rho_s)=\frac{\rho_s}{(r/r_s)(1 + r/r_s)^2},
\end{equation}
where $r_s$ and $\rho_s$ are the scale radius and density parameters, respectively. If we identify halos at redshift $z$ with spherical regions with a mean overdensity of $200\rho_c(z)$, with $\rho_c(z)$ the critical density of the Universe at redshift $z$, then
\begin{equation}
\frac{\rho_s}{\rho_c(z)}=\frac{200}{3}\frac{C^3}{\ln{(1+C)}-C/(1+C)},
\end{equation}
where $C=C(M_{200c},z)\equiv R_{200}/r_s$ is the mean concentration of a halo of mass $M_{200c}$ identified at redshift $z$, and $R_{200}$ is its radius. Throughout the paper, in order to avoid confusion between the different kinds of masses, we use $M_{\text{NFW}}$ to indicate the NFW $M_{\text{200c}}$ mass of a halo.
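This relation between the concentration and the characteristic density is straightforward to evaluate; a minimal sketch (the function name is ours):

```python
import math

def rho_s_over_rho_c(C):
    """Characteristic NFW density in units of the critical density,
    for halos defined at 200x the critical density:
    (200/3) * C**3 / (ln(1 + C) - C/(1 + C))."""
    return (200.0 / 3.0) * C ** 3 / (math.log(1.0 + C) - C / (1.0 + C))
```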
This profile satisfies \citep{SCH06}:
\begin{equation}
\kappa_{\text{NFW}}(\theta)=\frac{2r_s\rho_s}{\Sigma_{\text{cr}}}f(\theta/\theta_s)\quad\quad\quad \bar{\kappa}_{\text{NFW}}(\theta)=\frac{2r_s\rho_s}{\Sigma_{\text{cr}}}h(\theta/\theta_s),
\end{equation}
where $\theta_s\equiv r_s/D_d$ is the angular scale radius,
\begin{equation}
f(x)\equiv \begin{cases} \frac{1}{x^2-1}-\frac{\arccos{(1/x)}}{(x^2-1)^{3/2}}\quad\quad&\text{if } x>1\\
\,\,\frac{1}{3} \quad\quad&\text{if } x=1\\
\frac{1}{x^2-1}+\frac{\text{arccosh}(1/x)}{(1-x^2)^{3/2}} \quad\quad&\text{if } x<1
\end{cases}
\end{equation}
and
\begin{equation}
h(x)\equiv \begin{cases} \frac{2}{x^2}\Big(\frac{\arccos{(1/x)}}{(x^2-1)^{1/2}}+\log{\frac{x}{2}}\Big)\quad\quad&\text{if } x>1\\
\,{\scriptstyle 2\,(1-\log{2})} \quad\quad&\text{if } x=1\\
\frac{2}{x^2}\Big(\frac{\text{arccosh }(1/x)}{(1-x^2)^{1/2}}+\log{\frac{x}{2}}\Big) \quad\quad&\text{if } x<1
\end{cases}.
\end{equation}
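The functions $f$ and $h$, together with Eq. \eqref{eq:mu}, give the NFW magnification profile. A direct transcription in Python (the function names and the dimensionless normalisation argument $k_s \equiv 2 r_s\rho_s/\Sigma_{\text{cr}}$ are our own conventions):

```python
import math

def nfw_f(x):
    """Dimensionless NFW convergence shape f(x)."""
    if x > 1.0:
        return 1.0 / (x * x - 1.0) - math.acos(1.0 / x) / (x * x - 1.0) ** 1.5
    if x < 1.0:
        return 1.0 / (x * x - 1.0) + math.acosh(1.0 / x) / (1.0 - x * x) ** 1.5
    return 1.0 / 3.0

def nfw_h(x):
    """Mean convergence shape h(x) inside dimensionless radius x."""
    if x > 1.0:
        return 2.0 / (x * x) * (math.acos(1.0 / x) / math.sqrt(x * x - 1.0)
                                + math.log(x / 2.0))
    if x < 1.0:
        return 2.0 / (x * x) * (math.acosh(1.0 / x) / math.sqrt(1.0 - x * x)
                                + math.log(x / 2.0))
    return 2.0 * (1.0 - math.log(2.0))

def mu_nfw(theta, theta_s, k_s):
    """Magnification mu = 1/((1 - kbar)(1 + kbar - 2 kappa)) for an
    axially symmetric NFW lens, with kappa = k_s f and kbar = k_s h."""
    x = theta / theta_s
    kappa = k_s * nfw_f(x)
    kbar = k_s * nfw_h(x)
    return 1.0 / ((1.0 - kbar) * (1.0 + kbar - 2.0 * kappa))
```

As a consistency check, $h(x)=(2/x^2)\int_0^x t\,f(t)\,dt$, i.e. $h$ is indeed the mean of $f$ inside radius $x$.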
\subsubsection{Singular isothermal sphere profile}
Another option for parametrising the halo density profile is the singular isothermal sphere (SIS) profile, given by
\begin{equation}
\rho_{\text{SIS}}=\frac{\sigma_v^2}{2\pi Gr^2},
\end{equation}
which corresponds to a system of particles whose velocity distribution at every radius follows a Maxwell-Boltzmann law with one-dimensional velocity dispersion $\sigma_v$. This profile satisfies \citep{SCH06}
\begin{equation}
\kappa_{\text{SIS}}=\frac{\theta_E}{2|\theta|}\quad\quad\bar{\kappa}_{\text{SIS}}(\theta)=\frac{\theta_E}{|\theta|},
\end{equation}
where
\begin{equation}
\theta_E=4\pi\,\bigg(\frac{\sigma_v}{c}\bigg)^2\frac{D_{ds}}{D_s}
\end{equation}
is the Einstein radius of the model. When the angular separation approaches the Einstein radius, the magnification diverges, producing an `Einstein ring'.
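For the SIS profile, Eq. \eqref{eq:mu} simplifies analytically; a short sketch (the function names are ours, and $\theta$ and $\theta_E$ are in the same angular units):

```python
import math

def mu_sis(theta, theta_e):
    """SIS magnification from mu = 1/((1 - kbar)(1 + kbar - 2 kappa))
    with kappa = theta_e/(2 theta) and kbar = theta_e/theta; it reduces
    to theta/(theta - theta_e) and diverges at the Einstein radius."""
    kbar = theta_e / theta
    kappa = 0.5 * kbar
    return 1.0 / ((1.0 - kbar) * (1.0 + kbar - 2.0 * kappa))

def einstein_radius_arcsec(sigma_v_kms, dds_over_ds):
    """Einstein radius theta_E = 4 pi (sigma_v/c)^2 D_ds/D_s, in arcsec."""
    c_kms = 299792.458
    theta_rad = 4.0 * math.pi * (sigma_v_kms / c_kms) ** 2 * dds_over_ds
    return math.degrees(theta_rad) * 3600.0
```

For instance, a cluster-scale velocity dispersion of $\sigma_v=1000$ km s$^{-1}$ with $D_{ds}/D_s=1$ gives $\theta_E\approx 29$ arcsec, which illustrates why, for the typical halo masses considered here, the divergence affects only the innermost angular bins.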
\begin{table*}[ht]
\centering
\caption{Estimated parameters of the best-fit mass density profile scenarios for each richness range.}
\begin{tabular}{cccccccc}
\hline\hline
& & Bin 1 & Bin 2 & Bin 3 & Bin 4 & Bin 5 & Total\\
\hline
& $M_{SIS} [10^{13} M_\odot]$ & 0.5 & 0.6 & 0.6 & 0.6 & 1.0 & 0.5\\
SIS+NFW & $M_{NFW} [10^{13} M_\odot]$ & 4.9 & 5.3 & 10.1 & 14.0 & 51.5 & 5.5 \\
& $C$ & 0.94 & 0.30 & 1.17 & 0.65 & 0.56 & 1.84\\
\hline
Outer & $M_{NFW} [10^{13} M_\odot]$ & 5.8 & 7.9 & 11.2 & 27.4 & 51.5 & 7.1 \\
($\gtrsim 100$ kpc) & $C$ & 0.74 & 0.39 & 1.00 & 1.74 & 0.56 & 1.72\\
\hline
Inner & $M_{NFW} [10^{13} M_\odot]$ & 3.8 & 2.3 & 7.2 & 1.0 & 1.0 & 4.1\\
($\lesssim 100$ kpc) & $C$ & 3.63 & 6.83 & 3.81 & 11.91 & 14.8 & 4.17\\
\hline
Inner + Outer & $M [10^{13} M_\odot]$ & 9.6 & 10.2 & 18.4 & 28.4 & 52.5 & 11.2\\
\hline
& $\langle R\rangle$ & 14.6 & 20.9 & 31.4 & 50.4 & 91.4 & 20.0\\
From & $\langle M_{200}\rangle [10^{13} M_{\odot}]$ & 7 & 11 & 18 & 32 & 64 & 11 \\
catalogue & $\langle z \rangle$ & 0.38 & 0.39 & 0.37 & 0.32 & 0.24 & 0.38\\
& scale [kpc/"] & 5.42 & 5.51 & 5.33 & 4.85 & 3.96 & 5.42\\
\hline\hline
\end{tabular}
\tablefoot{Estimated mass and concentration with the NFW+SIS and NFW profiles for the outer and inner parts and the total mass combining them (from top to bottom) for each richness bin and for the total case (from left to right). The last rows provide the average richness, mass, and redshift estimated from the catalogue and the corresponding scale factor.}
\label{tab:datos}
\end{table*}
\section{Results}
\label{sec:results}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_bin1.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_bin2.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_bin3.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_bin4.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_bin5.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFWplusSIS_best_fit_combined_total.png}
\caption{NFW+SIS fits (black line) to the cross-correlation data (black points) for bins 1 to 5 (from left to right, top to bottom) and the general case (bottom right). The corresponding SIS (blue lines) and NFW (green lines) fits are also shown separately. As discussed in more detail in Sects. \ref{sec:sisnfw} and \ref{sec:feature}, the best fits are always below the data for angular separations of between 5 and 10 arcsec and there is a potential lack of power at $\sim10$ and $\sim25$ arcsec. Grey points are considered outliers and are not taken into account for the analysis. The dashed lines for bin 5 indicate that the values were chosen by hand in order to produce a reasonable fit because the fitting algorithm does not converge (see text for more details).}
\label{fig:SISSA_plots}
\end{figure*}
\subsection{Mass density profile fits}
\label{sec:profile_fits}
In order to analyse the measured CCFs and to extract physical conclusions, we fit the data to different combinations of the two common mass density profiles discussed above. In the theoretical modelling, we take into account the same smoothing that was applied to the data on account of the positional uncertainty of the background sample.
The different fits to the data produced in this work clearly show that a single mass density profile is unable to fit the data at all scales. Taking into account previous related analyses \citep[e.g. ][]{JOH07, BAU14, LAP12}, in this work we interpret the inner excess as the contribution of a galactic halo associated with the BCG, which is common in the centre of galaxy clusters.
A summary of the best-fit values for the different masses and concentrations is shown in Table \ref{tab:datos}. Additional useful information from the cluster catalogue can also be found in the same table: mean redshift, richness, and mass for each richness bin and the total sample.
\subsubsection{SIS+NFW fit}\label{sec:sisnfw}
We first combine an SIS profile, describing the BCG contribution (a galactic dark matter halo plus a central stellar component), with an NFW profile for the contribution from the cluster halo (see Fig. \ref{fig:SISSA_plots}, black solid line). The black dots correspond to the CCFs obtained with stacking in each case.
We do not impose any angular scale restriction on either of the two profiles; it naturally arises from the different shapes of each profile. In addition, we notice that some bins show important fluctuations in the data with respect to the main trend. In order to derive reasonable fits, we decide to omit the clearest outliers (grey points in Figs. \ref{fig:SISSA_plots} and \ref{fig:NFW_plots}), that is, those measurements with a very low value compared to the adjacent ones. Otherwise, the fitting algorithm tries to take these points into account providing an unreasonable fit below the main trend indicated by the rest of the data. Removing additional data does not further affect the behaviour of the algorithm and always provides the same results. See Sect. \ref{sec:feature} for a more detailed discussion on the potential physical interpretation of these fluctuations.
Moreover, the low number of CG pairs in the stacked images for bin 5 causes the strong oscillatory behaviour in the measured data. This issue prevents the fitting algorithm from converging and therefore the selected values are chosen by hand in order to produce a reasonable fit. This fact is indicated by the use of dashed lines in Figs. \ref{fig:SISSA_plots} and \ref{fig:NFW_plots}.
From the NFW profile (green lines), we find that the estimated masses for each richness bin increase monotonically from M$_{\text{NFW}}=4.9\times 10^{13}M_{\odot}$ to M$_{\text{NFW}}=51.5\times 10^{13}M_{\odot}$ (M$_{\text{NFW}}=5.5\times 10^{13}M_{\odot}$ for the total sample). These values are always lower than the estimated ones for the mean richness in each bin, using the WHL12 mass-richness relationship (see Table \ref{tab:datos}). The retrieved concentration values do not show any clear trend with richness and they are much smaller than the expected values from the most common mass--concentration relationships \citep[e.g.][]{MAN08,DUT14,CHI18}.
In the case of the SIS profile (blue lines), angular separations of a few arcseconds are close to the Einstein radius for the range of typical halo masses discussed in this work. As a consequence, both the magnification and the CCF diverge.
The smoothing that is then required produces a very particular mass density profile \citep[as already discussed in][]{BON19} that contributes only to the innermost data.
Moreover, the derived masses, M$_{\text{SIS}}\sim 0.6\times 10^{13}M_{\odot}$ in all cases, are much smaller than the ones expected for a typical BCG. In fact, an effective halo mass of M$_{\text{eff}}=2.8-4.4\times10^{13}M_{\odot}$ is derived from the modelling of the luminous red galaxy (LRG) angular correlation function \citep{BLA08}. Similarly, consistent masses are also estimated from the analysis of their large-scale redshift-space distortions \citep[$M=3.5^{+1.8}_{-1.4}\times10^{13}M_{\odot}$, ][]{CAB09,BAU14}. BCGs are expected to have physical characteristics similar to those of LRGs.
Therefore, this combination of profiles, NFW+SIS, provides a good overall fit to the different sets of data. For the total case, the fit correctly describes the data at all angular scales, but there is an issue at intermediate angular scales that is easy to identify when the sample is divided into different richness bins: for angular scales of between $\sim$5 and 10 arcsec, the fits are consistently below the data.
\subsubsection{Inner and outer independent fits}
For the reasons given above, we decide to perform independent analyses for the data at small (red line; inner part) and large (blue line; outer part) angular scales (see Fig. \ref{fig:NFW_plots}). In this case, we use a NFW for each of the regimes. We set the boundary (vertical dotted line in each panel) between the two regimes at around 10--20 arcsec ($\sim$ 52--105 kpc at z=0.38), where there is an unexpected increase in the CCFs. This feature is clearly seen for the highest richness bins but it is only a fluctuation in the total case.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_bin1.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_bin2.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_bin3.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_bin4.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_bin5.png}
\includegraphics[width=0.4\textwidth]{PLOTS/NFW_best_fit_combined_total.png}
\caption{NFW fits to the cross-correlation data for bins 1 to 5 (from left to right, top to bottom) and the general case (bottom right). The red line corresponds to the fit to the points at the small scales only and the blue one is for large scales. The grey dashed lines show the NFW profile using the mass--concentration relationship by \citet{MAN08} and, as explained in the text, it is not a best fit to the data; neither is the red dashed line for bin 5 (the parameter values are chosen by hand in order to produce a reasonable fit). The vertical dotted lines separate the two regimes that have been studied for each case. Grey points are considered outliers and are not taken into account for the analysis.}
\label{fig:NFW_plots}
\end{figure*}
If we focus first on the large angular scales, we find that the estimated masses for each richness bin increase monotonically from M$_{\text{NFW}}=5.8\times 10^{13}M_{\odot}$ to M$_{\text{NFW}}=51.5\times 10^{13}M_{\odot}$ (M$_{\text{NFW}}=7.1\times 10^{13}M_{\odot}$ for the total sample). Although compatible, these values are always lower than the estimated ones for the mean richness in each bin, using the WHL12 mass--richness relationship (see Table \ref{tab:datos}). The agreement improves towards higher richness.
With respect to the concentration parameter, it also increases with richness, from $C=0.74$ to $C=2.22$ ($C=1.72$ for the total sample). However, these values are generally lower than the ones retrieved from the most common mass--concentration relationships \citep[e.g.][]{MAN08,DUT14,CHI18}. As a comparison, in Fig. \ref{fig:NFW_plots} we also plot (grey dashed line) the NFW profile obtained using the mass--concentration relationship of \citet{MAN08}. It should be stressed that it is not the best fit to the large-scale data in each case, but serves to illustrate the different conclusions that would be drawn if it were used. Any fitting algorithm would have provided one of the following solutions: producing an excess at the smallest angular separations (as shown by the grey dashed lines) in order to fit the larger scales, or completely underestimating the largest angular separations (similar to the inner fits shown as red lines) in order to fit the smaller scales. In addition, the estimated masses derived using this mass--concentration relationship are higher than the ones obtained with the free fit. This issue becomes less relevant for higher masses or richness values.
The inner part is also well described by an NFW profile. In this case, the derived masses do not show any clear relationship with richness; the values fluctuate around M$_{\text{NFW}}=3-4\times10^{13}M_{\odot}$ (M$_{\text{NFW}}=4.1\times10^{13}M_{\odot}$ for the total sample). These halo mass values are in good agreement with previous independent LRG halo mass estimations, as discussed above.
The derived inner concentration values are in better agreement with those expected from the mass--concentration relationships, with values around $C\sim 4$, although they tend to increase with richness (see Table \ref{tab:datos}).
This behaviour is opposite to that predicted by the mass--concentration relationships at higher halo masses. However, the lack of pairs in the highest richness bins makes this potential conclusion less reliable. Indeed, for bin 5, with just a single CG pair in the centre, the profile is completely dominated by the Gaussian kernel used to take positional uncertainties into account. The fit is completely degenerate (lower concentration values can be counterbalanced by higher masses) and we simply draw one of the possible fits with the lowest concentration value (again, we use a dashed line to indicate that it is not a true best fit to the data). It is likely that, with better statistics, as in the other richness bins or the total case, the conclusion would be that the concentration remains almost constant, as expected for BCGs with similar masses.
Finally, it is interesting that by adding up the mass from the inner and outer fits (see `Inner+Outer' in Table \ref{tab:datos}) we get an almost perfect agreement with the average mass estimated from the WHL12 mass--richness relationship. We interpret this better agreement as validation of our procedure and as the necessity to use more complex mass density profiles in future works.
\subsection{Discussion}
\label{sec:discuss}
As presented in the previous section, our large-scale measurements ($\gtrsim 100$ kpc) are well described by an NFW profile, in agreement with a large number of previous studies. The derived masses are slightly lower than, but compatible with, the WHL12 estimations, and they also increase with richness, as expected.
In this respect, by comparing the CCF normalisation, we can approximately infer the average richness of previous works related to magnification bias. The CCF that is being used for cosmological analyses \citep[][]{GON17,BON20,GON21,CUE21} is measured using foreground samples built from galaxy catalogues. The values around 100 kpc are similar to the bin 1 or bin 2 measurements, which implies a richness below 25. This is additional confirmation of the conclusion arrived at by these latter authors that the lenses producing the magnification bias of the SMGs are not isolated massive galaxies but groups of galaxies or low-richness clusters.
As in previous works \citep[e.g. ][]{BAU14,JOH07,OKA16}, we also find the necessity to include an additional central mass to explain our data. Although the use of a SIS profile helps to provide a good overall description of the data at all relevant angular scales, we find that the estimated masses are too low for BCGs and that the fit can be further improved at intermediate angular scales of $\sim5-10$ arcsec. By using a second independent NFW profile for the inner part, we find a better fit with an estimated central mass of M$_{\text{NFW}}=3-4\times10^{13}M_{\odot}$ that is more or less independent of richness. Therefore, these results confirm our assumption that this central mass corresponds to the presence of a BCG.
Moreover, these halo masses are in agreement with the measured ones for massive LRGs. The fact that the inner data ($\lesssim 100$ kpc) are well described by an NFW profile implies that the lensing effect at these angular scales is dominated by the BCG galactic dark matter halo. \citet{GAV07} arrived at similar conclusions based on a weak lensing analysis of 22 early-type (strong) lens galaxies, as did \citet{OKA16} based on an analysis of the central mass profiles of the nearby cool-core galaxy clusters Hydra A and A478. In both cases, the transition towards a central point-like BCG stellar (baryonic) dominance is around 10 kpc, $<2$ arcsec for $z=0.38$ \citep[see also, ][]{LAP12}. These physical scales are beyond the current resolution of our measurements and therefore we can only observe the lensing effect of the galactic dark matter halo. This fact can also be related to the relatively poor fit using the SIS profile.
In addition, for the lowest richness bins ($R<40$), the central mass is $\sim 40$\% of the total mass. Therefore, these low-richness clusters are mainly composed of a massive central galaxy or BCG with several smaller satellite galaxies with 20 times lower masses on average. For example, if we consider the total case, we have a central galaxy of M$_{\text{NFW}}=4.1\times10^{13}M_{\odot}$ and the rest of the mass, M$_{\text{NFW}}=7.1\times10^{13}M_{\odot}$, is made up of the contribution from another 20 members, each of them with an average halo mass of $\sim 3.5\times10^{12}M_{\odot}$.
This result also confirms the conclusions from \citet{DUN20}. These latter authors observed an overdensity of high-redshift SMGs around a statistically complete sample of twelve 250 $\mu m$-selected galaxies at $z=0.35$, which were targeted by ALMA in a study of gas tracers. This observed overdensity is consistent with the magnification bias produced by halos of mass of the order of $7.1\times10^{13}M_{\odot}$, which are supposed to host one or possibly two bright galaxies and several smaller satellites. Indeed, of the six fields with unexpected SMGs, one is associated with a spectroscopically defined group and another four show observational evidence of an interaction between the central galaxy and the satellites.
Moreover, this scenario could also be related to the low concentration values obtained for our outer data. As already described in the previous section, all mass--concentration relations predict that the concentration should increase as halo mass decreases. However, these relationships are in general derived from the detailed analysis of individual clusters, which is only possible for the most massive ones \citep[e.g.][ with $M_{200c}\gtrsim 5\times10^{14}M_{\odot}$]{UME16}. As a comparison, our derived mass for the total case is at least seven to ten times smaller.
If we generalise the observational evidence obtained by \citet{DUN20}, we would expect groups of galaxies, or equivalently the clusters of lowest richness, to have lower concentration values. As halos are dynamically evolving objects, their mass and concentration are probably related to their recent assembly history \citep[][]{SER13}. Simulations show that in unrelaxed halos, much of the mass is far from the centre and therefore they tend to have lower concentrations than relaxed ones \citep[][]{CHI18}. At the same time, after a recent merger, the halo profile may not be well described by the NFW profile because of the dynamically unrelaxed state \citep[][]{CHI18}, although this problem should be mitigated in stacked halo profiles as in our case.
In addition, the fact that the cluster centre is determined as the position of the BCGs ---and not for example the centre of mass--- helps to obtain more homogeneous results even if a portion of the targets are in a dynamically unrelaxed state.
However, if this misalignment between the BCG position and the true cluster centre is systematic and important for the chosen target sample, it can provide an additional smoothing factor that would limit the precision at the central region even for background samples with better positional accuracy.
\subsection{Lack of signal at $\sim 10$ and $\sim 25$ arcsec}
\label{sec:feature}
As already mentioned, the large-scale fits have a number of issues and in some cases we do not consider certain data points in order to derive reasonable fits. At first sight, the issue seems to be related to the lack of CG pairs in the bins with the highest richness. However, we notice that this lack of signal at a certain angular separation is also present in all bins, and even in the total sample, but more subtly. From Fig. \ref{fig:NFW_plots}, we identify this issue at at least two angular separations, $\sim 10$ and $\sim 25$ arcsec ($\sim 55$ and $\sim 125$ kpc, respectively). This lack of signal in the profiles has to be produced by a lack of CG pairs in rings with such a radius and with a similar width to the angular resolution used in the radial profile. Figure \ref{fig:bincircles} provides visual confirmation of the presence of such `rings'.
The most central part of the stacked images for all the cases is shown in this figure, and we have plotted two concentric circles (white dashed lines) with radii of $\sim 10$ and $\sim 25$ arcsec.
We confirm that the presence of these features does not depend on the smoothing step, although their relevance and shape are affected for values of $\sigma$ greater than 5 arcsec, as expected. Considering that these rings can also be detected in the lowest richness bins, with hundreds of CG pairs, this indicates that they are not a statistical fluctuation, as could have been concluded simply from bins 4 and 5.
Moreover, once this issue was recognised, we realised that a similar lack of signal was already present in previous works. In our recent studies, there is always an anomalous measurement (much lower than expected) around $\sim 30$ arcsec, which was chosen as the lowest angular separation for a weak lensing analysis using the CCF. This anomalous point is always present independently of the particular lens catalogue used: galaxies with spectroscopic redshifts from GAMAII \citep[][]{GON17,BON20,CUE21,DRI11}, SDSS galaxies with photometric redshifts \citep[][]{GON21}, or QSOs \citep[][]{BON19}. In addition, it can also
be found in the radial profile measurements using independent methodologies and/or catalogues: the weak lensing of the WHL12 cluster catalogue \citep[][]{BAU14}, the stacking analysis of the shear profile produced by galaxy clusters \citep[][]{JOH07}, and even the detailed joint analysis of strong lensing, weak lensing, shear, and magnification of individual galaxy clusters \citep[e.g. Abel209 or MACSJ0717.5+3745,][]{UME16}.
Interestingly, these angular scales correspond to the transition from cluster dark matter halo dominance to BCG dark matter halo dominance. A similar behaviour was seen by \citet{GAV07} and \citet{OKA16} for the inner transition between the BCG galactic dark matter halo and the central stellar (baryonic) component: the measurements near the transition angular scales are below the theoretical expectation from the addition of both profiles.
Therefore, we conclude that the lack of signal in the transitions between different profile dominance regimes could be an indication of a physical phenomenon and not simply a statistical fluctuation. However, the detailed analysis and a potential physical interpretation of this effect are beyond the scope of this work.
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth]{PLOTS/binscir.png}
\caption{Highlight of the stacked maps of CG pairs for each richness bin at angular scales lower than 50 arcsec. The maps are for bins 1 to 5 in the panels from left to right, and top to bottom. The bottom-right panel shows the highlight for the total case. The colour scale was chosen to improve visibility of the individual CG pairs and is the same for all panels. The two concentric circles (white dashed lines) indicate the radii of $\sim 10$ and $\sim 25$ arcsec.}
\label{fig:bincircles}
\end{figure}
\section{Conclusions}
\label{sec:concl}
In this study we exploited the magnification bias ---a gravitational lensing effect--- produced on SMGs observed by \textit{Herschel} at $1.2<z<4.0$ by galaxy clusters in SDSS-III with photometric redshifts of $0.05 < z < 0.8$ in order to analyse the average mass density profile properties of tens to hundreds of clusters of galaxies.
The measurements are obtained by stacking the CG pairs to estimate the CCF using the Davis-Peebles estimator. This methodology allows us to derive the mass density profile for a wide range of angular scales, $\sim$ 2-250 arcsec or $\sim$ 10-1300 kpc for $z=0.38$, with a high radial resolution, and in particular to study the inner part of the DM halo ($<100$ kpc; the BCG does not impose any limitation as in other techniques). In addition, we also divide the cluster sample into five richness bins.
Moreover, this methodology has some advantages from the point of view of analysis: it is straightforward to take positional uncertainties into account, which are critical at small angular separations, and to consider both the weak and strong lensing effects.
In order to completely describe the data for the full angular separation range, we need to take into account two dark matter halos (two different mass density profiles): a more massive halo to describe the outer part of the cluster ($>100$ kpc), and another halo for the inner part due to the presence of the BCG ($<100$ kpc). A good overall description is achieved by assuming a combination of a SIS profile (a dark matter galactic halo plus a stellar contribution in the centre) to describe the BCG contribution, plus an NFW profile to describe the contribution from the cluster halo. However, better results are derived for each regime individually using two independent NFW profiles.
The average total masses (taking into account both NFW profiles) are in perfect agreement with the mass--richness relationship estimated by WHL12 (see Table \ref{tab:datos}). For the bins of lowest richness, the central galactic halo constitutes $\sim 40$\% of the total mass of the cluster and its relevance diminishes as richness increases. While the estimated concentration values of the central galactic halos are in agreement with traditional mass--concentration relationships, we find lower concentrations for the outer part. Moreover, the concentrations decrease for lower richness, probably indicating that these groups of galaxies cannot be considered relaxed systems.
Finally, we notice a systematic lack of signal at the transition between the dominance of the cluster halo and the central galactic halo ($\sim 100$ kpc). This is not deemed to be a statistical fluctuation or related to the smoothing step in the methodology pipeline. Moreover, this feature is also present in previous works using different catalogues and/or methodologies. Therefore, we conclude that it has a physical nature and merits a more detailed analysis. However, the physical interpretation of this lack of signal is beyond the scope of this paper and will be analysed in detail in a future study.
\begin{acknowledgements}
MMC, JGN, LB, DC, JMC acknowledge the PGC 2018 project PGC2018-101948-B-I00 (MICINN/FEDER).
MMC acknowledges PAPI-20-PF-23 (Universidad de Oviedo).\\
AL is supported by the EU H2020-MSCA-ITN-2019 Project 860744 “BiD4BESt: Big Data applications for black hole Evolution STudies.” and by
the PRIN MUR 2017 prot. 20173ML3WW “Opening the ALMA window on the cosmic evolution of gas, stars and supermassive black holes”.\\
We deeply acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. In particular the COSMOGAL projects “SIS20\_lapi”, “SIS21\_lapi” in the framework “Convenzione triennale SISSA-CINECA”.\\
The \textit{Herschel}-ATLAS is a project with \textit{Herschel}, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The H-ATLAS web-site is http://www.h-atlas.org. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope.\\
This research has made use of the python packages \texttt{ipython} \citep{ipython}, \texttt{matplotlib} \citep{matplotlib} and \texttt{Scipy} \citep{scipy}.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sec:Introduction}
The optimal covariance steering (OCS) problem is a stochastic optimal control problem such that a controller steers the covariance of a stochastic system to a target terminal value, while minimizing a state and control dependent cost in expectation.
Since the late 1980s~\cite{hotz1985covariance, hotz1987covariance}, the infinite horizon case for linear time invariant systems has been thoroughly researched.
On the other hand, the finite horizon case has only recently been investigated~\cite{chen2016optimalI,bakolas2016optimalCDC,bakolas2018finite,beghi1996relative,goldshtein2017finite}.
Our previous work~\cite{okamoto2018Optimal} introduced state chance constraints into the OCS problem.
While it is possible to separately design mean steering (feedforward) and covariance steering (feedback) controllers, when state chance constraints exist, the state mean and state covariance constraints are coupled, and one needs to simultaneously design the mean and covariance steering controllers.
In addition, in~\cite{okamoto2018Optimalb} we proposed an OCS controller that is computationally efficient and can deal with non-convex state chance constraints; the algorithm was then applied to vehicle path planning problems.
Note that the above-mentioned OCS controllers cannot deal with input hard constraints: since they are affine functions of the state or the disturbance, both of which are unbounded, the control commands become unbounded as well and thus cannot satisfy input hard constraint specifications.
Such a situation occurs in many real-world scenarios.
For example, in aircraft~\cite{okamoto2018Optimal} or spacecraft~\cite{Ridderhof2019MinimumFuel} control problems the control command values have physical restrictions such as minimum/maximum thrust.
The contribution of this article is the introduction of an OCS controller that can accommodate input \emph{hard} constraints.
To the best of our knowledge, input hard constraints have not been discussed in the framework of OCS, while input \emph{soft} constraints have been investigated by several prior works.
These include the incorporation of a maximum value of the expectation of quadratic functions of the control inputs~\cite{bakolas2016optimalCDC,bakolas2018finite}, and the maximum probability of control constraint violation~\cite{Ridderhof2019MinimumFuel}.
Our formulation of the input constraint is different than these approaches, in the sense that we directly impose hard constraints on the input.
Inspired by the approach in~\cite{paulson2017stochastic,hokayem2012stochastic}, we propose to use a saturation function into our OCS controller~\cite{okamoto2018Optimalb} to impose input hard constraints.
The remainder of this paper is organized as follows.
Section~\ref{sec:ProblemStatement} formulates the problem and introduces the input constrained OCS problem setup.
In Section~\ref{sec:ProposedApproach}, we introduce the newly developed input hard constrained OCS controller.
Section~\ref{sec:Numerical Simulation} validates the effectiveness of the proposed approach using numerical simulations.
Finally, Section~\ref{sec:Summary} summarizes the results and discusses some possible future research directions.
We use $P \succ (\succeq)~0$ to denote that the matrix $P$ is symmetric positive (semi)-definite.
Also, we use $P > (\geq)~0$ to denote element-wise inequalities of the matrix $P$.
The trace of $P$ is denoted with $\mathtt{tr}(P)$, and $\mathtt{blkdiag}(P_0,\ldots,P_N)$ denotes the block-diagonal matrix with block-diagonal matrices $P_0,\ldots, P_N$.
$\|v\|$ is the 2-norm of the vector $v$, and $\|P\|$ is the induced 2-norm of the matrix~$P$.
$x \sim \mathcal{N}(\mu, \Sigma)$ denotes a random variable $x$ sampled from a Gaussian distribution with mean $\mu$ and (co)variance $\Sigma$.
$\mathbb{E}[x]$ denotes the expected value, or the mean, of the random variable $x$.
Finally, $\mathtt{erf}(\cdot)$ is the error function.
\section{Problem Statement\label{sec:ProblemStatement}}
In this section we formulate the input-hard constrained OCS problem.
\subsection{Problem Formulation}
We consider the following discrete-time stochastic linear time-varying system with additive noise
\begin{equation} \label{eq:SystemDynamics}
x_{k+1} = A_kx_k + B_ku_k + w_k,
\end{equation}
where $k$ denotes the time index, $x\in\Re^{n_x}$ is the state, $u\in\Re^{n_u}$ is the control input, and $w\in\Re^{n_x}$ denotes a random vector distributed according to a zero-mean white Gaussian noise with the properties,
\begin{equation}
\mathbb{E}\left[w_k\right] = 0, \
\mathbb{E}\left[w_{k_1}{w_{k_2}^{\top}}\right] =
\begin{cases} D_{k_1}D_{k_2}^\top, &\mbox{if\ } k_1 = k_2,\\
0, &\mbox{otherwise.}
\end{cases}\label{eq:GaussianNoise}
\end{equation}
In addition, $A_k$, $B_k$, and $D_k$ are known system matrices having appropriate dimensions.
Furthermore, we assume that
\begin{equation}
\mathbb{E}\left[x_{k_1}w_{k_2}^{\top}\right]=0,\qquad0 \leq k_1 \leq k_2 \leq N.
\end{equation}
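As a side illustration, the noise model~(\ref{eq:GaussianNoise}) can be checked numerically: sampling $w_k = D_k\xi_k$ with $\xi_k\sim\mathcal{N}(0,I)$ reproduces the stated covariance. The matrix $D_k$ below is an arbitrary illustrative choice, not taken from any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x = 3
D_k = 0.1 * rng.standard_normal((n_x, n_x))   # illustrative noise shaping matrix

# w_k = D_k * xi with xi ~ N(0, I) gives E[w_k] = 0 and E[w_k w_k^T] = D_k D_k^T.
xi = rng.standard_normal((n_x, 200_000))
w = D_k @ xi
emp_cov = w @ w.T / xi.shape[1]

assert np.all(np.abs(w.mean(axis=1)) < 1e-2)            # empirical mean ~ 0
assert np.allclose(emp_cov, D_k @ D_k.T, atol=5e-3)     # empirical covariance ~ D_k D_k^T
```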
We also assume that the state and control inputs are subject to the constraints
\begin{align}\label{eq:state_and_input_const}
x_k \in \mathcal{X},\quad u_k \in \mathcal{U},
\end{align}
for all $k \geq 0$, where $\mathcal{X} \subseteq \Re^{n_x}$ and $\mathcal{U}\subseteq \Re^{n_u}$ are convex sets containing the origin.
Throughout this work, we assume that the sets $\mathcal{X}$ and $\mathcal{U}$ are represented as an intersection of $N_s$ and $N_c$ linear inequality constraints, respectively, as follows
\begin{align}
\mathcal{X} &= \bigcap_{j = 0}^{N_s-1} \left\{x: \alpha_{x,j}^\top x \leq \beta_{x,j}\right\},\label{eq:Xdefinition}\\
\mathcal{U} &= \bigcap_{s = 0}^{N_c-1} \left\{u: \alpha_{u,s}^\top u \leq \beta_{u,s}\right\},\label{eq:Udefinition}
\end{align}
where $\alpha_{x,j}\in\Re^{n_x}$ and $\alpha_{u,s}\in\Re^{n_u}$ are constant vectors, and $\beta_{x,j} \in \Re$ and $\beta_{u,s} \in \Re$ are constant scalars.
Notice that, since the system noise in~(\ref{eq:SystemDynamics}) is possibly unbounded, the state is unbounded as well.
Thus, we formulate the state constraints $x_k \in \mathcal{X}$ probabilistically, as chance constraints,
\begin{align}\label{eq:origCC}
\Pr(x_k \notin \mathcal{X}) \leq \epsilon,
\end{align}
where $\epsilon \in (0, 1)$.
We keep the second set inclusion in~(\ref{eq:state_and_input_const}) as is, since hard constraints are preferable for the control inputs.
Using Boole's inequality, (\ref{eq:Xdefinition}) and (\ref{eq:origCC}) can be satisfied if
\begin{align}\label{eq:stateChance}
\Pr\left(\alpha_{x,j}^\top x_k > \beta_{x,j} \right) &\leq p_{x,j},
\end{align}
for $j=0,\ldots,N_s-1$, where $p_{x,j}$ are pre-specified such that
\begin{align}\label{eq:p_xj}
\sum_{j=0}^{N_s-1} p_{x,j}\leq \epsilon.
\end{align}
The initial state $x_0 \in \Re^{n_x}$ is a random vector that is drawn from a normal distribution according to
\begin{equation}\label{eq:x0}
x_0\sim\mathcal{N}(\mu_0,\Sigma_0),
\end{equation}
where $\mu_0 \in \Re^{n_x}$ and $\Sigma_0 \in \Re^{n_x \times n_x}$. We assume that $\Sigma_0 \succeq 0$.
Our objective is to design a control sequence $\{u_0,\ldots,u_{N-1}\}$ that steers the system state $x_k$ to a target Gaussian distribution at time step $N$, that is,
\begin{align}\label{eq:xf}
x_N \sim \mathcal{N}(\mu_f, \Sigma_f),
\end{align}
where $\mu_f \in \Re^{n_x}$, $\Sigma_f \in \Re^{n_x \times n_x}$.
We assume that $\Sigma_f \succ 0$ and wish to minimize the state and control expectation-dependent quadratic cost
\begin{align}\label{eq:originalCostFunc}
J(u_0,\ldots,u_{N-1}) = \mathbb{E}\left[\sum_{k=0}^{N-1}x_k^\top Q x_k + u_k^\top R u_k\right],
\end{align}
where $Q \succeq 0$ and $R \succ 0$.
In summary, we wish to solve the following finite-horizon optimal control problem
\begin{subequations}\label{prob:FiniteHorizonOptimalCovarianceSteeringProblem}
\begin{align}
&\textrm{minimize} \nonumber\\
&J(u_0,\ldots,u_{N-1}) = \mathbb{E}\left[\sum_{k=0}^{N-1}x_k^\top Q x_k + u_k^\top R u_k\right], \\
&\textrm{subject to}\nonumber \\
& \hspace{8pt} x_{k+1} = A_kx_k + B_ku_k + w_k, \\
& \hspace{8pt} \Pr\left(\alpha_{x,j}^\top x_k > \beta_{x,j} \right) \leq p_{x,j},\; \forall j \in [0,N_s-1], \\
& \hspace{8pt} \alpha_{u,s}^\top u_k \leq \beta_{u,s},\; \forall s \in [0,N_c-1],\label{eq:OCSInputConstraint}\\
& \hspace{8pt} x_N = x_f \sim \mathcal{N}(\mu_f,\Sigma_f).
\end{align}
\end{subequations}
\begin{Remark}
System (\ref{eq:SystemDynamics}) is assumed to be controllable in the absence of constraints and disturbances, that is, $x_f$ is reachable from $x_0$ for any $x_f \in \Re^{n_x}$, provided that $w_k = 0$ for $k = 0,\ldots,N-1$. This assumption implies that given any $x_f \in \Re^{n_x}$ and $x_0 \in \Re^{n_x}$, there exists a sequence of control inputs $\{u_0,\ldots,u_{N-1}\}$ that steers $x_0$ to $x_f$ in the absence of disturbances or any constraints.
\end{Remark}
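This assumption can be checked numerically. In the disturbance-free case, $x_N = \Phi(N,0)x_0 + \sum_{j=0}^{N-1}\Phi(N,j+1)B_j u_j$ with $\Phi(k,j) = A_{k-1}\cdots A_j$, so any $x_f$ is reachable from any $x_0$ if and only if the $N$-step reachability matrix has full row rank. A sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, N = 3, 1, 5
A = [rng.standard_normal((n_x, n_x)) for _ in range(N)]
B = [rng.standard_normal((n_x, n_u)) for _ in range(N)]

def Phi(k, j):
    """State transition matrix Phi(k, j) = A_{k-1} ... A_j (identity if k == j)."""
    M = np.eye(n_x)
    for i in range(j, k):
        M = A[i] @ M
    return M

# N-step reachability matrix [Phi(N,1)B_0, Phi(N,2)B_1, ..., B_{N-1}].
R = np.hstack([Phi(N, j + 1) @ B[j] for j in range(N)])
assert np.linalg.matrix_rank(R) == n_x   # full row rank: any x_f is reachable
```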
\section{Proposed Approach\label{sec:ProposedApproach}}
This section introduces the proposed approach to solve Problem~(\ref{prob:FiniteHorizonOptimalCovarianceSteeringProblem}).
Instead of~(\ref{eq:SystemDynamics}), we use the following equivalent form of the system dynamics
\begin{equation}\label{eq:X=Ax0+BU+DW}
X = \mathcal{A} x_{0} + \mathcal{B} U+\mathcal{D} W,
\end{equation}
where $X\in\Re^{(N+1) n_x}$, $U\in\Re^{Nn_u}$, and $W\in\Re^{Nn_x}$ represent the concatenated state, input, and disturbance vectors, respectively, e.g., $X = \begin{bmatrix} x_{0}^\top & x_{1}^\top & \cdots & x_{N}^\top \end{bmatrix}^\top$,
while the matrices $\mathcal{A}\in\Re^{(N+1)n_x\times n_x}$, $\mathcal{B} \in\Re^{(N+1)n_x\times Nn_u}$, and $\mathcal{D} \in\Re^{(N+1)n_x\times Nn_x}$ are defined accordingly~\cite{okamoto2018Optimal}.
Note that
\begin{subequations}\label{eq:Ex0x0x0WWW}
\begin{align}
\mathbb{E}[x_0x_0^\top] &= \Sigma_0 + \mu_0\mu_0^\top,\\
\mathbb{E}[x_0W^\top] &= 0, \\
\mathbb{E}[WW^\top] &=\mathtt{blkdiag}(D_0D_0^\top,\ldots,D_{N-1}D_{N-1}^\top).
\end{align}
\end{subequations}
We use the matrix $E_k = \left[0_{n_x,kn_x}, I_{n_x},0_{n_x,(N-k)n_x}\right]\in \Re^{n_x\times(N+1)n_x}$ and $F_k = \left[0_{n_u,kn_u}, I_{n_u},0_{n_u,(N-1-k)n_u}\right]\in \Re^{n_u\times Nn_u}$ such that $x_k = E_k X$ and $u_k = F_k U$.
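A minimal numerical sketch of the lifted representation~(\ref{eq:X=Ax0+BU+DW}), cross-checked against a direct simulation of~(\ref{eq:SystemDynamics}) (the dimensions and matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_u, N = 2, 1, 4
A = [rng.standard_normal((n_x, n_x)) for _ in range(N)]
B = [rng.standard_normal((n_x, n_u)) for _ in range(N)]

def Phi(k, j):
    """State transition matrix Phi(k, j) = A_{k-1} ... A_j (identity if k == j)."""
    M = np.eye(n_x)
    for i in range(j, k):
        M = A[i] @ M
    return M

# Lifted matrices of X = cal_A x_0 + cal_B U + cal_D W.
cal_A = np.vstack([Phi(k, 0) for k in range(N + 1)])
cal_B = np.block([[Phi(k, j + 1) @ B[j] if j < k else np.zeros((n_x, n_u))
                   for j in range(N)] for k in range(N + 1)])
cal_D = np.block([[Phi(k, j + 1) if j < k else np.zeros((n_x, n_x))
                   for j in range(N)] for k in range(N + 1)])

# Cross-check against a direct simulation of x_{k+1} = A_k x_k + B_k u_k + w_k.
x0 = rng.standard_normal(n_x)
U = rng.standard_normal(N * n_u)
W = rng.standard_normal(N * n_x)
x, traj = x0, [x0]
for k in range(N):
    x = A[k] @ x + B[k] @ U[k*n_u:(k+1)*n_u] + W[k*n_x:(k+1)*n_x]
    traj.append(x)
X = cal_A @ x0 + cal_B @ U + cal_D @ W
assert np.allclose(X, np.concatenate(traj))

# The selector E_k extracts x_k from X, e.g. k = 2:
E_2 = np.hstack([np.zeros((n_x, 2 * n_x)), np.eye(n_x),
                 np.zeros((n_x, (N - 2) * n_x))])
assert np.allclose(E_2 @ X, traj[2])
```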
\subsection{Input-Constrained OCS Problem \label{subsec:InputConstrainedOCS}}
In this section, we discuss the OCS with input hard constraints.
We start by introducing a relaxed form of Problem~(\ref{prob:FiniteHorizonOptimalCovarianceSteeringProblem}) summarized in the following lemma.
\begin{Lemma}
Given~(\ref{eq:Xdefinition}), (\ref{eq:Udefinition}), (\ref{eq:p_xj}), and (\ref{eq:xf}), Problem~(\ref{prob:FiniteHorizonOptimalCovarianceSteeringProblem}) can be converted to the following form with a convex relaxation of the terminal covariance constraint.
\begin{subequations}\label{prob:OptimalCovarianceSteeringConvertedInputConstrained}
\begin{align}
&\textrm{minimize}\; J(U) =\ \mathbb{E}\left[X^\top \bar{Q} X + U^\top \bar{R} U \right],\label{eq:ObjInputConstConvert1}\\
&\textrm{subject to}\nonumber \\
&\hspace{20pt}X = \mathcal{A} x_{0} + \mathcal{B} U+\mathcal{D} W,\\
&\hspace{20pt}\Pr\left(\alpha_{x,j}^\top E_k X > \beta_{x,j} \right) \leq p_{x,j}, ~ j \in [0, N_s-1], \label{eq:CCInputContConvert}\\
&\hspace{20pt}\alpha_{u,s}^\top F_k U \leq \beta_{u,s},~~s \in [0, N_c-1], \label{eq:ICInputConstConvert}\\
&\hspace{20pt}\mu_f = E_N\mathbb{E}[X],\label{eq:TermMeanInputConstConvert}\\
&\hspace{20pt}\Sigma_f \succeq E_N\left(\mathbb{E}[XX^\top] - \mathbb{E}[X]\mathbb{E}[X]^\top\right) E_N^\top,\label{eq:TermCovInputConstConvert}
\end{align}
\end{subequations}
where $\bar{Q} = \mathtt{blkdiag}(Q,\ldots,Q,0) \in \Re^{(N+1)n_x\times(N+1)n_x}$ and $\bar{R} = \mathtt{blkdiag}(R,\ldots,R) \in \Re^{Nn_u\times Nn_u}$.
\end{Lemma}
\begin{proof}
The procedure to relax Problem~(\ref{prob:FiniteHorizonOptimalCovarianceSteeringProblem}) to (\ref{prob:OptimalCovarianceSteeringConvertedInputConstrained}) is straightforward using the result of~\cite{okamoto2018Optimalb} and by relaxing the original terminal covariance constraint $\Sigma_f = E_N\left(\mathbb{E}[XX^\top] - \mathbb{E}[X]\mathbb{E}[X]^\top\right) E_N^\top$ to~(\ref{eq:TermCovInputConstConvert}).
Note that a Gaussian distribution is fully defined by its first two moments. This relaxation allows the terminal state covariance to be smaller, in the positive semidefinite sense, than the target one.
In many real-world applications~\cite{Ridderhof2019MinimumFuel} it may be preferable to achieve smaller uncertainty in the terminal state.
\end{proof}
In order to solve Problem~(\ref{prob:OptimalCovarianceSteeringConvertedInputConstrained}), we propose a control law that is summarized in the following theorem, which is the main result of this paper.
This control policy is an extension of our previous controller~\cite{okamoto2018Optimalb} and is inspired from the approach in~\cite{paulson2017stochastic,hokayem2012stochastic}.
\begin{Theorem}\label{theorem:InputConstrainedController}
The control law
\begin{align}\label{eq:inputConstrainedCSController}
u_k = v_k + K_k z_k,
\end{align}
where $z_k$ is governed by the dynamics
\begin{subequations}
\begin{align}
z_{k+1} &= A_k z_k + \varphi(w_k), \label{eq:zDynamics}\\
z_0 &= \varphi(\zeta_0), \quad \zeta_0 = x_0 - \mu_0, \label{eq:zInit}
\end{align}
\end{subequations}
where $\varphi(\cdot): \Re^d \to \Re^d$ is an element-wise symmetric saturation function whose $i$th entry, with pre-specified saturation value $\zeta_i^{\rm max} > 0$, is given by
\begin{align}\label{eq:saturationFunc}
\varphi_i(\zeta) = \max(-\zeta_i^{\rm max}, \min(\zeta_i,\zeta_i^{\rm max}) ),
\end{align}
converts Problem~(\ref{prob:OptimalCovarianceSteeringConvertedInputConstrained}) to the following convex programming problem
\begin{subequations}\label{prob:OptimalCovarianceSteeringFinalInputConstrained}
\begin{align}
&\textrm{minimize}\;J(V, K, \Omega) =\nonumber \\
& \mathtt{tr}\left(\bar{Q} \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}\Sigma_{XX} \begin{bmatrix}I \\ K^\top \mathcal{B}^\top \end{bmatrix}\right) + \mathtt{tr}\left(\bar{R}K\Sigma_{UU} K^\top\right) \nonumber \\
& \hspace{20pt}+ (\mathcal{A} \mu_0 + \mathcal{B} V)^\top\bar{Q}(\mathcal{A} \mu_0 + \mathcal{B} V) + V^\top\bar{R}V \label{eq:finalCostFunc}\\
&\textrm{subject to }\nonumber \\
&\alpha_{x,j}^\top E_k\left(\mathcal{A} \mu_0 + \mathcal{B} V\right)- \beta_{x,j} +\nonumber \\
& \hspace{20pt}\sqrt{\frac{1-p_{x,j}}{p_{x,j}}} \|\Sigma_{XX}^{1/2}\begin{bmatrix}I & \mathcal{B} K\end{bmatrix}^\top
E_k^\top\alpha_{x,j}\| \leq 0, \label{eq:OCSFIC_SC}\\
&HF_kV + \Omega^\top\sigma \leq h,\\
&HF_kK[\mathcal{A}\ \ \mathcal{D}] = \Omega^\top S,\\
&\Omega \geq 0,\\
& \mu_f = E_N\left(\mathcal{A} \mu_0 + \mathcal{B} V\right),\\
&\Sigma_f \succeq E_N \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}\Sigma_{XX}\begin{bmatrix}I \\ K^\top \mathcal{B}^\top \end{bmatrix} E_N^\top, \label{eq:OCSIC_TerminalCovConst}
\end{align}
\end{subequations}
where $\Omega\in\Re^{2(N+1)n_x\times N_c}$ is a decision (slack) variable,
\begin{align}
&\Sigma_{XX} = \nonumber\\
&\begin{bmatrix}\mathcal{A} & \\ & \mathcal{A}\end{bmatrix}
\begin{bmatrix}\Sigma_0 & \mathbb{E}[\zeta_0\varphi(\zeta_0)^\top]\\ \mathbb{E}[\varphi(\zeta_0)\zeta_0^\top] & \mathbb{E}[\varphi(\zeta_0)\varphi(\zeta_0)^\top]\end{bmatrix}
\begin{bmatrix}\mathcal{A}^\top & \\ & \mathcal{A}^\top\end{bmatrix}\nonumber \\
& + \begin{bmatrix}\mathcal{D} & \\ & \mathcal{D}\end{bmatrix}
\begin{bmatrix}\mathbb{E}[WW^\top] & \mathbb{E}[W\varphi(W)^\top]\\
\mathbb{E}[\varphi(W)W^\top] & \mathbb{E}[\varphi(W)\varphi(W)^\top]\end{bmatrix}
\begin{bmatrix}\mathcal{D}^\top & \\ & \mathcal{D}^\top\end{bmatrix},\label{eq:SigmaXX}
\end{align}
\begin{align}
&\Sigma_{UU} =
\mathcal{A} \mathbb{E}[\varphi(\zeta_0)\varphi(\zeta_0)^\top]\mathcal{A}^\top + \mathcal{D}\mathbb{E}[\varphi(W)\varphi(W)^\top]\mathcal{D}^\top.\label{eq:SigmaUU}
\end{align}
Furthermore,
\begin{subequations}\label{eq:Hh}
\begin{align}
H &= \begin{bmatrix}
\alpha_{u,0},&
\cdots,&
\alpha_{u, N_c-1}
\end{bmatrix}^\top \in \Re^{N_c\times n_u}, \\
h &= \begin{bmatrix}
\beta_{u,0},&
\cdots &
\beta_{u, N_c-1}
\end{bmatrix}^\top \in \Re^{N_c},
\end{align}
\end{subequations}
and
\begin{align*}
V &= \begin{bmatrix}
v_0^\top &\cdots& v_{N-1}^\top
\end{bmatrix}^\top,\\
K &= \begin{bmatrix}
\mathtt{blkdiag}(K_0,K_1,\ldots,K_{N-1}) & 0_{Nn_u\times n_x}
\end{bmatrix}.
\end{align*}
In addition, $S \in \Re^{2(N+1)n_x\times(N+1)n_x}$ and $\sigma \in \Re^{2(N+1)n_x}$ are constant.
Specifically, for $i = 1,\ldots, (N + 1)n_x$
\begin{subequations}\label{eq:Ssigma}
\begin{align}
S_{2i-1} &= e_{2i-1}^\top,\ S_{2i} = -e_{2i}^\top,\\
\sigma_{2i-1} &= \zeta_i^{\rm max},\ \sigma_{2i} = \zeta_i^{\rm max}.
\end{align}
\end{subequations}
where $S_i$ denotes the $i$th row of $S$, and $e_i \in \Re^{2(N+1)n_x}$ is a unit vector with $i$th element 1.
\end{Theorem}
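Before turning to the proof, the saturation function~(\ref{eq:saturationFunc}) and the saturated-Gaussian moments entering~(\ref{eq:SigmaXX}) and~(\ref{eq:SigmaUU}) can be sketched numerically. For a scalar standard Gaussian these moments admit closed forms in terms of $\mathtt{erf}$; the saturation level below is an arbitrary choice:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def phi(zeta, zeta_max):
    """Element-wise symmetric saturation function."""
    return np.clip(zeta, -zeta_max, zeta_max)

rng = np.random.default_rng(3)
a = 1.0                                   # assumed saturation level zeta^max
zeta = rng.standard_normal(1_000_000)     # zero-mean, unit-variance Gaussian samples
s = phi(zeta, a)

# phi is odd and the Gaussian is symmetric, hence E[phi(zeta)] = 0.
assert abs(s.mean()) < 1e-2

# Closed form for E[phi(zeta)^2] = E[zeta^2; |zeta| < a] + a^2 P(|zeta| >= a).
pdf_a = exp(-a * a / 2) / sqrt(2 * pi)    # standard normal pdf at a
tail = 0.5 * (1 - erf(a / sqrt(2)))       # P(zeta >= a)
second_moment = erf(a / sqrt(2)) - 2 * a * pdf_a + 2 * a * a * tail
assert abs(np.mean(s * s) - second_moment) < 1e-2
```

The cross-moments such as $\mathbb{E}[\zeta_0\varphi(\zeta_0)^\top]$ and $\mathbb{E}[W\varphi(W)^\top]$ can be estimated in the same Monte Carlo fashion for a given $\Sigma_0$ and $D_k$.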
\begin{proof}
It follows from~(\ref{eq:zDynamics}) and (\ref{eq:zInit}) that
\begin{align}
Z = \mathcal{A} \varphi(\zeta_0) + \mathcal{D} \varphi(W),
\end{align}
where $ Z = \begin{bmatrix} z_0^\top & \ldots & z_N^\top \end{bmatrix}^\top \in \Re^{(N+1)n_x}$.
In addition, $U$ is represented as $U = V + K Z$,
and thus,
\begin{align} \label{eq:newControlPolicy}
U = V + K(\mathcal{A} \varphi(\zeta_0) + \mathcal{D} \varphi(W)).
\end{align}
Since the saturation function~(\ref{eq:saturationFunc}) is an odd function and both $\zeta_0$ and $W$ are zero-mean Gaussian distributed,
\begin{align*}
\mathbb{E}[\varphi(\zeta_0)] = 0,\qquad
\mathbb{E}[\varphi(W)] = 0,
\end{align*}
and hence,
\begin{align*}
\mathbb{E}[U] = V,\quad
\tilde{U} = U - \mathbb{E}[U]
=K(\mathcal{A} \varphi(\zeta_0) + \mathcal{D} \varphi(W)).
\end{align*}
Thus, it follows from~(\ref{eq:X=Ax0+BU+DW}) that
\begin{align}
\bar{X} &= \mathbb{E}[X] = \mathcal{A} \mu_0 + \mathcal{B} V,\label{eq:barXVK}\\
\tilde{X} &= X -\mathbb{E}[X] \nonumber \\
&\hspace{-15pt}= \begin{bmatrix}I & \mathcal{B} K\end{bmatrix} \left(
\begin{bmatrix}\mathcal{A} & \\ & \mathcal{A}\end{bmatrix}
\begin{bmatrix}\zeta_0 \\ \varphi(\zeta_0)\end{bmatrix}
+
\begin{bmatrix}\mathcal{D} & \\ & \mathcal{D}\end{bmatrix}
\begin{bmatrix}W \\ \varphi(W)\end{bmatrix}
\right).\label{eq:tildeXVK}
\end{align}
It also follows from~(\ref{eq:Ex0x0x0WWW}),~(\ref{eq:zInit}),~(\ref{eq:SigmaXX}), and~(\ref{eq:SigmaUU}) that
\begin{align}
\mathbb{E}[\tilde{X}\tilde{X}^\top] &= \begin{bmatrix}I & \mathcal{B} K\end{bmatrix} \Sigma_{XX} \begin{bmatrix}I \\ K^\top \mathcal{B}^\top\end{bmatrix},\label{eq:EXX}\\
\mathbb{E}[\tilde{U}\tilde{U}^\top] &= K\Sigma_{UU} K^\top.\label{eq:EUU}
\end{align}
Following~\cite{okamoto2018Optimal}, it can be shown that the cost function~(\ref{eq:ObjInputConstConvert1}) may be written as
\begin{align} \label{eq:objInputConstConvert2}
J(V,\tilde{U}) &= \mathtt{tr}(\bar{Q}\mathbb{E}[\tilde{X}\tilde{X}^\top]) + \bar{X}^\top \bar{Q} \bar{X} \nonumber \\
&\hspace{20pt}+ \mathtt{tr}(\bar{R}\mathbb{E}[\tilde{U}\tilde{U}^\top]) + V^\top \bar{R} V.
\end{align}
Using~(\ref{eq:barXVK}), (\ref{eq:EXX}), and (\ref{eq:EUU}), we can further convert~(\ref{eq:objInputConstConvert2}) to~(\ref{eq:finalCostFunc}).
Next, we discuss the conversion of the chance constraint~(\ref{eq:CCInputContConvert}) to a more amenable form to facilitate the numerical solution of the optimization problem (\ref{prob:OptimalCovarianceSteeringFinalInputConstrained}).
First, notice that $\alpha_{x,j}^\top E_k X$ is a univariate random variable with mean $\alpha_{x,j}^\top E_k \mathbb{E}[X]$ and variance $\alpha_{x,j}^\top E_k \mathbb{E}[\tilde{X}\tilde{X}^\top] E_k^\top \alpha_{x,j}$.
It follows from the Chebyshev-Cantelli inequality~(Lemma~\ref{Lemma:Cantelli} in the Appendix) that
\begin{align*}
&\Pr\bigg(\alpha_{x,j}^\top E_k X \leq \alpha_{x,j}^\top E_k \mathbb{E}[X] \nonumber\\ &\hspace{20pt}+\sqrt{\frac{1-p_{x,j}}{p_{x,j}}}\sqrt{ \alpha_{x,j}^\top E_k \mathbb{E}[\tilde{X}\tilde{X}^\top]E_k^\top\alpha_{x,j}}\bigg) \geq 1-p_{x,j}.
\end{align*}
Therefore, the inequality~(\ref{eq:CCInputContConvert}) is satisfied if
\begin{align*}
\alpha_{x,j}^\top E_k \mathbb{E}[X] +\sqrt{\frac{1-p_{x,j}}{p_{x,j}}}\sqrt{ \alpha_{x,j}^\top E_k \mathbb{E}[\tilde{X}\tilde{X}^\top]E_k^\top\alpha_{x,j}} \leq \beta_{x,j},
\end{align*}
which, from (\ref{eq:barXVK}), (\ref{eq:tildeXVK}), and a discussion similar to that in~\cite{okamoto2018Optimal}, is equivalent to the second-order cone constraint~(\ref{eq:OCSFIC_SC}) in terms of $V$ and $K$.
In addition, using~(\ref{eq:barXVK}) and (\ref{eq:EXX}), the terminal state constraints~(\ref{eq:TermMeanInputConstConvert}) and (\ref{eq:TermCovInputConstConvert}) can be written as
\begin{subequations}
\begin{align}
\mu_f &= E_N\left(\mathcal{A} \mu_0 + \mathcal{B} V\right),\\
\Sigma_f &\succeq E_N \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}
\Sigma_{XX}
\begin{bmatrix}I \\ K^\top \mathcal{B}^\top \end{bmatrix} E_N^\top.\label{eq:XNNew}
\end{align}
\end{subequations}
Using the Schur complement, (\ref{eq:XNNew}) can be further converted to
\begin{align*}
\begin{bmatrix}
\Sigma_f & E_N \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}\Sigma_{XX}^{1/2}\\
\Sigma_{XX}^{1/2} \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}^\top E_N^\top & I
\end{bmatrix} \succeq 0.
\end{align*}
Note that $\Sigma_{XX}\succeq 0$ (see Lemma~\ref{lemma:SigmaXXisPositiveSemiDef} in the Appendix).
Finally, we rewrite the input hard constraint~(\ref{eq:ICInputConstConvert}) as follows.
Using~(\ref{eq:Hh}), we first rewrite~(\ref{eq:ICInputConstConvert}) to the following equivalent form
\begin{equation}\label{eq:InputConstraintsHu<h}
H F_k U \leq h.
\end{equation}
Then, using~(\ref{eq:newControlPolicy}), this inequality is further converted to
\begin{equation}
HF_k\left(V + K \left(\mathcal{A}\varphi(\zeta_0) + \mathcal{D}\varphi(W)\right) \right) \leq h,
\end{equation}
and thus,
\begin{equation} \label{eq:InputHardConstRaw}
HF_kV + HF_kK \begin{bmatrix} \mathcal{A} & \mathcal{D} \end{bmatrix}\begin{bmatrix}
\varphi(\zeta_0) \\\varphi(W) \end{bmatrix} \leq h.
\end{equation}
In addition, using~(\ref{eq:Ssigma}), the condition~(\ref{eq:saturationFunc}) can be represented as
\begin{equation}
S \begin{bmatrix} \varphi(\zeta_0) \\\varphi(W) \end{bmatrix} \leq \sigma.
\end{equation}
It follows from the discussion in~\cite{paulson2017stochastic} that the constraint~(\ref{eq:InputHardConstRaw}) can be converted to
\begin{subequations}
\begin{align}
&HF_kV + \Omega^\top\sigma \leq h,\\
&HF_kK[
\mathcal{A}\ \mathcal{D}
] = \Omega^\top S,\\
&\Omega \geq 0.
\end{align}
\end{subequations}
In summary, we have converted Problem~(\ref{prob:OptimalCovarianceSteeringConvertedInputConstrained}) to Problem~(\ref{prob:OptimalCovarianceSteeringFinalInputConstrained}).
\end{proof}
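As a numerical aside (not part of the original development), the distribution-free nature of the Chebyshev-Cantelli tightening used above is easy to check by sampling. The sketch below applies the tightening to a scalar chance constraint $\Pr(y \leq \beta) \geq 1-p$; all variable names and numbers are illustrative assumptions of ours.

```python
import numpy as np

# Cantelli-based deterministic tightening of Pr(y <= beta) >= 1 - p:
#     mean + sqrt((1 - p)/p) * std <= beta,
# valid for ANY distribution of y with the given mean and standard deviation.
rng = np.random.default_rng(0)
p = 0.05
mean, std = 1.0, 0.5
beta = mean + np.sqrt((1 - p) / p) * std  # smallest beta the tightening allows

y = mean + std * rng.standard_normal(200_000)  # one example distribution
violation_rate = np.mean(y > beta)
assert violation_rate <= p  # guaranteed by the Chebyshev-Cantelli inequality
```

For the Gaussian used here the bound is quite conservative (the empirical violation rate is far below $p$), which is the price paid for a distribution-free guarantee.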
Note that the values of $\mathbb{E}[\zeta_0\varphi(\zeta_0)^\top]$, $\mathbb{E}[\varphi(\zeta_0)\varphi(\zeta_0)^\top]$, $\mathbb{E}[W\varphi(W)^\top]$, and $\mathbb{E}[\varphi(W)\varphi(W)^\top]$ can be obtained using Monte Carlo or Lemma~\ref{lemma:Integrals} in the Appendix.
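For a scalar Gaussian disturbance and a symmetric saturation function, the Monte Carlo route is a few lines of code. The Python sketch below (the clip level and variance are arbitrary choices of ours, mirroring the $3\sigma$ saturation used in the simulations) estimates two of the required moments.

```python
import numpy as np

# Monte Carlo estimates of E[w * phi(w)] and E[phi(w)^2] for a scalar
# Gaussian w ~ N(0, sigma^2) and saturation phi(w) = clip(w, -c, c).
rng = np.random.default_rng(1)
sigma, c = 1.0, 3.0  # illustrative values; c = 3*sigma
w = sigma * rng.standard_normal(500_000)
phi = np.clip(w, -c, c)

E_w_phi = np.mean(w * phi)      # estimate of E[w phi(w)]
E_phi_phi = np.mean(phi * phi)  # estimate of E[phi(w)^2]

# Sanity checks: pointwise phi(w)^2 <= w*phi(w) <= w^2, so the same
# ordering must hold for the estimated moments.
assert E_phi_phi <= E_w_phi <= np.mean(w * w)
```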
\begin{Remark}
When designing $u_k$ in~(\ref{eq:inputConstrainedCSController}), the value of $z_k$ can be obtained from $z_{k-1}$ and $\varphi(w_{k-1})$.
The noise $w_{k-1}$ is obtained from the system dynamics~(\ref{eq:SystemDynamics}), i.e.,
\begin{align}
w_{k-1} = x_{k} - A_{k-1}x_{k-1} - B_{k-1}u_{k-1},
\end{align}
and thus, $\varphi(w_{k-1})$ can be computed before computing the value of $u_k$ at time step $k$.
\end{Remark}
\begin{Remark}
Instead of~(\ref{eq:originalCostFunc}) or (\ref{eq:finalCostFunc}), one can separately design the mean and covariance steering cost, i.e.,
\begin{align*}
&J(V,K) = \mathtt{tr}\left(\bar{Q}_v \begin{bmatrix}I & \mathcal{B} K\end{bmatrix}
\Sigma_{XX}
\begin{bmatrix}I & \mathcal{B} K\end{bmatrix}^\top\right) + V^\top\bar{R}_mV \nonumber \\
&+(\mathcal{A} \mu_0 + \mathcal{B} V)^\top\bar{Q}_m(\mathcal{A} \mu_0 + \mathcal{B} V) + \mathtt{tr}\left(\bar{R}_vK\Sigma_{UU} K^\top\right),
\end{align*}
where $\bar{Q}_m, \bar{Q}_v \in \Re^{(N+1)n_x\times(N+1)n_x}$ and $\bar{R}_m, \bar{R}_v \in \Re^{Nn_u\times Nn_u}$ are block diagonal matrices, e.g., $\bar{Q}_m = \mathtt{blkdiag}(Q_m, \ldots, Q_m, 0)$.
\end{Remark}
\begin{Remark}
Because the terminal covariance constraint~(\ref{eq:OCSIC_TerminalCovConst}) can be formulated as a linear matrix inequality (LMI) constraint, in order to solve Problem (\ref{prob:OptimalCovarianceSteeringFinalInputConstrained}), we need to use an SDP solver such as MOSEK~\cite{mosek}.
\end{Remark}
\section{Numerical Simulation\label{sec:Numerical Simulation}}
In this section we validate the proposed OCS algorithm using a numerical example.
We consider a path-planning problem for a vehicle under the following double integrator dynamics with $x_k = [x,y,v_x,v_y]^\top\in \Re^{4}$, $u_k =[a_x,a_y]^\top\in \Re^{2}$, $w_k \in \Re^{4}$ and
\begin{align*}
&A = \begin{bmatrix}
1 & 0 & \Delta t & 0\\
0 & 1 & 0 & \Delta t\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},\;
B = \begin{bmatrix}
\Delta t^2/2 & 0\\
0 &\Delta t^2/2\\
\Delta t & 0\\
0 & \Delta t
\end{bmatrix},
\end{align*}
and $D = \mathtt{blkdiag}(0.01, 0.01, 0.01, 0.01)$,
where $\Delta t$ is the time-step size set to $\Delta t = 0.2$.
The feasible state space is
\begin{align*}
0.2(x-1) \leq y \leq -0.2(x-1).
\end{align*}
The mean and the covariance of the initial and target terminal distributions are set to
\begin{align*}
&\mu_0 = [-10, 1, 0, 0],\;\Sigma_0 = \mathtt{blkdiag}(0.05, 0.05, 0.01, 0.01),\\
&\mu_f = [0, 0, 0, 0], \; \Sigma_f = \mathtt{blkdiag}(0.025, 0.025, 0.005, 0.005).
\end{align*}
The cost function weights are chosen as
\begin{align*}
Q = \mathtt{blkdiag}(0.5,4.0,0.05,0.05),\; R = \mathtt{blkdiag}(20,20).
\end{align*}
The horizon is set to $N=20$, and the probability threshold to $p_{x,j} = 0.05$ for $j = 0, 1$.
Finally, we restrict the maximum acceleration to $U_{\rm max}= 2.9~\mathrm{m/s^2}$ along each axis, i.e.,
\begin{align}\label{eq:controlConstraintEq}
\| u_k \|_\infty \leq U_{\rm max},
\end{align}
for $k= 0,\ldots,N-1$.
We also set the saturation function~(\ref{eq:saturationFunc}) to saturate when the input exceeds the $3\sigma$ values.
We employ YALMIP~\cite{yalmip} along with MOSEK~\cite{mosek} to solve this problem.
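For reference, the problem data above can be assembled in a few lines; the Python/NumPy sketch below only builds the matrices and propagates the open-loop mean (the actual optimization is solved through YALMIP and MOSEK, as stated).

```python
import numpy as np

# Problem data for the double-integrator example, state [x, y, vx, vy],
# input [ax, ay].  This builds the matrices only; it does not solve the SDP.
dt = 0.2
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[dt**2 / 2, 0],
              [0, dt**2 / 2],
              [dt, 0],
              [0, dt]], dtype=float)
D = np.diag([0.01] * 4)

mu0 = np.array([-10.0, 1.0, 0.0, 0.0])
Sigma0 = np.diag([0.05, 0.05, 0.01, 0.01])
mu_f = np.zeros(4)
Sigma_f = np.diag([0.025, 0.025, 0.005, 0.005])

# Open-loop mean propagation over the horizon N = 20 with zero input:
# since the initial mean velocity is zero, the mean position stays put.
mu = mu0.copy()
for _ in range(20):
    mu = A @ mu  # zero input, zero-mean noise
print(mu[:2])  # mean position remains (-10, 1) without control
```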
We first show the results when the controller from~\cite{okamoto2018Optimalb} is used in Fig.~\ref{fig:ResultsFree}.
The trajectories are depicted in Fig.~\subref*{fig:Traj_F}.
Red circles denote the initial and the target distributions, and blue ellipses represent the 3$\sigma$ confidence region at each time step.
We also illustrate 100 randomly picked sample trajectories with gray lines and observe that the state chance constraints are satisfied.
Note, however, that the controller in~\cite{okamoto2018Optimalb} cannot deal with input hard constraints.
Figure~\subref*{fig:ACC_F} depicts the acceleration commands of the 100 sample trajectories with gray lines, along with the acceleration limits $\pm 2.9~\mathrm{m/s^2}$ shown with red dashed lines.
The cost for this scenario is 2,285.
Next, we show the results when the newly developed OCS controller in Theorem~\ref{theorem:InputConstrainedController} is used.
Figure~\ref{fig:ResultsConstrained} depicts the results.
As shown in Fig.~\subref*{fig:Traj_C}, the state chance constraints are satisfied, while the control commands depicted in Fig.~\subref*{fig:ACC_C} also satisfy the input hard constraint~(\ref{eq:controlConstraintEq}).
The cost for this constrained scenario is 2,301, which is, as expected, slightly larger than the cost for the unconstrained case owing to the additional input constraint~(\ref{eq:controlConstraintEq}).
\begin{figure}
\centering
\subfloat[Trajectories.\label{fig:Traj_F}]{\centering\includegraphics[width=0.9\columnwidth]{Traj_InputFree_new}}\\
\subfloat[Input acceleration commands.\label{fig:ACC_F}]{\centering\includegraphics[width=0.9\columnwidth]{ACC_InputFree_new.pdf}}
\caption{Results when the input is constraint-free~\cite{okamoto2018Optimalb}.\label{fig:ResultsFree}}
\end{figure}
\begin{figure}
\centering
\subfloat[Trajectories.\label{fig:Traj_C}]{\centering\includegraphics[width=0.9\columnwidth]{Traj_InputConstrained_new}}\\
\subfloat[Input acceleration commands.\label{fig:ACC_C}]{\centering\includegraphics[width=0.9\columnwidth]{ACC_InputConstrained_new.pdf}}
\caption{Results when input constraints are imposed.\label{fig:ResultsConstrained}}
\end{figure}
\section{Summary}\label{sec:Summary}
In this work, we have addressed the problem of OCS under state chance constraints and input hard constraints.
Similarly to our previous works~\cite{okamoto2018Optimal,okamoto2018Optimalb}, we solved this problem by converting the original problem into a convex programming problem.
The input hard constraints are formulated using saturation functions to limit the effect of possibly unbounded disturbances.
Numerical simulations show that the proposed algorithm successfully constructs control commands that satisfy the state and input constraints.
Future work includes OCS with measurement noise.
\vspace*{-1ex}
\bibliographystyle{IEEEtran}
\section{Introduction}
The \emph{Catalan numbers} $C_n = \frac{1}{n+1}{2n \choose n}$ are
among the most important sequences of numbers in combinatorics. To name
just a few examples (see \cite{StanEC2} for many more), the number
$C_n$ counts 123-avoiding
permutations in $\mathfrak{S}_n$,
Dyck paths of length $2n$, standard Young
tableaux of shape $2 \times n$, noncrossing or nonnesting
set partitions of $[n]$, and rooted plane trees with $n+1$ vertices.
Certain families of Catalan objects
come equipped with a natural notion of \emph{type}. For example, the
type of a noncrossing set partition of $[n]$ is the sequence
$\mathbf{r} = (r_1, \dots, r_n)$,
where $r_i$ is the number of blocks of size $i$.
In the cases of noncrossing/nonnesting set partitions of $[n]$, Dyck paths
of length $2n$, and plane trees on $n+1$ vertices, there exists a nice
product formula (Theorem 1.1) which counts Catalan objects with fixed type
$\mathbf{r}$. These four classes of Catalan objects also carry a notion
of \emph{connectivity}. In this paper we give a product formula which
counts these objects with a fixed type and a fixed number of connected
components.
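As a quick illustration (ours, not part of the paper), the closed form $C_n = \frac{1}{n+1}{2n \choose n}$ can be checked against a brute-force enumeration of Dyck paths:

```python
from math import comb

def dyck_paths(n):
    """Generate all Dyck paths of length 2n as strings over {U, D}."""
    def build(path, ups, downs):  # steps of each kind still available
        if ups == 0 and downs == 0:
            yield path
            return
        if ups > 0:
            yield from build(path + "U", ups - 1, downs)
        if downs > ups:  # take a D only if it keeps the path above the x-axis
            yield from build(path + "D", ups, downs - 1)
    yield from build("", n, n)

for n in range(1, 7):
    assert sum(1 for _ in dyck_paths(n)) == comb(2 * n, n) // (n + 1)
print("Catalan counts confirmed for n = 1, ..., 6")
```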
\begin{figure}
\centering
\includegraphics[scale=.6]{connected.eps}
\caption{A connected nonnesting partition of $[13]$,
a connected noncrossing partition of $[13]$, a plane tree with $14$
vertices with a terminal rooted twig, and a Dyck path of length $26$ with
no returns}
\end{figure}
The \emph{bump diagram} of a set partition $\pi$ of $[n]$
is obtained by drawing
the numbers $1$ through $n$ in a line and drawing an arc between $i$ and $j$
with $i < j$ if $i$ and $j$ are blockmates in $\pi$ and there does not
exist $k$ with $i < k < j$ such that $i,k,$ and $j$ are blockmates in $\pi$.
The set partition $\pi$ is \emph{noncrossing} if the bump diagram of $\pi$
has no crossing arcs or, equivalently, if there do not exist $a < b < c < d$
with $a,c$ in a block of $\pi$ and $b,d$ in a different block of $\pi$.
Similarly, the set partition $\pi$ is \emph{nonnesting} if the bump diagram
of $\pi$ contains no nesting arcs, that is, no pair of arcs of the form
$ad$ and $bc$ with $a < b < c < d$. As above, the \emph{type} of any
set partition $\pi$ of $[n]$ is the sequence $(r_1, \dots, r_n)$, where
$r_i$ is the number of blocks in $\pi$ of size $i$.
The set partition $\pi$ is called \emph{connected} if there does not
exist an index $i$ with $1 \leq i \leq n-1$ such that there are no
arcs connecting the intervals $[1,i]$ and $[i+1,n]$ in the bump
diagram of $\pi$. The set partition $\pi$ is said to have
\emph{$m$ connected components} if there exist numbers
$1 \leq i_1 < i_2 < \dots < i_{m-1} \leq n$ such that the restriction of
the bump diagram of $\pi$ to each of the intervals
$[1,i_1], [i_1+1,i_2], \dots, [i_{m-1}+1,n]$ is a connected set partition.
The bump diagram
of the noncrossing partition
$\{1, 8, 13 / 2, 5, 6, 7 / 3 / 4 / 9, 12 / 10, 11 \}$ of $[13]$ with type
$(2,2,1,1,0,\dots,0)$ is shown in the middle of Figure 1.1. The
bump diagram of the nonnesting partition
$\{1, 5, 7 / 2, 6, 8, 11 / 3 / 4 / 9, 12 / 10, 13 \}$ of $[13]$ with type
$(2,2,1,1,0,\dots,0)$ is shown in the top of Figure 1.1. Both of
these set partitions are connected. The set partition
$\{ 1, 4 / 2, 3 / 5 / 6, 7, 8 \}$ is a noncrossing partition of $[8]$ with
$3$ connected components and type $(1,2,1,0,\dots,0)$.
A \emph{Dyck path} of length $2n$ is a lattice path in $\mathbb{Z}^2$
starting at $(0,0)$ and ending at $(2n, 0)$ which contains steps of
the form $U = (1,1)$ and $D = (1,-1)$ and never goes below the
$x$-axis. An \emph{ascent} in a Dyck path is a maximal sequence
of $U$-steps. The \emph{ascent type} of a Dyck path of length $2n$ is
the sequence $(r_1, \dots, r_n)$, where $r_i$ is the number of
ascents of length $i$.
A \emph{return} of a Dyck path of length $2n$ is a lattice point
$(m,0)$ with $0 < m < 2n$ which is contained in the Dyck path.
The ascent type of the Dyck path of length $26$
shown on the lower right of Figure 1.1 is $(2,2,1,1,0,\dots,0)$. This
Dyck path has no returns. The Dyck path
$UUDDUDUUDUDD$ has length $12$, ascent type $(2,2,0,\dots,0)$, and
$2$ returns.
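Both statistics are easy to read off from the $U/D$ word of a path; the following Python sketch (our notation) recovers the ascent type and return count of the example path $UUDDUDUUDUDD$:

```python
import re

def ascent_type(path, n):
    """r_i = number of maximal runs of U's of length i, as a length-n tuple."""
    r = [0] * (n + 1)
    for run in re.findall("U+", path):
        r[len(run)] += 1
    return tuple(r[1:])

def returns(path):
    """Number of lattice points (m, 0) with 0 < m < len(path) on the path."""
    height, count = 0, 0
    for i, step in enumerate(path, start=1):
        height += 1 if step == "U" else -1
        if height == 0 and i < len(path):
            count += 1
    return count

path = "UUDDUDUUDUDD"  # the example from the text, n = 6
assert ascent_type(path, 6) == (2, 2, 0, 0, 0, 0)
assert returns(path) == 2
```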
A \emph{(rooted) plane tree} is a graph $T$ defined recursively as follows.
A distinguished vertex is called the
\emph{root} of $T$ and the vertices of $T$
excluding the root are partitioned into an
\emph{ordered} list of $k$ sets
$T_1, \dots, T_k$, each of which is a plane tree.
Given a plane
tree $T$ on $n+1$ vertices, the \emph{downdegree sequence} of $T$ is the
sequence $(r_0, r_1, \dots, r_n)$, where $r_i$ is the number of vertices
$v \in T$
with $i$ neighbors
further from the root than $v$.
If $T$ is a plane tree with $n+1$ vertices, there
exists a labeling of the vertices of $T$ with $[n+1]$ called \emph{preorder}
(see \cite{StanEC1} for the precise definition). The plane tree
$T$ with $n+1$ vertices is said to have a \emph{terminal rooted twig} if
the vertex labeled $n+1$ is attached to the root. A \emph{plane forest}
$F$ is an \emph{ordered} list of plane trees $F = (T_1, \dots, T_k)$.
The \emph{downdegree sequence} of a plane forest $F$ is the sum of
the downdegree sequences of its constituent trees.
The downdegree sequence of the plane tree with $14$
vertices shown on the lower left of Figure 1.1 is
$(8,2,2,1,1,0,\dots,0)$. This plane tree has a terminal rooted
twig.
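Encoding a plane tree as the nested list of its ordered subtrees, the downdegree sequence becomes a one-pass computation; the small example tree in the Python sketch below is our own:

```python
def downdegree_sequence(tree, n):
    """(r_0, ..., r_n) for a plane tree with n+1 vertices, where a node is
    the ordered list of its subtrees and r_i counts vertices with i children."""
    r = [0] * (n + 1)
    def visit(node):
        r[len(node)] += 1
        for child in node:
            visit(child)
    visit(tree)
    return tuple(r)

# Example tree (ours): a root with two ordered subtrees -- a leaf and a
# path of length one.  It has 4 vertices, so n = 3.
T = [[], [[]]]
seq = downdegree_sequence(T, 3)
assert seq == (2, 1, 1, 0)
# Sanity check: sum_i i * r_i equals the number of edges, namely n.
assert sum(i * ri for i, ri in enumerate(seq)) == 3
```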
In order to avoid enforcing conventions such as $\frac{(-1)!}{(-1)!} = 1$
in the `degenerate' cases of our product formulas, we adopt the
following notation of Zeng \cite{Zeng}. Given any vectors
$\mathbf{r} = (r_1, \dots, r_n), \mathbf{v} = (v_1, \dots, v_n) \in \mathbb{N}^n$, set
$| \mathbf{r} | := r_1 + \cdots + r_n$,
$\mathbf{r}! := r_1! r_2! \cdots r_n!$, and
$\mathbf{r} \cdot \mathbf{v} := r_1 v_1 + \cdots + r_n v_n$.
Let $x$ be a variable and for any
vectors $\mathbf{r}, \mathbf{v} \in \mathbb{N}^n$ let
$A_{\mathbf{r}}(x;\mathbf{v}) \in \mathbb{R}[x]$ be the polynomial
\begin{equation}
A_{\mathbf{r}}(x;\mathbf{v}) =
\frac{x}{x + \mathbf{r} \cdot \mathbf{v}}
\frac{(x + \mathbf{r} \cdot \mathbf{v})_{|\mathbf{r}|}}{\mathbf{r}!},
\end{equation}
where $(y)_k = y (y-1) \cdots (y-k+1)$ is a falling factorial. Zeng
used the polynomials $A_{\mathbf{r}}(x;\mathbf{v})$ to prove various
convolution identities involving multinomial coefficients.
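Numerically, $A_{\mathbf{r}}(x;\mathbf{v})$ is most conveniently evaluated in the cancelled form $x\,(x+\mathbf{r\cdot v}-1)_{|\mathbf{r}|-1}/\mathbf{r}!$ (valid for $|\mathbf{r}|\geq 1$), which avoids the removable singularity at $x=-\mathbf{r\cdot v}$. A Python sketch (ours):

```python
from fractions import Fraction
from math import factorial

def falling(y, k):
    """Falling factorial (y)_k = y (y-1) ... (y-k+1)."""
    out = Fraction(1)
    for j in range(k):
        out *= y - j
    return out

def A(x, r, v):
    """A_r(x; v) = x/(x + r.v) * (x + r.v)_{|r|} / r!, evaluated in the
    cancelled form x * (x + r.v - 1)_{|r|-1} / r!  (requires |r| >= 1)."""
    size = sum(r)                               # |r|
    rv = sum(ri * vi for ri, vi in zip(r, v))   # r . v
    rfact = 1
    for ri in r:
        rfact *= factorial(ri)
    return Fraction(x) * falling(Fraction(x) + rv - 1, size - 1) / rfact

# Example (n = 3, r = (1, 1, 0)): three noncrossing partitions of [3] with
# one singleton and one doubleton, exactly one of which is connected.
r, v = (1, 1, 0), (1, 2, 3)
assert A(1, r, v) == 3
assert -A(-1, r, v) == 1
```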
\begin{thm}
Let $n \geq 1$, let $\mathbf{v} = (1,2,\dots, n)$, and suppose
$\mathbf{r} = (r_1, \dots, r_n) \in \mathbb{N}^n$ satisfies
$\mathbf{r \cdot v} = n$.
The polynomial evaluation
$A_{\mathbf{r}}(1; \mathbf{v}) = [A_{\mathbf{r}}(x; \mathbf{v})]_{x = 1}$
is equal to
\footnote{This polynomial evaluation can also be expressed as
$\frac{n!}{(n-|\mathbf{r}|+1)!\mathbf{r}!}$.}
the cardinality of:
\\
1. the set of noncrossing partitions of $[n]$ of type
$\mathbf{r}$; \\
2. the set of nonnesting partitions of $[n]$ of type
$\mathbf{r}$; \\
3. the set of Dyck paths of length $2n$ with ascent
type $\mathbf{r}$; \\
4. the set of plane trees with $n+1$
vertices and with downdegree sequence
$(n-|\mathbf{r}|+1, r_1, \dots, r_n)$.
\end{thm}
Part 1 of Theorem 1.1 is due to Kreweras \cite[Theorem 4]{Kreweras}.
A type-preserving
bijection showing
the equivalence of Parts 1 and 2 was discovered by Athanasiadis
\cite[Theorem 3.1]{AthNC}. A similar bijection showing the equivalence
of Parts 1 and 3 was proven by Dershowitz and Zaks \cite{DZ}. Armstrong
and Eu \cite[Lemma 3.2]{ArmEu} give an example of a bijection proving the
equivalence of Parts 1 and 4.
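For small $n$, Part 3 of Theorem 1.1 can be verified by brute force; the Python sketch below (ours) buckets Dyck paths by ascent type and compares each count against the closed form $\frac{n!}{(n-|\mathbf{r}|+1)!\,\mathbf{r}!}$ from the footnote:

```python
import re
from collections import Counter
from math import factorial

def dyck_paths(n):
    def build(path, ups, downs):
        if ups == 0 and downs == 0:
            yield path
            return
        if ups > 0:
            yield from build(path + "U", ups - 1, downs)
        if downs > ups:
            yield from build(path + "D", ups, downs - 1)
    yield from build("", n, n)

def ascent_type(path, n):
    r = [0] * (n + 1)
    for run in re.findall("U+", path):
        r[len(run)] += 1
    return tuple(r[1:])

for n in range(1, 7):
    counts = Counter(ascent_type(p, n) for p in dyck_paths(n))
    for r, count in counts.items():
        size = sum(r)                       # |r|
        rfact = 1
        for ri in r:
            rfact *= factorial(ri)
        assert count == factorial(n) // (factorial(n - size + 1) * rfact)
```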
The rest of this paper is organized as follows.
In Section 2 we
prove an analogous product formula (Theorem 2.2)
which counts connected objects according to type.
The proof of Theorem 2.2 is bijective and relies on certain properties
of words in monoids.
We extend this
result to another product formula (Theorem 2.3) which counts objects
which have a fixed number of connected components according to type. These
product formulas have found a geometric application in \cite{ArmRho} where
they are used to count regions of hyperplane arrangements related to
the Shi arrangement according to `ideal dimension' in the sense
of Zaslavsky \cite{ZIdeal}. We then apply our product formulas
to the theory of symmetric functions, refining a formula of
Stanley
\cite{StanPark}. In Section
3 we present an alternative proof of Theorem 2.3
communicated to us by Christian Krattenthaler \cite{KrattComm}
which uses generating functions and Lagrange inversion.
\section{Main Results}
The proofs of our product formulas will rest on a lemma about
words in monoids which can be viewed as a `connected analog'
of the `cycle lemma' due to Dvoretzky and Motzkin \cite{DM}
(see also \cite{DZCycle}).
For a more leisurely introduction
to this material, see \cite{StanEC2}.
Let $\mathcal{A}$
denote the infinite alphabet $\{x_0, x_1, x_2, \dots \}$ and let
$\mathcal{A}^*$
denote the free (noncommutative) monoid generated by
$\mathcal{A}$. Denote
the empty word by $e \in \mathcal{A}^*$.
The \emph{weight function} is the monoid
homomorphism $\omega: \mathcal{A}^* \rightarrow (\mathbb{Z}, +)$ induced by
$\omega(x_i) = i-1$ for all $i$. We define a subset
$\mathcal{B} \subset \mathcal{A}^*$ by
\begin{equation*}
\mathcal{B} = \{ w = w_1 \dots w_n \in \mathcal{A}^* \,|\, \omega(w) = 1,
\text{$\omega(w_1 w_2 \dots w_j) > 0$ for $1 \leq j \leq n$} \}.
\end{equation*}
That is, a word $w \in \mathcal{A}^*$ is contained in
$\mathcal{B}$ if and only if
it has weight $1$ and all of its nonempty prefixes have positive weight.
In particular, we have that $e \notin \mathcal{B}$.
Given any word $w = w_1 \dots w_n \in \mathcal{A}^*$,
a \emph{conjugate} of $w$
is an element of $\mathcal{A}^*$ of the form
$w_i w_{i+1} \dots w_n w_1 w_2 \dots w_{i-1}$ for some $1 \leq i \leq n$
(this is the monoid-theoretic analog of conjugation in groups). We have
the following result concerning conjugacy classes of elements of
$\mathcal{B}$. It
is our `connected analog' of \cite[Lemma 5.3.7]{StanEC2} and is an analog of
the `cycle lemma' in tree enumeration.
\begin{lem}
A word $w \in \mathcal{A}^*$ is conjugate to an element of
$\mathcal{B}$ if and only if
$\omega(w) = 1$, in which case $w$ is conjugate to a unique element of
$\mathcal{B}$ and the
conjugacy class of $w$ has size equal to the length of $w$.
\end{lem}
\begin{proof}
Let $w \in \mathcal{B}$ have length $n$ and suppose the conjugacy
class of $w$ has size $k | n$. Then we can write
$w = v^{n/k}$ for some $v \in \mathcal{A}^*$. Since $v$ is a nonempty
prefix of $w$, we have $\omega(v) > 0$ and the fact that
$1 = \omega(w) = \frac{n}{k} \omega(v)$ forces $k = n$.
Since conjugation does not affect weight,
every element $w$ of the conjugacy class of an element of
$\mathcal{B}$
satisfies
$\omega(w) = 1$.
Suppose that $w \in \mathcal{A}^*$ satisfies $\omega(w) = 1$.
We show that $w$ is conjugate to an element of $\mathcal{B}$.
The proof of this fact
breaks up into three cases depending on the letters
which occur in $w$.
\noindent
\emph{Case 1: $w$ contains no occurrences of $x_0$.}
Since $\omega(w) = 1$, in this case $w$ must be of the form
$x_1 \dots x_1 x_2 x_1 \dots x_1$ and $w$ is conjugate to
$x_2 x_1 \dots x_1 \in \mathcal{B}$.
\noindent
\emph{Case 2: $w$ contains at most one occurrence of a letter
other than $x_0$.}
In this case, the
condition $\omega(w) = 1$ forces a conjugate of
$w$ to be of the form $x_s x_0^{s-2} \in \mathcal{B}$ for
some $s > 1$.
\noindent
\emph{Case 3: $w$ contains at least one occurrence of $x_0$ and at least two
occurrences of letters other than $x_0$.}
We claim that
there exists a conjugate $w'$ of $w$ of the form
$w' = x_{s+1} x_0^s v$ for some $s \geq 0$. If this were not the case,
consider the word $w$ written around a circle. Every maximal contiguous
string of $x_0$'s in $w$ of length $\ell$ must be preceded by a letter
of the form $x_s$ for some $s > \ell + 1$. The weight
of any such contiguous string taken together with its
preceding letter is
$\omega(x_s x_0^{\ell}) = s - 1 - \ell > 0$. Since $\omega(w) = 1$, it
follows that $w$ has a conjugate of the form $x_s x_0^{s-2}$, which
contradicts our assumption that $w$ has at least two occurrences of
a letter other
than $x_0$.
Let $w'$ be a conjugate of $w$ of the form $w' = x_{s+1} x_0^s v$ for
some $v \in \mathcal{A}^*$.
Since $1 = \omega(w) = \omega(w') = s -s + \omega(v)$,
by induction
on length
we can assume that a conjugate of $v$ is contained in
$\mathcal{B}$. Say that $v = yz$ such that $zy \in \mathcal{B}$ with
$z \neq e$. Then $z x_{s+1} x_0^s y$ is a conjugate of
$w' = x_{s+1} x_0^s yz$ satisfying $z x_{s+1} x_0^s y \in \mathcal{B}$.
\end{proof}
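Lemma 2.1 is easy to check on small examples. In the Python sketch below a word is encoded as the tuple of subscripts of its letters, so that $x_i$ becomes the integer $i$ with weight $i-1$ (the example word $x_3 x_1 x_1 x_0$ is our own):

```python
def weight(word):
    """Weight of a word, with the letter x_i encoded as the integer i."""
    return sum(i - 1 for i in word)

def in_B(word):
    """Membership in B: weight 1 and every nonempty prefix of positive weight."""
    if weight(word) != 1:
        return False
    partial = 0
    for i in word:
        partial += i - 1
        if partial <= 0:
            return False
    return True

w = (3, 1, 1, 0)  # the word x_3 x_1 x_1 x_0, of weight 2 + 0 + 0 - 1 = 1
conjugates = [w[k:] + w[:k] for k in range(len(w))]
in_B_count = sum(in_B(c) for c in conjugates)
assert in_B_count == 1            # exactly one conjugate lies in B
assert len(set(conjugates)) == 4  # the conjugacy class has size len(w)
```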
Let $\mathcal{B}^*$ denote the submonoid of $\mathcal{A}^*$
generated by $\mathcal{B}$.
In view of \cite[Lemma 5.3.7]{StanEC2}, it is tempting to guess that any
element $w \in \mathcal{A}^*$ obtained by permuting the letters of
an element of $\mathcal{B}^*$ is itself conjugate to an element of
$\mathcal{B}^*$.
However, this is false. For example, the element
$x_3 x_0 x_2 = (x_3 x_0) (x_2) \in \mathcal{A}^*$ is contained
in $\mathcal{B}^*$
but $x_3 x_2 x_0$ has no conjugate
in $\mathcal{B}^*$.
(However, the analog of \cite[Lemma 5.3.6]{StanEC2} does hold in this
context - the monoid $\mathcal{B}^*$ is very pure.)
Lemma 2.1 is the key tool we will use in proving our connected analog
of Theorem 1.1.
\begin{thm}
Let $n \geq 1$, let $\mathbf{v} = (1,2,\dots,n)$, and suppose
$\mathbf{r} = (r_1, \dots, r_n) \in \mathbb{N}^n$ satisfies
$\mathbf{r \cdot v} = n$.
The polynomial evaluation
$-A_{\mathbf{r}}(-1;\mathbf{v}) = [-A_{\mathbf{r}}(x;\mathbf{v})]_{x = -1}$
is equal to
\footnote{In the case where $n > 1$ and $\mathbf{r} \neq (n,0,\dots,0)$, this
can also be expressed as $\frac{(n-2)!}{(n-|\mathbf{r}|-1)! \mathbf{r}!}$.}
the cardinality of:\\
1. the set of connected noncrossing partitions of $[n]$ of type
$\mathbf{r}$; \\
2. the set of connected nonnesting partitions of $[n]$ of type
$\mathbf{r}$; \\
3. the set of Dyck paths of length $2n$ with no returns and ascent
type $\mathbf{r}$; \\
4. the set of plane trees with a terminal rooted twig and $n+1$
vertices with downdegree sequence $(n-|\mathbf{r}|+1, r_1, \dots, r_n)$. \\
\end{thm}
\begin{proof}
The line of reasoning which we follow here should be compared to that in
\cite[Chapter 5]{StanEC2}.
Observe first that when $\mathbf{r} = (n,0,\dots,0)$ we have that
\begin{equation}
-A_{\mathbf{r}}(-1;\mathbf{v}) = \begin{cases}
1 & \text{if $n = 1$,} \\
0 & \text{if $n > 1$.}
\end{cases}
\end{equation}
This is in agreement with the relevant set cardinalities, so from now on
we assume that $n > 1$ and $\mathbf{r} \neq (n,0,\dots,0)$.
Let $\mathcal{B}(\mathbf{r})$ denote the set of
length $n-1$
words $w \in \mathcal{B}$
with $n-|\mathbf{r}|-1$ $x_0$'s, $r_1$ $x_1$'s, $\dots$, and $r_n$ $x_n$'s.
By Lemma 2.1, we have that
\begin{equation}
|\mathcal{B}(\mathbf{r})| = \frac{1}{n-1}{n-1 \choose
n - |\mathbf{r}| - 1, r_1, \dots, r_n} = -A_{\mathbf{r}}(-1;\mathbf{v}),
\end{equation}
where the second equality follows from the definition of
$A_{\mathbf{r}}(x;\mathbf{v})$. Therefore, it suffices to biject
each of the sets in Parts 1-4 with the set
$\mathcal{B}(\mathbf{r})$. We present a bijection in each case.
1. Let $NC(\mathbf{r})$ be the set of noncrossing partitions we wish
to enumerate. Given any partition $\pi$ of $[n]$, define a word
$\psi(\pi) = w_1 w_2 \dots w_{n-1} \in \mathcal{A}^*$ as follows. For
$1 \leq i \leq n-1$, if $i$ is the minimal element of a block of $\pi$,
let $w_i = x_j$ where $j$ is the size of the block containing $i$.
Otherwise, let $w_i = x_0$. For example, if $\pi$ is the connected
nonnesting partition of $[13]$ shown on the top of Figure 1.1,
we have that
$\psi(\pi) = x_3 x_4 x_1 x_1 x_0 x_0 x_0 x_0 x_2 x_2 x_0 x_0$.
It is easy to see that the mapping $\pi \mapsto \psi(\pi)$ sets up a
bijection between set partitions in $NC(\mathbf{r})$ and words in
$\mathcal{B}(\mathbf{r})$.
2. Let $NN(\mathbf{r})$ be the set of nonnesting partitions we wish
to enumerate.
It is easy to verify (see
\cite[Solution to Exercise 5.44]{StanEC2}) that the map
$\psi$ from the proof of Part 1 restricts to a bijection between
$NN(\mathbf{r})$ and $\mathcal{B}(\mathbf{r})$.
3. Let $\mathbb{D}$ be a Dyck path with no returns of length $2n$
and ascent type $\mathbf{r}$. Define a length $n-1$ word
$\delta(\mathbb{D}) \in \mathcal{A}^*$ as follows. Let
$w_1 w_2 \dots w_n \in \mathcal{A}^*$ be the word obtained by reading
$\mathbb{D}$ from left to right, replacing every ascent of length
$i$ with $x_i$ and replacing every maximal contiguous sequence of downsteps
of length $\ell$ with $x_0^{\ell-1}$. Set
$\delta(\mathbb{D}) := w_1 w_2 \dots w_{n-1}$. For example, if $\mathbb{D}$
is the Dyck path shown in Figure 1.1 we have that
$\delta(\mathbb{D}) = x_3 x_0 x_4 x_1 x_1 x_0 x_2 x_0^3 x_2 x_0$. It is
easy to verify that
$\delta(\mathbb{D}) \in \mathcal{B}(\mathbf{r})$
and that the map
$\mathbb{D} \mapsto \delta(\mathbb{D})$ sets up a bijection between
Dyck paths with no returns of length $2n$ and ascent type
$\mathbf{r}$ to $\mathcal{B}(\mathbf{r})$.
4. Given a plane tree $T$ on $n+1$ vertices with a terminal rooted twig,
let $w_1 w_2 \dots w_{n+1} \in \mathcal{A}^*$ be the word obtained by
setting $w_i = x_j$, where $j$ is the downdegree of the $i^{th}$ vertex
of $T$ in preorder. Since $T$ has a terminal rooted twig, we have
$w_n = w_{n+1} = x_0$. Set
$\chi(T) := w_1 w_2 \dots w_{n-1} \in \mathcal{A}^*$.
For example, if $T$ is the tree shown in Figure 1.1, we have
that $\chi(T) = x_3 x_4 x_1 x_1 x_0 x_0 x_0 x_0 x_2 x_2 x_0 x_0$.
The mapping $T \mapsto \chi(T)$ sets up a bijection between the
set of trees of interest and $\mathcal{B}(\mathbf{r})$.
\end{proof}
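Part 3 of Theorem 2.2 can likewise be confirmed by brute force. The Python sketch below (ours) compares the number of Dyck paths with no returns of each ascent type against $-A_{\mathbf{r}}(-1;\mathbf{v})$, evaluated in the cancelled form $-x\,(x+n-1)_{|\mathbf{r}|-1}/\mathbf{r}!$ at $x=-1$:

```python
import re
from collections import Counter
from fractions import Fraction
from math import factorial

def dyck_paths(n):
    def build(path, ups, downs):
        if ups == 0 and downs == 0:
            yield path
            return
        if ups > 0:
            yield from build(path + "U", ups - 1, downs)
        if downs > ups:
            yield from build(path + "D", ups, downs - 1)
    yield from build("", n, n)

def has_no_returns(path):
    height = 0
    for i, step in enumerate(path, start=1):
        height += 1 if step == "U" else -1
        if height == 0 and i < len(path):
            return False
    return True

def ascent_type(path, n):
    r = [0] * (n + 1)
    for run in re.findall("U+", path):
        r[len(run)] += 1
    return tuple(r[1:])

def minus_A_at_minus_one(r, n):
    """-A_r(-1; v) via the cancelled form -x*(x+n-1)_{|r|-1}/r! at x = -1."""
    size = sum(r)
    value = Fraction(1)          # this is -x at x = -1
    for j in range(size - 1):
        value *= n - 2 - j       # (x + n - 1)_{size-1} at x = -1
    for ri in r:
        value /= factorial(ri)
    return value

for n in range(2, 7):
    counts = Counter(ascent_type(p, n)
                     for p in dyck_paths(n) if has_no_returns(p))
    all_types = set(ascent_type(p, n) for p in dyck_paths(n))
    for r in all_types:  # types absent from counts must evaluate to 0
        assert counts.get(r, 0) == minus_A_at_minus_one(r, n)
```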
An alternative proof of Parts 1 and 2 of Theorem 2.2 which relies
on a product formula enumerating noncrossing partitions by `reduced
type' due to Armstrong and Eu \cite{ArmEu} (which in turn relies on
the original enumeration of noncrossing partitions by type due to
Kreweras) can be found in \cite{ArmRho}.
It is natural to ask if the formula in Theorem 2.2 can be generalized to
the case of multiple connected components. The answer is `yes', and
to avoid enforcing nonstandard conventions in degenerate cases
we will again
state the relevant product formula in terms of a polynomial specialization.
Suppose that $\mathbf{r}, \mathbf{v} \in \mathbb{N}^n$ and $1 \leq m \leq
|\mathbf{r}|$. We define the polynomial $A^{(m)}_{\mathbf{r}}(x;\mathbf{v})
\in \mathbb{R}[x]$ by
\begin{equation}
A^{(m)}_{\mathbf{r}}(x;\mathbf{v}) = \frac{(|\mathbf{r}|-1)!}
{(|\mathbf{r}|-m)!}
\frac{x}{x + \mathbf{r \cdot v}}
\frac{(x + \mathbf{r \cdot v})_{|\mathbf{r}|-m+1}}{\mathbf{r!}}.
\end{equation}
Observe that in the case $m = 1$ we have $A^{(1)}_{\mathbf{r}}(x;\mathbf{v})
= A_{\mathbf{r}}(x;\mathbf{v})$.
\begin{thm}
Let $n \geq m \geq 1$ and let $\mathbf{v} = (1,2,\dots,n) \in \mathbb{N}^n$.
Suppose that $\mathbf{r} = (r_1, \dots, r_n) \in \mathbb{N}^n$ satisfies
$\mathbf{r \cdot v} = n$ and $m \leq |\mathbf{r}|$.
The polynomial
evaluation $-A^{(m)}_{\mathbf{r}}(-m;\mathbf{v})
= [-A^{(m)}_{\mathbf{r}}(x;\mathbf{v})]_{x = -m}$
is equal to
\footnote{In the case where $n > m$ and $\mathbf{r} \neq (n,0,\dots,0)$, this
can also be expressed as
$\frac{m(n-m-1)!(|\mathbf{r}|-1)!}{(n-|\mathbf{r}|-1)!(|\mathbf{r}|-m)! \mathbf{r}!}$.}
the cardinality of:\\
1. the set of noncrossing partitions of $[n]$ with exactly $m$
connected components of type $\mathbf{r}$;\\
2. the set of nonnesting partitions of $[n]$ with exactly $m$
connected components of type $\mathbf{r}$;\\
3. the set of Dyck paths of length $2n$ with exactly $m-1$ returns of
ascent type $\mathbf{r}$;\\
4. the set of plane forests with $n + m$ vertices and exactly $m$ trees with
downdegree sequence $(n-|\mathbf{r}|+m,r_1,\dots,r_n)$ such that
every tree has a terminal rooted twig.
\end{thm}
\begin{proof}
In light of Theorem 2.2, it suffices to prove Part 1.
The polynomial $A_{\mathbf{r}}(mx;\mathbf{v})$ can be obtained via the
following convolution-type identity which follows from
a result of Raney
\cite[Theorems 2.2,
2.3]{Raney}
and induction:
\begin{equation}
\sum_{\mathbf{r^{(1)}} + \cdots + \mathbf{r^{(m)}} = \mathbf{r}}
A_{\mathbf{r^{(1)}}}(x;\mathbf{v}) \cdots A_{\mathbf{r^{(m)}}}(x;\mathbf{v}) =
A_{\mathbf{r}}(mx;\mathbf{v}),
\end{equation}
where $\mathbf{r^{(i)}} \in \mathbb{N}^n$ for all $i$.
Let $\mathbf{0} \in \mathbb{N}^n$ be the zero vector.
By Theorem 2.2 and the fact that $A_{\mathbf{0}}(x;\mathbf{v}) = 1$,
we can set $x = -1$ to obtain
\begin{equation}
\sum_{k=1}^m (-1)^k {m \choose k} C(n,k,\mathbf{r}) =
A_{\mathbf{r}}(-m;\mathbf{v}),
\end{equation}
where $C(n,k,\mathbf{r})$ denotes the number of noncrossing partitions of
$[n]$ with exactly $k$ connected components and type $\mathbf{r}$. By the
Principle of Inclusion-Exclusion (see \cite{StanEC1}), it follows that
\begin{equation}
C(n,m,\mathbf{r}) = \sum_{k=1}^m (-1)^k {m \choose k}
A_{\mathbf{r}}(-k;\mathbf{v}).
\end{equation}
Therefore, it suffices to show that the right hand side of
Equation 2.6
is equal to $-A^{(m)}_{\mathbf{r}}(-m;\mathbf{v})$. We sketch
this verification here for the case $m, |\mathbf{r}| < n$; the other
degenerate cases are left to the reader.
We start with the following binomial coefficient identity:
\begin{equation}
\sum_{k=1}^m (-1)^{k+1} k {m \choose k} {n - k - 1 \choose |\mathbf{r}| - 1}
= m {n - m - 1 \choose n - |\mathbf{r}| - 1}.
\end{equation}
This
identity can be
obtained by comparing like powers of $x$ on both sides
of the equation
$r(1+x)^{r+s-1} = (1+x)^s \frac{d}{dx}(1+x)^r = (1+x)^s
({r \choose 1} + 2 {r \choose 2} x + 3 {r \choose 3} x^2 + \cdots)$.
Multiplying both sides
of Equation 2.7
by $\frac{(|\mathbf{r}|-1)!}{\mathbf{r}!}$ and
using the definition of $A_{\mathbf{r}}(x;\mathbf{v})$ we obtain
\begin{equation}
\sum_{k=1}^m (-1)^{k} {m \choose k} A_{\mathbf{r}}(-k;\mathbf{v}) =
\frac{m (n-m-1)! (|\mathbf{r}|-1)!}
{(n-|\mathbf{r}|-1)!(|\mathbf{r}|-m)!\mathbf{r}!}.
\end{equation}
The right hand side of Equation 2.8 is equal to
$-A_{\mathbf{r}}^{(m)}(-m;\mathbf{v})$.
\end{proof}
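Theorem 2.3 (again via Part 3) admits the same brute-force check. The Python sketch below (ours) buckets Dyck paths by ascent type and return count and compares against $-A^{(m)}_{\mathbf{r}}(-m;\mathbf{v})$, evaluated in the cancelled form $\frac{(|\mathbf{r}|-1)!}{(|\mathbf{r}|-m)!}\,x\,(x+n-1)_{|\mathbf{r}|-m}/\mathbf{r}!$ at $x=-m$:

```python
import re
from collections import Counter
from fractions import Fraction
from math import factorial

def dyck_paths(n):
    def build(path, ups, downs):
        if ups == 0 and downs == 0:
            yield path
            return
        if ups > 0:
            yield from build(path + "U", ups - 1, downs)
        if downs > ups:
            yield from build(path + "D", ups, downs - 1)
    yield from build("", n, n)

def stats(path, n):
    """(ascent type, number of returns) of a Dyck path of length 2n."""
    r = [0] * (n + 1)
    for run in re.findall("U+", path):
        r[len(run)] += 1
    height, ret = 0, 0
    for i, step in enumerate(path, start=1):
        height += 1 if step == "U" else -1
        if height == 0 and i < len(path):
            ret += 1
    return tuple(r[1:]), ret

def formula(r, n, m):
    """-A^(m)_r(-m; v), via the cancelled form
    ((|r|-1)!/(|r|-m)!) * x * (x + n - 1)_{|r|-m} / r!  at x = -m."""
    size = sum(r)
    value = Fraction(factorial(size - 1), factorial(size - m)) * m
    for j in range(size - m):
        value *= n - m - 1 - j   # (x + n - 1)_{size-m} at x = -m
    for ri in r:
        value /= factorial(ri)
    return value

for n in range(2, 7):
    buckets = Counter(stats(p, n) for p in dyck_paths(n))
    types = set(t for t, _ in buckets)
    for r in types:
        for m in range(1, sum(r) + 1):  # the theorem requires m <= |r|
            assert buckets.get((r, m - 1), 0) == formula(r, n, m)
```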
We close this section
by relating the product formulas in this paper to
Frobenius characters arising from the theory of parking functions.
For $n \geq 0$ a \emph{parking function of length $n$} is a sequence
$(a_1, \dots, a_n)$ of positive integers whose nondecreasing rearrangement
$(b_1, \dots, b_n)$ satisfies $b_i \leq i$ for all $i$. A nondecreasing
parking function is called \emph{primitive} and primitive parking
functions of length $n$ are in an obvious bijective correspondence
(see \cite{ArmEu}) with Dyck paths of length $2n$. The \emph{type} of
a parking function is the ascent type of its nondecreasing rearrangement.
A parking function $(a_1, \dots, a_n)$ will be said to have \emph{$m$
returns} if its nondecreasing rearrangement has $m$ returns when viewed
as a Dyck path.
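These definitions are easily exercised by brute force. A small Python enumeration (function names are ours) recovers the classical counts: there are $(n+1)^{n-1}$ parking functions of length $n$, of which Catalan-many are primitive.

```python
from itertools import product

def is_parking(seq):
    # nondecreasing rearrangement (b_1,...,b_n) must satisfy b_i <= i
    return all(b <= i for i, b in enumerate(sorted(seq), start=1))

def parking_functions(n):
    return [p for p in product(range(1, n + 1), repeat=n) if is_parking(p)]

n = 4
pf = parking_functions(n)
primitive = [p for p in pf if list(p) == sorted(p)]
print(len(pf), len(primitive))  # 125 = 5^3 parking functions, 14 = Catalan(4) primitive
```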
The symmetric group $\mathfrak{S}_n$ acts on the set of parking
functions of length $n$. Stanley \cite{StanPark} computed the Frobenius
character of this action with respect to the standard bases
(monomial, homogeneous, elementary, power sum, and Schur) of the
ring of symmetric functions. To compute this character in the basis
$\{ h_{\lambda} \}$ of complete homogeneous symmetric functions, he
observed that every orbit $\mathcal{O}$ of this action
contains a unique primitive parking
function $(b_1, \dots, b_n)$
and that the Frobenius
character of
the action of $\mathfrak{S}_n$ on
$\mathcal{O}$ is $h_{\lambda}$, where
$\lambda = (1^{r_1} 2^{r_2} \dots n^{r_n})$ and
$(r_1, \dots, r_n)$ is the type of $(b_1, \dots, b_n)$. By applying
the formula in Theorem 1.1, one immediately gets the expansion
\begin{equation}
\mathrm{Frob}(P_n) = \sum_{\lambda \vdash n} \frac{n!}
{(n-|\mathbf{r}(\lambda)|+1)!
\mathbf{r}(\lambda)!} h_{\lambda},
\end{equation}
where $\mathrm{Frob}$ is the Frobenius characteristic map,
$P_n$ is the permutation module for the action of $\mathfrak{S}_n$ on
parking functions of length $n$,
$r_i(\lambda)$ is
the multiplicity of $i$ in $\lambda$ for $1 \leq i \leq n$, and
$\mathbf{r}(\lambda) = (r_1(\lambda), \dots, r_n(\lambda))$. (See \cite{ArmEu}
for a nonhomogeneous generalization of this formula.)
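Since the orbit with Frobenius character $h_{\lambda}$ is a copy of $\mathfrak{S}_n/\mathfrak{S}_{\lambda}$ and hence has $n!/\prod_j \lambda_j!$ elements, taking dimensions in the expansion above must return the total number $(n+1)^{n-1}$ of parking functions. A Python sanity check of this (the partition generator is written ad hoc):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def total_dim(n):
    total = 0
    for lam in partitions(n):
        r = [lam.count(i) for i in range(1, n + 1)]  # r_i(lambda)
        R = sum(r)                                   # |r(lambda)| = number of parts
        coef = factorial(n) // (factorial(n - R + 1)
                                * prod(factorial(ri) for ri in r))
        orbit = factorial(n) // prod(factorial(part) for part in lam)
        total += coef * orbit
    return total

print([total_dim(n) == (n + 1) ** (n - 1) for n in range(1, 8)])
```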
Using the same reasoning as in \cite{StanPark} we can compute the
Frobenius characters of other modules related to parking functions.
In particular, for $m > 0$, the symmetric group
$\mathfrak{S}_n$ acts on the set of parking functions of length $n$
with $m-1$ returns. Let $P^{(m)}_n$ be the permutation module
corresponding to this action, so that $P_n \cong \bigoplus_{m=1}^{n}
P_n^{(m)}$. Applying Theorem 2.3, we have that the Frobenius
character of this module is
\begin{equation}
\mathrm{Frob}(P^{(m)}_n) = \sum_{\lambda \vdash n}
-A^{(m)}_{\mathbf{r}(\lambda)}(-m;\mathbf{v}) h_{\lambda},
\end{equation}
where $\mathbf{v} = (1,2,\dots,n) \in \mathbb{N}^n$.
\section{Proof of Theorem 2.3 using Lagrange Inversion}
In this section we outline an alternative proof of Theorem 2.3 using
generating functions and Lagrange inversion which was pointed out to the
author by Christian Krattenthaler \cite{KrattComm}. This method has
the advantage of immediately proving Theorem 2.3 without first proving
the single connected component case of Theorem 2.2. We only handle
the case of noncrossing partitions.
Let $y = \{ y_1, y_2, \dots \}$
and $z$ be
commuting variables. If $\pi$ is a noncrossing partition of $[n]$ for
$n \geq 0$, the \emph{weight} of $\pi$ is the monomial
\begin{equation}
\mathrm{wt}(\pi) = z^n \prod_{i \geq 1} y_i^{r_i(\pi)},
\end{equation}
where $r_i(\pi)$ is the number of blocks in $\pi$ of size $i$. (The
unique partition of $[0]$ has weight $1$.) We define
$P(z) \in \mathbb{R}(y_1,y_2,\dots)[[z]]$ by grouping these monomials
together in a generating function. That is,
\begin{equation}
P(z) = \sum_{\pi} \mathrm{wt}(\pi),
\end{equation}
where the sum is over all noncrossing partitions $\pi$.
Given any noncrossing partition $\pi$ of $[n]$ with $n > 1$, if the block
of $\pi$ containing $1$ has size $k$, drawing $\pi$ on a circle one obtains
$k$ (possibly empty) noncrossing partitions `between' each successive pair
of elements in this $k$ element block. This combinatorial observation
yields the following formula:
\begin{equation}
P(z) = 1 + \sum_{k = 1}^{\infty} y_k z^k P(z)^k.
\end{equation}
Rearranging this expression, we get that
\begin{equation}
\frac{zP(z)}{1 + \sum_{k = 1}^{\infty} y_k z^k P(z)^k} = z,
\end{equation}
and therefore $zP(z)$ is the compositional inverse of
$\frac{z}{X(z)}$, where $X(z) = 1 + \sum_{k = 1}^{\infty} y_k z^k$. This
implies that
\begin{equation}
P\left(\frac{z}{X(z)}\right) = X(z).
\end{equation}
In order to prove Theorem 2.3, we need to keep track of the number of
connected components of a noncrossing partition. To do this,
let $C(z) \in \mathbb{R}(y_1,y_2,\dots)[[z]]$ be the generating function
\begin{equation}
C(z) = \sum_{\pi} \mathrm{wt}(\pi),
\end{equation}
where the sum ranges over all \emph{connected} noncrossing
partitions of $[n]$ for $n \geq 1$. It is immediate that the generating functions
$P(z)$ and $C(z)$ are related by
\begin{equation}
P(z) = \frac{1}{1 - C(z)}
\end{equation}
or equivalently,
\begin{equation}
C(z) = \frac{P(z) - 1}{P(z)}.
\end{equation}
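Both functional equations can be tested on truncated power series. Specializing $y_k=1$ for all $k$, the recursion forces the coefficients of $P(z)$ to be the Catalan numbers; moreover $P=1+zP^2$ in this specialization, so $C=(P-1)/P=zP^2/P=zP$. The following Python sketch (plain integer series arithmetic; all names are ours) verifies both facts.

```python
from math import comb

N = 12  # truncation order

def mul(a, b):
    """Product of two power series truncated at order N."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j in range(N - i):
            c[i + j] += ai * b[j]
    return c

# iterate P = 1 + sum_{k>=1} z^k P^k with all y_k = 1;
# each pass fixes one more coefficient, so N passes suffice
P = [1] + [0] * (N - 1)
for _ in range(N):
    new = [1] + [0] * (N - 1)
    pk = [1] + [0] * (N - 1)
    for k in range(1, N):
        pk = mul(pk, P)          # pk = P^k
        for n in range(k, N):
            new[n] += pk[n - k]  # contribution of z^k P^k
    P = new

catalan = [comb(2 * n, n) // (n + 1) for n in range(N)]
C = [0] + P[:-1]                 # C(z) = z P(z) in this specialization
print(P == catalan)              # coefficients of P are Catalan numbers
print(mul(C, P) == [0] + P[1:])  # C * P = P - 1, i.e. C = (P - 1)/P
```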
As in the first proof of Theorem 2.3, let $C(n,m,\mathbf{r})$ denote
the number of noncrossing partitions of $[n]$ with
exactly $m$ connected components and type $\mathbf{r}$. It is evident
that
\begin{equation}
C(z)^m = \left( \frac{P(z) - 1}{P(z)} \right)^m = \sum_{n \geq 0}
\sum_{\mathbf{r} \geq \mathbf{0}} C(n,m,\mathbf{r})y^{\mathbf{r}}z^n,
\end{equation}
where the inequality in the inner summation is componentwise and
$y^{\mathbf{r}} = y_1^{r_1} y_2^{r_2} \cdots$ if $\mathbf{r}
= (r_1, r_2, \dots)$.
To find an expression for $C(n,m,\mathbf{r})$ it is enough to extract the
coefficient of $z^n y^{\mathbf{r}}$ from the generating
function in Equation 3.9. We use Lagrange inversion to do this.
Set $F(z) := \frac{z}{X(z)}$, so that the compositional inverse of $F(z)$
is $F^{\langle -1 \rangle}(z) = z P(z)$. Also set
$H(z) := \left( \frac{X(z) - 1}{X(z)} \right)^m$. In light of Equation
3.5 we have the identity
$\left( \frac{P(z) - 1}{P(z)} \right)^m = H (F^{\langle -1 \rangle}(z))$.
Let $\langle - \rangle$ denote taking a coefficient in a Laurent series.
Applying Lagrange inversion as in \cite[Corollary 5.4.3]{StanEC2}
we get that
\begin{align*}
C(n,m,\mathbf{r}) &= \langle z^n y^{\mathbf{r}} \rangle
H (F^{\langle -1 \rangle}(z)) \\
&= \frac{1}{n} \langle z^{n-1} y^{\mathbf{r}} \rangle
H'(z) \left( \frac{z}{F(z)} \right)^n \\
&= \frac{1}{n} \langle z^{n-1} y^{\mathbf{r}} \rangle
m X(z)^{n-m-1} (X(z) - 1)^{m-1} X'(z) \\
&= \frac{m}{n} \langle z^{n-1} y^{\mathbf{r}} \rangle
(X(z) - 1)^{m-1} \sum_{\ell \geq 0} {n - m - 1 \choose \ell}
(X(z) - 1)^{\ell} X'(z) \\
&= \frac{m}{n} \langle z^{n-1} y^{\mathbf{r}} \rangle
(X(z) - 1)^{m-1} \sum_{\ell \geq 0} \frac{1}{m + \ell} {n - m - 1 \choose
\ell} \left( (X(z)-1)^{m + \ell} \right)',
\end{align*}
where all derivatives are partial derivatives with respect to $z$.
Suppose $\mathbf{r} = (r_1, r_2, \dots)$. Taking the coefficient in the
bottom line yields the equality
\begin{equation}
C(n,m,\mathbf{r}) = \frac{m}{|\mathbf{r}|} {n-m-1 \choose |\mathbf{r}| - m}
{|\mathbf{r}| \choose r_1, r_2, \dots},
\end{equation}
which is equivalent to Part 1 of Theorem 2.3.
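The closed formula can be confronted with a brute-force count for small $n$, reading off connected components from the concatenation decomposition behind $P(z)=1/(1-C(z))$: a noncrossing partition splits at position $i$ exactly when no block contains elements on both sides of the gap between $i$ and $i+1$. A Python sketch (names are ours) checking every $(m,\mathbf{r})$ occurring for $n=5$:

```python
from collections import Counter
from math import comb, factorial, prod

def set_partitions(n):
    """All set partitions of {1,...,n} as lists of sorted blocks."""
    if n == 0:
        yield []
        return
    for rest in set_partitions(n - 1):
        for i in range(len(rest)):
            yield rest[:i] + [rest[i] + [n]] + rest[i + 1:]
        yield rest + [[n]]

def noncrossing(blocks, n):
    # no a < b < c < d with a, c in one block and b, d in another
    idx = {x: i for i, blk in enumerate(blocks) for x in blk}
    quads = ((a, b, c, d)
             for a in range(1, n + 1) for b in range(a + 1, n + 1)
             for c in range(b + 1, n + 1) for d in range(c + 1, n + 1))
    return not any(idx[a] == idx[c] != idx[b] == idx[d] for a, b, c, d in quads)

def components(blocks, n):
    # 1 + number of positions i where every block lies in [1,i] or [i+1,n]
    return 1 + sum(all(blk[-1] <= i or blk[0] > i for blk in blocks)
                   for i in range(1, n))

def formula(n, m, r):
    R = sum(r)
    b = 1 if R == m else (comb(n - m - 1, R - m) if 0 <= R - m <= n - m - 1 else 0)
    return m * b * factorial(R) // (R * prod(factorial(ri) for ri in r))

n = 5
counts = Counter()
for blocks in set_partitions(n):
    if noncrossing(blocks, n):
        r = tuple(sum(len(blk) == i for blk in blocks) for i in range(1, n + 1))
        counts[(components(blocks, n), r)] += 1

print(all(formula(n, m, list(r)) == c for (m, r), c in counts.items()))
print(sum(counts.values()))  # Catalan(5) = 42
```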
\section{Acknowledgments}
The author is grateful to Drew Armstrong, Christos Athanasiadis,
Christian Krattenthaler,
Vic Reiner, and Richard Stanley for many helpful conversations.
\section{Introduction}
Isoperimetric inequalities play an important role in geometry and analysis. In the last decades the deep and beautiful connection between isoperimetric inequalities and functional inequalities has been discovered. This discovery started with the work of Meyer, Bakry and Emery on the famous `carr\'e du champ' or gradient form, and was brought to perfection by Varopoulos, Saloff-Coste \cite{sfc,sfc2}, Coulhon \cite{VSCC}, Diaconis \cite{DSF},
Bobkov, G\"otze \cite{BG,BG2}, Barthe and his coauthors \cite{BaK,BMil,BO1,BaK,BCE}, and Ledoux \cite{L1,L2,L3,L4,L5,L6}. It appears that the right framework of this analysis is given by abstract semigroup theory, i.e.\! starting with a semigroup of measure preserving maps on a measure space.
A crucial application of isoperimetric inequalities on compact manifolds is the famous \emph{concentration of measure} phenomenon, used fundamentally in \cite{FLM}, and analyzed systematically by Milman and Schechtman, see \cite{MS}. Thanks to the work of Gross \cite{G1,G2,G3,G4}, it is by now well-known that concentration of measure can occur in noncommutative spaces and in infinite dimension in the form of a logarithmic Sobolev inequality. Indeed, let $T_t=e^{-tA}$ be a measure preserving semigroup, acting on $L_{\infty}(\Om,\mu)$ with energy form $\E(f)=(f,Af)$. Then $(T_t)$ (or $A$) satisfies a \emph{logarithmic Sobolev inequality, in short $\la$-LSI}, if
\begin{equation}\label{LSI} \la \int f^2\log f^2d\mu \kl \E(f) \end{equation}
holds for all $f$ in the domain of $A^{1/2}$ with $\|f\|_2=1$. We will use the notation $\Ent(f)=\int f\log fd\mu$ for the entropy. To simplify the exposition, we will assume throughout this paper that $\mathcal{A}\subset {\rm dom}(A)\cap L_{\infty}$ is a not necessarily closed $^*$-algebra contained in the domain and invariant under the semigroup. Semigroup techniques have been very successfully combined with the notion of hypercontractivity
\[ \|T_t:L_2\to L_{q(t)}\|\kl 1 \quad \mbox{for} \quad q(t)\gl 1+e^{ct}\pl.\]
Indeed, the standard procedure to show that the Laplace Beltrami operator on a compact Riemannian manifold satisfies $\la$-LSI is to derive hypercontractivity from heat kernel estimates, and then use the Rothaus Lemma to derive LSI from hypercontractivity. In this argument ergodicity of the underlying semigroup appears to be crucial.
A major breakthrough in this development is Talagrand's inequality which connects entropic quantities with a given distance. A triple $(\Om,\mu,d)$ given by a measure and a metric satisfies Talagrand's inequality if
\[ W_1(f\mu,\mu) \kl \sqrt{ \frac{2\Ent(f)}{\la}} \pl .\]
Here
\begin{equation}\label{wass} W_1(\nu,\mu)\lel \inf_{\pi} \int d(x,y)d\pi(x,y)
\lel \sup_{\|g\|_{Lip}\le 1} \big|\int g(x)d\nu(x)-\int g(x)d\mu(x)\big|
\end{equation}
is the Wasserstein 1-distance, and the last equality is a famous duality result by Kantorovich and Rubinstein, see \cite{KR,Vi}. The infimum is taken over all probability measures $\pi$ on $\Om\times \Om$ with marginals $\nu$ and $\mu$. Using the triangle inequality for the Wasserstein distance, it is easy to derive the \emph{geometric Talagrand inequality}
\[ d(A,B)\gl h \quad \Longrightarrow\quad \mu(A)\mu(B)\kl e^{-\frac{h^2}{C}} \pl. \]
If in addition $\mu(A)\gl 1/2$ and $B_h=\{x| d(x,A)\gl h\}$, this inequality implies exponential decay in $h$, i.e. the concentration of measure usually proved via isoperimetric inequalities. We refer to Tao's blog for applications of Talagrand's inequality \cite{TA}, in particular to eigenvalues of random matrices \cite{TV1,TV2}.
As pointed out by Otto and Villani \cite{OV}, Talagrand proved a much stronger inequality ($\la$-TA$_2$), namely
\begin{equation}
\label{Ta}\la W_2(f\mu,\mu) \kl \sqrt{2\Ent(f)}
\end{equation}
for $\Om=[0,1]^n$ and the Euclidean distance and for $\{0,1\}^n$ and the Hamming distance, with a constant $\la$ not depending on $n$. Here the $p$-Wasserstein distance is obtained by replacing the $1$-norm by the $p$-norm in $L_p(\pi)$ in the middle term of \eqref{wass}. Indeed, in the very insightful paper by Otto and Villani \cite{OV}, they point out that the correct way to understand Talagrand's inequality consists in pushing the semigroup into the state space of the underlying commutative $C^*$-algebra. Then Talagrand's concentration inequality can be reformulated as a convexity condition for a suitably defined sub-Riemannian metric. In that sense Otto and Villani reconnect to the geometric aspect of concentration inequalities. The key idea in the Otto-Villani approach is to define a sub-Riemannian metric such that the function given by \emph{relative entropy}
\[ D(\nu||\mu) \lel \int \log \frac{d\nu}{d\mu} d\nu \]
admits $T_t(\nu)$ as a path of steepest descent. Here $\frac{d\nu}{d\mu}$ is the Radon-Nikodym derivative.
A key tool in their analysis was to consider the modified version of the logarithmic Sobolev inequality, (in short $\la$-FLSI)
\begin{equation} \label{FLSI}
\la \Ent(f) \kl \int A(f)\log f d\mu \pl =:\pl \I_A(f) \end{equation}
and show that it implies $\la$-TA$_2$. The right hand side is known as Fisher information and turns out to be the energy functional for the relative entropy with respect to the sub-Riemannian metric.
In this paper we extend the theory of logarithmic Sobolev inequalities in \emph{two directions}, by including matrix valued functions and non-ergodic semigroups. The main road block, discovered in the quantum information theory literature, is that the Rothaus Lemma (\cite{Ro})
\begin{equation}\label{rot} \Big(\exists_{\la>0} \forall_{E(f)=0} \pl:\pl \la D(f^2|E(f^2))\kl \E(f)\Big)
\quad \stackrel{?}{\Longrightarrow} \quad \Big(\exists_{\tilde{\la}>0} \forall_f \pl:\pl \tilde{\la} D(f^2|E(f^2)) \kl \E(f)\Big)
\end{equation}
may fail for matrix-valued functions. The relevance of variations of the Rothaus Lemma is, however, very well-known in the commutative setting, see \cite{BaK}. We are not aware of any investigation of Talagrand's inequality in the commutative, non-ergodic setting. For selfadjoint semigroups the fixpoint algebra $N=\{x: \forall_t T_t(x)=x\}$ admits a normal conditional expectation $\E_{fix}:M\to N$. This remains true in the noncommutative setting, i.e.\! for a semigroup $(T_t)$ of (sub-)unital completely positive maps on a finite von Neumann algebra $M$ provided each $T_t$ is selfadjoint with respect to the inner product $(x,y)=\tau(x^*y)$ of a normal, faithful tracial state $\tau$. The reader less familiar with von Neumann algebras is welcome to think of $M=L_{\infty}(\Om,\mu;\Mz_m)$, the space of bounded random matrices equipped with $\tau(f)=\int_{\Om} \frac{tr(f(\om))}{m} d\mu(\om)$ such that $\tau(T_t(x^*)y)=\tau(x^*T_t(y))$.
The failure of \eqref{rot} forces us to introduce new tools, and study a new gradient condition. This allows us to show that Talagrand's inequality is still valid in the context of group representations. Our main examples are of the form $T_t=S_t\ten id_{\Mz_m}$, where $S_t$ is a nice ergodic semigroup. These examples are natural in the context of operator spaces (see \cite{Po} and \cite{ER} for more background), despite being obviously not ergodic. We say that a semigroup satisfies $\la$-FLSI (also called $\la$-MLSI but we want to stress the Fisher information aspect) if
\[ \la D(\rho||\E_{fix}(\rho))\kl \I_A(\rho) \lel \tau(A(\rho)\ln \rho) \pl .\]
The noncommutative Fisher information, introduced under the name entropy production by Spohn \cite{Spo},
is very well-known in the quantum information theory.
At the time of this writing it is not known whether $\la$-FLSI is stable under tensorization. However, tensorization is an important feature and allows one to deduce gaussian estimates from an elementary $2$-point inequality, see e.g. \cite{BCL}. Therefore, we introduce the \emph{complete logarithmic Sobolev inequality} (in short $\la$-CLSI) by requiring that $A\ten id_{\Mz_m}$ satisfies $\la$-FLSI for all $m\in \nz$. Using the data processing inequality it is easy to show that the CLSI is stable under tensorization, see section 7.1. Before this paper, all known examples with good tensorization properties could be deduced from one key example (see however \cite{DR}), due to \cite{BaRo}:
\begin{lemma}(Bardet/Rouz\'{e}) Let $E:M\to N$ be a conditional expectation. Then $T_t=e^{-t(I-E)}$ satisfies $1$-CLSI.
\end{lemma}
Indeed, for conditional expectations we have
\[ \I_{I-E}(\rho) \lel D(\rho ||E(\rho))+D(E(\rho)||\rho) \gl D(\rho||E(\rho)) \pl .\]
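For a concrete instance, take $M=\Mz_d$ with its normalized trace and let $E$ be the pinching onto diagonal matrices, a conditional expectation. The following numerical sketch (Python with numpy; all names are ours) checks the identity above together with the resulting $1$-CLSI bound for a random state:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def tau(x):                      # normalized trace
    return np.trace(x).real / d

def E(x):                        # pinching onto the diagonal
    return np.diag(np.diag(x))

def logm(x):                     # logarithm of a positive matrix via eigh
    w, v = np.linalg.eigh(x)
    return (v * np.log(w)) @ v.conj().T

def D(a, b):                     # relative entropy D(a||b) for the normalized trace
    return tau(a @ (logm(a) - logm(b)))

g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = g @ g.conj().T
rho /= tau(rho)                  # state: rho >= 0, tau(rho) = 1

fisher = tau((rho - E(rho)) @ logm(rho))  # Fisher information of I - E
print(abs(fisher - (D(rho, E(rho)) + D(E(rho), rho))))  # ~ 0: the identity
print(D(rho, E(rho)) <= fisher)                          # the 1-CLSI bound
```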
In this case stabilization is trivial. The middle term is the original symmetrized divergence introduced by Kullback and Leibler \cite{KL}, which is interesting from a historical point of view. Using the tensorization, one can now deduce that gaussian systems (and certain depolarizing channels) also satisfy CLSI, see also \cite{DR}. Our new tool to prove CLSI is based on the gradient form
\[ 2\Gamma(f,g) \lel A(f^*)g+f^*A(g)-A(f^*g) \pl .\]
We say that the generator $A$ satisfies $\la$-$\Gamma\E$ if
\[ \sum_{jk} \bar{\al}_j\la \Gamma_{I-\E_{fix}}(f_j,f_k)\al_k \kl \sum_{j,k}\bar{\al}_j \Gamma_{A}(f_j,f_k)\al_k \]
holds for all finite families $(f_k)$ and scalars $(\al_k)$.
The next lemma states the two new basic facts used in this paper.
\begin{lemma}\label{keyel} i) $\la$-$\Gamma\E$ implies $\la$-CLSI. ii) If $A$ and $B$ satisfy $\la$-CLSI, then $A\ten id+id\ten B$ satisfies $\la$-CLSI.
\end{lemma}
Our main contribution is to identify large classes of examples from representation theory satisfying $\Gamma\E$. For this we have to recall the definition of a H\"ormander system on a Riemannian manifold $(\M,g)$ given by vector fields $X=\{X_1,...,X_r\}$ such that the iterated commutators $[X_{i_1},[X_{i_2},\cdots]]$ generate the tangent space $T_p\M$ at every point $p\in \M$. Building on the famous heat kernel estimates from \cite{SR}, see also \cite{LZ}, and the work of Saloff-Coste on return time, we find entropic concentration inequalities for subordinated sub-Laplacians. Quite surprisingly, qualitative forms of non-locality turn out to be helpful in this context.
\begin{theorem} Let $X$ be a H\"ormander system on a compact Riemannian manifold, and the selfadjoint generator $\Delta_X=\sum_j \nabla_{X_j}^*\nabla_{X_j}$ the corresponding sub-Laplacian. Then $\Delta_X^{\theta}$ satisfies $\la$-$\Gamma\E$, and $\la$-CLSI for some constant $\la=\la(X,\theta)$.
\end{theorem}
It is widely open whether $\Delta_X$ itself satisfies CLSI, even when $\Delta_X$ is replaced by the Laplace-Beltrami operator on a compact Riemannian manifold. For $\Om=S^1$ the standard semigroup given by $A=-\frac{d^2}{dx^2}$ satisfies $1$-CLSI, however fails $\la$-$\Gamma\E$. For more information on Bakry-Emery theory for sub-Laplacians, we refer to the deep work of Baudoin and his coauthors \cite{B0,B1,B2,B3,B4,BOV}. Subordinated semigroups (in a slightly different meaning) have also been investigated in the gaussian setting, see \cite{Lind1,Lind2}. From a rough kernel perspective (see \cite{GF1,FP1,FP2}) it may appear less surprising that subordinated semigroups outperform their smooth counterparts.
In the context of group actions, we can transfer logarithmic Sobolev inequalities. Indeed, let $\al:G\to {\rm Aut}(M)$ be a trace preserving action on a finite von Neumann algebra $(M,\tau)$, i.e. \! $\al$ is a strongly continuous group homomorphism with values in the set of trace preserving automorphisms of $M$.
A semigroup $S_t:L_{\infty}(G)\to L_{\infty}(G)$ which is invariant under right translations is given by an integral operator of the form
\[ S_t(f)(g) \lel \int k_t(gh^{-1}) f(h) d\mu(h) \]
where $\mu$ is the Haar measure. We will assume that $G$ is compact and $\mu$ is a probability measure. Then we may define the \emph{transferred semigroup}
\[ T_t(x) \lel \int k_t(g) \al_g(x) d\mu(g) \pl .\]
For ergodic $S_t$ the fixpoint algebra of the transferred semigroup $T_t$ is then given by the fixpoint algebra of the action $N^G=\{x|\forall_{g} \al_g(x)=x\}$, which is generally not trivial.
\begin{theorem} Let $G$ be a compact group acting on a finite von Neumann algebra $(M,\tau)$. If the generator $A$ of $S_t=e^{-tA}$ satisfies $\la$-CLSI ($\la$-$\Gamma\E$), then $T_t$ satisfies $\la$-CLSI ($\la$-$\Gamma\E$).
\end{theorem}
For a compact Lie group $G$
a generating set $X=\{X_1,...,X_r\}$ of the Lie-algebra $\mathfrak{g}$ defines
a H\"ormander system given by the corresponding right translation invariant vector fields. Then we conclude that for \emph{any} group representation the semigroup $T_t^{\theta}$ transferred from $S_t=e^{-t\Delta_X^{\theta}}$ satisfies $\la$-$\Gamma\E$. Our investigation is motivated by quantum information theory and previous results. Starting with the seminal papers \cite{DaLi,OZ}, Temme and his coauthors
\cite{T1,T2,T3,T4,T5} made hypercontractivity in matrix algebras available in the ergodic setting, see \cite{JPPP,JPPP2,JPPPR,RiX} for results in group von Neumann algebras. Using the concrete description, the so-called Lindblad generators, we can prove the following density result:
\begin{theorem} The set of selfadjoint generators of semigroups in $\Mz_m$ satisfying $\Gamma\E$ and CLSI is dense.
\end{theorem}
Indeed, combining all the results from above, we can show that for such generators $A$ the subordinated $A^{\theta}$ satisfies $\la(\theta)$-$\Gamma\E$ for all $0<\theta<1$. Let us mention the deep work of Carlen-Maas \cite{CM}. They were able to translate the work of Otto-Villani \cite{OV} to the state space of matrices and identify a truly noncommutative Wasserstein $2$-distance $d_{A,2}(\rho,\si)$. They also showed (in the ergodic setting) that $\la$-FLSI implies
\[ d_{A,2}(\rho,E(\rho))\kl 2\sqrt{ \frac{D(\rho||E(\rho))}{\la}} \pl .\]
An analogue of an intrinsic Wasserstein distance has already been introduced in \cite{JZ,JRZ}:
\[ d_{\Gamma}(\rho,\si)
\lel \sup_{f=f^*, \Gamma_A(f,f)\le 1} |\tau(\rho f)-\tau(\si f)| \pl. \]
Based on \cite{JRS} we show that $d_{\Gamma}\kl 2\sqrt{2}d_{A,2}$, and hence we see that $\la$-FLSI does indeed imply a noncommutative geometric Talagrand inequality: Let $e_1$ and $e_2$ be projections in $M$ such that for some test function $f$ with $\Gamma_A(f,f)\le 1$ we have
\[ |\frac{\tau(e_1f)}{\tau(e_1)}-\frac{\tau(e_2f)}{\tau(e_2)}|\gl h \quad \Longrightarrow \quad \tau(e_1)\tau(e_2)\kl e^{-h^2/C}\pl ,\]
where $C$ only depends on the $\la$-FLSI constant of the generator $A$. Thus we have identified large classes of new examples which satisfy Talagrand's concentration inequality, not only for $T_t=e^{-tA}$, but also for the $n$-fold tensor product $(T_t^{\ten_n})$.
The paper is organized as follows: We discuss gradient forms, derivations and Fisher information in section 2. In section 3 we first consider kernel and decay time estimates in the ergodic, and then in the non-ergodic case. The latter analysis relies on the theory of mixed $L_p(L_q)$ spaces from \cite{JPar}, which has been recently rediscovered in \cite{BaRo}.
In section 4 we discuss group representations and the density result in section 5. Section 6 is devoted to geometric applications and deviation inequalities. In Section 7 we discuss examples and counterexamples. A chart of the different properties considered in this paper is given in the following diagram
\[ \begin{array}{ccccccccccc}
\mbox{$\la$-$\Gamma \E$} &\stackrel{Cor. \p 2.10}{\Rightarrow}&
\mbox{$\la$-CLSI}&\stackrel{}{\Rightarrow}& \mbox{$\la$-FLSI}&\stackrel{[CM]}{\Rightarrow}& \mbox{$\la$-TA$_2$} \\
\Downarrow_{Thm \p 2.13} & & \Downarrow_{7.1} & & & & \Downarrow_{Rem \p 6.10} \\
\mbox{return}
& &\mbox{$\la$-CLSI for $T_t^{\ten_n}$} & &
\mbox{quant. metric} &\stackrel{Prop. \p 6.14}{\Rightarrow}& \mbox{geom. Talag.}
\end{array} \]
Here the return time estimate of the form
$\|T_t(\rho)-E(\rho)\|_1\le e^{-\la t}\|\rho-E(\rho)\|_1$ is inspired by the work of Saloff-Coste \cite{sfc}.
Open problems will be mentioned at the end of section 7. In fact, we expect CLSI to hold for matrix valued functions with respect to smooth generators of semigroups. Due to space restrictions we ignore the deep and interesting connection to free Fisher information.
{\bf Acknowledgment:} The starting point of this paper is a conversation between I. Bardet and the second named author during the IHP program ``Analysis in Quantum Information'', Fall 2017. The authors are thankful for discussions with Ivan Bardet, David Stilk Fran\c{c}a, and Nilanjana Datta.
\section{Gradient forms and Fisher information}
\subsection{Modules and Gradient forms}
Let $(M,\tau)$ be a finite von Neumann algebra $M$ equipped with a normal faithful tracial state $\tau$. We denote the noncommutative $L_p$-spaces by $L_p(M,\tau)$ or $L_p(M)$ if the trace is clear from the context. Throughout the paper we consider that $\displaystyle T_t=e^{-tA}:M\to M$ is a strongly continuous semigroup of completely positive, unital and self-adjoint maps. Then $\tau(x^*T_t(y))=\tau(T_t(x)^*y)$ for all $x,y\in M$ and hence $T_t$ is also trace preserving. The generator $A$ is the (possibly unbounded) positive operator on $L_2(M,\tau)$ given by $Ax=\lim_{t\to 0} \frac{1}{t}(T_t(x)-x)$ (see \cite{CS} for more background.) We will assume that there exists a weakly dense $^*$-subalgebra $\mathcal{A}\subset M$ such that
\begin{enumerate}
\item[i)] $\A\subset dom(A)\cap \{ x | A(x)\in M\}$;
\item[ii)] $T_t(\A)\subset \A$ for all $t>0$.
\end{enumerate}
In most cases, it is enough to assume that $\A\subset {\rm dom}(A^{1/2})$ and the $\Gamma$-regularity from \cite{JRS}. The gradient form of $A$ is defined as
\[ \Gamma(x,y)(z) :\lel \frac{1}{2}\Big(\tau(A(x)^*yz)+ \tau(x^*A(y)z)-\tau(x^*yA(z))\Big) \pl\]
We say the generator $A$ satisfies $\Gamma$-regularity if its gradient form $\Gamma(x,y)\in L_1(M,\tau)$ for all $x,y\in {\rm dom}(A^{1/2})$.
\begin{theorem}\label{JRS1} Suppose $\Gamma(x,x)\in L_1(M,\tau)$ for all $x\in {\rm dom}(A^{1/2})$. Then there exists a finite von Neumann algebra $(\hat{M},\tau)$ containing $M$, and a self-adjoint derivation $\delta:{\rm dom}(A^{1/2})\to L_2(\hat{M})$ such that \[ \tau(\Gamma(x,y)z) \lel \tau(\delta(x)^*\delta(y)z) \pl.\]
Equivalently, $E_M(\delta(x)^*\delta(y))=\Gamma_A(x,y)$ where $E_M:\hat{M} \to M$ is the conditional expectation.
\end{theorem}
The $\Gamma$-regularity allows us to define the Hilbert $W^*$-module $\Om_{\Gamma}$ as the completion of ${{\rm dom}(A^{1/2})\ten M}$ with $\Gamma$-inner product
\[ \lan x_1\ten x_2,y_1\ten y_2\ran_{\Gamma}
\lel x_2^*\Gamma(x_1,y_1)y_2 \pl .\]
Note that in Theorem \ref{JRS1} the completion of $\hat{M}$ with respect to the $M$-valued inner product
\[ \langle x,y\rangle_{E_M} :\lel E_M(x^*y)\pl, \pl x,y\in \hat{M}\pl\]
also gives a $W^*$-module, which we denote as $L_\infty^c(M\subset \hat{M})$.
Recall that for a $W^*$-module $\Ha$ of $M$, the norm is given by its $M$-valued inner product $\lan \cdot, \cdot\ran_\Ha$ as follows,
\[ \|\xi\|_{\Ha}\lel \|\langle \xi,\xi\rangle_\Ha\|_{M}^{1/2} \pl .\]
Then it is easy to verify that the map
\[ \phi:\Om_{\Gamma}\to L_\infty^c(M\subset \hat{M})\pl, \pl \phi(x\ten y) \lel \delta(x)y \]
is an isometric right $M$-module map. Moreover, thanks to a theorem of \cite{Pasch} (see also \cite{JS}) the range $\phi (\Om_\Gamma)$ is $1$-complemented in $L_\infty^c(M\subset \hat{M})$.
\begin{rem}{\rm Our notation $\Om_\Gamma$ is motivated by the universal $1$-forms bimodule \[ \Omega^1 \A \lel \{x\ten y-1\ten xy\pl |\pl x,y \in \A\} \subset \A\ten \A \pl \]
(see \cite{NCG}). Indeed the map $\phi$ induces a representation of the universal derivation $\delta(x)=x\ten 1-1\ten x$.}
\end{rem}
We refer to \cite{JRS} for the proof of Theorem \ref{JRS1}. Here we discuss the following special case. Let $T:M\to M$ be a unital completely positive self-adjoint map. Recall that the $W^*$-module $M\ten_{T}M$ is given by the GNS-construction
\[ \langle x_1\ten x_2,y_1\ten y_2\rangle_T
\lel x_2^*T(x_1^*y_1)y_2 \pl .\]
It is shown in \cite{JRS} (see also \cite{Speich}) that there exists a finite von Neumann algebra $\hat{M}$ containing $M$ and a self-adjoint element $\xi\in L_2(\hat{M})$ of norm $1$ such that
\[ T(x)\lel E_M(\xi x\xi) \pl .\]
This gives an isometric $M$-module map $\phi_1:M\ten_{T}M \to L_\infty^c(M\subset \hat{M})$
\[ \phi_1(x\ten y) \lel x\xi y \pl \pl.\]
On the other hand, let $I:M\to M$ be the identity map. The map $A=I-T$ is a generator of a semigroup of completely positive maps (see \cite{Jesse} for a characterization of generators of trace preserving self-adjoint semigroups). Its gradient form is
\[\Gamma_{I-T}(x,y)=\frac{1}{2}(x^*y-T(x)^*y-x^*T(y)+T(x^*y))\pl.\]
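Since $T$ is unital and completely positive, the Kadison-Schwarz inequality $T(x)^*T(x)\kl T(x^*x)$ gives $2\Gamma_{I-T}(x,x)\gl (x-T(x))^*(x-T(x))\gl 0$. A quick numerical illustration (Python with numpy; our choice $T(x)=\frac12(x+UxU^*)$ with $U$ a Hermitian unitary is unital, trace preserving and self-adjoint):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
v = rng.standard_normal((d, 1)) + 1j * rng.standard_normal((d, 1))
v /= np.linalg.norm(v)
U = np.eye(d) - 2 * v @ v.conj().T      # Hermitian unitary (a reflection)

def T(x):                                # unital, trace preserving ucp map
    return (x + U @ x @ U.conj().T) / 2

def gamma(x):                            # Gamma_{I-T}(x, x)
    xs = x.conj().T
    return (xs @ x - T(x).conj().T @ x - xs @ T(x) + T(xs @ x)) / 2

mins = []
for _ in range(25):
    x = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    G = gamma(x)
    mins.append(np.linalg.eigvalsh((G + G.conj().T) / 2).min())
print(min(mins))   # >= 0 up to rounding: the gradient form is positive
```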
One has another isometric module map
\[ \phi_2:\Om_{\Gamma_{I-T}}\to M\ten_{T}M \pl ,\pl \phi_2(x\ten y) \lel x\ten y-1\ten T(x)y \pl .\]
Then for the special case $A=I-T$, one can choose the derivation as follows,
\[\delta:M\to \hat{M}\pl, \pl \delta(x)=\phi_1\circ \phi_2 (x\ten 1)= x\xi-\xi T(x)\pl.\]
In the following we will use the completely positive order in two ways. For two completely positive maps $T$ and $S$, we write $T\le_{cp}S$ if $S-T$ is completely positive. For two gradient forms $\Gamma,\Gamma'$,
we write $\Gamma\le_{cp}\Gamma'$ if
\[ [\sum_k \Gamma(x_{ik},x_{kj})]_{i,j}\kl [\sum_k \Gamma'(x_{ik},x_{kj})]_{i,j} \]
holds for all $M$-valued matrices $(x_{jk})$ in the domain of $\Gamma$.
\begin{lemma}\label{grad1}
i) Let $T:M \to M $ be a completely positive unital map. Then for any state $\rho $,
\[ \rho(\lan \xi,\xi\ran_{\Gamma_{I-T}})
\lel \inf_{c\in M} \rho(\lan \xi-1\ten c,\xi-1\ten c\rangle_{T}) \pl . \]
ii) Let $T_1,T_2: M \to M$ be two completely positive unital maps. Then $\la T_1\le_{cp} T_2$ implies $\la \Gamma_{I-T_1}\le_{cp} \Gamma_{I-T_2}$.
\end{lemma}
\begin{proof} We choose a module basis $\{\xi_i\}_{i\in I}$ of $M\ten_TM$ (see \cite{Pasch,JS}) and let $\xi_0=1\ten 1$. Then \[ \langle \xi_i,\xi_j \rangle_T \lel \delta_{ij}e_i \pl,\]
where $(e_i)_{i\in I} \subset M$ is a family of projections and $e_0=1$. Let $P: M\ten_{T}M\to 1\ten M$ be the orthogonal projection given by
\[ P(\xi) \lel P(\sum_i \al_i\xi_i) \lel \al_0\xi_0 \pl, \pl\pl \al_i\in M\]
Note that
\[ \langle 1\ten z,x\ten y\rangle_T
\lel z^*T(x)y \lel \langle 1\ten z,1\ten T(x)y\rangle \pl.\]
This implies that $P(x\ten y)= 1\ten T(x)y$ and hence for $\xi=\sum_i \al_i\xi_i$, we find that
\[\lan \xi,\xi\ran_{\Gamma_{I-T}}=\lan \phi_2(\xi),\phi_2(\xi)\ran_{T}=\lan\xi-P(\xi),\xi-P(\xi)\ran_{T}=\sum_{i\neq 0} \al_i^*e_i\al_i \pl .\]
On the other hand
\[ \lan\xi-1\ten c, \xi-1\ten c\ran_T
\lel (\al_0-c)^*(\al_0-c)+\sum_{i\neq 0} \al_i^*e_i\al_i\pl .\]
This exactly implies that for any $c\in M$,
\[\lan\xi,\xi\ran_{\Gamma_{I-T}}\le \lan\xi-1\ten c, \xi-1\ten c\ran_T\]
and clearly for any state $\rho$,
\[ \rho(\langle \xi,\xi\rangle_{\Gamma_{I-T}})
\lel \inf_{c\in M} \rho(\langle \xi-1\ten c, \xi-1\ten c\rangle_T) \pl \pl.\]
For ii), we find a $c\in M$ such that for any state $\rho$,
\begin{align*}
\rho(\langle \xi,\xi\rangle_{\Gamma_{I-T_2}})
&= \rho(\langle \xi-1\ten c,\xi-1\ten c\rangle_{T_2}) \\
&\gl \la \rho(\langle \xi-1\ten c,\xi-1\ten c\rangle_{T_1})
\gl \la \rho(\langle \xi,\xi\rangle_{\Gamma_{I-T_1}}) \pl.
\end{align*}
Here we used i) twice. The argument for arbitrary matrix levels is the same. \qd
\end{proof}
Our next observation is based on operator integral calculus (see \cite{Su} and the references therein for more information). Let $F:\rz \to \rz$ be a continuously differentiable function and $\delta$ be a derivation as in Theorem \ref{JRS1}. Then for a positive $\rho$, the action of $\delta$ on $F(\rho)$ is given by the following operator integral,
\begin{equation}\label{derv}
\delta(F(\rho)) \lel \int_{\rz_+}\int_{\rz_+}
\frac{F(s)-F(t)}{s-t} dE_s^{\rho}\delta(\rho)dE_t^{\rho} \pl .
\end{equation}
where $E^{\rho}((s,t])=1_{(s,t]}(\rho)$ is the spectral projection of $\rho$. Indeed, this is obvious for monomials
\[ \delta(\rho^n)
\lel \sum_{j=0}^{n-1} \rho^j\delta(\rho)\rho^{n-j-1}
\lel \int_{\rz_+}\int_{\rz_+} \frac{s^n-t^n}{s-t} dE_s^{\rho}\delta(\rho)dE_t^{\rho} \pl .\]
The convergence of \eqref{derv} in $L_2$ follows from the boundedness of the derivative $F'$ (and in $L_p$ from the theory of singular integrals, see again \cite{Su}). Let us introduce the \emph{double operator integral}
\[ J_F^{\rho}(y) \pl:=\pl \int_{\rz_+}\int_{\rz_+} \frac{F(s)-F(t)}{s-t} dE_s^{\rho}ydE_t^{\rho} \pl .\]
For $\rho=\sum_k \rho_k e_k$ with discrete spectrum, this simplifies to a Schur multiplier
\[J_F^{\rho}(y)\lel \sum_{k,l} \frac{F(\rho_k)-F(\rho_l)}{\rho_k-\rho_l} e_kye_l\pl.\]
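For $F(t)=t^p$ the Schur multiplier must reproduce the monomial expansion above with $\delta(\rho)$ replaced by an arbitrary $y$, which gives a quick numerical test of the double operator integral (Python with numpy; $\rho$ self-adjoint with almost surely distinct eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(2)
d, p = 5, 4
h = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = h + h.conj().T                     # self-adjoint
y = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

lam, V = np.linalg.eigh(rho)
s, t = lam[:, None], lam[None, :]
# divided differences (s^p - t^p)/(s - t), derivative p s^(p-1) on the diagonal
M = np.where(s == t, p * s ** (p - 1),
             (s ** p - t ** p) / np.where(s == t, 1.0, s - t))
J = V @ (M * (V.conj().T @ y @ V)) @ V.conj().T   # J_F^rho(y) as a Schur multiplier

direct = sum(np.linalg.matrix_power(rho, j) @ y @ np.linalg.matrix_power(rho, p - 1 - j)
             for j in range(p))
print(np.allclose(J, direct))
```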
\begin{lemma}\label{mm} Let $F:\rz \to \rz$ be a continuously differentiable monotone function and $\rho\in M$ be positive. Let $A$ and $B$ be two generators of semigroups on $M$ with corresponding derivations $\delta_A$ and $\delta_B$. Suppose their gradient forms satisfy
\[ \Gamma_A \pl \le_{cp} \pl \la\pl \Gamma_B \pl .\]
Then ${\rm dom}(B^{\frac{1}{2}})\subset {\rm dom}(A^{\frac{1}{2}})$ and for any $x\in {\rm dom}(B^{\frac{1}{2}})$,
\[ \la\, E_M(\delta_A(x)^*J_F^{\rho}(\delta_A(x)))
\kl E_M(\delta_B(x)^*J_F^{\rho}(\delta_B(x))) \pl .\]
\end{lemma}
\begin{proof} Let us first assume that $\rho=\sum_k \la_k e_k$ has discrete spectrum. Then for any $d\in M$,
\begin{align*}
\tau(E_M(\delta_A(x)^*J_F^{\rho}(\delta_A(x)))dd^*)
&= \sum_{k,l} \frac{F(\la_k)-F(\la_l)}{\la_k-\la_l}
\tau(\delta_A(x)^*e_k\delta_A(x)e_ldd^*)\\
&= \sum_{k,l} \frac{F(\la_k)-F(\la_l)}{\la_k-\la_l}
\|e_k\delta_A(x)e_ld\|_{L_2(\hat{M}_A)}^2 \pl .
\end{align*}
Recall that $\Om_{\Gamma_A}$ (resp. $\Om_{\Gamma_B}$) is a submodule of $L_\infty^c(M\subset \hat{M}_A)$ (resp. $L_\infty^c(M\subset \hat{M}_B)$) and hence there is an $M$-module projection $P_B$ onto $\Om_{\Gamma_B}$. Our assumption implies that the map
\[ \Phi: \Om_{\Gamma_B}\to L_\infty^c(M\subset \hat{M}_A)\pl, \pl \Phi(x\ten y)=\delta_A(x)y \]
is of norm at most $\la^{-1/2}$. Thus $\norm{\Phi \circ P_B:L_2(\hat{M}_B)\to L_\infty(\hat{M}_A)}{}$ is also at most $\la^{-1/2}$. Moreover, $\Phi$ is also a left $\A$-module map:
\begin{align*}
\Phi(a \delta_B(x) y)
&= \Phi(\delta_B(ax)y-\delta_B(a)xy)
\lel\delta_{A}(ax)y-\delta_A(a)xy \lel
a\delta_A(x)y \pl .
\end{align*}
Using strongly converging bounded nets from the weak$^*$-dense algebra $\A\subset \dom(A^{1/2})$, we deduce that $\Phi$ extends to an $M$-bimodule map, and hence for all $k,l$,
\[ \|e_k\delta_A(x)e_ld\|_{L_2(\hat{M}_A)} \kl \la^{-1/2} \|e_k\delta_B(x)e_ld\|_{L_2(\hat{M}_B)} \pl. \]
Since $F$ is increasing, $\frac{F(\la_k)-F(\la_l)}{\la_k-\la_l}$ is positive. Therefore we obtain
\[\la\,\tau(E_M(\delta_A(x)^*J_F^{\rho}(\delta_A(x)))dd^*)
\kl \tau(E_M(\delta_B(x)^*J_F^{\rho}(\delta_B(x)))dd^*) \]
for all $d\in M$. This implies the assertion for $\rho$ with discrete spectrum.
Let $\rho\in \M$ be a general positive element. Then we can approximate the divided difference $F^{[1]}(s,t)=\frac{F(s)-F(t)}{s-t}$ by the sequence
\[ F_n(s,t) \lel \frac{F(\frac{1}{n}\lfloor
ns\rfloor) -F(\frac{1}{n}\lfloor nt\rfloor)}{\frac{1}{n}\lfloor ns\rfloor-\frac{1}{n}\lfloor nt\rfloor} \pl,\]
and find $\displaystyle\lim_{n\to \infty} J_{F_n}^{\rho}(x)=J_{F}^{\rho}(x)$. \qd
\subsection{Fisher Information}
Recall that the Fisher information of a generator $A$ is defined as
\[ {\I}_A(x) \lel \tau(A(x)\ln(x)) \pl, \pl x\in \A \cap M_+\pl, \]
provided $A(x)\ln x\in L_1(M)$. Equivalently, one can define $\displaystyle {\I}_A(x)=\lim_{\eps\to 0}\tau(A(x)\ln(x+\eps 1))$.
In the quantum information theory literature $\mathcal{I}_A$ is also called \emph{entropy production} (see \cite{Spo}).
\begin{cor}\label{Fisher} Let $\Gamma_A,\Gamma_B$ be the gradient forms of two
semigroups on $M$. Suppose $\la\Gamma_A\le_{cp} \Gamma_B$. Then
\[ \la \mathcal{I}_A(x) \kl \mathcal{I}_B(x) \pl .\]
\end{cor}
\begin{proof} Let $x\in {\rm dom}(B^{1/2})\cap M$. Then by the Leibniz rule, \[\|B^{1/2}(x^*x)\|_2=\|\delta_B(x^*x)\|_2=\|\delta_B(x^*)x+x^*\delta_B(x)\|_2<\infty\pl,\] hence $x^*x$ is also in the domain of $B^{1/2}$. Thus we have enough positive elements in ${\rm dom}(B^{1/2})\cap M$. Take the function $F(t)=\ln t$. Then
\begin{align*}
\tau(B(x)\ln(x+\eps 1))
&= \tau(\delta_B(x)\delta_B(\ln(x+\eps 1))) \lel \tau(\delta_B(x)J_F^{x+\eps 1}(\delta_B(x))) \\
&\gl \la \tau(\delta_A(x)J_F^{x+\eps 1}(\delta_A(x))) \lel \la \tau(A(x)\ln(x+\eps 1)) \pl .
\end{align*}
The assertion follows from sending $\eps\to 0$.\qd
Let $N\subset M$ be a von Neumann subalgebra and $E_N$ be the conditional expectation (we write $E$ for $E_N$ if there is no ambiguity). We define the Fisher information for the subalgebra $N$ by means of the generator $I-E$,
\[ \I_N(\rho) \lel \I_{I-E}(\rho)= \tau\Big((\rho-E(\rho) ) \ln \rho \Big)\pl .\]
Recall that for two positive elements $\rho,\si\in M$, the relative entropy is
\[ D(\rho||\si) :\lel \begin{cases}
\tau(\rho \ln \rho)-\tau(\rho \ln \si), & \mbox{if } \rho\ll \si \pl, \\
+\infty, & \mbox{otherwise}.
\end{cases} \]
Equivalently one can define $D(\rho||\si)=\lim_{\delta\to 0}D(\rho||\si+\delta 1)$.
When $\tau(\rho)=\tau(\si)$, $D(\rho||\si)$ is always positive. The relative entropy with respect to $N$ is defined as
\[ D_N(\rho)\lel D(\rho||E(\rho))\lel \inf_{\si \in N, \tau(\si)=\tau(\rho)} D(\rho||\si) \pl .\]
See \cite{Marv,GJLR2} for more information on $D_N$ as an asymmetry measure. The following result is well-known (see \cite{Spo,BaRo}), but the simple proof is crucial for this paper.
\begin{lemma}\label{fis} The Fisher information satisfies
\[ \I_N(\rho) \lel D(\rho||E(\rho))+D(E(\rho)||\rho) \]
and hence $D_N\le \I_N$.
\end{lemma}
\begin{proof} We first note that
\begin{align*}
\I_{N}(\rho)&=\tau(\rho \ln \rho)-\tau(E(\rho)\ln \rho) \\
&=\tau(\rho \ln \rho)-\tau(\rho \ln E(\rho))+ \tau(E(\rho)\ln E(\rho))-\tau(E(\rho)\ln \rho) \\
&= D(\rho||E(\rho))+D(E(\rho)||\rho) \pl .
\end{align*}
The positivity of the relative entropy implies the assertion.\qd
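The identity of Lemma \ref{fis} can be checked numerically in the simplest commuting situation, where $M$ is a diagonal matrix algebra and $N=\mathbb{C}1$. The sketch below (Python/NumPy; all names are ours) is only an illustration of the algebraic cancellation in the proof.

```python
import numpy as np

def rel_ent(p, q):
    """Relative entropy tr(p ln p) - tr(p ln q) for commuting (diagonal) densities."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

# a diagonal density rho and the conditional expectation onto the scalars
rho = np.array([0.5, 0.3, 0.15, 0.05])
E_rho = np.full_like(rho, rho.sum() / len(rho))

# I_N(rho) = tau((rho - E(rho)) ln rho)
fisher = float(np.sum((rho - E_rho) * np.log(rho)))
assert np.isclose(fisher, rel_ent(rho, E_rho) + rel_ent(E_rho, rho))
```

The assertion holds exactly because the mixed terms $\tau(\rho\ln E(\rho))$ and $\tau(E(\rho)\ln E(\rho))$ coincide when $E(\rho)$ is a multiple of the identity.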
Now let $T_t=e^{-At}:M \to M$ be a semigroup of completely positive unital maps and
$N\subset M$ be the fixed-point algebra of $T_t$. It is easy to see that \[E\circ T_t=T_t\circ E=E\pl.\] The Fisher information $I_A$ appears as the negative derivative of the asymmetry measure $D_N$ under the semigroup $T_t$.
\begin{prop}\label{dmu} Suppose that
\[ \la D_N(\rho)\kl \I_A(\rho) \pl, \pl \forall \pl \rho\ge 0\pl.\]
Then
\[ D_N(T_t(\rho)) \kl e^{-\la t}D_N(\rho)\pl, \pl \forall \pl \rho\ge 0.\]
\end{prop}
\begin{proof} Take $f(t) = D_N(T_t(\rho))$. The idea is to differentiate
\begin{align*}
f(t) &= D_N(T_t(\rho))
= \tau(T_t(\rho)\ln T_t(\rho))-
\tau(E(T_t(\rho))\ln E(T_t(\rho))) \\
&=
\tau(T_t(\rho)\ln T_t(\rho))-\tau(E(\rho)\ln E(\rho)) \pl .
\end{align*}
It was proved in \cite{Su} that for a continuously differentiable function $F:\rz_+ \to \rz$,
\[ \lim_{s\to 0} \frac{F(\rho+s \si)-F(\rho)}{s}
\lel J^{\rho}_{F}(\si) \pl,\]
and hence for the trace
\[ \lim_{s\to 0} \frac{\tau(F(\rho+s \si))-\tau(F(\rho))}{s} \lel \tau(J^{\rho}_{F}(\si))\lel \tau(F'(\rho)\si) \pl . \]
Note that \[\lim_{s\to 0}\frac{1}{s}(T_{t+s}(\rho)-T_{t}(\rho))=-A(T_t(\rho))\pl.
\] Now we use the chain rule for $F(s)=s\ln s$ and $F'(s)=1+\ln s$ and deduce that
\begin{align*}
f'(t) &= \tau\Big(-A(T_t(\rho))\Big)+\tau\Big(-A(T_t(\rho))\ln\big(T_t(\rho)\big)\Big) \lel
-\mathcal{I}_A(T_t(\rho)) \pl .
\end{align*}
Here the first term vanishes because $A$ is self-adjoint and $A(1)=0$. Thus the assumption implies that \[f'(t)=-\mathcal{I}_A(T_t(\rho))\kl -\la D_N(T_t(\rho))=-\la f(t).\]
Then the assertion follows from Gr\"onwall's Lemma.
\qd
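The mechanism of Proposition \ref{dmu} can be illustrated in the commutative toy case $A=I-E_\tau$, where $\la$-FLSI with $\la=1$ follows from Lemma \ref{fis}. The sketch below (Python; all names are ours) checks the resulting decay $D_N(T_t(\rho))\le e^{-t}D_N(\rho)$ on a diagonal density.

```python
import numpy as np

def rel_ent(p, q):
    """Relative entropy for commuting (diagonal) densities."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

n = 4
rho = np.array([0.55, 0.25, 0.15, 0.05])                  # diagonal density, trace one
E = np.full(n, 1.0 / n)                                   # E_tau(rho) = tau(rho) 1
T = lambda t: np.exp(-t) * rho + (1.0 - np.exp(-t)) * E   # T_t = e^{-t(I - E_tau)}

for t in (0.5, 1.0, 2.0):
    assert rel_ent(T(t), E) <= np.exp(-t) * rel_ent(rho, E) + 1e-12
```

Here the inequality also follows directly from joint convexity of the relative entropy, since $T_t(\rho)$ is a convex combination of $\rho$ and $E(\rho)$.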
\begin{defi} The semigroup $T_t$ or the generator $A$ with fixed-point algebra $N$ is said to satisfy:
\begin{enumerate}
\item[a)] the gradient condition $\la$-$\Gamma \E$ for $\la>0$ if
\[ \la \Gamma_{I-E_{N}} \pl \le_{cp}\pl \Gamma_A \pl .\]
\item[b)] the generalized logarithmic Sobolev inequality $\la$-FLSI if
\[ \la D(\rho||E_N(\rho))\kl \I_A(\rho) \pl .\]
\item[c)] the \emph{complete} logarithmic Sobolev inequality $\la$-CLSI if $A\ten id_{\Mz_m}$ satisfies $\la$-FLSI for all $m\in \nz$.
\end{enumerate}
We also say that $T_t$ or $A$ has $\Gamma \E$ (resp. FLSI, CLSI) if it satisfies $\la$-$\Gamma\E$ (resp. $\la$-FLSI, $\la$-CLSI) for some $\la>0$.
\end{defi}
An immediate corollary is that $\la$-$\Gamma \E$ implies $\la$-CLSI.
\begin{cor}\label{kkeeyy} If the generator $A$ satisfies $\la$-$\Gamma \E$, then
$A\ten id_{M}$ has $\la$-$\Gamma\E$ for any finite von Neumann algebra $M$.
In particular, $\la$-$\Gamma \E$ implies $\la$-CLSI.
\end{cor}
\begin{proof} For the first, easy, assertion see \cite{JZ}; for the second, see Proposition \ref{dmu}.
\qd
In the rest of this section we discuss some interesting consequences of the $\Gamma \E$ condition. The first result is motivated by the original definition of relative entropy as a symmetric divergence.
\begin{prop}\label{dIA} Let $(T_t):M\to M$ be a semigroup of completely positive unital self-adjoint maps with fixed-point algebra $N\subset M$. Then
\[ \la \mathcal{I}_N(\rho)\kl \mathcal{I}_A(\rho) \]
implies
\[ \mathcal{I}_N(T_t(\rho)) \kl e^{-\la t}\mathcal{I}_N(\rho) \pl .\]
\end{prop}
\begin{proof} Let us consider the function
\begin{align*} f(t) \lel I_N(T_t(\rho)) &\lel D(T_t(\rho)||E(T_t(\rho)))+D(E(T_t(\rho))||T_t(\rho))\\ &\lel D(T_t(\rho)||E(\rho))+D(E(\rho)||T_t(\rho))\pl. \end{align*}
We have seen in Proposition \ref{dmu} that the derivative of the first term is $-\mathcal{I}_A(T_t(\rho))$.
This leads us to differentiate $h(t)=\tau(E(\rho)\ln T_t(\rho))$. Let us recall that for $F(x)=\ln x$ the double operator integral $J^{\rho}_{\ln}$ has the following explicit form (see \cite{CM}):
\[ J^{\rho}_{\ln}(y) \lel \int_0^{\infty}(s+\rho)^{-1}y(s+\rho)^{-1} ds \pl .\]
Write $\rho_t=T_t(\rho)$. We find that
\begin{align*}
h'(t) &= \tau(E(\rho)J^{T_t(\rho)}_{\ln}(-AT_t(\rho)))\\
&\lel - \int_0^\infty \tau((s+\rho_t)^{-1}E(\rho)(s+\rho_t)^{-1}AT_t(\rho))ds \\
&= - \int_0^\infty \tau(\delta((s+\rho_t)^{-1}E(\rho)(s+\rho_t)^{-1})
\delta(\rho_t)) ds \pl ,
\end{align*}
where $\delta$ is the derivation of $A$. We use the Leibniz rule and deduce from $\delta(E(\rho))=0$ that
\begin{align*}
\delta\Big((s+\rho)^{-1}E(\rho)(s+\rho)^{-1}\Big)
&= \delta((s+\rho)^{-1})E(\rho)(s+\rho)^{-1}+ (s+\rho)^{-1}E(\rho)\delta((s+\rho)^{-1})
\end{align*}
Recall that for a commutator,
\[ [a^{-1},\xi] \lel - a^{-1}[a,\xi]a^{-1} \pl .\]
It is proved in \cite{JRS} that a derivation as in Theorem \ref{JRS} is an ultraproduct limit of commutators, and hence
\[ \delta((s+\rho)^{-1}) \lel -(s+\rho)^{-1}\delta(\rho)(s+\rho)^{-1} \pl .\]
Then the tracial property implies that \begin{align*}
&-\tau\Big(\delta\big((s+\rho_t)^{-1}E(\rho)(s+\rho_t)^{-1}\big)
\delta(\rho_t)\Big)
\\
&= -\tau\Big(\delta\big((s+\rho_t)^{-1}\big)E(\rho)(s+\rho_t)^{-1}
\delta(\rho_t)\Big) -\tau\Big((s+\rho_t)^{-1}E(\rho)\delta\big((s+\rho_t)^{-1}\big)
\delta(\rho_t)\Big)
\\
& =
\tau\Big((s+\rho_t)^{-1}\delta(\rho_t)(s+\rho_t)^{-1}
E(\rho)(s+\rho_t)^{-1}\delta(\rho_t)\Big) \\
&\quad + \tau\Big((s+\rho_t)^{-1}
E(\rho) (s+\rho_t)^{-1}\delta(\rho_t)(s+\rho_t)^{-1}\delta(\rho_t)\Big)\\
&=
2 \tau\Big(E(\rho)(s+\rho_t)^{-1}\delta(\rho_t)(s+\rho_t)^{-1}\delta(\rho_t)(s+\rho_t)^{-1}\Big) \gl 0 \pl .
\end{align*}
We deduce that $h'(t)\gl 0$ and hence
\begin{align*}
\frac{d}{dt} D(E(\rho_t)||\rho_t)
&= \frac{d}{dt} \Big( \tau(E(\rho_t)\ln E(\rho_t))-\tau(E(\rho_t)\ln\rho_t)\Big)
\lel -h'(t) \kl 0 \pl
\end{align*}
because $E(\rho_t)=E(\rho)$ is a constant function. Therefore,
\[ \frac{d}{dt}\mathcal{I}_N(\rho_t) \kl -\mathcal{I}_A(\rho_t) \kl -\la \mathcal{I}_N(\rho_t) \pl .\]
We conclude that $f$ satisfies $f'(t)\le -\la f(t)$ and hence $f(t)\le e^{-\la t}f(0)$.
\qd
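The resolvent formula for $J^{\rho}_{\ln}$ used above can be verified scalar-by-scalar on the spectrum: for $a,b>0$ one has $\int_0^\infty (s+a)^{-1}(s+b)^{-1}\,ds=\frac{\ln a-\ln b}{a-b}$, the divided difference of $\ln$. A short numerical check (Python/SciPy; our own sketch):

```python
import numpy as np
from scipy.integrate import quad

# Divided difference of ln versus the resolvent integral on the spectrum:
# int_0^inf ds / ((s+a)(s+b)) = (ln a - ln b) / (a - b) for a, b > 0.
a, b = 3.0, 0.7
val, _ = quad(lambda s: 1.0 / ((s + a) * (s + b)), 0, np.inf)
assert abs(val - (np.log(a) - np.log(b)) / (a - b)) < 1e-8
```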
Another application of derivation calculus is that $\Gamma \E$ gives exponential decay of the $L_p$-distance for all $1\le p\le \infty$. Let us start with a lemma.
\begin{lemma}\label{pnm} Let $\la \Gamma_A\le_{cp} \Gamma_B$ and $N\subset M$ be the fixed-point subalgebra of both semigroups $e^{-tA}$ and $e^{-tB}$. Let $1< p<\infty$.
Then for $x\in M$ self-adjoint, the functions
\[ f_A(t) \lel \|e^{-tA}(x)-E(x)\|_p^p \quad,\quad
f_B(t)\lel \|e^{-tB}(x)- E(x)\|_p^p \]
satisfy $-\la f_A'(t)\le -f_B'(t)$ for all $t\ge 0$.
\end{lemma}
\begin{proof} Let $x\in M$ be self-adjoint. Then $a=x-E(x)$ is again self-adjoint. We use the notation $a_+$ and $a_-$ for the positive and negative part of $a$. Recall that the spectral projections of $a_+$ and $a_-$ are mutually disjoint and commute with $a$. Thus $|a|^p=a_+^p+a_-^p$. Note that
\[f_A=\|e^{-tA}(x)-E(x)\|_p^p=\|e^{-tA}(x-E(x))\|_p^p=\|e^{-tA}(a)\|_p^p\]
Differentiating $f_A$ at $t=0$, we obtain that
\begin{align*}
f_A'(0) &= -p\tau\big((a_+^{p-1}-a_-^{p-1})A(a)\big) \\
&= -p\Big(\tau(a_+^{p-1}A(a_+))-
\tau(a_+^{p-1}A(a_-))-\tau(a_-^{p-1}A(a_+))
+\tau(a_-^{p-1}A(a_-))\Big) \pl .
\end{align*}
Let $\delta_A$ be the derivation of $A$. Write $a_-=(\sqrt{a_-})^2=b^2$. Then \eqref{derv} implies that
\begin{align*}
\tau(\delta(a_+^{p-1})\delta(b^2))
&= \int_{\rz_+\times \rz_+}\int_{\rz_+\times \rz_+}
\frac{s^{p-1}-t^{p-1}}{s-t} \frac{r^2-v^2}{r-v} \tau(dE_s\delta(a_+)dE_tdF_r\delta(b)dF_v)
\end{align*}
where $E_s$ (resp. $F_r$) are spectral projections of $a_+$ (resp. $\sqrt{a_-}$). Because $E_s$ and $F_r$ are disjoint, we obtain $\tau(a_+^{p-1}A(a_-))=0$. The same argument applies to $\tau(a_-^{p-1}A(a_+))$.
By Lemma \ref{mm},
$\la E(\delta_A(a_+^{p-1})\delta_A(a_+))
\kl E(\delta_B(a_+^{p-1})\delta_B(a_+))$, and \[ \la E(\delta_A(a_-^{p-1})\delta_A(a_-))
\kl E(\delta_B(a_-^{p-1})\delta_B(a_-)) \]
This implies
\begin{align*}
-\la f_A'(0)
&= \la p\Big(\tau(\delta_A(a_+^{p-1})\delta_A(a_+))+
\tau(\delta_A(a_-^{p-1})\delta_A(a_-))\Big) \\
&\le p\Big(\tau(\delta_B(a_+^{p-1})\delta_B(a_+))+
\tau(\delta_B(a_-^{p-1})\delta_B(a_-))\Big)
\lel - f_B'(0) \pl .
\end{align*}
Replacing $x$ by $e^{-tA}(x)$, we obtain that $-\la f_A'(t)\le -f_B'(t)$ for all $t\ge 0$.
\qd
\begin{theorem} Let $T_t=e^{-tA}$ be a semigroup whose generator satisfies $\la$-$\Gamma\E$. Then for all $1\le p\le \infty$,
\[ \|T_t(x)-E(x)\|_p \kl e^{-\la t}\|x-E(x)\|_p \pl .\]
\end{theorem}
\begin{proof} Let us first assume that $x=T_s(y)$ for some $y$ so that $x$ belongs to the domain of $A$ and $\sqrt{A}$. Then we note that
\[ T_t((I-E)(x)) \lel T_t(x)-E(x) \pl. \]
Write $a=x-E(x)$ and $S_t=e^{-t(I-E)}$. According to Lemma \ref{pnm} we have that
\[ - \la\frac{d}{dt}\|S_t(a)\|_p^p
\kl - \frac{d}{dt}\|T_t(a)\|_p^p \pl.\] However $E_{N}(a)=0$ and hence $S_t(a)=e^{-t}a$. Then
\[ - \frac{d}{dt}\|S_t(a)\|_p^p \lel pe^{-tp}\|a\|_p^p \pl .\]
We apply Lemma \ref{pnm} and deduce that
\[ \la p f_A(0) \lel
\la p \|a\|_p^p \lel
-\la f_{I-E}'(0) \kl - f_A'(0) \pl .\]
Repeat the argument for $a_t=T_t(a)$ and deduce
\[ \la p f_A(t) \kl - f_A'(t) \pl .\]
This implies
$f_A(t)\kl e^{-\la p t}f_A(0)$. Taking
the $p$-root we obtain that
\[ \|T_t(a)\|_p \kl e^{-\la t}\|a\|_p \pl .\]
For general self-adjoint $x$, the assertion follows from the approximation $x=\lim_s T_s(x)$. By considering the $2\times 2$ matrix $\kla \begin{array}{cc} 0&x\\x^*&0\end{array}\mer $, we deduce the assertion for all $x$. The cases $p=1$ and $p=\infty$ are obtained by passing to the limit for $p\to 1$ or $p\to \infty$. \qd
The next corollary studies the $\Gamma \E$ condition under tensor product.
\begin{cor} Let $T_t^j:M_j\to M_j$ be a family of semigroups with fixed-point subalgebras $N_j\subset M_j$. Then the tensor product semigroup $T_t=T_t^1\ten T_t^2\ten \cdots \ten T_t^n$ has the fixed-point algebra $N=\ten_{j=1}^n N_j$.
Suppose for each $j$, $T_t^j$ satisfies $\la_j$-$\Gamma$E. Then
\[ \|(T_t-E_{N})(x)\|_p \kl 2\Big(\sum_{j=1}^n e^{-t\la_j}\Big) \|x\|_p \pl .\]
\end{cor}
\begin{proof} Let us consider a two-fold tensor product $S_t=T_t^1\ten T_t^2$. Then
\begin{align*}
S_t-E_{N_1}\ten E_{N_2} &= S_t(id-E_{N_1}\ten E_{N_2})
\lel S_t\Big((id-E_{N_1})\ten id_{M_2}+E_{N_1}\ten (id_{M_2}-E_{N_2})\Big) \\
&= (T_{t}^1-E_{N_1})\ten T_t^2+ E_{N_1}\ten (T_{t}^2-E_{N_2}) \pl .
\end{align*}
Since $T_t^2$ and $E_{N_1}$ are completely contractive on $L_p$ spaces, we deduce that
\begin{align*}
\|(S_t-E_{N_1}\ten E_{N_2})(x)\|_p
& \kl \|(T_t^1-E_{N_1})\ten T_t^2(x)\|_p+ \|id\ten (T_t^2-E_{N_2})(x)\|_p \\
&\le
2 e^{-\la_1t} \|x\|_p+ \|id\ten (T_t^2-E_{N_2})(x)\|_p
\\
&\le 2(e^{-\la_1t}+e^{-\la_2t})\|x\|_p \pl .
\end{align*}
For the $n$-fold tensor product, we may use induction. \qd
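The telescoping identity in the proof is purely algebraic, and one can test it on classical (commutative) depolarizing semigroups, where $T_t=e^{-t}\,id+(1-e^{-t})E$ acts on probability vectors. The sketch below (Python/NumPy; all names are ours) verifies it for a two-fold product.

```python
import numpy as np

def depol(n, t):
    """Classical depolarizing semigroup e^{-t(I-E)} and its fixed-point projection E."""
    E = np.full((n, n), 1.0 / n)
    T = np.exp(-t) * np.eye(n) + (1.0 - np.exp(-t)) * E
    return T, E

t = 0.7
T1, E1 = depol(2, t)
T2, E2 = depol(3, t)

# S_t - E_1 (x) E_2 = (T_t^1 - E_1) (x) T_t^2 + E_1 (x) (T_t^2 - E_2)
lhs = np.kron(T1, T2) - np.kron(E1, E2)
rhs = np.kron(T1 - E1, T2) + np.kron(E1, T2 - E2)
assert np.allclose(lhs, rhs)
```

The identity only uses $E_1T_t^1=E_1$ and bilinearity of the tensor product, which is why the argument extends to $n$ factors by induction.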
\section{Kernel estimates and module maps}
\subsection{Kernels on noncommutative spaces} In this part we derive kernel estimates for ergodic and non-ergodic semigroups and their matrix-valued extensions. Let $N,M$ be two von Neumann algebras and $N_*$ be the predual of $N$. The kernels of maps between noncommutative measure spaces are given by the following theorem due to Effros and Ruan \cite{ER}:
\begin{equation}\label{ER}
CB(N_*,M) \lel N\bar{\ten}M
\end{equation}
which states that the space of completely bounded maps $CB(N_*,M)$ is completely isometrically isomorphic to the von Neumann algebra tensor product $N\bar{\ten}M$.
This identity is valid for arbitrary von Neumann algebras $N,M$. Let us now assume that $N$ is semifinite with trace $tr$. The linear duality bracket $\langle x,y\rangle \lel tr(xy)$
gives a completely isometric pairing between $L_1(N,tr)$ and $N^{op}$. More precisely, for every kernel $K\in N^{op}\bar{\ten}M$ the linear map
\[ T_K(x) \lel (tr\ten id)(K(x\ten 1)) \]
satisfies $\|K\|_{\min}=\|T_K:L_1(N,tr)\to M\|_{cb}$. Let us pause for a moment and consider $M=\mm_d$ and $N=\mm_k$. For a linear map $T:S_1^k\to \mm_d$ the Choi matrix is given by
\[ \chi_T \lel \sum_{rs} |r\ran \lan s| \ten T(|r\ran \lan s|) \in \mm_k\ten \mm_d \pl .\]
The map $\phi(a)=a^{t}$ given by the transpose is a $^*$-homomorphism between $\mm_k$ and $\mm_k^{op}$. Therefore we should consider
\[ K_T \lel \sum_{rs} |s\ran\lan r| \ten T(|r\ran\lan s|)\in \mm_k^{op}\ten \mm_d \]
and find
\[ T_{K_T}(|r\ran\lan s|)
\lel (tr\ten id)(K_T(|r\ran\lan s|\ten 1))
\lel \sum_{tv} tr(|t\ran\lan v||r\ran \lan s|)\pl T(|v\ran\lan t|) \lel T(|r\ran\lan s|) \pl .\]
Equivalently this shows that our description via kernels in $N^{op}\bar{\ten}M$ is compatible with the standard choice of a Choi matrix, see also \cite{Pau}.
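In finite dimensions this compatibility can be checked directly. The sketch below (Python/NumPy; our own toy example with a completely positive map given by Kraus operators) builds $K_T$ and recovers $T$ from $T_{K_T}(x)=(tr\ten id)(K_T(x\ten 1))$.

```python
import numpy as np

rng = np.random.default_rng(1)
k = d = 2
kraus = [rng.standard_normal((d, k)) + 1j * rng.standard_normal((d, k))
         for _ in range(2)]
T = lambda x: sum(A @ x @ A.conj().T for A in kraus)   # a completely positive map

# Kernel K_T = sum_{r,s} |s><r| tensor T(|r><s|)
K = np.zeros((k * d, k * d), dtype=complex)
for r in range(k):
    for s in range(k):
        E_sr = np.zeros((k, k)); E_sr[s, r] = 1.0
        E_rs = np.zeros((k, k)); E_rs[r, s] = 1.0
        K += np.kron(E_sr, T(E_rs))

def T_K(x):
    """(tr tensor id)(K (x tensor 1)): trace out the first tensor factor."""
    M = K @ np.kron(x, np.eye(d))
    return sum(M[r * d:(r + 1) * d, r * d:(r + 1) * d] for r in range(k))

x = rng.standard_normal((k, k))
assert np.allclose(T_K(x), T(x))
```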
\subsection{Saloff-Coste's estimates}
After these preliminary observations, we now assume that $T_t$ is a semigroup of (sub-)unital completely positive, selfadjoint maps on a finite von Neumann algebra $M$ so that $T_t:L_1(M)\to M$ are completely bounded. According to the previous section, the kernels of $T_t$ are given by positive elements $K_t\in M^{op}\bar{\ten}M$. Let $N\subset M$ be the fixed-point subalgebra of the semigroup $T_t$ and $E:M\to N$ be the conditional expectation. Recall that $T_t\circ E\lel E\circ T_t \lel E $ by the self-adjointness on $L_2(M)$.
\begin{lemma}\label{cbc} Let $(T_t):\M\to \M$ be a semigroup of self-adjoint $^*$-preserving maps. Then for any $t\ge 0$,
\begin{enumerate}
\item[i)] $\|T_{2t}:L_1(M)\to L_{\infty}(M)\|=\|T_t:L_2(M)\to L_{\infty}(M)\|^2$.
\item[ii)]$\|T_{2t}-E_\N:L_1(M)\to L_{\infty}(M)\|=\|(T_t-E):L_2(M)\to L_{\infty}(M)\|^2$.
\end{enumerate}
The same estimates also hold for the $cb$-norm instead of the operator norm.
\end{lemma}
\begin{proof} We start by recalling a general fact. Let $v:X\to H$ be a linear map from a Banach space $X$ to a Hilbert space $H$. By $H^*\cong \bar{H}$, we have for any $x,y \in X$,
\[ \lan v(y),v(x)\ran_H =\lan y, \bar{v}^*v(x)\ran_{(X,\bar{X}^*)} \pl.\]
Here $\bar{v}^*:\bar{H}\to \bar{X}^*$ is the conjugate adjoint of $v$ and the right-hand side is the sesquilinear bracket between $X$ and its conjugate dual $\bar{X}^*$. Then we have
\begin{equation}\label{vv}
\|v:X\to H\|^2 \lel \sup_{\|x\|\le 1} \lan v(x),v(x)\ran
\lel \sup_{\|x\|,\|y\|\le 1} |\lan y, \bar{v}^*v(x)\ran|
\lel \|\bar{v}^*v:X\to \bar{X}^*\| \pl .\end{equation}
In our situation, we use $X=L_1(M)$ and $v=T_t:X\to L_2(M)$. Note that
\[ \lan T_t(x),T_t(y)\ran \lel \tau(T_t(x)^*T_t(y))
\lel (x,T_{2t}(y))\pl, \]
and the anti-linear bracket \[ (x,y) \lel \tau(x^*y) \] gives a complete isometry between $M$ and $\overline{L_1(M)}^*$. Therefore,
\[ \|T_{2t}:L_1(M)\to L_{\infty}(M)\|
\lel \|T_t:L_1(M)\to L_2(M)\|^2 \pl .\]
Similarly, we deduce ii) from
\[ (T_t-E)(T_t-E) \lel T_{2t}-ET_t-T_tE+E \lel T_{2t}-E \pl.\]
For the cb-norm estimate we take $X=S_2(L_1(M))$ from \cite{Po} (see also Section \ref{ner}). For the Schatten $2$-space $S_2$, the trace bracket $\lan x,y\ran =tr(x^*y)$ identifies $S_2^*$ with $\bar{S}_2$. Thus
\[ \|id\ten T_t:S_2(L_1(M))\to S_2(L_2(M))\|^2 \lel \|id\ten T_{2t}:S_2(L_1(M))\to S_2(L_{\infty}(M))\| \]
follows again from the general principle. The same argument applies to $(T_t-E)^2=T_{2t}-E$. \qd
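For symmetric Markov matrices with respect to the normalized trace, the identity of Lemma \ref{cbc} i) reduces to the fact that the maximal entry of the positive semidefinite matrix $T_tT_t$ sits on its diagonal. A numerical sketch (Python/SciPy; all names are ours):

```python
import numpy as np
from scipy.linalg import expm

n = 5
I = np.eye(n)
W = 0.5 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))  # symmetric random-walk step
T = expm(-1.0 * (I - W))   # T_t at t = 1: self-adjoint, unital, positive kernel

# With tau the normalized trace: ||T_{2t}: L1 -> Linf|| = n * max_{ij} (T T)_{ij}
# and ||T_t: L2 -> Linf||^2 = n * max_i (T T)_{ii}; both sides agree.
assert np.isclose(n * np.max(T @ T), n * np.max(np.diag(T @ T)))
```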
The following observation is essentially due to Saloff-Coste (\cite{sfc}).
\begin{prop}\label{sfc} Let $(T_t)$ be a semigroup of self-adjoint and $^*$-preserving maps such that
\begin{enumerate}
\item[i)] $\|T_t:L_1(M)\to L_{\infty}(M)\|_{cb}\kl c t^{-d/2}$ for $c>0$ and $0< t\le 1$;
\item[ii)] the self-adjoint positive generator $A$ satisfies
\[ \|A^{-1}(I-E):L_2(M)\to L_2(M) \|_{cb}\kl \la^{-1} \pl .\]
\end{enumerate}
Then
\[ \|T_t-E:L_1\to L_{\infty}\|_{cb}
\kl \begin{cases} 2c t^{-d/2} &0\le t\le 1\\
C(d,\la) e^{-\la t} & 1\le t< \infty \pl .
\end{cases} \]
where $C(d,\la)$ is a constant depending only on $d$ and $\la$.
\end{prop}
\begin{proof} First, note that $T_t(I-E) = T_t-E$ and $\|I-E: L_{\infty}(M)\to L_{\infty}(M)\|_{cb}\kl 2$. Then the estimate for $t\le 1$ follows from assumption i). For $t\gl 1/2$, functional calculus and assumption ii) give
\[ \|T_{t-1/4}(I-E)\|_{cb}\kl e^{-\la(t-1/4)} \pl .\]
Thus we obtain
\begin{align*}
\|(T_t-E):L_1\to L_2\|_{cb} &\kl
\|T_{t-1/4}(I-E):L_2(M)\to L_2(M)\|_{cb} \|T_{1/4}:L_1(M)\to L_2(M)\|_{cb} \\
& \kl e^{-\la(t-1/4)} \|T_{1/2}:L_1(M)\to L_\infty(M)\|^{1/2}_{cb} \kl 2^{d/4}c^{1/2} e^{\la/4} e^{-\la t} \pl .
\end{align*}
Applying Lemma \ref{cbc} yields the assertion for $t'=2t\gl 1$.
\qd
\subsection{From ultracontractivity to gradient estimates}In this part we apply kernel estimates to the functional calculus of the generator $A$ and the semigroup $T_t$. Recall that $(I-T_t)$ is the generator of a semigroup. For a positive function $F$, we can define a new generator \[ \Phi_F(A) \lel \int_0^{\infty} (I-T_t) F(t) \frac{dt}{t} \pl,\] provided the integral is well-defined. Then the gradient form of $\Phi_F(A)$ is given by
\[ \Gamma_{\Phi_F(A)}(x,y) \lel \int_0^{\infty} \Gamma_{I-T_t}(x,y) F(t) \frac{dt}{t} \pl .\]
We also define the modified Laplace transform
\[ \phi_F(\la) \lel \int_0^{\infty} (1-e^{-t\la}) F(t) \frac{dt}{t} \pl. \]
For positive $F$ we may use the integrability (I), quasi-monotonicity (QM), or the well-known $(\Delta_2)$ conditions:
\begin{enumerate}
\item[(I)] $C_F:=\int \min(1,t) F(t) \frac{dt}{t}<\infty$;
\item[(QM)] For some $0<\mu<1$, there exists $C_\mu>0$ such that $F(\mu t)\kl C_\mu F(t)$ for all $t>0$.
\item[($\Delta_2$)] There exists $0<\al<1$, $t_{\al}>0$, $c_\al>0$ such that $F(\mu t) \kl c_{\al} \mu^{-\al} F(t)$ for $t_\al\le \mu t\le t$.
\end{enumerate}
Since $1-e^{-\la t}\sim \min(1,\la t)\kl (1+\la) \min(1,t)$ we deduce that
\[ \phi_F(\la) \kl C_F (1+\la) \]
and hence
\[ \Phi_F(A)\kl C_F(I+A) \pl.\]
Then $\Phi_F(A)$ is a closable operator well-defined on the domain of $A$, and hence according to our assumptions also defined on the dense subalgebra $\A$.
\begin{rem} {\rm Our calculus is closely related to the theory of symmetric conditionally negative definite functions on $\rz$, which can be represented as
\[ \psi_G(\la) \lel \int_{\rz} (1-\cos(s\la)) G(s) \frac{ds}{s} \pl ,\]
where $G$ is a positive function such that \[\displaystyle \int_{\rz} \min(1,s^2)G(s)\frac{ds}{s}=4\int_0^{\infty} \min(1,t)G(\sqrt{t}) \frac{dt}{t}<\infty\pl.\] Let $g$ be a centered Gaussian random variable with variance $2$. Then we obtain a randomized new conditionally negative definite function
\[ \tilde{\psi}_G(\la) \lel \ez \psi_G(g\la) \lel \int_{\rz} (1-e^{-s^2\la^2}) G(s) \frac{ds}{s}
\lel 2\int_0^\infty (1-e^{-t\la^2}) G(\sqrt{t}) \frac{dt}{t} \pl .\]
Thus for any such function $\psi_G$, $\phi(\la)=\tilde{\psi}_G(\sqrt{\la})$ gives a function in our generator calculus.
}\end{rem}
In particular, all subordinated generators are examples of our calculus.
\begin{exam}\label{aal} {\rm Let $0<\al<1$ and $F_{\al}(t)=c(\al) t^{-\al}$. Then \[ \phi_{\al}(\la) \lel c(\al) \int_0^{\infty} (1-e^{-\la t}) t^{-\al} \frac{dt}{t} \lel \la^{\al} c(\al) \int_0^{\infty} (1-e^{-s}) s^{-\al} \frac{ds}{s} \lel \la^{\al}\]
holds for a suitable choice of the normalization $c(\al)$. Here $F_{\al}$ actually corresponds to the function $\psi(x)=x^{2\al}$. It is clear that $F_\al$ satisfies the conditions (I) and (QM). We refer to \cite{TM} for monotonicity results overlapping with our approach.}
\end{exam}
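Integration by parts shows that the normalization in Example \ref{aal} is $c(\al)=\al/\Gamma(1-\al)$; a quick numerical confirmation (Python/SciPy; our own sketch):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# c(al) = al / Gamma(1 - al) gives  c(al) * int_0^inf (1 - e^{-la t}) t^{-al} dt/t = la^al
al, la = 0.5, 2.0
c = al / gamma(1.0 - al)
val, _ = quad(lambda t: (1.0 - np.exp(-la * t)) * t ** (-al - 1.0), 0, np.inf)
assert abs(c * val - la ** al) < 1e-5
```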
Let us now fix an $F$ satisfying the conditions (I) and (QM). A key technical tool for our estimates is the following family of unital completely positive maps,
\[ \Psi_F(r) \lel g(r)^{-1}\int_0^{\infty} e^{-r/t}T_t F(t) \frac{dt}{t} \pl, \pl r>0\pl,\]
where $g(r)$ is the normalization constant given by
\[ g(r) \lel \int_0^{\infty} e^{-r/t} F(t) \frac{dt}{t} \pl . \]
\begin{lemma}\label{limit} Let $T_t$ be a semigroup of unital completely positive, self-adjoint maps. Suppose the generator $A$ satisfies $\Gamma$-regularity and $F$ satisfies ${\rm (}I{\rm )}$. Then
\begin{enumerate}
\item[i)] For $r\gl s$, $g(r)\le g(s)$ and $g(r)\Psi_F(r)\le_{cp} g(s)\Psi_F(s)$.
\item[ii)] $\displaystyle \lim_{r\to 0} g(r)(I-\Psi_F(r))=\Phi_F(A)$.
\item[iii)] $\displaystyle \lim_{r\to 0} g(r)\Gamma_{I-\Psi_F(r)}\lel \Gamma_{\Phi_F(A)}$.
\end{enumerate}In particular,
$\Phi_F(A)$ satisfies $\Gamma$-regularity.
\end{lemma}
\begin{proof} Let $r\gl s>0$. Then obviously $e^{-r/t}\le e^{-s/t}$ and hence
\[g(r)\le g(s) \pl, \pl g(r)\Psi(r) \le_{cp}\pl g(s) \Psi(s)\pl.\]
Since $\Psi(r)$ is completely positive and self-adjoint, $A_r:=g(r)(I-\Psi(r))$ is the generator of a semigroup of unital completely positive and self-adjoint maps.
Let us consider the function
\[ \psi_r(\la) \lel g(r)^{-1} \int e^{-r/t} (1-e^{-\la t}) F(t) \frac{dt}{t} \pl. \]
Using
\[ (1-e^{-\la t})\kl \min(1,\la t) \kl (1+\la) \min(1,t) \]
we deduce from the Dominated Convergence Theorem that
\[ \lim_{r\to 0} g(r)\psi_r(\la) \lel \phi_F(\la) \pl .\]
Note that $g(r)\psi_r\le g(s)\psi_s$ if $r\ge s$. Applying the monotone convergence theorem, we have that for any $x\in L_2(M)$
\[ \lim_{r\to 0} \lan x,g(r)\psi_r(A)x\ran
\lel \lim_{r\to 0} \int_0^{\infty} g(r)\psi_r(\la) d\mu_x(\la) \lel \lan x,\Phi_F(A)x\ran \pl .\]
Here $d\mu_x(\la)$ is the spectral measure $d \lan x, 1_{A\le \la} x\ran$. We split the function $\Phi_F$ into two pieces
\[ \Phi_F(\la) \lel \Phi_F'(\la)+\Phi_{F}''(\la)
\lel \int_0^1 \frac{1-e^{-\la t}}{t} F(t) dt
+ \int_1^\infty (1-e^{-\la t}) F(t)\frac{dt}{t} \pl .\]
According to our assumption $\displaystyle \int_0^1 F(t)dt, \int_1^\infty F(t)\frac{dt}{t}$ are finite. For any $x,y\in \A$, $\frac{1}{t}\Gamma_{I-T_t}(x,y)$ is uniformly bounded in $L_1(M)$. Therefore the gradient form of $\Phi_F'(A)$ is well-defined on $\A$ and has range in $L_1(M)$. On the other hand, the map $T_t:M\to M$ is completely positive and bounded from $L_{\infty}(M)$ to itself. Then $\Phi''_F(A)$ converges and the gradient form $\Gamma_{\Phi''_F(A)}$ takes range in $L_{\infty}(M)$ and in particular also in $L_1(M)$.\qd
A direct consequence of the above lemma is as follows:
\begin{cor}\label{ccc} For every $r>0$,
\[g(r) \Gamma_{I-\Psi_F(r)} \kl \Gamma_{\Phi_F(A)} \pl ,\pl
g(r)\mathcal{I}_{I-\Psi_F(r)}\kl \mathcal{I}_{\Phi_F(A)} \pl .\]
\end{cor}
\begin{proof}The previous lemma shows that $g(r)\Psi_F(r)$ is monotone in the $cp$-order as $r\to 0$. By Lemma \ref{grad1}, the gradient form $g(r)\Gamma_{I-\Psi_F(r)}$ is then monotone as well. The assertion follows from the limit
\[\lim_{r\to 0} g(r)\Gamma_{I-\Psi_F(r)}=\Gamma_{\Phi_F(A)}\]
and Corollary \ref{Fisher}.
\end{proof}
We say a self-adjoint semigroup $T_t$ is \emph{ergodic} if its fixed-point subalgebra is $\N=\mathbb{C}1$. In this situation, the conditional expectation is the trace $E_\tau(x)=\tau(x)1$ and the kernel is $K_E=1\ten 1$ in $M^{op}\overline{\ten} M$.
\begin{prop}\label{decay} Assume that a semigroup $(T_t)$ of unital completely positive and self-adjoint maps satisfies the conclusion of Proposition \ref{sfc} with respect to $E_\tau(x)=\tau(x)1$. Then there exists an $r_0>0$ such that for all $r\ge r_0$,
\begin{enumerate}
\item[i)] $\|\Psi_F(r)-E_{\tau}:L_1(M)\to L_{\infty}(M)\|_{cb}\kl 1/2$;
\item[ii)] $E_{\tau} \le_{cp} 2 \Psi_F(r)$;
\item[iii)] $g(r)\Gamma_{I-E_{\tau}}\le_{cp} 2 \Gamma_{\Phi_F(A)}$.
\end{enumerate}
\end{prop}
\begin{proof} Write $E_\tau=E$. For i) we use the triangle inequality
\begin{align*}
\|\Psi_F(r)-E\|_{cb}
&\le g(r)^{-1} \int_0^{\infty} e^{-r/t} \|T_t-E\|_{cb} F(t) \frac{dt}{t} \\
&\le g(r)^{-1}(2c \int_0^r e^{-r/t} t^{-d/2} F(t)\frac{dt}{t}+ C(d,\la)
\int_r^{\infty} e^{-r/t} e^{-\la t} F(t) \frac{dt}{t}) \\
&:= g(r)^{-1} ( 2c \text{I}+ C(d,\la) \text{II}) \pl.
\end{align*}
Then the condition (QM) of $F$ implies that for some $0<\mu<1$,
\begin{align*}
I &= r^{-d/2} \int_1^{\infty} F(r/u) u^{d/2} e^{-u} \frac{du}{u}\\ &\kl r^{-d/2}c(d,\mu)
\int_1^{\infty} F(\mu r/\mu u) e^{-\mu u} \frac{du}{u} \\
&= r^{-d/2} c(d,\mu) C_\mu \int_{\mu}^{\infty} F(r/w) e^{-w} \frac{dw}{w} \kl
r^{-d/2} c(d,\mu) C_\mu g(r) \pl .
\end{align*}
Here we have used the change of variables $u=r/t$ and $w=\mu u$. Indeed, the same change of variable shows that
\[ g(r) \lel \int_0^{\infty} F(r/u) e^{-u} \frac{du}{u} \pl .\]
For the second part we see that
\begin{align*}
II &= \int_0^1 e^{-\frac{\la r}{u}} F(r/u) e^{-u} \frac{du}{u} \kl e^{-\la r} \int_0^1 F(r/u) e^{-u} \frac{du}{u} \kl
e^{-\la r} g(r) \pl .
\end{align*}
Hence there exists an $r_0$ which depends only on $c, d, C(d,\la)$ and $C_\mu$ such that for all $r \ge r_0$,
\[ \|\Psi_F(r)-E\|_{cb} \kl \frac{1}{2} \pl .\]
Let $K_r$ be the kernel of $\Psi_F(r)$ in $\M^{op}\overline{\ten} \M$ and recall that $K_E=1\ten 1$ is the kernel of $E$. Then by \eqref{ER},
\[ \|K_r-1\ten 1\|_{\M^{op}\overline{\ten} \M}\kl \frac{1}{2} \]
which implies $K_r\gl \frac{1}{2}1\ten 1= \frac{1}{2}K_E$. The other assertions follow from Lemma \ref{grad1} and Corollary \ref{ccc}.
\qd
\begin{rem}{\rm The polynomial decay $t^{-d/2}$ is not really needed. Indeed, suppose that
\[ \|T_t:L_1\to L_{\infty}\|\kl C_{\al} e^{c_{\al}t^{-\al}} \]
holds for some $\al<1$ and all $t>0$, and that $F$ satisfies (QM) with constant $C_F(\mu)$. We choose $\beta>0$ such that $C_{\al}C_F(\mu)e^{-\frac{1-\mu}{2}\beta}\le \frac14$ and choose $r$ large enough so that $c_{\al}r^{-\al}\le \frac{1-\mu}{2}\beta^{1-\al}$.
Then we find that
\begin{align*}
I_{\beta} &\le C_F(\mu) C_{\al} \int_{\beta}^{\infty} e^{u^{\al}(c_{\al}r^{-\al}-(1-\mu)u^{1-\al})} F(r/\mu u) e^{-\mu u} \frac{du}{u} \\
&\le C_F(\mu) C_{\al} e^{-\frac{1-\mu}{2}\beta}
\int_\mu^{\infty} F(r/w) e^{-w} \frac{dw}{w} \pl .
\end{align*}
Using a small modification in Proposition \ref{sfc}, the spectral gap allows us to estimate $g(r) II_{\beta}\kl C(\beta,\la)e^{-\frac{r\la}{\beta}}$.
}\end{rem}
Motivated by the discussion above, let us introduce
\[ t_0 \lel \inf\{ t \pl | \pl \|K_{T_t}-1\ten 1\|\kl 1/2\} \pl .\]
\begin{prop}\label{lowest} Assume that $F$ satisfies $(\Delta_2)$. Let $r=\max\{t_0,t_{\al}\}$. Then
\[ \Gamma_{\Phi_F(A)}\gl \frac{F(r)}{2\al c_{\al}}\Gamma_{I-E} \pl. \]
\end{prop}
\begin{proof} Recall that $\Gamma_B$ is additive with respect to $B$. Therefore we deduce from $\|K_t-1\ten 1\|\kl 1/2$ and Lemma \ref{grad1} that $\Gamma_{I-T_t} \gl \frac{1}{2} \Gamma_{I-E}$ holds for $t\gl t_0$. Then we note that the $(\Delta_2)$ condition implies
\[ F(r)\lel F(\frac{r}{t}t)\kl c_{\al} (t/r)^{\al} F(t) \]
for all $r\le t$. Therefore,
\begin{align*}
\Gamma_{\Phi_F(A)}
&= \int_0^\infty \Gamma_{I-T_t} F(t) \frac{dt}{t}
\ge \frac{\Gamma_{I-E}}{2}
\int_{r}^{\infty} F(t) \frac{dt}{t}
\ge
\frac{\Gamma_{I-E}}{2c_{\al}} F(r) r^{\al} \int_{r}^{\infty} t^{-(1+\al)} dt \lel
\frac{\Gamma_{I-E}}{2\al c_{\al}} F(r) \pl ,
\end{align*}
which completes the proof.\qd
\begin{theorem}\label{erg} Let $T_t$ be an ergodic semigroup of completely positive self-adjoint maps such that
\begin{enumerate}
\item[i)] $\|T_t:L_1(M)\to L_{\infty}(M)\|_{cb}\le c t^{-d/2}$ for $0< t\le 1$ and $c,d>0$.
\item[ii)] the generator $A$ has a spectral gap $\si_{\min}$.
\end{enumerate}
Let $F$ be a function satisfying the conditions (I)+(QM) or (I)+($\Delta_2$). Then $\Phi_F(A)$ satisfies $\la$-$\Gamma \E$ and hence $\la$-CLSI for some $\la$ depending on $c,d, F$ and $\si_{\min}$.
\end{theorem}
\begin{proof} Let $E=E_{\tau}$ be the conditional expectation given by the trace.
According to Proposition \ref{decay}, we deduce that
\[ g(r_0)\,\Gamma_{I-E_{\tau}} \kl 2 \Gamma_{\Phi_F(A)} \]
for some $r_0$ depending on $\si_{\min}$, $c$ and $d$. Then one can choose $\la=\frac{g(r_0)}{2}$.
\qd
\subsection{Non-ergodic semigroups}\label{ner}
In this part, we want to adapt the kernel techniques for ergodic maps to the non-ergodic situation. This requires more operator space theory from the work in \cite{JP} on vector-valued noncommutative $L_p$-spaces associated with an inclusion of von Neumann algebras. As usual we assume that $(T_t):M\to M$ is a semigroup of unital completely positive selfadjoint maps and $N\subset M$ is the fixed-point subalgebra. Whenever $N$ is infinite dimensional, we can no longer hope for ultracontractivity of $T_t:L_1(M)\to L_{\infty}(M)$ in the operator norm or $cb$-norm, because the identity map \[ id:L_1(N)\to L_{\infty}(N) \]
is already unbounded. This leads us to use vector-valued $L_p$ norms.
Let $1\le p,q,r\le \infty$ and fix the relation $\frac{1}{r}=|\frac{1}{p}-\frac{1}{q}|$.
Recall that the $L_p(L_q)$ norm for the inclusion $N\subset M$ is defined as
\[ \|x\|_{L_p^q(N\subset M)}
\lel \begin{cases} \displaystyle\inf_{x=ayb} \|a\|_{L_{2r}(N)}\|y\|_{L_{q}(M)}\|b\|_{L_{2r}(N)} & p\le q \pl ,\\
\displaystyle\sup_{\|a\|_{L_{2r}(N)}=\|b\|_{L_{2r}(N)}\le 1} \|axb\|_{L_{q}(M)} &q\le p \pl.
\end{cases} \]
Here for $p\le q$, the infimum runs over all factorizations $x=ayb$ with $a,b\in L_{2r}(N)$, $y\in L_q(M)$, and for $p\ge q$, the supremum runs over all $a,b\in L_{2r}(N)$ with $\|a\|_{L_{2r}(N)}=\|b\|_{L_{2r}(N)}\le 1$. The Banach space $L_p^q(N\subset M)$ is then the completion of $M$ with respect to the corresponding norm. It follows from the H\"older inequality that for $p=q$, $L_p^p(N\subset M)\cong L_p(M)$. These norms have been extensively studied in the quantum information and operator space communities \cite{GJLR2,BaRo,muller}. For the special cases $M=R\overline{\ten}N$ and $M=\mm_k(N)$,
\[ L_p^q(N\subset R\overline{\ten}N) \lel L_p(R,L_q(N))\pl, \quad L_p^q(N\subset \mm_k(N))=S_p^k(L_q(N)) \pl ,\]
which are the vector-valued $L_p$-spaces introduced in \cite{pisier93}. In the following we mention the properties of $L_p^q(N\ssubset M)$ needed in our discussion and refer to \cite{JP} for a detailed account of these $L_p$-spaces. First, we will use the duality relation that the anti-linear trace bracket $(x,y)=\tau(x^*y)$ provides an isometric embedding
\begin{equation}\label{dual}
L_p^q(N\ssubset M) \subset L_{p'}^{q'}(N\ssubset M)^* \pl,
\end{equation}
for $1\le p,q\le \infty$, $\displaystyle \frac{1}{p}+\frac{1}{p'}=\frac{1}{q}+\frac{1}{q'}=1$, and it is indeed an equality for $1<p,q<\infty$. We will also need the following factorization property:
\begin{equation}\label{pp}
L_p(M) \lel L_{2p}(N) L_\infty^{p}(N\ssubset M) L_{2p}(N) \pl
\end{equation}
which reads as
\[\norm{x}{p}=\inf_{x=ayb}\norm{a}{L_{2p}(N)}\norm{y}{L_\infty^{p}(N\subset M)}\norm{b}{L_{2p}(N)}\pl.\]
This can be verified by interpolation. Indeed, it is obvious for $p=\infty$. For $p=1$ let us assume that $x$ is positive and $\tau(x)=1$. Then $\tau(E(x))=1$, and we may write
\[ x \lel E(x)^{1/2} (E(x)^{-1/2}xE(x)^{-1/2}) E(x)^{1/2} = E(x)^{1/2}yE(x)^{1/2} \pl .\]
Note that for every $a\in L_2(N)$,
\begin{align*}
\tau(a^*ya) &= \tau( E(x)^{-1/2}E(x)E(x)^{-1/2}aa^*) \lel \tau(aa^*) \lel \|a\|_2^2 \pl .
\end{align*}
Using the positivity of $y$ and a Cauchy-Schwarz type argument, it follows that $\|y\|_{L_\infty^1(N\subset M)}\le 1$. This factorization property is closely related to the following fact.
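In the toy example $M=\mm_2$ with its normalized trace, $N$ the diagonal subalgebra and $E$ the projection onto the diagonal, the computation $\tau(a^*ya)=\|a\|_2^2$ for $y=E(x)^{-1/2}xE(x)^{-1/2}$ can be checked directly; the concrete matrices below are of course arbitrary:

```python
# Toy check of tau(a* y a) = tau(a a*) for y = E(x)^{-1/2} x E(x)^{-1/2},
# with M = M_2 (normalized trace tau), N = diagonal matrices and E the
# projection onto the diagonal. For diagonal a the identity reduces to
# the fact that y has diagonal part E(y) = 1.
x = [[2.0, 0.7], [0.7, 1.0]]          # an arbitrary positive matrix
Ex = [x[0][0], x[1][1]]               # E(x): the diagonal part of x
y = [[x[i][j] / (Ex[i] ** 0.5 * Ex[j] ** 0.5) for j in range(2)]
     for i in range(2)]               # E(x)^{-1/2} x E(x)^{-1/2}
a = [1.3, -0.4]                       # an arbitrary diagonal a in N
tau_aya = 0.5 * sum(y[i][i] * a[i] * a[i] for i in range(2))
tau_aa = 0.5 * sum(a[i] * a[i] for i in range(2))
print(tau_aya, tau_aa)
```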
\begin{lemma}[Lemma 4.9 of \cite{JP}]\label{JP} Let $x\in M$. Then \[ \|x\|_{L_{\infty}^1(N\subset M)}
\lel \inf_{x=x_1x_2} \|E(x_1x^*_1)\|_{\infty}^{1/2}\|E(x_2^*x_2)\|_{\infty}^{1/2} \pl .\]
\end{lemma}
The following fact is an extension of Lemma 1.7 of \cite{pisier93}.
\begin{lemma}\label{dfnorms} Let $T:M\to M$ be an $N$-bimodule map and $1\le p,q\le \infty$. Then for any $1\le s\le \infty$,
\[ \|T:L_{\infty}^p(N\ssubset M)\to
L_{\infty}^q(N\ssubset M)\| \lel \|T:L_s^p(N\ssubset M)\to L_s^q(N\ssubset M)\|
\]
\end{lemma}
\begin{proof} Let us introduce the short notation
$|\mkern-1.5mu\|T\|\mkern-1.5mu|_s=\|T:L_s^p(N\ssubset M)\to L_s^q(N\ssubset M)\|$. We first prove ``$\ge$'' for $s=1$. For an element $x\in L_1^p(N\ssubset M)$ of norm less than $1$, we have a decomposition $x=ayb$ with $a,b\in L_{2p'}(N)$ and $y\in L_p(M)$, all of norm less than $1$. Using the factorization \eqref{pp}, we may further write $y=\al Y \beta$ with $\al,\beta\in L_{2p}(N)$ and $Y\in L_{\infty}^p(N\ssubset M)$, all of norm less than 1. Therefore we have shown that $x=(a\al)Y(\beta b)$ with $Y\in L_{\infty}^p(N\subset M)$, and
\[ \|a\al\|_{L_2(N)}\le 1 \pl ,\pl \|\beta b\|_{L_2(N)}\le 1 \pl .\]
Thus we may write $a\al=a'\al'$ and $\beta b=\beta'b'$ such that
\[ \max\{\|a'\|_{2q'},\|\al'\|_{2q},\|\beta'\|_{2q},\|b'\|_{2q'}\}\le 1 \pl .\]
Then we deduce from the module property that
\[T(x) \lel a'\al' T(Y)\beta' b'
\lel a' (\al' T(Y) \beta') b'
\]
and $\|T(Y)\|_{L_{\infty}^q(N\subset M)}\le |\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty}$. Using the other part of \eqref{pp}, $\al' T(Y) \beta'$ is an element of $L_q(M)$ of norm at most $|\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty}$, and hence we have shown that $\norm{T(x)}{L_{1}^q(N\subset M)}\le |\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty}$. By interpolation, we deduce that
\[ |\mkern-1.5mu\|T\|\mkern-1.5mu|_s \kl |\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty} \]
for all $1\le s\le \infty$. We dualize this inequality by applying it to $T^*$ and obtain
\[ |\mkern-1.5mu\|T\|\mkern-1.5mu|_s \lel \|T^*:L_{s'}^{q'}(N\ssubset M)\to L_{s'}^{p'}(N\ssubset M)\|\kl \|T^*: L_{\infty}^{q'}(N\ssubset M)\to L_{\infty}^{p'}(N\ssubset M)\| \lel |\mkern-1.5mu\|T\|\mkern-1.5mu|_1 \pl .\]
Then we dualize again to get
\[ |\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty} \kl |\mkern-1.5mu\|T^*\|\mkern-1.5mu|_{s'}
\kl |\mkern-1.5mu\|T\|\mkern-1.5mu|_{s}\kl |\mkern-1.5mu\|T\|\mkern-1.5mu|_{\infty}
\pl .\]
Hence all these norms coincide. \end{proof}
Thanks to the independence of $s$, we may now introduce the short notation
\[ \|T\|_{p\to q}=\|T:L_\infty^p(N\ssubset M)\to L_{\infty}^q(N\ssubset M)\| \pl . \]
and similarly, the $cb$-version
\[ \|T\|_{p\to q,cb}=\sup_m \|id_{\mm_m}\ten T:L_\infty^p(\mm_m(N)\ssubset \mm_m(M))\to L_{\infty}^q(\mm_m(N)\ssubset \mm_m(M))\| \pl . \]
In particular, we understand $L_{\infty}^p(N\ssubset M)$ as an operator space with operator space structure
\[ \mm_m(L_\infty^p(N\ssubset M)) \lel
L_\infty^p(\mm_m(N)\subset \mm_m(M)) \pl .\]
The analogue of Lemma \ref{cbc} reads as follows:
\begin{lemma}\label{cbc2} Let $(T_t)$ be a semigroup of self-adjoint $^*$-preserving $N$-bimodule maps. Then
\begin{enumerate}
\item[i)] $\|T_{2t}\|_{1\to \infty}=\|T_t\|_{1\to 2}^2$.
\item[ii)] $\|T_{2t}-E\|_{1\to \infty}=\|T_t-E\|_{2\to \infty}^2$.
\end{enumerate}
The same equality holds for $cb$-norms.
\end{lemma}
\begin{proof}Because $T_t$ are $N$-bimodule maps, we know by Lemma \ref{dfnorms} that
\begin{align*} &\|T_{2t} \|_{1\to \infty}=\|T_{2t}:L_2^1(N\subset M ) \to L_2^\infty(N\subset M) \|\pl,\\ &\|T_t\|_{1\to 2}=\|T_t:L_2^1(N\subset M )\to L_2(M )\|\pl.
\end{align*}
Take $X=L_2^1(N\subset M)$ and $H=L_2(M)$. The anti-linear bracket $\lan x, y\ran=\tau(x^*y)$ gives a completely isometric embedding $L_2^{\infty}(N\subset M)\subset \bar{X}^*$. Then the general principle in the proof of Lemma \ref{cbc} implies the assertion, because $T_t$ is $^*$-preserving and self-adjoint. The $cb$-norm case follows similarly with $X=S_2(L_2^1(N\subset M))$ and $H=S_2\ten_2L_2(M)$.
\end{proof}
We have seen in the last subsection that a completely positive order inequality $E_{\tau}\le_{cp} T$ can be deduced from kernel estimates. In the non-ergodic case, we have to modify the argument by introducing an appropriate Choi matrix for bimodule maps. Let us recall that the conditional expectation $E:M\to N$ generates a Hilbert $W^*$-module $\mathcal{H}_E=L_{\infty}^c(N\subset M)$ with $N$-valued inner product
\[ \langle x,y\rangle_{\mathcal{H}_E} \lel E(x^*y) \pl .\]
As observed in \cite{JD,JS}, it is easy to identify the completion of this module in $\mathbb{B}(L_2(M))$, namely as the strong closure $\bar{\mathcal{H}}_E=\overline{Mp_E}$, where $p_E=E:L_2(M)\to L_2(M)$ is the Hilbert space projection onto the subspace $L_2(N)\subset L_2(M)$. The advantage of a complete $W^*$-module is the existence of a module basis $(\xi_i)_{i\in I}$ such that
\[ \langle \xi_i,\xi_j\rangle \lel \delta_{ij} p_i \]
where $p_i\in N$ are projections. Note that in our situation the inclusion $Mp_E\subset L_2(M)$ is faithful and hence, the basis elements $\xi_i$ (or more precisely $\hat{\xi}_i$ obtained from the GNS construction) are in $L_2(M)$. In particular, every element $x$ in $L_2(M)$ has a unique decomposition
\[ x \lel \sum_{i} \xi_ix_i \]
so that $x_i=p_ix_i\in N$. Indeed, we have $x_i=\langle \xi_i,x\rangle_E$. For an $N$-bimodule map $T:L_\infty^1(N\ssubset M)\to M$ we may therefore introduce the Choi matrix
\[ \chi_T \lel \sum_{i,j} |i\ran\lan j| \ten T(\xi_i^*\xi_j) \pl . \]
\begin{lemma}\label{Choi} Let $T:M\to M$ be an $N$-bimodule map. Then
\[ \|T\|_{1\to \infty,cb}\lel \|\chi_T\|_{\mathbb{B}(\ell_2(I))\bar{\ten}M} \pl .\]
\end{lemma}
\begin{proof} Let $q=\sum_{i,j}|i\ran\lan j|\ten \xi_i^*\xi_j\in \mathbb{B}(\ell_2(I))\bar{\ten}M$. Viewing $q$ as a kernel, the corresponding map $T_q: S_1(\ell_2(I))\to M$ is given by
\[T_q(|i\ran\lan j|)=\xi_i^*\xi_j \pl .\]
Let us show that $T_q:S_1(\ell_2(I))\to L_{\infty}^1(N\ssubset M)$ is completely contractive. Indeed, using the operator space version of \eqref{ER},
\begin{align*} \norm{T_q}{cb}&=\norm{q}{\mathbb{B}(\ell_2(I))\ten_{\min} L_{\infty}^1(N\subset M)} \lel
\norm{q}{ L_{\infty}^1(\mathbb{B}(\ell_2(I))\ten_{\min} N\subset \mathbb{B}(\ell_2(I))\ten_{\min} M)}\\
&=\norm{(id\ten E)(q)}{ \mathbb{B}(\ell_2(I))\bar{\ten} M} \lel
\norm{\sum_{i}\ket{i}\bra{i}\ten p_i}{ \mathbb{B}(\ell_2(I))\bar{\ten} M}\le 1
\end{align*}
Here we have used the fact that $q$ is positive and the $p_i$ are projections. Note that the kernel of $T\circ T_q: S_1(\ell_2(I))\to M$ is exactly the Choi matrix $\chi_T$. Therefore, thanks to \eqref{ER} again, we deduce that \[ \norm{\chi_T}{}\lel \|T\circ T_q:S_1(\ell_2(I))\to M\|_{cb} \kl \|T\|_{1\to \infty,cb} \pl .\]
Now let $x\in L_{\infty}^1(N\ssubset M)$ be of norm less than $1$. According to Lemma \ref{JP}, we have a factorization $x=y_1y_2$ such that $E(y_1y_1^*)\le 1$ and $E(y_2^*y_2)\le 1$. This means we find coefficients $a_i,b_j$ such that
\[ x \lel \sum_{i,j} a_i^*\xi_i^*\xi_j b_j \]
and $\sum_i a_ia_i^*\le 1$ and $\sum_j b_j^*b_j\le 1$. Therefore, we deduce that
\begin{align*}
\|T(x)\|_M &= \Big\|\sum_{i,j} a_i^*T(\xi_i^*\xi_j)b_j\Big\| = \Big\|\Big(\sum_{i}\bra{i}\ten a_i^*\Big) \Big(\sum_{i,j}\ket{i}\bra{j}\ten T(\xi_i^*\xi_j)\Big)\Big(\sum_{j}\ket{j}\ten b_j\Big)\Big\|\\
&\le \Big\|\sum_i a_ia_i^*\Big\|^{1/2} \|\chi_T\| \Big\|\sum_j b_j^*b_j\Big\|^{1/2} \pl .
\end{align*}
This implies
\[ \|T(x)\|
\kl \|\chi_T\| \inf_{x=y_1y_2} \|E(y_1y_1^*)\|^{1/2} \|E(y_2^*y_2)\|^{1/2} \pl ,\]
or equivalently
\[ \|T\|_{1\to \infty} \kl \|\chi_T\| \pl .\]
The same argument applies for $id_{\mm_m}\ten T$ and we have the equality $\|T\|_{1\to \infty, cb}\lel \|\chi_T\|$. \end{proof}
We are now in a position to prove a version of Proposition \ref{decay} ii) in the non-ergodic situation.
\begin{lemma} Let $T:M\to M$ be a unital completely positive $N$-bimodule map such that
\[ \|T-E_N:L_\infty^1(N\subset M)\to M\|_{cb} \kl \frac12 \pl .\]
Then $E_N\le_{cp} 2 T$.
\end{lemma}
\begin{proof} Let $\chi_T$ (resp. $\chi_E$) be the Choi matrix of $T$ (resp. $E_N$).
We know by Lemma \ref{Choi} that \[\|\chi_T-\chi_E\|_{\mathbb{B}(\ell_2(I))\bar{\ten}M}\kl \frac12\pl.\] Since $T$ and $E$ are completely positive, $\chi_T$ and $\chi_E$ are positive. Thus we may write $\chi_E-\chi_T=\al-\beta$ with $\displaystyle 0\le \al,\beta \le \frac12$. Write $\al=\sum_{i,j}\ket{i}\bra{j}\ten \al_{i,j}$ and $\beta=\sum_{i,j}\ket{i}\bra{j}\ten \beta_{i,j}$. It is clear that $\al_{i,j}-\beta_{i,j}=E(\xi_i^*\xi_j)-T(\xi_i^*\xi_j)$. Let $x=y^*y$ be a positive element in $M$ and $y=\sum_j \xi_j y_j$ with coefficients $y_j\in N$ in the module basis. Then we deduce that
\begin{align}\label{eqa}
\sum_j y_j^*p_jy_j &= E(y^*y) \lel (E-T)(y^*y)+T(y^*y) \nonumber\\
&= \sum_{i,j} y_i^*p_i\al_{i,j}p_jy_j-
\sum_{i,j} y_i^*p_i\beta_{i,j}p_jy_j + T(y^*y) \nonumber\\
&\le \frac{1}{2} (\sum_j y_j^*p_jy_j) + T(y^*y) \pl .
\end{align}
Indeed in the last step we use the fact that
\begin{align*}\sum_{i,j} y_i^*p_i\al_{i,j}p_jy_j&=\Big(\sum_{i}\bra{i}\ten y_i^*p_i \Big)\Big(\sum_{i,j}\ket{i}\bra{j}\ten \al_{i,j}\Big)\Big(\sum_{j}\ket{j}\ten p_jy_j\Big)\\&\le \frac{1}{2}\Big(\sum_{i}\bra{i}\ten y_i^*p_i \Big)\Big(\sum_{j}\ket{j}\ten p_jy_j\Big)\\&=\frac{1}{2} (\sum_j y_j^*p_jy_j)\pl.\end{align*}
Subtracting $\frac{1}{2}(\sum_j y_j^*p_jy_j)$ in \eqref{eqa}, we obtain
\[ E(y^*y)=\sum_j y_j^*p_jy_j \kl 2 T(y^*y) \pl .\]
The same argument holds for matrix coefficients. Hence $E\le_{cp} 2 T$.\end{proof}
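The mechanism of this lemma can be illustrated in the simplest hypothetical case $N=\cz 1\subset M=\mm_2$ (where, up to an irrelevant normalization, the matrix units serve as module basis), with $E(x)={\rm tr}(x)\frac{1}{2}$ and the depolarizing map $T(x)=(1-p)x+p\,E(x)$: one computes $2\chi_T-\chi_E=4(1-p)P_{\Phi}+(p-\frac12)I$ for the maximally entangled projection $P_\Phi$, which is positive precisely when $p\ge\frac12$, i.e. when $T$ is close enough to $E$:

```python
# Toy instance of E <=_cp 2T: M = M_2, N = C*1, E(x) = tr(x) I/2, and
# T the depolarizing map T(x) = (1-p) x + p tr(x) I/2. We build the
# matrix-unit Choi matrices chi(p) = sum_{i,j} |i><j| (x) T(e_ij) and
# test positivity of 2*chi_T - chi_E on the Bell vector and a vector
# orthogonal to it (chi_E corresponds to p = 1).
def chi(p):
    m = [[0.0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            for kk in range(2):
                for l in range(2):
                    # T(e_ij)_{kl} = (1-p) delta_{ki} delta_{lj}
                    #              + (p/2) delta_{ij} delta_{kl}
                    val = (1 - p) if (kk, l) == (i, j) else 0.0
                    if i == j and kk == l:
                        val += p / 2
                    m[2 * i + kk][2 * j + l] += val
    return m

def qform(m, v):
    return sum(v[a] * m[a][b] * v[b] for a in range(4) for b in range(4))

bell = [2 ** -0.5, 0, 0, 2 ** -0.5]   # (|00> + |11>)/sqrt(2)
perp = [0, 1, 0, 0]                   # |01>, orthogonal to the Bell vector

def order_gap(p, v):
    chiT, chiE = chi(p), chi(1.0)
    A = [[2 * chiT[a][b] - chiE[a][b] for b in range(4)] for a in range(4)]
    return qform(A, v)

print(order_gap(0.6, bell), order_gap(0.6, perp))  # both positive
print(order_gap(0.3, perp))                        # negative: order fails
```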
Thus in the non-ergodic situation, we can now state the analogue of Theorem \ref{erg}.
\begin{theorem}\label{nerg} Let $T_t$ be a semigroup of completely positive self-adjoint maps and $N$ be contained in the fixed-point subalgebra. Suppose that
\begin{enumerate}
\item[i)] $\|T_t:L_\infty^1(N\ssubset M)\to L_{\infty}(M)\|_{cb}\kl c t^{-d/2}$ for $0\le t\le 1$ and $c, d>0$;
\item[ii)] $\|T_t(I-E_N):L_2(M)\to L_2(M)\|\kl e^{-\la_{\min} t}$ for some $\la_{\min}>0$.
\end{enumerate}
Let $F$ be a function satisfying (I)+(QM) or (I)+($\Delta_2$). Then
$\Phi_F(A)$ satisfies $\la$-$\Gamma \E$ and hence $\la$-CLSI for some $\la$ depending on $c,d,F$ and $\la_{\min}$.
\end{theorem}
\section{Riemannian manifolds and Representation theory}
In this section we find heat kernel estimates, which allow us to apply Theorem \ref{nerg}.
\subsection{Riemannian manifolds}
Let $(\M,g)$ be a $d$-dimensional compact Riemannian manifold without boundary. A \emph{H\"{o}rmander system} is a finite family of vector fields $X=\{X_1,...,X_r\}$ such that for some global constant $l_X$, the set of iterated commutators (no commutator if $k=1$)
\[ \bigcup_{1\le k\le l_X} \{ [X_{j_1},[X_{j_2},[\cdots,[X_{j_{k-1}},X_{j_k}]\cdots]]] \pl | \pl 1\le j_1,\cdots,j_k\le r\} \]
spans the tangent space $T_x \M$ at every point $x\in \M$. We consider the sub-Laplacian
\[ \Delta_X \lel \sum_{j=1}^r X_j^*X_j \pl. \]
Here $X_j^*$ is the adjoint operator of $X_j$ with respect to $L_2(\M,\mu)$, where $\mu$ is the measure given by the Riemannian volume form, which in local coordinates reads
\[ d\mu(x) \lel \sqrt{|g|}\,dx_1\wedge \cdots \wedge dx_d \pl , \quad |g|(x)=|{\rm det}\,g_{ij}(x)|\pl.\]
For compact $\M$, $\Delta_X$ extends to a self-adjoint operator on $L_2(\M,\mu)$.
It follows from the famous Rothschild--Stein estimate \cite{SR} (see also \cite{OZ}) that $\Delta_X$ is hypoelliptic. This leads to the estimate (see e.g. \cite{HelfferNier})
\begin{equation}\label{hypo}
\lan f,\Delta^{\frac{1}{l_X}}f\ran \le C (\lan f,\Delta_{X}f\ran + \norm{f}{2}^2 ) \pl,
\end{equation}
where $\Delta$ is the Laplace-Beltrami operator on $\M$. Using the Hardy-Littlewood-Sobolev inequality in the Riemannian setting, we obtain the following Sobolev inequality (see e.g. \cite{LZ}):
\begin{lemma}\label{LZ} Let $\M$ be a compact Riemannian manifold and $X$ be a H\"ormander system on $\M$. Let $q=\frac{2dl_X}{dl_X-2}$. Then
\[ \|f\|_{q} \kl C ( \lan\Delta_X f,f\ran + \norm{f}{2}^2)^{1/2} \pl .\]
\end{lemma}
Now it is time to invoke the famous theorem of Varopoulos about the dimension of semigroups.
\begin{theorem}[\cite{VSCC}]\label{Varo} Let $T_t:L_\infty(\Om,\mu) \to L_\infty(\Om,\mu)$ be a semigroup of positive measure preserving self-adjoint maps with generator $A$.
Then for $m\in \nz$, the following conditions are equivalent:
\begin{enumerate}
\item[i)] $\|T_t:L_1(\Om,\mu)\to L_{\infty}(\Om,\mu)\|\kl C_1 t^{-m/2}$ for all $0\le t\le 1$ and some $C_1$;
\item[ii)] $\|f\|^2_{\frac{2m}{m-2}} \kl C_2 (\lan Af,f\ran+ \norm{f}{2}^2)$;
\item[iii)] $\|f\|_2^{2+4/m} \kl C_3 \lan Af,f\ran \|f\|_1^{4/m}$.
\end{enumerate}
\end{theorem}
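In the simplest commutative example, the heat semigroup on the circle with normalized Haar measure has kernel $k_t(x)=\sum_{n\in\mathbb{Z}}e^{-n^2t}e^{inx}$, so that $\|T_t\|_{1\to\infty}=k_t(0)=\sum_n e^{-n^2t}$, and Poisson summation gives $k_t(0)\sim\sqrt{\pi/t}$, i.e. condition i) with $m=1$; a quick numerical illustration:

```python
import math

# Heat kernel on the circle: ||T_t||_{1 -> infty} = k_t(0) = sum_n e^{-n^2 t}.
# Poisson summation gives k_t(0) = sqrt(pi/t) * (1 + O(e^{-pi^2/t})),
# matching the ultracontractivity bound C t^{-m/2} with m = 1.
t = 0.01
k0 = 1 + 2 * sum(math.exp(-n * n * t) for n in range(1, 300))
print(k0, math.sqrt(math.pi / t))
```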
\begin{rem}{\rm Varopoulos' Theorem remains valid for
semi-finite von Neumann algebras. For the proof, the only part which requires modification is i)$\Rightarrow$ ii) (see \cite{JZ} and independently \cite{XX}). The completely bounded norm analog is
significantly more involved \cite{JZh}, and it will be used later.}
\end{rem}
It is well-known that the Laplace-Beltrami operator $\Delta_{LB}$ on a compact Riemannian manifold has a spectral gap. Similarly, \eqref{hypo}
shows that $\Delta_X$ also has a spectral gap. Combining Lemma \ref{LZ} and Theorem \ref{Varo}, we obtain the kernel estimates in Proposition \ref{decay} for $m=dl_X$. As a consequence of Theorem \ref{erg}, we have:
\begin{theorem}\label{horm} Let $(\M,g)$ be a compact Riemannian manifold and $X=\{X_1,...,X_r\}$ be a H\"ormander system. Then there exist $m=dl_X\in \nz$ and $c>0$ such that $S_t=e^{-t\Delta_X}$ satisfies
\[ \|S_t:L_1(\M,\mu)\to L_{\infty}(\M,\mu)\|_{cb}\kl c t^{-m/2} \pl .\]
Moreover, for every $0<\theta<1$, $S_t^{\theta}=e^{-t\Delta_X^{\theta}}$ satisfies $\la$-$\Gamma\E$ with $\la=c_0t_0^{-\theta}(1-\theta)\theta^2$. Here $t_0=t_0(\Delta_X)$ is the return time of $\Delta_X$ and $c_0$ is an absolute constant.
\end{theorem}
As mentioned in Corollary \ref{kkeeyy}, the $\Gamma\E$ condition automatically extends to the operator-valued setting for any finite von Neumann algebra $M$. Here we note that the kernel estimates for H\"ormander systems also extend to $M$-valued functions.
\begin{cor} \label{horm'} Let $M$ be a finite von Neumann algebra with tracial state $\tau$. Let $(\M,\mu)$ be a compact Riemannian manifold and $X$ be a H\"ormander system as above. Then
\[ \|id\ten T_t:L_\infty(\M ; L_{1}(M))\to L_\infty(\M ; L_\infty(M))\|
\kl c t^{-m/2} \]
holds for $0\le t\le 1$ and $c>0$. Moreover, $\Phi_F(\Delta_X\ten id_M)$ satisfies $\Gamma\E$.
\end{cor}
\begin{proof} Let $\ez(f)=\int_\M f(x) d\mu(x)$ be the conditional expectation onto $M$, i.e. $\ez=E_M$. Then a positive element $f\in L_{\infty}^1(M\ssubset L_{\infty}(\M)\bar{\ten}M)$ has norm $\le 1$ if $\|\ez(f)\|_{M}\le 1$. Let $h\in L_2(M)$ be a unit vector. Consider the scalar function $f_h(x)=\lan h,f(x)h\ran_{L_2(M)}$. We deduce that
\[ \ez(f_h) \lel \int_{\M} f_h(x) d\mu(x) \lel \lan h,\ez(f)h\ran_{L_2(M)} \kl 1 \] and therefore $\|T_t(f_h)\|_{L_\infty(\M)}\le c t^{-m/2}$. This means
\[ \sup_{\|h\|_2\le 1} \sup_{x\in \M} \lan h,T_t(f)(x)h\ran_{L_2(M)} \kl ct^{-m/2} \pl . \]
Interchanging the two suprema, with the help of the duality $L_1(\M,L_1(M))^*=L_{\infty}(\M)\bar{\ten}M$,
implies the assertion. \end{proof}
\subsection{Group representation}
Let $G$ be a compact group with Haar measure $\mu$.
We consider a semigroup of positive measure preserving self-adjoint maps $S_t:L_{\infty}(G)\to L_{\infty}(G)$, which is also right translation invariant. Suppose that $S_t$ is given by the kernel \[ S_t(f)(g) \lel \int_G K_t(g,h)f(h) d\mu(h) \pl.\]
The right translation invariance means that for any $f\in L_\infty(G)$
\begin{align*}
\int K_t(gs,h)f(h) d\mu(h) &=
S_t(f)(gs) \lel \int K_t(g,h) f(hs) d\mu(h)= \int K_t(g,hs^{-1}) f(h) d\mu(h) \pl .
\end{align*}
Thus $K_t(gs,h)=K_t(g,hs^{-1})$ and hence $K_t(g,h)=k_t(gh^{-1})$ for some single variable function $k_t$. Conversely, $K_t(g,h)=k_t(gh^{-1})$ implies right invariance.
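This convolution structure is easy to verify in a toy model, say on the cyclic group $\mathbb{Z}_n$ (a hypothetical stand-in for $G$): any kernel of the form $K(g,h)=k(g-h)$ commutes with right translations. A minimal sketch, with the normalization omitted since it plays no role in the identity:

```python
# Toy check on the cyclic group Z_n: a kernel of convolution form
# K(g,h) = k(g-h) yields S(f)(g) = sum_h k(g-h) f(h) (normalization
# omitted), and S commutes with the right translations R_s f(g) = f(g+s).
n = 7
k = [3, 1, 4, 1, 5, 9, 2]     # an arbitrary integer kernel function
f = [2, 7, 1, 8, 2, 8, 1]     # an arbitrary test function

S = lambda f: [sum(k[(g - h) % n] * f[h] for h in range(n)) for g in range(n)]
R = lambda s, f: [f[(g + s) % n] for g in range(n)]

print(all(S(R(s, f)) == R(s, S(f)) for s in range(n)))  # True
```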
Now let $(M,\tau)$ be a finite von Neumann algebra and $\al:G \to Aut(M)$ be an action of $G$ on $M$ by trace-preserving automorphisms. Using the standard co-representation
\[ \pi: M \to L_{\infty}(G;M) \pl ,\pl \pi(x)(g) \lel \al_{g^{-1}}(x) \pl ,\]
we define the \emph{transferred semigroup}
\[ T_t(x) \lel \int_G k_t(g^{-1}) \al_{g^{-1}}(x) d\mu(g) \pl .\]
\begin{lemma}\label{comdiag} The semigroups $S_t$ and $T_t$ satisfy the following factorization property
\[ \pi\circ T_t \lel (S_t\ten id_M)\circ \pi \pl .\]
\end{lemma}
\begin{proof} We include the proof for completeness. Indeed, for $x\in M$
\begin{align*}
\pi(T_t(x))(g) &= \al_{g^{-1}}\Big(\int_G k_t(h^{-1}) \al_{h^{-1}}(x) d\mu(h)\Big)
\lel
\int_G k_t(gg^{-1}h^{-1}) \al_{(hg)^{-1}}(x) d\mu(h) \\
&= \int_G k_t(gh^{-1}) \al_{h^{-1}}(x) d\mu(h)
\lel (S_t\ten id)(\pi(x))(g) \pl . \qedhere
\end{align*} \end{proof}
Let us denote by $N=\{x\pl | \al_g(x)=x \pl \forall \pl g\in G\}$ the fixpoint subalgebra. Note that we have the following commuting diagram
\begin{equation}\label{ccss}
\begin{array}{ccc} M\pl\pl &\overset{\pi}{\longrightarrow} & L_{\infty}(G,M) \\
\downarrow E_N & & \downarrow \ez=E_M \\
N \pl\pl &\overset{\pi}{\longrightarrow} & M
\end{array} \pl .
\end{equation}
Here $M\subset L_{\infty}(G,M)$ is considered as the subalgebra of operator-valued constant functions and, as seen in Corollary \ref{horm'}, the conditional expectation is given by integration. Then for any $x\in M$,
\[ \ez(\pi(x)) \lel \int_{G} \al_{g^{-1}}(x) d\mu(g) \lel E_N(x) \pl \]
is exactly the conditional expectation from $M$ onto the fixpoint algebra $N$. Since $\ez$ is a unital completely positive $N$-bimodule map, we see that
\[ \ez: L_p^q(M\subset L_{\infty}(G,M))\to L_p^q(N\subset M) \pl \]
is a complete contraction for all $1\le p,q\le \infty$. This implies that the inclusion $\pi: L_p^q(N\ssubset M) \subset L_p^q(M\ssubset L_{\infty}(G,M))$ is a completely isometric embedding (see \cite{JP} for details). Note that the conditions $\la$-$\Gamma\E$ and $\la$-CLSI pass to subsystems:
\begin{prop}\label{transf} Let $S_t:L_{\infty}(G)\to L_{\infty}(G)$ be an ergodic, right invariant semigroup and $T_t:M\to M$ be the transferred semigroup defined as above. Then
\begin{enumerate}
\item[i)] $\norm{T_t-E_N:L_2(M)\to L_2(M)}{}\le \norm{S_t-\ez:L_2(G)\to L_2(G)}{}$ for all $t>0$ and hence the spectral gap for $T_t$ (with respect to $E_N$) is not less than the spectral gap of $S_t$.
\item[ii)] $(T_t)$ satisfies $\la$-$\Gamma\E$ (resp. $\la$-FLSI, $\la$-CLSI) if $(S_t)$ does.
\end{enumerate}
\end{prop}
Combining \eqref{sfc} with Proposition \ref{decay} and Proposition \ref{lowest}, we obtain the following application of transference:
\begin{theorem}\label{Ho2} Let $S_t:L_{\infty}(G)\to L_{\infty}(G)$ be an ergodic right invariant semigroup with kernel function $k_t$. Let $\si$ be the spectral gap of $S_t$ and suppose $\sup_g |k_t(g)|\kl c t^{-m/2}$ holds for some $c,m>0$ and $0\le t\le 1$. Then the transferred semigroup $T_t:M\to M$ and its generator $A$ with spectral gap $\la_{\min}\gl \si$ satisfy:
\begin{enumerate}
\item[i)] $ \|T_t:L_1(M)\to L_1^{\infty}(N\ssubset M)\|_{cb}\kl c t^{-m/2}$ for $0\le t\le 1$, and
\[ \|T_t(id-E):L_1(M)\to L_1^{\infty}(N\ssubset M)\|_{cb}\kl \begin{cases} 2ct^{-m/2} &0\le t\le 1 \pl ,\\
c(m,\la_{\min})e^{-\la_{\min} t} & 1\le t<\infty \pl .\end{cases}\]
\item[ii)] For every function $F$ satisfying condition (I)$+$(QM) or (I)+($\Delta_2$), the generator $\Phi_F(A)$ satisfies $\Gamma\E$ and hence CLSI.
\end{enumerate}
\end{theorem}
Now we combine Theorem \ref{Ho2} with the kernel estimates for H\"ormander systems on a Lie group. Let $G$ be a compact Lie group and $\mathfrak{g}$ be its Lie algebra (of right invariant vector fields). A generating set $X=\{X_1,\cdots,X_r\}$ of $\mathfrak{g}$ is a right invariant H\"ormander system on $G$. Indeed,
\[ X(f) \lel \frac{d}{dt}f(\exp(tX)g)|_{t=0} \pl, \]
is right translation invariant because the left and right translations commute. Then the sub-Laplacian $\displaystyle \Delta_X=\sum_{j=1}^r {X_j}^*{X_j}$ generates a right invariant semigroup $S_t=e^{-t\Delta_X}$.
\begin{cor} \label{diagtrans}Let $X$ be a generating set of $\mathfrak{g}$ and $S_t=e^{-t\Delta_X}:L_\infty(G) \to L_\infty(G)$ be the right invariant semigroup given by the sub-Laplacian $\Delta_X$. Then the transferred semigroup $T_t:M\to M$ and its generator $A$ satisfy
\begin{enumerate}
\item[i)] For every function $F$ satisfying condition $(I)+(QM)$ or $(I)+(\Delta_2)$, the generator $\Phi_F(A)$ satisfies $\Gamma\E$ and hence CLSI.
\item[ii)] In particular, for all $0<\theta<1$ the generator $A^{\theta}$ satisfies $\la$-$\Gamma\E$ with ${\la(\theta,X)=c_0t_0^{-\theta}\theta^2(1-\theta)}$. Here $t_0=t_0(\Delta_X)$ is the return time of $\Delta_X$ and $c_0$ an absolute constant.
\end{enumerate}
\end{cor}
\subsection{Finite dimensional representation of Lie groups}Let $\mm_m$ be the $m\times m$ matrix algebra and $U_m$ be its unitary group.
A unitary representation $u:G\to U_m$ induces a representation $\hat{u}:\mathfrak{g}\to \mathfrak{u}_m$ of the corresponding Lie algebra, where $\mathfrak{u}_m=i(\mm_m)_{sa}$ is the Lie algebra of $U_m$ and $(\mm_m)_{sa}$ are the self-adjoint matrices in $\mm_m$. Let $X=\{X_1,...,X_r\}$ be a generating set of $\mathfrak{g}$ and $Y_1,...,Y_r\in (\mm_m)_{sa}$ be their images under $\hat{u}$. Indeed, for the exponential map $\exp$, we have
\[ u(\exp(tX_j)) \lel e^{itY_j} \]
and $Y_j\in (\mm_{m})_{sa}$ is the corresponding generator of the one-parameter unitary group $u(\exp(tX_j))\subset U_m$. Let us consider the (self-adjoint) Lindblad generator given by
\[ \L(\rho) \lel \sum_{j=1}^r Y_j^2\rho+\rho Y_j^2-2Y_j\rho Y_j \pl .\]
Then we have a concrete realization of Lemma \ref{comdiag}.
\begin{lemma}\label{dial} Let $\pi:\mm_m\to L_{\infty}(G,\mm_m)$ be given by
\[ \pi(x)(g) \lel u(g)^{-1}xu(g) \pl .\]
Then
\[ \Delta_X(\pi(\rho)) \lel \pi(\L(\rho)) \pl, \pl X_j(\pi(x))\lel i \pi([x,Y_j])\pl. \]
\end{lemma}
\begin{proof} Let $x\in \mm_m$ and let $h,k\in \ell_2^m$ be two vectors. We consider the scalar function
\begin{align*}
f(g) &= \lan h, u(g)^{-1}xu(g)k\ran \pl .
\end{align*}
Then we have
\begin{align*}
X_j(f)(g) &= \frac{d}{dt} f(\exp (tX_j)g) |_{t=0}
\lel
\frac{d}{dt} \lan h,u(g)^{-1} e^{-itY_j}x e^{itY_j} u(g)k\ran |_{t=0} \\
&= i \lan h,u(g)^{-1}(xY_j-Y_jx)u(g) k\ran \pl .
\end{align*}
Since $h,k$ are arbitrary, we deduce the second assertion. Note that $ {X_j}=-{X_j}^*$. Then \[{X_j}{X_j}\pi(x)
\lel -\pi([Y_j,[Y_j,x]])
\lel \pi( 2Y_jxY_j-Y_j^2x-xY_j^2) \pl .\]
and hence
\[ \Delta_X(\pi(x))\lel \pi\Big( \sum_j Y_j^2x+xY_j^2-2Y_jxY_j\Big) \lel \pi(\L(x)) \pl .\]
This implies in particular that the semigroup $S_t=e^{-t\Delta_X}$ on $G$ satisfies
\begin{align*} (S_t\ten id)\circ \pi \lel & \pi \circ e^{-t\L} \pl . \qedhere\end{align*}
\end{proof}
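The double commutator identity used in the proof, $[Y_j,[Y_j,x]]=Y_j^2x+xY_j^2-2Y_jxY_j$, is elementary and can also be confirmed by a direct matrix computation; the integer matrices below are arbitrary choices:

```python
# Verify the double commutator identity [Y,[Y,x]] = Y^2 x + x Y^2 - 2 Y x Y
# on 2x2 matrices; integer entries make the check exact.
def mul(A, B):
    return [[sum(A[i][kk] * B[kk][j] for kk in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

comm = lambda A, B: sub(mul(A, B), mul(B, A))

Y = [[1, 2], [2, -1]]    # an arbitrary self-adjoint Y
x = [[0, 1], [2, 3]]     # an arbitrary x
lhs = comm(Y, comm(Y, x))
Y2 = mul(Y, Y)
rhs = sub(sub(mul(Y2, x), mul(Y, mul(x, Y))),
          sub(mul(Y, mul(x, Y)), mul(x, Y2)))
print(lhs == rhs)  # True
```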
\begin{theorem}\label{Horm} Let $X=\{X_1,...,X_r\}$ be a generating set of $\mathfrak{g}$ and $u:G\to U_m$ be a unitary representation so that $\hat{u}(X_j)=Y_j$. Let
\[ \L(x) \lel \sum_j Y_j^2x+xY_j^{2}-2Y_jxY_j \pl. \]
Then $A=\L^{\theta}$ satisfies $\la$-$\Gamma\E$ and hence $\la$-CLSI
with $\la(X,\theta)=c_0 t_0^{-\theta}\theta^2 (1-\theta)$ depending on $t_0=t_0(\Delta_X)$ and $\theta$.
\end{theorem}
We obtain the following corollary from the $cb$-version of Varopoulos' theorem \cite{JZh}.
\begin{cor} Let $G$ be a $d$-dimensional Lie group and $X$ be a generating set of $\mathfrak{g}$ using up to $l_X$-th iterated Lie brackets. Let $\L$ be as above. Suppose that $S_t=e^{-tB}$ is a semigroup of completely positive self-adjoint trace preserving maps on $M_m$ such that
\begin{enumerate}
\item[i)] The fixed-point algebra $N_{\L}$ of $e^{-t\L}$ is contained in the fixed-point algebra $N_B$ for $e^{-tB}$;
\item[ii)] $\lan x,\L^{\al} x\ran_{tr}\kl c (\lan x,B x\ran_{tr}+\norm{x}{2}^2 ) $ for some $0<\al< \frac{dl_X}{2}$.
\end{enumerate}
Then $B^\theta$ satisfies $\Gamma\E$ and hence CLSI.
\end{cor}
\begin{proof} Denote $d_X=dl_X$. The cb-version of Varopoulos' theorem implies that
\[ \|(I+\L)^{-\al/2}:L_2(M_m) \to L_2^{q}(N_{\L}\subset M_m)\|_{cb}
\kl c(q) \]
holds for $\frac{1}{q}=\frac{1}{2}-\frac{\al}{d_X}$ provided $2\al< d_X$. By our assumption we have
\[ \|(I+\L)^{\al/2}x\|_2 \sim \|x\|_2+ \|\L^{\al/2}x\|_2 \kl c (\|x\|_2+ \|B^{1/2}x\|_2) \pl .\]
Using $N_{\L}\subset N_B$ we deduce that
\[ \|(I+ B)^{-1/2}:L_2(M_m)\to L_2^q(N_B\subset M_m)\|_{cb} \kl c'(q) \pl .\]
By $ii)\Rightarrow i)$ in Varopoulos' theorem we deduce that
\[ \|e^{-tB}:L_\infty^1(N_B\subset M_m)\to M_m\|_{cb} \kl c t^{-d_X/(2\al)} \pl .\]
Thanks to the spectral gap for $B$, we may again use Theorem \ref{nerg} and deduce the assertion. \end{proof}
\section{A density result}
In this section we show that on a matrix algebra the set of self-adjoint generators satisfying $\Gamma\E$ is dense. Let $(T_t=e^{-t\L}:\mm_n\to \mm_n)$ be
a semigroup of self-adjoint and unital completely positive maps. Using the Lindblad form, we may assume that
\[ \L(x) \lel \sum_{k=1}^r a_k^2x+xa_k^2-2a_kxa_k \pl \]
and then the corresponding derivation is given by
\[ \delta:\mm_n \to \oplus_{k=1}^r\mm_n\pl, \pl \delta(x) \lel ([a_k,x])_{k=1}^r \pl .\]
Therefore, the fixpoint algebra is given by
\[ N\pl=\pl\{ x \pl | \pl \delta(x)\lel 0\} \lel \{a_1,...,a_r\}' \pl .\]
It is easy to check that
\[ \Gamma_\L(x,y) \lel \sum_k [a_k,x]^*[a_k,y] \pl .\]
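This can be tested numerically against the abstract gradient form $2\Gamma_\L(x,y)=\L(x)^*y+x^*\L(y)-\L(x^*y)$; with the convention $[a,x]=ax-xa$ one finds $2\Gamma_\L(x,y)=2\sum_k[a_k,x]^*[a_k,y]$, which the following sketch confirms for a single self-adjoint $a$ (Gaussian-integer entries make the check exact):

```python
# Verify 2*Gamma(x,y) = L(x)* y + x* L(y) - L(x* y) = 2 [a,x]* [a,y]
# for L(x) = a^2 x + x a^2 - 2 a x a with a = a*, using the convention
# [a,x] = ax - xa, on 2x2 matrices with Gaussian-integer entries.
def mul(A, B):
    return [[sum(A[i][kk] * B[kk][j] for kk in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def sub(A, B): return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
def dag(A):    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]
def sca(c, A): return [[c * A[i][j] for j in range(2)] for i in range(2)]

comm = lambda A, B: sub(mul(A, B), mul(B, A))

a = [[1, 1 - 1j], [1 + 1j, -2]]       # an arbitrary self-adjoint a
L = lambda x: sub(add(mul(mul(a, a), x), mul(x, mul(a, a))),
                  sca(2, mul(a, mul(x, a))))

x = [[1j, 2], [0, 3 - 1j]]            # arbitrary test matrices
y = [[2, 1 + 1j], [1, 0]]
lhs = sub(add(mul(dag(L(x)), y), mul(dag(x), L(y))), L(mul(dag(x), y)))
rhs = sca(2, mul(dag(comm(a, x)), comm(a, y)))
print(lhs == rhs)  # True
```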
\begin{lemma}\label{H1}Let $A_j=ia_j$. Then $X=\{A_1,...,A_r\}$ is a H\"ormander system of some compact connected Lie group $G$.
\end{lemma}
\begin{proof} Let $A_j=ia_j$ and let $\mathfrak{k}\subset \mathfrak{u}_n$ be the real Lie algebra generated by $A_1,...,A_r$. Let $G_1$ be the closed subgroup generated by the $e^{tA_j}$, i.e. the closure of the group $e^{\mathfrak{k}}$. Consider the subspace $V={\rm span}\{A_1,...,A_r\}$ and the map $\phi:G_1\times V\to \Mz_n^{sa}$. Note that $\delta_A(x)=[A,x]$ is a Schur multiplier (since $A$ is normal) and hence
\[ e^{t\delta_A}
\lel \sum_j \frac{t^j}{j!} \delta_A^j \]
also leaves $\mathfrak{k}$ invariant, certainly for elements $A\in\{A_1,...,A_r\}$ and then by induction also for elements in $\mathfrak{k}$. Thanks to the definition of $G_1$, we deduce that $\mathfrak{k}$ is $G_1$-invariant. Certainly the inner product given by the trace on $\mathbb{B}(S_2^n)$ is invariant with respect to the conjugation $\si(g)(T)=gTg^*$ and leaves $\mathfrak{k}$ invariant. By differentiation, the Killing form, i.e. the restriction of this trace form to an invariant subspace, is $G_1$-invariant. According to \cite[5.30]{Lie1}, we see that $\mathfrak{k}$ is a Lie algebra and, moreover, $H=e^{\mathfrak{k}}$ is itself a connected Lie group with respect to a modified topology, see \cite[5.20]{Lie1}. Certainly, $A_1,...,A_r$ is a H\"ormander system for any Lie group with Lie algebra $\mathfrak{k}$. Hence it suffices to show that $\mathfrak{k}$ is a compact Lie algebra. First we note that for iterated commutators $C=[A,B]$ of elements satisfying $A^*=-A$, $B^*=-B$, we still have $C^*=-C$. Thus $\mathfrak{k}$ and its complexification $\mathfrak{k}_{\cz}$ are invariant under the $^*$-operation. According to \cite[Prop 1.56]{Liea}, we see that $\mathfrak{k}$ is reductive, and the real form of a complex Lie algebra. Write $\mathfrak{k}=\mathfrak{t}+[\mathfrak{k},\mathfrak{k}]$. Then $\mathfrak{k}_0=[\mathfrak{k},\mathfrak{k}]$ is semisimple. According to \cite[Theorem 1.42]{Liea}, we know that the Killing form is non-degenerate on $\mathfrak{k}_0$. According to \cite[Prop 4.27]{Liea}, we see that
$\mathfrak{k}_0$ is compact. According to \cite[Theorem 5.11]{Lie1}, we find that $H$ is, up to a finite covering, the product of a torus (with Lie algebra $\mathfrak{t}$) and the compact group $H_0$ with Lie algebra $\mathfrak{k}_0$; hence $H$ is indeed compact. This argument follows \cite{One}. \end{proof}
Our aim is now to find a suitable approximation of the form $B_{\eps}=\phi_{\eps,\si}(\L)$ which satisfies a $\Gamma \E$ estimate and is close to $\L$ in operator norm on $L_2(\mm_n,tr)$. We apply the technique from Section 3 and define, for a fixed $\si>0$,
\[ F_{\eps,\si}(t) \lel 1_{[\eps,1)}(t) t^{-1} + 1_{[1,\infty)}(t) t^{-\si} \pl .\]
\begin{lemma}\label{apcalc} Let $\eps>0$. Define
\[\phi_{\eps,\si}(\la) \lel (-\ln\eps)^{-1}\int_{\eps}^{\infty}(1-e^{-t\la})F_{\eps,\si}(t) \frac{dt}{t}\pl.\]
Then
\begin{enumerate}
\item $\|\L-\phi_{\eps,\si}(\L)\|\kl \frac{2\si^{-1}+\|\L\|^2}{|2\ln \eps|}$;
\item If $c(\L)\Gamma_{I-E}\le \Gamma_{I-T_t}$ holds for $t\gl t_0\gl 1$, then
\[ \frac{c(\L)}{(1+\si|\ln \eps|)t_0^{\si}}
\Gamma_{I-E}
\kl \Gamma_{\phi_{\eps,\si}(\L)}
\pl .\]
\end{enumerate}
\end{lemma}
\begin{proof} Using differentiation we have that
$x-\frac{x^2}{2}\le 1-e^{-x}\le x$. Define $\displaystyle \psi(\la)=\int_{\eps}^1 (1-e^{-\la t}) \frac{dt}{t^2}$. Then
\begin{align*}
|\ln \eps|\la -\frac{\la^2}{2} &\kl \int_{\eps}^1 (\la t -\frac{\la^2t^2}{2})\frac{dt}{t^2}
\kl \psi(\la) \kl \int_{\eps}^1 \la t \pl \frac{dt}{t^2}
\lel |\ln \eps|\la \pl .
\end{align*}
Write $\displaystyle \tilde{\psi}(\la)=\int_1^{\infty} (1-e^{-\la t}) \frac{dt}{t^{1+\si}}$. Note that $0\le \tilde{\psi}(\la)\kl \si^{-1}$. Then we find
\[ \la -\frac{\la^2}{2|\ln \eps|}
\kl \phi_{\eps,\si}(\la) \kl
\la+\frac{1}{\si |\ln \eps|} \pl .\]
By functional calculus, we deduce that
\[ \|\L- \phi_{\eps,\si}(\L)\| \kl \frac{1}{\si|\ln \eps|}+ \frac{\|\L\|^2}{2|\ln \eps|}
\kl \frac{2\si^{-1}+\|\L\|^2}{2|\ln \eps|} \pl .\]
For the second assertion, we observe that by linearity of $\Gamma_A$ in the variable $A$
\[ \Gamma_{\phi_{\eps,\si}(\L)}
\gl c(\L) |\ln\eps|^{-1}(\int_{t_0}^{\infty} \frac{dt}{t^{1+\si}}) \Gamma_{I-E}
\gl \frac{c(\L)}{\si|\ln\eps|} t_0^{-\si} \Gamma_{I-E} \pl .\]
This completes the proof of the second assertion.\end{proof}
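The two-sided bound just proved is easy to confirm numerically from the integrals $\psi$ and $\tilde{\psi}$ above; the parameters below are arbitrary, and truncating the improper integral only lowers the value of $\phi_{\eps,\si}$, so both bounds remain valid:

```python
import math

# Numerical confirmation of
#   la - la^2/(2|ln eps|) <= phi_{eps,sigma}(la) <= la + 1/(sigma |ln eps|),
# with phi = (psi + psi_tilde)/|ln eps| as in the proof. Truncating
# psi_tilde at T only lowers phi, so both bounds remain valid.
eps, sigma, la, T, n = 0.01, 0.5, 1.0, 200.0, 200000
log_eps = abs(math.log(eps))

def midpoint(f, a, b):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

psi = midpoint(lambda t: (1 - math.exp(-la * t)) / t ** 2, eps, 1.0)
psi_tilde = midpoint(lambda t: (1 - math.exp(-la * t)) / t ** (1 + sigma), 1.0, T)
phi = (psi + psi_tilde) / log_eps

lower = la - la ** 2 / (2 * log_eps)
upper = la + 1 / (sigma * log_eps)
print(lower, phi, upper)
```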
\begin{rem}\label{tnot} {\rm a) An interesting choice is $\si=\frac{1}{\ln t_0}$. Then we find
\[ \Gamma_{\phi_{\eps,\si}(\L)}\gl \frac{c(\L)}{|\ln t_0|} \frac{1}{|\ln \eps|}\Gamma_{I-E} \pl ,
\]
and $\|\L-\phi_{\eps,\si}(\L)\|\kl \frac{2|\ln t_0|+\|\L\|^2}{2|\ln \eps|}$.
b) We can also slightly improve the lower estimate. Let $0<\beta<1$. Since $g(x)=1-e^{-x}$ is concave, the function $\frac{1-e^{-x}}{x}$ is decreasing and satisfies $\frac{1-e^{-x}}{x}\gl 1-\frac{x}{2}$. For $0\kl \la \kl \|\L\|$ this implies
\begin{align*}
\int_{\eps}^{\eps^{\beta}} (1-e^{-\la t})\frac{dt}{t^2} &\gl
\frac{1-e^{-\|\L\|\eps^{\beta}}}{\|\L\|\eps^{\beta}}
\int_{\eps}^{\eps^{\beta}}\la t \frac{dt}{t^2} \gl (1-\frac{\|\L\|\eps^{\beta}}{2})(1-\beta)\la |\ln\eps| \pl .
\end{align*}
Thus assuming $|\ln \eps|\gl \frac{2}{\beta}|\ln \frac{\|\L\|}{2\beta}|$ implies
\[ -2\beta \la \kl \phi_{\eps,\si}(\la)-\la\kl \frac{1}{\si|\ln\eps|} \]
and hence for $\si=\frac{1}{\ln t_0}$, $t_0\gl 1$ we get
\[ \|\phi_{\eps,\si}(\L)-\L\|\kl 2\beta \|\L\|+ \frac{1}{\ln t_0 |\ln\eps|} \pl .\]
c) Estimating the return time $t_0$ through the H\"ormander system may not be very concrete. Nevertheless, we only need to know $\|T_{1/2}:L_1(\mm_n)\to L_1(N\ssubset \mm_n)\|$ and the spectral gap of $\L$ to control $t_0$.
} \end{rem}
\begin{theorem}\label{predense} Let $\L$ be the generator of a semigroup of unital completely positive and self-adjoint maps $T_t=e^{-t\L}$ on $\Mz_m$. Then there exists a constant $\al(\L)$ such that for every $\eps>0$ there exists a generator $B_{\eps}$, obtained from functional calculus of $\L$, such that
\[ \|\L-B_{\eps}:L_2(\Mz_m)\to L_2(\Mz_m)\| \kl \eps \quad \mbox{and} \quad
\eps \al(\L) \Gamma_{I-E_N} \kl \Gamma_{B_{\eps}}
\pl .\]
Moreover, we have the estimate
\[ \al(\L) \gl \Big(2e\ln t_0 (\ln t_0+\|\L\|^2) \Big)^{-1} ,\]
where $t_0$ is the return time of $\L$.
\end{theorem}
\begin{proof} According to Lemma \ref{H1}, we know that $X=\{X_1,\cdots,X_r\}$ is a H\"ormander system. By construction, the corresponding Lie group representation satisfies $\pi(X_k)=ia_k$. Thus the proof of Theorem \ref{Horm} applies and shows that there exists a $t_0\gl e$, depending on $a_1,...,a_r$, such that $\Gamma_{I-T_t}\gl \frac{1}{2}\Gamma_{I-E_N}$ for $t\gl t_0$. Since $\Delta_{X}$ is ergodic, and the Lie algebra of $G$ is generated by $ia_1,...,ia_r$, we see that $\al_g(x)=x$ if $e^{ita_j}xe^{-ita_j}=x$ for all $j=1,...,r$. Thus the fixed point algebra is indeed the commutant $N=\{a_1,...,a_r\}'$, which is also the fixpoint algebra of $e^{-t\L}$. Now, we choose $\si=\frac{1}{\ln t_0}$ and deduce that for every $0<\eps_0<1$
\[ \|\phi_{\eps_0,\si}(\L)-\L\| \kl \frac{2|\ln t_0|+\|\L\|^2}{2|\ln \eps_0|} \]
and $\Gamma_{\phi_{\eps_0,\si}(\L)}\gl \frac{1}{2e \ln t_0 |\ln\eps_0|}\Gamma_{I-E}$. Thus we may choose $0<\eps_0<1$ such that ${|\ln\eps_0|=\frac{|\ln t_0|+\|\L\|^2/2}{\eps}}$ and obtain
\[ \Gamma_{\phi_{\eps_0,\si}(\L)}\gl \frac{\eps}{2e (\ln t_0) (\ln t_0+\|\L\|^2)}\Gamma_{I-E} \pl .\]
Thus $\al(\L)= \frac{1}{2e\ln t_0 (\ln t_0+\|\L\|^2)}$ does the job.
\qd
\begin{rem}{\rm We can improve the dependence in $\|\L\|$ using Remark \ref{tnot} b). We need $4\beta\|\L\|\lel \eps$ and $|\ln \eps_0|\gl \frac{2|\ln t_0|}{\eps}$ and
\[ |\ln \eps_0|\gl \frac{2}{\beta}|\ln \frac{\|\L\|}{2\beta}| \lel \frac{8\|\L\|}{\eps} |\ln \frac{2\|\L\|^2}{\eps}| \pl .\]
And then we obtain the non-linear estimate
\begin{align*}
\Gamma_{B_{\eps}}& \gl \frac{\eps}{2e (\ln t_0) (8\|\L\|+2\ln \|\L\|+|\ln \eps| +2\ln t_0)} \Gamma_{I-E} \pl .
\end{align*}
Note that $t_0=\frac{\ln c_0\|T_{}\|_{cb}}{\la_{\min}(\L)}$ only depends linearly on $1/\la_{\min}(\L)$. Hence, up to the cb-estimate, our estimate just depends on the minimal and maximal eigenvalue of $\L$.}\end{rem}
\begin{cor} The set of generators of unital completely positive self-adjoint semigroups on $\Mz_m$ satisfying $\Gamma \E$ is dense.
\end{cor}
\begin{rem} {\rm In \cite{Dan} it was shown that for every ergodic semigroup of completely positive trace preserving maps there exists an entanglement-breaking time $t_{EB}$ such that $T_t$ is entanglement-breaking for $t>t_{EB}$. A completely positive trace preserving map is called entanglement-breaking if its Choi matrix is a convex combination of tensor product density matrices. Our kernel estimate can be used to estimate this entanglement breaking time $t_{EB}$.
}
\end{rem}
\section{Geometric applications and deviation inequalities}
The aim of this section is to derive several concentration inequalities for semigroups satisfying FLSI in the non-ergodic and possibly infinite dimensional situation. The starting point is a version of Rieffel's quantum metric space. Let $T_t:M\to M$ be a semigroup of unital completely positive and self-adjoint maps and $A$ be the generator of $T_t$. As usual we will assume that $\A\subset {\rm dom}(A^{1/2})$ is a dense $^*$-algebra and invariant under $T_t$. On $M$ we define the Lipschitz norm via the gradient form,
\[ \|f\|_{Lip_{\Gamma}} \lel \max\{\|\Gamma(f,f)\|^{\frac{1}{2}}, \|\Gamma(f^*,f^*)\|^{\frac{1}{2}}\} \pl , \quad f\in \A\pl.\]
This induces a quantum metric on the state space by duality
\[ \|\rho\|_{\Gamma^*}
\lel \sup \{ |\tau(\rho f)| \pl | \pl E(f)=0\pl , \pl \|f\|_{Lip_{\Gamma}}\le 1\}\pl. \]
Usually such a Lipschitz norm is considered in the ergodic setting, where the fixpoint subalgebra is $N=\cz 1$ and hence the conditional expectation is given by $E(f)=\tau(f)$. Since $\rho(1)=1$ for states, one can assume the additional condition $E(f)=0$ when calculating the distance $d_{\Gamma}(\rho,\si)=\|\rho-\si\|_{\Gamma^*}$. This is crucial in the non-ergodic situation; see the last section of \cite{JRZ} for a more detailed discussion. Let $\delta:{\rm dom}(A^{1/2})\to L_2(\hat{M})$ be the derivation which implements the gradient form
\[ \Gamma(x,y) \lel E_M(\delta(x)^*\delta(y))\pl. \]
In the construction of a derivation in \cite{JRS} the following additional estimate was also proved.
\begin{equation}\label{JRS}
\|\delta(x)\|_{\hat{M}} \kl 2\sqrt{2} \max\{\|\Gamma(x,x)\|^{1/2},\|\Gamma(x^*,x^*)\|^{1/2}\}
\lel 2\sqrt{2}\|x\|_{Lip_\Gamma} \pl .
\end{equation}
\subsection{Wasserstein $2$-distance and transport inequalities}
In this part we review and extend the work of Carlen-Maas \cite{CM}, which adapted the Otto-Villani theory \cite{OV} to the non-ergodic self-adjoint setting. Following \cite{CM} we use the symbol
\[ [\rho](x) \lel \int_0^1 \rho^sx\rho^{1-s} ds \]
for the multiplier operator, and
\[ [\rho]^{-1}(x)\lel \int_0^{\infty} (\rho+t)^{-1}x(\rho+t)^{-1} dt \]
for the inverse. The need for the symmetric two-sided multiplication instead of multiplication with $\rho$ is a major difference between the commutative and the noncommutative setting. Let us recall a key formula which recovers the generator from the logarithm as follows:
\begin{align} \label{derr}
A(\rho) &= \delta^*\Big([\rho]\big(\ln \rho-\ln E(\rho)\big)\Big) \pl . \end{align}
Indeed, let us assume that $\rho$ and $x$ belong to $\mathcal{A}$ and write $\si=E(\rho)$. Using the operator integral $J_F$ for $F(x)=\ln(x)$, we deduce from $\delta(\ln \si)=0$ that
\begin{align*}
&\langle x, \delta^*[\rho](\ln \rho-\ln E(\rho))\rangle \lel \tau\Big(\delta(x^*)[\rho]\delta(\ln \rho )\Big)
-\tau\Big(\delta(x^*)[\rho]\delta(\ln \si)\Big) \\
&= \tau\Big(\delta(x^*)[\rho]J_F^{\rho}(\delta(\rho))\Big) \lel
\tau\Big(\delta(x^*)[\rho][\rho]^{-1}(\delta(\rho))\Big)\\
&= \tau(\Gamma(x,\rho)) \lel
\frac{1}{2} \Big(\tau(A(x^*)\rho)+\tau(x^*A(\rho))-\tau(A(x^*\rho))\Big) \lel
\tau(x^*A(\rho)) \pl .
\end{align*}
Here we used $A=A^*$ and $A(1)=0$. The expression $\ln \rho-\ln E(\rho)$ itself arises by differentiating the relative entropy $D_N(\rho)=D(\rho||E(\rho))$. Consider $g(t)=\rho+t\beta$ with a self-adjoint $\beta$. Using the derivation formula \eqref{derv} for $F(x)=x\ln x$ with derivative $F'(x)=1+\ln x$, we deduce from the tracial property that
\begin{align*}
\frac{d}{dt}D_N(\rho+t\beta)|_{t=0}&=
\frac{d}{dt}\tau(F(\rho+t\beta))-\frac{d}{dt}\tau(F(E(\rho+t\beta)))|_{t=0} \\&\lel \tau(F'(\rho)\beta)-\tau(F'(E(\rho))E(\beta)) \\
&\lel \tau(\beta)+\tau((\ln \rho)\beta)-\tau(E(\beta))-\tau((\ln E(\rho))E(\beta))\\& \lel
\tau((\ln \rho-\ln E(\rho))\beta) \pl .
\end{align*}
This means that the derivative of $D_N$ has Radon-Nikodym density with respect to the trace given by
\begin{equation}\label{totder}
\frac{ d D_N'(\rho)}{d\tau} \lel \ln \rho-\ln E(\rho)\pl.
\end{equation}
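In the commutative case all operators are diagonal and \eqref{totder} becomes a statement about probability densities, which can be tested by finite differences. The following sketch uses a hypothetical six-point probability space with uniform trace and block averaging as conditional expectation; the chosen density and direction are arbitrary.

```python
# Finite-difference sketch of the derivative formula (totder) in the
# commutative case: uniform trace tau(f) = mean(f), and E the conditional
# expectation averaging over the blocks {0,1,2} and {3,4,5}.
import math

tau = lambda f: sum(f) / len(f)

def E(f):
    m1, m2 = sum(f[:3]) / 3, sum(f[3:]) / 3
    return [m1] * 3 + [m2] * 3

def D_N(rho):
    # D(rho||E(rho)) = tau(rho ln rho) - tau(rho ln E(rho))
    return tau([r * (math.log(r) - math.log(e)) for r, e in zip(rho, E(rho))])

rho = [0.5, 1.2, 0.8, 1.6, 0.4, 1.5]       # positive density with tau(rho) = 1
beta = [0.3, -0.1, 0.2, -0.4, 0.25, 0.1]   # arbitrary self-adjoint direction

t = 1e-5
numeric = (D_N([r + t * b for r, b in zip(rho, beta)])
           - D_N([r - t * b for r, b in zip(rho, beta)])) / (2 * t)
exact = tau([(math.log(r) - math.log(e)) * b
             for r, e, b in zip(rho, E(rho), beta)])
assert abs(numeric - exact) < 1e-6
```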
In the following we will identify a normal state $\phi_\rho(x)=\tau(x\rho)$ of $M$ with its density operator $\rho$.
\begin{defi}\label{w2} Given a state $\rho\in M$, we define the weighted $L_2$-norm on $L_2(\hat{M})$ by the inner product
\[ \langle \xi,\eta\rangle_{\rho}:= \langle \xi,[\rho]\eta\rangle_{L_2(\hat{M})}
\lel \int_0^1 \tau_{\hat{M}}(\xi^* \rho^{1-s}\eta \rho^s) ds \pl .\]
\end{defi}
If $\rho$ is invertible and $\mu 1 \le \rho \le \mu^{-1}1$, we have
\[ \mu \langle \xi,\xi\rangle \kl
\langle \xi,\xi\rangle_{\rho} \kl \mu^{-1} \langle \xi,\xi\rangle \pl .\]
Hence for all invertible $\rho$, the weighted $L_2$-norm $\norm{\cdot}{\rho}$ is equivalent to the norm on $L_2(\hat{M})$. However, this change of metric is crucial in introducing the following Riemannian metric. Recall that $\mathcal{H}_\Gamma=\mathcal{H}$ is the $W^*$-submodule of $L_\infty^c(M\subset \hat{M})$ generated by $\delta(\mathcal{A})\mathcal{A}$.
\begin{lemma}\label{weakaprox} Let $\rho$ be a normal state of $M$. For $z\in M$, define
\[ \|z\|_{\Tan_{\rho}}
\lel \inf\{ \|\xi\|_{\rho} \pl | \pl \delta^*([\rho]\xi )=z\} \pl .\]
Here the infimum is taken over all $\xi\in \mathcal{H}$. Then there exists a sequence $a_n\in \mathcal{A}$ such that $\|\delta(a_n)\|_{\rho}\le \|z\|_{\Tan_{\rho}}$ and $z=\lim_n \delta^*([\rho]\delta(a_n))$ holds weakly.\end{lemma}
\begin{proof}[Proof of Lemma \ref{weakaprox}] Observe that for $x\in\mathcal{A}$, $[\rho](x)$ belongs to the closure of $\mathcal{H}$. We say that $\xi$ in $\mathcal{H}$ is divergence free if $\delta^*(\xi)=0$. Let $\xi_0$ be in the closure of $\mathcal{H}$ such that \[ \|z\|_{\Tan_\rho}^2= \|\xi_0\|_{\rho}^2\pl, \pl \delta^*([\rho]\xi_0)=z\pl.\]
Let $\xi$ be divergence free and write $\xi_{\eps} \lel \xi_0 + \eps [\rho]^{-1}(\xi)$. Then $\delta^*([\rho]\xi_{\eps})=z$, and hence
\[ \|\xi_0\|_\rho^2 \le \|\xi_0+\eps [\rho]^{-1}(\xi)\|_\rho^2
\lel \|\xi_0\|^2_{\rho}+2\eps {\rm Re}\lan \xi_0,[\rho]^{-1}(\xi)\ran_\rho+\eps^2\|[\rho]^{-1}(\xi)\|_{\rho}^2 \pl .\]
Since $\eps$ may have either sign and $\xi$ may be replaced by $i\xi$, we deduce $\lan \xi_0,[\rho]^{-1}(\xi)\ran_{\rho}\lel 0$ for all divergence free $\xi$. Equivalently, we find
\[ \tau(\xi_0^*\xi) \lel \tau(\xi_0^*[\rho][\rho]^{-1}(\xi)) \lel 0 \pl .\]
Let us rephrase this in terms of the gradient form,
\[ \langle a\ten b,c\ten d\rangle_{\Gamma}
\lel \tau(b^*\Gamma(a,c)d) \pl .\]
An element $\xi$ is divergence free if and only if
\[ \langle x\ten 1,\xi\rangle_{\Gamma}\lel 0 \]
for all $x$. Hence $\xi_0$ is orthogonal to the divergence free forms if and only if $\xi_0$ is in the closure of $\delta(\A)$. In other words there exists a sequence $a_n\in \A$ such that $\xi_0\lel \lim_n \delta(a_n)$. This implies
\[ \tau(b^*z)\lel \tau(\delta(b^*)[\rho]\xi_0)
\lel \lim_n \tau\Big(b^*\delta^*([\rho]\delta(a_n))\Big) \pl \]
for all $b\in \A$. That completes the proof.
\qd
\begin{rem} {\rm a) If $z$ is self-adjoint, we may use the fact that $\delta$ is $^*$-preserving to show that $\xi_0\in \hat{M}$ is also self-adjoint. Thus we may replace $a_n$ by their self-adjoint parts using the fact that $[\rho]$ preserves self-adjointness.\\
b) Since $A$ is self-adjoint, we know that the range of $A$ is dense in $(I-E)(L_2(M))$, the orthogonal complement of $L_2(N)$, and hence contained in the closure of $\delta^*(\delta(\A)\A)\subset L_2(N)^{\perp}$. In fact, the $L_2$-closure of $\delta^*(\delta(\A)\A)$ is exactly $(I-E)(L_2(M))$.
}
\end{rem}
In the following we denote by $\mathcal{H}_{\rho}$ the closure of $\delta(\mathcal{A})\mathcal{A}$ with respect to the $\norm{\cdot}{\rho}$ norm. $\mathcal{H}_{\rho}$ is viewed as the tangent space at the point $\rho$, and $\norm{\cdot}{\Tan_{\rho}}$ gives the Riemannian metric at $\rho$.
\begin{cor}\label{nnn} Let $\rho$ be a density operator of $M$. Then
\[ \|x\|_{\Gamma^*}\kl 2\sqrt{2}\|x\|_{\Tan_{\rho}} \pl .\] \end{cor}
\begin{proof} Let $a_n\in \mathcal{A}$ such that $\displaystyle \lim_{n\to \infty}\delta^*([\rho]\delta(a_n))=x$. We may assume that
\[ \|\delta(a_n)\|_{\rho}\kl (1+\eps)\|x\|_{\Tan_{\rho}} \pl \]
for a given $\eps>0$. Then we deduce that for $f\in \mathcal{A}$ we have
\begin{align*}
|\tau(f^*x)| &= \lim_n |\tau(f^*\delta^*([\rho]\delta(a_n)))|
\lel \lim_n |\tau(\delta(f)^*[\rho]\delta(a_n))|\\
&\le \limsup_n \|\delta(f)\|_{\rho} \|\delta(a_n)\|_{\rho} \kl (1+\eps)\|\delta(f)\|_{\rho} \|x\|_{\Tan_{\rho}} \pl .
\end{align*}
Furthermore, we deduce from the fact that the inclusion $M\subset \hat{M}$ is trace preserving that
\begin{align*}
\|\delta(f)\|_{\rho}^2
&=\tau(\delta(f)^*[\rho]\delta(f))
\lel \int_0^1 \tau(\delta(f)^*\rho^{1-s}\delta(f)\rho^s) ds \\
&\le \|\delta(f)\|_{\infty}^2 \int_0^1 \|\rho^{1-s}\|_{\frac{1}{1-s}}\|\rho^s\|_{\frac{1}{s}} ds \kl \|\delta(f)\|_{\infty}^2 \pl .
\end{align*}
Thus the estimate \eqref{JRS} implies the assertion, after sending $\eps$ to $0$.
\qd
Denote by $S(M)$ the set of normal states of $M$.
Let $F:S(M)\to \rz$ be a function defined (on an open dense subset) of the state space. We say that $F$ admits a \emph{gradient} $\grad_{\rho}F$ with respect to the tangent metric if for every $\rho$ there is a vector $\xi\in \mathcal{H}_{\rho}$ such that for every differentiable path $\rho:(-\eps,\eps)\to S(M)$ with $\rho(0)=\rho$
\[ \rho'(0) \lel \delta^*([\rho]\xi_0) \quad \Longrightarrow \quad
\frac{d}{dt}F(\rho(t))|_{t=0}
\lel \langle \xi,\xi_0\rangle_{\rho} \pl ,\]
and we write $\grad_{\rho}F=\xi$. Our control function is the relative entropy with respect to the fixpoint algebra \[F(\rho)=D_N(\rho)=D(\rho||E(\rho))\pl .\] Let $\rho:(-\eps,\eps)\to S(M)$ be a path. Using the directional derivative of $D_N$ from \eqref{totder}, we find that at $\rho=\rho(0)$
\begin{align*}
F'(\rho(0))
&= \tau\Big((\ln \rho-\ln E(\rho))\delta^*([\rho]\xi_0)\Big)
\lel \tau\Big(\delta(\ln \rho-\ln E(\rho))[\rho]\xi_0\Big) \lel
\lan\delta(\ln \rho),\xi_0\ran_{\rho} \pl .
\end{align*}
By definition that means
\begin{equation}
{\rm grad}_{\rho}D_N \lel \delta(\ln\rho) \pl .
\end{equation}
Note that in \cite{CM} the inner product with the modified multiplication was exactly designed to satisfy this property. Moreover, we find that the corresponding tangent direction in the dual of the state space is given by
\[ \delta^*([\rho]\grad_{\rho}F)
\lel \delta^*([\rho]\delta(\ln \rho))
\lel \delta^*([\rho][\rho]^{-1}\delta(\rho))\lel A(\rho) \pl .\]
A curve $\gamma$ in the state space is said to follow the path of \emph{steepest descent} with respect to $F$ if
\[ \frac{d}{dt}F(\gamma(t)) \lel -\|\grad_{\gamma(t)}F\|_{\gamma(t)}^2
\pl. \]
The right hand side $E(\rho)=\|\grad_{\rho}F\|_{{\rho}}^2$ is the energy function with respect to $F$. In our special case, $F=D_N$, we find \begin{align*}
E(\rho) &= \|\grad_{\rho}D_N\|_{\rho}^2
\lel \lan \delta(\ln \rho),\delta(\ln \rho)\ran_{\rho} \lel \lan[\rho]^{-1}\delta(\rho),[\rho][\rho]^{-1}\delta(\rho)\ran_{L_2(\hat{M})} \\
&= \tau(\delta(\rho)^* [\rho]^{-1}\delta(\rho)) \lel \tau(\rho\delta^*\delta(\ln \rho)) \lel
\tau(\rho A(\ln \rho))
\lel \tau(A(\rho)\ln\rho) \lel \I_A(\rho) \pl .
\end{align*}
This means the sub-Riemannian metric is chosen so that the Fisher information for $F=D_N$ satisfies
\[ \frac{d}{dt}D_N(T_t(\rho))|_{t=0}=-\I_A(\rho) \pl. \]
We summarize the above discussion as follows.
\begin{prop}\label{steep}
Suppose a differentiable curve $\gamma:(a,b)\to S(M)$ satisfies
\[ \gamma'(t)\lel -A(\gamma(t)) \pl .\]
Then the curve $\gamma$ follows the path of steepest descent with respect to $D_N$. In particular, the semigroup path $\gamma(t)=T_t(\rho)$ is a curve of steepest descent for $D_N$.
\end{prop}
\begin{lemma} Let $F(\rho)=D_N(\rho)$ and $E(\rho)=\I_A(\rho)$. Let $\rho:[0,\infty)\to S(M)$ be a path of steepest descent with respect to $F$ and $\la>0$. Then
\[ 2\la F(\rho(t))\kl E(\rho(t)) \pl \mbox{ for all } t \quad \mbox{implies} \quad F(\rho(t))\kl e^{-2\la t}F(\rho(0)) \pl .\]
\end{lemma}
\begin{proof} According to the above discussion, we have
\[\rho'(t)=-A(\rho(t))=-\delta^*\Big([\rho(t)]\grad_{\rho(t)}F\Big)\pl, \quad F'(t)=\langle \grad_{\rho(t)}F, -\grad_{\rho(t)}F\rangle_{\rho(t)}\lel -E(\rho(t))\pl ,\]
where we abbreviate $F(t)=F(\rho(t))$. Then our assumption implies that
\[ F'(t)\kl -2\la F(t) \]
and hence $F(t)\kl e^{-2\la t}F(0)$ by Gr\"onwall's Lemma.\qd
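The Gr\"onwall mechanism behind this lemma can be illustrated by a scalar toy gradient flow: any flow $F'(t)=-E(t)$ whose energy dominates $2\la F(t)$ decays at least like $e^{-2\la t}$. The surplus rate, step size and horizon below are arbitrary choices for this sketch.

```python
# Scalar illustration of the Groenwall step: a gradient flow F'(t) = -E(t)
# with E(t) >= 2*lam*F(t) decays at least like exp(-2*lam*t)*F(0).
import math

lam, h, T = 0.7, 1e-4, 3.0
F, t = 1.0, 0.0
while t < T:
    energy = (2 * lam + 0.3) * F      # any energy dominating 2*lam*F
    F -= energy * h                   # explicit Euler step of steepest descent
    t += h
assert 0 < F <= math.exp(-2 * lam * T)
```

Since $1-x\kl e^{-x}$, the Euler iteration stays below the continuous-time bound, so the assertion holds for any step size $h$ here.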
The standard Riemannian distance on $S(M)$ associated with our metric is given by
\[ d_{A,2}(\rho,\si) \lel \inf\{ L(\gamma): \gamma(0)=\rho, \gamma(1)=\si\}\]
where the infimum runs over all piecewise smooth curves and the length function is defined by
\[ L(\gamma) \lel \int_0^1 \|\gamma'(t)\|_{\gamma(t)} dt \pl .\]
Thanks to Corollary \ref{nnn} and the definition, we still have the distance estimate
\[ \|\rho-\si\|_{\Gamma^*}\kl 2\sqrt{2} \pl d_{A,2}(\rho,\si) \pl .\]
The following result follows similarly from \cite[Theorem 8.7]{CM} using the path of steepest descent and the relative entropy. Note that in \cite{CM} the generalized log-Sobolev inequality is defined with constant $2\la$.
\begin{theorem}\label{CM} The $\la$-FLSI inequality
\[ \la D(\rho||E(\rho)) \kl \I_{A}(\rho) \]
implies
\[ d_{A,2}(\rho,E(\rho))\kl 2\sqrt{\frac{D(\rho||E(\rho))}{\la}} \pl .\]
\end{theorem}
We have the following corollary using the $\Gamma$ Lipschitz distance.
\begin{cor} $\la$-FLSI implies
\[ \|\rho_1-E(\rho_1)\|_{\Gamma^*}\kl 4 \sqrt{\frac{2D(\rho_1||E(\rho_1))}{\la}} \]
and
\[ \|\rho_1-\rho_2\|_{\Gamma^*}
\kl 4\sqrt{2}\Big(\sqrt{\frac{D(\rho_1||E(\rho_1))}{\la}}+\sqrt{\frac{D(\rho_2||E(\rho_2))}{\la}}\Big) \pl . \]
\end{cor}
\begin{proof} The first inequality is just a combination of Corollary \ref{nnn} and Theorem \ref{CM}. For the second inequality, we observe that $\|E(\rho)\|_{\Gamma^*}=0$, and hence the triangle inequality implies
\begin{align*} \|\rho_1-\rho_2\|_{\Gamma^*}
&\le \|\rho_1-E(\rho_1)\|_{\Gamma^*}+\|\rho_2-E(\rho_2)\|_{\Gamma^*} \pl . \qedhere
\end{align*}
\qd
\begin{rem}\label{geoT}{\rm Let $e$ be a projection. Then $\rho_e=\frac{e}{\tau(e)}$ is the normalized state which satisfies
\[ D(\rho_e||E(\rho_e))\kl \tau(\rho_e\ln \rho_e)\kl -\ln \tau(e) \pl .\]
Let $e_1$, $e_2$ be projections and assume that there exists a self-adjoint $y$ such that
\[ h\kl |\frac{\tau(e_1y)}{\tau(e_1)}-\frac{\tau(e_2y)}{\tau(e_2)}| \quad \mbox{and}\quad \|\Gamma(y,y)\|\le 1 \pl.\]
Then we find the \emph{geometric} version of Talagrand's inequality (see \cite{TA} and also \cite{JZ})
\[ \tau(e_1)\tau(e_2) \kl e^{-\frac{\la h^2}{64}} \pl .\]
Indeed, this follows from the triangle inequality
\[ h\kl \|\rho_{e_1}-\rho_{e_2}\|_{\Gamma^*}\kl
4\sqrt{2}\la^{-\frac{1}{2}} (\sqrt{-\ln\tau(e_1)}+\sqrt{-\ln \tau(e_2)})
\kl 8\la^{-\frac{1}{2}}\sqrt{-\ln\tau(e_1)-\ln\tau(e_2)} \pl .\]
The constant $64$ is probably not optimal in general.
}
\end{rem}
\subsection{Wasserstein $1$-distance and concentration inequalities}
In \cite{JZ} the commutative characterization of Wasserstein-Entropy estimates in terms of concentration inequalities was extended to the noncommutative setting. In the non-ergodic setting, we have the following result.
\begin{theorem}\label{characW} Let $(M,\tau)$ be a finite von Neumann algebra and $(T_t)$ be a self-adjoint semigroup of completely positive trace reducing maps. Let $N$ be the fixed-point subalgebra. Then the following conditions are equivalent
\begin{enumerate}
\item[i)] There exists a constant $C_1>0$ such that for all $p\gl 2$
\[ \|f\|_{L_{\infty}^p(N\subset M)}\kl C_1 \sqrt{p} \|f\|_{Lip_{\Gamma}}
\pl ; \]
\item[ii)] There exists a constant $C_2>0$ such that for all normal states $\rho$
\[ \|\rho\|_{\Gamma^*}\kl C_2 \sqrt{D(\rho||E(\rho))} \pl.\]
\end{enumerate}
\end{theorem}
In the following we say that $(T_t)$ or its generator $A$ \emph{satisfies $\la$-WA$_1$} if
\[ \|\rho\|_{\Gamma^*}\kl 2\sqrt{2} \sqrt{\frac{D(\rho||E(\rho))}{\la}} \pl .\]
Note that the factor $2\sqrt{2}$ is chosen so that $\la$-FLSI implies $\la$-WA$_1$ (via $\la$-TA$_2$).
\begin{proof} Fix $\frac{1}{p}+\frac{1}{p'}=1$. Recall that the relative $p$-R\'{e}nyi entropy $D_p(\rho||\si) \lel p'\ln\|\si^{-1/2p'}\rho\si^{-1/2p'}\|_p$ is monotone over $p\in (1,\infty]$ and hence $\displaystyle D_N^p(\rho)=\inf_{\si\in N, \tau(\si)=1} D_p(\rho||\si)$ satisfies
\[ D(\rho||E(\rho))\kl D_N^p(\rho)\kl p'\ln \|\rho\|_{L_1^p(N\subset M)}\kl \frac{p'}{e\eps} \|\rho\|_{L_1^p(N\subset M)}^{\eps} \pl \]
for any $\eps>0$. Therefore, we deduce from ii) that
\[ \|\rho\|_{\Gamma^*} \kl C\sqrt{\frac{p'}{\eps}} \|\rho\|_1^{1-\frac{\eps}{2}} \|\rho\|_{L_1^{p}}^{\frac{\eps}{2}} \pl .\]
For $\rho=\rho^*\in L_1^p(N\subset M)$, we note the elementary inequality
\[ \|\rho^+\|_{L_1^p(N\subset M)} \kl \|\rho\|_{L_1^p(N\subset M)} \pl .\]
Thus for arbitrary $\rho=\rho_1+i\rho_2$, we deduce that
\[ \max_{j,\pm} \|\rho_j^{\pm}\|_1^{1-\frac{\eps}{2}} \|\rho_j^{\pm}\|_{L_1^p(N\subset M)}^{\frac{\eps}{2}} \kl 4
\|\rho\|_1^{1-\frac{\eps}{2}} \|\rho\|_{L_1^p(N\subset M)}^{\frac{\eps}{2}} \pl .\]
In other words condition ii) implies that
\[ \|\rho-E_N(\rho)\|_{\Gamma^*}\lel
\|\rho\|_{\Gamma^*} \kl 4C \sqrt{\frac{p'}{\eps}}
\|\rho\|_1^{1-\frac{\eps}{2}} \|\rho\|_{L_1^p(N\subset M)}^{\frac{\eps}{2}} \pl .\]
Now, we may use real interpolation theory \cite{BL} and duality to deduce
\begin{equation} \label{ep}
\|f-E_N(f)\|_{[L_{\infty},L_{\infty}^{p'}(N\subset M)]_{\eps/2,\infty}}
\kl4C \sqrt{\frac{p'}{\eps}} \|f\|_{Lip_{\Gamma}} \pl .
\end{equation}
In particular, this is true for $\eps=\frac12$. Since $\tau(1)=1$, we deduce that the inclusion
\[ [L_{\infty}(M),L_{\infty}^{p'}(N\subset M)]_{1/4,\infty}
\subset [L_{\infty}(M),L_{\infty}^{p'}(N\subset M)]_{1/2,1} \subset [L_{\infty}(M),L_{\infty}^{p'}(N\subset M)]_{1/2} \]
is of norm bounded by a universal constant $c_0$. Thanks to \cite{JP}, we have
\[[L_{\infty}(M),L_{\infty}^{p'}(N\subset M)]_{1/2}=L_{\infty}^{2p'}(N\ssubset M)\pl.\] Let $q=2p'$, then we deduce
\[ \|f\|_{L_{\infty}^q(N\subset M)} \kl 4c_0C \sqrt{2p'}
\|f\|_{Lip_{\Gamma}} \lel c_1 C \sqrt{q}\|f\|_{Lip_{\Gamma}} \]
for a universal constant $c_1=4\sqrt{2}c_0$. This means $ii)$ implies $i)$ with constant $c_1C$. Conversely, we deduce from i), and again \cite{JP} that for $\frac{1}{q}=\frac{\eps}{2s}$
\[ L_{\infty}^{q}(N\subset M)
\lel [L_{\infty}(M),L_{\infty}^{s}(N\subset M)]_{\eps/2}
\subset [L_{\infty}(M),L_{\infty}^p(N\subset M)]_{\eps/2,\infty} \pl .\]
This means i) implies that
\[ \|f\|_{[L_{\infty}(M),L_{\infty}^s(N\subset M)]_{\eps/2,\infty}} \kl C \sqrt{q}\|f\|_{Lip_{\Gamma}}
\lel \sqrt{2}C \sqrt{\frac{s}{\eps}} \|f\|_{Lip_{\Gamma}} \pl .\]
By duality we deduce for $s=p'$ that
\[ \|\rho\|_{\Gamma^*}\kl \sqrt{2e}C
\sqrt{\frac{p'}{e\eps}} \|\rho\|_1^{1-\frac{\eps}{2}}
\|\rho\|_{L_1^p(N\subset M)}^{\frac{\eps}{2}} \pl .\]
Thus for a state $\rho$ such that $D(\rho||E(\rho))<\infty$, we may choose $\eps=(\ln \|\rho\|_{L_1^p(N\subset M)})^{-1}$ and obtain that
\[ \|\rho\|_{\Gamma^*}\kl \sqrt{2e}C \sqrt{p'\ln \|\rho\|_{L_1^p(N\subset M)}} \pl .\]
By sending $p\to 1$, we deduce ii). \qd
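The elementary bound $\ln x\kl \frac{x^{\eps}}{e\eps}$ used at the beginning of the proof can be confirmed numerically; equality holds at the critical point $x=e^{1/\eps}$. The grid below is an arbitrary choice for this sketch.

```python
# Numerical sketch of ln(x) <= x**eps / (e*eps), used to pass from the Renyi
# entropy D_N^p to the amalgamated L_1^p-norm.  Equality at x = e**(1/eps).
import math

for eps in (0.1, 0.5, 1.0, 2.0):
    for k in range(1, 400):
        x = 1.0 + 0.25 * k
        assert math.log(x) <= x ** eps / (math.e * eps) + 1e-12
    # equality at the critical point
    x = math.e ** (1 / eps)
    assert abs(math.log(x) - x ** eps / (math.e * eps)) < 1e-9
```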
\begin{rem}\label{jz}
{\rm It was proved in \cite{JZ} that $\|\rho\|_{\Gamma^*}\kl C \sqrt{\Ent(\rho)}$ is equivalent to \[ \|f-E(f)\|_{L_p(M)}\kl C' \sqrt{p}\|f\|_{\Gamma} \pl .\]
In that sense the estimate with respect to $D_N$ is significantly stronger, because the inclusion ${L_{\infty}^p(N\subset M)\subset L_p(M)}$ is contractive for all finite $M$.}
\end{rem}
\begin{lemma}\label{fcharac} For a positive density $\rho$ we have
\[ D(\rho||E(\rho))\lel \sup_\si \tau(\rho (\ln \si-\ln E(\si)))\]
where the supremum is taken over all strictly positive densities $\si$.
\end{lemma}
\begin{proof}
Using the convexity of $F(\rho)=D(\rho||E(\rho))=\lim_{p\to 1} \frac{\|\rho\|_{L_1^p(N\subset M)}-1}{p-1}$, we know that
\[ F(\rho)\gl F(\si)+ F'(\si)(\rho-\si) \pl .\]
In the previous section, we observed in \eqref{totder} that the total derivative is \[F'(\si)(\beta)=\tau\Big(\beta(\ln \si-\ln E_N(\si))\Big)\pl.\] Hence we obtain after cancellation that
\[ F(\rho)\gl \tau(\rho(\ln \si-\ln E_N(\si))) \pl .\]
Obviously, we have equality for $\si=\rho$ in case of a state, and hence homogeneity implies the assertion. Note also that we may replace $\si$ by $\si+\delta 1$ to guarantee that the condition is well-defined. The extra scaling factor $\tau(\si)+\delta$ cancels thanks to the logarithm.
\qd
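As a sanity check, the variational formula can be verified in the commutative toy model of a six-point probability space with block averaging as conditional expectation; the trial densities below are arbitrary strictly positive states.

```python
# Commutative sketch of Lemma fcharac: si -> tau(rho(ln si - ln E(si)))
# is maximized at si = rho, where it equals D(rho||E(rho)).
import math

tau = lambda f: sum(f) / len(f)

def E(f):
    m1, m2 = sum(f[:3]) / 3, sum(f[3:]) / 3
    return [m1] * 3 + [m2] * 3

def pairing(rho, si):
    return tau([r * (math.log(s) - math.log(e))
                for r, s, e in zip(rho, si, E(si))])

rho = [0.5, 1.2, 0.8, 1.6, 0.4, 1.5]
D = pairing(rho, rho)                      # D(rho||E(rho)), attained at si = rho
trials = ([1.0] * 6,
          [0.2, 0.4, 2.4, 1.0, 1.0, 1.0],
          [2.0, 0.5, 0.5, 1.5, 0.5, 1.0])
for si in trials:
    assert pairing(rho, si) <= D + 1e-12
```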
\begin{prop} Let $(T_t)$ be a semigroup as in Theorem \ref{characW}. The condition
\begin{enumerate}
\item[iii)] There exists a $c>0$ such that
\[ E_N(e^{tf})\le e^{ct^2} \]
for all self-adjoint $f$ with $\Gamma(f,f)\le 1$ and $t>0$.
\end{enumerate}
implies $\la$-WA$_1$ for some $\la$. If in addition $N$ is contained in the center of $M$, then {\rm iii)} is equivalent to WA$_1$. \end{prop}
\begin{proof} Let us assume that iii) holds and that $f=f^*$ satisfies $\Gamma(f,f)\le 1$. We define $\rho=\frac{e^{tf}}{\tau(e^{tf})}$ and deduce that for every state $\psi$ (thanks to the cancellation of the scalar factor)
\begin{align*}
D(\psi||E(\psi))&\gl \tau(\psi (\ln \rho-\ln E(\rho))) \lel \tau(\psi (tf-\ln E(e^{tf}))) \pl .
\end{align*}
This implies
\[ \tau(\psi f) \le \frac{D(\psi||E(\psi))}{t}+
\frac{\tau(\psi \ln E(e^{tf}))}{t}
\kl \frac{D(\psi||E(\psi))}{t}+ct \pl .\]
Now we may choose $t=\sqrt{\frac{D(\psi||E(\psi))}{c}}$ to deduce the condition ii) in Theorem \ref{characW} with constant $C=2\sqrt{c}$. For the converse, we assume that $N$ is in the center, $f=f^*$ and $\Gamma(f,f)\le 1$ and $E(f)=0$. Then we deduce from condition i) that
\[ \tau(\si f^p)\le \tau(\si|f|^p) \lel \|\si^{1/2p}f\si^{1/2p}\|_p^p \kl (C\sqrt{p})^p \pl \]
for all $\si \in N_+$, $\tau(\si)=1$.
$E(f)=0$ implies that the first order term in the exponential expansion vanishes and hence
\begin{align*}
\tau(\si E(e^{tf}))
&\le 1+\sum_{k\gl 2} \frac{(Ct)^k \sqrt{k}^k}{k!} \kl
1+ \sum_{k\gl 2} \frac{(Cet)^k}{k^{k/2}}
\kl 1+\sum_{j=1}^{\infty} \frac{(KCet)^{2j}}{j!} \pl .
\end{align*}
Here we use that for $k=2j$ we have $(2j)^{\frac{2j}{2}}\gl 2^jj^j\gl j!$. A slightly more involved estimate works for $k=2j-1$, $j\gl 2$ and leads to the constant $K$.
\qd
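The two elementary estimates used for the series, $k!\gl (k/e)^k$ and $k^{k/2}=(2j)^j=2^jj^j\gl j!$ for $k=2j$, are easily confirmed numerically:

```python
# Elementary bounds behind the series estimate above:
#   k! >= (k/e)^k   and, for even k = 2j,   k^{k/2} = (2j)^j = 2^j * j^j >= j!.
import math

for k in range(1, 80):
    assert math.factorial(k) >= (k / math.e) ** k
for j in range(1, 40):
    k = 2 * j
    assert k ** (k // 2) == (2 * j) ** j == 2 ** j * j ** j
    assert (2 * j) ** j >= math.factorial(j)
```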
Let us recall the definition of the Orlicz space $L_{\Phi}(M,\tau)$ of a Young function $\Phi$ via the Luxemburg norm
\[ \|x\|_{L_{\Phi}}\lel \inf\{ \nu \pl |\pl \tau(\Phi(\frac{|x|}{\nu}))\le 1\} \pl .\]
It is well-known that for the convex function $\Exp_2(t)=e^{t^2}-1$, we have
\[ \|x\|_{L_{\Exp_2}} \sim \sup_{p\gl 2} \frac{\|x\|_p}{\sqrt{p}} \pl .\]
\begin{cor}\label{wap} Assume that the generator $A$ satisfies $\la$-WA$_1$. Then
\[ \|f\|_{L_{\Exp_2}}\kl K \la^{-2} \|f\|_{Lip_\Gamma} \pl \]
holds for some universal constant $K$.
\end{cor}
Indeed, we have two ways of proving this. By the H\"older inequality we have a contraction
\[ L_{\infty}^p(N\subset M) \subset L_p(M) \pl .\]
On the other hand, we note that $\la$-WA$_1$ implies the condition $\la$-WA$_1'$ (which still implies the geometric Talagrand inequality)
\[ \|\rho\|_{\Gamma^*}\kl 2 \sqrt{\frac{\Ent(\rho)}{\la}} \pl .\]
Then Remark \ref{jz} also implies Corollary \ref{wap}.
\begin{rem}{\rm Similar concentration inequalities for a fixed state $\si$ can be found in \cite{DR}. They deduced an estimate for $\tau(\si e^{f})$ using the gradient norm of $\si^{1/2}f\si^{-1/2}$. Here we also need information for $\si^{-1}$, unless $N$ is central.
}\end{rem}
In \cite{JRZ} the cb-version of having finite diameter was used for approximation by finite dimensional systems. It is shown that the famous rotation algebras $A_{\theta}$ have finite $cb$-diameter. Let us recall that for the intrinsic metric ${\|f\|_{Lip_{\Gamma}}\simeq\|
\delta(f)\|}$ one can define a natural operator space structure as intersection of a column and a row space in a Hilbert $C^*$-module, or cb-equivalently as a subspace $\delta({\rm dom}(A^{1/2}))\subset \hat{M}$. Thus it makes sense to say that $(\A,\|\pl\|_{Lip_{\Gamma}},M)$ has \emph{finite} cb-diameter $D_{cb}$ if
\[ \|I-E_N: (\A,\|\pl\|_{Lip_\Gamma})\to M\|_{cb} \kl D_{cb} \pl .\]
\begin{cor} Let $A$ be a generator of a self-adjoint semigroup on a finite von Neumann algebra $M$. If $(\A,\|\pl\|_{Lip_{\Gamma}})$ as a quantum metric space has finite cb-diameter, then WA$_1$ holds for $A\ten id_{\Mz_m}$ for all $m\in \nz$ and for $A\ten id_{\tilde{M}}$ for any finite von Neumann algebra $\tilde{M}$.
\end{cor}
\begin{proof} We just have to note that the inclusion
\[ L_{\infty}(M\bar{\ten} \tilde{M})\subset L_\infty^{p}(N\bar{\ten}\tilde{M}\subset M\bar{\ten} \tilde{M}) \]
is a contraction. In particular, the $L_{\infty}^p$-norm of $f-E_N(f)$ is dominated by $2D_{cb}\sqrt{p}\,\|f\|_{Lip_{\Gamma}}$. Then Theorem \ref{characW} implies the assertion.
\qd
\begin{rem}
{\rm We see that both conditions $\la$-CLSI and $D_{cb}<\infty$ imply $\la$-WA$_1$ on all matrix levels, a property we will call $\la$-CWA$_1$. Note that according to Remark \ref{geoT}, $\la$-CWA$_1$ implies the geometric Talagrand inequality on all matrix levels, which we will call matrical Talagrand inequality.}
\end{rem}
Let $(\M,g)$ be a $d$-dimensional compact Riemannian manifold with sub-Laplacian $\Delta_X$ and sub-Riemannian (or Carnot-Caratheodory) metric $d_X$ induced by a H\"ormander system $X$. This gives a corresponding gradient form
\[ \Gamma_X(f,f) \lel \sum_{j=1}^k |{X_j}(f)|^2 \pl .\]
For matrix valued functions $f:\M \to \mm_m$, the natural operator space structure is given by
\[ \|f\|_{\Mz_m(Lip_{\Gamma})}
\lel \max \{ \|\sum_j |{X_j}(f)|^2\|^{1/2},
\|\sum_j |{X_j}(f)^*|^2\|^{1/2} \} \pl .\]
Thanks to Voiculescu's inequality this is equivalent to
\[ \|f\|_{\Mz_m(Lip_{\Gamma})} \sim \|\sum_j g_j \ten {X_j}(f)\| \]
where the $g_j$ are freely independent semicircular (or circular) random variables (\cite{Po}). For matrix valued functions it is therefore better to use the free Dirac operator $D=\sum_j g_j \ten {X_j}$ together with the Laplace-Beltrami operator,
in contrast to the spin Dirac operator $D=\sum_j c_j\ten X_j$ which is more common in noncommutative geometry \cite{NCG}. Let us now consider a manifold $\M$ with finite diameter $\diam_X(\M)=\sup_{x,y} d_X(x,y)$ and a normalized volume form $\mu$. Here $d_X$ is the Carnot-Caratheodory distance given by the H\"ormander system (see \cite{ABB}). Let $f:\M\to M$ be an $M$-valued Lipschitz function. Let $h,k\in L_2(M)$. Then $f_{h,k}(x)=(h,f(x)k)$ is a complex valued function and hence (following Connes \cite{NCG})
\begin{align*}
&|(h,(f(x)-\ez_\M f)(k))|
\lel |f_{h,k}(x)-\int_\M f_{h,k}(y)d\mu(y)| \kl \int_\M |f_{h,k}(x)-f_{h,k}(y)| d\mu(y) \\
&\le \int_\M \sup_z (\sum_{j} |{X_j}f_{h,k}(z)|^2)^{1/2} \pl d_X(x,y) d\mu(y) \\
&\le \diam_X(\M) \sup_z (\sum_{j} |{X_j}f_{h,k}(z)|^2)^{1/2}
\lel \diam_X(\M) \sup_z (\sum_j |\lan h,{X_j}(f)(z)k\ran|^2)^{1/2}
\\&\kl\diam_X(\M) \|h\| \|k\|
\|\sum_j |{X_j}(f)|^2\|^{1/2} \pl .\end{align*}
Actually, the inequality $|f(x)-f(y)|\le \|f\|_{Lip}d_X(x,y)$ follows directly from the definition of the distance using connecting paths. Therefore we have shown the following easy fact:
\begin{lemma}\label{cbmetric} Let $\Delta_X$ be the sub-Laplacian on $\M$ given by a H\"ormander system $X$. Then
\[ D_{cb}(\Delta_X) \kl \diam_X(\M) \pl .\]
\end{lemma}
\begin{theorem}\label{HWA} Let $X$ be a H\"ormander system on a connected compact Riemannian manifold. Then $\Delta_X$ satisfies CWA$_1$.
\end{theorem}
\begin{proof} According to the Chow-Rashevskii theorem (see \cite[Theorem 3.29]{ABB} and \cite{Rash}), the Carnot-Caratheodory distance
$d:\M\times \M\to \rz$ is continuous with respect to the original topology of the Riemannian metric. Thus, by compactness $\diam_X(\M)$ is finite. Then Lemma \ref{cbmetric} implies the assertion.
\qd
\begin{cor} Let $\L$ be the generator of a self-adjoint semigroup in $\Mz_m$. Then $\L$ satisfies CWA$_1$.
\end{cor}
\begin{proof} According to Lemma \ref{H1}, we find a connected compact Lie group $G$ and a generating set $X$ of $\mathfrak{g}$ such that the transference principle \ref{Ho2} applies. That is, via the co-representation $\pi(x)(g)=u(g)xu(g)^{-1}$, $e^{-t\L}$ is a sub-dynamical system of $e^{-t\Delta_X}\ten id_{\Mz_m}$. According to Theorem \ref{HWA}, we know that $\Delta_X$ has $\la(X)$-CWA$_1$ for some constant $\la(X)$, and hence $\L$ inherits this property (compare to \ref{transf}). \qd
\begin{rem}{\rm
We conjecture that on compact Riemannian manifolds the Laplace-Beltrami operator satisfies $\la$-CLSI. However, since $\Gamma\E$ fails in general, new techniques will be needed to approach this problem.
}
\end{rem}
\section{Examples and counterexamples}
\subsection{Data processing inequality and free products}
For the sake of completeness let us show that if two generators $A_1$ and $A_2$ of selfadjoint semigroups satisfy $\la$-CLSI (in its von Neumann algebra version), then $A=A_1\ten id+id\ten A_2$ also satisfies $\la$-CLSI. Indeed, let $\rho \in L_2(M_1\ten M_2)$ and let $E_1$, $E_2$ be the conditional expectations onto the fixpoint algebras. Then we deduce from the data processing inequality that
\begin{align*}
D(\rho||E_1\ten E_2(\rho))
&= \tau(\rho\ln \rho)-\tau(\rho\ln E_1\ten E_2(\rho))\\
&= \tau(\rho\ln \rho)-\tau(\rho\ln E_1\ten id(\rho))
+\tau(E_1\ten id(\rho)\ln E_1\ten id(\rho))-\tau(E_1\ten id(\rho)\ln E_1\ten E_2(\rho))\\
&=D(\rho||E_1\ten id(\rho))+D(E_1\ten id(\rho)||E_1\ten E_2(\rho))\\
&\le D(\rho||E_1\ten id(\rho))+D(\rho||id\ten E_2(\rho))\\
&\le \I_{A_1\ten id}(\rho)+\I_{id\ten A_2}(\rho)
\lel \I_{A}(\rho) \pl .
\end{align*}
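In the commutative case the splitting of the relative entropy in the second equality can be checked directly. The following Python sketch is illustrative only: the conditional expectations average over one coordinate of a product set, and $D$ is taken with respect to the normalized counting measure.

```python
from math import log
from random import random, seed

# Commutative sanity check of
# D(p || E1 (x) E2(p)) = D(p || E1 (x) id(p)) + D(E1 (x) id(p) || E1 (x) E2(p)).
seed(1)
n1, n2 = 4, 5
p = [[random() + 0.1 for _ in range(n2)] for _ in range(n1)]

# E1 (x) id: average over the first coordinate; E1 (x) E2: overall average.
E1 = [[sum(p[i][j] for i in range(n1)) / n1 for j in range(n2)] for _ in range(n1)]
E12 = [[sum(map(sum, p)) / (n1 * n2)] * n2 for _ in range(n1)]

def D(a, b):  # relative entropy for the normalized counting measure
    return sum(a[i][j] * (log(a[i][j]) - log(b[i][j]))
               for i in range(n1) for j in range(n2)) / (n1 * n2)

lhs = D(p, E12)
rhs = D(p, E1) + D(E1, E12)
```

Both sides agree up to rounding, which is exactly the modularity of the conditional expectations used above.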
Following the lead of \cite{JZ}, we have performed fairly extensive research on the stability of different conditions with respect to free products. The details would add more length to this paper, but let us record some facts:
\begin{enumerate}
\item[i)] Assume that $A_j$ and $B_j$ have the same fixpoint algebras. Then $\Gamma_{A_j}\le \Gamma_{B_j}$ implies that the generators $A(n)$ and $B(n)$ of the free amalgamated products $e^{-tA(n)}=\ast_{j=1}^n e^{-tA_j}$ and $e^{-tB(n)}=\ast_{j=1}^n e^{-tB_j}$ still satisfy $\Gamma_{A(n)}\le \Gamma_{B(n)}$.
\item[ii)] For $A_j=(id-E)$ we find that $A(n)$ is the generator of the block-length semigroup. For fixed $\nen$, we have
\[ \Gamma_{id-E}\kl n \Gamma_{A(n)}\pl. \]
\item[iii)] Thus $\la \Gamma_{id-E_j}\le \Gamma_{A_j}$ implies that for an $n$-fold free product, we have
\[ \frac{\la}{n}\Gamma_{I-E}\kl \Gamma_{A(n)} \pl .\]
Since this holds with amalgamation, we deduce $\frac{\la}{n}$-$\Gamma\E$ for $n$-fold free products of semigroups satisfying $\la$-$\Gamma\E$.
\item[iv)] The free group in $m$ generators satisfies $\frac{1}{3m}$-$\Gamma\E$ and $\frac{1}{3m}$-CLSI.
\end{enumerate}
\begin{conc} Although beyond the scope of this paper, let us note that infinite free and infinite tensor products of complete graphs, or more generally subordinated sublaplacians on compact manifolds give new infinite dimensional examples of von Neumann algebras satisfying Talagrand's Wasserstein estimate TA$_2$ and CLSI, but not necessarily $\Gamma\E$, which remains true for finite free products.
\end{conc}
Let us briefly sketch some arguments for a reader familiar with free probability, see \cite{VDN}. Let $N\subset M_j$ be finite von Neumann algebras with trace preserving conditional expectation $E$. According to \cite{Boc} a family $T_t^j:M_j\to M_j$ which leaves $N$ invariant can be extended to the free product with amalgamation $\M=\ast^j_{N}M_j$ via
\[ T_t(a_1\cdots a_{m})\lel T_t^{i_1}(a_1)\cdots T_t^{i_m}(a_m) \]
provided $a_j\in M_{i_j}$ and $i_1\neq i_2\neq \cdots \neq i_m$. We refer to \cite{VDN} for the definition and general facts on amalgamated free products. In the following we assume that each semigroup $T_t^j=e^{-tA_j}$ consists of selfadjoint trace preserving completely positive maps.
\begin{proof}[Proof of i)] For simplicity of notation we will assume that all the algebras are the same, and all the generators $A=A_j$ and $B=B_j$ are the same. Our first task is to identify the module for the free product.
Let $\delta_{A}$ be the derivation. Then we observe that $\delta_{A_j}(xb)=x\delta_{A_j}(b)$ holds for $x\in N$.
For a word $\om=a_1\cdots a_m$ so that $a_j\in A_{i_j}$ we define the vectors $\xi^l=(e_{i_1},...,e_{i_l})\in \ell_2(\nz^l)$ and
\[ v(\om)
\lel \sum_{l=1}^{m} \xi^l \ten
u_N(a_1\ten \cdots \ten a_{l-1})\ten \delta_{A}(a_{l})a_{l+1}\cdots a_m \pl .\]
To explain the cancellation let us consider $\om=b_1^*b_2^*$ with $b_k\in A_{l_k}$,
and $\om'=a_1a_2a_3$ with $a_k\in A_{r_k}$. We use the notation $a^{\circ}$ for the mean $0$ part. We have the following decomposition into mean $0$ words.
\begin{align*}
\om^*\om' &= b_2b_1a_1a_2a_3
\lel b_2(b_1a_1)^{\circ}a_2a_3+b_2E(b_1a_1)a_2a_3 \\
&= b_2(b_1a_1)^{\circ}a_2a_3+ (b_2E(b_1a_1)a_2)^{\circ}a_3 + E(b_2E(b_1a_1)a_2)a_3 \pl .
\end{align*}
Of course for $l_1\neq r_1$ the two additional terms vanish. Only if $l_1=r_1$ and $l_2=r_2$ do we really find $3$ terms. This implies
\begin{align*}
A(\om^*\om')
&= A(b_2(b_1a_1)^{\circ}a_2a_3)+
A((b_2E(b_1a_1)a_2)^{\circ}a_3) + A( E(b_2E(b_1a_1)a_2)a_3) \\
&= A(b_2)(b_1a_1)^{\circ}a_2a_3+b_2A((b_1a_1)^{\circ})a_2a_3 + b_2(b_1a_1)^{\circ}A(a_2)a_3+ b_2(b_1a_1)^{\circ}a_2A(a_3) \\
&\pll +A((b_2E(b_1a_1)a_2)^{\circ})a_3+ (b_2E(b_1a_1)a_2)^{\circ}A(a_3)+
E(b_2E(b_1a_1)a_2)A(a_3) \pl .
\end{align*}
We recall that $2\Gamma(\om,\om') \lel A(\om^*)\om'+\om^*A(\om')-A(\om^*\om ')$ and hence compare this with
\begin{align*}
&A(\om^*)\om'+\om^*A(\om')
\lel A(b_2)b_1a_1a_2a_3+b_2A(b_1)a_1a_2a_3+b_2b_1A(a_1)a_2a_3+ b_2b_1a_1A(a_2)a_3+ b_2b_1a_1a_2A(a_3) \\
&=A(b_2)(b_1a_1)^{\circ}a_2a_3 + A(b_2)E(b_1a_1)a_2a_3
+b_2(b_1a_1)^{\circ}A(a_2)a_3+ b_2E(b_1a_1)A(a_2)a_3\\
&\pll + b_2(b_1a_1)^{\circ}a_2A(a_3)+b_2E(b_1a_1)a_2A(a_3)
+ b_2A(b_1)a_1a_2a_3+b_2b_1A(a_1)a_2a_3
\end{align*}
We first observe that the $A(a_3)$ terms cancel, because they cannot interact with anything from $b$. If $l_1\neq r_1$, we certainly find $0$. If $l_1=r_1$ and
$l_2\neq r_2$, then the terms for $A(b_2)$ and $A(a_2)$ cannot interact and we find
\[ b_2\Gamma(b_1^*,a_1)a_2a_3 \]
Finally, if $l_1=r_1$ and $r_2=l_2$ we get the additional term
\[ A(b_2)E(b_1a_1)a_2a_3+b_2E(b_1a_1)A(a_2)a_3-A((b_2E(b_1a_1)a_2)^{\circ})a_3
\lel \Gamma(b_2^*,E(b_1a_1)a_2)a_3 \pl .\]
In full generality, we have to use an inductive procedure and obtain
\[ \Gamma(b_1\cdots b_m,a_1\cdots a_n)\lel
v(b_1\cdots b_m)^*v(a_1\cdots a_n) \pl .\]
This implies for $x=\sum_{i_1,...,i_k} \om(i_1,...,i_k)$ with $\om(i_1,...,i_k)$ in the linear span of $A^{\circ}_{i_1}\cdots A_{i_k}^{\circ}$ that
\begin{align*}
\Gamma_A(x,x)&= \!\sum_{\om,\om'}
\sum_k \delta_{\si_k(\om),\si_k(\om')} (v^A_k(\om),v^A_k(\om'))
\le \sum_{\om,\om'} \sum_k \delta_{\si_k(\om),\si_k(\om')} (v^B_k(\om),v^B_k(\om')) = \Gamma_B(x,x) \pl .
\end{align*}
Here we used that for some fixed elements $\xi_l$
\[ \sum_{l,l'} \xi_l^*\Gamma_A(\al_l,E_N(\beta_l^*\beta_{l'})\al_{l'})\xi_{l'}
\kl \sum_{l,l'} \xi_l^* \Gamma_B(\al_l,E_N(\beta_l^*\beta_{l'})\al_{l'})\xi_{l'} \pl .\]
The additional inner product can be obtained by writing $E(x^*y)=\sum_j w_j(x)^*w_j(y)$, see \cite{JD}, and hence
\[ \Gamma_A(\al,E(x^*y)\beta)
\lel \sum_j \Gamma_A(w_j(x)\al,w_j(y)\beta) \kl
\sum_j \Gamma_B(w_j(x)\al,w_j(y)\beta)
\lel \Gamma_B(\al,E(x^*y)\beta) \pl \] follows from our assumption. \qd
A particularly interesting case is given by $T_t=e^{-t(I-E_N)}$. The corresponding free product gives the so-called block-length semigroup
\[ T_t(a_1\cdots a_n) \lel e^{-tn}a_1\cdots a_n \]
for (free) products of mean $0$ terms.
\begin{proof}[Proof of ii)] For $I-E$ the gradient form
\[ 2\Gamma_{I-E}(x,y) \lel (x-E(x))^*(y-E(y)) + E((x-E(x))^*(y-E(y))) \]
splits into two forms $v_1(x)=(x-E(x))$ and $v_2(x)=u_N(x-E(x))$. Therefore, we may use our argument from above and find two orthogonal forms
\[ v^1(a_1\cdots a_m)
\lel \sum_{k=1}^m e_{i_1,...,i_k} u_N(a_1\cdots a_{k-1})a_{k}\cdots a_m \]
and
\[ v^2(a_1\cdots a_m) \lel
\sum_{k=1}^m e_{i_1,...,i_k} u_N(a_1\cdots a_{k})a_{k+1}\cdots a_m \pl .\]
We may apply the contraction $id\ten E_N$ to $v^2$ and deduce from the mean $0$ property of the products that
\begin{equation}\label{EEE}
E_N(x^*x) \lel |id\ten E_N(v^2(x))|^2 \kl |v^2(x)|^2 \pl .
\end{equation}
Moreover, let $P_j$ be the projection onto words starting with $i_1=j$. Then we see that for mean $0$ words
\begin{align*}
x^*x &\lel
\sum_{j,k} P_j(x)^*P_k(x)\kl n \sum_{j} |P_j(x)|^2 \kl n \sum_{\om,\om'}\sum_{l\gl 1} \delta_{\si_l(\om),\si_l(\om')} (v_l^1(\om),v_l^1(\om'))
\lel n |v^1(x)|^2 \pl ,
\end{align*}
because the term $l=1$ exactly corresponds to $i_1=i_1'=j$. Therefore, we find that
\begin{align*}
\frac{x^*x+E(x^*x)}{2}
&\kl \frac{n}{2} |v^1(x)|^2+ \frac{1}{2}|v^2(x)|^2 \kl
\frac{n}{2}(|v^1(x)|^2+|v^2(x)|^2) \lel n \Gamma_{(I-E)\ast(I-E)}(x,x) \pl .
\end{align*}
By taking words $a_j$ of length $1$ and $E(a_j^*a_j)$ very small, we see that $n$ is indeed optimal.\qd
\begin{theorem}\label{free} Let $A_j$ be generators such that
\[ \Gamma_{I-E}\kl \Gamma_{A_j} \pl \]
holds for $j=1,...,n$. Then the generator $A^n$ of the free product $\ast_{j=1}^n T_t^j$ satisfies
\[ \Gamma_{I-E} \kl n \Gamma_{A^n} \pl .\]
\end{theorem}
\begin{cor} Let $l(g_{i_1}^{k_1}\cdots g_{i_m}^{k_m})=\sum_j |k_j|$ be the word length on the free group with $n$ generators and $A(\la(w))=l(w)\la(w)$ the corresponding generator. Then
\[ \Gamma_{I-E}\kl 3n \Gamma_{A} \pl .\]
\end{cor}
\begin{proof} We refer to section 7.3 for $\Gamma_{|\Delta|^{1/2}}\gl \frac{1}{3}\Gamma_{I-E}$.
\qd
\subsection{Graphs}
Let $(V,E)$ be a graph with finite vertex set and $w:E\to \rz_{+}$ a symmetric weight function. We may assume that $A=E(\delta^*\delta)$ is given by an inner derivation $\delta(x)=[\xi,x]$ with $\xi$ selfadjoint. This implies that
\[ \Gamma(f,f)(x) \lel \sum_y w_{xy}|f(x)-f(y)|^2 \pl, \]
with weights $w_{xy}=\|\pi(e_x)\xi \pi(e_y)\|_2^2$. Thus we find $\delta(f)(x)=(\sqrt{w_{yx}}(f(y)-f(x)))_{y}$.
Let us assume we use the normalized counting measure $\mu$ for $\ell_{\infty}(V)$. Then we deduce
\[ (\delta(f_1),\delta(f_2))_{\mu}\lel
\frac{1}{|V|} \sum_{x} \sum_{y} 2w_{yx} f_1(x)^*f_2(x)-2 \frac{1}{|V|}\sum_x \sum_y w_{yx}f_1(y)^*f_2(x) \pl .\]
This means we have a one-to-one relationship between the weights
\[ w_{xy} \lel \frac{A_{xy}}{2} \]
for $x\neq y$ and the selfadjoint weighted Laplacian with diagonal entries $A_{xx}=2\sum_{y\neq x} w_{yx}$. For the ergodic situation we see that $\Gamma_{I-E}$ has entries $w^{I-E}_{xy}=\frac{1}{2|V|}$.
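The inner product identity above is elementary; the following Python sketch (illustrative, with random symmetric weights on a five point set) confirms it numerically:

```python
from random import random, seed

# Check: (delta f1, delta f2)_mu = (2/|V|) sum_{x,y} w_yx f1(x) f2(x)
#                                - (2/|V|) sum_{x,y} w_yx f1(y) f2(x),
# where delta(f)(x) = (sqrt(w_yx)(f(y) - f(x)))_y and mu is uniform on V.
seed(0)
V = range(5)
w = [[0.0] * 5 for _ in V]
for x in V:
    for y in V:
        if x < y:
            w[x][y] = w[y][x] = random()  # symmetric weights, w_xx = 0
f1 = [random() for _ in V]
f2 = [random() for _ in V]

lhs = sum(w[y][x] * (f1[y] - f1[x]) * (f2[y] - f2[x]) for x in V for y in V) / 5
rhs = (2 * sum(w[y][x] * f1[x] * f2[x] for x in V for y in V)
       - 2 * sum(w[y][x] * f1[y] * f2[x] for x in V for y in V)) / 5
```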
\begin{conc} For an ergodic (i.e. irreducible) graph Laplacian the condition $\la$-$\Gamma\E$ is equivalent to
\[ w_{xy}\gl \frac{1}{2|V|} \quad \mbox{ for all } x\neq y \pl .\]
The weights $w_{xy}(\theta)$ for the approximating sequence given by $A^{\theta}$ are strictly positive.
\end{conc}
\begin{proof} For an ergodic graph Laplacian we have a spectral gap and $\|T_1:L_1\to L_{\infty}\|\le c_1<\infty$. Thanks to Saloff-Coste's argument (see \eqref{sfc}), we find $\|T_t-E:L_1\to L_{\infty}\|\kl c' e^{-\la_{\min} t}$ for $t\gl 2$. Thus $A^{\theta}$ satisfies $\la(\theta)$-$\Gamma\E$ and hence $w_{xy}(\theta)\gl \frac{\la(\theta)}{2|V|}$ is strictly positive. \qd
\begin{rem} Of course, we expect CLSI for every finite graph. It is also not clear how expander graphs fit into this picture since they are quite opposite to complete graphs, see \cite{BobT} for more information.
\end{rem}
\subsection{Fourier multiplier and discrete groups}
We will use group von Neumann algebras and Fourier multipliers. This means we consider a discrete group and $T_t(\la(g))=e^{-t\psi(g)}\la(g)$ for a conditionally negative function $\psi$, see \cite{BrO},\cite{Boz} for more information. For Fourier multipliers the gradient form is given by the Gromov form
\[ 2K_{\psi}(g,h) \lel \psi(g)+\psi(h)-\psi(g^{-1}h) \pl ,\]
and
\[ \Gamma_{\psi}(\la(g),\la(h))\lel K_{\psi}(g,h) \la(g^{-1}h) \pl .\]
It is very easy to see that for two generators $\psi$ and $\tilde{\psi}$ the relation
\[ \Gamma_{\psi} \gl_{cp} \la \Gamma_{\tilde{\psi}} \]
is equivalent to
\[ K_{\psi} \gl \la K_{\tilde{\psi}} \pl \]
in the usual sense of matrices.
Let us now consider a discrete group $G$ with the normalized trace on $L(G)$ and the conditional expectation onto $\cz$ in $L(G)$ given by the trace. Then $id-E$ is a Fourier multiplier and
\[ K_{I-E}(g,h) \lel (1-\delta_{g,1})(1-\delta_{h,1}) (\frac{1}{2}+\frac{\delta_{g,h}}{2}) \pl .\]
It therefore suffices to consider the matrix on $G\setminus \{1\}$. Let us now consider the specific example $G=\zz$ and $\psi(k)=|k|$ given by the Poisson semigroup. Then
\[ K(k,j) \lel \frac{|k|+|j|-|k-j|}{2}
\lel \begin{cases} 0& \mbox{if } k<0<j \mbox{ or } j<0<k \\
\min(|j|,|k|) & \mbox{ else. }
\end{cases} \pl .\]
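The case distinction is easily confirmed numerically (an illustrative Python sketch):

```python
# K(k, j) = (|k| + |j| - |k - j|)/2 versus the case-by-case formula above.
def K(k, j):
    return (abs(k) + abs(j) - abs(k - j)) / 2

def K_cases(k, j):
    if (k < 0 < j) or (j < 0 < k):
        return 0
    return min(abs(k), abs(j))

rng = range(-7, 8)
agree = all(K(k, j) == K_cases(k, j) for k in rng for j in rng)
```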
Let us consider the matrix $B(j,k)=\min(j,k)$ and $\al_j$ be a finite sequence. Then we see that
\[ (\al,B(\al))\lel \sum_{j,k} \bar{\al}_j\al_k \min(k,j)
\lel \sum_{j,k} \bar{\al}_j\al_k \sum_{1\le l\le \min(j,k)} 1
\lel \sum_l |\sum_{j\gl l}\al_j|^2 \pl .\]
Using $|a-b|^2\kl 2a^2+2b^2$, we deduce that
\[ \sum_l |\al_l|^2 \lel \sum_l |\sum_{j\gl l}\al_j-\sum_{j>l}\al_j|^2
\kl 2\sum_l (|\sum_{j\gl l}\al_j|^2+ |\sum_{j> l}\al_j|^2)
\kl 4 (\al,B(\al)) \pl .\]
This means $4B\gl 1_{\nz}$, and hence
\[ 2K_{\psi} \lel 2(B \ten 1_{\ell_2^2}) \gl \frac{1}{2}1_{\zz\setminus 0} \pl .\]
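The quadratic form bound $4B\gl 1_{\nz}$ used here can be probed on random vectors (illustrative Python sketch; the truncation size and the number of samples are arbitrary choices):

```python
from random import random, seed

# For B(j, k) = min(j, k), check 4 (alpha, B alpha) >= (alpha, alpha)
# on random real vectors alpha of length N.
seed(2)
N = 12

def forms(alpha):
    qB = sum(alpha[j] * alpha[k] * min(j + 1, k + 1)
             for j in range(N) for k in range(N))
    return 4 * qB, sum(a * a for a in alpha)

ok = True
for _ in range(200):
    alpha = [random() - 0.5 for _ in range(N)]
    lhs, rhs = forms(alpha)
    ok = ok and lhs >= rhs - 1e-9
```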
On the other hand, let $\eiz$ be the matrix with all entries $1$. Then we certainly have
\[ B\gl \eiz_{\nz} \]
and therefore
\[ K_{\psi} \lel B\ten 1_{\ell_2^2} \gl \frac{1}{2} B\ten \kla \begin{array}{cc} 1&1\\ 1&1 \end{array}\mer \gl \frac{1}{2} \eiz_{\zz\setminus \{0\}} \pl .\]
Combining these estimates, we deduce that
\[ 3K_{\psi} \gl K_{I-E} \pl .\]
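Similarly, the resulting matrix inequality $3K_{\psi}\gl K_{I-E}$ can be tested on finite truncations of $\zz\setminus\{0\}$ (illustrative Python sketch):

```python
from random import random, seed

# Quadratic-form test of 3 K_psi >= K_{I-E} on {-N,...,-1,1,...,N}.
seed(3)
N = 8
idx = [k for k in range(-N, N + 1) if k != 0]

def K_psi(k, j):
    return (abs(k) + abs(j) - abs(k - j)) / 2

def K_IE(k, j):
    return 0.5 + (0.5 if k == j else 0.0)

ok = True
for _ in range(200):
    a = {k: random() - 0.5 for k in idx}
    q_psi = sum(a[k] * a[j] * K_psi(k, j) for k in idx for j in idx)
    q_ie = sum(a[k] * a[j] * K_IE(k, j) for k in idx for j in idx)
    ok = ok and 3 * q_psi >= q_ie - 1e-9
```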
\begin{conc} The Poisson semigroup on $L(\zz)=L_{\infty}(\mathbb{T})$ satisfies $\frac{1}{3}$-$\Gamma\E$, and hence $\frac{1}{3}$-CLSI. In particular, the Fourier multiplier associated to $\psi(k_1,...,k_n)=\sum_{j=1}^n |k_j|$ on $\zz^n$ satisfies $\frac{1}{3}$-CLSI, but not $\Gamma\E$ for $n\gl 2$. The free product $L(\ff_n)$ with the word length function still satisfies $\frac{1}{3n}$-$\Gamma\E$.
\end{conc}
\begin{proof} Let $n=2$ and let $\al_{jl}=\eps_j\eps_l$ be defined for $1\le l,j\le 2M$, so that $\sum_j \eps_j=0$. Then
\[ (\al,K_{\psi_2}\al)
\lel (\eps,K_{\psi}\eps)(\eps,\eiz(\eps))+
(\eps,\eiz(\eps))(\eps,K_{\psi}(\eps)) \lel 0 \pl .\]
On the other hand $(\al,id(\al))=M^2$. The last fact follows from 7.1.iii). \qd
\subsection{Non-additivity of $\I_{id-E_N}$}
We have seen above that the data processing inequality
allows for tensorization of CLSI. This is no longer true for the symmetrized Kullback-Leibler divergence, see \cite{KL}.
\begin{exam} Let $N_j\subset M_j$ be von Neumann subalgebras. The inequality
\begin{equation}\label{rr}
\I_{id-E_{N_1\ten N_2}}\le \I_{id-E_{N_1\ten M_2}}
+\I_{id- E_{M_1\ten N_2}}
\end{equation}
is not valid in general.
\end{exam}
We note that \eqref{rr} is equivalent to
\[ \tau((E_1\ten id)(x) \ln x)+\tau((id\ten E_2)(x) \ln x)
\kl \tau(x\ln x)+\tau((E_1\ten E_2(x))\ln x) \pl .\]
Let $N_1=N_2=\cz$, and
$M_1=M_2=\ell_{\infty}(\{1,2,3\})$. The conditional expectation is given by row and column average.
Therefore we have to decide whether $\tau((x+1-E_1(x)-E_2(x))\ln(x))$ is always positive for a state $x$. Let $\delta>0$ and
\[ [x_{ij}] \lel \left[\begin{array}{ccc} \delta &\al &\al \\
\al& \gamma &\gamma \\
\al&\gamma &\gamma
\end{array}\right] \]
where $\al=3/8$ and $\gamma=15/8-\frac{\delta}{4}$. Then the $11$-entry of $1+x-E_{N_1}(x)-E_{N_2}(x)$ is given by
\[ 1+\delta-2/3(\delta+\al+\al) \lel \frac{1}{2} +\frac{\delta}{3} \pl .\]
Note that $\lim_{\delta\to 0}\gamma=15/8$ stays away from $0$, and hence $(\frac{1}{2}+\frac{\delta}{3})\ln\delta$ goes to $-\infty$. Thus for $\delta \to 0$ the expression converges to $-\infty$.
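The divergence is easy to confirm numerically (illustrative Python sketch of the $3\times 3$ example above; the normalized trace is the average of the entries):

```python
from math import log

# tau((x + 1 - E1(x) - E2(x)) ln x) for the 3x3 example with
# alpha = 3/8 and gamma = 15/8 - delta/4; E1, E2 average over one index.
def F(delta):
    a, g = 3 / 8, 15 / 8 - delta / 4
    x = [[delta, a, a], [a, g, g], [a, g, g]]
    col = [sum(x[i][j] for i in range(3)) / 3 for j in range(3)]  # E1(x)
    row = [sum(x[i][j] for j in range(3)) / 3 for i in range(3)]  # E2(x)
    return sum((x[i][j] + 1 - col[j] - row[i]) * log(x[i][j])
               for i in range(3) for j in range(3)) / 9

vals = [F(10.0 ** (-p)) for p in (2, 6, 12)]
decreasing = vals[0] > vals[1] > vals[2]
```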
\subsection{Failure of Rothaus Lemma for matrix valued functions}
Let $N=\Mz_n\ten 1\subset \Mz_n\ten \Mz_n$ and we work with the normalized trace. We use the notation $D_N(\rho)=D(\rho||E_N(\rho))$ for the asymmetry measure \cite{Marv,GJLR2}.
\begin{prop} For $n\gl 2$ there exists no constant such that
\begin{equation}\label{bb} D_N(|x|^2) \kl C \tau(xA(x))+D\|x-E_N(x)\|^2 \pl.
\end{equation}
Moreover, there are no constants $C,D$ such that
\begin{equation}\label{bbb}
D_N(|x|^2)\kl C D_N(|x-E(x)|^2)+D\|x-E_N(x)\|^2 \pl ,
\end{equation}
holds for selfadjoint $x$.
\end{prop}
Let us start with the non-selfadjoint element
\[ y \lel \frac{n}{\sqrt{n-1}} \sum_{j=2}^n |11\ran \lan jj| \pl .\]
The corresponding conditional expectation $E=E_N$ satisfies
\[ E(|y|^2) \lel \frac{n^2}{n-1}
E(\sum_{j,k\gl 2} |jj\ran\lan kk|)
\lel \frac{n}{n-1} \sum_{j=2}^n |j\ran\lan j|
\lel \frac{n}{n-1} 1_{n-1} \pl .\]
Since $y$ has rank one, we get
\[ D_N(|y|^2) \lel \tau(|y|^2\ln |y|^2)-\tau(E(|y|^2)\ln E(|y|^2))
\lel \frac{1}{n^2} n^2\ln n^2 - \ln \frac{n}{n-1}
\lel 2\ln n-\ln \frac{n}{n-1} \pl .\]
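This value can be reproduced by a direct matrix computation (illustrative NumPy sketch; the conditional expectation $E_N$ is implemented as a normalized partial trace over the second factor):

```python
import numpy as np

# Check D_N(|y|^2) = 2 ln n - ln(n/(n-1)) for
# y = n/sqrt(n-1) * sum_{j>=2} |11><jj| in M_n (x) M_n, normalized trace.
def D_N(n):
    e = np.eye(n)
    ket = lambda i, j: np.kron(e[i], e[j])       # |ij> in C^n (x) C^n
    y = (n / np.sqrt(n - 1)) * sum(np.outer(ket(0, 0), ket(j, j))
                                   for j in range(1, n))
    rho = y.T @ y                                # |y|^2
    pt = rho.reshape(n, n, n, n).trace(axis1=1, axis2=3) / n
    E = np.kron(pt, np.eye(n))                   # E_N(|y|^2) in M_n (x) 1
    ent = lambda a: sum(l * np.log(l)
                        for l in np.linalg.eigvalsh(a) if l > 1e-12) / n**2
    return ent(rho) - ent(E)

ok = all(abs(D_N(n) - (2 * np.log(n) - np.log(n / (n - 1)))) < 1e-8
         for n in (2, 3, 5))
```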
Now we modify this element by considering
\[ x \lel \al (|1\ran\lan 1| \ten 1) + y \pl \]
by adding an element in $\Mz_n\ten 1$. Thus $x-E(x)=y$. We have to calculate $D_N(|x|^2)$. Let us denote by $f=|1\ran\lan 1|\ten 1$ the projection. First we observe that
\[ x^*x \lel \al^2f+ \al y +y^*\al + y^*y \] and hence
\[ E(x^*x)\lel \al^2 |1\ran\lan 1|+\frac{n}{n-1}1_{n-1} \pl ,\]
where $1_{n-1}=\sum_{j=2}^n |j\ran\lan j|$ has rank $n-1$. This implies
\[ \tau(E(|x|^2)\ln E(|x|^2)) \lel \frac{\al^2}{n}\ln \al^2+\ln \frac{n}{n-1} \pl .\]
In order to calculate the entropy of $|x|^2$, we decompose $f=|11\ran\lan 11|+g$ with $g=|1\ran\lan 1|\ten 1_{n-1}$. The second projection $g$ is orthogonal to the support of $y$, and hence $|x|^2$ is equivalent to
\[ \kla \begin{array}{ccc} \al^2 &n\al&0 \\
n\al & n^2 &0\\
0&0& \al^2 g
\end{array}\mer \pl .\]
The upper corner is of rank $1$ with nonzero eigenvalue $n^2+\al^2$ and hence
\[ \tau(|x|^2\ln |x|^2)
\lel \frac{n^2+\al^2}{n^2}\ln(n^2+\al^2)+\frac{\al^2(n-1)}{n^2}\ln(\al^2) \pl .\]
This yields
\begin{align*}
D_N(|x|^2) &=
\frac{n^2+\al^2}{n^2}\ln(n^2+\al^2)+\frac{\al^2}{n}\ln(\al^2)-\frac{\al^2}{n^2}\ln(\al^2)
-\frac{\al^2}{n}\ln(\al^2)-\ln\frac{n}{n-1} \\
&= \ln(n^2+\al^2)+\frac{\al^2}{n^2}\ln(1+\frac{n^2}{\al^2})-\ln\frac{n}{n-1}
\end{align*}
In order to contradict \eqref{bb} and \eqref{bbb}, we observe that $\tau(xA(x))=\tau(xA(y))=\tau(yA(y))$. Hence the right hand side in \eqref{bb} and \eqref{bbb} is bounded, but the left hand side converges to $+\infty$ for $\al\to \infty$, as long as $n\gl 2$. For selfadjoint $x$ see below. \qed
We will now address cb-hypercontractivity (in the sense of \cite{BK}) at $p=2$ by considering the selfadjoint element
\[ z \lel \kla \begin{array}{cc} 0&x\\x^*&0 \end{array}\mer \pl .\]
We have $\tau(|z|^2\ln|z|^2)=\tau(|x|^2\ln |x|^2)$. For the conditional expectation, we find
\[ E(|z|^2) \lel
\kla \begin{array}{cc} E(xx^*)&0\\
0 & E(x^*x)
\end{array} \mer
\lel
\kla \begin{array}{cc} \al^2 f + E(n^2|11\ran\lan 11|)&0\\
0 & \al^2f+\frac{n}{n-1}1_{n-1}
\end{array} \mer \pl .\]
This gives (with the normalized trace)
\[ \tau(E(|z|^2)\ln E(|z|^2))
\lel \frac{\al^2}{2n}\ln \al^2+\frac{1}{2}\ln\frac{n}{n-1}+\frac{\al^2+n}{2n}\ln(\al^2+n) \pl .\]
The new part here is
\begin{align*}
D_N(xx^*)
&= \frac{n^2+\al^2}{n^2}\ln(n^2+\al^2)+\frac{\al^2(n-1)}{n^2}\ln(\al^2) -\frac{\al^2+n}{n}\ln(\al^2+n)\\
&= \ln(\frac{n^2+\al^2}{n+\al^2})+\frac{\al^2}{n^2}\ln \frac{\al^2+n^2}{\al^2}
-\frac{\al^2}{n}\ln(\frac{\al^2+n}{\al^2})\pl. \end{align*}
Following our previous calculation we find that
\begin{align*}
D_N(|z|^2) &=
\frac{1}{2}[\ln(n^2+\al^2)+\frac{\al^2}{n^2}\ln(1+\frac{n^2}{\al^2})-\ln\frac{n}{n-1}]\\
& \pll
+\frac{1}{2}[\ln(\frac{n^2+\al^2}{n+\al^2})+\frac{\al^2}{n^2}\ln \frac{\al^2+n^2}{\al^2}
-\frac{\al^2}{n}\ln(\frac{\al^2+n}{\al^2})] \\
&= \frac{1}{2}\ln(n^2+\al^2) + \frac{1}{2} \ln(\frac{n^2+\al^2}{n+\al^2}) \\
&\pll + \frac{\al^2}{n^2}\ln(1+\frac{n^2}{\al^2})-\frac{1}{2}\ln\frac{n}{n-1}-
\frac{\al^2}{2n}\ln(\frac{\al^2+n}{\al^2}) \pl .
\end{align*}
In order to keep the last term in check, we choose $\al_n^2=n$ and then we find
\begin{align*}
D_N(|z|^2) &=\frac{1}{2}\ln n + \frac{1}{2}\ln(n+1)+\frac{1}{2}\ln(n+1)-\frac{1}{2}\ln 2+ \frac{1}{n}\ln(n+1)\\
& \pll -\frac{1}{2}\ln\frac{n}{n-1}-\frac{1}{2}\ln 2 \\
&=\frac{1}{2}\ln n + (1+\frac{1}{n})\ln(n+1)-\ln 2 - \frac{1}{2}\ln\frac{n}{n-1} \pl.
\end{align*}
Note that the $\ln n$ term is the optimal rate for entropy as $n\to \infty$, and hence the example is rather extreme. Following the work of \cite{BK}, we may formulate this observation as follows.
\begin{prop} Let $(A_n)$ be a sequence of generators on $M_n$ such that $\sup_n \|A_n:L_2(M_n)\to L_2(M_n)\|<\infty$. Then the 2-cb-hypercontractivity constants $\la_2^{cb}(A_n)$ converge to $\infty$.
\end{prop}
For example we may choose $A_n=Id-\tau_n$ which has norm $\le 2$. In fact, we only have to control the behaviour of $A_n$ on some version of the maximally entangled state.
\begin{proof} We recall \cite{BK} that the cb-hypercontractivity constant $\la_{2}^{cb}$ is the best constant such that
\[ D_{\Mz_n}(|x|^2)\kl 4\la_2^{cb} \tau((id\ten A_n)(|x|)|x|) \lel
4\la_2^{cb} \mathcal{E}(|x|) \]
allows an estimate with respect to the energy. However, using the derivation calculus, we deduce the well-known estimate (see Davies and Lindsay \cite{DaLi})
\[ \mathcal{E}(|x|)\kl \mathcal{E}(x)
\lel \mathcal{E}\left (\kla \begin{array}{cc} 0&y\\
y^*& 0 \end{array}\mer\right)
\kl \|A_n\| \|y\|_{L_2(M_{n^2})}^2 \kl \|A_n\| \pl .\]
Thus with our choice of $\al_n=\sqrt{n}$, we obtain the lower bound $\ln n\kl C \la_2^{cb}(A_n)\|A_n\|$, and hence not both of them can be bounded. \qd
\begin{rem} {\rm A counterexample of similar nature was constructed in \cite{BaRo}, which also shows that $S_2(S_p)$ is not uniformly convex. Since our example covers a different regime (and in particular the tracial case), we think it is of independent interest.}
\end{rem}
\subsection{Complete Wasserstein $1$-distance}
The complete analogue of the `triangle' inequality for the Wasserstein $1$ distance from \cite{Ma1,Ma2} can be formulated as follows:
\begin{prop} Let $(M_1,\Gamma_{A_1})$ satisfy $C_1^{-1}$-CWA$_1$, and
$(M_2,\Gamma_{A_2})$ satisfy $C_2^{-1}$-CWA$_1$, then $(M_1\ten M_2,\Gamma_{A_1\ten 1+1\ten A_2})$ satisfies $(C_1+C_2)^{-1}$-CWA$_1$.
\end{prop}
\begin{proof} Let $f$ be such that
\[ \|\Gamma_{A_1\ten 1}(f,f)+\Gamma_{1\ten A_2}(f,f)\| \kl 1 \pl \]
and $E_{N_1\ten N_2}(f)=0$. In particular,
\[ \|\Gamma_{A_1\ten 1}(f-E_{N_1\ten M_2}(f),f-E_{N_1\ten M_2}(f))\|\le 1 \]
and by Kadison's inequality
\[ \|\Gamma_{1\ten A_2}(E_{N_1\ten M_2}(f),E_{N_1\ten M_2}(f))\|\le \|E_{N_1}\Gamma_{1\ten A_2}(f,f)\|\kl
1 \pl. \]
Let $t>0$. By the complete version, we know that
\[ |\tau(\rho (f-E_{N_1\ten M_2}(f)))|\kl \frac{D(\rho||E_{N_1\ten M_2}(\rho))}{t}+tC_1 \]
and
\begin{align*} |\tau(\rho (E_{N_1\ten M_2}(f)-E_{N_1\ten N_2}(f)))|
&= |\tau(E_{N_1\ten M_2}(\rho)(E_{N_1\ten M_2}(f)-E_{N_1\ten N_2}(f)))| \\
&\le
\frac{D(E_{N_1\ten M_2}(\rho)||E_{N_1\ten N_2}(\rho))}{t}+tC_2
\end{align*}
Therefore the triangle inequality and the data processing inequality imply
\[ |\tau(\rho f)|\kl \frac{D(\rho||E_{N_1\ten N_2}(\rho))}{t} + (C_1+C_2)t \pl .\]
Taking the infimum over $t$ implies the assertion.\qd
This implies that for tensor products with $\la$-CWA$_1$, $T_t^n=(e^{-tA})^{\ten_n}$, we find $\frac{\la}{n}$-CWA$_1$, which in turn is enough to imply Talagrand's inequality for matrix valued functions on $\{-1,1\}^n$ and $[-1,1]^n$, see \cite{L3} for details in the scalar case.
\subsection{Major Open Problems}
\begin{prob} 1) Does every compact Riemannian manifold satisfy CLSI?
2) Does every generator of a selfadjoint semigroup on a matrix algebra satisfy CLSI?
3) Is CLSI stable under free products?
\end{prob}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
\label{sec:intro}
Easter eggs nowadays also refer to inside jokes and/or secret messages
usually hidden, e.g., in computer games and hi-tech software. In this
work, we take advantage of this terminology to motivate the search for
New Physics Beyond the Standard Model in the radiative and in the
(semi)leptonic channels of rare $B$ meson decays.
In the decades that have
followed the original formulation of flavour mixing~\cite{Cabibbo:1963yz},
the flavour structure of the SM has been experimentally
tested and well established. The tremendous progress of the
experimental facilities has probed the flavour of the SM to an
exquisite level of precision \cite{Amhis:2016xyh}, along with the
substantial effort on the part of the theoretical community to go well
beyond leading order computations \cite{Buras:2011we}. From this
perspective of ``precision tests'', radiative and (semi)leptonic
$\Delta B = 1$ processes, related at the partonic level to
$b \to s \gamma, s \ell \ell$ transitions, occupy a special place in
probing the SM and its possible extensions in
terms of New Physics (NP)
models~\cite{Beaujean:2013soa,Blake:2016olu}.
Firstly, these rare $B$ meson decays belong to the class of
flavour-changing neutral current (FCNC) processes, that are well known
to be sensitive probes of Physics Beyond the Standard Model (BSM): in
fact -- within the SM -- the flavour structure of the theory allows
FCNC to arise only at loop level, as a consequence of
the GIM mechanism \cite{Glashow:1970gm}. This allows
for significant room for heavy new degrees of freedom to sizably
contribute to these rare processes.
Secondly, from the experimental side, the study of rare $B$ meson
decays offers us some of the most precise measurements amongst the
$| \Delta F | = 1$ processes. For instance, the measurement of the
inclusive branching fraction of $B \to X_{s} \gamma$ is currently
performed with a relative uncertainty of a few percent
\cite{Saito:2014das,Belle:2016ufb,Lees:2012ym}, while the
study of an exclusive mode such as $B \to K^{*} \ell \ell$ allows for
a detailed analysis of the angular distribution of the four final
state particles, yielding rich experimental information in terms
of angular functions of the dilepton invariant mass, with full
kinematic coverage of the latter \cite{Aaij:2013iag} and -- starting
from ref.~\cite{LHCb:2015dla} -- also with available experimental
correlations among the angular observables.
In $B$ Physics, the recent years have been characterized by the
emergence of a striking pattern of anomalies in multiple independent
studies of some of these rare $b \to s$ transitions
\cite{Blake:2017wjz}. Of particular importance, the measurement of the
$P_{5}'$ angular observable
\cite{Matias:2012xw,DescotesGenon:2012zf,Descotes-Genon:2013vna,Matias:2014jua}
stands out from all the other ones related to the angular distribution
of $B \to K^{*} \mu \mu\,$; first realized by the LHCb collaboration
\cite{Aaij:2013qta,Aaij:2015oid} and later on also by the Belle
collaboration \cite{Abdesselam:2016llu}, the experimental analysis of
$P_{5}'$ in the large recoil region of the decay points to a deviation
of about $3\sigma$ with respect to the SM prediction presented in
ref.~\cite{Descotes-Genon:2014uoa}. The latter, however, suffers from
possible hadronic uncertainties which are sometimes even hard to
guesstimate
\cite{Jager:2012uw,Jager:2014rwa,Lyon:2014hpa,Ciuchini:2015qxb}, and
this observation has been at the origin of a quite vivid debate in the
recent literature about the size of (possibly) known and (yet) unknown
QCD power corrections to the amplitude of this process in the infinite
mass
limit~\cite{Hurth:2016fbr,MartinCamalich:2016wht,Ciuchini:2016weo,Capdevila:2017ert}. To
corroborate even more the cumbersome picture of the ``$P_{5}'$
anomaly'', two new independent measurements of this angular observable
(among others) have been recently released by ATLAS
\cite{ATLAS-CONF-2017-023} and CMS \cite{CMS-PAS-BPH-15-008}
collaborations, showing respectively an appreciable increase and
reduction of the tension between data and the SM prediction in
ref.~\cite{Descotes-Genon:2014uoa}, as reported by these experiments.
For the sake of completeness, one should also remark that other
smaller tensions have been around, concerning the measurement of
differential branching fractions of $B \to K \mu \mu\,$
\cite{Aaij:2014pli,Aaij:2016cb} and $B_{s} \to \phi \mu \mu$
\cite{Aaij:2015esa}. It is worth noting that, while for the latter
mode an explanation in terms of hadronic physics may be easily
conceivable, the theoretical computation of the former seems to be
under control \cite{Khodjamirian:2012rm}.
Quite surprisingly, a possible smoking gun for NP in rare $B$ meson
decays already came out in 2014, when the LHCb collaboration presented
for the first time the measurement of the ratio of branching fractions
\cite{Aaij:2014ora}:
\begin{eqnarray}
R_{{K}_{[1,6]}} \ &\:\equiv\: &\ \frac{Br(B^{+} \to K^{+} \mu^{+} \mu^{-})}{Br(B^{+} \to K^{+} e^{+} e^{-})} \\ \nonumber
\ & = & \ 0.745 ^{+ 0.090}_{-0.074}\pm 0.036 \ ,
\label{eq:RK}
\end{eqnarray}
where the subscript refers to the dilepton mass (denoted hereafter with $q^{2}$) range going from 1 to
6 GeV$^{2}$. This experimental value shows a deviation
of about $2.6\sigma$ with respect to the standard theoretical
prediction. Indeed, the SM value of $R_{K}$ in the bin provided by the
LHCb collaboration is expected to be equal to unity beyond the
percent level of accuracy \cite{Hiller:2003js,Bordone:2016gaq}. In fact, contrary
to observables such as $P_{5}'$, it is important to stress that
$R_{{K}}$ may be, in general, regarded as insensitive to QCD
effects \cite{Hiller:2003js}. From the model building point of view, $R_K$ can certainly be
considered as quite informative, hinting at a UV completion of the SM
where Lepton Flavour Universality violation (LFUV) takes place in the
flavour-violating couplings of new heavy degrees of freedom,
e.g. leptoquarks and/or $Z'$ gauge bosons
\cite{Alonso:2014csa,Hiller:2014yaa,Ghosh:2014awa,Glashow:2014iga,Hiller:2014ula,Gripaios:2014tna,Sahoo:2015wya,Crivellin:2015lwa,Crivellin:2015era,Celis:2015ara,Alonso:2015sja,Greljo:2015mma,Calibbi:2015kma,Falkowski:2015zwa,Carmona:2015ena,Chiang:2016qov,Becirevic:2016zri,Feruglio:2016gvd,Megias:2016bde,Becirevic:2016oho,Arnan:2016cpy,Sahoo:2016pet,Alonso:2016onw,Hiller:2016kry,Galon:2016bka,Crivellin:2016ejn,GarciaGarcia:2016nvr,Cox:2016epl,Jager:2017gal,Megias:2017ove}. Most
importantly, the tantalizing correlation of this signature of LFUV
with the $P_{5}'$ anomaly, suggested by several global analyses
\cite{Beaujean:2013soa,Hurth:2013ssa,Altmannshofer:2014rta,Descotes-Genon:2015uva,Chobanova:2017ghn,Altmannshofer:2017fio}
has triggered different proposals of measurements of such effect in
the angular analysis of the $K^{*} \ell \ell$ channel
\cite{Capdevila:2016ivx,Serra:2016ivr}. Interestingly enough, an
analysis from the Belle collaboration aiming at separating the leptonic
flavours in $B \to K^{*} \ell \ell$~ \cite{Wehle:2016yoi}, shows a
consistent $\sim 2.6\sigma$ deviation from the SM prediction reported
in ref.~\cite{Descotes-Genon:2014uoa} in the dimuon leptonic final
state only. This is compatible with previous experimental findings
related only to the mode with muonic final states.
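A back-of-the-envelope estimate already reproduces the size of the quoted tension: symmetrizing the statistical error of $R_K$ and adding the systematic one in quadrature gives a pull from the SM expectation of unity of about $2.8\sigma$ (the quoted $2.6\sigma$ comes from the full experimental likelihood). An illustrative Python sketch:

```python
from math import sqrt

# Naive pull of R_K from the SM value of 1 (Gaussian approximation).
val, stat_up, stat_dn, syst = 0.745, 0.090, 0.074, 0.036
sigma = sqrt(((stat_up + stat_dn) / 2) ** 2 + syst ** 2)
pull = (1.0 - val) / sigma
```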
Sitting on similar theoretical grounds as $R_{K}$, another intriguing
ratio of $B$ decay branching fractions can be measured in the $K^{*}$
channel:
\begin{eqnarray}
R_{{K^{*}}_{[0.045,1.1]}} \ &\:\equiv\: &\ \frac{Br(B \to K^{*} \mu^{+} \mu^{-})}{Br(B \to K^{*} e^{+} e^{-})} \\ \nonumber
\ & = & \ 0.660 ^{+ 0.110}_{-0.070}\pm 0.024 \ ,\\
R_{{K^{*}}_{[1.1,6]}} \ &=& \ 0.685 ^{+ 0.113}_{-0.069}\pm 0.047 \ .
\label{eq:RKstar}
\end{eqnarray}
These measurements for the low-$q^2$ bin and the central-$q^2$ one
have just been presented by the LHCb
collaboration~\cite{LHCb_RKstar}, pointing again to a discrepancy of
about $2\sigma$ with respect to the expected SM prediction -- again
equal to 1 to a very good accuracy for the central-$q^2$ bin and close to $0.9$ for the
low-$q^2$ one -- and yielding more than a $3\sigma$ deviation when naively combined
with the measurement of $R_K$. Note that with a higher degree of braveness
(or, depending on the taste of the reader, of unconsciousness),
the disagreement of the SM with precision $B$ physics may reach the exciting level of
$\gtrsim 5\sigma$ when one naively combines together the single
significances coming from $R_{K,K^{*}}$ ratios, $P_{5}'$ measurements
and the minor deviations observed in the other exclusive branching
fractions.
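For orientation only, a naive combination in quadrature of the $R_K$ and $R_{K^{*}}$ pulls (treated as independent Gaussian deviations, with rough placeholder values rather than the outcome of a fit) indeed lands above $3\sigma$:

```python
from math import sqrt

# Naive quadrature combination of approximate pulls (placeholder values).
pulls = [2.6, 2.2]  # R_K, R_K* (rough, illustrative)
combined = sqrt(sum(s ** 2 for s in pulls))
```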
Given the excitement of these days for all the above hints of a
possible NP discovery in rare $B$ meson decays, in this work we take
our first steps towards a positive attitude in the search for a
definite BSM pattern aimed at addressing these $B$ anomalies. We
perform our study in a model-independent fashion, within the framework
of effective field theories for weak interactions
\cite{Buras:1992tc,Buras:1992zv,Ciuchini:1993vr}. In particular, in
section~\ref{sec:framework} we define the setup characterizing the
whole global analysis, presenting six different benchmark scenarios
for NP, together with a discussion about two different approaches in
the estimate of the hadronic uncertainties that can affect
quantitatively our final results. In section~\ref{sec:fit}, we list
all the experimental measurements we use to construct the likelihood
in our fit, and we discuss in detail our most important findings. The
latter are effectively depicted in
figures~\ref{fig:fig1}--\ref{fig:fig6}, and collected in
tables~\ref{tab:PMDpars}--\ref{tab:PDDobs} in~\ref{sec:tab}. In section~\ref{sec:conclusions} we summarize
our conclusions.
\section{Theoretical Framework of the Analysis}
\label{sec:framework}
In this section we present the effective field theory framework at the
basis of this work and introduce the benchmark scenarios we focus on
for our study of NP effects in rare $B$ decays. We then illustrate
the two distinct broad classes of assumptions that characterize our
global analysis: the case where we take an optimistic attitude towards
the estimate of hadronic uncertainty plaguing the amplitude of both
$B \to K^{*} \ell \ell / \gamma$ and $B_s \to \phi \ell \ell / \gamma$
channels, and a second one where we aim at providing a more
conservative approach. All the results in section~\ref{sec:results}
will be classified under these two different setups.
\subsection{\textit{New Physics Benchmarks for $\Delta B = 1$}}
\label{sec:eft&NP}
Integrating out the heavy degrees of freedom, the resulting effective
Hamiltonian of weak interactions for $b \to s \gamma, s \ell \ell$
transitions involves the following set of dimension six operators
within the SM~\cite{Chetyrkin:1997gb}:
\begin{eqnarray}
Q^p_1 &\:\:=\:\:& (\bar{s}_L\gamma_{\mu}T^a p_L)(\bar{p}_L\gamma^{\mu}T^ab_L)\,,\nonumber \\
Q^p_2 &=& (\bar{s}_L\gamma_{\mu} p_L)(\bar{p}_L\gamma^{\mu}b_L)\,, \nonumber \\
P_3 &=& (\bar{s}_L\gamma_{\mu}b_L)\sum\nolimits_q(\bar{q}\gamma^{\mu}q)\,, \nonumber \\
P_4 &=& (\bar{s}_L\gamma_{\mu}T^ab_L)\sum\nolimits_q(\bar{q}\gamma^{\mu}T^aq)\,,\nonumber \\
P_5 &=& (\bar{s}_L\gamma_{\mu_1}\gamma_{\mu_2}\gamma_{\mu_3}b_L)\sum\nolimits_q(\bar{q}\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}q)\,,\nonumber \\
P_6 &=& (\bar{s}_L\gamma_{\mu_1}\gamma_{\mu_2}\gamma_{\mu_3}T^ab_L)\sum\nolimits_q(\bar{q}\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}T^aq)\,,\nonumber \\
Q_{8g} &=& \frac{g_s}{16\pi^2}m_b\bar{s}_L\sigma_{\mu\nu}G^{\mu\nu}b_R \,,\nonumber \\
Q_{7\gamma} &=&
\frac{e}{16\pi^2}m_b\bar{s}_L\sigma_{\mu\nu}F^{\mu\nu}b_R\,,
\nonumber \\
Q_{9V} &=&
\frac{\alpha_{e}}{4\pi}(\bar{s}_L\gamma_{\mu}b_L)(\bar{\ell}\gamma^{\mu}\ell)\,,
\nonumber \\
Q_{10A} &=&
\frac{\alpha_{e}}{4\pi}(\bar{s}_L\gamma_{\mu}b_L)(\bar{\ell}\gamma^{\mu}\gamma^5\ell) \,,
\label{eq:deltaB1}
\end{eqnarray}
where $\ell = e, \mu$, $p = u,c$ and we have neglected the chirally
suppressed SM dipoles. The $\Delta B = 1$ effective Hamiltonian can
be cast in full generality as a combination of two distinct parts:
\begin{equation}
\mathcal{H}_\mathrm{eff}^{\Delta B = 1} =
\mathcal{H}_\mathrm{eff}^\mathrm{had} +
\mathcal{H}_\mathrm{eff}^\mathrm{sl+\gamma},
\label{eq:Heff}
\end{equation}
where, within the SM, the hadronic term involves the first seven
operators in eq.~(\ref{eq:deltaB1}):
\begin{eqnarray}
\mathcal{H}_\mathrm{eff}^\mathrm{had} &\;=\;& \frac{4G_F}{\sqrt{2}} \Bigg[\sum_{p=u,c}\lambda_p\bigg(C_1 Q^{p}_1 +
C_2 Q^{p}_2\bigg) \nonumber \\
&-& \lambda_t \bigg(\sum_{i=3}^{6} C_i P_i +
C_{8}Q_{8g} \bigg)\Bigg] \,,
\label{eq:H_had}
\end{eqnarray}
while the second piece includes the electromagnetic dipole and
semileptonic operators:
\begin{equation}
\mathcal{H}_\mathrm{eff}^\mathrm{sl+\gamma} = -
\frac{4G_F}{\sqrt{2}}\lambda_t\bigg( C_7Q_{7\gamma} + C_9Q_{9V} +
C_{10}Q_{10A} \bigg) \,,
\label{eq:H_sl}
\end{equation}
with $\lambda_{i}$ corresponding to the CKM combination
$V^{}_{ib} V^{*}_{is} $ for $i=u,c,t$ and where $C_{i=1,\dots,10}$ are
the Wilson coefficients (WCs) encoding the short-distance physics of
the theory. All the SM WCs in this work are evolved from the mass
scale of the W boson down to $\mu_{b}=4.8$ GeV, using
state-of-the-art perturbative QCD and QED calculations for the
matching conditions \cite{Bobeth:1999mk,Gambino:2001au,Misiak:2004ew}
and the anomalous dimension matrices
\cite{Bobeth:2003at,Gambino:2003zm,Misiak:2004ew,Huber:2005ig}
relevant for the processes considered in this analysis.
While a general UV completion of the SM may enter in the effective
couplings present in both pieces of eq.~(\ref{eq:Heff}), general
NP effects in $b \to s \gamma, s \ell \ell$ can be
phenomenologically parametrized as shifts of the Wilson coefficients
of the electromagnetic and semileptonic operators at the typical scale
of the processes, $\mu_{b}$. In particular, the most general basis for
NP effects in radiative and (semi)leptonic $B$ decays can be enlarged
by the presence of scalar, pseudo-scalar and tensorial semileptonic
operators, together with right-handed quark currents as the analogue
of $Q_{7\gamma} ,Q_{9V},Q_{10A}$ SM operators
\cite{Jager:2012uw,Aebischer:2015fzz}. In this work, motivated by
previous interesting findings concerning LFUV
\cite{Altmannshofer:2014rta,Descotes-Genon:2015uva,Chobanova:2017ghn}
and the measurement of $R_{K}$ and $R_{K^{*}}$, we focus on the
contributions of NP appearing as shifts of the SM WCs related to the
electromagnetic dipole and semileptonic operators with left-handed
quark currents only. A comprehensive analysis with different chiral
structures as well as a more general effective theory framework will
be presented elsewhere~\cite{future}. Furthermore, we restrict
ourselves to CP-conserving effects, taking NP WCs to be real.
For NP in semileptonic operators we discriminate between couplings to muon
and electron fields both in the axial and vector leptonic currents.
We characterize our phenomenological analysis for NP through six
different benchmark scenarios, studying the impact of combinations of
the following NP WCs:
\begin{flushleft}
\begin{itemize}
\item[\textbf{(I)}] $C^{NP}_{9,\mu}$ and $C^{NP}_{9,e}$ varied in the range $[-4,4]$,
i.e. adding to the SM two NP parameters;
\item[\textbf{(II)}] $C^{NP}_{9,\mu}$ and $C^{NP}_{10,\mu} $ varied in
the range $[-4,4]$, adding to the SM again two NP parameters;
\item[\textbf{(III)}] $C^{NP}_{9,\mu}$ and $C^{NP}_{9,e}$ varied in the range $[-4,4]$,
and $C^{NP}_{7}$ varied in the range $[-0.5,0.5]$,
i.e. a scenario with three NP parameters;
\item[\textbf{(IV)}] $C^{NP}_{10,\mu}$ and $C^{NP}_{10,e}$ varied in the range $[-4,4]$,
and $C^{NP}_{7}$ varied in the range $[-0.5,0.5]$,
i.e. adding again to the SM three NP
parameters;
\item[\textbf{(V)}] $C^{NP}_{9,\mu} = - C^{NP}_{10,\mu}$ and $C^{NP}_{9,e} = - C^{NP}_{10,e}$
varied in the range $[-4,4]$,
and $C^{NP}_{7}$ varied in the range $[-0.5,0.5]$, i.e. a
NP scenario again described by three different parameters;
\item[\textbf{(VI)}] $C^{NP}_{7}$, $C^{NP}_{9,\mu}$, $C^{NP}_{9,e}$,
$C^{NP}_{10,\mu}$ and $C^{NP}_{10,e}$ varied simultaneously in the respective ranges defined
above, i.e. a NP scenario described by five different
parameters.
\end{itemize}
\end{flushleft}
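For bookkeeping, the six benchmarks and the flat ranges quoted above can be summarized programmatically. The following is only an illustrative sketch in our own shorthand; the parameter names are not \texttt{HEPfit} syntax:

```python
# Illustrative bookkeeping of the six NP benchmark scenarios and the flat
# ranges quoted in the text; parameter names are our own shorthand.
RANGE_SL = (-4.0, 4.0)   # semileptonic WCs: C9 and C10, for mu and e
RANGE_C7 = (-0.5, 0.5)   # electromagnetic dipole WC: C7

SCENARIOS = {
    "I":   {"C9mu": RANGE_SL, "C9e": RANGE_SL},
    "II":  {"C9mu": RANGE_SL, "C10mu": RANGE_SL},
    "III": {"C7": RANGE_C7, "C9mu": RANGE_SL, "C9e": RANGE_SL},
    "IV":  {"C7": RANGE_C7, "C10mu": RANGE_SL, "C10e": RANGE_SL},
    # In (V) the axial WCs are tied, C10 = -C9 for each lepton flavour,
    # so the free parameters are C7 and the two C9's.
    "V":   {"C7": RANGE_C7, "C9mu": RANGE_SL, "C9e": RANGE_SL},
    "VI":  {"C7": RANGE_C7, "C9mu": RANGE_SL, "C9e": RANGE_SL,
            "C10mu": RANGE_SL, "C10e": RANGE_SL},
}

# Number of NP parameters added to the SM in each benchmark:
n_params = {name: len(pars) for name, pars in SCENARIOS.items()}
```

Such a table makes the parameter counting of each benchmark explicit: two parameters in cases (I)--(II), three in (III)--(V), five in (VI).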
We remark that while benchmarks \textbf{(I)} and \textbf{(II)} have
been already studied in the literature, none of the other cases has been
analyzed so far. In particular, NP scenarios \textbf{(III)} and
\textbf{(IV)} allow us to study, for the first time, the interesting
impact of a NP radiative dipole operator in combination with
vector-like and axial-like LFUV effects generated by NP. Most
interestingly, scenario \textbf{(V)} allows us to explore the
correlation $C_{9}^{NP}=-C_{10}^{NP}$, possibly hinting at a
$SU(2)_{L}$ preserving BSM theory. As an additional interesting case
to explore, we eventually generalize to simultaneously nonvanishing
$C^{NP}_{7}$, $C^{NP}_{9,\mu}$, $C^{NP}_{9,e}$,
$C^{NP}_{10,\mu}$ and $C^{NP}_{10,e}$ in case \textbf{(VI)}.
We wish to stress that all of the six benchmarks defined above will
be studied for the first time under two different approaches in the
estimate of QCD hadronic power corrections, as presented in
the next section.
\subsection{\textit{Treatment of the Hadronic Uncertainties}}
\label{sec:hadronic}
In our previous works~\cite{Ciuchini:2015qxb,Ciuchini:2016weo,Ciuchini:2017gva}, we
went into considerable detail on the treatment of hadronic
contributions in the angular analysis of $B\to K^*\ell\ell$. Our
approach there was to study how large these contributions can be
assuming that the LHCb data on branching fractions and angular
distributions of these decay modes could be described within the
SM. For that purpose we considered four scenarios for the hadronic
contributions, with increasing theoretical input from the
phenomenological analysis presented in
ref.~\cite{Khodjamirian:2010vf}.
The underlying functional form that
we used for the hadronic contribution was given by:
\begin{eqnarray}
h_\lambda(q^2) &\;\;=\;\;& \frac{\epsilon^*_\mu(\lambda)}{m_B^2} \int d^4x e^{iqx} \langle \bar K^* \vert T\{j^{\mu}_\mathrm{em} (x)
\mathcal{H}_\mathrm{eff}^\mathrm{had} (0)\} \vert \bar B \rangle \nonumber\\
&\;\;=\;\;& h_\lambda^{(0)} + \frac{q^2}{1\,\mathrm{GeV}^2}
h_\lambda^{(1)} + \frac{q^4}{1\, \mathrm{GeV}^4} h_\lambda^{(2)} \,,
\label{eq:hlambda}
\end{eqnarray}
where we fitted the complex, helicity-dependent coefficients
$h^{(i)}_\lambda$, with $i=0,1,2$ and $\lambda=0,+,-$, using the data
and the phenomenological model in \cite{Khodjamirian:2010vf}. Since
$h_0$ enters the decay amplitude with an additional factor of
$\sqrt{q^2}$ with respect to $h_\pm$, we drop $h_0^{(2)}$ in our
analysis.
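As a concrete illustration, the polynomial ansatz of eq.~(\ref{eq:hlambda}) can be evaluated as follows; the coefficient values below are purely hypothetical placeholders inserted for the sake of the example, not fit results:

```python
import numpy as np

def h_lambda(q2, h0, h1, h2):
    """Polynomial ansatz for the hadronic correction:
    h_lambda(q^2) = h^(0) + (q^2 / 1 GeV^2) h^(1) + (q^4 / 1 GeV^4) h^(2),
    with complex coefficients h^(i); q2 is in GeV^2."""
    return h0 + q2 * h1 + q2**2 * h2

# h_0 enters the decay amplitude with an extra factor of sqrt(q^2), so
# h_0^(2) is dropped: only h_0^(0) and h_0^(1) are kept in the fit.
def h_zero(q2, h0, h1):
    return h_lambda(q2, h0, h1, 0.0)

# Purely hypothetical coefficients, just to show the evaluation over the
# large-recoil region:
q2_grid = np.linspace(1.0, 6.0, 6)                       # GeV^2
vals = h_lambda(q2_grid, 1e-4 + 2e-4j, -3e-5 + 1e-5j, 5e-6j)
```

The quadratic term grows fastest with $q^2$, which is why the higher invariant mass region is the most sensitive to the $h^{(2)}_\lambda$ coefficients.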
In this work we proceed to study the possible existence of NP
contributions in semileptonic and radiative $b\to s$ decays which
requires a re-evaluation of the hadronic uncertainties. For the sake
of simplicity, to address hadronic contributions we use the same
functional parameterization as given in
eq.~(\ref{eq:hlambda}). However, we limit ourselves to only two
hadronic models. The first, corresponding to the most widely used
assumption, relies completely on the phenomenological model
in~\cite{Khodjamirian:2010vf} for $q^2 < 4m_c^2$. The second is a
more conservative approach, where we impose the latter only in the
large recoil region at $q^2\le1$ GeV$^2$ while letting the data drive
the hadronic contributions in the higher invariant mass region. We
will refer to the first approach as phenomenological model driven
(PMD) and the second as phenomenological and data driven (PDD). In our
fit we vary the $h^{(i)}_\lambda$ parameters over generous ranges. A more
detailed discussion can be found
in~\cite{Ciuchini:2015qxb,Ciuchini:2016weo}.
In the present analysis we also need to address modes that were not
considered in our previous works, namely $B\to K\ell\ell$,
$B_{s}\to \phi\ell\ell$ and $B_{s}\to \phi\gamma$. The decay
$B\to K\ell\ell$ has been studied in detail
in~\cite{Khodjamirian:2012rm}, where the authors show that the
hadronic uncertainties are smaller than in $B\to K^*\ell\ell$. A
comparison of the LCSR estimate of the soft gluon contribution and the
QCDF estimate of the hard gluon contribution reveals that the soft
gluon exchange is subdominant with respect to QCDF hard gluon
exchange. Therefore, although in principle the same concerns about the
soft gluon contribution that we raised for $B \to K^*$ apply also in this
case, in practice the overall effect of soft gluons can be reasonably
neglected. In our computation we therefore only include hard gluon
exchange computed using the QCDF formalism in
ref.~\cite{Beneke:2001at}.
The long distance contributions for $B_s\to \phi\ell\ell$ and
$B_s\to \phi\gamma$ follow a similar theoretical derivation as those
for $B\to K^*\ell\ell$ and $B\to K^*\gamma$, respectively, barring the
fact that the spectator quark in the former is different from that in
the latter. No theoretical estimates of power corrections to the
infinite mass limit are available for the
$B_s \to \phi \ell \ell / \gamma$ decays and one has to rely on the
ones for the $B\to K^* \ell \ell / \gamma$ decays to get a handle on
the long distance contributions. The spectator quark effects can come
through the hard spectator scattering involving matrix elements of
$Q_{2}$, $P_6$ and $Q_{8g}$ computable in QCD
factorization~\cite{Beneke:2001at} which we include in our
computation. However, we do not include the sub-leading, and
numerically small, QCDF power corrections to spectator scattering
involving $Q_{8g}$~\cite{Kagan:2001zk,Feldmann:2002iw,Beneke:2004dp}
and contributions to weak spectator scattering involving $Q_{8g}$
beyond QCDF computed in
LCSR~\cite{Ball:2006eu,Dimou:2012un,Lyon:2013gba}. The effect of the
difference in all these spectator contributions is expected to be small,
firstly because they are numerically suppressed and, secondly, because the
effect is proportional to the small flavour $SU(3)$
breaking. Different approaches in relating the long distance
contributions in the $B\to K^* \ell \ell / \gamma$ channels to the
ones in the $B \to \phi \ell \ell / \gamma$ channels have been used in
the
literature~\cite{Altmannshofer:2014rta,Paul:2016urs,Descotes-Genon:2015uva},
which vary in the degree of correlation between the two. While
ref.~\cite{Descotes-Genon:2015uva} uses uncorrelated hadronic
uncertainties, refs.~\cite{Altmannshofer:2014rta,Paul:2016urs} have
left the two contributions highly correlated noting that the spectator
contribution is expected to be numerically small. We take an approach
similar to the latter, considering the insensitivity of the current
data to such effects, and use the same value of power corrections in
$B \to K^*$ and $B_s \to \phi$ amplitudes, even though this choice
reflects an admittedly optimistic simplification. We leave a
more detailed analysis of this assumption by relaxing the correlation
between the hadronic contributions in the two modes to a future
work~\cite{future}.
\section{Bayesian Fit of the Dipole and Semileptonic Operators}
\label{sec:fit}
\subsection{\textit{Experimental Information Considered}}
In this section we discuss the experimental measurements we use in our
fit. Please note that for the exclusive modes we make use of
measurements in the large recoil region only. Our choice rests on
the fact that the QCD long distance effects in the low recoil region
are substantially different from the large recoil
regime~\cite{Grinstein:2004vb,Bobeth:2010wg,Beylich:2011aq,Bobeth:2011gi}
and would require a dedicated analysis. For the fit in this study we
consider the following experimental information:
\begin{itemize}
\item $B \to K^* \ell \ell$\\
For the $B \to K^* \mu \mu$ channel we use the LHCb measurements of
CP-averaged angular observables extracted by means of the unbinned
maximum likelihood fit, along with the provided correlation
matrix~\cite{Aaij:2015oid}. Moreover, we employ the recent results
for CP-averaged angular observables from
ATLAS~\cite{ATLAS-CONF-2017-023} and the ones measured by
CMS~\cite{Khachatryan:2015isa,CMS-PAS-BPH-15-008}\footnote{For all
CMS data we use the 7, 8 TeV combined results, which can be found
in
\url{https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsBPH13010}.}
as well. Finally, we use the CP-averaged optimized angular
observables recently measured by
Belle~\cite{Wehle:2016yoi}\footnote{Belle measures the
$B^0 \to K^{*0} \mu \mu$ and $B^+ \to K^{*+} \mu \mu$
channels together, without providing the mixture ratio. On the
theoretical side, we can therefore use these measurements under
the approximation that QCD power corrections differentiating the
amplitudes of the two channels are small. We have numerically
checked that the impact of known QCD power
corrections~\cite{Beneke:2001at} is indeed at the percent level in
the observables of interest.}. Regarding the differential
branching fractions, we use the recently updated measurements from
LHCb~\cite{Aaij:2016flj} and the ones from
CMS~\cite{Khachatryan:2015isa}. For the $B \to K^* e e$ channel we
consider the LHCb results from~\cite{Aaij:2015dea} and the Belle
results from~\cite{Wehle:2016yoi}. The $R_{K^*}$ observable is considered according
to the recently presented measurements by LHCb~\cite{LHCb_RKstar} in both the low-$q^{2}$
and central-$q^{2}$ bins, see also eq.~(\ref{eq:RKstar}).
Our theoretical predictions are computed in the helicity basis,
whose relevant expressions can be found in~\cite{Jager:2012uw}; the
same framework is employed to study $B \to K^* \gamma$,
$B_s \to \phi \mu \mu$, $B_s \to \phi \gamma$ and
$B \to K \ell \ell$ channels. For the latter, we use the full set of
form factors extrapolated from the lattice results, along with the
provided correlation matrix~\cite{Bailey:2015dka}; for the remaining
channels, we use the full set of form factors estimated combining
LCSR and lattice results, along with the correlation
matrices~\cite{Straub:2015ica}. For the factorizable and
non-factorizable QCD power corrections, we refer to
Sec.~\ref{sec:hadronic}.
\item $B \to K^* \gamma$\\
We include in our analysis the HFAG average for the branching
fractions from~\cite{Amhis:2016xyh}.
\item $B_s \to \phi \mu \mu$\\
We consider the LHCb CP-averaged angular observables and
differential branching fractions measurements, along with the
provided correlation matrix~\cite{Aaij:2015esa}.
\item $B_s \to \phi \gamma$\\
We use the LHCb measurement of the branching fraction
from~\cite{Aaij:2012ita}.
\item $B \to K \ell \ell$\\
We employ the LHCb measurement of $B \to K e e$ differential
branching fraction and $R_K$ from~\cite{Aaij:2014ora}.
\item $B \to X_s \gamma$\\
We use the HFAG average from~\cite{Amhis:2016xyh}. We perform our
theoretical computation at NNLO in $\alpha_s$ and NLO in
$\alpha_{em}$, following ref.~\cite{Misiak:2015xwa} and references
therein.
\item $B_s \to \mu \mu$\\
We consider the latest measurement from LHCb~\cite{Aaij:2017vad} and
do not consider the measurement from CMS~\cite{Chatrchyan:2013bka},
which has the same central value as LHCb, but larger
uncertainty. Moreover, we choose not to use results for
$B_d \to \mu \mu$, since there are only upper bounds for this decay
channel so far~\cite{Chatrchyan:2013bka,Aaij:2017vad}. Our
theoretical predictions include NLO EW corrections, as well as NNLO
QCD corrections, following the detailed expressions obtained
in ref.~\cite{Bobeth:2013uxa}.
\end{itemize}
\subsection{\textit{Results of the Global Fit}}
\label{sec:results}
In this section we present the main results of our work. We perform
this study using \texttt{HEPfit}\xspace~\cite{HEPfit} relying on its Markov Chain
Monte Carlo based Bayesian analysis framework implemented with
BAT~\cite{Caldwell:2008fw}. We fit to the data using 16 real free
parameters that characterize the non-factorizable power corrections,
as was done in~\cite{Ciuchini:2015qxb}, along with the necessary set
of NP WCs. We assign to the hadronic parameters and the NP WCs flatly
distributed priors in the relevant ranges mentioned in
section~\ref{sec:framework}. The remaining parameters used in the fit
are listed in table~\ref{Tab:SM}. To better compare different
scenarios, we use the Information Criterion \cite{IC,MR2027492},
defined as
\begin{equation}
\label{eq:IC}
\mathit{IC} = -2 \overline{\log L} + 4 \sigma^2_{\log L}\,,
\end{equation}
where $\overline{\log L}$ is the average of the log-likelihood and
$\sigma^2_{\log L}$ is its variance. The second term in eq.~(\ref{eq:IC}) takes into account
the effective number of parameters in the model, allowing for a
meaningful comparison of models with different numbers of parameters.
Preferred models are expected to
give smaller $IC$ values.
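In practice, the $IC$ can be estimated directly from the log-likelihood values collected along the Markov chain. The following is a minimal sketch of that estimator, not the actual \texttt{HEPfit} implementation, using made-up Gaussian toy samples:

```python
import numpy as np

def information_criterion(log_l_samples):
    """IC = -2 * mean(log L) + 4 * var(log L), estimated over the
    log-likelihood values evaluated at each posterior sample."""
    log_l = np.asarray(log_l_samples, dtype=float)
    return -2.0 * log_l.mean() + 4.0 * log_l.var()

# Toy illustration: a model whose log-likelihood fluctuates more over the
# posterior (larger effective number of parameters) pays a larger penalty,
# even if its average fit quality is slightly better.
rng = np.random.default_rng(0)
ic_rigid = information_criterion(rng.normal(-100.0, 1.0, 10_000))
ic_flexible = information_criterion(rng.normal(-99.0, 3.0, 10_000))
```

In this toy setup `ic_rigid` comes out smaller than `ic_flexible`: the variance penalty outweighs the modest gain in average log-likelihood, mirroring how the $IC$ disfavours over-parameterized benchmarks.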
\begin{table*}[t!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Parameters} & \textbf{Mean Value} & \textbf{Uncertainty} & \textbf{Reference} \\[1mm]
\hline
$\alpha_{s}(M_{Z})$ & $0.1181$ & $ 0.0009 $ & \cite{Agashe:2014kda,deBlas:2016ojx}\\
$\mu_{W}$ (GeV) & $80.385$ & $-$ & \\
$m_{t}$ (GeV) & $173.34$ & $0.76$ & \cite{ATLAS:2014wva}\\
$m_{c}(m_c)$ (GeV) & $1.28$ & $0.02$ & \cite{Lubicz}\\
$m_{b}(m_b)$ (GeV) & $4.17$ & $0.05$ & \cite{Sanfilippo:2015era}\\
$f_{B_{s}}$ (MeV) & $226$ & $5$ & \cite{Aoki:2013ldr}\\
$f_{B_{s}}/ f_{B_{d}}$ & $1.204$ & $0.016$ & \cite{Aoki:2013ldr}\\
$\Delta \Gamma_s/\Gamma_s$ & $0.129$ & $0.009$ & \cite{Amhis:2016xyh} \\
$\lambda $ & $0.2250$ & $0.0006$ & \cite{Bona:2006ah,UTfit}\\
$A $ & $0.829$ & $0.012$ & \cite{Bona:2006ah,UTfit}\\
$\bar{\rho} $ & $0.132$ & $0.018$ & \cite{Bona:2006ah,UTfit}\\
$\bar{\eta} $ & $0.348$ & $0.012$ & \cite{Bona:2006ah,UTfit}\\
$f_{K^*,\vert\vert}$ (MeV) & $204$ & $7$ & \cite{Straub:2015ica}\\
$f_{K^*,\perp}(1\mathrm{GeV})$ (MeV) & $159$ & $6$ &\cite{Straub:2015ica} \\
$f_{\phi,\vert\vert}$ (MeV) & $233$ & $4$ &\cite{Straub:2015ica} \\
$f_{\phi,\perp}(1\mathrm{GeV})$ (MeV) & $191$ & $4$ & \cite{Straub:2015ica}\\[0.6mm]
\hline
&&&\\[-3mm]
$\lambda_{B}$ (MeV) & $350$ & $ 150$ & \cite{Bosch:2001gv}\\
$a_{1}(\bar{K}^{*})_{\perp, \, ||}$ & $0.04$ & $0.03$ & \cite{Ball:2005vx}\\
$a_{2}(\bar{K}^{*})_{\perp, \, ||}$ & $0.05$ & $ 0.1$ & \cite{Ball:2006nr}\\
$a_{2}(\phi)_{\perp, \, ||}$ & $0.23$ & $ 0.08$ & \cite{Ball:2007rt}\\
$a_{1}(K)$ & $0.06$ & $0.03$ & \cite{Ball:2005vx}\\
$a_{2}(K)$ & $0.115$ & $ -$ & \cite{Ball:2004ye}\\
\hline
\end{tabular}
\caption{\it{Parameters used in the analysis. The Gegenbauer
parameters and $\lambda_B$ have flat priors with half width
reported in the third column. The remaining ones have Gaussian
priors. Meson masses, lepton masses, $s$-quark mass and
electroweak couplings are fixed at the PDG value
\cite{Agashe:2014kda}.}}
\label{Tab:SM}
\end{table*}
The results for NP WCs for the several cases that we study can be
found in figures~\ref{fig:fig1}--\ref{fig:fig6}, where the $IC$ value
for each model is also reported, and in
tables~\ref{tab:PMDpars}--\ref{tab:PDDpars} in~\ref{sec:tab}.
In tables~\ref{tab:PMDobs}--\ref{tab:PDDobs}, we report the results of
the fit for observables of interest. We observe that all cases have
comparable $IC$ values except cases \textbf{(IV)} and \textbf{(V)},
which are disfavoured in the PMD approach while they remain viable in
the PDD one. The main difference between the two approaches is that
angular observables, in particular $P_5'$, call for NP in
$C_{9,\mu}^{NP}$ in the PMD approach, while they can be accommodated
within the SM in the PDD one.
Let us discuss the various cases in more detail. It is important to
stress that the evidence of NP in our fit for cases
\textbf{(I)}--\textbf{(V)} is always larger than $3\sigma$ for one of
the semileptonic NP WCs used in the analysis, given the need for a
source of LFUV primarily from $R_{K,K^{*}}$ measurements. In
particular, we remark that in the PMD scenarios of cases \textbf{(I)}
and \textbf{(II)} we get evidence for NP at more than
$5\sigma$. However, looking at the corresponding PDD scenarios, the NP
evidence gets significantly reduced, roughly between $3\sigma$ and
$4\sigma$. The reduction in the significance comes from the larger
hadronic uncertainties in the PDD approach which weaken the
constraining power of the angular observables on the NP WCs.
Concerning case \textbf{(III)}, we observe very similar findings to the
ones obtained for case \textbf{(I)}, since the effective
coupling for the radiative dipole operator is well constrained,
especially from the inclusive $B \to X_s \gamma$ branching fraction.
Regarding case \textbf{(IV)}, in which we vary
the three NP parameters $C_{7}^{NP}, C^{NP}_{10,\mu}$ and
$C^{NP}_{10,e}$, the model comparison between
the PDD and PMD realization of this NP benchmark is quite
informative: NP effects in the dipole operator and in the axial
semileptonic currents cannot address at the same time $R_{K,K^{*}}$
ratios and the $P_{5}'$ anomaly in a satisfactory way when we stick to
small non-factorizable QCD power corrections; however, this is no
longer true when we allow for a more conservative estimate of the
hadronic uncertainties. In particular, the tension in the
fit coming from the angular analysis of $B \to K^{*} \mu \mu $ can be
now addressed by large QCD effects as those given in
eq.~(\ref{eq:hlambda}), while a $C^{NP}_{10,e} \neq 0 $ at about $3\sigma$
can successfully describe all the observational hints of LFUV shown
by current measurements.
This interesting possibility of \textit{axial lepton-flavor violating NP} is not found in other
global analyses~\cite{Altmannshofer:2014rta,Descotes-Genon:2015uva,Chobanova:2017ghn,Altmannshofer:2017fio}, as it proceeds from the conservative treatment of hadronic uncertainties we proposed
in ref.~\cite{Ciuchini:2015qxb}.
Concerning tables~\ref{tab:PMDobs}--\ref{tab:PDDobs}
of~\ref{sec:tab}, we would like
to point out the pattern displayed by the
transverse ratios $R_{K^*}^T$ and $R_{\phi}^T$: cases \textbf{(I)}--\textbf{(III)}
predict these values to be $\sim 1$ with a small error, while
the remaining cases give different predictions with the central value
ranging between $\sim 0.7$ and $\sim 0.8$. Therefore,
obtaining experimental information on transverse ratios may help
in discerning between the different NP scenarios.
We then show results for case \textbf{(V)}, in which we vary
$C_{7}^{NP}$, $C_{9,\mu}^{NP}$, $C_{9,e}^{NP}$ and correlate the
semileptonic vector and axial currents according to
$C_{9,\mu}^{NP}=-C_{10,\mu}^{NP}$ and $C_{9,e}^{NP}=-C_{10,e}^{NP}$.
In analogy to case \textbf{(IV)}, only within the PDD approach do we find
for this NP benchmark a fairly good description of the data, with
$C_{9,\mu}^{NP} = -C_{10,\mu}^{NP}$ compatible with zero at
$\sim 2\sigma$. Again, we are presented with the case where deviations
in angular observables are addressed by large QCD power corrections,
while LFUV is driven by semielectronic operators. Looking back at
tables~\ref{tab:PMDobs}--\ref{tab:PDDobs}, we note that for this case,
as well as for cases \textbf{(IV)} and \textbf{(VI)}, both transverse
and longitudinal muon over electron ratios in the central-$q^{2}$ bin,
namely $R^T_{K,K^{*},\phi} \,$ and $R^L_{K,K^{*},\phi} \,$, are
characterized by similar central values.
We close our presentation with an analysis of case \textbf{(VI)} in
which we float simultaneously $C_{7}^{NP}$, $C_{9,\mu}^{NP}$,
$C_{9,e}^{NP}$, $C_{10,\mu}^{NP}$, and $C_{10,e}^{NP}$. As can be seen
from figure~\ref{fig:fig6}, current measurements are informative
enough to constrain, at the same time, all the NP WCs both in the PMD
and PDD approaches. In particular, within the latter case, a
nontrivial interplay among NP effects encoded both in
$C_{9,\mu }^{NP}$ and $C_{10,e }^{NP}$, together with the hadronic
contributions reported in table~\ref{tab:PDDpars}, produces the
weakest hint in favour of NP provided by our global analysis --
sitting between $2\sigma$ and $3\sigma$ level -- while allowing for a
very good description of the entire data set, similar to the other
cases. The power corrections we found are larger than those obtained
in ref.~\cite{Khodjamirian:2010vf}, but smaller than those required by
the SM fit of $B\to K^*\mu\mu$~\cite{Ciuchini:2015qxb}. As discussed
in detail in refs.~\cite{Ciuchini:2016weo,Ciuchini:2017gva}, the size
obtained for the power corrections is compatible with the naive power
counting relative to the leading amplitude. We stress (once again)
that a more optimistic attitude towards the estimate of QCD power
corrections (PMD approach) leads to a much stronger claim in
favour of NP, at a statistical significance larger than $5\sigma$.
In tables~\ref{tab:PMDpars}--\ref{tab:PDDpars} we report mean and
standard deviation for the NP WCs and absolute values of $h_{\lambda}$
for all the cases considered in the analysis. It is also relevant to
observe that, once we switch on NP effects through $C^{NP}_{9,\mu}$ in
order to attempt a simultaneous explanation of observables such as
$R_{K,K^{*}}$ and $P_{5}'$ in the PDD approach, we find values for
$|h_\lambda^{(1,2)}|$ compatible with zero at $\sim
1\sigma$. Conversely, if we set $C^{NP}_{9,\mu}=0$ then a nonvanishing
$|h_{-}^{(2)}|$ is needed to account for the angular observables, as
found in ref.~\cite{Ciuchini:2015qxb}, showing that one cannot
disentangle hadronic uncertainties and NP in $B\to K^*\mu\mu$ at
present.
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C9NPmu_C9NPe_FULLKD.pdf}}
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C9NPmu_C9NPe_KD.pdf}}
\caption{\textit{The two NP parameter fit using $C^{NP}_{9,\mu}$ and
$C^{NP}_{9,e}$. Here and in the following, the left green panel
shows the results for the PMD approach and the right red panel
shows that for the PDD one. In the 1D distributions we show the
$16^{th}$, $50^{th}$ and $84^{th}$ percentiles marked with the
dashed lines. In the correlation plots we show the 1, 2 and
$3\sigma$ contours in decreasing degrees of transparency. The blue
square and lines identify the values of the NP WCs in the SM
limit. The numbers at the bottom left corner of the 2D plots refer
to the correlation. We also report $IC$ values for the two
approaches (see eq.~(\ref{eq:IC})). Preferred models are expected
to give smaller $IC$ values.}}
\label{fig:fig1}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C9NPmu_C10NPmu_FULLKD.pdf}}
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C9NPmu_C10NPmu_KD.pdf}}
\caption{\textit{The two NP parameter fit using $C^{NP}_{9,\mu}$ and
$C^{NP}_{10,\mu}$. See caption of figure \ref{fig:fig1} for the colour coding and further details.}}
\label{fig:fig2}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C9NPmu_C9NPe_FULLKD.pdf}}
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C9NPmu_C9NPe_KD.pdf}}
\caption{\textit{The three NP parameter fit using $C^{NP}_{7}$,
$C^{NP}_{9,\mu}$ and $C^{NP}_{9,e}$. See caption of figure
\ref{fig:fig1} for the colour coding and further details.}}
\label{fig:fig3}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C10NPmu_C10NPe_FULLKD.pdf}}
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C10NPmu_C10NPe_KD.pdf}}
\caption{\textit{The three NP parameter fit using $C^{NP}_{7}$,
$C^{NP}_{10,\mu}$ and $C^{NP}_{10,e}$. See caption of figure
\ref{fig:fig1} for the colour coding and further details.}}
\label{fig:fig4}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C9NPmu_eq_mC10NPmu_C9NPe_eq_mC10NPe_FULLKD.pdf}}
\subfigure{\includegraphics[width=.45\textwidth]{triangle_plot_C7NP_C9NPmu_eq_mC10NPmu_C9NPe_eq_mC10NPe_KD.pdf}}
\caption{\textit{The three NP parameter fit using $C^{NP}_{7}$,
$C^{NP}_{9,\mu}$, $C^{NP}_{9,e}$ and
$C^{NP}_{10,\mu,e}=-C^{NP}_{9,\mu,e}$. See caption of figure
\ref{fig:fig1} for the colour coding and further details.}}
\label{fig:fig5}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=.5\textwidth]{triangle_plot_all_WCs_FULLKD.pdf}}
\subfigure{\includegraphics[width=.5\textwidth]{triangle_plot_all_WCs_KD.pdf}}
\caption{\textit{The five NP parameter fit using $C^{NP}_{7}$,
$C^{NP}_{9,\mu}$, $C^{NP}_{9,e}$,
$C^{NP}_{10,\mu}$ and $C^{NP}_{10,e}$. See caption of figure
\ref{fig:fig1} for the colour coding and further details.}}
\label{fig:fig6}
\end{figure}
\section{Discussion}
\label{sec:conclusions}
In this work, we critically examined several BSM scenarios in order to
possibly explain the growing pattern of $B$ anomalies, recently
enriched by the $R_{K^*}$ measurement performed by the LHCb
collaboration~\cite{LHCb_RKstar}. We carried out our analysis in an
effective field theory framework, describing the non-factorizable
power corrections by means of 16 free parameters in our fit along the
lines of ref.~\cite{Ciuchini:2015qxb}.
We performed all our fits using two different hadronic models. The
first approach, labelled PMD, relies completely on the
phenomenological model from ref.~\cite{Khodjamirian:2010vf} and
corresponds to the more widely used choice in the literature. The
second one, named PDD, imposes the result of
ref.~\cite{Khodjamirian:2010vf} only at $q^2\le1$,\footnote{This
choice is motivated in ref.~\cite{Ciuchini:2015qxb}.} allowing the
data to drive the hadronic contributions in the higher invariant mass
region.
Regarding the NP contributions, we analyzed six different benchmark
scenarios, differentiated by distinct choices of NP WCs employed in
the fits. Case \textbf{(I)} allows for $C^{NP}_{9,\mu}$ and
$C^{NP}_{9,e}$, while case \textbf{(II)} considers the scenario with
$C^{NP}_{9,\mu}$ and $C^{NP}_{10,\mu}$; case \textbf{(III)} studies NP
effects coming as $C^{NP}_{7}$, $C^{NP}_{9,\mu}$ and $C^{NP}_{9,e}$,
and case \textbf{(IV)} is the same as the latter but with $C^{NP}_{10}$
instead of $C^{NP}_{9}$; case \textbf{(V)} studies the possibility
described in the third case with
$C_{10,\mu}^{NP} = - C_{9,\mu}^{NP}$ and
$C_{10,e}^{NP} = - C_{9,e}^{NP}$ enforced; finally, case
\textbf{(VI)} considers the general case with
all the five NP WCs being allowed to float independently. Our main results are
collected in figures~\ref{fig:fig1}--\ref{fig:fig6} and
also reported in tables~\ref{tab:PMDpars}--\ref{tab:PDDobs}.
The comparison of different scenarios using the $IC$ shows that all
the considered cases are on the same footing except for cases
\textbf{(IV)} and \textbf{(V)}. These cases are strongly disfavoured
in the PMD approach, as there is no $C_{9,\mu}^{NP}$ in case
\textbf{(IV)} to account for the deviation in $P_5'$, while
$C_{9,\mu}^{NP}$ is constrained by its correlation with
$C_{10,\mu}^{NP}$ and the measured value of BR$(B_s\to\mu\mu)$ in case
\textbf{(V)}.
In fact, from our analysis of radiative and (semi)leptonic
$B$ decays we identify two classes of viable NP scenarios:
\begin{itemize}
\item The widely studied $C^{NP}_{9,\mu} \neq 0$ scenario: from
figures~\ref{fig:fig1}--\ref{fig:fig3}, we find a remarkable
$\gtrsim 5\sigma$ evidence in favour of $C^{NP}_{9,\mu} \neq 0$ in
the PMD approach. It is indeed nontrivial that a single NP WC can
explain all the present anomalies in $b\to s$ transitions
\cite{Beaujean:2013soa,Hurth:2013ssa,Altmannshofer:2014rta,Descotes-Genon:2015uva,Chobanova:2017ghn,Altmannshofer:2017fio}.
However, in the more conservative PDD approach, the significance of
a nonvanishing $C^{NP}_{9,\mu}$ drops to about $3\sigma$, mainly
driven by LFUV.
\item An alternative scenario with nonvanishing $C^{NP}_{10,e}$, which
emerges in the presence of large hadronic corrections to the
infinite mass limit, namely our PDD approach. To our knowledge, a NP
electronic axial current has not been studied in the literature,
since it does not provide a satisfactory description of the angular
observables within the commonly used PMD approach. We think that the
present theoretical status of power correction calculations is not
robust enough to discard this interesting NP scenario.
\end{itemize}
Finally, the most general fit we performed, namely case \textbf{(VI)},
confirms in the PDD approach that both scenarios above are viable,
although a slight preference for $C^{NP}_{9,\mu} \neq 0$ is found.
More data are needed to assess what kind of NP scenario (if the
anomalies persist) is realized in Nature.
\begin{acknowledgements}
The research leading to these results has received
funding from the European Research Council under the European Union's
Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreements
n. 279972 ``NPFlavour'' and n. 267985 ``DaMeSyFla''.
M.C. is associated to the Dipartimento di Matematica e
Fisica, Universit{\`a} di Roma Tre, and E.F. and L.S. are associated
to the Dipartimento di Fisica, Universit{\`a} di Roma ``La Sapienza''.
We wish to express our sincere gratitude to Alfredo Urbano for his
invaluable scientific correspondence concerning these flavourful
Easter eggs of New Physics.
\end{acknowledgements}
{\onecolumn
\section{Introduction}
Kinetic representations of hydrodynamics are potentially applicable to flow regimes beyond the reach of classical (near-equilibrium) fluid mechanics. Nevertheless, the derivation and solution of high-order hydrodynamic equations for far-from-equilibrium flows with arbitrary geometry remains an open challenge. Computational methods are a valuable alternative, but even with the aid of efficient algorithms the solution of Boltzmann equations is a formidable task. Among different kinetic approaches, the lattice Boltzmann--BGK (LBGK) method has been able to span from scientific research to large-scale engineering applications. The LBGK method has two distinctive components largely responsible for its success: discretization of velocity space and adoption of the Bhatnagar-Gross-Krook (BGK) collision ansatz. Decades of work have established that LBGK models correctly represent macroscopic physics at the Navier--Stokes (N--S) level of approximation. In contrast, it is not widely accepted in the fluid mechanics community that high-order LBGK models provide hydrodynamic descriptions beyond the N--S equations. Efforts in establishing LBGK as a legitimate model for far-from-equilibrium flows must address two key points: the effect of velocity discretization errors and the validity limits of the BGK ansatz.
The rigorous formulation of the LBGK method by Shan et al. (2006) places LBGK in the group of Galerkin procedures for the Boltzmann--BGK equation (BE--BGK) governing the evolution of the single-particle distribution $f$. In $N$-order LBGK procedures the approximate solution is sought within a function space $\mathbb{H}^N$ spanned by Hermite polynomials of order $\le N$. In this work, within the framework of the Hermite-space approximation $f\in\mathbb{H}^N$, we present a technique to systematically derive closed moment equations in the form of (single) $N$-order partial-differential equations (PDEs). At each order of approximation an increasing number of moments of $f$ is preserved and, thus, the derived hierarchy of equations tends to the exact BE--BGK hydrodynamics as $N\to\infty$. To assess the derived hydrodynamic relations we perform numerical analysis with $N$-order LBGK models \cite{Colosqui2009,Shan2006} and DSMC \cite{Alexander97} for the case of Kolmogorov flow in a wide range of Knudsen/Weissenberg numbers ($0.01\le Wi=\tau/T \le 10$); this free-space problem allows us to remove from the analysis all issues related to solid-fluid interaction and the choice of kinetic boundary condition (e.g. diffuse scattering, bounce-back). Comparison of the derived equations for $f\in\mathbb{H}^N$ against kinetic simulations and previous theoretical expressions \cite{Chen2007,Colosqui2009} obtained from the exact solution of the BE--BGK uncovers capabilities and limitations of lattice discretization and the BGK model under general non-equilibrium conditions.
\section{High-order Hydrodynamics from Boltzmann--BGK}
The single-particle distribution $f({\bf x},{\bf v},t)$ can determine all macroscopic properties (e.g. thermohydrodynamic quantities) observed in configuration space. In describing the flow of simple fluids we employ the velocity moments
\begin{equation}
{\bf M}^{(n)} ({\bf x},t) = \int f({\bf x},{\bf v},t) {\bf v}^n d{\bf v}.
\label{N_mom}
\end{equation}
The $n$-order moment $\{{\bf M}^{(n)}\equiv{M}^{(n)}_{i_1,i_2,...,i_{n}};~i_k=1,D\}$ is a symmetric tensor of rank $n$ and $D$ is the velocity-space dimension. In similar fashion, hydrodynamic moments at local thermodynamic equilibrium are ${\bf M}^{(n)}_{eq}=\int f^{eq} {\bf v}^n d{\bf v}$. The low-order moments $(n\le2)$ relate to conserved quantities; namely mass, momentum, and energy:
\begin{eqnarray}
{\bf M}^{(0)}={\bf M}^{(0)}_{eq}\!\!\!&=& \rho; \\
{\bf M}^{(1)}={\bf M}^{(1)}_{eq}\!\!\!&=&\rho {\bf u};\\
\mathrm{trace}({\bf M}^{(2)})=\mathrm{trace}({\bf M}^{(2)}_{eq})\!\!\!&=& \rho (u^2 + D \theta);
\end{eqnarray}
here we define $\theta={k_B T}/{m}$ while $T$ is the temperature, $k_B$ the Boltzmann constant, and $m$ the molecular mass. We assume that the evolution of $f({\bf x},{\bf v},t)$ is governed by the BE--BGK \cite{Cercignani}
\begin{equation}
\frac{\partial f}{\partial t}+{\bf v}\cdot {\boldsymbol \nabla} f =-\frac{f-f^{eq}}{\tau}
\label{BE--BGK}
\end{equation}
where $\tau$ is the so-called single relaxation time and the local equilibrium distribution $f^{eq}$ is given by
\begin{equation}
f^{eq}({\bf x},{\bf v},t)=\frac{\rho}{(2\pi\theta)^{\frac{D}{2}}}\exp\left[-\frac{({\bf v}-{\bf u})^{2}}{2\theta}\right].
\label{feqc}
\end{equation}
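The conserved low-order moments of $f^{eq}$ can be checked numerically. Below is a minimal one-dimensional ($D=1$) sketch in Python; the values of $\rho$, $u$ and $\theta$ are illustrative, and the brute-force velocity integration is ours, not part of the original analysis:

```python
import math

def feq(v, rho=1.0, u=0.3, theta=1.0):
    # 1D Maxwell-Boltzmann distribution f^eq (D = 1)
    return rho / math.sqrt(2*math.pi*theta) * math.exp(-(v - u)**2 / (2*theta))

def moment(n, vmax=12.0, nv=20001):
    # brute-force velocity integration of M^(n) = int f^eq v^n dv
    dv = 2*vmax / (nv - 1)
    return sum(feq(-vmax + i*dv) * (-vmax + i*dv)**n for i in range(nv)) * dv

rho, u, theta = 1.0, 0.3, 1.0
assert abs(moment(0) - rho) < 1e-8                 # mass
assert abs(moment(1) - rho*u) < 1e-8               # momentum
assert abs(moment(2) - rho*(u**2 + theta)) < 1e-8  # energy (D = 1)
```

The same check extends to higher $D$ since the Maxwellian factorizes over velocity components.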
An evolution equation for the $n$-order moment (\ref{N_mom}) can be readily obtained via moment integration over the BE--BGK (\ref{BE--BGK}):
\begin{equation}
\left(1+\tau \frac{\partial}{\partial t}\right) {\bf M}^{(n)}=
{\bf M}^{(n)}_{eq} - \tau {\boldsymbol \nabla} \cdot {\bf M}^{(n+1)}
;~~n=0,\infty.
\label{BE--BGK_mom}
\end{equation}
\noindent The obtained moment equation (\ref{BE--BGK_mom}) is clearly not closed as it involves the higher-order moment ${\bf M}^{(n+1)}$.
\subsection{High-order hydrodynamic equations}
\label{high-order}
Leaving temporarily aside the problem of closing Eq.~(\ref{BE--BGK_mom}) let us observe that the evolution of ${\bf M}^{(n)}$ is actually determined by all higher-order moments $\{{\bf M}^{(k)};~k>n\}$. From Eq.~(\ref{BE--BGK_mom}) we find that the first time derivative of ${\bf M}^{(n)}$ is equal to the divergence of ${\bf M}^{(n+1)}$, i.e. the flux of moments one-order above. In the same way, the dynamics of ${\bf M}^{(n+1)}$ is determined by ${\bf M}^{(n+2)}$ and so on. Climbing up the infinite moment hierarchy, one can express the evolution of ${\bf M}^{(n)}$ in terms of arbitrary high-order moments $\{{\bf M}^{(n+k)};~k \ge 1\}$ after suitable combination of the moment equations. Multiply Eq.~(\ref{BE--BGK_mom}) by $\left(1+\tau \frac{\partial}{\partial t}\right)$:
\begin{equation}
\left(1+\tau \frac{\partial}{\partial t}\right)^2 {\bf M}^{(n)}=
\left(1+\tau \frac{\partial}{\partial t}\right)
\left[{\bf M}^{(n)}_{eq} - \tau {\boldsymbol \nabla} \cdot {\bf M}^{(n+1)}\right],
\label{BE--BGK_mom1}
\end{equation}
\noindent and take divergence of the moment equation for the following $(n\!+\!1)$-order:
\begin{equation}
\left(1+\tau \frac{\partial}{\partial t}\right)
{\boldsymbol \nabla} \cdot {\bf M}^{(n+1)}=
{\boldsymbol \nabla} \cdot \left[
{\bf M}^{(n+1)}_{eq} - \tau {\boldsymbol \nabla} \cdot {\bf M}^{(n+2)}\right].
\label{BE--BGK_mom2}
\end{equation}
\noindent By using Eq.~(\ref{BE--BGK_mom2}) one can eliminate the term $\left(1+\tau \frac{\partial}{\partial t} \right){\boldsymbol \nabla} \cdot {\bf M}^{(n+1)}$ in Eq.~(\ref{BE--BGK_mom1}) to obtain
\begin{eqnarray}
\label{M2}
\left(1+\tau \frac{\partial}{\partial t}\right)^2 {\bf M}^{(n)}&=&
\left(1+\tau \frac{\partial}{\partial t}\right) {\bf M}^{(n)}_{eq}
- \tau {\boldsymbol \nabla} \cdot {\bf M}^{(n+1)}_{eq} \nonumber\\
&+& \tau^2 {\boldsymbol \nabla} \cdot{\boldsymbol \nabla} \cdot {\bf M}^{(n+2)}.
\end{eqnarray}
\noindent The resulting expression, involving the evolution equations for ${\bf M}^{(n)}$ and ${\bf M}^{(n+1)}$, takes the form of a second-order PDE. The same procedure that led to Eq.~(\ref{M2}) can be applied to eliminate ${\bf M}^{(n+2)}$ and performed iteratively an arbitrary number of times as successively higher-order moments appear. After $(N-1)$ iterations we arrive at the general expression
\begin{eqnarray}
\label{M_N}
&&\!\!\!\left(1+\tau \frac{\partial}{\partial t}\right)^N\!\!{\bf M}^{(n)}=~~~\nonumber\\
&&\sum_{k=0}^{N-1} (-\tau {\boldsymbol \nabla}\cdot)^k \left(1+\tau \frac{\partial}{\partial t}\right)^{N-(k+1)}{\bf M}^{(n+k)}_{eq}\nonumber\\
&&+(-\tau {\boldsymbol \nabla}\cdot)^N{\bf M}^{(n+N)}.
\end{eqnarray}
Notice here that the term $({\boldsymbol \nabla}\cdot)^N{\bf M}^{(n+N)}$ represents a tensor of rank $n$. The time evolution of the thermohydrodynamic variables corresponding to ${\bf M}^{(n)}$ is now given by Eq.~(\ref{M_N}) in the form of an $N$-order PDE. A single $N$-order equation of this kind implicitly involves the evolution of $N$ velocity moments, i.e. those of order $n$ to $n+N-1$. Equilibrium moments, readily computed from $f^{eq}$ (\ref{feqc}), are explicit functions of mass, momentum, and energy; in solving Eq.~(\ref{M_N}) one still faces the problem of evaluating the non-equilibrium moment ${\bf M}^{(n+N)}$ and its $N$-order space derivatives. As elaborated in the next section, a possible way to close Eq.~(\ref{M_N}) is to express the non-equilibrium distribution $f$ in terms of its leading-order moments $\{{\bf M}^{(k)}; k < n+N\}$ by means of a finite Hermite series.\\
\noindent{\it Unidirectional shear flows}. For the sake of analytical simplicity, we focus on the case of unidirectional shear flow ${\bf u}=u {\bf{i}}$ with spatial gradients ${\boldsymbol \nabla}={\nabla}{\bf j}\equiv \partial_y{\bf j}$ and within the nearly isothermal regime ($M=u/\sqrt{\theta}\ll 1$). Note that the studied unidirectional flow is exactly incompressible; hereinafter we adopt $\rho=1$. The fundamental hydrodynamic variables thus are
\begin{eqnarray}
\rho({\bf x},t)&=& 1, \\
{\bf u}({\bf x},t)&=&u(y,t) {\bf i},\\
\theta({\bf x},t)&=&\theta+{\cal O}(M^2);
\end{eqnarray}
while the components of the $n$-order moment ${\bf M}^{(n)}$ are
\begin{equation}
M^{(n)}_{i_1,i_2,...,i_n}({\bf x},t)
= \int f v_{i_1}v_{i_2}...v_{i_n} d{\bf v}
\equiv< v_{i_1}v_{i_2}...v_{i_n}>.
\end{equation}
For the studied flow the underlying distribution function must not vary along the $x$- and $z$-axes ($\partial_x=\partial_z=0$) while $<v_y>=<v_z>=0$; it follows that only the moment components $<v_x v_y^k>$ ($k=0,\infty$) exhibit spatial variation. The $N$-order equation (\ref{M_N}) for the fluid velocity $u(y,t)$ then reduces to
\begin{eqnarray}
\label{eq_Nn}
&&\tau\frac{\partial}{\partial t}\left(1 +\tau \frac{\partial}{\partial t}\right)^{(N-1)} \!\! u = \nonumber\\
&&\sum_{k=1}^{N-1} (-\tau \nabla)^k \left(1 +\tau \frac{\partial}{\partial t}\right)^{(N-1-k)} \!\!\!\!\!\!\! <v_x v_y^{k}>_{eq} \nonumber\\
&+&(-\tau \nabla)^N \!\!<v_x v_y^N>,
\end{eqnarray}
after recalling conservation of momentum $u\!=<\!v_x\!>=<v_x\!>_{eq}$. Hereafter, we refer to each $N$-order PDE defined by Eq.~(\ref{eq_Nn}) as the $N$-order hydrodynamic description of the flow. More explicitly, Eq.~(\ref{eq_Nn}) defines the following approximations for the studied flow: first-order ($N=1$)
\begin{equation}
\frac{\partial u}{\partial t} = -\nabla <v_x v_y>,
\label{u_1}
\end{equation}
second-order ($N=2$)
\begin{eqnarray}
\label{u_2}
\left(1 +\tau \frac{\partial}{\partial t} \right) \frac{\partial u}{\partial t} &=& - \nabla <v_x v_y>_{eq}\nonumber\\
&+& \tau \nabla^2 <v_x v_y^2>,
\end{eqnarray}
third-order ($N=3$)
\begin{eqnarray}
\label{u_3}
\left(1 +\tau \frac{\partial}{\partial t} \right)^2 \frac{\partial u}{\partial t} &=&
- \left(1 +\tau \frac{\partial}{\partial t} \right) \nabla <v_x v_y>_{eq}\nonumber\\
&+& \tau \nabla^2 <v_x v_y^2>_{eq}\nonumber\\
&-&\tau^2 \nabla^3 <v_x v_y^3>,
\end{eqnarray}
and fourth-order ($N=4$)
\begin{eqnarray}
\label{u_4}
\left(1 +\tau \frac{\partial}{\partial t} \right)^3 \frac{\partial u}{\partial t} &=&
- \left(1 +\tau \frac{\partial}{\partial t} \right)^2 \nabla <v_x v_y>_{eq}\nonumber \\
&+&\left(1 +\tau \frac{\partial}{\partial t} \right) \tau \nabla^2 <v_x v_y^2>_{eq}\nonumber\\
&-& \tau^2 \nabla^3 <v_x v_y^3>_{eq}\nonumber\\
&+& \tau^3 \nabla^4 <v_x v_y^4>.
\end{eqnarray}
The resulting expressions fail to be closed solely due to the presence of the high-order terms $(-\tau \nabla)^N \!\!<v_x v_y^N>$. If the high-order terms are dominant, $|(\tau \nabla)^N|>|(\tau \nabla)^{N-1}|$, precise knowledge of the distribution $f$ is required for an accurate calculation of the high-order (non-equilibrium) moments in Eqs.~(\ref{u_1})--(\ref{u_4}). On the other hand, flow regimes where $|(\tau \nabla)^N|<|(\tau \nabla)^{N-1}|$ permit certain approximations of $f$ in terms of its $N$ leading-order moments that produce accurate equations in closed form.
\section{Hermite expansion of the Boltzmann distribution}
\label{sec:Hermite}
As originally proposed by Grad (1949)\nocite{Grad}, the single-particle distribution can be expressed in terms of hydrodynamic moments via Hermite series expansion
\begin{equation}
f({\bf x},{\bf v},t)= f^{M}({\bf v}) \sum_{n=0}^{\infty}
\frac{1}{n!} {\bf C}^{(n)}({\bf x},t) : {\bf H}^{(n)}({\bf v})
\label{f_H}
\end{equation}
with $f^{M}$ being the Gaussian weight (i.e. Maxwellian distribution for $\rho=1$):
\begin{equation}
f^{M}({\bf v})=\frac{1}{(2\pi\theta)^{D/2}} \exp \left( -\frac{~{\bf v}^2}{2\theta}\right).
\label{f_M}
\end{equation}
The Hermite polynomials in velocity are defined by the Rodrigues' formula:
\begin{equation}
{\bf H}^{(n)}({\bf v})=(-1)^n \theta^{\frac{n}{2}} e^{\frac{{\bf v}^2}{2\theta}} \nabla^{n} e^{-\frac{{\bf v}^2}{2\theta}},
\label{hermite_pol}
\end{equation}
while the Hermite coefficients are
\begin{equation}
{\bf C}^{(n)}({\bf x},t)=\int f({\bf x},{\bf v},t) {\bf H}^{(n)}({\bf v}) d{\bf v}.
\label{hermite_coeff}
\end{equation}
Both ${\bf H}^{(n)}$ and ${\bf C}^{(n)}$ are $n$-rank symmetric tensors; the product ${\bf C}^{(n)}:{\bf H}^{(n)}$ in Eq.~(\ref{f_H}) and hereafter represents full contraction. Each component of ${\bf H}^{(n)}({\bf v})$ is an $n$-degree polynomial in velocity ${\bf v}$, the first four Hermite polynomials in particular are
\begin{equation}
H^{(0)}({\bf v})=1,
\label{h0}
\end{equation}
\begin{equation}
H_{i}^{(1)}({\bf v})=\frac{1}{\theta^{\frac{1}{2}}}v_{i},
\label{h1}
\end{equation}
\begin{equation}
H_{ij}^{(2)}({\bf v})=\frac{1}{\theta}
(v_{i} v_{j}-\theta\delta_{ij}),
\label{h2}
\end{equation}
and
\begin{equation}
H_{ijk}^{(3)}({\bf v})=\frac{1}{\theta^{\frac{3}{2}}}
[v_{i}v_{j}v_{k}-\theta (v_{i} \delta_{jk}+v_{j}\delta_{ik} + v_{k}\delta_{ij})].
\label{h3}
\end{equation}
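In one dimension the polynomials (\ref{h0})--(\ref{h3}) are probabilists' Hermite polynomials evaluated at $v/\sqrt{\theta}$ and can be generated by the standard three-term recurrence $H^{(n+1)}=(v/\sqrt{\theta})\,H^{(n)}-n\,H^{(n-1)}$; this identification is our reading of the definitions above. A short Python sketch checking the recurrence against the explicit forms:

```python
import math

def hermite(n, v, theta=1.0):
    # H^(n)(v) via the probabilists' recurrence He_{n+1}(x) = x He_n - n He_{n-1},
    # evaluated at x = v / sqrt(theta)
    x = v / math.sqrt(theta)
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x*h - k*h_prev
    return h

theta, v = 1.3, 0.7
assert abs(hermite(2, v, theta) - (v*v - theta)/theta) < 1e-12            # Eq. (h2)
assert abs(hermite(3, v, theta) - (v**3 - 3*theta*v)/theta**1.5) < 1e-12  # Eq. (h3)
```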
Hermite polynomials satisfy the orthogonality condition
\begin{equation}
<{\bf H}^{(m)},{\bf H}^{(n)}\!>=\!\int f^{M} {\bf H}^{(m)}{\bf H}^{(n)} d{\bf v}=0 ~(\forall~m\ne n)
\label{orthogonal}
\end{equation}
and, hence, span the Hilbert space of square-integrable functions $g_i({\bf v})$ with inner product $<g_i,g_j>=\int f^{M} g_i~g_j d{\bf v}$. Another fundamental advantage of employing the Hermite polynomial basis is that the $n$-order Hermite coefficient is a linear combination of the leading $n$-order moments of $f$. For example,
\begin{equation}
{\bf C}^{(0)}={\bf M}^{(0)}=\rho,
\label{c0h0}
\end{equation}
\begin{equation}
\theta^{\frac{1}{2}}{\bf C}^{(1)}={\bf M}^{(1)}=\rho{\bf u},
\label{c1h1}
\end{equation}
\begin{equation}
\theta{\bf C}^{(2)}={\bf M}^{(2)}-\rho\theta {\bf I}.
\label{c2h2}
\end{equation}
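The orthogonality condition (\ref{orthogonal}) can be verified by direct integration; in one dimension (with $\theta=1$) the diagonal inner products evaluate to $<{\bf H}^{(n)},{\bf H}^{(n)}>=n!$, a normalization we infer from the expansion (\ref{f_H}) and the coefficients (\ref{hermite_coeff}) rather than take from the text. A minimal numerical check:

```python
import math

def he(n, x):
    # probabilists' Hermite polynomial via the standard recurrence
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x*h - k*h_prev
    return h

def inner(m, n, xmax=12.0, nx=40001):
    # <H^(m), H^(n)> with the Gaussian weight f^M (theta = 1)
    dx = 2*xmax / (nx - 1)
    s = 0.0
    for i in range(nx):
        x = -xmax + i*dx
        s += math.exp(-x*x/2) / math.sqrt(2*math.pi) * he(m, x) * he(n, x) * dx
    return s

assert abs(inner(1, 2)) < 1e-8 and abs(inner(2, 3)) < 1e-8   # orthogonality
assert abs(inner(3, 3) - math.factorial(3)) < 1e-6           # normalization n!
```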
In similar fashion, the equilibrium distribution can be expressed as the Hermite expansion of the Maxwell-Boltzmann distribution (\ref{feqc}):
\begin{equation}
f^{eq}({\bf x},{\bf v},t)= f^{M}({\bf v}) \sum_{n=0}^{\infty}
\frac{1}{n!} {\bf C}_{eq}^{(n)}({\bf x},t) : {\bf H}^{(n)}({\bf v}).
\label{feq_H}
\end{equation}
The Hermite coefficients ${\bf C}_{eq}^{(n)}$ can be readily computed using Eq.~(\ref{feqc}) for $f^{eq}$ in Eq.~(\ref{hermite_coeff}).
\subsection{Closure of hydrodynamic equations via Hermite expansions}
Successive-order approximations can be obtained by truncating the infinite Hermite series (\ref{f_H}) at increasing orders; the $N$-order approximation
\begin{equation}
f^{N}({\bf x},{\bf v},t)= f^{M}({\bf v}) \sum_{n=0}^{N}
\frac{1}{n!} {\bf C}^{(n)}({\bf x},t) : {\bf H}^{(n)}({\bf v})
\label{f_N}
\end{equation}
expresses the distribution function in terms of its leading $N$-order moments. The approximation $f= f^N \in \mathbb{H}^{N}$ is tantamount to projecting the distribution function onto a finite Hilbert space $\mathbb{H}^{N}$ spanned by the orthonormal basis of Hermite polynomials of order $\le N$. Due to orthogonality of the Hermite basis (\ref{orthogonal}), a finite expansion (\ref{f_N}) and the infinite series representation of $f$ (\ref{f_H}) give the same leading moments
\begin{equation}
{\bf M}^{(n)}=\int f {\bf v}^n d{\bf v} =\int f^N {\bf v}^n d{\bf v};~~n\le N.
\end{equation}
While the leading moments are preserved, the higher-order moments ($n>N$) can be approximately expressed in terms of the low-order ones. In order to close the $N$-order hydrodynamic equations (\ref{u_1})--(\ref{u_4}) we employ
\begin{equation}
{\bf M}^{(N+1)} \simeq \int f^N {\bf v}^{(N+1)} d{\bf v}.
\label{M_app}
\end{equation}
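To illustrate how (\ref{M_app}) closes the moment hierarchy, consider a one-dimensional analogue with $N=2$: truncation sets the third Hermite coefficient to zero, so the third moment of $f^{2}$ collapses to $3\theta M^{(1)}$, independently of the second-order coefficient. A Python sketch with illustrative coefficient values (the 1D reduction is our simplification; the flow-specific closures are a separate calculation):

```python
import math

theta = 1.0
C0, C1, C2 = 1.0, 0.35, 0.6       # illustrative Hermite coefficients

def f2(v):
    # truncated expansion f^2 = f^M (C0 + C1 H1 + C2 H2 / 2), theta = 1
    x = v / math.sqrt(theta)
    fM = math.exp(-x*x/2) / math.sqrt(2*math.pi*theta)
    return fM * (C0 + C1*x + 0.5*C2*(x*x - 1.0))

def moment(n, vmax=12.0, nv=40001):
    dv = 2*vmax / (nv - 1)
    return sum(f2(-vmax + i*dv) * (-vmax + i*dv)**n for i in range(nv)) * dv

M1, M3 = moment(1), moment(3)
assert abs(M1 - math.sqrt(theta)*C1) < 1e-8   # preserved first moment
assert abs(M3 - 3*theta*M1) < 1e-8            # closed third moment; C2 drops out
```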
Hence, within the framework of projection onto $\mathbb{H}^{N}$, the closed-form approximations below are obtained for unidirectional shear flow [see appendix \ref{app:hermite} for detailed derivation]; $f\in{\mathbb H}^{2}$:
\begin{equation}
\left(1 +\tau \frac{\partial }{\partial t}\right) \frac{\partial u}{\partial t} =
\tau \theta \nabla^2 u,
\label{u_2closed}
\end{equation}
$f\in{\mathbb H}^{3}$:
\begin{equation}
\left(1+2\tau \frac{\partial}{\partial t}+\tau^2\frac{\partial^2}{\partial t^2} \right) \frac{\partial u}{\partial t} = \left(1+3\tau\frac{\partial}{\partial t}\right) \tau \theta \nabla^2 u,
\label{u_3closed}
\end{equation}
$f\in{\mathbb H}^{4}$:
\begin{eqnarray}
\label{u_4closed}
\left(1+3\tau \frac{\partial}{\partial t}+3 \tau^2 \frac{\partial^2}{\partial t^2}
+\tau^3 \frac{\partial^3}{\partial t^3} \right)
\frac{\partial u}{\partial t} &=& \nonumber\\
\left(1 + 7 \tau \frac{\partial}{\partial t}+6 \tau^2 \frac{\partial^2}{\partial t^2} \right)\tau \theta \nabla^2 u
- 3 \theta^2 \tau^3 \nabla^4 u.
\end{eqnarray}
As evidenced by Eqs.~(\ref{c0h0})--(\ref{c2h2}) for $\{{\bf C}^{(n)};~n\le2\}$, second- or higher-order expansions ($N \ge 2$) are required to satisfy conservation of mass, momentum, and energy.
\section{$N$-Order lattice Boltzmann--BGK method}
\label{sec:LBGK}
The rigorous formulation of so-called $N$-order lattice Boltzmann models introduced by Shan et al. (2006) is based on the projection of the continuum distribution function onto ${\mathbb H}^{N}$ so that $f_i({\bf x},t)=f^{N}({\bf x},{\bf v}_i,t)$ on a finite discrete-velocity set $\{{\bf v}_i; i=1,Q\}$. Since the finite set of distributions $\{f_i;i=1,Q\}$ is expressed by an $N$-order Hermite series, Gauss--Hermite (G--H) quadrature with algebraic degree of precision $d\ge 2N$ allows for exact integration of the leading $N$-order velocity moments. Once velocity abscissae ${\bf v}_i$ and weights $w_i$ are determined by a proper G--H quadrature formula \cite{Shan2007,Shan2006}, one has
\begin{eqnarray}
{\bf M}^{(n)}({\bf x},t)&\equiv& \int f({\bf x},{\bf v},t) {\bf v}^n d{\bf v}\nonumber\\
&=& \sum_{i=1}^{Q} w_i f_i({\bf x},t) {\bf v}^n_i;~~ n=0,N.
\end{eqnarray}
Note that all Hermite coefficients (\ref{hermite_coeff}) in the expansion of $f$ (\ref{f_N}) are then exactly integrated as well. At the same time, high-order G--H formulae determine velocity sets $\{{\bf v}_i; i=1,Q\}$ that fulfill high-order moment isotropy required for hydrodynamic representation beyond N--S \cite{Chen2008,Chen2008a}. A collateral conclusion of the Hermite expansion formulation is that the employed number $Q$ of lattice velocities (i.e. quadrature points) sets an upper limit on the attainable order of hydrodynamic description.
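The discrete moment integration underlying these statements can be reproduced with NumPy's probabilists' Gauss--Hermite rule (\texttt{numpy.polynomial.hermite\_e.hermegauss}, weight $e^{-x^2/2}$). A one-dimensional sketch, in which a $Q=5$ rule (degree of precision $d=2Q-1=9$) integrates products of Hermite polynomials exactly:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gh_rule(Q):
    # Q-point rule for the Gaussian measure f^M (theta = 1); exact for
    # polynomial integrands of degree <= 2Q - 1
    x, w = hermegauss(Q)                  # weight exp(-x^2/2)
    return x, w / math.sqrt(2*math.pi)    # normalize to unit total mass

def he(n, x):
    # probabilists' Hermite polynomials via the standard recurrence
    if n == 0:
        return np.ones_like(x)
    h_prev, h = np.ones_like(x), x
    for k in range(1, n):
        h_prev, h = h, x*h - k*h_prev
    return h

v, w = gh_rule(5)
# discrete orthogonality: sum_i w_i H^(m)(v_i) H^(n)(v_i) = n! delta_mn
for m in range(5):
    for n in range(5):
        val = float(np.sum(w * he(m, v) * he(n, v)))
        exact = float(math.factorial(n)) if m == n else 0.0
        assert abs(val - exact) < 1e-10
```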
\noindent{\it The Lattice Boltzmann--BGK Equation}. The Hermite expansion formulation \cite{Shan2006} places LBGK in the category of Galerkin methods; within this theoretical framework, the evolution equations
\begin{equation}
\frac{\partial f_i}{\partial t}+{\bf v}_i \cdot \nabla f_i = - \frac{f_i - f_i^{eq}}{\tau}~~(i=1,Q)
\label{LBGK}
\end{equation}
for $f_i({\bf x},t)$ can be systematically derived via approximation in velocity function space ${\mathbb H}^{N}$. The equilibrium distribution $f^{eq}_i \in {\mathbb H}^{N}$ in Eq.~(\ref{LBGK}) takes the form
\begin{equation}
f_{i}^{eq}({\bf x},t)=f^{M}({\bf v}_i) \sum_{n=0}^{N}
\frac{1}{n!} {\bf C}_{eq}^{(n)}({\bf x},t) : {\bf H}^{(n)}({\bf v}_i).
\label{feq}
\end{equation}
\subsection{The LBGK Algorithm}
\label{sec:regularization}
Conventional LBGK algorithms for solving Eq.~(\ref{LBGK}) use an operator splitting technique and, thus, advance in two steps: advection $f^{a}_{i}
({\bf x}, t)=f_{i}({\bf x}-{\bf v}_i \Delta t, t)$ and collision $f_i({\bf x},t+\Delta t)= f_i^{a}({\bf x},t) - \left[f^{a}_i({\bf x},t) - f_i^{eq}\right] \Delta t/\tau$.
These steps do not constitute a standard Galerkin procedure, where one would directly compute the evolution of the Hermite coefficients. As a consequence, conventional LBGK algorithms exhibit an undesired dependence on the flow field alignment with the underlying lattice \cite{Colosqui2009, Zhang}. This numerical anisotropy becomes noticeable at finite Knudsen or Weissenberg numbers where non-equilibrium effects are important. For non-equilibrium systems $f^{a}_i$ will lie outside ${\mathbb H}^{N}$ but the problem is effectively solved using a so-called {\it regularization} procedure \cite{Zhang}, i.e. by re-projecting the non-equilibrium component $f_{i}^{ne}=f^{a}_{i}-f_{i}^{eq}$ onto ${\mathbb H}^{N}$;
\begin{equation}
\widehat{f_{i}}^{ne}= f^M({\bf v}_i) \sum_{n=0}^{N}
\frac{1}{n!} {\bf C_{ne}}^{(n)}({\bf x},t) : {\bf H}^{(n)}({\bf v}_i)
\label{fne}
\end{equation}
where
\begin{equation}
{\bf C_{ne}}^{(n)}({\bf x},t)=\sum_{j=1}^{Q} w_j f_{j}^{ne}({\bf x},t) {\bf H}^{(n)}({\bf v}_j).
\end{equation}
The {\it re-projected} non-equilibrium component (\ref{fne}) can be reintroduced at the collision step:
\begin{equation}
f_i({\bf x}+{\bf v}_i,t+\Delta t)=
f_i^{eq} + \left( 1 - \frac{\Delta t}{\tau} \right) \widehat{f_{i}}^{ne}.
\label{regularization}
\end{equation}
Provided that the Hermite expansions for $f_i^{eq}$ (\ref{feq}) and $\widehat{f_{i}}^{ne}$ (\ref{fne}) are truncated at the same $N$-order, the re-projection step keeps $f_i$ within ${\mathbb H}^{N}$, as must be the case for standard Galerkin procedures. The re-projection of $f^{a}_i$ onto ${\mathbb H}^{N}$ is indispensable to ensure that the leading $N$-order moments of $f$ are exactly integrated via G--H quadrature, so that the simulated dynamics becomes independent of the lattice-flow alignment.
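A single-node, one-dimensional sketch of the re-projection (\ref{fne}) in Python; here populations are stored as $g_i=f({\bf v}_i)/f^{M}({\bf v}_i)$ so that moments become weighted sums with the G--H weights (this bookkeeping convention, the $Q=5$ rule, and the sample non-equilibrium part are our illustrative choices, not the D2Q37 implementation):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

Q, N = 5, 3
v, w = hermegauss(Q)
w = w / math.sqrt(2*math.pi)     # weights for the Gaussian measure f^M (theta = 1)

def he(n, x):
    # probabilists' Hermite polynomials via the standard recurrence
    if n == 0:
        return np.ones_like(x)
    h_prev, h = np.ones_like(x), x
    for k in range(1, n):
        h_prev, h = h, x*h - k*h_prev
    return h

g_ne = 0.1*v**4 - 0.2*v          # sample post-advection non-equilibrium part
# Hermite coefficients C_ne^(n) and re-projection onto H^N
C = [float(np.sum(w * g_ne * he(n, v))) for n in range(N + 1)]
g_hat = sum(C[n] / math.factorial(n) * he(n, v) for n in range(N + 1))
# the leading moments (n <= N) survive the re-projection unchanged
for n in range(N + 1):
    m_orig = float(np.sum(w * g_ne * v**n))
    m_proj = float(np.sum(w * g_hat * v**n))
    assert abs(m_orig - m_proj) < 1e-10
```

The regularized collision (\ref{regularization}) then relaxes the re-projected part toward equilibrium with the factor $1-\Delta t/\tau$.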
\section{Non-Newtonian Kolmogorov flow}
\label{sec:numerical}
The decay of a sinusoidal shear wave in free space, also known as Kolmogorov flow, is a useful benchmark to assess the derived hydrodynamic descriptions and the kinetic methods employed in this work. In order to characterize the flow at arbitrary non-equilibrium conditions we employ the Weissenberg number $Wi=\tau/T\equiv\tau\nu k^2$, where $\nu=\tau\theta$ is the kinematic viscosity and $T=1/(\nu k^2)$ is the characteristic decay time. Assuming a mean-free path $\lambda=\tau\sqrt{\theta}$, the employed Weissenberg number directly converts to a Knudsen number $Kn=\lambda k\equiv \sqrt{Wi}$. In order to remain within laminar and nearly isothermal regimes the flow Mach number is kept small, $M={U_0}/{\sqrt{\theta}}<0.1$; thus $Re={U_0}/{\nu k}=M/\sqrt{Wi}<1$ is always below the stability limit $Re<\sqrt{2}$. Kinetic initial conditions are given by a distribution $f(y,{\bf v},0)=f^{eq}(\rho,u(y,0),\theta)$, i.e. local equilibrium. For this arbitrary choice of initialization the collision term in the kinetic equation vanishes and the simulated dynamics is collisionless at $t=0$. As a consequence, initial conditions at the hydrodynamic level are given by the free-molecular flow solution \cite{Colosqui2009}:
\begin{equation}
\frac{\partial^n u(y,0)}{\partial t^n}=U_{0}\sin(ky)~
\frac{\partial^n}{\partial t^n}\exp \left[-\frac{\theta k^2 t^2}{2}\right];~ n\ge 0.
\label{wave_ic_all}
\end{equation}
We remark that after the choice of initialization at local equilibrium the microscopic dynamics remains practically collisionless for a finite time $t\lesssim \tau$; therefore, (viscous) Newtonian behavior or purely exponential decay can only be observed after time intervals of the order of the relaxation time.
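The Gaussian decay in Eq.~(\ref{wave_ic_all}) follows from free-molecular advection of the initial local equilibrium: $u(y,t)/u(y,0)=\langle\cos(k v_y t)\rangle_{f^M}=\exp(-\theta k^2 t^2/2)$, the characteristic function of the Maxwellian. A quick numerical confirmation (parameter values are arbitrary):

```python
import math

theta, k, t = 1.0, 2*math.pi, 0.13

def gauss_avg_cos(vmax=12.0, nv=40001):
    # <cos(k v_y t)> over the Maxwellian f^M of temperature theta
    dv = 2*vmax / (nv - 1)
    s = 0.0
    for i in range(nv):
        vy = -vmax + i*dv
        s += math.exp(-vy*vy/(2*theta)) / math.sqrt(2*math.pi*theta) \
             * math.cos(k*vy*t) * dv
    return s

assert abs(gauss_avg_cos() - math.exp(-theta*k**2*t**2/2)) < 1e-8
```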
The analytical description of the flow at arbitrary $Wi$ is given by solution of the hydrodynamic approximations, i.e. Eqs.~(\ref{u_2closed})--(\ref{u_4closed}), derived in Sec.~\ref{sec:Hermite} via Hermite-space approximation $f\!\in\!{\mathbb H}^{N}$. For a periodic wave, the solution to each $N$-order hydrodynamic equation is expressed by:
\begin{equation}
u(y,t)=\sum_{n=1}^{N} C_n \mathrm{Im} \{e^{i k y} e^{-\omega_n (t + \phi_n)}\}.
\label{modes}
\end{equation}
Each mode in the solution is determined by the complex frequencies $\omega_n(Wi)=\mathrm{Re}\{\omega_n\}+i\mathrm{Im}\{\omega_n\}$ ($n=1,N$); these values are the roots of the dispersion relation (i.e. an $N$-order polynomial) corresponding to the $N$-order hydrodynamic approximation. The constants $C_n$ and $\phi_n$ in the particular solution can be determined by imposing the $N$ initial conditions given by Eq.~(\ref{wave_ic_all}) and symmetry constraints. While (positive) real roots produce exponentially decaying modes, each pair of complex-conjugate roots describes two identical waves (i.e. same amplitude $C$ and phase $\phi$) which combine into a single standing wave that decays in time.
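Substituting a single mode $u\propto e^{iky-\omega t}$ into Eqs.~(\ref{u_2closed})--(\ref{u_4closed}) and writing $s=\tau\omega$, we obtain the dispersion polynomials $s^2-s+Wi$, $s^3-2s^2+(1+3Wi)s-Wi$ and $s^4-3s^3+(3+6Wi)s^2-(1+7Wi)s+Wi+3Wi^2$ (our algebra, worth re-deriving before reuse). Their roots can then be examined numerically:

```python
import numpy as np

def dispersion_roots(N, Wi):
    # roots s = tau*omega of the N-order dispersion polynomial
    coeffs = {2: [1.0, -1.0, Wi],
              3: [1.0, -2.0, 1.0 + 3.0*Wi, -Wi],
              4: [1.0, -3.0, 3.0 + 6.0*Wi, -(1.0 + 7.0*Wi), Wi + 3.0*Wi**2]}[N]
    return np.roots(coeffs)

# near-equilibrium limit: the slowest mode recovers the N-S decay, tau*omega ~ Wi
Wi = 1e-3
for N in (2, 3, 4):
    s_min = min(dispersion_roots(N, Wi), key=lambda s: s.real)
    assert abs(s_min - Wi) < 1e-4

# f in H^2: the roots turn complex (oscillatory decay) once Wi > 1/4
assert all(abs(s.imag) > 0 for s in dispersion_roots(2, 1.0))
```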
\subsection{Numerical simulation}
The decay of a velocity wave $u(y,0)=U_0 \sin ky$ of wavenumber $k=2\pi/l_y$ is simulated with two different kinetic methods: the direct simulation Monte Carlo (DSMC) algorithm described in \cite{Alexander97} and the LBGK scheme described in Sec.~\ref{sec:regularization}.
In the analysis of DSMC results, given that $\tau$ is not a simulation parameter for this method, we use $Wi\simeq\lambda \nu k^2 / c_s$ (i.e. $\tau\simeq\lambda/c_s$); the speed of sound $c_s$, mean-free path $\lambda$ and viscosity $\nu$ are determined from the relations for a hard-sphere gas. For DSMC simulation we set $M=0.1$ and employ a rather large number of particles ($N_p=30000$), ensembles ($N_e=2000$), and collision cells along $l_y$ ($N_c=500$). To further reduce the statistical noise in the DSMC results we average the normalized velocity $u(y,t)/u(y,0)$ over the wavelength segments $l_y/8$--$3l_y/8$ and $5l_y/8$--$7l_y/8$; these averaged quantities are presented in Fig.~\ref{fig1}.
For LBGK simulation we set $M=0.01$ while the computational domain has $l_x \times l_y=10\times 2500$ nodes; in all cases the spatial resolution is conservatively larger than that determined by grid convergence tests. For the present results we employ the D2Q37 model (two-dimensional lattice with 37 states) corresponding to a G--H quadrature rule with algebraic degree of precision $d=9$ \cite{Shan2007}, i.e. permitting the exact integration of fourth-order moments. Different $N$-order truncations of the Hermite expansions are implemented on the D2Q37 lattice; we refer to these schemes as D2Q37-H2 ($N=2$), D2Q37-H3 ($N=3$), and D2Q37-H4 ($N=4$). As in previous studies with {\it regularized} LBGK algorithms \cite{Colosqui2009,Zhang}, the present results are independent of the flow-lattice alignment. In Fig.~\ref{fig1} we present the velocity field at $Wi=0.1, 0.5, 1, 10$ given by DSMC and LBGK simulation, as well as analytical solution (\ref{modes}) of Eqs.~(\ref{u_2closed})--(\ref{u_4closed}).
\begin{figure*}
\centerline{
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./u_tw01_all.pdf}}
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./u_tw05_all.pdf}}}
\centerline{
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./u_tw1_all.pdf}}
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./u_tw10_all.pdf}}}
\caption{$\frac{u(y,t)}{U_0\sin(ky)} ~ vs. ~ t \nu k^2$; (a) $Wi=0.1$ (b) $Wi=0.5$ (c) $Wi=1$ (d) $Wi=10$. Dotted line ($f\in \mathbb{H}^2$): analytical solution of Eq.~(\ref{u_2closed}). Dashed line ($f\in\mathbb{H}^3$): analytical solution of Eq.~(\ref{u_3closed}). Solid line ($f\in\mathbb{H}^4$): Analytical solution of Eq. (\ref{u_4closed}).
Markers: (${\bigtriangleup}$) D2Q37-H2, (${\square}$) D2Q37-H3, ($\bigcirc$) D2Q37-H4, ($+$) DSMC.}
\label{fig1}
\end{figure*}
As expected, since Hermite-space approximations $f\!\in\!{\mathbb H}^{N}$ underpin the $N$-order LBGK method, the flow simulated by the LBGK models is exactly described by the analytical solutions of Eqs.~(\ref{u_2closed})--(\ref{u_4closed}) at arbitrary $Wi$. The DSMC method, which resorts neither to velocity-space discretization nor to the BGK collision ansatz, is in good agreement with LBGK and the $f \in{\mathbb H}^N$ approximations in the parameter range $0\le Wi\lesssim 1$.
\subsection{Long-time decay and hydrodynamic modes}
The long-time dynamics becomes independent of the choice of initial condition for $t/\tau=t\nu k^2 /Wi \gg 1$. The long-time solution of the flow is determined by the decay frequency $\omega(Wi)$ with the smallest real part. In the Newtonian regime ($Wi\!=\!0$), the N--S solution yields a single hydrodynamic mode $u=\mathrm{Im}\{U_0 \exp(i k y-\omega t)\}$ describing purely exponential decay with $\omega=\nu k^2$. Hermite-space approximations ${f\in\mathbb H}^N$ ($N=2,3,4$) predict a long-time decay $\omega(Wi)$ [see Fig.~\ref{fig:longdecay}] determined from the set of roots $\{\omega_{n};n=1,\ldots,N\}$ of the dispersion relations corresponding to Eqs.~(\ref{u_2closed})--(\ref{u_4closed}). An alternative approach to Hermite-space approximations is provided by the formal solution of BE--BGK with the method of characteristics \cite{Chen2007,Colosqui2009}:
\begin{eqnarray}
\label{eq:f_char}
f({\bf x},{\bf v},t)&=&f_{0}({\bf x}-{\bf v}t,{\bf v})e^{-\frac{t}{\tau}}\nonumber\\
&+&\int_{0}^{\frac{t}{\tau}}e^{-s}f^{eq}({\bf x}-{\bf v}\tau s,{\bf v},t-\tau s)ds.
\end{eqnarray}
Hydrodynamic relations for arbitrary $Wi$ can be derived by taking velocity moments of Eq.~(\ref{eq:f_char}); in the long-time limit $t\gg\tau$ of the studied shear flows the following dispersion relation is obtained \cite{Chen2007}
\begin{equation}
\tau\omega = 1 - \sqrt{\pi} ~ z ~ \exp(z^2) ~ \mathrm{erfc}(z)
\label{eq:tauomega}
\end{equation}
with $z=(1-\tau\omega)/\sqrt{2Wi}$. The numerical solution of Eq.~(\ref{eq:tauomega}) is presented in Fig.~\ref{fig:longdecay}; this dispersion relation has one trivial solution $\omega=1/\tau$ and a second root $\omega=\omega(Wi)$ also on the positive real axis ($\mathrm{Re}\{\omega\}>0$, $\mathrm{Im}\{\omega\}=0$). Based on asymptotic analysis of the exact solution of BE--BGK, approximate explicit expressions have been proposed \cite{Colosqui2009}:
\begin{equation}
\frac{\omega}{\nu k^2} = \frac {\sqrt{1+4 Wi}-1}{2 Wi} ~\mathrm{for}~ Wi\ll 1,
\label{eq:decay_low}
\end{equation}
and
\begin{equation}
\frac{\omega}{\nu k^2} = \frac {1 \pm \sqrt{1-4 Wi}}{2 Wi} ~\mathrm{for}~ Wi\gg 1.
\label{eq:decay_high}
\end{equation}
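The nontrivial root of the transcendental relation (\ref{eq:tauomega}) is readily obtained with an off-the-shelf bracketing root finder. The sketch below is an illustration, not the code used for Fig.~\ref{fig:longdecay}: it writes the relation for the scaled frequency $w=\tau\omega$, uses the scaled complementary error function $\mathrm{erfcx}(z)=e^{z^2}\mathrm{erfc}(z)$ to avoid overflow at small $Wi$, and excludes the trivial root $w=1$ by bracketing just below it.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx  # erfcx(z) = exp(z^2) * erfc(z), overflow-safe

def dispersion_residual(w, Wi):
    """Residual of Eq. (tauomega), w = 1 - sqrt(pi) z exp(z^2) erfc(z),
    written for the scaled frequency w = tau*omega."""
    z = (1.0 - w) / np.sqrt(2.0 * Wi)
    return w - 1.0 + np.sqrt(np.pi) * z * erfcx(z)

def long_time_decay(Wi):
    """Nontrivial real root, returned as omega/(nu k^2) = w/Wi.
    The residual is positive just below w = 1 when Wi < pi/2, so this
    bracket excludes the trivial root w = 1 (omega = 1/tau)."""
    w = brentq(dispersion_residual, 1e-9, 1.0 - 1e-6, args=(Wi,))
    return w / Wi

# Low-Wi limit: (sqrt(1+4*Wi)-1)/(2*Wi) -> 1, i.e. Newtonian decay recovered
print(long_time_decay(0.01))   # slightly below 1: decay slower than Newtonian
```

The returned ratio stays below one for $Wi>0$ and decreases with $Wi$, consistent with the slow-down of the non-Newtonian decay discussed below.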
In Fig.~\ref{fig:longdecay}, the different Hermite-space approximations ${f\in\mathbb H}^N$ ($N=2,3,4$), which exactly describe the LBGK results in Fig.~\ref{fig1}, are now compared against the numerical solution of the exact dispersion relation (\ref{eq:tauomega}) and the asymptotic approximations (\ref{eq:decay_low})--(\ref{eq:decay_high}).
\begin{figure*}
\centerline{
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./wlt.pdf}}
\subfigure[]{\includegraphics[angle=0,scale=0.4]{./wlt_i.pdf}}}
\caption{Long-time decay: (a) $\frac{\mathrm{Re}\{\omega\}}{\nu k^2} ~ vs. ~Wi$ , (b) $\frac{\mathrm{Im}\{\omega\}}{\nu k^2} ~ vs. ~Wi$.
Markers: (${\bigtriangleup}$) $f\in \mathbb{H}^2$ [Eq.~(\ref{u_2closed})], (${\square}$) $f\in\mathbb{H}^3$ [Eq.~(\ref{u_3closed})], ($\bigcirc$) $f\in\mathbb{H}^4$ [Eq.~(\ref{u_4closed})], ({\Large$\bf{\times}$}): $f\in \mathbb{H}^\infty$ [numerical solution of Eq.~(\ref{eq:tauomega})]. Dashed line: $Wi\ll1$ approximation [Eq.~(\ref{eq:decay_low})]. Solid line: $Wi\gg1$ approximation [Eq.~(\ref{eq:decay_high})].}
\label{fig:longdecay}
\end{figure*}
All roots of the different dispersion relations have a positive real part, indicating time decay of the flow; the non-Newtonian decay is always {\it slower} than the Newtonian decay, $\mathrm{Re}\{\omega\}<\nu k^2$ for $Wi>0$, and becomes $\mathrm{Re}\{\omega\} \sim 1 / \tau $ for $Wi>1$. At first glance, the studied expressions provide comparable results in the limits $Wi\to 0$ and $Wi\to \infty$, while significant disagreement is observed for $Wi\sim 1$. Notice that Eq.~(\ref{eq:decay_high}) is the dispersion relation corresponding to the telegraph equation [i.e. Eq.~(\ref{u_2closed})] derived for ${f\in\mathbb H}^2$.
\section{Conclusions and discussions}
Provided that BE--BGK is a valid model, moment equations derived for $f\in\mathbb{H}^N$ are in principle not constrained to near-equilibrium conditions. For unidirectional and isothermal shear flow, Hermite-space approximations of different order $\{f\in\mathbb{H}^N;~N=2,3,4\}$ led to the $N$-order PDEs (\ref{u_2closed})--(\ref{u_4closed}) for the evolution of fluid momentum [see appendix \ref{app:hermite} for a detailed derivation]. The studied Kolmogorov flow represents an initial value problem in free space with kinetic initialization at local equilibrium; particular analytical solutions of Eqs.~(\ref{u_2closed})--(\ref{u_4closed}) have been compared against kinetic simulation via LBGK and DSMC [see Fig.~\ref{fig1}]. We found that the derived $N$-order hydrodynamic equations predict exactly all hydrodynamic modes present in the flow simulated by $N$-order LBGK models. We conclude that Eqs.~(\ref{u_2closed})--(\ref{u_4closed}) can be used to benchmark LBGK algorithms at arbitrary $Wi$ and $Kn$ numbers. High-order LBGK models and the corresponding Hermite-space approximations (e.g. D2Q37-H4 and $f\in\mathbb{H}^4$) are in good agreement with DSMC results in a wide region $Wi \simeq Kn^2 <1$, extending well beyond N--S hydrodynamics. These results indicate that in the region $Wi<1$ the BE--BGK moment hierarchy approximates fairly well the low-order moments of the Boltzmann equation with binary collision integral. A significant disagreement exists between LBGK and DSMC solutions in the region $Wi\gtrsim 1$, as seen in Fig.~\ref{fig1}(d).\\
Hereafter, we put aside a discussion on the validity of the BGK ansatz for far-from-equilibrium flows (e.g. $Wi\gtrsim1$ or $Kn\gtrsim1$). Instead, we proceed to study the effect of velocity-space discretization when solving the continuum BE--BGK over the entire parameter range $0\le Wi\le \infty$. The dispersion relation expressed by Eq.~(\ref{eq:tauomega}), coming from the exact solution of BE--BGK ($f\in\mathbb{H}^\infty$) for $t\gg\tau$, has two branches of solutions [see Fig.~\ref{fig:longdecay}(a--b)]. Meanwhile, the dispersion relation corresponding to the Hermite-space approximation $f\in\mathbb{H}^N$ admits $N$ roots; it follows that initial conditions may excite spurious modes in Eqs.~(\ref{u_2closed})--(\ref{u_4closed}). In order to remove initialization from the analysis we examine the long-time behavior $t\gg\tau$, characterized by the fundamental frequency $\omega(Wi)$. While ${\mathrm Re}\{\omega\}>0$ determines the flow decay rate or momentum dissipation, an imaginary component ${\mathrm Im}\{\omega\}\neq 0$ is responsible for time oscillations or momentum wave propagation, as observed in Fig.~\ref{fig1}(c--d). We have compared in Fig.~\ref{fig:longdecay} the long-time frequency $\omega(Wi)$ determined from Eqs.~(\ref{u_2closed})--(\ref{u_4closed}) against $\omega(Wi)$ according to Eq.~(\ref{eq:tauomega}). After truncation of the Hermite series, or the corresponding velocity-space discretization, dissipative properties of the flow can still be well represented for $Wi\ll 1$, where ${\mathrm Re}\{\omega\} / \nu k^2 \sim 1$, and $Wi\gg 1$, where ${\mathrm Re}\{\omega\} / \nu k^2\sim 1/Wi$. The imaginary parts also approximate the exact BE--BGK prediction ${\mathrm Im}\{\omega\}/\nu k^2=0$ in both limits $Wi\to0$ and $Wi\to\infty$, as seen in Fig.~\ref{fig:longdecay}(b).
Notice that odd-order approximations (e.g. $f\in\mathbb{H}^3$) yield a real-valued frequency $\omega$ for all $Wi$, while even-order approximations admit a long-time frequency with non-zero imaginary part at sufficiently high values of $Wi$; i.e. $Wi>0.25$ for $f\in\mathbb{H}^2$ and $Wi>0.388$ for $f\in\mathbb{H}^4$. In the case of Hermite-space approximations of even order when $Wi\gg 1$, time oscillations may persist in the long-time solution as the oscillation period becomes smaller than the decay time; e.g. ${\mathrm Im}\{\omega\}/{\mathrm Re}\{\omega\}= \sqrt{4Wi-1}$ for $f\in\mathbb{H}^2$, according to Eq.~(\ref{eq:decay_high}). As observed in previous work \cite{Colosqui2009, Yakhot}, a second-order approximation $f\in\mathbb{H}^2$ can be employed to model a viscoelastic response in high-frequency oscillatory flows, similar to that observed for a Maxwell fluid and governed by the telegraph equation (\ref{u_2closed}).\\
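For the $f\in\mathbb{H}^2$ closure this onset can be checked directly: the roots given by Eq.~(\ref{eq:decay_high}) are those of the quadratic $\tau\omega^{2}-\omega+\nu k^{2}=0$, which become a complex-conjugate pair once $1-4Wi<0$. A short illustrative sketch (not production code; $\nu k^2$ is scaled to one):

```python
import numpy as np

def telegraph_modes(Wi, nu_k2=1.0):
    """Roots of the H^2 (telegraph) dispersion relation
    tau*omega^2 - omega + nu*k^2 = 0, with tau = Wi/(nu*k^2).
    Returns omega/(nu*k^2), i.e. Eq. (decay_high): (1 +/- sqrt(1-4Wi))/(2Wi)."""
    tau = Wi / nu_k2
    return np.roots([tau, -1.0, nu_k2]) / nu_k2

print(telegraph_modes(0.1))   # two real decay rates: Wi < 1/4, no oscillation
print(telegraph_modes(1.0))   # complex-conjugate pair: oscillatory decay
```

At $Wi=1$ the roots are $(1\pm i\sqrt{3})/2$, so $|\mathrm{Im}\{\omega\}/\mathrm{Re}\{\omega\}|=\sqrt{3}=\sqrt{4Wi-1}$, matching the persistence of oscillations at large $Wi$.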
\noindent{\it LBGK methods and extensions}: The LBGK method has been extensively employed for the macroscopic description of various physical phenomena (e.g. microfluidics, turbulence, reaction-diffusion, phase transition), although the exact (high-order) moment dynamics that different LBGK algorithms produce has not been fully elucidated. This is partly because Chapman--Enskog (C--E) expansions, which have emerged as the preferred closure procedure, become increasingly difficult when carried to high orders. The approach presented in this work allows one to close the LBGK moment hierarchy while circumventing C--E techniques. At the same time, it is straightforward to determine the C--E expansion order that corresponds to a particular Hermite-space approximation [see Shan et al. (2006)]. The moment-equation hierarchy presented by Eq.~(\ref{M_N}), when combined with different Hermite-space approximations, can be applied to the a priori design of LBGK schemes that solve high-order and non-linear PDEs governing numerous complex physical systems beyond fluid mechanics. It is also worth remarking that a relatively simple algorithm, based on fully-implicit and low-order finite-difference schemes, offers significant computational advantages and can be effectively employed for the numerical solution of PDEs involving high-order derivatives in time and space, e.g. Eq.~(\ref{u_4closed}) with hyperviscosity.\\
\noindent{\it The validity limits of BE--BGK}: The main scope of this work is not to establish the validity of BE--BGK in far-from-equilibrium conditions; efforts in that area could compare the presented analytical expressions against experimental data or more extensive numerical analysis via alternative methods. From the results in this work it is clear that DSMC, which emulates the Boltzmann equation with a binary collision integral, and BE--BGK produce similar solutions for the studied shear flow in the region $Wi=\tau\nu k^2<1$. Nevertheless, the upper applicability limit of BE--BGK for describing macroscopic physics remains to be established when the system dramatically departs from equilibrium conditions.
\begin{acknowledgments}
The author thanks Dr. V. Yakhot and Dr. H. Chen for valuable suggestions and stimulating discussions throughout the progress of this work.
\end{acknowledgments}
\section{Introduction}
Differential flatness, roughly speaking, means that all the variables of an under-determined system of differential equations can be expressed as functions of a particular output, called flat output, and a finite number of its successive time derivatives (\cite{Martin_92,Fliess_95,Fliess_99}, see also \cite{Ramirez_04,Levine_09,Levine_11} and the references therein).
For time-delay systems and more general classes of infinite-dimensional systems, extensions of this concept have been proposed and thoroughly discussed in \cite{Mounier_95,Fliess_96,Petit_00,Rudolph_03}. In a linear context, relations with the notion of system parameterization \cite{Pommaret_99,Pommaret_01} and, in the behavioral approach of \cite{Polderman_98}, with \emph{latent variables} of \emph{observable image representations} \cite{Trentelman_04}, have been established. Other theoretic approaches have been proposed e.g. in \cite{Rocha_97,Chyzak_05}. Interesting control applications of linear time-delay systems may be found in \cite{Mounier_95,Petit_00,Rudolph_03}.
Characterizing differential flatness and flat outputs has been an active topic since the beginning of this theory. The interested reader may find a historical perspective of this question in \cite{Levine_09,Levine_11}. Constructive algorithms, relying on standard computer algebra environments, may be found e.g. in \cite{Antritter_08} for nonlinear finite-dimensional systems, or \cite{Chyzak_04} for linear systems over Ore algebras.
The results and algorithm proposed in this paper for the characterization and computation of $\pi$-flat outputs for linear time-delay systems are strongly related to the algebraic framework developed in \cite{Mounier_95,Petit_ecc97,Rudolph_03}.
More precisely, we study linear time-delay differential control systems, i.e. linear systems of the form $Ax=Bu$, with $x \in {\mathbb R}^{n}$ the pseudo-state, and $u\in {\mathbb R}^{m}$ the control, for given integers $m\leq n$, where the entries of the matrices $A$ and $B$ belong to the ring ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ of multivariate polynomials of $\delta$, the delay operator, and ${\frac{d}{dt}}$, the time derivative operator, over the ground field ${\mathfrak K}$ of meromorphic functions of the variable $t$.
We say that the system $Ax=Bu$ is $\pi$-flat if, and only if, the module generated by the components of $x$ and $u$ over ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ and satisfying the relations $Ax=Bu$, localized at the powers of a polynomial $\pi\in {\mathfrak K}[\delta]$, is free, and a $\pi$-flat output is a basis of this free module (see \cite{Mounier_95}).
To characterize and compute $\pi$-flat outputs, we propose a methodology based on standard polynomial algebra, generalizing the one used in \cite{Levine_03} for ordinary linear differential systems, by extending the original ring ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ to the principal ideal ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ of polynomials of ${\frac{d}{dt}}$ over the fraction field ${\mathfrak K}(\delta)$, namely the field generated by fractions of polynomials of $\delta$ with coefficients in ${\mathfrak K}$, and finally localize the results of our computations at the powers of a suitable polynomial $\pi$ of ${\mathfrak K}[\delta]$. This approach allows us to use the well-known Smith-Jacobson (or diagonal) decomposition (\cite{Cohn_85,Jacobson_78}) of matrices with entries in the larger ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ as the main tool to obtain the searched $\pi$-flat outputs.
Following \cite{Levine_03}, in order to work with a smaller set of equations and variables, we eliminate the input variables, leading to an implicit system representation, as opposed to previous approaches (see e.g. \cite{Mounier_95,Petit_00,Rudolph_03,Chyzak_05,Chyzak_07}).
Let us also insist on the fact that the time-varying dependence of the systems under consideration is in the class of meromorphic functions, whereas in \cite{Chyzak_05,Chyzak_07}, this dependence is polynomial with respect to time in order to apply effective Gr\"{o}bner bases techniques.
The main contributions of this paper are (1) the characterization of $\pi$-flatness in terms of the \emph{hyper-regularity} of the system matrices, and (2) an elementary algorithm to compute $\pi$-flat outputs, based on the Smith-Jacobson decomposition of the former matrices.
In addition, the evaluation of our $\pi$-flatness criterion only relies on computations over the larger ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$.
The paper is organized as follows. The $\pi$-flat output computation problem is described in section~\ref{statement-sec}, as well as the algebraic framework. Then, the main result of the paper is presented in section~\ref{main-sec}. Finally, the proposed methodology is illustrated by some examples in section~\ref{ex-sec}, and its generalization to multiple delays is outlined on the example of a vibrating string, first solved in \cite{Mounier_98}.
\section{Problem Statement}\label{statement-sec}
We consider a linear system governed by the set of time-delay differential equations:
\begin{equation}
A\left(\delta,{\frac{d}{dt}} \right)x=B\left(\delta,{\frac{d}{dt}} \right)u,\label{sys_lin_2}
\end{equation}
where $x\in{\mathbb R}^{n}$ is the pseudo-state, $u\in{\mathbb R}^{m}$ the input vector, $A$ (resp. $B$) a $n\times n$ (resp. $n\times m$) matrix, whose coefficients are multivariate polynomials of $\delta$ and ${\frac{d}{dt}}$, with ${\frac{d}{dt}}$ the differentiation operator with respect to time and $\delta$ the time-delay operator defined by:
\begin{equation}
\delta:f(t)\mapsto \delta f(t)=f(t-\tau),\quad \forall t \in {\mathbb R}\label{delay_eq}
\end{equation}
where $\tau \in {\mathbb R}^{+}$ is the delay.
In order to specify the nature of the coefficients $a_{i,j}(\delta,{\frac{d}{dt}})$, $i,j=1,\ldots,n$, and $b_{i,j}(\delta,{\frac{d}{dt}})$, $i=1,\ldots,n$, $j=1,\ldots,m$, of the matrices $A(\delta,{\frac{d}{dt}})$ and $B(\delta,{\frac{d}{dt}})$ respectively, some algebraic preliminaries are needed.
\subsection{Algebraic Framework}\label{alg_sec}
Since we deal with smooth functions of time, a natural field is the \emph{differential field of meromorphic functions on the real line} ${\mathbb R}$. We call this field the \emph{ground field} and we denote it by ${\mathfrak K}$.
The previously introduced operators $\delta$ and ${\frac{d}{dt}}$ satisfy the following rules:
\begin{equation}
{\frac{d}{dt}} \left( \alpha(t)\cdot \right)=\alpha(t){\frac{d}{dt}}+\dot{\alpha}(t)\cdot,\quad\delta \left(\alpha(t)\cdot\right)=\alpha(t-\tau)\delta,\quad{\frac{d}{dt}}\delta=\delta{\frac{d}{dt}}\nonumber
\end{equation}
for every time function $\alpha$ belonging to ${\mathfrak K}$. The set of multivariate polynomials of these operators, namely polynomials of the form
$$\sum_{k,l~\rm{finite}}\alpha_{k,l}(t)\frac{d^{k}}{dt^{k}}\delta^{l},\quad \alpha_{k,l}\in{\mathfrak K}$$
is a \emph{skew commutative ring} \cite{McConnell_00,Shafarevich_97}, denoted by ${\mathfrak K}[\delta,{\frac{d}{dt}}]$.
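These commutation rules can be verified symbolically. The following sketch (an illustration using sympy, not part of the paper's development) checks the Leibniz rule behind ${\frac{d}{dt}}(\alpha(t)\cdot)$, the coefficient shift behind $\delta(\alpha(t)\cdot)$, and the commutation ${\frac{d}{dt}}\delta=\delta{\frac{d}{dt}}$ on a concrete function:

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)
alpha, f = sp.Function('alpha'), sp.Function('f')

# skew rule for d/dt: differentiation does not commute with
# multiplication by a time-varying coefficient alpha(t) (Leibniz rule)
lhs = sp.diff(alpha(t) * f(t), t)
rhs = alpha(t) * sp.diff(f(t), t) + sp.diff(alpha(t), t) * f(t)
assert sp.simplify(lhs - rhs) == 0

# skew rule for delta: the coefficient is shifted,
# delta(alpha f) = alpha(t - tau) * (delta f)
lhs_d = (alpha(t) * f(t)).subs(t, t - tau)
rhs_d = alpha(t - tau) * f(t - tau)
assert sp.simplify(lhs_d - rhs_d) == 0

# d/dt and delta commute, checked here on a concrete function
g = sp.sin(t**2)
assert sp.simplify(sp.diff(g.subs(t, t - tau), t)
                   - sp.diff(g, t).subs(t, t - tau)) == 0
```

The first rule is exactly what makes the ring skew: ${\frac{d}{dt}}$ does not commute with multiplication by time-varying coefficients, while $\delta$ and ${\frac{d}{dt}}$ commute with each other.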
The coefficients $a_{i,j}$ (resp. $b_{i,j}$) of the matrix $A$ (resp. $B$) of system (\ref{sys_lin_2}) are supposed to belong to ${\mathfrak K}[\delta,{\frac{d}{dt}}]$, thus making system (\ref{sys_lin_2}) a linear time-varying time-delay differential system, whose coefficients are meromorphic functions with respect to time.
\subsubsection{System Module, Freeness}\label{module-sec}
To system (\ref{sys_lin_2}) is associated the so-called \emph{system module}, denoted by $\Lambda$. More precisely, following \cite{Fl-scl, Mounier_95}, let us consider a non zero, but otherwise arbitrary, pair $(\xi,\nu)= (\xi_{1},\ldots, \xi_{n},\nu_{1},\ldots, \nu_{m})$ and the free module\footnote{For more details on rings and modules, the reader may refer to \cite{Cohn_85}.}, denoted by $[\xi,\nu]$, generated by all possible linear combinations of $\xi$ and $\nu$ with coefficients in ${\mathfrak K}[\delta,{\frac{d}{dt}}]$. Next, we set $\theta=A\xi-B\nu$ and construct the submodule $[\theta]$ of $[\xi,\nu]$ generated by the components of the vector $\theta$.
The system module $\Lambda$ is, by definition, the quotient module $\Lambda =[\xi,\nu]/[\theta]$.
In \cite{Mounier_95}, in the context of commutative polynomial rings, the notion of projective (resp. torsion-free) \emph{controllability} of a time-invariant system, i.e. a system of the form (\ref{sys_lin_2}) with ground field ${\mathfrak K} = {\mathbb R}$, is defined as the projective (resp. torsion-)freeness of $\Lambda$, and shown to generalize the well-known Kalman controllability criterion to linear time-invariant differential delay systems. Moreover, as a consequence of a theorem of Quillen and Suslin, solving a conjecture of Serre (see e.g. \cite{Eisenbud_94,Lam_78}), $\Lambda$ is free if and only if it is projective free.
If $F$ is a finite-dimensional presentation matrix of $\Lambda$, the latter module $\Lambda$ is projective free if $F$ is right-invertible, i.e. there exists a matrix $T$ over ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ such that $FT = I$.
This approach has been generalized to modules over the Weyl algebras by Quadrat and Robertz \cite{Quadrat_07}, based on a theorem of Stafford \cite{Stafford_78}, (see algorithmic versions of this result in \cite{Hillebrand_01,Leykin_04}).
In both time-invariant and time-varying cases, systems whose module is free are called \emph{flat} (\cite{Mounier_95,Petit_ecc97,Rudolph_03}). Nevertheless, only few systems have a free system module, thus motivating the weaker notion of $\pi$-flatness: we say that the system is $\pi$-flat, or that its associated module is $\pi$-free (\cite{Mounier_95,Rudolph_03}), if, and only if, there exists a polynomial $\pi\in {\mathfrak K}[\delta]$, called \emph{liberation polynomial} (\cite{Mounier_95,Rudolph_03}), such that the module ${\mathfrak K}[\delta, \pi^{-1},{\frac{d}{dt}}] \otimes_{{\mathfrak K}[\delta, {\frac{d}{dt}}]} \Lambda$, i.e. the set of elements of the form $\sum_{i\in I} \pi^{-i} a_{i}\xi_{i}$ with $I$ arbitrary subset of ${\mathbb N}$, $a_{i}\in {\mathfrak K}[\delta, {\frac{d}{dt}}]$ and $\xi_{i} \in \Lambda$ for all $i\in I$, called the \emph{system module localized at the powers of $\pi$}, is free.
In other words, $\pi$-flatness means that the state and input can be expressed in terms of the $\pi$-flat output, a finite number of its time derivatives and delays, and advances corresponding to powers of the inverse operator $\pi^{-1}$.
In the sequel, we will also use the extension, as announced, of the ground field ${\mathfrak K}$ to ${\mathfrak K}(\delta)$, the fraction field generated by rational functions of $\delta$ with coefficients in ${\mathfrak K}$. The system module over this field extension is ${\mathfrak K}(\delta)[{\frac{d}{dt}}] \otimes_{{\mathfrak K}[\delta, {\frac{d}{dt}}]} \Lambda$. Indeed, freeness (in any sense) of the latter module does not imply freeness (in any sense) of the original system module $\Lambda$ (see e.g. \cite{Mounier_95}).
\subsubsection{Polynomial Matrices, Smith-Jacobson Decomposition, Hyper-regularity}\label{SJdecomp-sec}
The matrices of size $p\times q$ whose entries are in ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ generate a \emph{module} denoted by $\mm{p}{q}$. The matrix $M\in\mm{n}{n}$ is said to be \emph{invertible}\footnote{Note that the ${\mathfrak K}[\delta,{\frac{d}{dt}}]$-independence of the $n$ columns and rows of $M$ is not sufficient for its invertibility. Its inverse, denoted by $N$, has to be polynomial too.} if there exists a matrix $N\in\mm{n}{n}$ such that $MN=NM=I_{n}$, where $I_{n}$ is the identity matrix of order $n$. The subgroup of $\mm{n}{n}$ of invertible matrices is called the group of \emph{unimodular matrices} of size $n$ and is denoted by $\uu{n}$\footnote{It is also often denoted by $GL_{n}({\mathfrak K}[\delta, {\frac{d}{dt}}])$}.
Let us give an example of a system of the form (\ref{sys_lin_2}), that will serve as a guideline all along this section to illustrate the various concepts.
\begin{ex}\label{Example_1}
\begin{equation}
Ax \triangleq \left(\begin{array}{cc}
{\frac{d}{dt}}&-k(t)(\delta-\delta^{2})\\0&{\frac{d}{dt}}
\end{array}\right)x=
\left(\begin{array}{c}
0\\\delta
\end{array}\right)u \triangleq Bu\label{example_1}
\end{equation}
where $x=(x_{1},x_{2})^{T}$, $u$ is scalar and $k(t)$ a meromorphic function. In other words (\ref{example_1}) reads:
\begin{equation}
\left\{\begin{array}{l}
\dot{x}_{1}(t)=k(t)(x_{2}(t-\tau)-x_{2}(t-2\tau))\\\dot{x}_{2}(t)=u(t-\tau)
\end{array}\right.\label{example_1_bis}
\end{equation}
The coefficients ${\frac{d}{dt}}$ and $-k(t)(\delta-\delta^{2})=-k(t)\delta (1-\delta)$ are elements of ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ and the corresponding matrices $A$ and $B$ belong to $\mm{2}{2}$ and $\mm{2}{1}$ respectively.
\end{ex}
Note that it may be necessary to extend the polynomial ring as shown by the following computation on the previous example:
Let us express $u$ of (\ref{example_1}) as a function of $x_{1}$. It is straightforward to see that $x_{2}=\displaystyle (1-\delta)^{-1}\delta^{-1}\frac{1}{k}{\frac{d}{dt}} x_{1}$ and, since $u=\delta^{-1}\dot{x}_{2}$, we immediately get \mbox{$u = \delta^{-2}(1-\delta)^{-1}\left(\displaystyle- \frac{\dot{k}}{k^{2}}+\frac{1}{k}{\frac{d}{dt}}\right)\displaystyle{\frac{d}{dt}} x_{1}$}. Denoting by $\pi\triangleq
(1-\delta)\delta^{2}\in {\mathfrak K}[\delta]$,
the polynomial $\pi^{-1}\left(\displaystyle - \frac{\dot{k}}{k^{2}}+\frac{1}{k}{\frac{d}{dt}}\right)\displaystyle{\frac{d}{dt}}$ lives in ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$ the ring of polynomials of $\delta,\pi^{-1}$ and ${\frac{d}{dt}}$ with coefficients in ${\mathfrak K}$, but not in ${\mathfrak K}[\delta,{\frac{d}{dt}}]$. This is why we may also introduce matrices over ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$ for some given $\pi$ in ${\mathfrak K}[\delta]$. The corresponding module of matrices of size $p\times q$ will be denoted by $\mmp{p}{q}$. More precisely, a matrix $M$ belongs to $\mmp{p}{q}$ if and only if there exists a finite $s\in {\mathbb N}$ such that $\pi^{s} \cdot M \in \mm{p}{q}$.
\begin{rem}\label{finitesupp-rem}
It may be argued that the previous expression of $u$ as a function of $x_{1}$ is not feasible since $(1-\delta)^{-1}=\sum_{j=0}^{+\infty}\delta^{j}$ implies
$$u(t)=\displaystyle\sum_{j=-2}^{+\infty} \left(- \frac{\dot{k}(t-j\tau)}{k^{2}(t-j\tau)}\dot{x}_{1}(t-j\tau)+\frac{1}{k(t-j\tau)}\ddot{x}_{1}(t-j\tau)\right)$$
which involves an infinite number of delayed terms. However, if we deal with motion planning, if $x_{1}$ is chosen constant outside the interval $[t_{0}, t_{1}]$, for some $t_{0}, t_{1} \in {\mathbb R}$, $t_{0} < t_{1} $, then $\dot{x}_{1}$ and $\ddot{x}_{1}$ are identically equal to zero before $t_{0}$ and after $t_{1}$, and the above series has at most $\lceil \frac{t_{1}-t_{0}}{\tau}\rceil+2$ non vanishing terms, where the notation $\lceil r \rceil$ stands for the least integer upper bound of an arbitrary non negative real $r$.
\end{rem}
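The finite-support argument lends itself to a direct numerical check. In the sketch below (illustrative only; we take the hypothetical simplification $k(t)\equiv 1$, a delay $\tau=0.3$, and a smooth-step flat output $x_1$ that is constant outside $[0,1]$), $x_2$ is computed from the truncated series $x_{2}(t)=\sum_{j\ge -1}\dot{x}_{1}(t-j\tau)$ obtained from $x_{2}=(1-\delta)^{-1}\delta^{-1}\dot{x}_{1}$ (note the advance term $j=-1$, characteristic of $\pi$-flatness), and the first equation of (\ref{example_1_bis}) is recovered by telescoping:

```python
import numpy as np

tau = 0.3                       # delay
t0, t1 = 0.0, 1.0               # x1 transitions 0 -> 1 on [t0, t1]

def x1dot(t):
    """Derivative of the smooth-step flat output x1 = 3s^2 - 2s^3,
    s = (t - t0)/(t1 - t0); identically zero outside [t0, t1]."""
    s = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return 6.0 * s * (1.0 - s) / (t1 - t0)

def x2(t, jmax=40):
    """x2 = (1-delta)^{-1} delta^{-1} x1dot for k(t) = 1: the geometric
    series truncates because x1dot has compact support (cf. the remark)."""
    return sum(x1dot(t - j * tau) for j in range(-1, jmax))

# check the first system equation: x1dot(t) = x2(t - tau) - x2(t - 2*tau)
t = np.linspace(-1.0, 5.0, 601)
err = np.max(np.abs(x1dot(t) - (x2(t - tau) - x2(t - 2 * tau))))
print(err)   # zero up to floating-point round-off
```

The input then follows from the second equation, $u(t)=\dot{x}_{2}(t+\tau)$, again involving a bounded advance only.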
Unfortunately, ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ and ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$ are not Principal Ideal Domains (see e.g. \cite{McConnell_00,Shafarevich_97,Jacobson_78,Cohn_85}), a property which is essential for our purpose (see the Smith-Jacobson decomposition in Appendix~\ref{SJalgo-sec}). However, if we extend the ground field ${\mathfrak K}$ to the fraction field ${\mathfrak K}(\delta)$, ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ is a principal ideal ring of polynomials of ${\frac{d}{dt}}$. We then construct the modules $\mmd{p}{q}$ of matrices of size $p\times q$ and $\uud{p}$ of unimodular matrices of size $p\times p$ respectively, over ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$. Note that ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ strictly contains ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ and ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$ for every $\pi\in{\mathfrak K}[\delta]$. Therefore, to interpret the results of computations in ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$, which turn out to be quite simple, and to decide if they belong to a suitable ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$, following \cite{Mounier_95}, we have recourse to the notion of \emph{localization} introduced in subsection~\ref{module-sec}. This aspect will be discussed in section~\ref{main-sec}.
Since ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ is a Principal Ideal Domain, the matrices of $\mmd{p}{q}$ enjoy the essential property of admitting a so-called \emph{Smith-Jacobson decomposition}\footnote{we adopt here the names of Smith and Jacobson for the diagonal decomposition to remind that it is credited to Smith \cite{Gantmacher_66,Kailath_79} in the commutative context and Jacobson \cite{Jacobson_78,Chyzak_05} for general principal ideal domains.}, or \emph{diagonal decomposition}:
\begin{thm}[Smith-Jacobson decomposition \cite{Cohn_85,Jacobson_78}]\label{SJdecomp-thm}
Let $M\in\mmd{p}{q}$ be an arbitrary polynomial matrix of size $p\times q$. There exist unimodular matrices $U\in\uud{p}$ and $V\in\uud{q}$ such that:
\begin{equation} \label{smith-Jac-decomp}
UMV=\left\{
\begin{array}{ll}
\displaystyle (\Delta_{p}|0_{p,q-p})& \mbox{\textrm{if~~}} p\leq q\vspace{0.5em}\\
\left(\begin{array}{c}\Delta_{q}\\0_{p-q,q}\end{array}\right)&\mbox{\textrm{if~~}} p>q.
\end{array}
\right.
\end{equation}
In both cases, $\Delta_{\sigma}\in\mmd{\sigma}{\sigma}$, $\sigma = p$ or $q$, is a diagonal matrix whose diagonal elements $(d_{1},\ldots,d_{s},0,\ldots,0)$, with $s\leq \sigma$, are such that $d_{i}$ is a nonzero ${\frac{d}{dt}}$-polynomial for $i=1,\ldots,s$, with coefficients in ${\mathfrak K}(\delta)$, and is a divisor of $d_{j}$ for all $1\leq j\leq i$.
\end{thm}
A constructive algorithm to compute this decomposition may be found in Section~\ref{SJalgo-sec} of the Appendix.
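The pivot-and-reduce loop of that algorithm can be conveyed on a commutative stand-in. The sketch below is our own illustration, not the appendix algorithm itself (which operates in the noncommutative ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$): it diagonalizes an integer matrix by elementary row and column operations; ordering the diagonal entries into the divisibility chain of Theorem~\ref{SJdecomp-thm} would be a further step, omitted here.

```python
def diagonalize(A):
    """Reduce an integer matrix to diagonal form by elementary row/column
    operations (swaps and subtraction of multiples) -- the core loop of a
    Smith-Jacobson-type decomposition, in a commutative toy setting."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for t in range(min(m, n)):
        while True:
            # pick the nonzero entry of smallest magnitude as pivot
            cand = [(abs(A[i][j]), i, j) for i in range(t, m)
                    for j in range(t, n) if A[i][j]]
            if not cand:
                break                          # remaining block is zero
            _, i, j = min(cand)
            A[t], A[i] = A[i], A[t]            # move pivot to (t, t)
            for r in range(m):
                A[r][t], A[r][j] = A[r][j], A[r][t]
            p = A[t][t]
            for i in range(t + 1, m):          # clear column t below pivot
                q = A[i][t] // p
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            for j in range(t + 1, n):          # clear row t right of pivot
                q = A[t][j] // p
                for r in range(m):
                    A[r][j] -= q * A[r][t]
            if all(A[i][t] == 0 for i in range(t + 1, m)) and \
               all(A[t][j] == 0 for j in range(t + 1, n)):
                break                          # else retry with smaller pivot
    # sign normalization (row * -1 is unimodular over the integers)
    for t in range(min(m, n)):
        if A[t][t] < 0:
            A[t] = [-a for a in A[t]]
    return A

print(diagonalize([[2, 4], [6, 8]]))      # diagonal form diag(2, 4)
print(diagonalize([[1, 0, 5], [0, 2, 3]]))  # reduces to (I_2 | 0)
```

A full-row-rank matrix whose diagonal block reduces to the identity, as in the second example, is the commutative analogue of the hyper-regular case introduced below Theorem~\ref{SJdecomp-thm}.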
Given an arbitrary matrix $M$, we call $\lsm{M}$ (resp. $\rsm{M}$) the left (resp. right) Smith-Jacobson subset of unimodular matrices $U\in\uud{p}$ (resp. $V\in\uud{q}$) such that there exists $V\in\uud{q}$ (resp. $U\in\uud{p}$) satisfying the decomposition (\ref{smith-Jac-decomp}).
\begin{ex}\label{example_smith}
Consider again system (\ref{example_1}). The Smith-Jacobson decomposition of $B$ is straightforward:
\begin{equation}
U=\left(\begin{array}{cc}0&1\\1&0\end{array}\right),\quad V=1,\quad\Delta_{1}=\delta, \quad UBV= \left(\begin{array}{c}\delta\\0\end{array}\right)\nonumber
\end{equation}
If we want to eliminate $u$ in (\ref{example_1}), using the previous Smith-Jacobson decomposition of $B$, we first remark that the second row of $U$, which will be denoted by $U_{2}=\left( 1\quad 0\right)$, corresponds to the left projection operator on the kernel of $B$, i.e. $U_{2}B=0$. It then suffices to left-multiply $A$ by $U_{2}$ to obtain the implicit form
\begin{equation}
F(\delta,{\frac{d}{dt}})x \triangleq U_{2}Ax=\left(\begin{array}{cc}{\frac{d}{dt}}&-k\delta(1-\delta)\end{array}\right)\left(\begin{array}{c}x_{1}\\x_{2}\end{array}\right)=U_{2}Bu = 0\label{implicit_form}
\end{equation}
We may also compute a Smith-Jacobson decomposition of $F$: we first right-multiply $F$ by $\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$ to shift the 0-th order term in ${\frac{d}{dt}}$ to the left, yielding
$$F\left(\begin{array}{cc}0&1\\1&0\end{array}\right)=\left(\begin{array}{cc}-k\delta(1-\delta)&{\frac{d}{dt}}\end{array}\right)$$ and then, right-multiplying the result by
$$\left(\begin{array}{cc}-\delta^{-1}(1-\delta)^{-1}\frac{1}{k}&\delta^{-1}(1-\delta)^{-1}\frac{1}{k}{\frac{d}{dt}}\\0&1\end{array}\right)$$
leads to $(1\quad 0)$. The Smith-Jacobson decomposition of $F$ is therefore given by
\begin{equation}
U_{F}FV_{F}=(1\quad 0).\label{F_decomp}
\end{equation}
with
\begin{equation}
\begin{aligned}
V_{F}&=\left(\begin{array}{cc}0&1\\1&0\end{array}\right)\left(\begin{array}{cc}- \delta^{-1}(1-\delta)^{-1}\frac{1}{k}&\delta^{-1}(1-\delta)^{-1}\frac{1}{k}{\frac{d}{dt}}\\0&1\end{array}\right)\\
&=\left(\begin{array}{cc}0&1\\-\delta^{-1}(1-\delta)^{-1}\frac{1}{k}&\delta^{-1}(1-\delta)^{-1}\frac{1}{k}{\frac{d}{dt}}
\end{array}\right)
\label{Smith_VF}
\end{aligned}
\end{equation}
and $U_{F}=1$.
As previously discussed, the Smith-Jacobson decomposition has been computed over the ring ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$ but, according to (\ref{Smith_VF}), its result may be expressed in the ring ${\mathfrak K}[\delta, \pi^{-1}_{F},{\frac{d}{dt}}]$ with $\pi_{F}=(1-\delta)\delta$.
It is also easy to verify that, since the matrix $F$ is a presentation matrix of the system module over ${\mathfrak K}[\delta, \pi^{-1}_{F},{\frac{d}{dt}}]$, the latter module, according to (\ref{F_decomp}), is isomorphic to any free finitely generated module admitting the matrix $(1\quad 0)$ as presentation matrix, which implies that the localized system module is free (see Section~\ref{module-sec}). Note also that the polynomial $\pi_{F}$ admits the non zero root $\delta=1$, so that the associated operator has a nontrivial kernel: every non zero $\tau$-periodic meromorphic function $f$ of the variable $t$ satisfies $\pi_{F}f(t)=f(t-\tau)-f(t-2\tau)=0$. Therefore, the system module is not ${\mathfrak K}[\delta,{\frac{d}{dt}}]$-free.
\end{ex}
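The final observation of the example, that $\pi_{F}$ annihilates every $\tau$-periodic function, can be checked by a quick symbolic computation (a sketch of ours using Python's sympy; the test function $\sin(2\pi t/\tau)$ is an arbitrary $\tau$-periodic choice):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)

# pi_F = (1 - delta)*delta acts on a function f as
# (pi_F f)(t) = f(t - tau) - f(t - 2*tau); any tau-periodic f is annihilated.
f = sp.sin(2*sp.pi*t/tau)
pi_F_f = f.subs(t, t - tau) - f.subs(t, t - 2*tau)
print(sp.simplify(sp.expand(pi_F_f)))  # 0
```

Such a non-trivial kernel is precisely what produces the torsion discussed in the example.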
We now introduce a remarkable class of matrices of $\mmd{p}{q}$ called \emph{hyper-regular}.
\begin{defn}[Hyper-regularity \cite{Levine_09}]
Given a matrix $M\in\mmd{p}{q}$, we say that $M$ is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-\emph{hyper-regular}, or simply \emph{hyper-regular} if the context is unambiguous, if and only if, in (\ref{smith-Jac-decomp}), $\Delta_p=I_p$ if $p\leq q$ (resp. $\Delta_q=I_q$ if $p> q$).
\end{defn}
\begin{rem} It is not difficult to prove that a finitely generated module $\Lambda$ over the ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ whose presentation matrix $F$ is hyper-regular cannot have torsion elements and is therefore free, since ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ is a Principal Ideal Ring. However, the system module $\Lambda$ over ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ need not be free, as shown in the previous example. Nevertheless, it will be seen later that there exists a \emph{liberation polynomial} $\pi$, deduced from the Smith-Jacobson decomposition of $F$, such that the ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$-system module, associated to the same presentation matrix $F$, is free.
\end{rem}
\begin{ex}\label{example_smith_bis}
Going back to the decomposition of Example~\ref{example_smith}, a variant of this decomposition may be obtained as:
\begin{equation}
U'=\left(\begin{array}{cc}0&\delta^{-1}\\1&0\end{array}\right),\quad V=1,\quad \Delta'_{1}=I_{1}=1\label{Smith_B}
\end{equation}
which proves that $B$ is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular.
It is also immediate from (\ref{F_decomp}) that $F$ is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, or more precisely, ${\mathfrak K}\left[\delta, \pi_{F}^{-1},{\frac{d}{dt}}\right]$-hyper-regular with $\pi_{F}=(1-\delta)\delta$. Note, on the contrary, that $A$ is not ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, though $F=U_{2}A$ is. It is easily verified that
$$U_{A}AV_{A}= \Delta_{A}$$
with
$$\begin{array}{lcl}
U_{A}&=&\left( \begin{array}{cc}
1&0\\
-\frac{\dot{k} }{k}+ {\frac{d}{dt}}&k\delta(1-\delta)
\end{array}\right) \vspace{0.5em}
\\
V_{A}&=&\left( \begin{array}{cc}
0&1\\
-(1-\delta)^{-1}\delta^{-1}\frac{1}{k}&(1-\delta)^{-1}\delta^{-1}\frac{1}{k}{\frac{d}{dt}}
\end{array}\right)
\end{array}
$$
and
$$ \Delta_{A}=\left( \begin{array}{cc}
1&0\\
0&(-\frac{\dot{k} }{k} +{\frac{d}{dt}} ){\frac{d}{dt}}
\end{array}\right).
$$
Thus, since the second-degree ${\frac{d}{dt}}$-polynomial $(-\frac{\dot{k} }{k} +{\frac{d}{dt}}){\frac{d}{dt}}$ on the diagonal of $\Delta_{A}$ cannot be reduced, $A$ is not hyper-regular.
\end{ex}
\subsubsection{Implicit system representation}
One of the applications of the Smith-Jacobson decomposition is the possibility of expressing system (\ref{sys_lin_2}) in implicit form by eliminating the input $u$, which may be useful for working with a smaller number of variables.
For simplicity's sake, we rewrite system (\ref{sys_lin_2}) as $Ax=Bu$.
\begin{prop}\label{impl-prop}
System (\ref{sys_lin_2}) is equivalent to
\begin{equation}\label{semi_impl_rep}
Fx=0, \quad \Delta_{B}N^{-1}u=(I_{m},0_{m,n-m})MAx
\end{equation}
with
\begin{equation}\label{Fdef}
F= (0_{n-m,m},I_{n-m})MA,
\end{equation}
$M\in \lsm{B}$ and $N$ such that
\begin{equation}\label{Bdecomp}
MBN=\left(\begin{array}{c}\Delta_{B}\\0_{n-m,m}\end{array}\right).
\end{equation}
Moreover, if $B$ is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, the explicit form (\ref{sys_lin_2}) admits the implicit representation
\begin{equation}\label{impl_rep}
Fx=0
\end{equation}
with $F$ given by (\ref{Fdef}), and with $\Delta_{B}=I_{m}$ in (\ref{Bdecomp}). In this case, $u$ is deduced from $x$ by
\begin{equation}\label{udef}
u=N(I_{m},0_{m,n-m})MAx.
\end{equation}
\end{prop}
\begin{proof}
Consider a pair of matrices $M$ and $N$ obtained from the Smith-Jacobson decomposition of $B$, i.e. satisfying (\ref{Bdecomp}). Thus, left-multiplying both sides of system (\ref{sys_lin_2}) by $(0_{n-m,m},I_{n-m})M$, according to (\ref{Fdef}) we get $Fx=0$. On the other hand, multiplying both sides of (\ref{sys_lin_2}) by $(I_{m},0_{m,n-m})M$ we get
$$(I_{m},0_{m,n-m})MAx= \Delta_{B}N^{-1}u,$$
hence the representation (\ref{semi_impl_rep}).
Conversely, if $x$ and $u$ are given by (\ref{semi_impl_rep}), we have
$$MAx=\left( \begin{array}{c} (I_{m},0_{m,n-m})MA\\(0_{n-m,m},I_{n-m})MA\end{array}\right)x = \left(
\begin{array}{c} \Delta_{B}N^{-1}u\\0 \end{array}\right)= MBu$$
the last equality being a consequence of (\ref{Bdecomp}). Thus, since $M$ is unimodular, the pair $(x,u)$ satisfies $Ax=Bu$, which proves the equivalence.
Finally, if $B$ is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, one can replace $\Delta_{B}$ by $I_{m}$ and the second equation of (\ref{semi_impl_rep}) becomes (\ref{udef}). Thus, $u$ is a ${\mathfrak K}\left(\delta\right)[{\frac{d}{dt}}]$-combination of the components of $x$ and can be eliminated. Therefore, the remaining part (\ref{impl_rep}) is the desired implicit representation of (\ref{sys_lin_2}). The proposition is proven.
\end{proof}
In the sequel, if $B$ is hyper-regular, we refer to (\ref{impl_rep}) as the implicit representation of system (\ref{sys_lin_2}).
\begin{prop}\label{controllability-prop}
For system (\ref{sys_lin_2}) to be ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-torsion free controllable (see \cite{Mounier_95, Rudolph_03}), it is necessary that $B$ and $F$, defined by (\ref{Fdef}), are ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular.
Moreover, in this case, there exists a polynomial $\bar{\pi}$ such that the localized system module at the powers of $\bar{\pi}$ is free.
\end{prop}
\begin{proof}
Assume that the system is ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-torsion free controllable and that $B$ is not ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular. Then, using the decomposition (\ref{Bdecomp}), $\Delta_B$ has at least one diagonal element which is a polynomial of degree larger than or equal to 1 with respect to ${\frac{d}{dt}}$, with coefficients in ${\mathfrak K}(\delta)$. Hence there exists a non zero element $v$ of the ${\mathfrak K}[\delta,{\frac{d}{dt}}]$-free module generated by the components of $u$ such that $\Delta_{B}v=0$ ($v$ is a non trivial solution of a differential delay equation). It is immediately seen that the pair $x=0, u=Nv$ is a non zero torsion element of the system module, which contradicts the torsion freeness assumption.
If $F$ is not ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, its decomposition is given by $UF\tilde{Q} = \left( \Delta_{F}, 0 \right)$, $ \Delta_{F}$ having at least one diagonal element which is a polynomial of degree larger than or equal to 1 with respect to ${\frac{d}{dt}}$ and with coefficients in ${\mathfrak K}(\delta)$ which shows, using the representation (\ref{semi_impl_rep}), that every pair $(\xi_1,\xi_2)$ such that $\Delta_{F} \xi_1 =0$, $\xi_1 \neq 0$, and $\xi_2$ arbitrary, satisfies $\left( \Delta_{F}, 0 \right) \left( \begin{array}{c}\xi_1\\\xi_2\end{array}\right)=0$. Thus, the pair $(x,u)$ with $x=\tilde{Q}\left( \begin{array}{c}\xi_1\\\xi_2\end{array}\right)$ and $u$ satisfying $\Delta_{B}N^{-1}u=(I_{m},0_{m,n-m})MA\tilde{Q} \left( \begin{array}{c}\xi_1\\\xi_2\end{array}\right)$ (see (\ref{semi_impl_rep})) is a torsion element of the system module. Consequently, the latter module cannot be ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-free.
To prove the existence of $\bar{\pi}$, we remark that, according to the Smith-Jacobson decomposition algorithm (see Section \ref{SJalgo-sec} of the Appendix), if $M\in \lsm{B}$ and $N\in \rsm{B}$ are such that $MBN=\left(\begin{array}{c}I_{m}\\0\end{array}\right)$, each row of $M$ and $N$ may contain the inverse of a polynomial of ${\mathfrak K}[\delta]$. Taking the LCM, say $\pi_{M,N}$, of these polynomials for all rows, we immediately get that $\pi_{M,N}\cdot M\in \uu{n}, \pi_{M,N}\cdot N\in \uu{m}$ with $\pi_{M,N} \in {\mathfrak K}[\delta]$. The same argument applies for $F$: consider $U\in\uud{n-m},\wt{Q}\in\uud{n}$ as above, namely such that $UF\wt{Q}=\left(I_{n-m}\ 0_{n-m,m}\right)$. There exists $\pi_{U,\tilde{Q}}\in {\mathfrak K}[\delta]$ such that $\pi_{U,\tilde{Q}}\cdot U \in \uu{n-m}$, $\pi_{U,\tilde{Q}}\cdot \tilde{Q} \in \uu{n}$. Taking $\bar{\pi}$ as the LCM of $\pi_{M,N}$ and $\pi_{U,\tilde{Q}}$, it is immediately seen from the decompositions of $B$ and $F$, multiplied by suitable powers of $\bar{\pi}$, that the system module over the localized ring ${\mathfrak K}[\delta,\bar{\pi}^{-1},{\frac{d}{dt}}]$ is free, and the proof is complete.
\end{proof}
\subsection{Differential $\pi$-Flatness}\label{pi-flat-sec}
We first recall the classical definition of a flat system \cite{Levine_09,Ramirez_04}, in the context of systems described by ordinary nonlinear differential equations: a system is said to be differentially flat if and only if there exists a set of independent variables, referred to as a flat output, such that every system variable (including the input variables) is a function of the flat output and a finite number of its successive time derivatives. More precisely, the system
\begin{equation}
\dot{x}=f(x,u)\nonumber
\end{equation}
with $x\in{\mathbb R}^{n}$ and $u\in{\mathbb R}^{m}$ is differentially flat if and only if there exist a set of independent variables (flat output)
\begin{equation}\label{y-flat}
y=h(x,u,\dot{u},\ddot{u},\ldots,u^{(r)}),\quad y\in{\mathbb R}^{m}
\end{equation}
such that
\begin{eqnarray}
x&=&\alpha(y,\dot{y},\ddot{y},\ldots,y^{(s)})\\
u&=&\beta(y,\dot{y},\ddot{y},\ldots,y^{(s+1)})
\end{eqnarray}
and such that the system equations
\begin{equation}
\frac{d\alpha}{dt}(y,\dot{y},\ddot{y},\ldots,y^{(s+1)})=f\left(\alpha(y,\dot{y},\ddot{y},\ldots,y^{(s)}),\beta(y,\dot{y},\ddot{y},\ldots,y^{(s+1)})\right)
\end{equation}
are identically satisfied for all smooth enough functions $t\mapsto y(t)$, $r$ and $s$ being suitable finite $m$-tuples of integers.
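As an elementary illustration of this definition (our own toy example, not taken from the references): for the double integrator $\dot{x}_{1}=x_{2}$, $\dot{x}_{2}=u$, the variable $y=x_{1}$ is a flat output with $x=\alpha(y,\dot{y})=(y,\dot{y})$ and $u=\beta(y,\dot{y},\ddot{y})=\ddot{y}$, and the system equations are identically satisfied, as a short symbolic check confirms:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')(t)  # an arbitrary (smooth enough) flat-output trajectory

# Double integrator x1' = x2, x2' = u with flat output y = x1:
# x = alpha(y, y') = (y, y') and u = beta(y, y', y'') = y''.
x1, x2 = y, sp.diff(y, t)
u = sp.diff(y, t, 2)

# Both system equations hold identically, whatever y(t) is:
print(sp.simplify(sp.diff(x1, t) - x2), sp.simplify(sp.diff(x2, t) - u))  # 0 0
```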
Let us now discuss further extensions of this definition to linear delay systems and consider first an elementary example to motivate the next definition.
\begin{ex}
Let us consider the elementary system $$\dot{x}(t)=u(t-\tau)$$ with $A={\frac{d}{dt}}$ and $B=\delta$ in the notations of (\ref{sys_lin_2}). Clearly, if we set $y=x$, $y$ looks like a flat output, though $u=\delta^{-1}\dot{y}$ contains a one-step prediction and belongs to ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$ but not to ${\mathfrak K}[\delta,{\frac{d}{dt}}]$. However, for motion planning, such a dependence remains acceptable (see Remark~\ref{finitesupp-rem}), even if it poses more delicate problems for feedback design. This notion is called differential $\delta$-flatness (see e.g. \cite{Mounier_95,Rudolph_03}).
\end{ex}
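For motion planning with this elementary system, the prediction is indeed harmless: once a trajectory $t\mapsto y(t)$ has been planned, the open-loop input is simply $u(t)=\dot{y}(t+\tau)$. A minimal sketch (the cubic trajectory is an arbitrary choice of ours):

```python
import sympy as sp

t, tau = sp.symbols('t tau')

# Planned flat-output trajectory and the corresponding open-loop input
# u(t) = y'(t + tau): a one-step prediction of the derivative of y.
y = t**3
u = sp.diff(y, t).subs(t, t + tau)  # u(t) = 3*(t + tau)**2

# The pair x = y, u solves x'(t) = u(t - tau):
print(sp.expand(sp.diff(y, t) - u.subs(t, t - tau)))  # 0
```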
According to \cite{Mounier_95}, this notion is generalized as follows:
\begin{defn}[Differential $\pi$-flatness \cite{Mounier_95}]\label{pi_flatness} The linear delay system (\ref{sys_lin_2}) is said to be differentially $\pi$-flat (or $\pi$-free) if and only if there exists a polynomial $\pi\in {\mathfrak K}[\delta]$, and a collection $y$ of $m$ $(\delta,\pi^{-1})$-differentially independent variables\footnote{more precisely, there does not exist a non zero matrix $S \in \mmp{m}{m}$ such that $Sy=0$, or equivalently, $Sy=0$ implies $S\equiv 0$}, called $\pi$-flat output, of the form
\begin{equation}
y =P_{0}(\delta,\pi^{-1})x+P_{1}(\delta,\pi^{-1},{\frac{d}{dt}})u,\label{y_pi_flat}
\end{equation}
with $P_{0}(\delta,\pi^{-1})\in({\mathfrak K}[\delta,\pi^{-1}])^{m\times n}$ the set of matrices of size $m\times n$, with coefficients in ${\mathfrak K}[\delta,\pi^{-1}]$, $P_{1}(\delta,\pi^{-1},{\frac{d}{dt}})\in\mmp{m}{m}$, and such that
\begin{eqnarray}
x &=&Q(\delta,\pi^{-1},{\frac{d}{dt}})y,\label{x_pi_flat}\\
u &=&R(\delta,\pi^{-1},{\frac{d}{dt}})y,\label{u_pi_flat}
\end{eqnarray}
with $Q(\delta,\pi^{-1},{\frac{d}{dt}})\in\mmp{n}{m}$, and $R(\delta,\pi^{-1},{\frac{d}{dt}})\in\mmp{m}{m}$.
\end{defn}
In other words, Definition~\ref{pi_flatness} states that the components of a $\pi$-flat output $y$ can be obtained as a ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$-linear combination of the system variables, and that the system variables $(x, u)$ are in turn ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$-linear combinations of the components of $y$. Thus $x$ and $u$ can be calculated from $y$ using differentiations, delays, and predictions (coming from the inverse of $\pi$). Note that since every element of the system module is a ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$-linear combination of the components of $y$, and since the components of $y$ are ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$-independent, the components of $y$ indeed form a basis of the system module ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}] \otimes_{{\mathfrak K}[\delta,{\frac{d}{dt}}]} \Lambda$, which is therefore free, hence the equivalence with the definition of subsection~\ref{module-sec}.
If system (\ref{sys_lin_2}) is considered in implicit form (\ref{impl_rep}) after elimination of the input $u$, since this elimination expresses $u$ as a ${\mathfrak K}\left(\delta\right)[{\frac{d}{dt}}]$-combination of the components of $x$, the expression (\ref{y_pi_flat}), combined with (\ref{udef}), reads $y=Px$ with $P\in\mmd{m}{n}$.
The previous definition is thus adapted as follows:
\begin{defn}\label{pi_flatness_impl} The implicit linear delay system (\ref{impl_rep}) is said to be differentially $\pi$-flat if and only if there exists a polynomial $\pi\in {\mathfrak K}[\delta]$, and a collection $y$ of $m$ $(\delta,\pi^{-1})$-differentially independent variables, called $\pi$-flat output, of the form
\begin{equation}
y =P(\delta,\pi^{-1},{\frac{d}{dt}})x,\label{y_pi_flat_impl}
\end{equation}
with $P(\delta,\pi^{-1},{\frac{d}{dt}})\in\mmp{m}{n}$, and such that
\begin{equation}
x =Q(\delta,\pi^{-1},{\frac{d}{dt}})y,\label{x_pi_flat_impl}
\end{equation}
with $Q(\delta,\pi^{-1},{\frac{d}{dt}})\in\mmp{n}{m}$.
\end{defn}
The matrices $P$, $Q$ and $R$ of (\ref{y_pi_flat})--(\ref{u_pi_flat}) in the explicit case, and $P$ and $Q$ of (\ref{y_pi_flat_impl})--(\ref{x_pi_flat_impl}) in the implicit case, are called \emph{defining operators} of the $\pi$-flat output $y$.
\begin{rem}
In \cite{Mounier_95} and later (see e.g. \cite{Rudolph_03}), the above notion is often called $\pi$-freeness and introduced via the notion of system module. The wording $\pi$-flatness appears, to the authors' knowledge, for the first time in \cite{Petit_00}. It has also been related to system parameterization in \cite{Pommaret_99,Chyzak_07}. We have preferred here the name $\pi$-flatness, in reference to differential flatness, and to present it directly via the notion of flat output rather than via a basis of the system module. Note that in formula~(\ref{y_pi_flat}), $P_{0}$ is a 0th degree polynomial in ${\frac{d}{dt}}$, to mimic the general definition in (\ref{y-flat}) which does not include time derivatives of $x$, with (\ref{y_pi_flat})--(\ref{u_pi_flat}) restricted to linear expressions.
\end{rem}
\begin{ex}\label{ex_contd}
Let us go back again to Example~\ref{Example_1} and prove that $y=x_{1}$ is a $\pi$-flat output with $\pi=(1-\delta)\delta^{2}$. From (\ref{example_1_bis}), we have
$$x_{2}=\delta^{-1}(1-\delta)^{-1}\frac{1}{k}\dot{x}_{1}=\delta^{-1}(1-\delta)^{-1}\frac{1}{k}\dot{y}$$
\begin{equation}
u=\delta^{-1}{\frac{d}{dt}} x_{2}=\delta^{-2}(1-\delta)^{-1}\left(-\frac{\dot{k}}{k^{2}}\dot{y}+\frac{1}{k}\ddot{y}\right)\label{u_ex1}
\end{equation}
In other words, following the notations of (\ref{y_pi_flat})--(\ref{u_pi_flat}), $P_{0}=\left( 1,0\right)$, $P_{1}=0$ and
\begin{equation}
\left(\begin{array}{c}x_{1}\\x_{2}\end{array}\right)=\left(\begin{array}{c}1\\\pi^{-1}\delta\frac{1}{k}{\frac{d}{dt}}\end{array}\right)y \triangleq Q(\delta,\pi^{-1},{\frac{d}{dt}})y\label{x_ex1}
\end{equation}
\begin{equation}
u=\pi^{-1}\left(- \frac{\dot{k}}{k^{2}}{\frac{d}{dt}}+\frac{1}{k}\frac{d^{2}}{dt^{2}}\right)y \triangleq R(\delta,\pi^{-1},{\frac{d}{dt}})y\label{u_bis_ex1}
\end{equation}
which proves that $y$ is a $\pi$-flat output.
\end{ex}
\section{Main Result}\label{main-sec}
In this section, we propose a simple and effective algorithm for the computation of $\pi$-flat outputs of linear time-delay systems based on the following necessary and sufficient condition for the existence of defining operators of a $\pi$-flat output. Moreover, explicit expressions of $\pi$ and of these operators are obtained.
\begin{thm}\label{pi_flat_thm}
A necessary and sufficient condition for system (\ref{sys_lin_2}) to be $\pi$-flat is that the matrices $B$ and $F$ are ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular.
We construct the operators $P$, $Q$ and $R$ and the polynomial $\pi$ as follows.
\begin{itemize}
\item[\bf{0.}] According to Propositions~\ref{impl-prop} and \ref{controllability-prop}, construct the decomposition of $B$ (\ref{Bdecomp}), define $F$ by (\ref{Fdef}) and compute $\bar{\pi}$;
\item[\bf{1.}] $Q=\wt{Q}\left( \begin{array}{c}0_{n-m,m}\\I_{m}\end{array}\right),~\text{with}~\wt{Q}\in\rsm{F}$. Note that $\bar{\pi}\cdot Q \in \mm{n}{m}$ by construction;
\item[\bf{2.}] $R=N(I_{m},0_{m,n-m})MAQ,~\text{with}~N\in\rsm{B}$. There exists $\pi_{R} \in {\mathfrak K}[\delta]$ such that $\pi_{R}\cdot R\in \mm{m}{m}$;
\item[\bf{3.}] $P=W(I_{m},0_{m,n-m})\wt{P},~\text{with}~\wt{P}\in\lsm{Q}~\text{and}~W\in\rsm{Q}$. There exists $\pi_{P}\in {\mathfrak K}[\delta]$ such that $\pi_{P}\cdot P\in \mm{m}{n}$.
\end{itemize}
Let $\pi$ be given by $\pi= \lcm{\bar{\pi},\pi_{P},\pi_{R}}$, the least common multiple of $\bar{\pi}$, $\pi_{P}$ and $\pi_{R}$.
Thus, $P$, $Q$ and $R$ are defining operators with $y=Px$, $x=Qy$, $u=Ry$, and $y$ is a $\pi$-flat output.
\end{thm}
\begin{proof}
If $B$ or $F$ is not ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regular, according to Proposition~\ref{controllability-prop}, the ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$-system module cannot be torsion free. Therefore, system (\ref{sys_lin_2}) cannot be $\pi$-flat, for any $\pi\in {\mathfrak K}[\delta]$. By contraposition, the hyper-regularity of $B$ and $F$ is necessary.
We now prove that the ${\mathfrak K}(\delta)\left[{\frac{d}{dt}}\right]$-hyper-regularity of $B$ and $F$ is sufficient to construct the defining matrices $P$, $Q$ and $R$ as well as a liberation polynomial $\pi$.
We first use the implicit form (\ref{impl_rep}) of Proposition~\ref{impl-prop}, and more precisely (\ref{Fdef})--(\ref{udef}) to obtain a decomposition of $B$ and $F$.
Since $F$ is hyper-regular by assumption, there exist $U\in\uud{n-m},\wt{Q}\in\uud{n}$ such that $UF\wt{Q}=\left(I_{n-m}\ 0_{n-m,m}\right)$. Consequently $Q=\wt{Q}\left(\begin{array}{c}0_{n-m,m}\\I_{m}\end{array}\right)$, with $\wt{Q}\in \rsm{F}$, is such that $FQ=0_{n-m,m}$. Thus, setting $x=Qy$, we have $FQy=0$ for all $y$.
The existence of $\bar{\pi}\in {\mathfrak K}[\delta]$ is proved following the same argument as in the proof of Proposition~\ref{controllability-prop}: for an arbitrary $\alpha \times \beta$ hyper-regular matrix $M$, according to the Smith-Jacobson decomposition algorithm (see Section \ref{SJalgo-sec} of the Appendix), if $U_{M}\in \lsm{M}$ and $V_{M}\in \rsm{M}$ are such that $U_{M}MV_{M}=(I_{\alpha},0)$ if $\alpha\leq \beta$ (resp.
$U_{M}MV_{M}=\left(\begin{array}{c}I_{\beta}\\0\end{array}\right)$ if $\alpha\geq \beta$), each row of $U_{M}$ and $V_{M}$ may contain the inverse of a polynomial of ${\mathfrak K}[\delta]$. Taking the LCM, say $\pi_{M}$, of these polynomials for all rows, we immediately get that $\pi_{M}\cdot U_{M}\in \mm{\alpha}{\alpha}$ and $\pi_{M}\cdot V_{M}\in \mm{\beta}{\beta}$, with $\pi_{M} \in {\mathfrak K}[\delta]$. Applying this result to the decompositions of $B$ and $F$, we have proven the existence of $\bar{\pi}$ such that $\bar{\pi}\cdot \wt{Q} \in \uu{n}$ and $\bar{\pi}\cdot Q\in \mm{n}{m}$, which proves item 1.
Going back to (\ref{Bdecomp}) and (\ref{udef}), setting $R=N\left(I_{m}\quad 0_{m,n-m}\right)MAQ$, we obtain $u=Ry$ with $N\in\rsm{B}$. Finally, the proof of the existence of $\pi_{R} \in {\mathfrak K}[\delta]$ such that $\pi_{R}\cdot R\in \mm{m}{m}$ follows the same lines as in item~1, which proves 2.
Since $Q$ is hyper-regular by construction, its Smith-Jacobson decomposition yields the existence of $\wt{P}\in \uud{n}$ and $W\in \uud{m}$ such that
$$\wt{P}QW=\left(\begin{array}{c}I_{m}\\0_{n-m,m}\end{array}\right)$$
thus $\left(I_{m}\quad 0_{m,n-m}\right)\wt{P}QW=I_{m}$ and, setting
$$P=W(I_{m},0_{m,n-m})\wt{P}$$
it follows that $Px=W(I_{m},0_{m,n-m})\wt{P}x=W(I_{m},0_{m,n-m})\wt{P}Qy=WW^{-1}y=y$, and the proof of the third item is complete, noting again that the existence of $\pi_{P} \in {\mathfrak K}[\delta]$ such that $\pi_{P}\cdot P\in \mm{m}{n}$ follows the same lines as in items 1 and 2.
Finally, taking $\pi= \lcm{\bar{\pi},\pi_{P},\pi_{R}}$, the least common multiple of $\bar{\pi}$, $\pi_{P}$ and $\pi_{R}$, it is straightforward to show that $\pi \cdot P \in \mm{m}{n}$, $\pi\cdot Q \in\mm{n}{m}$ and $\pi \cdot R \in\mm{m}{m}$. Therefore, $P$, $Q$ and $R$ are defining matrices of a $\pi$-flat output for system (\ref{sys_lin_2}), and thus system (\ref{sys_lin_2}) is $\pi$-flat.
\end{proof}
Theorem \ref{pi_flat_thm} is easily translated into Algorithm~\ref{algo_pi_flat} presented below.
\begin{algorithm}[h]\label{algo_pi_flat}
\SetAlgoNoLine
\LinesNumbered
\DontPrintSemicolon
\SetKwInput{KwInit}{Initialization}
\SetKwInput{Algo}{Algorithm}
\caption{Procedure to compute $\pi$-flat outputs.}
\KwIn{Two matrices $A\in\mm{n}{n}$ and $B\in\mm{n}{m}$.}
\KwOut{A polynomial $\pi\in {\mathfrak K}[\delta]$ and defining operators $P\in\mmp{m}{n}$, $Q\in\mmp{n}{m}$ and $R\in\mmp{m}{m}$ such that $y=Px$, $x=Qy$ and $u=Ry$.}
\KwInit{Test of hyper-regularity of $B$ by its Smith-Jacobson decomposition, which also provides $M\in\lsm{B}$ and $N\in\rsm{B}$, i.e. such that $MBN=\left( I_{m}\, ,\, 0_{n-m,m}\right)^{T}$. If $B$ is not hyper-regular, the system is not $\pi$-flat whatever $\pi\in {\mathfrak K}[\delta]$.}
\Algo
Set $F=(0_{n-m,m},I_{n-m})MA$ and test if $F$ is hyper-regular by computing its Smith-Jacobson decomposition: $VF\wt{Q}=(I_{n-m},0_{n-m,m})$. If $F$ is not hyper-regular, the system is not $\pi$-flat whatever $\pi\in {\mathfrak K}[\delta]$. Otherwise, compute $\bar{\pi}\in{\mathfrak K}[\delta]$ such that $\bar{\pi}\cdot M\in \uu{n}$, $\bar{\pi}\cdot N\in \uu{m}$, $\bar{\pi}\cdot \wt{Q}\in \uu{n}$.\;
Compute $Q=\wt{Q}\left( 0_{n-m,m}\, ,\, I_{m} \right)^{T}$, $R=N(I_{m},0_{m,n-m})MAQ$ and $\pi_{R}\in {\mathfrak K}[\delta]$ such that $\pi_{R}\cdot R\in \mm{m}{m}$.\;
Compute a Smith-Jacobson decomposition of $Q$: $\wt{P}QW=\left( I_{m}\, ,\, 0_{n-m,m}\right)^{T}$, $P=W(I_{m},0_{m,n-m})\wt{P}$, and a polynomial $\pi_{P} \in{\mathfrak K}[\delta]$ such that $\pi_{P}\cdot P \in \mm{m}{n}$.\;
Compute the polynomial $\pi=\lcm{\bar{\pi}, \pi_{P}, \pi_{R}} \in{\mathfrak K}[\delta]$. The system is $\pi$-flat.
\end{algorithm}
\begin{rem}
The $\pi$-flatness criterion is given in terms of properties of the matrices $B$ and $F$ that depend only on the larger ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$. If, in addition, the submodule of ${\mathfrak K}[\delta]$ generated by the powers of $\pi$ is torsion free (e.g. $\pi=\delta$, for which the equation $\delta f=0$ admits the unique solution $f=0$), then the system module $\Lambda$ over ${\mathfrak K}[\delta,\pi^{-1},{\frac{d}{dt}}]$ is torsion free, and the original module $\Lambda$ (over ${\mathfrak K}[\delta,{\frac{d}{dt}}]$) is free if, and only if, $\pi =1$. Note that computing a free basis of the system module directly over ${\mathfrak K}[\delta,{\frac{d}{dt}}]$ would require more elaborate tools such as those developed in \cite{Pommaret_99,Pommaret_01,Quadrat_07}.
\end{rem}
\begin{rem}\label{multdelay-rem}
Since the computations are made in the larger modules $\mmd{p}{q}$ for suitable $p$ and $q$, Theorem~\ref{pi_flat_thm} remains valid for systems depending on an arbitrary but finite number of delays, say $\delta_{1},\ldots, \delta_{s}$, by replacing the field ${\mathfrak K}(\delta)$, on which $\mmd{p}{q}$ is modeled, by the fraction field ${\mathfrak K}(\delta_{1},\ldots, \delta_{s})$ of the ring of multivariate polynomials in $\delta_{1},\ldots, \delta_{s}$. An example with two independent delays is presented in Example~\ref{string-ex-sec}.
\end{rem}
\begin{rem}
Our results also apply in the particular case of linear time-varying systems without delays. If ${\mathfrak K}$ is the field of meromorphic functions of time, ${\mathfrak K}[{\frac{d}{dt}}]$ is a Principal Ideal Domain and all the computations involved in Theorem~\ref{pi_flat_thm} and the associated algorithm remain in this ring, contrary to the case with delays. Therefore, the last step, consisting in finding the so-called liberation polynomial $\pi$, is unnecessary. Related results may be found in \cite{Levine_03,Trentelman_04,Pommaret_99}.
\end{rem}
\section{Examples}\label{ex-sec}
\subsection{Back to the Introductory Example}
Going back to Example~\ref{Example_1}, let us apply the previous algorithm to the time-delay system defined by (\ref{example_1}), for which a $\pi$-flat output is already known from Example \ref{ex_contd}. The first step, consisting in the computation of a Smith-Jacobson decomposition of the matrix $B$, has already been done, the left and right unimodular matrices $M\in \lsm{B}$ and $N\in \rsm{B}$ being given by (\ref{Smith_B}) with $M=U'$ and $N=V=1$. Then the matrix $F=(0\quad 1)MA$ of an implicit representation of (\ref{example_1}) is given by (\ref{implicit_form}). Its Smith-Jacobson decomposition $VF\wt{Q}=(1\quad 0)$ is given by (\ref{F_decomp})-(\ref{Smith_VF}), with $V=U_{F}=1$ and $\wt{Q}=V_{F}$, and $F$ has been seen to be hyper-regular in Example~\ref{example_smith_bis}. We easily check that $\bar{\pi}= (1-\delta)\delta$.
Going on with step 2, we set
$$Q=\wt{Q}\left(\begin{array}{c}0\\1\end{array}\right)=\left(\begin{array}{c}1\\\displaystyle \delta^{-1}(1-\delta)^{-1} \frac{1}{k(t)}{\frac{d}{dt}}\end{array}\right)$$
and verify that $\bar{\pi}\cdot Q \in \mm{2}{1}$. Moreover
$$R=N(1\quad 0)MAQ=\delta^{-2}(1-\delta)^{-1}\left(-\frac{\dot{k}(t)}{k^{2}(t)}{\frac{d}{dt}}+\frac{1}{k(t)}\frac{d^{2}}{dt^{2}}\right)$$
and we have $\pi_{R}=\bar{\pi}\delta$.
Note that, setting $x=Qy$ and $u=Ry$, we recover formulae (\ref{x_ex1}) and (\ref{u_ex1}) (or equivalently (\ref{u_bis_ex1})).
According to step 3 of the algorithm, we compute a Smith-Jacobson decomposition of $Q$: $\wt{P}QW=\left(\begin{array}{c}1\\ 0\end{array}\right)$, which provides $W=1$ and
$$\wt{P}=\left(\begin{array}{cc}1&0\\-\delta^{-1}(1-\delta)^{-1}\frac{1}{k(t)}{\frac{d}{dt}}&1\end{array}\right)$$
hence $P=\left(1\quad 0\right)$ and $y=Px=x_{1}$. Here, $\pi_{P}=1$.
Finally, the least common multiple of $(1-\delta)\delta$, $(1-\delta)\delta^{2}$ and 1 is $\pi=(1-\delta)\delta^{2}$. We have thus verified that Algorithm~\ref{algo_pi_flat} comes up with the same conclusion as Example~\ref{ex_contd}.
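The LCM step can be reproduced in a few lines of symbolic computation (a sketch in which $\delta$ is treated as a commutative indeterminate `d`, which is legitimate here since $\bar{\pi}$, $\pi_{P}$ and $\pi_{R}$ are pure $\delta$-polynomials):

```python
import sympy as sp

d = sp.symbols('d')  # d stands for the delay operator delta

pi_bar = (1 - d)*d      # from the decompositions of B and F
pi_R = (1 - d)*d**2     # from the operator R
pi_P = sp.Integer(1)    # from the operator P

# pi = lcm(pi_bar, pi_P, pi_R) = (1 - d)*d**2 up to a unit factor
pi = sp.lcm(pi_bar, sp.lcm(pi_R, pi_P))
print(sp.factor(pi))
```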
\subsection{A Multi-input Example}
Let us consider the following academic example of a multi-input delay system:
\small
\begin{equation}
\left\{\begin{array}{lll}
\dot{x}_{1}(t)+x_{1}^{(2)}(t)-2x_{1}^{(2)}(t-\tau)+x_{1}^{(3)}(t)+x_{1}^{(4)}(t-\tau)-x_{2}^{(3)}(t)+x_{2}^{(5)}(t)-x_{3}^{(2)}(t)\\
-\dot{x}_{4}(t)+x_{4}^{(3)}(t)=u_{1}(t)+\dot{u}_{1}(t)+u_{2}(t),\\
\dot{x}_{1}(t)+\dot{x}_{1}(t-\tau)-\dot{x}_{1}(t-2\tau)+x_{1}^{(2)}(t)+x_{1}^{(2)}(t-\tau)+x_{1}^{(2)}(t-2\tau)-x_{1}^{(3)}(t-\tau)\\
+2\dot{x}_{2}(t)+\dot{x}_{2}(t-\tau)-x_{2}^{(2)}(t)-x_{2}^{(4)}(t)+\dot{x}_{3}(t)+x_{3}^{(2)}(t-\tau)-x_{4}(t)-x_{4}(t-\tau)\\
-x_{4}^{(2)}(t)=u_{1}^{(2)}(t-\tau)+\dot{u}_{2}(t-\tau),\\
-x_{1}(t-2\tau)+\dot{x}_{1}(t-3\tau)+x_{1}^{(2)}(t-2\tau)-x_{2}(t-\tau)+\dot{x}_{2}(t-2\tau)+x_{2}^{(3)}(t-\tau)\\
-x_{3}(t-\tau)+\dot{x}_{3}(t-2\tau)+\dot{x}_{4}(t-\tau)=\dot{u}_{1}(t-2\tau)+u_{2}(t-2\tau),\\
\dot{x}_{1}(t-\tau)+\dot{x}_{2}(t)+\dot{x}_{3}(t)=\dot{u}_{1}(t)+u_{2}(t).\\
\end{array}\right.\label{sys_ex_2}
\end{equation}
\normalsize
Denoting by $x$ the state vector, $x=(x_{1},x_{2},x_{3},x_{4})^{T}$, by $u$ the input vector, $u=(u_{1},u_{2})^{T}$, and by $\delta$ the delay operator of length $\tau$, system (\ref{sys_ex_2}) can be rewritten in matrix form $Ax=Bu$, with $A\in\mm{4}{4}$ and $B\in\mm{4}{2}$ defined by
\begin{equation}
A=\left(\begin{array}{cccc}A_{1}&A_{2}&A_{3}&A_{4}\end{array}\right)
\end{equation}
with
\begin{equation}
\begin{array}{l}
A_{1}=\left( \begin{array}{c}
\frac{d}{dt}+\frac{{d}^{2}}{{dt}^{2}}\left( 1-2\delta \right) +\frac{{d}^{3}}{{dt}^{3}}+\frac{{d}^{4}}{{dt}^{4}}\delta\\
{\frac{d}{dt}}\left( 1+ \delta - \delta^{2} \right) + {\frac{{d}^{2}}{{dt}^{2}}}\left( 1+ \delta + \delta^{2} \right) - {\frac{{d}^{3}}{{dt}^{3}}}\delta
\\
- {\delta}^{2} + {\frac{d}{dt}}{\delta}^{3} + {\frac{{d}^{2}}{{dt}^{2}}{\delta}^{2}}
\\
{\frac{d}{dt}\delta}
\end{array}\right)
\\
A_{2}=\left( \begin{array}{c}
-{\frac{{d}^{3}}{{dt}^{3}}}+{\frac{{d}^{5}}{{dt}^{5}}}
\\
\,{\frac{d}{dt}}\left( 2 + \delta \right) - {\frac{{d}^{2}}{{dt}^{2}}} - {\frac{{d}^{4}}{{dt}^{4}}}
\\
- \delta + {\frac{d}{dt}{\delta}^{2}} + {\frac{{d}^{3}}{{dt}^{3}}\delta}
\\
{\frac{d}{dt}}
\end{array}\right)
\\
A_{3}=\left( \begin{array}{c}
-{\frac{{d}^{2}}{{dt}^{2}}}
\\
{\frac{d}{dt}}+{\frac{{d}^{2}}{{dt}^{2}}\delta}
\\
- \delta + {\frac{d}{dt}{\delta}^{2}}
\\
{\frac{d}{dt}}
\end{array}\right),
\quad
A_{4}=\left( \begin{array}{c}
-{\frac{d}{dt}}+{\frac{{d}^{3}}{{dt}^{3}}}
\\
-\left(1+ \delta \right) - {\frac{{d}^{2}}{{dt}^{2}}}
\\
{\frac{d}{dt}\delta}
\\
0
\end{array}\right)
\end{array}
\end{equation}
and
\begin{equation}
B=\left(\begin{array}{cc}
1+ \frac{d}{dt}
&
1
\\
\frac{{d}^{2}}{{dt}^{2}}\delta
&
\frac{d}{dt}\delta
\\
\frac{d}{dt}{\delta}^{2}
&
{\delta}^{2}
\\
\frac{d}{dt}
&
1
\end{array}\right).\nonumber
\end{equation}
We apply Algorithm~\ref{algo_pi_flat} to compute a $\pi$-flat output if it exists. We start with the Smith-Jacobson decomposition of $B$. By left multiplying $B$ by the following product of unimodular matrices $M=M_{4}M_{3}M_{2}M_{1} \in\uud{4}$, given by
\begin{equation}
\begin{array}{ll}
M_{1}=\left(\begin{array}{cccc}1&0&0&-1\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right),&
M_{2}=\left(\begin{array}{cccc}1&0&0&0\\-\frac{d^{2}}{dt^{2}}\delta&1&0&0\\-\frac{d}{dt}\,{\delta}^{2}&0&1&0\\-\frac{d}{dt}&0&0&1\end{array}\right),\\\\
M_{3}=\left(\begin{array}{cccc}1&0&0&0\\0&0&0&1\\0&1&0&0\\0&0&1&0\end{array}\right),&
M_{4}=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&-\frac{d}{dt}\delta&1&0\\0&-{\delta}^{2}&0&1\end{array}\right),
\end{array}\nonumber
\end{equation}
and by setting $N=I_{2}$, we obtain the Smith-Jacobson decomposition
\begin{equation}
MBN=\left(
\begin{array}{cccc}
1&0&0&-1\\-\frac{d}{dt}&0&0&\frac{d}{dt}+1\\0&1&0&-\frac{d}{dt}\delta\\0&0&1&-\delta^{2}\end{array}\right)
\left(\begin{array}{cc}
\frac{d}{dt}+1&1\\\frac{{d}^{2}}{{dt}^{2}}\delta&\frac{d}{dt}\delta\\\frac{d}{dt}{\delta}^{2}&{\delta}^{2}\\\frac{d}{dt}&1
\end{array}\right)=\left(
\begin{array}{cc}1&0\\0&1\\0&0\\0&0\end{array}\right),\nonumber
\end{equation}
thus showing that $B$ is hyper-regular. We then compute an implicit representation of (\ref{sys_ex_2}) by
$F=(0_{2,2}\quad I_{2})MA$, i.e.
\begin{equation}
F=\left(\begin{array}{cccc} F_{1}&F_{2}&F_{3}&F_{4}\end{array}\right)
\end{equation}
with
\begin{equation}
\begin{array}{l}
F_{1}=\left(
\begin{array}{c}
{\frac{d}{dt}}\left(1+ \delta-\delta^{2}\right) + {\frac{{d}^{2}}{{dt}^{2}}}\left( 1+ \delta \right) -{\frac{{d}^{3}}{{dt}^{3}}\delta}
\\
-{\delta}^{2} + {\frac{{d}^{2}}{{dt}^{2}}{\delta}^{2}}
\end{array}
\right)
\\
F_{2}=\left(
\begin{array}{c}
{\frac{d}{dt}}\left( 2 + \delta \right) - {\frac{{d}^{2}}{{{dt}}^{2}}} \left( 1 + \delta \right) - {\frac{{d}^{4}}{{dt}^{4}}}
\\
-\delta+{\frac{{d}^{3}}{{dt}^{3}}\delta}
\end{array}
\right)
\\
F_{3}=\left(
\begin{array}{c}
{\frac{d}{dt}}
\\
-\delta
\end{array}
\right),
\quad
F_{4}=\left(
\begin{array}{c}
-1-\delta-{\frac{{d}^{2}}{{dt}^{2}}}
\\
{\frac{d}{dt}\delta}
\end{array}
\right)
\end{array}
\end{equation}
to which corresponds the difference-differential system
\begin{equation}
\begin{array}{l}
\displaystyle \left( \dot{x}_{1}(t)+\dot{x}_{1}(t-\tau)-\dot{x}_{1}(t-2\tau)+\ddot{x}_{1}(t)+\ddot{x}_{1}(t-\tau)-x_{1}^{(3)}(t-\tau)\right) \\
\displaystyle \hspace{2cm} +\left( 2\dot{x}_{2}(t)+\dot{x}_{2}(t-\tau) -\ddot{x}_{2}(t)-\ddot{x}_{2}(t-\tau)-x_{2}^{(4)}(t)\right) \vspace{0.4em}\\
\displaystyle \hspace{3cm} + \dot{x}_{3}(t)- \left( x_{4}(t) + x_{4}(t-\tau) + \ddot{x}_{4}(t)\right) =0,\vspace{0.4em}\\
\displaystyle \left( -x_{1}(t-2\tau)+\ddot{x}_{1}(t-2\tau)\right) - \left( x_{2}(t-\tau)- x_{2}^{(3)}(t-\tau)\right) \vspace{0.4em}\\
\displaystyle \hspace{3cm} - x_{3}(t-\tau) + \dot{x}_{4}(t-\tau)=0.
\end{array}\nonumber
\end{equation}
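Since the coefficients in this example are constant, $\frac{d}{dt}$ and $\delta$ commute and can be treated as plain indeterminates, so the decomposition of $B$ at the start of this example is easy to double-check by machine. A sympy sketch (with the symbol $s$ standing for $\frac{d}{dt}$ and $d$ for $\delta$; this commutative shortcut is an assumption that would fail for time-varying coefficients):

```python
import sympy as sp

s, d = sp.symbols('s d')  # s stands for d/dt, d for the delay operator delta

M1 = sp.Matrix([[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
M2 = sp.Matrix([[1, 0, 0, 0], [-s**2*d, 1, 0, 0],
                [-s*d**2, 0, 1, 0], [-s, 0, 0, 1]])
M3 = sp.Matrix([[1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]])
M4 = sp.Matrix([[1, 0, 0, 0], [0, 1, 0, 0],
                [0, -s*d, 1, 0], [0, -d**2, 0, 1]])
M = (M4 * M3 * M2 * M1).expand()          # product of the unimodular factors

B = sp.Matrix([[s + 1, 1], [s**2*d, s*d], [s*d**2, d**2], [s, 1]])

print(M)                   # equals the first factor displayed above
print((M * B).expand())    # N = I_2, so this is MBN = (I_2  0_{2,2})^T
```

Both the reassembled $M=M_{4}M_{3}M_{2}M_{1}$ and the product $MB$ (with $N=I_{2}$) come out exactly as displayed.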
According to step 1, we compute a right Smith-Jacobson decomposition of $F$:
\begin{equation}
VF\wt{Q}=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\end{array}\right)
\end{equation}
where $V=1$ and
\begin{equation}\label{Qtilde-ex2}
\wt{Q}=\left(\begin{array}{cccc} 0&0&0&1\\0&0&1&0\\-\left(1+\delta\right)^{-1}{\frac{d}{dt}}&-{\delta}^{-1}\left(1+\delta\right)^{-1}\left(1+\delta+{\frac{{d}^{2}}{{dt}^{2}}}\right)&{\frac{{d}^{2}}{{dt}^{2}}}-1&{\frac{{d}^{3}}{{dt}^{3}}}+{\frac{{d}^{2}}{{dt}^{2}}}-\delta\\-\left(1+\delta\right)^{-1}&-\delta^{-1}\left(1+\delta\right)^{-1}{\frac{d}{{dt}}}&\left(1-{\frac{d}{{dt}}}\right)\frac{d}{dt}&\left({\frac{d}{dt}}+1-\delta\right)\frac{d}{dt}\end{array}\right),\nonumber
\end{equation}
thus showing that $F$ is hyper-regular and that $\bar{\pi}=\delta(1+\delta)$.
For the interested reader, $\wt{Q}$ is obtained as the product
$\wt{Q}=\wt{Q}_{1}\wt{Q}_{2}\wt{Q}_{3}$ of matrices of elementary actions:
\begin{equation}
\wt{Q}_{1}=\left(\begin{array}{cccc}0&0&0&1\\0&0&1&0\\{\frac{d}{dt}}&1&0&0\\1&0&0&0\end{array}\right),\nonumber
\end{equation}
\begin{equation}
\wt{Q}_{2}=\left(\begin{array}{cccc}\wt{q}_{1,1,2}&\wt{q}_{1,2,2}&\wt{q}_{1,3,2}&\wt{q}_{1,4,2}
\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right),\nonumber
\end{equation}
with
\begin{equation}
\begin{array}{l}
\wt{q}_{1,1,2} = -\left(1+\delta\right)^{-1}\\
\wt{q}_{1,2,2} = \left(1+\delta\right)^{-1}\frac{d}{dt}\\
\wt{q}_{1,3,2} = \left(1 + \delta \right)^{-1} \left( \left( 2+\delta \right) \frac{d}{dt} - \frac{d^{4}}{dt^{4}}\right) - \frac{d^{2}}{dt^{2}} \\
\wt{q}_{1,4,2} ={\frac{d}{dt}} - \delta^{2} \left(1 + \delta \right)^{-1}{\frac{d}{dt}} + \frac{d^{2}}{dt^{2}} - \left(1 + \delta \right)^{-1} \frac{d^{3}}{dt^{3}}\delta
\end{array}
\end{equation}
and
\begin{equation}
\wt{Q}_{3}=\left(\begin{array}{cccc}1&0&0&0\\0&-{\delta}^{-1}&-1+{\frac{{d}^{3}}{{dt}^{3}}}&{\frac{{d}^{2}}{{dt}^{2}}{\delta}}-{\delta}\\0&0&1&0\\0&0&0&1\end{array}\right).\nonumber
\end{equation}
According to step 2, we get
$$Q=\wt{Q}\left( \begin{array}{c} 0_{2,2}\\ I_{2}\end{array}\right)
=
\left( \begin{array}{cc}
0
&
1
\\
1
&
0
\\
{\frac{{d}^{2}}{{dt}^{2}}}-1
&
{\frac{{d}^{3}}{{dt}^{3}}}+{\frac{{d}^{2}}{{dt}^{2}}}-\delta
\\
\left(1-{\frac{d}{{dt}}}\right)\frac{d}{dt}&\left({\frac{d}{dt}}+1-\delta\right)\frac{d}{dt}
\end{array}\right)$$
\begin{equation}
R=N(I_{2}\quad 0_{2,2})MAQ=\left(\begin{array}{cc}-{\frac{{d}^{3}}{{dt}^{3}}}
&
{\frac{d}{dt}}-{\frac{{d}^{3}}{{{dt}}^{3}}}-{\frac{{d}^{4}}{{dt}^{4}}}
\\
\left({\frac{d}{dt}}+1\right)\frac{{d}^{3}}{{dt}^{3}}
&
-{\frac{{d}^{2}}{{dt}^{2}}}+{\frac{{d}^{3}}{{dt}^{3}}}+2\,{\frac{{d}^{4}}{{dt}^{4}}}+{\frac{{d}^{5}}{{dt}^{5}}}\end{array}\right),\nonumber
\end{equation}
with $\pi_{R}=1$.
From $x=Qy$ and $u=Ry$, we deduce the expressions
\begin{equation}\label{Q-ex2}
\left(\begin{array}{cc}x_{1}(t)\\x_{2}(t)\\x_{3}(t)\\x_{4}(t)\end{array}\right) = \left(\begin{array}{cc}
y_{2}(t)\\
y_{1}(t)\\-y_{1}(t)+y_{1}^{(2)}(t)-y_{2}(t-\tau)+y_{2}^{(2)}(t)+y_{2}^{(3)}(t)\\\dot{y}_{1}(t)-y_{1}^{(2)}(t)+\dot{y}_{2}(t)-\dot{y}_{2}(t-\tau)+y_{2}^{(2)}(t)
\end{array}\right)
\end{equation}
and
\begin{equation}\label{R-ex2}
\left(\begin{array}{cc}u_{1}(t)\\u_{2}(t)\end{array}\right) = \left(\begin{array}{cc}
-y_{1}^{(3)}(t)+\dot{y}_{2}(t)-y_{2}^{(3)}(t)-y_{2}^{(4)}(t)\\
y_{1}^{(3)}(t)+y_{1}^{(4)}(t)-y_{2}^{(2)}(t)+y_{2}^{(3)}(t)+2y_{2}^{(4)}(t)+y_{2}^{(5)}(t)
\end{array}\right).
\end{equation}
Next, according to step 3, we compute $\wt{P}\in\uud{4}$ and $W\in \uud{2}$ such that $\wt{P}QW=(I_{2}\quad 0_{2,2})^{T}$.
\begin{equation}
\wt{P}=\left(\begin{array}{cccc}0&1&0&0\\1&0&0&0\\-{\frac{{d}^{3}}{{dt}^{3}}}-{\frac{{d}^{2}}{{dt}^{2}}}+\delta&-{\frac{{d}^{2}}{{dt}^{2}}}+1&1&0\\{\frac{d\delta}{dt}}-{\frac{{d}^{2}}{{dt}^{2}}}-{\frac{d}{dt}}&-{\frac{d}{dt}}+{\frac{{d}^{2}}{{dt}^{2}}}&0&1\end{array}\right),\quad W=I_{2}.\nonumber
\end{equation}
Again, $\wt{P}$ is obtained as the product $\wt{P}=\wt{P}_{2}\wt{P}_{1}$ of elementary actions:
\begin{equation}
\wt{P}_{2}=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\\-{\frac{{d}^{2}}{{dt}^{2}}}+1&-{\frac{{d}^{3}}{{dt}^{3}}}-{\frac{{d}^{2}}{{dt}^{2}}}+\delta&1&0\\-\left(1-{\frac{d}{dt}}\right)\frac{d}{dt}&-\left({\frac{d}{dt}}+1-\delta\right)\frac{d}{dt}&0&1\end{array}\right) , \quad
\wt{P}_{1}=\left(\begin{array}{cccc}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{array}\right).
\nonumber
\end{equation}
Then
$$P=W\left(I_{2}\quad 0_{2,2}\right)\wt{P}= \left( \begin{array}{cccc}0&1&0&0\\1&0&0&0\end{array}\right),
$$
which, with $y=Px$, yields
\begin{equation}\label{P-ex2}
y_{1}=x_{2}, \quad y_{2}=x_{1}
\end{equation}
and $\pi_{P}=1$.
Then it is immediately seen that $\pi=\delta(1+\delta)$ and that the system is $\pi$-flat.
\begin{rem}
It is worth noting that the polynomial $\pi=\delta(1+\delta)$ only appears at the intermediate level of the computation of $\wt{Q}$, and no longer in the defining matrices $P$, $Q$ and $R$. Nevertheless, this means that the system module contains elements $z$ such that $z(t)=-z(t-\tau)$ for all $t$, i.e., satisfying $\pi z=0$, thus preventing this module from being free.
\end{rem}
\subsection{Vibrating String With an Interior Mass}\label{string-ex-sec}
As noted in Remark~\ref{multdelay-rem}, the computation of $\pi$-flat outputs based on Theorem \ref{pi_flat_thm} can be extended to linear systems with multiple delays. As an example, we consider the vibrating string system with two controls proposed in \cite{Mounier_98}, which can be modeled as a set of one-dimensional wave equations together with a second-order linear ordinary differential equation describing the motion of the mass. Using Mikusi\'{n}ski operational calculus (see for instance \cite{Fliess_96}), this infinite-dimensional system can be transformed into the time-delay system
\begin{equation}
\left\{\begin{array}{lll}
\psi_{1}(t)+\phi_{1}(t)-\psi_{2}(t)-\phi_{2}(t)=0,\\
\dot{\psi}_{1}(t)+\dot{\phi}_{1}(t)+\eta_{1}(\phi_{1}(t)-\psi_{1}(t))-\eta_{2}(\phi_{2}(t)-\psi_{2}(t))=0,\\
\phi_{1}(t-2\tau_{1})+\psi_{1}(t)=u_{1}(t-\tau_{1}),\\
\phi_{2}(t)+\psi_{2}(t-2\tau_{2})=u_{2}(t-\tau_{2}).\\
\end{array}\right.\label{sys_string_2control}
\end{equation}
where $\eta_{1}$ and $\eta_{2}$ are constant parameters. Denoting the state $x=(\psi_{1},\phi_{1},\psi_{2},\phi_{2})^{T}$, the control input $u=(u_{1},u_{2})$, and $\delta_{1}$, $\delta_{2}$ the delay operators of respective lengths $\tau_{1}$ and $\tau_{2}$, the system (\ref{sys_string_2control}) may be rewritten in the form $Ax=Bu$, with $A\in\mmdd{4}{4}$ and $B\in\mmdd{4}{2}$ given by
\begin{equation}
A=\left(
\begin{array}{cccc}
1&1&-1&-1\\{\frac{d}{dt}}+\eta_{1}&{\frac{d}{dt}}-\eta_{1}&\eta_{2}&-\eta_{2}\\1&\delta_{1}^{2}&0&0\\0&0&\delta_{2}^{2}&1\end{array}\right), \quad
B=\left(\begin{array}{cc}0&0\\0&0\\\delta_{1}&0\\0&\delta_{2}\end{array}\right).
\end{equation}
The computation of a Smith-Jacobson decomposition of $B$ is here straightforward: it suffices to exchange the last two rows of $B$ with the first two rows, and we get $MBN=\left(\begin{array}{c}I_{2}\\0_{2,2}\end{array}\right)$ with
\begin{equation}
M=\left(\begin{array}{cccc}
0&0&1&0\\0&0&0&1\\\delta_{1}^{-1}&0&0&0\\0&\delta_{2}^{-1}&0&0
\end{array}\right),\quad N=I_{2}.
\end{equation}
We then have $\pi_{M,N}=\delta_{1}\delta_{2}$.
Thus $B$ is hyper-regular and
\begin{equation}
F=(0_{2,2}\quad I_{2})MA=\left(\begin{array}{cccc}\delta_{1}^{-1}&\delta_{1}^{-1}&-\delta_{1}^{-1}&-\delta_{1}^{-1}\vspace{0.3 cm}\\
\delta_{2}^{-1}\left({\frac{d}{dt}}+\eta_{1}\right)&\delta_{2}^{-1}\left({\frac{d}{dt}}-\eta_{1}\right)&\delta_{2}^{-1}\eta_{2}&-\delta_{2}^{-1}\eta_{2}\end{array}\right).\nonumber
\end{equation}
A right Smith-Jacobson decomposition of $F$, namely $VF\wt{Q}=\left(I_{2},\; 0_{2,2}\right)$, is given by
\begin{equation}
\begin{array}{c}
V=\left(\begin{array}{cc}\delta_{1}&0\\\delta_{1}\left(-{\frac{d}{dt}}-\eta_{1}\right)&\delta_{2}\end{array}\right)
\\
\wt{Q}=\left(\begin{array}{cccc}1&\frac{1}{2\eta_{1}}&
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)&
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)\\
0&-\frac{1}{2\eta_{1}}&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)
\\0&0&1&0\\0&0&0&1\end{array}\right)
\end{array}
\end{equation}
with $\bar{\pi}=\delta_{1}\delta_{2}$,
and where $\wt{Q}$ is obtained as the product of elementary actions $\wt{Q}_{1}$ and $\wt{Q}_{2}$:
\begin{equation}
\begin{array}{c}
\wt{Q}_{1}=\left(\begin{array}{cccc}1&-1&1&1\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right),
\\
\wt{Q}_{2}=\left(\begin{array}{cccc}
1&0&0&0\\
0&-\frac{1}{2\eta_{1}}&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)\\
0&0&1&0\\0&0&0&1\end{array}\right)
\end{array}
\end{equation}
thus showing that $F$ is hyper-regular.
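This too can be checked with commuting symbols (constant coefficients). Note that the $(2,1)$ entry of $V$ must carry a factor $\delta_{1}$, since it multiplies the first row of $F$, whose entries are proportional to $\delta_{1}^{-1}$; with that factor the product $VF\wt{Q}$ reduces to $(I_{2}\quad 0_{2,2})$ as claimed:

```python
import sympy as sp

s, d1, d2, e1, e2 = sp.symbols('s delta1 delta2 eta1 eta2')

# the delta_1 factor in V's (2,1) entry clears the delta_1^{-1} of F's first row
V = sp.Matrix([[d1, 0], [d1*(-s - e1), d2]])
F = sp.diag(1/d1, 1/d2) * sp.Matrix([[1, 1, -1, -1],
                                     [s + e1, s - e1, e2, -e2]])

Qt1 = sp.Matrix([[1, -1, 1, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
Qt2 = sp.Matrix([[1, 0, 0, 0],
                 [0, -1/(2*e1), (s + e1 + e2)/(2*e1), (s + e1 - e2)/(2*e1)],
                 [0, 0, 1, 0], [0, 0, 0, 1]])
Qt = (Qt1 * Qt2).applyfunc(sp.cancel)

print(Qt)                                   # matches \wt{Q} displayed above
print((V * F * Qt).applyfunc(sp.cancel))    # = (I_2  0_{2,2})
```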
According to step 2 of the algorithm, we compute $Q = \wt{Q}\left( \begin{array}{c}0_{2,2}\\I_{2}\end{array}\right) $ and $R = N\left( I_{2},\; 0_{2,2}\right) MAQ$:
\begin{equation}\label{QandR-ex3}
\begin{array}{c}
Q =
\left(\begin{array}{cc}
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)
&
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)
\\
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)
&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)
\\
1&0
\\
0&1
\end{array}\right)
\\
R =\left(\begin{array}{cc}
R_{1,1}
&
R_{1,2}
\\
\delta_{2}^{2}
&
1
\end{array}\right)
\end{array}
\end{equation}
with
\begin{equation}
\begin{array}{l}
R_{1,1}=
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)+
\frac{\delta_{1}^{2}}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)
\\
R_{1,2}=
\frac{1}{2\eta_{1}}\left( -{\frac{d}{dt}}+ \left(\eta_{1}+\eta_{2}\right)\right)+
\frac{\delta_{1}^{2}}{2\eta_{1}}\left( {\frac{d}{dt}}+ \left(\eta_{1}-\eta_{2}\right)\right)
\end{array}
\end{equation}
We indeed have $\pi_{R}=1$.
Therefore, setting $x=Qy$ and $u=Ry$, we obtain the expressions
\begin{equation}
\left(\begin{array}{cc}\psi_{1}(t)\\\phi_{1}(t)\\\psi_{2}(t)\\\phi_{2}(t)\end{array}\right)
=\left(\begin{array}{cc}
\frac{1}{2\eta_{1}}\left(-\dot{y}_{1}(t)+(\eta_{1}-\eta_{2})y_{1}(t)-\dot{y}_{2}(t)+(\eta_{1}+\eta_{2})y_{2}(t)\right)
\\
\frac{1}{2\eta_{1}}\left(\dot{y}_{1}(t)+(\eta_{1}+\eta_{2})y_{1}(t)+\dot{y}_{2}(t)+(\eta_{1}-\eta_{2})y_{2}(t)\right)
\\
y_{1}(t)
\\
y_{2}(t)
\end{array}\right)
\end{equation}
and
\begin{equation}
\begin{array}{l}
u_{1}(t)=\frac{1}{2\eta_{1}}\bigl[ \dot{y}_{1}(t-\tau_{1})-\dot{y}_{1}(t+\tau_{1})+\dot{y}_{2}(t-\tau_{1})-\dot{y}_{2}(t+\tau_{1}) \bigr.
\\
\hspace{1cm}\bigl. +(\eta_{1}+\eta_{2})(y_{1}(t-\tau_{1})+y_{2}(t+\tau_{1}))+(\eta_{1}-\eta_{2})(y_{1}(t+\tau_{1})+y_{2}(t-\tau_{1}))\bigr]
\\
u_{2}(t)=y_{1}(t-\tau_{2})+y_{2}(t+\tau_{2}).
\end{array}
\end{equation}
Further, according to step 3, we compute $\wt{P}$ and $W$ of a Smith-Jacobson decomposition of $Q$, namely
$\wt{P}QW=\left( \begin{array}{c}I_{2}\\ 0_{2,2}\end{array}\right)$.
We find $W=I_{2}$ and $\wt{P}=\wt{P}_{4}\wt{P}_{3}\wt{P}_{2}\wt{P}_{1}$ with
\begin{equation}
\wt{P}_{1}=\left(\begin{array}{cccc}0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&1\end{array}\right),\quad
\wt{P}_{2}=\left(\begin{array}{cccc}
1&0&0&0\\
-\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+\left(\eta_{1}+\eta_{2}\right)\right)&1&0&0\\
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}-\left(\eta_{1}-\eta_{2}\right)\right)&0&1&0\\0&0&0&1\end{array}\right),\nonumber
\end{equation}
\begin{equation}
\wt{P}_{3}=\left(\begin{array}{cccc}1&0&0&0\\0&0&0&1\\0&0&1&0\\0&1&0&0\end{array}\right),\quad
\wt{P}_{4}=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\\
0&\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}-\left(\eta_{1}+\eta_{2}\right)\right)&1&0\\
0&-\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+\left(\eta_{1}-\eta_{2}\right)\right)&0&1\end{array}\right),\nonumber
\end{equation}
thus
\begin{equation}
\wt{P}=\left(\begin{array}{cccc}0&0&1&0\\0&0&0&1\\
1&0&\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}-\left(\eta_{1}-\eta_{2}\right)\right)&
\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}-\left(\eta_{1}+\eta_{2}\right)\right)\\
0&1&-\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+\left(\eta_{1}+\eta_{2}\right)\right)&
-\frac{1}{2\eta_{1}}\left( {\frac{d}{dt}}+\left(\eta_{1}-\eta_{2}\right)\right)
\end{array}\right)
\end{equation}
and
\begin{equation}
P=W\left(I_{2}\;\; 0_{2,2}\right)\wt{P}=\left(\begin{array}{cccc}0&0&1&0\\0&0&0&1\end{array}\right)
\end{equation}
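The step-3 identity $\wt{P}QW=\left(\begin{smallmatrix}I_{2}\\0_{2,2}\end{smallmatrix}\right)$ for this example involves only $\frac{d}{dt}$ (written $s$ below) and the parameters $\eta_{1},\eta_{2}$, so a sympy check is immediate:

```python
import sympy as sp

s, e1, e2 = sp.symbols('s eta1 eta2')   # s = d/dt

Q = sp.Matrix([[(-s + e1 - e2)/(2*e1), (-s + e1 + e2)/(2*e1)],
               [( s + e1 + e2)/(2*e1), ( s + e1 - e2)/(2*e1)],
               [1, 0],
               [0, 1]])

Pt = sp.Matrix([[0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 0, (s - e1 + e2)/(2*e1), (s - e1 - e2)/(2*e1)],
                [0, 1, -(s + e1 + e2)/(2*e1), -(s + e1 - e2)/(2*e1)]])

print((Pt * Q).applyfunc(sp.cancel))   # = (I_2  0_{2,2})^T, i.e. \wt{P}QW with W = I_2
```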
Finally, setting $y=Px$, we get $y_{1}=\psi_{2}$, $y_{2}=\phi_{2}$ and $\pi_{P}=1$.
Taking the least common multiple of $(1,1, \delta_{1}\delta_{2})$, we get $\pi=\delta_{1}\delta_{2}$. It is then immediately seen that $y_{1}=\psi_{2}$ and $y_{2}=\phi_{2}$ is a $\delta_{1}\delta_{2}$-flat output.
\begin{rem}
For multiple delays $\delta_{1}, \delta_{2},\ldots, \delta_{n}$, a flat output for which the polynomial $\pi$ is restricted to a monomial (i.e. of the form $\delta_{1}^{s_{1}}\delta_{2}^{s_{2}}\cdots \delta_{n}^{s_{n}}$) is called $\delta$-flat output in \cite{Rudolph_03}. This is the case here with $n=2$ and $s_1=s_2=1$.
\end{rem}
\begin{rem}
In \cite{Mounier_98}, a different solution $y_1=\delta_1\phi_1-u$, $y_2=\phi_1+\psi_1$ has been proposed.
\end{rem}
\section{Concluding Remarks}
In this paper, a direct characterization of $\pi$-flat outputs for linear time-varying, time-delay systems, with coefficients that are meromorphic functions of time is obtained, yielding a constructive algorithm for their computation. The proposed approach is based on the Smith-Jacobson decomposition of a polynomial matrix over the Principal Ideal Domain ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$, containing the original ring of multivariate polynomials ${\mathfrak K}[\delta, {\frac{d}{dt}}]$.
The fact that the computations are done in a larger ring, which is a principal ideal domain, makes them elementary, and their localization at the powers of a $\delta$-polynomial results from an easy calculation of the least common multiple of a finite set of polynomials in $\delta$. It is remarkable, however, that the $\pi$-flatness criterion only involves properties of the system matrices over the extended ring ${\mathfrak K}(\delta)[{\frac{d}{dt}}]$.
Several examples are presented to illustrate the simplicity of the approach.
Translating our algorithm into a computer algebra program, e.g.\ in Maple or Mathematica, should be relatively easy and will be the subject of future work.
\section*{Acknowledgements}
The authors are indebted to Hugues Mounier and Alban Quadrat for useful discussions.
\bigskip
\section{Introduction}
The potentiality of the quark model for hadron physics in the
low-energy regime first became manifest when it was used to classify
the known hadron states. Describing hadrons as $q\bar q$ or $qqq$
configurations, their quantum numbers were correctly explained. This
assignment was based on the comment by Gell-Mann~\cite{Ge64}
introducing the notion of quark: {\it ``It is assuming that the
lowest baryon configuration ($qqq$) gives just the representations
1, 8 and 10, that have been observed, while the lowest meson
configuration ($q \bar{q}$) similarly gives just 1~and~8''}. Since
then, it has been assumed that these are the only two configurations
involved in the description of physical hadrons. However, color
confinement is also compatible with other multiquark structures like
the tetraquark $qq\bar q\bq$ first introduced by Jaffe \cite{Ja77}.
During the last two decades, a number of experimental data have
appeared that are hardly accommodated in the traditional scheme defined
by Gell-Mann.
One of the first scenarios where the existence of bound multiquarks
was proposed was a system composed of two light quarks and two heavy
antiquarks ($nn\bar Q\bar Q$). These objects are called heavy-light
tetraquarks due to the similarity of their structure with the
heavy-light mesons ($n\bar Q$). Although they may be experimentally
difficult to produce and also to detect~\cite{Mo96}, it has been
argued that for sufficiently large heavy quark mass the tetraquark
should be bound~\cite{ZS86a,ZS86b}. The stability of a heavy-light
tetraquark relies on the heavy quark mass. The heavier the quark the
more effective the short-range Coulomb attraction to generate
binding, in such a way that it could play a decisive role to bind
the system. Moreover the $\bar Q \bar Q$ pair brings a small kinetic
energy into the system contributing to stabilize it.
Another interesting scenario where tetraquarks may be present corresponds to the scalar mesons, $J^{PC}=0^{++}$.
To obtain a positive parity state from a $q\bar q$
pair one needs at least one unit of orbital angular momentum. Apparently
this costs an energy around 0.5 GeV\footnote{This
effect can be estimated from the experimental $M(L=1)-M(L=0)$ energy differences:
$a_1(1260)-\rho(776)=484$ MeV, $f_1(1282)-\omega(782)=500$ MeV,
$h_1(1170)-\eta(548)=622$ MeV, $h_c(3526)-\eta_c(2980)=546$ MeV,
$\chi_{c1}(3511)-J/\Psi(3097)=414$ MeV, $\chi_{b1}(9893)-\Upsilon(9460)=433$ MeV,
being the average $M(L=1)-M(L=0)\approx500$ MeV.}, placing the lightest theoretical scalar
states around 1.3 GeV, far from the measured masses. However, a $qq\bar q\bq$ state
can couple to $J^{PC}=0^{++}$ without orbital excitation and, as a consequence, such states could coexist and mix
with $q\bar q$ states in this energy region. Furthermore, the color- and spin-dependent
interaction arising from the one-gluon exchange favors states where quarks and
antiquarks are separately antisymmetric in flavor. Thus, the energetically
favored flavor configuration for $qq\bar q\bq$ is $[(qq)_{\bar 3}(\bar q\bq)_3]$, a
flavor nonet, the lightest multiplet having spin 0. The most striking feature
of a scalar $qq\bar q\bq$ nonet in comparison with a $q\bar q$ nonet is a {\it reversed
mass spectrum} (see Figure~\ref*{fig-jaff}). One can see a degenerate isosinglet
and isotriplet at the top of the multiplet, an isosinglet at the bottom, and a
strange isodoublet in between. The resemblance to the experimental structure of the
light scalar mesons is striking.
\begin{figure}[h]
\centering
\caption{Quark content of a $q\bar q$ nonet (left) and a $qq\bar q\bq$ nonet (right).}
\includegraphics{fig1-mod.eps}
\label{fig-jaff}
\end{figure}
Four-quark states could also play an important role in the charm
sector. Since 2003 several open-charm mesons have been discovered: the
$D_{sJ}^*(2317)$, the $D_{sJ}(2460)$, and the
$D_0^*(2308)$.
this exclusive group either in the open-charm sector: the
$D_{sJ}(2860)$, or in the charmonium spectra: the $X(3872)$, the
$X(3940)$, the $Y(3940)$, the $Z(3940)$, the $Y(4260)$, and the
$Z(4430)$ among others~\cite{PDG08}. It seems nowadays unavoidable
to resort to higher order Fock space components to tame the
bewildering landscape arising with these new findings. Four-quark
components, either pure or mixed with $q\bar q$ states, constitute a
natural explanation for the proliferation of new meson
states~\cite{Jaf05a,Jaf05b,Jaf05c}. They would also account for the possible
existence of exotic mesons, such as stable $cc\bar n\bar n$
states, a topic of discussion since the early 1980s~\cite{Ade82a,Ade82b}.
All these scenarios suggest the study of $qq\bar q\bq$ structures and
their possible mixing with the $q\bar q$ systems to understand the role
played by multiquarks in the hadron spectra. The manuscript is
organized as follows. In Section~\ref*{tech} the variational
formalism necessary to evaluate four-quark states is discussed in
detail with special emphasis on the symmetry properties. In Section
\ref*{thres} the way to exploit discrete symmetries to determine the
four-quark decay threshold is discussed. In Section \ref*{prob} the
formalism to evaluate four-quark state probabilities is sketched.
In Section~\ref*{results} we discuss some examples of four-quark
states calculated using this formalism. Finally, we summarize in
Section~\ref*{summary} our conclusions.
\section{Four-quark spectra}
\label{tech}
\subsection{Solving the four-body system}
\label{mixo}
The four-quark ($qq\bar q\bq$) problem will be addressed by means of
the variational method, especially suited for studying low-lying
states. The nonrelativistic Hamiltonian will be given by
\begin{equation}
H=\sum_{i=1}^4\left(m_{i}+\frac{\vec p_{i}^{\,2}}{2m_{i}}\right)+\sum_{i<j=1}^4V(\vec r_{ij}) \, ,
\label{ham}
\end{equation}
where the potential $V(\vec r_{ij})$ corresponds to an arbitrary
two-body interaction. The extension of this formalism to consider
many-body interactions is discussed in~\cite{Vij07ba,Vij07bb}.
The variational wave function must include all possible flavor-spin-color channels
contributing to a given configuration. For each channel $s$, the wave function will be the tensor product of
a color ($\left|C_{s_1}\right>$), spin ($\left|S_{s_2}\right>$), flavor ($\left|F_{s_3}\right>$), and radial
($\left|R_{s_4}\right>$) component,
\begin{equation}
\label{efr}
\left| \phi _{s}\right>=\left|C_{s_1}\right>\otimes\left|
S_{s_2}\right>\otimes\left|F_{s_3}\right>\otimes\left|R_{s_4}\right> \, ,
\end{equation}
where $s\equiv\{s_1,s_2,s_3,s_4\}$. The procedure to construct the wave function will be detailed later
on. Once the spin, color and flavor parts are integrated out the coefficients of the radial wave function are
obtained by solving the system of linear equations
\begin{equation}
\label{funci1g}
\sum_{s} \sum_{i} \beta_{s_4}^{(i)}
\, [\langle R_{s_4'}^{(j)}|\,H\,|R_{s_4}^{(i)}
\rangle - E\,\langle
R_{s_4'}^{(j)}|R_{s_4}^{(i)}\rangle \delta_{s,s'} ] = 0
\qquad \qquad \forall \, j, s'\, ,
\end{equation}
where the eigenvalues are obtained by a minimization procedure.
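Equation~\eref{funci1g} is a generalized symmetric eigenvalue problem $\mathsf{H}\beta=E\,\mathsf{N}\beta$, with $\mathsf{N}$ the overlap matrix of the nonorthogonal radial basis. A minimal numerical sketch of the standard Cholesky reduction (the $2\times2$ matrices below are made-up toy numbers, not physical matrix elements):

```python
import numpy as np

# toy (made-up) variational matrices in a nonorthogonal 2-state basis
H = np.array([[1.0, 0.4],      # <R_j|H|R_i>
              [0.4, 2.0]])
N = np.array([[1.0, 0.3],      # overlap <R_j|R_i>
              [0.3, 1.0]])

# reduce H beta = E N beta to a standard problem via N = L L^T
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
E, c = np.linalg.eigh(Linv @ H @ Linv.T)   # ascending eigenvalues
beta = Linv.T @ c                          # variational coefficients

print(E[0])                                # lowest (ground-state) energy
```

The lowest eigenvalue is the variational upper bound on the ground-state energy; in the actual calculation the minimization over the nonlinear parameters of the radial basis is wrapped around this step.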
\subsection{Four-body wave function}
\label{tetwav}
\begin{figure}[tb]
\begin{center}
\caption[Tetraquark Jacobi coordinates.]{Tetraquark Jacobi
coordinates, see Equations~\eref{coo} for definitions.} \label{coor}
\epsfig{file=fig2.eps}
\end{center}
\end{figure}
For the description of the $q_1q_2\bar{q_3}\bar{q_4}$ wave function
we consider the four-body Jacobi coordinates depicted in
Figure~\ref*{coor}:
\begin{eqnarray}
\label{coo}
\vec{x} &=&\vec{r}_{1}-\vec{r}_{2} \\ \nonumber
\vec{y} &=&\vec{r}_{3}-\vec{r}_{4} \\ \nonumber
\vec{z} &=&\frac{m_{1}\vec{r}_{1}+m_{2}\vec{r}_{2}}{m_{1}+m_{2}}-\frac{m_{3}\vec{r}_{3}+m_{4}\vec{r
}_{4}}{m_{3}+m_{4}}\\ \nonumber
\vec{R} &=&\frac{\sum m_{i}\vec{r}_{i}}{\sum m_{i}}\nonumber \, ,
\end{eqnarray}
\noindent where indices $1$ and $2$ will stand for quarks and $3$ and $4$ for antiquarks.
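For completeness, the coordinate change \eref{coo} in executable form (an illustrative helper, not taken from any existing code):

```python
import numpy as np

def jacobi_coordinates(r, m):
    """Jacobi coordinates of the equations above: r is a (4, 3) array of
    positions (rows 1,2 = quarks, rows 3,4 = antiquarks), m the four masses."""
    r = np.asarray(r, dtype=float)
    m = np.asarray(m, dtype=float)
    x = r[0] - r[1]
    y = r[2] - r[3]
    z = (m[0]*r[0] + m[1]*r[1])/(m[0] + m[1]) \
        - (m[2]*r[2] + m[3]*r[3])/(m[2] + m[3])
    R = (m[:, None]*r).sum(axis=0)/m.sum()
    return x, y, z, R
```

For equal masses and the symmetric configuration $r_{1,2}=(\pm1,0,0)$, $r_{3,4}=(0,\pm1,0)$, this gives $\vec x=(2,0,0)$, $\vec y=(0,2,0)$ and vanishing $\vec z$, $\vec R$.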
Let us now describe each component of the variational wave function \eref{efr} separately.
The total wave function should have well-defined permutation properties under
the exchange of identical particles: quarks or antiquarks. The Pauli principle
must be satisfied for each subsystem of identical
particles\footnote{One should have in mind that if flavor $SU(3)$ symmetry
is assumed, $u$, $d$, and $s$ quarks are identical particles.}.
This imposes restrictions on the quantum numbers of the basis states.
\subsection{Color space}
There are three different ways of coupling two quarks and two antiquarks into a colorless state:
\begin{subequations}
\begin{eqnarray}
\label{eq1a}
[(q_1q_2)(\bar q_3\bar q_4)]&\equiv&\{|\bar 3_{12}3_{34}\rangle,|6_{12}\bar 6_{34}\rangle\}\equiv\{|\bar 33\rangle_c^{12},
|6\bar 6\rangle_c^{12}\}\\
\label{eq1b}
[(q_1\bar q_3)(q_2\bar q_4)]&\equiv&\{|1_{13}1_{24}\rangle,|8_{13} 8_{24}\rangle\}\equiv\{|11\rangle_c,|88\rangle_c\}\\
\label{eq1c}
[(q_1\bar q_4)(q_2\bar q_3)]&\equiv&\{|1_{14}1_{23}\rangle,|8_{14} 8_{23}\rangle\}\equiv\{|1'1'\rangle_c,|8'8'\rangle_c\}\,,
\end{eqnarray}
\label{eq1}
\end{subequations}
\noindent all three of them being orthonormal bases. Each coupling
scheme allows one to define a color basis in which the four-quark problem
can be solved. Only two of these states have well-defined
permutation properties: $|\bar 33\rangle_c^{12}$ is antisymmetric under the exchange
of both quarks and antiquarks, $(AA)$, and $|6\bar 6\rangle_c^{12}$ is symmetric,
$(SS)$. Therefore, the basis of Equation~(\ref*{eq1a}) is the most
suitable one for dealing with the Pauli principle. The other two,
Equations~(\ref*{eq1b}) and~(\ref*{eq1c}), are hybrid bases
containing singlet-singlet (physical) and octet-octet (hidden-color)
vectors. The three bases are related through~\cite{De63a,De63b}:
\begin{eqnarray}
\label{qq13}
|11\rangle_c&=&\sqrt{1\over3}\,|\bar 33\rangle_c^{12}+\sqrt{2\over3}\,|6\bar 6\rangle_c^{12}\\ \nonumber
|88\rangle_c&=&-\sqrt{2\over3}\,|\bar 33\rangle_c^{12}+\sqrt{1\over3}\,|6\bar 6\rangle_c^{12} \, ,
\end{eqnarray}
and
\begin{eqnarray}
\label{qq14}
|1'1'\rangle_c&=&-\sqrt{1\over3}|\bar 33\rangle_c^{12}+\sqrt{2\over3}|6\bar 6\rangle_c^{12}\\ \nonumber
|8'8'\rangle_c&=&\sqrt{2\over3}|\bar 33\rangle_c^{12}+\sqrt{1\over3}|6\bar 6\rangle_c^{12} \, .
\end{eqnarray}
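Both recouplings are orthogonal transformations of the two-dimensional color space, which is quickly checked numerically; the same few lines also give the overlap of the two physical singlets, $\langle 11|1'1'\rangle_c=-\tfrac{1}{3}+\tfrac{2}{3}=\tfrac{1}{3}$:

```python
import numpy as np

a, b = np.sqrt(1/3), np.sqrt(2/3)
U13 = np.array([[ a, b],     # (|11>,  |88>)   in the (|3b3>, |6 6b>) basis
                [-b, a]])
U14 = np.array([[-a, b],     # (|1'1'>,|8'8'>) in the (|3b3>, |6 6b>) basis
                [ b, a]])

print(np.allclose(U13 @ U13.T, np.eye(2)))   # True: orthonormal bases
print(np.allclose(U14 @ U14.T, np.eye(2)))   # True
print(U13[0] @ U14[0])                       # <11|1'1'> = 1/3
```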
\begin{table}[h!]
\caption{Color matrix elements.}
\label{coma}
\begin{center}
\begin{tabular}{|c|cccccc|}
\hline
$\hat{O}$ &$(\vec\lambda_1\cdot\vec\lambda_2)$&$(\vec\lambda_3\cdot\vec\lambda_4)$&$(\vec\lambda_1\cdot\vec\lambda_3)$
&$(\vec\lambda_2\cdot\vec\lambda_4)$&$(\vec\lambda_1\cdot\vec\lambda_4)$&$(\vec\lambda_2\cdot\vec\lambda_3)$\\
\hline
$^{12}_c\langle\bar 33| \hat{O}|\bar 33\rangle_c^{12}$&$-8/3$&$-8/3$&$-4/3$&$-4/3$&$-4/3$&$-4/3$\\
$^{12}_c\langle 6\bar 6| \hat{O}|6\bar 6\rangle_c^{12}$&$4/3$&$4/3$&$-10/3$&$-10/3$&$-10/3$&$-10/3$\\
$^{12}_c\langle\bar 33| \hat{O}|6\bar 6\rangle_c^{12}$&0&0&$-2\sqrt{2}$&$-2\sqrt{2}$&$2\sqrt{2}$&$2\sqrt{2}$\\
\hline
\end{tabular}
\end{center}
\end{table}
To evaluate color matrix elements the two-body color
operators are introduced in the same manner as in angular momentum theory,
\begin{equation}
\label{momen}
\vec\lambda_i\cdot \vec\lambda_j={1\over2}\Big(\vec\lambda^2_{ij}-\vec\lambda_i^2-\vec\lambda_j^2\Big)\, ,
\end{equation}
where $\vec\lambda_i$ are the $SU(3)_c$ Gell-Mann matrices acting
on quark $i$, and $\vec\lambda_{ij}^2$ is the Casimir operator. For an
irreducible representation $\psi(\lambda\mu)$, the eigenvalue of the
Casimir operator is given by:
\begin{equation}
\label{eige}
\vec\lambda_{ij}^2\psi(\lambda\mu)={4\over3}\Big(\lambda^2+\mu^2+\lambda\mu+3\lambda+3\mu\Big)\psi(\lambda\mu)\,.
\end{equation}
In the color space a quark is described by $3_c=(10)$ and an antiquark by $\bar
3_c=(01)$, so
\begin{eqnarray}
\label{ant}
\vec\lambda_i^2\psi(10)&=&\vec\lambda_i^2[3_c]={16\over3}[3_c]={16\over3}\psi(10)\, , \\ \nonumber
\vec\lambda_i^2\psi(01)&=&\vec\lambda_i^2[\bar 3_c]={16\over3}[\bar 3_c]={16\over3}\psi(01)\, .
\end{eqnarray}
Two quarks in a symmetric state, $6$ or $\bar 6$, have $(\lambda\mu)=(20)$ or $(02)$ and therefore
\begin{equation}
\label{ant2}
\vec\lambda_i^2\psi(20)=\vec\lambda_i^2[6_c]=\vec\lambda_i^2[\bar 6_c]={40\over3}\psi(20),
\end{equation}
while two quarks in an antisymmetric state, $3$ or $\bar 3$, have
$(\lambda\mu)=(10)$ or $(01)$, with the same Casimir eigenvalue as in Equation
\eref{ant}. Using these expressions, the color matrix elements
summarized in Table~\ref*{coma} may be easily evaluated.
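The diagonal entries of Table~\ref*{coma} follow mechanically from Equations~\eref{momen}--\eref{ant2}. In the sketch below, the cross-pair entries are obtained from the additional observation (not spelled out above) that the total four-body state is a color singlet, so $\sum_{i<j}\vec\lambda_i\cdot\vec\lambda_j=-\frac{1}{2}\sum_i\vec\lambda_i^2$:

```python
from fractions import Fraction as F

def casimir(lam, mu):
    """Eigenvalue of the SU(3) Casimir operator, Equation (eige)."""
    return F(4, 3)*(lam**2 + mu**2 + lam*mu + 3*lam + 3*mu)

C3 = casimir(1, 0)            # quark (10) or antiquark (01): 16/3
C_A = casimir(0, 1)           # antisymmetric pair, 3 or 3bar
C_S = casimir(2, 0)           # symmetric pair, 6 or 6bar

lam12_A = (C_A - 2*C3)/2      # <3b3|lam1.lam2|3b3> = -8/3
lam12_S = (C_S - 2*C3)/2      # <6 6b|lam1.lam2|6 6b> = 4/3

# colour-singlet constraint: the six pairwise terms add up to -2*C3,
# and the four quark-antiquark cross terms share the remainder equally
lam13_A = (-2*C3 - 2*lam12_A)/4   # -4/3
lam13_S = (-2*C3 - 2*lam12_S)/4   # -10/3

print(lam12_A, lam12_S, lam13_A, lam13_S)
```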
\subsection{Spin space}
\label{subspin}
The spin part of the wave function can be written as
\begin{equation}
\left[(s_1s_2)_{S_{12}}(s_3s_4)_{S_{34}}\right]_{S}\equiv|S_{12}S_{34}\rangle^{12}_s
\end{equation}
where the spin of the two quarks (antiquarks) is coupled to $S_{12}$ ($S_{34}$).
Two identical spin-$1/2$ fermions in an $S=0$ state are antisymmetric $(A)$ under permutations, while those coupled to
$S=1$ are symmetric $(S)$.
In Table~\ref*{spin} we have included the corresponding vectors for each total
spin together with their symmetry properties.
\begin{table}[h!]
\caption[Spin basis vectors.]{Spin basis vectors for all possible
total spin states $(S)$. The ``Symmetry'' column stands for the
symmetry properties of the pair of quarks and antiquarks.}
\label{spin}
\begin{center}
\begin{tabular}{|c|cc|}
\hline
$S$&Vector&Symmetry\\
\hline
0&$|00\rangle^{12}_s$&AA\\
&$|11\rangle^{12}_s$&SS\\
\hline
&$|01\rangle^{12}_s$&AS\\
1&$|10\rangle^{12}_s$&SA\\
&$|11\rangle^{12}_s$&SS\\
\hline
2&$|11\rangle^{12}_s$&SS\\
\hline
\end{tabular}
\end{center}
\end{table}
Using this notation, it is straightforward to evaluate the four-body
spin matrix elements,
\begin{equation}
_s^{12}\langle S_{12}S_{34}|\vec\sigma_i\cdot\vec\sigma_j|S_{12}'S_{34}'\rangle_s^{12}=
\Big[2S_{ij}(S_{ij}+1)-3\Big]\delta_{S_{12},S_{12}'}\delta_{S_{34},S_{34}'}\delta_{S,S'} \, ,
\end{equation}
for $(ij)=(12)$ or (34) and where $\vec\sigma_i$ is the spin operator acting over particle $i$.
To calculate the other spin operators we should reorder the spin wave function~\cite{Va88}
\begin{eqnarray}
\Big[(s_1s_2)_{S_{12}}(s_3s_4)_{S_{34}}\Big]_{S}
&=&\sum_{k,l}(-1)^{2S_{12}+s_2+2s_3+s_4+l+S}\sqrt{2k+1}\sqrt{2l+1}\sqrt{2S_{12}+1}\sqrt{2S_{34}+1} \nonumber \\
&&\left\{\begin{array}{ccc}S_{12}&s_3&k\\s_4&S&S_{34}\end{array}\right\}
\left\{\begin{array}{ccc}s_2&s_1&S_{12}\\s_3&k&l\end{array}\right\}\Bigg[\Big[(s_1s_3)_ls_2\Big]_ks_4\Bigg]_{S}.
\end{eqnarray}
Now one can calculate the matrix element for the case $s_1=s_2=s_3=s_4={1\over2}$,
\begin{eqnarray}
&&_s^{12}\langle S_{12}S_{34}|\vec\sigma_1\cdot\vec\sigma_3|S_{12}'S_{34}'\rangle_s^{12}=\\ \nonumber
&=&\sqrt{2S_{12}+1}\sqrt{2S'_{12}+1}\sqrt{2S_{34}+1}\sqrt{2S'_{34}+1}\sum_{k,\,l}(2k+1)(2l+1)\big[2l(l+1)-3\big]\times\\ \nonumber
&\times&\left\{\begin{array}{ccc}S_{12}&1/2&k\\1/2&S&S_{34}\end{array}\right\}
\left\{\begin{array}{ccc}S'_{12}&1/2&k\\1/2&S&S'_{34}\end{array}\right\}
\left\{\begin{array}{ccc}1/2&1/2&S_{12}\\1/2&k&l\end{array}\right\}
\left\{\begin{array}{ccc}1/2&1/2&S'_{12}\\1/2&k&l\end{array}\right\}.
\end{eqnarray}
The same can be done for the other spin operators, $(\vec\sigma_1\cdot\vec\sigma_4)$, $(\vec\sigma_2\cdot\vec\sigma_4)$
and $(\vec\sigma_2\cdot\vec\sigma_3)$, using the expressions given above. The results are summarized in Table~\ref*{tabs}.
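The recoupling formula above can be checked against Table~\ref*{tabs} with sympy's Wigner $6j$ symbols; the sketch below transcribes it verbatim for $\vec\sigma_1\cdot\vec\sigma_3$ (the try/except simply discards triads that violate the triangle conditions):

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)

def w6j(*j):
    """Wigner 6j symbol, returning 0 when a triangle condition fails."""
    try:
        return wigner_6j(*j)
    except ValueError:
        return 0

def sigma1_dot_sigma3(S12, S34, S12p, S34p, S):
    """<S12 S34|sigma_1.sigma_3|S12' S34'>_S for four spin-1/2 particles,
    following the recoupling formula displayed in the text."""
    pref = sqrt((2*S12 + 1)*(2*S12p + 1)*(2*S34 + 1)*(2*S34p + 1))
    tot = 0
    for k2 in range(4):                  # k = 0, 1/2, 1, 3/2
        for l2 in range(4):              # l = 0, 1/2, 1, 3/2
            k, l = Rational(k2, 2), Rational(l2, 2)
            tot += ((2*k + 1)*(2*l + 1)*(2*l*(l + 1) - 3)
                    * w6j(S12, half, k, half, S, S34)
                    * w6j(S12p, half, k, half, S, S34p)
                    * w6j(half, half, S12, half, k, l)
                    * w6j(half, half, S12p, half, k, l))
    return simplify(pref*tot)

print(sigma1_dot_sigma3(0, 0, 1, 1, 0))  # -sqrt(3), as in Table (tabs)
print(sigma1_dot_sigma3(1, 1, 1, 1, 0))  # -2
```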
\begin{table}[h!!]
\caption{Spin matrix elements.}
\label{tabs}
\begin{center}
\begin{tabular}{|c|c|cccccc|}
\hline
$S$&&$(\vec\sigma_1\cdot\vec\sigma_2)$&$(\vec\sigma_3\cdot\vec\sigma_4)$&$(\vec\sigma_1\cdot\vec\sigma_3)$
&$(\vec\sigma_2\cdot\vec\sigma_4)$ &$(\vec\sigma_1\cdot\vec\sigma_4)$&$(\vec\sigma_2\cdot\vec\sigma_3)$\\
\hline
&$^{12}_s\langle 00|\hat{O}|00\rangle^{12}_s$&$-3$&$-3$&0&0&0&0\\
0&$^{12}_s\langle 11|\hat{O}|11\rangle^{12}_s$&$1$&$1$&$-2$&$-2$&$-2$&$-2$\\
&$^{12}_s\langle 00|\hat{O}|11\rangle^{12}_s$&0&0&$-\sqrt{3}$&$-\sqrt{3}$&$\sqrt{3}$&$\sqrt{3}$\\
\hline
&$^{12}_s\langle 01|\hat{O}|01\rangle^{12}_s$&$-3$&$1$&0&0&0&0\\
&$^{12}_s\langle 10|\hat{O}|10\rangle^{12}_s$&$1$&$-3$&0&0&0&0\\
1&$^{12}_s\langle 11|\hat{O}|11\rangle^{12}_s$&$1$&$1$&$-1$&$-1$&$-1$&$-1$\\
&$^{12}_s\langle 01|\hat{O}|10\rangle^{12}_s$&0&0&1&1&$-1$&$-1$\\
&$^{12}_s\langle 10|\hat{O}|11\rangle^{12}_s$&0&0&$\sqrt{2}$&$-\sqrt{2}$&$-\sqrt{2}$&$\sqrt{2}$\\
&$^{12}_s\langle 01|\hat{O}|11\rangle^{12}_s$&0&0&$-\sqrt{2}$&$\sqrt{2}$&$-\sqrt{2}$&$\sqrt{2}$\\
\hline
2&$^{12}_s\langle 11|\hat{O}|11\rangle^{12}_s$&1&1&1&1&1&1\\
\hline
\end{tabular}
\end{center}
\end{table}
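The entries of Table~\ref*{tabs} can be cross-checked numerically by building the operators $\vec\sigma_i\cdot\vec\sigma_j$ on the full 16-dimensional four-spin space with Kronecker products, without any angular momentum recoupling. The following is a minimal Python sketch (only \texttt{numpy} is assumed); it reproduces the first row of the table, i.e., the $S=0$ matrix elements in the $|S_{12}=0\,S_{34}=0\rangle$ state:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma_dot(i, j):
    """sigma_i . sigma_j acting on the four-particle (16-dim) spin space."""
    total = np.zeros((16, 16), dtype=complex)
    for s in (sx, sy, sz):
        ops = [I2] * 4
        ops[i] = s
        ops[j] = s
        m = ops[0]
        for o in ops[1:]:
            m = np.kron(m, o)
        total += m
    return total

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron4(a, b, c, d):
    return np.kron(np.kron(a, b), np.kron(c, d))

# |S12=0>|S34=0> coupled to total S=0 (first row of the table)
singlet12_singlet34 = 0.5 * (kron4(up, dn, up, dn) - kron4(up, dn, dn, up)
                             - kron4(dn, up, up, dn) + kron4(dn, up, dn, up))

def elem(bra, op, ket):
    return np.real_if_close(bra.conj() @ op @ ket)

print(elem(singlet12_singlet34, sigma_dot(0, 1), singlet12_singlet34))  # -3
print(elem(singlet12_singlet34, sigma_dot(2, 3), singlet12_singlet34))  # -3
print(elem(singlet12_singlet34, sigma_dot(0, 2), singlet12_singlet34))  #  0
```

The remaining rows follow in the same way once the corresponding coupled states are constructed with Clebsch-Gordan coefficients.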
\begin{table}[h!!]
\caption{Pauli-based classification of four-quark states.
$\checkmark$ indicates that the quark/antiquark pair requires the
application of the Pauli principle, the notation being (pair of
quarks, pair of antiquarks). The third and fourth columns contain
the recoupling corresponding to bases~\eref{eq1b} and~\eref{eq1c}.}
\label{tab_clasi}
\begin{center}
\begin{tabular}{|c|c||cc|}
\hline
(12)(34) & Pauli &(13)(24) & (14)(23) \\
\hline
$(nn)(\bar n\bar n)$ & $(\checkmark,\checkmark)$ & $(n\bar n)(n\bar n)$ & $(n\bar n)(n\bar n)$\\
$(nn)(\bar n\bar Q)$ & $(\checkmark,X)$ & $(n\bar n)(n\bar Q)$ & $(n\bar Q)(n\bar n)$\\
$(nn)(\bar Q_1\bar Q_2)$ & $(\checkmark,\checkmark$ if $\bar Q_1=\bar Q_2$)
& $(n\bar Q_1)(n\bar Q_2)$ & $(n\bar Q_2)(n\bar Q_1)$\\
$(nQ_1)(\bar n\bar Q_2)$ & $(X,X$) & $(n\bar n)(Q_1\bar Q_2)$ & $(n\bar Q_2)(Q_1\bar n)$\\
$(nQ_1)(\bar Q_2\bar Q_3)$ & $(X,\checkmark$ if $\bar Q_2=\bar Q_3$)
& $(n\bar Q_2)(Q_1\bar Q_3)$ & $(n\bar Q_3)(Q_1\bar Q_2)$\\
$(Q_1Q_2)(\bar Q_3\bar Q_4)$ & $(\checkmark$ if $Q_1=Q_2$, $\checkmark$ if $\bar Q_3=\bar Q_4$)
& $(Q_1\bar Q_3)(Q_2\bar Q_4)$ & $(Q_1\bar Q_4)(Q_2\bar Q_3)$\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Flavor space}
\label{flwave}
Before discussing the flavor part of the wave function one must
specify the required flavor symmetry, $SU(2)$ or $SU(3)$.
In the former case, $u$ and $d$ quarks are identical,
whereas in the latter $u$, $d$, and $s$ are indistinguishable.
In the following, $n$ will stand for the light $u$ and $d$ quarks and $Q$
for the heavy ones, $c$ or $b$. $s$ quarks will be considered {\em heavy}
if flavor $SU(2)$ is assumed and {\em light} otherwise.
For the flavor part one finds several different possible four-quark
states depending on the number of light quarks. They can be
classified according to whether one of the pairs is made of
indistinguishable quarks (in which case the Pauli principle must
be imposed) or not. In the following subsections we will discuss the
important role played by the Pauli principle in the description of
the properties of four-quark states. This classification is illustrated
in Table~\ref*{tab_clasi}. The symmetry properties of the flavor wave
function are summarized in Table~\ref*{sim_flav}.
\begin{table}[h!!]
\caption{Symmetry properties of the flavor wave function under the
exchange of quarks (the same holds for antiquarks). $^{\dagger}$ If
flavor $SU(3)$ is assumed, symmetric and antisymmetric flavor wave
functions with $I=1/2$ can be constructed, i.e., $(us\pm
su)/\sqrt{2}$.} \label{sim_flav}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Flavor&Symmetry\\
\hline
$nn$ $I=0$ & A \\
$nn$ $I=1$ & S \\
$nn$ $I=1/2^{\dagger}$ & S/A \\
$QQ$ $I=0$ & S \\
\hline
\end{tabular}
\end{center}
\end{table}
The flavor $SU(2)$ matrix elements can be evaluated by means of the
same relations shown in Section~\ref*{subspin}. For those
corresponding to flavor $SU(3)$ the procedure will require the
explicit construction of the flavor wave function by means of the
$SU(3)$ isoscalar factors given in~\cite{De63a,De63b}\footnote{Note that there
is no universal agreement on the phase convention for the
isoscalar factors, so mixing tables from different authors
should be done with care.}. As an example we evaluate some of the
flavor matrix elements needed for the description of heavy-light
tetraquarks. They can be obtained using the matrix expression of
$\lambda^a$,
\begin{eqnarray}
&&\lambda^1=\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&0\end{array}\right)\,\,\,\,\,\,\,\,
\lambda^2=\left(\begin{array}{ccc}0&-i&0\\i&0&0\\0&0&0\end{array}\right)\,\,\,\,\,\,\,\,
\lambda^3=\left(\begin{array}{ccc}1&0&0\\0&-1&0\\0&0&0\end{array}\right)\\ \nonumber
&&\lambda^4=\left(\begin{array}{ccc}0&0&1\\0&0&0\\1&0&0\end{array}\right)\,\,\,\,\,\,\,\,
\lambda^5=\left(\begin{array}{ccc}0&0&-i\\0&0&0\\i&0&0\end{array}\right)\,\,\,\,\,\,\,\,
\lambda^6=\left(\begin{array}{ccc}0&0&0\\0&0&1\\0&1&0\end{array}\right)\\ \nonumber
&&\lambda^7=\left(\begin{array}{ccc}0&0&0\\0&0&-i\\0&i&0\end{array}\right)\,\,\,\,\,\,\,\,
\lambda^8=\left(\begin{array}{ccc}{1\over\sqrt{3}}&0&0\\0&{1\over\sqrt{3}}&0\\0&0&{-2\over\sqrt{3}}\end{array}\right),
\end{eqnarray}
where, following the same convention, quarks and antiquarks are given by,
\begin{eqnarray}
&u&=\bar u=(1,0,0)\\ \nonumber
&d&=\bar d=(0,1,0)\\ \nonumber
&s&=\bar s=(0,0,1).
\end{eqnarray}
The tetraquark flavor wave function corresponding to two light quarks coupled to total isospin
$I$ with $I_z=0$ and two heavy antiquarks can be written as
\begin{equation}
\left|\psi\right>={1\over\sqrt{2}}[ud+(-1)^{I+1}du][\bar s\bar s]\,.
\end{equation}
A typical flavor operator is
\begin{equation}
\label{opeI}
\vec\tau_i\cdot\vec\tau_j=\sum_{a=1}^3\lambda_i^a\lambda_j^a\, ,
\end{equation}
where $\lambda_i^a$ are the $SU(3)$ flavor matrices defined above and $\tau_i$ are the isospin
Pauli matrices, both acting on quark $i$. So the same expression obtained
for the spin operators holds here:
\begin{equation}
\left< \psi\Big|\sum_{a=1}^3\lambda^a_1\lambda^a_2\Big|\psi\right>=
\left\{\begin{array}{ll}I=0&\rightarrow-3\\I=1&\rightarrow1\end{array}\right.\, .
\end{equation}
Alternatively one can write the flavor matrix element as
\begin{eqnarray}
\label{flqq}
\left< \psi|\sum_{a=1}^3\lambda^a_1\lambda^a_2|\psi\right>
&=&\left< {ud+(-1)^{I+1}du\over\sqrt{2}}\Big|\sum_{a=1}^3\lambda^a_1\lambda^a_2\Big|{ud+(-1)^{I+1}du\over\sqrt{2}}\right>=\\ \nonumber
&=&{1\over2}\sum_{a=1}^3\Bigg\{\left< ud\Big|\lambda^a_1\lambda^a_2\Big|ud\right>+\left<
du\Big|\lambda^a_1\lambda^a_2\Big|du\right>+\\ \nonumber
&+&(-1)^{I+1}\left< du\Big|\lambda^a_1\lambda^a_2\Big|ud\right>+(-1)^{I+1}\left<
ud\Big|\lambda^a_1\lambda^a_2\Big|du\right>\Bigg\}=\\ \nonumber
&=&\sum_{a=1}^3\Bigg\{\left< u|\lambda^a|u\right>\left< d|\lambda^a|d\right>+(-1)^{I+1}|\left< u|\lambda^a|d\right>|^2\Bigg\}=\\ \nonumber
&=&-1+2(-1)^{I+1}=\left\{\begin{array}{ll}I=0&-3\\I=1&1\end{array}\right. \, .
\end{eqnarray}
Other matrix elements of interest are,
\begin{eqnarray}
&&\left< \psi|\lambda_1^8\lambda_2^8| \psi\right>={1\over 3}\\ \nonumber
&&\left< \psi|\lambda_3^8\lambda_4^8| \psi\right>={4\over 3}\\ \nonumber
&&\left< \psi|\lambda_1^8\lambda_3^8| \psi\right>=
\left< \psi|\lambda_2^8\lambda_3^8| \psi\right>=
\left< \psi|\lambda_1^8\lambda_4^8| \psi\right>=
\left< \psi|\lambda_2^8\lambda_4^8| \psi\right>=-{2\over 3}\, .
\end{eqnarray}
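These flavor matrix elements can be checked directly from the matrix representation given above. The following Python sketch (only \texttt{numpy} is assumed) evaluates Equation~\eref{flqq} for both isospin couplings and the $\lambda^8$ matrix elements:

```python
import numpy as np

# Gell-Mann matrices lambda^1..lambda^3 (SU(2) subalgebra) and lambda^8
lam = {
    1: np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    2: np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    3: np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    8: np.diag([1, 1, -2]).astype(complex) / np.sqrt(3),
}

u, d, s = np.eye(3)   # u=(1,0,0), d=(0,1,0), s=(0,0,1), same for antiquarks

def me(q, mat, qp):
    """Single-quark matrix element <q|mat|q'>."""
    return q.conj() @ mat @ qp

def tau1_tau2(I):
    """<psi| sum_a lambda^a_1 lambda^a_2 |psi> for (ud + (-1)^(I+1) du)/sqrt(2)."""
    sign = (-1) ** (I + 1)
    total = 0.0
    for a in (1, 2, 3):
        total += me(u, lam[a], u) * me(d, lam[a], d) \
               + sign * abs(me(u, lam[a], d)) ** 2
    return total.real

print(tau1_tau2(0))   # -> -3 (isospin singlet)
print(tau1_tau2(1))   # ->  1 (isospin triplet)
# lambda^8 matrix elements for the (ud)(sbar sbar) wave function:
print((me(u, lam[8], u) * me(d, lam[8], d)).real)  # ~  1/3
print((me(s, lam[8], s) ** 2).real)                # ~  4/3
print((me(u, lam[8], u) * me(s, lam[8], s)).real)  # ~ -2/3
```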
\subsection{Radial space}
The most general radial wave function with orbital angular momentum
$L=0$ may depend on the six scalar quantities that can be
constructed with the Jacobi coordinates of the system, they are:
$\vec x^{\,2}$, $\vec y^{\,2}$, $\vec z^{\,2}$,
$\vec{x}\cdot\vec{y}$, $\vec{x}\cdot\vec{z}$ and
$\vec{y}\cdot\vec{z}$. We define the variational spatial wave
function as a linear combination of \linebreak {\em generalized
Gaussians},
\begin{equation}
\left|R_{s_4}\right>=\sum_{i=1}^{n} \beta_{s_4}^{(i)} R_{s_4}^i(\vec x,\vec y,\vec z)=\sum_{i=1}^{n} \beta_{s_4}^{(i)} R_{s_4}^i
\label{wave}
\end{equation}
where $n$ is the number of Gaussians we use for each
color-spin-flavor component. $R_{s_4}^i$ depends on six variational
parameters, $a^i_s$, $b^i_s$, $c^i_s$, $d^i_s$, $e^i_s$, and
$f^i_s$, one for each scalar quantity. Therefore, any tetraquark
\pagebreak will depend on $6\times n\times n_s$ variational
parameters (where $n_s$ is the number of different channels allowed
by the Pauli Principle). Equation~\eref{wave} should have well
defined permutation symmetry under the exchange of both quarks and
antiquarks,
\begin{eqnarray}
\label{parx}
P_{12}(\vec x \rightarrow -\vec x)R^i_{s_4}&=&P_xR^i_{s_4}\\ \nonumber
P_{34}(\vec y \rightarrow -\vec y)R^i_{s_4}&=&P_yR^i_{s_4},
\end{eqnarray}
where $P_x$ and $P_y$ are $-1$ for antisymmetric states, $(A)$, and $+1$ for symmetric ones, $(S)$. One can build
the following radial combinations, $(P_xP_y)=(SS)$, $(SA)$, $(AS)$ and $(AA)$:
\begin{eqnarray}
\label{wave2-1}
(SS)\Rightarrow R_1^i&=&
{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y-e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y-e^i_s\vec x\vec z+f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y+e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y+e^i_s\vec x\vec z+f^i_s\vec y\vec z\right) \, ,
\end{eqnarray}
\begin{eqnarray}
\label{wave2-2}
(SA)\Rightarrow R_2^i&=&
{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y-e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y-e^i_s\vec x\vec z+f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y+e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y+e^i_s\vec x\vec z+f^i_s\vec y\vec z\right) \, ,
\end{eqnarray}
\begin{eqnarray}
\label{wave2-3}
(AS)\Rightarrow R_3^i&=&
{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y-e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y-e^i_s\vec x\vec z+f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y+e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y+e^i_s\vec x\vec z+f^i_s\vec y\vec z\right) \, ,
\end{eqnarray}
\begin{eqnarray}
\label{wave2-4}
(AA)\Rightarrow R_4^i&=&
{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y-e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y-e^i_s\vec x\vec z+f^i_s\vec y\vec z\right)\\\nonumber
&-&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}+d^i_s\vec x\vec y+e^i_s\vec x\vec z-f^i_s\vec y\vec z\right)\\\nonumber
&+&{\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}-d^i_s\vec x\vec y+e^i_s\vec x\vec z+f^i_s\vec y\vec z\right) \, .
\end{eqnarray}
By defining the function
\begin{equation}
\label{red1}
g(s_1,s_2,s_3)={\rm Exp}\left(-a^i_s\vec x^{\,2}-b^i_s\vec y^{\,2}-c^i_s\vec z^{\,2}
-s_1d^i_s\vec x\vec y-s_2e^i_s\vec x\vec z-s_3f^i_s\vec y\vec z\right),
\end{equation}
we can build the vectors
\begin{equation}
\vec{G_s^i}=\left(\begin{array}{l} g(+,+,+)\\g(-,+,-)\\g(-,-,+)\\g(+,-,-)\end{array}\right)\, ,
\end{equation}
and
\begin{eqnarray}
\label{red2}
\vec{\alpha_{SS}}&=&(+,+,+,+)\\ \nonumber
\vec{\alpha_{SA}}&=&(+,-,+,-)\\ \nonumber
\vec{\alpha_{AS}}&=&(+,+,-,-)\\ \nonumber
\vec{\alpha_{AA}}&=&(+,-,-,+),
\end{eqnarray}
which allows us to write
Equations~\eref{wave2-1},~\eref{wave2-2},~\eref{wave2-3},
and~\eref{wave2-4} in a compact way,
\begin{eqnarray}
\label{redu}
(SS)&\Rightarrow& R_1^i=\vec{\alpha_{SS}}\cdot\vec{G_s^i}\\ \nonumber
(SA)&\Rightarrow& R_2^i=\vec{\alpha_{SA}}\cdot\vec{G_s^i}\\ \nonumber
(AS)&\Rightarrow& R_3^i=\vec{\alpha_{AS}}\cdot\vec{G_s^i}\\ \nonumber
(AA)&\Rightarrow& R_4^i=\vec{\alpha_{AA}}\cdot\vec{G_s^i} \, .
\end{eqnarray}
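The permutation properties in Equation~\eref{parx} can be verified numerically by evaluating the symmetrized Gaussians at reflected Jacobi coordinates. The following Python sketch (only \texttt{numpy} is assumed) uses arbitrary, made-up values for the variational parameters and coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, dd, e, f = rng.uniform(0.5, 1.5, 6)   # arbitrary variational parameters
x, y, z = rng.normal(size=(3, 3))              # random Jacobi 3-vectors

def g(s1, s2, s3, x, y, z):
    """Generalized Gaussian g(s1,s2,s3) of Eq. (red1)."""
    return np.exp(-a*x@x - b*y@y - c*z@z
                  - s1*dd*(x@y) - s2*e*(x@z) - s3*f*(y@z))

# Components of the vector G and the alpha sign patterns of Eq. (red2)
SIGNS = [(+1, +1, +1), (-1, +1, -1), (-1, -1, +1), (+1, -1, -1)]
ALPHA = {1: (+1, +1, +1, +1), 2: (+1, -1, +1, -1),
         3: (+1, +1, -1, -1), 4: (+1, -1, -1, +1)}

def R(k, x, y, z):
    """R_k = alpha_k . G evaluated at (x, y, z)."""
    return sum(al * g(*sg, x, y, z) for al, sg in zip(ALPHA[k], SIGNS))

# (SA) component: symmetric in x, antisymmetric in y, odd under z -> -z
assert np.isclose(R(2, -x, y, z),  R(2, x, y, z))   # P_x = +1
assert np.isclose(R(2, x, -y, z), -R(2, x, y, z))   # P_y = -1
assert np.isclose(R(2, x, y, -z), -R(2, x, y, z))   # P_z = -1
# (SS) component is even under all three reflections
assert np.isclose(R(1, x, y, -z),  R(1, x, y, z))
print("symmetry checks passed")
```

The same evaluation confirms the $\vec z$-reflection signs quoted later for all four components.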
Such a radial wave function includes all possible relative orbital angular momenta
coupled to $L=0$. This can be seen through the relation:
\begin{eqnarray}
\label{ldz}
{\rm Exp}\left(-d^i_s\vec x\vec y-e^i_s\vec x\vec z-f^i_s\vec y\vec
z\right)={1\over\sqrt{4\pi}}\sum_{\ell_x=0}^\infty\sum_{\ell_y=0}^\infty\sum_{\ell_z=0}^\infty
\left[[Y_{\ell_x}(\hat x)Y_{\ell_y}(\hat y)]_{\ell_z}Y_{\ell_z}(\hat
z)\right]_0\\ \nonumber
\sum_{\ell_1,\ell_2,\ell_3}(2\ell_1+1)(2\ell_2+1)(2\ell_3+1)\left< \ell_10\ell_20|\ell_x\right>\left<
\ell_10\ell_30|\ell_y\right>\left<
\ell_20\ell_30|\ell_z\right>\left\{\begin{array}{ccc}\ell_x&\ell_y&\ell_z\\\ell_3&\ell_2&\ell_1\end{array}\right\}
\\ \nonumber
\left(\sqrt{\pi\over {2\,d_s^ixy}}I_{\ell_1+1/2}(d_s^ixy)\right)
\left(\sqrt{\pi\over {2\,e_s^ixz}}I_{\ell_2+1/2}(e_s^ixz)\right)
\left(\sqrt{\pi\over {2\,f_s^iyz}}I_{\ell_3+1/2}(f_s^iyz)\right) \, ,
\end{eqnarray}
where $\ell_x$, $\ell_y$ and $\ell_z$ are the orbital angular momenta associated
to coordinates $\vec x$, $\vec y$ and $\vec z$, and $I_a(x)$ are the modified Bessel functions.
The radial wave functions defined above also have well-defined
symmetry properties under reflection of the $\vec z$ coordinate. Defining
$P_{(12)(34)}(\vec z \rightarrow -\vec z)R^i_{s_4}=P_zR^i_{s_4}$,
one obtains,
\begin{eqnarray}
\label{parz}
P_{(12)(34)}R_1^i&=&+R_1^i\\ \nonumber
P_{(12)(34)}R_2^i&=&-R_2^i\\ \nonumber
P_{(12)(34)}R_3^i&=&-R_3^i\\ \nonumber
P_{(12)(34)}R_4^i&=&+R_4^i \, .
\end{eqnarray}
To evaluate radial matrix elements we will use the notation
introduced in Equation~\eref{redu}:
\begin{equation}
\label{ra1}
\left< R_{\gamma}^i|f(x,y,z)|R_{\beta}^j\right>=\int_V(\vec \alpha_{S_\gamma}\cdot \vec G^i_s)f(x,y,z)(\vec \alpha_{S_\beta}\cdot \vec G^j_{s'})dV=
\vec \alpha_{S_\gamma}\cdot F^{ij}\cdot\vec \alpha_{S_\beta}\, ,
\end{equation}
where $\gamma$ and $\beta$ stand for the symmetry of the radial
wave function and $F^{ij}$ is a matrix whose element $(a,b)$ is
defined through,
\begin{equation}
F^{ij}_{ab}=\int_V(\vec G_s^i)_a(\vec G^j_{s'})_bf(x,y,z)dV\, ,
\end{equation}
where $(\vec G_s^i)_a$ denotes the component $a$ of the vector $\vec G_s^i$.
From Equation~\eref{red1} one obtains,
\begin{equation}
g(s_1,s_2,s_3)g(s'_1,s'_2,s'_3)={\rm Exp}\left(-a_{ij}\vec x^{\,2}-b_{ij}\vec y^{\,2}-c_{ij}\vec z^{\,2}
-\bar d_{ij}\vec x\vec y-\bar e_{ij}\vec x\vec z-\bar f_{ij}\vec y\vec z\right) \, ,
\end{equation}
where we have shortened the previous notation according to $a^i_s\to a_i$,
$a_{ij}=a_i+a_j$ and $\bar d_{ij}=(s_1d_i+s_1'd_j)$. Therefore,
all four-body radial matrix elements will contain integrals of the form
\begin{equation}
I=\int_V{\rm Exp}\left(-a_{ij}\vec x^{\,2}-b_{ij}\vec y^{\,2}-c_{ij}\vec
z^{\,2} -\bar d_{ij}\vec x\vec y-\bar e_{ij}\vec x\vec z-\bar f_{ij}\vec y\vec z\right)f(x,y,z)d\vec xd\vec yd\vec z \, ,
\end{equation}
where the functions $f(x,y,z)$ are the potentials. Since
all of them are radial functions (not depending on angular variables),
the previous integral can be solved by noting:
\begin{equation}
\int {\rm Exp}\big[-\sum_{i,j=1}^nA_{ij}\vec x_i \vec x_j\big]f\big(|\sum \alpha_k\vec
x_k|\big)d\vec x_1...d\vec
x_n=\Bigg({\pi^n\over{\det A}}\Bigg)^{3\over2}4\pi\Bigg({\Omega_{ij}\over\pi}\Bigg)^{3\over2}F(\Omega_{ij},f) \, ,
\end{equation}
where
\begin{eqnarray}
{1\over\Omega_{ij}}&=&\bar\alpha\cdot A^{-1} \cdot\alpha\\ \nonumber
F(A,f)&=&\int e^{-Au^2}f(u)u^2du\\ \nonumber
\det A&>&0\\ \nonumber
{1\over\Omega_{ij}}&>&0 \, .
\end{eqnarray}
One can extract some useful relations for the radial matrix elements
using simple symmetry properties. Let us rewrite Equation~\eref{ra1}
\begin{eqnarray}
\left< R_{\gamma}^i|f(x,y,z)|R_{\beta}^j\right>&=&\left< R_{P_xP_yP_z}^i|f(x,y,z)|R_{P_x'P_y'P_z'}^j\right>\\ \nonumber
&=&\int_x\int_y\int_z R_{P_xP_yP_z}^if(x,y,z)R_{P_x'P_y'P_z'}^jd\vec xd\vec yd\vec z \, .
\end{eqnarray}
If $f(x,y,z)$ depends only on one coordinate, for example $\vec x$, the
integrals over the other coordinates vanish if one of them has different symmetry properties,
$P_y\neq P_y'$ or $P_z\neq P_z'$ in our example. Therefore
\begin{eqnarray}
\left< R_{\gamma}^i|f(x)|R_{\beta}^j\right>&\propto&\delta_{\gamma\beta}\\ \nonumber
\left< R_{\gamma}^i|f(y)|R_{\beta}^j\right>&\propto&\delta_{\gamma\beta}\\ \nonumber
\left< R_{\gamma}^i|f(z)|R_{\beta}^j\right>&\propto&\delta_{\gamma\beta}\\ \nonumber
\left< R_{\gamma}^i|{\rm Constant}|R_{\beta}^j\right>&\propto&\delta_{\gamma\beta} \, .
\end{eqnarray}
The radial wave function described in this section is adequate not only
to describe bound states, but it is also flexible enough to describe
states of the two-meson continuum with reasonable accuracy.
We will come back to this point in Section~\ref*{results}.
\subsection{Parity and $C-$parity}
The parity of a tetraquark can be calculated as
\begin{equation}
\label{pari1}
P\left[R_{s_4}^i(\vec x,\vec y,\vec z)\right]=R_{s_4}^i\left(\begin{array}{c}\vec x\to-\vec x\\\vec y\to-\vec y\\
\vec z\to-\vec z\end{array}\right)=(-1)^{\ell_x+\ell_y+\ell_z}R_{s_4}^i(\vec x,\vec y,\vec z)\, ,
\end{equation}
or using Equations~\eref{parx} and~\eref{parz},
\begin{equation}
\label{pari2b}
P\left[R_{s_4}^i(\vec x,\vec y,\vec z)\right]=P_{12}P_{34}P_{(12)(34)}R_{s_4}^i(\vec x,\vec y,\vec z)=P_xP_yP_z\,
R_{s_4}^i(\vec x,\vec y,\vec z)\, ,
\end{equation}
which in our case implies
\begin{equation}
P\left[R_{s_4}^i(\vec x,\vec y,\vec z)\right] =
\left\{\begin{array}{c} (+)(+)(+)R^i_1 \\
(+)(-)(-)R^i_2 \\
(-)(+)(-)R^i_3 \\
(-)(-)(+)R^i_4
\end{array}\right\}=
\left\{\begin{array}{c} +R^i_1 \\
+R^i_2 \\
+R^i_3 \\
+R^i_4
\end{array}\right\}=
+R_{s_4}^i(\vec x,\vec y,\vec z) \, .
\label{pari2}
\end{equation}
Hence, this formalism describes positive parity states, and is thus
adequate to study tetraquark ground states.
\begin{table}[h!!]
\caption{The action of $C$ over the spin part of the wave function.}
\label{c_spin}
\begin{center}
\begin{tabular}{|cc|}
\hline
$S=0$ &$C|00\rangle^{12}_s=+|00\rangle^{12}_s$\\
&$C|11\rangle^{12}_s=+|11\rangle^{12}_s$\\
\hline
&$C|01\rangle^{12}_s=-|10\rangle^{12}_s$\\
$S=1$ &$C|10\rangle^{12}_s=-|01\rangle^{12}_s$\\
&$C|11\rangle^{12}_s=+|11\rangle^{12}_s$\\
\hline
$S=2$ &$C|11\rangle^{12}_s=+|11\rangle^{12}_s$\\
\hline
\end{tabular}
\end{center}
\end{table}
From Equation~\eref{pari2} one can see that, not only the total wave
function will be an eigenstate of the parity operator, but also each
component will be. This is not the case for $C$-parity, where only
the total wave function will be an eigenstate and therefore it must
be obtained numerically for each state. The tetraquark $C-$parity
will depend on the variational parameters and on the
$\beta_{s}^{(i)}$ coefficients. This dependence is contained in the
action of the $C-$parity operator over the different parts of the
wave function which will give us the following relations:
\begin{equation}
C\left|R_{s_4}^i(\vec x,\vec y,\vec z)\right>=R_{s_4}^i(\vec y,\vec x,-\vec z) \, ,
\end{equation}
and if $a_s^i=b_s^i$ and $e_s^i=f_s^i$ (which is a very common result),
\begin{equation}
C\left|R_{s_4}^i(\vec x,\vec y,\vec z)\right>=\left\{\begin{array}{cc}s_4=1&\rightarrow +R_1^i\\
s_4=2&\rightarrow -R_3^i\\
s_4=3&\rightarrow -R_2^i\\
s_4=4&\rightarrow +R_4^i\end{array}\right..
\end{equation}
The action of $C$ over the spin part of the wave function is summarized in Table~\ref*{c_spin}.
The action of $C$ over the flavor part of the wave function has to be evaluated individually once the wave
functions have been constructed.
\subsection{$q\bar q\leftrightarrow qq\bar q\bar q$ mixing}
\label{mixing}
Many of the possible four-quark systems may present $J^{PC}$ quantum numbers that can be
reached not only by means of $qq\bar q\bq$ configurations but also by $q\bar q$ ones with similar energies.
In these cases the possibility of a mixing between them cannot be discarded a priori and therefore, their most
general wave function will read
\begin{equation}
\label{mes-w}
\left|\mathrm{B=0}\right>=\sum_n\Omega_n\left|(q\bar q)^n\right>=\Omega_1\left|q\bar q\right>+\Omega_2\left|q\bar q q\bar q\right>+....
\end{equation}
These particular systems may be described by a hamiltonian
\begin{equation}
H=H_0 + H_1 \,\,\,\,\, {\rm being} \,\,\,\,\,
H_0=\left(\begin{array}{cc}H_{q\bar q} & 0 \\ 0 & H_{qq\bar q\bq}\end{array}\right)\,\,\,
H_1=\left(\begin{array}{cc} 0 & V_{q\bar q \leftrightarrow qq\bar q\bq} \\ V_{q\bar q \leftrightarrow qq\bar q\bq} & 0 \end{array}\right)
\label{per1}
\end{equation}
where the nondiagonal terms can be treated perturbatively, thus allowing one to solve the two- and four-body sectors
separately.
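In the simplest two-channel truncation of Equation~\eref{mes-w}, solving Equation~\eref{per1} amounts to diagonalizing a $2\times2$ matrix whose eigenvector components are the $\Omega_n$ coefficients. The Python sketch below (only \texttt{numpy} assumed) uses purely illustrative, made-up energies and coupling:

```python
import numpy as np

# Illustrative (made-up) channel energies and coupling strength, in MeV
E_2q, E_4q, V = 980.0, 1100.0, 150.0
H = np.array([[E_2q, V],
              [V,    E_4q]])

eigval, eigvec = np.linalg.eigh(H)   # eigenvalues in ascending order
# Lowest physical state and its Omega_1 (q qbar) / Omega_2 (qq qbar qbar) amplitudes
E0 = eigval[0]
Omega1, Omega2 = eigvec[:, 0]
print(E0, Omega1**2, Omega2**2)      # mass and configuration probabilities
```

Note the characteristic level repulsion: the lowest eigenvalue lies below both unmixed energies, and the state remains dominantly $q\bar q$ for a moderate coupling.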
\begin{figure}[h!!]
\begin{center}
\caption{Coupling between $qq\bar q\bq$ and $q\bar q$ configurations.}
\label{figcq}
\epsfig{file=fig3.eps}
\end{center}
\vspace*{-0.5cm}
\end{figure}
The Hamiltonian $H_1$ describes the mixing between two- and
four-body configurations. Its explicit expression would require knowledge
of the operator annihilating a quark-antiquark pair into the vacuum. This could
be done, for example, using a $^3P_0$ model, but the result would introduce an
additional degree of uncertainty through the parametrization used to describe the vertex. Such a parametrization
is determined by the energy scale at which the transition $qq\bar q\bq\leftrightarrow q\bar q$ takes place.
For the sake of simplicity this can be parametrized by looking at the quark pair that is annihilated,
and not at the spectator quarks that will form the final $q\bar q$ state:
\begin{equation}
V_{q\bar q \leftrightarrow qq\bar q\bq}\Rightarrow\left\{
\begin{array}{l}
\left< nn\bar n\bn|V|n\bar n\right>=\left< ns\bar n\bar s|V|s\bar s\right>=\left< nn\bar n\bar s|V|n\bar s\right>=C_n \\
\left< ss\bar s\bs|V|s\bar s\right>=\left< ns\bar n\bar s|V|n\bar n\right>=\left< ns\bar s\bs|V|n\bar s\right>=C_s\end{array}\right.\, .
\end{equation}
A sketch of these mixing interactions is drawn in
Figure~\ref*{figcq}. Such an approach has been used in a series of
papers to describe the light-scalar mesons and the open-charm and
open-bottom meson \linebreak sectors~\cite{Vij05,Vij06,Vij08,Vij09}.
\begin{center}
\begin{table}[h!!]
\caption{Lowest two-meson thresholds in the uncoupled (UN) and
coupled (CO) schemes for two particular $cn\bar c\bar n$ (upper) and
$cc\bar n\bar n$ (lower) states, see text for details. They have
been calculated using the CQC model, see Section~\ref*{results} for
details. $M_1\,M_2\vert_L$ indicates the lowest threshold and $L$
its relative orbital angular momentum. Energies are in MeV.}
\label{threstab}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|}
\hline
& \multicolumn{6}{c||}{UN} & \multicolumn{2}{c|}{CO} \\
\hline
$S$ &0 &1 &2 &0 &1 &2 &\multicolumn{2}{c|}{}\\
\hline
$I$ &\multicolumn{3}{c|}{$0$} & \multicolumn{3}{c||}{$1$} & 0 & 1\\
\hline
$J^{PC}=1^{++}$ & $-$ & $J/\psi\,\omega\vert_S$ & $-$ & $-$ & $J/\psi\,\rho\vert_S$ &$-$
& $J/\psi\,\omega\vert_{S,D}$ & $J/\psi\,\rho\vert_{S,D}$ \\
$(L=0)$ & $-$ & 3745 & $-$ & $-$ & 3838 & $-$
& 3745 & 3838\\
\hline
$J^{PC}=1^{--} $& $D\,\bar D\vert_P$ & $\eta_c\,\omega\vert_P$& $D^*\,\bar D^*\vert_P$ & $D\,\bar D\vert_P$
& $J/\psi\,\pi\vert_P$ & $D^*\,\bar D^*\vert_P$ & $\eta_c\,\omega\vert_P$
& $J/\psi\,\pi\vert_P$ \\
$(L=1)$ & 3872 & 3683 & 4002 & 3872
& 3590 & 4002 & 3683 & 3590\\
\hline
\hline
$J^P=1^+$ & $-$ & $D\,D^*\vert_S$ & $-$ & $-$ & $D\,D^*\vert_S$ & $-$ & $D\,D^*\vert_{S,D}$ & $D\,D^*\vert_{S,D}$ \\
$(L=0)$ & & 3937 & & & 3937 & & 3937 & 3937 \\
\hline
$J^P=1^-$ & $D\,D\vert_P$ & $D\,D^*\vert_P$ & $D^*\,D^*\vert_P$ & $D\,D_1\vert_S$ & $D\,D^*\vert_P$
& $D^*\,D_J^*\vert_{S,D}$ & $D\,D\vert_P$ & $D\,D^*\vert_P$ \\
$(L=1)$ & 3872 & 3937 & 4002 & 4426 & 3937
& 4499 & 3872 & 3937 \\
\hline
\end{tabular}
\end{table}
\end{center}
\section{Four-quark stability and threshold determination}
\label{thres}
The color degree of freedom makes an important difference between
four-quark systems and ordinary baryons or mesons. For baryons and
mesons it is not possible to construct a color singlet using a
subset of the constituents, thus only $q\bar q$ or $qqq$ states are
proper solutions of the two- or three-quark interacting hamiltonian
and therefore, all solutions correspond to bound states. However,
this is not the case for four-quark systems. The color rearrangement
of Equations~\eref{qq13} and~\eref{qq14} makes that two isolated
mesons are also a solution of the four-quark hamiltonian. In order
to distinguish between four-quark bound states and simple pieces of
the meson-meson continuum, one has to analyze the two-meson states
that constitute the threshold for each set of quantum numbers.
These thresholds must be determined assuming quantum number
conservation within exactly the same model scheme (same parameters
and interactions) used in the four-quark calculation. When dealing
with strongly interacting particles, the two-meson states should
have well defined total angular momentum ($J$) and parity ($P$), the
{\it coupled} scheme. If two identical mesons are considered, the
spin-statistics theorem imposes a properly symmetrized wave
function. Moreover, $C-$parity should be conserved in the final
two-meson state for those four-quark states with well-defined
$C-$parity. If noncentral forces are not considered, orbital angular
momentum ($L$) and total spin ($S$) are also good quantum numbers,
this being the {\it uncoupled} scheme.
An important property of four-quark states containing identical
quarks, like for instance the $QQ\bar n\bar n$ system, that is
crucial for the possible existence of bound states, is that only one
physical threshold is allowed, $(Q \bar n)(Q\bar n)$ for the case of
heavy-light tetraquarks. Consequently, particular modifications of
the four-quark interaction, for instance a strong color-dependent
attraction in the $QQ$ pair, would not be translated into the
asymptotically free two-meson state. As discussed in~\cite{Vij09b},
this is not a general property of four-quark spectroscopy, since the
$Q\bar Q n\bar n$ four-quark state has two allowed physical
thresholds: $(Q\bar Q)(n\bar n)$ and $(Q\bar n)(n\bar Q)$. The
lowest thresholds for $nn\bar Q\bar Q$ states are given
in~\cite{Vij09b}, for $nQ\bar n\bar Q$ states in~\cite{Vij07}, and
those for $QQ\bar Q\bar Q$ in~\cite{Vij06b}. We give in
Table~\ref*{threstab} the lowest thresholds for some particular cases
to illustrate their differences. We show both the coupled (CO) and
the uncoupled (UN) schemes together with the final state relative
orbital angular momentum of the decay products. We would like to
emphasize that even when only central forces are considered the
coupled scheme is the relevant one for experimental observations.
The relevant quantity for analyzing the stability of any four-quark
state is $\Delta_E$, the energy difference between the mass of the
four-quark system and that of the lowest two-meson threshold,
\begin{equation}
\label{binding}
\Delta_E=E_{4q}-E(M_1,M_2)\, ,
\end{equation}
where $E_{4q}$ stands for the four-quark energy and $E(M_1,M_2)$ for
the energy of the two-meson threshold. Thus, $\Delta_E<0$ indicates
that all fall-apart decays are forbidden, and therefore one has a
proper bound state. $\Delta_E\ge 0$ will indicate that the
four-quark solution corresponds to an unbound threshold (two free
mesons).
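The stability criterion of Equation~\eref{binding} is straightforward to encode. The numbers below are taken from this paper: the $(L,S,I)=(0,1,0)$ $cc\bar n\bar n$ variational mass of 3861.4 MeV quoted in Section~\ref*{results}, against the $D\,D^*$ threshold of 3937 MeV from Table~\ref*{threstab}:

```python
def delta_E(E_4q, E_threshold):
    """Energy of the four-quark state relative to the lowest two-meson threshold (MeV)."""
    return E_4q - E_threshold

def is_bound(E_4q, E_threshold):
    # Delta_E < 0: all fall-apart decays are forbidden -> proper bound state
    return delta_E(E_4q, E_threshold) < 0

# cc nbar nbar with (L,S,I)=(0,1,0) vs the D D* threshold
print(is_bound(3861.4, 3937.0))   # True: the state lies below the D D* threshold
```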
\section{Probabilities in four-quark systems}
\label{prob}
As discussed in the previous sections, four-quark systems present a
richer color structure than ordinary baryons or mesons. While the
color wave function for standard mesons and baryons leads to a
single vector, for four-quark states there are different
vectors leading to a singlet color state out of colorless or colored
quark-antiquark two-body components. Thus, when dealing with four-quark
states an important question is whether we are in front of a
colorless meson-meson molecule or a compact state, i.e., a system
with two-body colored components. While the first structure would be
natural in the naive quark model, the second one would open a new
area of hadron spectroscopy.
To evaluate the probability of physical channels (singlet-singlet
color states) one needs to expand any hidden-color vector of the
four-quark state color basis in terms of singlet-singlet color
vectors. Given a general four-quark state, this requires mixing terms
from two different couplings, Equations~\eref{eq1b} and~\eref{eq1c}.
If $(q_1,q_2)$ or $(\bar q_3,\bar q_4)$ are identical
quarks/antiquarks, then a general four-quark wave function can be
expanded in terms of color singlet-singlet nonorthogonal vectors, and
therefore the determination of the probability of physical channels
becomes cumbersome.
In~\cite{Vij09c} the two Hermitian operators that are well-defined
projectors on the two physical singlet-singlet color states were
derived,
\begin{eqnarray}
{\cal P}_{\left.\mid 11 \right\rangle_c} & =& \left( P\hat Q + \hat Q P \right) \frac{1}{2(1-|\,_c\left\langle 11 \mid 1'1' \right\rangle_c|^2)}
\nonumber \\
{\cal P}_{\left.\mid 1'1' \right\rangle_c} & =& \left( \hat P Q + Q \hat P \right) \frac{1}{2(1-|\,_c\left\langle 11 \mid 1'1' \right\rangle_c|^2)} \, ,
\label{tt}
\end{eqnarray}
where $P$, $Q$, $\hat P$, and $\hat Q$ are the projectors over the
basis vectors (\ref*{eq1b}) and (\ref*{eq1c}),
\begin{eqnarray}
P & = & \left.\mid 11 \right\rangle_c \,_c\left\langle 11 \mid\right. \nonumber \\
Q & = & \left.\mid 88 \right\rangle_c\,_c\left\langle 88 \mid\right. \, ,
\label{Proj1}
\end{eqnarray}
and
\begin{eqnarray}
\hat P & = & \left.\mid 1'1' \right\rangle_c\,_c\left\langle 1'1' \mid\right. \nonumber \\
\hat Q & = & \left.\mid 8'8' \right\rangle_c\,_c\left\langle 8'8' \mid\right. \, .
\label{Proj2}
\end{eqnarray}
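The behavior of these projectors can be illustrated in the two-dimensional subspace spanned by the two singlet-singlet vectors. The Python sketch below (only \texttt{numpy} assumed) takes as input the standard color-recoupling overlap $_c\langle 11|1'1'\rangle_c=1/3$ (a value quoted here for illustration, not derived in this section) and checks that the expectation value of ${\cal P}_{\mid 11\rangle_c}$ is 1 on $\mid 11\rangle_c$ and 0 on $\mid 1'1'\rangle_c$:

```python
import numpy as np

# Overlap of the two singlet-singlet color vectors (standard recoupling value)
c = 1.0 / 3.0
s = np.sqrt(1 - c**2)
# Represent the 2-dim subspace: |88> (|8'8'>) is orthogonal to |11> (|1'1'>)
v11,  v88  = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v11p, v88p = np.array([c, s]),     np.array([s, -c])

def proj(v):
    return np.outer(v, v)

P, Q   = proj(v11),  proj(v88)    # projectors on basis (eq1b)
Ph, Qh = proj(v11p), proj(v88p)   # projectors on basis (eq1c)

# Hermitian projectors of Eq. (tt)
P11 = (P @ Qh + Qh @ P) / (2 * (1 - c**2))
P1p = (Ph @ Q + Q @ Ph) / (2 * (1 - c**2))

# Expectation values behave as probabilities of the physical channels
assert np.isclose(v11  @ P11 @ v11,  1.0)   # pure |11>   -> probability 1
assert np.isclose(v11p @ P11 @ v11p, 0.0)   # pure |1'1'> -> probability 0
assert np.isclose(v11p @ P1p @ v11p, 1.0)
assert np.isclose(v11  @ P1p @ v11,  0.0)
print("projector checks passed")
```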
Using them and the formalism of~\cite{Vij09c}, the four-quark nature (unbound, molecular,
or compact) can be explored. Such a formalism can be applied to any four-quark
state; however, it becomes much simpler when distinguishable quarks are present. This would be,
for example, the case of the $nQ\bar n\bar Q$ system, where the Pauli principle does not apply.
In this system the bases \eref{eq1b} and \eref{eq1c} are distinguishable due to the flavor part,
corresponding to $[(n\bar c)(c\bar n)]$ and $[(n\bar n)(c\bar c)]$ as indicated in Table~\ref*{tab_clasi},
and therefore they are orthogonal. This means that the probability of a physical channel
can be evaluated in the usual way for
an orthogonal basis~\cite{Vij08}. The non-orthogonal bases formalism is required in those cases
where the Pauli principle applies either to the quark or the antiquark pairs, see Table~\ref*{tab_clasi}.
Relevant expressions can be found in~\cite{Vij09c}.
\begin{table}[h!!]
\begin{center}
\caption{Mass, in MeV, and flavor dominant component of the
light scalar-isoscalar mesons.}
\label{t8}
\begin{tabular}{|cc|cc|}
\hline
State & PDG & Mass & Flavor \\
\hline
$f_0(600)$ & 400$-$1200 & 568 & $(n\bar n_{1P})$ \\
$f_0(980)$ & 980$\pm$10 & 999 & $(nn\bar n\bar n)$ \\
$f_0(1200-1600)$ & 1400$\pm$200 & 1299 & $(s\bar s_{1P})$ \\
$f_0(1370)$ & 1200$-$1500 & 1406 & $(n \bar n_{2P})$ \\
$f_0(1500)$ & 1507$\pm$5 & 1611 & $(ns\bar n\bar s)$ \\
$f_0(1710)$ & 1714$\pm$5 & 1704 & (glueball) \\
$f_0(1790)$ & 1790$^{+40}_{-30}$& 1782 & $(n \bar n_{3P})$ \\
$f_0(2020)$ & 1992$\pm$16 & 1902 & $(ss \bar s\bar s)$ \\
$f_0(2100)$ & 2103$\pm$17 & 1946 & $(s \bar s_{2P})$ \\
$f_0(2200)$ & 2197$\pm$17 & 2224 & $(s \bar s_{3P})$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Some selected results}
\label{results}
To illustrate the formalism we have introduced, we now discuss some selected results. We
make use of a standard quark potential model, the constituent quark cluster (CQC) model.
It was proposed in the early 1990s in an attempt to obtain a simultaneous description of the nucleon-nucleon
interaction and the baryon spectra~\cite{Rep05}.
Later on it was generalized to all flavor sectors, giving
a reasonable description of the meson~\cite{Vij05a} and baryon spectra~\cite{Vij04a,Vij04b,Vij04c}.
Explicit expressions of the interacting potentials and a detailed discussion of the model can be found
in~\cite{Vij05a}.
The performance of the numerical procedure we have presented described can be checked
by comparing with other methods in the literature to understand its capability and
advantages. Ref.~\cite{Vij09b} makes use of a hyperspherical harmonic (HH) expansion
to study heavy-light tetraquarks, obtaining a mass of
3860.7 MeV ($K_{\rm max}=24$) for the $(L,S,I)=(0,1,0)$ $cc \bar n\bar n$ state
using the CQC model.
The variational formalism described here gives a value of
3861.4 MeV (with 6 Gaussians), in very good agreement.
Concerning the unbound states, belonging
to the two-meson continuum, the variational method is able to describe
reasonably their energies and root mean square radii. For the
unbound $(L,S,I)=(0,0,1)$ $cc \bar n\bar n$ state the variational
method gives a value of $\Delta_E=+5$ MeV to be compared with the
value obtained with the HH formalism ($K=28$), $\Delta_E=+33$ MeV. This is
due to the flexibility of the expansion in terms of generalized Gaussians
and its ability to mimic the oscillatory behavior of the continuum wave functions,
something that is more difficult using an expansion in terms of
Laguerre functions~\cite{Vij09b}.
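The role of the non-orthogonal Gaussian expansion can be illustrated on a toy problem. The following sketch is only a schematic analogue of the method used here: it solves a one-dimensional harmonic oscillator, $H=-\tfrac{1}{2}\partial_x^2+\tfrac{1}{2}x^2$, in a basis of six Gaussians $e^{-a_i x^2}$ with illustrative (hypothetical) ranges $a_i$, leading to the generalized eigenvalue problem $Hc=ESc$ that arises whenever the basis is non-orthogonal.

```python
import numpy as np
from scipy.linalg import eigh

# Toy variational calculation in a non-orthogonal Gaussian basis exp(-a_i x^2):
# H = -1/2 d^2/dx^2 + 1/2 x^2 (1D harmonic oscillator, exact ground state E0 = 1/2).
# The ranges a_i below are illustrative, not the optimized ranges of the actual method.
a = 0.2 * 2.0 ** np.arange(6)          # geometric progression of 6 Gaussian ranges
A = a[:, None] + a[None, :]
S = np.sqrt(np.pi / A)                 # overlap matrix   <i|j>
T = (np.outer(a, a) / A) * S           # kinetic energy   <i|-1/2 d^2/dx^2|j>
V = S / (4.0 * A)                      # potential        <i|x^2/2|j>
E = eigh(T + V, S, eigvals_only=True)  # generalized eigenproblem  H c = E S c
print(E[0])                            # variational upper bound, close to 0.5
```

The same structure, a Hamiltonian and an overlap matrix in a non-orthogonal basis, appears in the four-quark problem, only with far more involved matrix elements.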
\begin{figure}[h!!]
\begin{center}
\caption{Regge trajectories for the scalar-isoscalar mesons. The squares represent
the results of Table~\protect\ref*{t8}. The lower solid line corresponds to
$n \bar n$ systems and the upper line to $s \bar s$ systems.
The dashed lines correspond to the mass
of those states with a large non$-q\bar q$ component.}
\label{figre}
\vspace*{1.0cm}
\epsfig{file=fig4.eps,width=3.5in}
\end{center}
\end{figure}
Let us now discuss some particular examples where four-quark
structures could be present. First of all we center
our attention on the light scalar-isoscalar mesons.
In~\cite{Vij05} scalar mesons below 2 GeV were studied in terms
of the mixing of a chiral nonet of tetraquarks with conventional $q\bar q$ states using the scheme described in
Section~\ref*{mixing}. We show in Table~\ref*{t8} results for the
energies and dominant flavor component of the scalar-isoscalar mesons
when considering also the mixing with a scalar glueball based on intuition from
lattice QCD~\cite{Bal01,Mcn00,Lee00,Ams95}.
The results show a nice correspondence between theoretical
predictions and experiment. This assignment suggests that there are four
isoscalar mesons that are not dominantly
$q \bar q$ states: the
$f_0(980)$ (dominantly a $nn \bar n \bar n$ state), the
$f_0(1500)$ (dominantly a $ns \bar n \bar s$ state), the
$f_0(1710)$ (dominantly a glueball) and the
$f_0(2020)$ (dominantly a $ss \bar s \bar s$ state).
This is clearly seen in Figure~\ref*{figre} where we have
constructed the two Regge trajectories associated to
the isoscalar mesons. As it is observed the masses of the
$f_0(600)$, $f_0(1200-1600)$, $f_0(1370)$, $f_0(1790)$, $f_0(2100)$, $f_0(2200)$ fit
nicely in one of the two Regge trajectories, while those
corresponding to the $f_0(980)$, $f_0(1500)$, $f_0(1710)$, $f_0(2020)$ do not
fit for any integer value. The exception would be the $f_0(2020)$,
which is the state orthogonal to the $f_0(2100)$, having
almost 50\% of four-quark component.
The glueball component is shared between the three neighboring states: 20 \% for the $f_0(1370)$,
2 \% for the $f_0(1500)$ and 76 \% for the $f_0(1710)$.
These results, assigning the largest glueball component to the $f_0(1710)$, are in
line with Refs.~\cite{Mcn00,Lee00} and differ from those of
Refs.~\cite{Ams02,Ams96a,Ams96b} concluding that the $f_0(1710)$ is dominantly $s \bar s$
and Ref.~\cite{Ven06} supporting a low-lying glueball camouflaged within the
$f_0(600)$ peak.
\begin{table}[h!!]
\caption{Probabilities (P), in \%, of the wave function components
and masses (QM), in MeV, of the open-charm and open-bottom mesons with $I=0$ (left) and
$I=1/2$ (right) once the mixing between $q\bar q$ and $qq\bar q\bar q$ configurations
is considered. Experimental data (Exp.) are taken from Ref. \cite{PDG08}.}
\label{t3}
\begin{center}
\begin{tabular}{|c|cc||c|cc||c|cc|}
\hline
\multicolumn{6}{|c||}{$I=0$} & \multicolumn{3}{|c|}{$I=1/2$} \\
\hline
\multicolumn{3}{|c||}{$J^P=0^+$} & \multicolumn{3}{|c||}{$J^P=1^+$} &
\multicolumn{3}{|c|}{$J^P=0^+$} \\
\hline
QM &2339 &2847 &QM &2421 &2555 &
QM &2241 &2713 \\
Exp. &2317.8$\pm$0.6 &$-$ &Exp. &2459.6$\pm$0.6 &$2535.4 \pm 0.6$ &
Exp. &2352$\pm$50 &$-$\\
\hline
P($cn\bar s\bar n$) &28 &55 &P($cn\bar s\bar n$) &25 &$\sim 1$ &
P($cn\bar n\bar n$) &46 &49 \\
P($c\bar s_{1^3P}$) &71 &25 &P($c\bar s_{1^1P}$) &74 &$\sim 1$ &
P($c\bar n_{1P}$) &53 &46 \\
P($c\bar s_{2^3P}$) &$\sim 1$ &20 &P($c\bar s_{1^3P}$)&$\sim 1$ &98 &
P($c\bar n_{2P}$) &$\sim 1$ &5 \\
\hline
\hline
QM &5679 &6174 &QM &5713 &5857
&QM &5615 &6086 \\
\hline
P($bn\bar s\bar n$) &30 &51 &P($bn\bar s\bar n$) &24 &$\sim 1$
&P($bn\bar n\bar n$) &48 &46 \\
P($b\bar s_{1^3P}$) &69 &26 &P($b\bar s_{1^1P}$) &74 &$\sim 1$
&P($b\bar n_{1P}$) &51 &47 \\
P($b\bar s_{2^3P}$) &$\sim 1$ &23 &P($b\bar s_{1^3P}$) &$\sim 1$ &99
&P($b\bar n_{2P}$) &$\sim 1$ &7 \\
\hline
\end{tabular}
\end{center}
\end{table}
Another interesting scenario where four-quark states may help in the understanding
of the experimental data is the open-charm meson sector~\cite{Vij06,Vij08,Vij09}.
The positive parity open-charm mesons present unexpected properties quite different
from those predicted by quark potential models if a pure $c\bar q$ configuration
is considered. We include in Table~\ref*{t3} some results considering the mixing
between $c\bar q$ configurations and four-quark states.
Let us first analyze the nonstrange sector. The
$^{3}P_{0}$ $c\bar n$ pair and the $cn\bar n\bar n$
have a mass of
2465 MeV and 2505 MeV, respectively. Once the mixing is considered
one obtains a state at 2241 MeV with 46\% of four-quark component
and 53\% of $c\bar n$ pair. The lowest state, representing
the $D^*_0(2308)$, is above the isospin-preserving threshold $D\pi$,
being broad as observed experimentally.
The mixed configuration compares much better with
the experimental data than the pure $c\bar n$ state.
The orthogonal state appears higher in energy, at 2713 MeV, with
an important four-quark component.
Concerning the strange sector, the $D_{sJ}^*(2317)$ and the $D_{sJ}(2460)$
are dominantly $c\bar s$ $J=0^+$ and $J=1^+$ states, respectively,
with almost 30\% of four-quark component. Although not dominant,
the four-quark component is fundamental in shifting the mass of the unmixed states to
the experimental values below the $DK$ and $D^*K$ thresholds.
Since both states lie below their isospin-preserving
two-meson threshold, the only allowed strong decay, to
$D_s^* \pi$, would violate isospin and is expected to
have a small width. This width has been estimated assuming either a $q\bar q$ structure
\cite{God03,Bar03}, a four-quark state \cite{Nie05} or vector meson dominance \cite{Col03}
obtaining in all cases a width of the order of 10 keV.
The second isoscalar $J^P=1^+$ state, with an energy of 2555 MeV and
98\% of $c\bar{s}$ component, corresponds to the $D_{s1}(2536)$.
Regarding the $D_{sJ}^*(2317)$, it has been argued that a
possible $DK$ molecule would be preferred with
respect to an $I=0$ $cn\bar s\bar n$ tetraquark,
which would anticipate an $I=1$ $cn\bar s\bar n$ partner
nearby in mass \cite{Barb3}.
The present results support the latter argument, namely, the vicinity
of the isoscalar and isovector tetraquarks. However, the coupling
between the four-quark state and the $c\bar s$ system, only allowed
for the $I=0$ four-quark states due to isospin conservation, opens
the possibility of a mixed nature for the $D_{sJ}^*(2317)$, the
remaining $I=1$ pure tetraquark partner appearing much higher in
energy.
The $I=1$ $J=0^{+}$ and $J=1^{+}$ four-quark states appear above
2700 MeV and cannot be shifted to lower energies.
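The pattern described above can be mimicked by a two-level toy model. In the sketch below the diagonal entries are the unmixed masses quoted in the text for the nonstrange sector (2465 and 2505 MeV), while the off-diagonal coupling $v$ is a purely hypothetical number, not the actual matrix element of the CQC model; the point is only that a sizable coupling pushes one eigenstate well below both bare masses and leaves two strongly mixed states.

```python
import numpy as np

# Two-level mixing sketch: a bare 3P0 c nbar state and a c n nbar nbar tetraquark.
# Bare masses taken from the text; the coupling v is an illustrative guess.
m_qq, m_4q, v = 2465.0, 2505.0, 250.0    # MeV
H = np.array([[m_qq, v], [v, m_4q]])
E, W = np.linalg.eigh(H)                 # eigenvalues in ascending order
P_4q = W[1, :] ** 2                      # four-quark probability per eigenstate
print(E, P_4q)
```

The lower eigenstate always lies below $\min(m_{q\bar q},m_{4q})$, in qualitative analogy with the 2241 MeV state discussed above.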
\begin{center}
\begin{table}[h!!]
\begin{center}
\caption{Heavy-light four-quark state properties for selected
quantum numbers. All states have positive parity and total orbital
angular momentum $L=0$. Energies are given in MeV. The notation
$M_1M_2\mid_{\ell}$ stands for mesons $M_1$ and $M_2$ with a
relative orbital angular momentum $\ell$. $P[| \bar 3
3\rangle_c^{12}(| 6\bar 6\rangle_c^{12})]$ stands for the
probability of the $3\bar 3(\bar 6 6)$ components given in
Equation~(\ref*{eq1a}) and $P[\left.\mid 11 \right\rangle_c(\left.\mid 88 \right\rangle_c)]$ for the $11(88)$ components
given in Equation~(\ref*{eq1b}). $P_{MM}$, $P_{MM^*}$, and
$P_{M^*M^*}$ have been calculated following the formalism
of~\cite{Vij09c}, and they represent the probability of finding
two-pseudoscalar ($P_{MM}$), a pseudoscalar and a vector
($P_{MM^*}$) or two vector ($P_{M^*M^*}$) mesons.} \label{re1}
\begin{tabular}{|c|ccccc|}
\hline
$(S,I)$ & (0,1) & (1,1) & (1,0) & (1,0) & (0,0) \\
Flavor &$cc\bar n\bar n$&$cc\bar n\bar n$&$cc\bar n\bar n$&$bb\bar n\bar n$&$bb\bar n\bar n$\\
\hline
Energy & 3877 & 3952 & 3861 & 10395 & 10948 \\
Threshold & $DD\mid_S$ & $DD^*\mid_S$ & $DD^*\mid_S$ & $BB^*\mid_S$ & $B_1B\mid_P$\\
$\Delta_E$ & $+5$ & $+15$ & $-76$ & $-217$ & $-153$ \\
\hline
$P[| \bar 3 3\rangle_c^{12}]$ & 0.333 & 0.333 & 0.881 & 0.974 & 0.981 \\
$P[| 6 \bar 6\rangle_c^{12}]$ & 0.667 & 0.667 & 0.119 & 0.026 & 0.019 \\
\hline
$P[\left.\mid 11 \right\rangle_c]$ & 0.556 & 0.556 & 0.374 & 0.342 & 0.340 \\
$P[\left.\mid 88 \right\rangle_c]$ & 0.444 & 0.444 & 0.626 & 0.658 & 0.660 \\
\hline
$P_{MM}$ & 1.000 & $-$ & $-$ & $-$ & 0.254 \\
$P_{MM^*}$ & $-$ & 1.000 & 0.505 & 0.531 & $-$ \\
$P_{M^*M^*}$ & 0.000 & 0.000 & 0.495 & 0.469 & 0.746 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{center}
We finally tackled an interesting problem in tetraquark
spectroscopy, the molecular or compact nature of four-quark bound
states. This problem requires the determination of probabilities in
non-orthogonal bases mathematically addressed in~\cite{Vij09c}. We
show in Table~\ref*{re1} some examples of results obtained for
heavy-light tetraquarks. One can see how, independently of their
binding energy, all of them present a sizable octet-octet component
when the wave function is expressed in the~\eref{eq1b} coupling. Let
us first of all concentrate on the two unbound states, $\Delta_E >
0$, one with $S=0$ and one with $S=1$, given in Table~\ref*{re1}. The
octet-octet component of basis~\eref{eq1b} can be expanded in terms
of the vectors of basis~\eref{eq1c} as explained in the previous
section. Then, the probabilities are concentrated into a single
physical channel, $MM$ or $MM^*$ [$MM$ stands for two identical
pseudoscalar $D$ ($B$) mesons and $MM^*$ for a pseudoscalar $D$
($B$) meson together with its corresponding vector excitation, $D^*$
($B^*$)]. In other words, the octet-octet component of the
basis~\eref{eq1b} or~\eref{eq1c} is a consequence of having
identical quarks and antiquarks. Thus, four-quark unbound states are
represented by two isolated mesons. This conclusion is strengthened
when studying the root mean square radii, leading to a picture where
the two quarks and the two antiquarks are far away, $\langle
x^2\rangle^{1/2}\gg 1$ fm and $\langle y^2\rangle^{1/2}\gg 1$ fm,
whereas the quark-antiquark pairs are located at a typical distance
for a meson, $\langle z^2\rangle^{1/2}\le 1$ fm. Let us now turn to
the bound states shown in Table~\ref*{re1}, $\Delta_E < 0$, one in
the charm sector and two in the bottom one. In contrast to the
results obtained for unbound states, when the octet-octet component
of basis~\eref{eq1b} is expanded in terms of the vectors of
basis~\eref{eq1c}, one obtains a picture where the probabilities in
all allowed physical channels are relevant. It is clear that the
bound state must be generated by an interaction that is not
present in the asymptotic channel, sequestering probability from a
single singlet-singlet color vector from the interaction between
color octets. Such systems are clear examples of compact four-quark
states, in other words, they cannot be expressed in terms of a
single physical channel.
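The subtlety of probabilities in a non-orthogonal basis can be seen already in a two-channel toy model. The sketch below uses a generic Mulliken-style partition $c_i\,(Sc)_i$; it is not the actual prescription of~\cite{Vij09c}, only a reminder that the naive squares $|c_i|^2$ fail to sum to one as soon as the overlap matrix $S$ is non-trivial.

```python
import numpy as np

# For a normalized state |psi> = sum_i c_i |e_i> in a non-orthogonal basis with
# overlap matrix S_ij = <e_i|e_j>, the squares |c_i|^2 do not sum to 1, while a
# Mulliken-style partition c_i (S c)_i does.  Generic illustration only; the
# overlap value and amplitudes below are hypothetical.
S = np.array([[1.0, 0.6],
              [0.6, 1.0]])              # two non-orthogonal channels
c = np.array([0.8, -0.5])
c = c / np.sqrt(c @ S @ c)              # normalize so that c^T S c = 1
naive = c ** 2                          # does NOT sum to 1
mulliken = c * (S @ c)                  # sums to 1 by construction
print(naive.sum(), mulliken.sum())
```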
\begin{figure}[h!!]
\begin{center}
\caption{\label{f2}$P_{MM}$ as a function of $\Delta_E$.}
\epsfig{file=fig5.eps,width=4in}
\end{center}
\vspace*{-0.5cm}
\label{fignew}
\end{figure}
We have studied the dependence of the probability of a physical channel
on the binding energy. For this purpose we have considered the simplest
system from the numerical point of view, the
$(S,I)=(0,1)$ $cc\bar n\bar n$ state. Unfortunately, this state
is unbound for any reasonable set of parameters. Therefore, we bind it by multiplying the
interaction between the light quarks by a fudge factor.
Such a modification does not affect the two-meson threshold while
it decreases the mass of the four-quark state. The results are illustrated in
Figure~\ref*{fignew}, showing how in the $\Delta_E\to0$ limit,
the four-quark wave function is almost a pure single physical
channel. Close to this limit one would find what could be defined as
molecular states. When the probability concentrates
into a single physical channel ($P_{M_1M_2}\to 1$) the
system gets larger than two isolated mesons~\cite{Vij09}.
One can identify the subsystems responsible for increasing the size of the four-quark state.
Quark-quark ($\langle x^2\rangle^{1/2}$) and antiquark-antiquark ($\langle y^2\rangle^{1/2}$)
distances grow rapidly while the quark-antiquark distance ($\langle z^2\rangle^{1/2}$)
remains almost constant. This reinforces our previous result, pointing to the appearance
of two-meson-like structures whenever the binding energy goes to zero.
\section{Summary}
\label{summary}
We have presented a detailed analysis of the symmetry properties of a four-quark wave
function and its solution by means of a variational approach for simple
Hamiltonians. The numerical capability of the method has been analyzed.
We have also emphasized the relevance of a correct analysis of the two-meson
thresholds when dealing with the stability of four-quark systems.
We have discussed the potential importance of four-quark structures
in several different systems: the light scalar-isoscalar mesons and the
open-charm mesons. We have also introduced the necessary ingredients
to study the nature of four-quark bound states, distinguishing
between molecular and compact four-quark states.
Although the present analysis has been performed by means of a particular
quark interacting potential, the CQC model, the conclusions derived are independent
of the quark-quark interaction
used. They mainly rely on using the same Hamiltonian to describe tensors of
different order, two and four-quark components in the present case. When dealing with a
complete basis, any four-quark deeply bound state has to be compact. Only
slightly bound systems could be considered as molecular. Unbound states correspond
to a two-meson system. A similar situation would
be found in the two-baryon system: the deuteron could be considered as a
molecular-like state with a small percentage of its wave function on the $\Delta \Delta$ channel,
whereas the $H$-dibaryon would be a compact six-quark state.
When working with central forces, the only way of getting a bound system is to have
a strong interaction between the constituents that are far apart in the asymptotic limit
(quarks or antiquarks in the present case). In this case the short-range
interaction will capture part of the probability of a two-meson threshold to form a bound
state. This can be reinterpreted as an infinite sum over physical states.
This is why
the analysis performed here is so important before any conclusion can be made concerning
the existence of compact four-quark states beyond simple molecular structures.
If the prescription of using the same Hamiltonian to describe all tensors in the Fock space is relaxed,
new scenarios may appear. Among them, the inclusion of many-body forces is particularly relevant.
In~\cite{Vij07ba,Vij07bb} the stability of $QQ\bar n\bar n$ and $Q\bar Q n \bar n$ systems
was analyzed in a simple string model considering only a multiquark confining interaction given
by the minimum of a flip-flop or a butterfly potential in an attempt to discern whether
confining interactions not factorizable as two-body potentials would influence the stability
of four-quark states. The ground state of systems made of two quarks and two antiquarks of
equal masses was found to be below the dissociation threshold. While for the cryptoexotic
$Q\bar Q n\bar n$ the binding decreases when increasing the mass ratio $m_Q/m_n$, for the
flavor exotic $QQ\bar n\bar n$ the effect of mass symmetry breaking is opposite. Other scenarios may emerge
if different many-body forces, like many-body color interactions~\cite{Dmi01a,Dmi01b} or 't Hooft
instanton-based three-body interactions~\cite{Hoo76}, are considered.
\section{Acknowledgements}
This work has been partially funded by Ministerio de Ciencia y Tecnolog\'{\i}a
under Contract No. FPA2007-65748 and by EU FEDER, by Junta de Castilla y Le\'{o}n
under Contracts No. SA016A17 and GR12,
by the Spanish Consolider-Ingenio 2010 Program CPAN (CSD2007-00042),
by HadronPhysics2, a FP7-Integrating Activities and Infrastructure
Program of the European Commission, under Grant 227431, and by
Generalitat Valenciana, PROMETEO/2009/129.
\bibliographystyle{mdpi}
\section{Introduction}\label{Intro}
A $4$-dimensional plane wave spacetime is given, in Brinkmann coordinates \cite{Brinkmann}, by
\begin{equation}
g_{\mu\nu}dX^\mu dX^\nu=\delta_{ij} dX^i dX^j + 2 dU dV + K_{ij}(U) X^i X^j dU^2 \,,
\label{Bmetric}
\end{equation}
where $\mathbf{X}\in \IR^2$ are the transverse coordinates and $U,\, V$ are the light-cone coordinates. The metric (\ref{Bmetric}) admits a covariantly constant, null Killing vector $\xi = \partial_V$ and has a symmetric profile $K_{ij}(U)$. If, in addition, $K_{ij}$ is traceless, then the Ricci tensor vanishes identically, $R_{\mu\nu}=0$, and (\ref{Bmetric}) satisfies the vacuum Einstein equations, i.e., it is an \emph{exact plane gravitational wave} \cite{BoPiRo}.
Recent insight into the ``Memory Effect'' for gravitational waves \cite{Memory,OurMemory,Harte,Faber,Shore,Maluf,Kulczycki, POLPER,Ilderton}
was brought about by a better understanding of their symmetries. For exact plane waves (\ref{Bmetric}) the implicitly known isometry group \cite{BoPiRo,EhlersKundt,Sou73,exactsol,Torre} maps geodesics to geodesics and yields conserved quantities~; conversely, the latter determine the transverse motion (motion in the $\mathbf{X}$-plane) of test particles in the gravitational wave background \cite{Sou73,Carroll4GW}.
The isometries of plane gravitational waves (\ref{Bmetric}) have been identified as L\'evy-Leblond's ``Carroll'' group with broken rotations \cite{Leblond,Carroll4GW,Carrollvs,NewCarroll,Morand:2018tke,Ciambelli:2019lap,SLC}.
However \emph{homotheties}, $h~:~\mathcal{M} \to \mathcal{M}$\,,
\begin{equation}
U \to U,
\quad
{\bm X} \to \chi\, {\bm X},
\quad
V \to \chi^2\, V,
\quad
\chi\,=\mathop{\rm const.}\nolimits
\label{homothety0}
\end{equation}
also play an important role, namely for the integrability of the geodesic equations \cite{AndrPrenc, AndrPrenc2, PKKA}.
The homothety is \emph{not} an isometry though ; it is a \emph{conformal transformation}, i.e., infinitesimally
\begin{equation}
L_Yg_{\mu\nu}=2{\omega}g_{\mu\nu}\,
\label{confodef0}
\end{equation}
for some function $\omega$. For the homothety (\ref{homothety0}) $\omega= 1$.
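This statement can be checked symbolically. The sketch below encodes the metric (\ref{Bmetric}) with a traceless profile parametrized by two arbitrary amplitude functions and verifies that the generator $Y=X^i{\partial}_i+2V{\partial}_V$ of (\ref{homothety0}) satisfies $L_Yg_{\mu\nu}=2g_{\mu\nu}$ as well as $L_Y{\partial}_V=-2\,{\partial}_V$.

```python
import sympy as sp

# Brinkmann metric with traceless profile K_ij X^i X^j, two arbitrary amplitudes.
U, V, x1, x2 = sp.symbols('U V x1 x2')
Ap = sp.Function('A_plus')(U)
Ax = sp.Function('A_cross')(U)
X = [U, V, x1, x2]
Kquad = sp.Rational(1, 2) * Ap * (x1**2 - x2**2) + Ax * x1 * x2
g = sp.Matrix([[Kquad, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
Y = [0, 2 * V, x1, x2]                  # homothety generator X^i d_i + 2V d_V

def lie_metric(Y, g):
    # (L_Y g)_{mn} = Y^a d_a g_{mn} + g_{an} d_m Y^a + g_{ma} d_n Y^a
    L = sp.zeros(4, 4)
    for m in range(4):
        for n in range(4):
            L[m, n] = sum(Y[a] * sp.diff(g[m, n], X[a])
                          + g[a, n] * sp.diff(Y[a], X[m])
                          + g[m, a] * sp.diff(Y[a], X[n]) for a in range(4))
    return L

assert sp.simplify(lie_metric(Y, g) - 2 * g) == sp.zeros(4, 4)   # omega = 1
assert [-sp.diff(Yc, V) for Yc in Y] == [0, -2, 0, 0]            # [Y, d_V] = -2 d_V
```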
Finding all conformal symmetries of (\ref{Bmetric}) is a difficult task which requires a series of constraints to be satisfied whose solution depends on the chosen profile and is found only case-by-case \cite{Sippel, Eardley, MaMa,Keane, HallSteele,KeaTu04}.
On the other hand, plane gravitational waves endowed with a covariantly constant null vector $\xi$ (\ref{Bmetric}) can also be viewed as a ``Bargmann manifold'' for a non-relativistic system in one dimension less.
The underlying non-relativistic motions can be ``Eisenhart-Duval (E-D) lifted'' as null geodesics \cite{Eisenhart,Bargmann,DGH91,dissip}. The Bargmann point of view provides a powerful framework to investigate the symmetries of the associated non-relativistic system.
Each conformal vector field, (\ref{confodef0}), of the metric generates a conserved quantity $\mathcal{Q}$ for null geodesics.
If $Y$ preserves, in addition, the vertical vector $\xi$,
\begin{equation}
L_Y\xi=0,
\label{Lxi0}
\end{equation}
then $\mathcal{Q}$ projects to a conserved quantity for the underlying non-relativistic dynamics. Conformal vector fields which satisfy also (\ref{Lxi0}) generate the ``extended Schr\"odinger group'' ; such isometries span the ``Bargmann group'' \cite{Bargmann,DGH91}.
Up in Bargmann space, though, a conserved quantity
is associated with any conformal vector field, even if the latter does not preserve $\xi$. For instance, the generator $Y_h=(Y_h^\mu)$ of the homothety (\ref{homothety0}) preserves only the \emph{direction} of the vertical vector
\begin{equation}
L_{Y_h}\xi=\psi\,\xi,
\qquad
\psi=-2\chi=\mathop{\rm const.}\nolimits\,
\label{Homocond}
\end{equation}
and generates the charge \eqref{Qhomot} whose conservation determines the vertical coordinates, see sec. \ref{homoSec} below.
The first relation here is in fact a paradigm of the ``chrono-projective condition''
\begin{equation}
L_Y\xi=\psi\,\xi\,\qquad \psi= \mathop{\rm const.}\nolimits
\label{Chronocond}
\end{equation}
which plays a fundamental r\^ole in our investigations.
The constant $\psi$ is called the chrono-projective factor.
This condition has been considered at various instances.
Firstly, it was put forward by Duval et al. \cite{DGH91, 5Chrono}. Remarkably, these authors introduced it originally as a geometric property related to the Newton-Cartan structure of
$d+1$-dimensional \emph{non-relativistic spacetime}. They called it ``chrono-projective property''. Later these same authors realized that chrono-projectivity can actually be derived by lightlike reduction from a $d+1,1$ dimensional \emph{relativistic} spacetime --- their Bargmann space \cite{5Chrono,DuLaz}, --- namely as in \eqref{Chronocond}. See also eqn. \# (4.4) of \cite{DGH91} or \# (5.17)-(5.21) of \cite{DuLaz}. In \cite{NCosmo} it was rebaptized as the \emph{conformal Newton-Cartan} group ; in \cite{Gundry} it was rediscovered under the name of ``enlarged Schr\"odinger group''. In this paper we return to the original terminology proposed in \cite{DThese}.
The condition (\ref{Chronocond}) is \emph{almost} identical to a property noticed by Hall et al \cite{HallSteele}, who pointed out that for pp waves it follows from studying the Weyl tensor -- however with $\psi$ a \emph{function}, rather than a \emph{constant}.
The original definition made in \cite{5Chrono} would allow $\psi$ to be a \emph{function}. However, the additional condition \# (4.8) these authors impose on the connection $\Gamma$ implies that $\psi$ is necessarily a \emph{constant}. It is (\ref{Chronocond}) (and not the formula of \cite{HallSteele}) that reproduces the chrono-projectivity after reduction.
In sec.\ref{ConfChronSec} we show that \emph{all special conformal Killing vectors of a non-flat pp-wave (\ref{Bmetric}) are chrono-projective.} Thus every conformal Killing vector of an Einstein vacuum solution satisfies (\ref{Chronocond}).
In this paper we take advantage, on the one hand, of the chrono-projective condition (\ref{Chronocond}) to simplify the procedure of finding all conformal vectors of gravitational waves and derive, on the other hand, the associated conserved charges.
Our paper is organized as follows: In Sec.\ref{planewaves}, after recalling exact plane waves and conformal transformations, we briefly outline the Bargmann [alias Eisenhart-Duval] approach. Chrono-projective transformations are then introduced.
Conformal transformations and their subgroups in flat space are spelled out in sec. \ref{MinkowskiSec}.
In sec. \ref{JacobiSec}, conserved quantities of null geodesics related to conformal transformations are discussed.
(Symmetries of timelike geodesics were considered recently using non-local conservation laws \cite{Dimakis}).
In sec. \ref{homoSec}, new types of conserved quantities associated with chrono-projective transformations in the Bargmann framework,
generalizing those in \cite{KHarmonies}, are considered.
The chrono-projective transformations of exact plane gravitational waves are studied in sec.\ref{BJRSec},
using Baldwin-Jeffery-Rosen (BJR) \cite{BaJe} coordinates.
In sec.\ref{Examples} we illustrate our general theory on various examples.
\goodbreak
\section{Exact plane gravitational waves}\label{planewaves}
\subsection{Gravitational waves and conformal transformations}\label{GWConfSec}
For generic profile $K_{ij}(U)$, the isometries of an exact gravitational plane wave (\ref{Bmetric}) i.e. diffeomorphisms of spacetime, $f:~\mathcal{M} \to \mathcal{M}$ s.t.
\begin{equation}
f^*g_{\mu\nu}= g_{\mu\nu} \quad\text{\small infinitesimally}\quad
L_Yg_{\mu\nu}=0
\label{isom}
\end{equation}
span a $5$-parameter group \cite{BoPiRo,EhlersKundt,Sou73,Torre,exactsol},
which is in fact the subgroup of the Carroll group in $2+1$ dimensions
with broken rotations \cite{Leblond,Sou73,Carrollvs,NewCarroll,Carroll4GW}.
However the homothety (\ref{homothety0}) \cite{Torre,AndrPrenc}
generated by the vector field
\begin{equation}
Y_{hom}= X^i{\partial}_{i}+2 V{\partial}_{V}\,
\label{infhomo}
\end{equation}
is \emph{not} an isometry but
a \textit{conformal transformation} of the pp-wave metric (\ref{Bmetric}),
\begin{equation}
f^*g_{\mu\nu}= \Omega^2g_{\mu\nu}
\qquad\text{\small infinitesimally}\qquad
L_Yg_{\mu\nu}=2\omega\,g_{\mu\nu}\,.
\label{confom}
\end{equation}
For the homothety (\ref{homothety0})
$\Omega^2=\chi^2=\mathop{\rm const.}\nolimits$. Its role may be understood by looking at the geodesic motion.
For the profile
\begin{equation}
K_{ij}(U){X^i}{X^j}=
\half{{\mathcal{A}_{+}}}(U)\Big((X^1)^2-(X^2)^2\Big)+{\mathcal{A}}_{\times}(U)\,X^1X^2\,,
\label{Bprofile}
\end{equation}
where ${\mathcal{A}_{+}}$ and ${\mathcal{A}}_{\times}$ are the $+$ and $\times$ polarization-state amplitudes \cite{Brinkmann,BoPiRo,EhlersKundt,exactsol}, the geodesics are described by
\begin{subequations}
\begin{align}
& \dfrac{d^2 {\bm X}}{dU^2} - \half\left(\begin{array}{lr}
{{\mathcal{A}_{+}}} &{\mathcal{A}}_{\times}
\\
{\mathcal{A}}_{\times} & -{{\mathcal{A}_{+}}}
\end{array}\right)
{\bm X} = 0\,,
\label{ABXeq}
\\[10pt]
& \dfrac {d^2 V}{dU^2} + \dfrac{1}{4} \dfrac{d{{\mathcal{A}_{+}}}}{dU\,}\Big((X^1)^2 - (X^2)^2 \Big)
+{{\mathcal{A}_{+}}}\Big(X^1\dfrac{dX^1}{dU\,} - X^2\dfrac{dX^2}{dU\,}\Big)
\nonumber
\\[6pt]
&\qquad\,+\dfrac{1}{2} \dfrac {d{\mathcal A}_{\times}}{dU\,} X^1 X^2
+ {{\mathcal A}_{\times}}\Big(X^2\dfrac{dX^1}{dU\,} + X^1\dfrac{dX^2}{dU\,}\Big)
= 0\,.
\label{ABVeq}
\end{align}
\label{ABeqs}
\end{subequations}
Then the homothety (\ref{homothety0}) multiplies the ${\bm X}$~-~equation by $\chi$ and the $V$-equation by $\chi^2$~; trajectories are therefore taken to trajectories, as illustrated on fig.\ref{blowup}.
Alternatively, the geodesic Lagrangian
\begin{equation}
{\mathcal{L}}_{geo}=\half\delta_{ij} \dot{X}^i \dot{X}^j + \dot U \dot V + \half K_{ij}(U) X^i X^j \dot U^2 \,,
\label{geoLagrangian}
\end{equation}
where the dot means derivation w.r.t. an affine parameter $\sigma$ \footnote{We mostly choose $\dot{\{\,\cdot\,\}}= d/dU$.}, scales under the homothety (\ref{homothety0}) as,
\begin{equation}
{\mathcal{L}}_{geo} \to \chi^2\, {\mathcal{L}}_{geo}\,,
\label{Lscale}
\end{equation}
implying again that the trajectories go into trajectories~:
The geodesic motion in such a background is thus \emph{scale invariant}. We note that all of these $4D$ trajectories project to the same curve ${\bm X}(U)$ in the transverse plane. Let us record for further use that
\begin{equation}
{\mathcal{L}}_{geo}=0
\end{equation}
for \emph{null geodesics}, which are thus homothety-invariant by (\ref{Lscale}).
We note that the transverse equations (\ref{ABXeq}) are decoupled from the ``vertical'' one, (\ref{ABVeq}), and can be solved separately. Once ${\bm X}(U)$ has been determined, the result should be inserted into (\ref{ABVeq}), which can then be integrated. Analytic solutions are difficult to find, and therefore the best is to use numerical integration \cite{POLPER}. As will be further discussed in sec.\ref{homoSec}, the ``new'' conserved charge $\mathcal{Q}_{hom}$ associated with the homothety provides an alternative way to derive the vertical motion.
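As a minimal numerical illustration, one may integrate (\ref{ABeqs}) for a hypothetical Gaussian pulse, ${\mathcal{A}_{+}}(U)=e^{-U^2}$, ${\mathcal{A}}_{\times}=0$ (a toy profile, not one taken from the cited works), and verify that rescaled initial data evolve into the rescaled trajectory, as the homothety (\ref{homothety0}) dictates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Transverse and vertical geodesic equations for a linearly polarized toy pulse
# A_plus(U) = exp(-U^2), A_cross = 0; state y = (x1, x2, x1', x2', v, v').
Ap = lambda U: np.exp(-U**2)
dAp = lambda U: -2.0 * U * np.exp(-U**2)

def rhs(U, y):
    x1, x2, dx1, dx2, v, dv = y
    ddx1 = 0.5 * Ap(U) * x1                     # X'' = (1/2) K X
    ddx2 = -0.5 * Ap(U) * x2
    ddv = -(0.25 * dAp(U) * (x1**2 - x2**2)     # vertical equation
            + Ap(U) * (x1 * dx1 - x2 * dx2))
    return [dx1, dx2, ddx1, ddx2, dv, ddv]

y0 = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
chi = 3.0
y0_scaled = y0 * np.array([chi, chi, chi, chi, chi**2, chi**2])
kw = dict(t_span=(-4.0, 4.0), rtol=1e-10, atol=1e-12)
sol = solve_ivp(rhs, y0=y0, **kw)
sol_s = solve_ivp(rhs, y0=y0_scaled, **kw)
# scaled initial data evolve into the scaled trajectory: X -> chi X, V -> chi^2 V
assert np.allclose(sol_s.y[:4, -1], chi * sol.y[:4, -1], rtol=1e-6)
assert np.allclose(sol_s.y[4:, -1], chi**2 * sol.y[4:, -1], rtol=1e-6)
```

Because (\ref{ABXeq}) is linear in ${\bm X}$ and the source of (\ref{ABVeq}) is quadratic, the scaling ${\bm X}\to\chi{\bm X}$, $V\to\chi^2 V$ is exact, and the check succeeds up to integration error.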
For further use, we record some basic facts about the conformal transformations (\ref{confom})~: for a conformally flat spacetime, the number of conformal transformations is 15,
but for a non-conformally-flat spacetime, their maximum number is 7 \cite{HallSteele}. 5 of them are isometries and one is the homothety. A 7th transformation, which may or may not be an isometry, exists only under special conditions, see sec. \ref{Examples}.
\subsection{All conformal Killing vectors of vacuum pp-waves are chrono-projective
}\label{ConfChronSec}
The aim of this subsection is to prove the theorem stated in the title.
In their seminal paper Maartens and Maharaj \cite{MaMa} have shown that for a pp-wave
the conformal factor of the most general conformal Killing vector $\bm{Y}$ is (cf. their Eqns.\# (29-32))
\begin{equation}
\omega (U, X, V) = \mu V + a'_i(U) X^i + b(U)\,,
\label{MaMaomega}
\end{equation}
where the constant $\mu$ and the functions $a_i(U)$ and $b(U)$ can be determined by a case-by-case calculation using the additional constraints.
Now,
\begin{equation}
L_{\bm{Y}} \partial_V = - \Big[ 2\mu V + a'_i(U) X^i + 2b(U) -a'(U)\Big] \partial_V - \Big[\mu X^i + a_i (U) \Big]\partial_i\,,
\label{MaMaLie}
\end{equation}
which does not seem to be parallel to $\xi={\partial}_V$. Accordingly, the conformal factor (\ref{MaMaomega}) depends on all coordinates. However, in order to conclude whether $\bm{Y}$ is chrono-projective or not, one needs to deal with profile-dependent integrability conditions. Taking into account the additional integrability constraints, Maharaj and Maartens found, after tedious calculations, that a \emph{special conformal Killing vector} ${\bf W}$ in a non-flat pp-wave takes the form below (a conformal Killing vector is called \emph{special} when its conformal factor
satisfies $\omega_{;\mu\nu}=0$):
\begin{subequations}
\begin{align}
\bm{W} &= \rho (U^2 \partial_U + \frac{1}{2}\delta_{ij}X^i X^j \partial_V + U X^i\partial_i ) + \bm{Z}
\label{MaMaW}
\\
\label{mamaHom}
\bm{Z} &= \phi(2V \partial_V + X^i\partial_i) + \bm{X},
\\
\label{mamaKilling}
\bm{X} &= (\alpha U + \beta)\partial_U + (\lambda - \alpha V + c'_i(U)X^i )\partial_V + (c_i + \gamma \epsilon_{ij}X^j)\partial_i
\end{align}
\end{subequations}
where $\rho, \phi, \alpha, \beta, \gamma$ are constants and $c_i(U)$ is a function, cf. their eqn. $\#$ (56). The first term in $\bm{Z}$ is a homothety ; ${\bm X}$ is a Killing vector.
An additional integrability condition, their eqn. \# (55), should also be satisfied.
\emph{The special conformal Killing vector $\bm{W}$ is chrono-projective}, with conformal and chrono-projective factors
\begin{equation}
\omega = \omega(U) = \rho U + \phi \;{\quad\text{and}\quad}\; \psi=\alpha -2\phi\,,
\end{equation}
respectively. To complete the proof it is enough to remember that \emph{every conformal Killing vector of an Einstein vacuum pp-wave is special conformal} \cite{MaMa}.
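The chrono-projective property of $\bm{W}$ can also be checked mechanically: since $L_{\bm{W}}\xi=[\bm{W},{\partial}_V]=-({\partial}_V W^\mu)\,{\partial}_\mu$, only the $V$-dependence of $\bm{W}$ matters. The sketch below encodes (\ref{MaMaW})-(\ref{mamaKilling}) (with an arbitrary sign convention for the $\epsilon_{ij}$ rotation term, which drops out of the check anyway).

```python
import sympy as sp

# Components of the Maartens-Maharaj special conformal Killing vector W in the
# order (U, V, X^1, X^2); the sign of the gamma (rotation) term is a convention.
U, V, x1, x2 = sp.symbols('U V x1 x2')
rho, phi, alpha, beta, gamma, lam = sp.symbols('rho phi alpha beta gamma lambda')
c1, c2 = sp.Function('c1')(U), sp.Function('c2')(U)
W = [rho * U**2 + alpha * U + beta,
     rho * (x1**2 + x2**2) / 2 + 2 * phi * V + lam - alpha * V
     + sp.diff(c1, U) * x1 + sp.diff(c2, U) * x2,
     rho * U * x1 + phi * x1 + c1 + gamma * x2,
     rho * U * x2 + phi * x2 + c2 - gamma * x1]
LWxi = [-sp.diff(comp, V) for comp in W]     # components of [W, d_V]
assert LWxi == [0, alpha - 2 * phi, 0, 0]    # chrono-projective, psi = alpha - 2 phi
```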
\goodbreak
In an alternative approach inspired by \cite{HallSteele}, one starts with a conformal vector field $Y^\mu, L_Y g_{\mu\nu} = 2\omega g_{\mu\nu}
$, which satisfies
$L_Y C^\mu_{\nu\rho\sigma} =0$ , where $C^\mu_{\nu\rho\sigma}$ is the Weyl tensor.
$k^\mu $ is called a principal null direction when
$
C_{\mu\nu\rho\sigma}k^\sigma = 0.
$
Then
$
L_Y \Big[ C_{\mu\nu\rho\sigma} k^\sigma \Big] = C_{\mu\nu\rho\sigma} L_Y k^{\sigma} = 0.
$
Assuming that the spacetime is non-flat (which excludes, e.g., Minkowski space),
the Weyl tensor is nontrivial, allowing us to conclude that the Lie derivative of the null direction should again be a null direction.
Now a pp-wave is known to be of Petrov type N and to have just one principal null direction, namely our ``vertical vector'' $\xi$. Therefore $L_Y\xi$ is proportional to $\xi$ itself,
\begin{equation}
L_Y\xi=\alpha(X^\mu)\,\xi
\label{Hallcond}
\end{equation}
where
$\alpha(X^\mu)$ is a function, which can be determined only once the conformal vector and the spacetime are given. This is as far as we can go with no further assumptions.
The chrono-projective property (\ref{Chronocond}), i.e.,
$
\alpha(X)=\psi=\mathop{\rm const.}\nolimits
$
may or may not be satisfied at this level, as it is manifest from (\ref{MaMaLie}).
We just mention that calculating the components of the Weyl tensor could lead to an alternative proof of our statement.
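This remark can be illustrated symbolically. For a vacuum pp-wave the Ricci tensor vanishes, so the Riemann and Weyl tensors coincide, and it suffices to check that the Riemann tensor annihilates $\xi={\partial}_V$. A sympy sketch follows; the traceless profile $K={\rm diag}(k(U),-k(U))$ is a sample choice, not the general case:

```python
import sympy as sp

U, V, x1, x2 = sp.symbols('U V x1 x2')
k = sp.Function('k')(U)
X = [U, V, x1, x2]

# Brinkmann metric dX^2 + 2 dU dV + K_ij(U) X^i X^j dU^2,
# with the sample traceless profile K = diag(k(U), -k(U))
g = sp.Matrix([[k*x1**2 - k*x2**2, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                           - sp.diff(g[b, c], X[d]))/2 for d in range(4))
           for c in range(4)] for b in range(4)] for a in range(4)]

# Riemann tensor R^a_{bcd}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], X[c]) - sp.diff(Gamma[a][b][c], X[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][c]
                for e in range(4))
    return sp.simplify(expr)

# Vacuum check: the Ricci tensor R_{bd} = R^a_{bad} vanishes (profile is traceless)
assert all(sp.simplify(sum(riemann(a, b, a, d) for a in range(4))) == 0
           for b in range(4) for d in range(4))

# xi = partial_V is a principal null direction: R^a_{bcV} = 0 for all a, b, c
assert all(riemann(a, b, c, 1) == 0 for a in range(4) for b in range(4)
           for c in range(4))
```

The last assertion is the statement $C_{\mu\nu\rho\sigma}\,\xi^\sigma=0$, i.e., that the ``vertical vector'' is indeed a principal null direction.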
\subsection{The ``Bargmann'' point of view}\label{BargSec}
Further insight can be gained using the
``Bargmann'' framework \cite{Bargmann,DGH91}.
We first recall that the space-time of a $4$-dimensional gravitational wave with metric (\ref{Bmetric}), which we denote by $(\mathcal{M}, g_{\mu\nu})$, can be viewed as the ``Bargmann space'' for a system in $2+1$-dimensional non-relativistic spacetime, obtained by factoring out the integral curves of the covariantly constant ``vertical'' vector $\xi={\partial}_V$. The $4$-dimensional Bargmann manifold will be referred to as ``upstairs'' and the underlying non-relativistic $2+1$-dimensional system as ``downstairs''.
The relativistic (metric) structure of Bargmann space projects to a non-relativistic Newton-Cartan structure \cite{Bargmann}.
The factor space has coordinates $(U,{\bm X})$ with
$U$ playing the r\^ole of non-relativistic time ; the classical motions ``downstairs" are the projections of the null geodesics ``upstairs". See \cite{Bargmann,DGH91} for precise definitions and details.
For example, the null geodesics of 4D flat Minkowski spacetime
written in light-cone coordinates project to free non-relativistic motions in $(2+1)$ dimensions.
More generally, let us consider\footnote{
In 4D, the metric of any solution of the vacuum Einstein equations $R_{\mu\nu}=0$ which is conformal to some vacuum Einstein solution can be brought to the form
\cite{Brinkmann,DGH91},
\begin{equation}
ds^2 = G_{ij}(U,{\bm X})dX^idX^j + 2 dU dV - 2\Phi (U,{\bm X})\, dU^2\,.
\label{Gen4DB}
\end{equation}
where $G_{ij}(U,{\bm X})$ is a possibly $U$ [but not $V$] dependent metric on transverse space.
This is however \emph{not} true in $D\geq5$ dimensions,
allowing for more freedom \cite{Brinkmann,DGH91}.},
\begin{equation}
ds^2 = d {\bm X}^2 + 2 dU dV - 2\Phi(U,{\bm X})\, dU^2\,.
\label{flattransv4D}
\end{equation}
The geodesics are described by the action
$
S=\displaystyle\int \!{\mathcal{L}}_{geo}\,d\sigma
$
with ${\mathcal{L}}_{geo}$ in (\ref{geoLagrangian}).
The equations of motion are
\begin{equation}
\ddot {\bm X} =-(\dot U)^2 \frac{\partial \Phi}{\partial {\bm X}},
\qquad
\ddot U = 0,
\qquad
\frac{\;d}{d\sigma}\big(\dot{V} -2\Phi\, \dot{U}\big) = -\frac{\partial\Phi}{\partial U} \,\dot{U}^2\, .
\label{XUVeq}
\end{equation}
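These equations follow from varying the action; as a consistency check, here is a short sympy computation in one transverse dimension, taking the quadratic Lagrangian ${\mathcal{L}}_{geo}=\frac{1}{2}\dot{\bm X}^{2}+\dot U\dot V-\Phi\,\dot U^{2}$ read off from (\ref{flattransv4D}) (we assume this normalization for (\ref{geoLagrangian})) and a sample potential $\Phi=k(U)X^2$:

```python
import sympy as sp

s = sp.symbols('sigma')
X, U, V = (sp.Function(n)(s) for n in ('X', 'U', 'V'))
# sample potential Phi(U, X) = k(U) X^2, one transverse dimension for brevity
kf = sp.Function('k')
Phi = kf(U)*X**2

# quadratic geodesic Lagrangian read off from the metric dX^2 + 2 dU dV - 2 Phi dU^2
L = sp.Rational(1, 2)*X.diff(s)**2 + U.diff(s)*V.diff(s) - Phi*U.diff(s)**2

def euler(L, q):
    # Euler-Lagrange expression d/dsigma(dL/dq') - dL/dq
    return sp.diff(L, q.diff(s)).diff(s) - sp.diff(L, q)

# variation w.r.t. V:  U'' = 0
assert sp.simplify(euler(L, V) - U.diff(s, 2)) == 0
# variation w.r.t. X:  X'' = -(U')^2 dPhi/dX
assert sp.simplify(euler(L, X)
                   - (X.diff(s, 2) + U.diff(s)**2*sp.diff(Phi, X))) == 0
# variation w.r.t. U:  d/dsigma (V' - 2 Phi U') = -(dPhi/dU)(U')^2
assert sp.simplify(euler(L, U)
                   - ((V.diff(s) - 2*Phi*U.diff(s)).diff(s)
                      + sp.diff(Phi, U)*U.diff(s)**2)) == 0
```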
The related $4D$ geodesic Hamiltonian is
\begin{equation}
{\mathcal{H}}= \frac{1}{2}{{\rm{\bf P}}}^2 +P_U P_V + \Phi({\bm X}, U) P_V^2\, ,
\label{Ham}
\end{equation}
where $\bm{P} = \dot{\bm{X}}$, $P_U = \dot{V} - 2\Phi\, \dot{U}$ and $P_V = \dot{U}$ is a constant.
The Hamiltonian (\ref{Ham}) and the Lagrangian
(\ref{geoLagrangian}) are in fact identical.
As they do not depend explicitly upon $\sigma$, we also have the constraint
\begin{equation}
{\dot {\bm X}}^2 + 2\dot U \dot V - 2\Phi({\bm X},U) (\dot U)^2 =-\epsilon
\label{constraint}
\,,
\end{equation}
where $\epsilon =1$ for timelike geodesics and $\epsilon=0 $ for null geodesics.
Focusing our attention on null geodesics by requiring ${\mathcal{H}}\equiv0$ and $P_V =M$
yields the non-relativistic $2+1-$d Hamiltonian ``downstairs'',
\begin{equation}
H_{NR} = \frac{\bm{P}^2}{2M} + M\Phi(\bm{X}, U) = - P_U\, .
\label{HNRPU}
\end{equation}
$-P_U$ is the Hamiltonian for a non-relativistic particle of mass $M$ downstairs, and $U=M\sigma$ plays the role of Newtonian time. $\Phi({\bm X},U)$ is identified as a [possibly ``time''-dependent] scalar potential. The projected motion is governed by the single equation in (\ref{XUVeq}).
As seen from (\ref{constraint}), the condition ${\mathcal{H}}=0$ implies
\begin{equation}
\dot{V} = -\dot{U} \Big( \frac{1}{2}\frac{\bm{\dot{X}}^2}{\dot{U}^2} - \Phi(\bm{X}, U) \Big)
=- \left(\frac{1}{2M} \left(\frac{d{\bm X}}{dU}\right)^2 -M\Phi(\bm{X}, U)\right) =- L_{NR}\,,
\label{dotVL}
\end{equation}
where $L_{NR}$ is the \emph{non-relativistic Lagrangian}. Therefore the vertical coordinate is essentially (minus) \emph{the classical action} along the path ${\bm X}(\sigma)$,
\begin{equation}
V=V_0-S,\quad S=\int\! L_{NR} \ d\sigma,
\label{Vevol}
\end{equation}
as noticed already by Eisenhart \cite{Eisenhart}.
When the potential $\Phi$ happens not to depend on $U$ explicitly, ${\partial}\Phi/{\partial} U=0$,
eqn. (\ref{XUVeq}) implies that $(\dot{V}-2\Phi\, \dot{U})$ is also conserved; eliminating $\dot{V}$ using (\ref{dotVL})
yields the constant of the motion
$E=\half\left({\dot{\bm X}}/{\dot U}\right)^2 + \Phi ,
$
identified as the conserved energy for unit mass of the projected motion.
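That $E$ is conserved when ${\partial}\Phi/{\partial} U=0$ is easily verified on-shell; a minimal sympy sketch in one transverse dimension:

```python
import sympy as sp

s = sp.symbols('sigma')
X, U = sp.Function('X')(s), sp.Function('U')(s)
Phi = sp.Function('Phi')(X)  # U-independent potential, one transverse dimension

E = sp.Rational(1, 2)*(X.diff(s)/U.diff(s))**2 + Phi

# impose the equations of motion:  X'' = -(U')^2 Phi'(X),  U'' = 0
on_shell = {X.diff(s, 2): -U.diff(s)**2*sp.diff(Phi, X), U.diff(s, 2): 0}
dE = E.diff(s).subs(on_shell)
assert sp.simplify(dE) == 0  # E is constant along the motion
```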
The special choice (\ref{Bmetric}),
\begin{equation}
\Phi(U,{\bm X})=-\half K_{ij}(U){X}^i{X}^j
\end{equation}
where $K_{ij}$ is a traceless symmetric matrix, represents, in Bargmann terms, a \emph{time-dependent anisotropic (attractive or repulsive) harmonic oscillator in the transverse plane} \cite{Carroll4GW,OurMemory,POLPER}.
\goodbreak
For a general Bargmann space, those isometries
(resp. conformal transformations)
which preserve in addition the vertical vector $\xi={\partial}_V$, i.e., which satisfy (\ref{isom}) resp. (\ref{confom}), with the additional condition
\begin{equation}
f_{*} \xi = \xi
\qquad\text{infinitesimally}\qquad
L_Y\xi=0
\label{xifix}
\end{equation}
span the [generalized] \emph{Bargmann} (alias extended Galilei)
resp. the [generalized] \emph{extended Schr\"odinger} \emph{group/algebra}. One can prove that the conformal factors $\Omega$ resp.
$\omega$ depend only on $U$ \cite{Bargmann,DGH91}.
The homothety (\ref{homothety0}) belongs to the [extended] \emph{chrono-projective} group \cite{5Chrono,DGH91,DuLaz} \footnote{The original definition \cite{DThese} is in the Newton-Cartan structure of non-relativistic spacetime.}
defined, in general, by weakening the constraint (\ref{xifix}), \vskip-6mm
\begin{subequations}
\begin{align}
&f^*g_{\mu\nu}= \Omega^2(U)g_{\mu\nu}
\qquad\text{\small infinitesimally}\qquad
L_Yg_{\mu\nu}=2\omega(U)\,g_{\mu\nu}
\label{confombis}
\\
&f_{*} \xi\quad = \Psi\, \xi\quad\quad
\qquad\;\;\text{\small infinitesimally}\qquad
L_Y\xi\quad=\quad\psi\,\xi
\label{chronoxi}
\end{align}
\label{chronoprojdef}
\end{subequations}
where $\Psi$ resp. $\psi$ are constants. It is a further 1-parameter (non-central) extension of the (centrally extended) Schr\"odinger group.
\section{The Minkowski case}\label{MinkowskiSec}
Here we list the generators of the conformal transformations for \emph{flat Minkowski spacetime} $d{\bm X}^2+2dUdV$.
Plane gravitational waves with non-trivial profile $K_{ij}$ will be studied in sec. \ref{BJRSec}.
The flat metric can, alternatively, be thought of as the Bargmann space of a $2+1$ dimensional NR particle (\ref{HNRPU}). The various subalgebras/subgroups can be identified by listing the
generators of ${\rm O}(4,2)$ \cite{HHAP},
\vskip-6mm
\begin{subequations}
\label{confalg}
\begin{align}
Y_U &= \partial_U, \qquad Y^i_T = -\partial^i, \qquad Y_V = -\partial_V, \quad\, \qquad \qquad \qquad\text{translations},
\\
Y_{12} &= X^1\partial^2-X^2\partial^1, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\text{$X^1-X^2$ rotation},
\\
Y^i_{B} &= U\partial^i - X^i\partial_V, \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad \quad\;\;\text{galilean boosts}, \\
Y^i_{AB} &= X^i\partial_U - V\partial^i, \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad \quad\;\text{``antiboosts''},
\label{antib} \\
Y_{UV} &= U\partial_U - V\partial_V, \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad \quad\;\;\text{U-V boost},
\label{UVboostM}
\\
Y_{D} &= 2U\partial_U + X^i\partial^i, \qquad \qquad \qquad \quad \quad \qquad \qquad \qquad \qquad\text{Sch dilatation},
\label{dilationM}\\
Y_{K} &= U^2\partial_U+UX^i\partial^i-\frac{\bm{X}^2}{2}\partial_V, \qquad \qquad \qquad \qquad \qquad \quad\;\text{Sch expansion},
\label{expansionM} \\
Y_{C1} &= \frac{\bm{X}^2}{2}\partial_U -VX^i\partial^i - V^2\partial_V, \qquad \qquad \qquad \qquad \qquad \quad\;\text{$C_1$},
\label{C1} \\
Y^i_{C2} &= X^i U\partial_U + X^i V\partial_V - \Big(\frac{\bm{X}^2}{2} +UV \Big)\partial^i + X^i (X^j\partial^j) \quad\; \text{$C^i_2$}.
\label{C2i}
\end{align}
\end{subequations}
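Several of the claims made below (that $Y_D$ and $Y_K$ are conformal Killing vectors with $\omega=1$ resp. $\omega=U$ for the unit generators, that $Y_{UV}$ is an isometry, and their brackets with $\xi={\partial}_V$) can be checked mechanically; a sympy sketch for the flat metric $2dUdV+d{\bm X}^2$:

```python
import sympy as sp

Uc, Vc, x1, x2 = sp.symbols('U V x1 x2')
coords = [Uc, Vc, x1, x2]

# flat metric 2 dU dV + dX^2 in light-cone coordinates (U, V, x1, x2)
g = sp.Matrix([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def lie_g(Y):
    # (L_Y g)_{mn} = g_{rn} d_m Y^r + g_{mr} d_n Y^r   (g is constant here)
    return sp.Matrix(4, 4, lambda m, n: sum(g[r, n]*sp.diff(Y[r], coords[m])
                                            + g[m, r]*sp.diff(Y[r], coords[n])
                                            for r in range(4)))

Y_D  = [2*Uc, 0, x1, x2]                          # Schroedinger dilation
Y_K  = [Uc**2, -(x1**2 + x2**2)/2, Uc*x1, Uc*x2]  # Schroedinger expansion
Y_UV = [Uc, -Vc, 0, 0]                            # U-V boost

assert sp.simplify(lie_g(Y_D) - 2*g) == sp.zeros(4)     # conformal, omega = 1
assert sp.simplify(lie_g(Y_K) - 2*Uc*g) == sp.zeros(4)  # conformal, omega = U
assert sp.simplify(lie_g(Y_UV)) == sp.zeros(4)          # isometry, omega = 0

# the vertical vector xi = d_V:  [Y, xi]^m = -d_V Y^m
def bracket_with_xi(Y):
    return [-sp.diff(Y[m], Vc) for m in range(4)]

assert bracket_with_xi(Y_D) == [0, 0, 0, 0]   # Y_D preserves xi
assert bracket_with_xi(Y_K) == [0, 0, 0, 0]   # Y_K preserves xi
assert bracket_with_xi(Y_UV) == [0, 1, 0, 0]  # [Y_UV, xi] = xi : psi = 1
```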
\begin{itemize}
\item
The 4D {\bf{Poincar\'e group}} $P_4$ is the $10-$parameter group of isometries generated by
\begin{equation}
\Big\{Y_U, \ Y^i_T, \ Y_V, Y_{12}, \ Y^i_{B}, \ Y^i_{AB}, Y_{UV} \Big\}\,,
\end{equation}
some of which do not preserve the vertical vector $\partial_V$,
$
[Y^i_{AB}, \partial_V] =\partial^i, \; [Y_{UV}, \partial_V] = \partial_V.
$
\item
The isometries which \emph{do} preserve the vertical vector $\xi={\partial}_V$ (\ref{xifix}) provide us with the $7$-parameter {\bf{Bargmann group}} \cite{Bargmann,DGH91}, whose
Lie algebra, defined by $({\cal{L}}_Y g)_{\mu\nu}= 0, \; [Y, \partial_V] = 0$,
is spanned by
\begin{equation}
\Big\{Y_U, \ Y^i_T, \ Y_V, Y_{12}, \ Y^i_{B} \Big\}.
\label{Bargalg}
\end{equation}
\item
The {\bf{extended Schr\"odinger} group} includes conformal transformations which preserve the vertical vector $\partial_V$. The additional non-isometric transformations are \emph{non-relativistic dilations} $Y_{D}$ (\ref{dilationM}) and \emph{expansions} $Y_{K}$ (\ref{expansionM}) \cite{Schrgroup} which act as
\begin{subequations}
\begin{align}
({{\cal{L}}_{Y_{D}}} g)_{\mu\nu} &= 2\Lambda\ g_{\mu\nu} , \qquad [Y_{D}, \partial_V]=0, \\
({{\cal{L}}_{Y_{K}}} g)_{\mu\nu} &= 2\Lambda U \ g_{\mu\nu} , \quad\; [Y_{K}, \partial_V]=0.
\end{align}
\label{DKextS}
\end{subequations}
The extended Schr\"odinger group has thus 9 parameters, namely,
\begin{equation}
\Big\{Y_U, \ Y^i_T, \ Y_V, Y_{12}, \ Y^i_{B},\ Y_{D},\ Y_{K} \Big\}.
\label{extSalg}
\end{equation}
A detailed discussion can be found in \cite{Bargmann,DGH91}.
\item The {\bf{chrono-projective group}} \cite{DThese,5Chrono,DGH91,DuLaz,NCosmo} is a subgroup of the [relativistic] conformal group, (\ref{confom}), defined by the condition
$[Y, \partial_V] = \psi\, \partial_V$, cf. (\ref{chronoxi}). For $4D$ Minkowski space it has $10$ parameters and is spanned by \cite{5Chrono,DGH91,DuLaz, NCosmo}
\begin{equation}
\Big\{Y_U, \ Y^i_T, \ Y_V, Y_{12}, \ Y^i_{B},\ Y_{D},\ Y_{K},\ Y_{UV} \Big\}.
\label{chronoalg}
\end{equation}
The $U-V$ boost $Y_{UV}$ in (\ref{UVboostM}) is, in particular, a \emph{chrono-projective isometry}~: it satisfies (\ref{chronoprojdef})
with $\omega=0$ and $\psi=1$, respectively.
The homothety (\ref{infhomo}) can be expressed as
\begin{equation}
Y_{hom}=Y_D-2Y_{UV},
\label{homoDuv}
\end{equation}
therefore belongs to the chrono-projective algebra.
These ``Bargmannian'' expressions $Y_K, Y_{UV}$ and $Y_{hom}$ appear in the literature independently, under the name of ``special conformal Killing vectors of the pp wave spacetime'' \cite{MaMa,KeaTu04}.
\item The $(2+1)$D {\bf{Carroll group}} \cite{Leblond,DGH91,Carrollvs,NewCarroll} is the restriction of the Bargmann group to the $3$D submanifold ${\mathcal{C}}$ defined by the constraint $U=0$,
\begin{equation}
f^*g = g,
\qquad
f_{*}\xi=\xi\,,
\qquad
f({\mathcal{C}}) \subset {\mathcal{C}}\, .
\label{Carrolldef}
\end{equation}
It is a $6$-parameter subgroup embedded into $P_4$. $U$-translations $Y_U$ are no longer allowed.
Its generators are,
\begin{equation}
\Big\{Y^i_T,\ Y_V,\ Y^i_{B},\ Y_{12}\Big\}, \quad U=0.
\end{equation}
The Bargmann framework, whose primary aim is to provide a ``relativistic" description of non-relativistic physics, has additional bonuses. One of them is to consider, instead of \emph{projecting} from 4 to 3 dimensions, the
\emph{pull-back} of a given Bargmann metric to the $3$-dimensional submanifold $U=0$ \footnote{The embedding $U=\mathop{\rm const.}\nolimits$ would yield an equivalent construction. See also \cite{HallSteele}.}.
${\cal C}$ has coordinates $({\bm X},V)$ and carries a \emph{Carroll structure} with vertical vector $\xi={\partial}_V$ ; the coordinate $V$ is interpreted as \emph{``Carrollian time''} \cite{DGH91,Carrollvs}.
\item
The {\bf{Schr\"odinger-Carroll group}} is the conformal extension of the Carroll group \emph{within the conformal group} ${\rm O}(4,2)$, obtained by relaxing the isometry condition in (\ref{Carrolldef}) but still requiring that $\partial_V$ be preserved,
\begin{equation}
({\cal{L}}_Y g)_{\mu\nu}= \Omega^2 g_{\mu\nu}, \qquad
[Y, \partial_V] = 0\,, \qquad U=0 \,.
\end{equation}
It has $8$ generators, namely those of the Carroll isometries, augmented by non-relativistic dilations and expansions,
\begin{equation}
\Big\{Y^i_T,\ Y_V,\ Y^i_{B},\ Y_{12},\ Y_{D},\ Y_{K} \Big\}, \quad U=0.
\end{equation}
and is represented by the vector field
\begin{equation}
(\omega^i_j\,X^j+\gamma^i+\lambda{}\,X^i)\frac{\partial}{\partial X^i} + T({\bm X}) {\partial}_V,\qquad
T({\bm X}) =\nu-\boldsymbol{\beta}\cdot{\bm X}
+\half\kappa\,{\bm X}^2,
\label{Sch-carr}
\end{equation}
where $T({\bm X})$ is called, borrowing the Bondi-Metzner-Sachs (BMS) - inspired terminology \cite{BMS=confCarr,confCarroll},
a \emph{supertranslation}.
\item
The {\bf{Chrono-Carroll}} group is a $1$-parameter extension of the Schr\"odinger-Carroll group with the weakened condition (\ref{chronoxi}),
$
[Y,\partial_V] = \psi \,\partial_V.
$
This adds $Y_{UV}$ to the Schr\"odinger-Carroll algebra, yielding 9 generators
\begin{equation}
\Big\{Y^i_T,\ Y_V,\ Y^i_{B},\ Y_{12},\ Y_{D},\ Y_{K},\ Y_{UV} \Big\}, \qquad U=0.
\end{equation}
Infinitesimally, the supertranslation part of (\ref{Sch-carr}) is generalized to
$
T \to {\psi}V +\nu-\boldsymbol{\beta}\cdot{\bm X}
+\half\kappa\,{\bm X}^2\,.
$
\end{itemize}
\goodbreak
\section{Geodesics and their symmetries}\label{JacobiSec}
In this section we revisit some aspects of geodesics
and the conserved quantities associated with Killing resp. conformal Killing vectors.
\subsection{Affinely parametrised geodesics}
A fully covariant action for a
particle (and the only one for a massless particle) is
\begin{equation}
S = \int g_{\mu \nu} \frac{dX^\mu}{d \sigma}\frac{dX^\nu}{d \sigma}
d \sigma\,.
\end{equation}
Variation w.r.t. $X^\mu$ gives the geodesic equations in the form
\begin{equation}
\frac{d^2 X^ \mu}{d \sigma ^2} + \Gamma^\mu _{\alpha \beta} \
\frac{d X^\alpha}{d \sigma} \frac{d X^\beta }{d \sigma} =0.
\label{sigmageo}
\end{equation}
Here the $\Gamma^\mu_{\alpha \beta}$ are
the Christoffel symbols of the metric $g_{\mu \nu}$.
Because there is no explicit dependence on $\sigma$, we have the constraint (\ref{constraint}).
Choosing $\epsilon = m^2 \geq 0$:
$\bullet$ When $m\ne0$,
one sees that
\begin{equation}
|m|d \sigma = d \tau,
\end{equation}
where $m$ is the relativistic mass and
$\tau $ is proper time along the curve
$X^\mu =X^\mu(\sigma)$.
$\bullet$ However, when the geodesic is massless, $m^2 =0$, then $\sigma$ is called an \emph{affine parameter}, and is defined only up to an affine transformation.
The constraint (\ref{constraint}) may be written as
$
g^{\mu \nu} P_\mu P_\nu = -m ^2\,,
$
where
$P_\mu = g_{\mu \nu} \frac{dX^\nu}{d \sigma}$ is the 4-momentum.
If $m^2 \ne 0$ one has $P_\mu = |m| g_{\mu \nu} \frac{dX^\nu}{d \tau}$.
In flat space one may set $P_0=-E$, obtaining the well-known formulae
$
E^2-{\rm{\bf P}}^2 = m^2\,, \, E= \sqrt{m^2 + {\rm{\bf P}}^2}\,.
$
For the general plane gravitational wave (\ref{Bmetric}) the constraint is
\begin{equation}
- K_{ij}(U) X^iX^j P_V^2 +2P_U P_V + m^2 + P_iP_i =0 \,.
\label{PVgen}
\end{equation}
In general the only conserved quantity is $P_V$. If in addition $K_{ij}$ is independent of $U$, we have an additional conserved quantity,
\begin{equation}
P_U = \frac{1}{2 P_V} \bigl (K_{ij}X^iX^j P_V^2 - m^2 - P_iP_i\bigr) \,.
\end{equation}
We mention for completeness the null geodesics lying in the null hypersurfaces $U=\mathop{\rm const.}\nolimits$, referred to as the \emph{null geodesic generators of the null hypersurfaces} $U=\mathop{\rm const.}\nolimits$ ; they may be related to lifts of isotropic geodesics in Newton-Cartan spacetimes \cite{DThese,confCarroll} for which
$V$ is an affine parameter.
\subsection{Killing resp. conformal Killing vectors}
We first recall what happens for \emph{Killing vectors}.
If we define the tangent vector of a curve with general
parameter $\lambda$ by $T^\mu = \frac{dX^\mu}{d \lambda}$,
then a geodesic satisfies
\begin{equation}
T^\alpha T^\mu _{\; ;\, \alpha} = h(\lambda)\, T^\mu
\label{lambdageobis}
\end{equation}
for some function $h(\lambda)$, where the
``\,semicolon $; \alpha $'' denotes covariant derivative.
\goodbreak
\parag\underbar{Killing vectors}.
Suppose first that $Y^\mu $ is a \emph{Killing vector field}; then
it satisfies Killing's equations
\begin{equation}
Y_{\mu ; \alpha} + Y_{\alpha ; \mu}= 0 \,.
\label{Killingeq}
\end{equation}\vskip-5mm
It follows that
\begin{equation}
{\cal E}= Y_\mu T^\mu = g_{\mu \nu}T^\mu Y^\nu \,
\qquad\text{satisfies}\qquad
{\cal E}_{; \alpha}T^\alpha =
\frac{d {\cal E}}{d \lambda} = h(\lambda)\, {\cal E}\,.
\label{KillCons}
\end{equation}
Then we get a conserved quantity for the geodesic motion,
\begin{equation}\bigbox{
\mathcal{Q}_Y=
g_ {\mu \nu}\frac{dX^\mu}{d \sigma} Y^\nu = g_{\mu \nu} \frac{dX^\mu}{d \lambda} \frac{d \lambda}{d \sigma}\, Y^\nu\,,\quad
\frac{d\mathcal{Q}_Y}{d\sigma}=0\,.
\;}
\label{KillingCons}
\end{equation}
Translations along the ``vertical'' vector $\xi={\partial}_V$ are isometries for any metric of the form (\ref{Bmetric}).
The associated conserved quantity $P_V= M$ in (\ref{PVgen})
is identified, in the Bargmann framework, with the mass downstairs (as mentioned in section \ref{BargSec}).
\goodbreak
\parag\underbar{Conformal Killing vectors}.
Now we suppose that we have instead a \emph{conformal Killing vector} $Y^\mu$, i.e., one for which
\begin{equation}
Y_{\mu ;\nu}+ Y_{\nu ; \mu}= 2\omega\, g_{\mu \nu}
\end{equation}
for some function $\omega$. If $\omega={\rm constant}$, $Y^\mu$ is called a \emph{homothetic Killing vector} since it generates a homothety.
For a \emph{timelike geodesic} with tangent vector
$
T^\alpha = \frac{dx^\alpha}{d \tau} \,
$
where $\tau$ is proper time along the geodesic
so that
$
T^\alpha T_{\alpha} =-1\,,
$
we have instead
\begin{equation}
(Y_\alpha T^\alpha)_{\,;\, \mu }T^\mu =-\omega\,.
\end{equation}
Thus, in general, the quantity (\ref{KillCons}), i.e.
$
Y_\alpha T^\alpha= \frac{1}{m}Y^\mu P_\mu
$
(where $P_\mu=m T_\mu$ is the momentum of a particle of mass $m$) is \emph{not} constant
along the world line. From the point of view of the covariant
Hamiltonian treatment, $Y^\mu P_\mu$ is the moment map generating the lift to the
co-tangent bundle of the conformal transformation of the base manifold\,.
In the special case of a homothety when $\omega =\omega_0=\mathop{\rm const.}\nolimits$, we find that
$
\frac{d (Y_\alpha T^\alpha)}{d \tau} = -\omega_0
\,\Rightarrow\,
Y_\alpha T^\alpha = -\omega_0 \tau -\omega_{-1}
\,.
$
Alternatively, differentiating once more, we have
$
{d^2 (Y_\alpha\, T^\alpha)}/{d\tau^2} = 0\,,
$
which is a covariant version of the \emph{Lagrange-Jacobi identity} \cite{JacobiLec}.
Conformal Killing vectors do not generate symmetries for timelike geodesics. However, as observed by Jacobi \cite{JacobiLec}, while $Y_\alpha T^\alpha$ is not in general conserved, the two constants of integration above yield, in modern language, the conserved quantities (\ref{ConsSch}) associated
with non-relativistic dilations and expansions, respectively \cite{DHNC}.
\subsection{Conserved quantities for null geodesics}\label{nullCQ}
By contrast, if one considers an affinely
parametrised \emph{null geodesic} with tangent vector
$
l^\alpha = {dx^\alpha}/{d\sigma}
$
that satisfies
\begin{equation}
g_{\mu \nu} l^\mu l^\nu =0 \,,\qquad
l^\mu_{\; ;\,\nu}\,l^\nu=0
\,,
\end{equation}
we \emph{do obtain a constant of the motion},
\begin{equation}\medbox{
\mathcal{Q}_Y= Y_\mu\, l^\mu \,,\qquad
\frac{d \mathcal{Q}_Y}{d \sigma}= 0\,. \;
}
\label{nullQ}
\end{equation}
\goodbreak
\vskip2mm
$\bullet$ As a first illustration, we re-derive the conserved quantities associated with Schr\"odinger dilations and expansions. For $\Phi(\mathbf{X})=0$ and $\Phi(\mathbf{X}) \propto |{\bm X}|^{-2}$, (\ref{flattransv4D}) describes a free particle and the inverse-square potential, respectively. The generators $Y_D$ and $Y_K$ (\ref{DKextS}) are conformal Killing vectors. Following the procedure outlined in sect. \ref{BargSec}, \eqref{nullQ} yields the conserved Schr\"odinger quantities downstairs \cite{Schrgroup},
\begin{subequations}
\begin{align}
&{\mathcal{D}}=P_i X^i-2E U &\text{dilation}
\label{Consdilat}
\\[4pt]
&{\mathcal{K}}=-EU^2+UP_iX^i-\frac{M}{2}X_i X^i
&\text{expansion}
\label{Consexp}
\end{align}
\label{ConsSch}
\end{subequations}
These quantities are conserved for null geodesics ``upstairs'' and project to well-defined conserved quantities for the non-relativistic motion. In fact ${\mathcal{D}}$ and ${\mathcal{K}}$ close, together with the projected Hamiltonian $H_{NR}$, into an ${\rm o}(2,1)$ algebra \cite{Schrgroup}.
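For the free case ($\Phi=0$) the conservation of (\ref{ConsSch}) along the projected motion can be verified in a few lines (one transverse dimension, $M=1$):

```python
import sympy as sp

t, x0, p = sp.symbols('t x0 p')   # non-relativistic time U = t

# free motion downstairs (M = 1): X(t) = x0 + p t, momentum P = p, energy E = p^2/2
X = x0 + p*t
E = p**2/2

D = p*X - 2*E*t                 # dilation charge, cf. (\ref{Consdilat})
K = -E*t**2 + t*p*X - X**2/2    # expansion charge, cf. (\ref{Consexp})

# both charges are constant along the trajectory
assert sp.simplify(sp.diff(D, t)) == 0
assert sp.simplify(sp.diff(K, t)) == 0
```

One finds $\mathcal{D}=p\,x_0$ and $\mathcal{K}=-x_0^2/2$: constants fixed by the initial data, as expected.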
More generally, for motion along null geodesics eqn. (\ref{nullQ}) associates a conserved quantity to each conformal vector $Y$; if the latter in addition preserves the ``vertical'' vector $\xi={\partial}_V$,
$L_Y\xi=0$
(\ref{xifix}), then this quantity (which we call of Schr\"odinger type) projects to a conserved quantity for the underlying non-relativistic dynamics ``downstairs'' --- this is in fact the original idea of the Bargmann framework \cite{Bargmann,DGH91}.
\goodbreak
\section{Scalings as chrono-projective transformations} \label{homoSec}
Now we present a systematic and detailed discussion of scale transformations in the Bargmann framework.
We start with the homothety (\ref{homothety0}) -- (\ref{infhomo}). Being a conformal vector for the gravitational wave spacetime, (\ref{nullQ}) provides us with
\begin{equation}
\mathcal{Q}_{hom} = X^i P_i+2V P_V\,,
\label{Qhomot}
\end{equation}
where $P_V$ is associated with the ``vertical'' Killing vector ${\partial}_V$. $\mathcal{Q}_{hom}$ is
conserved for \emph{null} (but not for timelike) geodesics, as confirmed also by using the equations of motion (\ref{ABVeq}).
Assuming that the transverse motion $X^i(\sigma)$ has already been determined, the conservation of $\mathcal{Q}_{hom}$ allows us to determine the evolution of the ``vertical'' coordinate,
\begin{equation}
V(\sigma)=\frac{\mathcal{Q}_{hom}}{2P_V}-\frac{X^i(\sigma)P_i(\sigma)}{2P_V}=
\frac{\mathcal{Q}_{hom}}{2P_V}-\frac{1}{4P_V}\frac{\,d}{d\sigma} \big(X^i(\sigma)X_i(\sigma)\big)\,.
\label{VUGW}
\end{equation}
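The conservation of $\mathcal{Q}_{hom}$ is also easy to confirm numerically. The following self-contained sketch integrates a null geodesic with a sample constant traceless profile $K={\rm diag}(1,-1)$ in the gauge $U=\sigma$ (so that $P_V=\dot U=1$), obtaining $\dot V$ from the null constraint at each step:

```python
# Null geodesic of the plane wave dX^2 + 2 dU dV + K_ij X^i X^j dU^2
# with sample profile K = diag(1, -1); gauge U = sigma, so P_V = 1 and P_i = dX_i/dsigma.

def rhs(state):
    x1, x2, v1, v2, V = state
    # transverse equations X'' = K X; the null constraint fixes V':
    #   X'^2 + 2 V' + K_ij X^i X^j = 0
    dV = -0.5*(v1*v1 + v2*v2 + x1*x1 - x2*x2)
    return [v1, v2, x1, -x2, dV]

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs([s + 0.5*h*k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5*h*k for s, k in zip(state, k2)])
    k4 = rhs([s + h*k for s, k in zip(state, k3)])
    return [s + h*(a + 2*b + 2*c + d)/6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def q_hom(state):
    x1, x2, v1, v2, V = state
    return x1*v1 + x2*v2 + 2*V   # X.P + 2 V P_V  with P_V = 1

state = [1.0, 0.5, 0.2, -0.3, 0.0]   # sample initial data (x1, x2, x1', x2', V)
q0 = q_hom(state)
for _ in range(2000):                 # integrate up to U = 2
    state = rk4_step(state, 0.001)
drift = abs(q_hom(state) - q0)
assert drift < 1e-9
```

The drift of $\mathcal{Q}_{hom}$ stays at the level of the integrator's truncation error, confirming the conservation law.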
As explained in sec.\ref{BargSec}, the null dynamics in 4D projects, in the Bargmann framework, to an underlying non-relativistic system in $2+1$D, whereas $P_V$ becomes the mass, $M$ ; $U$ becomes the non-relativistic time. The non-relativistic Hamiltonian and Lagrangian are recovered as in (\ref{HNRPU}) and (\ref{dotVL}), allowing us to express
\begin{equation}
\mathcal{Q}_{hom}=Q_{NR}+2MV_0,\qquad
Q_{NR} = X^i P_i-{2} \int^U\!\! L_{NR} (u) d u\,.
\label{QNR}
\end{equation}
$Q_{NR}$ is thus \emph{conserved ``downstairs''} (as can be confirmed directly using the equations of motion).
More generally, the anisotropic rescaling
\begin{equation}
\label{abcscale}
U \to \mu^b\,U,
\qquad
{X^i} \to \mu^a\, X^i,
\qquad
V \to \mu^{c}\, V\,,
\qquad
\mu=\mathop{\rm const.}\nolimits
\end{equation}
induces, for the Brinkmann metric (\ref{Bmetric}),
$$
g_{\mu\nu}dX^\mu dX^\nu\to\mu^{2a}
\Big(\delta_{ij} dX^i dX^j + \mu^{-2a+b+c}\, 2 dU dV + K_{ij}(\mu^bU) \mu^{2b}X^i X^j dU^2\Big)\,.
$$
This is conformal provided
$
c=2a-b
$ and $K_{ij}(\mu^bU)=\mu^{-2b}K_{ij}(U) \,.
$
Then, for any $b$,
\begin{equation}
g_{\mu\nu}dX^\mu dX^\nu\to\Omega^2g_{\mu\nu}dX^\mu dX^\nu,
\qquad
\Omega=\mu^{a}.
\end{equation}
${\cal{L}}_Y \partial_V = -c\,\partial_V$ implies
that the vector field $Y$ is genuinely chrono-projective whenever $c=2a-b\neq 0$.
The associated conserved quantity
\begin{equation}
\mathcal{Q}_{a,b}=aX^iP_i+b\,UP_U+cVP_V
\label{abcharge}
\end{equation}
induces a conserved charge downstairs,
\begin{equation}
Q_{a,b}=\mathcal{Q}_{a,b}+cV_0P_V
=aX^iP_i-b\,U E-c\left(\int^U\!\!L_{NR}\right)M\,.
\label{Qdown}
\end{equation}
Note here the new ``chrono-projective'' term proportional to the non-relativistic action.
\begin{enumerate}
\item
For $a=b=c=1$ we would get the relativistic (isotropic) dilation $U\to \mu U,\, X^i \to \mu X^i, V \to \mu V $; when $b=2a$ we get Schr\"odinger dilations.
\item
When $b=0$ we recover, for any profile $K_{ij}(U)$, (\ref{QNR}).
\item
If $K_{ij}$ is $U$-independent (as for the Brdi\v{c}ka metric
(\ref{Brdmetric}) below), then $b=0$ ;
\item
$b\neq0$ can be obtained for the [singular] non-trivial profile \cite{Sippel, Eardley,MaMa, Keane} (see also class 11 in Table 4 of \cite{KeaTu04}),
\begin{equation}
K_{ij}(U)=\frac{K_{ij}^0}{U^2}\,,
\quad
K_{ij}^0=\mathop{\rm const.}\nolimits
\label{inverseU2}
\end{equation}
Choosing $a=0$ we obtain a \emph{chrono-projective isometry}
-- namely our \emph{U-V boost} $Y_{UV}$, $U\to \mu U,\, X^i \to X^i, V \to \mu^{-1}V $. Its conserved charge is ``chrono-projective''
\begin{equation}
\mathcal{Q}_{UV}=UP_U-VP_V=- UE+\int^U\!\!L_{NR}-V_0P_V.
\end{equation}
Choosing instead $b=0$, we recover (as said above) the homothety~(\ref{QNR})~\footnote{ The profile
(\ref{inverseU2}) is symmetric also w.r.t. Schr\"odinger dilations.
This is not a surprise, though, because the latter is a combination of a $U$-$V$ boost and of the homothety, as seen before.}.
Thus this example again has a maximal, i.e. $7$-parameter, chrono-projective algebra.
\end{enumerate}
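The scaling conditions used above are elementary to verify; a short sympy check that $c=2a-b$ equalizes the conformal weights of the metric blocks, and that the profile (\ref{inverseU2}) has the required homogeneity for any $b$:

```python
import sympy as sp

mu, a, b, U, K0 = sp.symbols('mu a b U K0', positive=True)
c = 2*a - b

# conformal weight of each metric block under U -> mu^b U, X -> mu^a X, V -> mu^c V
w_trans = 2*a          # dX^2
w_UV = b + c           # 2 dU dV
assert sp.simplify(w_UV - w_trans) == 0   # c = 2a - b makes both blocks scale alike

# the profile K(U) = K0/U^2 satisfies K(mu^b U) = mu^(-2b) K(U),
# so K(mu^b U) mu^(2b) X^i X^j dU^2 also carries weight 2a
K = lambda u: K0/u**2
assert sp.simplify(K(mu**b*U) - mu**(-2*b)*K(U)) == 0
```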
The quotient $z=b/a$ is also called the \emph{dynamical exponent} \cite{ConfGal,Henkel}. The typical relativistic value is $z=1$ ; for Schr\"odinger-type expressions $z=2$. The ``chrono-projective'' contribution to the associated conserved quantity $\mathcal{Q}$, an additional action-integral term, arises when $z\neq2$ \cite{DHNC,ConfGal,Henkel,ZhHhydro}.
\section{Chrono-projective transformations in BJR coordinates}\label{BJRSec}
Having reviewed the Bargmannian aspects, now we turn to a systematic study of chrono-projective transformations of the gravitational wave metric \eqref{Bmetric} with a non-trivial profile $K_{ij}(U)$.
The conformal transformations of pp-waves have been
determined some time ago \cite{Sippel, Eardley, MaMa, Keane}. Below we study them in our case of interest in a novel way.
Motivated by their utility to identify the isometries \cite{Sou73,Carroll4GW,OurMemory}, we switch to Baldwin-Jeffery-Rosen (BJR) coordinates $(u,{\bf x},v)$ \cite{BaJe},
\begin{eqnarray}
U = u \, , \qquad
{\bm X} = P(u)\,{\bf x} \, ,\qquad V = v - \frac{1}{4}{\bf x} \cdot \dot{a}(u) {\bf x} \, ,
\label{BtoBJR}
\end{eqnarray}
where $a(u) = P^\dagger (u) P(u)$, the $2\times2$ matrix $P=(P_{ij})$
being a solution of the Sturm-Liouville problem \cite{OurMemory,SLC}
\begin{equation}
\ddot{P} = K P \, , \qquad P^\dagger \dot{P} = \dot{P}^\dagger P \,.
\label{SLP}
\end{equation}
Then the metric \eqref{Bmetric} takes the form,
\begin{equation}
g = a_{ij}(u)dx^idx^j + 2dudv\,.
\label{BJRmetric}
\end{equation}
The conditions (\ref{chronoprojdef}) determine the form of the general chrono-projective vector field
\begin{equation}
\medbox{
Y = Y^u (x,u)\, {\partial}_u + Y^i(x,u)\, {\partial}_{i} + \big(b(x,u) - \psi\, v\big)\, {\partial}_v \,.
}
\label{ChronoBJR}
\end{equation}
\goodbreak
\vskip2mm
The conformal Killing equation (\ref{chronoprojdef}) requires, \vspace{-3mm}
\begin{subequations}
\begin{align}
&\partial_iY^u = 0\,,
\label{Yia}
\\
&\partial_u Y^u = 2\omega+\psi\,,
\label{Yib}
\\
&{\partial}_uY^v = 0\,,
\label{Yic}
\\
&\partial_i Y^v +(\partial_u Y^j)a_{ij}=0\,,
\label{Yid}
\\
&Y^u (\partial_u a_{ij}) + a_{kj}(u)\partial_i \big(Y^k(x,u)\big)+ a_{ki}\big(\partial_j Y^k(x,u)\big) = 2\omega(u) a_{ij} \,.
\label{Yie}
\end{align}
\label{Yi}
\end{subequations}
Eqn. (\ref{Yia}) implies that $Y^u = Y^u(u)$ hence $\omega = \omega(u)$; then
(\ref{Yib}) can be solved as
\begin{equation}
Y^u(u)= {\epsilon}+ \int_0^u\!\big(2\omega(w)+\psi\big)dw\,.
\label{Yuompsi}
\end{equation}
($\epsilon=\mathop{\rm const.}\nolimits$)
Eqn.
(\ref{Yic}) implies that
$b$ is $u$-independent, $b = b(x)$.
Then $Y^i(x,u)$ can be written as
$
Y^i(x, u) = K^i(u) + F^i(x) + L^i(x,u).
$
Substituting into (\ref{Yid}) we get,
\begin{equation}
\label{csYigenex}
Y^k(x,u) = F^k(x) - H^{ki}(u)\partial_i b(x)\,,
\end{equation}
where $H^{ki}(u)$ is Souriau's $2\times2$ matrix \cite{Sou73,Carroll4GW},
\begin{equation}
H^{ki}(u) = \int_{0}^u a^{ki}(w) dw \,,
\label{Smatrix}
\end{equation}
where $(a^{ij})$ is the inverse matrix, $a^{ij}a_{jk}=\delta^i_k$\,.
Inserting this into (\ref{Yie}), the last condition can be written as
\begin{eqnarray}
\label{csmaineq}
&&-2\omega(u) a_{ij}(u)+Y^u(u) (\partial_u a_{ij}(u)) +
\\[6pt]
&&a_{kj}(u)\Big(\partial_i F^k(x) - H^{km}(u) \partial_i \partial_m b(x)\Big)
+ a_{ki}(u)\Big(\partial_j F^k(x) - H^{km}(u)\partial_j \partial_m b(x) \Big)=0.\nonumber
\end{eqnarray}
Collecting our results,
\begin{subequations}
\begin{align}
&Y^u (u) = \epsilon+ 2 \int^u\!\omega(w) dw + \psi u \,,
\label{BJRu}
\\
&Y^i (x, u) = F^i(x) - H^{ij}(u)\partial_j b(x)\,,
\label{BJRi}
\\
&Y^v (x, v) = b(x) -\psi v\, .
\label{BJRv}
\end{align}
\label{csKvs}
\end{subequations}
Thus ${\epsilon}=\mathop{\rm const.}\nolimits$ is a time translation ; the conformal resp. chrono-projective factors $\omega$ and $\psi$ contribute to time dilations.
Although the functions
$F^i(x)$ and $b(x)$ are generally profile-dependent and can only be determined from (\ref{csmaineq}), we can conclude that
$b(x)$ is at most quadratic in $x$,
\begin{equation}
b(x)= b_{ij}x^ix^j-b_ix^i+h, \quad b_{ij},\, b_i,\, h = \mathop{\rm const.}\nolimits.
\label{bBJR}
\end{equation}
Thus $F^i(x)$ should be at most of first order in $x^j$; see our examples in the next section.
A particular transformation in (\ref{csKvs}) is $\psi(u\partial_u - v\partial_v)$, associated with the chrono-projective factor, which we called before a $u$-$v$ boost, cf. (\ref{UVboostM}).
\parag\underbar{Hamiltonian structure}
The geodesic Lagrangian resp. Hamiltonian are,
in BJR coordinates,
\begin{equation}
{\mathcal{L}} = \frac{1}{2} a_{ij} (u) \dot{x}^i \dot{x}^j + \dot{u} \dot{v}\,,
\qquad
{\mathcal{H}} = \frac{1}{2} a^{ij}p_i p_j + p_u p_v\,,
\label{BJRLagHam}
\end{equation}
where the canonical momenta $p_\mu={\partial}{\mathcal{L}}/{\partial} \dot{x}^{\mu}$ are
$
p_u = \dot{v}, \, p_v = \dot{u}, \, p_i = a_{ij} \dot{x}^j \,\Rightarrow \, \dot{x}^i = a^{ij}p_j\,.
$
By (\ref{nullQ}) the conserved quantity associated with the conformal vectorfield $Y$ is,
\begin{equation}
\medbox{
\mathcal{Q}_Y = Y^\mu p_\mu = Y^u (x,u)\, p_u +Y^i(x,u)\, p_{i} + \big(b(x) - \psi v\big)\, p_v \,.}
\label{ChronoProjQ}
\end{equation}
By using the Poisson bracket,
$
\big\{\mathcal{R},\mathcal{T} \big\} =\frac{{\partial}\mathcal{R}}{{\partial}{x}^\mu}\frac{{\partial}\mathcal{T}}{{\partial}{p}_\mu}-
\frac{{\partial}\mathcal{R}}{{\partial}{p}_\mu}\frac{{\partial}\mathcal{T}}{{\partial}{x}^\mu}\,,
$
the generating vector field is recovered as
$
Y^{\mu}{\partial}_\mu=\{x^{\mu}\, , \mathcal{Q}_Y\}{\partial}_\mu \,.
$
Rewriting the Hamiltonian as,
\begin{equation}
{\mathcal{H}} = p_\mu \dot{x}^\mu - {\mathcal{L}}= \frac{1}{2} (a^{-1})^{ij} p_i p_j +p_u p_v = \frac{1}{2} (g^{-1})^{\mu\nu} p_\mu p_\nu\,,
\label{cHbis}
\end{equation}
we have
\begin{equation}
\big\{\mathcal{Q}, \mathcal{H}\big \} = - \frac{1}{2} \mathcal{L}_Y{g}^{\mu\nu} p_\mu p_\nu,
{\quad\text{and}\quad}
\{\mathcal{Q}, \xi\big\} = -\mathcal{L}_Y(\xi)\,.
\label{QHxiPB}
\end{equation}
By (\ref{chronoprojdef})
a charge which is conserved along null geodesics should satisfy
\begin{equation}
\left\{\mathcal{Q}, {\mathcal{H}} \right\} = 2\omega\, \mathcal{H}\,,
\qquad
\big\{\mathcal{Q}, \xi\big\} = -\psi\, \xi \,
\end{equation}
with $\omega=\omega(u),\, \psi=\mathop{\rm const.}\nolimits$.
These formulas come in handy to check whether a given quantity is conserved or not.
\vskip-5mm
\goodbreak
\parag\underbar{Isometries and homothety}:
As seen from (\ref{csmaineq}), one cannot have a generic profile-independent symmetry unless $Y^u = 0$.
Setting $Y^u = 0$ correlates the conformal and chrono factors, $\omega = -\psi/2 = \mathop{\rm const.}\nolimits$ and $\epsilon = 0$. Then (\ref{csmaineq}) simplifies as
\begin{equation}
\partial_i F^k(x) - H^{km}(u) \partial_i \partial_m h(x) = \omega \delta^k_i.
\end{equation}
Thus, we conclude that $b(x)=-b_i x^i + h$ and $F^i(x) = \omega x^i+f^i$. Inserting all this into (\ref{csKvs}), we obtain the combination $Y = Y_{iso} + Y_{hom}$,
\begin{eqnarray}
\label{BJRiso}
Y_{iso} &=& f^i\partial_i + h\partial_v + b_i (H^{ij}\partial_j - x^i\partial_v) , \quad f^i = \text{const.}, \\
Y_{hom}&=& \omega (x^i\,{\partial}_i+ 2v\,{\partial}_v).
\label{BJRhom}
\end{eqnarray}
$Y_{iso}$ contains the 5 standard isometries (namely the two $\bm{x}$-translations, the $v$-translation and the two $\bm{x}$-boosts), identified as the Carroll group with broken rotations \cite{Leblond,Carroll4GW,Carrollvs}, see sec.\ref{MinkowskiSec}.
In the Hamiltonian framework, they become
\begin{equation}
T^i = \delta^{ij}\,p_j ,\quad
T_v = p_v\,, \quad
B^i = H^{ij}p_j - x^i\, p_v \, ,
\label{5iso}
\end{equation}
and they all commute with the geodesic Hamiltonian ${\mathcal{H}}$ (\ref{cHbis}); the only non-vanishing brackets are\footnote{Note that this is not a central extension; the generators belong themselves to the algebra.}
\begin{equation}
\left\{T^i \,, \,B^j \right\} = \delta^{ij}\, p_v \,.
\end{equation}
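This bracket follows in one line from the definitions (\ref{5iso}): since $T^i$ contains no coordinates and $B^j$ depends on the coordinates only through $-x^j p_v$,
$$
\big\{T^i\,,\,B^j\big\}
= -\frac{{\partial} T^i}{{\partial} p_k}\,\frac{{\partial} B^j}{{\partial} x^k}
= -\,\delta^{ik}\,\big({-\delta^{j}_{\ k}}\,p_v\big)
= \delta^{ij}\, p_v\,.
$$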
Being proportional to $\omega$, $Y_{hom}$ is the homothetic vector field (\ref{infhomo}) exported to BJR coordinates; it induces the
conserved charge for null geodesics
\begin{equation}
\mathcal{Q}_{hom} = x^i p_i+2 v p_v \, ,
\qquad
\{\mathcal{Q}_{hom} , \mathcal{H} \} = 2 \mathcal{H} \, .
\label{homoHPB}
\end{equation}
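This relation can be verified directly from (\ref{cHbis}): since $\mathcal{H}$ contains neither $x^i$ nor $v$, and $\mathcal{Q}_{hom}$ contains no $p_u$, the only surviving terms in the Poisson bracket are
$$
\big\{\mathcal{Q}_{hom}\,,\,\mathcal{H}\big\}
= \frac{{\partial}\mathcal{Q}_{hom}}{{\partial} x^i}\,\frac{{\partial}\mathcal{H}}{{\partial} p_i}
+ \frac{{\partial}\mathcal{Q}_{hom}}{{\partial} v}\,\frac{{\partial}\mathcal{H}}{{\partial} p_v}
= a^{ij}p_i p_j + 2\,p_u p_v = 2\,\mathcal{H}\,.
$$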
We would like to emphasize that the isometries (\ref{BJRiso}) and the homothety (\ref{BJRhom}) are valid for every profile $a_{ij}(u)$ and do not require any integrability equation. For comparison, we note that the homothetic Killing vector found in \cite{MaMa}, their eqn. $\#$ (48), is subject to an integrability condition.
In addition to the isometries and the homothety, other (conformal) symmetries may arise, depending on eqn. (\ref{csmaineq}) (see the next section).
We just mention that the Ricci flatness of the Brinkmann metric (\ref{Bmetric}),
$\mathrm{Tr}(K_{ij})=0$, can be exported to BJR coordinates as \cite{BaJe, Sou73}
\begin{equation}
\mathrm{Tr}\left(\dot{L} + \frac{1}{2} L^2\right) = 0, \quad L = a^{-1}\dot{a}.
\end{equation}
So far we have not used this condition, therefore our solutions (\ref{csKvs}) should apply for the special conformal Killing vectors of any pp-wave.
\section{Examples}\label{Examples}
Now we illustrate our general theory on selected examples.
\subsection{Minkowski case}
In the flat (Minkowski) case $a_{ij} = \delta_{ij}$ and $H^{ij}(u) = u\,\delta^{ij}$ the constraint (\ref{csmaineq}) requires
\begin{equation}
\label{csflatmaineq1}
-2u\partial_i\partial_j b(x) + \big(\partial_i F^j (x) + \partial_j F^i(x)\big) = 2\omega(u)\delta_{ij}
\end{equation}
and a simple calculation yields
the BJR form of the free chrono-projective Lie algebra (\ref{chronoalg}),
\vspace{-7mm}
\begin{subequations}
\begin{align}
&Y^u (u) = \epsilon+ 2 \lambda\, u +\kappa\, u^2 + \psi\, u
\\
&Y^i (x, u) = \omega^{i}_{\,j}x^j + f^i + ub_i +\lambda x^i +\kappa ux^i\,,
\\
&Y^v (x, v) = -\frac{\kappa}{2} {\bf x}^2-{\bf b}\cdot{\bf x} + h -\psi v,
\end{align}
\label{freeCPalg}
\end{subequations}
where $\epsilon, \lambda,\kappa, \psi, \omega^{i}_{\,j}, f^i, b_i, h$ are constants.
Thus
$\epsilon$ generates non-relativistic time translations, $\lambda$ and $\kappa$ are the parameters of Schr\"odinger dilations and expansions, $f^i$ and $h$ generate space and vertical translations, $\omega^{i}_{\,j}$ rotations, and $b_i$ Galilei boosts.
$\psi$ generates $u$-$v$ boosts.
In conclusion, and consistently with what we said in sec. \ref{MinkowskiSec}, we have a 7-dimensional isometry group which satisfies
$
L_Yg_{\mu\nu}=0
$ and
$
L_Y\xi=0
$
and is identified with the Bargmann group (\ref{Bargalg}). The latter is extended to the [extended] Schr\"odinger
group (\ref{extSalg}) by the addition of Schr\"odinger dilations and expansions, which are conformal and leave the vertical vector invariant,
$$
L_Yg_{\mu\nu}=2\omega\,g_{\mu\nu}
{\quad\text{with}\quad}
\omega=\lambda + \kappa u\,
{\quad\text{and}\quad}
L_Y\xi=0\,.
$$
The 10-parameter chrono-projective group is then obtained by adding one more
transformation, for example $u$-$v$ boosts.
Alternatively, by (\ref{homoDuv}), $u$-$v$ boosts can be replaced by the homothety.
As mentioned before, conformal Killing vectors of a plane gravitational wave, eqn. $\#$ (56) in \cite{MaMa}, are chrono-projective. However, flat pp-waves were excluded \footnote{The solutions (\ref{freeCPalg}) can be obtained from (\ref{MaMaW}) by putting $H(u, x^A)=0$ in eqn. $\#$ (55) of \cite{MaMa}. }, explaining why the full $o(4,2)$ algebra (\ref{confalg}) is not recovered~: ``antiboosts" (\ref{antib}) are, for example, not chrono-projective transformations.
Now we turn to examples with non-trivial profile.
\subsection{The Brdi\v{c}ka~ metric}\label{BrdickaSec}
Let us first consider the linearly polarized gravitational wave metric given in Brinkmann coordinates \footnote{See also class 13 in Table 4 of \cite{KeaTu04}.} \cite{Brdicka},
\begin{equation}
dX_1^2+dX_2^2+2dUdV-2\Phi dU^2\,,
\qquad
\Phi=\half\Omega^2\Big(X_1^2-X_2^2\Big),\qquad
\Omega=\mathop{\rm const.}\nolimits\,
\label{Brdmetric}
\end{equation}
The potential $\Phi(\mathbf{X})$ is attractive in the $X_1$ and repulsive in the $X_2$ sector.
We note that
the Brdi\v{c}ka profile is $U$-independent, leaving us with the homothety as the only conformal symmetry.
We switch to BJR coordinates using the solution of the Sturm-Liouville equation (\ref{SLP})
\begin{equation}
P(u)=\mathrm{diag}\Big(\cos\Omega u\,,\, \cosh\Omega u\Big)\,.
\label{BrdminP}
\end{equation}
The induced Brinkmann $\to$ BJR transformation (\ref{BtoBJR}) i.e.
\footnote{Indices are lifted by the transverse metric, $x^i=a^{ij}x_j$.}
\begin{equation}\left\{
\begin{array}{llllll}
U&=&u \qquad
X_1 = x^1 \cos(\Omega u)\qquad
X_2 = x^2\cosh(\Omega u)
\\[6pt]
V &=& v+\dfrac {\Omega}{4}(x^1)^2 \sin(2\Omega u)-\dfrac {\Omega}{4} (x^2)^2 \sinh(2\Omega u)
\end{array}\right.
\label{brdickatraf}
\end{equation}
yields the BJR metric (\ref{BJRmetric}) resp. Souriau matrix
\begin{subequations}
\begin{align}
a(u)&=
\mathrm{diag}\Big(\cos^2 (\Omega u)\,,\, \cosh^2 (\Omega u) \Big)\,,
\label{BrdBJminRprof}
\\
\big(H^{ij}(u)\big) &=
{\Omega}^{-1}
\mathrm{diag} \big(
\tan(\Omega u) \,,\,\tanh(\Omega u)
\big).
\label{BrdSmat}
\end{align}
\end{subequations}
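As a consistency check, note that $\dot{H}^{ij}=a^{ij}$ here, e.g. $\frac{d}{du}\big(\Omega^{-1}\tan(\Omega u)\big)=\cos^{-2}(\Omega u)$, as expected for the Souriau matrix, a primitive of the inverse transverse metric. Moreover, the profile (\ref{BrdBJminRprof}) satisfies the Ricci flatness condition $\mathrm{Tr}\big(\dot{L}+\frac{1}{2}L^2\big)=0$: with
$$
L = a^{-1}\dot{a} = 2\Omega\,\mathrm{diag}\big({-\tan(\Omega u)}\,,\,\tanh(\Omega u)\big)
$$
one finds
$$
\dot{L} + \tfrac{1}{2}\,L^2
= 2\Omega^2\,\mathrm{diag}\Big(\tan^2(\Omega u)-\cos^{-2}(\Omega u)\,,\;
\tanh^2(\Omega u)+\cosh^{-2}(\Omega u)\Big)
= 2\Omega^2\,\mathrm{diag}(-1,1)\,,
$$
whose trace indeed vanishes.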
\vskip-5mm
\parag\underbar{A ``screw-type'' isometry}\footnote{We borrowed the word from
\cite{POLPER}, where a broken $U$ translation combines with a broken rotation into a symmetry, which acts as a ``screw'' \cite{exactsol,Carroll4GW,Ilderton}, as illustrated in fig.\ref{CPPscrewfig} below.}.
When written in Brinkmann coordinates, the metric (\ref{Brdmetric}) is $U$-independent, implying that the $U$-translations
\begin{equation}
U\to U + e
\label{Utr}
\end{equation}
add a 6th manifest isometry to the $5$ standard ones. It is redundant but nonetheless instructive to see how this comes about in BJR coordinates~: $u\equiv U$-translations are still symmetries but their implementation becomes distorted, see fig. \ref{Brdiscrew}.
\goodbreak
\begin{figure}[h]
\includegraphics[scale=.145]{Fig1}
\vskip-4mm
\caption{\textit{\small
The $U$-translation in Brinkmann coordinates (\ref{Utr})
becomes, in BJR coordinates, ``screwed''.
}}
\label{Brdiscrew}
\end{figure}
To find all chrono-projective vectorfields we follow the recipe outlined in sec.\ref{BJRSec}~: starting with the general equations (\ref{csKvs}),
a calculation similar to the one in the free case shows that
\begin{equation}\left\{\begin{array}{lll}
\ddot{\omega} + 2 \Omega^2 (2\omega + \psi) &=& 0
\\
\ddot{\omega} - 2 \Omega^2 (2\omega + \psi) &=& 0
\end{array}\right.
\label{homoscales}
\end{equation}
whose consistency requires $2\omega+\psi = 0$.
Therefore $\omega$ is a constant and $Y^u$ is a mere $u$-translation, $Y^u=\epsilon$.
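Explicitly, adding and subtracting the two equations in (\ref{homoscales}) gives
$$
\ddot{\omega} = 0
{\quad\text{and}\quad}
4\Omega^2\,(2\omega+\psi)=0\,,
$$
and since $\psi$ is a constant, the second relation already forces $\omega=-\psi/2=\mathop{\rm const.}\nolimits$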
Then from (\ref{csmaineq}) we deduce that
\begin{equation}
b(x)=-\frac{\Omega^2}{2}\Big((x^1)^2-(x^2)^2\Big)\,\epsilon\,
-b_ix^i+h
{\quad\text{and}\quad}
F^i(x)=\omega x^i+f^i\,,\; f^i=\mathop{\rm const.}\nolimits.
\label{BrdbF}
\end{equation}
In conclusion, the most general chrono-projective vectorfield for the Brdi\v{c}ka metric has 7 parameters~: 6 isometries and one conformal generator (namely the homothety). In BJR coordinates it is,
\begin{eqnarray}
Y_{B}&& = \epsilon \Big({\partial}_u +
\Omega\big(x^1\tan(\Omega u)\partial_1 - x^2\tanh(\Omega u)\partial_2\big) -\frac{\Omega^2}{2}\big((x^1)^2 - (x^2)^2\big){\partial}_v\Big)
\\[6pt]
&&
\,+f^i{\partial}_i+
h\partial_v +\Big(\dfrac{1}{\Omega}\big(b_1\tan(\Omega u)\partial_1
+ b_2 \tanh (\Omega u) \partial_2\big)-b_ix^i\partial_v\Big)
+ \omega\, \big(x^i{\partial}_i+2v\partial_v\big).\quad
\nonumber
\label{BrdgenCP}
\end{eqnarray}
The conformal resp. chrono-projective factors are
$
\omega = - \psi/2 = \mathop{\rm const.}\nolimits
$ ;
$f^i$ and $h$ generate space and vertical translations, respectively, and the $b_i$ generate boosts.
The parameter $\epsilon\in\IR$ generates the additional isometry induced by $u$-translations.
The ``screw-charge'' that this isometry generates,
\begin{equation}
\mathcal{Q}_{\epsilon}^{B}=
p_u +\Omega\Big(\tan(\Omega u)x^1p_1 - \tanh(\Omega u)x^2p_2 \Big)
-\frac{\Omega^2}{2}\Big((x^1)^2 - (x^2)^2\Big)\,p_v\,,
\label{BrdCC}
\end{equation}
turns out, when expressed in Brinkmann coordinates, to be $P_U$, (minus) the ``Brinkmann'' energy, as expected.
\subsection{``Screw'' for circularly polarized periodic (CPP) waves}\label{CPPSec}
\emph{Circularly polarized periodic waves} (\ref{Bprofile}) with profile \footnote{See also class 14 with $l=0$ in Table 4 of \cite{KeaTu04}.}
\begin{equation}
{{\mathcal{A}_{+}}} (U) = A_0\cos(\omega U),
\qquad
\mathcal{A}_{\times}(U) = A_0\sin(\omega U),
\qquad
A_0=\mathop{\rm const.}\nolimits
\label{PerProf}
\end{equation}
have, beyond the homothety and the usual 5 isometries also a 6th, ``screw'' isometry, obtained by combining broken rotations with broken $U$-translations \cite{exactsol,Carroll4GW,POLPER,Ilderton},
\begin{equation}
Y_{CPP}^{scr}=\partial_U + \frac{\omega}{2} (X^1 \partial_2 - X^2\partial_1)\,.
\label{CPPscrew}
\end{equation}
We find it instructive to outline how this result is recovered using our framework. (We choose $\omega=2$ for simplicity.)
After bringing the system to a $U$-independent form by a suitable rotation (eqn. \#(5.7) of \cite{POLPER}), we solve the Sturm-Liouville equation (\ref{SLP}) for the profile (\ref{PerProf}) as
\begin{equation}
P(u) =
\begin{pmatrix}
{\cos\Omega_-u}\qquad& \frac{\sin\Omega_+u}{\Omega_+}
\\[4pt]
-\frac{\sin\Omega_-u}{\Omega_-}\qquad & {\cos\Omega_+u}
\end{pmatrix}{\quad\text{where}\quad}
\Omega_\pm^2 = 1 \pm \frac{A_0}{2}\,.
\label{CPPSLP}
\end{equation}
Then (\ref{BtoBJR}) yields (\ref{BJRmetric}) with
transverse metric
\begin{equation}
(a_{ij})=
\begin{pmatrix}
\cos^2\Omega_- u + \frac{\sin^2\Omega_- u}{\Omega_-^2}&\frac{\cos\Omega_- u \sin\Omega_+ u}{\Omega_+} -\frac{\cos\Omega_+ u \sin\Omega_- u}{\Omega_-}
\\[4pt]
\frac{\cos\Omega_- u \sin\Omega_+ u}{\Omega_+} -\frac{\cos\Omega_+ u \sin\Omega_- u}{\Omega_-} & \cos^2\Omega_+ u + \frac{\sin^2\Omega_+ u}{\Omega_+^2}
\end{pmatrix}\,.
\label{CPPa}
\end{equation}
The Souriau matrix, calculated using (\ref{CPPa}), is
\begin{eqnarray}
&&(H^{ij}) = \displaystyle\frac{2}{A_0
(\det P)}\times
\\[6pt]
&&\begin{pmatrix}
\frac{\cos\Omega_- u \sin\Omega_+ u}{\Omega_+} - \Omega_- \sin\Omega_- u \cos\Omega_+ u & -1
\\[4pt]
- 1 & \Omega_+ \sin\Omega_+u \cos\Omega_- u - \frac{\sin\Omega_- u\cos\Omega_+ u}{\Omega_-}
\end{pmatrix}\,\qquad\quad \nonumber
\end{eqnarray}
where $\det P$ is the determinant of the Sturm-Liouville matrix (\ref{CPPSLP}),
\begin{equation}
\det P = \cos\Omega_- u\cos\Omega_+u + 2\frac{\sin\Omega_+ u \sin\Omega_- u}{\sqrt{4-A_0^2}}.
\end{equation}
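This expression follows by direct computation from (\ref{CPPSLP}),
$$
\det P = \cos\Omega_- u\,\cos\Omega_+ u + \frac{\sin\Omega_+ u\,\sin\Omega_- u}{\Omega_+\Omega_-}\,,
{\quad\text{where}\quad}
\Omega_+\Omega_- = \sqrt{1-\frac{A_0^2}{4}} = \frac{\sqrt{4-A_0^2}}{2}\,.
$$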
A tedious calculation using (\ref{csmaineq}) then yields the sixth isometry in BJR coordinates as
\begin{equation}
Y_{CPP}^{scr}= \partial_u
-\frac{A_0}{2} \Big((H^{11} x^1- H^{12}x^2 )\partial_1 + (H^{21} x^1 - H^{22}x^2 )\partial_2 \Big) + \frac{A_0}{4}
\Big((x^1)^2 - (x^2)^2\Big)\partial_v\,.
\label{screwBJR}
\end{equation}
Circularly polarized periodic waves thus have 6 isometries and one conformal transformation, namely the homothety.
Figs.\ref{blowup} and \ref{CPPscrewfig} show how
trajectories are taken into trajectories by the homothety and by the screw transformation, respectively.
\begin{figure}[h]
\hskip-7mm
\includegraphics[scale=.145]{Fig2}\,
\null\vskip-4mm
\caption{\textit{\small For the circularly polarized periodic profile (\ref{Bprofile}) with ${{\mathcal{A}_{+}}}(U) = \cos(U),\,
{\cal A}_{\times}(U) = \sin(U)$ the \textcolor{blue}{\bf homothety} (\ref{homothety0}) takes the trajectory with initial condition $(U_0,{\bm X}_0,V_0)$ [in \textcolor{magenta}{\bf magenta}] into that with initial condition $(U_0,\chi\,{\bm X}_0,\chi^2\,V_0)$ [in \textcolor[rgb]{0,0.35,0}{\bf green}].
\label{blowup}
}}
\end{figure}
\begin{figure}[h]
\hskip-7mm
\includegraphics[scale=.145]{Fig3}
\null\vskip-4mm
\caption{\textit{\small Dropping the $V$-coordinate and unfolding the transverse CPP trajectory by adding $U$ yields spirals.
The screw-transformation (\ref{CPPscrew}) (in \textcolor{blue}{\bf blue}) carries the trajectory in \textcolor{magenta}{\bf magenta} into another trajectory (in \textcolor[rgb]{0,0.35,0}{\bf green}).
\label{CPPscrewfig}
}}
\end{figure}
\subsection{``Screw'' with expansion}\label{APSec}
In \cite{AndrPrenc,AndrPrenc2} Andrzejewski and Prencel investigate the memory effect for the linearly polarized gravitational wave with regular $U$-dependent profile \cite{Sippel, Eardley, MaMa, Keane} \footnote{See also class 11ii in Table 4 of \cite{KeaTu04}.}
\begin{equation}
K_{ij}(U) = \frac{\epsilon^2}{(U^2+\epsilon^2)^2}\,\mathrm{diag}(1,-1)\,,
\label{APprof}
\end{equation}
whose explicit $U$-dependence breaks the $U$-translation symmetry.
On the other hand, rescalings are also broken, with the exception of the homothety. However, combining the broken $U$-translation with a broken Schr\"odinger expansion (\ref{expansionM}),
\begin{equation}
{Y}^{scr}=Y_{K} +\epsilon^2 {\partial}_U,
\label{screxpvf}
\end{equation}
which we call here a ``screwed expansion'', generates a conformal transformation
\begin{equation}
L_{Y^{scr}} g_{\mu\nu} = 2U g_{\mu\nu}\,,
\qquad
L_{Y^{scr}}\xi= 0\,.
\end{equation}
The conserved quantity associated with \eqref{screxpvf}
\begin{equation}
\mathcal{Q}^{scr} = (U^2 + \epsilon^2)P_U + U X^i P_i - \frac{\bm{X}^2}{2}P_V
\end{equation}
satisfies
\begin{equation}
\{\mathcal{Q}^{scr},{\mathcal{H}}\} = 2 U {\mathcal{H}} \,
\end{equation}
and is therefore conserved for null geodesics.
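A direct way to see this is to use the Brinkmann-coordinate Hamiltonian, which, in the conventions where the profile enters the metric as $K_{ij}X^iX^j\,dU^2$, reads ${\mathcal{H}}=\frac{1}{2}\,\delta^{ij}P_iP_j+P_UP_V-\frac{1}{2}K_{ij}(U)X^iX^jP_V^2$. A short computation gives
$$
\big\{\mathcal{Q}^{scr},{\mathcal{H}}\big\} - 2U{\mathcal{H}}
= \Big(2U K_{ij} + \tfrac{1}{2}\,(U^2+\epsilon^2)\,\dot{K}_{ij}\Big)X^iX^j P_V^2\,,
$$
which vanishes precisely because the profile (\ref{APprof}) satisfies $\dot{K}_{ij} = -\dfrac{4U}{U^2+\epsilon^2}\,K_{ij}$.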
In conclusion, the Brinkmann metric with profile (\ref{APprof}) provides us with an
example with 5 isometries and \emph{two} conformal generators, namely the homothety and the ``screw" (\ref{screxpvf}).
\goodbreak
\subsection{``Screw'' with U-V boost}\label{IlderSec}
Ilderton \cite{Ilderton,IldertonPC}
mentions that for the [singular] profile
\begin{equation}
K_{ij}(U)=
\frac{K^0_{ij}}{(1+U)^2},
\qquad K^0_{ij}=\mathop{\rm const.}\nolimits
\label{Ilderprof}
\end{equation}
the manifest breaking of $U$-translation invariance can be cured by ``screw-combining'' it with a (broken) boost (\ref{UVboostM}), i.e., $
Y_{UV}^{scr}=Y_U+Y_{UV}.
$
His statement is confirmed by calculating the Poisson bracket of the associated charge with the Hamiltonian,
\begin{equation}
\mathcal{Q}_{UV}^{scr}=P_U + \mathcal{Q}_{UV}\,,
\qquad
\big\{\mathcal{Q}_{UV}^{scr},{\mathcal{H}}\big\}=0.
\end{equation}
The conformal (resp. chrono-projective) factors are
$\omega= 0$ and $\psi= 1\,.$
Adding the homothety, we end up with a chrono-isometry plus a chrono-conformal transformation in addition to the standard 5 isometries.
Another way of understanding this is to observe that $U\to U-1$ carries the profile (\ref{Ilderprof}) to the form (\ref{inverseU2}), whose U-V boost symmetry was established in sec. \ref{homoSec}.
\section{Conclusion}\label{Concl}
Plane gravitational waves have long been known to admit, generically, a 5-parameter isometry group \cite{BoPiRo,EhlersKundt,Sou73,Torre,exactsol}.
The homothety (\ref{homothety0}) is a universal conformal generator.
For a non-conformally-flat spacetime, the maximum number of conformal Killing vectors is $7$ \cite{Sippel, Eardley, MaMa, Keane,exactsol,HallSteele,KeaTu04}.
The associated conserved quantities determine the transverse-space trajectory ${\bm X}(\sigma)$ in (\ref{ABXeq}) \cite{Sou73,Carroll4GW} and, for null geodesics, $\mathcal{Q}_{hom}$ in (\ref{Qhomot}) allows us to also find the vertical motion according to (\ref{VUGW}).
The homothety is a chrono-projective transformation introduced by Duval et al. \cite{DThese,5Chrono}. The fundamental importance of the latter becomes clear in two related contexts.
Firstly, the chrono-projective property (\ref{Chronocond}) is precisely what we need to derive conserved quantities for null geodesics in the gravitational wave spacetime and -- unexpectedly -- also in the underlying non-relativistic dynamics \cite{KHarmonies}. The conserved quantities (\ref{Qdown}) they generate involve a novel term, namely the action integral of the underlying non-relativistic dynamics.
Secondly, for an exact plane gravitational wave, \emph{all} conformal vectors are
chrono-projective \cite{MaMa}. Using the chrono-projective condition makes it simpler to determine the conformal transformations for a given profile, as illustrated in sec. \ref{Examples}. Using BJR coordinates is particularly convenient.
Since we have not made use of Ricci flatness in our calculations in sect.\ref{BJRSec}, our solutions (\ref{csKvs}) should apply for the special conformal Killing vectors of any pp-wave.
Actually, for a type-N (non-flat) null fluid spacetime \cite{KeaTu04}, arbitrary conformal vector fields satisfy the chrono-projective conditions (\ref{confodef0}) and (\ref{Chronocond}). Although different coordinates were used, one can notice the similarity between their eqn. $\#$ (19) and our (\ref{csKvs}). However, unlike in their case, the form of the unknown functions $F(x)$ and $b(x)$ is readily determined, justifying our preference for BJR coordinates.
As illustrated in sect. \ref{Examples}, the isometry group of gravitational waves can, in special instances, be enlarged to 6 parameters \cite{exactsol, Sippel, Eardley, MaMa, Keane}.
This is plainly the case when the profile is $U$-independent so that $U$-translations are isometries; an example is given by the Brdi\v{c}ka metric (\ref{Brdmetric}).
When the profile does depend on $U$, $U$-translations are manifestly broken, however they can, under special circumstances, be combined with another broken symmetry generator yielding an additional ``screw-type'' conformal symmetry \cite{exactsol,POLPER,Ilderton}. Examples are presented in (\ref{inverseU2}) and in sec.\ref{Examples}.
Chrono-projective transformations may extend to the Newton-Cartan framework \cite{DuvalNC, NewtonCartan} as conformal extensions of Carroll manifolds.
At last, we mention that our investigations have some overlap with those in \cite{Morand:2018tke,Igata}, as we discovered during the final phases of this research.
\begin{acknowledgments}
We are grateful to Christian Duval (1947--2018) for his advice during 40 years of friendship and collaboration, and for his contribution at the early stages of this project. This paper is dedicated to his memory. We would like to thank Gary Gibbons for discussions during the long preparation of this paper. We also thank X. Bekaert, P. Kosi\'nski, Krzysztof Andrzejewski \cite{PKKA}, Anton Ilderton \cite{IldertonPC}, Nikolaos Dimakis \cite{DimakisPC} and Thomas Helpin.
ME thanks the \emph{Institute of Modern Physics} of the Chinese Academy of Sciences in Lanzhou and the \emph{Denis Poisson Institute of Orl\'eans-Tours University}.
PH thanks the \emph{Institute of Modern Physics} of the Chinese Academy of Sciences in Lanzhou for hospitality. This work was partially supported by the Chinese Academy of Sciences President's International Fellowship Initiative (No. 2017PM0045), and by the National Natural Science Foundation of China (Grant No. 11975320).
\end{acknowledgments}
\goodbreak
\section{Introduction}
\noindent Electrons close to the Fermi level in strained graphene can be described by the two-dimensional massive Dirac operator \cite{Vozmediano}. De Martino et al \cite{DeMartino} predicted the exis\-tence of infinitely many bound states of the two-dimensional massive Dirac ope\-rator with a dipole potential and that these bound states accumulate with an exponential rate at the edges of the spectral gap. Shortly after, Cuenin and Siedentop~\cite{Cuenin} proved the former statement, whereas the latter one has so far only been proven under the assumption that no point charges are located directly in the graphene sheet (Rademacher and Siedentop \cite{Rademacher}). The purpose of the present article is to extend the result in~\cite{Rademacher} to the case of potentials with finitely many such Coulomb singularities with subcritical coupling constants.\\
The operator of interest acts in $\textnormal{\textsf{L}}^2(\mathbb{R}^2,\mathbb{C}^2)$ and is formally given by the expression
\begin{align}
\begin{split}
F= & \hspace{0.1cm} -i\pmb{\sigma}\cdot\nabla+m\sigma_3+V\,,\\
V= & \hspace{0.1cm} V_{\textnormal{sing}}+V_{\textnormal{reg}}\,,\\
V_{\textnormal{sing}}= & \hspace{0.1cm} \mbox{\footnotesize $\displaystyle\sum\limits_{n=1}^N$}\nu_n|\cdot - x_n|^{-1}
\end{split}
\end{align}
with $\pmb{\sigma}=(\sigma_1,\sigma_2)$, where $\left\{\sigma_k\right\}_{k=1}^3$ are the standard Pauli matrices, $m\in\mathbb{R}^+:=(0,\infty)$ is a strictly positive mass and $V$ is the (real-valued) potential associated to the charge distribution given by a finite, signed Borel measure $\rho$ in $\mathbb{R}^3$ via
\begin{equation}
V:\mathbb{R}^2\rightarrow\mathbb{R}\,, \quad x\mapsto \displaystyle\int\limits_{\mathbb{R}^3}\frac{\textnormal{d}\rho(y)}{|(x,0)-y|}\,,
\end{equation}
where the charge distribution is, accordingly, split into a singular and regular part, viz.,
\begin{equation}
\begin{split}
\rho= & \hspace{0.2cm} \rho_{\textnormal{sing}}+\rho_{\textnormal{reg}}\,,\\
\rho_{\textnormal{sing}}= & \hspace{0.2cm} \mbox{\footnotesize $\displaystyle\sum\limits_{n=1}^N$}\nu_n\delta(\cdot- (x_n,0))\,,
\end{split}
\end{equation}
where the positions $\left\{x_n\right\}_{n=1}^N\subset \mathbb{R}^2$ of the charges of subcritical coupling constants $\left\{\nu_n\right\}_{n=1}^N\subset \left(-1/2,1/2\right)\setminus \{0\}$ are mutually distinct, i.e., $x_k\neq x_j$ whenever $k\neq j$.
The assumptions on $\rho_{\textnormal{reg}}$ will be made in Thm. \ref{main_theorem}.\\
We denote the dipole moment corresponding to $\rho$ by
\begin{equation}\label{dip_mom}
\mathfrak{d}:=\displaystyle\int\limits_{\mathbb{R}^3}(y_1,y_2)\textnormal{d}\rho (y) \in\mathbb{R}^2
\end{equation}
and the radius of a sufficiently large ball around the Coulomb singularities by
\begin{align}
\gamma:=2\max\limits_{n\in\{1,\dots,N\}}\left\{|x_n|\right\}\,.
\end{align}
In order to obtain a physically sensible self-adjoint realization of $F$, we will recall two basic facts proven in \cite{Cuenin}.
\begin{Prop}[distinguished self-adjoint extension (cf. \cite{Cuenin}, Thm. 1, Rem. 1)]\label{dist_sa_ext}
The operator $-i\pmb{\sigma}\cdot\nabla+m\sigma_3+V_{\textnormal{sing}}$ defined on $\textnormal{\textsf{C}}_0^{\infty}\big(\mbox{\small $\mathbb{R}^2\setminus \mbox{\footnotesize $\left\{x_n\right\}_{n=1}^N$},\mathbb{C}^2$}\big)$ has a unique self-adjoint extension $\tilde{D}$ satisfying $\mathscr{D}(\tilde{D})\subset \textnormal{\textsf{H}}^{1/2}(\mathbb{R}^2,\mathbb{C}^2)$.
\end{Prop}
\begin{Prop}[energy gap (\cite{Cuenin}, Prop. 1)]
The essential spectrum of $\tilde{D}$ is given by $\sigma_{\textnormal{ess}}(\tilde{D})=\mathbb{R}\setminus (-m,m)$.
\end{Prop}
To formulate our main result, we will need the following definitions.
\begin{Def}[rescaled Mathieu operator]
For $p\in \mathbb{R}^+$ we define the \textit{rescaled Ma\-thieu operator with periodic boundary conditions} on the domain $\mathscr{D}(M_p)=\textnormal{\textsf{H}}^2(\mathbb{S}^1)$ as
\begin{equation}
M_p\psi:=\big(-\partial^2-p\cos(\cdot)\big)\psi\,.
\end{equation}
\end{Def}
\begin{Def}
For a self-adjoint operator $A$ and a Borel set $I\subset \mathbb{R}\setminus \sigma_{\textnormal{ess}}(A)$ we define the \textit{number of eigenvalues (counting multiplicity)} by
\begin{equation}
\mathcal{N}_I(A):=\textnormal{rank}(\chi_I(A))\,,
\end{equation}
where $\chi$ denotes the indicator function.
\end{Def}
\begin{Def}
We denote a ball of radius $a\in\mathbb{R}_0^+\cup \{\infty\}$ by $B_a:=\left\{x\in\mathbb{R}^2: |x|<a\right\}$.
\end{Def}
\begin{Def}\label{eff_rest_pot}
We introduce the \textit{effective rest potential} $R$, which is obtained from the potential $V$ by subtracting the short range part of $V_{\textnormal{sing}}$ and the long range part of the pure point dipole, i.e.,
\begin{equation}
R:\mathbb{R}^2\rightarrow \mathbb{R}\,, \quad x\mapsto V_{\textnormal{reg}}(x)+\left[V_{\textnormal{sing}}(x)-\frac{\langle\mathfrak{d},x\rangle_{\mathbb{R}^2}}{|x|^3}\right]\chi_{\mathbb{R}^2\setminus B_{\gamma}}(x)\,.
\end{equation}
\end{Def}
The Kato-Rellich theorem implies that
\begin{equation}\label{operator_of_interest}
D:=\tilde{D}+V_{\textnormal{reg}}
\end{equation}
is self-adjoint if the regular part of the potential $V_{\textnormal{reg}}$ is relatively $\tilde{D}$-bounded with relative bound $n_{\tilde{D}}(V_{\textnormal{reg}})<1$.
\begin{The}[exponential accumulation rate]\label{main_theorem}
Let $\tilde{D}$ be the distinguished self-adjoint extension of $-i\pmb{\sigma}\cdot\nabla+m\sigma_3+V_{\textnormal{sing}}$ defined on $\textnormal{\textsf{C}}_0^{\infty}\big(\mbox{\small $\mathbb{R}^2\setminus \mbox{\footnotesize $\left\{x_n\right\}_{n=1}^N$},\mathbb{C}^2$}\big)$ (see Prop. \ref{dist_sa_ext}).
Assume that the following hypotheses hold:
\begin{enumerate}
\item The regular part of the potential $V_{\textnormal{reg}}$ is relatively $\tilde{D}$-bounded with relative bound $n_{\tilde{D}}(V_{\textnormal{reg}})<1$. \label{hyp1}
\item The square of the regular part of the potential $(V_{\textnormal{reg}})^2$ is relatively compact w.r.t. the Laplacian $-\Delta_{\mathbb{R}^2}$ defined on $\textnormal{\textsf{H}}^2(\mathbb{R}^2)$.\label{hyp2}
\item The dipole moment $\mathfrak{d}$ (see (\ref{dip_mom})) is non-zero: $\mathfrak{d}\neq 0$.\label{hyp3}
\item There are neighborhoods of the positions $\left\{x_n\right\}_{n=1}^N$ of the point charges in which the regular part of the potential $V_{\textnormal{reg}}$ is bounded.\label{hyp4}
\item The effective rest potential $R$ (see Def. \ref{eff_rest_pot}) fulfills the following integrability conditions: \label{hyp5}
\begin{enumerate}
\item $R,R^2\in \textnormal{\textsf{L}}^1(\mathbb{R}^2;\log(2+|x|)\textnormal{d}x)$.
\item $|R|_*,(R^2)_*\in \textnormal{\textsf{L}}^1(\mathbb{R}^+;\log_+(r^{-1})\textnormal{d}r)$.
\end{enumerate}
Here, $(\cdot)_*$ denotes the (non-increasing) spherical rearrangement (see \cite{Shargorodsky}).
\end{enumerate}
Then, $D$, defined by (\ref{operator_of_interest}), satisfies
\begin{align} \label{main_statement}
\lim\limits_{E\nearrow m}\frac{\mathcal{N}_{(-E,E)}(D)}{|\log(m-E)|}=\frac{1}{\pi}\textnormal{tr}\left(\sqrt{\left(M_{2m|\mathfrak{d}|}\right)_-}\hspace{0.5mm}\right)\,,
\end{align}
where $(\cdot)_-:=-\min\{\cdot,0\}$ denotes the negative part.
\end{The}
\begin{Rem}
Hypothesis \ref{hyp5}.(a) in Thm. \ref{main_theorem} ensures that the total charge vanishes.
\end{Rem}
\begin{Rem}
The expression on the right of (\ref{main_statement}) is always strictly positive, since the lowest eigenvalue of $M_p$ is negative for all $p\in\mathbb{R}^+$ (see \cite{McLachlan}, 3.25, diagram).
\end{Rem}
\section{The Two-Dimensional Massless Dirac Operator with Subcritical Coulomb Potential}
\noindent Let $\nu\in (-1/2,1/2)\setminus\{0\}$. Since the differential expression of the two-dimensional massless Dirac operator with Coulomb potential $\mbox{\small $-i\pmb{\sigma}\cdot\nabla +\nu |\cdot|^{-1}$}$ acting in $\mbox{\small $\textnormal{\textsf{L}}^2(B_a,\mathbb{C}^2)$}$, where $a\in\mathbb{R}^+\cup \{\infty\}$, commutes with the total angular momentum $-i\partial_{\varphi}+\frac{1}{2}\sigma_3$, it can be decomposed by a unitary map $\mathcal{U}_a:\textnormal{\textsf{L}}^2(B_a,\mathbb{C}^2) \rightarrow \displaystyle\bigoplus\limits_{\kappa\in\mathbb{Z}+1/2}\textnormal{\textsf{L}}^2((0,a),\mathbb{C}^2)$ into
\begin{align}
\displaystyle\bigoplus\limits_{\kappa\in\mathbb{Z}+1/2}\mathbf{d}_{\kappa}^{\nu}\,, \qquad \textnormal{where}\quad \mathbf{d}_{\kappa}^{\nu}:=\begin{pmatrix} \frac{\nu}{r} & -\partial_r-\frac{\kappa}{r} \\ \partial_r-\frac{\kappa}{r} & \frac{\nu}{r} \end{pmatrix}
\end{align}
(see \cite{Thaller}, Section 7.3.3).
We define for $\kappa\in\mathbb{Z}+1/2$ the operator
\begin{align}
d^{\nu}_{\kappa,a}:\textnormal{\textsf{C}}_0^{\infty}((0,a),\mathbb{C}^2) &\rightarrow \textnormal{\textsf{L}}^2((0,a),\mathbb{C}^2)\,,\quad
\psi \mapsto \mathbf{d}^{\nu}_{\kappa}\psi\,.
\end{align}
The fundamental solution of $\mathbf{d}_{\kappa}^{\nu}\Upsilon=0$ in $\mathbb{R}^+$ is a linear combination of $\Upsilon_{\kappa}^{\nu,+}$ and $\Upsilon_{\kappa}^{\nu,-}$, where
\begin{align}
\Upsilon_{\kappa}^{\nu,\pm}(r)=\begin{pmatrix} -\nu \\ \pm \mbox{\footnotesize $\sqrt{\kappa^2-\nu^2}$}-\kappa \end{pmatrix} r^{\pm\sqrt{\kappa^2-\nu^2}}\,,
\end{align}
and hence $\mathbf{d}_{\kappa}^{\nu}$ is in the limit circle case at $\vartheta\in\mathbb{R}^+$ and in the limit point case at $\infty$ and, moreover, it is in the limit circle case at $0$ if and only if $\mbox{\small $\kappa=\pm 1/2$}$ \cite{Morozov1}. Thus, the deficiency indices of $\mbox{\small $d^{\nu}_{\kappa,\infty}$}$ are $\mbox{\small $(1,1)$}$ in case of $\mbox{\small $\kappa=\pm 1/2$}$ and $\mbox{\small $(0,0)$}$, otherwise~\cite{Morozov1},\cite{Weidmann2}; the deficiency indices of $\mbox{\small $d_{\kappa,\vartheta}^{\nu}$}$ are $\mbox{\small $(2,2)$}$ in case of $\mbox{\small $\kappa=\pm 1/2$}$ and $\mbox{\small $(1,1)$}$, otherwise \cite{Weidmann2}.\\\\
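Both claims can be read off from the explicit solutions. First, writing $s:=\sqrt{\kappa^2-\nu^2}$, the first component of $\mathbf{d}^{\nu}_{\kappa}\Upsilon^{\nu,\pm}_{\kappa}$ is
$$
\frac{\nu}{r}\,(-\nu)\,r^{\pm s}
+\big(\mp s-\kappa\big)\big(\pm s-\kappa\big)\,r^{\pm s-1}
=\big({-\nu^2}+\kappa^2-s^2\big)\,r^{\pm s-1}=0\,,
$$
and the second component vanishes analogously, confirming that $\Upsilon^{\nu,\pm}_{\kappa}$ solve $\mathbf{d}^{\nu}_{\kappa}\Upsilon=0$. Second, $r^{-s}$ is square-integrable near $0$ if and only if $s<1/2$; this holds for $\kappa=\pm 1/2$ (where $s=\sqrt{1/4-\nu^2}<1/2$ since $\nu\neq 0$), whereas $s\geq\sqrt{2}$ for $|\kappa|\geq 3/2$, so both solutions lie in $\textnormal{\textsf{L}}^2$ near $0$ exactly when $\kappa=\pm 1/2$, in accordance with the limit circle/limit point dichotomy at $0$.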
We define
\begin{align} D_a^{\nu}: \textnormal{\textsf{C}}_0^{\infty}(B_a\setminus \{0\},\mathbb{C}^2)\rightarrow \textnormal{\textsf{L}}^2(B_a,\mathbb{C}^2)\,, \quad \psi\mapsto \big(-i\pmb{\sigma}\cdot\nabla +\nu |\cdot|^{-1}\big)\psi\,
\end{align}
and denote the distinguished self-adjoint extension (see Prop.~\ref{dist_sa_ext}) of $D_{\infty}^{\nu}$ by $H^{\nu}_{\infty}$.
\begin{Lem}\label{sa_ext_disc_sp}
For all $\vartheta\in\mathbb{R}^+$ there exists a self-adjoint extension $H^{\nu}_{\vartheta}$ of $D^{\nu}_{\vartheta}$ with discrete spectrum.
\end{Lem}
\begin{proof}
It suffices to find self-adjoint extensions $\left\{\mbox{\small $\hat{d}^{\nu}_{\kappa,\vartheta}$}\right\}_{\kappa\in\mathbb{Z}+1/2}$ of $\left\{\mbox{\small $d^{\nu}_{\kappa,\vartheta}$}\right\}_{\kappa\in\mathbb{Z}+1/2}$ with compact resolvents and the property that
\begin{align}\label{res_conv_to_zero}
\left\|(\hat{d}^{\nu}_{\kappa,\vartheta}+i\mathbb{I})^{-1}\right\|\longrightarrow 0\quad\textnormal{as}\quad \kappa\rightarrow \pm\infty\,.
\end{align}
In case of $\kappa=\pm 1/2$, the resolvent of any self-adjoint extension of $d^{\nu}_{\kappa,\vartheta}$ is a Hilbert-Schmidt operator, since $\mathbf{d}_{\kappa}^{\nu}$ is in the limit circle case both at $0$ and at $\vartheta$ (see~\cite{Weidmann2}, Prop. 1.6).
Now, let $\kappa\neq \pm 1/2$. A self-adjoint extension of $d^{\nu}_{\kappa,\vartheta}$ is given by
\begin{gather}
\begin{split}
\hat{d}_{\kappa,\vartheta}^{\nu}: \mathscr{D}(\hat{d}_{\kappa,\vartheta}^{\nu}) \rightarrow \textnormal{\textsf{L}}^2((0,\vartheta),\mathbb{C}^2)\,,\quad
\psi \mapsto \mathbf{d}^{\nu}_{\kappa}\psi\,,\qquad\qquad\qquad\\
\textnormal{where}\qquad\quad\mathscr{D}(\hat{d}_{\kappa,\vartheta}^{\nu}):=\big\{\phi\equiv (\phi_1,\phi_2)\in \textnormal{\textsf{L}}^2((0,\vartheta),\mathbb{C}^2)\cap \textnormal{\textsf{AC}}_{\textnormal{loc}}((0,\vartheta),\mathbb{C}^2):\qquad\\
\mathbf{d}_{\kappa}^{\nu}\phi\in \textnormal{\textsf{L}}^2((0,\vartheta),\mathbb{C}^2),\phi_1(\vartheta)=\phi_2(\vartheta)\big\}\,.\quad
\end{split}
\end{gather}
Indeed, observing that the conditions $\phi_1(\vartheta)=\phi_2(\vartheta)$ and $\left\langle i\sigma_2\phi(\vartheta),\mbox{\small $\Upsilon_{\kappa}^{\nu}$}(\vartheta)\right\rangle_{\mathbb{C}^2}=0$, where
\begin{align}
\Upsilon_{\kappa}^{\nu}:=\big(\mbox{\footnotesize $\sqrt{\kappa^2-\nu^2}$}-\nu+\kappa\big)\Upsilon^{\nu,+}_{\kappa}+\vartheta^{2\sqrt{\kappa^2-\nu^2}}\big(\mbox{\footnotesize $\sqrt{\kappa^2-\nu^2}$}+\nu-\kappa\big)\Upsilon^{\nu,-}_{\kappa}\,,
\end{align}
are equivalent, self-adjointness of $\hat{d}_{\kappa,\vartheta}^{\nu}$ follows from Prop. 1.5 in \cite{Weidmann2}, since $\Upsilon_{\kappa}^{\nu}$ solves $\mathbf{d}_{\kappa}^{\nu}\Upsilon=0$.\\
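In fact, abbreviating $s:=\sqrt{\kappa^2-\nu^2}$, a direct evaluation yields
\begin{align}
\Upsilon_{\kappa}^{\nu}(\vartheta)=-2\nu s\,\vartheta^{s}\begin{pmatrix}1\\1\end{pmatrix}\,,
\end{align}
and since $i\sigma_2\phi(\vartheta)=\left(\phi_2(\vartheta),-\phi_1(\vartheta)\right)^{\intercal}$, the condition $\left\langle i\sigma_2\phi(\vartheta),\mbox{\small $\Upsilon_{\kappa}^{\nu}$}(\vartheta)\right\rangle_{\mathbb{C}^2}=0$ amounts to $\phi_1(\vartheta)=\phi_2(\vartheta)$, provided $\nu\neq 0$.\\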
There are two functions $\Omega_{\kappa}^{\nu,\pm}:\mathbb{R}^+\rightarrow\mathbb{C}^2$ with
\begin{align}\label{sol1}
\Omega^{\nu,+}_{\kappa}(r)=r^{\sqrt{\kappa^2-\nu^2}}\begin{pmatrix}\kappa+\mbox{\footnotesize $\sqrt{\kappa^2-\nu^2}$}\\ \nu\end{pmatrix}+\mathcal{O}\left(r^{1+\sqrt{\kappa^2-\nu^2}}\right)
\end{align}
and
\begin{align}\label{sol2}
\Omega^{\nu,-}_{\kappa}(r)=r^{-\sqrt{\kappa^2-\nu^2}}\begin{pmatrix} \nu \\ \kappa+\mbox{\footnotesize $\sqrt{\kappa^2-\nu^2}$}\end{pmatrix} +\mathcal{O}\left(r^{1-\sqrt{\kappa^2-\nu^2}}\right)
\end{align}
\begin{samepage}\noindent
as $r\rightarrow 0$ forming a fundamental system of solutions of $(\mathbf{d}_{\kappa}^{\nu}+i)\Omega=0$ in $\mathbb{R}^+$ (see~\cite{Morozov2}, Lem.~8). For any $c\in (0,\vartheta)$, the restriction of $\Omega^{\nu,-}_{\kappa}$ to $(0,c)$ is not contained in $\textnormal{\textsf{L}}^2((0,c),\mathbb{C}^2)$. Therefore, the integral kernel of $(\hat{d}^{\nu}_{\kappa,\vartheta}+i\mathbb{I})^{-1}$ is given by
\begin{align}
G_{\kappa,\vartheta}^{\nu}:(0,\vartheta)^2\rightarrow \mathbb{C}^{2\times 2}\,,\quad
(x,y) \mapsto \textnormal{const.}\begin{cases}
\Omega_{\kappa}^{\nu,+}(x)\mbox{\footnotesize $\Big($}\Omega_{\kappa,\vartheta}^{\nu}(y)\mbox{\footnotesize $\Big)$}^{\intercal}\,,\quad x<y,\\\\
\Omega_{\kappa,\vartheta}^{\nu}(x)\mbox{\footnotesize $\Big($}\Omega_{\kappa}^{\nu,+}(y)\mbox{\footnotesize $\Big)$}^{\intercal}\,,\quad x\geq y\,,
\end{cases}
\end{align}
where $\Omega_{\kappa,\vartheta}^{\nu}$ is the solution of $(\mathbf{d}_{\kappa}^{\nu}+i)\Omega=0$ satisfying $\big\langle i\sigma_2\mbox{\small $\Omega_{\kappa,\vartheta}^{\nu}$}(\vartheta),\mbox{\small $\Upsilon_{\kappa}^{\nu}$}(\vartheta)\big\rangle_{\mathbb{C}^2}=0$ (see \cite{Weidmann2}, Prop. 1.6). It follows from (\ref{sol1}), (\ref{sol2}) and the continuity of $\Omega_{\kappa}^{\nu,\pm}$ on $(0,\vartheta]$ that the components of $\Omega_{\kappa}^{\nu,+}(r)$ and $\Omega_{\kappa,\vartheta}^{\nu}(r)$ are bounded for $r\in (0,\vartheta)$ by $Cr^{\sqrt{\kappa^2-\nu^2}}$ and $Cr^{-\sqrt{\kappa^2-\nu^2}}$, respectively, for some $C\in\mathbb{R}^+$. Thus, $G_{\kappa,\vartheta}^{\nu}$ lies in $\textnormal{\textsf{L}}^2((0,\vartheta)^2,\mathbb{C}^{2\times 2})$ and hence $\big(\hat{d}^{\nu}_{\kappa,\vartheta}+i\mathbb{I}\big)^{-1}$ is a Hilbert-Schmidt operator.
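Quantitatively, these bounds give, with $s:=\sqrt{\kappa^2-\nu^2}$ and a suitable $C^{\prime}\in\mathbb{R}^+$,
\begin{align}
\big\|G_{\kappa,\vartheta}^{\nu}\big\|_{\textnormal{\textsf{L}}^2((0,\vartheta)^2)}^2\leq 2C^{\prime}\displaystyle\int\limits_0^{\vartheta}\textnormal{d}y\hspace{0.5mm} y^{-2s}\displaystyle\int\limits_0^{y}\textnormal{d}x\hspace{0.5mm} x^{2s}=2C^{\prime}\displaystyle\int\limits_0^{\vartheta}\textnormal{d}y\hspace{0.5mm}\frac{y}{2s+1}=\frac{C^{\prime}\vartheta^2}{2s+1}<\infty\,.
\end{align}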
A core for $\hat{d}^{\nu}_{\kappa,\vartheta}$ is given by $\mathfrak{C}^{\vartheta}:=\left\{\left(\phi_1,\phi_2\right)\in \textnormal{\textsf{C}}_0^{\infty}((0,\vartheta],\mathbb{C}^2): \phi_1(\vartheta)=\phi_2(\vartheta)\right\}$: the closure of the restriction of $\hat{d}^{\nu}_{\kappa,\vartheta}$ to $\mathfrak{C}^{\vartheta}$ is a closed symmetric strict extension of $\overline{d^{\nu}_{\kappa,\vartheta}}$ and, since the deficiency indices of $d^{\nu}_{\kappa,\vartheta}$ are $\mbox{\small $(1,1)$}$, must coincide with $\hat{d}^{\nu}_{\kappa,\vartheta}$. Indeed, for instance, the function $f\in\mathfrak{C}^{\vartheta}$ given by $f(r):=(1,1)\exp\left[2/(\vartheta-2r)\right]$ for $r\in (\vartheta/2,\vartheta]$ and $f(r):=0$ for $r\in (0,\vartheta/2]$ satisfies $f\not\in\mathscr{D}\big(\overline{\mbox{\small $d^{\nu}_{\kappa,\vartheta}$}}\big)$, as $\big\langle i\sigma_2 f(\vartheta),\Upsilon_{\kappa}^{\nu,+}(\vartheta)\big\rangle_{\mathbb{C}^2}\neq 0$ (see \cite{Weidmann2}, Prop. 1.5). Now, let $\psi\equiv\mbox{\small $(\psi_1,\psi_2)$}\in \mathfrak{C}^{\vartheta}$. Then, $2\vartheta\big\|\hat{d}^{\nu}_{\kappa,\vartheta}\psi\big\|\geq |\kappa|\hspace{0.5mm}\|\psi\|$ holds for large $|\kappa|$, hence (\ref{res_conv_to_zero}) is satisfied. Indeed,
\begin{equation}
\begin{aligned}
\left\|\hat{d}_{\kappa,\vartheta}^{\nu}\psi\right\|^2
&=\overbrace{\left\|\hat{d}_{\mbox{\scriptsize $3/2$},\vartheta}^{\nu}\psi\right\|^2}^{\geq 0}+\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm} \Bigg(\frac{|2\kappa-3|^2+6(2\kappa-3)}{4r^2}|\psi(r)|^2\\
& \underbrace{-\frac{2(2\kappa-3)\nu}{r^2}\Re\left[\overline{\psi_1(r)}\psi_2(r)\right]}_{\geq -\frac{|2\kappa-3||\nu|}{r^2}|\psi(r)|^2}-\frac{2\kappa-3}{2r}\partial_r\left[|\psi_1(r)|^2-|\psi_2(r)|^2\right]\Bigg)\\
& \geq \overbrace{\frac{1}{4}\left[|2\kappa-3|^2+6(2\kappa-3)-4|2\kappa-3|\left(|\nu|+\mbox{\small $\frac{1}{2}$}\right)\right]\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm}\left|\frac{\psi(r)}{r}\right|^2}^{=:S^{\nu}_{\kappa}[\psi]}\\
& + \frac{1}{2}\underbrace{\left(|2\kappa-3|\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm} \left|\frac{\psi(r)}{r}\right|^2-(2\kappa-3)\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm} \frac{1}{r}\partial_r\left[|\psi_1(r)|^2-|\psi_2(r)|^2\right]\right)}_{=:T_{\kappa}[\psi]}\,.
\end{aligned}
\end{equation}
Exploiting the boundary condition at $r=\vartheta$, integration by parts yields
\begin{equation}
\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm} \frac{1}{r}\partial_r\left[|\psi_1(r)|^2-|\psi_2(r)|^2\right]=\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm}\frac{1}{r^2}\left[|\psi_1(r)|^2-|\psi_2(r)|^2\right]
\end{equation}
and, therefore, $T_{\kappa}[\psi]$ is nonnegative, since $\big|(2\kappa-3)\left[|\psi_1(r)|^2-|\psi_2(r)|^2\right]\big|\leq |2\kappa-3|\hspace{0.5mm}|\psi(r)|^2$ pointwise. But for large $|\kappa|$ it holds that
\begin{equation}
4S_{\kappa}^{\nu}[\psi]\geq \kappa^2\displaystyle\int\limits_0^{\vartheta}\textnormal{d}r\hspace{0.5mm} \left|\frac{\psi(r)}{r}\right|^2\geq \frac{\kappa^2}{\vartheta^2}\|\psi\|^2\,.
\end{equation}
\end{samepage}
\end{proof}
\begin{Lem}\label{Coulomb_Dirac_core}
For all $\vartheta\in\mathbb{R}^+$ there exist $f_{\vartheta},g_{\vartheta}\in \textnormal{\textsf{H}}^{1/2}(B_{\vartheta},\mathbb{C}^2)$ so that the restriction of $H^{\nu}_{\infty}$ to $\textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2\setminus \{0\},\mathbb{C}^2)\dotplus \textnormal{span}\left\{f_{\vartheta},g_{\vartheta}\right\}$ is essentially self-adjoint.
\begin{proof}
For $\kappa\neq \pm 1/2$ the operator $d^{\nu}_{\kappa,\infty}$ is essentially self-adjoint (see above).
Now, let $\kappa=\pm 1/2$. We pick $\varsigma\in \textnormal{\textsf{C}}_0^{\infty}([0,\vartheta))$ such that $\varsigma(w)=1$ for all $w\leq \vartheta/2$. Then, it holds that $\varsigma\Upsilon^{\nu,\pm}_{\kappa}\not\in\mathscr{D}\big(\mbox{\small $\overline{d^{\nu}_{\kappa,\infty}}$}\big)$, since $\lim\limits_{r\rightarrow 0}\big\langle i\sigma_2\varsigma(r)\mbox{\small $\Upsilon^{\nu,\pm}_{\kappa}$}(r),\mbox{\small $\Upsilon^{\nu,\mp}_{\kappa}$}(r)\big\rangle_{\mathbb{C}^2}\neq 0$ (see \cite{Weidmann2}, Prop. 1.5), and therefore
\begin{align}
\mathscr{D}\mbox{\footnotesize $\Big($}\left(d^{\nu}_{\kappa,\infty}\right)^*\mbox{\footnotesize $\Big)$}=\mathscr{D}\mbox{\footnotesize $\Big($}\mbox{\small $\overline{d^{\nu}_{\kappa,\infty}}$}\mbox{\footnotesize $\Big)$}\dotplus \textnormal{span}\Big\{\varsigma\Upsilon^{\nu,+}_{\kappa},\varsigma\Upsilon^{\nu,-}_{\kappa}\Big\}\,.
\end{align}
Thus, any self-adjoint extension of $d^{\nu}_{\kappa,\infty}$ is obtained as the closure of the restriction of $\left(d^{\nu}_{\kappa,\infty}\right)^*$ to $\textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^+,\mathbb{C}^2)\dotplus\textnormal{span}\big\{\varsigma\big(\alpha\mbox{\small $\Upsilon^{\nu,+}_{\kappa}$}+\beta\mbox{\small $\Upsilon^{\nu,-}_{\kappa}$}\big)\big\}$ with $(\alpha,\beta)\in\mathbb{C}^2\setminus\{0\}$; the distinguished one is obtained by setting $(\alpha,\beta)=(1,0)$. Indeed, one may \mbox{verify that} $\mathcal{U}_{\infty}^*\mathfrak{P}_{\kappa}\big[\varsigma\big(\alpha\mbox{\small $\Upsilon^{\nu,+}_{\kappa}$}+\beta\mbox{\small $\Upsilon^{\nu,-}_{\kappa}$}\big)\big]\not\in \textnormal{\textsf{L}}^{2}(\mathbb{R}^2,\mathbb{C}^2;|x|^{-1}\textnormal{d}x)$ for all $(\alpha,\beta)\in\mathbb{C}^2\setminus\textnormal{span}\left\{(1,0)\right\}$, where $\mathfrak{P}_{\kappa}:\textnormal{\textsf{L}}^2(\mathbb{R}^+,\mathbb{C}^2)\rightarrow \mbox{\small $\bigoplus\limits_{\mbox{\tiny $\lambda\in\mathbb{Z}+1/2$}}$}\textnormal{\textsf{L}}^2(\mathbb{R}^+,\mathbb{C}^2)$, $f\mapsto\mbox{\small $\bigoplus\limits_{\mbox{\tiny $\lambda\in\mathbb{Z}+1/2$}}$} \delta_{\lambda,\kappa}f$, where $\delta_{\cdot,\cdot}$ is the Kronecker symbol. But $\textnormal{\textsf{H}}^{1/2}(\mathbb{R}^2)\subset \textnormal{\textsf{L}}^2(\mathbb{R}^2;(1+|x|^{-1})\textnormal{d}x)$ (cf. \cite{Mueller}, Rem. 15).
\end{proof}
\end{Lem}
\section{Proof of the Exponential Accumulation Rate}
\noindent It follows from Hypothesis \ref{hyp4} in Thm. \ref{main_theorem} that we can choose
\begin{align}
\epsilon\in\left(0,\min\left\{|x_j-x_k|/3:j,k\in\{1,\dots,N\},j\neq k\right\}\right)
\end{align}
such that $V_{\textnormal{reg}}$ is bounded on $\left(B_{\epsilon}+x_n\right)$ for all $n\in\{1,\dots,N\}$.\\
In order to localize the Coulomb singularities, we pick a partition of unity $\mbox{\small $\left\{U_n\right\}_{n=0}^N$}$ with the following properties:
\begin{itemize}
\item $\left\{U_n\right\}_{n=0}^N\subset \textnormal{\textsf{C}}^{\infty}(\mathbb{R}^2,[0,1])\,,$
\item $\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^N$}(U_n)^2=1\,,$
\item $\textnormal{supp}(U_n)\subset B_{\epsilon}+x_n$ for all $n\in\{1,\dots,N\}\,,$
\item $\textnormal{supp}(U_0)\subset \mathbb{R}^2\setminus \mbox{\footnotesize $\displaystyle\bigcup\limits_{n=1}^N$}\left(\overline{B_{\epsilon/2}}+x_n\right)\,.$
\end{itemize}
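For orientation, the mechanism behind the IMS-type localization formulas employed below is already visible on the scalar level: since $2\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^N$}U_n\nabla U_n=\nabla\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^N$}(U_n)^2=0$, every $\psi\in \textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2)$ satisfies
\begin{align}
\displaystyle\sum\limits_{n=0}^N\left\|\nabla(U_n\psi)\right\|^2=\|\nabla\psi\|^2+\displaystyle\sum\limits_{n=0}^N\Big\||\nabla U_n|\psi\Big\|^2\,,
\end{align}
i.e. localization is exact up to the error terms $\big\||\nabla U_n|\psi\big\|^2$.\\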
Thanks to the spectral theorem, we can devote ourselves to the study of the asymptotic behavior of the negative eigenvalues of $D^2-m^2\mathbb{I}\,$ using the Min-Max principle (see~\cite{Schmuedgen}, Thm. 12.1). The Min-Max values of a lower semi-bounded self-adjoint operator $A$ are denoted by $\mu_{(\cdot)}(A)$ (see~\cite{Schmuedgen}, formula (12.2)). The following lemma guarantees that we can restrict the minimization to $\mathfrak{C}:=\textnormal{\textsf{C}}_0^{\infty}\big(\mbox{\small $\mathbb{R}^2\setminus \mbox{\footnotesize $\left\{x_n\right\}_{n=1}^N$},\mathbb{C}^2$}\big)$ when investigating the asymptotic behavior of the eigenvalues.
\begin{Lem}\label{finite_defect}
The defect number of the restriction of $D$ to $\mathfrak{C}$ is bounded above by $2N$.
\end{Lem}
\begin{proof}
It follows from Lem.~\ref{Coulomb_Dirac_core} that for all $n\in\{0,\dots,N\}$ the restriction of $U_n \tilde{D}U_n$ to $\mbox{$\tilde{\mathfrak{C}}:=\mbox{ $\mathfrak{C}\dotplus \textnormal{span}$}\mbox{\footnotesize $\Big($}\mbox{$\big\{f_{\mbox{\scriptsize $\epsilon$}}(\cdot-x_j),g_{\mbox{\scriptsize $\epsilon$}}(\cdot-x_j)\big\}_{\mbox{\scriptsize $j=1$}}^{\mbox{\scriptsize $N$}}$}\mbox{\footnotesize $\Big)$}$}$ is essentially self-adjoint. Hence, given $\psi\in\mathscr{D}(\tilde{D})$ and $n\in\{0,\dots,N\}$, we can choose a sequence $\left\{\mbox{\footnotesize $\psi^{(n)}_k$}\right\}_{k\in\mathbb{N}}\subset\tilde{\mathfrak{C}}$ such that $\psi^{(n)}_k\rightarrow\psi$ as $k\rightarrow\infty$ w.r.t. $\|\cdot\|_{ U_n\tilde{D} U_n}$. For $k\in\mathbb{N}$ we define $\phi_k:=\mbox{\small $\sum\limits_{n=0}^N (U_n)^2\psi^{(n)}_k$}$ and observe that $\left\{\phi_k\right\}_{k\in\mathbb{N}}\subset\tilde{\mathfrak{C}}$ and $\phi_k\rightarrow\psi$ as $k\rightarrow\infty$ w.r.t. $\|\cdot\|_{\tilde{D}}$. Thus, $\tilde{\mathfrak{C}}$ is a core for $\tilde{D}$ and - by the Kato-Rellich theorem~\cite{Reed2} and Hypothesis \ref{hyp1} in Thm.~\ref{main_theorem} - also \mbox{for $D$}.
\end{proof}
Following Rademacher and Siedentop~\cite{Rademacher}, we will deal with Schr{\"o}dinger opera\-tors with relatively compact perturbations of $-\Delta_{\mathbb{R}^2}:\textnormal{\textsf{H}}^2(\mathbb{R}^2)\rightarrow \textnormal{\textsf{L}}^2(\mathbb{R}^2)\,,\psi\mapsto -\Delta\psi$
whose eigenvalues bound those of $D^2-m^2\mathbb{I}$. It follows from the inequality of Seiler and Simon (see~\cite{Seiler}, Lem.~2.1) that $W_2\left(\mbox{\small $-\Delta_{\mathbb{R}^2}$}+\mathbb{I}\right)^{-1}$ is a Hilbert-Schmidt operator for all $W_2\in\textnormal{\textsf{L}}^2(\mathbb{R}^2)$. As the operator norm limit preserves compactness, any potential that lies in
\begin{align}
\begin{split}
\textnormal{\textsf{L}}^{2}_{\infty}(\mathbb{R}^2):=\Big\{W:\mathbb{R}^2\rightarrow\mathbb{C}: \hspace{0.5mm}\forall \epsilon\in\mathbb{R}^+ \hspace{1.5mm}\exists (W_2,W_{\infty})\in \textnormal{\textsf{L}}^2(\mathbb{R}^2)\times\textnormal{\textsf{L}}^{\infty}(\mathbb{R}^2)\\
\textnormal{such that } W=W_2+W_{\infty} \textnormal{ and } \|W_{\infty}\|_{\infty}<\epsilon \Big\}
\end{split}
\end{align}
is relatively compact w.r.t. $-\Delta_{\mathbb{R}^2}$. For any such $(\mbox{\small $-\Delta_{\mathbb{R}^2}$})$-compact potential $W$, the operator $-\Delta_{\mathbb{R}^2}+W$ is bounded from below and its restriction to $\textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2)$ is essentially self-adjoint (see \cite{Weidmann1}, Section 17.2).\\
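Explicitly, the relative compactness follows since, given such a decomposition $W=W_2+W_{\infty}$ with $\|W_{\infty}\|_{\infty}<\epsilon$, the bound
\begin{align}
\left\|W\left(\mbox{\small $-\Delta_{\mathbb{R}^2}$}+\mathbb{I}\right)^{-1}-W_2\left(\mbox{\small $-\Delta_{\mathbb{R}^2}$}+\mathbb{I}\right)^{-1}\right\|\leq \|W_{\infty}\|_{\infty}\left\|\left(\mbox{\small $-\Delta_{\mathbb{R}^2}$}+\mathbb{I}\right)^{-1}\right\|<\epsilon
\end{align}
exhibits $W\left(\mbox{\small $-\Delta_{\mathbb{R}^2}$}+\mathbb{I}\right)^{-1}$ as a norm limit of Hilbert-Schmidt operators.\\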
Hypothesis \ref{hyp5}.(a) in Thm. \ref{main_theorem} implies that $V_{\textnormal{reg}}\in \textnormal{\textsf{L}}^2_{\infty}(\mathbb{R}^2)$. Therefore, $V\chi_{\textnormal{supp}(U_0)}$ and $W:=V_{\textnormal{sing}}\left[V_{\textnormal{sing}}+2V_{\textnormal{reg}}\right]\chi_{\textnormal{supp}(U_0)}$ lie in $\textnormal{\textsf{L}}^2_{\infty}(\mathbb{R}^2)$, since $V_{\textnormal{sing}}\chi_{\textnormal{supp}(U_0)}$ is bounded and vanishes at infinity. Furthermore, $(V_{\textnormal{reg}})^2$ is $(\mbox{\small $-\Delta_{\mathbb{R}^2}$})$-compact (see Hypothesis \ref{hyp2} in Thm.~\ref{main_theorem}). Thus, both $V\chi_{\textnormal{supp}(U_0)}$ and $V^2\chi_{\textnormal{supp}(U_0)}=(V_{\textnormal{reg}})^2\chi_{\textnormal{supp}(U_0)}+W$ are relatively compact perturbations of $-\Delta_{\mathbb{R}^2}$ and the operator
\begin{align}
\mathfrak{V}^{\zeta}_{\pm}:=-\Delta_{\mathbb{R}^2}+\Big(\pm 2mV\chi_{\textnormal{supp}(U_0)}+(1-1/\zeta)V^2\chi_{\textnormal{supp}(U_0)}-\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^N$}|\nabla U_n|^2\Big)/(1-\zeta)
\end{align}
is self-adjoint and bounded from below for all $\zeta\in (0,1)$. Moreover, $\textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2)$ is a core for $\mathfrak{V}^{\zeta}_{\pm}$. Obviously, $V\chi_{\mathbb{R}^2\setminus B_{\gamma}}\in\textnormal{\textsf{L}}^2_{\infty}(\mathbb{R}^2)$ and, therefore, the operator
\begin{align}
\mbox{\small $\tilde{\mathfrak{W}}^{\zeta}_{\pm}:\textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2\setminus\overline{B_{\gamma}})\rightarrow \textnormal{\textsf{L}}^2(\mathbb{R}^2\setminus B_{\gamma})\,,\quad \psi\mapsto \left[-\Delta+\left[\pm 2mV+(1+1/\zeta)V^2\right]/(1+\zeta)\right]\psi $}
\end{align}
is bounded from below for all $\zeta\in (0,1)$. We denote its Friedrichs extension (see~\cite{Reed2}, Thm. X.23) by $\mathfrak{W}^{\zeta}_{\pm}$. A form core for $\mathfrak{W}^{\zeta}_{\pm}$ is given by $\textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2\setminus\overline{B_{\gamma}})$.
In the following lemma, we reduce the problem to the study of negative eigenvalues of $\mathfrak{W}^{\zeta}_{\pm}$ and $\mathfrak{V}^{\zeta}_{\pm}$.
\begin{Lem}\label{lem_bounds_number_of_ev}
There exists $c\in\mathbb{N}$ such that for all $\zeta\in (0,1)$ and $\upsilon\in\mathbb{R}^-:=(-\infty,0)$ it holds that
\begin{align}\label{bounds_for_number_of_ev}
\mbox{\small $\displaystyle\sum\limits_{+,-}$} \mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{W}^{\zeta}_{\pm}\Big)$}\leq\mathcal{N}_{(-\infty,\upsilon)}(D^2-m^2\mathbb{I})\leq c+\displaystyle\sum\limits_{+,-}\mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_{\pm}\Big)$}\,.
\end{align}
\end{Lem}
\begin{proof}
We first claim the existence of $\{\tilde{c}_n\}_{n=1}^N\in\big(\mathbb{R}^+\big)^N$ such that the inequality
\begin{align}\label{preliminary_claim}
\mbox{\small $\mu_{\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^Ns_n+N$}}\mbox{\footnotesize $\Big($}D^2-m^2\mathbb{I}\mbox{\footnotesize $\Big)$}\geq (1-\zeta)\min\left\{0,{\mu}_{s_0}\left(\mathfrak{V}^{\zeta}_+\oplus \mathfrak{V}^{\zeta}_-\right)\right\}+\frac{1}{2}\displaystyle\sum\limits_{n=1}^N\min\left\{0,\mu_{s_n}\mbox{\footnotesize $\Big($}\big(\mbox{\footnotesize $H_{\epsilon}^{\nu_n}$}\big)^2\mbox{\footnotesize $\Big)$}-\tilde{c}_n\right\}$}
\end{align}
holds for all $\zeta\in (0,1)$ and $\{s_n\}_{n=0}^N\in \mathbb{N}^{N+1}$.
One easily checks that the IMS localization formula for the quadratic form associated to $D^2-m^2\mathbb{I}$,
\begin{align}
\begin{split}
\|D\cdot\|^2-m^2\|\cdot\|^2 & =\mbox{\small $\displaystyle\sum\limits_{n=0}^N$}\mathfrak{v}[ U_n\hspace{0.1cm}\cdot\hspace{0.1cm}]\,,\\
\mathfrak{v}:\mathscr{D}(D)\rightarrow\mathbb{R}\,,&\quad \psi\mapsto\|D\psi\|^2-m^2\|\psi\|^2-\mbox{\small $\displaystyle\sum\limits_{j=0}^N$}\Big\||\nabla U_j|\psi\Big\|^2\,,
\end{split}
\end{align}
holds, and therefore Lem. \ref{finite_defect} implies that
\begin{align}\label{loc_est_1}
\mu_{\mbox{\footnotesize $\displaystyle\sum\limits_{n=0}^Ns_n+N$}}\left(D^2-m^2\mathbb{I}\right)\geq \sup\limits_{M\subset \textnormal{\scriptsize \textsf{L}\normalsize}^2(\mathbb{R}^2,\mathbb{C}^2)\atop \textnormal{dim}(\textnormal{span}(M))\leq \mbox{\scriptsize $\displaystyle\sum\limits_{n=0}^Ns_n-N-1$}}\inf\limits_{\psi\in \mathfrak{C}\cap M^{\perp}\atop \|\psi\|=1}\displaystyle\sum\limits_{j=0}^N\mathfrak{v}[U_j\psi]\,.
\end{align}
The estimate
\begin{align}
\mbox{\small $\sup\limits_{M\subset \textnormal{\tiny \textsf{L}\normalsize}^2(\mathbb{R}^2,\mathbb{C}^2)\atop \textnormal{dim}(\textnormal{span}(M))\leq \mbox{\scriptsize $\displaystyle\sum\limits_{n=0}^Ns_n-N-1$}}\inf\limits_{\psi\in \mathfrak{C}\cap M^{\perp}\atop \|\psi\|=1}\displaystyle\sum\limits_{j=0}^N\mathfrak{v}[ U_j\psi] \geq \displaystyle\sum\limits_{n=0}^N\sup\limits_{M_n\subset \textnormal{\tiny \textsf{L}\normalsize}^2(\mathbb{R}^2,\mathbb{C}^2)\atop \textnormal{dim}(\textnormal{span}(M_n))\leq s_n-1}\inf\limits_{\psi\in \mathfrak{C}\cap M^{\perp}_n\atop \|\psi\|=1}\mathfrak{v}[ U_n\psi]$}
\end{align}
is trivial. Partially following Evans et al.~\cite{Evans} (inequality (21)), we obtain
\begin{align}\label{loc_est_2}
\sup\limits_{M_n\subset \textnormal{\scriptsize \textsf{L}\normalsize}^2(\mathbb{R}^2,\mathbb{C}^2)\atop \textnormal{dim}(\textnormal{span}(M_n))\leq s_n-1}\inf\limits_{\psi\in \mathfrak{C}\cap M_n^{\perp}\atop \|\psi\|=1}\mathfrak{v}[ U_n\psi]\geq \sup\limits_{M_n\subset \textnormal{\scriptsize \textsf{L}\normalsize}^2(\mathbb{R}^2,\mathbb{C}^2)\atop \textnormal{dim}(\textnormal{span}(M_n))\leq s_n-1}\inf\limits_{\psi\in \mathfrak{C}\cap (U_n\mathscr{A}_nM_n)^{\perp}\atop \|\psi\|=1}\mathfrak{v}[ U_n\psi]\,,
\end{align}
where $\mathscr{A}_n:\textnormal{\textsf{L}}^2(\mathbb{R}^2,\mathbb{C}^2)\rightarrow \textnormal{\textsf{L}}^2(\mathbb{R}^2,\mathbb{C}^2)\,,\textnormal{ }\psi\mapsto \psi(\cdot -x_n)\,$, with the convention $x_0:=0$, and
\begin{align}\label{loc_est_3}
\mbox{\small $\inf\limits_{\psi\in\mathfrak{C}\cap(U_n\mathscr{A}_nM_n)^{\perp}\atop \|\psi\|=1}\mathfrak{v}[ U_n\psi]\geq\inf\limits_{\psi\in U_n(\mathfrak{C}\cap (U_n\mathscr{A}_nM_n)^{\perp})\atop \|\psi\|\leq 1}\mathfrak{v}[\psi]
\geq \inf\limits_{\psi\in U_n\mathfrak{C}\cap \mathscr{A}_nM^{\perp}_n\atop \|\psi\|\leq 1}\mathfrak{v}[\psi]=\min\Bigg\{0,\inf\limits_{\psi\in U_n\mathfrak{C}\cap \mathscr{A}_nM^{\perp}_n\atop \|\psi\|= 1}\mathfrak{v}[\psi]\Bigg\}\,.$}
\end{align}
The second step in (\ref{loc_est_3}) follows from the inclusion $U_n\left(U_n\mathscr{A}_nM_n\right)^{\perp}\subset \mathscr{A}_nM^{\perp}_n$.\\
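The inclusion is immediate: if $\psi\in\left(U_n\mathscr{A}_nM_n\right)^{\perp}$, then, as $U_n$ is real-valued,
\begin{align}
\left\langle U_n\psi,\mathscr{A}_nm\right\rangle=\left\langle \psi,U_n\mathscr{A}_nm\right\rangle=0\quad\forall\, m\in M_n\,,
\end{align}
i.e. $U_n\psi\in\left(\mathscr{A}_nM_n\right)^{\perp}=\mathscr{A}_nM_n^{\perp}$, the last equality holding by unitarity of $\mathscr{A}_n$.\\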
As $V\chi_{\textnormal{supp}(U_0)}\in \textnormal{\textsf{L}}^2_{\infty}(\mathbb{R}^2)$ (see Hypothesis \ref{hyp5}.(a) in Thm. \ref{main_theorem}), all $\psi\in \textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2,\mathbb{C}^2)$ obey
\begin{align}\label{cs_inequality}
\begin{split}
|2\Re\left\langle -i\pmb{\sigma}\cdot \nabla\psi, V\chi_{\textnormal{supp}(U_0)}\psi\right\rangle | & \leq 2 \|\nabla\psi\| \|V\chi_{\textnormal{supp}(U_0)}\psi\|\\
&\leq \left\langle\psi,\left(-\zeta\Delta+V^2\chi_{\textnormal{supp}(U_0)}/\zeta\right)\psi\right\rangle
\end{split}
\end{align}
(cf. \cite{Rademacher}, inequality (14)). If $\psi\equiv (\psi_1,\psi_2)\in U_0\mathfrak{C}\subset \textnormal{\textsf{C}}_0^{\infty}(\textnormal{supp}(U_0),\mathbb{C}^2)$, then, with $\psi=\chi_{\textnormal{supp}(U_0)}\psi$, (\ref{cs_inequality}) implies that
\begin{align}\label{cs_ineq_conseq}
\begin{split}
\mbox{\small $\|D\psi\|^2-m^2\|\psi\|^2$} & \geq \left\langle\psi,\left((\zeta-1)\Delta + 2mV\chi_{\textnormal{supp}(U_0)}\sigma_3+(1-1/\zeta)V^2\chi_{\textnormal{supp}(U_0)}\right)\psi\right\rangle\\
& = \mbox{\footnotesize $(1-\zeta)\left\langle \psi,\left(-\Delta+\left[2mV\chi_{\textnormal{supp}(U_0)}\sigma_3+(1-1/\zeta)V^2\chi_{\textnormal{supp}(U_0)}\right]/(1-\zeta)\right)\psi\right\rangle $}\\
& =\mbox{\small $(1-\zeta) \Big\langle \psi_1\oplus\psi_2,\left(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\right)\psi_1\oplus\psi_2\Big\rangle + \displaystyle\sum_{n=0}^N\Big\||\nabla U_n|\psi\Big\|^2$}
\end{split}
\end{align}
(cf. \cite{Rademacher}, inequality (16)), which is equivalent to
\begin{align}\label{cs_ineq_concl}
\mathfrak{v}[\psi]\geq (1-\zeta)\Big\langle \psi_1\oplus\psi_2,\left(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\right)\psi_1\oplus\psi_2\Big\rangle \,.
\end{align}
It follows from (\ref{loc_est_3}), (\ref{cs_ineq_concl}) and $U_0\mathfrak{C}\subset \textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2,\mathbb{C}^2)$ that
\begin{align}\label{bound_regular_part}
\mbox{\small $\inf\limits_{\psi\in\mathfrak{C}\cap (U_0M_0)^{\perp}\atop \|\psi\|=1}\mathfrak{v}[U_0\psi]\geq (1-\zeta)\min\Bigg\{0,\inf\limits_{\psi_1\oplus\psi_2\in \textnormal{\tiny \textsf{C}\normalsize}^{\infty}_0(\mathbb{R}^2,\mathbb{C}^2)\cap M_0^{\perp}\atop \|\psi_1\oplus\psi_2\|=1}\left\langle \psi_1\oplus\psi_2,\left(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\right)\psi_1\oplus\psi_2\right\rangle\Bigg\}$}\,.
\end{align}
Now, let $n\in\{1,\dots,N\}$. Hypothesis \ref{hyp4} in Thm. \ref{main_theorem} guarantees the existence of $c^{\prime}_n\in\mathbb{R}^+$ satisfying
\begin{align}
\mbox{\small $\|D\psi\|^2\geq \left(\left\|\left(-i\pmb{\sigma}\cdot\nabla+\nu_n|\cdot-x_n|^{-1}\right)\psi\right\|^2-c^{\prime}_n\|\psi\|^2\right)/2=\left(\|D_{\epsilon}^{\nu_n}\mathscr{A}_n^*\psi\|^2-c^{\prime}_n\|\psi\|^2\right)/2$}
\end{align}
for all $\psi\in U_n\mathfrak{C}\subset\textnormal{\textsf{C}}^{\infty}_0((B_{\epsilon}\setminus\{0\})+x_n,\mathbb{C}^2)=\mathscr{A}_n\mathscr{D}\big(D^{\nu_n}_{\epsilon}\big)$. As $D^{\nu_n}_{\epsilon}\subset H^{\nu_n}_{\epsilon}$, it follows that\vspace{3mm}
\begin{align}\label{est_by_extension}
\mbox{\footnotesize $\inf\limits_{\psi\in U_n\mathfrak{C}\cap \mathscr{A}_nM^{\perp}_n\atop \|\psi\|=1}\mathfrak{v}[\psi]\geq \frac{1}{2}\inf\limits_{\psi\in \mathscr{A}_n\mbox{\footnotesize $\big($}\mathscr{D}\mbox{\scriptsize $\big($}D^{\nu_n}_{\epsilon}\mbox{\scriptsize $\big)$}\cap M^{\perp}_n\mbox{\footnotesize $\big)$}\atop \|\psi\|=1}\Big(\|D_{\epsilon}^{\nu_n}\mathscr{A}^*_n\psi\|^2-\tilde{c}_n\Big)\geq \frac{1}{2}\inf\limits_{\psi\in \mathscr{D}\mbox{\scriptsize $\big($}\mbox{\tiny $H_{\epsilon}^{\nu_n}$}\mbox{\scriptsize $\big)$}\cap M^{\perp}_n\atop \|\psi\|=1}\Big(\|H_{\epsilon}^{\nu_n}\psi\|^2-\tilde{c}_n\Big)$}
\end{align}
holds for some $\tilde{c}_n\in\mathbb{R}^+$. Plugging (\ref{est_by_extension}) into (\ref{loc_est_3}), we obtain
\begin{align}\label{bound_singular_part}
\inf\limits_{\psi\in\mathfrak{C}\cap (U_n\mathscr{A}_nM_n)^{\perp}\atop \|\psi\|=1}\mathfrak{v}[U_n\psi]\geq \frac{1}{2}\min\Bigg\{0,\inf\limits_{\psi\in \mathscr{D}\mbox{\footnotesize $\big($}\mbox{\scriptsize $H_{\epsilon}^{\nu_n}$}\mbox{\footnotesize $\big)$}\cap M^{\perp}_n\atop \|\psi\|=1}\Big(\|H_{\epsilon}^{\nu_n}\psi\|^2-\tilde{c}_n\Big)\Bigg\}\,.
\end{align}
Then, our preliminary claim (inequality (\ref{preliminary_claim})) follows from (\ref{loc_est_1})-(\ref{loc_est_2}), (\ref{bound_regular_part}) and (\ref{bound_singular_part}).
With $s_0=\mathcal{N}_{(-\infty,\upsilon/(1-\zeta))}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\Big)$}+1$ and $s_n=\mathcal{N}_{(-\infty,\tilde{c}_n)}\big(\big(\mbox{\footnotesize $H_{\epsilon}^{\nu_n}$}\big)^2\big)+1$, the right side - and thus the left side - of (\ref{preliminary_claim}) is bounded from below by $\upsilon$ and hence
\begin{align}
\mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big($}D^2-m^2\mathbb{I}\mbox{\footnotesize $\Big)$}\leq 2N+\mathcal{N}_{(-\infty,\upsilon/(1-\zeta))}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\Big)$}+\displaystyle\sum\limits_{n=1}^N\mathcal{N}_{(-\infty,\tilde{c}_n)}\mbox{\footnotesize $\Big($}\big(\mbox{\footnotesize $H_{\epsilon}^{\nu_n}$}\big)^2\mbox{\footnotesize $\Big)$}
\end{align}
holds. As the spectra of $\left\{H^{\nu_n}_{\epsilon}\right\}_{n=1}^N$ are discrete (see Lem.~\ref{sa_ext_disc_sp}), $\mathcal{N}_{(-\infty,\tilde{c}_n)}\big(\big(\mbox{\footnotesize $H_{\epsilon}^{\nu_n}$}\big)^2\big)$ is finite for all $n\in\{1,\dots,N\}$. Then, the upper bound in (\ref{bounds_for_number_of_ev}) follows from
\begin{align}
\mathcal{N}_{(-\infty,\upsilon/(1-\zeta))}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\Big)$}\leq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_+\oplus\mathfrak{V}^{\zeta}_-\Big)$}= \displaystyle\sum\limits_{+,-}\mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{V}^{\zeta}_{\pm}\Big)$}\,.
\end{align}
As for the lower bound, by the Min-Max principle, the eigenvalues of $D^2-m^2\mathbb{I}$ are bounded from above by those of the Friedrichs extension of
\begin{align}
\textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2\setminus \overline{B_{\gamma}},\mathbb{C}^2)\rightarrow \textnormal{\textsf{L}}^2(\mathbb{R}^2\setminus B_{\gamma},\mathbb{C}^2)\,,\quad \psi\mapsto (D^2-m^2\mathbb{I})\psi\,.
\end{align}
As in (\ref{cs_ineq_conseq}), we estimate for all $\psi\equiv (\psi_1,\psi_2)\in \textnormal{\textsf{C}}_0^{\infty}(\mathbb{R}^2\setminus\overline{B_{\gamma}},\mathbb{C}^2)$
\begin{align}
\begin{split}
\|D\psi\|^2-m^2\|\psi\|^2 & \leq \left\langle\psi,\left(-(1+\zeta)\Delta + 2mV\sigma_3+(1+1/\zeta)V^2\right)\psi\right\rangle\\
& = (1+\zeta)\left\langle \psi,\left(-\Delta+\left[2mV\sigma_3+(1+1/\zeta)V^2\right]/(1+\zeta)\right)\psi\right\rangle \\
& = (1+\zeta) \Big\langle \psi_1\oplus\psi_2,\left(\mathfrak{W}^{\zeta}_+\oplus\mathfrak{W}^{\zeta}_-\right)\psi_1\oplus\psi_2\Big\rangle
\end{split}
\end{align}
(cf. \cite{Rademacher}, inequality (15)). Then, the lower bound in (\ref{bounds_for_number_of_ev}) follows from
\begin{align}
\mathcal{N}_{(-\infty,\upsilon/(1+\zeta))}\mbox{\footnotesize $\Big(\mathfrak{W}^{\zeta}_+\oplus\mathfrak{W}^{\zeta}_-\Big)$}\geq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{W}^{\zeta}_+\oplus\mathfrak{W}^{\zeta}_-\Big)$}= \displaystyle\sum\limits_{+,-}\mathcal{N}_{(-\infty,\upsilon)}\mbox{\footnotesize $\Big(\mathfrak{W}^{\zeta}_{\pm}\Big)$}\,.
\end{align}
\end{proof}
At the expense of a bounded and compactly supported localization error, the negative eigenvalues of Schr{\"o}dinger operators defined in $\textnormal{\textsf{L}}^2(\mathbb{R}^2)$ with pure long range dipole potentials can be bounded from below by those of Schr{\"o}dinger operators defined in $\textnormal{\textsf{L}}^2(\mathbb{R}^2\setminus\overline{B_{\gamma}})$ with pure dipole potentials (see below). The latter accumulate exponentially fast at the bottom of the essential spectrum (see~\cite{Rademacher}, Lem.~1). To decouple the interior from the exterior part, we make use of a further partition of unity $(\tilde{U}_{\textnormal{int}},\tilde{U}_{\textnormal{ext}})\in \textnormal{\textsf{C}}^{\infty}_0(B_{2\gamma},[0,1])\times\textnormal{\textsf{C}}^{\infty}(\mathbb{R}^2\setminus \overline{B_{\gamma}},[0,1])$ with $(\tilde{U}_{\textnormal{int}})^2+(\tilde{U}_{\textnormal{ext}})^2=1$.
\begin{Lem}\label{pure_dipole_lem}
Let $\mathfrak{c}\in\mathbb{R}^2\setminus\{0\}$ and $\mbox{\small $-\Delta_{\mathbb{R}^2\setminus\overline{B_{\gamma}}}^{\textnormal{D}}$}$ be the Dirichlet-Laplacian for $\mathbb{R}^2\setminus\overline{B_{\gamma}}$ (see~\cite{Reed4}, Section~XIII.15). Then it holds for all $\upsilon\in\mathbb{R}^-$ that
\begin{align}\label{pure_dipole_ineq}
\mathcal{N}_{(-\infty,\upsilon)}\left(-\Delta_{\mathbb{R}^2}+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3+\mathscr{L}^{\mathfrak{c}}_{\gamma}\right)\leq\mathcal{N}_{(-\infty,\upsilon)}\left(\mbox{\small $-\Delta_{\mathbb{R}^2\setminus\overline{B_{\gamma}}}^{\textnormal{D}}$}+\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3\right)\,,
\end{align}
where $\mathscr{L}^{\mathfrak{c}}_{\gamma}:=\chi_{B_{2\gamma}\setminus B_{\gamma}} |\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}|/|\cdot|^3+|\nabla\tilde{U}_{\textnormal{int}}|^2+|\nabla\tilde{U}_{\textnormal{ext}}|^2$ is the localization error.
\end{Lem}
\begin{proof}
Let $M\subset\textnormal{\textsf{L}}^2(\mathbb{R}^2)$. As in (\ref{loc_est_3}), we estimate using the IMS localization formula for Schr{\"o}dinger operators (see \cite{Cycon}, Thm. 3.2)
\begin{align}
\begin{split}
&\inf\limits_{\psi\in \textnormal{\scriptsize\textsf{C}\small}_0^{\infty}(\mathbb{R}^2)\cap (\tilde{U}_{\textnormal{ext}}M)^{\perp}\atop \|\psi\|=1} \left\langle\psi,\left(-\Delta+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3+\mathscr{L}^{\mathfrak{c}}_{\gamma}\right)\psi\right\rangle\\
&= \inf\limits_{\psi\in \textnormal{\scriptsize\textsf{C}\normalsize}_0^{\infty}(\mathbb{R}^2)\cap (\tilde{U}_{\textnormal{ext}}M)^{\perp}\atop \|\psi\|=1}\mbox{\small $\Big[\left\langle \tilde{U}_{\textnormal{ext}}\psi,\left(-\Delta+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3+\chi_{B_{2\gamma}\setminus B_{\gamma}} |\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}|/|\cdot|^3\right)\tilde{U}_{\textnormal{ext}}\psi\right\rangle+$}\\
&\qquad\quad\quad\quad\quad\quad + \mbox{\small $\left\langle \tilde{U}_{\textnormal{int}}\psi,\left(-\Delta+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3+\chi_{B_{2\gamma}\setminus B_{\gamma}} |\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}|/|\cdot|^3\right)\tilde{U}_{\textnormal{int}}\psi\right\rangle\Big]$}\\\\
&\geq \inf\limits_{\psi\in \textnormal{\scriptsize\textsf{C}\normalsize}_0^{\infty}(\mathbb{R}^2)\cap (\tilde{U}_{\textnormal{ext}}M)^{\perp}\atop \|\psi\|=1}\left\langle \tilde{U}_{\textnormal{ext}}\psi,\left(-\Delta+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3\right)\tilde{U}_{\textnormal{ext}}\psi\right\rangle\\
&\geq \inf\limits_{\psi\in \textnormal{\scriptsize\textsf{C}\normalsize}_0^{\infty}(\mathbb{R}^2\setminus \overline{B_{\gamma}})\cap M^{\perp}\atop \|\psi\|\leq 1}\left\langle\psi, \left(-\Delta+\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3\right)\psi\right\rangle\,.
\end{split}
\end{align}
By an estimate similar to (\ref{loc_est_2}), we conclude that negative eigenvalues satisfy
\begin{align}
\mu_{s}\left(-\Delta_{\mathbb{R}^2}+\chi_{\mathbb{R}^2\setminus B_{\gamma}}\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3+\mathscr{L}^{\mathfrak{c}}_{\gamma}\right)\geq \mu_{s}\left(\mbox{\small $-\Delta_{\mathbb{R}^2\setminus\overline{B_{\gamma}}}^{\textnormal{D}}$}+\langle\mathfrak{c},\cdot\rangle_{\mathbb{R}^2}/|\cdot|^3\right)\,,
\end{align}
which implies (\ref{pure_dipole_ineq}).
\end{proof}
Next, following Rademacher and Siedentop~\cite{Rademacher}, we decouple the pure dipole part from the higher-order multipole moments, which, a posteriori, merely contribute finitely many negative eigenvalues. For this purpose, we formulate the follo\-wing statement.
\begin{Lem}\label{kirsch_lem}
Suppose $A_1$, $A_2$ and $A_3$ are lower semi-bounded self-adjoint operators in a Hilbert space with a common form core $K$ such that $\textnormal{inf}\sigma_{\textnormal{ess}}(A_j)\in\mathbb{R}^+_0$ for $j=1,2,3$ is satisfied and $A_1=A_2+A_3$ holds in the form sense on $K$, i.e.,
\begin{align}
\left\langle \psi,A_1\psi\right\rangle=\left\langle \psi,A_2\psi\right\rangle+\left\langle \psi,A_3\psi\right\rangle\quad\forall \psi\in K\,.
\end{align}
Then, it holds for all $\eta\in (0,1)$ and $\upsilon\in\mathbb{R}^-$ that
\begin{align}\label{kirsch_statement}
\mathcal{N}_{(-\infty,\upsilon)}\left(A_1\right)\leq \mathcal{N}_{(-\infty,(1-\eta)\upsilon)}\left(A_2\right)+\mathcal{N}_{(-\infty,\eta\upsilon)}\left(A_3\right)\,.
\end{align}
\end{Lem}
\begin{proof}
The statement follows by mimicking the proof of Prop. 4 in~\cite{Kirsch}.
\end{proof}
\begin{Rem}
We obtain for all $\xi\in (0,1)$ and $\upsilon^{\prime}\in\mathbb{R}^-$ the inequality
\begin{align}\label{kirsch_variant}
\mathcal{N}_{(-\infty,\upsilon^{\prime})}\left(A_2\right)\geq \mathcal{N}_{(-\infty,(1+\xi)\upsilon^{\prime})}\left(A_1\right)-\mathcal{N}_{(-\infty,\xi\upsilon^{\prime})}\left(A_3\right)
\end{align}
when we insert $\upsilon=(1+\xi)\upsilon^{\prime}$ and $\eta=(1+\xi)^{-1}\xi$ into (\ref{kirsch_statement}).
\end{Rem}
Let $\eta,\xi,\zeta\in (0,1)$. We decompose $\mathfrak{V}^{\zeta}_{\pm}$ into $\mathfrak{V}^{\zeta}_{\pm}=(1-\eta)\mathfrak{X}^{\zeta,\eta}_{\pm}+\eta\mathfrak{T}^{\zeta,\eta}_{\pm}$, where
\begin{align}
\mathfrak{X}^{\zeta,\eta}_{\pm}:=-\Delta_{\mathbb{R}^2}\pm \chi_{\mathbb{R}^2\setminus B_{\gamma}}\frac{2m}{(1-\zeta)(1-\eta)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}+\mathscr{L}^{\pm 2m\mathfrak{d}/[(1-\zeta)(1-\eta)]}_{\gamma}
\end{align}
and
\begin{align}
\begin{split}
\mathfrak{T}^{\zeta,\eta}_{\pm}:=-\Delta_{\mathbb{R}^2}+\bigg(\pm 2m\left[V\chi_{\textnormal{supp}(U_0)}-\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\chi_{\mathbb{R}^2\setminus B_{\gamma}}\right]+(1-1/\zeta)V^2\chi_{\textnormal{supp}(U_0)}-\\
-\displaystyle\sum\limits_{n=0}^N|\nabla U_n|^2-(1-\eta)(1-\zeta)\mathscr{L}_{\gamma}^{\pm 2m\mathfrak{d}/[(1-\eta)(1-\zeta)]}\bigg)/[(1-\zeta)\eta]\,.
\end{split}
\end{align}
Since $V\chi_{\textnormal{supp}(U_0)}$ and $V^2\chi_{\textnormal{supp}(U_0)}$ are $(-\Delta_{\mathbb{R}^2})$-compact (see above), $\mathfrak{T}^{\zeta,\eta}_{\pm}$ is self-adjoint and bounded from below.\\
Using Lem.~\ref{kirsch_lem} and then Lem.~\ref{pure_dipole_lem}, we obtain for all $\upsilon\in\mathbb{R}^-$ that
\begin{align}\label{decouple_above}
\begin{split}
\mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(\mathfrak{V}^{\zeta}_{\pm}\Big) $}& \leq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(\mathfrak{X}^{\zeta,\eta}_{\pm}\Big) $}+\mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(\mathfrak{T}^{\zeta,\eta}_{\pm}\Big) $}\\
& \leq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(-\Delta_{\mathbb{R}^2\setminus \overline{B_{\gamma}}}^{\textnormal{D}}\pm\frac{2m}{(1-\zeta)(1-\eta)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\Big) $}+\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{T}^{\zeta,\eta}_{\pm}\Big) $}\,.
\end{split}
\end{align}
We decompose $\mathfrak{W}^{\zeta}_{\pm}$ in a similar way. Let $\mathfrak{Z}^{\zeta,\xi}_{\pm}$ be the Friedrichs extension of
\begin{align}\label{rest_exterior}
\mbox{\footnotesize $\textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2\setminus \overline{B_{\gamma}})\rightarrow\textnormal{\textsf{L}}^2(\mathbb{R}^2\setminus B_{\gamma})\,, \quad \psi\mapsto-\Delta\psi+\left[\mp 2m\left[V-\mbox{\small $\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}$}\right]-(1+1/\zeta)V^2\right]\psi/[(1+\zeta)\xi]\,,$}
\end{align}
which is bounded from below, since $\chi_{\mathbb{R}^2\setminus B_{\gamma}}V$ and $\chi_{\mathbb{R}^2\setminus B_{\gamma}}V^2$ are $(-\Delta_{\mathbb{R}^2})$-compact (cf. above considerations).\\
Then, since
\begin{align}
(1+\xi)\left(-\Delta^{\textnormal{D}}_{\mathbb{R}^2\setminus \overline{B_{\gamma}}}\pm \mbox{\small $\frac{2m}{(1+\zeta)(1+\xi)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}$}\right)=\mathfrak{W}^{\zeta}_{\pm}+\xi\mathfrak{Z}^{\zeta,\xi}_{\pm}
\end{align}
holds in the form sense on the common form core $\textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2\setminus \overline{B_{\gamma}})$, we obtain
\begin{align}\label{decouple_below}
\begin{split}
\mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(\mathfrak{W}^{\zeta}_{\pm}\Big) $}& \geq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(-\Delta_{\mathbb{R}^2\setminus \overline{B_{\gamma}}}^{\textnormal{D}}\pm\frac{2m}{(1+\zeta)(1+\xi)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\Big) $}-\mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(\mathfrak{Z}^{\zeta,\xi}_{\pm}\Big) $}\\
& \geq \mathcal{N}_{(-\infty,\upsilon)}\mbox{\small $\Big(-\Delta_{\mathbb{R}^2\setminus \overline{B_{\gamma}}}^{\textnormal{D}}\pm\frac{2m}{(1+\zeta)(1+\xi)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\Big) $}-\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{Z}^{\zeta,\xi}_{\pm}\Big) $}
\end{split}
\end{align}
for all $\upsilon\in\mathbb{R}^-$ by using (\ref{kirsch_variant}).\\
As mentioned above, we now show that the higher-order multipole moments merely contribute with finitely many negative eigenvalues.
\begin{Lem}\label{finite_number_ev}
Let $\zeta,\eta,\xi\in (0,1)$. Then, $\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{T}^{\zeta,\eta}_{\pm}\Big)$}$ and $\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{Z}^{\zeta,\xi}_{\pm}\Big)$}$ are finite.
\end{Lem}
\begin{proof}
It follows from Hypothesis \ref{hyp5}.(a) in Thm. \ref{main_theorem} that $V\chi_{\textnormal{supp}(U_0)}-\mbox{\small $\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}$}\chi_{\mathbb{R}^2\setminus B_{\gamma}}$ and $V^2\chi_{\textnormal{supp}(U_0)}$, and thus also the potential of $\mathfrak{T}^{\zeta,\eta}_{\pm}$, lie in $\textnormal{\textsf{L}}^1(\mathbb{R}^2;\log(2+|x|)\textnormal{d}x)$. Accordingly, Hypothesis \ref{hyp5}.(b) in Thm. \ref{main_theorem} implies that their spherical rearrangements are contained in $\textnormal{\textsf{L}}^1(\mathbb{R}^+;\log_+(r^{-1})\textnormal{d}r)$. Then, the finiteness of $\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{T}^{\zeta,\eta}_{\pm}\Big)$}$ follows from Thm. 4.3 in~\cite{Shargorodsky}. The same applies to the zero extension of the potential in~(\ref{rest_exterior}) to $\mathbb{R}^2$. Hence, by the inclusion of form cores $\textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2\setminus\overline{B_{\gamma}})\subset \textnormal{\textsf{C}}^{\infty}_0(\mathbb{R}^2)$, Thm.~4.3 in~\cite{Shargorodsky} also implies that $\mathcal{N}_{\mathbb{R}^-}\mbox{\small $\Big(\mathfrak{Z}^{\zeta,\xi}_{\pm}\Big)$}$ is finite.
\end{proof}
We are now prepared for the proof of Thm. \ref{main_theorem}:
\begin{proof}
Let $\zeta,\eta,\xi\in (0,1)$. Using Lem. \ref{lem_bounds_number_of_ev} and \ref{finite_number_ev} and inequalities (\ref{decouple_above}) and (\ref{decouple_below}), we estimate
\begin{align}
\begin{split}
& \limsup\limits_{E\nearrow m}\frac{\mathcal{N}_{(-\infty,E^2-m^2)}\mbox{\footnotesize $\Big($}D^2-m^2\mathbb{I}\mbox{\footnotesize $\Big)$}}{|\log(m-E)|}\leq\\
\leq & \displaystyle\sum\limits_{+,-}\limsup\limits_{E\nearrow m}\frac{\mathcal{N}_{(-\infty,E^2-m^2)}\Big(-\Delta_{\mathbb{R}^2\setminus\overline{B_{\gamma}}}^{\textnormal{D}}\pm \frac{2m}{(1-\zeta)(1-\eta)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\Big)}{|\log(m^2-E^2)|}\overbrace{\left|\frac{\log(m^2-E^2)}{\log(m-E)}\right|}^{\rightarrow 1}
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& \liminf\limits_{E\nearrow m}\frac{\mathcal{N}_{(-\infty,E^2-m^2)}\mbox{\footnotesize $\Big($}D^2-m^2\mathbb{I}\mbox{\footnotesize $\Big)$}}{|\log(m-E)|}\geq\\
\geq & \displaystyle\sum\limits_{+,-}\liminf\limits_{E\nearrow m}\frac{\mathcal{N}_{(-\infty,E^2-m^2)}\Big(-\Delta_{\mathbb{R}^2\setminus\overline{B_{\gamma}}}^{\textnormal{D}}\pm \frac{2m}{(1+\zeta)(1+\xi)}\frac{\langle\mathfrak{d},\cdot\rangle_{\mathbb{R}^2}}{|\cdot|^3}\Big)}{|\log(m^2-E^2)|}\overbrace{\left|\frac{\log(m^2-E^2)}{\log(m-E)}\right|}^{\rightarrow 1}\,.
\end{split}
\end{align}
Due to the continuity of $\mbox{\small $\textnormal{tr}\Big(\mbox{\small $\sqrt{(M_{(\cdot)})_-}$}\hspace{0.5mm}\Big)$}$ (see \cite{Rademacher}), the desired result follows from Lem. 1 in \cite{Rademacher} in the limits $\zeta,\eta\rightarrow 0$ and $\zeta,\xi\rightarrow 0$, respectively.
\end{proof}
\section*{Acknowledgements}
\noindent I thank Heinz Siedentop and Sergey Morozov for many useful discussions.
\section{Introduction}
For decades, lunar occultations (LO) have occupied a special niche as a
technique for high-angular resolution with excellent performance, but relatively
inefficient yield. The diffraction fringes that are created by the lunar limb as
it occults a background source provide a unique opportunity to achieve
milliarcsecond angular resolution even with single telescopes of relatively
small diameter. In terms of instrumentation, LO have always been simple,
requiring only a fast photometer. Of course, they have the significant drawback
that only sources included in the apparent lunar orbit can be observed (about
10\% of the sky), and then only at arbitrary fixed times and with limited
opportunities for repeated observations. If one adds that each observation only
provides a one-dimensional scan of the source, it is clear that detailed and
repeated observations are better performed with long-baseline interferometry
(LBI), when available. One should, however, not forget additional important
advantages of LO: even for complicated sources, the full, one-dimensional
brightness profile can be recovered according to maximum-likelihood principles
without any assumptions on the source's geometry (Richichi
\cite{1989A&A...226..366R}). Besides, the limiting sensitivity achieved in the
near-IR by LO at the 1.5\,m telescope on Calar Alto is K$\approx 8$\,mag
(Richichi et al.~\cite{2006A&A...445.1081R}). When extrapolated to a 4-meter
class telescope or larger, LO appear quite competitive with even the most
powerful, LBI facilities (Richichi~\cite{1997IAUS..158...71R}).
As a result, although the trend is understandably to develop more flexible,
powerful and complex interferometric facilities, there is some balance that
makes LO still attractive at least for some applications. It should not be
forgotten that the majority of the hundreds of directly-measured stellar angular
diameters (Richichi~\cite{vlti_2007} listed 688, and the numbers keep
increasing) were indeed obtained by LO, and that LO are
still the major contributor to the discovery of small separation binary stars.
Two recent developments, however, have provided a significant boost to the
performance of the LO technique, and have significantly enlarged its range
of applications: a) the introduction of IR array detectors that can be
read out at fast rates on a small subarray has made it possible to provide
a large gain in limiting sensitivity, and b) IR survey catalogues
that have led to an exponential increase
of the number of sources for which LO can be computed. Literally,
thousands of occultations per night could now be potentially observed with
a large telescope. We describe in this paper the details and impact of
these two factors for LO work. We also address the new needs imposed on
data reduction by the potential availability of a large volume of lunar
occultation data per night, by describing new approaches to an automated
LO data pipeline. We illustrate both the new quality of LO data and their
analysis by means of examples drawn from the observation of two recent
passages of the Moon over crowded regions in the vicinity of the Galactic
Center, carried out with array-equipped instruments at Calar Alto and
Paranal observatories.
\section{Infrared arrays and new catalogues}
A number of reasons make the near-IR domain preferable for LO work with respect to other wavelengths.
First, LO observations are affected by the high background around the Moon
which, being mainly reflected solar light, shows an intensity maximum at visible
wavelengths. Because of the atmospheric Rayleigh scattering
($\propto\lambda^{-4}$), the background level greatly decreases in the near-IR.
At longer wavelengths ($10{\mu}{\rm m}-20{\mu}{\rm m}$), the thermal emission
of Earth's atmosphere and of the lunar surface introduces a high background level.
Second, the spacing of the diffraction fringes at the telescope is proportional to
$\lambda^{\frac{1}{2}}$. Therefore, for two LO observations with the same
temporal sampling, one recorded in the IR will achieve a higher fringe sampling than
one in the visible.
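As an order-of-magnitude illustration of this scaling, the characteristic Fresnel fringe scale $\sqrt{\lambda z}$ can be evaluated in a few lines; the Earth--Moon distance and limb velocity used below are typical assumed values, not measured ones:

```python
import math

Z_MOON = 3.844e8   # assumed mean Earth-Moon distance [m]
V_LIMB = 750.0     # assumed typical limb velocity projected on the source [m/s]

def fringe_scale(lam):
    """Characteristic Fresnel fringe scale sqrt(lambda * z) at the telescope [m]."""
    return math.sqrt(lam * Z_MOON)

for band, lam in [("V", 0.55e-6), ("K", 2.2e-6)]:
    s = fringe_scale(lam)
    t_ms = 1e3 * s / V_LIMB  # time for the limb to sweep one fringe scale
    print(f"{band} band: fringe scale ~{s:4.1f} m, crossing time ~{t_ms:4.1f} ms")
```

Since $\sqrt{2.2/0.55}=2$, the K-band fringe scale is twice the visible one, so at fixed temporal sampling the fringes are sampled twice as finely.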
Finally, at least in the field of stellar diameters, there is an advantage to
observing in the near-IR because for a given bolometric flux redder stars
will present a larger angular diameter.
Being cheap and with a fast time response,
near-IR photometers have traditionally represented the detector
of choice for LO observations.
Richichi~(\cite{1997IAUS..158...71R}) showed the
great increase in sensitivity possible with
panoramic arrays which, by reading out only the pixels of interest,
make it possible to avoid most of the shot noise generated by the
high background in LO.
Such arrays are now becoming a viable option,
thanks to read-out noise that decreases with each
new generation of chips, and to flexible electronics that allow us to address a subarray and read it out at millisecond rates.
Richichi~(\cite{1997IAUS..158...71R}) predicted that an 8\,m telescope
would reach between K=12 and 14\,mag, depending on the
lunar phase and background, with an integration time of 12\,ms at
signal--to--noise ratio (SNR)=10. Observations on one of the 8.2\,m VLT telescopes, equipped
with the ISAAC instrument in the so-called burst mode
(Richichi et al. \cite{2006Msngr.126....24R}), show a limiting
magnitude K$\approx$12.5 at SNR=1 and 3\,ms integration time,
in agreement with the decade-old prediction.
These newly-achieved sensitivities call for a corresponding extension in the
limiting magnitudes of the catalogues used for LO predictions, and their
completeness.
In the near-IR, until recently the only survey-type catalogue
available was the Two-Micron Sky Survey (TMSS, or IRC, Neugebauer \&
Leighton~\cite{1969tmss.book.....N}) that was incomplete in declination and
limited to K$<3$. Already, a 1\,m-class telescope equipped with
an IR photometer exceeds this sensitivity by several magnitudes
(Fors et al.~\cite{2004A&A...419..285F}, Richichi et al.
\cite{1996AJ....112.2786R}).
The release of catalogues associated with modern all-sky near-infrared surveys, such as 2MASS (Cutri et
al.~\cite{2003yCat.2246....0C}) and DENIS (Epchtein et al.~\cite{1997Msngr..87...27E}), has helped.
Our prediction software {\tt ALOP} (Richichi~\cite{AR_thesis}) includes about 50
other catalogues with stellar and extragalactic sources.
We have now added a subset of 2MASS
with K$\le11$, which includes $3.7\times10^6$ sources subject to
occultations.
While with the previous catalogues a typical run
close to the maximum lunar phase would cover 100-150 sources over several
nights, predictions with 2MASS can include
thousands of events observable with a large telescope over one night.
Special cases, like the passage of the Moon over crowded, obscured regions
in the direction of the Galactic Center, can include
thousands of events predicted over just a few hours
(Richichi et al. \cite{2006Msngr.126....24R}, Fors et al. \cite{SEA_2006}).
Fig.~\ref{fig:lo_alop} illustrates the two cases.
The incompleteness of the catalogues without 2MASS is evident already
from the regime $5\le {\rm K}\le 7$\,mag. At even fainter magnitudes, but
still within the limits of the technique as described here,
the predictions based on the 2MASS catalogue are more numerous by
several orders of magnitude.
\begin{figure*}
\centering
\includegraphics[width=18cm]{8987fig1.eps}
\caption{Frequency of lunar occultation events as a function
of K magnitude, computed on the basis
of all standard catalogues in {\tt ALOP} (gray bars) and of the
2MASS catalogue only (limited to $K\le11$, clear bars).
For both cases, we have used the constraints
of Moon $\ge 25\degr$ above horizon and Sun $\le -5\degr$ below horizon.
Left: a relatively rich 5-night run, from
7 thru 11 January 2006, at Calar Alto Observatory.
Right: part of the night of August 5, 2006 from Paranal, when the
Moon reached a minimum approach of $12^{\prime}$ from the Galactic Center.
Note the logarithmic scale.
}
\label{fig:lo_alop}
\end{figure*}
Note that the increase in the number of potential
occultation candidates does not automatically translate into more
results. The shift to fainter
magnitudes implies that the SNR of the recorded lightcurves
is on average lower;
LO runs based on 2MASS predictions are now likely to be
less efficient in detecting binaries when compared, for example, to
studies such as those of
Evans et al.~(\cite{1986AJ.....92.1210E}) and
Richichi et al.~(\cite{2002A&A...382..178R}),
especially for large brightness ratios.
\section{Automated reduction of large sets of lunar occultation data}
In general, LO data are analyzed by fitting model lightcurves. We take as an
example the Arcetri Lunar Occultation Reduction software ({\tt ALOR}), a general
model-dependent lightcurve fitting algorithm first developed by one of us
(Richichi \cite{1989A&A...226..366R}). Two groups of parameters are
simultaneously fitted using a non-linear least squares method. First, those
related to the geometry of the event: the occultation time ($t_0$), the stellar
intensity ($F_0$), the intensity of the background ($B_0$) and the limb linear
velocity with respect to the source ($V_{\rm P}$). Second, those related to
physical quantities of the source: for resolved sources, the angular diameter;
and, for binary (or multiple) stars, the projected separation and the brightness
ratio of the components.
In general, the fitting procedure is approached in two steps. First, a
preliminary fit assuming an unresolved source model is performed. To ensure convergence, {\tt ALOR} needs to be provided with reliable initial guesses.
We can estimate the geometrical parameters with a visual inspection of the data, and $V_{\rm P}$ is predicted.
The source parameters can be refined in a second step.
This is done interactively since it requires understanding the nature of
each particular lightcurve and the possible correlation between geometrical and
physical parameters.
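To illustrate the kind of model involved, the monochromatic point-source (unresolved) lightcurve is the classical knife-edge diffraction pattern. The sketch below is ours and not the actual {\tt ALOR} implementation, which also accounts for effects such as finite bandwidth, telescope aperture and time smearing; the default wavelength and distance are assumed values:

```python
import numpy as np
from scipy.special import fresnel

def point_source_model(t, t0, F0, B0, vp, lam=2.2e-6, z=3.844e8):
    """Monochromatic knife-edge lightcurve for an unresolved star.

    t  : time array [s] (t < t0 means the star is still unocculted)
    vp : limb velocity projected along the source direction [m/s]
    """
    x = vp * (t0 - t)                    # limb-to-source distance [m]
    w = x * np.sqrt(2.0 / (lam * z))     # dimensionless Fresnel variable
    S, C = fresnel(w)                    # scipy returns (S, C) in this order
    inten = 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)
    return B0 + F0 * inten               # full flux far before t0, B0 after

t = np.arange(-0.3, 0.3, 1e-3)           # 1 ms sampling around t0 = 0
model = point_source_model(t, t0=0.0, F0=1.0, B0=0.2, vp=750.0)
```

Here $t_0$, $F_0$, $B_0$ and $V_{\rm P}$ play the role of the geometrical parameters listed above.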
As a result of that great increase in the number of potential occultations, we
soon realized that we needed a substantial optimization in the processes of
extracting the occultation lightcurves from the raw data and of the interactive
evaluation of the LO lightcurves for the estimate of the initial parameter
values needed for the fits. We then developed, implemented, and tested a new
automatic reduction tool, the Automatic Wavelet-based Lunar Occultation
Reduction Package ({\tt AWLORP}; Fors~\cite{2006PhDT.........9F}). This performs
both lightcurve extraction and characterization, allowing the preliminary
analysis of large sets of LO events in a quick and automated fashion. In the
following, we describe the main parts of {\tt AWLORP}, which are schematically
illustrated in Fig.~\ref{fig:lo_pipeline}.
\begin{figure*}
\centering
\includegraphics[height=18cm]{8987fig2.eps}
\caption{Flow-chart description of {\tt AWLORP}.}
\label{fig:lo_pipeline}
\end{figure*}
\subsection{Input data and lightcurve extraction}
In the cases available to us, the LO data are stored in Flexible Image Transport
System (FITS) cubes. The number of cube frames is given by the frame exposure
and total integration time. Additional information, such as telescope diameter,
filter and identifier of the occulted object, is extracted from the FITS
cube header and saved in a separate file. In addition, the limb linear velocity
and the distance to the Moon as predicted by {\tt ALOP} are available in a
separate file.
An occultation lightcurve must be extracted from the recorded FITS cube file.
We explored several methods for this purpose, among them
fixed aperture integration, border clipping, Gaussian profile and brightest-faintest
pixels extraction. We found these partly unsatisfactory, among
other things, because of lack of connectivity across the stellar image
and because of sensitivity to flux and image shape variations.
We addressed the problem of connectivity with the use of
masking extraction, and two methods were
considered. The first method, called {\tt 3D-SExtractor}, consists of a
customization of the object detection package {\tt SExtractor} (Bertin \&
Arnouts~\cite{1996A&AS..117..393B}) for the case of 3D FITS LO cubes. The
algorithm invokes {\tt SExtractor} for every frame and evaluates its output to
decide if the source has been effectively detected. The segmentation map (or
source mask) provided by {\tt SExtractor} defines the object (background)
pixels in case of positive (negative) detection. These pixels are used to
compute the source (background) intensity before and after the occultation. The
second method, called {\tt Average mask}, consists in performing simple aperture
photometry using a predefined source mask. This is obtained by averaging a large
number of frames previous to the occultation and by applying a 3$\sigma$
thresholding.
We empirically compared {\tt 3D-SExtractor} and {\tt Average mask} methods under
a variety of SNR, scintillation, and pixel sampling situations. Although the
{\tt 3D-SExtractor} makes use of a more exact mask definition for every frame,
{\tt Average mask} was found to provide less noisy lightcurves with no evident
fringe smoothing. Therefore, we adopted this extraction algorithm as the default in {\tt AWLORP}.
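In essence, the {\tt Average mask} extraction can be sketched as follows; this is a deliberately simplified rendering of ours, and the actual thresholding and background handling in the pipeline may differ:

```python
import numpy as np

def average_mask_lightcurve(cube, n_pre=200):
    """Extract a lightcurve with a fixed source mask ('Average mask' method).

    cube : (n_frames, ny, nx) stack of detector frames
    n_pre: number of pre-occultation frames averaged to define the mask
    """
    mean_img = cube[:n_pre].mean(axis=0)         # average pre-occultation image
    bkg = np.median(mean_img)
    sigma = np.median(np.abs(mean_img - bkg))    # robust spread estimate
    mask = mean_img > bkg + 3.0 * sigma          # 3-sigma source mask
    sky = ~mask
    # per-frame photometry: masked flux minus the scaled sky level
    sky_level = cube[:, sky].mean(axis=1)
    return cube[:, mask].sum(axis=1) - mask.sum() * sky_level
```

Because the mask is built once from many pre-occultation frames, it is insensitive to frame-to-frame image shape variations, which is the property favoured above.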
\subsection{Lightcurve characterization}
Inaccuracies in catalogue coordinates and lunar limb irregularities introduce an
uncertainty in the predicted occultation time of about 5 to 10 seconds. To ensure that the occultation event is effectively recorded, the acquisition
sequence is started well before the predicted occultation time. This results in
a very long extracted lightcurve, typically spanning several tens of seconds. In
contrast, the fringes that contain the relevant high-resolution information
extend only a few tenths of a second. In addition, to accomplish a proper
fitting of this much shorter lightcurve subsample, as mentioned before, we need reliable estimates of $t_{0}$, $B_{0}$, and $F_{0}$.
The problem corresponds to detecting a drop with a known frequency range in a
noisy, evenly sampled data series. The key idea here is to note that the drop
from the first fringe intensity (close to $t_{0}$) is always characterized by a
signature of a given spatial frequency. Of course, this frequency depends on the
data sampling but, once this is fixed, the algorithm should be able to
detect that signature and provide an estimate of $t_{0}$, regardless of its SNR.
Once $t_{0}$ is known, the other two parameters ($B_{0}$ and $F_{0}$) can be
estimated.
This problem calls for a transformation of the data that would be capable of
isolating signatures in frequency space, while simultaneously keeping the
temporal information untouched. The wavelet transform turns out to be convenient
for this purpose.
\subsubsection{Wavelet transform overview}
\label{subsubsect:wav_over}
The wavelet transform of a distribution $f(t)$ can be expressed as:
\begin{eqnarray}
\label{WT_cont}
W(f(t))(a,b) = \vert a\vert^{-{1 \over 2}}\int_{-\infty}^{+\infty} f(t)
\,\psi\biggl({t-b\over a}\biggr)\;dt\;,
\end{eqnarray}
\noindent where $a$ and $b$ are scaling and translational parameters
respectively. Each base (or {\it scaling}) function $\psi({t-b\over a})$ is a scaled and translated version of a function $\psi$ called {\it mother wavelet}, satisfying the relation $\int \psi({t-b\over a})\,\textnormal{d}t=0$.
We followed the {\it \`{a} trous} algorithm (Starck \&
Murtagh~\cite{1994A&A...288..342S}) to obtain the discrete wavelet decomposition
of $f(t)$ into a sequence of approximations:
\begin{equation}
F_1(f(t)) = f_1(t),\;\;
F_2(f_1(t)) = f_2(t),\cdots\;.
\end{equation}
$f_i(t)\;(i=1,\cdots,n)$ are computed by performing successive convolutions with
a filter derived from the {\it scaling} function, which in this case is a $B_3$
cubic spline. The use of a $B_3$ cubic spline leads to a convolution with a mask
of 5 elements, scaled as {\tt (1,4,6,4,1)}.
The differences between two consecutive approximations $f_{i-1}(t)$ and $f_i(t)$
are the wavelet (or {\it detail}) planes, $w_i(t)$. Letting $f_0(t)=f(t)$, we
can reconstruct the original signal from the expression:
\begin{equation}
f(t) = \sum_{i=1}^{n}w_i(t) + f_r(t)\;\;,
\label{wav_des}
\end{equation}
where $f_r(t)$ is a residual signal that contains the global energy of $f(t)$.
Note that $n=r$, but we explicitly substitute $n$ with $r$ to clearly
express the concept of {\it residual}. Each wavelet plane can be understood as a
localized frequential representation at a given scale according to the
wavelet base function used in the decomposition.
In our case, we are using a multiresolution decomposition scheme, which means the original signal $f(t)$ has twice the resolution of $f_1(t)$, which in turn has twice the resolution of $f_2(t)$, and so on.
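A compact implementation of this decomposition reads as follows; this is a sketch of ours under the conventions above, and the boundary handling (mirror padding) is one of several possible choices:

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline mask (1,4,6,4,1)

def a_trous(f, n_planes):
    """'A trous' decomposition of a 1-D signal into wavelet planes.

    Returns (planes, residual) such that f == sum(planes) + residual.
    """
    planes, approx = [], np.asarray(f, dtype=float)
    for i in range(n_planes):
        step = 2 ** i                          # holes double at each scale
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3                    # dilated convolution mask
        padded = np.pad(approx, 2 * step, mode="reflect")
        smooth = np.convolve(padded, kernel, mode="valid")
        planes.append(approx - smooth)         # detail (wavelet) plane w_i
        approx = smooth                        # next, coarser approximation
    return planes, approx
```

By construction the reconstruction is exact, since each plane is the difference of two consecutive approximations and the sum telescopes back to the input signal.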
\subsubsection{Algorithm description}
\label{subsubsect:alg_des}
We developed a program to perform a discrete decomposition of the lightcurve
into $n_{\rm wav}$ wavelet planes. Note that the choice of $n_{\rm wav}$ depends
exclusively on the data sampling and will be discussed later.
For example, $n_{\rm wav}=7$ was empirically
found to be a suitable value for representing all the features in the frequency
space of the lightcurve when the sampling was $8.4$~ms. The 2nd to 7th wavelet
planes resulting from the decomposition of the lightcurve of the bright star
\object{SAO~190556} (SNR=43) are represented in
Fig.~\ref{fig:wavelet_estimation}. The 1st plane was excluded as it nearly
exclusively contains noise features not relevant for this discussion. For the
sake of simplicity, we will consider this particular lightcurve and sampling
value in the description that follows.
We designed an algorithm which estimates $t_{0}$, $B_{0}$ and $F_{0}$ from the
previous wavelet planes. This consists of the following two steps: First, it was
empirically determined\footnote{This was realized by repeating the same analysis
to many other lightcurves of different SNR values and same time sampling
($8.4$~ms).} that the 7th plane serves as an invariant indicator of the
occultation time ($t_{0}$). In particular, $t_{0}$ coincides approximately
with the zero located between the absolute minimum ($t_{0}^{min}$) and maximum
($t_{0}^{max}$) of that plane (see upper right panel in
Fig.~\ref{fig:wavelet_estimation} for a zoomed display of the 7th plane). The
good localization of $t_{0}$ in this plane is justified because the first fringe
magnitude drop is mostly represented at this wavelet scale. In addition, the
presence of noise is greatly diminished in this plane. This is because noise
sources (electronics or scintillation) contribute at higher frequencies, and
therefore are better represented at lower wavelet scales (planes). In other
words, this criterion for estimating $t_{0}$ is likely to be insensitive to
noise, even for the lowest SNR cases.
Second, once a first estimate of $t_{0}$ was obtained, $B_{0}$ and $F_{0}$ could
be derived by considering the 5th wavelet plane. We found that this plane
indicates those values with fairly good approximation. The procedure is
illustrated in Fig.~\ref{fig:wavelet_estimation} and is described as follows:
\begin{figure*}
\centering
\resizebox{\hsize}{!}{\includegraphics{8987fig3.eps}}
\caption{Schematic of the wavelet-based algorithm for the
estimation of $t_{0}$, $F_{0}$, and $B_{0}$ to be used in {\tt
AWLORP}. The lightcurve corresponds to an occultation of \object{SAO~190556}
observed at the Calar Alto Observatory, sampled every $8.4$~ms. Left: box
with 2nd to 7th wavelet planes resulting from the wavelet decomposition of the
original lightcurve. Upper right: the 7th plane is found to be a good indicator of
$t_{0}$. A zoomed display of the region around $t_{0}$ is shown. Lower right: a
box display of 5th plane (bottom part of this panel) provides the abscissae $t_{\rm b},t_{\rm a}$ to compute $F_{0}$ and $B_{0}$ in the
original lightcurve (upper part of the same panel).}
\label{fig:wavelet_estimation}
\end{figure*}
\begin{enumerate}
\item We consider the abscissa in the 5th plane, corresponding to
$t_{0}$ found in the 7th plane.
\item From $t_{0}$, we search for the nearby zeroes in the 5th plane, before and after the above abscissa. We call them $t_{\rm b}$ and $t_{\rm a}$.
\item We estimate $B_{0}$ by averaging the lightcurve values around $t_{\rm
a}$ within a specified time range. We empirically fixed this to
$\left[-8,8\right]$ samples because it provided a good compromise between
improving noise attenuation and suffering from occasional background slopes.
\item The same window average is computed around $t_{\rm b}$. The obtained
value ($I_{\rm p}$) represents a mean value of the intensity at the {\it
plateau} region before the onset of diffraction fringes. Note that the 5th wavelet
plane was chosen because its zero at $t_{\rm b}$ is safely before the
fringes region in the lightcurve, where the intensity is not constant and, thus,
not appropriate for $I_{\rm p}$ calculation.
\item $F_{0}$ is computed by subtracting $B_{0}$ from $I_{\rm p}$.
\end{enumerate}
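The steps above can be sketched as follows; this is our simplified rendering, operating on precomputed 5th and 7th wavelet planes from any {\it \`a trous} implementation, with index conventions (sample numbers rather than times) that are ours:

```python
import numpy as np

def estimate_t0_b0_f0(flux, w5, w7, win=8):
    """Estimate (t0, B0, F0) from the 5th and 7th wavelet planes
    of a disappearance lightcurve (all 1-D arrays of equal length)."""
    lo, hi = sorted((int(np.argmin(w7)), int(np.argmax(w7))))
    # t0: zero of the 7th plane between its absolute extrema (see text)
    t0 = lo + int(np.argmin(np.abs(w7[lo:hi + 1])))
    # steps 1-2: nearest zero crossings of the 5th plane before/after t0
    zeros = np.where(np.diff(np.sign(w5)) != 0)[0]
    t_b = int(zeros[zeros < t0].max())
    t_a = int(zeros[zeros > t0].min())
    # step 3: background level from a window around t_a
    B0 = flux[max(t_a - win, 0):t_a + win + 1].mean()
    # step 4: plateau intensity I_p from a window around t_b
    I_p = flux[max(t_b - win, 0):t_b + win + 1].mean()
    # step 5: stellar intensity F0 = I_p - B0
    return t0, B0, I_p - B0
```

The window half-width of 8 samples corresponds to the empirically fixed $\left[-8,8\right]$ range mentioned in step 3.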
As in the case of the 7th plane, the contribution in the 5th plane is dominated
by signal features represented at this scale, while noise, even the
scintillation component, has a minor presence. Therefore, again, the estimation
criteria for $B_{0}$ and $F_{0}$ are likely to be well behaved and robust in the
presence of high noise.
Although {\tt AWLORP} was demonstrated on a particular data set,
its applicability extends to any sampling of the
lightcurve and also to reappearances. To show this,
we repeated the previous algorithm description for 6 sets of 100
simulated\footnote{The procedure followed to simulate these data sets is
explained in Sect.~\ref{subsect:simul_data}.} lightcurves of different samplings
(1,2,4,6,8 and 10\,ms). For these six samplings, $n_{\rm wav}$ was found to be 8,7,6,6,5 and 5, respectively. Note these values are proportional to a geometric sequence of ratio 2 and argument $(8-n_{\rm wav})$, which is in agreement with the dyadic nature of the wavelet transform we adopted.
\subsection{Lightcurve fitting}
The algorithm just described has been integrated in
an automated pipeline. As shown in the scheme of
Fig.~\ref{fig:lo_pipeline}, the characterization of the lightcurve is used to
decide if a fit can be performed successfully with {\tt ALOR}. The cases of very
faint sources, wide binaries and those lightcurves with some data truncation
(i.e. very short time span on either side of the diffraction fringes) are the
typical exclusions, and are discussed in Sect.~\ref{problems}.
In case of positive evaluation, {\tt ALOR} is executed using
the detected values of $t_0$, $F_0$, and $B_0$ as initial guesses. After the
preliminary fit is performed, a quicklook plot of lightcurve data, model, and
residual files is generated. This process is iterated for all the observed
sources.
This automatic pipeline frees us from the most tedious and error-prone part
of {\tt ALOR} reduction. The pipeline spends a few seconds
per occultation to complete the whole process described in
Fig.~\ref{fig:lo_pipeline}. For comparison, an experienced user takes 10--20
minutes per event to reach the same stage of the reduction. For data sets
that include hundreds of occultation events, this difference is substantial.
The pipeline was coded entirely in the {\tt Perl} programming language, which turns
out to be a powerful and flexible tool for concatenating the I/O streams of
independent programs.
Once {\tt AWLORP} has automatically generated all the single source fit plots,
the user can perform a quick visual inspection. The
objective of this first evaluation is to separate the unresolved, relatively
uninteresting events from those that bear the typical marks of a resolved
angular diameter, of an extended component or of a multiple source.
These latter will still need an interactive data reduction with
{\tt ALOR}, but they will represent typically only a small fraction of the whole
data set.
\section{Performance evaluation}
We have verified the performance of {\tt AWLORP} by analysing both simulated and
real LO data sets.
\subsection{Simulated data}
\label{subsect:simul_data}
\begin{figure*}
\centering
\includegraphics[height=10cm]{8987fig4.eps}
\caption{Application of {\tt AWLORP} to six sets of 10000 simulated lightcurves
at 2\,ms sampling and with different SNR values. As explained in the
text, the offset between the simulated
occultation time and the time detected by {\tt AWLORP} (${\Delta}t_{0}$) is
Gaussian distributed with FWHMs inversely proportional to the SNR value, and
the histogram peaks are systematically shifted within the range
${\Delta}t_{0}\sim\left[-4,-2\right]$ms (only 1 to 2 sampling points).}
\label{fig:snr_simulated}
\end{figure*}
Thanks to a specific module included in {\tt ALOR}, a set of simulated LO
lightcurves was generated for varying SNR values. The noise model assumes three
independent noises sources: detector electronics, photon shot-noise, and
scintillation, which are of Gaussian, Poisson, and multiplicative nature,
respectively (Richichi \cite{1989A&A...226..366R}).
With a realistic combination of these three noise sources, we generated six
series with SNR $50,20,10,5,2$ and $1$, each of them
consisting of 10000 lightcurves. We chose the sampling to be 2ms, which is a realistic value considering what is offered by current detectors.
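A minimal sketch of such a three-component noise model is given below; the parameter values and function name are illustrative and do not reproduce the {\tt ALOR} simulation module:

```python
import numpy as np

def simulate_lightcurve(model_flux, read_noise=5.0, scint_rms=0.02, rng=None):
    """Add the three noise sources described in the text to a model
    lightcurve: Poisson photon shot-noise, multiplicative Gaussian
    scintillation, and additive Gaussian detector noise. Parameter
    values are illustrative, not those used by ALOR."""
    rng = np.random.default_rng(rng)
    flux = np.asarray(model_flux, dtype=float)
    shot = rng.poisson(np.clip(flux, 0, None)).astype(float)      # photon noise
    scint = shot * (1.0 + scint_rms * rng.standard_normal(flux.shape))
    return scint + read_noise * rng.standard_normal(flux.shape)   # detector noise
```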
{\tt AWLORP} was executed for all the 60000 simulated events. For each
lightcurve, we found an estimate of the triplet ($t_0$,$F_0$,$B_0$). The {\tt
AWLORP} only failed to characterize the lightcurve in 10 cases of the noisiest
series for which the {\tt ALOR} fits could not converge. For the remaining
59990 cases, we computed the difference (${\Delta}t_{0}$) between the detected and the simulated occultation time and plotted these differences as shown in
Fig.~\ref{fig:snr_simulated}. Two comments can be made.
First, the ${\Delta}t_{0}$ distribution is, to a good approximation,
Gaussian-shaped.
This is in agreement with the fact that the first fringe
signature is primarily dominated by Gaussian noise at the wavelet plane
($n_{\rm wav}=7$) employed to estimate $t_0$. This noise distribution has its
origins in the detector read-out for the faint end (low SNR) and in the shot-noise for the bright end (high SNR), which can be approximated by a Gaussian distribution in this regime.
In addition, the typical width of the ${\Delta}t_{0}$ distribution is inversely
proportional to the SNR value. A Gaussian function was fitted to each histogram, yielding $\sigma = 23.0, 11.7, 4.6, 2.3, 1.1, 0.5$ for SNR $= 1, 2, 5, 10, 20, 50$, respectively.
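The inverse proportionality can be checked at a glance: the products $\sigma \cdot \mathrm{SNR}$ computed from the fitted widths quoted in the text cluster around a single constant ($\approx 23$):

```python
# Consistency check of sigma ~ C / SNR using the fitted Gaussian widths
# quoted in the text; the products sigma * SNR are roughly constant.
snr = [1, 2, 5, 10, 20, 50]
sigma = [23.0, 11.7, 4.6, 2.3, 1.1, 0.5]
products = [s * n for s, n in zip(sigma, snr)]
```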
Second, note that the histograms in Fig.~\ref{fig:snr_simulated} are not exactly
centered at ${\Delta}t_{0}=0$, but systematically shifted by $2$ to $4$\,ms
(only $1$ to $2$ sampling points). This offset is of the order of the Nyquist
limit of our data sampling, and can be regarded as a limitation imposed
by the data rather than an intrinsic constraint of {\tt AWLORP}.
The difference could be corrected by subtracting this small offset from
all analyzed lightcurves, but it is in any case of no consequence
for the purpose of the subsequent interactive analysis.
\subsection{Real data}
We considered a set of six real lightcurves. These were recorded in the course
of the Calar Alto Lunar Occultation Program (CALOP) (Richichi et
al.~\cite{2006A&A...445.1081R}, Fors et al.~\cite{2004A&A...419..285F}). They
correspond to a series of SNR values similar to the one discussed in
Sect.~\ref{subsect:simul_data}.
\begin{figure*}
\centering
\includegraphics[height=18cm]{8987fig5.eps}
\caption{Application of {\tt AWLORP} to 6 lightcurves with different SNRs (from
top to bottom: 47.2, 22.3, 10.9, 5.9, 2.1 and 1.2) observed as part of the CALOP
program. The left side panels show the whole lightcurves
(60 seconds). The right side panels show the trimmed lightcurves (spanning
only 2 seconds) around the $t_{0}^d$ value detected by {\tt AWLORP}. The
occultation times fitted by {\tt ALOR} using $t_{0}^d$ as initial value,
$t_{0}^f$, are also displayed. Note that even in the faintest SNR case, the
occultation time is correctly detected.}
\label{fig:wavelet_estimation_snr}
\end{figure*}
The robustness of $t_{0}$ estimation is shown in
Fig.~\ref{fig:wavelet_estimation_snr}, where even in the lightcurves at the
limit of detection (SNR$=1.2, 2.1$) the value of $t_{0}$ is correctly detected.
This is confirmed by visual inspection and by
a comparison with the predicted values.
To verify this concordance, we ran {\tt ALOR} fits for all six lightcurves
with the {\tt AWLORP}-detected triplets ($t_0$,$F_0$,$B_0$) as initial values.
Even in the faintest cases, {\tt ALOR} converged for all parameters of the
lightcurve model. With regard to $t_{0}$, the difference between the initial and
the fitted values never exceeded $13.6$~ms ($1.6$ sample points) as can be seen
in Fig.~\ref{fig:wavelet_estimation_snr}.
\subsection{Problematic cases}\label{problems}
The pipeline just described works well for about 98\% of the recorded events. There
are, however, a few special situations where the algorithm of
Fig.~\ref{fig:lo_pipeline} fails. These can be classified into three distinct
groups:
\begin{enumerate}
\item The current version of wavelet-based lightcurve characterization does not
support wide binary events. In other words, the pipeline cannot simultaneously
determine the values ($t_{0}^{A}$, $B_{0}^{A}$ and $F_{0}^{A}$) and
($t_{0}^{B}$, $B_{0}^{B}$ and $F_{0}^{B}$) for two components $A$ and $B$
separated by more than a hundred milliarcseconds.
Since these cases represent at most a few percent
of the overall volume of LO events and they are also relatively
uninteresting, this feature has not been implemented yet.
\item Due to observational constraints, to
an unusually large prediction
error or simply by mistake, sometimes
the recording of an event is started too close
to the actual occultation time.
Since the scaling
function has a given size at each wavelet scale, there is a filter ramp
that extends over an initial span of data depending on the wavelet plane.
For example, in the case of the data in
Fig.~\ref{fig:wavelet_estimation_snr} this happens up to 4000 milliseconds from
the beginning of the lightcurves, since this is the size of the scaling function
at the scale of the 7th plane for the given temporal sampling.
\item Depending on the subarray size employed, the image scale, the seeing
conditions or telescope tracking, part of the stellar image might be displaced
outside the subarray so that the extracted flux decreases and the shape of
lightcurve is affected. Under these circumstances, {\tt AWLORP} is likely to
produce false $t_{0}$ detections. Again, the small number of cases affected
does not justify the substantial effort required to improve the {\tt AWLORP} treatment.
\end{enumerate}
\section{Summary}
The observation of lunar occultation (LO) events with modern
infrared array detectors at large telescopes, combined with the
use of infrared survey catalogues for the predictions, has shown
that even a few hours of observation can result in many tens if
not hundreds of recorded occultation lightcurves. The work
to bring these data sets to a stage where an experienced observer
can concentrate on accurate interactive data analysis for the
most interesting events is long and tedious.
We have designed, implemented, and tested an automated data
pipeline that takes care of extracting the lightcurves from the
original array data (FITS cubes in our case); of restricting the
range from the original tens of seconds to the few seconds
of interest near the occultation event; of estimating the initial
guesses for a model-dependent fit; of performing the fit; and
finally of producing compact plots for easy visual inspection.
This effectively reduces the time needed for the initial
preprocessing from several days to a few hours, and frees
the user from a rather tedious and error-prone task.
The pipeline is based on an algorithm for automated extraction
of the lightcurves,
and on a wavelet-based algorithm for the estimation of the
initial parameter guesses.
The pipeline has been tested on a large number
of simulated lightcurves spanning a wide range of realistic
signal-to-noise ratios.
The result has been completely satisfactory: in all
cases in which the algorithm converged, the derived lightcurve
characterization was correct and consistent with the simulated
values. Convergence could not be reached, due to poor signal-to-noise ratio, in only ten cases out of 60000. These cases would, of
course, be challenging for an interactive data analysis
by an experienced observer as well. We also tested the pipeline on
a set of real data, with similar conclusions.
We identified and discussed the cases that may prove
problematic for our scheme of automated preprocessing.
\begin{acknowledgements}
This work is partially supported by the ESO
Director General's Discretionary Fund
and by the
\emph{MCYT-SEPCYT Plan Nacional I+D+I AYA \#2005-082604}.
\end{acknowledgements}
\section{Introduction} \label{sec:intro}
Graph neural networks (GNNs) play an essential role in many emerging applications with graph data.
On these applications, GNNs show their strength in extracting valuable information from both the \textit{features} (\textit{i.e.}, information from individual nodes) and the \textit{topology} (\textit{i.e.}, the relationship between nodes).
For example, GNNs can effectively analyze financial data to decide loan-related policies.
Another example is the social network with billions of users, where GNNs can perform friendship recommendation.
Such wide deployment of GNNs motivates the investigation of their robustness and reliability.
One of the key aspects is to effectively generate adversarial examples on graph data, so that we can better understand the ``weakness'' of GNNs and secure them more wisely afterward.
Our initial exploration and studies identify several key properties to be considered in the GNN attack.
First, the adversarial example needs to consider \textit{topology} and \textit{feature} information to comprehensively attack the GNNs on all perspectives.
Second, the attack method needs to be efficient in both \textit{memory} and \textit{computation} for catering to the huge number of nodes in graph data.
However, existing work inevitably falls short in at least one of the above aspects, as summarized in Table~\ref{tab:importantProperties}.
Specifically, FGSM~\cite{FGSM} is crafted for attacking traditional deep neural networks (DNNs).
Even though it can attack node embeddings with good computation and memory efficiency, it cannot handle the graph topology, which distinguishes GNNs from DNNs.
PGD~\cite{xu2019topology} is one of the state-of-the-art works designed for GNN attacks. However, it does not support attacks on node features, and it suffers from a quadratic memory overhead due to maintaining the large matrix of edge gradients in a dense format.
For example, a graph with $N$ nodes leads to a dense $N\times N$ gradient matrix, consuming more than $10$GB memory for a moderate-size graph with only $50,000$ nodes.
Another work, Nettack~\cite{zugner2018adversarial}, while performing well in three aspects, finds its shortcoming in computation efficiency due to a fine-grained edge manipulation in a very inefficient trial-and-error manner.
\begin{table}[t] \small
\centering
\caption{Comparison with the Existing Attack Methods.}
\vspace{-5pt}
\scalebox{0.9}{
\begin{tabular}{|c|c c c c|}
\hline
\textbf{Method} & \textbf{Topology} & \textbf{Feature} & \textbf{Comp. Effi.} & \textbf{Mem. Effi.}\\
\Xhline{2\arrayrulewidth}
FGSM & \xmark & \Checkmark & \Checkmark & \Checkmark \\
PGD & \Checkmark & \xmark & \Checkmark & \xmark \\
Nettack & \Checkmark & \Checkmark & \xmark & \Checkmark \\
\textbf{\Mname} & \Checkmark & \Checkmark & \Checkmark & \Checkmark \\
\hline
\end{tabular}}
\vspace{-10pt}
\label{tab:importantProperties}
\vspace{-5pt}
\end{table}
To overcome these challenges, we propose \Mname, the first ADMM-attack on GNNs with the alternating direction method of multipliers (ADMM), which iteratively maximizes the training loss through modifying the graph topology and the node features.
ADMM has been shown to be effective in dividing-and-conquering a large non-convex optimization problem into multiple smaller ones for achieving both efficiency and scalability \cite{ijcai2017-228,DBLP:conf/aaai/LengDLZJ18,ADMM-AutoML,ADMM-NN}.
As shown in the last line of Table~\ref{tab:importantProperties}, \Mname~can attack both the graph topology and the node features, while maintaining computation and memory efficiency to a great extent, making it a viable and promising solution for large-scale graphs.
In summary, our major contributions are:
\begin{itemize}
\item We identify and analyze the key properties of the effective adversarial attacks on GNNs, where none of the existing methods could address all of them systematically and comprehensively.
\item We propose \Mname~to effectively generate adversarial examples on graph neural networks based on both topology attack and feature attack.
We formulate \Mname~with the ADMM optimization framework and achieve both high memory and computation efficiency.
\item Evaluation shows our proposed method can launch more effective attacks compared with the state-of-the-art approaches while reducing the computation and memory overhead, making it applicable towards large graph settings.
\end{itemize}
\section{Related Work} \label{sec:related_work}
\iffalse
\paragraph{Graph Neural Network}
Graph Neural Networks (GNNs) are now becoming a major way for gaining insights from the graph structures. It generally includes several graph convolutional layers, each of which consists of a neighbor aggregation and a node update step. It computes the embedding for node $v$ at layer $k+1$ based on its embedding at layer $k$, where $k \geq 0$.
\begin{gather} \small \label{eq: GNN}
\begin{aligned}
a_{v}^{k+1} &= Aggregate^{k+1}({h_{u}^k|u\in Neighbor(v)\cup h_v^k}) \\
h_{v}^{k+1} &= Update^{k+1}(a_{v}^{k+1})
\end{aligned}
\end{gather}
As shown in Equation~\ref{eq: GNN}, $h_{v}^{k}$ is the embedding vector for node $v$ at layer $k$. $a_{v}^{k+1}$ is the aggregation results through collecting neighbors' information (\textit{e.g.}, node embeddings). The aggregation method could vary across different GNNs. Some methods just purely depend on the properties of neighbors while others also leverage the edge properties, such as weights. The update function is generally composed of a single fully connected layer or multi-layer perceptron (MLP) for NN-operations on both aggregated neighbor results and its current embedding at layer $k$.
\fi
\textbf{Graph Adversarial Attacks.}
Graph adversarial attacks aim to maximize the accuracy drop on GNN models by introducing the perturbations, such as the modifications of the graph topology and the node representations (feature embeddings).
Existing GNN attacks can be broadly classified into two major categories, \textit{poisoning} attacks~\cite{zugner2018adversarial,zugner2019adversarial} and \textit{evasion} attacks~\cite{dai2018adversarial}, depending on when they happen. The former (poisoning attack) happens during the training of the GNN through modifying training data, while the latter (evasion attack) takes place during GNN inference by changing test data samples. Our proposed \Mname~is a comprehensive attack method that covers both types of attack while offering significant computation and memory efficiency compared with existing attack approaches.
\vspace{4pt}
\noindent \textbf{ADMM Framework.}
ADMM \cite{ADMM} is an optimization framework that is effective in decomposing and solving optimization problems under constraints.
The theoretical results of ADMM have been explored in \cite{ADMM_theory1, ADMM_theory2, ADMM_theory3,ADMM_Theory4} for various convex problems under diverse constraints and are shown to have linear convergence.
Formally, ADMM can effectively solve an optimization problem under linear constraints
\begin{equation} \small
\begin{split}
\min_{\mathbf{x},\mathbf{z}} \;\;\;\; & f(\mathbf{x}) + g(\mathbf{z}) \\
\text{subject to} \;\;\;\; & A\mathbf{x}+B\mathbf{z} = \mathbf{c}
\end{split}
\end{equation}
where $f(\mathbf{x})$ and $g(\mathbf{z})$ can be either differentiable or non-differentiable but have exploitable structural properties.
Then, by introducing the augmented Lagrangian function, ADMM can break the problem into two subproblems in $\mathbf{x}$ and $\mathbf{z}$ and iteratively solve each subproblem at one time.
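To make this splitting concrete, here is a toy ADMM instance (unrelated to the graph setting) for $\min_{x,z} \|x-a\|_2^2 + \lambda\|z\|_1$ subject to $x = z$, showing the alternating subproblem and dual updates in scaled form; the variable names and parameters are ours:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso_prox(a, lam, rho=1.0, iters=200):
    """Toy ADMM for min ||x - a||_2^2 + lam*||z||_1 s.t. x = z, in scaled
    form: a smooth x-subproblem, a non-smooth z-subproblem, and a dual
    ascent step. Illustrative only; not the graph-attack objective."""
    a = np.asarray(a, dtype=float)
    x = z = u = np.zeros_like(a)
    for _ in range(iters):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)   # smooth subproblem
        z = soft_threshold(x + u, lam / rho)          # non-smooth subproblem
        u = u + (x - z)                               # dual update
    return z
```

Since the overall problem here is strongly convex, ADMM converges to the closed-form solution $\mathrm{soft}(a, \lambda/2)$.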
While the popular stochastic gradient descent (SGD) method can solve many optimization problems, it cannot effectively handle diverse constraints (\textit{e.g.}, equality constraints between variables) and usually requires ad-hoc modifications of the gradients.
Although ADMM is originally developed to solve convex problems, recent studies successfully exploit ADMM to solve NP-hard non-convex problems under constraints in CNN pruning \cite{ijcai2017-228,ADMM-Pruning}, compressive sensing \cite{NIPS2016_6406}, Auto-ML \cite{ADMM-AutoML}, Top-K feature selection \cite{ijcai2017-228}, and hardware design \cite{ADMM-NN}.
In this paper, we focus on exploring the benefits of the ADMM framework in the context of graph-based adversarial attacks.
\section{Methodology} \label{sec:methodology}
\subsection{Problem Formulation of Scalable Graph Attack}
We first define the notation in this paper, as summarized in Table \ref{tab:notation}.
We consider an input graph $G = (A, X)$, where $A \in \{0,1\}^{N\times N}$ is the adjacency matrix encoding the edge connections between nodes, $X \in \mathcal{R}^{N\times D}$ is the associated node feature matrix, $N$ is the number of nodes in the graph, and $D$ is the feature dimension of each node.
Here, only a portion of the nodes, $V_{GT}$, are labeled, and the goal of node classification is to predict labels for the remaining nodes.
Following the common practice in the field \cite{xu2019topology, zugner2018adversarial}, we focus on the well-established work that utilizes graph convolutional layers \cite{GCNConv} for node classification.
Formally, the $k^{th}$ layer is defined as
\begin{equation} \small
H^{(k+1)} = \Tilde{A}H^{(k)}W^{(k)}
\end{equation}
where $\Tilde{A} = \Tilde{D}^{-\frac{1}{2}}(A+I_N)\Tilde{D}^{-\frac{1}{2}}$ to achieve numerical stability, $\Tilde{D}$ is a diagonal matrix with $\Tilde{D}_{ii} = \sum_j (A+I_N)_{ij}$, and $W^{(k)}$ is the weight matrix of the $k^{th}$ layer.
Since the memory and the computation complexity generally increase as the number of layers increases, we focus on a single layer GCN that is tractable and still captures the idea of graph convolutions:
\begin{equation} \small
Z = softmax(\Tilde{A}XW)
\end{equation}
Here, the output $Z \in [0,1]^{N\times c}$ contains the predicted class probabilities for individual nodes, from which the predicted labels are taken as the per-row maxima.
The parameter $W$ is learned by minimizing the cross-entropy on the output of labeled nodes.
\begin{equation} \small
\ell(A,X;W, Y_{GT}) = -\sum_{v\in V_{GT}} \ln \; Z_{v, Y_v}
\end{equation}
where $Y_v$ is the ground truth label for node $v$.
Our experiments show that adversarial examples on this single layer GCN model transfer well to GCNs with various layers and other GNN models such as GAT.
Besides, recent studies \cite{zugner2018adversarial} show that an $L$-layer GCN can also be treated as $\sigma(A^LXW^L)$ to generate adversarial attacks with tractable computation.
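The single-layer forward pass $Z = \mathrm{softmax}(\Tilde{A}XW)$ and the training loss above can be sketched in plain numpy; this is an illustrative sketch with our own variable names, not the authors' implementation:

```python
import numpy as np

def gcn_forward(A, X, W):
    """Single-layer GCN: Z = softmax(A_hat @ X @ W) with the symmetrically
    normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    logits = A_hat @ X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def training_loss(Z, labels, labelled):
    """Cross-entropy over the labelled nodes V_GT, as in the loss above."""
    return -np.log(Z[labelled, labels[labelled]]).sum()
```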
In this paper, SAG aims to generate a perturbed adjacency matrix $\Tilde{A}\in \{0,1\}^{N\times N}$ and a perturbed feature matrix $\Tilde{X} \in \mathcal{R}^{N \times D}$ satisfying a pre-defined perturbation budget.
Similar to \cite{xu2019topology}, we use a Boolean matrix $S \in \{0,1\}^{N\times N}$ to record the edge perturbations that $S_{ij}=1$ indicates the perturbation of edge between node $i$ and node $j$.
Given the original adjacency matrix $A$ and its supplement matrix $\bar{A}$ (\textit{i.e.}, $\bar{A}_{i,j} = \neg A_{i,j}$), we can generate the perturbed adjacency matrix as
$\Tilde{A} = A + (\bar{A} - A) \circ S$,
where $\circ$ is the Hadamard product.
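The perturbation rule $\Tilde{A} = A + (\bar{A} - A) \circ S$ simply flips exactly the entries selected by $S$: existing edges are removed and missing edges are added. A minimal sketch:

```python
import numpy as np

def perturb_adjacency(A, S):
    """Apply the edge-perturbation matrix: A_tilde = A + (A_bar - A) o S,
    with A_bar the complement of A. Entries with S_ij = 1 are flipped."""
    A_bar = 1 - A
    return A + (A_bar - A) * S
```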
Formally, given the edge perturbation budget $\epsilon_A$ and the feature perturbation budget $\epsilon_X$, \Mname~aims to solve the following optimization problem
\begin{equation} \label{eq:optimization1} \small
\begin{split}
\min_{S,\Tilde{X}} \;\;\;\;\; & -\ell(S, \Tilde{X}) \\
\text{s.t.} \;\;\;\;\; & ||\Tilde{X} - X ||_2^2 \leq \epsilon_X \\
& 1^TS \leq \epsilon_A, S \in \{0,1\} ^{N\times N}
\end{split}
\end{equation}
For notation simplicity, we use $\ell(S,\Tilde{X})$ to represent the cross-entropy loss $\ell(S, \Tilde{X}; W, Y_{GT})$ when the context is clear.
Following the common practice~\cite{zugner2018adversarial, xu2019topology} in the field, we consider two threat models -- the evasion attack and the poisoning attack.
The evasion attack assumes that the GNN model is fixed and targets the test data.
The poisoning attack targets the training data and performs the model training phase on the perturbed data after the attack.
There are several challenges in solving this optimization problem.
First, the popular stochastic gradient descent (SGD) methods cannot effectively solve optimization problems under constraints and usually require ad-hoc modification of the gradients to satisfy such constraints.
Second, the discrete modification of the edge perturbations makes it a combinatorial optimization problem. \cite{zugner2018adversarial} attacks the graph topology by changing one edge at a time and selecting the edges that achieve the maximum loss.
However, this approach must try all edges in a trial-and-error manner, leading to prohibitive time cost.
\cite{xu2019topology} takes the gradient on the $S$ matrix by relaxing the requirement $S \in \{0,1\}^{N \times N}$ to $S \in [0,1]^{N \times N}$.
However, this approach takes $N\times N$ memory to store the gradient matrix, which is prohibitive for large graphs with tens of thousands of nodes.
Besides, \cite{xu2019topology} only supports topology attack and cannot conduct joint attack on the node features.
We detail how ADMM-based SAG solves the constrained optimization problem in later sections.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{imgs/overview.pdf}
\caption{Overview of SAG.}
\label{fig:overview}
\vspace{-16pt}
\end{figure}
\subsection{Scalable Graph Attack Framework using ADMM}
SAG achieves a scalable attack on graph data by splitting the matrix $S$ into multiple partitions and considering one partition at a time, as illustrated in Figure~\ref{fig:overview}.
Supposing we split the graph into $M$ partitions, we only need to take the gradient of a small matrix with shape $\frac{N}{M} \times N$ that can easily fit into memory.
Similar to \cite{xu2019topology} and the probabilistic view of the node classification problem, we first relax the discrete constraints $S \in \{0,1\}^{N\times N}$ to continuous constraints $S \in [0,1]^{N\times N}$, where $S_{ij}$ indicates the probability that an edge needs to be perturbed for the attack.
Since the impact between node $i$ and node $j$ may be asymmetric (\textit{i.e.}, the impact of node $i$ on node $j$ may differ from the reverse), we do not apply a symmetry constraint on $S$.
Then, we split $S \in [0,1]^{N \times N}$ into $M$ sub-matrices such that $S_i \in [0,1]^{(N/M) \times N}$, where $S_i$ considers the nodes with index between $[\floor*{i*\frac{N}{M}}, \floor*{(i+1)*\frac{N}{M}}]$.
Due to the cumulative property in cross-entropy loss, we can attack by solving the following optimization problem
\begin{equation} \label{eq:optimization2} \small
\begin{split}
\min_{S_i,\Tilde{X}} \;\;\;\;\; & -\sum_{i=1}^M \ell(S_i, \Tilde{X}) \\
\text{s.t.} \;\;\;\;\; & ||\Tilde{X} - X ||_2^2 \leq \epsilon_X\\
& 1^TS_i \leq \epsilon_{i}, S_i \in [0,1]^{(N/M)\times N}
\end{split}
\end{equation}
Here, $\epsilon_i$ represents the allowed number of edges to change in each graph partition $S_i$ and we set it to be $\frac{\epsilon_A}{M}$ for simplicity.
Ideally, we can split the problem~\ref{eq:optimization2} into $M$ sub-problems and solve them independently for memory efficiency.
However, problem~\ref{eq:optimization2} still has interaction between $M$ sub-problems on the $\Tilde{X}$ term.
To this end, we further reformulate problem \ref{eq:optimization2} into the following form by substituting $\Tilde{X} \in \mathcal{R}^{N \times D}$ with duplicated feature matrices $\Tilde{X}_i \in \mathcal{R}^{N \times D}$
\begin{equation} \label{eq:optimization4} \small
\begin{split}
\min_{S_i, \Tilde{X}_i} \; & -\sum_{i=1}^M \ell(S_i, \Tilde{X}_i) + \sum_{i=1}^M I_{C_{Si}}(S_i) + \sum_{i=1}^M I_{C_{Xi}}(\Tilde{X}_i)\\
\text{s.t.} \;\;\;\; & \Tilde{X_i} = \Tilde{X}_{i+1}, \;\;\;\; i\in \{1,2,...,M\} \\
\end{split}
\end{equation}
For notation simplicity, we use $\Tilde{X}_{M+1}$ to represent $\Tilde{X}_1$.
$I_C(x)$ is an indicator function such that $I_C(X) = 0$ if $X\in C$, otherwise $I_C(X) = \infty$.
$C_{Si}$ and $C_{Xi}$ are feasible sets
\begin{equation} \small
\begin{split}
C_{Si} & = \{S_i \;| \; S_i \in [0,1]^{\frac{N}{M}\times N}, \;\;1^TS_i \leq \epsilon_{i} \} \\
C_{Xi} & = \{\Tilde{X_i} \;| \; ||\Tilde{X_i}-X||_2^2 \leq \epsilon_X \}\\
\end{split}
\end{equation}
\iffalse
\begin{equation}
\begin{split}
\min_{S_1, S_2, V, Z} \;\;\;\;\; & -\ell(S_1, V) -\ell(S_2, Z) \\
\text{s.t.} \;\;\;\;\; & V = Z \\
& ||V - X ||_2 \leq \epsilon_X, \;\; ||Z - X ||_2 \leq \epsilon_X \\
& 1^TS_1 \leq \epsilon_{1}, \;\; 1^TS_2 \leq \epsilon_2 \\
& S_1 \in [0,1]^{(N/2)\times N}, \;\; S_2 \in [0,1]^{(N/2)\times N}
\end{split}
\end{equation}
\fi
Here, the popular SGD cannot be easily applied to the optimization problem \ref{eq:optimization4} under constraints, especially the equality ones.
To this end, we can adopt the ADMM framework to systematically solve the problem.
First, we form the augmented Lagrangian function
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{equation} \small
\begin{split}
L_\rho(\Tilde{X}_i, S_i, \mu_i) = & -\sum_{i=1}^M\ell(S_i, \Tilde{X}_i) + \sum_{i=1}^M I_{C_{Xi}}(\Tilde{X}_i) + \sum_{i=1}^M I_{C_{Si}}(S_i) \\
& + \frac{\rho}{2}\sum_{i=1}^M||\Tilde{X}_i - \Tilde{X}_{i+1}||^2 + \sum_{i=1}^M \mu_i^T(\Tilde{X}_i - \Tilde{X}_{i+1})
\end{split}
\end{equation}
\end{minipage}
}
where $\rho > 0$ is a hyper-parameter and $\mu_i \in \mathcal{R}^{N \times D}$ is the dual variable.
\iffalse
\begin{equation} \small
\begin{split}
L_\rho(\Tilde{X}_i, S_i, \mu_i) = & -\sum_{i=1}^M\ell(S_i, \Tilde{X}_i) + \sum_{i=1}^M I_{C_{Xi}}(\Tilde{X}_i) \\
& + \sum_{i=1}^M I_{C_{Si}}(S_i) \\
& + \frac{\rho}{2}\sum_{i=1}^M||\Tilde{X}_i - \Tilde{X}_{i+1}||^2\\
& + \sum_{i=1}^M \mu_i^T(\Tilde{X}_i - \Tilde{X}_{i+1})
\end{split}
\end{equation}
\fi
Following the ADMM framework, we can solve the problem \ref{eq:optimization4} by repeating for $k \in \{1,2,...,K\}$ iterations and, in each iteration, solving each $S_i$ and $\Tilde{X}_i$ individually
\begin{equation} \small \label{eq:ADMMForm}
\begin{split}
\Tilde{X}_i^{(k+1)} &= \operatornamewithlimits{argmin}_{\Tilde{X}_i}L_\rho(\Tilde{X}_i, S_i^{(k)}, \mu_i^{(k)})\\
S_i^{(k+1)} &= \operatornamewithlimits{argmin}_{S_i}L_\rho( \Tilde{X}_i^{(k+1)}, S_i, \mu_i^{(k)}) \\
\mu_i^{(k+1)} &= \mu_i^{(k)} + \rho(\Tilde{X}_i^{(k+1)}-\Tilde{X}_{i+1}^{(k+1)})
\end{split}
\end{equation}
which are, respectively, the feature update, the topology update, and the dual update.
We stress that we only need to solve the minimization problem for a single graph partition $S_i\in \mathcal{R}^{\frac{N}{M} \times N}$, leading to much reduced memory consumption compared to \cite{xu2019topology}, which requires solving for the whole graph matrix $S \in \mathcal{R}^{N \times N}$ at once.
Here, the main memory overhead comes from the duplicated feature $\Tilde{X}_i \in \mathcal{R}^{N \times D}$.
However, the feature dimension $D$ is usually a fixed number around $1000$, which is much smaller than the number of nodes $N$ that may reach tens of thousands, or even millions.
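A back-of-the-envelope comparison of the two footprints makes this concrete (4-byte floats assumed; the partition count $M$ is a free parameter of our sketch). For $N = 50{,}000$ the full dense matrix reaches the $\sim$10\,GB quoted earlier, while a partitioned state stays around 1\,GB:

```python
def attack_memory_gb(N, D, M, bytes_per=4):
    """Rough memory (GB) of the gradient state: a full dense S is N*N
    floats; one partition is (N/M)*N floats plus the duplicated feature
    copy of N*D floats. Illustrative accounting only."""
    full = N * N * bytes_per
    partitioned = (N // M) * N * bytes_per + N * D * bytes_per
    return full / 1e9, partitioned / 1e9
```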
\subsection{Algorithm Subroutines for SAG Optimization}
In this section, we present how to efficiently solve the above problem and derive closed-form formulas for the individual minimization problems.
\vspace{3pt}
\noindent \textbf{Feature Update.}
In feature update, we aim to find the feature $\Tilde{X}_i$ that minimizes
\begin{equation} \small
-\ell(S_i^{(k)}, \Tilde{X}_i) + I_{C_{Xi}}(\Tilde{X}_i) + \frac{\rho}{2} ||\Tilde{X}_i - \Tilde{X}_{i+1}^{(k)}||^2
+ \mu_i^{(k)\;T}(\Tilde{X}_i - \Tilde{X}_{i+1}^{(k)} )
\end{equation}
Here, $\ell(\cdot, \cdot)$ is the cross-entropy loss on the GNN predictions; all terms except the indicator $I_{C_{Xi}}(\cdot)$ are differentiable, while $I_{C_{Xi}}(\cdot)$ cannot be easily handled with gradient descent due to its $\infty$ values for $\Tilde{X}_i \notin C_{Xi}$.
To this end, we adopt a two-step strategy for the feature update.
For the differentiable terms, we use the gradient descent method to obtain the gradient $g_{X_i}^{(k)}$ on $\Tilde{X}_i$ and get a pseudo-perturbed feature $\Tilde{X}_i^{(k+1)'}$
\begin{equation} \small
\Tilde{X}_i^{(k+1)'} = \Tilde{X}_i^{(k)} - \eta_k \cdot g_{X_i}^{(k)}
\end{equation}
where $\eta_k$ is the learning rate at iteration $k$.
Note that the update on feature $\Tilde{X}_i$ depends only on the graph partition $S_i^{(k)}$ and does not involve the remaining graph partitions.
For the term $I_{C_{Xi}}(\cdot)$, we refer to the projection method by projecting the pseudo-perturbed feature onto the feasible set $C_{Xi}$:
\begin{equation} \label{eq:featureUpdate} \small
\begin{split}
\Tilde{X}_i^{(k+1)} & = \Pi_{C_{Xi}}[\Tilde{X}_i^{(k)} - \eta_k \cdot g_{X_i}^{(k)}] \\
g_{X_i}^{(k)} & = -\frac{\partial}{\partial \Tilde{X}_i} \ell(S_i^{(k)}, \Tilde{X}_i^{(k)}) + \rho (\Tilde{X}_i^{(k)} - \Tilde{X}_{i+1}^{(k)}) + \mu_i^{(k)}
\end{split}
\end{equation}
While computing a projection is in general a difficult task that usually requires an iterative procedure, we exploit the special structure of $C_{Xi}$ and derive a closed-form formula to solve it analytically.
\vspace{2pt}
\noindent \textbf{Proposition 1}. Given $C_{X} = \{a \;| \; ||a-X||_2^2 \leq \epsilon_X\}$, the projection of $a$ to $C_X$ is
\begin{equation} \small
\Pi_{C_{X}}(a) = \begin{cases}
\frac{a+uX}{1+u} \;\; &\text{if $||a-X||_2^2 > \epsilon_X$, with $u = \sqrt{\frac{||a-X||_2^2}{\epsilon_X}} -1 $} \\
a \;\; & \text{if $||a-X||_2^2 \leq \epsilon_X$}
\end{cases}
\end{equation}
\noindent \textbf{Proof:}
$\Pi_{C_X}(a)$ can be viewed as an optimization problem
\begin{equation} \small
\begin{split}
\min_R \;\;\;\; & || R - a||_2^2 \\
\text{s.t.} \;\;\;\; & (R-X)^T(R-X) \leq\epsilon_X
\end{split}
\end{equation}
We can derive its Lagrangian function as
\begin{equation} \small
L(R,u) = || R - a||_2^2 + u[(R-X)^T(R-X) - \epsilon_X]
\end{equation}
where $u \geq 0$.
Using the KKT condition we have the stationary condition that
\begin{equation} \small
\frac{\partial L}{\partial R} = 2(R-a) + 2u(R-X) = 0
\end{equation}
and get that $R = \frac{a+uX}{1+u}$.
We can also get the complementary slackness that
\begin{equation} \small
u[(R-X)^T(R-X) - \epsilon_X] = 0
\end{equation}
If $u>0$, we need to have $R = \frac{a+uX}{1+u}$ and $(R-X)^T(R-X) = \epsilon_X$.
By reformulating, we can have $u = \sqrt{\frac{||a-X||_2^2}{\epsilon_X}} -1$.
If $u=0$ and $||a-X||_2^2 \leq \epsilon_X$, we have $R = a$.
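As an illustration of Proposition 1, the projection admits the following short NumPy sketch (our own illustrative helper, not the paper's implementation):

```python
import numpy as np

def project_l2_ball(a, X, eps_X):
    """Closed-form projection of `a` onto C_X = {r : ||r - X||_2^2 <= eps_X}.

    Inside the ball the point is returned unchanged (the u = 0 KKT case);
    otherwise u = sqrt(||a - X||^2 / eps_X) - 1 > 0 places the projection
    on the boundary, since r - X = (a - X) / (1 + u).
    """
    d2 = float(np.sum((a - X) ** 2))
    if d2 <= eps_X:
        return a
    u = np.sqrt(d2 / eps_X) - 1.0
    return (a + u * X) / (1.0 + u)
```

For $a$ outside the ball, the returned point satisfies $||r-X||_2^2 = \epsilon_X$ exactly, matching the complementary slackness case $u>0$.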
\vspace{3pt}
\noindent \textbf{Topology Update.}
In topology update, we aim to minimize the following function by finding $S_{i}$
\begin{equation} \small
-\ell(S_i, \Tilde{X}_i^{(k+1)}) + I_{C_{Si}}(S_i)
\end{equation}
Similar to feature update, we can first use the gradient descent method to access the gradient $g_{S_i}^{(k)}$ on $S_i$ and then use the projection method to generate the perturbed topology in the feasible set $C_{S_i}$
\begin{equation} \small \label{eq:topologyUpdate}
\begin{split}
S_i^{(k+1)} & = \Pi_{C_{S_i}}[S_i^{(k)} - \eta_t \cdot g_{S_i}^{(k)}] \\
g_{S_i}^{(k)} & = -\frac{\partial}{\partial S_i} \ell(S_i^{(k)}, \Tilde{X}_i^{(k+1)}) \\
\end{split}
\end{equation}
where $\eta_t$ is the learning rate.
With the Lagrangian function and KKT conditions, we can derive a closed-form formula to analytically compute the projection onto the feasible set $C_{S_i}$. Due to the similarity with the proof of Proposition 1 and page limits, we leave the detailed proof to the appendix.
\noindent \textbf{Proposition 2.}
Given $C_{S_i} = \{S_i \;| \; S_i \in [0,1]^{\frac{N}{M}\times N}, 1^TS_i \leq \epsilon_i \}$, the projection of $a$ to $C_{S_i}$ is
\begin{equation} \small
\Pi_{C_{S_i}}(a) = \begin{cases}
P_{[0,1]}(a-u1) \;\; \text{if $u>0$ and $1^TP_{[0,1]}(a-u1) = \epsilon_i$} \\
P_{[0,1]}(a) \;\; \text{if $1^TP_{[0,1]}(a) \leq \epsilon_i$}
\end{cases}
\end{equation}
where $P_{[0,1]}(x) = x$ if $x\in [0,1]$; $0$ if $x<0$; $1$ if $x>1$, applied element-wise.
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t] \small
\caption{SAG to solve Problem~\ref{eq:optimization4}.}
\label{alg:summary}
\SetAlgoLined
\textbf{Input:} Given A, X, fixed GNN weights $W$, learning rate $\eta_t$, epoch number $K$, and partition number $M$;
\textbf{Initialize:}
$S_i = A_i$, $X_i=X$
\For{$k = 1,2,...,K$}{
\For{$i = 1,2,...,M$}{
\textbf{Feature Update on $\Tilde{X}_i$:}
$\Tilde{X}_i^{(k+1)} = \Pi_{C_{Xi}}[\Tilde{X}_i^{(k)} - \eta_k \cdot g_{X_i}^{(k)}]$ with Eq.~\ref{eq:featureUpdate}.
\textbf{Topology Update on $S_i$:}
$S_i^{(k+1)} = \Pi_{C_{S_i}}[S_i^{(k)} - \eta_t \cdot g_{S_i}^{(k)}]$ with Eq.~\ref{eq:topologyUpdate}.
\textbf{Dual Update on $\mu_i$:}
$\mu_i^{(k+1)} = \mu_i^{(k)} + \rho \cdot (\Tilde{X}_i^{(k+1)}-\Tilde{X}_{i+1}^{(k+1)})$ with Eq.~\ref{eq:ADMMForm}.
}
}
Sample and generate the final perturbed matrix
\end{algorithm}
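To make the alternating primal/dual updates in Algorithm 1 concrete, the following toy sketch (our illustration, not the paper's code) runs two-block consensus ADMM on scalar quadratics $f(x)=\tfrac12(x-a)^2$ and $g(z)=\tfrac12(z-b)^2$ under the constraint $x=z$; the last line mirrors the $\mu_i$ dual update:

```python
def toy_consensus_admm(a, b, rho=1.0, iters=50):
    """Two-block scaled-form ADMM for min 0.5*(x-a)^2 + 0.5*(z-b)^2 s.t. x = z.

    Both subproblems are quadratic, so each update has a closed form,
    analogous to the per-partition feature updates in Algorithm 1.
    """
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # primal update on x
        z = (b + rho * (x + u)) / (1.0 + rho)   # primal update on z
        u = u + x - z                           # scaled dual update
    return x, z
```

The two copies converge to the consensus minimizer $(a+b)/2$, mirroring how the $\Tilde{X}_i$ copies are driven to agreement by the dual updates.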
\vspace{4pt}
\noindent \textbf{Optimization and Complexity Analysis.}
We summarize our SAG in Algorithm \ref{alg:summary}.
There are two optimization loops.
In the inner loop, we iterate through $M$ graph partitions and update the corresponding partitioned graph perturbation $S_i$ and feature perturbation $\Tilde{X}_i$.
At each iteration, only a single graph partition $S_i$ needs to be considered, and substantial memory is saved by not materializing the other $M-1$ partitions.
In the outer loop, we repeat the optimization for $K$ (=200 by default) iterations for the algorithm to converge.
After $K$ iterations, the $\Tilde{X}_i$ become identical to each other and can be directly used as the feature perturbation $\Tilde{X}$.
Recalling that each entry of $S_i$ is the probability that the corresponding edge is perturbed for the attack, we sample a $0\text{-}1$-valued edge between each pair of nodes from a Bernoulli distribution.
We repeat this sampling procedure $20$ times and select the sample with the minimal loss as the final perturbed topology $S \in \mathcal{R}^{N \times N}$.
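The sampling step can be sketched as follows (illustrative only; `loss_fn` stands in for the attack loss evaluated on a candidate binary topology):

```python
import numpy as np

def sample_topology(S_prob, loss_fn, n_trials=20, seed=0):
    """Draw 0-1 edge perturbations from the Bernoulli probabilities in
    `S_prob` and keep the sample with the smallest loss."""
    rng = np.random.default_rng(seed)
    best_S, best_loss = None, np.inf
    for _ in range(n_trials):
        # Bernoulli draw: each edge is kept with its optimized probability
        S = (rng.random(S_prob.shape) < S_prob).astype(np.int8)
        loss = loss_fn(S)
        if loss < best_loss:
            best_S, best_loss = S, loss
    return best_S
```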
The memory complexity of \Mname~is
\begin{equation} \small
O(\frac{N}{M} \cdot N + N\cdot D)
\end{equation}
for storing graph partition $S_i$ and feature perturbation $\Tilde{X}_i$, respectively, since we only need to optimize one graph partition at each iteration.
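As a rough back-of-the-envelope check (our own sketch, assuming 4-byte floats), the per-iteration working set implied by this complexity can be estimated as:

```python
def partition_memory_bytes(N, D, M, bytes_per_float=4):
    """Working-set estimate for one inner iteration: a topology partition
    S_i of shape (N/M, N) plus a feature perturbation of shape (N, D)."""
    return bytes_per_float * ((N // M) * N + N * D)
```

For Pubmed ($N = 19{,}717$), the unpartitioned $N \times N$ matrix alone already exceeds $1.5$ GB, consistent with the discussion of PGD's memory consumption below.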
We store only the current graph partition in the limited GPU memory (around 10 GB) and offload the remaining graph partitions to the large host memory (more than 50 GB).
Note that the same implementation strategy cannot be applied to \cite{xu2019topology} which requires attacking the whole graph $S \in \mathcal{R}^{N\times N}$ at each iteration.
\section{Evaluation}
In this section, we evaluate SAG on five datasets and compare with three attack algorithms to show its effectiveness.
\noindent\textbf{Datasets.}
In this experiment, we select five datasets (\textit{Citeseer}, \textit{Cora}, \textit{Pubmed}, and \textit{Amazon-Computer/Photo}) that cover a broad range of GNN input sizes and are widely used in GNN papers~\cite{GCNConv, GINConv, SageConv}.
Details of these datasets are listed in Table~\ref{tab:dataset}.
\vspace{3pt}
\noindent\textbf{Baselines.}
To evaluate the effectiveness of SAG, we compare it with the state-of-the-art attack methods by using the adversarial attack repository DeepRobust~\footnote{\url{https://github.com/DSE-MSU/DeepRobust.git}}.
\begin{itemize}
\item \textbf{FGSM} \cite{FGSM} is a gradient-based adversarial attack that generates adversarial examples by perturbing the input features.
While originally designed for attacking CNNs, it can be easily adapted to attack GNNs by perturbing node features.
\item \textbf{PGD} \cite{xu2019topology} is another gradient-based attack method tailored for discrete graph data.
The reason to select PGD for comparison is that it provides fast attack on discrete graph data by leveraging an optimization-based approach.
\item \textbf{Nettack} \cite{zugner2018adversarial} is the most popular attack algorithm on graph data by incorporating both edge and feature attacks.
We select Nettack for comparison because it serves as a strong baseline for SAG on the joint attack.
\end{itemize}
\noindent\textbf{Models.} \underline{\textit{Graph Convolutional Network}} (\textbf{GCN})~\cite{GCNConv} is one of the most popular GNN architectures. It has been widely adopted in node classification, graph classification, and link prediction tasks.
Besides, it is also the key backbone network for many other GNNs, such as GraphSage~\cite{SageConv} and differentiable pooling (Diffpool)~\cite{diffpool}. We use a hidden dimension size of 16 for each GCN layer. \underline{\textit{Graph Attention Network}} (\textbf{GAT})~\cite{GINConv}, another typical category of GNN, aims to capture graph structure that cannot be identified by GCN.
GAT differs from GCN in its aggregation function, which assigns different weights to different nodes during aggregation. We use 8 hidden dimensions and 8 attention heads for each GAT layer.
\vspace{3pt}
\noindent \textbf{Platforms.}
We implement SAG based on PyTorch Geometric~\cite{PyG}.
We evaluate SAG on Dell T7910 (Ubuntu 18.04) with Intel Xeon CPU E5-2603, 64 GB host memory, and an NVIDIA 1080Ti GPU with 12 GB memory.
\begin{table}[t] \small
\caption{Datasets for Evaluation.}
\label{tab:dataset}
\vspace{-3pt}
\centering
\begin{tabular}{ l c c c c }
\Xhline{2\arrayrulewidth}
\textbf{Dataset} & \textbf{\#Vertex} & \textbf{\#Edge} & \textbf{\#Dim} & \textbf{{\#Class}}\\
\Xhline{2\arrayrulewidth}
Cora & 2,708 & 10,858 & 1,433 & 7 \\
Citeseer & 3,327 & 9,464 & 3,703 & 6 \\
Amazon-Photo & 7,487 & 119,043 & 745 & 8 \\
Amazon-Computer & 13,381 & 245,778 & 767 & 10 \\
Pubmed & 19,717 & 88,676 & 500 & 3 \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{3pt}
\end{table}
\begin{table*}[t] \small
\centering
\caption{Evaluation of \Mname~with existing adversarial attacks.}
\vspace{-5pt}
\scalebox{0.93}{
\begin{threeparttable}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{Method} & \textbf{Time (min) } &\textbf{Mem. (GB)} & \textbf{Evasive Acc. (\%)} & \textbf{Poisoning Acc. (\%)}\\
\hline
\hline
\multirow{5}{*}{Cora}&Clean & 0 & 0 & 79.63 & 79.63 \\
& FGSM & 0.05 & 0.64 & 70.52 & 78.87 \\
& PGD & 0.25 & 1.03 & 75.70 & 70.77 \\
& Nettack & 14.75 & 0.68 & 69.62 & 71.35 \\
& \textbf{SAG} & 0.48 & 0.70 & \textbf{68.06} & \textbf{67.15} \\
\hline
\hline
\multirow{5}{*}{Citeseer}&Clean & 0 & 0 & 71.80 & 71.80 \\
& FGSM & 0.03 & 0.41 & 67.00 & 71.70 \\
& PGD & 0.18 & 0.97 & 69.08 & 67.30 \\
& Nettack & 12.63 & 0.60 & 62.91 & 67.54 \\
& \textbf{SAG} & 0.34 & 0.82 & \textbf{62.62} & \textbf{63.68}\\
\hline
\hline
\multirow{5}{*}{\thead{Amazon\\Photo}}&Clean & 0 & 0 & 93.11 & 93.11 \\
& FGSM & 0.05 & 1.17 & 89.13 & 91.37 \\
& PGD & 4.21 & 3.69 & 83.02 & 82.77\\
& Nettack & 1560 & 1.84 & 87.95 & 89.25\\
& \textbf{SAG} & 6.13 & 1.24 & \textbf{80.52} & \textbf{80.12} \\
\hline
\hline
\multirow{5}{*}{\thead{Amazon\\Computer}}&Clean & 0 & 0 & 89.29 & 89.29 \\
& FGSM & 3.28 & 2.67 & 85.62 & 88.03 \\
& PGD & 17.32 & 10.59 & 77.22 & 77.93 \\
& Nettack & 2578 & 4.35 & 82.41 & 85.33 \\
& \textbf{SAG} & 18.78 & 2.91 & \textbf{74.56} & \textbf{76.31} \\
\hline
\hline
\multirow{5}{*}{Pubmed}&Clean & 0 & 0 & 77.53 & 77.53 \\
& FGSM & 9.83 & 3.54 & 72.93 & 74.25 \\
& PGD & - & \underline{\textit{OOM}} & - & - \\
& Nettack & 3108 & 8.17 & 67.72 & 76.25 \\
& \textbf{SAG} & 47.62 & 3.63 & \textbf{62.71} & \textbf{36.27} \\
\hline
\end{tabular}
\begin{tablenotes}
\small
\item[1] \textbf{Note that ``-'' means such parameter is not applicable to the given setting.}
\item[2] \textbf{OOM refers to ``Out of Memory''.}
\end{tablenotes}
\end{threeparttable}
}
\vspace{-15pt}
\label{tab: Overall Quantization Performance (Accuracy, Average Bits, and Memory Saving}
\end{table*}
\vspace{3pt}
\noindent\textbf{Metrics.}
We evaluate \Mname~with six metrics -- \textit{evasive accuracy}, \textit{poisoning accuracy}, \textit{topology ratio}, \textit{feature ratio}, \textit{memory consumption}, and \textit{running time}.
Following the common setting~\cite{zugner2018adversarial, xu2019topology}, we report the \textbf{evasive accuracy} by assuming the GNN model is fixed and targeting the test data.
We report the \textbf{poisoning accuracy} by targeting the training data and perform the model training phase after the attack.
The \textbf{topology ratio} (\%) is computed as the number of attacked edges over the number of existing edges in the clean graph dataset.
The \textbf{feature ratio} (\%) is reported as the $L_2$ norm of perturbed features over the $L_2$ norm of the original features in the clean graph dataset.
To measure memory, we utilize NVProf to query the runtime GPU memory consumption at a sampling interval of $100$ ms.
To measure time, we leverage the software timer from the Python \texttt{time} library.
For a fair comparison, all gradient-based approaches are conducted for $200$ iterations.
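For concreteness, the two perturbation-budget metrics can be computed as follows (a sketch under our reading of the definitions above; `A`/`S` are the clean and perturbed adjacency matrices, `X`/`X_pert` the clean and perturbed features):

```python
import numpy as np

def attack_ratios(A, S, X, X_pert):
    """Topology ratio: attacked (changed) edges over edges in the clean
    graph; feature ratio: L2 norm of the feature perturbation over the
    L2 norm of the original features."""
    topo = np.sum(A != S) / np.sum(A != 0)
    feat = np.linalg.norm(X_pert - X) / np.linalg.norm(X)
    return float(topo), float(feat)
```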
\vspace{-5pt}
\subsection{Overall Performance}
Table~\ref{tab: Overall Quantization Performance (Accuracy, Average Bits, and Memory Saving} shows the overall performance comparison between \Mname~and existing adversarial attacks under the same setting of topology ratio and feature ratio.
Following the most common setting used by many previous papers~\cite{zugner2018adversarial, xu2019topology}, we select the same topology attack ratio of $5\%$ and feature attack ratio of $2\%$, and leave the study on diverse ratios to the ablation study.
For PGD and FGSM, we attack only the topology and the features, respectively, due to their limited attack capabilities.
In SAG, we stick to \textit{M}=2 and will exhibit the impact of \textit{M} in the ablation study.
We observe that \Mname~consistently outperforms state-of-the-art attack approaches such as FGSM, PGD, and Nettack on the evasive attack (up to $14.82\%$ accuracy drop) and the poisoning attack (up to $41.26\%$ accuracy drop) across different datasets.
On Pubmed dataset, \Mname~achieves $14.82\%$ accuracy drop in evasive attack and $41.26\%$ in poisoning attack.
The major reason for such success is that SAG enables the gradient-based joint optimization on both the features and topology while incorporating global reasoning on the interaction between attacking different nodes.
By contrast, FGSM and PGD attack only the features or the topology, and Nettack considers only one edge at a time, failing to reason about the global interaction across edges and nodes.
\begin{figure*}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\linewidth, height=3.0cm, trim=0 0.1cm 0 0]{imgs/trainingLoss-1.pdf}
\caption{Attack Loss}
\label{fig:attackLoss}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\linewidth, height=3cm, trim=0 0.1cm 0 0]{imgs/convergenceBehavior-1.pdf}
\caption{\small $||\Tilde{X}_i^{(k)} - \Tilde{X}_{i+1}^{(k)}||_2$}
\label{fig:convergenceBehavior-1}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\linewidth, height=3.1cm, trim=0 0.1cm 0 0]{imgs/convergenceBehavior-2.pdf}
\caption{$||\Tilde{X}_i^{(k+1)} - \Tilde{X}_i^{(k)}||_2$}
\label{fig:convergenceBehavior-2}
\end{subfigure}
\caption{Convergence Behavior of \Mname~on \textbf{Cora} and \textbf{Pubmed}.}
\vspace{-15pt}
\label{fig:convergenceBehavior}
\end{figure*}
Across different datasets and settings, we notice that Nettack always comes with the highest time cost.
The reason is that, at each iteration, it selects only one edge or feature to attack by examining all edges and node features, and repeats the procedure until reaching the topology ratio and the feature ratio.
Moreover, we observe that PGD usually has the highest memory consumption, since it requires a floating-point $N\times N$ matrix to store the edge gradients between each pair of nodes, where $N$ is the number of nodes.
For Pubmed, a single matrix of this shape requires at least $1.5$ GB to store, and PGD processes the whole matrix at each iteration instead of only a small partition, as SAG does.
Besides, in the time and memory comparison among these implementations, we notice the significant strength of \Mname, which achieves up to $254\times$ speedup and $3.6\times$ memory reduction.
This is largely due to our effective algorithmic and implementation optimizations, which reduce the runtime complexity while amortizing the memory overhead to a great extent.
\subsection{Ablation Studies}
In this ablation study, we focus on two representative datasets -- Cora and Pubmed -- to evaluate the performance of SAG on a small and a large graph dataset, respectively.
\noindent \textbf{ADMM Convergence Behavior.}
We show the ADMM convergence behavior in Figure ~\ref{fig:convergenceBehavior}.
Here, we adopt the same number of epochs, $K=200$, and show only the first $70$ epochs in Figure~\ref{fig:convergenceBehavior-1} and Figure~\ref{fig:convergenceBehavior-2} since SAG converges fast on $||\Tilde{X}_i^{(k)} - \Tilde{X}_{i+1}^{(k)}||_2$ and $||\Tilde{X}_i^{(k+1)} - \Tilde{X}_i^{(k)}||_2$.
We only present the result for $i=1$ since various $i$'s show similar results.
Overall, SAG converges gracefully as the number of epochs increases, demonstrating the effectiveness of our method.
In Figure~\ref{fig:attackLoss}, with the increase of the epoch number, SAG gradually increases the attack loss by perturbing the features and the topology.
In Figure \ref{fig:convergenceBehavior-1}, $||\Tilde{X}_i^{(k)} - \Tilde{X}_{i+1}^{(k)}||_2$ starts from 0 since we initialize both $\Tilde{X}_{i}$ and $\Tilde{X}_{i+1}$ to the node features of the clean graph dataset.
Comparing across datasets, Pubmed converges faster than Cora since Pubmed has a much smaller feature dimension than Cora.
\noindent \textbf{Topology and Feature Perturbation.}
For the poisoning attack on the Cora dataset (Table~\ref{tab:ablationCora}), increasing either the feature perturbation or the topology perturbation leads to an accuracy drop compared with the original clean data.
Besides, we observe that, at the same level of topology perturbation, $8\%$ feature perturbation leads to $3.6\%$ extra accuracy drop on average.
On the Pubmed dataset, we observe that $8\%$ feature perturbation leads to $9.4\%$ extra accuracy drop, averaged over various topology perturbation ratios.
These results show the benefit of attacking both topology and features.
\begin{table}[t] \small
\centering
\caption{Accuracy (\%) of \Mname~under the diverse ratio of perturbed edges and feature attacks on \textbf{Cora} and \textbf{Pubmed}.}
\vspace{-5pt}
\scalebox{0.97}{
\begin{tabular}{c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
\textbf{Cora} & \multicolumn{6}{c}{\textbf{Feature Perturbation (\%)}} \\
\hline
\multirow{6}{*}{\shortstack{\textbf{Topology} \\ \textbf{Perturbation} \\ \textbf{(\%)}}}& & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{4} & \textbf{8} \\
& \textbf{0} & 79.63 & 79.25 & 78.91 & 78.84 & 78.73 \\
& \textbf{5} & 70.47 & 69.11 & 68.71 & 66.52 & 65.59 \\
& \textbf{10} & 58.63 & 57.29 & 56.34 & 55.23 & 53.77 \\
& \textbf{15} & 50.15 & 49.60 & 47.53 & 47.04 & 46.73 \\
& \textbf{20} & 45.37 & 43.46 & 43.06 & 42.71 & 41.65 \\
\Xhline{2\arrayrulewidth}
\textbf{Pubmed} & \multicolumn{6}{c}{\textbf{Feature Perturbation (\%)}} \\
\hline
\multirow{6}{*}{\shortstack{\textbf{Topology} \\ \textbf{Perturbation} \\ \textbf{(\%)}}}& & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{4} & \textbf{8} \\
& \textbf{0} & 77.53 & 75.34 & 73.91 & 72.55 & 68.65 \\
& \textbf{5} & 49.19 & 39.14 & 36.27 & 31.65 & 28.53 \\
& \textbf{10} & 33.24 & 28.57 & 27.81 & 26.35 & 25.30 \\
& \textbf{15} & 28.04 & 26.05 & 23.59 & 23.04 & 22.34 \\
& \textbf{20} & 23.95 & 21.62 & 21.09 & 20.71 & 20.38 \\
\hline
\end{tabular}
}
\label{tab:ablationCora}
\vspace{-10pt}
\end{table}
\begin{table}[t] \small
\centering
\caption{Impact of \textit{M} for Poisoning Attack on \textbf{Pubmed}.}
\vspace{-5pt}
\begin{tabular}{c|c|c|c}
\Xhline{2\arrayrulewidth}
\multirow{2}{*}{\textbf{M}} & \textbf{Time} & \textbf{Memory} & \textbf{Accuracy Drop} \\
& \textbf{(min)}& \textbf{(GB)} & \textbf{(\%)}\\
\hline
\textbf{1} & 33 & 5.72 & 35.37 \\
\textbf{2} & 42 & 3.63 & 36.27\\
\textbf{4} & 62 & 2.75 & 36.18 \\
\textbf{8} & 95 & 2.21 & 35.97 \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\label{tab:ablationK}
\vspace{5pt}
\end{table}
\begin{table}[t]
\centering
\caption{SAG Transferability for \textbf{Poisoning} Attack.}
\vspace{-5pt}
\begin{threeparttable}
\begin{tabular}{c|c|c}
\Xhline{2\arrayrulewidth}
& \textbf{Cora (\%)} & \textbf{Pubmed (\%)} \\
\hline
\textbf{1-layer GCN} & 0.67 (0.80) & 0.35 (0.77)\\
\textbf{2-layer GCN} & 0.78 (0.83) & 0.80 (0.85) \\
\textbf{4-layer GCN} & 0.74 (0.81) & 0.76 (0.84)\\
\hline
\textbf{1-layer GAT} & 0.74 (0.82) & 0.37 (0.79)\\
\textbf{2-layer GAT} & 0.77 (0.83) & 0.78 (0.85) \\
\textbf{4-layer GAT} & 0.75 (0.80) & 0.73 (0.82)\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\begin{tablenotes}
\small
\item[1] Data Format: \textbf{attacked data acc. (clean data acc.).}
\end{tablenotes}
\end{threeparttable}
\label{tab:SAG Transferability for Poisoning Attack}
\vspace{3pt}
\end{table}
\noindent \textbf{M-value Impact.} We also evaluate SAG for the poisoning attack on Pubmed to show the impact of the hyperparameter \textit{M} (\textit{i.e.}, the number of graph partitions) on memory saving.
As shown in Table~\ref{tab:ablationK}, as the \textit{M} value increases, the memory reduction becomes significant, since splitting the graph into \textit{M} partitions and attacking one partition at a time essentially reduces the memory requirement.
We also observe a similar accuracy drop under different \textit{M}, since SAG converges gracefully and reaches similar optimal points for diverse \textit{M}.
Meanwhile, we also observe that increasing \textit{M} brings runtime overhead in terms of time cost; for example, the \textit{M}=8 setting is 33 minutes slower than the \textit{M}=4 setting.
This slowdown happens because we need to attack the split graphs individually at each time, leading to a small portion of system overhead on memory access.
This also leads to a tradeoff among these factors when selecting the value of \textit{M}.
We also observe that the memory consumption does not decrease linearly as \textit{M} increases.
The main reason is that, as the per-partition memory shrinks, memory consumption from other sources (\textit{e.g.}, loading the PyTorch framework and the features) becomes the dominant component.
\vspace{3pt}
\noindent \textbf{Transferability.} To demonstrate the transferability of \Mname, we further evaluate our attacked graphs (Cora and Pubmed) on GCNs (with 1, 2, and 4 layers) and GATs (with 1, 2, and 4 layers), respectively.
We generate adversarial examples on a 1-layer GCN model and conduct poisoning attack on other models by targeting the training data and training these models on the perturbed data.
As shown in Table~\ref{tab:SAG Transferability for Poisoning Attack}, \Mname~can effectively maximize the accuracy drop on Cora (up to 13\%) and Pubmed (up to 42\%).
The major reason for such success in launching the poisoning attack is that the adversarial attack on the 1-layer GCN effectively captures intrinsic properties of the graph data that are agnostic to the models.
We want to stress that, even on models with different numbers of layers (\textit{i.e.}, 2 and 4), the poisoned graph data can still achieve $8\%$ and $9\%$ accuracy drops on Cora and Pubmed, respectively.
These results demonstrate the transferability of \Mname~to models with diverse architectures and numbers of layers.
\section{Conclusion}
This work focuses on GNN robustness by giving an in-depth understanding of GNN's ``weakness''. We propose \Mname, the first scalable adversarial attack method with Alternating Direction Method of Multipliers (ADMM), which can successfully overcome the limitations of the previous solutions. Extensive experiments further highlight \Mname's advantage of reducing the computation and memory overhead over the existing approaches.
\section{Proof of Proposition 2}
In this section, we provide the proof for Proposition 2.
Similar to the proof of Proposition 1 and existing works on projection \cite{DBLP:conf/aaai/LengDLZJ18,ADMM-Pruning,xu2019topology}, we utilize the Lagrangian function and the KKT conditions to derive a closed-form formula to project a given input onto the feasible set $C_{S_i}$.
\noindent \textbf{Proposition 2.}
Given $C_{S_i} = \{S_i \;| \; S_i \in [0,1]^{\frac{N}{M}\times N}, 1^TS_i \leq \epsilon_i \}$, the projection of $a$ to $C_{S_i}$ is
\begin{equation} \small
\Pi_{C_{S_i}}(a) = \begin{cases}
P_{[0,1]}(a-u1) \;\; \text{if $u>0$ and $1^TP_{[0,1]}(a-u1) = \epsilon_i$} \\
P_{[0,1]}(a) \;\; \text{if $1^TP_{[0,1]}(a) \leq \epsilon_i$}
\end{cases}
\end{equation}
where $P_{[0,1]}(x) = x$ if $x\in [0,1]$; $0$ if $x<0$; $1$ if $x>1$, applied element-wise.
\noindent \textbf{Proof: }
We first transform the projection problem $\Pi_{C_{S_i}}$ into an optimization problem
\begin{equation} \small
\begin{split}
\min_R \;\;\;\; & \frac{1}{2}||R-a||_2^2 \\
\text{s.t.} \;\;\;\; & R \in [0,1]^{\frac{N}{M} \times N} \\
& 1^TR\leq \epsilon_i
\end{split}
\end{equation}
Then, we can derive its Lagrangian function as
\begin{equation} \small
L(R,u) = \frac{1}{2} ||R-a||_2^2 + I_{[0,1]}(R) + u(1^TR-\epsilon_i)
\end{equation}
where $u \geq 0$ is the dual variable.
Here, $I_{[0,1]}(R) = 0$ if $R \in [0,1]^{\frac{N}{M}\times N}$, and $\infty$ otherwise.
Using the KKT condition, we have the stationary condition that
\begin{equation} \small
\frac{\partial L}{\partial R} = (R-a) + u1 + \frac{\partial}{\partial R}I_{[0,1]}(R) = 0
\end{equation}
Here, $\frac{\partial}{\partial R}I_{[0,1]}(R) = 0$ if $R \in [0,1]^{\frac{N}{M}\times N}$, and $\infty$ otherwise.
We have $R = P_{[0,1]}(a-u1)$, where $P_{[0,1]}(x) = x$ if $x\in [0,1]$; $0$ if $x<0$; $1$ if $x>1$, and $P_{[0,1]}$ is applied element-wise to $(a-u1)$.
Using the KKT condition, we also have the complementary slackness
\begin{equation} \small
u(1^TR-\epsilon_i) = 0
\end{equation}
If $u=0$, we need to have $R = P_{[0,1]}(a)$.
If $u>0$, we need to have $R = P_{[0,1]}(a-u1)$ and $1^TR-\epsilon_i = 0$.
In other words, we have $1^TP_{[0,1]}(a-u1) = \epsilon_i$, where $u$ is a scalar variable and can be solved with the bisection method \cite{ADMM}.
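A minimal NumPy sketch of this projection, with the scalar dual variable $u$ found by bisection (our own illustration, not the paper's implementation):

```python
import numpy as np

def project_box_budget(a, eps, iters=100):
    """Project `a` onto {s : s in [0,1]^n, 1^T s <= eps} (Proposition 2).

    If clipping alone satisfies the budget, u = 0; otherwise bisection
    solves 1^T P_[0,1](a - u*1) = eps, which is monotone in u.
    """
    clipped = np.clip(a, 0.0, 1.0)
    if clipped.sum() <= eps:
        return clipped
    lo, hi = 0.0, float(a.max())     # budget violated at lo, satisfied at hi
    for _ in range(iters):
        u = 0.5 * (lo + hi)
        if np.clip(a - u, 0.0, 1.0).sum() > eps:
            lo = u
        else:
            hi = u
    return np.clip(a - hi, 0.0, 1.0)
```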
\section{Introduction}
\label{sec:1}
Hadron colliders offer an excellent place to look for new quarks, as evidenced by the top quark discovery~\cite{Abe:1994xt} and its recent observation in single production~\cite{Abazov:2009ii,Aaltonen:2009jj} at the Tevatron. In the near future, the operation of the Large Hadron Collider (LHC) will open a new, yet unexplored mass range for new physics searches, in particular for new quarks heavier than the top.
Despite the demanding environment, new quark searches at LHC will be relatively clean because they can be produced in pairs through strong interactions with a large cross section and, being rather heavy, their signals can be easily distinguished from the large background from top pair production and $W$ plus jets.
Although the possibility of a fourth standard model (SM) sequential generation has not yet been excluded~\cite{Alwall:2006bx,Kribs:2007nz,Holdom:2009rf} and partial wave unitarity allows for fourth generation masses up to 1 TeV~\cite{Chanowitz:1978mv},
new quarks heavier than the top are generally expected to be of vector-like nature if they exist. For example, extra-dimensional models with $t_R$ in the bulk~\cite{Mirabelli:1999ks,Chang:1999nh,Csaki:2004ay} predict a tower of charge $2/3$ isosinglets $T_{L,R}^{(n)}$, of which the lightest one can be light and have sizeable mixing with the third generation~\cite{delAguila:2000kb}. More recently, $(T \, B)_{L,R}$ and $(X \, T)_{L,R}$ isodoublets of hypercharges $1/6$, $7/6$ coupling to the third generation naturally emerge~\cite{Contino:2006qr,Carena:2006bn} in warped models implementing a custodial symmetry to protect the $Zbb$ coupling~\cite{Agashe:2006at}. Charge $-1/3$ isosinglets $B_{L,R}$ are predicted in grand unification theories based on $\text{E}_6$, one of the most widely studied groups~\cite{Frampton:1999xi,Hewett:1988xc}, in which one such fermion per family appears in the {\bf 27} representation.
Little Higgs models~\cite{ArkaniHamed:2001nc,ArkaniHamed:2002qy,Perelstein:2005ka} also
introduce a new $T_{L,R}$ isosinglet partner of the top quark which ameliorates the quadratic divergences in the Higgs mass.
In general, the new quarks predicted in these SM extensions are expected to couple mainly to the third generation. For generic Yukawa matrices and heavy quark mass terms, it has been shown~\cite{delAguila:1982fs} that the mixing of new vector-like quarks is of order $m/M$, where $m,M$ are the masses of SM and new quarks, respectively. Then, unless specific symmetries are imposed on the mass matrices,
the large mass hierarchy $m_t \gg m_{u,d}$, $m_b \gg m_{d,s}$ favours mixing with the third generation. Additionally, constraints on top couplings are weaker than for the rest of quarks~\cite{delAguila:1998tp,AguilarSaavedra:2002kr} so there is more room for mixing also from the experimental side. Note, however, that in some models it is possible to evade direct constraints with cancellations, and have large mixing with the first and second generations compatible with experimental data, see Ref.~\cite{Atre:2008iu}.
If any new physics is discovered at LHC, as is hoped, it will be compulsory to determine its nature. For heavy quarks this means not only the observation of an event excess or even an invariant mass peak, but also the determination of the quark charges and $\text{SU}(2)_L$ isospin, the investigation of the decay channels and the measurement of their mixing with the SM quarks. In this paper we address some of these issues. We study the pair production of vector-like singlets $T_{L,R}$, $B_{L,R}$ of charges $2/3$, $-1/3$ and doublets $(T \, B)_{L,R}$, $(X \, T)_{L,R}$, $(B \, Y)_{L,R}$ of hypercharges $1/6$, $7/6$, $-5/6$, respectively, with quarks $X$, $Y$ of charges $5/3$, $-4/3$.
(From now on we will drop the $L$, $R$ subscripts.)
We will assume that the new quarks mainly couple to the third generation.
Previous literature has also investigated some of these signals in specific final states. For example, pair production of $T$ singlets has been studied in the single lepton final state \cite{AguilarSaavedra:2005pv,AguilarSaavedra:2006gv,AguilarSaavedra:2006gw}, as well as pair production of charge $5/3$, $-1/3$ quarks in $(X \, T)$, $(T \, B)$ doublets producing like-sign dileptons~\cite{Contino:2008hi} and one charged
lepton~\cite{Dennis:2007tv}.\footnote{The discovery potential for $D$ singlets coupling to $u,d$ instead of the third generation has already been explored, for example in Refs.~\cite{Mehdiyev:2006tz,Sultansoy:2006cw,Mehdiyev:2007pf}.}
Here we will advance beyond previous work by analysing twelve multi-leptonic final states which provide evidence for the various decay modes
\begin{align}
& T \to W^+ b \,, \quad T \to Zt \,,\quad T \to Ht \,, \notag \\
& B \to W^- t \,, \quad B \to Zb \,,\quad B \to Hb \,, \notag \\
& X \to W^+ t \,, \notag \\
& Y \to W^- b \,,
\label{ec:decall}
\end{align}
with the aim of model discrimination. It has been known for some time (see for example Ref.~\cite{delAguila:1989rq}) that the presence or absence of specific decay modes can characterise the new quarks eventually observed. Here we demonstrate how this could be done in practice. For example, $T$ quarks in a $(X \, T)$ doublet have a suppressed decay $T \to W^+ b$, so they are not seen in the $W^+ b \, W^- \bar b$ final state as $T$ singlets are.
But they have enhanced $T \to Ht$ decays, so if the Higgs boson is light (as preferred by electroweak precision data) they give a fairly large and clean $T \bar T \to Ht \, H \bar t \to H W^+ b \, H W^- \bar b$ signal with one charged lepton and six $b$ quarks. On the other hand, $Y \bar Y \to W^- b \, W^+ \bar b$ cannot be distinguished from $T \bar T \to W^+ b \, W^- \bar b$ unless the $b$ jet charge is measured, which is very difficult and requires large statistics. But, apart from different signal branching ratios, $T$ quarks are cleanly identified by their characteristic $T \to Zt$ decay, which can be observed in the trilepton final state. $X$ and $B$ quarks can both decay into four $W$ final states, but in some models the latter also decays $B \to Zb$ producing a sharp peak in a $\ell^+ \ell^- b$ invariant mass distribution, which can be observed in dilepton and trilepton final states (here and in the following $\ell=e,\mu$). In summary, here it will be shown that the simultaneous study and comparison of several multi-leptonic final states, with the observation of invariant mass peaks in most cases, can establish the identity of the new quarks, if they are observed at LHC.
We remark that model discrimination is somewhat more demanding than evaluating the discovery potential of one's favourite model in some final state. From the technical point of view, it requires the complete signal generation with all decay channels. For $T \bar T$ and $B \bar B$ production there are in general nine decay modes according to Eqs.~(\ref{ec:decall}). When the decays of the $W$ and $Z$ bosons (up to four, depending on the channel) are included, a plethora of possible final states appears involving multi-lepton signals. These contributions are all included in our simulations, which take into account the effects of radiation, pile-up and hadronisation, performed by a parton shower Monte Carlo, and use a fast detector simulation. SM backgrounds also have to be generated and simulated, including those with huge cross sections such as $W$ and $Z$ production plus jets, which are computationally demanding.
Heavy quark pair production gives interesting signals in final states with one, two (like- and opposite-sign), three and four charged leptons. (Five and six lepton final states have too small branching ratios.) For model discrimination it is very convenient to classify signals not only by lepton multiplicity but by the number of $Z$ boson ``candidates'' present (same-flavour opposite-charge lepton pairs with an invariant mass consistent with $M_Z$). For example, the trilepton final state is divided into a sample of events having a $Z$ candidate (in which $T \bar T \to Zt \, W^- \bar b$ and other signals involving $Z \to \ell^+ \ell^-$ would be found) and events without $Z$ candidates (to which $X \bar X \to W^+ t \, W^- \bar t$, for instance, would contribute). In some cases the number of $b$ jets present is also relevant. This gives a total of twelve interesting final states to be examined, for which specific analyses are presented in this paper. But, even after this final state organisation in terms of charged lepton multiplicity and number of $Z$ candidates, there are final states where more than one type of quark gives interesting signals. One such case is, for the trilepton final state
with a $Z$ candidate,
\begin{align}
& T \bar T \to Zt \, W^- \bar b \to Z W^+ b \, W^- \bar b &&
\quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q' \,,
\notag \\
& B \bar B \to Zb \, W^+ \bar t \to Z b \, W^+ W^- \bar b &&
\quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q'
\end{align}
(the charge conjugate modes are also understood). In these cases, a likelihood classification is performed to separate and identify the $T \bar T$ and $B \bar B$ signals, and reconstruct them accordingly. This approach is unavoidable, since in some models like the $(T \, B)$ doublet both signals can be present.
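The classification by $Z$ boson candidates can be sketched in a few lines. The event representation, the mass window and the greedy pairing below are hypothetical illustrations rather than the exact selection applied in the analyses:

```python
from itertools import combinations

MZ = 91.19  # GeV

def inv_mass(p1, p2):
    # invariant mass of the sum of two four-momenta (E, px, py, pz)
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def count_z_candidates(leptons, window=15.0):
    # Z candidates: same-flavour, opposite-charge lepton pairs with an
    # invariant mass within `window` GeV of M_Z (window is illustrative).
    # Each lepton is a tuple (flavour, charge, four_momentum).
    used, n_z = set(), 0
    for i, j in combinations(range(len(leptons)), 2):
        if i in used or j in used:
            continue
        (f1, q1, p1), (f2, q2, p2) = leptons[i], leptons[j]
        if f1 == f2 and q1 == -q2 and abs(inv_mass(p1, p2) - MZ) < window:
            used |= {i, j}
            n_z += 1
    return n_z
```

In this picture, trilepton events would be split into the samples with `count_z_candidates(...) >= 1` (where $T \bar T \to Zt\,W^-\bar b$ contributes) and with none (where $X \bar X \to W^+ t \, W^- \bar t$ contributes).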
Besides model discrimination, which is the main goal of this paper, the systematic study of all interesting final states offers several advantages. One of them is that the most sensitive ones can be identified. We find that the single lepton final state (with either two or four $b$-tagged jets) offers the best discovery potential for all the models studied. For quark masses of 500 GeV, $5\sigma$ significance could be achieved for integrated luminosities ranging from 0.16 fb$^{-1}$\ for a $(X \, T)$ doublet to 1.9 fb$^{-1}$\ for a $B$ singlet. Our study also provides a guide to the final-state signatures to be searched for in case an event excess is identified in one of them. This complements previous work done for the characterisation and discrimination of seesaw models~\cite{delAguila:2008cj,delAguila:2008hw} and new heavy leptons~\cite{AguilarSaavedra:2009ik}.
The structure of the rest of this paper is the following. In section~\ref{sec:2} we introduce the models studied giving the relevant Lagrangian terms. In section~\ref{sec:3} we discuss the general features of heavy quark pair production at LHC, and some details associated to the signal and background generation. In sections~\ref{sec:4l}--\ref{sec:1l} the results for final states with four, three, two (like-sign and opposite-sign) and one charged lepton are presented, respectively. For the reader's convenience, the main results obtained are summarised at the end of each section, so that in a first reading the details can be omitted. Section~\ref{sec:summ} is a general summary where we address model discrimination by comparing signals in different final states. Our conclusions are drawn in section~\ref{sec:concl}. The Feynman rules used in our Monte Carlo programs are given in the Appendix.
\section{Model overview}
\label{sec:2}
In this section we briefly review the electroweak interactions of the new quarks, which determine their decay modes and single production. Additional details can be found in many early references, for example
\cite{delAguila:1982fs,delAguila:1989rq,
Branco:1986my}.
The interactions of $(X \, T)$ and $(B \, Y)$ doublets are also given in Refs.~\cite{delAguila:2000aa,delAguila:2000rc}.
\subsection{$T$ singlet}
\label{sec:2.1}
We denote the SM weak eigenstates as $q'_{Li}=(u'_{Li} \; d'_{Li})^T$, $u'_{Ri}$, $d'_{Ri}$, where Latin indices $i,j=1,2,3$ run over SM generations and Greek indices $\alpha,\beta=1,\dots,4$ over all quark fields.
We use primes to distinguish them from mass eigenstates, where needed.
The addition of a $\text{SU}(2)_L$ isosinglet $u'_{L4}$, $u'_{R4}$ to the SM quark content does not modify the SM charged and neutral current interactions in the weak eigenstate basis. The new $u_{R4}'$ field has Yukawa couplings to the SM left-handed fields
(the Yukawa coupling matrix $\mathrm{Y}$ must not be confused with a charge $-4/3$ quark $Y$),
and a bare mass term can be written involving the new left-handed singlet $u'_{L4}$,\footnote{In full generality, the right-handed fields $u_{R\alpha}'$ can be redefined so that the bare mass term only involves $u_{R4}'$. This change of basis also redefines the arbitrary matrix $\mathrm{Y}$ of Yukawa couplings.}
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \, \bar u_{Li}' \gamma^\mu d'_{Li} \, W_\mu^+ + \text{H.c.}
\,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ \bar u'_{Li} \gamma^\mu u'_{Li} - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_\text{Y} & = & - \mathrm{Y}_{i\beta}^u \; \bar q'_{Li} u'_{R\beta} \, \tilde \phi + \text{H.c.}
\,, \notag \\
\mathcal{L}_\text{bare} & = & - M \bar u_{L4}' u_{R4}' + \text{H.c.}
\end{eqnarray}
In this and the rest of models,
the electromagnetic current $J_\text{EM}^\mu$ has the same expression as in the SM but summing over all quark fields.
The Higgs doublet is
\begin{equation}
\phi = \left( \!\begin{array}{c} \phi^+ \\ \phi^0 \end{array} \!\right)
\to \frac{1}{\sqrt 2} \left( \!\begin{array}{c} 0 \\ v + H \end{array} \!\right) \,,\quad
\tilde \phi \equiv i\tau_2 \phi^*
\to \frac{1}{\sqrt 2} \left( \!\begin{array}{c} v + H \\ 0 \end{array} \!\right) \,,
\end{equation}
with $v=246$ GeV and $\tau$ the Pauli matrices. In the Lagrangians above we have omitted the terms in the down sector which are not affected by mixing. After the mass matrix diagonalisation
the $W$, $Z$ and $H$ interactions read
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \, \bar u_{L \alpha} \gamma^\mu \mathrm{V}_{\alpha j} d_{Lj} \, W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ \bar u_{L\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta} u_{L\beta} - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_H & = & - \frac{g}{2 M_W} \left[
\bar u_{L\alpha} \mathrm{X}_{\alpha \beta} \, m^u_\beta u_{R\beta} + \bar u_{R\alpha} m^u_\alpha \mathrm{X}_{\alpha \beta} u_{L\beta} \right] H \,,
\label{ec:Tint}
\end{eqnarray}
where $\mathrm{V}_{\alpha j}$ is the $4 \times 3$ generalisation of the Cabibbo-Kobayashi-Maskawa~\cite{Cabibbo:1963yz,Kobayashi:1973fv} (CKM) matrix, $\mathrm{X} = \mathrm{V} \mathrm{V}^\dagger$ a Hermitian $4 \times 4$ matrix (not to be confused with a charge $5/3$ quark $X$) and $m_\alpha^u$ the up-type quark masses. The electromagnetic current $J_\text{EM}^\mu$ obviously remains diagonal.
These equations, which result from a trivial change from weak to mass eigenstate basis, are exact and do not assume small mixing.
Notice the appearance of left-handed flavour-changing neutral (FCN) couplings among up-type quarks, due to the mixing of left-handed weak eigenstates of different isospin, which breaks the Glashow-Iliopoulos-Maiani~\cite{Glashow:1970gm} mechanism.
For a heavy quark $T \equiv u_4$ mixing with the top quark, and assuming small mixing, we have the approximate equality $\mathrm{X}_{Tt} \simeq \mathrm{V}_{Tb}$ between neutral and charged current couplings, replacing generation indices by quark labels. This is a very well known result: in the $T$ singlet model charged current mixing ($WTb$) automatically implies neutral current ($ZTt$) and scalar ($HTt$) interactions, all of the same strength up to multiplicative factors independent of mixing. The corresponding Feynman rules are given in the Appendix. These interactions determine the $T$ quark decays,
\begin{equation}
T \to W^+ b \,,\quad \quad T \to Zt \,,\quad \quad T \to Ht \,.
\end{equation}
This new eigenstate has a mass $m_T \simeq M$, up to relative corrections of order $v^2 \mathrm{Y}^2/M^2$.
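The statement that charged current mixing automatically implies FCN couplings of the same order can be verified numerically with a toy two-state rotation. The following sketch is illustrative only: the mixing angle and the assumption of a diagonal down sector are hypothetical choices, not fits to data.

```python
import numpy as np

theta = 0.08                       # hypothetical small t-T mixing angle
c, s = np.cos(theta), np.sin(theta)

# 4x4 unitary matrix relating up-type weak and mass eigenstates
# (u, c, t, T); only the t-T block is rotated.
U = np.eye(4)
U[2:, 2:] = [[c, s], [-s, c]]

# With a diagonal down sector, the 4x3 generalised CKM matrix V is given
# by the first three columns of U^dagger, and X = V V^dagger.
V = U.conj().T[:, :3]
X = V @ V.conj().T

assert np.allclose(X, X.conj().T)              # X is Hermitian
assert abs(X[3, 2]) > 0.0                      # non-zero ZTt FCN coupling
assert np.isclose(X[3, 2], V[3, 2] * V[2, 2])  # X_Tt = V_Tb V_tb^*
assert abs(X[3, 2] - V[3, 2]) < theta**3       # X_Tt ~ V_Tb for small mixing
```

The off-diagonal entry `X[3, 2]` plays the role of $\mathrm{X}_{Tt}$: it vanishes only when the charged current mixing does, reproducing $\mathrm{X}_{Tt} \simeq \mathrm{V}_{Tb}$ at first order.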
\subsection{$B$ singlet}
\label{sec:2.2}
The Lagrangian for a $B$ singlet is completely analogous to the one for a $T$ singlet, with a few replacements. The relevant interactions in the weak eigenstate basis read
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \, \bar u_{Li}' \gamma^\mu d_{Li}' \, W_\mu^+ + \text{H.c.}
\,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[- \bar d_{Li}' \gamma^\mu d_{Li}' - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_\mathrm{Y} & = & - \mathrm{Y}_{i\beta}^d \; \bar q_{Li}' d_{R\beta}' \, \phi
+ \text{H.c.} \,, \notag \\
\mathcal{L}_\text{bare} & = & - M \bar d_{L4}' d_{R4}' + \text{H.c.}
\end{eqnarray}
After mass matrix diagonalisation, we have
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \, \bar u_{Li} \gamma^\mu \mathrm{V}_{i\beta} d_{L\beta} \, W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ - \bar d_{L\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta} d_{L\beta} - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu
\,, \notag \\
\mathcal{L}_H & = & - \frac{g}{2 M_W} \left[
\bar d_{L\alpha} \mathrm{X}_{\alpha \beta} \, m^d_\beta d_{R\beta} + \bar d_{R\alpha} m^d_\alpha \mathrm{X}_{\alpha \beta} d_{L\beta} \right] H \,.
\end{eqnarray}
The CKM matrix has dimension $3\times 4$, $\mathrm{X} = \mathrm{V}^\dagger \mathrm{V}$ in this case and $m_\alpha^d$ are the down-type quark masses. For $B$ mixing with the third generation we have $\mathrm{X}_{bB} \simeq \mathrm{V}_{tB}$, so that the new quark $B$ has $WtB$, $ZbB$ and $HbB$ interactions governed by a single mixing factor $\mathrm{V}_{tB}$, in analogy with the $T$ singlet model. The
new quark $B$ has a mass $m_B \simeq M$, and its decays are
\begin{equation}
B \to W^- t \,,\quad \quad B \to Zb \,,\quad \quad B \to Hb \,.
\end{equation}
\subsection{$(T \, B)$ doublet}
\label{sec:2.3}
With the addition of a vector-like doublet, the relevant Lagrangian in the weak interaction basis is
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[
\bar u_{L\alpha}' \gamma^\mu d_{L\alpha}' + \bar u_{R4}' \gamma^\mu d_{R4}' \right] W_\mu^+
+ \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ \bar u_{L\alpha}' \gamma^\mu u_{L\alpha}' +
\bar u_{R4}' \gamma^\mu u_{R4}' - \bar d_{L\alpha}' \gamma^\mu d_{L\alpha}'
- \bar d_{R4}' \gamma^\mu d_{R4}' - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_\text{Y} & = &
- \mathrm{Y}_{\alpha j}^u \; \bar q_{L\alpha}' u_{Rj}' \, \tilde \phi
- \mathrm{Y}_{\alpha j}^d \; \bar q_{L\alpha}' d_{Rj}' \, \phi
+ \text{H.c.} \,, \notag \\
\mathcal{L}_\text{bare} & = & - M \bar q_{L4}' q_{R4}' + \text{H.c.} \,,
\end{eqnarray}
with four SM-like left-handed doublets $q_{Li}'$ and one new right-handed doublet
$q_{R4}' = (u_{R4}' \, d_{R4}')^T$. The left-handed fields can be redefined so that the bare mass term only couples $q_{L4}'$.
In the mass eigenstate basis it is more transparent to write the Lagrangians at first order in the (small) light-heavy mixing,
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[
\bar u_{Li} \gamma^\mu \mathrm{V}_{ij}^L d_{Lj} + \bar T_L \gamma^\mu B_L
+ \bar u_{R\alpha} \gamma^\mu \mathrm{V}_{\alpha \beta}^R d_{R\beta}
\right] W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[
\bar u_{L\alpha} \gamma^\mu u_{L\alpha}
+ \bar u_{R\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta}^u u_{R\beta} \right. \notag \\
& & \left. - \bar d_{L\alpha} \gamma^\mu d_{L\alpha}
- \bar d_{R\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta}^d d_{R\beta}
- 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_H & = & - \frac{g}{2 M_W} \left[
\bar u_{L\alpha} m_\alpha^u (\delta_{\alpha \beta}-\mathrm{X}^u_{\alpha \beta}) u_{R\beta}
+ \bar u_{R\alpha} (\delta_{\alpha \beta}-\mathrm{X}^u_{\alpha \beta}) m_\beta^u u_{L\beta}
\right. \notag \\
& & \left.
+ \bar d_{L\alpha} m_\alpha^d (\delta_{\alpha \beta}-\mathrm{X}^d_{\alpha \beta}) d_{R\beta}
+ \bar d_{R\alpha} (\delta_{\alpha \beta}-\mathrm{X}^d_{\alpha \beta}) m_\beta^d d_{L\beta}
\right] H \,,
\end{eqnarray}
so that it is apparent that the mixing of the heavy quarks $T$, $B$ with SM quarks is only right-handed. The $4 \times 4$ matrix $\mathrm{V}^R$ is not unitary, and also determines the FCN interactions, because
$\mathrm{X}^u = \mathrm{V}^R \mathrm{V}^{R\dagger}$,
$\mathrm{X}^d = \mathrm{V}^{R\dagger} \mathrm{V}^R$. Both $\mathrm{X}^u$ and $\mathrm{X}^d$ are Hermitian and non-diagonal, mediating FCN currents.
Then, charged current interactions of the new states with SM quarks imply FCN ones, which result from the mixing of right-handed weak eigenstates with different isospin. At first order we have $\mathrm{X}_{tT} \simeq \mathrm{V}_{tB}^R$, $\mathrm{X}_{Bb} \simeq \mathrm{V}_{Tb}^R$. The
new quarks are almost degenerate, with masses $m_T \simeq m_B \simeq M$ up to relative corrections of order $v^2 \mathrm{Y}^2 / M^2$. One can distinguish three scenarios for the heavy quark decays, depending on the relative sizes of the charged current mixing of the new quarks. For $V_{Tb} \sim V_{tB}$ the decay modes, assuming that they couple to the third generation, are the same as for singlets,
\begin{align}
& T \to W^+ b \,,\quad \quad T \to Zt \,,\quad \quad T \to Ht \,, \notag \\
& B \to W^- t \,,\quad \quad B \to Zb \,,\quad \quad B \to Hb \,,
\end{align}
but with couplings of different chirality, which is reflected in some angular distributions.
For $V_{Tb} \ll V_{tB}$ ({\em i.e.} the top quark mixes with its partner much more than the bottom quark), the decays are
\begin{align}
& T \to Zt \,,\quad \quad T \to Ht \,, \notag \\
& B \to W^- t \,.
\end{align}
This scenario is the most natural one for generic Yukawa couplings, because the top quark is much heavier than the bottom quark, and is realised in some models \cite{Contino:2006nn}. Finally, a mixing $V_{Tb} \gg V_{tB}$ would give
\begin{align}
& T \to W^+ b \,, \notag \\
& B \to Zb \,,\quad \quad B \to Hb \,,
\end{align}
with signals similar to a hypercharge $-5/6$ doublet $(B \, Y)$ (see below). However, a mixing
$V_{Tb} \gg V_{tB}$ is not natural in view of the mass hierarchy $m_t \gg m_b$,
and is disfavoured by constraints on $b$ quark mixing.
\subsection{$(X \, T)$ doublet}
\label{sec:2.4}
The interactions when a hypercharge $7/6$ doublet is added have some similarities and differences with the previous case. In the weak eigenstate basis we have
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[ \bar u_{Li}' \gamma^\mu d_{Li}'
+ \bar X_L \gamma^\mu u_{L4}' + \bar X_R \gamma^\mu u_{R4}'
\right] W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ \bar u_{Li}' \gamma^\mu u_{Li}'
- \bar u_{L4}' \gamma^\mu u_{L4}' - \bar u_{R4}' \gamma^\mu u_{R4}' + \bar X \gamma^\mu X
- 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_\text{Y} & = &
- \mathrm{Y}_{ij}^u \; \bar q_{Li}' u_{Rj}' \, \tilde \phi
- \mathrm{Y}_{4j}^u \; (\bar X_L \; \bar u_{L4}') \, u_{Rj}' \, \phi
+ \text{H.c.} \,, \notag \\
\mathcal{L}_\text{bare} & = & - M \left( \bar X_L \, \bar u_{L4}' \right)
\left( \! \begin{array}{c} X_R \\ u_{R4}' \end{array} \! \right) + \text{H.c.} \,,
\end{eqnarray}
where for the charge $5/3$ quark $X$ the weak interaction and mass eigenstates coincide.
We omit terms for the down sector which are unaffected by the presence of the new doublet.
In the mass eigenstate basis, at first order in the light-heavy mixing the Lagrangians read
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[
\bar u_{Li} \gamma^\mu \mathrm{V}_{ij}^L d_{Lj} + \bar X_L \gamma^\mu T_L
+ \bar X_R \gamma^\mu \mathrm{V}_{4\beta}^R u_{R\beta}
\right] W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[
\bar u_{Li} \gamma^\mu u_{Li} - \bar T_L \gamma^\mu T_L
- \bar u_{R\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta} u_{R\beta}
+ \bar X \gamma^\mu X - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_H & = & - \frac{g}{2 M_W} \left[
\bar u_{L\alpha} m_\alpha^u (\delta_{\alpha \beta}-\mathrm{X}_{\alpha \beta}) u_{R\beta}
+ \bar u_{R\alpha} (\delta_{\alpha \beta}-\mathrm{X}_{\alpha \beta}) m_\beta^u u_{L\beta}
\right] H \,,
\end{eqnarray}
so that again the interactions of the new quarks $X$, $T$ with the SM ones are right-handed.
$\mathrm{V}^L$ is the usual CKM matrix.
The $1 \times 4$ matrix $\mathrm{V}^R$ also determines the neutral mixing because $\mathrm{X} = \mathrm{V}^{R\dagger} \mathrm{V}^R$. Notice an important difference with the $T$ singlet and $(T \, B)$ doublet: at first order the quark $T$ does not have charged current couplings to SM quarks but has neutral ones $ZTt$, $HTt$. For a mixing with the third generation we have $\mathrm{X}_{Tt} \simeq \mathrm{V}_{Xt}^R$. Obviously,
the charge $5/3$ quark $X$ only has charged current interactions with SM charge $2/3$ quarks.
As in the $(T \, B)$ doublet, the new mass eigenstates are almost degenerate, with masses $m_X \simeq m_T \simeq M$. Their allowed decays are
\begin{align}
& X \to W^+ t \,, \notag \\
& T \to Zt \,,\quad \quad T \to Ht \,.
\end{align}
\subsection{$(B \, Y)$ doublet}
\label{sec:2.5}
Finally, the relevant Lagrangian for SM quarks plus a $(B \, Y)$ doublet is
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[ \bar u_{Li}' \gamma^\mu d_{Li}'
+ \bar d_{L4}' \gamma^\mu Y_L + \bar d_{R4}' \gamma^\mu Y_R
\right] W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[ - \bar d_{Li}' \gamma^\mu d_{Li}'
+ \bar d_{L4}' \gamma^\mu d_{L4}' + \bar d_{R4}' \gamma^\mu d_{R4}' - \bar Y \gamma^\mu Y
- 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_\text{Y} & = &
- \mathrm{Y}_{ij}^d \; \bar q_{Li}' d_{Rj}' \, \phi
- \mathrm{Y}_{4j}^d \; (\bar d_{L4}' \; \bar Y_L ) \, d_{Rj}' \, \tilde \phi
+ \text{H.c.} \,, \notag \\
\mathcal{L}_\text{bare} & = & - M \left( \bar d_{L4}' \, \bar Y_L \right)
\left( \! \begin{array}{c} d_{R4}' \\ Y_R \end{array} \! \right) + \text{H.c.}
\end{eqnarray}
At first order, the interactions in the mass eigenstate basis read
\begin{eqnarray}
\mathcal{L}_W & = & -\frac{g}{\sqrt 2} \left[
\bar u_{Li} \gamma^\mu \mathrm{V}_{ij}^L d_{Lj} + \bar B_L \gamma^\mu Y_L
+ \bar d_{R\alpha} \gamma^\mu \mathrm{V}_{\alpha 4}^R Y_R
\right] W_\mu^+ + \text{H.c.} \,, \notag \\
\mathcal{L}_Z & = & -\frac{g}{2 c_W} \left[
-\bar d_{Li} \gamma^\mu d_{Li} + \bar B_L \gamma^\mu B_L
+ \bar d_{R\alpha} \gamma^\mu \mathrm{X}_{\alpha \beta} d_{R\beta}
- \bar Y \gamma^\mu Y - 2 s_W^2 J_\text{EM}^\mu \right] Z_\mu \,, \notag \\
\mathcal{L}_H & = & - \frac{g}{2 M_W} \left[
\bar d_{L\alpha} m_\alpha^d (\delta_{\alpha \beta}-\mathrm{X}_{\alpha \beta}) d_{R\beta}
+ \bar d_{R\alpha} (\delta_{\alpha \beta}-\mathrm{X}_{\alpha \beta}) m_\beta^d d_{L\beta}
\right] H \,.
\end{eqnarray}
The matrix $\mathrm{V}^R$ has dimension $4\times 1$ and $\mathrm{X} = \mathrm{V}^R \mathrm{V}^{R\dagger}$. At first order the quark $B$ does not have charged current couplings to SM quarks but has neutral ones.
(The charge $-4/3$ quark $Y$ has only charged current interactions with down-type SM quarks.)
For a mixing with the third generation we have $\mathrm{X}_{bB} \simeq \mathrm{V}_{bY}^R$.
The new quarks have masses $m_B \simeq m_Y \simeq M$, and their allowed decays are
\begin{align}
& B \to Zb \,,\quad \quad B \to Hb \,, \notag \\
& Y \to W^- b \,.
\end{align}
Notice that the $\bar Y \to W^+ \bar b$ decay is like $T \to W^+ b$ but with a $b$ antiquark instead of a quark. These decays can be distinguished using angular distributions but, except for small kinematical differences, the signatures of a $(B \, Y)$ doublet are similar to the ones of a $(T \, B)$ doublet in which the $B$ quark mixes much more than the $T$ quark.
\section{Heavy quark production at LHC}
\label{sec:3}
New heavy quarks can be produced in pairs via QCD interactions,
\begin{equation}
gg,q \bar q \to Q \bar Q \quad\quad (Q=T,B,X,Y) \,,
\end{equation}
in the same way as the top quark. The cross section only depends on the quark mass, and is plotted in Fig.~\ref{fig:mass-cross} (left).
For $T$ quark singlets the partial decay widths are
\begin{align}
\Gamma(T \to W^+ b) & = \frac{g^2}{64 \pi} |V_{Tb}|^2
\frac{m_T}{M_W^2} \lambda(m_T,m_b,M_W)^{1/2} \nonumber \\
& \times \left[ 1+\frac{M_W^2}{m_T^2}-2 \frac{m_b^2}{m_T^2}
-2 \frac{M_W^4}{m_T^4} + \frac{m_b^4}{m_T^4} + \frac{M_W^2 m_b^2}{m_T^4}
\right] \,, \nonumber \\
\Gamma(T \to Z t) & = \frac{g^2}{128 \pi c_W^2} |X_{Tt}|^2
\frac{m_T}{M_Z^2} \lambda(m_T,m_t,M_Z)^{1/2} \nonumber \\
& \times \left[ 1 + \frac{M_Z^2}{m_T^2}
- 2 \frac{m_t^2}{m_T^2} - 2 \frac{M_Z^4}{m_T^4} + \frac{m_t^4}{m_T^4}
+ \frac{M_Z^2 m_t^2}{m_T^4} \right] \,, \nonumber \\
\Gamma(T \to H t) & = \frac{g^2}{128 \pi} |X_{Tt}|^2
\frac{m_T}{M_W^2} \lambda(m_T,m_t,M_H)^{1/2} \nonumber \\
& \times \left[ 1 + 6 \frac{m_t^2}{m_T^2} - \frac{M_H^2}{m_T^2}
+ \frac{m_t^4}{m_T^4} - \frac{m_t^2 M_H^2}{m_T^4} \right] \,,
\label{ec:Gamma}
\end{align}
where
\begin{equation}
\lambda(x,y,z) \equiv x^4 + y^4 + z^4 - 2 x^2 y^2
- 2 x^2 z^2 - 2 y^2 z^2
\end{equation}
is the usual kinematical function.
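As a numerical cross-check of Eqs.~(\ref{ec:Gamma}), the partial widths and branching ratios can be evaluated directly. The sketch below is illustrative: the input masses match those used later in the simulations, while the coupling normalisation $g^2 = 4\sqrt{2}\,G_F M_W^2$ and the on-shell value of $c_W^2$ are conventional choices made here, not prescriptions from the text.

```python
import math

# Illustrative inputs (GeV); m_t = 175, M_H = 115 as in the simulations
MW, MZ, MH, mt, mb = 80.4, 91.19, 115.0, 175.0, 4.8
GF = 1.16637e-5                      # GeV^-2
g2 = 4.0 * math.sqrt(2.0) * GF * MW**2
cW2 = MW**2 / MZ**2                  # on-shell cos^2(theta_W)

def lam(x, y, z):
    # kinematical function lambda(x, y, z)
    return x**4 + y**4 + z**4 - 2*(x*y)**2 - 2*(x*z)**2 - 2*(y*z)**2

def width_Wb(mT, V=0.1):
    w, b = MW**2 / mT**2, mb**2 / mT**2
    return (g2 / (64*math.pi)) * V**2 * (mT / MW**2) \
        * math.sqrt(lam(mT, mb, MW)) \
        * (1 + w - 2*b - 2*w**2 + b**2 + w*b)

def width_Zt(mT, X=0.1):
    z, t = MZ**2 / mT**2, mt**2 / mT**2
    return (g2 / (128*math.pi*cW2)) * X**2 * (mT / MZ**2) \
        * math.sqrt(lam(mT, mt, MZ)) \
        * (1 + z - 2*t - 2*z**2 + t**2 + z*t)

def width_Ht(mT, X=0.1):
    h, t = MH**2 / mT**2, mt**2 / mT**2
    return (g2 / (128*math.pi)) * X**2 * (mT / MW**2) \
        * math.sqrt(lam(mT, mt, MH)) \
        * (1 + 6*t - h + t**2 - t*h)

def branching_ratios(mT):
    # For a T singlet X_Tt ~ V_Tb, so all widths scale with the same
    # mixing squared and the branching ratios are mixing independent.
    ws = [width_Wb(mT), width_Zt(mT), width_Ht(mT)]
    tot = sum(ws)
    return [w / tot for w in ws]
```

For $m_T = 500$ GeV this gives $\text{Br}(T \to W^+ b)$ close to one half, with the remainder shared between the $Zt$ and $Ht$ modes, in qualitative agreement with Fig.~\ref{fig:mass-cross} (right).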
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mass-cross.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mass-BR.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Left: Heavy quark production cross sections at LHC. Right: branching ratios for $T$ and $B$ decays.}
\label{fig:mass-cross}
\end{center}
\end{figure}
For a $B$ singlet, the expressions for $B \to W^- t$, $B \to Zb$, $B \to Hb$ can be obtained from Eqs.~(\ref{ec:Gamma}) by replacing the mixings $V_{Tb} \to V_{tB}$, $X_{Tt} \to X_{Bb}$ and the quark masses $m_T \to m_B$, $m_t \to m_b$, $m_b \to m_t$. The branching ratios as a function of the heavy quark mass are presented in Fig.~\ref{fig:mass-cross} (right), fixing $M_H = 115$ GeV. For $(T \, B)$ doublets the analytical expressions of the widths are the same as for the singlets, although the relation between the neutral and charged current mixings differs. For equal mixings $V_{Tb} \simeq V_{tB}$ the branching ratios are the same as for singlets, while for $V_{Tb} \ll V_{tB}$ the decays
$T \to W^+ b$, $B \to Zb$, $B \to Hb$ are absent, so that $\mathrm{Br}(T \to Zt) \simeq \mathrm{Br}(T \to Ht) \simeq 0.5$, $\mathrm{Br}(B \to W^- t) = 1$.
For $T$, $B$ quarks in $(X \, T)$ and $(B \, Y)$ doublets the charged current decay modes are absent, and thus the branching ratios of the remaining neutral modes are roughly one half each. For $X \to W^+ t$ and $Y \to W^- b$ the widths are as for $B \to W^- t$, $T \to W^+ b$, replacing the mixings by $\mathrm{V}_{Xt}^R$ and $\mathrm{V}_{bY}^R$, respectively, as well as the quark masses. These are the only decay modes for $X$, $Y$ quarks.
Electroweak single heavy quark production is also possible at LHC, for example in the $t$-channel processes
\begin{align}
& gq \to T \bar b q' \,, \quad\quad gq \to T \bar t q' \,, \notag \\
& gq \to B \bar b q \,, \quad\quad gq \to X \bar t q' \,, \notag \\
& gq \to Y \bar b q' \,.
\label{ec:single}
\end{align}
For $T \bar b j$, $X \bar t j$ and $Y \bar b j$ production ($j=q,q'$ denotes a light jet) the processes involve a $t$-channel $W$ boson, while $B \bar b j$ and $T \bar t j$ production exchange a $Z$ boson. This latter process has a much smaller cross section than $T \bar b j$ but is the only possibility for the $T$ quark in a $(X \, T)$ doublet.
The cross sections for the processes in Eqs.~(\ref{ec:single}) are also plotted in Fig.~\ref{fig:mass-cross}, for reference mixings $V,X=0.1$ with the third generation and including heavy quark and antiquark production. For the $2 \to 2$ processes $bq \to Tj$, $bq \to Bj$, etc. the cross sections are very close to the ones for their $2 \to 3$ counterparts $T \bar bj$, $B \bar bj$, etc.
Next-to-leading order corrections \cite{Campbell:2009gj,Berger:2009qy} are not included.
In this work we do not consider single production as a means for model discrimination.
Nevertheless, depending on the heavy quark mass and mixing, some single production processes can be important, as can be observed in Fig.~\ref{fig:mass-cross}.
In any case, it is important to remark that single production processes are crucial to
measure the heavy quark mixing with SM quarks.
The heavy quark signals studied in this paper have been calculated by implementing
pair ($T \bar T$, $B \bar B$, $X \bar X$ and $Y \bar Y$) and single ($Tj$, $T \bar b j$, $T \bar t j$, $Bj$, $B \bar b j$, $X \bar t j$, $Yj$, $Y \bar b j$) production in the generator {\tt Protos}~\cite{AguilarSaavedra:2008gt}, for the six models considered. All the decay channels in Eqs.~(\ref{ec:decall}) are included, with the subsequent $W$ and $Z$ boson decays in all channels. The Higgs boson decay, which does not carry any spin information, is left to the parton shower Monte Carlo.
A complete signal evaluation is necessary for a study like the one presented here, which surveys final states with one to four charged leptons and, in some cases, various $b$ quark multiplicities. It is also required because charged leptons are sometimes missed by the detector, {\em e.g.} in $Z \to \ell^+ \ell^-$, resulting in contributions with fewer detected charged leptons than were generated at the partonic level.
Matrix elements are calculated using {\tt HELAS} \cite{helas}, to take finite width and spin effects into account, and the phase space integration is done by {\tt Vegas} \cite{vegas}. The output is given in a suitable form to be interfaced to the parton shower Monte Carlo {\tt Pythia} 6.4~\cite{Sjostrand:2006za} to add initial and final state radiation (ISR, FSR) and pile-up, and perform hadronisation.
In this work we restrict our detailed simulations to heavy quark pair production,
assuming heavy quark masses of 500 GeV and $m_t = 175$ GeV, $M_H = 115$ GeV. Cross sections and branching ratios are independent of the heavy-light mixing for $T$, $B$ singlets and
$(X \, T)$, $(B \, Y)$ doublets, and a mixing $V = 0.1$ is assumed for definiteness. For the $(T \, B)$ doublet we study two scenarios: (1) equal mixing $V_{Tb} = V_{tB} = 0.1$, in which the signals are quite similar to the ones of two $T$, $B$, singlets; (2) doublet mixing mainly with the top quark, $V_{Tb} = 0$, $V_{tB} = 0.1$. The signals for a doublet mixing mainly with the bottom are practically the same (except for the exchange of $b$ quarks and antiquarks and small kinematical differences) as for the $(B \, Y)$ doublet, and are not presented for brevity.
These six models are identified by the labels $T_\text{s}$, $B_\text{s}$, $TB_{\text{d}_1}$, $TB_{\text{d}_2}$, $XT_\text{d}$ and $BY_\text{d}$ in tables and figures.
Signals are generated with statistics of 300 fb$^{-1}$ and rescaled to a reference luminosity of 30 fb$^{-1}$, in order to reduce statistical fluctuations.
The factorisation and renormalisation scales used equal the heavy quark mass.
We use the fast simulation {\tt AcerDET}~\cite{RichterWas:2002ch}, a generic LHC detector simulation (corresponding neither to ATLAS nor to CMS), with standard settings.
In particular, the lepton isolation criteria require a separation $\Delta R > 0.4$ from other clusters and a maximum energy deposition $\Sigma E_T = 10$ GeV in a cone of $\Delta R = 0.2$ around the reconstructed electron or muon. Jets are reconstructed using a cone algorithm with $\Delta R = 0.4$. In this analysis we only focus on central jets with pseudo-rapidity $|\eta| < 2.5$. Forward jets with $2.5 < |\eta| < 5$ can also be present but are considered neither for signal reconstruction nor for background rejection.
For central jets, a simple $b$ tagging is performed with probabilities of 60\% for $b$ jets, 10\% for charm and 1\% for light jets. We remark that the inclusion of radiation and hadronisation effects, as well as a detector simulation, is essential for our study. In an ideal situation in which the number of jets matches the number of partons in the hard process, the combinatorics involved in reconstructing the signals is relatively simple. In a real experiment, however, the presence of several more jets than at the partonic level, together with radiation and mistags, makes it much more difficult to reconstruct and identify signals than a toy parton-level simulation would suggest. An explicit example of these difficulties will be found in the single lepton channel in section~\ref{sec:1l}, where we will show that $T \bar T$ and $B \bar B$ signals can sometimes be very alike, despite the very different decay chains involved.
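The simple $b$ tagging described above amounts to an independent random draw per jet. The following Python sketch is our own illustration of such a flat parametrisation (it is not part of the {\tt AcerDET} code):

```python
import random

# Per-jet tag probabilities quoted in the text: 60% for b jets,
# 10% for charm and 1% for light jets (a deliberately simple model).
TAG_PROB = {"b": 0.60, "c": 0.10, "light": 0.01}

def tag_jets(flavours, rng=None):
    """Draw an independent b-tag decision for each central jet flavour."""
    rng = rng or random.Random()
    return [rng.random() < TAG_PROB[f] for f in flavours]
```

In a more realistic treatment the probabilities would depend on the jet $p_T$ and $\eta$, but a flat parametrisation suffices for the counting estimates performed here.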
An adequate background calculation is another essential ingredient for our evaluations.
For multi-lepton signals, especially trileptons and like-sign dileptons, $t \bar t nj$ (where $nj$ stands for $n$ additional jets at the partonic level) is one of the largest and most dangerous backgrounds, due to its large cross section and the fact that $b$ quark decays sometimes produce isolated charged leptons. This background simply cannot be estimated with a parton-level calculation. Another important effect to be taken into account is the correct matching between the ``soft'' radiation generated by the parton shower Monte Carlo and the ``hard'' jets generated by the matrix element generator.
In order to obtain predictions for SM backgrounds that are as accurate as possible, we use {\tt Alpgen}~\cite{Mangano:2002ea} to generate hard events, which are interfaced to {\tt Pythia}
using the MLM prescription \cite{mlm} to perform the matching, avoiding double counting.
The processes generated are collected in Table~\ref{tab:allbkg}, where we also give the equivalent luminosity generated (30 fb$^{-1}$ in most cases) and the number of events after matching. The additional SM processes $b \bar b nj$ and $c \bar c nj$, which were previously shown to be negligible after selection cuts for multi-lepton states~\cite{delAguila:2008cj}, are ignored in this work. (They are very likely to be negligible in the single lepton channel too, after the transverse energy and invariant mass cuts.)
\begin{table}[t]
\begin{center}
\begin{small}
\begin{tabular}{llcc}
Process & Decay & $L$ & Events \\
\hline
$t \bar t nj$, $n=0,\dots,6$ & semileptonic & 30 fb$^{-1}$ & 6.1 M \\
$t \bar t nj$, $n=0,\dots,6$ & dileptonic & 30 fb$^{-1}$ & 1.5 M \\
$tj$ & $W \to l \nu$ & 30 fb$^{-1}$ & 0.9 M \\
$t\bar b$ & $W \to l \nu$ & 30 fb$^{-1}$ & 54 K \\
$tW$ & all & 30 fb$^{-1}$ & 1.6 M \\
$t \bar t t \bar t$ & all & 30 fb$^{-1}$ & 160 \\
$t \bar t b \bar b$ & all & 30 fb$^{-1}$ & 34 K \\
$Wnj$, $n=0,1,2$ & $W \to l \nu$ & 3 fb$^{-1}$ & 167 M \\
$Wnj$, $n=3,\dots,6$ & $W \to l \nu$ & 30 fb$^{-1}$ & 10 M \\
$W b \bar b nj$, $n=0,\dots,4$ & $W \to l \nu$ & 30 fb$^{-1}$ & 520 K \\
$W c \bar c nj$, $n=0,\dots,4$ & $W \to l \nu$ & 30 fb$^{-1}$ & 550 K \\
$W t \bar t nj$, $n=0,\dots,4$ & $W \to l \nu$ & 30 fb$^{-1}$ & 5.1 K \\
$Z/\gamma\, nj$, $n=0,1,2$, $m_{ll} < 120$ GeV
& $Z \to l^+ l^-$ & 3 fb$^{-1}$ & 16.5 M \\
$Z/\gamma\, nj$, $n=3,\dots,6$, $m_{ll} < 120$ GeV
& $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 1.1 M \\
$Z/\gamma\, nj$, $n=0,\dots,6$, $m_{ll} > 120$ GeV
& $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 1.7 M \\
$Z b \bar b nj$, $n=0,\dots,4$ & $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 200 K \\
$Z c \bar c nj$, $n=0,\dots,4$ & $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 180 K \\
$Z t \bar t nj$, $n=0,\dots,4$ & $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 1.9 K \\
$WWnj$, $n=0,\dots,3$ & $W \to l \nu$ & 30 fb$^{-1}$ & 290 K \\
$WZnj$, $n=0,\dots,3$ & $W \to l \nu$, $Z \to l^+ l^-$
& 30 fb$^{-1}$ & 37.7 K \\
$ZZnj$, $n=0,\dots,3$ & $Z \to l^+ l^-$ & 30 fb$^{-1}$ & 3.7 K \\
$WWWnj$, $n=0,\dots,3$ & $2W \to l \nu$ & 30 fb$^{-1}$ & 1.5 K \\
$WWZnj$, $n=0,\dots,3$ & all & 30 fb$^{-1}$ & 4.9 K \\
$WZZnj$, $n=0,\dots,3$ & all & 30 fb$^{-1}$ & 1.5 K
\end{tabular}
\end{small}
\caption{Background processes considered in the simulations. The second column indicates the decay modes included (where $l=e,\mu,\tau$), and the third column the luminosity equivalent generated. The last column corresponds to the number of events after matching, with K and M standing for $10^3$ and $10^6$ events, respectively.}
\label{tab:allbkg}
\end{center}
\end{table}
The procedure used for estimating the statistical significance of a signal is considered case by case. To claim discovery we require both (i) a statistical significance larger than $5\sigma$ and (ii) at least 10 signal events.
In the absence of any systematic uncertainty on the background, the statistical significance would be $\mathcal{S}_0 \equiv S/\sqrt B$, where $S$ and $B$ are the numbers of signal and background events, or its Poisson analogue for small backgrounds, where Poisson statistics must be applied. Nevertheless, there are systematic uncertainties in the background evaluation from several sources: the theoretical calculation, parton distribution functions (PDFs), the collider luminosity, pile-up, ISR and FSR, etc., as well as some detector-specific uncertainties such as the energy scale and the $b$ tagging efficiency. Such uncertainties have little relevance in the cleanest channels, where the discovery luminosity is controlled by the requirement of at least 10 signal events, the significance being far above $5\sigma$. For the channels in which the background normalisation can be important, we consider whether the signal manifests itself as a clear peak in a distribution. In that case it would be possible in principle to normalise the background directly from data and extract the peak significance. Otherwise, we include a 20\% background uncertainty, summed in quadrature, using the estimator $\mathcal{S}_{20} \equiv S/\sqrt{B+(0.2 B)^2}$.
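The two estimators and the discovery requirement can be written compactly. This is our own minimal sketch; the Poisson treatment for very small backgrounds is omitted:

```python
import math

def significance(S, B, sys=0.0):
    """Gaussian significance with a relative background systematic `sys`
    summed in quadrature: S_0 for sys=0, S_20 for sys=0.2."""
    return S / math.sqrt(B + (sys * B) ** 2)

def discovery(S, B, sys=0.0):
    """Discovery requires a significance above 5 sigma AND >= 10 signal events."""
    return significance(S, B, sys) > 5.0 and S >= 10

# Example: S = 50 signal events over B = 25 background events
# significance(50, 25)      -> 10.0
# significance(50, 25, 0.2) -> about 7.07
```

Note how the 20\% normalisation uncertainty noticeably degrades the significance once $0.2B$ becomes comparable to $\sqrt{B}$, i.e. for backgrounds above a few tens of events.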
\section{Final state $\ell^+ \ell^+ \ell^- \ell^-$}
\label{sec:4l}
We begin our survey of the relevant final states with the one containing four leptons, which is the cleanest and least demanding one. The heavy quark signal reconstruction is not possible in most cases, but a simple event counting in several four-lepton subsamples already provides a useful test of the heavy quark signals.
Although the corresponding branching ratios are generally small, four leptons can be produced in several cascade decays of heavy quark pairs, for example
\begin{align}
& T \bar T \to Zt \, W^- \bar b \to Z W^+b \, W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to \ell \nu \,, \nonumber \\
& T \bar T \to Zt \, V \bar t \to Z W^+b \, V W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to \ell \nu , V \to q \bar q/\nu \bar \nu \,, \nonumber \\
& B \bar B \to Zb \, Z \bar b
&& \quad Z \to \ell^+ \ell^- \,, \nonumber \\
& B \bar B \to Zb \, W^+ \bar t \to Z b W^+ W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to \ell \nu \,, \nonumber \\
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad W \to \ell \nu \,, \nonumber \\
& X \bar X \to W^+ t \, W^- \bar t \to W^+ W^+ b \, W^- W^- \bar b
&& \quad W \to \ell \nu \,,
\label{ec:ch4Q0}
\end{align}
with $V=Z,H$. The charge conjugate channels are implicitly included as well.
The SM background consists mainly of $ZZnj$, $t \bar t nj$ and $Z t \bar t nj$. The first one can be suppressed simply by requiring the presence of at least one $b$-tagged jet, which hardly affects the signals, as they contain two or more $b$ quarks. Thus,
for signal pre-selection we demand (i) four leptons with zero total charge, two of them with transverse momenta $p_T > 30$ GeV and the other two with $p_T > 10$ GeV; (ii) at least one $b$-tagged jet with $p_T > 20$ GeV. We then develop three different analyses with disjoint event samples, aiming to separate the different signal sources of four leptons ($ZZ$, $ZWW$ or $WWWW$ leptonic decays). The criterion for the subdivision is the number of same-flavour, opposite-charge lepton pairs with an invariant mass consistent with $M_Z$ within some given interval, and the samples are labelled `$ZZ$', `$Z$' and `no $Z$', respectively. The invariant mass distribution of opposite-sign pairs can be studied by choosing pairs $\ell_a^+ \ell_b^-$ as follows:
\begin{enumerate}
\item If the charged leptons can be combined to form two $Z$ candidates (there are two possibilities to construct two opposite-sign pairs), we label these pairs as $\ell_a^+ \ell_b^-$, $\ell_c^+ \ell_d^-$, ordered by transverse momentum.
\item If not, we still look for a $Z$ candidate combining opposite-sign pairs (there are four possible combinations). If found, we label this pair as $\ell_a^+ \ell_b^-$
and the remaining leptons as $\ell_c^+$, $\ell_d^-$.
\item If no $Z$ candidates can be found, we construct pairs $\ell_a^+ \ell_b^-$,
$\ell_c^+ \ell_d^-$ ordered by transverse momentum.
\end{enumerate}
The interval chosen to accept a $Z$ boson candidate is $M_Z \pm 15$ GeV, which provides a good balance between signal efficiency (for true $Z$ boson decays) and rejection of non-resonant $W^+ W^-$ decays giving opposite-charge leptons.
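The subdivision into `$ZZ$', `$Z$' and `no $Z$' samples can be sketched as follows. This is a simplified illustration of the procedure described above, with hypothetical indexing $m[i][j]$ for the invariant mass of positive lepton $i$ with negative lepton $j$:

```python
MZ, HALF_WIDTH = 91.19, 15.0  # GeV; the M_Z +- 15 GeV window used in the text

def in_window(mass):
    """True if the pair mass is compatible with a Z boson candidate."""
    return abs(mass - MZ) < HALF_WIDTH

def classify_4l(m):
    """m[i][j]: invariant mass of positive lepton i with negative lepton j.
    Returns 'ZZ', 'Z' or 'no Z' according to the number of Z candidates."""
    # Two ways of grouping the four leptons into two opposite-sign pairs
    pairings = [(m[0][0], m[1][1]), (m[0][1], m[1][0])]
    if any(in_window(a) and in_window(b) for a, b in pairings):
        return "ZZ"
    if any(in_window(m[i][j]) for i in range(2) for j in range(2)):
        return "Z"
    return "no Z"
```

The subsequent labelling of the pairs as $\ell_a^+ \ell_b^-$, $\ell_c^+ \ell_d^-$ (by transverse momentum) follows the three-step procedure in the enumeration above and is not repeated here.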
The $\ell_a^+ \ell_b^-$ and $\ell_c^+ \ell_d^-$ invariant mass distributions are presented in Fig.~\ref{fig:mrec-4Q0} for the six models, which are identified by the labels
$T_\text{s}$, $B_\text{s}$, $TB_{\text{d}_1}$, $TB_{\text{d}_2}$ (corresponding to the two mixing scenarios defined in section~\ref{sec:2.3}),
$XT_\text{d}$ and $BY_\text{d}$.
These plots illustrate the relative size of the different signal contributions. Most signal events have at least one $Z$ boson candidate: the $WWWW$ decays correspond to $\ell_a^+ \ell_b^-$ outside the $Z$ peak (left plot). Events with two $Z$ candidates are the ones with $\ell_c^+ \ell_d^-$ at the $Z$ peak (right plot). The distribution of signal and background events in the three samples at pre-selection is given in Table~\ref{tab:nsnb-4Q0}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mZ1-4Q0.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mZ2-4Q0.eps,height=5.1cm,clip=}
\end{tabular}
\caption{$\ell_a^+ \ell_b^-$, $\ell_c^+ \ell_d^-$ invariant mass distributions for the six models in the $\ell^+ \ell^+ \ell^- \ell^-$ final state (see the text).
The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-4Q0}
\end{center}
\end{figure}
\begin{table}[htb]
\begin{center}
\begin{small}
\begin{tabular}{ccccccccccc}
& Total & $ZZ$ & $Z$ & no $Z$ & \quad & & Total & $ZZ$ & $Z$ & no $Z$
\\[1mm]
$T \bar T$ ($T_\text{s}$) & 50.0 & 4.7 & 33.3 & 12.0 & & $B \bar B$ ($B_\text{s}$) & 58.9 & 12.3 & 32.2 & 14.4 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 52.4 & 3.9 & 35.2 & 13.3 & & $B \bar B$ ($TB_{\text{d}_1}$) & 54.3 & 12.4 & 28.3 & 13.6 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 114.8 & 12.1 & 77.5 & 25.2 & & $B \bar B$ ($TB_{\text{d}_2}$) & 86.3 & 1.2 & 19.7 & 65.4 \\
$X \bar X$ ($XT_\text{d}$) & 81.9 & 1.0 & 21.2 & 59.7 & & $B \bar B$ ($BY_\text{d}$) & 46.7 & 29.7 & 14.7 & 2.3\\
& & & & & & $Y \bar Y$ ($BY_\text{d}$) & 0.0 & 0.0 & 0.0 & 0.0 \\
\hline
$t \bar t nj$ & 7 & 0 & 3 & 4 & & $Z t \bar tnj$ & 15 & 0 & 15 & 0 \\
$Z b \bar bnj$ & 1 & 0 & 1 & 0 & & $ZZnj$ & 2 & 2 & 0 & 0
\end{tabular}
\end{small}
\end{center}
\caption{Number of events in the $\ell^+ \ell^+ \ell^- \ell^-$ final state for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$, at pre-selection level.}
\label{tab:nsnb-4Q0}
\end{table}
\subsection{Final state $\ell^+ \ell^+ \ell^- \ell^-$ ($ZZ$)}
In this sample we do not impose any further requirement for event selection because the background is already tiny. The numbers of signal and background events can be read from Table~\ref{tab:nsnb-4Q0}.
We observe that this final state is most useful for the model with a $(B \, Y)$ doublet where the decays $B \to Zb$ are enhanced. The presence of the heavy quark $B$ can be established by constructing a plot with the invariant mass of the $b$-tagged jet and each of the two reconstructed $Z$ bosons
(two entries per event).
This is shown in Fig.~\ref{fig:mrec-4Q0-ZZ} for the six models considered, summing the contribution of the two quarks in the case of the doublets. The background, only two events, is not included. Notice that the bumps around 200 GeV in the $TB_{\text{d}_2}$ and $XT_\text{d}$ models cannot be mistaken for a charge $-1/3$ quark even with low statistics: for such a mass the heavy quark production cross section would be more than 100 times larger.
\begin{figure}[ht]
\begin{center}
\epsfig{file=Figs/mB2x-4Q0-ZZ.eps,height=5cm,clip=}
\caption{$\ell_a^+ \ell_b^- b$, $\ell_c^+ \ell_d^- b$ invariant mass distribution for the six models in the $\ell^+ \ell^+ \ell^- \ell^-$ ($ZZ$) final state, with two entries per event. The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-4Q0-ZZ}
\end{center}
\end{figure}
We give in Table~\ref{tab:sig-4Q0-ZZ} the luminosity required to have a $5\sigma$ discovery,
including all signal contributions within a specific model. The background normalisation uncertainty has little relevance in these cases, because the background itself is very small and the discovery luminosity is mainly determined by the minimum of 10 signal events.
We also indicate in this table whether a mass peak can be reconstructed, although in this case the peak observation and reconstruction clearly require more luminosity than a $5\sigma$ discovery, due to the small statistics. We point out that, since the heavy resonance is observed to decay into a $Z$ boson and a $b$ quark, it can be identified as a heavy $B$ quark. This, however, can also be done in the opposite-sign dilepton final state with six times better statistics.
For the models with $T$ quarks, mass peaks could in principle be reconstructed with a high integrated luminosity, but this is far more interesting to do in the trilepton channel where statistics are larger.
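The interplay between the two discovery criteria can be illustrated numerically. Assuming that event counts scale linearly with luminosity (and Gaussian statistics), the discovery luminosity is set by whichever criterion is more demanding; this is our own sketch, not the exact procedure used for the tables:

```python
import math

def discovery_luminosity(S, B, L):
    """Smallest luminosity satisfying both criteria, given S signal and B
    background events observed at luminosity L. Counts scale linearly
    with luminosity, so S/sqrt(B) grows as sqrt(L)."""
    L_sigma = L * (5.0 * math.sqrt(B) / S) ** 2 if B > 0 else 0.0
    L_events = L * 10.0 / S  # at least 10 signal events
    return max(L_sigma, L_events)
```

For a nearly background-free sample such as this one, the first term vanishes and the quoted luminosities are simply those yielding 10 signal events.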
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & -- & no & & $TB_{\text{d}_2}$ & 23 fb$^{-1}$ & no \\
$B_\text{s}$ & 24 fb$^{-1}$ & $m_B$ & & $XT_\text{d}$ & 23 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 18 fb$^{-1}$ & $m_B$ & & $BY_\text{d}$ & 10 fb$^{-1}$ & $m_B$
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^+ \ell^- \ell^-$ ($ZZ$) final state. A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$. We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-4Q0-ZZ}
\end{table}
\subsection{Final state $\ell^+ \ell^+ \ell^- \ell^-$ ($Z$)}
Events with only one $Z$ boson candidate are selected in this sample. Additionally, the presence of two ($b$-tagged or not) extra jets with $p_T > 20$ GeV is required to reduce the $Z t \bar t nj$ background, hardly affecting the signals. The number of signal and background events at pre-selection and selection is collected in Table~\ref{tab:nsnb-4Q0-Z}.
Notice that for $X \bar X$ production, where no $Z$ bosons are produced in the decay,
in some cases a pair of charged leptons from $W^+ W^-$ decays accidentally has an invariant mass in the interval selected. Nevertheless, this non-resonant contribution is 5 times smaller than the one from pair production of its $T$ partner. The same comment applies to $B \bar B$ production in the $TB_{\text{d}_2}$ model.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccccccc}
& Pre. & Sel. & \quad & & Pre. & Sel. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 33.3 & 29.5 & & $B \bar B$ ($B_\text{s}$) & 32.2 & 25.1 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 35.2 & 31.5 & & $B \bar B$ ($TB_{\text{d}_1}$) & 28.3 & 21.4 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 77.5 & 74.9 & & $B \bar B$ ($TB_{\text{d}_2}$) & 19.7 & 14.0 \\
$X \bar X$ ($XT_\text{d}$) & 21.2 & 15.5 & & $B \bar B$ ($BY_\text{d}$) & 14.7 & 12.5 \\
& & & & $Y \bar Y$ ($BY_\text{d}$) & 0.0 & 0.0 \\
\hline
$t \bar t nj$ & 3 & 0 & & $Zt \bar tnj$ & 15 & 8 \\
$Z b \bar bnj$ & 1 & 0 & & $ZZnj$ & 0 & 0 \\
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^+ \ell^+ \ell^- \ell^-$ ($Z$) sample for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$.}
\label{tab:nsnb-4Q0-Z}
\end{table}
The reconstruction in this final state is very difficult due to the presence of two final state neutrinos, each resulting from the decay of one heavy quark. Thus, we restrict our analysis of this sample to the $Z$ boson identification and a simple counting of events, which can already be a useful test of the different models. In Table~\ref{tab:sig-4Q0-Z} we collect the discovery luminosities for the six models studied.
We observe that those with $T$ quarks give important signals, especially the ones with enhanced branching ratio for $T \to Zt$, and the discovery luminosities are relatively small.
In these interesting cases the background normalisation uncertainty is not important because the signals are much larger.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 11 fb$^{-1}$ & no & & $TB_{\text{d}_2}$ & 3.4 fb$^{-1}$ & no \\
$B_\text{s}$ & 14 fb$^{-1}$ & no & & $XT_\text{d}$ & 3.3 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 5.7 fb$^{-1}$ & no & & $BY_\text{d}$ & 50 fb$^{-1}$ & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^- \ell^+ \ell^-$ ($Z$) final state.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-4Q0-Z}
\end{table}
\subsection{Final state $\ell^+ \ell^+ \ell^- \ell^-$ (no $Z$)}
This sample contains the signal and background events for which all opposite-sign pairs have invariant masses $|m_{\ell_i^+ \ell_j^-} - M_Z| > 15$ GeV. We do not apply any further event selection criteria since the background at pre-selection is already rather small. The number of signal and background events can be read in Table~\ref{tab:nsnb-4Q0}. The most important signals are from $X \bar X$ production, for which the decay $X \to W^+ t \to W^+ W^+ b$ has unit branching ratio, and from $B \bar B$ production in the $TB_{\text{d}_2}$ model, with unit branching ratio for
$B \to W^- t \to W^- W^+ b$. The latter decay has a branching ratio of approximately 0.25 for the $B$ singlet and $TB_{\text{d}_1}$ doublet models, and is absent for the $(B \, Y)$ doublet.
$T \bar T$ production, which in the four-lepton final state at least involves one $Z$ leptonic decay, gives a small contribution which is only due to the finite $Z$ width and energy resolution of the detector.
\begin{table}[h]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 35 fb$^{-1}$ & no & & $TB_{\text{d}_2}$ & 3.3 fb$^{-1}$ & no \\
$B_\text{s}$ & 25 fb$^{-1}$ & no & & $XT_\text{d}$ & 3.5 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 11 fb$^{-1}$ & no & & $BY_\text{d}$ & -- & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^+ \ell^- \ell^-$ (no $Z$) final state. A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-4Q0-noZ}
\end{table}
We collect in Table~\ref{tab:sig-4Q0-noZ} the luminosity required for $5\sigma$ discovery of the six models considered in this work.
The reconstruction in this final state is virtually impossible because four neutrinos are present in the final state and, in fact, all like-sign and opposite-sign dilepton distributions seem very similar. Nevertheless, as in the previous sample, the number of events itself is a very good check of the different models.
For the most interesting signals (with $(T \, B)$ and $(X \, T)$ doublets) the background normalisation is not important, while for the other cases the luminosities given are a little optimistic.
\subsection{Summary}
Four lepton final states have seldom been considered in the context of heavy quark searches, perhaps because they are less relevant for the traditionally most popular models with $T$ or $B$ singlets. Nevertheless, for the $(X \, T)$ and $(B \, Y)$ doublets and the $TB_{\text{d}_2}$ model the multi-lepton signals are larger in general: either from the decays $T \to Zt$, $B \to Zb$ (which have branching ratios two times larger than in the singlet case), or from the decays $X \to W^+ t$, $B \to W^- t$ (with unit branching ratio). Thus, the four-lepton final state can be interesting for this class of models. One has to note here that the sensitivity to heavy quark signals in other final states is much better, with discovery luminosities one order of magnitude smaller. Still, four lepton signals would be visible with a moderate luminosity and should be explored to test the models.
It is very convenient to divide the four lepton final state into three different subsets (`$ZZ$', `$Z$' and `no $Z$') depending on the number of $Z$ boson candidates present (2, 1 and 0, respectively). This subdivision allows for some model discrimination from event counting in this final state alone, for example:
\begin{itemize}
\item If a signal is simultaneously observed in the `$Z$' and `no $Z$' samples with a similar luminosity, but not in the `$ZZ$' one, it points towards an $(X \, T)$ doublet or a $(T \, B)$ doublet predominantly mixing with the top quark ($TB_{\text{d}_2}$ model).
\item If, conversely, a signal is observed exclusively in the `$ZZ$' sample, it corresponds to a $(B \, Y)$ doublet. The presence of the heavy $B$ quark can also be established by the observation of a peak in the $Zb$ invariant mass distribution. However, this can also be done in the opposite-sign dilepton final state with six times better statistics.
\end{itemize}
Finally, it is worth mentioning that
the four lepton final state is also a possible signal of heavy charged leptons in several models~\cite{AguilarSaavedra:2009ik}, but in that case the invariant mass of three charged leptons displays a very clear and sharp peak at the heavy charged lepton mass $m_E$, and $b$ quarks are not produced. Four leptons are also produced in the decay of doubly charged scalars produced in pairs (for a detailed analysis see Ref.~\cite{delAguila:2008cj}), but for the scalar triplet the signals are clearly distinguishable by the presence of narrow peaks in the like-sign dilepton invariant mass distributions.
\section{Final state $\ell^\pm \ell^\pm \ell^\mp$}
\label{sec:3l}
The trilepton final state offers a good balance between signal branching ratio in $T \bar T$, $B \bar B$ and $X \bar X$ production, and SM background. Three leptons can result from several heavy quark pair cascade decays, either involving the leptonic decay of a $Z$ and a $W$ boson, as for example in
\begin{align}
& T \bar T \to Zt \, W^- \bar b \to Z W^+b W^- \bar b
&& \quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q' \,, \nonumber \\
& T \bar T \to Zt \, V \bar t \to Z W^+b \, V W^- \bar b
&& \quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q' , V \to q \bar q/\nu \bar \nu \,, \nonumber \\
& B \bar B \to Z b \, W^+ \bar t \to Z b \, W^+ W^- \bar b
&& \quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q' \,,
\label{ec:ch3Q1Z}
\end{align}
with $V=Z,H$, or of three $W$ bosons,
\begin{align}
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad 3W \to \ell \nu , 1W \to q \bar q' \,, \nonumber \\
& X \bar X \to W^+ t \, W^- \bar t \to W^+ W^+ b \, W^- W^- \bar b
&& \quad 3W \to \ell \nu , 1W \to q \bar q' \,.
\label{ec:ch3Q1noZ}
\end{align}
The charge conjugate channels are implicitly included in all cases. All these production and decay channels are interesting, and a first signal discrimination can be made, as in the previous section, by the presence or absence of $Z$ boson candidates in the final state.
In the sample with $Z$ candidates it is necessary to go further and try to separate the three channels in Eqs.~(\ref{ec:ch3Q1Z}). An obvious reason motivating this separation is that for $(T \, B)$ doublets both $T \bar T$ and $B \bar B$ pairs can be produced and the three processes in Eqs.~(\ref{ec:ch3Q1Z}) are present in general. Then, it is quite desirable to separate the signals of $T$ and $B$ quarks, identifying their production and decay channels. The discrimination is possible with a probabilistic analysis which classifies the events into the three processes in Eqs.~(\ref{ec:ch3Q1Z}) with a good efficiency.
The main SM backgrounds to trilepton signals are from $WZnj$ and $t \bar t nj$ production, both roughly of the same size. The latter originates when the two $W$ bosons decay leptonically and one $b$ quark gives a third isolated lepton but, as in the like-sign dilepton final state examined in the next section, it can be significantly reduced by requiring that the two like-sign leptons have high transverse momenta. Thus, for event pre-selection we require the presence of three charged leptons (with total charge $\pm 1$), the like-sign pair having $p_T > 30$ GeV and the third lepton $p_T > 10$ GeV. As mentioned above, we divide the trilepton sample into two disjoint ones. The first one contains events where a $Z$ boson candidate can be identified, that is, when two same-flavour opposite-charge leptons have an invariant mass consistent with $M_Z$. The other sample contains events without $Z$ candidates. The interval in which a lepton pair is accepted as a $Z$ candidate is chosen to be $M_Z \pm 15$ GeV. We can compare the signal contributions to the two samples by plotting the invariant mass of two opposite-charge leptons $\ell_a^+$, $\ell_b^-$, chosen in the following way:
\begin{enumerate}
\item If there is a $Z$ candidate, we label the corresponding leptons as $\ell_a^+$, $\ell_b^-$. If there are two $Z$ candidates, which can accidentally happen, the leptons with the largest transverse momenta are chosen.
\item If there are no $Z$ candidates, we choose $\ell_a^+$, $\ell_b^-$ with the largest transverse momenta.
\end{enumerate}
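The trilepton pre-selection just described can be sketched as a simple filter. This is our own simplified illustration, with leptons represented as hypothetical ($p_T$, charge) pairs:

```python
def preselect_3l(leptons):
    """leptons: list of three (pt, charge) tuples, charge = +1 or -1.
    Require total charge +-1, the like-sign pair with pt > 30 GeV and
    the remaining lepton with pt > 10 GeV."""
    total = sum(q for _, q in leptons)
    if len(leptons) != 3 or abs(total) != 1:
        return False
    majority = 1 if total > 0 else -1
    like_sign = [pt for pt, q in leptons if q == majority]
    (third_pt,) = [pt for pt, q in leptons if q != majority]
    return min(like_sign) > 30.0 and third_pt > 10.0
```

The stronger $p_T$ requirement on the like-sign pair is precisely what suppresses the $t \bar t nj$ background, in which the third (like-sign) lepton from a $b$ decay is typically soft.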
\begin{figure}[htb]
\begin{center}
\epsfig{file=Figs/mZ1-3Q1.eps,height=5.1cm,clip=}
\caption{$\ell_a^+ \ell_b^-$ invariant mass distributions for the six models in the $\ell^\pm \ell^\pm \ell^\mp$ final state (see the text for the definition of $\ell_a^+$ and $\ell_b^-$).
The luminosity is 30 fb$^{-1}$.}
\label{fig:mZrec-3Q1}
\end{center}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{cccccccccccc}
& Total & $Z$ & no $Z$ & \quad & & Total & $Z$ & no $Z$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 320.7 & 212.4 & 108.3 & & $B \bar B$ ($B_\text{s}$) & 421.9 & 227.9 & 194.0 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 349.0 & 229.9 & 119.1 & & $B \bar B$ ($TB_{\text{d}_1}$) & 484.5 & 237.0 & 247.5 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 654.6 & 435.8 & 218.8 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1174.4 & 144.0 & 1030.4 \\
$X \bar X$ ($XT_\text{d}$) & 1181.8 & 143.9 & 1037.9 & & $B \bar B$ ($BY_\text{d}$) & 106.3 & 88.3 & 18.0 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 0.5 & 0.1 & 0.4 \\
\hline
$t \bar t nj$ & 464 & 114 & 350 & & $WZnj$ & 4258 & 4196 & 62 \\
$W t \bar tnj$ & 78 & 11 & 67 & & $ZZnj$ & 424 & 417 & 7 \\
$Z t \bar tnj$ & 189 & 169 & 20 \\
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^\pm \ell^\pm \ell^\mp $ final state for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$, at pre-selection level.}
\label{tab:nsnb-3Q1}
\end{table}
The resulting distribution is shown in Fig.~\ref{fig:mZrec-3Q1}.
We observe that there is a large off-peak signal from $B \bar B$ and $X \bar X$ decays in the $TB_{\text{d}_2}$ and $XT_\text{d}$ models, respectively. The number of events at pre-selection in each sample is given in Table~\ref{tab:nsnb-3Q1}.
A sizeable fraction of events from $T \bar T$ decays, which in the trilepton channel always involve a $Z$ boson, are classified in the `no $Z$' set, while around 10\% of the $B \bar B$ and $X \bar X$ events in which $Z$ bosons are not present are accepted in the `$Z$' sample. The rate of wrong assignments can be reduced, at the cost of losing signal efficiency, by strengthening the classification criteria. For example, $Z$ candidates could be accepted only in the interval $M_Z \pm 10$ GeV, and events in the `no $Z$' subsample could be rejected if opposite-charge pairs have an invariant mass in the range $M_Z \pm 20$ GeV. This fine tuning of the analysis makes more sense with a full detector simulation, and is in any case not necessary for model discrimination.
\subsection{Final state $\ell^\pm \ell^\pm \ell^\mp$ ($Z$)}
This final state receives important contributions from $T \bar T$ and $B \bar B$ production in the channels of Eqs.~(\ref{ec:ch3Q1Z}). We will first perform an analysis with loose selection criteria to suppress the background and obtain the heavy quark discovery potential for this final state. Then, we will address the identification of a heavy quark signal eventually observed, strengthening our requirements on signal and background events and using a likelihood function which assigns them to each of the decay channels in Eqs.~(\ref{ec:ch3Q1Z}). After that,
events will be reconstructed according to their classification and to the kinematics assumed in each case.
\subsubsection{Discovery potential}
For events with one $Z$ candidate we ask (i) at least two light jets with $p_T > 20$ GeV; (ii) one $b$-tagged jet also with $p_T > 20$ GeV; (iii) transverse momentum $p_T > 50$ GeV for the leading charged lepton $\ell_1$; (iv) transverse energy $H_T > 500$ GeV. The kinematical distributions of these variables at pre-selection are presented in Fig.~\ref{fig:dist-3Q1-Z} for the relevant signals and the SM background.
In particular, requiring a $b$-tagged jet hardly affects the signals but practically eliminates the $WZnj$ background, which does not contain $b$ quarks. The cuts on transverse energy and leading charged lepton momentum are quite general for new heavy quark searches and are not optimised for the input masses used in our calculation. Notice also that the $H_T$ distribution for the signals clearly indicates that one or more heavy particles with masses summing around 1 TeV are produced. This information will be crucial later, when we address the disentanglement and reconstruction of the different signal channels.
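The four selection cuts (i)-(iv) amount to a simple event filter. A sketch of our own, with the jet $p_T$ lists, leading lepton $p_T$ and scalar $H_T$ as hypothetical inputs:

```python
def select_3l_Z(light_jet_pts, b_jet_pts, lead_lepton_pt, HT):
    """Cuts (i)-(iv) for the trilepton (Z) sample: at least two light jets
    and one b-tagged jet with pT > 20 GeV, leading charged lepton with
    pT > 50 GeV, and total transverse energy H_T > 500 GeV."""
    return (sum(pt > 20.0 for pt in light_jet_pts) >= 2
            and sum(pt > 20.0 for pt in b_jet_pts) >= 1
            and lead_lepton_pt > 50.0
            and HT > 500.0)
```

The $H_T$ cut does most of the work against the $WZnj$ and $Z t \bar t nj$ backgrounds, whose transverse energy spectra fall well below the $\sim 1$ TeV scale set by the heavy quark pair.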
\begin{figure}[p]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mult-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/bmult-3Q1-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/ptlep1-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/HT-3Q1-Z.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Kinematical distributions of variables used in the selection and reconstruction criteria
for the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state: light jet multiplicity $N_j$, $b$ jet multiplicity $N_b$, transverse momentum of the leading lepton and total transverse energy. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-3Q1-Z}
\end{center}
\end{figure}
\begin{table}[p]
\begin{center}
\begin{tabular}{cccccccccc}
& Pre. & Sel. & Rec. & \quad & & Pre. & Sel. & Rec. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 212.4 & 162.7 & 82.9 & & $B \bar B$ ($B_\text{s}$) & 227.9 & 162.4 & 65.3 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 229.9 & 181.3 & 87.1 & & $B \bar B$ ($TB_{\text{d}_1}$) & 237.0 & 174.3 & 72.6 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 435.8 & 356.1 & 211.0 & & $B \bar B$ ($TB_{\text{d}_2}$) & 144.0 & 94.9 & 34.2 \\
$X \bar X$ ($XT_\text{d}$) & 143.9 & 99.9 & 35.7 & & $B \bar B$ ($BY_\text{d}$) & 88.3 & 58.6 & 21.2 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 0.1 & 0.0 & 0.0 \\
\hline
$t \bar t nj$ & 114 & 1 & 0 & & $WZnj$ & 4196 & 24 & 0 \\
$W t \bar tnj$ & 11 & 3 & 1 & & $ZZnj$ & 417 & 1 & 0 \\
$Z t \bar tnj$ & 169 & 89 & 32 \\
\end{tabular}
\end{center}
\caption{Number of events at the pre-selection, selection and reconstruction levels in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) sample for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$.}
\label{tab:nsnb-3Q1-Z}
\end{table}
The number of signal and background events at the selection level is given in Table~\ref{tab:nsnb-3Q1-Z}, also including the values at pre-selection for better comparison.
As might be expected, the most important background after cuts is $Z t \bar t nj$, which has a $Z$ boson, two $b$ quarks, two $W$ bosons and large transverse energy, and is thus quite similar to the signals. More aggressive cuts would of course reduce this and the other backgrounds, but we refrain from performing such optimisations. The discovery luminosities are given in Table~\ref{tab:sig-3Q1-Z}, summing all signal contributions within a given model. We observe that this clean channel offers a good potential to discover $T$ and $B$ quarks in singlet or doublet representations. For the $(B \, Y)$ doublet the discovery luminosity may be optimistic because it does not take into account the background normalisation uncertainty, which may be important in this case where the signal is small.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 3.4 fb$^{-1}$ & $m_T$ & & $TB_{\text{d}_2}$ & 0.73 fb$^{-1}$ & $m_T$ \\
$B_\text{s}$ & 3.4 fb$^{-1}$ & $m_B$ & & $XT_\text{d}$ & 0.72 fb$^{-1}$ & $m_T$ \\
$TB_{\text{d}_1}$ & 1.1 fb$^{-1}$ & $m_T$, $m_B$ & & $BY_\text{d}$ & 26 fb$^{-1}$ & $m_B$
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-3Q1-Z}
\end{table}
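As a rough cross-check of Table~\ref{tab:sig-3Q1-Z}, a Gaussian $S/\sqrt{B}$ scaling of the selection-level counts in Table~\ref{tab:nsnb-3Q1-Z} approximately reproduces the singlet numbers. This is only a back-of-the-envelope sketch: the actual statistical treatment (e.g. Poisson statistics for the small event counts of the doublet channels) may differ.

```python
import math

# Selection-level event counts for 30 fb^-1, read from the table above
# (illustrative sketch only; the paper's statistical treatment may differ).
LUMI0 = 30.0  # fb^-1
signal_Ts = 162.7
background = 1 + 3 + 89 + 24 + 1  # ttnj + Wttnj + Zttnj + WZnj + ZZnj

s = signal_Ts / LUMI0   # signal rate per fb^-1
b = background / LUMI0  # background rate per fb^-1

# The Gaussian significance S/sqrt(B) = s*L/sqrt(b*L) grows like sqrt(L),
# so the 5-sigma luminosity is L = 25 * b / s^2.
L_5sigma = 25.0 * b / s**2
print(f"L(5 sigma) ~ {L_5sigma:.1f} fb^-1")  # close to the 3.4 fb^-1 quoted
```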
\subsubsection{Heavy quark reconstruction}
\label{sec:3l-Z-2}
The broad sensitivity to $T \bar T$ and $B \bar B$ signals of this final state implies that if a positive excess is observed, identifying its nature will require a more elaborate analysis. Indeed, the decay modes in Eqs.~(\ref{ec:ch3Q1Z}) give signals only differing by the number of jets and the location of the resonant peaks. The identification can be done efficiently, however, by using a likelihood method which gives the probability that a given event corresponds to each of the decay modes.
We build probability distribution functions (p.d.f.) for three signal classes: ($a$) $T \bar T \to ZtWb$; ($b$) $T \bar T \to Zt V t$, with $V=H,Z$ decaying hadronically or invisibly; ($c$) $B \bar B \to Zb Wt$. We generate high-statistics samples different from the ones used for the final analysis. We do not include a separate class for the background, because the likelihood function is only used to identify signals and not to reject the background which is rather small. Nevertheless, the discriminant analysis also affects the background and, for instance, if we restrict ourselves to events
classified as resulting from $B \bar B$ production, a sizeable part of the background is classified as $T \bar T$-like and thus rejected.
Note also that an essential parameter for building the kinematical distributions for the signals is the heavy quark mass. If a heavy quark signal is observed at LHC, the approximate value of the heavy quark mass can be estimated from the transverse energy distribution for the signal,\footnote{In the case of the doublets the new states are expected to be nearly degenerate, simplifying their approximate determination from this distribution.} and then a probabilistic analysis can be performed to separate signal contributions and reconstruct the decay chain event by event.
For the signal discrimination and reconstruction we demand, in addition to the selection criteria already specified, the presence of at least two $b$-tagged jets. The number of signal and background events with this last requirement is given in Table~\ref{tab:nsnb-3Q1-Z}. We begin by reconstructing two $W$ bosons, one decaying hadronically and the other leptonically.
The former is approximately reconstructed at this stage by selecting, among the light jets with largest $p_T$ (up to a maximum of four), the two that give an invariant mass closest to $M_W$. The latter is approximately reconstructed from the charged lepton not included in the $Z$ candidate and the missing energy, with the procedure explained below, selecting the solution giving the smallest neutrino energy.
The variables used in the likelihood function are:
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/D-mult-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-bmult-3Q1-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/D-mW1b1-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-mZb1-3Q1-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/D-mW2b2Z-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-mW1W2b2-3Q1-Z.eps,height=5.1cm,clip=} \\
\end{tabular}
\caption{Kinematical variables used to classify the three heavy quark signals in the
$\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state.}
\label{fig:lik-3Q1-Z}
\end{center}
\end{figure}
\begin{itemize}
\item The light jet and $b$ jet multiplicities.
\item The invariant mass of the reconstructed $W$ boson (decaying hadronically or leptonically) with larger transverse momentum, labelled as $W_1$, plus the $b$ quark with largest transverse momentum, $b_1$. For $T \to Wb$ decays the $W$ boson as well as the $b$ quark are expected to have larger $p_T$, and we observe in Fig.~\ref{fig:lik-3Q1-Z} that this is often the case.
\item The invariant mass of the reconstructed $Z$ boson and the $b$ quark with highest $p_T$, which for the $B \bar B$ signal is most often the one resulting from $B \to Zb$, as we observe in the distribution of Fig.~\ref{fig:lik-3Q1-Z}.
\item The invariant mass of the reconstructed $W$ with smaller $p_T$ ($W_2$), the $Z$ boson and the $b$ quark with smaller transverse momentum ($b_2$).
\item The invariant mass of the two $W$ bosons and the $b$ quark with smallest $p_T$, which for the $B \bar B$ signal are the ones from $B \to Wt \to WWb$ in most cases.
\end{itemize}
The likelihood function evaluated on the three class samples gives the probability distributions in Fig.~\ref{fig:lik2-3Q1-Z}, where $P_a$, $P_b$, $P_c$ are the probabilities that events correspond to each of the three likelihood classes in Eq.~(\ref{ec:ch3Q1Z}).
Events are assigned to the class ($a$, $b$ or $c$) which has the highest probability $P_a$, $P_b$ or $P_c$, respectively. Table~\ref{tab:lik-3Q1-Z} shows the performance of the likelihood function on the reference samples.
Events in a class $x$ are correctly classified if $P_x > P_y,P_z$, where $y$, $z$ are the other classes. The probabilities for correct assignments are in the range $0.61-0.69$, which suffice to achieve a good reconstruction of the heavy resonances. We now describe the procedure followed in each case.
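The classification step can be sketched as follows. This is a toy illustration, not the analysis code: the histogram-based one-dimensional p.d.f.s, the two toy variables and the reference samples are hypothetical stand-ins for the kinematical variables and high-statistics samples described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pdfs(samples, edges):
    """Histogram-based 1-D p.d.f.s, one per variable, for one class."""
    pdfs = []
    for i in range(samples.shape[1]):
        hist, _ = np.histogram(samples[:, i], bins=edges, density=True)
        pdfs.append((hist + 1e-9, edges))  # small floor avoids empty bins
    return pdfs

def class_likelihood(event, pdfs):
    """Product of the per-variable p.d.f. values for one event."""
    like = 1.0
    for x, (hist, edges) in zip(event, pdfs):
        j = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(hist) - 1))
        like *= hist[j]
    return like

def classify(event, pdf_sets):
    """Return normalised probabilities (P_a, P_b, ...) for one event."""
    likes = np.array([class_likelihood(event, p) for p in pdf_sets])
    return likes / likes.sum()

# Two toy classes, separated in both variables.
ref_a = rng.normal([0.0, 0.0], 1.0, size=(5000, 2))
ref_b = rng.normal([2.0, 2.0], 1.0, size=(5000, 2))
edges = np.linspace(-5.0, 7.0, 40)
pdf_sets = [make_pdfs(ref_a, edges), make_pdfs(ref_b, edges)]

probs = classify(np.array([2.1, 1.9]), pdf_sets)
print(probs)  # the event is assigned to the class with highest probability
```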
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/D-Pa-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-Pb-3Q1-Z.eps,height=5.1cm,clip=} \\
\multicolumn{3}{c}{\epsfig{file=Figs/D-Pc-3Q1-Z.eps,height=5.1cm,clip=}}
\end{tabular}
\caption{Probability distribution functions for events in the reference samples.}
\label{fig:lik2-3Q1-Z}
\end{center}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{cccc}
Class & $P_a > P_b,P_c$ & $P_b > P_a,P_c$ & $P_c > P_a,P_b$ \\
\hline
($a$) & 0.61 & 0.24 & 0.15 \\
($b$) & 0.19 & 0.69 & 0.12 \\
($c$) & 0.15 & 0.20 & 0.65
\end{tabular}
\end{center}
\caption{Performance of the likelihood function on the $\ell^\pm \ell^\pm \ell^\mp$ event reference samples: fractions of events in each sample and their classification. Events in a class $x$ are correctly classified if $P_x > P_y,P_z$, where $y$, $z$ are the other classes.}
\label{tab:lik-3Q1-Z}
\end{table}
{\em Class} ($a$): $T \bar T \to Zt W \bar b \to Z Wb Wb$. Events which are identified as resulting from this decay chain are reconstructed using this procedure:
\begin{enumerate}
\item The $Z$ boson momentum is obtained from the opposite-sign lepton pair $Z$ candidate.
\item Two light jets are selected to form the hadronic $W$, labelled as $W_H$.
If there are only two light jets these are automatically chosen; if there are more than two, only up to four (ordered by decreasing $p_T$) are considered.
\item The leptonic $W$ (labelled as $W_L$) is obtained from the charged lepton $\ell$ not included in the $Z$ candidate and the missing energy, identifying $(p_\nu)_T = p_T\!\!\!\!\!\!\!\!\not\,\,\,\,\,\,\,$, requiring $(p_{\ell}+p_\nu)^2 = M_W^2$ and solving for the longitudinal component of the neutrino momentum. If no real solution exists, the neutrino transverse momentum is decreased in steps of 1\% and the procedure is repeated. If no solution is found after 100 iterations, the discriminant of the quadratic equation is set to zero.
Both solutions for the neutrino momentum are kept, and the one giving best reconstructed masses is selected.
\item Two $b$ jets are selected among the ones present, to be paired with $W_H$ and $W_L$, respectively.
\item The top quark is reconstructed from one of the $Wb$ pairs, and its parent heavy quark $T_1$ from the top quark and the $Z$ boson.
\item The other heavy quark $T_2$ is reconstructed from the remaining $Wb$ pair.
\item Among all choices for $b$ and light jets and all possible pairings, the combination minimising the quantity
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_t^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{T_1}^\text{rec}-m_{T_2}^\text{rec})^2}{\sigma_T^2}
\end{equation}
\end{small}%
is selected, with $\sigma_W = 10$ GeV, $\sigma_t = 14$ GeV~\cite{Aad:2009wy}, $\sigma_T = 20$ GeV. Notice that we include the leptonic $W$ boson reconstructed mass in the minimisation. Since the quadratic equation is forced to have a solution in all cases, sometimes the reconstructed mass is not the $W$ mass.
\end{enumerate}
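The leptonic $W$ reconstruction of step 3 can be sketched as follows. This is an illustrative implementation of the procedure described above (massless lepton, $M_W = 80.4$ GeV and the example kinematics are assumptions, not values taken from the analysis): the $W$ mass constraint gives a quadratic equation for the neutrino $p_z$, whose discriminant is real whenever the effective mass term exceeds the product of lepton and neutrino transverse momenta.

```python
import math

M_W = 80.4  # GeV (assumed value for illustration)

def neutrino_pz(pl, met, mw=M_W, steps=100):
    """Solve (p_l + p_nu)^2 = mw^2 for the neutrino longitudinal momentum.

    pl  = (px, py, pz) of the charged lepton (taken massless),
    met = (px, py) of the missing transverse momentum.
    If there is no real solution, the neutrino pT is shrunk in 1% steps;
    after `steps` iterations the discriminant is set to zero, as in the
    text. Returns both solutions and the (possibly rescaled) neutrino pT.
    """
    plx, ply, plz = pl
    el = math.sqrt(plx**2 + ply**2 + plz**2)
    ptl2 = plx**2 + ply**2
    nux, nuy = met
    for _ in range(steps):
        a = 0.5 * mw**2 + plx * nux + ply * nuy
        disc = a**2 - ptl2 * (nux**2 + nuy**2)
        if disc >= 0.0:
            break
        nux *= 0.99  # shrink the neutrino pT and retry
        nuy *= 0.99
    else:  # still no real solution: force the discriminant to zero
        a = 0.5 * mw**2 + plx * nux + ply * nuy
        disc = 0.0
    root = el * math.sqrt(max(disc, 0.0))
    return ((a * plz - root) / ptl2, (a * plz + root) / ptl2), (nux, nuy)

# Example kinematics (GeV); both solutions reproduce the W mass here.
sols, met_used = neutrino_pz((40.0, 0.0, 20.0), (-10.0, 30.0))
```

Both solutions are kept, and in the analysis above the one giving the best reconstructed masses is selected.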
{\em Class} ($b$): $T \bar T \to Zt V \bar t \to Z Wb V Wb$. For events identified as resulting from this decay chain which have at least six jets (otherwise they are dropped) we proceed through the same steps $1-4$ as in class $(a)$, and subsequently:
\begin{enumerate}\setcounter{enumi}{4}
\item The hadronic and leptonic tops $t_H$, $t_L$ are obtained from the two $Wb$ pairs.
\item One heavy quark $T_1$ is reconstructed from one top and the $Z$ boson.
The other heavy quark is obtained from the other top and two jets chosen among the ones present ($b$-tagged or not).
\item The combination minimising
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{t_H}^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{t_L}^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{T_1}^\text{rec}-m_{T_2}^\text{rec})^2}{\sigma_T^2}
\end{equation}
\end{small}%
is finally selected.
\end{enumerate}
{\em Class} ($c$): $B \bar B \to Zb Wt \to Zb WWb$. The reconstruction of this channel proceeds through
the same steps $1-3$ as in the previous two channels, and then:
\begin{enumerate}\setcounter{enumi}{3}
\item Two $b$ jets are selected among the ones present, and one of them is paired with the $Z$ boson to reconstruct a heavy quark $B_1$.
\item The second $b$ jet is associated with one of the $W$ bosons to reconstruct a top quark, and then with the other $W$ boson to reconstruct the second heavy quark $B_2$.
\item The combination minimising
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{t}^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{B_1}^\text{rec}-m_{B_2}^\text{rec})^2}{\sigma_B^2}
\end{equation}
\end{small}%
is finally selected, with $\sigma_B = 20$ GeV.
\end{enumerate}
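The combinatorial scan common to the three classes can be sketched, for class ($c$), as below. The four-vectors in the example are hypothetical stand-ins (components $(E, p_x, p_y, p_z)$ in GeV, with $m_t \approx 175$ GeV assumed), and only the class-($c$) pairing logic is illustrated.

```python
import itertools
import math

M_W, M_T = 80.4, 175.0               # assumed mass values (GeV)
SIGMA_W, SIGMA_T, SIGMA_Q = 10.0, 14.0, 20.0

def add(*ps):
    """Sum of four-vectors given as (E, px, py, pz) tuples."""
    return tuple(sum(c) for c in zip(*ps))

def mass(p):
    e, px, py, pz = p
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def best_pairing(w_had, w_lep, z, bjets):
    """Class-(c) scan: one b pairs with the Z -> B1; the other b pairs
    with one W -> top, then with the other W -> B2. The combination
    minimising the chi-square is kept."""
    best = None
    for b1, b2 in itertools.permutations(bjets, 2):
        for w_top, w_other in ((w_had, w_lep), (w_lep, w_had)):
            top = add(w_top, b2)
            heavy1, heavy2 = add(z, b1), add(w_other, top)
            chi2 = ((mass(w_had) - M_W) / SIGMA_W) ** 2 \
                 + ((mass(w_lep) - M_W) / SIGMA_W) ** 2 \
                 + ((mass(top) - M_T) / SIGMA_T) ** 2 \
                 + ((mass(heavy1) - mass(heavy2)) / SIGMA_Q) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, mass(heavy1), mass(heavy2))
    return best

# Toy example (arbitrary four-vectors, not analysis values):
chi2, m_B1, m_B2 = best_pairing((90.0, 40.0, 20.0, 60.0),
                                (100.0, -20.0, 40.0, 70.0),
                                (95.0, 10.0, -30.0, 50.0),
                                [(60.0, 20.0, 10.0, 50.0),
                                 (70.0, -30.0, 20.0, 55.0)])
```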
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mtZ-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mbZ-3Q1-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mbW-3Q1-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mtW-3Q1-Z.eps,height=5.1cm,clip=} \\
\multicolumn{3}{c}{\epsfig{file=Figs/mtH-3Q1-Z.eps,height=5.1cm,clip=}}
\end{tabular}
\caption{Reconstructed heavy quark masses in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state.}
\label{fig:mrec-3Q1-Z}
\end{center}
\end{figure}
\begin{table}[htb]
\begin{center}
\begin{small}
\begin{tabular}{cccccccccccc}
& Total & ($a$) & ($b$) & $(c)$ & \quad & & Total & ($a$) & ($b$) & $(c)$\\[1mm]
$T \bar T$ ($T_\text{s}$) & 82.9 & 29.1 & 40.7 & 9.3 && $B \bar B$ ($B_\text{s}$) & 65.3 & 15.3 & 9.7 & 36.5 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 87.1 & 29.3 & 41.7 & 11.4 && $B \bar B$ ($TB_{\text{d}_1}$) & 72.6 & 11.4 & 12.8 & 45.2 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 211.0 & 41.0 & 133.0 & 24.2 && $B \bar B$ ($TB_{\text{d}_2}$) & 34.2 & 12.3 & 11.8 & 3.2 \\
$X \bar X$ ($XT_\text{d}$) & 35.7 & 14.2 & 12.6 & 2.3 && $B \bar B$ ($BY_\text{d}$) & 21.2 & 7.0 & 2.7 & 11.3 \\
& & & & && $Y \bar Y$ ($BY_\text{d}$) & 0.0 & 0.0 & 0.0 & 0.0 \\
\hline
$Z t \bar tnj$ & 32 & 5 & 12 & 5 \\
\end{tabular}
\end{small}
\end{center}
\caption{Number of signal and background events in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state at the reconstruction level assigned to each event class. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-3Q1-Z-C}
\end{table}
We present our results in Fig.~\ref{fig:mrec-3Q1-Z}, including all signal contributions in a given model, as well as the SM background, and discuss them in turn.
We do not include $W$ and top reconstructed masses, which show good peaks at the true masses with the optimised method used.
The separate contributions of each process are given in Table~\ref{tab:nsnb-3Q1-Z-C}, skipping several backgrounds which are practically removed at the last stage of event selection. (The total number of events includes in each case those in class $(b)$ which are later rejected by the reconstruction algorithm because they do not have at least 6 jets.)
The first plot (up, left) shows the reconstructed $T_1$ mass for events assigned to classes $(a,b)$. This heavy quark is the one decaying $T \to Zt$, with $t$ decaying either hadronically or semileptonically. The reconstruction of this peak in the $Zt$ invariant mass distribution implies that $T$ has charge $2/3$, and also shows the vector-like nature of $T$. The counterpart for $B$ quarks is shown in the second plot (up, right), with the reconstructed mass of $B_1$, which is the quark decaying $B \to Zb$. The reconstruction of a peak in the $Zb$ invariant mass distribution shows that $B$ has charge $-1/3$ and is vector-like.
The other resonant peaks also give information regarding heavy quark decays. We show in the third plot (middle, left) the reconstructed $T_2$ mass for events in class $(a)$, which corresponds to the decay $T \to Wb$, with $W$ decaying either leptonically or hadronically. For the $T$ singlet and $(T \, B)$ doublet in scenario 1, where this decay takes place, the peaks are sharp, and they might be observed with sufficient luminosity. We point out that the presence of events with $W$ decaying leptonically (about one half of the total) clearly indicates the $T \to Wb$ decay, but this can also be established in the single lepton final state with much larger statistics. The other models with $T$ quarks in which the decay $T \to Wb$ does not happen still have a fraction of events incorrectly assigned to this class. In these models the $Wb$ invariant mass distribution, which should peak at $m_t$, is broader and shifted towards larger values because the reconstruction procedure enforces equal masses for both heavy quarks. The fourth plot (middle, right) represents the $B_2$ invariant mass distribution for events in class ($c$), from the decay $B \to Wt \to WWb$. This plot also shows the presence of a resonance decaying into a top quark and a $W$ boson, the latter reconstructed either from two jets or from a charged lepton plus missing energy. This peak establishes the decay $B \to Wt$, which is absent in the $(B \, Y)$ doublet. Finally, the fifth plot (down) shows the reconstructed $T_2$ mass in class $(b)$, corresponding to the decay $T \to V t$, with $V$ decaying into two jets. This distribution shows the presence of a resonance but does not help establish its nature, because the identity of $V$ is not determined.
A few remarks are in order. It is clear that detecting the presence of a resonant peak and drawing conclusions about the nature of the heavy quark requires a significant amount of statistics, and a compromise should be taken between having good reconstructed peaks (imposing quality cuts on class identification as well as on reconstructed $W$ boson and top quark masses, for example) and having a sufficient number of events. Here we have made no quality cuts in order to keep the signals as large as possible. But even with this conservative approach the contributions of the three cascade decays in Eqs.~(\ref{ec:ch3Q1Z}) can be disentangled, and invariant mass peaks can be reconstructed so that, if sufficient luminosity is collected, the decays $T \to Zt$, $T \to Wb$, $B \to Zb$, $B \to Wt$ can be established.
Finally, we address the discrimination of $T$ singlets and $T$ quarks of a $(T \, B)$ doublet in scenario 1, using angular distributions. (In this scenario the $T$ decay branching ratios are the same as for $T$ singlets.) In $T \to Zt$ decays, as well as in $B \to W^- t$, the top quarks are produced with a high polarisation $P = \pm 0.91$ in the helicity axis (negative for the singlets and positive for the doublets), and the opposite polarisation for antiquarks.
This allows one to determine the chirality of the $WTb$ and $WtB$ couplings by looking at the charged lepton distribution in the top quark rest frame for the event subset in which the top decays leptonically. We show in Fig.~\ref{fig:cosl-3Q1-Z} (left) the theoretical distributions as computed with the Monte Carlo generator, which are the same for $T$ and $B$ quarks, since the decays $T \to Zt$ and $B \to W^- t$ involve a coupling with the same chirality, left-handed for singlets and right-handed for doublets.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/cos_l-th.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/cos_l-3Q1-Z-ab.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Left: Charged lepton distribution in the top quark rest frame for $T \to Zt$ and $B \to W^- t$ decays. Right: distribution for the $T$ singlet and $(T \, B)$ doublet after simulation.}
\label{fig:cosl-3Q1-Z}
\end{center}
\end{figure}
On the right panel we show the reconstructed distribution for the $T$ singlet and the $TB_{\text{d}_1}$ model, including in the latter case the $B$ contribution, which is flat and slightly smooths the slope of the distribution. It is clear that large statistics are required to discriminate between the two cases, but the differences are visible already without the need to unfold detector effects.
The forward-backward asymmetries computed from the reconstructed distributions are $A_\text{FB} = -0.19$ for $T_\text{s}$ and $A_\text{FB} = 0.24$ for $TB_{\text{d}_1}$, so that with 30 fb$^{-1}$\ (corresponding to 27.7 and 38.6 events in each case) the statistical difference would amount to $2.4\sigma$. A complete analysis unfolding the detector effects and with an appropriate calculation of systematic uncertainties (see for example Ref.~\cite{AguilarSaavedra:2007rs} for a similar analysis)
is beyond the scope of this work. For $B \to W^- t$ decays the results are completely analogous but with smaller statistics. We also note that for these large heavy quark masses the $Z$ bosons produced in
$T \to Zt$, $B \to Zb$ are mostly longitudinal, and the angular distribution of the $\ell^+ \ell^-$ pair from $Z$ decay is almost indistinguishable for $T$, $B$ singlets and doublets already at the generator level.
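The quoted asymmetry separation can be put in perspective with a simple counting estimate of the statistical uncertainty on a forward-backward asymmetry, $\sigma_A = \sqrt{(1-A_\text{FB}^2)/N}$. This naive estimate only indicates the order of magnitude and need not reproduce the $2.4\sigma$ figure, which results from the full reconstructed distributions.

```python
import math

# Back-of-the-envelope comparison of the two reconstructed asymmetries
# quoted in the text (simplified counting estimate, not the full analysis).
def afb_sigma(afb, n_events):
    """Statistical uncertainty on A_FB from N counted events."""
    return math.sqrt((1.0 - afb**2) / n_events)

a_singlet, n_singlet = -0.19, 27.7   # T_s,    30 fb^-1
a_doublet, n_doublet = 0.24, 38.6    # TB_d1,  30 fb^-1

delta = abs(a_doublet - a_singlet)
sigma = math.hypot(afb_sigma(a_singlet, n_singlet),
                   afb_sigma(a_doublet, n_doublet))
print(f"separation ~ {delta / sigma:.1f} sigma (naive estimate)")
```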
\subsection{Final state $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$)}
In the sample without $Z$ candidates we require for event selection that (i) the leading like-sign lepton $\ell_1$ has transverse momentum $p_T > 50$ GeV; (ii) the total transverse energy $H_T$ is larger than 500 GeV. Notice again that these cuts are not optimised to reduce the background but are quite general for searches of new heavy states.
The kinematical distributions of the two variables at pre-selection are shown in Fig.~\ref{fig:dist-3Q1-noZ} for the SM background and all models except the $(B \, Y)$ doublet model which has a very small signal.
As in other final states,
the $H_T$ distribution clearly indicates in all cases that one or more heavy particles, with masses summing to around 1 TeV, are produced.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/ptlep1-3Q1-noZ.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/HT-3Q1-noZ.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Left: transverse momentum distribution of the leading like-sign lepton. Right: total transverse energy. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-3Q1-noZ}
\end{center}
\end{figure}
The number of events after this selection is given in Table~\ref{tab:nsnb-3Q1-noZ}, including also the numbers at pre-selection for comparison. We do not require $b$-tagged jets at this stage because it does not improve the background rejection, since most of the backgrounds have two top quarks.
\begin{table}[t]
\begin{center}
\begin{tabular}{cccccccccc}
& Pre. & Sel. & Rec. & \quad & & Pre. & Sel. & Rec. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 108.3 & 96.6 & 11.6 & & $B \bar B$ ($B_\text{s}$) & 194.0 & 181.1 & 25.1 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 119.1 & 111.3 & 15.2 & & $B \bar B$ ($TB_{\text{d}_1}$) & 247.5 & 235.8 & 39.9 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 218.8 & 200.0 & 33.6 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1030.4 & 977.8 & 177.0 \\
$X \bar X$ ($XT_\text{d}$) & 1037.9 & 988.9 & 187.1 & & $B \bar B$ ($BY_\text{d}$) & 18.0 & 16.9 & 1.1 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 0.4 & 0.3 & 0.0 \\
\hline
$t \bar t nj$ & 350 & 41 & 3 & & $WZnj$ & 62 & 16 & 0 \\
$W t \bar tnj$ & 67 & 37 & 3 & & $ZZnj$ & 7 & 1 & 0 \\
$Z t \bar tnj$ & 20 & 15 & 5 \\
\end{tabular}
\end{center}
\caption{Number of events at the pre-selection, selection and reconstruction levels in the $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$) final state for the signals and main backgrounds with a luminosity of 30 fb$^{-1}$.}
\label{tab:nsnb-3Q1-noZ}
\end{table}
The $5\sigma$ discovery luminosities in this final state for the six models, including the contribution of both members in the case of the doublets, are given in Table~\ref{tab:sig-3Q1-noZ}.
The luminosity indicated for the $T$ singlet may be optimistic because the background normalisation uncertainty (unimportant for the other models which have much larger signals) may be relevant.
In this final state the heavy quark masses cannot be directly reconstructed because each heavy quark has among its decay products an invisible neutrino, and there are three neutrinos in total.
In $X \bar X$ production we can still have information about the mass of one of the heavy quarks, the one decaying $X \to W t \to WWb$, with $WW \to \ell \nu q \bar q'$, by reconstructing its decay products except the missing neutrino.\footnote{Notice that the transverse mass is not useful in this case because there are two more neutrinos from the other heavy quark decay.} For the reconstruction we demand the presence of at least one $b$-tagged jet and two light jets with $p_T > 20$ GeV, and select the hadronic $X$ decay products as follows:
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 11 fb$^{-1}$ & no & & $TB_{\text{d}_2}$ & 0.25 fb$^{-1}$ & no \\
$B_\text{s}$ & 3.5 fb$^{-1}$ & no & & $XT_\text{d}$ & 0.25 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 1.1 fb$^{-1}$ & no & & $BY_\text{d}$ & -- & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$) final state. A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-3Q1-noZ}
\end{table}
\begin{enumerate}
\item We select a $b$ jet among the ones present.
\item We select a pair of light jets $j_1$, $j_2$ among the three with highest $p_T$ (if there are only two, we select these).
\item We choose the combination minimising the quantity
\begin{small}
\begin{equation}
\frac{(m_{j_1 j_2}-M_W)^2}{\sigma_W^2} +
\frac{(m_{j_1 j_2 b}-m_t)^2}{\sigma_t^2} \,,
\end{equation}
\end{small}%
with $\sigma_W = 10$ GeV, $\sigma_t = 14$ GeV.
\end{enumerate}
The ``visible'' component of the heavy quark mass $m_X^\text{vis}$ is then reconstructed as the invariant mass of these jets plus the opposite-sign lepton among the three present. However,
in the decay $X \to W t \to WWb$, the two jets and the $b$ quark correspond to the top decay only about half of the time. We then set a cut
\begin{equation}
140~\text{GeV} < m_{j_1 j_2 b} < 210~\text{GeV}
\label{ec:cut-3Q1-noZ}
\end{equation}
to ensure that the event topology is consistent with the decay chain assumed. The number of events after the additional reconstruction conditions, including the cut in Eq.~(\ref{ec:cut-3Q1-noZ}), can be found in Table~\ref{tab:nsnb-3Q1-noZ}.
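The $jjb$ selection and the top-mass window above can be sketched as below. The four-vectors in the example are illustrative assumptions (components $(E, p_x, p_y, p_z)$ in GeV, taking $m_t \approx 175$ GeV), chosen so that one jet pair reproduces $M_W$.

```python
import itertools
import math

M_W, M_T = 80.4, 175.0       # assumed mass values (GeV)
SIGMA_W, SIGMA_T = 10.0, 14.0

def mass(*ps):
    """Invariant mass of a sum of (E, px, py, pz) four-vectors."""
    e, px, py, pz = (sum(c) for c in zip(*ps))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def select_jjb(light_jets, b_jets):
    """Pick the light-jet pair (among the three hardest) and the b jet
    minimising the two-term chi-square; return them with its value."""
    best = None
    for j1, j2 in itertools.combinations(light_jets[:3], 2):
        for b in b_jets:
            chi2 = ((mass(j1, j2) - M_W) / SIGMA_W) ** 2 \
                 + ((mass(j1, j2, b) - M_T) / SIGMA_T) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, j1, j2, b)
    return best

def passes_top_window(j1, j2, b):
    """Window cut 140 GeV < m(jjb) < 210 GeV on the reconstructed top."""
    return 140.0 < mass(j1, j2, b) < 210.0

# Toy example: j1+j2 give m = 80.4 GeV exactly, j3 is a decoy jet.
j1 = (40.2, 40.2, 0.0, 0.0)
j2 = (40.2, -40.2, 0.0, 0.0)
j3 = (10.0, 0.0, 10.0, 0.0)
b = (95.0, 0.0, 0.0, 50.0)
res = select_jjb([j1, j2, j3], [b])
```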
The $m_X^\text{vis}$ distribution is shown in Fig.~\ref{fig:mrec-3Q1-noZ}. For $X \bar X$ production we observe an endpoint around 500 GeV, which is not present for the other signals nor the SM background. Hence, if a signal is observed, template methods may be used to measure $m_X$ in this decay. Notice that a similar procedure to reconstruct the mass in $B \bar B$ decays is more difficult due to combinatorics, because the two $W$ bosons from the heavy quark decay have opposite sign.
\begin{figure}[t]
\begin{center}
\epsfig{file=Figs/mQ2-3Q1-noZ.eps,height=5.1cm,clip=}
\caption{Visible reconstructed mass $m_X^\text{vis}$ distribution of one of the heavy quarks (see the text). The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-3Q1-noZ}
\end{center}
\end{figure}
Finally, it is worth remarking that the size of the signal itself would already give a strong hint that $X \bar X$ or $B \bar B$ pairs are produced, as is apparent from the comparison of the numbers of events in Table~\ref{tab:nsnb-3Q1-noZ}.
\subsection{Summary}
In this section it has been shown that the trilepton final state has very good sensitivity to $T \bar T$, $B \bar B$ and $X \bar X$ production. Pair production of $T$, $B$ and $X$ quarks
gives final states with three leptons with sizeable branching ratios, and trilepton backgrounds can be significantly reduced with relatively mild selection criteria that keep most of the signal. The discovery potential found is similar to that of the dilepton final states, which have larger branching ratios but also larger backgrounds, but worse than in the single lepton one. However, the main interest of the trilepton final state is not heavy quark discovery but model discrimination, via the observation or absence of several heavy quark decays in this unique channel.
Signals in the trilepton final state involve either the leptonic decay of a $Z$ and a $W$ boson, or of three $W$ bosons. Hence, we have split the sample into two subsamples, one in which a $Z$ candidate can be found (labelled as `$Z$') and the other one in which no such candidate can be identified (labelled as `no $Z$').
The $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state is very interesting because it is very sensitive to $T \bar T$ and $B \bar B$ production, in the decay channels of Eqs.~(\ref{ec:ch3Q1Z}). $T$ and $B$ singlets with a mass of 500 GeV can both be discovered with a luminosity of 3.4 fb$^{-1}$. Discovery of a $(T \, B)$ doublet requires 1.1 fb$^{-1}$\ (0.73 fb$^{-1}$) in scenario 1 (2), and a $(X \, T)$ doublet 0.72 fb$^{-1}$, with the main contribution to the signal coming from the $T$ quark in the latter two cases. But this broad sensitivity brings an additional difficulty for the discrimination of signals: the three signal channels in Eqs.~(\ref{ec:ch3Q1Z}) are interesting, and it is necessary to identify, event by event, to which of them each event corresponds. In this sense, this final state is also well suited because the kinematics of the decay chain can be fully reconstructed, as there is only one light neutrino. With this purpose we have developed a likelihood analysis to discriminate among the
three signal channels and then reconstruct the events accordingly.
We have shown that the contributions of the three cascade decays in Eqs.~(\ref{ec:ch3Q1Z}) can be disentangled, invariant mass peaks can be reconstructed, and the decays $T \to Zt$, $T \to W^+ b$, $B \to Zb$, $B \to W^-t$ can be established if sufficient luminosity is collected. This clean channel, in which the combinatorics is moderate, is also a good candidate to determine the chiralities of $T$, $B$ quarks with the analysis of
angular distributions in the semileptonic decay of the top quarks produced in
$T \to Zt$, $B \to W^- t$. For example, a simple analysis presented shows that for 30 fb$^{-1}$\ the differences in a forward-backward asymmetry between a $T$ quark singlet and a $(T \, B)$ doublet in scenario 1 would amount to $2.4\sigma$. In $T \to Ht$ decays (seen for instance in the single lepton channel) the statistics is larger but the top quark polarisation is smaller and the reconstruction less clean than here.
The final state without $Z$ candidates is also very interesting because of its excellent sensitivity to $X \bar X$ and $B \bar B$ production, very similar to the one in the like-sign dilepton channel: 3.5 fb$^{-1}$\ for $B$ singlets and 0.25 fb$^{-1}$\ for both $(X \, T)$ doublets and the $TB_{\text{d}_2}$ model, with the main contribution resulting from $X \bar X$ and $B \bar B$ production, respectively. Although the masses cannot be fully reconstructed in this final state, in $X \bar X$ production the $X$ mass can still be determined from the endpoint of an invariant mass distribution. The presence of a signal for $B$ quarks gives indirect evidence for the $B \to W^- t$ decay, which is absent for the $B$ quark in a $(B \, Y)$ doublet.
To conclude this section, it is worth mentioning that the trilepton final state is also very sensitive to heavy Dirac or Majorana neutrinos in singlet, doublet or triplet representations~\cite{AguilarSaavedra:2009ik}. Those signals can be distinguished from heavy quark production because in that case the heavy neutrino can be observed as a peak in the invariant mass distribution of two opposite-charge leptons plus missing energy, with an additional peak in the distribution of the remaining lepton plus two jets. Scalar triplets also give trilepton signals (see for example Ref.~\cite{delAguila:2008cj}) but the like-sign dilepton invariant mass distribution displays a very sharp peak in scalar triplet production, which is of course absent in the case of heavy quarks.
\section{Final state $\ell^\pm \ell^\pm$}
\label{sec:2lik}
This conspicuous signal can be produced in decays of $B$ and $X$ quark pairs when two same-sign $W$ bosons decay leptonically,
\begin{align}
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad W^\pm \to \ell^\pm \nu , W^\mp \to q \bar q' \,, \nonumber \\
& X \bar X \to W^+ t \, W^- \bar t \to W^+ W^+ b \, W^- W^- \bar b
&& \quad W^\pm \to \ell^\pm \nu , W^\mp \to q \bar q' \,,
\label{ec:ch2Q2}
\end{align}
and also in decays of $T$ quark pairs involving $Z$ bosons
\begin{align}
& T \bar T \to Zt \, W^- \bar b \to Z W^+b W^- \bar b
&& \quad Z \to \ell^+ \ell^- , WW \to \ell \nu q \bar q'
\label{ec:ch2Q2b}
\end{align}
when the opposite-charge lepton from the $Z$ decay is missed by the detector (the charge conjugate channel is also included). Like-sign dilepton signals are relatively clean, their largest SM background being $t \bar t nj$ in the semileptonic channel, where one of the two like-sign leptons results from a $b$ quark decay.
A very large source of $\ell^\pm \ell^\pm$ events, albeit with low lepton transverse momenta, is $b \bar b nj$, with a cross section of 1.4 $\mu$b (for a detailed discussion of like-sign dilepton backgrounds see Ref.~\cite{delAguila:2007em}). For example, requiring only $p_T > 15$ GeV for the charged leptons, the number of like-sign dilepton events from $t \bar t nj$ and $b \bar b nj$ is around 25000 and 150000, respectively, for a luminosity of 30 fb$^{-1}$~\cite{delAguila:2007em}.
In order to reduce such backgrounds, we demand for event pre-selection (i) the presence of two like-sign leptons with transverse momentum $p_T > 30$ GeV; (ii) the absence of non-isolated muons. The first condition practically eliminates $b \bar b nj$, while the second reduces $WZnj$, which gives this final state when the opposite-charge lepton from the $Z$ decay is missed by the detector. The number of signal and background events at pre-selection can be read from Table~\ref{tab:nsnb-2Q2}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccccccccccc}
& Pre. & Sel. & Rec. & \quad & & Pre. & Sel. & Rec. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 139.6 & 79.3 & 65.9 & & $B \bar B$ ($B_\text{s}$) & 291.5 & 170.1 & 137.9 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 156.3 & 90.4 & 74.8 & & $B \bar B$ ($TB_{\text{d}_1}$) & 368.6 & 223.2 & 172.8 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 263.8 & 177.8 & 149.1 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1737.4 & 1122.8 & 890.0 \\
$X \bar X$ ($XT_\text{d}$) & 1684.7 & 1138.6 & 900.4 & & $B \bar B$ ($BY_\text{d}$) & 15.8 & 5.6 & 45.0 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 2.0 & 0.7 & 0.2 \\
\hline
$t \bar t nj$ & 1413 & 43 & 23 & & $WWnj$ & 245 & 7 & 0 \\
$W t \bar tnj$ & 184 & 47 & 34 & & $WZnj$ & 1056 & 9 & 1 \\
$Z t \bar tnj$ & 28 & 9 & 5 & & $WWWnj$ & 110 & 11 & 3 \\
\end{tabular}
\end{center}
\caption{Number of events at the pre-selection, selection and reconstruction levels in the $\ell^\pm \ell^\pm$ final state for the signals and main backgrounds with a luminosity of 30 fb$^{-1}$.}
\label{tab:nsnb-2Q2}
\end{table}
We note that it is sometimes claimed in the literature (without actually providing a proof) that SM backgrounds with charged leptons from $b$ decays, namely $t \bar t nj$ and $b \bar b nj$, can be removed or suppressed to negligible levels by isolation criteria.
However, recent analyses for supersymmetry searches performed with a full detector simulation~\cite{Aad:2009wy} arrive at the opposite conclusion. This can already be seen at the level of a fast detector simulation.
We show in Fig.~\ref{fig:dist2-2Q2} the minimum $\Delta R$ distance between each of the two charged leptons and the closest jet, for various signals and the $t \bar t nj$ background. We observe that these variables, which in general do not bring a dramatic improvement in the signal-to-background ratio, are even less useful in this case, where the signals themselves have many hard jets.
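For reference, the lego-plot separation used in these distributions can be computed as in the following sketch (illustrative code only, not part of our analysis; leptons and jets are assumed to be given as hypothetical `(eta, phi)` pairs):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Lego-plot separation sqrt(deta^2 + dphi^2), with the azimuthal
    difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def min_lepton_jet_dr(lepton, jets):
    """Minimum distance between one lepton and any jet in the event."""
    return min(delta_r(lepton[0], lepton[1], j[0], j[1]) for j in jets)
```

An isolation cut would then require `min_lepton_jet_dr` to exceed some threshold for both leptons; as discussed above, such a cut is of limited use here.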
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/dRl1j-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/dRl2j-2Q2.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Lego-plot separation between the leptons and the closest jet for several signals and the $t \bar t nj$ background, at the pre-selection level (these variables are not used for event selection). The luminosity is 30 fb$^{-1}$.}
\label{fig:dist2-2Q2}
\end{center}
\end{figure}
For the like-sign dilepton final state we first perform a ``discovery'' analysis with conditions aiming only to improve the signal significance by reducing the background. Then, we impose additional requirements (which reduce the signal statistical significance) to try to reconstruct the event kinematics and detect heavy quark mass peaks. These two analyses are presented in turn.
\subsection{Discovery potential}
To evaluate the discovery potential for heavy quark signals we require for event selection
(i) the presence of at least six jets, $b$ tagged or not, with $p_T > 20$ GeV;
(ii) transverse momentum $p_T > 50$ GeV for the leading charged lepton $\ell_1$;
(iii) missing energy $p_T\!\!\!\!\!\!\!\!\not\,\,\,\,\,\,\, > 50$ GeV;
(iv) transverse energy larger than 500 GeV.
The kinematical distributions of these variables at pre-selection are presented in Fig.~\ref{fig:dist1-2Q2}. We also show for completeness the separate multiplicity distributions of light and $b$-tagged jets. Notice that the maximum in the transverse energy distribution for the signals indicates that one or more heavy particles with a total mass around 1 TeV are produced.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/lmult-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/bmult-2Q2.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mult-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/HT-2Q2.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/ptlep1-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/ptmiss-2Q2.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Light and $b$-tagged jet multiplicity, total jet multiplicity, total transverse energy, transverse momentum of the leading lepton and missing energy. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist1-2Q2}
\end{center}
\end{figure}
The number of signal and background events after our selection cuts is listed in Table~\ref{tab:nsnb-2Q2}. We observe that $t \bar t nj$ still amounts to one third of the total like-sign dilepton background after being reduced by the cuts.
The corresponding discovery luminosities for each model can be found in Table~\ref{tab:sig-2Q2}, which also indicates whether a mass peak can be reconstructed (see the next subsection). It is noticeable that not only $X \bar X$ and $B \bar B$ decays give like-sign dileptons but also $T \bar T$ decays, although the discovery luminosity in the $T$ singlet model is much larger than for the rest. Finally, we note that for the $B$ singlet and the $(T \, B)$, $(X \, T)$ doublets the signals are much larger than the background, and thus the uncertainty in the latter is not crucial for the evaluation of the discovery potential.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 17 fb$^{-1}$ & no & & $TB_{\text{d}_2}$ & 0.23 fb$^{-1}$ & no \\
$B_\text{s}$ & 4.1 fb$^{-1}$ & no & & $XT_\text{d}$ & 0.23 fb$^{-1}$ & $m_X$ \\
$TB_{\text{d}_1}$ & 1.5 fb$^{-1}$ & no & & $BY_\text{d}$ & -- & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm \ell^\pm$ final state. A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-2Q2}
\end{table}
\subsection{Heavy quark reconstruction}
\label{sec:6.2}
In $X \bar X$ production the invariant mass of the quark decaying hadronically can be reconstructed from its decay products: a $b$ quark and four jets from $W$ decays. In order to do so, we restrict ourselves to events with at least one $b$-tagged jet and four light (non-tagged) jets.
The number of signal and background events after these additional reconstruction criteria is given in Table~\ref{tab:nsnb-2Q2}.
The reconstruction is performed as follows:
\begin{enumerate}
\item A $b$-tagged jet is selected among the ones present.
\item The four highest $p_T$ light jets are grouped in two pairs $j_1 j_2$, $j_3 j_4$ trying to reconstruct two $W$ bosons, the first one from the top quark decay and the second one from $X \to Wt$.
\item The $b$ jet is associated to the first light jet pair $j_1 j_2$ to reconstruct a top quark.
\item Among all the possible choices for the $b$ jet and light jet combinations, the one minimising the quantity
\begin{equation}
\frac{(m_{j_1 j_2}-M_W)^2}{\sigma_W^2} +
\frac{(m_{j_3 j_4}-M_W)^2}{\sigma_W^2} +
\frac{(m_{j_1 j_2 b}-m_t)^2}{\sigma_t^2}
\end{equation}
is chosen, taking $\sigma_W = 10$ GeV, $\sigma_t = 14$ GeV.
\end{enumerate}
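The combinatorial minimisation of steps 1--4 can be sketched as follows (an illustration only, not our analysis code; four-vectors are hypothetical `(E, px, py, pz)` tuples and the light-jet list is assumed to be $p_T$-ordered):

```python
from itertools import permutations

MW, MT = 80.4, 172.5            # nominal W and top masses (GeV)
SIGMA_W, SIGMA_T = 10.0, 14.0   # resolutions used in the chi-square

def inv_mass(*vecs):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    e = sum(v[0] for v in vecs)
    px = sum(v[1] for v in vecs)
    py = sum(v[2] for v in vecs)
    pz = sum(v[3] for v in vecs)
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def best_assignment(b_jets, light_jets):
    """Return (chi2, b, (j1, j2), (j3, j4)) minimising the chi-square
    over all b-jet choices and pairings of the four leading light jets."""
    best = None
    jets = light_jets[:4]
    for b in b_jets:
        for p in permutations(range(4)):
            # ignore the ordering within each pair to avoid double counting
            if p[0] > p[1] or p[2] > p[3]:
                continue
            j1, j2, j3, j4 = (jets[i] for i in p)
            chi2 = ((inv_mass(j1, j2) - MW) / SIGMA_W) ** 2 \
                 + ((inv_mass(j3, j4) - MW) / SIGMA_W) ** 2 \
                 + ((inv_mass(j1, j2, b) - MT) / SIGMA_T) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, b, (j1, j2), (j3, j4))
    return best
```

The reconstructed $m_X$ would then be the invariant mass of the chosen $b$-tagged jet and the four light jets.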
The reconstructed heavy quark mass $m_X$ is then defined as the invariant mass of the $b$-tagged jet and the four light jets. These distributions are presented in Fig.~\ref{fig:mrec-2Q2}. The long tails in the distributions of the $W$ and top reconstructed masses are mainly caused by wrong assignments. In particular, when only one $b$-tagged jet is present in the event, half of the time it corresponds to the $b$ quark from the other heavy quark $X$, in which the $W$ bosons decay leptonically. Still, the heavy quark mass peak is clearly observed without the need for quality cuts on the $W$ and top reconstructed masses (which of course sharpen the $m_X$ peak). The rest of the signals and the SM background do not exhibit any resonant structure, which shows that the above procedure does not introduce any bias. Note that a similar mass reconstruction cannot be achieved for $B \bar B$ or $T \bar T$ decays, due to the missing neutrino from each of the heavy quark decays.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mw1-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mt-2Q2.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mw2-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mQ-2Q2.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed masses of the two $W$ bosons, the top and the heavy quark. The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-2Q2}
\end{center}
\end{figure}
Although the cross section and mass reconstruction are consistent with the production of an $X \bar X$ pair, we can still ask to what extent we can conclude that $X \bar X$ pairs are produced, and not some other possibility consistent with charge conservation and giving the same final state. We can establish $X \bar X$ production in two steps. First,
we ensure that the extra boson from the heavy quark decaying hadronically, reconstructed from the two jets $j_3$ and $j_4$, is a $W$ boson and not a $Z$. It is not surprising that the $j_3 j_4$ invariant mass distribution in Fig.~\ref{fig:mrec-2Q2} has a peak around $M_W$, since this is imposed in the reconstruction procedure. The question is then what would happen if, instead of choosing the pair of jets which best reconstructs a $W$ boson, we chose the pair which best reconstructs a $Z$. The comparison between both situations is shown in Fig.~\ref{fig:mrec2-2Q2}, including the two signals in the $(X \, T)$ doublet ($X \bar X$ and $T \bar T$). We observe that if we select the pair of jets with $m_{j_3 j_4}$ closest to $M_Z$ the distribution is slightly shifted but the peak remains at $M_W$, and the heavy quark reconstruction is unaffected. Then, pending a more detailed study with a full detector simulation to confirm these results, it seems that the identity of the gauge boson from the heavy quark decay can be established. This leaves us with two options for the heavy quark: a $B$ (charge $-1/3$) or an $X$ (charge $5/3$) quark. (The opposite charges for antiquarks are understood.)
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mw2alt-2Q2.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mQalt-2Q2.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Comparison between the reconstructed masses of the extra boson and the heavy quark, using an alternative procedure (see the text). The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec2-2Q2}
\end{center}
\end{figure}
Then, we examine the transverse mass distribution of the remaining particles in the event, restricting ourselves to events with two $b$ tags for simplicity and imposing the quality cuts
\begin{align}
40~\text{GeV} < m_{j_1 j_2} < 120~\text{GeV} \,, \notag \\
125~\text{GeV} < m_{j_1 j_2 b} < 225~\text{GeV}
\end{align}
on the reconstruction (see Fig.~\ref{fig:mrec-2Q2}).
We define the transverse mass as in Ref.~\cite{Contino:2008hi},
\begin{equation}
m_\text{tr}^2 = (E_T^{\ell \ell b} + p_T\!\!\!\!\!\!\!\!\not\,\,\,\,\,\,\,)^2 - (p_T^{\ell \ell b \nu})^2 \,,
\end{equation}
with $(E_T^{\ell \ell b})^2 = (p_T^{\ell \ell b})^2 + m_{\ell \ell b}^2$, $p_T^\nu \equiv p_T\!\!\!\!\!\!\!\!\not\,\,\,\,\,\,\,$ and the transverse momenta of different particles summed vectorially. This distribution, shown in Fig.~\ref{fig:mrec3-2Q2} for the relevant signals, has an edge around $m_X$ for the $(X \, T)$ doublet signal, showing that the like-sign charged leptons and the $b$ quark result from the decay of a 500 GeV resonance. Then, charge conservation and the absence of significant additional jet activity (which could be identified as additional partons produced in the hard process) imply the possible charge assignments $(Q_h,Q_l) = \pm (5/3,-5/3),\pm (5/3,-7/3), \pm (1/3,-5/3)$ for the heavy quark decaying
hadronically and leptonically, respectively. Of these three possibilities, the only one consistent with a small mixing of the third SM generation and the new quarks with the first two SM generations (so that $b$ quarks are produced in $b \bar b$ pairs) is the first one, corresponding to $X \bar X$ production.
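The transverse mass defined above can be computed as in the following sketch (an illustration under our own conventions, not the analysis code; the $\ell\ell b$ system is a hypothetical `(E, px, py, pz)` tuple and the missing transverse momentum a `(mex, mey)` pair):

```python
import math

def transverse_mass(p_llb, met_xy):
    """m_tr of the lepton-lepton-b system plus missing energy:
    m_tr^2 = (E_T^llb + MET)^2 - |pT^llb + pT^nu|^2, with
    (E_T^llb)^2 = (pT^llb)^2 + m_llb^2 and pT^nu identified with MET."""
    e, px, py, pz = p_llb
    m2 = max(e * e - px * px - py * py - pz * pz, 0.0)
    pt_llb = math.hypot(px, py)
    et_llb = math.sqrt(pt_llb ** 2 + m2)
    met = math.hypot(*met_xy)
    ptx, pty = px + met_xy[0], py + met_xy[1]  # vectorial sum of pT
    mtr2 = (et_llb + met) ** 2 - (ptx * ptx + pty * pty)
    return math.sqrt(max(mtr2, 0.0))
```

For a heavy quark decaying to the $\ell\ell b\nu$ system, this quantity is bounded from above by the quark mass, which produces the edge around $m_X$ seen in the distribution.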
\begin{figure}[htb]
\begin{center}
\epsfig{file=Figs/mQ2-2Q2.eps,height=5.1cm,clip=}
\caption{Transverse mass distribution of the two charged leptons, a $b$ jet and the missing energy. The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec3-2Q2}
\end{center}
\end{figure}
\subsection{Summary}
We have shown in this section that the like-sign dilepton final state has an excellent discovery potential for $X \bar X$ production: an $(X \, T)$ doublet of 500 GeV could be discovered with only 0.23 fb$^{-1}$. This discovery potential is only matched by the trilepton and single lepton final states (but in the latter the quark observed is the $T$ partner).
Moreover, a heavy quark mass peak can be found in the invariant mass distribution of a reconstructed top quark and two extra jets, resulting from the hadronic decay of a $W$ boson. Despite the fact that heavy quark charges cannot be directly measured (unless the $b$ jet charge is measured, which is very difficult at the LHC), the detailed analysis of the event kinematics can eventually establish that the signal corresponds to $X \bar X$ production, if this indeed is the case.
For $B \bar B$ production the signals are also interesting and the discovery potential is also very good: 4.1 fb$^{-1}$\ for $B$ singlets and 1.1 fb$^{-1}$, 0.23 fb$^{-1}$\ for $(T \, B)$ doublets in scenarios 1 and 2, respectively. For $B \bar B$ production the heavy quark mass peaks cannot be reconstructed because each heavy quark has among its decay products an invisible neutrino. But the presence of a signal distinguishes a $B$ singlet or $(T \, B)$ doublet, which have decays $B \to W^- t$, from the $B$ quark in a $(B \, Y)$ doublet which does not.
This final state has some sensitivity to $T \bar T$ production, with one heavy quark decaying $T \to Zt$, $Z \to \ell^+ \ell^-$ and one of these leptons missed by the detector. The discovery potential is worse than in other final states, however, and the heavy quark masses cannot be reconstructed.
We note that
our results exhibit some differences with respect to previous work~\cite{Contino:2008hi}, due to two different sources:
\begin{itemize}
\item We include a fast detector simulation and pile-up, and in order to reduce SM backgrounds we must apply tighter event selection criteria. For example, the event selection in Ref.~\cite{Contino:2008hi} demands five jets with pseudo-rapidity $|\eta| < 5$. This is not sensible in the presence of pile-up, so we restrict our analysis to the central region of the calorimeter, $|\eta| < 2.5$. In fact, we have applied the cuts in Ref.~\cite{Contino:2008hi} to our simulation, obtaining a similar signal efficiency but a background 5 times larger (see Table~\ref{tab:comp-contino}). In particular $t \bar t nj$, not considered there, amounts to 52 events, about one half of the total SM contribution.
Therefore, reducing the background requires stronger selection criteria which obviously reduce the signal as well.
\item As already indicated in section~\ref{sec:3}, our $5\sigma$ discovery criterion is that (i) the signal statistical significance (possibly evaluated with Poisson statistics) is larger than $5\sigma$; (ii) the number of events is larger than 10. This second condition, which determines the limits on the $X \bar X$ and $B \bar B$ signals for this final state, is not included in Ref.~\cite{Contino:2008hi}. Hence, even for equal numbers of signal and background events, their limits are better.
\end{itemize}
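The two-fold discovery criterion above can be sketched as follows (a minimal illustration under our own assumptions: we take criterion (ii) to refer to the number of signal events, and convert the exact Poisson tail probability into an equivalent Gaussian significance):

```python
import math
from statistics import NormalDist

def poisson_significance(s, b):
    """Equivalent Gaussian significance of observing n = s + b events
    when b background events are expected, from the exact Poisson tail."""
    n_obs = int(round(s + b))
    cdf = sum(math.exp(-b) * b ** k / math.factorial(k) for k in range(n_obs))
    # clamp the p-value so inv_cdf stays in its open (0, 1) domain;
    # this caps the quotable significance at roughly 7 sigma
    p = max(min(1.0 - cdf, 1.0 - 1e-12), 1e-12)
    return NormalDist().inv_cdf(1.0 - p)

def discovery(s, b):
    """Criteria (i) and (ii): significance above 5 and more than ~10 signal
    events (our reading of the text's criterion (ii))."""
    return poisson_significance(s, b) > 5.0 and s >= 10
```

With such a prescription, a large significance alone is not sufficient for discovery when the expected signal yield is small.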
Our mass reconstruction method is also more involved and adapted to the more realistic conditions and the higher jet multiplicities found in our analysis.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccc}
& Ref.~\cite{Contino:2008hi} & Our analysis \\[1mm]
$X \bar X$ ($XT_\text{d}$) & 440 & 470.6 \\
$B \bar B$ ($TB_{\text{d}_2}$) & 424 & 470.7 \\
Background & 23 & 116
\end{tabular}
\end{center}
\caption{Number of signal and background events for 10 fb$^{-1}$\ using the selection criteria in Ref.~\cite{Contino:2008hi}.}
\label{tab:comp-contino}
\end{table}
We finally comment on some other models giving the same final state. Like-sign dilepton signals without significant missing energy are characteristic of the presence of heavy Majorana neutrinos~\cite{Keung:1983uu}, either in singlet or triplet $\text{SU}(2)_L$ representations (for a detailed comparison see Ref.~\cite{AguilarSaavedra:2009ik}). These models can be easily distinguished from heavy quark production because in the heavy neutrino case (i) the missing energy is very small; (ii) two heavy resonances can be reconstructed, each one consisting of a charged lepton and two jets. On the other hand, like-sign dileptons with large missing energy are characteristic of heavy Dirac neutrinos in triplet $\text{SU}(2)_L$ representations~\cite{AguilarSaavedra:2009ik}, but in this case a resonance can be reconstructed with one charged lepton and two jets. Scalar triplet production also gives this final state, but with the like-sign dilepton invariant mass displaying a sharp peak at the doubly charged scalar mass (see for example Ref.~\cite{delAguila:2008cj}).
\section{Final state $\ell^+ \ell^-$}
\label{sec:2opp}
This final state has large SM backgrounds, which make it more difficult to observe positive signals by simply counting events with a few selection criteria, as is possible in the cleaner final states; it demands either a signal reconstruction to observe invariant mass peaks or an efficient background reduction. For pre-selection we ask for (i) two opposite-charge leptons with $p_T > 30$ GeV; (ii) two $b$-tagged jets with $p_T > 20$ GeV. Dilepton signals result from many signal decay channels, for example
\begin{align}
& T \bar T \to Zt \, W^- \bar b \to Z W^+b W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to q \bar q' \,, \nonumber \\
& T \bar T \to Zt \, V \bar t \to Z W^+b \, V W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to q \bar q' , V \to q \bar q/\nu \bar \nu \,, \nonumber \\
& B \bar B \to Z b \, W^+ \bar t \to Z b \, W^+ W^- \bar b
&& \quad Z \to \ell^+ \ell^- , W \to q \bar q' \,,
\label{ec:ch2Q0Z}
\end{align}
involving a $Z \to \ell^+ \ell^-$ decay (here $V=Z,H$), or
\begin{align}
& T \bar T \to W^+ b \, W^- \bar b
&& \quad W \to \ell \nu\,, \nonumber \\
& T \bar T \to W^+ b \, V \bar t \to W^+ b \, V W^- \bar b
&& \quad W \to \ell \nu , V \to q \bar q/\nu \bar \nu\,, \nonumber \\
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad 2W \to \ell \nu , 2W \to q \bar q' \,, \nonumber \\
& X \bar X \to W^+ t \, W^- \bar t \to W^+ W^+ b \, W^- W^- \bar b
&& \quad 2W \to \ell \nu , 2W \to q \bar q' \,, \nonumber \\
& Y \bar Y \to W^- b \, W^+ \bar b
&& \quad W \to \ell \nu\,,
\label{ec:ch2Q0noZ}
\end{align}
with two leptonic $W$ decays.
As has been done in other final states, we split the sample in two: one with events containing a $Z$ candidate (two charged leptons with the same flavour and invariant mass $|m_{\ell^+ \ell^-} - M_Z| < 15$ GeV), and the other with the remaining events, which fail at least one of these conditions. Backgrounds are also separated by this division: those involving $Z$ production, like $Znj$ and $Z b \bar bnj$, mainly contribute to the former, while $t \bar t nj$ contributes to the latter.
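The sample splitting can be sketched as follows (illustrative code only; leptons are hypothetical `(E, px, py, pz)` tuples with a separate flavour label):

```python
MZ = 91.1876  # Z boson mass in GeV

def inv_mass_ll(l1, l2):
    """Dilepton invariant mass from (E, px, py, pz) four-vectors."""
    e = l1[0] + l2[0]
    p2 = sum((l1[i] + l2[i]) ** 2 for i in (1, 2, 3))
    return max(e * e - p2, 0.0) ** 0.5

def is_z_candidate(l1, l2, flav1, flav2, window=15.0):
    """Event goes to the Z sample when the two leptons have the same
    flavour and |m_ll - MZ| < 15 GeV; otherwise to the 'no Z' sample."""
    return flav1 == flav2 and abs(inv_mass_ll(l1, l2) - MZ) < window
```

Events for which `is_z_candidate` returns `False` populate the ``no $Z$'' sample discussed below.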
The total number of signal and background events at pre-selection level in both samples is given in Table~\ref{tab:nsnb-2Q0}, and the dilepton mass distribution for the signals in Fig.~\ref{fig:mZrec-2Q0}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccccccccccc}
& Total & $Z$ & no $Z$ & \quad & & Total & $Z$ & no $Z$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 715.9 & 179.6 & 536.3 & & $B \bar B$ ($B_\text{s}$) & 819.8 & 393.4 & 426.4 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 799.4 & 174.1 & 625.3 & & $B \bar B$ ($TB_{\text{d}_1}$) & 907.5 & 388.1 & 519.4 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 1007.7 & 341.6 & 666.1 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1105.4 & 55.4 & 1050.0 \\
$X \bar X$ ($XT_\text{d}$) & 1147.4 & 60.7 & 1086.7 & & $B \bar B$ ($BY_\text{d}$) & 902.5 & 780.3 & 122.2 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 570.4 & 25.8 & 544.6 \\
\hline
$t \bar t nj$ & 68493 & 7464 & 61029 & & $Z^*/\gamma^*nj$ & 5245 & 4875 & 370 \\
$tW$ & 2135 & 212 & 1923 & & $Zb\bar b nj$ & 10132 & 9807 & 325 \\
$t\bar tb\bar b$ & 347 & 38 & 309 & & $Zc \bar c nj$ & 931 & 883 & 48 \\
$Wt \bar t nj$ & 63 & 4 & 59 & & $Zt \bar t nj$ & 106 & 88 & 18
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^+ \ell^-$ final state for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$, at pre-selection level.}
\label{tab:nsnb-2Q0}
\end{table}
\begin{figure}[htb]
\begin{center}
\epsfig{file=Figs/mZ1-2Q0.eps,height=5.1cm,clip=}
\caption{$\ell^+ \ell^-$ invariant mass distributions for the six models in the $\ell^+ \ell^-$ final state. The luminosity is 30 fb$^{-1}$.}
\label{fig:mZrec-2Q0}
\end{center}
\end{figure}
\subsection{Final state $\ell^+ \ell^-$ ($Z$)}
In the sample with $|m_{\ell^+ \ell^-}-M_Z| < 15$ GeV we first perform a generic analysis sensitive to $T$ and $B$ quarks, to obtain the discovery potential in this sample. Then, we perform a specific one aiming to detect the decay $B \to Hb$ (and thus the Higgs boson) in $B \bar B \to ZbH \bar b$ decays. This final state is interesting for Higgs boson discovery in models with a $(B \, Y)$ doublet, where the only decays of the $B$ quark are $B \to Zb$, $B \to Hb$, and the decay $B \to Hb$ cannot be reconstructed in the single lepton channel.
\subsubsection{Discovery potential}
Here we demand for event selection (i) at least four jets with $p_T > 20$ GeV; (ii) transverse momentum $p_T > 50$ GeV for the leading charged lepton $\ell_1$; (iii) transverse energy $H_T > 500$ GeV. The kinematical distributions of the three variables are presented in Fig.~\ref{fig:dist-2Q0-Z}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mult-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/ptlep1-2Q0-Z.eps,height=5.1cm,clip=} \\
\multicolumn{3}{c}{\epsfig{file=Figs/HT-2Q0-Z.eps,height=5.1cm,clip=}}
\end{tabular}
\caption{Kinematical distributions of variables used in selection criteria
for the $\ell^+ \ell^-$ ($Z$) final state: light jet multiplicity, transverse momentum of the leading lepton and total transverse energy. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-2Q0-Z}
\end{center}
\end{figure}
Most signal channels, in particular those in Eqs.~(\ref{ec:ch2Q0Z}), have at least four jets at the partonic level.
One exception is $B \bar B \to Zb V b$ with $V \to q \bar q/\nu \bar \nu$, in which additional jets are only produced by radiation or fragmentation. Still, this signal is sizeable after the multiplicity cut.
The reader may also notice that the background could be further reduced by requiring, for example, $H_T > 1$ TeV. However, it is not clear whether this would indeed improve the signal observability.
If the signal cannot be seen as a clear peak (or bump) over the background lineshape, its observation requires simple event counting, in which, if the background is large, as in our case, the background normalisation uncertainty plays an important role. On the other hand, if the signal displays a peak, the background can in principle be normalised from off-peak measurements and its uncertainty will be smaller. The selection cuts made here represent a (conservative) compromise between having a manageable background and still observing the signal peak structure.
As was done for the trilepton channel, we build here a likelihood function to discriminate among the three signal channels in Eqs.~(\ref{ec:ch2Q0Z}), building probability functions for three signal classes: ($a$) $T \bar T \to ZtWb$; ($b$) $T \bar T \to Zt V t$; ($c$) $B \bar B \to Zb Wt$. We generate high-statistics samples different from the ones used for the final analysis. We choose not to include a separate class for the background, because that would strongly bias it towards signal-like distributions and jeopardise the observation of reconstructed peaks. At any rate, the discriminant analysis implemented here rejects a large fraction of the background (which is classified as $T \bar T$-like) when we concentrate on the $B \bar B$ signal.
To build the discriminant variables we use an approximate reconstruction of the two $W$ bosons decaying hadronically, choosing among the light jets (up to a maximum of six) the four that best reconstruct two $W$ bosons. Then, we use the same variables as in the trilepton channel, but this time with two hadronic $W$ bosons $W_1$, $W_2$, ordered by transverse momentum, as well as the $b$-tagged jets $b_1$, $b_2$. The resulting distributions are presented in Fig.~\ref{fig:lik-2Q0-Z}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/D-mult-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-bmult-2Q0-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/D-mW1b1-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-mZb1-2Q0-Z.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/D-mW2b2Z-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-mW1W2b2-2Q0-Z.eps,height=5.1cm,clip=} \\
\end{tabular}
\caption{Kinematical variables used to classify the three heavy quark signals in the
$\ell^+ \ell^-$ ($Z$) final state.}
\label{fig:lik-2Q0-Z}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/D-Pa-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/D-Pb-2Q0-Z.eps,height=5.1cm,clip=} \\
\multicolumn{3}{c}{\epsfig{file=Figs/D-Pc-2Q0-Z.eps,height=5.1cm,clip=}}
\end{tabular}
\caption{Probability distribution functions for events in the
reference samples.}
\label{fig:lik2-2Q0-Z}
\end{center}
\end{figure}
The discriminating power is practically the same as in the trilepton channel, as can be better observed in Fig.~\ref{fig:lik2-2Q0-Z}, which represents the likelihood function evaluated on the three class samples, giving the probabilities $P_a$, $P_b$, $P_c$ that the events correspond to each class.
Table~\ref{tab:lik-2Q0-Z} shows the performance of the likelihood function on the reference samples.
Events in a class $x$ are correctly classified if $P_x > P_y,P_z$, where $y$, $z$ are the other classes.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccc}
Class & $P_a > P_b,P_c$ & $P_b > P_a,P_c$ & $P_c > P_a,P_b$ \\
\hline
($a$) & 0.59 & 0.25 & 0.16 \\
($b$) & 0.23 & 0.62 & 0.15 \\
($c$) & 0.17 & 0.18 & 0.65
\end{tabular}
\end{center}
\caption{Performance of the likelihood function on the $\ell^+ \ell^-$ event reference samples: fractions of events in each sample and their classification. Events in a class $x$ are correctly classified if $P_x > P_y,P_z$, where $y$, $z$ are the other classes.}
\label{tab:lik-2Q0-Z}
\end{table}
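The classification rule $P_x > P_y, P_z$ can be sketched as follows (a toy illustration only: the histogrammed probability functions built from the reference samples are replaced here by Gaussian stand-ins, which is purely an assumption for illustration):

```python
import math

def gauss(mu, sigma):
    """Normalised Gaussian PDF, a stand-in for the histogrammed
    per-variable probability functions of the reference samples."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))

def class_probabilities(event_vars, pdfs):
    """pdfs maps class label -> list of per-variable PDFs; the likelihood
    of each class is the product of PDF values over the event variables,
    normalised across classes to give probabilities P_a, P_b, P_c."""
    likes = {c: math.prod(f(v) for f, v in zip(fs, event_vars))
             for c, fs in pdfs.items()}
    total = sum(likes.values()) or 1.0
    return {c: l / total for c, l in likes.items()}

def classify(event_vars, pdfs):
    """Assign the class with the largest probability (P_x > P_y, P_z)."""
    probs = class_probabilities(event_vars, pdfs)
    return max(probs, key=probs.get)
```

The diagonal entries of the table above then correspond to the fraction of events in each reference sample for which `classify` returns the true class.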
The event reconstruction proceeds in the same way as in the trilepton channel (see section~\ref{sec:3l-Z-2}) but replacing the leptonic $W$ boson by a second $W$ decaying hadronically. We use all jet pairings with a maximum of six light jets to construct two $W$ bosons, and for events in class ($b$) we require at least eight jets ($b$-tagged or not); otherwise the events are rejected.
The number of signal and background events after the reconstruction cuts, and their distribution in the three classes, is given in Table~\ref{tab:nsnb-2Q0-Z-C}. Notice that the total number of events includes those which are later rejected in the reconstruction. We also remark that this signal discrimination based on topology brings an important ``cleaning'' of the background for events classified as $B \bar B$, as we have already anticipated.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& Total & ($a$) & ($b$) & $(c)$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 137.3 & 50.4 & 56.5 & 21.7 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 135.6 & 48.6 & 56.6 & 21.3 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 297.9 & 67.4 & 165.0 & 40.5 \\
$X \bar X$ ($XT_\text{d}$) & 45.0 & 14.2 & 18.6 & 2.8 \\
$B \bar B$ ($B_\text{s}$) & 220.4 & 65.4 & 31.1 & 113.5 \\
$B \bar B$ ($TB_{\text{d}_1}$) & 218.3 & 61.1 & 28.9 & 121.3 \\
$B \bar B$ ($TB_{\text{d}_2}$) & 38.8 & 11.3 & 15.3 & 3.3 \\
$B \bar B$ ($BY_\text{d}$) & 372.2 & 134.8 & 45.9 & 180.9 \\
$Y \bar Y$ ($BY_\text{d}$) & 5.3 & 1.9 & 0.3 & 2.7 \\
\hline
$t \bar t nj$ & 450 & 113 & 129 & 16 \\
$tW$ & 9 & 4 & 1 & 0 \\
$Z^*/\gamma^*nj$ & 181 & 32 & 86 & 12 \\
$Zb\bar b nj$ & 335 & 109 & 74 & 51 \\
$Zc \bar c nj$ & 61 & 24 & 14 & 8 \\
$Zt \bar t nj$ & 65 & 15 & 28 & 8
\end{tabular}
\end{center}
\caption{Number of signal and background events in the $\ell^+ \ell^-$ ($Z$) final state at the selection level, assigned to each event class. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-2Q0-Z-C}
\end{table}
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mtZ-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mbZ-2Q0-Z.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed heavy quark masses in the $\ell^+ \ell^-$ ($Z$) final state.
The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-2Q0-Z}
\end{center}
\end{figure}
We show in Fig.~\ref{fig:mrec-2Q0-Z} the two most interesting signal peaks, those involving the decays $T_1 \to Zt$, common to classes ($a,b$), and $B_1 \to Zb$ in class ($c$). These peaks are less biased by the reconstruction process, although we can notice that, for example, the $B\to Zb$ singlet signals misidentified and reconstructed as $T \to Zt$ have a small bump around 500 GeV.
Notice that the $B_1$ peaks in the $Zb$ invariant mass distribution are very sharp and significant, resulting in an excellent discovery potential for $B$ quarks.
For the other heavy quarks $T_2$, $B_2$, with fully hadronic decays, the distributions are more biased, and the heavy quark peaks are almost equally well reconstructed for the signals as for the SM background: among the many jet combinations it is always possible, especially when the background involves top quarks and $W$ bosons, to find one which is kinematically similar to the signal. These distributions are uninteresting and are not presented for brevity. Analogously, we neither present nor perform quality cuts on the reconstructed $W$ and top masses, which are very similar for the signals and backgrounds.
We estimate the signal significance by performing a cut around the mass peaks,
\begin{equation}
400~\text{GeV} < m_{T_1},m_{B_1} < 600~\text{GeV} \,,
\end{equation}
giving the numbers of signal and background events in Table~\ref{tab:nsnb-2Q0-Z-C2} for completeness.
\begin{table}[t]
\begin{center}
\begin{tabular}{cccccccccc}
& $T_1$ ($a$,$b$) & $B_1$ $(c)$ & \quad & & $T_1$ ($a$,$b$) & $B_1$ $(c)$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 77.5 & 15.1 && $B \bar B$ ($B_\text{s}$) & 55.7 & 93.6 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 74.2 & 14.1 && $B \bar B$ ($TB_{\text{d}_1}$) & 52.9 & 103.7 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 159.6 & 27.6 && $B \bar B$ ($TB_{\text{d}_2}$) & 13.2 & 2.2 \\
$X \bar X$ ($XT_\text{d}$) & 18.7 & 1.3 && $B \bar B$ ($BY_\text{d}$) & 101.3 & 148.2 \\
& & && $Y \bar Y$ ($BY_\text{d}$) & 1.4 & 1.7 \\
\hline
$t \bar t nj$ & 69 & 0 && $Zb\bar b nj$ & 78 & 28\\
$tW$ & 1 & 0 && $Zc \bar c nj$ & 15 & 4 \\
$Z^*/\gamma^*nj$& 22 & 8 && $Zt \bar t nj$ & 22 & 5
\end{tabular}
\end{center}
\caption{Number of signal and background events in the $\ell^+ \ell^-$ ($Z$) final state at the $T_1$, $B_1$ heavy quark peaks. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-2Q0-Z-C2}
\end{table}
The discovery luminosities obtained summing all signal contributions within a given model and combining the significances for the $T_1$ and $B_1$ peaks are presented in Table~\ref{tab:sig-2Q0-Z}. We find that the amount of work necessary to build the likelihood function and discriminate the different signals pays off, and the discovery luminosities achieved are quite small in some cases.
The excellent result obtained for $B \bar B$ production in a $(B \, Y)$ doublet, where the $B$ quark only decays in $B \to Zb$, $B \to Hb$, deserves a special mention: in most final states examined up to now the discovery potential for this model was rather limited but the opposite-charge dilepton one constitutes a remarkable exception. The reconstructed peak in the $Zb$ invariant mass distribution shows the presence of a heavy $B$ quark with charge $-1/3$. The same can be said regarding the $Zt$ distribution and $T$ quarks, although in this case the background is much larger and the observation in the trilepton final state is easier and cleaner.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 22 fb$^{-1}$ & $m_T$ & & $TB_{\text{d}_2}$ & 4.4 fb$^{-1}$ & $m_T$ \\
$B_\text{s}$ & 4.5 fb$^{-1}$ & $m_B$ & & $XT_\text{d}$ & 4.4 fb$^{-1}$ & $m_T$ \\
$TB_{\text{d}_1}$ & 2.4 fb$^{-1}$ & $m_T$, $m_B$ & & $BY_\text{d}$ & 1.8 fb$^{-1}$ & $m_B$
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^-$ ($Z$) final state. We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-2Q0-Z}
\end{table}
\subsubsection{Discovery of $B \to Hb$}
We now concentrate on the process $B \bar B \to ZbH \bar b$. As selection criteria we only ask for (i) the presence of four $b$-tagged jets with $p_T > 20$ GeV, which is sufficient to practically eliminate all backgrounds, and (ii) less than four light jets, to remove the overlap between this final state and the previous one.\footnote{This is not strictly necessary as long as we do not intend to combine the statistical sensitivities of both samples, but we include it for simplicity and in order to be conservative. Dropping the requirement on light jets, the signals in this sample are larger and the discovery luminosity for the $(B \, Y)$ doublet is reduced by about a factor of two.} We give in Table~\ref{tab:nsnb-2Q0-Zbb} the numbers of events at pre-selection and selection for all signals and backgrounds.
\begin{table}[t]
\begin{center}
\begin{tabular}{cccccccccc}
& Pre. & Sel. & \quad & & Pre. & Sel. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 179.6 & 3.1 & & $B \bar B$ ($B_\text{s}$) & 393.4 & 12.9 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 174.1 & 4.1 & & $B \bar B$ ($TB_{\text{d}_1}$) & 388.1 & 14.4 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 341.6 & 7.7 & & $B \bar B$ ($TB_{\text{d}_2}$) & 55.4 & 0.4 \\
$X \bar X$ ($XT_\text{d}$) & 60.7 & 0.2 & & $B \bar B$ ($BY_\text{d}$) & 780.3 & 38.8 \\
& & & & $Y \bar Y$ ($BY_\text{d}$) & 25.8 & 0.0 \\
\hline
$t \bar t nj$ & 7464 & 0 & & $Z^*/\gamma^* nj$ & 4875 & 0 \\
$tW$ & 212 & 0 & & $Zb\bar b nj$ & 9807 & 6 \\
$t\bar tb\bar b$ & 38 & 4 & & $Zc \bar c nj$ & 883 & 0 \\
$Wt \bar t nj$ & 4 & 0 & & $Zt \bar t nj$ & 88 & 0
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^+ \ell^-$ ($Z$) final state at pre-selection and with the selection requirement of four $b$-tagged jets. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-2Q0-Zbb}
\end{table}
This final state has an excellent discovery potential for the Higgs boson with a
$(B \, Y)$ doublet (see Table~\ref{tab:sig-2Q0-Zbb}), and a moderate one for the $TB_{\text{d}_1}$ model. In our estimations of the sensitivity we include a 20\% systematic uncertainty in the background when necessary.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & -- & no & & $TB_{\text{d}_2}$ & -- & no \\
$B_\text{s}$ & -- & no & & $XT_\text{d}$ & -- & no \\
$TB_{\text{d}_1}$ & 30 fb$^{-1}$ & no & & $BY_\text{d}$ & 9.2 fb$^{-1}$ & $m_B$, $M_H$ \\
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^-$ ($Z$) final state with four $b$ tags. We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-2Q0-Zbb}
\end{table}
The Higgs boson mass can also be reconstructed when it results from a $B$ decay, as follows.
\begin{enumerate}
\item We select a $b$ jet to be paired with the $Z$ boson candidate and reconstruct the heavy quark $B_1$; the other heavy quark $B_2$ is reconstructed from the three remaining $b$ jets.
\item The combination minimising the mass difference $(m_{B_2}-m_{B_1})$ is chosen.
\item Among the three $b$ jets from $B_2$, we choose the two with minimum invariant mass as the ones corresponding to the Higgs decay.
\end{enumerate}
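The steps above can be sketched in a few lines of Python. This is an illustrative fragment, not the analysis code: the four-vector representation and all names are ours, and jets are assumed to be given as $(E, p_x, p_y, p_z)$ tuples.

```python
from itertools import combinations

def add(*vs):
    # Component-wise sum of four-vectors (E, px, py, pz)
    return tuple(sum(c) for c in zip(*vs))

def mass(v):
    # Invariant mass m = sqrt(E^2 - |p|^2), clipped at zero
    e, px, py, pz = v
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def reconstruct_BB(z, bjets):
    """Steps 1-3: pair one b jet with the Z candidate (-> B1), build B2
    from the other three, choosing the assignment that minimises
    |m_B2 - m_B1|; the two b jets from B2 with minimum invariant mass
    are taken as the Higgs candidate."""
    best = None
    for i in range(len(bjets)):
        rest = [b for j, b in enumerate(bjets) if j != i]
        m1 = mass(add(z, bjets[i]))
        m2 = mass(add(*rest))
        if best is None or abs(m2 - m1) < best[0]:
            best = (abs(m2 - m1), m1, m2, rest)
    _, m_B1, m_B2, rest = best
    h_pair = min(combinations(rest, 2), key=lambda p: mass(add(*p)))
    return m_B1, m_B2, mass(add(*h_pair))
```

A real analysis would, in addition, propagate jet energy resolutions; the strict inequality simply keeps the first of two degenerate assignments.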
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mH-2Q0-Z.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mHalt-2Q0-Z.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed Higgs boson mass in the $\ell^+ \ell^-$ ($Z$) final state
with four $b$ tags. The luminosity is 30 fb$^{-1}$.}
\label{fig:mhrec-2Q0-Z}
\end{center}
\end{figure}
The resulting reconstructed Higgs mass is shown in Fig.~\ref{fig:mhrec-2Q0-Z} (left). As we have already mentioned, this final state is most interesting for the $(B \, Y)$ doublet, which has a large signal and in which the Higgs mass peak can be clearly reconstructed with sufficient luminosity.
For $T \to Ht$ decays a small signal could be seen with different selection criteria but we do not address this here, since $T \to Ht$ signals are far more interesting in the single lepton channel.
Finally, one may wonder whether the ``Higgs'' peak results from the presence of a resonance or if it is merely a kinematical effect. To investigate this, we can use a different reconstruction by selecting the two $b$ jets with smallest $p_T$. The resulting distribution, shown in Fig.~\ref{fig:mhrec-2Q0-Z} (right), also displays a peak at the same place although the combinatorial background is larger in this case.
\subsection{Final state $\ell^+ \ell^-$ (no $Z$)}
In this final state the signals involve two $W$ boson decays from different heavy quarks in general, and hence the heavy quark mass peaks are difficult to reconstruct except for $B \bar B$ production. The detection of a signal must then rely on event counting, which requires an efficient background suppression. We perform here two analyses: first a generic one aiming to discover the new quark signals, and then a specific one to reconstruct the heavy $B$ quark mass. This mass reconstruction is useful for the $TB_{\text{d}_2}$ model, where the $B \to Zb$ decay does not take place.
\subsubsection{Discovery potential}
For event selection we demand
(i) transverse momentum $p_T > 100$ GeV for the sub-leading jet ($b$-tagged or not);
(ii) transverse energy $H_T > 750$ GeV;
(iii) the invariant mass of the highest-$p_T$ $b$ jet $b_1$ and each of the two leptons must be larger than the top mass, taken here as 175 GeV. The first two conditions reduce backgrounds in general, while the third one strongly suppresses $t \bar t nj$ production, where the $b$ quarks and charged leptons result from top decays. The kinematical distributions of these variables are presented in Fig.~\ref{fig:dist-2Q0-noZ}.
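As an illustration, the three requirements can be encoded in a single boolean filter; the variable names below are placeholders for quantities that an actual analysis framework would provide.

```python
M_TOP = 175.0  # top mass used in the cuts (GeV)

def passes_discovery_selection(pt_jets, ht, m_l1b1, m_l2b1):
    """Selection for the l+l- (no Z) discovery analysis:
    (i) sub-leading jet pT > 100 GeV, (ii) HT > 750 GeV,
    (iii) m(l1,b1) and m(l2,b1) both above the top mass."""
    pts = sorted(pt_jets, reverse=True)
    return (len(pts) >= 2 and pts[1] > 100.0
            and ht > 750.0
            and m_l1b1 > M_TOP and m_l2b1 > M_TOP)
```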
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/ptmax2-2Q0-noZ.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/HT-2Q0-noZ.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/ml1b1-2Q0-noZ.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/ml2b1-2Q0-noZ.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Kinematical distributions of variables used in selection criteria
for the $\ell^+ \ell^-$ (no $Z$) final state: transverse momentum of the second highest-$p_T$ jet, total transverse energy and invariant masses of the two leptons and the leading $b$ jet. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-2Q0-noZ}
\end{center}
\end{figure}
The number of events after these cuts can be read in Table~\ref{tab:nsnb-2Q0-noZ}, where we also include for better comparison the numbers of events at pre-selection.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& Pre. & Sel. & \quad & & Pre. & Sel. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 536.3 & 236.6 & & $B \bar B$ ($B_\text{s}$) & 426.4 & 170.1 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 625.3 & 274.4 & & $B \bar B$ ($TB_{\text{d}_1}$) & 519.4 & 202.7 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 666.1 & 175.6 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1050.0 & 233.3 \\
$X \bar X$ ($XT_\text{d}$) & 1086.7 & 240.5 & & $B \bar B$ ($BY_\text{d}$) & 122.2 & 89.1 \\
& & & & $Y \bar Y$ ($BY_\text{d}$) & 544.6 & 359.1 \\
\hline
$t \bar t nj$ & 61029 & 80 & & $Z^*/\gamma^* nj$ & 370 & 1 \\
$tW$ & 1923 & 14 & & $Zb\bar b nj$ & 325 & 6 \\
$t\bar tb\bar b$ & 309 & 22 & & $Zc \bar c nj$ & 48 & 1 \\
$W t \bar t nj$ & 59 & 3 & & $Zt \bar t nj$ & 18 & 6
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^+ \ell^-$ (no $Z$) final state (discovery analysis) for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$, at pre-selection and selection level.}
\label{tab:nsnb-2Q0-noZ}
\end{table}
We point out that among the opposite-sign $B \bar B$ events produced in the decay
\begin{align}
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad 2W \to \ell \nu , 2W \to q \bar q' \,,
\label{ec:BBdec}
\end{align}
those surviving the $m_{\ell_1 b}$ and $m_{\ell_2 b}$ cuts mostly correspond to the leptonic decay of the two opposite-sign $W$ bosons produced in $B \to W^- t$, $\bar B \to W^+ \bar t$ decays. A mass reconstruction is therefore difficult with these event selection criteria, which are in any case very efficient for reducing the $t \bar t$ background and observing a heavy quark signal.
The luminosities required for $5\sigma$ discovery are given in Table~\ref{tab:sig-2Q0-noZ}. Since in this final state the background is still relatively important, the uncertainty in its overall normalisation affects the significance of the signals. We then include a 20\% systematic uncertainty in our estimations in order to be more realistic.
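The text does not spell out the significance formula. A simple Gaussian prescription, with the 20\% relative background uncertainty added in quadrature to the statistical term, would read as follows; this is a sketch with hypothetical event counts, and the actual statistical treatment may differ.

```python
import math

def significance(s, b, sys=0.20):
    # Gaussian significance with a relative background uncertainty `sys`
    # added in quadrature to the statistical fluctuation of b.
    return s / math.sqrt(b + (sys * b) ** 2)

def discovery_lumi(s0, b0, l0=30.0, target=5.0, sys=0.20):
    """Smallest luminosity (same units as l0) at which the significance
    reaches `target`, scaling signal and background linearly with L.
    Note: with sys > 0 the significance saturates at s0/(sys*b0), so the
    target may be unreachable; the bisection then returns the upper bound."""
    lo, hi = 1e-6, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        x = mid / l0
        if significance(s0 * x, b0 * x, sys) >= target:
            hi = mid
        else:
            lo = mid
    return hi
```

For example, with 200 signal and 100 background events at 30 fb$^{-1}$ (hypothetical numbers) this prescription gives a $5\sigma$ luminosity of 2.5 fb$^{-1}$.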
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 2.7 fb$^{-1}$ & no & & $TB_{\text{d}_2}$ & 1.1 fb$^{-1}$ & $m_B$ \\
$B_\text{s}$ & 9.3 fb$^{-1}$ & $m_B$ & & $XT_\text{d}$ & 1.1 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 0.83 fb$^{-1}$ & $m_B$ & & $BY_\text{d}$ & 0.87 fb$^{-1}$ & no \\
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^+ \ell^-$ (no $Z$) final state. We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-2Q0-noZ}
\end{table}
We point out the excellent sensitivity of this final state to $T \bar T$, $X \bar X$ and $Y \bar Y$ production. The latter is especially important, because $Y$ pair production only gives signals in the opposite-sign dilepton and single lepton channels, and the discovery of charge $-4/3$ quarks must be made in one of them.
\subsubsection{Heavy quark reconstruction}
In order to reconstruct the heavy $B$ masses we drop from the selection criteria the $\ell b$ invariant mass requirements, to allow for top quark semileptonic decays in Eq.~(\ref{ec:BBdec}).
The selection
criteria in this case are:
(i) the presence of four jets with $p_T > 20$ GeV;
(ii) transverse momentum $p_T > 100$ GeV for the sub-leading jet ($b$-tagged or not);
(iii) transverse energy $H_T > 750$ GeV.
The numbers of events are given in Table~\ref{tab:nsnb-2Q0-noZ-2}. We notice that the $t \bar t nj$ background is much larger here than in the previous discovery analysis (Table~\ref{tab:nsnb-2Q0-noZ}).
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccccc}
& Pre. & Sel. & Rec. & \quad & & Pre. & Sel. & Rec. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 536.3 & 209.3 & 21.1 & & $B \bar B$ ($B_\text{s}$) & 426.4 & 206.4 & 24.6 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 625.3 & 248.7 & 27.0 & & $B \bar B$ ($TB_{\text{d}_1}$) & 519.4 & 249.4 & 36.2 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 666.1 & 395.7 & 42.4 & & $B \bar B$ ($TB_{\text{d}_2}$) & 1050.0 & 623.6 & 122.4 \\
$X \bar X$ ($XT_\text{d}$) & 1086.7 & 661.1 & 127.2 & & $B \bar B$ ($BY_\text{d}$) & 122.2 & 53.6 & 4.6 \\
& & & & & $Y \bar Y$ ($BY_\text{d}$) & 544.6 & 118.7 & 11.0 \\
\hline
$t \bar t nj$ & 61029 & 1419 & 139 & & $Z^*/\gamma^* nj$ & 370 & 2 & 0 \\
$tW$ & 1923 & 18 & 0 & & $Zb\bar b nj$ & 325 & 4 & 0 \\
$t\bar tb\bar b$ & 309 & 28 & 2 & & $Zc \bar c nj$ & 48 & 0 & 0 \\
$W t \bar t nj$ & 59 & 6 & 0 & & $Zt \bar t nj$ & 18 & 8 & 2
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^+ \ell^-$ (no $Z$) final state (reconstruction analysis) for
the signals and main backgrounds with a luminosity of 30 fb$^{-1}$, at pre-selection and selection level, and including reconstructed mass cuts.}
\label{tab:nsnb-2Q0-noZ-2}
\end{table}
For events in which the $W^+ W^-$ pair corresponds to the same heavy quark, the invariant mass of the hadronically decaying $W$ bosons plus one of the $b$ quarks will peak at $m_B$. The heavy quark mass reconstruction is done analogously to that of the $X$ quark in the like-sign dilepton channel in section~\ref{sec:6.2}, and
the reconstructed heavy quark mass $m_B$ is then defined as the invariant mass of the $b$ quark and the four light jets selected. These distributions are presented in Fig.~\ref{fig:mrec-2Q0}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mw1-2Q0-noZ.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mt-2Q0-noZ.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mw2-2Q0-noZ.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mQ-2Q0-noZ.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed masses of the two $W$ bosons, the top and the heavy quark. The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-2Q0}
\end{center}
\end{figure}
We point out that, in contrast with the $X$ quark reconstruction in the like-sign dilepton channel,
half of the events in $B \bar B$ decays have opposite-sign leptons resulting from different heavy quark decays. The heavy $B$ quark mass peak is reasonably well reconstructed, as shown in the last plot of Fig.~\ref{fig:mrec-2Q0}, but the distribution is quite similar for an $X$ quark. Then, although the presence of a signal would be apparent, the observation of a clear peak and the discrimination
between these two possibilities is rather difficult, even more so in the presence of a large $t \bar t$ background.\footnote{For higher masses the background suppression is more efficient via transverse energy requirements, and the $B$ quark peak may be easier to reconstruct~\cite{Skiba:2007fw}.} We then apply quality cuts to improve the reconstruction and reduce the background,
\begin{align}
60~\text{GeV} < m_{j_1 j_2} < 100~\text{GeV} \,, \notag \\
60~\text{GeV} < m_{j_3 j_4} < 100~\text{GeV} \,, \notag \\
125~\text{GeV} < m_{j_1 j_2 b} < 225~\text{GeV} \,.
\end{align}
The number of events after these cuts is given in Table~\ref{tab:nsnb-2Q0-noZ-2}. With these cuts, the reconstructed mass distributions for the $TB_{\text{d}_2}$ model (displaying a peak at $m_B$) and the $(X \, T)$ doublet (without a peak) are quite different, as can be seen in Fig.~\ref{fig:mrec-2Q0cut} (left). Both possibilities could also be distinguished in the presence of background, as shown in the right panel.
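For illustration, the three mass windows can be applied as a single boolean filter; this is a sketch with the window edges hard-coded from the quality cuts above (all masses in GeV).

```python
def passes_quality_cuts(m_j1j2, m_j3j4, m_j1j2b):
    # Two hadronic W candidates within 60-100 GeV and a top candidate
    # within 125-225 GeV, as in the quality cuts above.
    return (60.0 < m_j1j2 < 100.0
            and 60.0 < m_j3j4 < 100.0
            and 125.0 < m_j1j2b < 225.0)
```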
The reconstruction of a peak in the $Wt$ invariant mass distribution shows that the heavy quark has charge $-1/3$ or $5/3$. Unfortunately, the transverse mass of the $B$ quark with leptonic $W$ decays does not display a clear endpoint, and the direct identification of the quark charge is not possible, unlike for $X$ quarks in the like-sign dilepton channel.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mQ-2Q0-noZcut1.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mQ-2Q0-noZcut.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Left: reconstructed mass of the heavy quark for the two largest signals. Right: the same, including the SM background. The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-2Q0cut}
\end{center}
\end{figure}
\subsection{Summary}
Despite the {\em a priori} large opposite-sign dilepton backgrounds, this final state turns out to have an excellent sensitivity to heavy quark signals. These signals and the SM backgrounds result from either the leptonic decay of a $Z$ or of two $W$ bosons. Therefore, as was done in other final states, it is advantageous to divide this final state into two subsamples, with or without a $Z$ candidate.
The dilepton sample with a $Z$ candidate has a huge background from $Znj$ production, which can be practically removed by asking for the presence of two $b$-tagged jets and four light jets. Once the SM backgrounds are manageable, we have implemented a likelihood function which discriminates among the three processes in Eqs.~(\ref{ec:ch2Q0Z}), classifying events with good efficiency. The event reconstruction is then performed according to this classification. For events identified as $B \bar B \to ZbWt$, the $Zb$ invariant mass distribution displays a sharp peak which signals the presence of a charge $-1/3$ quark. For $T \bar T$ production a peak can be observed in the $Zt$ invariant mass distribution, but the background is larger in this case. The discovery luminosities are small,
as can be seen in Table~\ref{tab:sig-2Q0-Z}. The case of $B$ quarks in the $(B \, Y)$ doublet (1.8 fb$^{-1}$\ for 500 GeV quarks) deserves a special mention, since these quarks are much harder to see in other final states. For this model, the observation of the decay $B \to Hb$, with $H \to b \bar b$, is also possible if we concentrate on final states with four $b$ tags, and a $5\sigma$ discovery could be possible with a luminosity below 10 fb$^{-1}$\ for $m_B = 500$ GeV and $M_H = 115$ GeV. This decay is interesting not only to establish the identity of the $B$ quark but also because it is a possible discovery channel for the Higgs boson if such doublets exist.
In the subsample without $Z$ candidates we have performed two different analyses, first a generic one which achieves the best signal significance and then a specific one to reconstruct the heavy $B$ quark mass in $B \to W^- t$ decays. The background is again important but the largest one, $t \bar t nj$, can be practically removed by requiring invariant masses $m_{\ell b} > m_t$, so that the charged leptons and $b$ quarks cannot result from a top quark decay (this requirement must be dropped in the reconstruction analysis). After background suppression, this final state also offers an excellent discovery potential for the pair production of $T$, $X$ and $Y$ quarks. For example, $5\sigma$ significance can be achieved with a luminosity around 1 fb$^{-1}$\ for the four models with quark doublets.
The sensitivity to $Y$ quarks is especially important because they only have decays $Y \to W^- b$, and their detection can only be performed in the opposite-sign dilepton and single lepton final states.
Finally, we have performed the reconstruction of the $B$ quark mass in $B \to W^- t \to W^- W^+ b$ decays (or the charge conjugate), with both $W$ bosons decaying hadronically.
This is specially interesting for the $TB_{\text{d}_2}$ model where the $B \to Zb$ decay does not take place.
In this analysis the $m_{\ell b} > m_t$ requirement must be dropped in order to keep the events which actually display a peak in the invariant mass of four light jets plus a $b$-tagged jet. With adequate reconstruction quality cuts a clear peak could be observed distinguishing $B \bar B$ and $X \bar X$ production, but the quark charge cannot be directly measured.
\section{Final state $\ell^\pm$}
\label{sec:1l}
Single lepton signals result from heavy quark pair decays when one of the $W$ bosons (up to four are present, depending on the channel) decays leptonically and the rest of the $W$, $Z$ and Higgs bosons decay hadronically. Hence, single lepton signals benefit from a large branching ratio. This final state is fundamental to establish whether the $T \to Wb$ decay takes place (this can also be seen in the trilepton final state but needs about ten times more luminosity, see section~\ref{sec:3l-Z-2}). Due to the large size of the signals, one can also look for subsamples with high $b$ jet multiplicities to establish the decays $T \to Ht$, $B \to Hb$. Without being completely exhaustive, we will perform three analyses in this section. The first one, in the single lepton channel with exactly two $b$ tags, is devoted to the search for the $T \to Wb$ decay. The second one, in a sample with four $b$ tags, allows a search for $T \to Ht$ and $B \to Hb$. The third one, requiring six $b$ tags, is very useful to look for $T \to Ht$ in the models where this decay has an enhanced branching ratio. For event pre-selection we require (i) one charged lepton with $p_T > 30$ GeV; (ii) at least two $b$ jets with $p_T > 20$ GeV; (iii) at least two light jets, also with $p_T > 20$ GeV. The total number of events for the signals and main backgrounds at pre-selection is given in Table~\ref{tab:nsnb-1Q1}, as well as the numbers in the three subsamples.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ccccc}
& Total & $2b$ & $4b$ & $6b$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 9415.3 & 5797.8 & 874.7 & 26.5 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 10064.4 & 6172.9 & 931.8 & 29.0 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 11782.9 & 5294.3 & 1873.4 & 112.1 \\
$X \bar X$ ($XT_\text{d}$) & 9213.8 & 7506.3 & 172.9 & 0.4 \\
$B \bar B$ ($B_\text{s}$) & 6535.6 & 4460.1 & 412.9 & 2.5 \\
$B \bar B$ ($TB_{\text{d}_1}$) & 7021.4 & 4802.3 & 434.2 & 2.8 \\
$B \bar B$ ($TB_{\text{d}_2}$) & 9193.4 & 7484.4 & 164.6 & 0.0 \\
$B \bar B$ ($BY_\text{d}$) & 2146.1 & 1399.6 & 150.7 & 0.7 \\
$Y \bar Y$ ($BY_\text{d}$) & 7444.1 & 6588.5 & 58.7 & 0.3 \\
\hline
$t \bar tnj$ & 965051 & 902205 & 1629 & 0 \\
$tW$ & 31920 & 30280 & 38 & 0 \\
$t\bar tb\bar b$ & 4355 & 2287 & 423 & 2 \\
$t\bar tt\bar t$ & 27 & 12 & 1 & 0 \\
$Wnj$ & 38185 & 37236 & 14 & 0 \\
$W b \bar bnj$ & 20634 & 19920 & 16 & 0 \\
$W t \bar tnj$ & 654 & 592 & 0 & 0 \\
$Z/\gamma nj$ & 3397 & 3314 & 0 & 0 \\
$Z b \bar bnj$ & 4874 & 4715 & 5 & 0 \\
\end{tabular}
\end{center}
\caption{Number of signal and background events in the $\ell^\pm$ final state at the pre-selection level, and in the three subsamples studied. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-1Q1}
\end{table}
\subsection{Final state $\ell^\pm$ ($2b$)}
In this channel, the processes we are mainly interested in are
\begin{align}
& T \bar T \to W^+ b \, W^- \bar b
&& \quad WW \to \ell \nu q \bar q' \,, \nonumber \\
& Y \bar Y \to W^- b \, W^+ \bar b
&& \quad WW \to \ell \nu q \bar q' \,.
\label{ec:ch1Q12b}
\end{align}
There are other decay channels for $T$ quarks, involving $T \to Zt$ and $T \to Ht$, which give the same final state, but they are suppressed by the selection criteria.
Some production and decay channels of $B$ and $X$ quarks also give single lepton signals, for example
\begin{align}
& B \bar B \to W^- t \, W^+ \bar t \to W^- W^+ b \, W^+ W^- \bar b
&& \quad 3W \to q \bar q' , 1W \to \ell \nu \,, \nonumber \\
& X \bar X \to W^+ t \, W^- \bar t \to W^+ W^+ b \, W^- W^- \bar b
&& \quad 3W \to q \bar q' , 1W \to \ell \nu \,,
\label{ec:ch1Q12b-2}
\end{align}
but they are rather difficult to see due to their greater similarity with $t \bar t$ production. Then, we concentrate our analysis on the channels in Eqs.~(\ref{ec:ch1Q12b}). The selection criteria applied for this are:
(i) transverse momentum $p_T > 150$ GeV for both $b$ jets;
(ii) transverse energy $H_T > 750$ GeV;
(iii) the invariant mass of both $b$ jets and the charged lepton must be larger than the top mass, taken here as 175 GeV.
The distributions of the relevant variables are shown in Fig.~\ref{fig:dist-1Q1-2b}.
\begin{figure}[b]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/ptbmax2-1Q1-2b.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/HT-1Q1-2b.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mlb1-1Q1-2b.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mlb2-1Q1-2b.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Kinematical distributions of variables used in selection criteria
for the $\ell^\pm$ ($2b$) final state: transverse momentum of the subleading $b$ jet, total transverse energy and invariant masses of the charged lepton and the two $b$ jets. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-1Q1-2b}
\end{center}
\end{figure}
The first condition is inspired by the specific decays in Eqs.~(\ref{ec:ch1Q12b}).
(The dependence on the $p_T$ cut is not very strong, and we have chosen 150 GeV for simplicity.)
The transverse energy requirement is a general one to look for high mass states, not very optimised for these heavy quark masses.
The invariant mass requirements are extremely useful to reduce the $t \bar t nj$ background (some events still remain due to mistags of charm or light jets), and allow us to improve on previous analyses of $T \bar T$ production in this channel~\cite{AguilarSaavedra:2005pv,AguilarSaavedra:2006gv}.
However, they also reduce the $B \bar B$ and $X \bar X$ signals, for which some of the decay channels have the charged lepton and $b$ quarks both resulting from a top quark.
We give in Table~\ref{tab:nsnb-1Q1-2b} the number of events at selection, also including the ones at pre-selection for better comparison.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccccccccccc}
& Pre. & Sel. & Peak & \quad &
& Pre. & Sel. & Peak
\\[1mm]
$T \bar T$ ($T_\text{s}$) & 5797.8 & 796.6 & 538.8 && $B \bar B$ ($B_\text{s}$) & 4460.1 & 391.7 & 217.7 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 6172.9 & 795.0 & 551.7 && $B \bar B$ ($TB_{\text{d}_1}$) & 4802.3 & 352.8 & 193.4 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 5294.3 & 123.8 & 55.2 && $B \bar B$ ($TB_{\text{d}_2}$) & 7484.4 & 186.5 & 84.7 \\
$X \bar X$ ($XT_\text{d}$) & 7506.3 & 165.5 & 74.8 && $B \bar B$ ($BY_\text{d}$) & 1399.6 & 307.0 & 198.8 \\
& & & && $Y \bar Y$ ($BY_\text{d}$) & 6588.5 & 1974.3 & 1508.8\\
\hline
$t \bar tnj$ & 902205 & 299 & 117 && $W b \bar bnj$ & 19920 & 125 & 52\\
$tW$ & 30280 & 68 & 34 && $W t \bar tnj$ & 592 & 12 & 1 \\
$t\bar tb\bar b$ & 2287 & 15 & 6 && $Z/\gamma nj$ & 3314 & 2 & 1 \\
$t\bar tt\bar t$ & 12 & 1 & 0 && $Z b \bar bnj$ & 4715 & 24 & 9 \\
$Wnj$ & 37236 & 49 & 22
\end{tabular}
\end{center}
\caption{Number of signal and background events in the $\ell^\pm$ ($2b$) final state at the pre-selection and selection level, and at the reconstructed mass peak. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-1Q1-2b}
\end{table}
We observe the excellent background reduction achieved with these simple selection criteria, especially with the $\ell b$ invariant mass cuts: the $t \bar t nj$ background is reduced by a factor of 3000, while the $Y \bar Y$ signal is kept at one third. The $T \bar T$ signal is reduced to one seventh because there are contributing channels other than $T \bar T \to W^+ b W^- \bar b$, and those are quite suppressed by the event selection necessary to reduce $t \bar t nj$.
The $T \bar T$ and $Y \bar Y$ signals are reconstructed by choosing the best pairing between $b$ jets and reconstructed $W$ bosons:
\begin{enumerate}
\item The hadronic $W$ is obtained with the two jets (among the three with largest $p_T$) having an invariant mass closest to $M_W$.
\item The leptonic $W$ is obtained from the charged lepton and the missing energy with the usual method, keeping both solutions for the neutrino momentum.
\item The two heavy quarks $Q=T,Y$ are reconstructed with one of the $W$ bosons and one of the $b$ jets. We label them as $Q_{1,2}$, corresponding to the hadronic and leptonic $W$, respectively.
\item The combination minimising
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{Q_1}^\text{rec}-m_{Q_2}^\text{rec})^2}{\sigma_Q^2}
\end{equation}
\end{small}%
is selected, with $\sigma_W = 10$ GeV, $\sigma_Q = 20$ GeV.
\end{enumerate}
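The ``usual method'' in step 2 solves a quadratic for the neutrino longitudinal momentum using the $W$ mass constraint. A minimal sketch in the massless-lepton approximation follows; the conventions and names are ours.

```python
import math

MW = 80.4  # W boson mass (GeV)

def neutrino_pz(lep, met_x, met_y, mw=MW):
    """Both solutions for the neutrino p_z from the W mass constraint,
    with `lep` a massless-lepton four-vector (E, px, py, pz) and
    (met_x, met_y) the missing transverse momentum. If the discriminant
    is negative, the real part of the complex pair is returned twice."""
    e, px, py, pz = lep
    mu = 0.5 * mw * mw + px * met_x + py * met_y
    pt2 = px * px + py * py
    a = mu * pz / pt2
    disc = a * a - (e * e * (met_x * met_x + met_y * met_y) - mu * mu) / pt2
    if disc < 0.0:
        return (a, a)
    r = math.sqrt(disc)
    return (a + r, a - r)
```

Each solution, combined with the hadronic $W$ and a $b$ jet, would then feed the $\chi^2$-like combination choice of step 4.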
We present the reconstructed mass distributions at the selection level in Fig.~\ref{fig:mrec-1Q1-2b}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mQ1-1Q1-2b.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mQ2-1Q1-2b.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed heavy quark masses at the selection level in the $\ell^\pm$ ($2b$) final state.
The luminosity is 30 fb$^{-1}$.}
\label{fig:mrec-1Q1-2b}
\end{center}
\end{figure}
With the criteria applied, the peaks are very sharp even for the $T \bar T$ signals, which involve several competing decay chains giving up to six $b$ quarks. In particular, the $p_T$ cut on the subleading $b$ jet suppresses the $T \to Ht$ decays, selecting only the $T \to Wb$ decays in which we are interested. The signal significance can be estimated by performing the invariant mass cuts
\begin{equation}
350~\text{GeV} < m_{Q_{1,2}}^\text{rec} < 650~\text{GeV} \,.
\end{equation}
The numbers of signal and background events at the peak, defined by the above mass windows, can be found in Table~\ref{tab:nsnb-1Q1-2b}. The high signal significance achieved for the case of $T$ and $Y$ quarks implies an excellent discovery potential, summarised in Table~\ref{tab:sig-1Q1-2b}.
We include a 20\% systematic uncertainty in the estimations in all cases. For $T$ quarks the discovery luminosities are rather small, except in the $TB_{\text{d}_2}$ model and the $(X \, T)$ doublet, where the $T \to Wb$ decay does not take place. For $Y \bar Y$ production the discovery potential is even better, because the $Y \to Wb$ channel is the only one present.
For $B$ and $X$ quarks the signals are smaller, of the same size as the background itself, implying a signal significance below $5\sigma$ even for large luminosities, due to the assumed uncertainty in the background normalisation. However, if a signal is detected in other final states, it should also be possible to detect the $B \bar B$ and $X \bar X$ signals in the single lepton channel with two $b$ tags using a dedicated, optimised analysis.
The heavy quark reconstruction as a peak in the $Wb$ invariant mass distribution implies that it has either charge $2/3$ or $-4/3$. The two possibilities cannot be distinguished unless the $b$ jet charge is measured, which is very difficult. However, for the models considered in this paper a strong hint is offered by the signal size itself, which is much larger for $Y \bar Y$ production than for $T \bar T$, and
the observation of $T \to Zt$ in the opposite-sign dilepton and trilepton final states establishes the quark charge.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 1.1 fb$^{-1}$ & $m_T$ & & $TB_{\text{d}_2}$ & -- & no \\
$B_\text{s}$ & -- & no & & $XT_\text{d}$ & -- & no \\
$TB_{\text{d}_1}$ & 0.60 fb$^{-1}$ & $m_T$ & & $BY_\text{d}$ & 0.18 fb$^{-1}$ & $m_Y$
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm$ ($2b$) final state. A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-1Q1-2b}
\end{table}
Finally, it is worth mentioning that $T$ quark singlets and those in the $TB_{\text{d}_1}$ model (which have the same decay channels) could in principle be distinguished by the $W$ helicity fractions~\cite{Kane:1991bg}, but for large $m_T$ the $W$ bosons in $T \to W^+ b$ are mainly longitudinal and the difference between a left- and a right-handed $WTb$ coupling is washed out. For a 500 GeV $T$ singlet we have
$F_L \simeq 0.05$, $F_0 \simeq 0.95$, $F_R \simeq 0$, while for the $T$ quark in
a $(T \, B)$ doublet $F_L \simeq 0$, $F_0 \simeq 0.95$, $F_R \simeq 0.05$. With 500 events for 30 fb$^{-1}$, the statistical error $\sim 1/\sqrt N$ in angular asymmetries and related observables is expected to be around 5\%, of the order of the difference between the two models. Systematic uncertainties in the measurement of helicity fractions can also be important~\cite{AguilarSaavedra:2007rs}.
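The 5\% figure is simple Poisson statistics and can be checked directly (the event count is taken from the text):

```python
import math

n_events = 500                        # events at 30 fb^-1
stat_err = 1.0 / math.sqrt(n_events)  # relative statistical error
# stat_err is about 0.045, comparable to the ~0.05 shift in F_L, F_R
# between the singlet and doublet cases.
```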
\subsection{Final state $\ell^\pm$ ($4b$)}
The main interest of this final state, in addition to the discovery of new quarks, lies in the observation of the decays $T \to Ht$, $B \to Hb$, which would also allow an early discovery of a light Higgs boson if new vector-like quarks exist~\cite{delAguila:1989ba,delAguila:1989rq,AguilarSaavedra:2006gw,Sultansoy:2006cw}. The relevant decay channels are
\begin{align}
& T \bar T \to Ht \, W^- \bar b \to H W^+b W^- \bar b
&& \quad H \to b \bar b , WW \to \ell \nu q \bar q' \,, \nonumber \\
& T \bar T \to Ht \, V \bar t \to H W^+b \, V W^- \bar b
&& \quad H \to b \bar b , WW \to \ell \nu q \bar q' , V \to q \bar q/\nu \bar \nu \,, \nonumber \\
& B \bar B \to H b \, W^+ \bar t \to H b \, W^+ W^- \bar b
&& \quad H \to b \bar b , WW \to \ell \nu q \bar q' \,.
\label{ec:ch1Q14b}
\end{align}
The first and last channels in the above equation give exactly four $b$ quarks in the final state, while the second gives up to six $b$ quarks. For $T \bar T$ production alone, it has been shown~\cite{AguilarSaavedra:2006gw} that the discrimination between the first two decay chains is very involved in final states with only four $b$-tagged jets, because the signals are actually not very different. The situation is worsened if additional $B$ quarks exist, for example in models introducing a $(T \, B)$ doublet.
Here we implement a discriminating method based on a likelihood analysis similar to the ones used in previous sections.
In this final state, however, discrimination is less efficient than for multi-lepton signals due to the combinatorics resulting from the presence of four $b$-tagged jets and their association with the other particles present in the event.
We use high-statistics reference samples for three event classes ($a,b,c$) corresponding to the three decay channels in Eqs.~(\ref{ec:ch1Q14b}).
As selection criteria for this analysis we demand (i) four $b$-tagged jets with $p_T > 20$ GeV; (ii) transverse energy $H_T > 750$ GeV. The distribution of this variable for the signals and the SM background can be found in Fig.~\ref{fig:dist-1Q1-4b}.
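The two requirements above amount to a simple event filter. A minimal sketch, assuming a hypothetical event record holding the $b$-tagged jet transverse momenta and $H_T$ in GeV:

```python
def passes_selection(event):
    """Selection for the single lepton (4b) analysis:
    (i) four b-tagged jets with pT > 20 GeV;
    (ii) total transverse energy H_T > 750 GeV.
    `event` is a hypothetical dict, e.g. {"bjet_pt": [...], "HT": ...}."""
    n_btag = sum(1 for pt in event["bjet_pt"] if pt > 20.0)
    return n_btag == 4 and event["HT"] > 750.0
```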
\begin{figure}[t]
\begin{center}
\epsfig{file=Figs/HT-1Q1-4b.eps,height=5.1cm,clip=}
\caption{Total transverse energy distribution for the signals and backgrounds in the $\ell^\pm$ ($4b$) final state. The luminosity is 30 fb$^{-1}$.}
\label{fig:dist-1Q1-4b}
\end{center}
\end{figure}
The variables used in the discrimination are obtained using a preliminary reconstruction of a top quark and the Higgs boson:
\begin{enumerate}
\item The hadronic $W$ is obtained from the two jets (among the three with largest $p_T$) having an invariant mass closest to $M_W$.
\item The leptonic $W$ is obtained from the charged lepton and the missing energy with the usual method, keeping both solutions for the neutrino momentum.
\item The top quark is reconstructed with one of the $W$ bosons and one of the four $b$ jets, selecting the ones which give an invariant mass closest to the nominal top mass. The $W$ boson and $b$ quark selected are labelled as $W_2$ and $b_2$.
\item The Higgs boson ``candidate'' is obtained from the two $b$ jets, among the three remaining ones, which have the minimum invariant mass.
\item The remaining $W$ boson and $b$ jet are labelled as $W_1$, $b_1$.
\end{enumerate}
The interesting variables for signal discrimination are:
\begin{itemize}
\item The light jet multiplicity.
\item The $W_1 b_1$ invariant mass, which peaks around $m_T$ in class ($a$).
\item The $H b_1$ invariant mass, which peaks at $m_B$ in class ($c$).
\item The $H W_2 b_2$ invariant mass. This corresponds to $m_T$ in class ($a$), but the distribution is very similar for the other decay channels.
\item The $W_1 W_2 b_2$ invariant mass, which is $m_B$ in class ($c$) but does not differ much for events in the other decay channels.
\end{itemize}
The normalised distributions of these variables for the reference event samples are presented in Fig.~\ref{fig:lik-1Q1-4b}, together with the resulting probability distributions $P_{a,b,c}$ that the events belong to a given class. Comparing with the trilepton and dilepton final states, we find that the separation among channels is much less clean. This is reflected in Table~\ref{tab:lik-1Q1-4b}, which collects the fractions of events correctly and incorrectly classified.
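The likelihood assignment itself can be sketched as a naive product of the per-variable probabilities read off the normalised reference histograms, then normalised over the three classes; this is a sketch, and the histogram contents used in the test are invented for illustration:

```python
def class_probabilities(event_vars, ref_hists):
    """Naive likelihood classification: for each class, multiply the
    per-variable probabilities from the normalised reference histograms
    (variables treated as independent), then normalise over classes
    to obtain the probabilities P_a, P_b, P_c.

    event_vars : dict  variable name -> bin index for this event
    ref_hists  : dict  class label -> {variable name -> normalised histogram}
    """
    likelihood = {}
    for cls, hists in ref_hists.items():
        prob = 1.0
        for var, ibin in event_vars.items():
            prob *= hists[var][ibin]
        likelihood[cls] = prob
    total = sum(likelihood.values())
    return {cls: p / total for cls, p in likelihood.items()}
```

An event is then assigned to the class with the largest of $P_a$, $P_b$, $P_c$.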
\begin{figure}[p]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/D-mult-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/D-mW1b1-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/D-mHb1-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/D-mtH-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/D-mW1t-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/D-Pa-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/D-Pb-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/D-Pc-1Q1-4b.eps,height=5.0cm,clip=}
\end{tabular}
\caption{Kinematical variables used to classify the three heavy quark signals in the
$\ell^\pm$ (4$b$) final state, and the resulting probability distributions for events in the reference samples.}
\label{fig:lik-1Q1-4b}
\end{center}
\end{figure}
\begin{table}[htb]
\begin{center}
\begin{tabular}{cccc}
Class & $P_a > P_b,P_c$ & $P_b > P_a,P_c$ & $P_c > P_a,P_b$ \\
\hline
($a$) & 0.53 & 0.17 & 0.30 \\
($b$) & 0.28 & 0.47 & 0.25 \\
($c$) & 0.27 & 0.15 & 0.58
\end{tabular}
\end{center}
\caption{Performance of the likelihood function on the $\ell^\pm$ ($4b$) event reference samples: fractions of events in each sample and their classification. Events in a class $x$ are correctly classified if $P_x > P_y,P_z$, where $y$, $z$ are the other classes.}
\label{tab:lik-1Q1-4b}
\end{table}
We give in Table~\ref{tab:nsnb-1Q1-4b} the numbers of signal and background events at the selection level, and their classification (as in the previous sections, we select the class which has the highest probability).
\begin{table}[htb]
\begin{center}
\begin{tabular}{ccccc}
& Total & ($a$) & ($b$) & $(c)$ \\[1mm]
$T \bar T$ ($T_\text{s}$) & 836.8 & 342.5 & 260.4 & 233.9 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 886.5 & 363.9 & 286.4 & 236.2 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 1780.7 & 509.9 & 841.9 & 428.9 \\
$X \bar X$ ($XT_\text{d}$) & 167.3 & 44.9 & 86.5 & 35.9 \\
$B \bar B$ ($B_\text{s}$) & 396.8 & 119.6 & 64.3 & 212.9 \\
$B \bar B$ ($TB_{\text{d}_1}$) & 416.4 & 119.7 & 67.2 & 229.5 \\
$B \bar B$ ($TB_{\text{d}_2}$) & 160.0 & 43.0 & 83.1 & 33.9 \\
$B \bar B$ ($BY_\text{d}$) & 146.1 & 62.0 & 10.5 & 73.6 \\
$Y \bar Y$ ($BY_\text{d}$) & 57.9 & 28.1 & 4.9 & 24.9 \\
\hline
$t \bar tnj$ & 404 & 122 & 228 & 54 \\
$tW$ & 5 & 3 & 0 & 2 \\
$t\bar tb\bar b$& 158 & 47 & 66 & 45 \\
$t\bar tt\bar t$& 1 & 0 & 0 & 1 \\
$Wnj$ & 1 & 0 & 1 & 0 \\
$Wb \bar b nj$ & 3 & 0 & 1 & 2 \\
$Zb \bar b nj$ & 1 & 0 & 0 & 1
\end{tabular}
\end{center}
\caption{Number of signal and background events in the $\ell^\pm$ ($4b$) final state at the selection level assigned to each event class. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-1Q1-4b}
\end{table}
An important remark here is the presence of $X \bar X$ and $Y \bar Y$ signals, as well as $B \bar B$ ones in the $TB_{\text{d}_2}$ model, which involve only two $b$ quarks at the partonic level. Events with four $b$-tagged jets result from the mistag of charm quarks from $W$ decays and, to a lesser extent, of light quarks. The presence and size of these signals illustrates the relative importance of mistags in the processes we are interested in: for $T \bar T$ and $B \bar B$ events with four $b$ jets it may well happen that only three of them correspond to true $b$ quarks and one is a charm quark from a $W$ decay. This, added to the kinematical similarity of the signals and the several possibilities in $b$ jet assignments, makes the separation among the channels difficult.
The luminosities required for $5\sigma$ discovery are collected in Table~\ref{tab:sig-1Q1-4b}, summing all contributions in a given model and combining the significance of the three classes $(a,b,c)$. A systematic uncertainty of 20\% is included in the estimations.
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 0.70 fb$^{-1}$ & $m_T$, $M_H$ & & $TB_{\text{d}_2}$ & 0.16 fb$^{-1}$ & no \\
$B_\text{s}$ & 1.9 fb$^{-1}$ & $m_B$, $M_H$ & & $XT_\text{d}$ & 0.16 fb$^{-1}$ & no \\
$TB_{\text{d}_1}$ & 0.25 fb$^{-1}$ & $m_T$, $m_B$, $M_H$ & & $BY_\text{d}$ & 6.2 fb$^{-1}$ & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm $ ($4b$) final state.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-1Q1-4b}
\end{table}
Comparing with Ref.~\cite{AguilarSaavedra:2006gw}, we observe that the discovery luminosity is significantly smaller than the values quoted there. The reasons for this difference are: (i) for consistency with the other channels we are giving here the statistical significance of the signal (including the Higgs boson) compared to the ``only background'' hypothesis, while in Ref.~\cite{AguilarSaavedra:2006gw} we compared the ``Higgs'' and ``no Higgs'' hypotheses in the presence of new quarks, for which the significance is lower; (ii) multi-jet SM backgrounds in Ref.~\cite{AguilarSaavedra:2006gw} were quite pessimistically overestimated, and with our new evaluation with updated tools and improved matrix element-parton shower matching they turn out to be smaller; (iii) the new likelihood classification performed here, with the subsequent statistical combination of channels, also improves the significance.
We finally address the heavy quark and Higgs boson reconstruction, which depends on the decay channel in which events are classified. We only perform the reconstruction of events classified in the first and third classes, because those in the second class have six $b$ jets at the partonic level.
{\em Class} ($a$): $T \bar T \to Ht W \bar b \to H Wb Wb$. Events identified as resulting from this decay chain are reconstructed with the following procedure:
\begin{enumerate}
\item Two light jets are selected to form the hadronic $W$, labelled as $W_H$.
If there are only two light jets these are automatically chosen; if there are more than two, only up to three (ordered by decreasing $p_T$) are considered.
\item The leptonic $W$ (labelled as $W_L$) is obtained from the charged lepton $\ell$ and the missing energy, in the way explained in previous sections.
Both solutions for the neutrino momentum are kept, and the one giving best reconstructed masses is selected.
\item Two $b$ jets are selected among the ones present, to be paired with $W_H$ and $W_L$, respectively.
\item The top quark is reconstructed from one of the $Wb$ pairs, and its parent heavy quark $T_1$ from the top quark and the two remaining $b$ jets.
\item The other heavy quark $T_2$ is reconstructed from the remaining $Wb$ pair.
\item Among all choices for $b$ and light jets and all possible pairings, the combination minimising the quantity
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_t^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{T_1}^\text{rec}-m_{T_2}^\text{rec})^2}{\sigma_T^2}
\end{equation}
\end{small}%
is selected, with $\sigma_t = 14$ GeV, $\sigma_T = 20$ GeV.
After the final choice, the Higgs is reconstructed from the two $b$ jets not assigned to the $W$ bosons.
\end{enumerate}
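The minimisation in step 6 can be sketched as follows; the candidate masses would be computed from the reconstructed momenta, and the value taken for $\sigma_W$ is our assumption for illustration, since it is not quoted in the text:

```python
MW, MT = 80.4, 172.5             # nominal W and top masses (GeV)
SIGMA_W = 10.0                   # assumed W mass resolution (GeV), illustrative
SIGMA_T, SIGMA_TQ = 14.0, 20.0   # sigma_t and sigma_T from the text (GeV)

def chi2(m_WH, m_WL, m_t, m_T1, m_T2):
    """Quantity minimised over jet assignments in the class (a)
    reconstruction: both W candidates and the top candidate are pulled
    towards their nominal masses, and the two heavy-quark candidates
    are required to be close in mass."""
    return ((m_WH - MW) / SIGMA_W) ** 2 + ((m_WL - MW) / SIGMA_W) ** 2 \
         + ((m_t - MT) / SIGMA_T) ** 2 + ((m_T1 - m_T2) / SIGMA_TQ) ** 2

def best_assignment(candidates):
    """candidates: iterable of (m_WH, m_WL, m_t, m_T1, m_T2) tuples,
    one per allowed choice of jets, pairings and neutrino solution;
    returns the combination with the smallest chi2."""
    return min(candidates, key=lambda c: chi2(*c))
```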
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mtH-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/mbH-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/mbW-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/mtW-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/mH-1Q1-4b-a.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/mH-1Q1-4b-c.eps,height=5.0cm,clip=} \\
\end{tabular}
\caption{Reconstructed heavy quark and Higgs masses in the $\ell^\pm$ ($4b$) final state.}
\label{fig:mrec-1Q1-4b}
\end{center}
\end{figure}
{\em Class} ($c$): $B \bar B \to Hb Wt \to Hb WWb$. The reconstruction of this channel proceeds through
the same steps $1-2$ as in class ($a$), and then:
\begin{enumerate}\setcounter{enumi}{2}
\item One $b$ jet is selected and paired with one of the two $W$ bosons to form a top quark, and with the other $W$ to form the heavy quark $B_2$.
\item The three remaining $b$ jets then reconstruct the heavy quark $B_1$.
\item The combination minimising
\begin{small}
\begin{equation}
\frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{t}^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{B_1}^\text{rec}-m_{B_2}^\text{rec})^2}{\sigma_B^2} \,,
\end{equation}
\end{small}%
with $\sigma_B = 20$ GeV,
is finally selected. Among the three $b$ jets corresponding to $B_1$, the two with the minimum invariant mass are chosen to reconstruct the Higgs boson.
\end{enumerate}
The results are presented in Fig.~\ref{fig:mrec-1Q1-4b}. For brevity we do not include the reconstructed $W$ boson and top quark distributions, which have good peaks by construction.
\begin{figure}[b]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/W-mW1b1-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/W-mtH-1Q1-4b.eps,height=5.0cm,clip=} \\
\epsfig{file=Figs/W-mW1t-1Q1-4b.eps,height=5.0cm,clip=} & \quad &
\epsfig{file=Figs/W-mHb1-1Q1-4b.eps,height=5.0cm,clip=}
\end{tabular}
\caption{Comparison between kinematical distributions for correctly and wrongly classified events (see the text).}
\label{fig:wrong-1Q1-4b}
\end{center}
\end{figure}
We observe that for $T \bar T$ signals in class $(a)$ the $T_{1,2}$ peaks are well reconstructed, as is the Higgs boson peak. The same happens for $B \bar B$ signals in class $(c)$: the $B_{1,2}$ peaks are clear and the Higgs boson peak is observable. However, the intriguing fact is that, for $T \bar T$ and $B \bar B$ signals included in the ``wrong'' class (respectively, $(c)$ and $(a)$), the reconstruction procedure produces peaks which are as sharp as those for the signals ``correctly'' classified, except for the Higgs peaks. This point deserves a detailed discussion.
Clearly, if an event is assigned to a given decay channel in Eqs.~(\ref{ec:ch1Q14b}) based on its likelihood, it is because its kinematics is quite compatible with that decay channel. It is then not so surprising that, for example, a $B \bar B$ event classified as $T \bar T$ based on its topology looks rather like a $T \bar T$ event when it is reconstructed as $T \bar T$.
For the reader's illustration we present in Fig.~\ref{fig:wrong-1Q1-4b} several distributions
for the production of $T \bar T$ and $B \bar B$ events in the case of singlets in all decay channels.
In the upper part we show the normalised $W_1 b_1$ and $W_2 b_2 H$ invariant mass distributions assigned to class $(a)$. For $B \bar B$ events (incorrectly classified) the distributions display peaks very similar to the ones for $T \bar T$ events correctly included in this class. In the lower part of this figure we plot the $W_1 W_2 b_2$ and $H b_1$ distributions for $T \bar T$ and $B \bar B$ events in class ($c$). The peaks are quite similar for $T \bar T$ (wrong classification) and $B \bar B$ (correct).
Therefore, we can conclude that distinguishing $T \bar T$ and $B \bar B$ signals in this channel is a more demanding task, and the multi-leptonic channels are much more appropriate for that. Fortunately,
all these difficulties in signal discrimination do not affect the discovery potential, which is excellent for this final state.
\subsection{Final state $\ell^\pm$ ($6b$)}
The single lepton final state with six $b$ jets allows a clean reconstruction of the decay
\begin{align}
& T \bar T \to Ht \, H \bar t \to H W^+b \, H W^- \bar b
&& \quad H \to b \bar b , WW \to \ell \nu q \bar q' \,,
\label{ec:ch1Q16b}
\end{align}
with $H \to b \bar b$, which seems impossible if only four jets are tagged. This final state is most interesting for the models in which the decay $T \to Ht$ is enhanced and the $6b$ signal is larger. We do not impose any further selection criteria apart from having six $b$-tagged jets with $p_T > 20$ GeV, which defines the sample studied. The numbers of signal and background events are given in Table~\ref{tab:nsnb-1Q1-6b}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ccccc}
& Sel. & \quad & & Sel. \\[1mm]
$T \bar T$ ($T_\text{s}$) & 26.5 & & $B \bar B$ ($B_\text{s}$) & 2.5 \\
$T \bar T$ ($TB_{\text{d}_1}$) & 29.0 & & $B \bar B$ ($TB_{\text{d}_1}$) & 2.8 \\
$T \bar T$ ($TB_{\text{d}_2}$/$XT_\text{d}$) & 112.1 & & $B \bar B$ ($TB_{\text{d}_2}$) & 0.0 \\
$X \bar X$ ($XT_\text{d}$) & 0.4 & & $B \bar B$ ($BY_\text{d}$) & 0.7 \\
& & & $Y \bar Y$ ($BY_\text{d}$) & 0.3 \\
\hline
$t\bar tb\bar b$ & 2
\end{tabular}
\end{center}
\caption{Number of events in the $\ell^\pm$ ($6b$) final state at selection level. The luminosity is 30 fb$^{-1}$.}
\label{tab:nsnb-1Q1-6b}
\end{table}
This final state is extremely clean, and discovery could be made merely by event counting. The $5\sigma$ discovery potential for the different models is given in Table~\ref{tab:sig-1Q1-6b}, summing all signal contributions. The discovery potential (in models with $T$ quarks, when a signal is produced) is determined by the requirement of having at least 10 signal events. The background normalisation in this case has little effect on the significance, because for the discovery luminosities it is rather small.
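The counting estimate can be sketched numerically. This is a minimal sketch assuming Poisson statistics with the asymptotic significance $Z = \sqrt{2\,[(s+b)\ln(1+s/b)-s]}$ and linear scaling of the yields quoted for 30 fb$^{-1}$; the 0.1 fb$^{-1}$ scan step is an arbitrary choice:

```python
import math

def poisson_significance(s, b):
    """Median discovery significance of s signal events over an
    expected background b (assumed b > 0), using the asymptotic
    formula Z = sqrt(2[(s+b) ln(1+s/b) - s])."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def discovery_luminosity(s30, b30, z=5.0, n_min=10, l0=30.0):
    """Smallest luminosity (scanned in 0.1 fb^-1 steps) at which the
    signal, scaled linearly from its yield s30 at l0 = 30 fb^-1, has
    at least n_min events and significance >= z over the scaled
    background b30 * lum / l0."""
    for step in range(1, 1001):
        lum = 0.1 * step
        s, b = s30 * lum / l0, b30 * lum / l0
        if s >= n_min and poisson_significance(s, b) >= z:
            return round(lum, 1)
    return None
```

With the $TB_{\text{d}_2}$ numbers in Table~\ref{tab:nsnb-1Q1-6b} ($s = 112.1$, $b = 2$ at 30 fb$^{-1}$) the 10-event requirement dominates, and the scan gives 2.7 fb$^{-1}$, consistent with Table~\ref{tab:sig-1Q1-6b}.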
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccccccc}
& $L$ & Rec. & \quad & & $L$ & Rec. \\[1mm]
$T_\text{s}$ & 11 fb$^{-1}$ & $m_T$, $M_H$ & & $TB_{\text{d}_2}$ & 2.7 fb$^{-1}$ & $m_T$, $M_H$ \\
$B_\text{s}$ & -- & no & & $XT_\text{d}$ & 2.7 fb$^{-1}$ & $m_T$, $M_H$ \\
$TB_{\text{d}_1}$ & 9.4 fb$^{-1}$ & $m_T$, $M_H$ & & $BY_\text{d}$ & -- & no
\end{tabular}
\end{center}
\caption{Luminosity $L$ required to have a $5\sigma$ discovery in the $\ell^\pm $ ($6b$) final state.
A dash indicates no signal or a luminosity larger than 100 fb$^{-1}$.
We also indicate whether a mass peak can be reconstructed in this final state.}
\label{tab:sig-1Q1-6b}
\end{table}
The event reconstruction can be easily done despite the large combinatorics from the six $b$ jets. The procedure is similar to the ones used in other final states:
\begin{enumerate}
\item Two light jets (among the three with largest $p_T$) are selected to form the hadronic $W$, labelled as $W_H$.
\item The leptonic $W$, labelled as $W_L$, is obtained from the charged lepton $\ell$ and the missing energy.
\item Two $b$ jets are selected among the ones present, to be paired with the two $W$ bosons to reconstruct the top quarks decaying hadronically and semileptonically ($t_H$ and $t_L$).
\item The four remaining $b$ jets are grouped in pairs to reconstruct the two Higgs bosons, $H_1$ and $H_2$.
\item The two heavy quarks $T_1$ (corresponding to $W_H$) and $T_2$ (with $W_L$) are reconstructed from a top quark plus a Higgs boson.
\item Among all choices for $b$ and light jets and all possible pairings, the combination minimising the quantity
\begin{small}
\begin{align}
& \frac{(m_{W_H}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{W_L}^\text{rec}-M_W)^2}{\sigma_W^2} +
\frac{(m_{t_H}^\text{rec}-m_t)^2}{\sigma_t^2} +
\frac{(m_{t_L}^\text{rec}-m_t)^2}{\sigma_t^2} \notag \\
& + \frac{(m_{T_1}^\text{rec}-m_{T_2}^\text{rec})^2}{\sigma_T^2} +
\frac{(M_{H_1}^\text{rec}-M_{H_2}^\text{rec})^2}{\sigma_H^2}
\end{align}
\end{small}%
is selected, with $\sigma_H = 20$ GeV.
\end{enumerate}
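To get a feeling for the combinatorics involved, one can count the assignments scanned by the procedure above. The counting below is our own illustration (it treats the two Higgs pairings as ordered, since each Higgs candidate is matched to a different top quark):

```python
from math import comb

def n_assignments(n_b=6, n_light=3, n_nu=2):
    """Illustrative count of the combinations scanned in the 6b
    reconstruction: an ordered choice of b jets for the hadronic and
    leptonic tops, the remaining four b jets split into two Higgs
    pairs assigned to T1 and T2, two light jets among the leading
    n_light, and n_nu neutrino solutions."""
    top_b = n_b * (n_b - 1)      # ordered pair of b jets for t_H, t_L
    higgs = 3 * 2                # 4 jets -> 3 pairings, x2 for the T1/T2 assignment
    light = comb(n_light, 2)     # hadronic W from two light jets
    return top_b * higgs * light * n_nu
```

With the default inputs this gives of order a thousand combinations per event, which is why the $\chi^2$-like criterion is needed to pick one.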
The results are presented in Fig.~\ref{fig:mrec-1Q1-6b}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\epsfig{file=Figs/mT1-1Q1-6b.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mT2-1Q1-6b.eps,height=5.1cm,clip=} \\
\epsfig{file=Figs/mH1-1Q1-6b.eps,height=5.1cm,clip=} & \quad &
\epsfig{file=Figs/mH2-1Q1-6b.eps,height=5.1cm,clip=}
\end{tabular}
\caption{Reconstructed heavy quark and Higgs masses in the $\ell^\pm$ ($6b$) final state.}
\label{fig:mrec-1Q1-6b}
\end{center}
\end{figure}
We omit for brevity the $W$ boson and top quark reconstructed masses, which have good peaks at $M_W$ and $m_t$ by construction. It is seen that for the $TB_{\text{d}_2}$ and $(X \, T)$ models the heavy quark and Higgs peaks are quite good and, with a moderate luminosity, they would give evidence for the $T \to Ht$ decay (and, in particular, for the production of a Higgs boson). For $T$ singlets and the $TB_{\text{d}_1}$ model the signals are smaller and they would require more luminosity, not only to be discovered but also to reconstruct the peaks.
We point out that an important difference with the $4b$ final state is that six $b$ jets can only be produced (up to mistags of charm quarks) from the decay in Eq.~(\ref{ec:ch1Q16b}). Hence, the model identification is cleaner here. We also note that for $T \bar T$ production within the $TB_{\text{d}_2}$ and $(X \, T)$ models the signal in this final state is almost four times larger than in the $\ell^+ \ell^-$ ($Z$) one with four $b$ tags, so this final state is best suited to detect $T \to Ht$.
\subsection{Summary}
The single lepton final state offers the best heavy quark discovery potential for all the models studied, due to the large signal branching ratios. To achieve this result, an efficient reduction of the large backgrounds from $t \bar t nj$, $Wnj$ and $W b \bar b nj$ is necessary. We have concentrated on three different subsamples in this final state, with exactly two, four and six $b$ jets. Clearly, the discovery potential will improve further including the samples with three and five $b$ jets, which have not been considered here for brevity.
The final state with two $b$-tagged jets is the best suited for the discovery of $Y \bar Y$ production, which only requires 0.18 fb$^{-1}$\ for a 500 GeV quark. It is also very good for $T$ singlets (1.1 fb$^{-1}$\ for the same mass) and $(T \, B)$ doublets in scenario 1 (0.60 fb$^{-1}$). In this final state the $T$ and $Y$ masses can be reconstructed as peaks in the $Wb$ invariant mass distributions. The identity of the quarks cannot be established unless the $b$ jet charge is measured: the decays $T \to W^+ b$ and $\bar Y \to W^+ \bar b$ both give a $W^+$ boson plus a $b$-tagged jet, and the jet charge measurement is necessary to discriminate both possibilities. It is interesting to point out that all kinematical distributions are the same for $T$ and $Y$ quarks, including various angular asymmetries which can be built in the $W$ and top quark rest frames (see for example Ref.~\cite{AguilarSaavedra:2006fy}). The only discrimination between both possibilities comes either indirectly, from the cross section measurement (about three times larger for $Y \bar Y$ in this final state, after including efficiencies and cuts) or directly, via the observation of $T \to Zt$ in the dilepton or trilepton final states.
The final state with four $b$ jets is the best one for the discovery of $T$ quarks in either of the models considered. For heavy quark masses of 500 GeV, $5\sigma$ discovery
only requires 0.7 fb$^{-1}$\ for singlets, 0.25 and 0.16 fb$^{-1}$\ for the $TB_{\text{d}_1}$ and
$TB_{\text{d}_2}$ models, respectively, and 0.16 fb$^{-1}$\ if the $T$ quark belongs to a $(X \, T)$ doublet. For $B$ singlets the discovery potential is also the best one, with 1.9 fb$^{-1}$.
(For these results we have assumed a light Higgs, as suggested by precision electroweak data, taking $M_H = 115$ GeV.) This process could also be a discovery channel for the Higgs boson in the presence of $T$ or $B$ quarks. We have gone beyond the signal observation and studied the discrimination among $T \bar T$ and $B \bar B$ signals in this final state, which makes sense because they are both present in general for the case of the $(T \, B)$ doublet. The separation is very difficult, for several reasons: (i) the combinatorics from the presence of four $b$ jets;
(ii) two different decay chains contribute in the case of $T \bar T$; (iii) the signals are kinematically not very different; (iv) the possibility of charm quark mistags. We have implemented a likelihood method to separate $T \bar T$ and $B \bar B$ signals, which has a reasonable efficiency if we bear in mind all these difficulties. After the $T \bar T$ and $B \bar B$ events are classified, the kinematics can be reconstructed according to the decay channel expected in each case, and sharp peaks are obtained in all cases, although the rate of ``wrong'' classifications is sizeable and a better discrimination between $T$ and $B$ quarks can be achieved in the trilepton final state.
It is also worth remarking on the excellent discovery potential for $(X \, T)$ doublets in this final state: only 0.16 fb$^{-1}$\ for heavy quark masses of 500 GeV. The discovery potential in this channel is similar to, but better than, that in the like-sign dilepton and trilepton channels (0.23 and 0.25 fb$^{-1}$, respectively), although in those final states the main signal contribution comes from the charge $5/3$ quark $X$ and here it is the $T$ quark which gives most of the four-$b$ signal. The same results are obtained for the $TB_{\text{d}_2}$ model which includes a $T$ quark with the same decay modes: $5\sigma$ discovery of $T$ is possible with 0.16 fb$^{-1}$\ in the single lepton final state with four $b$ tags, while the $B$ quark can be discovered in the like-sign dilepton and trilepton final states with 0.23 and 0.25 fb$^{-1}$, respectively.
Finally, the sample with six $b$ jets has also been studied. In this final state the decay $T \bar T \to Ht \, H \bar t$ can be cleanly determined and peaks reconstructed without contamination of other decay modes. The signals are small, however, except for the $TB_{\text{d}_2}$ and $(X \, T)$ models, for which $T \to Wb$ does not take place and thus
$T \to Ht$ has a larger branching ratio. The discovery potential is rather good for these models, 2.7 fb$^{-1}$\ for $m_T = 500$ GeV.
\section{The roadmap to top partner identification}
\label{sec:summ}
We summarise in Table~\ref{tab:summ} the discovery luminosities for the six models in the different final states examined. The comparison among them clearly shows that the single lepton channel (either with two or four $b$ jets) offers the best discovery potential for new quarks. In the case of doublets, the signals may correspond to one or both members, as explained in detail in the summary at the end of each section, where it is also indicated whether heavy quark masses can be reconstructed. We now discuss case by case how the different models would be discovered and identified.
\begin{table}[ht]
\begin{center}
\begin{small}
\begin{tabular}{lcccccc}
& $T_\text{s}$ & $B_\text{s}$ & $TB_{\text{d}_1}$ & $TB_{\text{d}_2}$ & $XT_\text{d}$ & $BY_\text{d}$ \\
$\ell^+ \ell^+ \ell^- \ell^-$ ($ZZ$)
& -- & 24 fb$^{-1}$ & 18 fb$^{-1}$ & 23 fb$^{-1}$ & 23 fb$^{-1}$ & 10 fb$^{-1}$ \\
$\ell^+ \ell^+ \ell^- \ell^-$ ($Z$)
& 11 fb$^{-1}$ & 14 fb$^{-1}$ & 5.7 fb$^{-1}$ & 3.4 fb$^{-1}$ & 3.3 fb$^{-1}$ & 50 fb$^{-1}$ \\
$\ell^+ \ell^+ \ell^- \ell^-$ (no $Z$)
& 35 fb$^{-1}$ & 25 fb$^{-1}$ & 11 fb$^{-1}$ & 3.3 fb$^{-1}$ & 3.5 fb$^{-1}$ & -- \\
$\ell^\pm \ell^\pm \ell^\mp$ ($Z$)
& 3.4 fb$^{-1}$ & 3.4 fb$^{-1}$ & 1.1 fb$^{-1}$ & 0.73 fb$^{-1}$ & 0.72 fb$^{-1}$ & 26 fb$^{-1}$ \\
$\ell^\pm \ell^\pm \ell^\mp$ (no $Z$)
& 11 fb$^{-1}$ & 3.5 fb$^{-1}$ & 1.1 fb$^{-1}$ & 0.25 fb$^{-1}$ & 0.25 fb$^{-1}$ & -- \\
$\ell^\pm \ell^\pm$
& 17 fb$^{-1}$ & 4.1 fb$^{-1}$ & 1.5 fb$^{-1}$ & 0.23 fb$^{-1}$ & 0.23 fb$^{-1}$ & -- \\
$\ell^+ \ell^-$ ($Z$)
& 22 fb$^{-1}$ & 4.5 fb$^{-1}$ & 2.4 fb$^{-1}$ & 4.4 fb$^{-1}$ & 4.4 fb$^{-1}$ & 1.8 fb$^{-1}$ \\
$\ell^+ \ell^-$ ($Z$, $4b$)
& -- & -- & 30 fb$^{-1}$ & -- & -- & 9.2 fb$^{-1}$ \\
$\ell^+ \ell^-$ (no $Z$)
& 2.7 fb$^{-1}$ & 9.3 fb$^{-1}$ & 0.83 fb$^{-1}$ & 1.1 fb$^{-1}$ & 1.1 fb$^{-1}$ & 0.87 fb$^{-1}$ \\
$\ell^\pm$ ($2b$)
& 1.1 fb$^{-1}$ & -- & 0.60 fb$^{-1}$ & -- & -- & 0.18 fb$^{-1}$ \\
$\ell^\pm$ ($4b$)
& 0.70 fb$^{-1}$ & 1.9 fb$^{-1}$ & 0.25 fb$^{-1}$ & 0.16 fb$^{-1}$ & 0.16 fb$^{-1}$ & 6.2 fb$^{-1}$ \\
$\ell^\pm$ ($6b$)
& 11 fb$^{-1}$ & -- & 9.4 fb$^{-1}$ & 2.7 fb$^{-1}$ & 2.7 fb$^{-1}$ & -- \\
\end{tabular}
\end{small}
\end{center}
\caption{Luminosity required to have a $5\sigma$ discovery in all final states studied.}
\label{tab:summ}
\end{table}
A $T$ singlet or the $T$ quark in the $TB_{\text{d}_1}$ model: it would be discovered in the single lepton final state with two or four $b$ jets. In the $2b$ final state, the peaks in the $Wb$ invariant mass distributions give evidence of the charged current decay but do not completely identify the new quark (it could be a charge $-4/3$ quark $Y$, although the cross section would not be consistent with that hypothesis). In the $4b$ final state, the peak in the $Ht$ distribution, with $H$ reconstructed from two $b$ jets exhibiting a peak at $M_H$, gives quite strong hints of the $T \to Ht$ decay, although $B$ quarks also give signals which are not very different. (The $6b$ final state does not have this ambiguity but the observation of $T \to Ht$ requires much larger luminosity.)
The best confirmation of its nature comes with a little more luminosity in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state, with the observation of a peak in the $Zt$ invariant mass distribution. This peak also establishes that the quark has charge $2/3$. The analysis of the charged lepton distribution in the top quark rest frame for the subset of events in which the top quark decays semileptonically can discriminate between a $T$ singlet with a left-handed $WTb$ coupling and the $T$ quark in a $(T \, B)$ doublet for which the coupling is right-handed. With 30 fb$^{-1}$\ the differences found in the forward-backward asymmetry would amount to $2.4\sigma$, and a better sensitivity is expected by using a more sophisticated analysis with a fit to the complete distribution.
A $T$ quark in the $TB_{\text{d}_2}$ model or in a $(X \, T)$ doublet: in the single lepton final state with two $b$ quarks it does not exhibit peaks in the $Wb$ invariant mass distribution because the decay $T \to W^+ b$ does not take place, and the signal is probably very difficult to separate from the $t \bar t nj$ background. In the $4b$ sample, however, the signal is very large and clean, and the quark is seen in the decay $T \to Ht$. With small luminosity, a signal should also be visible in the $6b$ sample. This quark also has enhanced decays $T \to Zt$, from which the quark charge is determined, and the signals in the trilepton and opposite-sign dilepton final states with a $Z$ candidate are $2-3$ times larger than expected for a $T$ singlet.
A $B$ singlet or the $B$ quark in the $TB_{\text{d}_1}$ model: it would be discovered in the single lepton final state with four $b$ jets. However, its discrimination from a $T$ quark might not be very clear due to combinatorics and the signal similarities. With practically the same luminosity, the $B$ quark would appear as a sharp peak in a $Zb$ invariant mass distribution in the $\ell^\pm \ell^\pm \ell^\mp$ ($Z$) final state. This would determine the quark charge, and would be confirmed by an opposite-sign dilepton signal. The evidence for the $B \to W^- t$ decay comes from the same trilepton final state, and the
charged lepton distribution in the top semileptonic decays would in principle probe the chirality of the $WtB$ coupling, but the statistics is smaller than for $T$ quarks.
Indirectly, evidence for the $B \to W^- t$ decay results from the presence of $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$) and $\ell^\pm \ell^\pm$ signals, also observable with small luminosity.
A $B$ quark in the $TB_{\text{d}_2}$ model: it would be discovered in the like-sign dilepton and $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$) final states with similar luminosities. An indirect indication of the quark identity, in comparison with a charge $5/3$ quark $X$, would be given by the absence of the reconstructed mass peaks and endpoints which are present for $X \bar X$ production. Signals in the $\ell^+ \ell^-$ (no $Z$) final state are also interesting, not only because of the good discovery potential but also because the mass reconstruction is possible in the hadronic decay $B \to W^- t \to W^- W^+ b$ (or the charge conjugate) with a moderate luminosity. This mass reconstruction is important because in this model the $B$ quark does not have decays $B \to Zb$ and thus trilepton and opposite-sign dilepton signals with a $Z$ candidate are absent (see the paragraph above). Single lepton signals with four $b$-tagged jets, which are very significant for other models, are also absent for the $B$ quark in this model but can be produced by its $T$ partner and are kinematically not very different.
A $B$ quark in a $(B \, Y)$ doublet: it does not give trilepton signals without a $Z$ candidate
nor like-sign dilepton ones. On the other hand, it gives large opposite-sign dilepton signals with a $Z$ candidate with a sharp peak in the $Zb$ invariant mass distribution, from which the quark charge is determined. With five times more luminosity, this is also done in the four lepton final state with two $Z$ candidates. The decay $B \to Hb$ can be seen in the $\ell^+ \ell^-$ ($Z$, $4b$) final state, also with larger luminosity.
A charge $5/3$ quark $X$: it would be simultaneously discovered in the like-sign dilepton and $\ell^\pm \ell^\pm \ell^\mp$ (no $Z$) final states. In the former, the invariant mass can be reconstructed and the quark identity ({\em i.e.} that it has charge $5/3$) can be established under reasonable assumptions. The mass could also be determined from the trilepton final state with an endpoint analysis to confirm the quark identity. A signal due to this quark should also be visible in the four lepton final state without $Z$ candidates.
A charge $-4/3$ quark $Y$: it would be discovered in the single lepton final state with two $b$ jets. The peaks in the $Wb$ invariant mass distributions would give evidence of the charged current decay and indirect evidence of its nature: the signal is three times larger than for a $T$ singlet, for example. A clean signal in the $\ell^+ \ell^-$ (no $Z$) final state would also be visible, larger than for a $T$ quark. On the other hand, all the signals characteristic of a $T \to Zt$ decay would be absent, in the trilepton and dilepton final states with a $Z$ candidate, for example.
In more complicated scenarios with several singlets or doublets the signals would add up, but it would still be possible to identify the new quarks with a thorough examination of all final states. For example, with a $T$ singlet plus a $(X \, T)$ doublet in which both charge $2/3$ quarks have similar masses (which cannot be experimentally resolved), the $T \to W^+ b$ decay would be seen in the single lepton final state, and the
dilepton and trilepton signals involving $T \to Zt$ would be much larger than the ones corresponding to just one $T$ quark. Of course, the $X$ member of the doublet would also be detected. The same argument applies to a combination of a $B$ singlet and a $(B \, Y)$ doublet.
The simplest discrimination between $T$, $B$ singlets and the $T$, $B$ quarks in the $TB_{\text{d}_1}$ model is by the presence of two partners almost degenerate in mass. Still, one may imagine a situation in which a $T$ and a $B$ singlet, almost degenerate in mass, were present. This scenario would be distinguished from the $TB_{\text{d}_1}$ model with the analysis of angular distributions, especially for $T \to Zt$ decays.
The discrimination of vector-like singlets and doublets from a fourth sequential generation with new quarks $t'$, $b'$
is also easy, because the latter would give different signatures (see Refs.~\cite{delAguila:2008iz,Holdom:2009rf} for recent reviews). Oblique corrections prefer a mass difference $m_{t'} - m_{b'} \sim 60$ GeV~\cite{Kribs:2007nz}, so the $t'$ quark would decay either $t' \to W^+ b$ (as a $T$ or $Y$) or $t' \to W^+ b'$ if the mass difference is larger. In the first case, the absence of $t' \to Zt$ would distinguish it from a $T$ singlet, and the absence of a degenerate $B$ partner from a $(B \, Y)$ doublet. In the second case, the decay between new heavy quarks would prove the non-singlet nature of both.
Regarding the $b'$ quark, for $m_{b'} > m_{t} + M_W$ the decay $b' \to W^- t$ would dominate, distinguishing this quark from a $B$ singlet in either model. The discovery potential for fourth generation quarks~\cite{Ozcan:2008zz,Burdman:2008qh} is similar to the models studied here.
To summarise, the analyses carried out in this paper show that the single lepton final state offers the best discovery potential, and is the one in which new vector-like quark signals would be first seen. Searches in the dilepton and trilepton channels would soon confirm a possible discovery, and with a luminosity around five times larger all the decay modes of the new quarks would be observed in these channels, establishing the nature of the new quarks.
In some models four lepton signals could be sizeable and detectable as well and, in any case, these should be investigated as a further test of the models.
\section{Conclusions}
\label{sec:concl}
In this work we have investigated in detail the LHC discovery potential for pair production of new vector-like quarks in five models: $T$ or $B$ singlets of charge $2/3$, $-1/3$ respectively,
and $(T \, B)$, $(X \, T)$, $(B \, Y)$ doublets of hypercharge $1/6$, $7/6$, $-5/6$, restricting ourselves to the case that new quarks mainly couple to the third generation, as is expected from the SM quark mass hierarchy. In the case of $(T \, B)$ doublets we have distinguished two scenarios: that both heavy quarks have similar mixing with the top and bottom quark (model $TB_{\text{d}_1}$) and that the mixing of the top with its heavy partner is much larger than for the bottom quark (model $TB_{\text{d}_2}$), as expected from the mass hierarchy $m_t \gg m_b$ and from indirect precision data.
Using a dedicated Monte Carlo generator {\tt Protos} \cite{AguilarSaavedra:2008gt} we have computed all signal contributions involving all heavy quark, gauge and Higgs boson decay channels. With a fast detector simulation of signals and backgrounds we have examined twelve final states which would give evidence of the presence of new quarks, with one to four charged leptons in different kinematical regions and several $b$ jet multiplicities.
We have identified the final state with one charged lepton plus two or four $b$ jets as the most sensitive one for new quark searches. Nevertheless, model discrimination requires the observation or exclusion of the different heavy quark decay channels. To achieve this goal, the dilepton and trilepton final states are essential. These final states also have good sensitivity to heavy quark signals, and with a luminosity at most five times larger than in the single lepton channel the $5\sigma$ observation would be possible and the heavy quarks might be identified. The reconstruction of mass peaks would also be possible when a sufficient number of events is collected.
In our simulations we have taken heavy quark masses of 500 GeV, focusing on early discoveries at LHC. We have obtained an excellent discovery potential for all models:
0.70 and 1.9 fb$^{-1}$\ for $T$ and $B$ singlets, respectively; 0.25 and 0.16 fb$^{-1}$\ for the $TB_{\text{d}_1}$ and $TB_{\text{d}_2}$ models; and 0.16, 0.18 fb$^{-1}$\ for the $(X \, T)$ and $(B \, Y)$ doublets.
It is also interesting to know the mass reach for higher integrated luminosities. With a simple rescaling it can be seen that in the single lepton channel alone, with a luminosity of 100 fb$^{-1}$, heavy $T$, $B$ singlets with masses up to 800 and 720 GeV respectively can be discovered with $5\sigma$ significance, while for the doublets the reach is higher: 850 GeV and 900 GeV for the $(T \, B)$ doublet in the two scenarios considered, 900 GeV for $(X \, T)$ and 820 GeV for $(B \, Y)$. For higher masses the experimental detection of heavy quarks can also be done using jet mass measurements~\cite{Skiba:2007fw} but model discrimination would follow similar strategies as outlined here.
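The rescaling behind these mass-reach estimates can be made explicit. The following is an illustrative sketch, assuming a Gaussian $S/\sqrt{B}$ significance and mass-independent selection efficiencies (both assumptions of this illustration, not inputs quoted from the analysis):

```latex
% Sketch of the luminosity rescaling, assuming Gaussian significance and
% mass-independent selection efficiencies (illustrative assumptions).
% With S = \epsilon_S \,\sigma(m_Q)\, L and B = \epsilon_B \,\sigma_B\, L,
\begin{equation*}
Z \simeq \frac{S}{\sqrt{B}} \propto \sigma(m_Q)\,\sqrt{L}
\qquad\Longrightarrow\qquad
L_{5\sigma}(m_Q) = L_{5\sigma}(m_0)
\left[\frac{\sigma(m_0)}{\sigma(m_Q)}\right]^{2} ,
\end{equation*}
% so the 500 GeV discovery luminosities, combined with the fall of the QCD
% pair production cross section with m_Q, give the quoted reaches at 100 fb^{-1}.
```

Under this approximation the reach at a fixed luminosity is obtained by finding the mass at which $L_{5\sigma}(m_Q)$ equals the available integrated luminosity.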
We have also obtained an excellent potential for the discovery of the new quarks in decay channels containing a Higgs boson, especially in the final state with one charged lepton and four $b$-tagged jets. For heavy quark masses of 500 GeV, the discovery luminosities are
0.16 fb$^{-1}$\ for the $TB_{\text{d}_2}$ and $(X \, T)$ models, 0.25 fb$^{-1}$\ for $TB_{\text{d}_1}$ and 0.70, 1.9 fb$^{-1}$\ for $T$ and $B$ singlets, respectively. These luminosities are much smaller than the ones required for a light Higgs discovery in the SM. Indeed, it has been known for some time~\cite{delAguila:1989ba,delAguila:1989rq} that vector-like quark production can be a copious source of Higgs bosons and, if such quarks exist and the Higgs boson is light, its discovery would possibly happen in one of these channels. For a heavier Higgs with different decay modes the analyses presented here (relying on the leading decay $H \to b \bar b$) must be modified accordingly. Nevertheless, the determination of the other modes like $T \to W^+b$, $T \to Zt$, etc. would still be done in the same way as presented here, with few modifications.
In the summaries given at the end of each section we have compared the multi-lepton signals produced by new quarks with those arising from heavy leptons. Both possibilities for new fermions are easily distinguished by the different reconstructed mass peaks and the common presence of $b$ jets for quarks, which in lepton pair production only result from $H \to b \bar b$, $Z \to b \bar b$ decays.
Interestingly, a more general difference between models introducing new quarks and those introducing new leptons is that the latter give signals which are more ``multi-leptonic'': for heavy leptons the trilepton signatures are usually the ones with the highest significance, while for heavy quarks the single lepton one is the most sensitive. This is not unexpected, since heavy lepton decays give SM leptons plus a gauge or Higgs boson, while heavy quarks give SM quarks instead. In the minimal supersymmetric standard model, where squark and gluino pair production is large, it is also found~\cite{Aad:2009wy} that, although multi-lepton signatures are important, the final state with the best discovery potential is the one with a charged lepton or large missing energy plus jets.
Finally, it is worth pointing out that, although in this work we have restricted ourselves to heavy quark pair production, electroweak single production can also have a large cross section depending on the mass and couplings of the new quarks. These interesting processes are the only ones in which the heavy quark mixing with the SM sector can be measured because the pair production cross section is determined by the heavy quark mass alone and the heavy quark total width is likely to be very difficult to measure. Further model discrimination is also possible in single production, in particular from the study of cross sections and angular asymmetries, and it will be addressed elsewhere.
\section*{Acknowledgements}
I thank F. del \'Aguila, N. Castro, R. Contino and J. Santiago for useful discussions.
This work has been supported by a MEC Ram\'on y Cajal contract, MEC project FPA2006-05294 and
Junta de Andaluc{\'\i}a projects FQM 101 and FQM 437.